What data product platforms do I recommend to my clients? Here's a short list of the analytical platforms that are my "go to" tools.
Being a product owner is a little like being a parachute packer… Nobody pats you on the back and congratulates you for each successfully completed task. People only seem to notice when you get it wrong—and then, they really notice.
One of the first tasks faced by product owners building a data product (analytical functionality embedded in a core product) is to decide: should we buy it or build it?
Recently, I’ve been frustrated with some of the articles I’ve read about building great analytics dashboards. Their titles often led me to believe that they contained must-have insights about creating dashboards that engage users, but in the end, most are about use of color, pretty charts, and the latest feature a particular vendor has launched. And in my experience as a data product builder, these things aren’t what will make your embedded analytics product a success.
Today analytics powerhouse Tableau announced Adam Selipsky as its new CEO and president. My thoughts.
Not too many years ago, I was charged with building my first analytics-based product for the company I had just joined. I wrote user stories, worked with engineers and interaction designers, and visited users. I did everything right.
Except I didn’t.
In my experience building data products over the last ten years, the technical parts of the project are rarely what will jeopardize success. Between the sophistication of today’s embedded analytic platforms and the implementation services offered by vendors, you’re pretty much covered when it comes to the technical side of things. It’s the non-technical stuff that’s going to be your downfall if you’re not careful.
It's the nightmare that keeps analytics leaders up at night:
Our analytics—the ones we spent so much time and money on—the analytics that were supposed to generate new revenue and new customers, are failing.
Whether you are building a customer-facing data product or implementing analytics inside your organization for internal use, the fact is that some analytics projects do fail. But when they fail, they almost never fail the way you think they're going to...
As a data product builder — someone building customer-facing analytics that will be part of a product — the needs are no different, but achieving agility can be a real challenge. Oh sure, every analytics platform provider you might consider claims that they can connect to any data, anywhere, but this leaves a lot of wiggle room. Can you really connect to anything? How easy is it? How hard is it to change later? What about [insert new technology on the horizon here] that I just heard about? If you want to build an agile data product, you've got a tough road ahead... as I found out.
It's one of the most important decisions you can make when adding analytical capabilities into a product. And, unfortunately, it's an easy one to get wrong. What platform should you choose? There are a hundred options these days—each claiming to have what you need. How do you decide? Here are three tips...
Before you start evaluating features or selecting vendors, you need to get a handle on what YOU mean when you ask for analytics in your product. Although it seems simple, it really isn't so straightforward. There are many ways to "embed" analytics in a product and the method you choose can create limitations for your product roadmap and determine the overall user experience for your customers.
A great place to start is by understanding the "Three Faces of Embedded Analytics". It's a great starting point because this simple concept is directly tied to defining the underlying data you'll need to access and to the ways you might offer your data product to users. Get it wrong and you can end up overlooking some valuable ways to generate revenue.
It happens every time. Every single time.
From the moment I meet a customer for that very first session, the clock is ticking. It might be seconds, it might be minutes, it might be hours—but it always happens. They always ask the same question: "How can I make money with my data?"
Back in the early 1990's, I created one of the first metric dashboards for the telecommunications company I had just joined to help improve business processes. My goal was simple: build a web-based dashboard that let management view charts showing the performance of key processes. Sounds pretty simple, right? Today it would be, but 20 years ago creating "business intelligence dashboards" was a convoluted process at best.
Here's what the workflow looked like back then. First, I created the intranet site that would serve as the holder for the dashboard charts. It was pure Microsoft FrontPage with a black background, animated GIFs, and crazy colors...
I had just accepted a new role at a small technology company when an email from my soon-to-be boss changed the direction of my career:
"We've got all this data we've gathered from our application. We should do something with it, like adding some sort of business intelligence. Start thinking about what we could do."
Back then, I had no idea just how little I knew about selecting a business intelligence vendor, getting the system implemented and integrated, and creating an actual product offering based on BI capabilities.
Three years and four analytics implementations later, I know just how naive I was and I've got the scars to prove it.
For me that first BI project was an exciting (if a little nerve-wracking) experience — I had no concept of what was around each corner and which decisions were going to haunt me later. I've implemented business intelligence into existing products multiple times now and learned quite a bit with each additional project.
I'd like to share the lessons I learned, the best practices, and the worst mistakes. You'll still make plenty of mistakes with your own project, but you can avoid the pitfalls I encountered by using the techniques that I learned through painful trial and error but which made my implementations successful.
If you do a quick search on Google for "business intelligence tools", you will find 110,000,000 results. Assuming 10% of those results are for BI tools, that each reviewer can assess two vendors per day, and that you have a team of five people to help in the assessment process — it would take you roughly 3,000 years to complete your selection process.
This is a problem.
How do you cull through the massive number of BI offerings available today in a reasonable amount of time and without missing key players? Analysts' reports are your friends...
I started my search by making a list of all the potential players in business intelligence/analytics that were reasonable solutions for our needs (more on that to come). The problem was that I didn't have access to any Gartner reports and I didn't have the budget to pay $1,995 for the latest "Magic Quadrant". What I learned early on is that you can call any of the BI vendors such as GoodData, Birst, Tableau, or Alteryx and they will give you a copy of the latest Gartner report with little convincing. I called one of the vendors and explained that I needed to implement a white-labeled BI solution inside an existing web-based application and I had a copy of the latest Magic Quadrant report in my email within the hour. In fact, many of the vendors now allow you to get a copy of the report from their Web site after filling out a quick form. The only caveat is that it's often a truncated version only highlighting features of their product. Call a vendor and ask for the full report.
Why do you care about the full report? Because it has a comprehensive listing of all the vendors you should be reviewing — it becomes your master list from which to start narrowing down solution candidates. But how do you know who to include and exclude from your search? Even the Gartner report has too many options for you to fully demo each application. Start by defining your goals and personas.
One of the mistakes I made was that I didn't frame our goals crisply. We had started early on by listing project goals, so we knew what we were trying to accomplish at a high level.
I thought I had a good handle on exactly what we needed. I didn't. Basic project goals are necessary, but not sufficient for BI implementation.
The second time I implemented analytics, I corrected a major oversight from project #1. I created personas describing exactly who would be using the system. By persona, I don't mean a marketing persona: "35-45 year old male who lives in San Diego and enjoys the soulful sounds of the Backstreet Boys." I mean a user experience persona that describes the user you are serving and the problems that keep them up at night.
It is important to fully define these personas because your BI application will likely have a wide range of users — from tactical users to executive-level strategic users. I overlooked this step during the first project and I ended up co-mingling tactical and strategic charts and graphs on the same dashboards. As a result, I ended up creating pages that didn't quite fit any one user type well. Charts that made sense for a front-line manager were useless to the COO. The second time around, we had a workshop and defined the personas that would use our system such as:
For each of these personas, we then created a quick persona brief, an example of which is shown below.
The purpose of the brief was to get us thinking about the users that needed to be served by each dashboard we would create. We still ended up with a mix of tactical and strategic roles, but on the second project we realized this and separated the charts more logically into different role-based pages.
I recommend that you start your process of vendor selection by making a list of the project goals and then move on to thinking about the specific users whose lives you are trying to improve through the use of analytics.
At this point in the implementation process you will have three key pieces of information: a long list of vendors from whom to choose, a list of your project goals, and a list of personas that you need to serve.
The next step is to eliminate those vendors whose products don't align with the needs of your users. For example, we quickly eliminated several "Traditional BI" vendors because although they had well-established products, they couldn't operate embedded inside our web-based application. One strike — you're out. From the twenty initial vendors we started with (selected via the Gartner report), we narrowed the list to seven potential solutions by eliminating those that couldn't fit with our operating model. We continued our narrowing process by evaluating the remaining candidates against a combination of factors.
To simplify the process, I created a matrix of both objective and subjective criteria. I listed the project goals as criteria, but also added items such as "ease of working with the vendor". Do not overlook the subjective criteria. While meeting the basic technical requirements is critical, it's the subjective stuff that will make your life either fun or hellish over the next few months.
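As an illustration of the kind of matrix I mean, here's a minimal sketch in Python — the criteria, weights, scores, and vendor names below are all hypothetical, not the actual evaluation we used:

```python
# Hypothetical weighted vendor-evaluation matrix. Criteria mix the objective
# (can it embed? can it reach our data?) with the subjective
# ("ease of doing business"). Weights must sum to 1.0.
criteria = {
    "embeds_in_web_app": 0.30,       # objective: white-label/embedded operation
    "data_connectivity": 0.25,       # objective: connects to our data sources
    "time_to_implement": 0.20,       # objective: estimated effort to go live
    "ease_of_doing_business": 0.25,  # subjective: how the vendor treats us
}

# Each vendor scored 1-5 per criterion after demos and sales calls.
vendors = {
    "Vendor A": {"embeds_in_web_app": 5, "data_connectivity": 4,
                 "time_to_implement": 3, "ease_of_doing_business": 2},
    "Vendor B": {"embeds_in_web_app": 4, "data_connectivity": 4,
                 "time_to_implement": 4, "ease_of_doing_business": 5},
}

def weighted_score(scores: dict) -> float:
    """Sum of (weight x score) across all criteria."""
    return sum(criteria[c] * scores[c] for c in criteria)

# Rank vendors best-first by weighted score.
for name, scores in sorted(vendors.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Note how the subjective weight matters: in this made-up example, Vendor A wins on raw technical fit but loses overall because of how painful it is to work with.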
For example, during the initial discussion with a particular vendor, the salesperson told our head of Engineering that he should go back and read the product specs webpage so that he'd be able to ask better questions on the next call. That incident gave us a hint of what we might expect working with the vendor's team and factored heavily into our "ease of doing business" category. And there wasn't another call.
Here's a sample of the matrix we used (vendor names sanitized to protect the guilty):
This wasn't the only mechanism we used to evaluate the vendors. For each candidate, I prepared a one-pager that covered items such as:
We combined all of these factors into our evaluation and selected three business intelligence vendors to perform proof-of-concept trials.
One item I didn't anticipate when starting the BI journey was how different the pricing models would be from vendor to vendor. The schemes I encountered included:
The incredible disparity in the pricing models created huge headaches for me and made it tough to compare pricing across systems. I solved the problem by calculating the cost of implementation through the end of year one, and then the cost of each year past year one. A key here was establishing the project assumptions based on the goals and personas. Our assumptions on that first project included:
Using this information, I went back to each of the seven finalists and asked them to calculate the costs for the first five years. If you use this approach — making the sales team do the costing for you — you'll save a lot of time.
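If you'd rather sanity-check the vendors' math yourself, the normalization is simple arithmetic. Here's a rough sketch with entirely made-up figures — the adoption curve, fees, and revenue-share rate are illustrative assumptions, not real quotes:

```python
# Normalize three hypothetical pricing models onto a five-year total cost.
# All dollar figures are invented for illustration.

YEARS = 5
USERS_PER_YEAR = [200, 400, 800, 1200, 1600]  # assumed adoption curve
ANNUAL_REVENUE = 2_000_000                    # assumed product revenue

def per_user(users_by_year, price_per_user=300, setup=50_000):
    """Pay per active user per year, plus a one-time setup fee."""
    return setup + sum(u * price_per_user for u in users_by_year)

def flat_fee(annual_fee=150_000, setup=25_000):
    """Flat annual license regardless of usage."""
    return setup + annual_fee * YEARS

def revenue_share(rate=0.05):
    """Vendor takes a percentage of product revenue."""
    return rate * ANNUAL_REVENUE * YEARS

print(f"Per-user:      ${per_user(USERS_PER_YEAR):,}")
print(f"Flat fee:      ${flat_fee():,}")
print(f"Revenue share: ${revenue_share():,.0f}")
```

Run against your own assumptions, a comparison like this makes it obvious how a per-user model balloons as adoption grows, while a flat fee stays predictable.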
With all the permutations for pricing that exist, I can't overstate how vital it is that your selected vendor's pricing model matches your intended product offering model. If you get it wrong, you can limit the potential ways in which you can offer the product to your users. As an example, we wanted to include basic BI for all existing customer accounts and all users, but with limited data. If we had chosen a vendor that based pricing on the number of users, we'd be in the position of either reducing margin as more people used the system or trying to limit the total users per account. This was the opposite of what we wanted to achieve — we wanted everyone using the system and getting hooked!
As a result, the vendors that based pricing on user counts were eliminated. Our product also had a very thin profit margin and we eliminated the vendor who wanted a percentage of our revenue from further consideration.
It was down to three vendors and time to see the applications in action using our data and our desired charts.
The proof of concept (POC) trials were designed to mimic what we might see in a production situation — similar data, similar charts — just less data and no integration. We wanted to see how fast each vendor could get up and running with our data, as well as how quickly they would respond to changes to the requirements and the little challenges that always pop up in a complex project. The vendor we eventually chose was able to perform not one but three separate trials for us, faster and with less stress than it took the second-place vendor to get anything up and running at all.
As an added bonus, the POC also allowed us to "test" the personalities of the implementation team. We would be working with these people for the next 90 days — this was a great chance for a preview. If a problem came up on a Friday evening when the demo for the CEO was on Monday morning, would they be willing and able to help? Would they quickly pick up on our business challenges, or would we need to explain our company's business model repeatedly? The answers to these questions were as valuable as the technical aspects of the POC and factored heavily into our final decision. We ended up choosing the vendor that had great capabilities and a great team, moved quickly, and felt more like a partner than a vendor. But now we needed to negotiate the deal.
The work on the contract details was perhaps some of the most enjoyable, fulfilling time of my life. I look back on those hours and days fondly...
Oh wait, sorry, it was the most grueling, exhausting portion of the project. Not because of the lawyers (they were delightful people who just happened to enjoy hour-long conversations about mutual indemnification) but because of me and what I overlooked.
Here's where I made another major mistake in the first project — I got the legal and finance teams involved way too late. We'd already conducted the POC and had picked the vendor that we thought would be best for our product. We'd even settled on the pricing both for the product and implementation services. We're done, right? No. Wrong. It's details time.
As soon as the legal and finance teams became involved in the project, a couple of things became clear. First, I would need to spend significant time reviewing the project with them, explaining the business purpose, explaining our selection process, discussing security, etc. We even had extensive discussions about why we were buying a solution instead of building our own (hint: building is always more expensive and time-consuming). It was frustrating and took a significant amount of time.
I should have gotten these teams involved from day one so they could understand our decisions as we made them. My fault, lesson learned.
The second thing that became very obvious very quickly was how many little details I missed. Here are some of the items that I now know are essential to address early:
There are many more finance/legal issues that I've learned need early resolution, but these are the ones most deeply seared into my psyche for the future.
Vendor chosen, contract signed — it's time to get going!
And... wait. What are we building again? Yet another of the mistakes from the first project that I corrected in project #2 was a failure to have a good understanding of exactly what we wanted to show our users. Having the persona developed for the second project helped, but so did having a complete "dictionary" of the charts, metrics, and dimensions we needed.
For the first project, we burned through a significant part of our services budget just trying to get a handle on what kinds of questions we needed to answer, what data those answers required, and where that data was located. For the second project, we got a little smarter. The first thing we handed the vendor's project team was a spreadsheet with the following for each analytic to be displayed:
It doesn't have to be pretty, but providing a simple dictionary of charts, matrices, definitions, and dimensions will accelerate the initial phases of implementation and save you money along the way.
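For illustration, a dictionary like this can live in a spreadsheet or a simple CSV generated from code. The column names below are the kind of fields that are useful — a hypothetical example, not a prescribed schema:

```python
import csv
import io

# Hypothetical chart-dictionary schema: one row per analytic to be displayed.
FIELDS = ["chart_name", "question_answered", "metric", "metric_definition",
          "dimensions", "data_source", "owner"]

rows = [
    {"chart_name": "Repair Cost Trend",
     "question_answered": "Are repair costs rising or falling month over month?",
     "metric": "total_repair_cost_usd",
     "metric_definition": "Sum of repair expenses converted to USD",
     "dimensions": "month, region, repair_type",
     "data_source": "expenses fact table",
     "owner": "Finance"},
]

# Emit the dictionary as CSV so it can be handed straight to the vendor team.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The point isn't the tooling — it's that every chart arrives at the vendor with its question, metric definition, dimensions, source, and owner already pinned down.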
I'll never forget sitting at a meeting with our CEO where we revealed the results of our hard work from the past month. One of his first comments was "why does this chart say the cost of this type of repair was $27 million? Our whole budget is less than $20 million. This can't be right."
Oops. We hadn't performed quality assurance (QA) to the level we should have in that first project. Although we spent hours reviewing charts, testing performance, and tweaking colors and layout — we had missed an issue with the data we were providing the vendor. We had built a process to convert all expenses from local currency to U.S. dollars, and then forgot to change the loading process so that it used the new converted data. Rubles mixed with Yen mixed with Pesos. We never noticed it, but the CEO sure did. With business intelligence you've got one chance to get it right. Show someone data that they know is inaccurate and you'll have a hard time getting them to trust the system ever again.
The second time around I made sure this wouldn't happen again. At the start of the project I formed a small team of people with expertise in various parts of the business to review each and every chart and make sure it made sense. We provided the team with a copy of the same data we provided the vendor for the POC and had them run the same calculations manually. No silly calculation errors this time around. Again, lesson learned. The hard way.
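The cross-check itself doesn't need to be fancy. Here's a minimal sketch of the idea — recompute each displayed total independently from the raw extract and flag any disagreement (the data and numbers are invented for illustration):

```python
# Hypothetical QA cross-check: recompute dashboard totals from the raw
# extract and compare them against what the charts actually show.
raw_expenses = [
    {"repair_type": "engine", "amount": 12_000, "currency": "USD"},
    {"repair_type": "engine", "amount": 9_500,  "currency": "USD"},
    {"repair_type": "body",   "amount": 4_200,  "currency": "USD"},
]

# Values read manually off the vendor's charts.
dashboard_totals = {"engine": 21_500, "body": 4_200}

def recompute(rows):
    """Independently total expenses by repair type."""
    totals = {}
    for r in rows:
        # Guard against the exact bug that bit us: rows that never went
        # through currency conversion fail loudly instead of inflating totals.
        assert r["currency"] == "USD", f"unconverted currency: {r['currency']}"
        totals[r["repair_type"]] = totals.get(r["repair_type"], 0) + r["amount"]
    return totals

mismatches = {k: (v, dashboard_totals.get(k))
              for k, v in recompute(raw_expenses).items()
              if dashboard_totals.get(k) != v}
print("mismatches:", mismatches)  # an empty dict means the charts check out
```

A few dozen lines of independent recomputation would have caught our ruble/yen/peso mix long before the CEO did.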
We made it through that first implementation successfully and in less than 6 months. The second, third and fourth implementations took less time and were equally successful. But, some things took far longer than others and I recommend that you start on these tasks early on in your project.
As I performed each implementation project, I kept a running record of all the details that could be easily overlooked. Below is a sample of the mind-map I use:
Analytics are hot, they are addictive, and they sell products. But implementing business intelligence isn't a simple 1-2-3 type process. You can learn as you go as I did, but it's easier if you know where the trouble spots may lie and how to avoid them. I've wrapped up the steps I followed (during the fourth implementation, not the first!) into a flowchart that shows the steps you need to consider. They don't all have to be done in this exact sequence, but it will give you a sense of the basic process.
Pick a good vendor, understand your users, start on the little details early, and don't forget QA and you'll get your project successfully implemented and look like a rockstar.
Good luck with your project!
Back in 1994, I was a fresh young manager at MCI assigned to help the mid-Atlantic region better understand and improve performance. Armed with a computer and—for the first time—a direct, non-dialup connection to this new thing called "the internet," I started thinking... I was seeing all of these web sites (well, a couple dozen at that time) published by companies to display their product information; could we use our nascent intranet to publish our performance data internally?
The answer was "yes, sort of". With Microsoft FrontPage loaded on my speedy Intel 386-based machine, I designed some truly horrific performance dashboards. Black backgrounds, awful logos, lightning bolts—if it was a bad early 90's web cliche, it was on our intranet site. But the fact was that we had a performance management dashboard complete with statistical process control chart images showing performance, patterns, and root causes for our key metrics. As ugly as it was, it was one of the first intranet sites at MCI and certainly the first dashboard of process performance data.
The problem with our setup was twofold: first, we had to collect all of the data manually (it usually took weeks) and second, we had to create all of the analytical elements (charts, tables) as images and then import them into the website. The days of easily accessible data and business intelligence tools were still quite far off. But still, as primitive as our system was, it was a game-changer for performance management and analytics in our organization. For the first time, we could all see our performance in a single, common location.
I remembered this experience when I read an article that asked "are dashboards the next game-changer" and it made me think: are they? Are "dashboards" the next game-changer for businesses?
My opinion is that no, information dashboards in and of themselves are not the next game changer. Which leads to the next logical question — what is the next game changer for performance management analytics? To help see where I believe we need to go, let's take a quick look at where business analytics have been up until this point...
The first wave of dashboards was characterized by simply having such a thing as a "dashboard" on the web. Where before executives had relied on MS Excel spreadsheets generated on a daily, weekly, or monthly basis by a team of reporting analysts, those executives could now access that information via a machine interface. I say a "machine" interface because it wasn't necessarily even a web-based dashboard. Frequently these first-wave dashboards were standalone reporting applications that required the executive to log in to the app to view the data (or more likely, they had a member of their staff log in and print out the report for them to view offline).
A major issue with first wave dashboards was that there was a lack of available, meaningful data. I remember spending countless hours trying to figure out (a) where the information we needed lived and (b) how in the world could we get it out? The horrible, horrible term "screen scraping" was used and used frequently. If you could find the data, if you could get the data out, then you were still faced with getting the data into your dashboard system, creating meaningful analytics, and repeating every week or [cringe] day. These were not the "good old days." They were the living with old, hard to reach, hard to manipulate data days.
The second wave, characterized by data access, made life far easier for most people who worked regularly building dashboards and analytics. Back-office business applications began providing import/export functionality or—better still—APIs that allowed for easy access to the data. Now you could extract data on a real-time or near-real-time basis. The era of day or week long information lags was quickly disappearing.
The dashboard tools themselves also took a huge leap forward. Systems like GoodData, QlikView, Birst and JasperSoft allowed average users to build sophisticated dashboards that could answer many of the questions posed by the business, and their analytics could be modified quickly as the business needs evolved. Don’t like the bar chart? How about a line chart? Let’s throw a trend line on that. This was an immense leap forward, but it was still lacking what management users really required. These dashboards were still very much "read-only" systems—they displayed analytics to the user, but didn't give you any sense of what to do with the data being visualized.
And that brings us up to today... These days, getting our information via the internet is second nature. It's a pretty rare occurrence in 2014 when we can't find the information we need via a quick Google search. We also have (or are getting close to) easy access to performance data—lots and lots of data. These two factors have made the dream of being able to combine disparate data sources into a single, easily-accessible business dashboard a reality. Today we can build dashboards that let us explore KPIs for customer satisfaction, operational efficiency, and sales funnel value all in a single, real-time portal.
And it's mostly useless.
Why? Because we haven't quite crested what I call the "third wave" of analytical dashboards—turning insight into action.
It's amazing that we can now access all of this information in real-time and drill down, drill over, chart, trend, and predict what is going on with our business, but really, this isn't the ultimate goal for analytical tools. The pinnacle of these systems is helping us to answer the question "so what?" This is the third wave.
What we need from the next generation of analytical systems is the ability to do the following:
The third wave for business analytics isn't a feature or more data—it's an analytical workflow that takes the knowledge gained from the huge volumes of available data and which helps you determine the most effective solution to improve performance. It’s about turning insights into action.
I predict that future, successful analytical tools will help users to not only visualize the data, but to pick out meaningful patterns (outliers, cycles, etc.). They will allow the user to annotate these patterns with the root causes of the pattern (snow in the mid-west, new employees trained that day, quarter close, etc.). They will present management with possible actions that can be taken when they see such a pattern—actions based on the performance gains that might be expected based on what's observed in the past (in short, the BI system has a memory of what you've tried before). And they will mark the point in time when you implemented a performance fix so that you can correlate the actions taken to future improvement.
The third wave isn't yet here, but it's coming. We've got all the tools at our disposal: the dashboards, the data, the computing power... Now we need to combine these with the realization that without action, the insight provided by analytical tools isn't much more than eye candy. Let's stop building charts and graphs that look great but leave us asking "so what?" Let's start building systems that take us all the way from "wow—I didn't realize that" to "here's what we're going to do about it." That's the third wave of analytical tools for business. It's about turning insight into action and allowing us to use the data and tools at our disposal to make the most effective decisions possible.
Last year, we took our kids to Disneyland. As someone interested in user experience, I was just about as eager to go as the kids — after all, who does UX better than Disney?
Overall, it was a great time. We had what I'd imagine to be the quintessential Disney experience… The kids loved it, we survived, wallets were emptied. However, all is not perfect in mouse land. After hearing about how "everyone is a cast member" and the amazing lengths Disney goes to in order to project its ideal image, I saw some things that surprised me.
At the hotel, the Grand Californian, we had a great check-in experience. Very well-organized and clearly designed to move large numbers of people through the check-in process as efficiently as possible. But they also took this efficiency a bit too far in one instance.
On the first morning of our trip, we came back to our room between breakfast and braving the lines in the actual park. Walking down the narrow halls of the hotel, I saw basket after basket of linens and towels sitting on the ground next to hotel doors. Clearly this is an efficient method for servicing the rooms — person A quickly drops off all the linens, person B cleans the room without having to maneuver a cart around. Here's the problem: the same narrow halls that make it problematic for housekeeping to move a cart also make it tough for a guest to navigate two small kids (plus backpacks, etc.) through the hall. It also breaks the whole "National Parks lodge" feeling they are trying to replicate and makes it feel more like an assembly line. I don't have a great answer for this one, but to me it felt like Disney chose to sacrifice customer experience for efficiency. A valid choice, but maybe not one I would have made.
A bit later, as we were nearing the front of one of the countless lines, I got the inevitable "I need to use the bathroom" from one of my children. Out of the line, off to the bathroom we go. We used the facilities and were preparing to wash our hands when something odd struck me… At a place where it's really all about the kids, where each experience is carefully crafted, where everything is considered — the washbasins were set too far back in the counters for a four-year-old to reach them without being lifted up. The adults had no problems, but the kids? They were out of luck. Sure, it's a minor inconvenience in the overall scheme of things, but compare this to the experience at a store like Whole Foods.
At our local Whole Foods, the counters are the same height and the sinks are set back at a normal distance, but they have placed a small retractable step underneath each counter. Pull it down with your foot, the child steps up, uses the sink, steps down, and the step retracts. Brilliant. This is what I would have expected to see at Disney more so than a grocery store. The Disney situation felt like it was designed, though I'm sure with great care, by an adult using an adult perspective. If that adult had considered the problem from the perspective of a small child, I'm sure they would have seen and addressed the issue. But they didn't.
Food time… What are you going to do? Leave the park and try to find reasonably priced food somewhere in Anaheim? No. You're going to find food in the park, pay quite a bit for it. And be happy about it. So at lunchtime we sought out and found a deli-type restaurant that looked pretty good (and honestly, the prices weren't too bad and the food wasn't horrible) and made our selections. Here's where the user experience broke down again. Many — not all but many — of the employees (sorry, cast members) in the restaurant seemed really annoyed to be serving customers. No smiles, no "how can I help you's", just emotionless slapping of orders on plates. I know that they serve many, many people each day. I know that this is not a job I would want (or could probably even do very well), but this isn't Bob's House-O-Meat. This is DISNEY. You are a "cast member." The simple lack of a smile and basic friendliness caused the entire Disney cloud of magic to dissipate for that 45 minutes in the restaurant. Probably not what Disney had intended.
We had a great time. The kids loved it, we loved it, and we'll likely go back some day. But it struck me how easily the UX veil can be pierced by a few small items that may have been simply overlooked. It struck me that large amounts of time and money devoted to a UX effort can be undone quickly and easily. Sometimes it's the little things that make all the difference.
Have you heard the news?
"60% of all software development projects fail to meet their goals."
Of course you've heard this. EVERYONE has heard this nugget of wisdom. It starts off presentations, it's used in consulting pitches, software integrators put it in their marketing materials, and IT departments promise it won't happen to them (or you). Here's the problem: it's probably wrong. I believe that, in fact, closer to 80% of enterprise software development projects fail to meet goals. The key is — it is a specific type of software project that nearly always fails: the "old school" waterfall-type project. The kind that starts out with requirements crafted in excruciating detail, progresses through multiple layers of sign-off, is developed in several phases — each with its own system, unit, and user acceptance testing — and eventually finishes with a final result that doesn't fit the needs of a business that has long since moved on. Over the years I've seen software released that no longer fit an evolved business model, software that missed huge, key requirements, and software that was released just in time for an acquisition that changed the entire business environment.
Whew. I'm frustrated just thinking about it. Luckily, the software development industry (mostly) figured out that this was a problem quite a while ago. Most successful projects today — especially externally facing consumer projects — follow a very different trajectory than the development projects of ten or even five years ago, emphasizing tighter contact with the customer, faster development cycles, and the testing of smaller chunks of code.
So what does this have to do with business process design?
Unfortunately, many business process redesign efforts make those old-school enterprise software projects look like Olympic champions by comparison. Unlike their software development counterparts, most practitioners of "process redesign" have not been so eager to bring their methods into the 21st century. In fact, while software design is largely light-years beyond where it was in the early 1990s, process design — for the most part — has changed very little. The practices learned many years ago are largely still followed.
And surprise: like the outmoded techniques for software design, process design projects conducted in this manner also have an extremely high "failed to achieve results" rate — even worse than for IT projects. I speak from experience — this is exactly the way we used to perform process redesign work. Redesigning a process using this "technique" was tedious and frustrating, both for us and for our clients. And it was tough to achieve the desired result.
But it doesn't have to be this way.
Process redesign projects don't have to be lumbering, slow, painful exercises that rarely succeed in achieving their goals. By learning the hard-won lessons of software developers, you can dramatically increase your chances for success in your process redesign project.
When software development moved past traditional waterfall-style development, a new way of thinking emerged called "Agile Development." Agile development stresses speed over perfection, rapid development of small bits of functionality, and testing of all deployed code. How can this be used for business process improvement? Here are three of the main "agile" concepts and how you can use them to improve processes more rapidly and with a much higher success rate:
Lesson #1: Minimum Viable Process (MVP)
One of my favorite phrases is "the perfect is the enemy of the good," and nowhere is this more true than in the design of business processes. In the past, businesses undergoing process redesign — whether they called it TQM, BPR, or Six Sigma — all made a similar mistake. They took far too long to develop the process, hoping for a "perfect" final design that met all objectives and avoided all constraints. As someone who has fallen prey to this seductive path myself, I can tell you with 100% certainty that there is no "perfect process" waiting around the corner, there is no "magic bullet," there is no single "correct solution." The process that is actually deployed and is actually in use is almost always better than that "perfect" process that exists only on a Visio diagram hanging on the wall. Business needs and goals change so quickly these days that you simply cannot afford to spend months designing the ultimate business process. By taking an extended period of time to develop our business processes, we risk a final product that was "perfect" for the situation that existed several months ago — but useless in today's environment.
So how do we reconcile the need to improve processes with the need to move quickly and get something that improves the situation up and running? One solution is called the "minimum viable process" or MVP. The concept is simple: design the simplest, most basic process that will get the job done and iterate from there. Ok... So what does THAT mean? It means that you dispose of just about everything that isn't directly related to delivering the output of the process until you can prove that without the pieces that are left, the process simply cannot function. It means that you design the process without the multiple re-work, validation, approval, and wait state loops that dominate most processes today. Treat each process checkpoint or approval state as a design failure — a process step that exists only because the process itself is inherently flawed in even needing a checkpoint — and try to design that step away. Obviously you won't be able to eliminate every single check & balance step in your process, but minimize them and see what happens. The key with the MVP design is that you need to get a new process out, up, and running as quickly as possible to test its performance in the real world. Those super-complex, "perfect" processes will need to reach the real-world stage at some point — wouldn't you rather have spent two-thirds less time in process design when you find out that your process has major flaws that must be corrected? Use the MVP as your initial test platform to challenge your assumptions and ideas about the new way of doing work. Then, use the next concept — Continuous Deployment — to make the process better fit the goals of the business.
Lesson #2: Continuous Development & Deployment
ANY process that you design — whether you spend days, weeks, or months building it — will have problems. You can count on it. I've designed and implemented many new processes over the past 15 or so years, and I have yet to see a single process that, once "in the wild," didn't have to change to some degree. That being the case, the key to a successful process design implementation is the pace at which you are able to effectively change the process design in response to the issues that you identify. Often, organizations take an "implement once and forget it" approach, and unfortunately this results in an overall poor redesign result (part of that 60%). You have to find the process flaws and fix them quickly.
So how do you remedy this situation, recognize issues with processes, and make changes that will better meet the design goals? The best practice for this is called "continuous deployment" and has grown in popularity in the software development community over the past few years. Here's how it works in the software world: engineers build small chunks of functionality, every change runs through automated tests, changes that pass are pushed straight to production, and the results are monitored closely so that anything that breaks can be rolled back immediately.
This all happens very quickly. In fact, one of the leading advocates of continuous deployment, Eric Ries, talks about how his company would deploy commercial software to the customer base multiple times per day. He stated that if each engineer didn't deploy at least every few days, it meant that something was wrong. You can make the same continuous development & deployment principle work for you when redesigning business processes. Adopt the philosophy that every day during the design cycle, something, anything, must be "shipped." It could be a new form for ordering, a prototype of an online database for tracking customer data, or a change to your CRM tool. The key is that you release constantly and learn from what happens. Think small frequent changes, not big delayed changes.
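That ship-one-small-thing-at-a-time loop is simple enough to sketch in a few lines of code. This is a toy illustration, not a real deployment system — the change names, the metric, and the rollback threshold are all invented for the example:

```python
# Toy sketch of a continuous-deployment loop: ship one small change at a
# time, measure the result, keep what works, roll back what doesn't.
# Everything here (changes, metric values, threshold) is made up.

def baseline_metric():
    """The current process's performance, e.g. orders handled per day."""
    return 100.0

def measure_after(change):
    """Stand-in for observing the metric after one small change ships."""
    observed = {"new order form": 108.0, "CRM field tweak": 92.0}
    return observed.get(change, 100.0)

def ship_continuously(changes, tolerance=0.95):
    """Keep any change that stays within `tolerance` of baseline; revert the rest."""
    kept, rolled_back = [], []
    base = baseline_metric()
    for change in changes:                # one small release at a time
        if measure_after(change) >= base * tolerance:
            kept.append(change)           # it worked; move to the next change
        else:
            rolled_back.append(change)    # revert quickly, small blast radius
    return kept, rolled_back

kept, reverted = ship_continuously(["new order form", "CRM field tweak"])
print(kept)      # ['new order form']
print(reverted)  # ['CRM field tweak']
```

The point of the sketch is the shape of the loop, not the numbers: because each release is tiny, the cause of any regression is obvious and the rollback is cheap.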
By now you might be thinking "Wait — we can't do that. What if we get it wrong? We need to perform testing/cost-benefit-analysis/executive review/financial review/legal approval/(insert committee here) review before we do anything. We could hurt the business." I don't believe that for a second. The potential for having a small "release" of a business process change — one that you monitor very closely to observe the results — damaging the business irreparably before you see the problem and release a process fix is very low. In fact, I would argue that these small process releases are much easier to monitor and problems are far easier to detect than when you perform one massive release at the end of a process redesign. World-leading design firm IDEO calls the concept of converting risk into smaller, manageable pieces "risk chunking" and uses it to ensure that their new product designs aren't an "all or nothing" proposition. You want to see risky? Forklift in a massive process implementation after eight or ten weeks of design work and try to identify the issues (or benefits) that are associated with what you just did. Now THAT'S risky!
Of course, if you release a new process or a process change and then ignore it and move on to the next challenge, you've missed the point. When performing continuous deployment of process, you must monitor the results. Did it work? Did it cause unintended consequences? The way to tell is through another software technique called "A/B testing."
Lesson #3: A/B Process Testing
You've created the smallest, leanest process possible, you've implemented it using continuous techniques, now what? Now you need to test the results. Often, process implementations are treated almost like a bullet to the head — one shot and it's over. The software world has taught us nothing if not the need for constant review of the effectiveness of each "release." Imagine software that was released, had bugs, and was never reviewed or fixed. How likely would you be to call that software a success or to recommend it to a friend or colleague? In the software world of agile development, a technique called "A/B testing" or "split testing" is used to determine the implications of a recent release.
Here's how A/B testing works: because you are doing continuous, small deployments, each piece of functionality is relatively easy to understand in terms of its implications for users. When you deploy a small functionality change (the "A" functionality), you deploy it to a sub-set of the users and compare to the users who are still using the old functionality (the "B" functionality). Think of it like a small, rapid beta test. This can have huge, beneficial implications for software — think of what would happen if you deployed a new "Buy Now" button to a website but accidentally colored the button the same as the page background. You now have, as Eric Ries says, "a hobby, not a business model." Obviously, you would prefer to detect an issue such as this sooner rather than later.
Use the A/B testing concept for your business process changes. Instead of deploying a changed form, website, or process to the entire set of "users," deploy to a smaller set of test users and compare the differences. Did the new process perform the way you expected? If so, deploy the change to the rest of the process users. If not, go back, re-develop that part of the process, and re-deploy. Continuous development and A/B testing are a tightly linked loop of design, development, deployment, testing, and re-development. Just remember: A/B testing without continuous deployment means that mistakes will be out in the wild much longer than they should be, and continuous deployment without A/B testing means that issues may go unnoticed for far too long.
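The mechanics of the split are the same whether the "process" is code or a paper form: randomly assign a fraction of users to the new version, leave the rest on the old one, and compare outcomes. Here's a toy sketch; the group sizes and underlying success rates are invented for illustration:

```python
# Toy A/B split: route a random subset of "users" through the new process
# (group A), keep the rest on the old one (group B), compare success rates.
# The 0.9 / 0.7 underlying rates are invented to make the comparison visible.

import random

def run_process(new_process):
    """Stand-in for observing one process execution succeed or fail."""
    success_rate = 0.9 if new_process else 0.7
    return random.random() < success_rate

def ab_test(users, test_fraction=0.2, seed=42):
    """Return (new-process success rate, old-process success rate)."""
    random.seed(seed)  # fixed seed so the toy example is repeatable
    test_group = set(random.sample(users, int(len(users) * test_fraction)))
    results = {"A": [], "B": []}
    for user in users:
        group = "A" if user in test_group else "B"
        results[group].append(run_process(group == "A"))
    def rate(xs):
        return sum(xs) / len(xs)
    return rate(results["A"]), rate(results["B"])

a_rate, b_rate = ab_test(list(range(1000)))
print(f"new process: {a_rate:.2f}, old process: {b_rate:.2f}")
```

With enough observations the comparison tells you whether to roll the change out to everyone or pull it back; with a real process you'd also want a sample large enough to be statistically meaningful before deciding.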
We need to break out of that old cycle of developing monolithic processes only to have them fail to produce the results we anticipated. In an environment where every dollar counts more than ever, we just cannot afford a 60% plus failure rate in process redesign. It not only costs us time and money, but also credibility with employees. Use the lessons from software development and build lean, minimum viable processes, deploy them quickly and continuously, and test the results against the old process. Everything you implement won't be a success, but when a mistake does occur, you will find it quickly and be able to rapidly make the changes necessary to succeed when you implement the next time around.
Today the news came out that the unemployment rate in the United States reached a 26 year high. Given the bad economic news that we hear nearly every day, it's not surprising that people — and companies — are in a bit of a panic. Nearly every company that I know of is considering layoffs, killing major projects, reorganizing (read: eliminating lines of business), or just plain ceasing operations. Why? Because that's what you do in the face of such a terrifyingly bad economic downturn, right?
Wrong. At least, not if you want to exit the downturn performing better than your competition...
If you look back over time at the great companies such as Google, Microsoft, Johnson & Johnson and many others, they all have one striking characteristic in common. They all started during significant economic downturns. Crazy, huh? Who would ever want to start a business during such a risky, scary time? Someone who sees that every great challenge is also a huge opportunity to solve major problems, to serve underserved markets, and to create enormous advantages over the competition. Those companies used an economic downturn as a competitive advantage to implement new technologies, tools, and ideas while the incumbents were focused solely on survival.
Now is the time — while most companies have shifted into "panic mode" — to think about using the current economic situation as an enormous, once-in-a-lifetime chance to create a killer competitive advantage.
Here are three tips to get you started...
KILLER TIP #1: Hire your competition's best people
What? Wait a second, you're thinking of laying people off, not hiring people. Okay, maybe you need to reduce your workforce, but it's a mistake to do it wholesale and without thinking about the future. If you are laying people off, chances are, so is your competition. And the people they AREN'T laying off are still scared that they are next. It's the perfect time to poach the best and the brightest from your competition. Here's what you do: if you need to perform layoffs — fine. Target those individuals who:
The third bullet is the real kicker. In a reduced workforce situation, you need to have people who will be the leaders that carry the company into the future once the economy picks up. Not the lowest paid people. Not (necessarily) the ones who have been there the longest. But the best, brightest, and most energetic people who see your vision for the future and can start getting you there today. Look around at the companies that are driving you nuts on a regular basis. Find out who is driving the best product strategy, the best marketing plan, the best sales team. Go hire those people away from your competitor. These people will be the "Pathfinders" that will lead you past the competition and prepare you to dominate once a recovery has taken hold. If you have to lay off additional low performers in order to be able to hire these Pathfinders, do it. This may be the only time when you can get such people without breaking the bank.
KILLER TIP #2: Move to the Cloud
By now you've probably heard of "the cloud." If not, "the cloud" or "cloud computing" simply means that instead of buying (and maintaining) expensive servers, rack space, power, etc — you outsource this infrastructure and get on with your core business instead. Why do this? Because too many companies have enormous IT development or operational support groups focused on providing 24x7 care and feeding of mission critical servers, installation of applications, and other infrastructure maintenance. Is that really your core business? Why, in an environment where every single dollar counts, do you have people dedicated to patching desktop software, designing security models for your intranet systems, and maintaining servers? You'd be better off putting your dollars to work hiring those Pathfinders...
For example, several years ago, I worked at a company where we had a great idea for a new software product. This was before (or at the very beginning of) the "cloud-revolution" so we just used traditional methods for our development. We bought servers, we spent months on the security model, how to issue passwords, how to control access to parts of the application, how to deploy the system to end users, and how to do routine things like backing-up our data. Over the course of a year, we spent nearly $1 million and still didn't have a marketable product. Worse still, the system was so expensive to build, maintain, and deploy that we had to charge "per user" fees that were far too high for the market to bear. As you might expect, that business is now sleeping with the fishes.
Flash forward a few years. I still wanted to build the software. I thought it was a great idea with a great deal of merit and would be worth developing — but not the same way as we did before. I had absolutely no desire to build and maintain the underlying infrastructure. Instead, we used one of the cloud-based "Platform as a Service" systems on the market. This meant that we didn't have to buy any servers, we didn't have to develop a log-in mechanism, we didn't have to design back-up plans — that was all included in the service. We could focus all our efforts on the problem at hand — building the features we wanted to offer instead of worrying about infrastructure. Think of this as traveling from New York to D.C. by car. Do you really want to build the roads, gas stations, and restrooms it will take to get you there? Why not use the facilities that already exist for your convenience? This is what we did with our software and it made a huge difference. Instead of a team of 10, we had a team of — well, me. Instead of a year, it took us (me) three months to build (with more features, thank you very much). Instead of $1 million to build it, it cost a few thousand dollars. We can operate the software for no fixed costs and offer it to users at a very attractive price (and still make a profit). The software is now on the market, is very easy for us to deploy and maintain, and gets great reviews from users (self-promotion alert: it's called NextWave360 and we think it's pretty great). If we had tried this WITHOUT using the cloud, we'd probably be answering debt collection calls while building a log-in screen right now...
There are two main ways you can use the cloud to your advantage to save costs: use it internally and use it externally.
Cloud Tip 1: Use the cloud for your internal applications
Between Google, Zoho, Salesforce.com, and the myriad of other cloud-based applications (see EIR #13 here), it's hard to justify putting a standalone copy of a software package on each person's desktop. It's expensive, it gets obsolete quickly, and it's hard to maintain (ever had to reload MS Office?). We switched to GMail for our mail system — free, and we still use our same domain name but we get far more space than we did before. There's really nothing to "maintain" and we save $50/user. We use EchoSign for document/contract management, Freshbooks for cloud-based bookkeeping, and our own cloud-based project tool for project management. We don't even own a web server.
Cloud Tip 2: Use the cloud to develop external applications
As in our software development example above, consider using one of the cloud-based "platform as a service" (referred to as "PaaS") systems to develop any applications that you offer to your customers, either internal or external. I know, I know... Your IT guy told you how it wasn't safe, it wouldn't be cost-effective in the long run, it caused hurricanes, and you couldn't make the interface the proper shade of green (pantone 361). Sorry, I'm officially calling him out on that. As long as you pick a PaaS provider that is trustworthy, uses a good data center, does back-ups, and either is stable or escrows the code, you are no more at risk than you are developing on your own servers. In fact, I would argue that the speed to market, the ease of deployment, the ease of maintenance, and the ability to change the code on a dime if necessary far outweighs the arguments made by the IT traditionalists. This is the way software will be developed in the future — get out ahead of the curve and save some time and money while you are doing it. I recommend taking a look at TeamDesk, Cordys Process Factory, and Force.com.
KILLER TIP #3: Blow Up Your Processes
Admit it. You have been performing your business processes much the same way for a long time now. Maybe you did some "process improvement" and changed the way you executed a couple of steps last year. How'd that work out? I'm guessing it didn't make much of a difference. Now, in the middle of the economic chaos, is the time to change all that.
You need to seriously think about "blowing up" your processes. Before you start surfing the web for demolition supply stores, allow me to explain. When you are trying to gain efficiencies, reduce costs, and improve service, it can be very difficult to make progress through evolutionary or incremental techniques such as traditional process improvement. Often, all the process "plaque" that has built up over the years gets in the way of making rapid, significant improvement — "that's not the way we've always done it" becomes the rallying cry. If you want to exit the economic downturn better positioned than your competition to take advantage of new opportunities, you cannot afford to either ignore your processes or simply tweak performance incrementally. You need to challenge the way you do everything.
There is a story about a technology CEO who was faced with a stagnating company where costs were too high and effectiveness of their network was too low. Little progress had been made and something had to be done — the system couldn't continue to operate the way it had in the past and survive. The CEO met with the team one day and said "I have news to report. The network is gone." The team looked confused and he further explained, "Go look and see — it's gone. Now figure out what we do. Rebuild it."
A bit abrupt, but effective. He had essentially removed the option of minor improvements, instead challenging the team to design a solution from the ground up. Think about what this would mean if you did the same thing with your billing process, your A/P process, your service delivery process. Think about what it would look like if you were a new start-up business designing the process from the ground up. Of course, you are likely thinking "we can't do that — it's too expensive. Now is the time to stick with what we have, not go changing things." If you want to exit the recession with the same old processes with which you entered, full of plaque, partially effective, then that's the right attitude. But what if, while your competitors are using the strategy of retrenchment (and as a process consultant, I can tell you — they are...), YOU choose to get really efficient and effective with your processes? While the rising economic tide may lift all ships, you've lightened and streamlined your ship during the low tide. It may be counterintuitive, but it is an effective strategy for using slow times to not only reduce inefficient practices now, but also to prepare for the future before your competition. Fix it now, reap the benefits down the road.
There you have it: three Killer Tips to help you view the economic downturn as a tool, not just as a calamity. As Warren Buffett is famous for saying, when everyone else is scared, that's when you need to be aggressively investing. The same holds true for performance improvement. Everyone out there is scared, struggling, fighting to survive. By snatching up the best people, using the new technologies available, and by revolutionizing your processes today during this climate, you will be well prepared to outperform your competition in the future.
It's tough out there. The poor economic conditions have led to slashed budgets, reduced headcount, and projects getting put on indefinite hold. The problem is - you still (hopefully) have a business to run. What do you do if you don't have the budget to support that huge enterprise app? How do you manage if you need to do the same work or more with fewer people?
When you are a small company (like we are) or a big company with shrinking budgets, you have to find creative ways of getting things done. Here are just a few of the tools that we've found make doing business on a budget much less restrictive...
We got tired of using WebEx about 3 years ago. It seemed to take forever to get everyone online viewing the screen ("Can you see it now? How about now? Now?") and on top of that, we got to pay a small fortune for the privilege of the frustration. BUT - we still need to share screens and presentations with clients... Enter "DimDim." Dumb name, excellent application. Like WebEx, DimDim allows you to share screens, presentations, web pages, have white boards, etc. Unlike WebEx, it doesn't require users to download an application - it's a small java applet that runs when you access the meeting. Best of all - DimDim is FREE for up to 20 people in a meeting. Beyond 20, you can get a "Pro" or "Enterprise" account for about 10% of the cost of WebEx, but honestly, we've never seen the need. DimDim Web Meetings. (Highly Recommended.)
We work with clients all the time who have paper contracts that need to get printed, passed around, signed, then stored. It's no wonder trying to find that single signed piece of paper is like trying to find a needle in a haystack. We don't use paper - we use EchoSign for all our contracts. EchoSign is an online document signature system that allows you to upload the document (from Word, Excel, etc...) and send it out for signature. The recipient gets email notification and can sign (or not sign) the document electronically. It then gets "filed" electronically in your EchoSign account. FREE for up to 5 documents at a time, EchoSign has paid versions for larger amounts of storage, more users, unlimited documents, etc. EchoSign.
In my opinion, Coghead has the potential to radically change the way in which companies view the IT department. That's a big claim, but Coghead is a truly amazing system. Quite simply - Coghead is a "cloud-based" platform that allows you to build any application you want, hosted on the web, for a pretty minimal per user fee. And - it's extremely easy to use. Here's what you can do with Coghead: instead of waiting months for your IT group to search for, buy, and implement a new CRM system - why not build a quick CRM application in Coghead and learn what works and doesn't work so you can provide IT with better requirements. Instead of using MS Excel or MS Access to track customer orders or inventory - use Coghead to build an online database complete with order forms, reports, and email notification when new items are added. Coghead doesn't require you to write code or understand programming languages. Instead, you drag and drop text fields, number fields, labels, etc. onto a Coghead "collection" - a tabbed web page - and arrange things any way you like. I can easily see how tools like Coghead could transform the central IT department into the "keepers of the master database" with end-users developing the applications they need - in real time - and changing them as quickly as the business changes. Coghead (Very Highly Recommended)
I know, I know - EVERYONE knows about Gmail, right? Well - did you know that you can use Gmail with your own domain name (email@example.com instead of firstname.lastname@example.org)? A recent survey showed that using Gmail at an enterprise level is orders of magnitude less expensive than running your own MS Exchange server. Gmail for domains has two versions - FREE and $50/year/user. We use the free version which gives you up to 7GB of mail space per user, a shared calendar system, Google Docs, and Google Sites (a good replacement for MS Sharepoint). The paid version removes a small text "ad" on the webmail page, which you'll never notice if you use MS Outlook or Apple Mail, and which is pretty discreet anyway. For us, Gmail works really well - all our mail comes from "nextwaveperformance.com" and we have a huge amount of storage. Plus - the "dashboard" for adding and managing new users in your account is really easy to use - far better than other systems I've tried. Gmail for Domains (Highly Recommended)
If you have to make flowcharts, MS Visio is pretty much the standard (unless you use a Mac, then do yourself a favor and get OmniGraffle). But - that standard comes with a price - about $270 for a single user copy of Visio. Wouldn't it be great if you could get the ability to build nice, clean flowcharts for, oh... let's say... FREE? You can. It's called LucidChart. LucidChart is an online flowcharter that again comes in two forms - a free account and a paid account. The paid account removes the LucidChart logo and costs $50/user/year. It also gives you some interesting collaboration tools that allow multiple people to work on the flowchart online. It doesn't read or write in Visio format, but if you can live with exporting as a PDF (which you should do anyway), LucidChart should get the job done. LucidChart (Recommended)
Let's get one thing out of the way right now. I can't stand MS Project. I think it is one of the worst pieces of software on the market today for reasons too numerous to mention but, many companies still use it. The problem is, it's pricey and companies generally have only a few people with a copy. OpenProj is the answer. OpenProj comes in two flavors - stand-alone and hosted. The stand-alone version (comparable to using the stand-alone MS Project) is - wait for it... FREE. That's right - everyone in your company can have a project management application that reads and writes in MS Project format free of charge. The server-based version - comparable to MS Project Server - is $20/user/month. This one seems like a no-brainer to me. Why would you ever pay for MS Project when you can get the same thing at no cost? OpenProj. (Recommended)
Since I originally wrote this, Platform-as-a-Service vendor Coghead has been acquired by SAP and is no longer offering the service to end users. In lieu of Coghead, I highly recommend TeamDesk as a web-based application system. Alternatively, you can find lots of info at the PowerInTheCloud website's PaaS Vendor list.
As a consultant who specializes in business process improvement, one of the most frustrating aspects of the job is coming back to a client weeks or months later to find out no substantial implementation progress has taken place. We've spent weeks redesigning processes to improve efficiencies only to find out that the flowcharts are sitting on a shelf somewhere gathering dust. When we ask the client "what's happening - why aren't you implementing?" - we get the same answer time and time again - "we've submitted change requests to the IT department and we're still waiting..."
As painful as it is for us - imagine how it must be for the client who hasn't yet realized the benefits from the improved process!
Luckily, a new age is upon us. The days of submitting requests for simple applications to overloaded development teams and hoping/praying that your application is highly prioritized are drawing to a close. The days of IT departments developing applications - only to find they built the wrong thing or that key functionality is missing - are slowly slipping away.
With the advent of revolutionary shared-infrastructure cloud-computing platforms such as Coghead, the way we interact with clients and the results we are able to achieve are radically different compared to just one year ago.
In the past, we would work side-by-side with our clients to develop a new business process and then leave, hoping for a successful implementation later. Now we prototype everything. We develop the process first, then sit right there with our clients and develop a mock-up - a mock-up that actually works - with the flowchart still hanging on the wall.
For example, last week we met with one of our clients who needed a new method for prioritizing their customers into various support levels (gold support, silver support, bronze support, etc.). The problem was, the decision was fraught with emotion. The sales team always wanted "premium support" for every customer. The support team wanted to make sure that premium support was only for customers who REQUIRED premium support (and paid for it!). On top of that, once the support-level decision was made, it never got revisited! Customers who warranted premium support 2 years ago were still receiving premium support today, even though their expenditures with our client had dropped by 50% or more. Something had to be done.
We worked with the client to develop a new set of criteria that we could use to rank the support needs of each customer, resulting in an overall "support score." Each customer was then reviewed, scored, and ranked into the appropriate support tier - and here's where the magic of cloud computing comes in... Where in the past we would have used MS Excel to create the "ranking list," for this client we used Coghead to develop a web-based support scoring system. In a matter of 2 hours, we had a prototype that the client could use to enter customer information and obtain an objective, criteria-based decision on where customers should be slotted. What's more - since it was cloud-based and accessible via the web, the entire sales team could enter their information in real time AND revisit the ranking on a periodic basis.
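The scoring-and-tiering logic behind a system like this is simple enough to sketch in a few lines. The criteria names, weights, and tier cutoffs below are purely illustrative assumptions - the actual criteria we developed with the client were specific to their business.

```python
# Illustrative sketch of a criteria-based support-scoring system.
# All criteria, weights, and cutoffs here are hypothetical examples.

# Each criterion is scored 0-100, then weighted (weights sum to 1.0).
WEIGHTS = {
    "annual_spend": 0.5,
    "contract_level": 0.3,
    "support_volume": 0.2,
}

# Tier cutoffs, checked highest first: score >= cutoff -> that tier.
TIER_CUTOFFS = [(80, "gold"), (50, "silver"), (0, "bronze")]

def support_score(customer: dict) -> float:
    """Weighted sum of the normalized criteria = overall support score."""
    return sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)

def support_tier(score: float) -> str:
    """Slot a score into the first tier whose cutoff it meets."""
    for cutoff, tier in TIER_CUTOFFS:
        if score >= cutoff:
            return tier
    return "bronze"

customer = {"annual_spend": 90, "contract_level": 70, "support_volume": 40}
score = support_score(customer)        # 0.5*90 + 0.3*70 + 0.2*40 = 74.0
print(score, support_tier(score))      # 74.0 silver
```

The point isn't the arithmetic - it's that once the criteria are explicit, the tier decision becomes objective and repeatable, so the sales and support teams can re-run it whenever a customer's numbers change.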
How long would something like this have taken just 2 years ago? Maybe weeks, more likely months (and that is assuming you could get funding approval!). The world has changed - cloud computing now allows businesses to develop or change systems as rapidly as their processes change, keeping up with their customers' and the business's needs. Just remember, with this amazing power comes responsibility. The great positive that is cloud computing can easily make things worse if you don't think through your processes before you start building a solution.
The platform-as-a-service provider mentioned in this article was acquired by SAP and is no longer available to the public. Our recommendation? Try dbFlex instead. It's similar to Quickbase or Coghead but for a lower price. (Disclosure: we are a dbFlex partner, but receive no compensation for directing users their way.)