In my last post I covered the great benefits of a successful implementation of a modern BI and analytics platform. Knowing how valuable those benefits can be to an organization, it may seem surprising that so many companies still lag behind in their reporting capability. The fact is, many companies remain hesitant to embrace significant change, and for good reason. Gartner predicts that as many as 85% of BI initiatives fail. Even more never return the value they promise, falling victim to a number of technical and organizational pitfalls. Today I will cover some of the more common pitfalls to be aware of and offer guidance on how to avoid them.
Organizational Buy-In
For a myriad of reasons, few initiatives ever get the commitment from the entire organization that they need to make an impact. Different workstreams (Marketing, Purchasing, Sales) tend to operate in their own silos, as well as independently from key internal support organizations such as IT. Without support and participation throughout the organization, the best any initiative can hope for is incremental gains in its own productivity. To earn a seat at the table in critical decisions, the drivers of any initiative must demonstrate increasing value and gain the trust of all key stakeholders involved.
While this can be tough for firms, especially those with long-established cultures, there is a proven path forward. Well-executed change management and proper project selection are critical to winning over key players. It is all too easy to go all-in and tackle the toughest areas of concern first. While the payoff may be huge, success is not guaranteed, and the time and cost involved can erode confidence. Asking new users to come up to speed on an unfamiliar approach at the same time, with little experience to draw on, only compounds the problem. It is far more effective to pilot the program with easily digestible projects and a core group of users who can demonstrate value while building confidence. From there, you can champion the project's success internally and grow the footprint and ambition of the effort going forward.
Fear of Implementation
Nearly every organization has been burned by an expensive, failed implementation at one point or another. The risk of spending months on a project that is difficult to justify afterwards is a constant concern for the leaders of an organization. It often feels much safer to take no action than to push the needle forward. This is especially true when projects are viewed only as technical upgrades or apply only to non-critical business functions.
In addition to the risk of failure, the amount of human capital required to train new users and ensure adoption can be daunting. As organizations add more and more tools, users' willingness to leverage them decreases. The last thing key decision makers want is yet another tool they have to manage and update just to do their jobs.
When evaluating a significant project, it is critical to base the decision on business outcomes rather than technical features. If leaders can articulate the business case for making a change, it changes the tone of the conversation. Showing how users' business processes and workflows can be streamlined and automated so they have less to focus on, not more, is essential to getting them to actively participate. Demonstrating specific things they can do to be more effective at their jobs is significantly more impactful than showing them the latest technical feature, no matter how "cool" it may seem.
Finally, anything you can do to reduce the risk of failure should be considered. Shortening the implementation timeline, rolling out the effort geographically or by business function rather than in a "big bang," or reducing the initial scope of functionality and expanding it over time are all effective ways to hedge the risk of a failed effort.
Ignoring Data Quality
You can't make good decisions based on bad data, and nothing undermines trust in your reporting faster than bad decisions. To compound the issue, the numerous points of data entry, and the interfaces that attempt to keep them in sync, are fraught with opportunities for corruption. In too many cases, users have no option other than to review and validate data manually. Not only is this highly error-prone, it is a tedious chore that is typically avoided when possible. Even when users do their best to keep up with issues, the limitations of how data is stored and used across multiple systems prevent accurate reporting even with the best of intentions. Finally, many visualization tools lack the integration and cleansing capability to address gaps in data appropriately.
While there is no silver bullet to clean and harmonize data across the organization, consistent standards, a clear source of truth for each data element, and the proper tools can quickly build confidence in the integrity of your data. We recommend organizations formally identify where each data element is entered, so there is a single source of truth, and then verify that their interfaces propagate that data as reliably as possible. Where gaps can't be resolved at the point of entry, the right warehouse and data cleansing approach can resolve the rest. The goal should always be to minimize manual correction as much as possible so that users can focus on essential, high-value deliverables, as in the sketch below.
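To make that concrete, here is a minimal sketch of what an automated data-quality check might look like, written in Python with pandas. The column names (customer_id, email, region), the file name, and the rules are hypothetical illustrations rather than any specific system's schema; the point is simply that routine validation can be scripted so manual review is reserved for true exceptions.

```python
# Minimal sketch of automated data-quality checks, assuming a hypothetical
# customer extract. Column names and rules are illustrative only.
import pandas as pd

REQUIRED_COLUMNS = ["customer_id", "email", "region"]

def run_quality_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Return the records that would otherwise require manual review."""
    issues = []

    # Flag missing values in fields that downstream reports depend on.
    for col in REQUIRED_COLUMNS:
        missing = df[df[col].isna()]
        if not missing.empty:
            issues.append(missing.assign(issue=f"missing {col}"))

    # Flag duplicate keys, often a sign an interface double-loaded records.
    dupes = df[df.duplicated(subset="customer_id", keep=False)]
    if not dupes.empty:
        issues.append(dupes.assign(issue="duplicate customer_id"))

    # If nothing is flagged, return an empty frame with the same columns.
    return pd.concat(issues) if issues else df.head(0).assign(issue="")


if __name__ == "__main__":
    extract = pd.read_csv("customer_extract.csv")  # hypothetical source file
    exceptions = run_quality_checks(extract)
    print(f"{len(exceptions)} records need attention")
```

The same idea can just as easily live in the warehouse as SQL checks or in a dedicated data-quality tool; the design choice that matters is that exceptions are surfaced automatically rather than hunted down by hand.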
If you would like more information on how to properly manage your data or ensure the success of your next implementation, please reach out to us here to get started.