The role of data management in the “New Normal”
We’ve all heard the hackneyed phrase “Information is an Asset”. Yet despite decades of work by my friend and mentor John Ladley, and Gartner and Forrester alumnus Doug Laney (to name but two), information and data don’t appear on the balance sheet of any organisation. At least not explicitly. As organisations of all sizes begin to assess how they will restructure and refocus their businesses and operating models in what is being referred to as “The New Normal”, that is an oversight that might be coming home to roost.
Time to Sober Up
What we are faced with is an external trigger for digital transformation in organisations of all sizes. I have watched in awe as historically monolithic organisations like government departments have adjusted to things like remote working, and as even the smallest country hardware store has figured out ways to take and fulfil orders online without sacrificing social distancing. But, as the adrenaline rush of the sprint starts to fade and organisations realise that this might be more of a marathon, the gaps in the stop-gaps are starting to become visible and the challenges of an effective and sustainable digital transformation are coming to the fore.
However, I’m not surprised. In discussions with colleagues in the Leaders’ Data Group (www.dataleaders.org) over the last few months, we have collectively bemoaned the fact that the last 30 years of data management might as well not have happened, as we see the same mistakes and the same flawed thinking repeating over and over again. And this is common across all the industry sectors we have worked in. The statistics are depressing.
Over a decade ago (2007) I wrote a book on Information Quality Management strategy (published by Ark Group, now sadly out of print). In the research for that book I found stats showing that between 70% and 80% of ERP and CRM data transformation projects failed, in that they delivered late, didn’t deliver the required functionality, or cost more than budgeted. The underlying root cause of these failures was, in many cases if not the majority of them, an element of magical thinking about the data aspects of the initiative and a failure to properly define and manage key master data, metadata, and data quality issues. Organisations spent lots of money on a shiny new bucket and then moved the slop from their old systems into the new bucket, but without the manual filtering processes that had kept the toxic slop from poisoning the organisation.
The latest trends in Digital Transformation show an unnerving similarity. Forbes reported in 2016 that 84% of Digital Transformation projects fail. According to Bain & Co., 75% of digital transformation projects fail, and only 5% meet or exceed expectations. McKinsey reported similar numbers in 2015, and others report a similar spread of 60% to 80% failure.
If the numbers are similar, what can we say about the root causes?
The Digital Hangover
Back in 2007 when I was researching my book, the common thread in the failure rates for projects was the data. Today, we have more data and more desire to use that data in online processes and in machine learning-enabled processes. The “lure of the shiny”, inevitably combined with pressing and valid commercial needs to evolve and grow businesses and markets, has led to organisations investing in (or at least exploring) the implementation of new technologies and capabilities. It’s just like 20+ years ago when we decided that this relational database thing and this thin-client infrastructure that the newfangled hypertext browser was enabling might make it worthwhile to consolidate data about customers in one place (CRM) and data about our inventory and other assets in another place (ERP). The only difference is that this time around we have 20+ years of legacy technical and governance debt accruing on the information we need to move into these new environments and on which we intend to train our intelligent machines.
Back then, the digital hangover was usually addressed by putting some manual processes in between the systems as a work-around, or batch processing to clean and rework data as part of an ETL process. But people were still interacting with the data and it was noticeable when data loads failed or expected information wasn’t visible on the screen or when reports didn’t add up properly.
However, the vast majority of organisations did sweet f.a. about that. Research from UCC has shown that less than 3% of organisations have data that meets basic data quality standards. This is costing the average organisation between 10% and 35% of turnover annually. But everyone is more concerned about the remote chance of a potential 4% of turnover fine for breaching GDPR, so this is the accepted cost of doing business. And the hangover continues (particularly when organisations miss the opportunity that proactive approaches to data protection risk management can bring to the table).
Today, the hangover results in machine-learning processes being trained on garbage data (resulting in many case studies proving the truth of ‘garbage in, garbage out‘) or (worse in one way but preferable in others) machine learning projects simply not getting going because of an absence of data to train models. The hangover results in digital transformations failing, unsurprisingly, up to 85% of the time.
The hangover takes the form of poorly defined and managed metadata, poor data quality, missing-in-action data governance, and an incoherent data strategy that conflates “digital” and “data” with technology. In short: the hangover is the headache caused by how organisations continue to mismanage data.
To borrow a phrase from Bill Clinton’s campaign (coined at a time when we were just getting going with the whole CRM and ERP schmozzle): “It’s the data, stupid“.
The Hair of the Dog
So, the underlying disciplines of data are essential foundational capabilities for digital transformation. To get the shiny, we need to roll up our sleeves and do the grimy. By which I mean the unglamorous work of actually getting to grips with data quality and all the other ‘nuts and bolts’ of data. It’s not sexy, but if those fundamentals aren’t addressed then the execution of your data strategy and your digital transformation will be sub-optimal at best.
By way of analogy… I train in Aikido. It’s an elegant martial art with techniques that are both dynamic and devastating when applied correctly. I spend most of my time (when not in Covid-Lockdown) learning how to fall, roll, move, and stretch properly. Not glamorous, but essential to being able to execute the techniques properly without injuring myself or others. As the saying goes, the fastest way to Carnegie Hall is practice, practice, practice.
But when it comes to data-enabled business transformation (aka Digital Transformation), the temptation (the delusion?) is that the next shiny toy will fix all our ills. The consistency in failure rates over nearly thirty years of data management shows that the fundamentals are just that: fundamental. So, it’s time to have some of the hair of the dog and actually deal with these issues. Organisations that do will reap benefits. Organisations that don’t will continue to achieve the same ROI as setting a pile of cash on fire, but over a slightly longer time period.
The New Normal
In the “New Normal”, when organisations are delivering projects and training remotely, or your local store is doing online ordering and delivery, the data supply chain that feeds the successful delivery of desired outcomes will be even more essential, as the interpersonal intermediary of the physical shop and the friendly helper won’t be there as a buffer. Therefore, it behoves any organisation gearing up for the New Normal to:
- Think Digital and
- Do Data
If you want to find out more about how we can help, get in touch.