
Is your organization ready for the Age of Information?

We are living in the age of information. And yet, instead of taking a broader view and rethinking how they treat information at scale, many businesses are still entangled in internal warfare, fighting endless battles over minor details of access rights and (often self-imposed) data compliance regulations.

It is paradoxical: of course, top management knows that gut-based decision-making won’t get them far, and that reliable data is key. And yet, recent research suggests that no more than 3% of companies’ data meets basic quality standards!

Apparently, somewhere along the way from top management’s decision to the IT implementation, focus is lost. How come?


Are you making the right data right?

Over the last few decades, Business Intelligence has largely been in the hands of IT. Traditionally, the drivers of data projects have been (and still are) mostly technical, with business needs only one of many factors taken into consideration. As a result, the key areas of discussion have often been political in nature, revolving around application ownership and access rights.


This led to three key challenges businesses are facing today:

  • Setting a clear focus for their data strategy

  • Making data quality reliable

  • Removing barriers that block the flow of information across the business


It is time to cut to the chase and revise the “rules of the game”.

In this article, we shed some light on how organizations can approach their data strategy, and on a novel data ownership concept that gets businesses going with data.


Are you boiling the (data) ocean?

The traditional logic goes that with IT landscapes becoming more complex, more advanced technologies are required to store ever-increasing amounts and types of data.

The reality is that many issues can be solved with existing datasets that are small and by no means big data. And in cases where big data technology is actually required, the sheer size and diversity of the datasets make wrongly set priorities extremely expensive.

This is why a clear and sound set of strategic principles to guide the execution of data initiatives is so important.

The traditional approach of funding large, technology-focused projects no longer works because it puts the cart before the horse. Instead of focusing on the most relevant use cases and data, these initiatives try to implement all-encompassing solutions meant to make the business future-proof for all eternity (people love the illusion of solutions that make problems go away forever). It turns out that these comprehensive solutions are so complex that they in fact accomplish the opposite. Overwhelmed business units struggle to see the value of the solutions because they are unable to find useful and relevant applications. The result: people get frustrated, few or none use the solution, and the initiatives ultimately fail for lack of adoption by the business.


Finding the right approach for your data strategy

A better approach is to focus on strategic data assets and to build technology specifically tailored to this data. It is critical to target use cases and data that deliver value to the business early on.

The question is: how can use cases and data be prioritized?

From a high-level perspective, it is hard to judge objectively whether data is important. Traditional top-down approaches therefore mostly fail to do the job, because a multitude of biased opinions and views muddies the discussion.

The solution? Involve business users from the very start and make them part of the process of identifying relevant use cases along with the underlying data. Once a broad view of the existing data and its importance is established, it is possible to initiate value-focused implementations that deliver benefits for business users from day one, which in turn fosters adoption and sets the business up for the next stages of the data journey.


Who's your data?

So now that we have made sure we are working on the right things, let’s talk about making things right: there are still some data quality issues to solve.

In many instances, the challenge of showing the right data to the right people is approached from an infrastructure perspective and, seemingly, mastered – only to witness a complete lack of adoption by decision-makers. The reason: they mistrust the quality of the data behind their pretty dashboards.

If it’s not better infrastructure and tools that will improve data quality – what is it?

Ultimately, the solution comes down to rethinking data ownership and, more fundamentally, evolving the “need-to-protect” mindset around information towards a “need-to-use” line of thought. This increases the usage of data, which in turn raises data quality, as errors surface more often and can be fixed.


From application ownership to data ownership

The traditional approach for improving data quality is to appoint an application owner, who is entrusted with ensuring that all data managed by and in their application meets certain quality requirements. KPIs are then defined that revolve around completeness and other technical measures of the data, while the quality of the content (such as the proper spelling of names or correct figures) can rarely be measured. This typically leads to a scenario in which the people charged with improving data quality perceive it as a tedious task on top of their regular workload. They see little benefit in providing the data beyond ticking a to-do off their list. This makes data maintenance a low-focus, error-prone process that often results in poor content.
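
To make the contrast concrete, here is a minimal sketch in Python with a hypothetical customer table (the field names and the rudimentary checks are illustrative assumptions, not a prescription). A completeness KPI can be computed from the data alone, whereas judging the content requires knowledge the application owner typically does not have:

    # Illustrative only: why completeness KPIs are easy to automate
    # while content quality is not. Records and fields are hypothetical.
    records = [
        {"name": "Alice Smith", "email": "alice@example.com", "revenue": 12000.0},
        {"name": "", "email": "bob@example", "revenue": -500.0},
        {"name": "Carol Jones", "email": None, "revenue": 8300.0},
    ]

    def completeness(field):
        """Share of non-empty values: the typical 'technical' KPI."""
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        return filled / len(records)

    for field in ("name", "email", "revenue"):
        print(f"completeness[{field}] = {completeness(field):.0%}")

    def plausible(r):
        """Crude content check: the best a generic rule can do. Whether a
        name is spelled right or a figure is correct, only the person
        actually working with the data can tell."""
        email = r["email"] or ""
        return (bool(r["name"])
                and "@" in email and "." in email.split("@")[-1]
                and r["revenue"] is not None and r["revenue"] >= 0)

    print(f"plausible records: {sum(map(plausible, records))}/{len(records)}")

The completeness numbers look respectable (67–100% per field), yet only one of the three records survives even a crude plausibility check – and that gap is precisely where an application owner’s KPIs end and a data user’s judgment has to begin.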

Even worse, while significant effort is spent on improving data quality, it is often unclear whether the data is relevant (i.e. used) in the first place.

In the first part of this article, we looked at use-case-based data prioritization, which makes sure the focus is on relevant data. The question left to answer is: how should the organization be designed to improve data quality?

The solution consists of three key components:

  • Shift the responsibility for the data from the application owner to the data owner – the person actually working with the data

  • Ensure that the data owner can benefit from the data rather than only having the obligation to provide it

  • Increase data usage both by the data owner as well as by others

Are you ready to shift?

These are simple directives. However, to achieve them, organizations need to rethink how they treat information.


Traditional protective approaches (“need-to-protect”) are fundamentally challenged in the information age. The value of data is often determined by its relevance, which is in turn determined by how often it is used. In an increasingly connected world, the shift towards a well-governed “need-to-share” mindset is inevitable – it fosters data usage and ultimately leads to better decision-making across the business.


Making data users data owners and incentivizing them to share their data with colleagues may seem to be a radical shift. And it is. But opening data silos and releasing that data in a well-governed environment is a first step towards better compliance and gives the organization a chance to truly embrace technology trends such as artificial intelligence.


 

If you want to get notified when a new article comes out, join my newsletter!



Like what you read? It would mean the world to me if you'd share it on social!
