The importance of data to today’s businesses can’t be overstated. Studies show data-driven companies are 58% more likely to beat revenue goals than non-data-driven companies and 162% more likely to significantly outperform laggards. Data analytics is helping nearly half of all companies make better decisions about everything from the products they deliver to the markets they target. Data is becoming critical in every industry, whether it’s helping farms increase the value of the crops they produce or fundamentally changing the game of basketball.

Used optimally, data is nothing less than a critically important asset. Problem is, it’s not always easy to put data to work. The Seagate Rethink Data report, with research and analysis by IDC, found that only 32% of the data available to enterprises is ever used and the remaining 68% goes unleveraged. Executives aren’t fully confident in their current ability—nor in their long-range plans—to wring optimal levels of value out of the data they produce, acquire, manage, and use.

What’s the disconnect? If data is so important to a business’s health, why is it so hard to master?

In the best-run companies, the systems that connect data producers with data consumers are secure and easy to deploy. In most companies, they’re not. Organizations struggle to find data and leverage it for strategic purposes. Sources of data are hard to identify and even harder to evaluate. Datasets used to train AI models for task automation can be hard to validate. Hackers are always looking to steal or compromise data. And finding quality data is a challenge for even the savviest data scientists.

Communication gaps can also derail the process of delivering impactful insights. The executives who fund data projects and the data engineers and scientists who carry them out don’t always understand one another. Practitioners can execute a detailed plan, but if they don’t frame the results properly, the executive who requested them may say the findings aren’t what they were looking for. The project will be labeled a failure, and the chance to generate value from the effort will fall by the wayside.

Companies encounter data issues no matter where they are in terms of data maturity. They’re trying to figure out ways to make data an important part of their future, but they’re struggling to put plans into practice.

If you’re in this position, what do you do?

Companies found themselves at a similar inflection point back in the 2010s, trying to sort out their places in the cloud. They took years developing their cloud strategies, planning their cloud migrations, choosing platforms, creating Cloud Business Offices, and structuring their organizations to best take advantage of cloud-based opportunities. Today, they’re reaping the benefits: Their moves to the cloud have enabled them to modernize their apps and IT systems.

Enterprises now have to make similar decisions about data. They need to consider many factors to make sure data is providing a foundation for their business going forward. They should ask questions such as:

  • Is the data the business needs readily available?
  • What types of data sources are needed? Are there distributed and diverse datasets you don’t know about?
  • Is the data clean, current, reliable, and able to integrate with existing systems?
  • Is the rest of the C-level on board with the chief data officer’s approach?
  • Are data scientists and end users communicating effectively about what’s needed and what’s being delivered?
  • How is data being shared?
  • How can I trust my data?
  • Does every person and organization that needs access to the data have the right to use it?

This is about more than just business intelligence. It’s about taking advantage of an opportunity that’s taking shape. Data use is exploding, tools to leverage it are becoming more efficient, and data scientists’ expertise is growing. But data is hard to master. Many companies aren’t set up to make the best use of the data they have at hand. Enterprises need to make investments in the people, processes, and technologies that will drive their data strategies.

With all of this in mind, here are 10 principles companies should follow when developing their data strategies:

1. Understand how valuable your data really is

How much is your data worth to you? This can be measured in a number of ways. There are traditional metrics to consider, such as the cost of acquiring the data, the cost to store and transmit it, the uniqueness of the data being acquired, and the opportunity to use it to generate additional revenue. Marketplace metrics, such as data quality, the age of the data, and the popularity of a data product, also affect the value of the data.

Your data could also be valuable to others. For example, the patient datasets a hospital collects could be of interest to disease researchers, drug manufacturers, insurance companies, and other potential buyers. Is there a mechanism in place to anonymize, aggregate, and control your data, and to identify its potential users?
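
As a rough sketch of what such a mechanism might look like, the snippet below pseudonymizes direct identifiers and releases only cohorts above a minimum size. The record layout, salt handling, and threshold are illustrative assumptions, not a prescription:

```python
import hashlib
from collections import Counter

# Illustrative patient records -- the field names are hypothetical.
records = [
    {"patient_id": "p-1001", "age": 34, "diagnosis": "asthma"},
    {"patient_id": "p-1002", "age": 37, "diagnosis": "asthma"},
    {"patient_id": "p-1003", "age": 62, "diagnosis": "diabetes"},
]

SALT = "rotate-me-regularly"  # a real system would manage this secret properly
K = 2  # smallest cohort size that may be released

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:12]

def age_band(age: int) -> str:
    """Coarsen exact ages into 10-year bands to reduce re-identification risk."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

# De-identify, then aggregate into cohorts and withhold any group smaller than K.
deidentified = [
    {"pseudo_id": pseudonymize(r["patient_id"]),
     "age_band": age_band(r["age"]),
     "diagnosis": r["diagnosis"]}
    for r in records
]
cohorts = Counter((r["age_band"], r["diagnosis"]) for r in deidentified)
releasable = {group: n for group, n in cohorts.items() if n >= K}
print(releasable)  # {('30-39', 'asthma'): 2} -- the singleton diabetes cohort is withheld
```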

Opportunity, balanced by the cost it takes to deliver on it, is one way to determine the potential value of your data.

2. Determine what makes data valuable

While it may be hard to put an actual dollar value on your data, it’s easier to define the elements that contribute to data having a high degree of value. It can be reduced to a simple thought equation:

Completeness + Validity = Quality

Quality + Format = Usability

Usable Data + A Data Practitioner Who Uses it Well = VALUE

Your data project can’t proceed without good data. Is the quality of your data high enough to be worthwhile? That will depend, in part, on how complete the sample is that you’ve collected. Are data fields missing? Quality also depends on how valid the information is. Was it collected from a reliable source? Is the data current, or has time degraded its validity? Do you collect and store your data in accordance with industry and sector ontologies and standards?

Your data has to be usable for it to be worthy of investment. Setting up systems that let data practitioners use and analyze the data well, and that connect them with the business leaders who can leverage the resulting insights, closes the loop.
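
To make the thought equation concrete, here is a toy sketch that scores completeness and validity and averages them into a quality score. The required fields, sanity checks, and equal weighting are illustrative assumptions:

```python
# A toy reading of "Completeness + Validity = Quality". The fields checked
# and the rules applied are hypothetical, not a standard.
REQUIRED_FIELDS = ["customer_id", "region", "revenue", "updated_at"]

def completeness(rows: list[dict]) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(
        1 for row in rows for f in REQUIRED_FIELDS if row.get(f) not in (None, "")
    )
    return filled / (len(rows) * len(REQUIRED_FIELDS)) if rows else 0.0

def validity(rows: list[dict]) -> float:
    """Fraction of rows passing simple sanity checks (hypothetical rules)."""
    def ok(row: dict) -> bool:
        return isinstance(row.get("revenue"), (int, float)) and row["revenue"] >= 0
    return sum(ok(r) for r in rows) / len(rows) if rows else 0.0

rows = [
    {"customer_id": "c1", "region": "EMEA", "revenue": 1200.0, "updated_at": "2024-01-02"},
    {"customer_id": "c2", "region": "", "revenue": -50, "updated_at": "2024-01-03"},
]

# Averaging is one simple way to combine the two components into "Quality".
quality = (completeness(rows) + validity(rows)) / 2
print(f"quality score: {quality:.2f}")
```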

3. Establish where you are on your data journey

Positioning a business to take full advantage of cloud computing is a journey. The same thinking should apply to data.

The decisions companies make about their data strategies depend largely on where they happen to be on their data journeys. How far along are you on your data journey? Assessment tools and blueprints can help companies pinpoint their positions. Assessments should go beyond identifying which tools are in a company’s technology stack. They should look at how data is treated across an organization in many ways, taking into account governance, lifecycle management, security, ingestion and processing, data architectures, consumption and distribution, data knowledge, and data monetization.

Consumption and distribution alone can be measured in terms of an organization’s ability to apply services ranging from business intelligence to streaming data to self-service applications of data analytics. Has the company implemented support for data usage by individual personas? Is it supporting individual APIs? Looking at data knowledge as a category, how advanced are the company’s data dictionaries, business glossaries, catalogs, and master data management plans?

Scoring each set of capabilities reveals a company’s strengths and weaknesses in terms of data preparedness. Until the company takes a closer look, it may not realize how near or far it is from where it needs or wants to be.
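
A self-assessment along these lines can be as simple as scoring each category and flagging the gaps. In this sketch, the 0-5 scores and the categories’ equal weighting are hypothetical inputs:

```python
# A minimal maturity self-assessment. The categories mirror the ones named
# above; the scores (0-5) are hypothetical example inputs.
scores = {
    "governance": 3,
    "lifecycle management": 2,
    "security": 4,
    "ingestion and processing": 3,
    "data architecture": 2,
    "consumption and distribution": 1,
    "data knowledge": 2,
    "data monetization": 0,
}

overall = sum(scores.values()) / (5 * len(scores))
print(f"overall data maturity: {overall:.0%}")
for category, score in sorted(scores.items(), key=lambda kv: kv[1]):
    flag = "  <- priority gap" if score <= 1 else ""
    print(f"  {category}: {score}/5{flag}")
```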

4. Learn to deal with data from various sources

Data is coming into organizations from all directions: from inside the company; from IoT devices and video surveillance systems at the edge; and from partners, customers, social media, and the web. The hundreds of zettabytes of worldwide data will have to be selectively managed, protected, and optimized for convenient, productive use.

This is a challenge for enterprises that haven’t developed systems for data collection and data governance. Wherever the data comes from, there needs to be a mechanism for standardizing it so that the data will be usable for a greater benefit.
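
One common way to build such a mechanism is a set of source-specific adapters that map every inbound record onto a shared schema. In this sketch, the source formats, field names, and conversion factor are all hypothetical:

```python
from datetime import datetime, timezone

# Source-specific adapters normalize inbound records into one common schema.
# The raw formats below are made-up stand-ins for real CRM and IoT feeds.
def from_crm(raw: dict) -> dict:
    return {
        "customer": raw["AccountName"].strip().lower(),
        "amount_usd": float(raw["Amount"]),
        "observed_at": datetime.fromisoformat(raw["CloseDate"]).astimezone(timezone.utc),
        "source": "crm",
    }

def from_iot(raw: dict) -> dict:
    return {
        "customer": raw["site_owner"].strip().lower(),
        "amount_usd": raw["reading_kwh"] * 0.12,  # hypothetical tariff conversion
        "observed_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        "source": "iot",
    }

ADAPTERS = {"crm": from_crm, "iot": from_iot}

def standardize(source: str, raw: dict) -> dict:
    """Route a raw record through the adapter for its source of origin."""
    return ADAPTERS[source](raw)

print(standardize("crm", {"AccountName": " Acme ", "Amount": "99.5", "CloseDate": "2024-03-01"}))
```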

Different companies and different countries impose different rules on what information can be shared and how. Even individual departments within the same company can run afoul of corporate governance rules designating the paths certain datasets have to follow. That means enforcing data access and distribution policies. To seize these data opportunities, companies need to engineer pathways to discover new datasets and impose governance rules to manage them.

In manufacturing, companies along a supply chain measure the quality of their parts and suppliers. Often, the machinery and robotics they’re using are owned by the suppliers. Suppliers may want contracts specifying who has the right to use that data so they can protect their own business interests, and manufacturers should define their data-sharing requirements with partners and suppliers up front.

5. Get a strategic commitment from the C-suite

Data benefits many levels of an organization, and personas at each of the affected levels will lobby for a particular aspect of the data value process. Data scientists want more high-powered, easy-to-use technology. Line-of-business leaders push for better, faster insights. At the top of the pyramid is the C-suite, which prioritizes the channeling of data into business value.

It’s critical to get C-level executives on board with a holistic data strategy. Doing it right, after all, can be disruptive. Extracting maximum value from data requires an organization to hire staff with new skill sets, realign its culture, reengineer old processes, and rearchitect the old data platform. It’s a transformation project that can’t be done without getting buy-in from the top levels of a company.

The C-suite is increasingly open to expanding organizations’ use of data. Leveraging data and improving decision-making to remain competitive and exploit changing market conditions is the second-highest strategic area of interest at the board level, after customer engagement, according to the IDC report “Market Analysis Perspective: Worldwide Data Integration and Intelligence Software, 2021.” In the same report, 83% of executives articulated the need to be more data-driven than they were before the pandemic.

How should organizations ensure that the C-suite gets on board? If you’re a stakeholder without a C-level title, your job is to work with your peers to find an executive sponsor to carry the message to leaders who control the decision-making process. Data is a strategic asset that will determine a company’s success in the long run, but it won’t happen without endorsements at the highest levels.

6. In data we trust: Ensure your data is beyond reproach

As AI expands into almost every aspect of modern life, the risks of corrupt or faulty AI practices increase exponentially. This comes down to the quality of the data being used to train the AI models. How was the data produced? Was it based on a faulty sensor? Was bias introduced at the data’s origin and carried into the dataset? Did the data come from a single location rather than a statistically valid sample?

Trustworthy AI depends on having trustworthy data that can be used to build transparent, unbiased, and robust models. If you know how a model was trained and you suspect you’re getting faulty results, you can stop the process and retrain the model. Or, if someone questions the model, you can go back and explain why a particular decision was made, but you need clean, validated data to reference.

Governments are often asked by policy watchdogs to justify how they’re using AI and to prove that their analyses are not built on biased data. The validity of the algorithms involved has sparked debate over efforts to rely on machine learning to guide sentencing decisions and to decide welfare benefit claims and other government matters.

Model training takes place in steps. You build a model based on data. Then you test the model and gather additional data to retest it. If it passes, you turn it into a more robust production model. The journey continues as you add more data, massage it, and establish over time whether your model stands up to scrutiny.
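
Here’s a skeletal version of that loop using scikit-learn; the dataset, model choice, and promotion threshold are placeholders, not a recommended configuration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A skeletal build -> test -> retrain loop. Everything here is a placeholder
# choice: swap in your own data, model, and promotion criteria.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

PROMOTION_THRESHOLD = 0.95  # hypothetical bar a candidate model must clear

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)          # build a model based on data
score = model.score(X_test, y_test)  # test it against held-out data

if score >= PROMOTION_THRESHOLD:
    print(f"accuracy {score:.3f}: promote to a more robust production model")
else:
    # Gather additional data, massage it, and retrain before retesting.
    print(f"accuracy {score:.3f}: retrain with more or cleaner data")
```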

The lack of an end-to-end system for ensuring high-quality data and sharing it efficiently has indirectly delayed the adoption of AI. According to IDC, 52% of survey respondents believe that data quality, quantity, and access challenges are holding up AI deployments.

7. Seize upon the metadata opportunity

Metadata is defined, somewhat elliptically, as “data that provides information about other data.” It’s what gives data the context users need to understand a piece of information’s characteristics, so they can determine what to do with it in the future.

Metadata standards are commonly used for niche purposes, specific industry applications such as astronomical catalogs, or particular data types such as XML files. But there’s also a case to be made for a stronger metadata framework in which we can not only define data in common ways but also tag useful data artifacts along a dataset’s journey. Where did this piece of data originate? Who has viewed it? Who has used it? What has it been used for? Who has added which piece of the dataset? Has the data been verified? Is it prohibited from use in certain situations?
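
One possible shape for such a framework is a metadata record that travels with the dataset and answers those questions. The schema below is a hypothetical illustration, not an existing standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical open metadata record -- the fields mirror the questions
# above (origin, viewers, uses, contributors, verification, restrictions).
@dataclass
class MetadataRecord:
    dataset_id: str
    origin: str                                                  # where the data originated
    viewers: list[str] = field(default_factory=list)             # who has viewed it
    uses: list[str] = field(default_factory=list)                # what it has been used for
    contributors: dict[str, str] = field(default_factory=dict)   # who added which piece
    verified: bool = False                                       # has the data been verified?
    restricted_uses: list[str] = field(default_factory=list)     # prohibited situations

    def log_use(self, user: str, purpose: str) -> None:
        """Append an audit entry each time the data is touched."""
        if purpose in self.restricted_uses:
            raise PermissionError(f"{purpose!r} is a prohibited use")
        stamp = datetime.now(timezone.utc).isoformat()
        self.viewers.append(user)
        self.uses.append(f"{stamp}: {purpose} by {user}")

record = MetadataRecord("sales-2024-q1", origin="crm-export", restricted_uses=["resale"])
record.log_use("analyst@example.com", "quarterly forecast")
```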

Developing this kind of metadata mechanism requires a technology layer that is open to contributions from those viewing and touching a particular piece of data. It also requires a commitment from broad sets of stakeholders who see the value of being able to share data strategically and transparently.

Creating an additional open metadata layer would be an important step toward allowing the democratization of access to the data by enabling the transparent sharing of key data attributes necessary for access, governance, trust, and lineage. Hewlett Packard Enterprise’s approach to dataspaces is to open up a universal metadata standard that would remove the current complexities associated with sharing diverse datasets.

8. Embrace the importance of culture

Organizations want to make sure they’re getting the most out of the resources they’re nourishing—and to do that, they need to create cultures that promote best practices for information sharing.

Do you have silos? Are there cultural barriers inside your organization that get in the way of the proper dissemination of information to the right sources at the right times? Do different departments feel they own their data and don’t have to share it with others in the organization? Are individuals hoarding valuable data? Have you set up channels and procedures that promote frictionless data sharing? Have you democratized access to data, giving business stakeholders the ability to not only request data but participate in querying and sharing practices?

If any of these factors are blocking the free flow of data exchange, your organization needs to undergo a change management assessment focusing on its needs across people, processes, and technology.

9. Open things up, but trust no one

In all aspects of business, organizations balance the often conflicting goals of promoting free and open sharing of resources and maintaining tightly controlled security. Achieving this balance is particularly important when dealing with data.

Data needs to be shared, but many data producers are uncomfortable doing so because they fear losing control: their data could be used against them, changed, or used inappropriately.

Security needs to be a top priority. Data is coming from so many sources—some you control, some you don’t—and being passed through so many hands. That means that security policies surrounding data need to be designed with a zero-trust model through every step of the process. Trust has to be established through the entire stack, from your infrastructure and operating systems to the workloads that sit on top of those systems, all the way down to the silicon level.

10. Create a fully functioning data services pipeline

Moving data among systems requires many steps: transferring data to the cloud, reformatting it, and joining it with other data sources. Each of these steps usually requires separate software.

Automating data pipelines is a critical best practice in the data journey. A fully automated data pipeline allows organizations to extract data at the source, transform it into a usable form, and integrate it with other sources.

The data pipeline is the sum of all these steps, and its job is to ensure that these steps happen reliably to all data. These processes should be automated, but most organizations need at least one or two engineers to maintain the systems, repair failures, and update according to the changing needs of the business.
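
A compact sketch of that extract-transform-integrate chain follows; the sources and transformations are hypothetical stand-ins for real connectors and separate software stages:

```python
# Extract -> transform -> integrate, chained into one automated pipeline.
def extract() -> list[dict]:
    """Pull raw records at the source (a real pipeline would hit an API or DB)."""
    return [{"sku": "A-1", "price": "19.99"}, {"sku": "B-2", "price": "5.00"}]

def transform(rows: list[dict]) -> list[dict]:
    """Reformat raw records into a usable, typed form."""
    return [{"sku": r["sku"], "price_usd": float(r["price"])} for r in rows]

def integrate(rows: list[dict], reference: dict[str, str]) -> list[dict]:
    """Join with another source (here, a SKU-to-category lookup)."""
    return [{**r, "category": reference.get(r["sku"], "unknown")} for r in rows]

def run_pipeline() -> list[dict]:
    """Chain the steps so they happen reliably, in order, to all data."""
    categories = {"A-1": "hardware"}  # a second data source to join against
    return integrate(transform(extract()), categories)

if __name__ == "__main__":
    for row in run_pipeline():
        print(row)
```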

Begin the data journey today

How well companies leverage their data—wherever it lives—will determine their success in the years to come. Constellation Research projects that 90% of the current Fortune 500 companies will be merged, acquired, or bankrupt by 2050. Companies that don’t start their data journeys now will be left behind. The clock is ticking.

Read the original article on Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review’s editorial staff.
