It’s hard. Business users often find themselves struggling to derive analytics and insights from data that is far from accurate or reliable. Yet, there’s little they can do about it. When it comes to data management initiatives, business users are often kept out of the loop. It’s always the job of an IT team or a data analyst/engineer to carry out data management plans and processes, leaving business users grappling with the aftereffects of data that has been modified, merged, or deleted.
There is a constant passive-aggressive conflict between business users and IT teams over managing data, so much so that departments create their own data stores (Excel files, CRMs, and other tools) to manage their data. This siloed mode of operation leads to fragmented business insights and analytics that fail to paint an accurate picture of the business’s KPIs, impairing strategic decision-making.
In this piece, I’ll show you how business users can take charge of their data and even be part of data management initiatives. But first, quick definitions of some key terms.
What is Data Management?
Data management refers to the processes, concepts, and practices that help businesses make sense of their data. Its fundamental goal is to deliver reliable, meaningful insights from raw data. Think of it as the journey from raw coal to a polished diamond: data in its raw form is a jumble of text and numbers that, on its own, has no meaning. To become meaningful, it must be refined, processed, and cleansed of errors.
What is Master Data Management?
When different departments start creating their own data sources, they end up with multiple “views” of the same data. Say, for instance, you have Customer A whose name, contact details, social media handles, and demographic information are scattered across different systems – CRMs, Excel sheets, customer support tools, billing and ticketing tools, and so on. This siloed, scattered data makes it difficult for businesses to get a comprehensive view of their customers, which means they cannot accurately trace customer touchpoints and pain points. Master data management (MDM) is the process of gathering all these scattered views and delivering a consolidated, ‘golden’ view so businesses get a clearer picture of their entities. This master view is of crucial importance, especially during annual audits or when making big strategic decisions.
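As a minimal illustration of the idea, here is a sketch of building a “golden record” from scattered partial views. The field names and the survivorship rule (first non-empty value wins, scanning sources in order of trust) are assumptions for this example, not the logic of any particular MDM product:

```python
# A minimal sketch of building a "golden record" from scattered sources.
# Field names and the survivorship rule are illustrative assumptions.

def golden_record(records):
    """Merge several partial views of the same customer.

    Survivorship rule (assumed): for each field, keep the first
    non-empty value found, scanning sources in priority order.
    """
    merged = {}
    for record in records:  # records listed in order of trust
        for field, value in record.items():
            if value and field not in merged:
                merged[field] = value
    return merged

crm = {"name": "Customer A", "email": "a@example.com", "phone": ""}
billing = {"name": "Customer A", "phone": "+1-555-0100", "address": "12 Main St"}
support = {"name": "Cust. A", "email": "", "twitter": "@custA"}

print(golden_record([crm, billing, support]))
```

Real MDM tools apply far richer matching and survivorship rules (source trust scores, recency, field-level precedence), but the shape of the problem is the same: many partial views in, one consolidated view out.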
For the master data management process to be successful, businesses need to make sure their data is complete and accurate and follows a defined data quality framework; otherwise, it becomes a nightmare to consolidate erroneous, duplicated, dirty data!
What is Data Quality?
Data quality refers to the framework or process that ensures a data source is free of errors such as typos, invalid addresses, duplicate entries, junk text, and so on. Data quality is generally measured on six metrics:
- Data completeness: the wholeness of data with no gaps or missing information.
- Data accuracy: error-free and reliable data.
- Timeliness: how quickly data becomes available and accessible when needed.
- Uniqueness: data that is not duplicated or overlapping.
- Consistency: data that has consistent standards instead of variations.
- Validity: data that conforms to required formats, types, and business rules.
All of this is checked, cleansed, and fixed using data quality solutions or data cleansing tools.
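To make two of these metrics concrete, here is a rough sketch of scoring completeness and uniqueness over a small set of records. The field names and scoring choices are illustrative assumptions, not a standard formula:

```python
# Rough sketch: scoring two of the six metrics (completeness and
# uniqueness) over a list of records. Field names are illustrative.

def completeness(records, required_fields):
    """Share of required fields that are actually filled in."""
    total = len(records) * len(required_fields)
    filled = sum(1 for r in records for f in required_fields if r.get(f))
    return filled / total if total else 1.0

def uniqueness(records, key_field):
    """Share of distinct key values among all records."""
    keys = [r.get(key_field) for r in records]
    return len(set(keys)) / len(keys) if keys else 1.0

data = [
    {"email": "a@example.com", "name": "Ann"},
    {"email": "b@example.com", "name": ""},        # missing name
    {"email": "a@example.com", "name": "Ann B."},  # duplicate email
]
print(completeness(data, ["email", "name"]))  # 5 of 6 fields filled
print(uniqueness(data, "email"))              # 2 unique of 3
```

Scores like these give you a baseline number to track as cleansing work progresses.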
Why Should Business Users Be Concerned by Any of This?
Because business users are the custodians of their data, and no one needs accurate data more urgently than they do.
For instance, when the marketing team runs a campaign, they need access to updated, valid, accurate data. If 30% of the email addresses on their list are invalid, that’s a huge waste of money! Similarly, all the brochures and direct mail that go out to irrelevant target audiences are added costs with no return.
Yet the sad truth is that most businesses do not see this as a data problem. In fact, most will ignore data problems because it’s a huge undertaking to extract your data, assess it for issues, and commit to fixing them.
Eventually, poor data quality affects insights and analytics. You’ll end up with false positives, inflated numbers, and analytics that are far from the truth.
How to Get Reliable Data for Insights and Analytics?
So how do you get around this? How do you make sure your team has access to clean data without causing an organizational upheaval? Here are some basic steps to take.
1. Check Your Data for 10 Common Errors
Before you involve decision-makers, check your data for the following problems:
- Duplicate entries
- Typos and spelling errors
- Incorrect numbers
- Invalid addresses
- Fields with spaces
- Fields with junk text
- Fields with missing information
- Inconsistent standards (US vs. U.S., or UK vs. United Kingdom)
- Nicknames and abbreviations
- Different formats
You can use free data cleansing tools like WinPure to run a diagnosis of your data and create a report of the most commonly found issues.
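As a rough idea of what such a diagnosis does under the hood, here is a sketch covering a few of the checks above: duplicates, missing fields, stray spaces, and invalid email formats. The record fields and the simple email pattern are assumptions for illustration; real cleansing tools run far broader rule sets:

```python
import re

# Diagnostic sketch for a few of the common errors listed above.
# Field names and the simplistic email pattern are assumptions.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def diagnose(records):
    issues = {"duplicates": 0, "missing": 0, "spaces": 0, "bad_email": 0}
    seen = set()
    for r in records:
        # Normalize a (name, email) key to catch case/space duplicates.
        key = (r.get("name", "").strip().lower(),
               r.get("email", "").strip().lower())
        if key in seen:
            issues["duplicates"] += 1
        seen.add(key)
        for value in r.values():
            if not value:
                issues["missing"] += 1       # empty field
            elif value != value.strip():
                issues["spaces"] += 1        # leading/trailing spaces
        if r.get("email") and not EMAIL_RE.match(r["email"].strip()):
            issues["bad_email"] += 1
    return issues

rows = [
    {"name": "Ann", "email": "ann@example.com"},
    {"name": "ann ", "email": "ANN@example.com"},  # duplicate, stray space
    {"name": "Bob", "email": "not-an-email"},
    {"name": "", "email": "c@example.com"},
]
print(diagnose(rows))
```

A report like this, even over a sample of your data, is usually enough to start the impact conversation in the next step.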
2. Assess the Impact of These Findings
Suppose you’ve discovered that 30% of your data is duplicated. Now what? The impact of dirty data often goes unrecognized until it causes a costly mistake, or until management realizes how much money they are losing (or could save!) by resolving data problems.
To help understand the impact, here’s a quick calculation.
Assume each customer record costs $1 to acquire and maintain.
For 100 customers, that’s $100.
If 30% of this data is duplicated (the same person signed up at two different points, or there’s a fault in data entry), that’s $30 wasted straight away on duplicate leads.
Now assume it costs you $10 to convert each customer (sales rep time, calling costs, mailing costs, etc.); those 30 duplicates send another $300 right down the drain.
Now scale that up to 10,000 records and you’ll see how much money is being wasted every day on poor data alone!
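The arithmetic above generalizes into a simple back-of-the-envelope formula, using the article’s assumed figures of $1 per record and $10 per conversion attempt:

```python
# Back-of-the-envelope cost of duplicates, using the assumed figures
# from the worked example above ($1/record, $10/conversion attempt).

def duplicate_cost(num_records, dup_rate, cost_per_record, cost_per_conversion):
    dups = num_records * dup_rate
    return dups * (cost_per_record + cost_per_conversion)

# 100 records, 30% duplicated:
print(duplicate_cost(100, 0.30, 1, 10))      # 330.0  ($30 + $300)
# Scaled to 10,000 records:
print(duplicate_cost(10_000, 0.30, 1, 10))   # 33000.0
```

Plug in your own record counts and per-lead costs; even conservative rates tend to produce a number large enough to get a decision-maker’s attention.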
3. Do a Pilot Test of Cleaning Your Data and Setting it Up for Use
Now that you’ve got a list of issues, and assessed the impact, you will need to do a pilot use case before alerting decision-makers.
In the pilot, clean and dedupe 1,000 rows of data (or whatever size you have access to). Once done, put that data to use in a marketing campaign and note the impact in terms of ROI, response rates, and operational efficiency. Note, too, how much time you’ve saved by not fixing every single row by hand in Excel.
If you’ve used an MDM tool to get a consolidated view of the data, note how much it has helped you in getting personalized marketing or advertising right.
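A first-pass dedupe for such a pilot might look like the sketch below, keeping the first occurrence of each record matched on a normalized (name, email) key. The key choice is an assumption; real-world matching usually needs fuzzier rules (phonetic names, typo tolerance, address standardization):

```python
# First-pass dedupe sketch for a pilot: keep the first occurrence of
# each record, matching on a normalized (name, email) key. Real-world
# matching usually needs fuzzier rules than exact key equality.

def dedupe(records):
    seen = set()
    kept = []
    for r in records:
        key = (r.get("name", "").strip().lower(),
               r.get("email", "").strip().lower())
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

rows = [
    {"name": "Ann", "email": "ann@example.com"},
    {"name": "ANN ", "email": "Ann@Example.com"},  # same person
    {"name": "Bob", "email": "bob@example.com"},
]
print(len(dedupe(rows)))  # 2
```

Running your pilot campaign against the deduped list, and comparing response rates and costs against a previous campaign, gives you the before/after numbers for the next step.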
This research and testing will help you present a strong case to decision-makers and persuade them to back a company-wide data cleanup.
4. Present Hard Facts to Decision Makers
Alation’s survey of 300 data and analytics leaders in the U.S. and Europe found that 67 percent say their company’s C-level executives ignore data when making business decisions, relying instead on gut instinct. This is what you’re up against. Hence, it’s advisable to present hard facts, a successful use case, and ROI/cost projections to decision-makers so they can sign off on the urgent need to fix data quality issues.
5. Keep a Steadfast Focus on Data Quality
Unfortunately, as I’ve often seen, even with all the hard facts and reports, your decision-makers may feel reluctant to take the steps necessary to implement an organization-wide data management strategy. In that case, you can either try asking them to invest in a small segment of the data (such as customer data over the past year) or in one department where the need for clean data is critical.
If despite all efforts, decision-makers are not willing to take up the initiative, you can continue to keep your focus on improving the data that is within your domain. With repeated efforts, the management will begin to see results.
Data management should no longer be restricted to the IT domain. Business users are the custodians of their data, and they must begin to take charge of it. Without ownership from business users, it will be difficult to use data to obtain accurate insights and analytics.