
Challenges in Data Science Projects and How to Tackle Them

Reading Time: 7 mins

According to Gartner, 80% of analytics projects will fail to deliver business outcomes. There are many challenges in data science projects that organizations fail to tackle. You might wonder why Gramener, a leader in the data science space, is giving you this bleak statistic.

We always want our clients to succeed, and we are with you at every step of the data science journey to help you get actionable insights that become game-changers.

Find out where your organization stands in the levels of data science maturity.
Take the free Data Maturity Assessment.

Why Do Data Science Projects Fail?

One of the chief reasons why failure occurs is the lack of adoption.

There is often a gap between what end-users need and what the solution delivers. This leads to situations like teams continuing to use spreadsheets even though the company has invested heavily in BI tools.

This blog is based on a talk between Eric Weber, Head of Experimentation at Yelp, and Ganes Kesari, Chief Decision Scientist at Gramener. They spoke about a few of the major challenges in data science projects that organizations face and how they can tackle them. Three major questions that they tried answering are:

  • How can we get business users to trust the solution?
  • How can you measure ROI on a data science project?
  • How can you build data literacy and culture?

Note: The following is not a verbatim transcript; it has been edited for clarity and brevity.

How Can We Get Business Users to Trust the Solution?

Ganes: We find that business users often don’t have confidence in data science solutions.

For example, if it's a predictive model, people ask, "Why should we trust it? We have been comfortable doing it with intuition or heuristics. So why should we take a different approach?" How would you build users' trust in an ML model or its output as a whole?

Eric: They may trust your ability as a data scientist to do what you do, but they don’t understand why they should listen to a model that is more complex and black-boxed in the way it works when they have something they feel comfortable with.

We have to find that connection. 

If you’re going to connect a user with something new, you have to leverage what they’re already used to and say, “We’re making the transition from point A to point B because of C. It doesn’t discount or take away from your understanding of the business or the processes. But here’s the value you get.” 

You have to position it from a very human perspective. No one wants to be told that the way they were doing things before doesn’t make sense.  

You have to help them understand what the model does for them that they didn’t have before.

If the model only does something they already have, why are you building it? Be as critical of yourself as you are of the previous model.

If the model you want people to adopt is not relatively easy to use, what value are you providing?

What Value Does Your Data Science Solution Bring to the Table?

Eric: Understanding the value is one of the biggest challenges in data science project adoption. Value often comes in two forms.

The first is the direct potential to improve revenue.

The second is more indirect – to see time or effort being saved.

Does it make someone's life easier? Are business users taking less time thinking through things? The more you can relate to the users, the more empathy and openness you create.

Ganes: Empathy is the keyword here. That is the core of user-centric design as well. Trying to understand who your end-user is and what their challenges are.

We have to answer questions like:

  • Where were they before this exercise, and how do they connect that with what comes after?
  • What does the new solution mean for them?
  • How does it impact their job?

We have to make the transition more comfortable and humane. 

How Do We Measure the Value of Data Science Projects?

Eric: From a business perspective, an investment is typically judged on whether it increases potential cash flow in the form of revenue or introduces cost savings.

In some cases, data science projects are directly attached to these two things and tend to have an immediate impact.

But in some cases, you may not be able to speak about how your solution led to a specific amount of revenue.

If you ship a feature, a model, or a process fix, you may not be able to say clearly how it affected income. This is where people get a little uncomfortable: everything you do is expected to be measurable and to have a clear impact.

Revenue is easy to quantify. Many things you build may not be easy to position in terms of revenue. In that case, it is more important to trace a clear path to the impact you create. That means you have to sell what you're doing.

This is where good managers and leaders make all the difference. They should be positioning you to work on high-value things so that it’s not hard to illustrate the impact. 

This is also where data science teams’ alignment within an organization matters a lot because they should be set up already to be working on high-value projects.


I consider it bad leadership if you don’t set up people to succeed. In a small organization, you may not have people who fully understand data science, and in that case, you need to be much more of an advocate for what you’re working on and why. 

Ganes: So you’re making two important points here.

First, we have to choose high-value problems; otherwise, it becomes difficult to demonstrate the real purpose of what is being created.

The second point is impact versus measurability. We need to measure the effect in some way, through factors like revenue, cost savings, faster time to market, or making life smoother and more comfortable for the users.
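To make that concrete, here is a minimal, hypothetical sketch of how a team might roll those factors into a single ROI figure. The function name and every number are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch: quantifying a project's value in the two forms discussed above.
# All figures are hypothetical placeholders, not benchmarks.

def project_roi(revenue_lift, cost_savings, project_cost):
    """Return ROI as (total benefit - cost) / cost."""
    benefit = revenue_lift + cost_savings
    return (benefit - project_cost) / project_cost

# Direct value: extra revenue attributed to the solution.
# Indirect value: analyst hours saved, converted to money.
hours_saved_per_month = 120          # hypothetical
hourly_cost = 60                     # hypothetical loaded cost, USD
annual_savings = hours_saved_per_month * 12 * hourly_cost

roi = project_roi(revenue_lift=250_000,
                  cost_savings=annual_savings,
                  project_cost=180_000)
print(f"Estimated ROI: {roi:.0%}")   # about 87% under these made-up assumptions
```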

Direct and Indirect Ways of Measuring Impact

Ganes: Assume that we have a predictive model that helps a user make a faster decision. They look at the output and make a decision. At times they go with their gut feel because they disagree with what the model says, and at times they follow the model. So their usage of it is fuzzy. How would you quantify the impact in such cases? Is there a proxy metric?

Eric: From a general standpoint, if you reduce the time for a user to make a decision, it’s more likely that they stay in the funnel, more likely that they convert.

Typically, you might be optimizing or creating a model that subtly changes user behavior. Still, you're going to see users spending more time on a page, converting faster, and being more engaged because they spend less time trying to find what they're looking for.

When we look at outcome metrics, we have to think about whether we are making the user happy and, if we are, how that affects revenue.

The impact will look different depending on the type of business and how it makes money. And this is why I advocate so much for understanding how the company generates revenue.

That way, even if you don’t directly affect income, you will know how it indirectly affects it in some way. That will allow you to position the strength and impact of what you’re doing. 

Ganes: One learning we've had is that A/B testing is useful here. We can't always measure whether someone actually acted on the model's recommendation.

But if you look at the entire lifecycle and run two scenarios, one with data science built into the process and the other more traditional, you can measure the impact of both by comparing their outcomes.
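As an illustration of that comparison, here is a minimal sketch (with made-up counts) of how the conversion outcomes of the two scenarios could be compared using a standard two-proportion z-test. Nothing here comes from the talk itself.

```python
# Minimal sketch: compare outcomes of a model-assisted workflow vs. the
# traditional process with a two-proportion z-test (standard library only).
# The conversion counts below are hypothetical.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical outcomes: model-assisted group vs. traditional group
rate_model, rate_trad, z, p = two_proportion_ztest(conv_a=460, n_a=5000,
                                                   conv_b=390, n_b=5000)
print(f"model-assisted: {rate_model:.1%}, traditional: {rate_trad:.1%}, "
      f"z={z:.2f}, p={p:.3f}")
```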

How Can You Build Data Literacy and Culture in the Organization?

Eric: With data literacy, the idea is not that everyone becomes a data expert. For example, if you present statistics to somebody, I don't expect them to go up and say, "the p-value is 0.03."

What I want them to understand is that we are using it as a way to mitigate risk. We're trying to use it as a way to say that we are sure of what we observed; it's not just random chance. It's not just about the numbers but also the interpretation of the numbers.

When we present measures like accuracy, sensitivity, or specificity, what's essential is not that people know how to compute them, but that they understand these measures confirm whether we are hitting our targets and whether something was successful.
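For readers who do want to see the mechanics, here is a small illustrative sketch of how those measures fall out of a confusion matrix. The helper function and the counts are hypothetical.

```python
# Minimal sketch of the metrics mentioned above, computed from confusion-matrix
# counts. The counts are made up; the point is what each number tells a business user.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall share of correct calls
    sensitivity = tp / (tp + fn)                 # share of real positives we catch
    specificity = tn / (tn + fp)                 # share of real negatives we leave alone
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=80, fp=15, tn=890, fn=15)
print(f"accuracy={acc:.2%}, sensitivity={sens:.2%}, specificity={spec:.2%}")
# accuracy=97.00%, sensitivity=84.21%, specificity=98.34% under these made-up counts
```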

So the idea behind data literacy is not that everyone becomes an Excel master or proficient in Python. It means that when I go into a room, I don’t have to justify and explain what every key decision metric means. 

We have to be able to teach and create that culture. The core of that is people have to feel valued and understood. If you go in and say, “You just don’t understand,” that will stop any move towards data literacy. 

It’s about becoming literate in different spaces, and our job is to try and help people become literate when it comes to using data in a business. And that’s something we have to be coaches on.

This is not really in your job description, but it ends up falling on everybody.

Ganes: I’ve noticed in meetings that even if you drop in statistical terms unintentionally while speaking, people tune out.

It’s challenging for the next 10 minutes to get them back because they think we’re talking about something very technical that they don’t need to understand.

So, hand-holding and getting them to understand it is crucial, as this leads to better adoption of the projects. If you can understand and relate to it, it means that you’re going to use it. 

Get Maximum Returns on Your Data Science Projects

Yes! And we have a dedicated plan and package for it. Check out our data advisory consulting and workshop. We’ll help you assess your data maturity, create a data science roadmap, and create strategies to get maximum return on your data science investments.

Gramener - A Straive Company

Gramener – A Straive company is a design-led data science firm. We build custom Data & AI solutions that help solve complex business problems with actionable insights and compelling data stories.
