Episode 85: [Value Boost] The Office Politics Survival Guide for Data Science Experiments


[00:00:00] Dr Genevieve Hayes: Hello, and welcome to Your Value Boost from Value Driven Data Science, the podcast that helps data scientists transform their technical expertise into tangible business value, career autonomy, and financial reward. I'm Dr. Genevieve Hayes, and I'm here again with Miguel Curiel, product analytics manager at Bloomberg, to turbocharge your data science career in less time than it takes to run a simple query.
[00:00:29] In today's episode, you'll discover practical strategies for navigating office politics in data-driven organizations, so you can successfully champion data science initiatives. Welcome back, Miguel.
[00:00:43] Miguel Curiel: Thank you, Genevieve, for having me. Very excited to be here.
[00:00:46] Dr Genevieve Hayes: Now here's something that most data science books and courses don't prepare you for: the reality that even the most brilliant analysis can fail if you can't navigate the human side of your organization. That is, office politics. This is something I've often encountered in my own career, as I'm sure have most of our listeners, in fact.
[00:01:08] Having worked in government for over a decade, I actually had the fun of dealing with both office politics and actual politics. However, one of the things that really struck me when we were preparing for this episode, and also when we were talking about the importance of business skills, was that
[00:01:33] navigating office politics is going to be particularly crucial when you're running an experiment. What is it about experiments that makes navigating office politics so much more challenging and critical than regular data science or analysis work?
[00:01:50] Miguel Curiel: What a brilliant introduction there. And yes, office politics in data roles will be common, because you're usually challenging assumptions and bringing information to the table that may go against people's usual opinions. And add to that the experimentation layer: people are placing bets, so to speak, on their ideas.
[00:02:12] Because usually someone is proposing something, and they want to see if it works. And in experiments, we don't call it this way, but you either win or lose. We tend to call it either you learn or you succeed. But if it's a scenario where the test isn't successful, then you have to go tell the person or the team who proposed it, hey, your test didn't work.
[00:02:38] So it's getting used to that idea of learning or however you wanna call it. I prefer learning, not failure, for example.
[00:02:46] Dr Genevieve Hayes: I remember with my honors thesis doing a hypothesis test, and the aim was for this hypothesis test, I forget exactly what it was, to be statistically significant. It wasn't. And so the conclusion of my thesis, after 10 months of work on it, was, oh yeah,
[00:03:04] no statistical evidence to show this is true. Biggest letdown of an ending for my conclusion. And that was just, you know, a university project. I can imagine this would be way worse when you've got money at stake.
[00:03:20] Miguel Curiel: Yes, and people need to get comfortable with that idea. Even the biggest companies report that only between 10% and 30% of their experiments are actually statistically significant, or winning experiments. So people need to get comfortable with, again, not failure. You're learning about what not to do, and you're de-risking as well.
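[Editor's note: for listeners unfamiliar with what "statistically significant" means for an A/B test, here is a minimal sketch of the kind of calculation involved, using a two-proportion z-test with hypothetical numbers that are not from the episode.]

```python
# Hedged sketch: a two-sided, two-proportion z-test for an A/B test.
# All figures below are illustrative, not from the episode or any real experiment.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) comparing conversion rates of variants A and B."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se              # standardized difference
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 200/10,000; variant converts 230/10,000.
z, p = two_proportion_z_test(200, 10_000, 230, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# A p-value above the usual 0.05 threshold is what gets reported back to
# stakeholders as "not statistically significant" - i.e. a learning, not a win.
```

This is why so many tests land in the 10-30% "winning" bucket Miguel mentions: even a real-looking lift can fail to clear the significance bar at realistic sample sizes.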
[00:03:43] Dr Genevieve Hayes: Mm. But I can see how you'd get a lot of resistance there when you come back with the wrong learning, so to speak. So what's your go-to strategy for overcoming this type of resistance?
[00:03:56] Miguel Curiel: It all starts for me with culture, being in the right mindset. Call it a growth mindset if you want. Having people be open, again, to learning. That's the whole deal with A/B testing: you're learning about your users. Once you have that, then let's learn about what experimentation is, because there's a whole science behind it.
[00:04:15] And sometimes when you say "not statistically significant", people actually don't understand that. So it's first being in the right mindset, then let's learn about A/B testing or experimentation, and then let's work together. That togetherness is really, really important. You can have individuals proposing ideas and running with them, but it's better if you run as a group.
[00:04:39] And there is one article out there, I don't remember who wrote it, but it says that when tests come from teams, they perform better than when they come from individuals. So, broadly speaking: mindset, education, and working together.
[00:04:54] Dr Genevieve Hayes: It's a group learning experience, so to speak.
[00:04:58] Miguel Curiel: Exactly.
[00:04:59] Dr Genevieve Hayes: Do you struggle with getting stakeholders to buy into testing if it's something they're convinced they already have the answer to?
[00:05:07] Miguel Curiel: Yes, definitely. And it definitely helps to bring research to back any idea up. If you're the person proposing, hey, let's do this experiment, stakeholders or decision makers will already have a preconceived idea, and it may be based on best practices they saw elsewhere. But just because it worked elsewhere doesn't mean it will work with your product.
[00:05:30] So it's always worth testing, and worth backing it up with research. Also, conducting experiments is usually resource intensive: you will require design resources, engineering resources, analytics resources. So the better you can back your ideas with research, the more that helps.
[00:05:48] Dr Genevieve Hayes: So it's convincing them that just because it worked at Netflix doesn't necessarily mean it's going to work at your organization. And that's by showing them the research that you are not Netflix, so to speak. What does that research look like?
[00:06:05] Miguel Curiel: Usually, since a lot of these experiments are conducted by industry leaders and that data is not that readily available, it's hard to get the actual experiments. But there is great research online; there is one particular page that compiles A/B tests.
[00:06:21] But again, not all of them will be publicly available. So then behavioral science is really my go-to: understanding what academic researchers have actually looked into in terms of what drives behavior. I use that as a proxy, talking about, oh, this researcher in whatever part of the world looked into streaming preferences, if you're talking about Netflix or streaming services, for example.
[00:06:44] So I'm using that as a proxy.
[00:06:45] Dr Genevieve Hayes: But how do you convince them that it's worthwhile investing in what you are doing, and not just going with someone else's previous result?
[00:06:54] Miguel Curiel: Then come two angles. One, the potential ROI on it. If you can put a number on the potential bottom-line impact, great, you're already there: making the case that if we even remotely experiment with this idea, we could have incremental gains. But also, on the other side, in experimentation it's usually said that
[00:07:18] there are really no best practices. So then, referring to the experimentation world, to your point: just because it worked for Netflix doesn't mean it will work for us.
[00:07:26] Dr Genevieve Hayes: Okay, so just because it worked for Netflix, it might not work for us. So if we went with Netflix's result and it didn't work for us, then we've lost money. Therefore it's worthwhile trying the experiment, to make sure before you invest. So it's a de-risking thing. And on the flip side, if it didn't work for Netflix but it might work for us, there's an ROI if it does work for us.
[00:07:50] So it all comes down to challenging their mindset by showing them the numbers. A Jerry Maguire, show-me-the-money type thing.
[00:08:00] Miguel Curiel: Yes, you said it right. And also mindset, to your point: being in the correct mindset of having us all be open to learning and to new ideas.
[00:08:12] Dr Genevieve Hayes: So what's one simple technique our listeners can start using tomorrow to help reduce this political friction around their data science work, be it experimentation or more traditional analysis.
[00:08:24] Miguel Curiel: one idea that I'm gonna
[00:08:26] Dr Genevieve Hayes: I.
[00:08:27] Miguel Curiel: take off of Rome Santiago's book, prove it or Lose it. Very active experimenter, find champions. Executive stakeholders, people with influence within your work. If that is you, great. But if not, partner with someone with equal or more influence to spearhead this program because again it all really comes down to people that is the most important aspect of experimentation.
[00:08:51] So the more you can partner with others, especially if they have great influence within the organization and they're open to experimenting, the better.
[00:09:01] Dr Genevieve Hayes: What's the ultimate political advice? Make friends and be likable.
[00:09:08] Miguel Curiel: Yes, and again, very much quoting Rome Santiago here. That is a great thesis within his book. Yes, be likable.
[00:09:16] Dr Genevieve Hayes: And that's a wrap for today's value boost. But if you want more insights from Miguel, you're in luck. We've got a longer episode with Miguel, where you'll learn a practical checklist for maximizing business impact through product analytics. And it's packed with no nonsense advice for turning your data skills into serious clout, cash, and career freedom.
[00:09:39] You can find it now wherever you found this episode, or on your favorite podcast platform. Thanks for joining me again, Miguel.
[00:09:47] Miguel Curiel: Thank you, Genevieve.
[00:09:48] Dr Genevieve Hayes: And for those in the audience, thanks for listening. I'm Dr. Genevieve Hayes, and this has been Value-Driven Data Science.
