Episode 90: Using LLMs to Become a More Effective Data Scientist
[00:00:00] Dr Genevieve Hayes: Hello and welcome to Value Driven Data Science, the podcast that helps data scientists transform their technical expertise into tangible business value, career autonomy, and financial reward. I'm Dr. Genevieve Hayes, and today I'm joined by Colin Priest. Colin is an actuary, data scientist and educator who has held multiple CEO and general management roles where he has championed data-driven initiatives.
[00:00:30] He now lectures at the University of New South Wales, where he specializes in adapting education for the age of AI. In this episode, you'll learn how to leverage LLMs to become a more effective data scientist, including techniques for better communication, smarter problem solving, and identifying blind spots in your analysis.
[00:00:53] So get ready to boost your impact, earn what you're worth, and rewrite your career algorithm. Colin, welcome to the show.
[00:01:02] Colin Priest: Thanks, Genevieve. Thanks for having me on here. We're gonna talk about some stuff that I get excited about, so if I sound excited, it's 'cause I'm a nerd.
[00:01:11] Dr Genevieve Hayes: I am a nerd too, and I think so is all of our audience, so I am quite happy to nerd out here. So when you think about using LLMs or generative AI to make you a better data scientist, it's likely that the first thing that springs to mind is using AI to write better or faster code. And that's certainly one thing you can do, but if it's the only thing you're using it for, then you're missing out on some of the most powerful data science applications, as we frequently mention on this show.
[00:01:41] The most critical skills for data scientists aren't always the technical ones. They're often the communication and strategic thinking skills needed to translate technical expertise into real world value. And here's what's really exciting.
[00:01:57] LLMs don't just present opportunities for enhancing your technical skills. They also present opportunities for enhancing the non-technical skills needed to make data scientists valuable in their roles. Now, I know you've been experimenting quite extensively with LLMs in your work. To begin with, can you give us a brief overview of some of the ways in which you are currently using them?
[00:02:24] Colin Priest: As you say, I'm using them the mainstream way, writing code. I do that for speed 'cause I'm in a hurry. I'm the world's most impatient man. But there's so many more interesting things that you can do with LLMs. If we talk about data analytics, for example, there's a ton of unstructured data out there that I used to write overly complicated code to try and turn into something structured.
[00:02:50] And it would be so fragile, 'cause then you'd run it and there'd be an edge case. Like, why is it not finding this here? Oh, they've used a different word here and I'm using the other key word. So for example, I was writing a dashboard two weeks ago that was looking at share market volatility, and share market volatility has what they call regimes.
[00:03:14] There's times when it's very volatile. There's times when it's very calm, and I was building a mainstream model for that, one that predicts which regime you're in, what the probability is, and how likely it is to change. That's very mainstream, and I thought, let's bring in some unstructured data.
[00:03:30] And so I was doing this on the US market. Oh, what if I brought in the Federal Reserve minutes from their meetings? That's probably got a pretty good effect on what's going on in the investment markets. But it's really long and it's full of boring, rubbish, like all this administrative stuff.
[00:03:49] And so what I eventually figured out was how to get the LLM to go find the sections that have the commentary on the economy and the commentary on what interest rates should be and why. So that was the first thing I did. I said, let's just filter out all the administrative nonsense.
[00:04:06] And then I said, okay, well, what is the sentiment? Do they sound positive? Do they sound negative? Do they sound unsure? Do they sound certain? Now, there are certain sentiment models out there and I could use those. But first I had to filter, because if you do sentiment on this boring administrative document, it's almost always gonna say neutral sentiment.
[00:04:25] Because it's dominated by administrative nonsense. And then I said, okay, now what I want is for you to categorize what their stance is. Are they dovish? Are they hawkish? And then give me three reasons why. All of a sudden I've got a category, I've got a number, which is the sentiment.
[00:04:43] I've got some reasons that I can put on the dashboard. So when people say, well, why are you saying that this is the current thing? Huh, it's on the dashboard. Here are the three reasons: they said this, they said that, and the implications are this. That was just so exciting when I did it. You can tell from my voice, I get excited about boring dashboards.
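[The three-step pipeline Colin describes, filter the minutes, score sentiment, then categorize the stance with reasons, could be sketched roughly like this. The prompt wording and the `call_llm` wrapper are illustrative assumptions, not his actual implementation:]

```python
import json

# Hypothetical prompt templates for the three steps described above.
FILTER_PROMPT = (
    "Below are the minutes of a Federal Reserve meeting. Return ONLY the "
    "sections discussing the economy or the reasoning behind interest-rate "
    "decisions. Drop all administrative content.\n\n{text}"
)
SENTIMENT_PROMPT = (
    "Rate the sentiment of this Fed commentary on a scale from -1 (very "
    "negative) to +1 (very positive). Reply with a single number.\n\n{text}"
)
STANCE_PROMPT = (
    "Classify the policy stance of this Fed commentary as 'dovish', "
    "'hawkish' or 'neutral', and give exactly three reasons. Reply as JSON "
    'like {{"stance": "...", "reasons": ["...", "...", "..."]}}.\n\n{text}'
)

def analyse_minutes(text, call_llm):
    """Filter the minutes, score sentiment, then extract stance + reasons."""
    relevant = call_llm(FILTER_PROMPT.format(text=text))                 # step 1: filter
    score = float(call_llm(SENTIMENT_PROMPT.format(text=relevant)))      # step 2: sentiment
    stance = json.loads(call_llm(STANCE_PROMPT.format(text=relevant)))   # step 3: stance
    return {"sentiment": score, **stance}
```

[The output is exactly the structured record the dashboard needs: a number, a category, and three reasons.]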
[00:05:04] Dr Genevieve Hayes: I am just excited about the fact that you can actually get decent sentiment analysis now with LLMs, because I remember using the pre-LLM models, and if you had a lot of words that ended in n't, like don't, can't, et cetera, it was always negative. Otherwise it was positive.
[00:05:21] Colin Priest: We're both actuaries. So you know what I used to hate in the sentiment models? You mention mortality or morbidity in the text, and the sentiment model goes negative. But it's what we do for a living
[00:05:33] Dr Genevieve Hayes: Yes,
[00:05:34] Colin Priest: or catastrophe.
[00:05:37] Dr Genevieve Hayes: And the classic double negative. You know, you can't not rule this out.
[00:05:43] Colin Priest: Yes. Yeah. Oh, and actuaries love using that phrasing. And lawyers love that too.
[00:05:48] Dr Genevieve Hayes: Yes, and using can't or not, that'll send it into negative sentiment territory.
[00:05:54] Colin Priest: I did find I had to fine tune this. I had to give it examples of positive and negative sentiment so it understood what I meant by sentiment. But that is one of the beauties of this. If you've got one of those prebuilt models, you are stuck with whatever someone trained it on.
[00:06:09] So I was doing that and that was particularly cool. What else have I been experimenting with? Cyber attacks. I'm helping out one of my colleagues; that's his specialty in risk management. And one of the problems is getting data for that. There's a company that sells that data. Mortgage your house if you wanna pay for it.
[00:06:29] So I went off and found some websites that have that, but if you scrape that, it's all unstructured data again. And it's full of ads as well. So you've got all these stories about cyber attacks, and you read one and go, no, this is actually a vendor making a generic comment that there are more attacks, so they can sell more.
[00:06:47] That wasn't an actual event. And so I set up some prompt engineering to eventually get it so it can filter out that nonsense and only give me websites that have actual cyber attacks, then start categorizing it for me. Because I need some more structured data for dashboards on this. What type of cyber attack was it?
[00:07:06] Was it one of those ransomware ones, or was it one where they just stole data? Was it a phishing attack? Or was it a denial of service? Very different things. And understanding those makes a big difference. And then I needed to deduplicate. You know how hard it is to deduplicate unstructured data.
[00:07:25] Oh.
[00:07:26] Dr Genevieve Hayes: Yep. Been there.
[00:07:28] Colin Priest: Yeah, I was doing that this morning, in fact, on something, and I'm looking at it going, they're the same thing. Why did you say they're different? And so my colleague who doesn't use LLMs is just so excited that I can pull even Australian-specific data now on cyber attacks, which he couldn't even get before; the US was pretty much the only reliable source of data for him.
[00:07:50] So that's pretty interesting. And next, what I'm gonna do. And I've started experimenting with this, is simulate cyber attacks based on the historical data.
[00:07:59] Dr Genevieve Hayes: So how does that work?
[00:08:00] Colin Priest: Well, with LLMs, you know, you can get them to predict things a bit like a more mainstream machine learning model. But you don't have to get it to predict an outcome; you could get it to generate, instead of just an outcome, an input and an outcome.
[00:08:16] So give me an entire record.
[00:08:18] Dr Genevieve Hayes: So to come up with a scenario, is that it?
[00:08:20] Colin Priest: Well, there's different ways. So there's two ways. One is to create synthetic data, and what you do is you pull a bunch of sample data. This is called few-shot prompting. And then you tell the LLM: create me a new record that resembles these, is not the same, but is plausibly from the same data set, and
[00:08:39] it'll generate one.
[00:08:58] And you just loop through this and you do this over and over again. The other way is what you were talking about: say, just give me a scenario. And you can combine the two in the end, by the way. So you can say, okay, here is some historical data. Generate me a scenario, and then use the historical data to make it specific.
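[The few-shot synthetic-record loop described here could be sketched as follows; the prompt text and the `call_llm` wrapper are hypothetical stand-ins for whatever client you use:]

```python
import json

def build_fewshot_prompt(samples):
    """Embed real sample records as few-shot examples in the prompt."""
    shots = "\n".join(json.dumps(s) for s in samples)
    return (
        "Here are some example records from a cyber-attack data set:\n"
        f"{shots}\n"
        "Generate ONE new record as JSON. It must be plausible for the same "
        "data set, but it must not duplicate any example."
    )

def generate_synthetic(samples, call_llm, n=3):
    # Loop, as described: one new synthetic record per call.
    return [json.loads(call_llm(build_fewshot_prompt(samples))) for _ in range(n)]
```

[In practice you would also validate each generated record against the schema and deduplicate against the samples before keeping it.]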
[00:08:58] Dr Genevieve Hayes: I can imagine this being used for something like catastrophe modeling.
[00:09:02] Colin Priest: I can imagine it too. There are some companies out there saying they're doing that. I haven't seen any examples yet, and I'm just a touch suspicious whether they've done it yet. Another piece of research I'm doing at the moment that you'll appreciate as an actuary: I'm doing this on reserving blowouts.
[00:09:18] And here's one that everyone will love, everyone gets excited about this: I'm also doing it on reputation damage events in banks
[00:09:26] Because they have a stress test where they've gotta talk through what would happen if they had reputation damage. And you've gotta come up with a scenario.
[00:09:34] You've gotta come up with what the effects are. You've gotta show whether mitigation will work or not.
[00:09:39] Dr Genevieve Hayes: Yeah, I'm just thinking, 'cause at the time of recording this, we've had Optus in the newspapers about the failure of the Triple O emergency number.
[00:09:49] Colin Priest: Yep, I saw that one. Also in the past week, one of the local Australian banks, the government gave them money to use in a particular way. And they didn't; they lined their own pockets. It's really interesting. I put a story about that on LinkedIn, and that's one of my more viral stories, including people liking it who worked for that bank.
[00:10:10] They're probably ones not in that department, who are so embarrassed that there are some a-holes working for the same company as them. And it was basically fraud. They were misusing government
[00:10:22] money that they were told to hold in trust.
[00:10:26] Dr Genevieve Hayes: So if you put in details of that incident with the bank, it could create a plausible scenario along those same lines,
[00:10:33] Colin Priest: So I'm using LLMs at both ends. I'm using it to collect historical events and to generate new ones. Then on top of that, I'm now putting in psychological profiles of key stakeholders and saying, what are their reactions to this? And one of my friends who's a psychologist, he and I are working together, and we put together this AI profile of why this event happened.
[00:10:58] Because this is the real value you can get back into risk management. Why did this happen? What were the incentives and disincentives? How would a person in this position react to the exact circumstance? And of course, it predicted the exact thing that happened. And then you can start playing around with the risk controls around that.
[00:11:19] Look at the incentives, look at the guardrails, look at the reporting, look at the governance, like who do you let do stuff without anyone else signing it off. And you can play with that and see what would reduce the risk of this happening.
[00:11:33] Dr Genevieve Hayes: One thing that's interesting: I'd say most people wouldn't have the psych background needed to put in the psych profile
[00:11:41] Colin Priest: No, so there's two different ways I did it. Initially, a very mainstream way. I have a bit of a psych background, but I'm not an expert. I literally just prompted an AI. Once again, you put examples in of how particular types of people behave.
[00:11:58] Dr Genevieve Hayes: What you could do is just, if you are dealing with someone who's senior enough, like for example, A CEO, there's probably public domain transcripts of speeches they've given. You could use the LLM to create the psych profile based on those transcripts and then put
[00:12:14] Colin Priest: That's what my friend did. So let's jump away from banks for a second and talk about something else that was very high profile, and that was the CEO of Qantas, Alan Joyce.
[00:12:24] So my friend created a detailed psychological profile based upon things like TV interviews, where what you want to get is stuff where they're candid and it hasn't been written by lawyers, and historical decisions that he'd made.
[00:12:38] And we did that only using data up to 2020. And we then applied it to the next three years. And you look and you go, yep. He just keeps making stupid decisions. We modeled the other stakeholders, and you'd think, yeah, he's making decisions that improve the profitability.
[00:12:55] And you go, but if you model one of the key shareholders, which is an industry super fund, you can predict that that industry super fund is gonna come and stomp on them in the annual general meeting and force them to cut the executive pay as punishment for the destruction they were doing in the brand value.
[00:13:15] Dr Genevieve Hayes: It's sort of like a war games type version of working out how a CEO's going to act in response to events, and then how the key stakeholders are going to react in response to the CEO.
[00:13:29] Colin Priest: It's a bit of game theory happening with a lot of predictive modelling. And then you lay over the top of that some proper theory in risk management, in PR, in brand management, in communications. There's a lot of opportunities. I haven't published this yet, I'm still working on it. But it's just so damn exciting what you can do with this.
[00:13:49] Dr Genevieve Hayes: This is incredible. I have never gone to the same extent that you have with Alan Joyce, but I actually once had an important conversation I had to have with a key stakeholder who I'd known for a number of years. And I just put in a couple of sentences describing, this is who this person is, this is how they've typically responded in the past.
[00:14:12] Here are some examples of their behavior. Nothing that requires a psych degree, because I don't have one. And then I said, okay, now I would like you to role play this conversation with me. And it got the other person perfect. It was like, this is exactly how they would behave.
[00:14:31] Colin Priest: They talk about inaccuracies in LLMs, but you've gotta think about what they're trained on. They're trained on people's communication and people's behavior. They're incredibly good at replicating human behavior and human communication. So, talking of important stakeholders: my most important stakeholder, my wife. I'm a fairly stubborn person and she's very different to me.
[00:14:55] She is a fashion designer. I'm a data scientist and an actuary and an academic. We talk totally different languages. In fact, we literally do as well; English is not her first language. So we'd had this argument and I used AI. I know if I talk to her about the way I think, she won't understand a word, 'cause that's not how she thinks.
[00:15:16] And so I'm chatting with the AI. I go, here's what my wife is like, here are examples. Tell me how to best explain myself to her in a way she will understand
[00:15:27] Dr Genevieve Hayes: And did it work?
[00:15:28] Colin Priest: It did. She reacted so much better than my normal way of saying, but here are the facts, here are the facts. That doesn't work on her.
[00:15:35] Dr Genevieve Hayes: Oh, golly.
[00:15:36] Colin Priest: Yeah, that's when I discovered the problem was me. Yeah.
[00:15:39] Dr Genevieve Hayes: Yeah.
[00:15:41] Colin Priest: But coming back to your data scientists: they're often talking to people like a marketing manager. Marketing managers aren't data scientists. They think entirely differently. They use different language. They're motivated by different things.
[00:15:53] Being able to practice that and get ideas about what's the right way to chat to this person to get them to understand and get them aligned. And vice versa: what did they just say to me? It all sounded like jargon. Could you translate that into geek speak for me, please?
[00:16:11] Dr Genevieve Hayes: I have actually had that, where I've been having important conversations which I've recorded using Zoom because I wanted to be able to refer back, and I've actually put the transcript into AI and said, I'm not quite sure what this person said. Could you please translate this so that I can understand it?
[00:16:30] Colin Priest: Yes. So yeah, this is the sort of stuff that I get very excited about, 'cause it expands what is capable, and it expands the quality of what we can do.
[00:16:43] Dr Genevieve Hayes: Which LLMs do you typically use in your work?
[00:16:46] Colin Priest: I'm a bit of a womanizer when it comes to LLMs. I use all the mainstream ones. So I use ChatGPT, I use Claude, I use Gemini. I also use Perplexity, which isn't an LLM itself; it sits over the top of one or more of those. And on top of that, I also use some coding tools, like Cursor, that are more specific.
[00:17:11] I find it useful to switch around between them sometimes. And literally the most common thing that happens is I get one to check the other, 'cause they are structured differently. Particularly with code; they'll think about code very differently. I tend to use ChatGPT more than the others.
[00:17:30] But that's more a cost thing at the moment than anything else; they've got an API that's cheaper than the others. But if I want images, Gemini at the moment is the best at creating images. I'm just astounded at that. If I want diagrams, Claude is the best at diagrams. So often the best way to explain something is to have a diagram that says, well, this is how this relates.
[00:17:53] It's the best at that.
[00:17:55] Dr Genevieve Hayes: So Claude can actually produce images now. Can it?
[00:17:58] Colin Priest: It can do diagrams, it can't do images.
[00:18:00] Dr Genevieve Hayes: Oh, okay.
[00:18:02] Colin Priest: They're different. So it's literally like it's making a flow chart or a hierarchy or whatever. Sometimes that's better than an image; other times an image is better, and then I tend to use Gemini or OpenAI for that. But you know, GPT-5 has different modes. I dunno if you've played with it yet.
[00:18:22] It's only been out for a month. If you use the thinking mode, it won't do images for you anymore. It goes the Claude route and gives you diagrams. If you use the generic mode, it gives you images.
[00:18:33] Dr Genevieve Hayes: That's interesting.
[00:18:34] Colin Priest: Yeah, and I only realized that after I was unhappy with what it was producing and I just, ah, wonder what happens if I use the non-thinking mode and it gave me something better.
[00:18:42] And I go, Ooh, what's going on here? And then I figured it out. The images were literally diagrams. It was doing them as vector graphics and making them very geometric where I wanted something that looked a bit more natural.
[00:18:54] Dr Genevieve Hayes: You mentioned the ChatGPT API, so it sounds like a lot of your work you're doing at a programmatic level, rather than using the chat interface.
[00:19:03] Colin Priest: It just depends on what I'm doing at the time. The API is useful when I've gotta do something at scale. So, as we talked about, when I was generating synthetic data, I need an API for that. If I wanted to go and scrape data from hundreds of news stories, I need an API for that. But this morning I was developing lecture notes.
[00:19:24] I'm using the GUI for that. And it's like, I bring in a few documents and I go, look, I wanna write a lecture on this topic. Can you synthesize these documents into a coherent lecture narrative? And then I play around with it, 'cause then I realize, as usual, I hadn't been specific enough.
[00:19:41] But that's all right. And that's one of the beauties of the GUI: it's the interactive mode, where I haven't properly defined the problem yet.
[00:19:48] Dr Genevieve Hayes: So, do you have any LLM horror stories, things you tried using LLMs for that just didn't work out and made you think, I'll never do that again?
[00:19:57] Colin Priest: Too many. Just last week or the week before, I was using a couple of tools for coding, and I went, oh wow, it's so fast. And I'm getting it to write some fairly sophisticated analysis, and out pops this really interesting result, and I'm so excited about this, 'cause, oh, there's a research paper, this is exciting.
[00:20:18] And then I looked at it and go, it doesn't look right. And it had literally, instead of using my real data, put in mock test data, and that's what it was showing. And it had done that without asking me and without telling me. And so this is where having more than one LLM is good. I picked another LLM and go, please audit this result.
[00:20:38] It doesn't look right to me. Why does it look like this? And it comes back and it says, well, it's using simulated data. And it's like, where is this simulated data in here?
[00:20:50] Dr Genevieve Hayes: It's interesting what you say about using one LLM to check the other one because I actually came across a product on the internet that was designed to remove LLM hallucinations. And that was basically how it worked. So it had two competing LLMs and it would use one to check the other.
[00:21:10] And there was a whole research paper behind it that basically said that.
[00:21:13] Colin Priest: When you think about that, you know, that comes back to some of our fundamental actuarial theory: independence of risk. If you've got two independent sources and they disagree, you trust it less, but the average is gonna be more reliable than if you only use one. But you don't need to use more than one provider.
[00:21:29] You can literally have another prompt after the first one that says, go back and now be ultra critical, and find the flaws and unstated assumptions in what you just said.
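[The follow-up-prompt pattern just described can be sketched like this, assuming a `call_llm` helper that takes a chat-style message list; the helper and the exact message format are illustrative assumptions, not a specific product's API:]

```python
def draft_then_critique(question, call_llm):
    # First pass: get the model's answer.
    messages = [{"role": "user", "content": question}]
    draft = call_llm(messages)
    # Second pass: keep the first answer in the conversation history,
    # then ask the same model to attack it.
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": (
            "Go back and now be ultra critical. Find the flaws and "
            "unstated assumptions in what you just said."
        )},
    ]
    critique = call_llm(messages)
    return draft, critique
```

[The same two-pass structure works across providers: route the second call to a different model if you want the independence Colin mentions.]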
[00:21:40] Dr Genevieve Hayes: Yeah, I've taken to doing things like that recently, because I've been getting sick of the positivity bias,
[00:21:46] Colin Priest: Oh
[00:21:47] Dr Genevieve Hayes: and,
[00:21:48] Colin Priest: It's so sycophantic.
[00:21:49] Dr Genevieve Hayes: yes,
[00:21:50] Colin Priest: Just this morning it was telling me how brilliant I was, and I like to think I'm brilliant, I've got an inflated ego, but what I did was boring. It really wasn't all that clever.
[00:22:00] Dr Genevieve Hayes: Two things that I've started doing recently which I've found help overcome that. One of them is to say, pretend you're a wise mentor of mine who wants the best for me and is going to tell me if I'm going down the wrong track. And that tends to give a sort of more tough love thing without being horrible.
[00:22:20] And the other one I actually stumbled on the other day by mistake. I had this email conversation, I was requesting a refund for something, and it had gone from zero to crazy in like two emails, and I'm like, did I do something wrong? 'Cause why did this person I was requesting it from just spit the dummy?
[00:22:40] And so I put the emails in, but I didn't want it to tell me, yes, you were perfect and this person was evil. So I said, here is a business case study. And I just started referring to myself in the third person and put them in, you know, tell me how Genevieve behaved, tell me how the other person behaved, and tell me how they could improve.
[00:23:03] And I found, because it didn't even know that the people in this were real, it gave a nice balanced assessment.
[00:23:11] Colin Priest: That's an important hint for getting better stuff out of LLMs. They've been trained to make you feel better and that's a bias. And they've been trained for certain things like that. So yesterday I was trying to model some financial complaint data. And the LLM model I was creating just didn't match what the remedies were that the banks were doing at all.
[00:23:34] And I eventually figured out the problem was I had told it the wrong persona to have. I'd told it to be fair and reasonable. That's not how banks operate when dealing with complaints. And also, some other things I'd done in the prompt had given it an action bias, to be biased towards doing whatever you can to make the customer happy.
[00:23:55] And so I then changed the prompt and I made it a more legalistic prompt, and I changed the job title that I'd given it to simulate. So before, it was like, your job was to, I can't remember what the title was that I had, but it was, you were to remedy problems. So there's action bias already in that title.
[00:24:12] I said, now you're a complaints officer, and your job is to sift through and find any that have enough evidence. You know, reject stuff that doesn't have evidence; only do stuff when there's evidence there. All of a sudden, the model starts replicating much, much better what the banks actually were doing in this data.
[00:24:31] Dr Genevieve Hayes: Yeah, so it's all about getting the personas correct.
[00:24:34] Colin Priest: Yeah, and the motivations and the guardrails, yes. But personas are incredibly important, it turns out, in what an LLM does for you.
[00:24:44] Dr Genevieve Hayes: Yes. And that goes back to what you were saying, if you wanna use this as a data scientist to critique your work or to help you with things, it's about getting the personas of the key stakeholders correct.
[00:24:57] Colin Priest: Yeah, and so I've literally at one point in time said: you are my boss, and I described my boss, and you are a skeptic. You take a lot of convincing, but you can be convinced by evidence. Now, review this idea that I just had with that persona on, and explain where you think things aren't as exciting or practical as the idea makes out.
[00:25:19] And that was good, 'cause then I didn't go and embarrass myself in front of my boss and pitch something as the most exciting, brilliant idea ever, when in fact it did have a few things that needed dealing with if it was ever gonna be successful.
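[A minimal sketch of that skeptical-boss persona prompt. The persona wording and the system/user message structure are illustrative assumptions rather than Colin's exact prompt:]

```python
def build_review_messages(idea):
    """Build a chat message list that reviews an idea under a skeptical persona."""
    persona = (
        "You are my boss: a skeptic who takes a lot of convincing, but who "
        "can be convinced by evidence."
    )
    task = (
        "Review this idea with that persona on, and explain where things "
        f"aren't as exciting or practical as the idea makes out:\n\n{idea}"
    )
    return [
        {"role": "system", "content": persona},  # persona lives in the system message
        {"role": "user", "content": task},       # the idea to be critiqued
    ]
```

[Swapping the persona string (auditor, grumpy peer reviewer, complaints officer) changes the critique you get back, which is exactly the effect described above.]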
[00:25:34] Dr Genevieve Hayes: So. We've come up with a lot of different suggestions for how data scientists can use LLMs to help improve their work. For data scientists who wanna integrate LLMs more into their work processes and use them to increase their effectiveness in their jobs, where would you recommend they begin?
[00:25:52] Colin Priest: So, as with everything, start with something you already understand, because then if things go wrong, you'll notice. It's also much easier for you to get the prompts right, 'cause you understand what you're doing. But it's very similar to where you would start with data analytics. You always start with something you understand, that is safe and reliable.
[00:26:13] So start with something that you think either needs some better quality controls or some automation. Then the next thing is: use it as an assistant initially, rather than as an automation. So instead of getting it to do stuff for you initially, get it to give you ideas or recommendations, and then you implement them.
[00:26:34] The next step after that is: it gives you ideas, and then you can approve them and it will implement them for you. So do it in baby steps, because you'll find that, just like me giving the wrong instruction about how to handle bank complaints, there's a few things that you'll have blind spots on that you won't realize until you see the output.
[00:26:53] And if it is in an area you already understand, you'll pick it immediately. And then the other thing I'm gonna say, talking blind spots: use it for counter arguments, what we were just talking about. So literally give it a persona where it's got that auditor personality, or that grumpy peer reviewer personality.
[00:27:17] And then say, well, tell me where I could be wrong. Tell me where I've made an implicit assumption, haven't stated it, haven't checked it. Tell me where I could have done this in a faster way or a more reliable way. And then the other thing is communication. Do a profile.
[00:27:33] You can help it; if you do back and forth, it can help you define that persona. And now you say, okay, now be that persona. I'm gonna practice communicating my results to you. I want you to tell me whenever I'm being too technical or too verbose, or I'm not giving you any reason to care.
[00:27:56] And you do that in private. By the time you polish that and you go into the actual person, they go, wow, this person's a genius. You know,
[00:28:03] Dr Genevieve Hayes: Yeah,
[00:28:06] Colin Priest: they haven't seen all the mistakes you made.
[00:28:08] Dr Genevieve Hayes: yeah, yeah. I've had some of those conversations where I've bombed out and it's like, I'm so glad this was only with Claude.
[00:28:14] Colin Priest: Yeah. Yeah.
[00:28:16] Dr Genevieve Hayes: Yeah. So for listeners who wanna get in contact with you, Colin, what can they do?
[00:28:22] Colin Priest: Easiest way to find me is on LinkedIn. There's only a few Colin Priests out there.
[00:28:27] But yeah, that's the easiest way to find me. It is the beauty of having an uncommon name.
[00:28:32] Dr Genevieve Hayes: So I would agree with that.
[00:28:34] Colin Priest: Yeah. One L in Colin, by the way.
[00:28:38] Dr Genevieve Hayes: And there you have it. Another value packed episode to help turn your data skills into serious clout, cash, and career freedom. If you enjoyed this episode, why not make it a double? Next week, catch Colin's Value Boost, a 10 minute episode where he shares one powerful tip for getting real results, real fast.
[00:28:59] Make sure you're subscribed so you don't miss it. Thanks for joining me today, Colin, and for those in the audience, thanks for listening. I'm Dr. Genevieve Hayes, and this has been Value-Driven Data Science.