Episode 17: How to Avoid an AI Scandal

Download MP3

00:00:00 Dr Genevieve Hayes
Hello and welcome to Value Driven Data Science, brought to you by Genevieve Hayes Consulting. I'm Dr Genevieve Hayes, and today I'm joined by Chris Dolman to discuss how you can avoid an AI scandal.
00:00:15 Dr Genevieve Hayes
Chris is the Executive Manager of Data and Algorithmic Ethics at the Insurance Australia Group,
00:00:21 Dr Genevieve Hayes
a Gradient Institute Fellow, and he regularly contributes to external research on responsible AI and AI ethics. In 2022, he was named the Australian Actuaries Institute's Actuary of the Year
00:00:35 Dr Genevieve Hayes
in recognition of his work around data ethics, and was also included in the Corinium Global Intelligence Business of Data
00:00:44 Dr Genevieve Hayes
list of the top 100 innovators in data and analytics. Chris, welcome to the show.
00:00:50 Chris Dolman
Thanks for having me. It's great to be here.
00:00:53 Dr Genevieve Hayes
Great to have you. And before we go on, you have to do a disclaimer.
00:00:57 Chris Dolman
I do, yes. Thank you for that. So just to let everyone know that I'm here in my personal capacity as Chris Dolman, and anything I say should be attributed to me, not necessarily my employer or anyone else you might think I'm associated with.
00:01:10 Dr Genevieve Hayes
With that out
00:01:11 Dr Genevieve Hayes
of the way, let's get on with the show.
00:01:14 Dr Genevieve Hayes
Elon Musk once described AI as being far more dangerous than nukes, and a recent study conducted at NYU found that 36% of AI researchers surveyed believed it plausible that decisions made by AI or machine learning systems could cause a catastrophe this century.
00:01:36 Dr Genevieve Hayes
That is at least as bad as an all out nuclear war.
00:01:39 Dr Genevieve Hayes
Now, I don't think the technology is quite at the existential threat level just yet, but it is at the point where AI can cause significant damage to the reputation of an organisation if it's used incorrectly.
00:01:54 Dr Genevieve Hayes
And a lot of organisations are now starting to wake up to that fact and have become very interested in understanding AI ethics and responsible AI.
00:02:05 Chris Dolman
That's right. That's a wonderful trend.
00:02:08 Dr Genevieve Hayes
The fact that you're working in a job with the title Executive Manager of Data and Algorithmic Ethics suggests that your employer is one of these organisations.
00:02:17 Chris Dolman
Well, that's
00:02:18 Chris Dolman
right. And I like to think that we were one of the first in the market to make that decision.
00:02:22 Chris Dolman
Certainly when I started that role, almost five years ago now, I think there were very few others that I was aware of in the market doing similar sorts of work.
00:02:32 Chris Dolman
And there's a lot more now than when I started, so the trend has definitely been in a positive direction. So I think others have started to recognise that this is a
00:02:40 Chris Dolman
topic they need to take seriously.
00:02:43 Dr Genevieve Hayes
How did you first become interested in data and AI ethics yourself?
00:02:48 Chris Dolman
It was kind of an organic sort of process. So I spent part of my career as an actuary doing all the usual things that actuaries do in insurance companies, and then moved into the sort of AI, data-driven decisions space,
00:03:03 Chris Dolman
probably around 2013, 2014, sometime like that. And I was basically building the data science practice at IAG outside of the traditional actuarial areas, right?
00:03:15 Chris Dolman
So insurance companies have huge, you know, pricing teams and folks building capital models and reserve models and things like that. And so that's all well established.
00:03:24 Chris Dolman
But there's heaps of other decisions that you can apply data science to, so we spent a bit of time looking at claims, decisions and things like that.
00:03:31 Chris Dolman
And so I was basically building the team trying to do these sort of new and different things with data and models, rather than the traditional stuff. So I spent a few years building that team up and doing some useful projects
00:03:44 Chris Dolman
for the
00:03:44 Chris Dolman
company, and it became pretty clear to me that there needed to be someone independent from that team with a suitable level of sort of technical knowledge about how things were actually done, to basically wear a sort of risk management hat, if you like.
00:03:58 Chris Dolman
So we obviously have a risk management function who do all sorts of risk management things, but you need to have a sort of
00:04:04 Chris Dolman
fairly specialised person to really look at the ethics issues that can arise from these systems, and you don't typically find that in your average risk manager, who's a bit more of a generalist.
00:04:16 Chris Dolman
And there were loads of news stories around this time, sort of, you know, 2015, 2016-ish, of AI systems going wrong.
00:04:23 Chris Dolman
And so I kept looking at that as the person managing this team, thinking, well, I need to avoid a sort of issue like that. And it'd be great if there was someone independent from the team who was, you know, challenging what
00:04:35 Chris Dolman
I was doing.
00:04:36 Chris Dolman
And so I basically proposed that this job should be created, and that it would be really interesting if I was the one doing it. So that kind of was how
00:04:44 Chris Dolman
it came about, I guess. And yeah, we've gone from there.
00:04:48 Dr Genevieve Hayes
So you're the AI and data ethics specialist for the whole of the organisation?
00:04:53 Chris Dolman
Basically, yeah.
00:04:54 Dr Genevieve Hayes
I think it's very good that IAG's big enough that it can employ a person in a role like yours, because I would imagine that a lot of organisations wouldn't be big enough to have a dedicated AI ethics specialist.
00:05:09 Chris Dolman
Yeah, that's probably true. So you need to make sure, if you're not of that sufficient scale, maybe your data science team is like three people and so you're not necessarily going to have someone purely dedicated to this particular topic, that your risk management team gets suitably trained, or that you've got some external advisors that you can bring in from time to time
00:05:29 Chris Dolman
to help you on this topic, or that the data science team gets trained properly. I mean, that should happen anyway. I think there needs to be some independence from that team, to make sure that there's objectivity.
00:05:40 Dr Genevieve Hayes
AI ethics and data ethics are now becoming a part of a lot of data science training.
00:05:46 Dr Genevieve Hayes
A lot of university lecturers are now incorporating it into their courses, so I think you'll find that there's an increasing number of data scientists who are thinking about ethics from the word go.
00:05:59 Chris Dolman
I think that's right. And that's obviously a very good trend. So definitely people will have more of a skill set, which is helpful.
00:06:06 Chris Dolman
I think you do still need to have an independent function, though, challenging what that team is doing, because that team might get an instruction from management to, you know, build a model to do a certain
00:06:16 Chris Dolman
thing, and maybe that team doesn't have the political power to object to that if it's an illegitimate intent, for example. That team might have that power, or it might not, and so you need an independent function, either a risk function or an audit function, to really be asking those sorts of questions. I do think there needs to be some training of, or skill in, the second or third
00:06:37 Chris Dolman
line to make that work.
00:06:39 Dr Genevieve Hayes
I think the other problem is data scientists really want to get involved in doing a lot of these projects and in their enthusiasm to do a particularly exciting sounding project, they'll often fail to stop and ask is this something they should be doing?
00:06:57 Chris Dolman
Well, that's right. It's the sort of 'can we, should we' dilemma which a lot of people face. And, yeah,
00:07:03 Chris Dolman
if you're someone who will, you know, play around with new technology, try and build things in a bit of a hacky way, and just see how things work, which a lot of engineering-type
00:07:13 Chris Dolman
people are like,
00:07:15 Chris Dolman
then it's quite easy to build something that, maybe if you stepped back for a second and thought about it, you might
00:07:20 Chris Dolman
not want to do. I think that natural tendency certainly is there in that type of person. It's quite hard to resist sometimes when there are new shiny toys around that we can play with. It's something people need to build as a skill in themselves, to learn to
00:07:34 Chris Dolman
say no to things, even if they look exciting, if it's the wrong
00:07:37 Dr Genevieve Hayes
thing to be doing. And learn to take that step back before they start diving in and embarking on the
00:07:42 Dr Genevieve Hayes
project.
00:07:43 Chris Dolman
That's right, and it's not always about saying no as well. I mean, not just sometimes but quite often, you can pivot ideas into something that's acceptable and away from something that's not.
00:07:55 Chris Dolman
So you don't necessarily say we're not going to do that project at all. Sometimes you have to say that, but sometimes you can take something that might be not
00:08:03 Chris Dolman
quite so good and turn it into something that's actually OK. But if you don't step back and
00:08:08 Chris Dolman
think about that
00:08:09 Chris Dolman
stuff, then you
00:08:10 Chris Dolman
can go down the wrong track quite easily.
00:08:13 Dr Genevieve Hayes
Last year, you co-authored a chapter in the book Checkmate Humanity on the de-risking of automated decisions. When you're talking about automated decision making or AI-based decision making, what exactly do you mean?
00:08:28 Chris Dolman
Oh gosh, that's a good question. It's worth saying as well that whilst that's a chapter in a book, which people can buy, and I definitely encourage them to do that, that chapter is also available as a free paper, and we'll put a link in the podcast notes to that paper, authored by myself and others at Gradient Institute. So we'll definitely share that with people. So what do we mean by automated
00:08:46 Chris Dolman
decisions? We're a little bit hand-wavy typically on this definition, and I don't think it's helpful to be too precise, because otherwise that's just a really unproductive debate, right?
00:08:57 Chris Dolman
But whenever you're removing humans from a substantial part of the decision making process, maybe they're still involved, maybe at the end or maybe in defining the objectives. So you're going to have some people involved somewhere in the process.
00:09:09 Chris Dolman
typically. But generally, if you're taking data as an input and generating some output automatically, be that a prediction or a recommendation or some action to be taken, and whether or not some human reviews that is sort of academic, then that's basically an automated system.
00:09:26 Chris Dolman
So that could be, like, really, really simple business rules: take this data from customers and generate a recommendation for a product to put on a website
00:09:36 Chris Dolman
Or something, right?
00:09:37 Chris Dolman
So that happens plenty of times in retail today. Insurance companies do this when they set prices for policies. So you
00:09:45 Chris Dolman
go in and
00:09:46 Chris Dolman
you go and buy a car insurance policy and you're asked a bunch of questions. The answers to those questions will be used to work out the riskiness of you as an individual and of your vehicle, and that will automatically generate a price at the end.
00:09:59 Chris Dolman
That's your quote for insurance, and no human being was involved in that process. Obviously, humans are involved in building the system that generates prices, but that particular interaction in the moment is done totally automatically.
00:10:11 Chris Dolman
So that's an automated system. Sometimes they're really complicated, insurance pricing is pretty complicated, sometimes they're really simple, but the risk management issues are
00:10:19 Chris Dolman
sort of the same. Complexity doesn't really matter that much, because all the risks that can occur can still occur for simple systems as well.
00:10:28 Dr Genevieve Hayes
And with that insurance example, I'd imagine before AI systems came about, probably what happened was there was a person in an office looking
00:10:37 Dr Genevieve Hayes
up a rate book.
00:10:38 Chris Dolman
That's right. And that rate book is an automated system in some sort of abstracted sense, if you like.
00:10:44 Chris Dolman
So you take inputs, being the data that the person has given you, and then you look up the numbers in that rate book and multiply them together or add them up or whatever
00:10:53 Chris Dolman
you do, and that generates an output. And you don't usually have any authority over how that calculation works, right?
00:11:00 Chris Dolman
You're just looking up numbers in tables. But computers weren't invented, and so you couldn't use computers to do that.
00:11:06 Chris Dolman
Now you can, so that could be automated. But humans were removed from a large part of that decision making process anyway, traditionally.
00:11:13 Chris Dolman
They might have discretion at the end, though, right? And underwriters often do, even today for certain types of insurance.
00:11:20 Chris Dolman
But yeah, it's an example of an automated system, and you can find, you know, insurance pricing models being proposed way, way back in the late 1600s, very, very pre-computer.
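For readers following along, here is a minimal sketch of the kind of rate-book calculation Chris describes, where the whole "automated system" is just table look-ups multiplied together. The factor tables, base premium and the quote function below are invented purely for illustration; they are not real insurance rates or any insurer's actual method.

```python
# A minimal sketch of a rate-book style premium calculation: look up a factor
# for each answer the customer gives and multiply them together.
# All numbers here are made up for illustration only.

BASE_PREMIUM = 400.0

AGE_FACTOR = {"under_25": 1.8, "25_to_60": 1.0, "over_60": 1.2}
VEHICLE_FACTOR = {"hatchback": 0.9, "sedan": 1.0, "sports": 1.6}
SUBURB_FACTOR = {"low_risk": 0.95, "medium_risk": 1.0, "high_risk": 1.3}

def quote(age_band: str, vehicle_type: str, suburb_risk: str) -> float:
    """Multiply the looked-up factors, exactly as a clerk with a printed
    rate book would: there is no discretion in the calculation itself."""
    return round(
        BASE_PREMIUM
        * AGE_FACTOR[age_band]
        * VEHICLE_FACTOR[vehicle_type]
        * SUBURB_FACTOR[suburb_risk],
        2,
    )

print(quote("under_25", "sports", "high_risk"))  # 1497.6
```

Whether the look-up is done by a clerk or a computer, the calculation is the same; automating it only changes the scale and speed at which it runs.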
00:11:32 Dr Genevieve Hayes
When I was a student actuary, one of our lecturers was a man in his 60s, so he'd been in the industry for a very long time, and he was telling us about how insurance calculations were performed when he was a graduate.
00:11:47 Dr Genevieve Hayes
And what it involved was having a room full of graduates doing
00:11:53 Dr Genevieve Hayes
calculations with pen and paper all day, every day. And that was a computer in those days.
00:11:58 Chris Dolman
That's right, a computer used to be a person. And then we invented automated mechanisms for doing those calculations, thankfully
00:12:05 Chris Dolman
for us. So anyway, shall I tell you a bit more about what we sort
00:12:11 Chris Dolman
of said in that paper?
00:12:12 Dr Genevieve Hayes
Oh please do.
00:12:13 Chris Dolman
We basically identify some of the core issues that arise when sort of automation occurs. It's useful to think of that by using examples, because I think it really brings some colour to the discussion.
00:12:26 Chris Dolman
So maybe let's think about call centres, right, and how we manage the humans in call centres and deal with the issues that those humans
00:12:33 Chris Dolman
cause, and then compare that to, like, an automated version of a call centre, which is, you know, like a website doing a similar sort of
00:12:40 Chris Dolman
job. Like, what can go wrong in a call centre? I've never managed a call centre, but I've certainly observed them and sort of thought about them a bit.
00:12:48 Chris Dolman
I mean, you've got human beings on the phone with customers all day long making decisions. And so there's lots of different failure modes that can occur, right? You could have incompetence of those people on the phone. Maybe they're not properly trained.
00:13:01 Chris Dolman
Maybe they're just not very good at their job, and so they're making decisions that perhaps they shouldn't be. They can be inconsistent, so maybe they're making good decisions on one day, bad decisions on another day. So they're not taking the same action with the same input, if you like.
00:13:13 Chris Dolman
Maybe they're just having a bad day. Maybe they're tired, they didn't sleep well, or maybe they've got personal issues going on
00:13:19 Chris Dolman
that are distracting them. All these things can make human beings make similar sorts of bad decisions. Maybe the rules they've been given are unclear, so maybe the rule book that they've been given and trained on didn't describe this particular situation that they've come across, and so they've
00:13:34 Chris Dolman
got to use their
00:13:35 Chris Dolman
judgement. Maybe they get that wrong. And they have free will, right? So they could arbitrarily ignore the rules
00:13:41 Chris Dolman
if they wanted to and just make some decision that's the wrong one according to the rules. But they can do that because they have free will. And so all these sorts of things can go wrong with human beings.
00:13:51 Chris Dolman
So, over many, many, many
00:13:52 Chris Dolman
years of managing call centres, we've worked out controls that we put in place in all sorts of businesses to manage people in that situation, right?
00:14:00 Chris Dolman
So there'll be rule books that are clear, that are well written, that people can understand, that they're trained on, and then you monitor people's adherence to those rules
00:14:11 Chris Dolman
by some audit function or risk function or what have you, who might be listening in to calls occasionally, or checking recordings or things like that, making sure that those people are doing what they ought
00:14:19 Chris Dolman
To be doing.
00:14:20 Chris Dolman
You usually have.
00:14:21 Chris Dolman
some escalation or complaints mechanism, right? So if customers aren't happy, they can talk to a manager or a higher-up, and so if things are going wrong,
00:14:28 Chris Dolman
that can be used as a control. And those people problems just don't scale very well, right?
00:14:35 Chris Dolman
So if I've got someone in the call centre who's not doing their job properly, there's a limit to how much damage they can cause, because they can only be on the phone for, you know, eight hours a day dealing with a certain number of customers. They're not my whole call centre, so
00:14:46 Chris Dolman
the damage can be quite limited, even when damage occurs.
00:14:50 Chris Dolman
So we can manage that according to that limitation. When we automate that, so if we take some of those interactions away from people and now we've got some automated system doing a similar thing, like a website with some data-driven prices built into it, or a chatbot, if you want to think about that,
00:15:06 Chris Dolman
but it doesn't have to
00:15:07 Chris Dolman
be that, it could
00:15:07 Chris Dolman
be anything really, you actually solve a lot
00:15:09 Chris Dolman
of those problems that you had with people.
00:15:11 Chris Dolman
A lot of those problems with people related to free will and sort of inconsistency, and computers don't have free will. They're going to do exactly what they're told all day, every day until you turn them
00:15:23 Chris Dolman
off. They're going to be completely consistent: given the same inputs you get the same outputs, and so you can rely on them to do what you've told them to do.
00:15:31 Chris Dolman
The problem is they do literally exactly what they're told, and that common sense doesn't exist, so you'll get some spectacularly silly failures which human beings would not make,
00:15:43 Chris Dolman
but computers will, because you haven't quite worked out how you've programmed them, and maybe there are some weird edge cases that cause some funny problems.
00:15:51 Chris Dolman
There are loads of examples of that in the literature now, my favourite being the bald linesman example. Have you seen that one?
00:15:58 Dr Genevieve Hayes
No, I haven't.
00:15:59 Chris Dolman
This was a soccer game in Scotland, I forget what the team was, and they'd replaced the human camera operator with an automated camera operator that was just trained to follow the ball.
00:16:11 Chris Dolman
And so it was just, you know, moving around following the ball. But there was a linesman with a bald head, and that looked, from a distance and in certain light, a bit too much like the ball, and so
00:16:20 Chris Dolman
the game got missed. It just followed the linesman up and down the touchline.
00:16:23 Chris Dolman
Like, no human's going to make that error. If you tell a human camera operator to follow the ball, they're not going to film the linesman. But the computer does it, because weird errors happen that we can't comprehend, and you might not be monitoring for them properly, right?
00:16:36 Chris Dolman
If you're used to monitoring people, then you've got, you know, risk managers who are listening in to calls and doing spot checks and things like that. You've got to monitor
00:16:45 Chris Dolman
automated systems very differently. It's more like an IT sort of monitoring process: you're checking for systematic failures and down
00:16:52 Chris Dolman
time, and it's totally different. So that's all in the way the system works. The design of those systems can also fall down, because traditionally management's just writing the rule book to tell the, you know, service staff how to operate.
00:17:06 Chris Dolman
Management aren't writing the rule book directly in most automated systems, they're declaring a goal that the automated system should have.
00:17:14 Chris Dolman
and giving that to the model builders with some data, to try and train a model to achieve it.
00:17:20 Chris Dolman
And management typically aren't used to specifying that goal to the level of precision that you really
00:17:26 Chris Dolman
need to. And
00:17:27 Chris Dolman
I've heard throughout my career people say things like 'my goal is to maximise sales'.
00:17:32 Chris Dolman
Well, that's never been true at any time it's ever been said, unless people are literally giving things away, right? So there are always constraints in place.
00:17:42 Chris Dolman
But quite often those constraints are sort of unsaid, right? We don't need to say 'don't give things away', because people just know that. The computer doesn't know,
00:17:50 Chris Dolman
and so if you don't tell it that that's part of the goal, or set the constraint, it's going to give things away
00:17:55 Chris Dolman
eventually. So you've got to be quite careful how you
00:17:57 Chris Dolman
programme these things.
00:17:59 Chris Dolman
And maybe the AI devs or the model builders don't quite understand those implicit constraints, particularly if they're not from the industry, because some of those might be quite subtle.
00:18:09 Chris Dolman
So you've got to be quite careful that there's that level of understanding as you instruct people downwards. And managers, if they've not built
00:18:16 Chris Dolman
automated systems before, don't really know how to do that, so that's quite tricky. And then upwards, there's no manual at the end, so the manager can't pick up the instruction book and say, OK, I'm happy with how we're instructing those decisions to be
00:18:29 Chris Dolman
made. You can't do that, right? You might have some explainable AI process to give them a feel for what's going on, but it's not quite the same thing.
00:18:37 Chris Dolman
And so do they really have confidence in how those decisions are being made? Perhaps not. And so there are all sorts of different failure modes, is what I'm trying to illustrate, which didn't really exist in human systems.
00:18:47 Chris Dolman
We sort of solved those by automation, but we've created a whole bunch of new ones, and so we've got to really adapt the way our governance works to deal with that.
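To make the 'maximise sales' point concrete, here is a toy sketch (not from the paper): an optimiser given only 'maximise sales' will drive the price to zero, because the unstated constraint about margin was never encoded. The demand curve, the unit cost, and the expected_sales and best_price functions are all invented for illustration.

```python
# Toy illustration: an optimiser told only to "maximise sales" will happily
# give the product away, because the unstated constraint (maintain a margin)
# was never encoded. Numbers and the demand curve are invented.

def expected_sales(price, base_demand=1000.0, sensitivity=80.0):
    """Very simple linear demand curve: the lower the price, the more sales."""
    return max(0.0, base_demand - sensitivity * price)

def best_price(candidate_prices, min_margin=None, unit_cost=5.0):
    """Pick the price that maximises sales volume, optionally constrained
    to keep at least min_margin of profit per unit."""
    feasible = [
        p for p in candidate_prices
        if min_margin is None or (p - unit_cost) >= min_margin
    ]
    return max(feasible, key=expected_sales)

prices = [round(0.5 * i, 2) for i in range(0, 41)]  # $0.00 to $20.00 in 50c steps

print(best_price(prices))                  # 0.0 -> "maximise sales", taken literally
print(best_price(prices, min_margin=2.0))  # 7.0 -> the unstated constraint made explicit
```

The human sales manager never needed the second argument spelled out; the automated system does.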
00:18:56 Dr Genevieve Hayes
Sounds like the biggest problem with human systems is humans going rogue. And the biggest problem with AI systems is AI not going rogue.
00:19:06 Chris Dolman
Umm yeah, that might be an interesting way of thinking about it, although maybe Rogue is an interesting word to explore.
00:19:12 Chris Dolman
I mean, it depends what we mean by rogue when we talk about humans in that sense. The way you used it, you're sort of talking about humans, like, almost deliberately breaking the rules, or using their free will in a way that we wouldn't
00:19:25 Chris Dolman
desire. So that's kind of what we mean by a rogue human. With rogue AI, it's true that it's sort of not going rogue, in that it's kind of obeying the instructions precisely.
00:19:36 Chris Dolman
But maybe we didn't quite know what the instructions were. I
00:19:38 Chris Dolman
think that's the problem. So it's
00:19:40 Chris Dolman
true that it's not going rogue, but maybe it's kind of going rogue in our minds, because we didn't know how we'd instructed it.
00:19:46 Chris Dolman
That's perhaps the core of the issue.
00:19:49 Chris Dolman
But there are other issues as well around scalability, right? So rogue-ness will scale very well with an automated system, whereas it doesn't with humans.
00:19:57 Dr Genevieve Hayes
Well, unless you get a complete uprising. But that's probably not going to happen in a non-revolutionary situation.
00:20:04 Chris Dolman
Yeah, yeah. I mean, look, humans can get together and scale it that way, but it's not very quick usually.
00:20:11 Chris Dolman
Yeah. So then we basically take that sort of discussion, which we have at the start of this paper, about how systems fail differently to humans, and we say, well,
00:20:19 Chris Dolman
this then translates into some risks that you need to be mindful of when you're building these systems, and we sort of have three categories for that. So there are
00:20:29 Chris Dolman
issues of legitimacy. So are you building something that should be built, that people can accept as, like, socially acceptable?
00:20:36 Chris Dolman
Is it legal? Is it suitably transparent? Is it something that ought to exist? Those sorts of things. That's right
00:20:42 Chris Dolman
at the start of, like, the development cycle: are you building something that ought to be built? Then we talk about design issues. So
00:20:48 Chris Dolman
if you've got something that ought to be built, did you actually design a system that achieves that goal, or did you make some error somewhere and get that wrong? And then you have execution problems. So even if you have a legitimate
00:21:00 Chris Dolman
aim and you build it correctly, you can still mess up the execution. You know, you get drift and all sorts of things that can go wrong.
00:21:07 Chris Dolman
And if you don't detect those things, then that can cause massive failures. So we go into those in quite some depth in the paper, with sort of real case studies to illustrate where these have actually caused real failure in the wild.
00:21:21 Chris Dolman
So we're not making this stuff up. It's genuine stuff that's occurred before. So people should be very concerned about that because it could happen to them.
00:21:28 Dr Genevieve Hayes
I thought the case studies were one of the more interesting parts of that chapter.
00:21:32 Dr Genevieve Hayes
I'd actually like to go through one of them here, the one that I'm familiar with, which is the robodebt scandal.
00:21:40 Chris Dolman
Yeah, we can talk about robodebt for a bit. I actually wrote another paper a couple of years before this on AI scandals, and I used robodebt in that as well.
00:21:48 Chris Dolman
So I've used it twice. It's
00:21:50 Chris Dolman
a really, really good
00:21:51 Chris Dolman
illustrative example of some of the things that can go
00:21:53 Chris Dolman
wrong. It's worth noting that there's a Royal Commission going on at the moment about it, so we'll probably learn more about what actually went on in due course.
00:22:02 Chris Dolman
So a lot of this is sort of speculation based on what's already available in public. What I find most helpful about robodebt, I don't know about you, is that it was such a simple system. And so quite often
00:22:13 Chris Dolman
what I find in this space is people seem to take the view that the issue is the level of complexity in these systems, and so we almost need new mechanisms and new types of risk management or laws or rules because of the complexity of AI systems. And that might be true sometimes.
00:22:31 Chris Dolman
But if we only have that attitude, then we forget about things like robodebt, which was really, really simple. I mean, you can write down the maths on the back of a napkin.
00:22:38 Chris Dolman
There's no complexity there at all. So if you insist that all your new, you know, regulations and laws should apply to complex AI stuff, you're going to miss things like robodebt. So I think it's really important to focus on the automation itself rather than the
00:22:52 Chris Dolman
complexity there.
00:22:53 Dr Genevieve Hayes
For any of our listeners who are unfamiliar with robodebt,
00:22:56 Dr Genevieve Hayes
robodebt is the nickname used to describe the automated debt recovery system used by the Australian Taxation Office between July 2015 and November 2019. The system used a computer algorithm to match welfare payments with averaged income data,
00:23:17 Dr Genevieve Hayes
and then automatically issued debt notices where overpayments were identified. However, the use of averaged income data instead of actual income data was incorrect,
00:23:28 Dr Genevieve Hayes
leading to $1.763 billion of unlawfully claimed debt and, as you mentioned before, a Royal Commission. Does that sound right to you?
00:23:40 Chris Dolman
It does, with one correction perhaps. I think you said that it was the ATO's system, and my understanding is it was, well, not Centrelink, but whatever the body is that manages Centrelink, DHS I think it is, that provided the data.
00:23:54 Chris Dolman
But I think the system was built by another department. That's my understanding anyway, but that might be incorrect.
00:24:00 Chris Dolman
The annualised income data came from the ATO, but I think others used it, perhaps unwisely. And the way the system should have run, my understanding is it should have looked at fortnightly earnings or weekly earnings, I think it's fortnightly it should have been,
00:24:15 Chris Dolman
because that's how your benefits were supposed to be paid, relative to your fortnightly earnings.
00:24:19 Chris Dolman
And obviously, if you've got uneven earnings during the year, if you're like a casual worker or something, then a simple average of your annualised income is not going to be the same as using your fortnightly earnings in every fortnight and then calculating the benefit each fortnight. And so it doesn't take a genius to work out the maths there.
00:24:39 Chris Dolman
That's the problem.
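To see why the averaging goes wrong, here is a worked sketch. The income test below (a $500 fortnightly payment with a $300 free area and a 50-cents-in-the-dollar taper) and the example earnings are invented purely for illustration, not the actual Centrelink rules, but the arithmetic failure is the same: for someone with uneven earnings, smearing annual income evenly across the year understates what they were lawfully entitled to and manufactures a phantom "overpayment".

```python
# A worked sketch of why averaging annual income misfires for uneven earners.
# The benefit amount, free area and taper rate are hypothetical.

FORTNIGHTS = 26
MAX_BENEFIT = 500.0   # hypothetical fortnightly payment
FREE_AREA = 300.0     # hypothetical income allowed before the taper applies
TAPER = 0.5           # hypothetical reduction per dollar above the free area

def benefit(fortnightly_income: float) -> float:
    """Entitlement for one fortnight under the toy income test."""
    reduction = max(0.0, fortnightly_income - FREE_AREA) * TAPER
    return max(0.0, MAX_BENEFIT - reduction)

# A casual worker: no income for half the year, $3,000 a fortnight the other half.
actual_income = [0.0] * 13 + [3000.0] * 13

# What they were lawfully entitled to, assessed fortnight by fortnight.
entitled = sum(benefit(x) for x in actual_income)

# What a robodebt-style check computes: smear annual income evenly, then reassess.
averaged = sum(actual_income) / FORTNIGHTS        # $1,500 every fortnight
reassessed = benefit(averaged) * FORTNIGHTS

print(f"paid on actual fortnightly income: ${entitled:,.2f}")    # $6,500.00
print(f"recalculated on averaged income:   ${reassessed:,.2f}")  # $0.00
print(f"phantom 'overpayment':             ${entitled - reassessed:,.2f}")
```

The person was lawfully paid in the fortnights when they earned nothing, but the averaged recalculation makes it look as though they earned too much in every fortnight, so the whole amount comes back as a supposed debt.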
00:24:41 Dr Genevieve Hayes
But the people who were building the system would have been people who themselves received their income on a fortnightly basis.
00:24:49 Dr Genevieve Hayes
They were probably unfamiliar with, or had no personal experience of, people who did have uneven income, so it didn't even occur to them to build those cases into the system.
00:25:00 Chris Dolman
Perhaps. I mean, I think this is where, if you're writing a procedure to enact something written in regulation, you've got to be really, really careful that you get people familiar with the regulation involved, to make sure that it really is doing the thing that it ought to be doing.
00:25:16 Chris Dolman
So certainly I'd expect the average data scientist is not going to be familiar with the nuances of, you know, that sort of regulation.
00:25:25 Chris Dolman
They're just not going to have looked at it before. So in the design process, you need to have people who have deep familiarity with that.
00:25:31 Chris Dolman
I mean, I don't know what happened in that particular situation. I'm sure we'll find out in due course.
00:25:37 Chris Dolman
And there's been a lot of speculation.
00:25:39 Dr Genevieve Hayes
If you could go back in time to when robodebt was first developed, what would you have done differently to mitigate the risks involved?
00:25:47 Chris Dolman
Well, I think if we look at the three categories I sort of very quickly described before, I think you've got to focus on each of them and then work out whether there are issues there.
00:25:58 Chris Dolman
So legitimacy, right? Is this thing a legitimate thing to do? Well, it was ultimately found to be illegal. So clearly it's illegitimate.
00:26:07 Chris Dolman
And so someone with suitable legal training probably should have worked that out before the system was even developed. At its inception, that should have been worked out. Certainly, further down the development cycle it could also have been reviewed, and that could have been worked out.
00:26:21 Chris Dolman
At the very least, that saves a lot of development time, right? Not taking the time to build something that's illegitimate is useful, because you can then go and do something that is legitimate. But it also prevents a lot of harm from occurring, so, you know, that's good.
00:26:37 Chris Dolman
Even if we thought it was legitimate, though, so even if we thought it's a legitimate goal, we should go and do debt recovery in this way, even if we thought that were true, there are obvious design problems.
00:26:48 Chris Dolman
I mean, the maths of the averaging method that we discussed was obviously wrong. If we thought that it was a legitimate aim to go and collect that debt,
00:26:58 Chris Dolman
we should have implemented that
00:27:00 Chris Dolman
directly and not used some proxy that had obvious issues. So there are design issues there. Even without that, though, even if you thought we've got a legitimate aim and we've designed it properly,
00:27:12 Chris Dolman
execution also had problems here. We were sending notices out to people and, you know, they were scared.
00:27:20 Chris Dolman
They were worried, they didn't have an obvious avenue to go and, you know, ask questions and complain. It dragged out for a long period of time for a lot of people, and there was genuine harm from
00:27:30 Chris Dolman
that. So the execution was problematic for that reason, because even if it's legitimate, you shouldn't be giving people that sort of distress. That's not
00:27:38 Chris Dolman
good. And it took a long time, right, for the issues to properly get identified and accepted and rectified. There were
00:27:46 Chris Dolman
a number of
00:27:46 Chris Dolman
years that went by, which for such a simple issue probably shouldn't have been the case. So there are definite issues in execution there as well. So failures in lots of categories, and
00:27:58 Chris Dolman
yeah, it's an example I like to use just to illustrate how things can go wrong, even in the simplest of situations.
00:28:04 Dr Genevieve Hayes
My understanding was that the government was receiving complaints about the system within the first six months of it being implemented, and it reminds me of when I was teaching a number of years back. I used to have this rule for my class.
00:28:18 Dr Genevieve Hayes
If one person asks me about something I've taught, then it might just be that they can't understand it for some reason.
00:28:26 Dr Genevieve Hayes
But if three or more people are asking me the same question, then I've done something wrong. I think if the government had had some rule like that: if one or two people are complaining, it's them, but once
00:28:38 Dr Genevieve Hayes
the volume of complaints gets above a certain number, there's probably some issue with the system and you should look into it,
00:28:46 Dr Genevieve Hayes
then that might have helped mitigate the situation.
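A crude sketch of that rule of thumb, purely illustrative: once more than a couple of distinct people raise the same issue, flag it for review rather than treating each complaint as a one-off. The issues_needing_review function, the threshold of three and the sample complaints are all made up for the example.

```python
# A toy version of the "three or more people" rule of thumb: one or two
# complaints about an issue might just be individual circumstances, but once
# several distinct people raise the same issue, treat it as a signal about
# the system itself rather than about the complainants.

REVIEW_THRESHOLD = 3

complaints = [
    ("debt notice looks wrong", "customer_014"),
    ("debt notice looks wrong", "customer_203"),
    ("website is slow",         "customer_077"),
    ("debt notice looks wrong", "customer_951"),
]

def issues_needing_review(complaints, threshold=REVIEW_THRESHOLD):
    """Return issues raised by at least `threshold` distinct complainants."""
    complainants_per_issue = {}
    for issue, person in complaints:
        complainants_per_issue.setdefault(issue, set()).add(person)
    return [issue for issue, people in complainants_per_issue.items()
            if len(people) >= threshold]

print(issues_needing_review(complaints))  # ['debt notice looks wrong']
```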
00:28:49 Chris Dolman
Yeah, I don't disagree, and it's a useful rule of thumb. I've used similar things myself in the past when looking at things like complaints data.
00:28:57 Chris Dolman
So, yeah, there are always people who are going to complain, and then there are a lot more people who will have a similar issue but aren't saying anything, for whatever
00:29:05 Chris Dolman
reason. And so when you do hear from the few loud voices, you need to take them seriously, not just because of them, but because they might be representative of a whole bunch of other people who are staying quiet.
00:29:18 Chris Dolman
When we've built systems and been involved in that, we're always looking at complaints data, and we talk about that in the paper. We did a case study
00:29:26 Chris Dolman
with car insurance total losses and how we used an AI system to improve the customer service process there, and we were really watchful and mindful of complaints from that,
00:29:36 Chris Dolman
particularly when we first launched it, when we sort of thought, this is probably going to work well, but, you know, you
00:29:41 Chris Dolman
can't be 100% confident until something's in the wild.
00:29:44 Chris Dolman
And so we monitored that quite carefully and, thankfully, there was nothing material that came out of it. But one serious complaint would have caused a lot of review to happen.
00:29:55 Dr Genevieve Hayes
And even the minor complaints, they can be, I'm not going to say red flags, but amber lights for what you could improve in a later version of the system.
00:30:07 Chris Dolman
Well, that's right. And, you know, it might not be something that requires turning it off and total rectification, but maybe it's an improvement that can be made. So it's useful information at the least.
00:30:18 Dr Genevieve Hayes
Probably the biggest thing that's come out of the AI space in recent times has been ChatGPT, and I don't think I've recorded an episode lately where we haven't mentioned Chat
00:30:28 Dr Genevieve Hayes
GPT. Do you think the use of new cutting-edge AI algorithms amplifies these risks that you've just described?
00:30:37 Chris Dolman
It can do. I think the risks are usually the same, and so the sort of categories that I've broadly outlined are fairly similar no matter what you're doing. But you can get amplification in,
00:30:51 Chris Dolman
I think, a few different ways. But certainly the level of excitement can often push people to do things without careful thought.
00:30:58 Chris Dolman
And so whenever I see extreme excitement, I'm almost like the boring person at the party, right? So it's
00:31:04 Chris Dolman
like, there's extreme excitement there, but what are we really going to do, and have we thought about it carefully?
00:31:10 Chris Dolman
We need to be pretty careful about that stuff, so the majority of people are getting really, really excited about ChatGPT.
00:31:18 Chris Dolman
But some of it's been interesting, right? You've seen it's almost like a sport at the moment, trying to break it in
00:31:24 Chris Dolman
lots of new
00:31:24 Chris Dolman
ways. It's like content-policy whack-a-mole, right? So someone comes up
00:31:29 Chris Dolman
with a way
00:31:29 Chris Dolman
of breaking it, two days later it gets patched, and so you can't replicate that anymore.
00:31:34 Chris Dolman
Someone else finds a new way to get around that. What that tells you is it's in a fairly unstable state at the moment, right? And so if you're thinking of using something like Chat
00:31:45 Chris Dolman
GPT in a serious way. Like, you can use it for entertainment, and that's sort of,
00:31:49 Chris Dolman
there's no problem
00:31:50 Chris Dolman
there. If you're using it in a serious way to make serious decisions, you've got to be quite aware of some of the failures that have already occurred and the likelihood of future failures that we don't even know about yet, because no one's discovered them.
00:32:04 Chris Dolman
These things are quite vulnerable to sort of adversarial attack, aren't they, as we've seen?
00:32:08 Dr Genevieve Hayes
I'm just waiting for the first lawsuit against ChatGPT.
00:32:12 Chris Dolman
Oh, you and me both. And these things, I mean, it might not be against Chat
00:32:19 Chris Dolman
GPT itself. It might be against some application that uses ChatGPT in some part of the system. That might be how it comes about.
00:32:28 Chris Dolman
But it's going to be pretty interesting to see how it all unfolds. I mean, these things have already been criticised for, what's the word,
00:32:37 Chris Dolman
it's sort of IP infringement, right? So, you know, dredging information from the internet, you know, photos that are obviously attached to
00:32:48 Chris Dolman
particular creators, or
00:32:51 Chris Dolman
books that have been written by particular people and might be under IP restrictions, and then using that in a model that then generates creative content.
00:33:00 Chris Dolman
I know that IP lawyers have been having lots of debates about how to deal with that sort of problem.
00:33:05 Chris Dolman
I think we need to work out what we want first. Like, how do we even contemplate IP in these sorts of
00:33:11 Chris Dolman
situations, and what do we actually want? And then, does the current IP law sort of work in the way that we would like?
00:33:19 Chris Dolman
And if not, let's reform it. I think that's how we need to sort of think through this problem. Often people go to 'what do the rules say today', and that's understandable, but I think we probably need to take a step back and ask ourselves what we
00:33:29 Chris Dolman
really want. And I think that's hard, but we're probably going to have
00:33:32 Chris Dolman
To do that at some point.
00:33:33 Dr Genevieve Hayes
What I think is really interesting, and this is in the context of AI-generated images, is that the two major stock photo websites, Getty Images and Shutterstock, are taking very different approaches to AI-generated image algorithms like Stable
00:33:53 Dr Genevieve Hayes
Diffusion. With Getty Images, my understanding is that they've got a lawsuit against the company that built Stable Diffusion for breach of copy
00:34:04 Dr Genevieve Hayes
right. Whereas Shutterstock have come out with their own AI-generated image tool, based on the images that they have in their library, and they're paying the content creators for the use of those images.
00:34:19 Chris Dolman
Yeah, it's been super interesting just to see the different responses. I mean, I saw, I think it was one of those two, I forget which one it was, and someone had written a paper where they'd reconstituted one of those images from the training set, including the watermark and everything.
00:34:34 Chris Dolman
And so it was obviously intended to show that that particular image had been part of the training
00:34:34 Dr Genevieve Hayes
Oh wow.
00:34:41 Chris Dolman
set. It was quite a stunning piece of work. Obviously it was a little bit grainy, but you could tell that it was Shutterstock or the other one, I forget which one it was. But that was only the other day, so this space is rapidly evolving.
00:34:54 Chris Dolman
I have no insightful prediction for where it will end up really, but it's very interesting to watch.
00:35:00 Dr Genevieve Hayes
And ChatGPT is built on GPT-3, and I believe that's also what Microsoft's GitHub Copilot is built on, and there's the lawsuit that's out against that.
00:35:14 Chris Dolman
Yeah. And there are other
00:35:16 Chris Dolman
issues with that as well, right? So some friends of mine put an article in the Australian Financial Review earlier this week, just on some of the exploitative labour practices that were used in the construction of ChatGPT. So it's like low-paid workers in Africa somewhere, I forget the country, looking at fairly horrible
00:35:34 Chris Dolman
content and trying to work out how to moderate it, sort of to generate the moderation rules and what have you, with very little help for those people, who are often traumatised when they've seen horrible things as part of their day job for days on end.
00:35:48 Chris Dolman
So it's a job, I mean, the job perhaps needs to be done somewhere, but we need to look after the
00:35:55 Chris Dolman
people who are doing it, because it's a traumatising thing, and outsourcing it to low-paid people overseas is perhaps not a good idea.
00:36:05 Dr Genevieve Hayes
It's a difficult one because, and I don't think it should be outsourced to low-paid people overseas,
00:36:11 Dr Genevieve Hayes
the question I have is: who should be doing that work? Because I can't think of anyone on the planet who I would wish that job on.
00:36:22 Chris Dolman
No, and I think it's a very tricky one. At the very least, though, you need to give people proper support, don't you?
00:36:29 Chris Dolman
If someone's chosen to do that job, even in full knowledge that it might traumatise them and they go, yeah, OK, that's that's something I'm comfortable with.
00:36:36 Chris Dolman
I'll go ahead and do it anyway, you still need to give them support if they end up getting traumatised by it, like psychological help or whatever.
00:36:43 Chris Dolman
So, I mean, you get this today in lots of content moderation areas. So folks that have looked at YouTube videos for years have got similar sorts of issues. It's a tough problem, because it's perhaps a job that we need
00:36:56 Chris Dolman
some people somewhere to be
00:36:57 Chris Dolman
doing, but we need to look after those people, don't we?
00:37:00 Dr Genevieve Hayes
The problem is that even if you say we're not just going to put this in an area where people are low-income people, we'll let people choose whether they do it,
00:37:11 Dr Genevieve Hayes
The people who are going to volunteer for that sort of work are going to be the people who have no choice but to do that sort of work.
00:37:18 Chris Dolman
That's right. Choice is never full choice, is it? So it's pretty tricky. I guess as a society we need to be comfortable about whatever approach is taken,
00:37:30 Chris Dolman
and leaving it up to the free market might not be the best way.
00:37:35 Dr Genevieve Hayes
I wish I had a solution to that one, but I just can't think of anything offhand.
00:37:40 Chris Dolman
Yeah, no, I certainly don't. But I think, at the very least, we should as a society demand that people get looked after properly, and people smarter than me should be trying to think of a proper solution to that problem. I certainly don't have any great ideas.
00:37:54 Dr Genevieve Hayes
So what strategies could data scientists, or the organisations that employ them, use to identify and avoid potential AI scandals before they occur?
00:38:05 Chris Dolman
We talk about this a lot in the paper we mentioned earlier. So again, we like categories of three, so we have categories of three again here:
00:38:12 Chris Dolman
people and culture being one category, routines and processes being another category, and technical practices and tools being the final one.
00:38:20 Chris Dolman
And I think the most important one of the three is people and culture: making sure that the people designing
00:38:26 Chris Dolman
and authorising systems have the right incentives in place
00:38:31 Chris Dolman
and aren't incentivised to put something into market that might harm people, the right training in place so they understand the sorts of failures that can occur,
00:38:38 Chris Dolman
and they're aware of that and responsible for preventing those sorts of failure modes. It's quite important to align those sorts of interests. I'm a big fan of broad sort of co-design, so
00:38:51 Chris Dolman
not dreaming up a system within your own four walls and then assuming it's going to be great and going and launching it on the population, but actually involving the affected people in the design of that system.
00:39:01 Chris Dolman
And that can be done in a whole bunch of ways, but at the very least a sort of slow, staggered launch process, where it's almost like a medical trial, right: slowly launch things out and see if they're working or going horribly wrong,
00:39:13 Chris Dolman
and if it seems to be working OK, broaden the audience slightly, and then test it again and slowly increase it. That's, at the very least,
00:39:22 Chris Dolman
something that should be done, particularly when you're making an important decision. But you can also bring people into the design process as well.
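A rough sketch, not from the paper, of the "medical trial" style staggered launch Chris describes: expose the new system to a small share of customers, watch a health metric such as the complaint rate, and only widen the audience while it stays under a threshold. The stages, thresholds, and the observed_complaint_rate stub below are illustrative assumptions, not a real deployment framework.

```python
# A toy staged rollout: serve the new system to a growing share of customers,
# checking a health metric at each stage before widening the audience.

ROLLOUT_STAGES = [0.01, 0.05, 0.20, 0.50, 1.00]  # share of customers exposed
COMPLAINT_THRESHOLD = 0.002                       # max acceptable complaint rate

def observed_complaint_rate(exposure: float) -> float:
    """Placeholder: in practice this would query complaints data for the
    customers actually served by the new system at this exposure level."""
    simulated = {0.01: 0.0000, 0.05: 0.0008, 0.20: 0.0011, 0.50: 0.0015, 1.00: 0.0014}
    return simulated[exposure]

def staged_rollout() -> bool:
    for exposure in ROLLOUT_STAGES:
        rate = observed_complaint_rate(exposure)
        print(f"exposure {exposure:.0%}: complaint rate {rate:.3%}")
        if rate > COMPLAINT_THRESHOLD:
            print("halting rollout and reverting to the old process for review")
            return False
    print("rollout complete")
    return True

staged_rollout()
```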
00:39:28 Chris Dolman
And there are lots of published mechanisms for bringing affected people into the design. So citizens' juries and things like that have been talked about overseas, which is one approach.
00:39:36 Chris Dolman
And just having diversity in the people building the tools, I think, is a good mindset to have. But it's useful to recognise that most data science teams are like half a dozen people,
00:39:51 Chris Dolman
so you're just not going to have full diversity in those half dozen people, because that's just not possible. That's not enough
00:39:56 Chris Dolman
people to have
00:39:57 Chris Dolman
that diversity with. So you need to make sure that those people are properly trained on issues of diversity, so that their minds are suitably broadened and they can think about some of the errors or issues that might occur to people who don't look like them. And so there are role-taking type approaches that you can apply to help build people's skills in
00:40:19 Chris Dolman
that area, which we encourage. So, as I say, people and culture is probably the most important category. Having routines is really useful, though: just, you know, checklists, processes to make sure that the things that you've worked out with your sort of slow brain need to happen actually happen when push comes to shove and you've got to make decisions quickly. If you've got a
00:40:39 Chris Dolman
procurement team that are buying software tools with embedded AI products in
00:40:44 Chris Dolman
them, what are the questions they need to ask when they're buying
00:40:46 Chris Dolman
those tools? Well,
00:40:48 Chris Dolman
giving a list of things to your procurement team, to make sure that that list always gets asked every time, is the approach to take, rather than rolling someone in at a certain point in time to ask whatever's top of mind at that point.
00:41:00 Chris Dolman
So, making sure that checklist exists. If you've got a development process, again, what are the things that you ought to be asking at each stage
00:41:07 Chris Dolman
of the development
00:41:07 Chris Dolman
process? And so we have these things internally now, which I've helped to build,
00:41:12 Chris Dolman
so people have a set of kind of awkward questions to answer, which is helpful, because otherwise you've got to sort of think of all that every time.
00:41:20 Chris Dolman
So it helps you not miss things. The other thing is to try and avoid really, really long, horrible, drawn-out complaints processes. So if you've got a system where mistakes are going to happen,
00:41:33 Chris Dolman
because it's just inevitable things are going to go wrong at some point, and that's true of anything,
00:41:38 Chris Dolman
right? Nothing's infallible. You need to have a way of dealing with the really silly errors, the ones where everyone will accept, when some human looks at it, that it was a silly error, just dealing with that
00:41:48 Chris Dolman
quickly and moving on, rather than dragging people through a horrible long-winded complaints process. So we came up with a term for this, we called it
00:41:57 Chris Dolman
the step before contestability, which I was
00:41:59 Chris Dolman
quite, uh, quite proud of.
00:42:01 Chris Dolman
So this sits before that, because contestability often gets talked about as a concept in these topics, right, and you don't want to get to contestability, which is like an adversarial sort of situation.
00:42:11 Chris Dolman
Quite often, if you're going to an arbitrator or whatever, you want to do something before that to deal with all the silly
00:42:17 Chris Dolman
errors that are going to happen.
00:42:19 Chris Dolman
So there's a wonderful example of this in the paper
00:42:22 Chris Dolman
that we use, which is my current favourite example of AI failure, because it's just stunning. This is a parking ticket in the UK based on image recognition. I don't know, have you seen
00:42:33 Chris Dolman
this one? You might not have.
00:42:35 Dr Genevieve Hayes
No, I haven't.
00:42:36 Chris Dolman
This was a parking ticket, or sorry, a driving offence ticket, issued to someone for driving in a bus lane. And so they received this letter in the mail saying you were driving in a bus lane.
00:42:49 Chris Dolman
Here's the evidence of you doing that. And so they opened the letter, had a look at the evidence, and it's photos of a street with the bus lane.
00:42:57 Chris Dolman
There are no cars in the picture at all. There is a picture of a woman in a T-shirt with a word in the middle of it that says 'Knitter'.
00:43:05 Chris Dolman
It looks a little bit like a number plate, but not that much, but enough to fool this automated system into thinking that it was a number plate.
00:43:12
Right.
00:43:12 Chris Dolman
And 'Knitter' wasn't the person's number plate, but it was close enough to those letters that it tricked the system.
00:43:18 Chris Dolman
And so it's obviously gone, that's this person's number plate, they were driving in this bus lane, not a woman walking in the bus lane.
00:43:26 Chris Dolman
And so this fine got issued. And that sort of failure is going to happen with purely automated systems that
00:43:32 Chris Dolman
process images. Things are going to go wrong that are just totally daft. But the response was really good, right? So the person who got the fine frowned a bit, I presume, laughed a bit, I presume, called the number that they could call
00:43:45 Chris Dolman
and said, hey,
00:43:46 Chris Dolman
have a look at this, it doesn't look great, clearly there's been some mistake. And the person on the phone just obviously went,
00:43:53 Chris Dolman
oh yeah, that's a bit silly, had a bit of a laugh about it, and moved on. You need to have those sorts of quick and easy processes to deal with the just-silly failures that are never going to stand up in any sort of proper adversarial
00:44:04 Chris Dolman
process. So you don't want to just have contestability, you want to make it a bit
00:44:07 Chris Dolman
pre-emptive and just cut off all the regular silly errors, if you think they're likely.
00:44:12 Chris Dolman
So I quite like that example at the moment, it's a good one, I think, just to illustrate the concept. The last category we talk about here is sort of technical practices and tools.
00:44:22 Chris Dolman
So this is stuff for the people building systems, or the people managing them, to make sure they know what's going on. So things like dashboards, control panels, proper monitoring, stuff
00:44:32 Chris Dolman
like that; proper technical documentation so that, you know, if everyone gets hit by a bus, there's still documentation there so you know how your system works.
00:44:40 Chris Dolman
There's classic risk management stuff that risk managers sort of know, that often people building AI systems don't know, because they're not risk managers by training. So we need to have the sort of classical risk management
00:44:53 Chris Dolman
concepts translated into data science practice. Lots of things
00:44:57 Chris Dolman
to be done
00:44:58 Chris Dolman
there. So we make, as I say, a whole bunch of recommendations in the paper.
00:45:01 Chris Dolman
We again use case studies and examples to try and link that back to
00:45:05 Chris Dolman
the failure modes that we talked about
00:45:06 Chris Dolman
earlier. And so, yeah, there are lots of things that people can be doing, like, right now to avoid issues occurring, because there's also regulation coming in this space, right?
00:45:16 Chris Dolman
Maybe we'll talk about that next, but we don't have to wait for regulation to come along to tell us what to do.
00:45:21 Chris Dolman
There's a whole bunch of things we could do right now to de-risk our systems, and we should probably be doing those things before we're told to.
00:45:29 Dr Genevieve Hayes
What do you reckon the regulation will look like when it comes in?
00:45:32 Chris Dolman
It's been interesting to watch what's happened in Europe, right? So I've said in other places, and I'll say again today, I really don't like the EU approach to regulation in this space.
00:45:43 Chris Dolman
I think it's got a number of flaws. I think the first thing we have to do is just apply the laws we already have and the regulations we already have.
00:45:53 Chris Dolman
And in many situations there's enough there already to be getting on with, right? Most sort of high-stakes decisions in society
00:46:02 Chris Dolman
already have regulation. It already exists. We just need to use it. So if you're doing AI in financial services, that's going to be captured by the very broad definitions in financial services law and regulation that already exist. It's not the case that there's nothing there, that it's just totally unregulated and that it needs new rules. You need to
00:46:22 Chris Dolman
Take the rules that already exist and apply them. That's hard though, because we need to translate those rules into precise terms to make them implementable and to help people understand whether they're complying.
00:46:35 Chris Dolman
With them or not?
00:46:37 Chris Dolman
And often we've not done that, because a lot of these rules were written, like, pre-computer, and almost certainly before lots of automation.
00:46:44 Chris Dolman
And so we perhaps haven't worked out exactly how these things ought to apply in an automated system. And so myself, with the Actuaries Institute and the Human Rights Commission, tried to take a stab at this for human rights and discrimination
00:46:58 Chris Dolman
rules for insurance pricing and underwriting recently. So we sort of said,
00:47:02 Chris Dolman
that's a topic where we know there's uncertainty, let's try and write some guidance to help practitioners understand exactly what the rules mean in the context,
00:47:09 Chris Dolman
in the very narrow context of insurance pricing and underwriting, which is just one particular example of a decision, quite an important decision, but only one example. And so that sort of translation that we've tried to do there,
00:47:22 Chris Dolman
here's, you know, the rules written in words, and here's how that applies in your mathematical system. That needs to
00:47:28 Chris Dolman
occur more broadly,
00:47:29 Chris Dolman
I think. So that's a big job to be done, but it must be done, because we already have all these rules.
00:47:35 Chris Dolman
They already exist. And so what the EU is doing is writing a separate set of rules for AI systems, defined somehow, and the definition keeps changing, so by the time we publish this podcast it's probably changed again. But whatever the definition is,
00:47:49 Chris Dolman
There's going to be things that.
00:47:50 Chris Dolman
Aren't captured by it.
00:47:52 Chris Dolman
So that's not good. You want to make sure that you're capturing everything as best you can. It also competes with that sector regulation that I spoke about.
00:48:00 Chris Dolman
So you've got existing rules in place and then this new AI rule set. So which one applies? Probably both. And so how does that work? Maybe you've got competition in the rules. That's not ideal. Most
00:48:12 Chris Dolman
problematically, it creates a bit of a two-tier approach with human systems. So if I've got, like, Gary the bank manager issuing loans, and it's just Gary's decision what to do, that's the really old school, right, 1800s-type stuff.
00:48:27 Chris Dolman
You know, that's how human decisions have been made for a long, long time. There's rules that apply to Gary's decisions, right?
00:48:33 Chris Dolman
He's got to care about fair lending laws, however they look in the country he's in. He's got to care about fairness,
00:48:39 Chris Dolman
if that's embedded in the legal setting of the country he's
00:48:42 Chris Dolman
in. You don't want to have Gary's decisions being governed by a different set of rules
00:48:47 Chris Dolman
to an automated Gary's, and maybe the decision, if it's the same decision, being illegal in one sense and not in another. That would just be a disaster.
00:48:56 Chris Dolman
But that's unfortunately what you're going to get if you have these competing rule systems, based on whether AI is in scope or not for particular decisions. So it's a really, I think, not ideal way to regulate.
00:49:10 Chris Dolman
You want to be technology neutral and say here's an important decision. Here's some regulation that applies. However, that decision is made.
00:49:17 Chris Dolman
Make it with Gary the bank manager, make it with some fancy AI system, don't care. You've got to apply
00:49:21 Chris Dolman
These rules, and this is how they work in those different contexts. That's what I think we need to do.
00:49:27 Chris Dolman
Obviously the EU is not going that way, but maybe other countries will. I think the UK is being a lot more circumspect on what they're going to do, and Australia is sort of sitting and watching, I think, and trying to work things out. So we'll have to see what happens here, but it's
00:49:41 Chris Dolman
going to be an interesting space for the next few years to see how
00:49:44 Dr Genevieve Hayes
It all evolves. Is there anything on your radar in the AI, data and analytics space that you think is going to become important in the next three to five years?
00:49:53 Chris Dolman
The most important thing I think is going to be almost what I said 5 minutes ago. It's the translation of existing rules into practise.
00:50:00 Chris Dolman
You know, if you've got rules that say you need to treat customers fairly, well, what does that actually mean in practise when you're building an AI system? There's almost an entire academic discipline now on fairness definitions in AI systems.
00:50:14 Chris Dolman
So we need to make some choices about how to actually implement these rules in practise, and I reckon that's probably going to come to a head over the next few years as we realise we've got the rules we need; we just need to work out what they
00:50:26 Chris Dolman
say. And maybe we'll realise that we didn't actually know quite what we wanted when we wrote those rules. It was just sort of nice, broad-sounding terms, and they'll, you know, avoid horrible things on the edges.
00:50:38 Chris Dolman
But in the middle, we've got to be a bit more precise.
00:50:41 Chris Dolman
What does it actually mean? We've got to work that out. So I think that's going to be a big topic over the next few years.
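To make the fairness-definitions point concrete, here is a minimal, purely illustrative sketch of one such definition, a demographic parity gap between two groups, computed on invented decisions. Real legal or regulatory tests of fairness may require a quite different measure.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between two groups."""
    decisions = np.asarray(decisions, dtype=float)
    group = np.asarray(group)
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    a, b = rates.values()
    return abs(a - b), rates

# Hypothetical automated loan decisions (1 = approve) for two groups, A and B.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, group)
print(rates)               # approval rate per group
print(f"gap = {gap:.2f}")  # 0 would mean identical approval rates
```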
00:50:48 Dr Genevieve Hayes
And what final advice would you give to data scientists looking to create business value from data?
00:50:54 Chris Dolman
Shiny new toys are very exciting but very distracting. And so, both in yourself and in your business, try and work out how to focus on value creation, not on shiny new toys. And if your business executives are seeing, you know,
00:51:14 Chris Dolman
ChatGPT or DALL-E or whatever else and getting really excited about it, that's fine.
00:51:18 Chris Dolman
But work out how to take that excitement and generate a productive discussion out of it. So don't just focus on shininess, but try and focus on value. And value
00:51:29 Chris Dolman
might be just doing really, really simple things well. I'm a big advocate for what
00:51:34 Chris Dolman
I call boring
00:51:35 Chris Dolman
AI. So if it's a model that was invented
00:51:39 Chris Dolman
50 years ago.
00:51:40 Chris Dolman
but it's valuable, it doesn't matter.
00:51:43 Chris Dolman
Go ahead, build it, implement it, move on to the next valuable thing.
00:51:46 Dr Genevieve Hayes
This sounds well aligned with Andrew Ng's concept of data-centric AI versus model-centric AI. Don't just focus on getting the flashiest model; focus on getting your data good and using a simpler model.
00:52:02 Chris Dolman
That's right. I mean, we did this in our, I didn't talk about it much in this session, but never mind,
00:52:07 Chris Dolman
I've spoken about it before, the sort of total loss thing that we built a few years ago. We tried some natural language processing in there, and we tried
00:52:16 Chris Dolman
sort of the modern techniques. It wasn't ChatGPT, it wasn't GPT, that wasn't around at that point, but we used some other things that had been invented more recently.
00:52:24 Chris Dolman
We couldn't get it to work in a reasonable amount of time, and it didn't really give any lift to what we were trying to do.
00:52:29 Chris Dolman
And so we just fell back on, you know, counting words and doing averages and that sort of thing. So you can read about that in that report.
00:52:37 Chris Dolman
It's really, really simple stuff, but it still works. And so if we had an exec that said, you know, you've got to do the shiniest new thing, that's got to be part of the system, well, we might still be here today trying to get it to work. Whereas we've had two years of value instead, because we were prepared to do something basic and get it into production.
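A rough, hypothetical sketch of what "counting words and doing averages" can look like over free-text claim descriptions follows; the keywords, data and outcomes are invented, and the actual system described in the report will differ.

```python
import re

# Hand-picked keywords that, hypothetically, suggest a severe claim.
SEVERE_WORDS = {"rollover", "rolled", "fire", "flood", "airbag", "engine"}

def mentions_severe_word(text):
    """True if the free-text description contains any of the keywords."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & SEVERE_WORDS)

# Invented claim descriptions paired with whether the car was a total loss (1/0).
claims = [
    ("vehicle rolled over, airbag deployed", 1),
    ("minor scratch to rear bumper", 0),
    ("engine bay fire after collision", 1),
    ("cracked windscreen from a stone", 0),
]

flagged = [loss for text, loss in claims if mentions_severe_word(text)]
unflagged = [loss for text, loss in claims if not mentions_severe_word(text)]

# "Doing averages": compare total-loss rates with and without the keyword flag.
print("total-loss rate when flagged:", sum(flagged) / len(flagged))
print("total-loss rate when not flagged:", sum(unflagged) / len(unflagged))
```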
00:52:54
OK.
00:52:55 Dr Genevieve Hayes
Yeah, get a minimum viable product out first and then improve it as.
00:52:58 Dr Genevieve Hayes
You go along.
00:53:00 Chris Dolman
That's right, a minimum viable product can be, you know, a simple linear model with nothing complex. Just get it out there, you know.
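A minimal sketch of that kind of simple-linear-model minimum viable product, on entirely synthetic data with invented feature names (not the actual system discussed), might look like this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: two invented features and a made-up outcome.
rng = np.random.default_rng(42)
n = 2_000
vehicle_age = rng.uniform(0, 20, n)    # years
damage_score = rng.uniform(0, 10, n)   # hypothetical severity score
logit = 0.25 * vehicle_age + 0.6 * damage_score - 6.0
total_loss = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([vehicle_age, damage_score])
X_train, X_test, y_train, y_test = train_test_split(X, total_loss, random_state=0)

# The "boring" MVP: a plain logistic regression, nothing fancier.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 3))
```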
00:53:07 Dr Genevieve Hayes
So for listeners who want to learn more about you or get in contact, what can they do?
00:53:12 Chris Dolman
Well, I'm on LinkedIn, so you can get in contact with me that way. That's probably the best way.
00:53:17 Chris Dolman
There's also links on my profile to a whole bunch of papers and things I've written, so if you're interested in those, you can find them there and you can get in.
00:53:26 Chris Dolman
Touch with me that way, it's probably easiest.
00:53:29 Dr Genevieve Hayes
Thank you for joining me today.
00:53:31 Chris Dolman
No, thank you very much. Been great.
00:53:33 Dr Genevieve Hayes
And for those in the audience, thank you for listening. I'm doctor Genevieve Hayes, and this has been value driven data science brought to you by Genevieve Hayes Consulting.
