Episode 43: Shaping the Future of AI
[00:00:00] Dr Genevieve Hayes: Hello and welcome to Value Driven Data Science, brought to you by Genevieve Hayes Consulting. I'm Dr Genevieve Hayes, and today I'm joined by Dr Eric Daimler to discuss his extraordinary work in shaping the future of AI. Eric is the chair, CEO and co-founder of Conexus AI, and has previously co-founded five other companies in the technology space.
[00:00:25] He served under the Obama administration as a Presidential Innovation Fellow for AI and robotics in the Executive Office of the President, as the sole authority driving the agenda for US leadership in research, commercialization, and public adoption of AI and robotics. He is also the author of the upcoming book,
[00:00:47] The Future is Formal: The Roadmap for Using Technology to Solve Society's Biggest Problems. Eric, welcome to the show.
[00:00:56] Dr Eric Daimler: Thank you. It's good to be here.
[00:00:58] Dr Genevieve Hayes: Two years ago, no one could imagine the impact generative AI would have on our world. And most of us can't even begin to imagine the impact the next generation of AI will have on our world two years from now.
[00:01:12] The only thing that is certain is uncertainty, but that uncertainty brings with it great opportunities as well as choices. We can choose to sit back and let the future of AI play out in front of us, or engage with this new technology and shape the future of AI and the world as we know it. Now, Eric, you've clearly chosen to take the latter approach, and have already played a role in shaping the future of AI in numerous different ways, which I hope to explore in this episode.
[00:01:46] However, I'm sure the first thing everyone asks you about when you're at conferences or parties, and that our listeners are most keen to hear about, is your time serving as an advisor to the White House under President Obama. So, could you tell us a bit about what that involved?
[00:02:05] Dr Eric Daimler: Sure. It's funny. It's generous for you to think anybody's going to ask me something at parties once they find out what I'm doing. They may just pass and move on. But I'm certainly grateful to spend time with you. Working in the US federal government was a privilege that I hope to be able to repeat someday.
[00:02:23] I certainly went in as cynical as many people are about the nature of government bureaucracy and all that, but I was really impressed with the people I had the privilege to work alongside, both in the administration and in the larger civic space, the larger executive branch. It was really quite a remarkable experience. My own job was speaking, humbly, on behalf of the president, I guess you could say, to other members of the executive branch about the future role of AI across the US government: setting research goals, where we want to be, taking the vision to be serving the American people and allies of America more broadly.
[00:03:01] This meant coordinating the goals from the Defense Department, the Energy Department, Commerce, Labor, and so forth. We were, at that time, coordinating these different visions for where we wanted to put money into robotics or AI.
[00:03:23] The vision at the time identified the aspects of collaborative robotics, robots working alongside workers, and then looked out into the future at the ubiquity of these collaborative robots. The requirement that emerges in that particular context is that we have to have rules established in such a way that they can be both defined for the machine and understood by the people working alongside those machines, so that everybody's behavior is predictable, just for safety, let alone efficacy.
[00:04:02] We would then describe the research parameters upon which we would have different thrusts of initiatives in these different departments. And you can kind of imagine how this might play out, and how this would be important, at something of the scale of the US government, where the context under which somebody might think of a robot is very different for Health and Human Services versus, obviously, a military application, but also an energy application.
[00:04:29] Those are different applications of robots, and trying to create a unified theme about what needs to get researched, and then created out in the world, is where we worked. It was a privilege, actually, to see these AI implementations at the largest possible scales. When I did academic research at Stanford and at Carnegie Mellon and the University of Washington, we'd work in whatever scale we'd work in, but you'd multiply that by a hundred or a thousand if you're looking across the US government.
[00:05:00] It's really quite breathtaking to see hundreds of systems that needed to be integrated with integrity. One place I was fascinated to see was the evolution of aerospace systems, starting with NASA, where these rockets would become increasingly complex, which extended their development schedules and the concomitant costs.
[00:05:24] This all required a different approach, so that they wouldn't, by the end of a project's life cycle, be developing from an iPhone 6 or something. As they got to the end of a rocket, they wanted to have the latest technology at all stages. The impediment precluding that flexibility of adopting new technology within one system came down to the integration of databases.
[00:05:50] Oddly, this catalyzed what I'm doing today. The integration of these databases was then a manual process, introducing errors, schedule delays, and costs. That needed to get addressed, so that databases, as pedantic as that sounds, could be integrated at scale, at speed, and without an increase in cost, so that we could count on the end result of these systems with our lives.
[00:06:18] There's a whole panoply of other benefits that comes out of the commercialized discovery. But that's what I worked on. That's what I discovered. And that's what I was able to observe during the time I was in the federal government.
[00:06:32] Dr Genevieve Hayes: With regard to the integration of databases, was the issue the physical integration of them, or was it getting permission to share data between different government organizations? Because I've worked in government before, and one of the big issues we had was that you had to actually get memorandums of understanding signed, and things like that, in order to share data between one part of government and another part.
[00:07:04] Dr Eric Daimler: Right. We'll call that a bureaucracy problem, so we're putting the bureaucracy problem aside. It's not a hardware issue, it's a software issue, so it's a technical issue, and the technical issue is complicated by itself. You certainly have entity resolution and disambiguation issues.
[00:07:21] The cleaning of the data, in the easy vernacular. But it's a technical issue to compose the models. The data is really easy; it's the models that become more complex. And that is the technical issue. The outcome of that solution does also address the bureaucracy part. So, say you have two databases, and, since it's difficult to imagine this in just a conversation without visuals, take two Excel sheets. All of us have worked in Excel, and all of us have created tables in Excel that begin to go beyond our ability to hold them in our head: a hundred rows, a thousand rows, whatever number it is where you think, oh man, I don't even remember what was in these rows or in these tabs within these documents.
[00:08:07] If you then try to compose them, you know, merge them, exchange data between them, whatever process you want to run between the two models, that really can break your head, trying to solve the problems created by the merging of these documents. I have to create a testing regime that I can count on and that I can defend to others.
[00:08:29] This is where the complexity is introduced. And that's just one person with two models. But if you have something as complex as a nuclear reactor, or an aircraft, or a health system, this then begins to involve hundreds, if not thousands, of people that have to reconcile these models and test them so that they are dependable, down to some level of reliability.
[00:08:55] So much better is to apply an AI to that whole process, so that you can compose those models in a way that is dependable, and you get the benefits of increased speed and decreased costs, without introducing any new human errors to the process.
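To make the model-composition problem concrete, here is a minimal sketch in Python of the kind of check being described: merging two independently maintained table schemas and surfacing any contradiction before integration, rather than after. All table names and column types here are invented for illustration; real model composition involves far more than column types.

```python
from typing import Dict, List, Tuple

# Two departments model the same entity slightly differently.
# All names and types here are hypothetical.
finance_schema: Dict[str, str] = {"part_id": "int", "cost": "float", "currency": "str"}
engineering_schema: Dict[str, str] = {"part_id": "int", "cost": "str", "mass_kg": "float"}

def merge_schemas(a: Dict[str, str], b: Dict[str, str]) -> Dict[str, str]:
    """Merge two schemas, failing loudly on any contradictory column type."""
    merged = dict(a)
    conflicts: List[Tuple[str, str, str]] = []
    for column, col_type in b.items():
        if column in merged and merged[column] != col_type:
            conflicts.append((column, merged[column], col_type))
        else:
            merged[column] = col_type
    if conflicts:
        # Surface the contradiction at "compile time", before any data moves,
        # rather than discovering corrupted records at "run time".
        raise ValueError(f"Contradictory column definitions: {conflicts}")
    return merged

merge_schemas(finance_schema, engineering_schema)
# ValueError: Contradictory column definitions: [('cost', 'float', 'str')]
```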
[00:09:11] Dr Genevieve Hayes: As I said before, I've worked in government in the past, and one of the things that I think most government workers would find really frustrating is that you come up with a brilliant idea and you get the green light to go ahead with it, and then suddenly there's a shift in the machinery of government and the new people don't want to go ahead with it.
[00:09:34] Did you find that the ideas you proposed when you were working as a presidential advisor ended up getting implemented, or did the machinery of government get in the way?
[00:09:46] Dr Eric Daimler: I am quite grateful that not only did my work continue, but it expanded. There were other scientific advisors within the government at the time, but I was the only one that had the authority on artificial intelligence.
[00:09:59] Right place, right time. That role expanded after I left, into what I think was formally called an initiative. Not to then say that there are three people doing my job, but there are now three people doing my job. So, for whatever weaknesses people want to point out in the subsequent administration in this domain,
[00:10:15] it went quite well, expanding into a larger role that continued into the next administration, and then even the current administration, where obviously a lot of people now are getting appropriately involved in the expressions of AI. So it's expanded. What I can say about the point you are implicitly making is that I worked to implement possible technical solutions.
[00:10:44] It naturally would be part of my job to conceive of and advocate for potential policy solutions. Policy solutions such as putting in circuit breakers, or requiring the identification of data provenance and data lineage. But what I see today that's unfortunate, and that might be an answer to your issue of
[00:11:07] government bureaucracy taking over a role, is that I think too many people are talking about policy with less grounding in its implementation. We actually have that right now as an identified danger among the defense vulnerabilities for the US and its allies. When the threat of cybersecurity began to rise, or began to be recognized as rising, perhaps about a decade ago, we started training people not in cybersecurity, but in cybersecurity policy, and we have too few people implementing
[00:11:49] cybersecurity technology. In other words, people can talk about it, but they can't actually do much with it. We need a balance between the people who can actually implement cybersecurity technologies and those who talk about cybersecurity policy. Similarly, I think we need a better
[00:12:06] balance right now, where we are over-rotating on policy while under-appreciating the people in government who appropriately understand the technology. The problem with a lack of understanding in these areas is that people will misunderstand what's hard and what's easy. They will think that what is super easy is actually hard, and they'll think what's hard is actually really easy.
[00:12:30] And this presents an unnecessary friction, an unfortunate friction, in our being able to present the right solutions for the American people and their allies.
[00:12:41] Dr Genevieve Hayes: That's very interesting, what you're saying about how, with cybersecurity, people were being trained in policy rather than the technical aspect. I've seen that happen time and time again in so many technical fields. There was something in our newspapers in Australia a number of years back about how not enough people were taking subjects like physics in year 11 and 12.
[00:13:02] So in order to get more people to take physics, they decided to get rid of a lot of the technical calculations and introduce essays on things like whether nuclear power was a good idea or not. And it's like, that doesn't actually solve the problem, you know.
[00:13:19] Dr Eric Daimler: Yeah, I mean, I suppose it's good to have people understand the general concepts of indoor plumbing or nuclear power, but you ultimately need people to implement the infrastructure of indoor plumbing, or a nuclear power plant, or cybersecurity technology. You can't have a whole bunch of people just talking about it.
[00:13:38] Although there's a role for talking about policy, to be sure.
[00:13:41] Dr Genevieve Hayes: I've been in discussion groups about AI in the past where it came to light, in the middle of the discussion group, that I was the only person who could actually program a computer, or had actually touched any of these things technically. And I'm thinking to myself, how did everyone else actually get into their jobs?
[00:14:01] Dr Eric Daimler: I had this experience. I was the only one in many rooms with a technical degree. Yeah. A lot of lawyers in Washington, DC.
[00:14:08] Very few computer science undergrad people, let alone computer science PhDs. Not that that's required, but some amount of technical understanding I found to be super helpful.
[00:14:18] Dr Genevieve Hayes: Oh yeah. I've never regretted having a technical understanding in my job, because, well, it just makes life easier, and it makes it easier to give instructions to people who are working under you, because they know you actually know what they're talking about.
[00:14:32] Dr Eric Daimler: That's true in a lot of roles. Yeah.
[00:14:33] Dr Genevieve Hayes: Oh yeah. How do you actually land a job like advising the White House?
[00:14:38] Because I'm guessing that's not something that would be advertised on an internet job board.
[00:14:43] Dr Eric Daimler: For this particular opportunity, as I recall it, it was just a fortunate call, and my wife said, this is a call I have to take. Even though it would be somewhat burdensome, and the lowest-paying job I ever had, it was worth it, moving to DC for a time.
[00:14:56] It was, like I said, a series of fortunate events: right place, right time. I went to school in the right places, and the places I went had a history of people working in government. So I was known to other people serving in these technical capacities within the US government.
[00:15:13] And I was interested, and somewhat facile, in the conversations that I knew government was working to address. I was not new to the conversations around AI; I'd been doing this for 20-plus years, and my PhD is in the area. So it wasn't an accident that my name would suddenly come up when looking for a candidate for such a role.
[00:15:29] And if you wanted somebody who had touched AI from the perspective of an academic researcher, and an entrepreneur commercializing products, and an investor looking to take nascent ideas and make them reality, there may have been people who had done each of those things at a higher level,
[00:15:50] but at that time, and perhaps not even now, I don't think there was anybody who had done all three of those at a reasonable level in combination. I think that was a rare, if not unique, combination of skills that probably made my candidacy more attractive.
[00:16:06] Dr Genevieve Hayes: So when you were talking before, you mentioned that your experiences while working with the US government were what inspired your current company, Conexus AI. Can you tell us a bit about what Conexus does?
[00:16:22] Dr Eric Daimler: Sure. You know, one of the pieces of research that got funded when I was in the US government was based on a discovery by a faculty member at MIT in the math department, Professor David Spivak, who had written a few books on category theory. He had an insight that the nature of category theory, in creating a meta level of understanding that could connect
[00:16:48] further relationships and knowledge across n dimensions, could be applied to databases. This is what got applied in NASA and the Department of Defense and the Department of Commerce for problems in logistics. I became aware of that technology when it got funded, for a couple of reasons. One is I happened to be working with folks in the White House at the time when it came across and got funded.
[00:17:14] Second of all, when I saw the technology, I understood it. And third of all, I guess the intermediary between those is that people who trusted me told me about the technology, so when they talked to me, I could understand what they were saying. So, a combination of just happy accidents.
[00:17:29] When I got out of government, I thought that was an interesting idea, seeing the very largest implementations be transformed by this technology. So I put some money into developing this technology into a potential commercial enterprise. And then I jumped in as a co-founder and CEO, with two partners, to turn this into a venture-fundable business: put in some additional money of my own, raised some money from friends, and turned it into a traditional venture-funded business, scaling up as one does with these sorts of things and making them viable commercial enterprises at scale.
[00:18:09] Dr Genevieve Hayes: So I've never come across category theory before, but it sounds interesting. Can you tell me a bit about that?
[00:18:15] Dr Eric Daimler: Yeah, so my academic research was in this domain of mathematics called graph theory. Graph theory many people may be familiar with, especially the visualizations, where they look like spider webs. Those are representations of relationships, connections between entities. So, you and I are connected because we are talking today, right?
[00:18:35] That's one vector of our relationship. We might now have another vector of relationship called academia: we publish papers, perhaps in the same venues. And each of those has a context around it. But if you and I had another relationship, called being in a triathlon, we would have a very different relationship, called competitors.
[00:18:54] Right? That's the beauty of graph theory: it can respect different sorts of relationships between different entities. Category theory, you might think of this way, and it's not quite like this, but it's technically related to type theory, which is one of the backbones behind quantum compilers.
[00:19:13] Quantum compilers really wouldn't be interpretable by humans without type theory. Quantum computing groups will hire a lot of type theorists and category theorists; those are close cousins. Category theory can often just be thought of as an n-dimensional graph theory. So, 3D chess and so forth, right?
[00:19:33] You just think of it as an infinitely rich, dimensional graph theory. So you can have an infinitely rich set of relationships across an infinitely rich set of dimensions. You can think of it as a sort of meta-math: a math above math, or the math of math, whatever you want to say. So you can describe any set of relationships along that branch of mathematics.
[00:19:58] And then what we do, what Conexus does, what the AI does, is apply this to databases. It imports the knowledge represented by schemas and models, which could be formulas in Excel, into one big system, and it is able to reconcile the models within that system based on this concept of meta-relationships.
[00:20:21] How this would get expressed for people in data science is as a sort of universal data warehouse. People are used to creating data lakes, and then data warehouses out of those data lakes. This would be a universal data warehouse: a data warehouse to rule them all, across all data, across all models, across an infinitely scalable organization.
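The categorical picture can be made slightly more concrete with a toy sketch in Python: treat each schema as a graph (tables as nodes, foreign keys as edges), and a schema mapping as a structure-preserving map between those graphs, roughly a functor between the categories the graphs generate. All schema, table, and key names below are hypothetical; Conexus's actual technology is far richer, but the invariant being checked, that structure is preserved under the mapping, is the essential idea.

```python
# A schema as a graph: tables are nodes, foreign keys are labeled edges.
source_schema = {
    "nodes": {"Employee", "Department"},
    "edges": {("Employee", "Department", "works_in")},
}
target_schema = {
    "nodes": {"Person", "OrgUnit", "Site"},
    "edges": {("Person", "OrgUnit", "member_of"), ("OrgUnit", "Site", "located_at")},
}

# A proposed mapping of source tables onto target tables.
node_map = {"Employee": "Person", "Department": "OrgUnit"}

def is_structure_preserving(src: dict, tgt: dict, mapping: dict) -> bool:
    """Check that every source foreign key lands on some target foreign key."""
    target_pairs = {(s, t) for (s, t, _) in tgt["edges"]}
    return all(
        (mapping[s], mapping[t]) in target_pairs for (s, t, _) in src["edges"]
    )

print(is_structure_preserving(source_schema, target_schema, node_map))  # True
```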
[00:20:50] So, the idea in practice: one of Conexus AI's clients is a large oil company. I don't know much about oil companies' operations, but I've come to learn some things. So, one of the operations is that you can't just take an oil well from one space to another space.
[00:21:10] You actually have to look at the space above ground and below ground, and design a new well: whether you've got to have a flange where, and a flange out of what material, and so forth. To do this, you will have a geologist involved. You will then have a petroleum engineer involved, a civil engineer, a mechanical engineer, the person who is actually going to build it and put it in the ground.
[00:21:29] All those people need to come up with their own view of the space in which this well is going to reside. They all have this conception called maximum area surface pressure that has to exist on this well. And one person might say it is 1.2 plus or minus five, another might say it's 1.5 plus or minus two, and so forth.
[00:21:53] How do you reconcile these tolerances for a maximum area surface pressure? Do those tolerances cancel each other out? Are they additive? How does that reconcile? The process today is that you will quite literally, and it's kind of shocking the extent to which this still happens in 2024, put your models in Excel, and you will email
[00:22:17] the Excel attachments to your colleagues, and try to reconcile these Excel models as email attachments among a group. And then, because you can have catastrophic outcomes, people can die from badly designed wells, you will have an internal auditor who verifies the consensus among this group of experts.
[00:22:33] And then you'll have an external government auditor, at least in the United States, to verify the internal auditor's work. This happens quite literally hundreds of thousands of times a year, just these multiple iterations of engineering models. How much better would it be to have the AI consume the models among the Excel sheets,
[00:22:53] and automatically be able to ingest these models to detect logical contradictions within them. That is the sort of thing that is enabled by category theory.
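As an illustration of what detecting a logical contradiction means in this tolerance-reconciliation story, here is a minimal sketch in Python. Each expert's model is reduced to a value with a tolerance band, and the check is simply whether any single pressure value satisfies all of them at once. The numbers and role names are invented, and a real reconciliation involves far more than interval intersection.

```python
# Each specialist's model, reduced to (value, +/- tolerance).
# All figures are hypothetical, not from any real well design.
specs = {
    "geologist":           (1.2, 0.5),
    "petroleum_engineer":  (1.5, 0.2),
    "mechanical_engineer": (2.4, 0.3),
}

def consistent_range(specs: dict):
    """Intersect all tolerance intervals; None means a logical contradiction."""
    lo = max(value - tol for value, tol in specs.values())
    hi = min(value + tol for value, tol in specs.values())
    return (lo, hi) if lo <= hi else None

result = consistent_range(specs)
if result is None:
    # Found at "compile time": no build, no test flight, no auditor needed
    # to discover that the experts' models cannot all be right.
    print("Contradiction: no pressure value satisfies every expert's model.")
else:
    print(f"All models are jointly satisfiable on {result}.")
```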
[00:23:05] Dr Genevieve Hayes: Okay. So effectively it's comparing them and looking for differences between them.
[00:23:12] Dr Eric Daimler: Any sort of logical contradiction. It's not necessarily just differences, because two models could be isomorphic to each other, and that wouldn't necessarily be a meaningful difference. But you're looking for where you could have, I'm not going to call it necessarily a corruption, but where the models, in composition, become corrupted. A technical way of saying it would be: you're going to find those errors at compile time instead of at run time,
[00:23:33] which is where those errors are found today. Today, you'll fly and crash, and fly and crash; you know how airplanes get designed. And to use the airplane analogy, which is where our software is also deployed: there are formal methods for designing a fuselage, or designing a wing, or designing an engine, but there are not yet,
[00:23:52] and we hope to address this with our software, formal methods for composing the wing and the engine and the fuselage in a way that can guarantee the integrity of those different models of the systems that ultimately comprise an aircraft. That's the type of application of a meta-model: it will look for those logical contradictions, or look for how those models will actually compose, before the airplane is built. Before it needs to be tested, the logical contradictions are spit back, if there are any, to the experts to try to reconcile.
[00:24:25] Again, at compile time, right, before the system is implemented. That's another example of category theory in operation.
[00:24:32] Dr Genevieve Hayes: That's incredible.
[00:24:34] Dr Eric Daimler: It sounds like magic, but it is provably safe AI, which is really where the whole idea expands: this is safe AI in practice. So, to come full circle to how we started: people can talk about safe AI, and they can talk about policies, data lineage, data provenance, circuit breakers and all that.
[00:24:50] But this is safe AI in an implementation. This is the technical manifestation of what AI will look like over the next 10 years. Predictions of time are pretty difficult, but I can describe what AI is generally going to look like in the future. One way of expressing it is that you will be able to detect and identify these logical contradictions in increasingly large, complex systems at the proverbial click of a button, whereas today that takes exponentially increasing amounts of people power and time, with concomitant costs.
[00:25:22] Dr Genevieve Hayes: In order to detect those logical errors between the different models, do the models have to be encoded in a specific way? I'm thinking in terms of the symbolic logic that I studied back when I was an undergraduate.
[00:25:36] Dr Eric Daimler: Hmm, it's a good question. It's a particularly good question because it actually will define the skill set of the future. What we will all need to do, with more and more skill, is define our understanding, from our expertise and our experience, with sufficient clarity for a machine to read it.
[00:25:55] That's really what it comes down to. If a machine can interpret it, if a machine can read it, then that's good enough. It really doesn't need to be encoded in any other particular way. But you need to be doing modeling with a degree of precision. We've all done what we shouldn't be doing, which is hard-coding some data in Excel instead of using formulas.
[00:26:15] So if you're not doing hard-coding, if you're actually putting in models and formulas, so that machines can read them, then you have something to work with. That's really the trick. Data modeling, which may be a declining skill, needs to be re-emphasized for everybody in the future.
[00:26:30] Everybody needs to become a little more disciplined in their communications: maybe a little bit more like lawyers. Lawyers need to become a little bit more like engineers in the precision with which they speak. And then engineers need to become a little bit more like machines in the way they speak. Our progressive
[00:26:50] iteration of work will be removing ambiguity. That's the job to be done. We will learn from large language models, a probabilistic AI. We will recontextualize based on our expertise, and feed what new facts we need to encode into the symbolic AI, which is then fed back into the AI, which is then again fed back to us.
[00:27:13] That's the cycle of work that will be never-ending.
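The hard-coding point is worth one tiny sketch in Python: encode the relationship, like an Excel formula, and a machine can re-derive, audit, and update the result; hard-code the answer and the machine can only trust a stale number. The spreadsheet-style quantities below are hypothetical.

```python
inputs = {"unit_cost": 4.0, "quantity": 25}

# Machine-readable model: the relationship itself is encoded, so a
# machine can recompute and check the result against its inputs.
model = {"total_cost": lambda d: d["unit_cost"] * d["quantity"]}

# Hard-coded "model": just a cached answer, with the reasoning thrown away.
hard_coded = {"total_cost": 100.0}

print(model["total_cost"](inputs))  # 100.0, derived from its inputs

# Change an input and the formula-based model stays consistent...
inputs["quantity"] = 30
print(model["total_cost"](inputs))  # 120.0

# ...while the hard-coded value is now silently wrong, and nothing flags it.
print(hard_coded["total_cost"])     # still 100.0
```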
[00:27:16] Dr Genevieve Hayes: You just mentioned the phrase symbolic AI there, and we're contrasting that with stochastic AI. I'm guessing stochastic AI is what we're seeing at the moment in things like ChatGPT, where the stochastic nature is what gives rise to things like hallucinations. What is symbolic AI, and how does that compare to stochastic AI?
[00:27:37] Dr Eric Daimler: What is fact-based and what is not, that's the easy way to look at it, right? These confabulations, or hallucinations in the popular, if inaccurate, vernacular, those are stochastic. Those are probabilistic, which means that you'll always have a long tail
[00:27:54] against which failures can possibly emerge. Right? Think back to our middle school education: middle school logic, induction. That's what it is, induction. You see one swan, you see two swans, you see three white swans, four white swans, you can see a thousand white swans, and induction will infer all swans are white.
[00:28:13] That's induction. You can see a million swans: all swans are white. That's induction. But on the flip side of that, you know, bottom-up, top-down, the flip side of that is deduction. The old Sherlock Holmes thing, right? All animals with wings are birds; this animal has wings; therefore it's a bird. That is deduction.
[00:28:30] You start with facts, you end with a fact. That is fundamentally different from induction, because with induction you could start with nothing but facts, but wind up with non-facts. That's the fundamental difference. The future is going to be combining those, in the way that I described, into this virtuous circle, or cycle.
[00:28:53] The research agency at the US Defense Department, DARPA, calls this neurosymbolic AI, and that's the model that I'd invite people to adopt, because that's what's going to happen: induction and deduction combined. Neurosymbolic AI is what models will be in the future. This is another way of talking about AI, or automated systems, for the future.
[00:29:13] It's where it has to go. It's where it's going.
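A compact way to see the asymmetry is to write both modes down. In the sketch below (Python, with the swan story and the wings-and-birds rule taken purely as illustrative premises), induction generalizes from a million true observations to a claim that can still be false, while deduction can only output what its premises already guarantee.

```python
# Induction: generalize from observations. You can start with a million
# true facts and still land on a non-fact (black swans exist).
observed_swans = ["white"] * 1_000_000
inductive_claim = (
    "all swans are white" if set(observed_swans) == {"white"} else "swan colours vary"
)
print(inductive_claim)  # "all swans are white": plausible, but not a fact

# Deduction: apply a rule to facts. If the premises hold, the conclusion
# must hold. (The wings-imply-bird rule is a premise for illustration only;
# as a claim about the real world it is itself false.)
def deduce_is_bird(has_wings: bool, all_winged_animals_are_birds: bool) -> bool:
    """Sherlock Holmes-style syllogism: premises in, guaranteed conclusion out."""
    return all_winged_animals_are_birds and has_wings

print(deduce_is_bird(has_wings=True, all_winged_animals_are_birds=True))  # True
```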
[00:29:15] Dr Genevieve Hayes: So what would a symbolic AI version of ChatGPT look like?
[00:29:19] Dr Eric Daimler: Yeah, you know, they say it a little bit right now. You'll hear people who, if they're not completely religious zealots in the LLM camp, will talk about embeddings, and those are not symbols. They don't want to talk about symbols, but there are facts that are embedded in these systems.
[00:29:38] And so you need to constrain those with facts. We have some people who will use LLMs to generate Excel models. You will use LLMs to generate SQL code, right? Or whatever else you will have large language models generate.
[00:30:00] You will then want the symbolic AI, the facts, to constrain or double-check the accuracy or efficacy of those generative artifacts, whether it's Excel, or SQL code, or anything else. So that's how they could work in concert in a practical way. This is the safe AI membrane around generative, probabilistic AI.
[00:30:21] This is safe AI in practice. People talk about safe AI, but this is what safe AI will mean: AI with the constraints of facts put on it by symbols.
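Here is a minimal sketch in Python of that membrane, under one simple interpretation: before executing SQL that an LLM produced, check every identifier it references against facts you do hold, in this case the database schema. The schema, the generated query, and the keyword list are all hypothetical, and a production system would use a real SQL parser rather than a regex.

```python
import re

# The "facts": tables and columns known to actually exist (hypothetical).
known_schema = {
    "orders":    {"order_id", "customer_id", "total"},
    "customers": {"customer_id", "name"},
}

# A query as an LLM might emit it, including a plausible-looking typo.
llm_generated_sql = "SELECT name, totl FROM customers JOIN orders USING (customer_id)"

def symbolic_check(sql: str, schema: dict) -> list:
    """Flag identifiers in generated SQL that no fact in the schema supports."""
    known = set(schema) | {col for cols in schema.values() for col in cols}
    keywords = {"select", "from", "join", "using", "where", "and", "or", "on", "as"}
    identifiers = {
        tok for tok in re.findall(r"[A-Za-z_]+", sql) if tok.lower() not in keywords
    }
    return sorted(identifiers - known)

violations = symbolic_check(llm_generated_sql, known_schema)
if violations:
    # The probabilistic artifact is rejected by deterministic facts,
    # before it ever runs against the database.
    print(f"Rejecting generated SQL, unknown identifiers: {violations}")
    # -> Rejecting generated SQL, unknown identifiers: ['totl']
```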
[00:30:34] Dr Genevieve Hayes: So is this the same as retrieval augmented generative AI? Or is this slightly different from that?
[00:30:40] Dr Eric Daimler: I'd say that's different, because usually when I hear people talking about that, they are not using the technology of deterministic AI. They are putting probabilistic models on top of probabilistic models. But, fair enough, other people may have different interpretations of that term.
[00:30:56] Dr Genevieve Hayes: Okay, so how do you get the facts into the system, then, in order that it can check those using this method?
[00:31:04] Dr Eric Daimler: Yeah, these are good questions. The facts already exist, from the experts. Now, if somebody's doing a whole bunch of hand-waving, and you can often hear it in just two people speaking, even smart people will, through the nature of being human, want to come to an agreement and do some degree of hand-waving.
[00:31:23] Sometimes, as in the case of what I'm doing right now, literal hand-waving, you know; the other person will be kind enough to agree or nod their head, so you think you have some sort of understanding. It's in that hand-waving that the ambiguity needs to be identified, and then disambiguated.
[00:31:40] The models otherwise are already represented in people's expertise. So, what you do, what I do, what people do, is develop a set of experiences that becomes an expertise about a particular subject, which is then represented in whatever medium it's represented in. It could start out in a document format, but it ultimately needs to be in numbers.
[00:32:01] Those numbers are the models that then are input into the system. Those are the models. That's how the AI, the machines, will read the expertise. There's nothing else that's necessary, but that is necessary. English, or whatever language, is not unambiguous.
[00:32:16] You can just look at the law and find that there are logical contradictions all over. This is why expertise can't, by itself, be encoded in these languages without further clarification.
[00:32:27] Dr Genevieve Hayes: So what you're talking about as a fact, it would be something like force equals mass times acceleration, as opposed to an entry from Wikipedia.
[00:32:36] Dr Eric Daimler: It's correct.
[00:32:37] Dr Genevieve Hayes: Okay. And those facts are the symbols that are referred to in symbolic AI, is that right?
[00:32:44] Dr Eric Daimler: Yeah. And maximum area surface pressure, to use the story we had earlier. Another one that's maybe a little squishier, but still accurate, is risk control. We had one client that wanted to look at their cybersecurity risk. Today, people just know that they have a cybersecurity risk, that it exists.
[00:33:05] And they might think from first principles around the risks that they have identified as vulnerabilities, and then they'll have an argument in English, often among smart, well-meaning people, about where resources should be applied to mitigate those risks. So if I have a vulnerability to phishing attacks, I want to apply resources to train my staff on how to diminish the risk of being vulnerable to a phishing attack.
[00:33:33] That's an argument from first principles, based on my intuition or set of experiences, but argued in English. What instead can be done, and what Conexus AI has done with one particular very large global client, is take the identified risks from departments across this global organization and compose all of those assessments of risk, across every individual
[00:34:02] IT department's assessment, up to the board level, and map that onto a risk model. There's a whole bunch of risk models; they used one called FAIR. So we map the composition of these assessments onto this FAIR framework. Then the board can have a principled discussion about the resources to be applied to mitigating cybersecurity risk.
[00:34:27] Once I have this risk model, I can then argue from a principled place with my colleagues: oh, we'll put 30 million here, 13 million there, 45 million against this other one, as a principled response to the risk framework and risk model and risk assessments throughout my global organization,
[00:34:45] Dr Eric Daimler: instead of just arguing in English. So this is a way of describing this formalization, or quantification, or disambiguation, along a continuum where, to some extent, this is machine-readable, but to some extent I'm just elevating the conversation to a more principled place. I'm not guessing.
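For a flavour of what composing assessments onto a risk model can look like, here is a deliberately simplified sketch in Python in the spirit of FAIR-style quantification, where annualized loss is roughly loss event frequency times loss magnitude. Every department, scenario, and dollar figure below is invented, and real FAIR analyses work with distributions rather than point estimates.

```python
# Per-department assessments, rolled up toward a board-level view.
# (department, scenario, estimated events per year, estimated loss per event)
departmental_assessments = [
    ("IT-EU",   "phishing",   4.0,  1_500_000),
    ("IT-US",   "phishing",   6.0,  2_000_000),
    ("IT-APAC", "ransomware", 0.5, 20_000_000),
]

def annualized_loss_by_scenario(assessments: list) -> dict:
    """Compose individual estimates into one scenario-level expected loss."""
    totals: dict = {}
    for _, scenario, freq, magnitude in assessments:
        totals[scenario] = totals.get(scenario, 0.0) + freq * magnitude
    return totals

for scenario, loss in annualized_loss_by_scenario(departmental_assessments).items():
    print(f"{scenario}: ${loss:,.0f}/year expected")
# phishing: $18,000,000/year expected
# ransomware: $10,000,000/year expected
```

With a composed model like this, the budget argument ("30 million here, 13 million there") can be anchored to comparable quantities instead of competing intuitions argued in English.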
[00:35:01] Another example: during COVID, every large oil company actually experienced this, but one of them was our client, they were doing their ordinary thing but experienced negative oil futures. Now, when oil futures went negative, that broke the model that they otherwise had represented.
[00:35:22] And they were left guessing for a period of time about their production numbers, and how they should be interacting with the rest of their internal supply chain for oil, because they previously hadn't considered that in the model. The model wouldn't be able to adapt to these changes outside the model.
[00:35:37] Had they had this AI applied at the time, then the model actually could have responded to the data and adapted itself. This is the nature of this AI: the model could have changed with the data as the world was changing, and then given the executives, the leadership, a principled way of interacting with negative oil futures.
[00:35:56] It's just another example of the benefit of implementing these smarter AIs, as opposed to, at that time, machine learning trained on past behavior.
[00:36:06] Dr Genevieve Hayes: I can see this being very important in a field like finance, where, you know, the finance industry has had a long history of people building brilliant models to predict the stock market, and then having the underlying paradigm of the market shift and people losing a lot of money. But if you had some sort of AI that could determine when the underlying nature of the market has shifted, and that you need to go to a new model, this would save people a lot of money.
[00:36:40] Dr Eric Daimler: You know, what we do addressing this point is, Conexus AI is deployed in a money center bank for risk management. And what we do for this large money center bank's risk management department is make more of their own data available. They know how to do risk models, and I think they know how to do pretty dependable risk models.
[00:36:59] They've been around for a long time. But they have trouble bringing in, again, these databases from across the organization. They collect all this data, but if they want to then find the data and do comprehensive reports on the data, what-if analysis and so forth, it takes a large amount of effort. It's really a project to bring in more data.
[00:37:19] What Conexus AI provides is the ability to more quickly, more reliably bring in the data, so that they can have a richer risk analysis, a richer set of discussions, with their own data. The result, they say, is that they hold
[00:37:37] less excess reserves against their exposure. A similar example happens for us with a public utility in Europe, where they have a set of resources they are deploying, and want to make sure that they are not firing up old coal plants in reserve unnecessarily. So what we do is make more of their data available across the databases of the different vendors that supply the different generating systems across their energy network.
[00:38:07] The result is that we're allowing them to hold less of this excess fossil-fuel-burning, or traditional energy, capacity in reserve. So we reduce their carbon footprint, as a benefit in that particular example. So one is the financial sector, where it's able to reduce
[00:38:25] excess capital held, for increased efficiency of a money center bank. The other is it's able to reduce the amount of fossil fuel burnt to hold in reserve for a swath of Europe.
[00:38:35] Dr Genevieve Hayes: So, to change the topic a bit, you've got a book coming out soon called The Future is Formal: The Roadmap for Using Technology to Solve Society's Biggest Problems. Can you tell us a bit about that? In particular, what does the title mean?
[00:38:50] Dr Eric Daimler: You know, the idea that the future is formal is that we will all be making sure that we are speaking with increased specificity and formalization, in the traditional computer science sense of the term. I know it's a fun alliteration for a nerd. But the idea is that we all need to be practicing the degree to which we move from ambiguity to specificity in whatever expertise we hold. All of our skill sets will experience an increasingly short half-life.
[00:39:24] Yours, mine, everybody's. So, not just: if I learn French, the value of that diminishes over time as automation takes hold. That may be one of the more obvious ones, but it holds in other domains too, whatever they are. As in computer science, we have languages whose value, the skill of their mastery, has diminished over time.
[00:39:48] Going further, what The Future is Formal provides is that, as we begin to have increasing specificity in our own expertise, we begin to capture this ability to communicate with others with more alacrity. One of the reasons that some of the largest organizations in the world exist is their ability to communicate with themselves and coordinate with themselves under a common framework.
[00:40:19] If, in the future, we are able to scale our individual ability to communicate with clarity with each other, or with other small groups of people, we will enable an explosion in collaboration, where small groups will be able to collaborate and create networks of collaboration that will begin to resemble today's large organizations.
[00:40:46] So you might say some of the largest organizations in the future will really be networks of smaller organizations. That's really where this future of mass collaboration can go. Going further: today's LLMs, what they do is an expansion on, not my academic research in particular, but something that touched on my academic research from many moons ago, which was this prediction of words,
[00:41:12] the prediction of, we'll say, an autocomplete. Today we're at a 32,000-word autocomplete. What I had done a long time ago, as one of many people, is be able to predict people's responses to some extent. And I did this in a really structured environment, around central banks and corporate malfeasance. But what we're going to be able to do,
[00:41:33] and I don't know whether this is five years, 10 years, or 15 years, but this is what's coming, is expand on that idea with the scale that we've seen develop with LLMs: to not only have a 32,000-word autocomplete of what I might say, that develops into an essay, or at least a draft paragraph that I can interact with, but also begin to predict how people will respond to what I say.
[00:42:01] To some extent, we can imagine how this would come about, because if I, for example, in this conversation say that after this exchange I'm going to go running, you're not going to randomly say, happy birthday. That would be a non sequitur. You're going to say, have a good run, or, I'll see you later.
[00:42:16] You'll say something, you know, generally, because humans want to have agreement and they're going to try to come to some sort of consensus. You're going to say something that's reasonable given what I had said. That's what we will be able to predict with computers, over the next however long; that's where it's going.
[00:42:33] It's going to be predicting what I will say, and then predicting how people will respond. So those are a couple of different ways that I'm talking about how the future is formal, and what's enabled in the future, the safe AI. This is what's going to happen over the next 10 years.
[00:42:48] Dr Genevieve Hayes: I like this idea, because, as a computer scientist, this is the way my brain works, and I'm happy to translate things into nice formal statements. I remember, from when I was studying logic at university, I found logic to be really easy, because that was just the way I thought. But I know there were other people in the class who really struggled with it, and found it difficult to translate concepts into the formal statements you needed in order to do the logical problem solving.
[00:43:24] How do you propose to get people like that on board with this formal future that you're describing in your book?
[00:43:32] Dr Eric Daimler: Yeah, I think there are a couple of ways to answer that, and a third, which I just hope people don't do. So, there's always a place for artisans or niche markets, and that can happen either intentionally or by accident. There is a market today for people playing music in person. There's a market today for people making and selling handmade pottery, or cooking. These are things that have been automated for many, many years, for our lifetimes.
[00:44:03] And yet there's a value in handcrafted goods in many domains. So that's one area: people can be artisans or have niche markets. That's great. Another is that people will experiment with collaborating with these technologies, and this is probably what I would suggest for most people, whether you are inclined towards logic or not. You know, computers can help facilitate your thought.
[00:44:29] What do you mean? What do you mean? That sort of prompt is the simplest way, for many people, of clarifying what is meant to be communicated in a particular area. It doesn't have to be a binary of not-logic and then logic in the sense of a computer scientist. We don't need to program in the sense of learning the syntax of any particular programming language.
[00:44:54] But we need to be practiced at, and working towards, just diminishing our degree of ambiguity. I think everybody can get on board with that. If somebody says, to go to my house you need to take a left, well, you need to tell me on what street, you need to tell me when. That's a degree of specificity that's pretty important.
[00:45:10] If I'm going to take an airplane, you need to tell me from where to where; tell me exactly where the airport is. This is the level of specificity people can get behind. The degree to which I can have the computer take care of the precision of the GPS coordinates, that's a different level, and maybe we can have other people take over at that point, if you will, I don't know. But what I discourage people from doing, if those are the first two options, one being becoming a niche player, maybe on purpose, and the second collaborating with machines:
[00:45:38] the third option is ignoring it, and I hope people don't do that. Because if you just ignore it, and keep doing what you're doing, and hope things will turn out, you're going to be surprised. And I don't want people to be surprised, because they're going to be unpleasantly surprised.
[00:45:52] I don't know when any one individual job is going to be reduced in importance, or perhaps be eliminated. But that's the nature of the change today. It's not that the world is changing faster, which it is; it's that how that often expresses itself is the abruptness of the change. When automation took place in the past, whether it was elevators or switchboard operators, we had a generation for people to adapt.
[00:46:15] Maybe my kids wouldn't go into that career, but I could continue my career until it expired. Not today. For example, earlier in my career I was involved in the use of machines to automate some of financial traders' work. The New York Stock Exchange, for example, when I was young, was a bustling pit of activity.
[00:46:38] But today it's mostly a tourist attraction and a media backdrop. And that's because of people like me programming computers to automate much of what those people on the floor did. When I was at one of these banks early in my career, we were programming, at that particular time, the automation of the trading of US Treasuries.
[00:46:56] We would program it, we would watch what people did; we'd program a little more, we'd watch what people did; we'd program it, we'd watch what people did. And then I went to my boss and I said, okay, we're done. This does what that person does. They didn't need to wait until a Thursday afternoon to fire the staff.
[00:47:15] Digital technology is just abrupt like that. And banks are kind of heartless and soulless like that. The nature of digital technology is: it doesn't work, it doesn't work, it doesn't work, it doesn't work, and then it works
[00:47:30] infinitely well, at infinite scale, and the department is eliminated. So that's the third option: people will be surprised, and they will be unpleasantly surprised. Don't do that. And you avoid that by taking the first or the second option. The first option: become an artisan. The second option: engage with the technology and collaborate with it.
[00:47:49] Dr Genevieve Hayes: And what final advice would you give, not just to people who are uncomfortable with this technology, but to people who are comfortable, such as the data scientists in our audience who are looking to create business value from data?
[00:48:03] Dr Eric Daimler: You know, I think data science is often a misnomer, because 80 percent of the work of data scientists is data engineering, which is really unpleasant work. And that's what's getting automated away. So I encourage data scientists to look at how much data engineering they're doing, and at the automation tools that are available to make that work materially more pleasant. Leaders, often, when they look for data across the organization,
[00:48:26] are expecting data to come back to them instantly, or maybe in hours or days, when it actually can take months, to do their annual planning. Those days are numbered; that time is coming to an end. And so data scientists who are aware of the half-life of that age will be well served. For everybody else,
[00:48:43] I think they will be better prepared by experimenting with the new tools that are coming online, and working every day at the experimentation with these tools, and the engagement and integration of these tools into their workflow. That's how best to prepare for the future.
[00:49:07] Dr Genevieve Hayes: So for listeners who want to learn more about you or get in contact, what can they do?
[00:49:12] Dr Eric Daimler: Conexus.com is our company, and then, for me, LinkedIn is probably the place where I am most active. But Conexus.com is the place where, well, let's say the future is unfolding.
[00:49:22] Dr Genevieve Hayes: And when can our listeners expect to see your book in the online bookshops?
[00:49:29] Dr Eric Daimler: Yeah, right, the bookshelves, the proverbial virtual bookshelves. My publisher will ask every day. So I would say 2025 is the expectation. But in the meantime, I can pitch my wife's book, which is on corporate culture, called ReCulturing, from McGraw Hill. So, while waiting for my book, I will proudly pitch my wife's excellent book, which was one of the best sellers when it was released, and is still highly relevant for those people looking to operationalize corporate culture.
[00:49:54] Dr Genevieve Hayes: Okay. And what's your wife's name?
[00:50:02] Dr Eric Daimler: Melissa Daimler. There you go. That's what people can do.
[00:50:02] Dr Genevieve Hayes: So thank you for joining me today, Eric.
[00:50:05] Dr Eric Daimler: It's good to be here. This is a lot of fun.
[00:50:07] Dr Genevieve Hayes: And for those in the audience, thank you for listening. I'm Dr. Genevieve Hayes, and this has been Value Driven Data Science brought to you by Genevieve Hayes Consulting.