Episode 105: From AI Idea to Production Reality


[00:00:00] Dr Genevieve Hayes: Hello and welcome to Value Driven Data Science, where data professionals become strategic experts. I'm your host, Dr. Genevieve Hayes, and today I'm joined by Santosh Kaveti. Santosh is the CEO and founder of ProArch, a technology consultancy that helps enterprises operationalize AI securely and at scale.
[00:00:24] His expertise spans critical infrastructure industries, including power generation, manufacturing and healthcare, where he has seen firsthand how AI can drive business transformation in complex regulatory environments. In this episode, we'll be exploring how to build a compelling business case for AI initiatives and how to operationalize those initiatives once the business case is approved.
[00:00:52] Let's dive in. Santosh, welcome to the show.
[00:00:55] Santosh Kaveti: Thank you so much for having me. Glad to be here.
[00:00:59] Dr Genevieve Hayes: One of the things I love most about the explosion of AI technologies is that it's suddenly made it possible for data scientists to develop ideas that might otherwise never have moved from their maybe-someday list. Chances are, if you're listening to this show, you've probably developed such a project yourself, or at the very least, have an idea
[00:01:20] you plan to build as soon as you get the time. I, for one, have several such ideas that I'm looking forward to trying out. Technological capability is no longer the obstacle that it used to be even one or two years ago. However, with all these newfound possibilities, there's a gap that nobody's talking about.
[00:01:39] The gap between having a good AI idea and actually getting it funded, approved, and deployed within a larger organization. Bridging this gap isn't a technical problem, but a strategic one. The first step doesn't involve writing code or prompting an LLM, but rather building an effective business case, one that aligns with the goals of the business.
[00:02:04] Santosh, in your work with enterprises, how often do you see AI initiatives fail? Not because of the technology, but because the business case wasn't there from the start.
[00:02:16] Santosh Kaveti: So that happens more often than I would really like to say. We often see companies get into doing mode, working with AI tools without really understanding where they are in their AI readiness and maturity. That involves, of course, and includes a proper business case.
[00:02:43] As you rightly said, technology and technical capabilities are no longer blockers, and technically things are only becoming easier. Even in the last six to eight months, there have been so many technological advancements to solve very hard problems, especially around context engineering, which is the essence of building proper AI in the world of an enterprise.
[00:03:11] But going back to what I was saying, technology has never been an issue, even before. Some are easy problems to solve, some are difficult. It's the organization's approach, strategy and methodology on how they go about getting to the point of starting their AI journey that dictates, honestly speaking, your success or failure, and how expensive your failure could potentially be as well.
[00:03:42] I've seen time and again organizations not being ready from a culture perspective: people's training, just AI readiness and awareness, from boards to operators. Just staying on that one point, even today I don't believe boards have a good framework, awareness or training to ask the right questions when it comes to AI. Everyone knows they have to do something with AI.
[00:04:16] They see in their heads the vision of either reduction of risk, cost savings, or potentially new streams of revenue. They see the world echoing that over and over again. But when they sit down with their teams, where do they start? What questions are they really supposed to ask? That's just one example when it comes to the board, even at the executive management level. Then again, they're not ready from a data perspective, a security perspective: all the foundational pieces that are must-haves. These are prerequisites.
[00:04:53] Without having that in place, if you jump into the AI journey, outcomes are gonna be unpredictable. It could be too costly for you to deal with. I always say that when a human makes a mistake, there is a cost to it, but in most cases, the cost is manageable.
[00:05:10] When AI makes a mistake, in most cases, the cost is gonna be unmanageable. It could be disastrous as well, especially when it comes to critical infrastructure industries, where we actually work. So when safety is a concern, you can't afford to make a mistake.
[00:05:30] So the foundational pieces will have to be right. Without those, you can increase your individual productivity to a certain extent. Everybody can use a copilot. But that's only gonna take you so far if you really wanna operationalize AI and do it at speed and scale,
[00:05:47] the speed and scale the technology today allows. You have to have a good starting point. And probably the biggest challenge for most companies is to go through the journey, or even to say, how do I get to that good starting point? What does it look like for me? And I kid you not, that's where most companies just struggle a lot.
[00:06:06] I'm just gonna put it that way.
[00:06:08] Dr Genevieve Hayes: When you're dealing with boards, based on my past experience, my guess is that there are gonna be two extremes. There'll be the boards that are super conservative, who are scared of making mistakes and who just veto all AI projects because they just don't understand it. And the ones that do the opposite and approve everything because they feel like this is what they should be doing, but are doing it possibly without the depth of knowledge that they need.
[00:06:37] Is that what you've found?
[00:06:39] Santosh Kaveti: Yeah, absolutely. You hit the nail on the head there. So yes, there are typically two camps. There is an emerging camp now, a group of people who are beginning to understand. They say, look, we understand the potential gains of AI, but we also need to understand the risks of AI and make informed decisions.
[00:06:57] I'm glad to see that third camp is emerging. But historically, more often than not, those are the two camps. You have folks who are just saying no because they see only the risks, and then the folks who say yes to everything because they see the potential gains.
[00:07:17] Dr Genevieve Hayes: And I think in both cases there's this education piece that's incumbent on the data scientist, whereby if you've got a board that's saying no because they don't understand it, then you need to educate them so that they can understand what the risks and benefits are. But if you've got
[00:07:36] one of the boards that might be saying yes too readily, you also need to educate them so that they can be confident in the decision making that they're undertaking.
[00:07:47] Santosh Kaveti: You said it right. If data scientists and data professionals can look at this as a problem of: look, I'm trying to explain to someone who will make a huge investment decision, but they're trying to understand what the risks are and how we can mitigate the risks.
[00:08:04] Do we even have the ability to mitigate the risks? And if so, how will we do it? And what's the best way to get to the ROI, at speed and scale, without running into these risks? And what does the governance look like? I think that will go a long way, in my opinion. So absolutely, that education, training and giving them some framework to work with on how they can make better decisions as a board.
[00:08:32] We help a number of boards do that, by the way. We help them make better decisions when it comes to AI investments, because sometimes they struggle, because of the composition of the board, their own personal journeys, educations, biases. In many cases they struggle, but if you can align them together
[00:08:49] and say, look, we have to go on the AI journey. Not going on this journey is not an option. Okay? However, how can we do it the right way? And what does it really mean? Let's not get too far ahead of ourselves and imagine we're gonna turn the company upside down with AI. And at the same time, let's not shut all the doors and say, no, we're gonna do nothing in AI.
[00:09:13] Neither of those things works. But the good news is that, because of the entire industry's constant echo of successes and failures, there is definitely momentum and eagerness to learn, and I definitely see that has gone up significantly in the last, I would say, even six to eight months.
[00:09:33] Dr Genevieve Hayes: As a data scientist watching this AI wave it seems a lot like history repeating itself because I saw this all 10 years ago with the whole ML wave where every organization wanted to be doing machine learning.
[00:09:48] And back in the days when the ML wave was going through, there was that statistic that went around that 90% of ML initiatives fail to deploy. Do you see that same sort of trend happening with AI initiatives that they're failing to deploy or is there a higher deployment rate than with the ML projects that were going around 10 years ago?
[00:10:12] Santosh Kaveti: That's a really good question. So taking a step back, I've seen this cycle a few times: once when the SaaS wave started and everyone adopted SaaS. Then the cloud wave started, and everybody said everybody will be on the cloud, period. Then the big data wave came along and did the same thing.
[00:10:36] And of course AI, especially ML. I think we've seen this multiple times, but this time around it's different, mainly because of the speed and the scale and the investments that hyperscalers are putting into everything. And that matters for these reasons.
[00:10:56] So what we are seeing is, as far as individual productivity goes, there are undoubtedly gains, period. And sometimes, based on the tools that individual employees have been provided, the gains could be anywhere from 20% to 70 or 75%. It also depends on that individual's constant training and education and evolution.
[00:11:23] So there, it's just about investment into training and tools. Beyond that, yes, you are right. What we're seeing is chaos and failure across the board, not lack of ideas. Every company today has an inventory of excellent AI use cases, agents that they want to build, but very few have a way to productionize them,
[00:11:49] operationalize them, and do it securely, making sure that there is some governance in place. Now, if I leave aside the cultural aspects and hierarchical organizational design, everything is being disrupted; AI is disrupting even the org structure in so many ways. Previous constraints
[00:12:11] that forced companies to adopt their current org structures are, in my opinion, no longer relevant for future companies, who will thrive in a different org structure, especially if they can leverage AI. But it goes back to that point: doing it the right way, operationalizing AI. And typically it boils down to two to three categories.
[00:12:35] One, their data. Whether it's poor quality data, siloed data, or fragmented data, data maturity, governance, and security are huge blockers. Security in general is a huge blocker: not understanding how to bring security considerations into an AI-native world. We're seeing that as probably where the biggest eye-openers will come in the future.
[00:13:03] Unfortunately, I'm not hoping to see those, but that's what I'm really concerned is going to happen, because if you don't really secure your AI environments and you don't understand how to secure them, you don't understand the risks. Every SaaS application today says, I have AI in it. And then on top of that, you are building your own AI.
[00:13:23] Where are the risks, and the visibility into these risks? Quantifying these risks is a huge concern. So the second blocker is security, governance and risk. The third blocker is a big one. When you look at an enterprise, you have tens, if not sometimes hundreds, of applications working
[00:13:42] mostly in silos, with some integration at some level between some core applications that you have in place, and you try to plug an AI agent into that world. Most often it's a disaster. I'm using a strong word there, but the reason is everybody's trying to bolt on AI on top of the current systems design that's in place. Let's say you resolve your data problems, you resolve your security problems, you brought it to a certain level of maturity. You still cannot simply take an AI agent and plug it in and expect to get the ROI and outcome that you really, truly are going after.
[00:14:28] And the answer there is, in an AI-native world, you have to reimagine your current processes. That's the only way for you to maximize your ROI and get to a true outcome where you can tangibly see either reduced risk, saved costs, or more money made. Most companies, I think, need a lot more education and training to reimagine their workflows and how they would look in an AI-native world.
[00:14:57] Adding or bolting on an AI agent or an AI application layer will only take you so far. I'm not gonna say it won't give you some gains, but that's not maximizing the ROI.
[00:15:08] Dr Genevieve Hayes: So it's like strapping a rocket pack onto a horse as opposed to using a car.
[00:15:13] Santosh Kaveti: Yeah. Yes. Yes. Like building an AI penthouse with sand as your foundation.
[00:15:20] Dr Genevieve Hayes: So through your work with ProArch, what are you doing differently so that you can guarantee success for your clients in the AI space?
[00:15:31] Santosh Kaveti: Thank you so much for asking that question. I'll give you an example. A healthcare provider, one of our customers, came to us. They're in the Medicare-Medicaid space here in the US, especially in New York. They came to us with a problem statement about AI: adoption of AI, or building an AI agent or AI use case.
[00:15:56] What our teams do, and I know I'm tooting my own horn here, but this is what we've learned through a lot of mistakes and failures, is really try to understand why, and make sure that we help them solve the real problem. Okay, alright, let's establish that they've gotten the AI problem statement right,
[00:16:17] on where they need to apply AI. Our goal is not to jump into doing that, but to make sure that, A, they really have the foundation that's needed to apply AI. In this particular case, they didn't. Their data quality was so poor that if they had simply applied AI on top of it, there was no way they were going to get the results that they were hoping for.
[00:16:46] So what we ended up doing is really walking them back through and saying, no, this is the problem that you first need to fix. Let's address this problem for one domain. Do it quickly, 'cause everybody wants to see the proof of value. And if you feel like we've done this the right way and you're able to see the results, then expand it to other functional domains. For them, labor allocation is a big deal in the way they service their clients.
[00:17:11] So when we started looking at their data and where the data comes from, it was very clear to us that there was no ownership, there was no stewardship, there was no governance, and therefore the quality was poor. So we really needed to establish a framework to show them: this is your level of maturity today, and this is where you really need to be to start using AI, and this is how we can get there.
[00:17:35] And together, working with them, we were able to get there, and then they started seeing results. In fact, in the process of getting there, they saw several takeaways and even ROI, just by looking at the data and then looking at all the flaws that they have, whether it's fragmentation, whether it's silos, whether it's poor quality, user error, error handling, validation, lack of ownership, lack of stewardship.
[00:18:03] But essentially, once they saw the value, this is how it shifted their mind. They basically said, okay, so the data of this domain all of a sudden became a product, and I'm able to use that across the enterprise for whatever upstream use cases there might be, including AI. So once they realized the value of data as a product,
[00:18:28] then they felt, okay, this is good, we can just extend this now to all the domains that they have, about 12 to 15 domains. So they started going through and extending the same concept across their company. That's one example of a really good success story: where they started, or what they wanted to do,
[00:18:49] but then had to go back and solve the foundational issues, and then go back up again and start leveraging AI. In the process, because we have had a strong security practice early on: one of the things we realized, even about 10 to 12 years ago, I would say, is the need for security to become the core DNA of everything we did, whether it is in cloud, whether it's in data, whether it is in infrastructure, or even app dev.
[00:19:19] How do we embed security where it's not a bolt-on or a last phase, or "we just need to do pen testing and everything will be fine"? And that taught us so many things. So doing that, with the proper governance and security measures in place, gave them the confidence. We helped them do data classification, data lineage, data security, and being able to quickly monitor to see if there is any leakage anywhere through the application of AI.
[00:19:49] They are now confident that, okay, we have systems in place to prevent, and even react and protect us, if there's a need. All of this is to say: bringing cloud, data and security together as one conversation and one narrative to solve the business problem is probably what our teams are very good at.
[00:20:13] Dr Genevieve Hayes: So what you're advocating for here is multi-functional teams. You're not saying that a data scientist can solve it all, or a cybersecurity expert can solve it all, or a data engineer can solve it all. You're saying that basically, for an organization to succeed, all of those teams need to work together in order to deliver a result.
[00:20:41] Santosh Kaveti: Absolutely. So the way I look at this is: you used the word cross-functional teams, and I like the term "decision teams". Together, they have to make the decisions as one unit, with the single-minded goal that, okay, we need to achieve this outcome. We're not here to deliver a report, and we're not even here to deliver an agent.
[00:21:05] Ultimately, the client is looking to do this particular thing: they want to improve labor allocation and probably save 20% by leveraging AI. That's what they want to do, okay? That's the decision we're going after. But ultimately, I could not agree with you more. Some of them are naturally probably more positioned to be narrators; I would say data scientists and data professionals naturally could do so much better to narrate
[00:21:34] and say, this is how this will come out in the end. But yes, without a cybersecurity expert, without a strong cloud expert coming together and building everything, there will be a hole.
[00:21:49] Dr Genevieve Hayes: Many of our listeners are data scientists who don't have expertise in cybersecurity. They might have at least some expertise in data engineering, but they're still not gonna be as strong as a career data engineer. If you have a data scientist who is wanting to get an AI project off the ground and doesn't have that other expertise, how can they go about forging those connections and getting their project off the ground?
[00:22:21] Santosh Kaveti: That is a great question, because we run into that all the time ourselves. The way we bridge it is we invest in it, and thanks to AI, this is where AI can play a big role. We bridge that gap by, first: yes, we can't expect a data scientist to really become a Purview expert, or a Defender for IoT or Defender for Cloud expert, or an AI readiness or red teaming expert.
[00:22:48] But we want data scientists to know: this is what is needed for me to finish this as a product and deliver. I need data classification and lineage so that the data is not exposed. And let's say we are building an agent. An agent has to be treated like any other human being would be, and therefore all the controls will have to be applied.
[00:23:11] That awareness is super important. And what we've seen is, once they go through one or two projects together, they naturally build that awareness. It becomes part of what they think about, and they say, hey, can you help me with this? I see a need for better classification here. Purview expert, can you come in and help me
[00:23:30] classify this data, and what are the different sensitivity labels that we can apply? How does it relate to the organizational policies, and what does it really mean at the end of the day? But just pulling in the right resources to be able to ask the right questions: they need some training.
[00:23:46] They need to know what's available in the market, what questions they should ask from a security and compliance perspective. Data scientists will have to develop that skill to ask really good compliance questions, because even when we work with industry solutions in energy, we work with a lot of power plants, in manufacturing,
[00:24:07] again, critical infrastructure. Some of them are defense contractors. Again, healthcare providers. Compliance is a big deal to all of them. So yes, we don't expect data scientists to understand NIST low, medium, high, or CMMC compliance. But they need to know: what do I need to do to make sure that the data is compliant, and who can help me get there, and who can help me not only audit and certify, but be there as an expert with me when I'm doing this?
[00:24:41] That goes a long way.
[00:24:43] Dr Genevieve Hayes: It's like I was reading a newspaper article yesterday where a CEO of an Australian tech company was saying the roles of the future are gonna become a lot broader in their responsibilities.
[00:24:54] And so it sounds like those data scientists, instead of just sitting there and coding all day, are now going to have that level of awareness across the different functions within the organization, and of where they sit, so that they can do their job effectively.
[00:25:12] Santosh Kaveti: I could not agree with you more. In fact, except maybe in quantum computing, because it's not mainstream yet, you no longer can leverage a technical skill and build a successful professional career out of it. I think you have to become more business-centric. And technical skills are foundational, right?
[00:25:34] Without those skills you can't really build anything on top of that. But what you build on top of it is the ability to work with others, to work with your decision team, to create that collaboration, and to always single-mindedly focus on: okay, this is the outcome, and this is the narrative that I need to create, and how am I gonna create that with the help of this team? I think data scientists are at the crux of understanding the data.
[00:25:59] So they are really close to the value chain of the outcome. So they are naturally positioned to become really great storytellers and narrators and business problem solvers.
[00:26:10] Dr Genevieve Hayes: So if any of our listeners have been sitting on an AI idea, waiting for the right moment to push it forward, what's one step they can take tomorrow to start turning that idea into a reality?
[00:26:21] Santosh Kaveti: Well, they should have started yesterday, in my opinion. I can share what worked for me. I myself always start with an idea. I probably try to solve one or two big problems every week, and I try to leverage the tool set that I have access to, both in my personal world and also my professional world.
[00:26:44] A lot of experimentation. We've seen the trend, right? We went from chatbots to agents to now coworkers, to creating unique skills that you want. This is a constant evolution, so I myself struggle, but what I do is understand each tool set that's available to me and what it can and cannot do,
[00:27:11] and try to experiment. I typically land in a decent place once I start the journey and stick with it. And then after, of course, it's evolution. Nowadays the technology's evolving so fast that it can't simply be one and done. You have to keep it going. I have probably created close to 25 to 30 agents that I work with, and many skill sets that I work with. What normally would've taken me
[00:27:38] weeks or months to process the data or information now takes a matter of minutes, because I put in the time to start doing it. Short answer: there are tools, and you just have to jump in and start doing it if you have an idea. I can promise you, technology is no longer the limitation. The start will always be bumpy, but as you get comfortable, the more you experiment, the more you understand the nuances, both the technical nuances and the functional nuances, the better.
[00:28:07] You'll slowly start getting it.
[00:28:09] Dr Genevieve Hayes: So there's a startup cost to begin with, but the payoff's worth it.
[00:28:13] Santosh Kaveti: It is definitely worth it. Oh, definitely.
[00:28:15] Dr Genevieve Hayes: So for listeners who wanna get in contact with you, Santosh, what can they do?
[00:28:20] Santosh Kaveti: Best source is my LinkedIn profile. I try to be as active as I can be Santosh Kaveti. Also through ProArch, these would be the two best sources.
[00:28:29] Dr Genevieve Hayes: And that's it for today's episode of Value-Driven Data Science. But if you want more from Santosh next week, you can catch our Value Boost episode where we explore the situations where AI isn't the answer. How to recognize those situations and the strategic skill of knowing when to push back. And if you found today's episode useful and think others would benefit, please leave us a rating and review on your favorite podcast platform.
[00:28:57] That way we'll be able to reach more data scientists just like you. Thanks for joining us today, Santosh,
[00:29:03] Santosh Kaveti: Thank you so much for having me.
[00:29:05] Dr Genevieve Hayes: and for those in the audience, thanks for listening. I'm Dr. Genevieve Hayes, and this has been Value-Driven Data Science.
