Christopher Savoie, Co-Founder and CEO of Zapata AI: Pioneering the Next Generation of AI Solutions in Business


In an era where artificial intelligence is transforming industries at an unprecedented pace, Zapata AI is at the forefront of innovation and strategic application. At the helm of this pioneering company is Christopher Savoie, a visionary leader whose career spans the fascinating intersection of machine learning, biology, and chemistry. In an exclusive interview, we explore how this multidisciplinary approach has shaped his vision for AI development at Zapata AI. From co-inventing the technology behind Apple’s Siri to spearheading predictive analytics in racing, he shares invaluable insights and lessons that continue to drive Zapata AI’s groundbreaking advancements. Join us as we examine the technological marvels and future prospects of AI through the eyes of one of its most influential architects.

Your career spans a fascinating intersection of machine learning, biology, and chemistry. How has this multidisciplinary approach influenced your vision for AI development at Zapata AI?

We’ve developed a platform – Orquestra – that allows us to deliver these same algorithms and capabilities across different verticals, including telco, automotive and biopharma – all industries that I’ve actually had the opportunity to work in during my career. I’ve had the good fortune of working for category-leading companies in all of these industries – Nissan in automotive, Verizon in telecom and GNI Group in biopharma – so I have firsthand knowledge of the industrial-scale problems these industries face. Moreover, the work that I’ve done in different types of AI has, I think, really helped us be very strategic in how we apply our technology in this new generation of generative AI, to ensure we can actually help these companies be more efficient and proactive.

As a co-inventor of AAOSA, the technology behind Apple’s Siri, what lessons from that experience have you applied to your work at Zapata AI?

It’s like déjà vu all over again, in the sense that when we started that project, a lot of the natural language understanding engines were these big, monolithic, big-grammar type approaches that weren’t working very well. They were trying to be everything for everyone for an entire language. You needed a grammar for German, a grammar for Italian and a grammar for English that understood the entire language. What we realized is that breaking these up into smaller language models and having ensembles of those models work together to solve a problem was a better approach. We’re coming to that conclusion now in this world of LLMs and generative AI. I think the way forward is going to be using ensembles of smaller, more compact, more specific, and more specialized models, and having those models work together to solve problems.
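To make the ensemble idea concrete, here is a minimal, purely illustrative sketch of a "router plus specialists" pattern: a dispatcher sends each query to a small, domain-specific model and falls back to a general one. The model functions and keyword routing are hypothetical stand-ins, not Zapata AI's architecture or the AAOSA design.

```python
# Illustrative only: toy specialists standing in for small, domain-specific models.
def telecom_model(query: str) -> str:
    return f"[telecom specialist] answer to: {query}"

def automotive_model(query: str) -> str:
    return f"[automotive specialist] answer to: {query}"

def general_model(query: str) -> str:
    return f"[general fallback] answer to: {query}"

# Hypothetical keyword routing table; a real system would use a learned router.
SPECIALISTS = {
    "network": telecom_model,
    "outage": telecom_model,
    "tire": automotive_model,
    "pit": automotive_model,
}

def route(query: str) -> str:
    """Send the query to the first matching specialist, else the general fallback."""
    lowered = query.lower()
    for keyword, model in SPECIALISTS.items():
        if keyword in lowered:
            return model(query)
    return general_model(query)

print(route("When should we pit the car?"))       # handled by the automotive specialist
print(route("Summarize today's strategy notes"))  # falls back to the general model
```

The design choice the pattern illustrates is that each specialist can stay small and focused, while the router decides which one (or which combination) should handle a given problem.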

Zapata AI has demonstrated the ability to predict yellow flag events in racing well in advance. Can you elaborate on the technology and algorithms behind these predictions?

I can’t reveal the actual algorithms that we’re using because that’s proprietary to our customer, Andretti Global. But what I can say is that we use a number of different machine learning approaches across the spectrum of complexity to predict what might happen on the track. I think the really cool aspect of our technology is that while we train things on the cloud with 20 years of historical data, we’re able to take those models, deploy them and use streaming live data to update them dynamically based on what’s happening on the track. That’s obviously important in auto racing, but it’s also important in other customer applications that we have. For instance, trading strategies where market data is being updated dynamically and in real time. That is something we’re doing with Sumitomo Mitsui Trust Bank.
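The actual race algorithms are proprietary, so the following is only a minimal sketch of the general pattern described here: train a model offline on historical data, then refine it incrementally as streaming data arrives, using scikit-learn's `SGDClassifier.partial_fit`. The synthetic data and the caution-flag framing are illustrative assumptions.

```python
# Sketch of "train on historical data, update online with streaming data".
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# "Historical" training data: lap features -> caution event (1) / no caution (0)
X_hist = rng.normal(size=(5000, 8))
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 3] > 1.0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))  # initial offline fit

# During the race: small batches of live telemetry refine the model in place
for _ in range(20):
    X_live = rng.normal(size=(32, 8))                        # one streaming batch
    y_live = (X_live[:, 0] + 0.5 * X_live[:, 3] > 1.0).astype(int)
    model.partial_fit(X_live, y_live)                        # incremental update, no full retrain

print("P(caution) for a new lap:", model.predict_proba(rng.normal(size=(1, 8)))[0, 1])
```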

What challenges did you face in integrating live streaming sensor and telemetry data from race cars, and how did you overcome them?

Race cars generate gigabytes of data every race. That adds up to terabytes of data across Andretti’s history. Not only is that a lot of data, but it’s coming in fast during the race. The challenge is in taking that streaming data, combining it with historical data, and then cleaning and processing that data as it comes in so it can be used by our AI applications in real time. On top of that, you don’t always have internet on the racetrack, so we need to be able to run all the analytics on the edge. To overcome this, we built a data pipeline that automates that data processing so the AI can give real-time insights on the team’s race strategy. This all happens on the edge in our Race Analytics Command Center, basically a big truck full of computers and GPU servers.
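As a rough, self-contained sketch of the kind of edge pipeline described above, the snippet below ingests raw telemetry records, drops implausible readings, enriches each record with historical context, and hands the result to a model callback. The field names, ranges, and baseline values are hypothetical; the real pipeline is far richer.

```python
# Simplified edge-pipeline sketch: ingest -> clean -> enrich -> model callback.
from typing import Callable, Iterable, Optional

HISTORICAL_BASELINE = {"avg_tire_temp_c": 95.0}  # illustrative historical context

def clean(record: dict) -> Optional[dict]:
    """Discard records with missing or physically implausible values."""
    if record.get("speed_kph") is None or not (0 <= record["speed_kph"] <= 400):
        return None
    if not (20 <= record.get("tire_temp_c", -1) <= 160):
        return None
    return record

def enrich(record: dict) -> dict:
    """Attach historical context so the model sees live and historical features together."""
    record["tire_temp_delta_c"] = record["tire_temp_c"] - HISTORICAL_BASELINE["avg_tire_temp_c"]
    return record

def run_pipeline(stream: Iterable[dict], on_record: Callable[[dict], None]) -> None:
    for raw in stream:
        cleaned = clean(raw)
        if cleaned is None:
            continue                      # skip corrupted telemetry
        on_record(enrich(cleaned))

# Usage with a fake stream and a print() standing in for the AI application:
fake_stream = [
    {"speed_kph": 312.0, "tire_temp_c": 101.5},
    {"speed_kph": None,  "tire_temp_c": 99.0},   # dropped by clean()
]
run_pipeline(fake_stream, on_record=print)
```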

Another challenge is missing data. For some data, like the tire slip angle, you can’t actually place a sensor to measure it, but it would be really useful to know for things like predicting tire degradation. We can actually use generative AI to deep-fake the missing data using historical data and correlations with other real-time data, in effect creating “virtual sensors” for these unmeasurable variables.
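The "virtual sensor" idea can be sketched as a learned imputation: fit a model on historical data to infer the unmeasurable channel from correlated measured channels, then use it at race time where only the measured channels arrive. For brevity this sketch uses a plain gradient-boosted regressor on synthetic data rather than a generative model, so treat it only as an illustration of the imputation pattern, not Zapata AI's method.

```python
# Hedged sketch of a "virtual sensor": infer an unmeasured channel from measured ones.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Historical laps where the target could be estimated offline (synthetic data):
measured = rng.normal(size=(2000, 5))    # e.g. speed, steering angle, lateral g, ...
slip_angle = 2.0 * measured[:, 1] - 0.7 * measured[:, 2] + rng.normal(scale=0.1, size=2000)

virtual_sensor = GradientBoostingRegressor().fit(measured, slip_angle)

# At race time only the measured channels arrive; the model supplies the missing one.
live_measured = rng.normal(size=(1, 5))
print("estimated slip angle:", virtual_sensor.predict(live_measured)[0])
```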

With the capability to predict race events like yellow flags, how do you envision Zapata AI transforming other industries beyond motorsports?

Our predictive capability is directly applicable to anomaly detection and proactive planning in a lot of emergency management situations – outage types of situations – across many industries. For example, in telco, imagine getting an alert ahead of time that your network was going to fail and being able to pinpoint which hop would fail first. That’s very useful in telco, but also for energy grids or anything that has networks of intermittently connected devices that are prone to outages.
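A minimal sketch of this kind of anomaly detection might look like the following: an Isolation Forest fit on healthy per-hop telemetry flags readings that drift toward failure well before an outright outage. The feature names, thresholds, and data are hypothetical; this is not a production outage predictor.

```python
# Sketch: flag anomalous network-hop telemetry before an outage.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Healthy historical telemetry per hop: e.g. latency (ms), packet loss, error rate
healthy = rng.normal(loc=[20.0, 0.1, 0.01], scale=[2.0, 0.05, 0.005], size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# Live readings: the second hop is degrading, which triggers an early warning
live = np.array([
    [21.0, 0.12, 0.012],    # normal
    [55.0, 0.90, 0.150],    # anomalous -> alert for this hop
])
print(detector.predict(live))   # 1 = normal, -1 = anomaly
```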

Given your extensive background in legal issues surrounding AI and data privacy, what are the key regulatory challenges that AI companies must navigate today?

For one, there isn’t one single uniform standard of regulations across continents or countries. For instance, Europe doesn’t necessarily have the same regulatory standards as the U.S., or vice versa. There are also export control and geopolitical issues surrounding AI and who can actually touch certain models, because it’s sensitive technology that can be used for good, but for bad as well. While we understand the concerns, I think there is some worry on the industry side that government agencies may be over-regulating a bit too quickly, before we even know what the challenges really are. That can have the unintended consequence of stifling innovation. Using our models to predict yellow flags is one thing, but using these same models to predict cancer can actually save lives. So over-regulating too quickly might prevent us from innovating in areas that could really be good for humanity.

How do you see the role of generative AI evolving in the next five years, particularly in business and automation?

As a result of the success of OpenAI, we’ve seen a lot of language-based applications that have created some efficiencies in the industry. But the impact has been largely limited to language areas like helping people create marketing copy or code. I think the impact of generative AI is really going to start accelerating, especially now that we are deploying numerical applications that have the potential to eliminate many of the industrial-scale problems businesses encounter. Being able to use generative AI to improve things like logistics or operations is going to create more revenue and reduce costs for businesses of all sizes.

What are the potential ethical implications of using AI to predict and influence real-time events, such as in racing, and how does Zapata AI address these concerns?

Well, the truth is we’ve been trying to predict things for a long time, so it’s not like that’s a big secret. Predictive analytics has been around for decades, if not longer. People have been trying to predict the weather for a long time. But new, more advanced capabilities will give us a much greater ability to be predictive. Can that be misused? Perhaps, but that can apply to any technology. I think generative AI really has the capability to transform the world as we know it for the better. Being able to predict things like climate events can allow people to evacuate sooner and save lives. Or, with cancer, having the capability to predict the disease altogether, or how quickly it might spread, is a game-changer. Even things like using generative AI to predict where there might be an incident in a crowd full of people can allow emergency services to figure out a better egress or exit plan ahead of time. The best part about this technology is that it transcends industries. Whether it’s a racing team trying to figure out the best time to pit a car, a bank trying to determine the best trading strategies, or a police officer doing risk assessment, generative AI modeling can help – and already is helping – people do their jobs better. There are risks to be mindful of, for sure, but I really believe this technology will have an outsized impact on creating enduring value for humanity.

How does Zapata AI ensure that its predictive models remain accurate and reliable over time, especially as the volume and complexity of data continue to grow?

Our models are living models, which makes our business model very sticky. Unlike traditional software, you can’t just deploy them, forget about them and not add features. These models are living things: if the data moves, your model becomes invalid. With Zapata AI, our whole engagement model – our platform and software – is built for this era, where you have to be responsive to changes in data that you don’t control. You have to constantly monitor these models, and you need an infrastructure that allows you to respond to those changes.
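One simple way to picture this monitoring loop is a drift check: compare the live distribution of each feature against the distribution the model was trained on and schedule a retrain when they diverge, for example with a Kolmogorov–Smirnov test. The threshold, feature count, and data below are illustrative assumptions, not Zapata AI's actual monitoring stack.

```python
# Sketch: detect feature drift between training data and live data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
train_features = rng.normal(loc=0.0, size=(10000, 4))                   # distribution at training time
live_features = rng.normal(loc=[0.0, 0.0, 0.8, 0.0], size=(2000, 4))    # feature 2 has drifted

DRIFT_P_VALUE = 0.01   # illustrative significance threshold
for i in range(train_features.shape[1]):
    stat, p = ks_2samp(train_features[:, i], live_features[:, i])
    if p < DRIFT_P_VALUE:
        print(f"feature {i}: drift detected (KS={stat:.3f}) -> schedule retraining")
    else:
        print(f"feature {i}: stable")
```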

Looking ahead, what is your ultimate vision for Zapata AI, and how do you plan to achieve it?

We’ve said from the very beginning that we want to solve the hardest, most difficult mathematical challenges for all types of industries. We’ve made a lot of progress in this regard already and plan to continue doing so. Ultimately, the platform that we built is very horizontal and we think that it can become an operating system, if you will, for model development and deployment in various environments.
