Emerging tech is messy, and artificial intelligence is perhaps the messiest. Gartner anticipates that by 2026, organizations that develop trustworthy, purpose-driven AI will see over 75% of their AI innovations succeed, compared to 40% among those that don't. Anyone who has been following the news on AI in 2022 knows about the high rate of AI project failures: somewhere between 60% and 80% of AI projects fail, according to various news sources and analysts.
These numbers are concerning on their own, but they become even more painful when set next to the broader rate of IT project failure: at least two-thirds of large tech projects fail to deliver on time, within budget, or to user expectations. Why is the failure rate of AI projects so high, even by the standards of the broader IT landscape? What causes failure at such catastrophic levels?
We've previously discussed the business philosophy that successful companies employ to see value from AI. While that philosophy is certainly valuable, it's hard to apply a mission statement for the smart enterprise directly to an organization. In this post, we'll look at the technical side of AI implementation: how API-led connectivity applies to AI and how it can lower the high failure rate of these projects.
Before we dig too far into AI and API-led: if you're not already familiar with the API-led approach to integration, I recommend learning what API-led connectivity is before going any further.
What do integration and AI have in common? Fundamentally, any AI implementation is an integration problem, and the high failure rate is largely a failure of the integrations surrounding the AI itself. A solid integration architecture is a must for AI to succeed.
Think about AI and integration like plumbing. Integration primarily concerns itself with data flow, while AI relies on that data to provide insights to the organization. AI augments the data, but without a good way of moving data to and from the AI system, any AI implementation will fall flat. More than the model itself, the surrounding integration architecture often predicts success or failure for AI projects.
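As a concrete (if simplified) illustration of that plumbing, here's a minimal sketch assuming a FastAPI-style service; the endpoint, field names, and scoring stand-in are invented for illustration and aren't specific to any particular product:

```python
# Minimal sketch: the model sits behind a small API, so data flows in and
# insights flow out through a stable contract instead of direct calls into the model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    features: dict[str, float]   # whatever upstream systems can actually supply

class PredictionResponse(BaseModel):
    score: float

def run_model(features: dict[str, float]) -> float:
    """Stand-in for the real model call (hosted service, library, etc.)."""
    return sum(features.values()) / max(len(features), 1)

@app.post("/predictions", response_model=PredictionResponse)
def predict(request: PredictionRequest) -> PredictionResponse:
    # The API contract is the integration point: callers never touch the model directly.
    return PredictionResponse(score=run_model(request.features))
```

The point isn't the framework; it's that every consumer of the model goes through the same contract, so the model can change without breaking the systems around it.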
AI also struggles with problems that have already been dealt with in the integration space. Chief among these are large, monolithic architectures. Oftentimes, an AI system is designed to solve a single problem, and as a result it is highly interconnected and serves only a single use case.
In the integration space, we’ve seen this create problems for scalability and development time. The same is true for AI systems. Monolithic models take a long time to develop, can’t be reused in upcoming projects to increase efficiency, and don’t scale effectively.
If you're already familiar with API-led, these ideas will sound familiar. API-led artificial intelligence is a simplified, standardized way of viewing AI. The goal is to create small, reusable building blocks that you can deploy throughout your organization. For readers with more AI expertise, this might seem impossible, or at least ill-advised: how can you break an AI model up into smaller pieces? While there are certainly models that can't be broken down, many systems can.
Let's use a recommender system for a fast food restaurant as an example. Suppose you want to recommend menu items to your customers as they order, and you'd like to deploy this system for drive-thru, in-store, and mobile ordering. The traditional way to solve this is to build a separate system for each channel: the team builds a model for drive-thru, tests it, and deploys it, then starts all over again for the next one. While some of the data going into these systems might be the same, there is data relevant to drive-thru that we don't have for in-store or mobile.
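To make the contrast concrete, here's a rough sketch of what that traditional approach tends to look like; the function names, signals, and menu items below are hypothetical, but the shape is familiar: each channel ships its own end-to-end recommender, and the shared logic gets copied along the way.

```python
# Hypothetical "one model per use case" approach: each channel owns an
# end-to-end recommender, and the shared logic is duplicated in each one.

def recommend_drive_thru(location_id: str, hour: int, queue_length: int) -> list[str]:
    # Shared signals (offers, time of day, location) implemented here...
    items = ["fries", "shake"] if hour >= 11 else ["coffee", "hash browns"]
    # ...plus drive-thru-only logic (keep the line moving when it gets long).
    return items[:1] if queue_length > 5 else items

def recommend_in_store(location_id: str, hour: int) -> list[str]:
    # The same shared signals, re-implemented (and re-tested) a second time.
    return ["fries", "shake"] if hour >= 11 else ["coffee", "hash browns"]

def recommend_mobile(location_id: str, hour: int, past_orders: list[str]) -> list[str]:
    # And a third time, with mobile-only personalization bolted on.
    base = ["fries", "shake"] if hour >= 11 else ["coffee", "hash browns"]
    return [item for item in past_orders if item in base] or base
```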
What does the API-led approach look like? First, we identify the data features that are relevant to all three systems, such as current offers, time of day, and restaurant location. From these, we build a single core model that can serve all three use cases. It won't solve any of them completely on its own, but we can augment its results for each use case with smaller add-on models. That common model can then be reused for the two other use cases without modification, dramatically reducing the time to market for subsequent, related systems.
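Under the same hypothetical setup, the API-led version might look like this: one reusable core model consumes the common features, and each channel adds a thin layer on top of its results.

```python
# Hypothetical API-led version: one reusable core model plus thin,
# channel-specific add-ons that adjust its output.

def core_recommendations(location_id: str, hour: int, current_offers: list[str]) -> list[str]:
    """The shared building block: uses only features all three channels have."""
    base = ["fries", "shake"] if hour >= 11 else ["coffee", "hash browns"]
    return current_offers + base

def drive_thru_recommendations(location_id: str, hour: int,
                               current_offers: list[str], queue_length: int) -> list[str]:
    # Reuse the core model, then apply drive-thru-only logic to its results.
    items = core_recommendations(location_id, hour, current_offers)
    return items[:1] if queue_length > 5 else items

def mobile_recommendations(location_id: str, hour: int,
                           current_offers: list[str], past_orders: list[str]) -> list[str]:
    # Same core model, different add-on: lightweight personalization for mobile.
    items = core_recommendations(location_id, hour, current_offers)
    return [item for item in past_orders if item in items] or items
```

Each add-on can evolve and ship on its own schedule because its contract with the core model never changes, and a channel with no special requirements (like in-store here) can consume the core model's output directly.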
API-led AI brings the benefits of API-led connectivity to AI implementation: a future-proof foundation that accelerates development through reuse and abstraction. API-led has already proven to be an effective answer to exactly the problems artificial intelligence needs to address.
Learn more about the benefits of API-led by diving into MuleSoft's catalog of integration case studies, or contact us.