AI holds the promise to transform core industry functions, from increasing the resilience of complex supply chains for manufacturers and retailers to delivering more accurate, real-time risk assessment in financial services and cybersecurity. It’s no wonder nearly 85% of CEOs identify AI as a strategic priority, driving increased enterprise investment in data science even as macro volatility has forced cuts elsewhere.
However, once you go past the headline numbers, you see increasing disillusionment: only 10% of companies achieve significant ROI from AI. While executives talk up their investments in data and AI to investors (mentions of artificial intelligence on earnings calls nearly quintupled from 2015 to 2020), actual implementation in the majority of enterprises often breeds internal cynicism.
ROI in AI: It’s not just about the model
When talking about the success of AI, often the conversation focuses on particular models in production.
Any one model can fail or succeed, but the real measure of success for data and AI in an organization is whether they are embedded throughout your products and services in a way that drives measurable business value and is easy to scale and update. That means going from one initial machine learning MVP to multiple use cases with scalable, repeatable processes, plus the right insights to understand when your ML is underperforming so you can iterate and optimize. Put another way, AI has to become repeatable, scalable, and measurable in order to be successful for the enterprise:
- Repeatable: The most common complaint we hear from customers is that standing up new AI use cases feels like reinventing the wheel each time. The more data teams can rely on standardized, automated processes, even across different data environments and use cases, the more willing different business functions will be to test and adopt AI in their own operations. But this requires MLOps that can regularly accomplish in seconds what previously took weeks or months. For example, can you quickly and easily conduct data access audits for compliance in highly regulated industries like financial services or healthcare and life sciences?
- Scalable: If your compute costs and headcount grow linearly as you expand your use of AI, you will often lose any advantage AI delivered. Running models (also known as inferencing or scoring) against live data pipelines is server intensive, and it can become cost prohibitive as you scale up the number of models in production, analyze big data, or manage complex models like neural networks. Additionally, the first instinct when ML investments fail is to throw more headcount at the problem: more data scientists or ML engineers. We take a different approach: what tools can you use so that you need less headcount for the intense but simple work that should be automated?
- Measurable: For continued buy-in, leaders need to evangelize how returns from AI take different forms, such as cutting costs (e.g., reducing OpEx and CapEx through better predictive maintenance), incremental revenue (e.g., reaching more profitable customers through improved segmentation), or loss avoidance (e.g., preventing costly security breaches). What’s key is that these are goals for the broader business, not just the IT organization, which primarily measures ROI through cost or risk reduction.
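As a rough illustration of the measurability point, the three forms of return above can be rolled into a single ROI figure. Everything here is a hypothetical sketch; the dollar amounts and function names are placeholders, not benchmarks or anyone's actual results:

```python
# Sketch: combining the three forms of AI return into one ROI figure.
# All dollar amounts are illustrative placeholders, not benchmarks.

def ai_roi(cost_savings, incremental_revenue, loss_avoidance, total_investment):
    """Simple ROI = (total returns - investment) / investment."""
    total_return = cost_savings + incremental_revenue + loss_avoidance
    return (total_return - total_investment) / total_investment

# Hypothetical program spanning all three return categories
roi = ai_roi(
    cost_savings=400_000,         # e.g., reduced OpEx from predictive maintenance
    incremental_revenue=250_000,  # e.g., more profitable customer segmentation
    loss_avoidance=150_000,       # e.g., a prevented security breach
    total_investment=500_000,     # platform, compute, and headcount
)
print(f"ROI: {roi:.0%}")  # prints "ROI: 60%" on these illustrative numbers
```

The point of writing it down, even this crudely, is that loss avoidance and cost cutting count toward the same business-level number as new revenue, which is how the broader organization, not just IT, should score the investment.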
Success with AI needs executive engagement
When we look at our own customers, the difference between successful and unsuccessful AI investments comes down to executive engagement and setting expectations. That is, more than just providing strategic vision and resources like headcount and budget, these C-suite executives are actively engaged in driving operational excellence in data science across their organizations and taking the long view.
They have a broad understanding of the machine learning life cycle (fig 2), so they can home in on the bottlenecks and push their organizations forward by asking the right questions, like:
- Is it an upstream data engineering issue around being able to ingest and process the data needed for the problem we are looking to solve? E.g., if we are looking to build a predictive maintenance model, are our data pipelines set up for streaming IoT sensor data?
- Or is it a midstream issue around the difficulty of developing ML models? E.g., do we have data scientists with the right industry expertise, who can not just pull a demand forecasting model off the shelf but understand the seasonal dynamics of our products?
- Or is the bottleneck downstream, in the inability to replicate a model’s dev-environment performance in production, or to scale up the processes for deploying, monitoring, and managing dozens or even hundreds of models at once?
- How repeatable or efficient are the processes at each stage in this lifecycle? What can I automate? What does this cost me in people and time if I go from 1 model to 5 to 20?
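The last question above, what it costs to go from 1 model to 5 to 20, can be made concrete with a back-of-the-envelope model. The hours, rates, and the 80% automation figure below are hypothetical assumptions for illustration only:

```python
# Sketch: back-of-the-envelope monthly ops cost for scaling from 1 to N models.
# All rates and the automation factor are hypothetical assumptions.

def ops_cost(n_models, hours_per_model_per_month, hourly_rate, automation_factor=1.0):
    """Monthly operations cost; automation_factor < 1 models the effect
    of standardized, automated processes shared across models."""
    return n_models * hours_per_model_per_month * hourly_rate * automation_factor

for n in (1, 5, 20):
    manual = ops_cost(n, hours_per_model_per_month=40, hourly_rate=100)
    automated = ops_cost(n, 40, 100, automation_factor=0.2)  # assume 80% of toil automated
    print(f"{n:>2} models: manual ${manual:,.0f}/mo vs automated ${automated:,.0f}/mo")
```

Even a toy model like this makes the executive question answerable: purely manual processes scale cost linearly with model count, so the gap between the two columns is exactly what automation is worth at 20 models versus 1.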
The Last Mile of the ML Lifecycle
Without minimizing the importance of the upstream or midstream, we often find that the last mile of ML — actually getting it live, into production, and generating business value — is an afterthought, even though live production is where the ROI from AI investments is realized. According to Gartner, only 53% of projects even make it from prototype to production. From this same Gartner report: “CIOs and IT leaders find it hard to scale AI projects because they lack the tools to create and manage a production-grade AI pipeline.”
That is why Wallaroo is hyper focused on the “last mile” of machine learning:
- Deployment: Deploy models with a single line of code. Wallaroo provides a high level Python SDK and lower level APIs to give you the widest range of integration options for your model deployment strategy, all from the convenience of your familiar tools and workflows.
- Running/Inferencing: Scale to handle more models, more data, or more complexity with speed and ease. Wallaroo’s advanced resource management ranges from manual, to basic, to advanced autoscaling.
- Observability & Insights: Monitor models and quickly identify any sources of performance degradation. Additionally, the Wallaroo platform provides simplified monitoring capabilities, with a full audit view of who accessed which model and where.
- Optimization: Update models without downtime to the business, with easy capabilities for A/B testing and experimentation.
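The optimization point above, swapping models without downtime via A/B testing, can be sketched generically. To be clear, this is not the Wallaroo API; the functions and the weighted-routing approach below are a hypothetical illustration of the underlying technique:

```python
import random

# Generic sketch of weighted A/B traffic splitting between two model versions.
# The "models" here are plain functions; in a real platform the router would
# sit in front of deployed model endpoints.

def model_a(x):
    return x * 2      # current production model (placeholder logic)

def model_b(x):
    return x * 2 + 1  # candidate model under test (placeholder logic)

def ab_router(x, candidate_share=0.1, rng=random.random):
    """Send roughly candidate_share of traffic to the candidate model,
    the rest to production; returns (version, prediction)."""
    if rng() < candidate_share:
        return "b", model_b(x)
    return "a", model_a(x)
```

Because the split is decided per request, the candidate can be rolled in gradually by raising `candidate_share`, or backed out instantly by setting it to 0, with no redeploy and no downtime, which is the core idea behind no-downtime model updates.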
ML is hard. It doesn’t have to be.
As machine learning and AI become core to how businesses operate, every executive, from the CEO to the CTO to the COO, must get familiar with the basic machine learning process and where it might be falling short in their organization. By learning to ask the right questions and how to measure returns, they can get a full picture of how their AI investments drive improvements to the business.
It’s also important to understand what an organization should own completely and where it needs outside help. Machine learning is hard, so focusing your own internal resources on where your core assets and IP lie increases the chances of success and can bring agility.
If you are looking to start generating return from your AI, reach out to us at firstname.lastname@example.org. We help enterprises understand how to take machine learning from proof of concept to a powerful tool impacting operations in production. We look forward to helping you identify blockers to scaling AI from isolated experiments to a broad, enterprise-wide capability.