Making AI Ubiquitous
How often are you surprised when you turn on the tap and clean water comes out? You probably don’t think twice about it except when something goes wrong. And that should be the goal for plumbing — when it’s nothing special and you don’t have to think about it.
But that’s not what AI is like now. Right now it takes so much time and effort to operationalize that it is reserved for a handful of use cases, or, even worse, stuck in proof-of-concept limbo. The delay doesn’t come from data wrangling, or from data scientists finding predictive patterns and turning them into machine learning models. The issue comes back to plumbing (getting from source to end user), in this case, AI plumbing.
The environment in which a data scientist trains a model is usually vastly different from the live production environment. This means weeks or months of additional reengineering to turn the model into production-ready code, often in the form of compute-heavy containers built on open source software like Spark and Kubeflow.
Now if those weeks of reengineering were all it took to get machine learning live and providing value for the business, that would already be significant friction, and on its own it would keep AI reserved for special use cases. But at least it is a hurdle with a straightforward DevOps solution. If AI were like other software, teams could just publish at the end of all this work and move on to the next model.
But AI is not like other software. AI needs to be continuously monitored to ensure that its predictions are still consistent with what is going on in the real world, and continuously iterated upon to update models as the world changes. What if you have a great recommendation engine but consumer tastes change? Or you have a predictive maintenance model that works in summer, when the weather is warm and dry, but as the climate cools, the noises picked up by your sensors no longer mean “this machine is in danger of breaking down” but rather “there are a few extra creaks as the machine warms up in the cold”?
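One common way to catch this kind of drift is to compare the distribution of live model scores against a training-time baseline. Below is a minimal, generic sketch using the population stability index (PSI); the variable names, sample data, and the usual 0.1/0.25 rule-of-thumb thresholds are illustrative, not part of any particular product.

```python
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index: how far the live score
    distribution has drifted from the training-time baseline."""
    # Equal-probability bin edges taken from the baseline scores
    inner_edges = np.quantile(baseline, np.linspace(0, 1, bins + 1)[1:-1])

    def fractions(values):
        idx = np.searchsorted(inner_edges, values)  # bin index per value
        return np.bincount(idx, minlength=bins) / len(values)

    base_pct = np.clip(fractions(baseline), 1e-6, None)  # floor avoids log(0)
    live_pct = np.clip(fractions(live), 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores observed at training time
stable = rng.normal(0.0, 1.0, 10_000)    # live scores, world unchanged
drifted = rng.normal(0.8, 1.0, 10_000)   # live scores after tastes shift

print(round(psi(baseline, stable), 3))   # near zero: no drift
print(round(psi(baseline, drifted), 3))  # well above 0.25: time to retrain
```

A check like this is cheap enough to run on every batch of live predictions, which is exactly why leaving it to manual oversight is such a waste.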
Most machine learning solutions provide almost no easy visibility into the ongoing performance of models in production. Combine deployment processes that take weeks or months with little to no visibility, and the result is slow iteration and flying blind, unless each model is carefully overseen by individual data scientists and ML engineers.
In short, this means every AI model has to be “special,” cared for like a pet from deployment through ongoing oversight.
We make AI common, unspecial, boring: as ubiquitous as running water when you turn on the faucet. By making deployment as simple as a single line of Python and as fast as a few seconds, and by automating model observability to alert you immediately when models start performing outside of established benchmarks, we make the plumbing of AI something you no longer have to think about.
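To picture what “alerting against established benchmarks” means in practice, here is a purely hypothetical sketch; the `ModelMonitor` class, metric labels, and thresholds are illustrative inventions, not the actual product API.

```python
class ModelMonitor:
    """Alert when a live metric drifts outside its benchmark band.
    (Hypothetical sketch; names and thresholds are illustrative.)"""

    def __init__(self, benchmark: float, tolerance: float):
        self.low = benchmark - tolerance
        self.high = benchmark + tolerance
        self.alerts = []

    def record(self, label: str, value: float) -> bool:
        """Log a metric observation; return True if it triggered an alert."""
        triggered = not (self.low <= value <= self.high)
        if triggered:
            self.alerts.append((label, value))
        return triggered

# Benchmark established at deployment: 92% accuracy, plus or minus 3 points
monitor = ModelMonitor(benchmark=0.92, tolerance=0.03)
monitor.record("week 1 accuracy", 0.93)   # inside the band: quiet
monitor.record("week 12 accuracy", 0.84)  # outside the band: alert
print(monitor.alerts)  # [('week 12 accuracy', 0.84)]
```

The point is that the benchmark band is set once, at deployment, and the alerting runs automatically from then on; no one has to babysit the model.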
Suddenly the friction your business units face in integrating AI, testing its effectiveness, and monitoring its ongoing accuracy is gone. AI is no longer for special use cases; it is a tool available across the business wherever it makes sense.
That’s our aim with AI: make it so easy to deploy and manage that your business doesn’t think twice about using it. Teams can rely on Wallaroo to handle the plumbing and focus on finding where AI makes sense to use. AI is no longer special. It becomes common. You no longer have to think about it.
For the business it’s just ubiquitous.