From the Founder: Some Thoughts on Our Space Force Announcement and Edge AI
Today we announced a Phase I award with the US Space Force to solve edge model deployment challenges for use cases like satellite life extension, on-orbit refueling, active debris removal, and the reuse & recycling of materials to build the foundation for assembly and manufacturing in space.
There’s a lot that we can say about our work with the US government, but what I want to focus on is the use cases…or better said, the environment in which these use cases take place: the edge.
The public cloud revolution gave enterprises the ability to quickly spin up compute and storage capacity. Previously, any new IT use case required months of provisioning physical rack space, and business units were limited to a certain amount of on-prem software. Data science as we know it now, where data teams can experiment quickly, sifting through vast amounts of data to find patterns and build ever more precise models, would be practically impossible in an on-prem world (or at least limited to the few hyperscale enterprises with the capital to build and maintain vast server farms).
But as enterprises large and small have developed these predictive algorithms (or “machine learning models”), they have come to learn that building the models is not enough. They need to operationalize these models so that the business actually uses them.
And this is why edge ML deployment has become so important. In fact, over 40% of our current pipeline involves the edge, spanning industries like manufacturing, telecommunications, automotive, energy & utilities, logistics, and travel & hospitality. The models can be built anywhere, but in order to generate value they need to be deployed at the edge, whether because of latency requirements measured in milliseconds or because of environments with limited or no connectivity, like gas pipelines or oil derricks far from broadband.
This is part of why I believe we are seeing accelerating growth at Wallaroo as a Series A startup competing with cloud and SaaS giants. Based on our experience working in or with data science teams in various organizations, we noticed that:
1) The enterprise data ecosystem is messy, with each business unit using its own data platforms and tools based on their own unique needs.
2) Where a model is trained and where a model is deployed can be two wholly different environments. For example, we worked with an AdTech client who was all-in on AWS, which was great for their customers also on AWS, but meant they were having a hard time deploying their AI for customers in different clouds.
As a result, enterprises have moved away from trying to find the one magic bullet platform (which often requires a migration that takes longer, costs more, and delivers less than expected) and instead are looking for ML deployment solutions that can fit into their data ecosystem as it exists now.
At Wallaroo our mission is to get our customers, whether private enterprise or public sector, to start generating value from their AI faster. We designed our model deployment platform to be agnostic to where and how a model is developed (that is, we support a model in any framework coming from any cloud or on-prem environment) and then made it simple to deploy and run in just about any environment (for example, take a model developed on SageMaker in AWS and deploy it to Azure or any other cloud, or on-prem, or at the edge).
Edge machine learning has been a forcing function for enterprises to rethink their approach to MLOps, particularly when it comes to the last mile of how they deploy, manage, and observe their models in production. Our work with the US Space Force on use cases for their satellites is certainly an extreme example, but any enterprise dealing with connected devices generating sensor data will soon, if they haven’t already, need to start rethinking how they will take a model built on a data scientist’s laptop and turn it into production-ready software that can run in compute-constrained edge environments.
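To make "compute-constrained" concrete, here is a minimal sketch of one common technique used when shrinking a model for the edge: post-training quantization of float32 weights down to int8, which cuts the memory footprint by 4x at the cost of a small, bounded rounding error. This is a generic illustration of the idea, not Wallaroo's platform or the Space Force work; the function names and random weights are made up for the example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization: map float32 weights to int8.

    Returns the int8 tensor plus the scale needed to recover approximate
    float values. (Hypothetical helper for illustration only.)
    """
    scale = np.max(np.abs(weights)) / 127.0   # one step of the int8 grid
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights on the edge device."""
    return q.astype(np.float32) * scale

# Stand-in for a trained layer's weights.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)

q, scale = quantize_int8(w)
print(w.nbytes, "->", q.nbytes)  # 4000 -> 1000 bytes: a 4x smaller artifact
# Worst-case reconstruction error is half a quantization step.
print(float(np.max(np.abs(w - dequantize(q, scale)))) <= scale / 2 + 1e-6)
```

Frameworks aimed at edge runtimes apply the same idea per layer (often with calibration data to pick better scales); the point here is simply that the deployed artifact is not the same object the data scientist trained, which is exactly the last-mile translation step described above.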