Edge machine learning deployment architecture on Wallaroo

Deploying machine learning to the edge raises a distinct set of questions:
  • How do you deploy a model to an environment that might not have consistent connectivity, or any connectivity at all?
  • How do you run that model efficiently in power and compute constrained environments?
  • How do you monitor the ongoing accuracy of predictions in a live environment?
  • How do you manage versioning to make sure all devices have the latest model?
  • How do you run A/B tests or stage experiments on a subset of locations or devices to validate before rolling out to all clients?
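On the last question, one common way to stage an experiment on a subset of devices is deterministic cohort assignment: hash each device ID together with the experiment name into a stable bucket, and send the candidate model only to devices below a target fraction. The sketch below is illustrative, not Wallaroo's API; the function and device names are hypothetical.

```python
import hashlib

def in_experiment(device_id: str, experiment: str, fraction: float) -> bool:
    """Deterministically assign a device to an experiment cohort.

    Hashing device_id together with the experiment name yields a stable,
    roughly uniform bucket in [0, 1); devices whose bucket falls below
    `fraction` receive the candidate model.
    """
    digest = hashlib.sha256(f"{experiment}:{device_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < fraction

# Hypothetical fleet: roll the candidate model out to ~10% of devices first.
devices = ["cam-001", "cam-002", "cam-003", "cam-004"]
cohort = [d for d in devices if in_experiment(d, "detector-v2", 0.10)]
```

Because assignment depends only on the device ID and experiment name, a device stays in the same cohort across restarts and reconnects, which matters when connectivity is intermittent.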
An edge ML deployment involves four main components:
  1. The model artifact (for example, a model exported from a notebook), which is the end result of model development
  2. The model registry managing which models go to which devices
  3. The “fat edge” environment where IoT devices are managed — not just the ML models but everything that goes with fleet management like software updates, security, data aggregation going into and out of the IoT devices, etc.
  4. The IoT device itself: sensors capturing external data, plus software (including the ML model) running under constrained compute, power, and possibly intermittent connectivity.
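The second component, the model registry, can be sketched in a few lines: it records which model version each device group should run, and the fleet manager compares what devices report against those assignments to find stale deployments. This is a minimal illustration under assumed names (`Registry`, `ModelVersion`, the device IDs), not Wallaroo's actual SDK.

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    name: str
    version: int
    artifact_uri: str  # where the packaged model lives, e.g. object storage

@dataclass
class Registry:
    """Track which model version each device group should be running."""
    assignments: dict = field(default_factory=dict)  # group -> ModelVersion

    def promote(self, group: str, model: ModelVersion) -> None:
        self.assignments[group] = model

    def stale_devices(self, reported: dict) -> list:
        """Devices whose reported version lags the registry's assignment."""
        return [dev for dev, (group, ver) in reported.items()
                if group in self.assignments
                and ver < self.assignments[group].version]

registry = Registry()
registry.promote("storefront-cameras",
                 ModelVersion("detector", 3, "s3://models/detector/3"))

# Each device periodically reports (group, running_version); the fleet
# manager pushes an update to anything the registry marks as stale.
stale = registry.stale_devices({"cam-001": ("storefront-cameras", 2),
                                "cam-002": ("storefront-cameras", 3)})
# → ['cam-001']
```

Keeping the registry as the single source of truth answers the versioning question above: a device is "current" only when the version it reports matches the registry's assignment for its group.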
Overview of the Wallaroo edge deployment

Wallaroo enables data scientists and ML engineers to deploy enterprise-grade AI into production more simply, more quickly, and far more efficiently.