Agile for AI is not a fully solved problem. We tackled it by combining our Agile thought leadership with our data science, data engineering, and DevOps expertise to adapt standard Agile approaches for this kind of work. The result is a proven, repeatable Agile framework we call the Rapid AnalytiX Framework.
A Framework for Effective AI Strategy and Scaling
Rapidly innovate and build solutions that scale with effective management of the machine learning lifecycle. MLOps is one of the most valuable sets of practices for making AI and ML work. Automated routines for building, training, and deploying your ML solutions will accelerate their delivery and pay long-term dividends.
What is MLOps?
MLOps is a set of practices and tools that introduces reproducibility, transparency, and automation into the processes for training, testing, deploying, monitoring, and governing AI solutions. That automation allows teams to explore new ideas, get rapid feedback on them, lower the costs of development and operation, and focus on the ideas that will be most effective.
AI projects involve a great deal of exploration and experimentation because the problems AI solves are rarely linear. To approach them systematically and efficiently, model implementation needs to be fast, reliable, and repeatable, which means automating tests, training, and deployment pipelines. The project team also needs to standardize infrastructure all the way from development into production and build solid security in throughout.
Monitoring the AI model to confirm that it continues to work as expected is an essential element of MLOps. Monitoring is vital because ML algorithms adapt in response to new data and experiences to improve without human direction. We need to monitor the model outputs to detect and correct for model drift over time.
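One widely used drift signal is the Population Stability Index (PSI), which compares the model's output distribution in production against the distribution seen at training time. The sketch below is a minimal pure-Python illustration; the 0.2 alarm threshold is a common rule of thumb, and the simulated score data is illustrative only:

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """PSI compares two score distributions; > 0.2 is a common drift alarm level."""
    srt = sorted(expected)
    # Interior cut points taken from the expected (training-time) distribution's quantiles.
    edges = [srt[int(len(srt) * i / bins)] for i in range(1, bins)]

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            i = 0
            while i < len(edges) and v >= edges[i]:
                i += 1
            counts[i] += 1
        # Small floor avoids log(0) / division by zero for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_fractions(expected), bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # scores at training time
drifted = [random.gauss(1.0, 1.0) for _ in range(5000)]   # production scores after a shift
print(population_stability_index(baseline, baseline) < 0.1)  # True -> stable
print(population_stability_index(baseline, drifted) > 0.2)   # True -> drift alarm
```

In practice a check like this runs on a schedule against recent production scores and feeds the alerting triggers described later in this article.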
Benefits of MLOps
MLOps can reduce friction, remove bottlenecks and bring machine learning workflows into production. Some of the benefits you can expect to gain by introducing MLOps into your organization are:
Gain more opportunities for reuse through well-documented learning and hypothesis testing of ML models and their evolution.
Operate safely and securely through automated validation and testing procedures.
Improve continually through rapid feedback loops built on automated tooling and reliable infrastructure.
Meet your business needs by optimizing ML algorithms against business metrics that track model accuracy, bias, and production readiness.
6 Key Aspects of an MLOps Approach
MLOps covers multiple aspects of ML solution lifecycle. We’ve divided our MLOps approach into six core areas that provide an effective foundation for introducing MLOps and maximizing its advantages.
1. Understanding the Business Need
Perhaps the most important aspect of MLOps is ensuring that any AI solution solves a clear business problem. AI projects are challenging and don’t follow a linear, predictable path. There are many unknowns, and this means there is often a lot of exploration involved. Because of this, it’s very important that we approach this work in an Agile manner with rapid feedback loops in place to apply learning. This approach makes sure you are building the right thing and building the thing right.
Understanding the business problems and goals will guide the development approach and influence the design of the model. At the beginning of an AI project, ideas are generated through discussions with users and stakeholders and documented using templates like the Lean Canvas.
Teams also capture the associated data sources and possible success metrics to help prioritize which ideas to move forward with. We timebox an initial prioritization of these scenario ideas.
2. Data Governance and Ingestion
ML solutions are only as good as their underlying data. An MLOps approach puts processes in place to ensure that data is effectively managed and governed so that your models remain as accurate as possible.
Data ingestion pipelines must onboard every data source reliably to feed the model. These processes need to apply data quality business rules, security controls, and access policies. Wherever you've established data governance policies and quality metrics, your team should apply them automatically as much as possible.
One example of such a process is using automated thresholds to trigger alerts or stop a job when it exceeds defined limits. You might set a minimum fill rate for key data elements to track completeness, or a maximum rate of unexpected values for a data element to enforce data quality standards. It is also worth setting data volume thresholds for each source so that unexpected peaks or drops in volume are flagged for further analysis.
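The thresholds just described can be expressed as a small rule set that a pipeline evaluates before processing a batch. The sketch below is a hypothetical illustration; the field names, limits, and volume bounds are assumptions, not a standard schema:

```python
# Illustrative quality rules: minimum completeness for key fields, a cap on
# unexpected values, and volume bounds per source. Names and limits are invented
# for this sketch and would come from your data governance policies in practice.

RULES = {
    "customer_id": {"min_completeness": 0.99},
    "country":     {"min_completeness": 0.95, "allowed": {"US", "CA", "MX"},
                    "max_unexpected": 0.02},
}
VOLUME_BOUNDS = (900, 1100)  # expected rows per batch for this source

def check_batch(rows):
    """Return a list of violations; an empty list means the batch may proceed."""
    violations = []
    if not VOLUME_BOUNDS[0] <= len(rows) <= VOLUME_BOUNDS[1]:
        violations.append(f"volume {len(rows)} outside {VOLUME_BOUNDS}")
    for field, rule in RULES.items():
        values = [r.get(field) for r in rows]
        filled = [v for v in values if v not in (None, "")]
        completeness = len(filled) / len(rows)
        if completeness < rule["min_completeness"]:
            violations.append(f"{field}: completeness {completeness:.2%}")
        if "allowed" in rule:
            bad = sum(1 for v in filled if v not in rule["allowed"])
            if bad / len(rows) > rule["max_unexpected"]:
                violations.append(f"{field}: {bad} unexpected values")
    return violations

batch = [{"customer_id": i, "country": "US"} for i in range(1000)]
print(check_batch(batch))  # [] -> all rules pass, the job can continue
```

A non-empty result would trigger an alert or halt the ingestion job, exactly the kind of automated gate described above.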
3. Model Development
Once the data has been ingested, aggregated, and preprocessed, it is ready to be used for model development. For a model to maximize its eventual business impact, it must perform well in solving the business problem. Additionally, it needs to align with business metrics for model accuracy, scalability, reproducibility, and availability.
To manage model training effectively, projects must control data changes, environment variations, and model parameters. A proper training process should be:
Reproducible: controlling dependencies and sources of randomness so runs can be repeated.
Scalable: able to handle larger training datasets and parallelized pipelines.
Portable: using containerization to envelop all dependencies and changes.
Efficient: training models on the most time- and cost-efficient hardware to reduce expensive compute overhead.
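A minimal sketch of the reproducibility point: pin every source of randomness and fingerprint the exact training configuration so a run can be identified and repeated. The config keys and hashing scheme here are assumptions for illustration, not a fixed standard:

```python
import hashlib
import json
import random

# Hypothetical training configuration; in a real pipeline this would also cover
# data snapshot versions and library versions captured by the container image.
config = {"learning_rate": 0.01, "epochs": 20, "seed": 42}
random.seed(config["seed"])  # also seed numpy/framework RNGs in a real pipeline

def fingerprint(obj):
    """Stable hash of a config (or dataset snapshot) used as a run identifier."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

run_id = fingerprint(config)
shuffled = random.sample(range(10), 10)  # deterministic given the pinned seed
print(run_id, shuffled)
```

Because the seed is recorded in the config and the config is hashed, rerunning with the same fingerprint reproduces the same data shuffle and, by extension, the same training behavior.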
When we’re ready to build a model, we adopt the approach of building in small increments and regularly validating for technical correctness and added value. Sometimes, we may find that we actually can’t get this to work as we want it to. For example, maybe a model with sufficient accuracy just doesn’t run fast enough, or the data doesn’t support the kind of predictive inference that our users would get value from. In that case, validation serves as a systematic way to cut our losses and pivot to a new solution approach.
4. Model Operationalization (Model Ops)
MLOps introduces reproducible processes for taking ML models from development into production. When a new, improved model is ready, you can take full advantage of it with confidence that it was trained and deployed correctly.
You can automate the ML production pipelines to retrain models with new data, depending on your use case. For example, retraining can run on demand, on a schedule, when new training data becomes available, or when model performance degrades significantly. Deployment can occur on multiple platforms, including cloud, hybrid, edge, and mobile, depending on the model's size, computational cost, data security and privacy restrictions, and how quickly results are needed.
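The retraining triggers above can be captured in a small decision function that a pipeline scheduler evaluates. This is a sketch under assumed thresholds and state fields; the names and limits are illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical trigger thresholds; tune these per use case.
RETRAIN_INTERVAL = timedelta(days=30)
ACCURACY_FLOOR = 0.90
MIN_NEW_ROWS = 10_000

def should_retrain(state, *, now, requested=False):
    """Return the trigger that fired, or None if no retraining is needed."""
    if requested:
        return "on demand"
    if now - state["last_trained"] >= RETRAIN_INTERVAL:
        return "schedule"
    if state["new_rows"] >= MIN_NEW_ROWS:
        return "new data"
    if state["accuracy"] < ACCURACY_FLOOR:
        return "degradation"
    return None

state = {"last_trained": datetime(2024, 1, 1), "new_rows": 2_000, "accuracy": 0.94}
print(should_retrain(state, now=datetime(2024, 1, 10)))  # None -> no trigger fired
print(should_retrain(state, now=datetime(2024, 2, 15)))  # schedule
```

Ordering the checks gives the triggers a priority: an explicit request wins, then the schedule, then data- and performance-driven conditions.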
5. Model Monitoring
We monitor three main categories of metrics to observe model performance with our MLOps approach.
Model Performance Metrics: monitor model speed, availability, and scaling.
Model Quality Metrics: monitor model accuracy, false positives, and false negatives.
Model Use Metrics: track how often the model is used and how it is being used.
It’s important to automate model monitoring and use triggers to send warnings or take action based on defined thresholds or metric changes.
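As a sketch of such automated triggers, the check below evaluates one metric from each of the three categories against a threshold. The metric names and limits are assumptions for illustration:

```python
# Illustrative thresholds spanning the three metric categories above.
THRESHOLDS = {
    "latency_p95_ms":    {"max": 250},   # performance: speed
    "accuracy":          {"min": 0.90},  # quality: accuracy
    "requests_per_hour": {"min": 100},   # use: a silent model is also a problem
}

def evaluate(metrics):
    """Return (level, breaches); 'alert' means act, 'ok' means keep watching."""
    breaches = []
    for name, limit in THRESHOLDS.items():
        value = metrics[name]
        if "max" in limit and value > limit["max"]:
            breaches.append(f"{name}={value} above {limit['max']}")
        if "min" in limit and value < limit["min"]:
            breaches.append(f"{name}={value} below {limit['min']}")
    return ("alert" if breaches else "ok"), breaches

level, breaches = evaluate({"latency_p95_ms": 310, "accuracy": 0.95,
                            "requests_per_hour": 480})
print(level, breaches)  # alert ['latency_p95_ms=310 above 250']
```

In production the "alert" level would route to a paging or ticketing system, or trigger the retraining pipeline described earlier.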
6. Model Security
Many emerging threats can compromise or manipulate a machine learning system. Common AI model security risks include adversarial attacks, data poisoning, online model attacks, distributed denial of service (DDoS), transfer learning attacks, and data phishing.
Ensuring that the data fed into the model is free from bias, accurate, complete, untampered with, and secure is paramount to receiving correct output from the model.
Additionally, while building models on existing architectures has many benefits (cost, effort, accuracy), doing so also exposes the newer models to attack without proper security measures.
Learn How MLOps Can Work For Your Organization’s Goals
MLOps is a complex set of practices and tools, but it is necessary for organizations and enterprises worldwide to grow and scale their AI investments.
Read our complimentary eBook, MLOps 101: Scalable Processes for ML Development & Use, to learn more about this critical development in the machine learning and AI sector.
Contact Our Experts