
December 29, 2017

Does My Model Smell Funny? Top Takeaways from Technical Debt in Machine Learning Systems


Sniffing Out Technical Debt in Machine Learning Solutions

With the democratization and open-sourcing of machine learning tools, there has been an explosion of interest in incorporating machine learning into existing systems, or in building standalone machine learning solutions.

However, as Google researchers Sculley et al. astutely observe in their paper, Hidden Technical Debt in Machine Learning Systems, the widespread adoption and implementation of these tools and techniques is both a blessing and a curse. They allow for increasingly impressive solutions to long-standing problems in the industry (see image recognition or natural language processing), but they also bring with them new and unexpected opportunities for "technical debt". For the uninitiated, "technical debt" is a programming term commonly used as shorthand for the problems that arise when programs are built quickly, with an emphasis on the most convenient solution rather than the best one.

In the data science arena, new techniques emerge faster than existing systems can be updated to incorporate them, and you likely do not have time to dive into every white paper put out by the Googles and Facebooks of the world and internalize its main lessons. This post is an attempt to summarize Sculley et al.'s original paper for easy consumption.

The tech industry often uses "smells" to refer to easily identifiable coding patterns that usually indicate technical debt is accruing nearby. To help better understand the problems, I'll summarize each of the paper's main takeaways as a "smell", along with recommendations for mitigation.

As Sculley et al. point out, "[machine learning systems' technical debt] may be difficult to detect because it exists at the system level rather than the code level." In other words, issues caused by the hasty deployment of these systems often arise because of factors external (or in addition) to how the code was written.

Here are five common symptoms that suggest your model might be starting to smell funny:

1. General Model Smells
What to look for:
  • Data types are implemented inconsistently throughout the model.
  • Various parts of the model are written in different programming languages.
How to mitigate:
  • Decide on and commit to data types in advance, and socialize handling best practices among the team.
  • Enforce a one-language-per-model rule, and make sure your team understands why it matters (e.g. it makes it easier to build effective testing and to onboard new team members).
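Committing to data types in advance can be as simple as a single shared schema that every entry point to the model validates against. Here is a minimal sketch of that idea; the column names and types are hypothetical:

```python
# One agreed-upon schema, defined in exactly one place.
EXPECTED_DTYPES = {
    "user_id": int,
    "signup_date": str,   # ISO-8601 string, parsed in exactly one place downstream
    "score": float,
}

def validate_row(row: dict) -> dict:
    """Coerce a raw record to the agreed-upon types, failing loudly on surprises."""
    validated = {}
    for column, expected_type in EXPECTED_DTYPES.items():
        if column not in row:
            raise KeyError(f"missing expected column: {column}")
        validated[column] = expected_type(row[column])
    return validated

# Raw inputs often arrive as strings; the validator normalizes them once.
row = validate_row({"user_id": "42", "signup_date": "2017-12-29", "score": "0.87"})
print(row)
```

Because every consumer goes through the same validator, a type disagreement surfaces as one loud error at the boundary rather than as subtle inconsistencies scattered through the model.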
2. Entanglement
What to look for:
  • Dependencies on external systems.
How to mitigate:
  • Develop a strategy to identify changes in prediction behavior when they occur.
  • Employ techniques for visualizing data entering and leaving your system.
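One lightweight way to catch changed prediction behavior is to compare a summary statistic of live predictions against a stored baseline. A minimal sketch, with the threshold and sample values chosen purely for illustration:

```python
import statistics

def prediction_drift(baseline: list, current: list, threshold: float = 0.1) -> bool:
    """Flag a shift in mean prediction between a stored baseline and live output."""
    return abs(statistics.mean(current) - statistics.mean(baseline)) > threshold

baseline = [0.52, 0.48, 0.50, 0.51]   # recorded when the model was deployed
current  = [0.71, 0.69, 0.73, 0.70]   # an upstream change has moved the distribution
print(prediction_drift(baseline, current))  # True
```

In practice you would monitor richer statistics than the mean, but even a check this simple turns a silent upstream change into a visible alert.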
3. Correction Cascades
What to look for:
  • Did a recent change in prediction accuracy correspond to data corrections in any of the systems it depends on?
How to mitigate:
  • Build the correction into your main model and remove the dependency, or simply accept the risk of relying on a system that was created to solve a different problem than the one your model addresses.
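The trade-off can be seen in a toy example: a model that silently patches another model's output shifts whenever the upstream model changes, while folding the correction into your own model removes the hidden dependency. All models and numbers here are hypothetical:

```python
def model_a_predict(x: float) -> float:
    """Upstream model, owned by another team and liable to change."""
    return 2.0 * x

def model_b_cascaded(x: float) -> float:
    """Fragile: layers a correction on A, so any change to A shifts B too."""
    return model_a_predict(x) + 0.5

def model_b_standalone(x: float) -> float:
    """Same behavior today, but learned and owned entirely by model B."""
    return 2.0 * x + 0.5

print(model_b_cascaded(3.0), model_b_standalone(3.0))  # 6.5 6.5
```

The two versions agree right now, but only the standalone one stays correct if the owners of model A retrain or "fix" it for their own purposes.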
4. Feedback Loops
What to look for:
  • The predictions of your machine learning product are influencing the systems that its behavior is based on.
How to mitigate:
  • Identify and isolate which data your system is using and what systems (software or business processes) are in turn using its predictions as inputs.
  • Also consider incorporating some degree of randomization into your solution's implementation.
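Epsilon-greedy-style randomization is one common way to weaken a feedback loop: serve the model's top pick most of the time, but occasionally serve a random alternative so future training data is not shaped entirely by past predictions. A minimal sketch; the epsilon value is an assumption for illustration, not a recommendation:

```python
import random

def serve_recommendation(ranked_items: list, epsilon: float = 0.1, rng=None):
    """Mostly serve the model's top-ranked item, but with probability
    epsilon serve a uniformly random one instead."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.choice(ranked_items)   # exploration: break the loop
    return ranked_items[0]                # exploitation: trust the model

rng = random.Random(0)
picks = [serve_recommendation(["a", "b", "c"], epsilon=0.2, rng=rng) for _ in range(10)]
print(picks)
```

The occasional random pick gives you data the model would never have generated on its own, which also makes the loop's influence measurable.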
5. Pipeline Jungles
What to look for:
  • Each source system has its own multi-step processing script, built uniquely to extract that system's data and shoehorn it into the right format.
How to mitigate:
  • If your model is in production and you identify this type of debt, it might actually be more cost-effective in the long run to scrap the current data wrangling process and start over.
  • Otherwise, accept the additional risk and compounding demands on your team’s time and energy required to address any issues with the data.
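If you do decide to rebuild, one way to tame a pipeline jungle is to replace each source's bespoke script with a single declarative mapping into a shared schema. A minimal sketch; the source names and field names are hypothetical:

```python
# One table describes every source; adding a source is a data change, not a new script.
FIELD_MAPS = {
    "crm": {"cust": "customer_id", "amt": "amount"},
    "web": {"uid": "customer_id", "total": "amount"},
}

def normalize(source: str, record: dict) -> dict:
    """Translate a raw record from any known source into the shared schema."""
    mapping = FIELD_MAPS[source]
    return {canonical: record[raw] for raw, canonical in mapping.items()}

print(normalize("crm", {"cust": 7, "amt": 19.99}))
print(normalize("web", {"uid": 7, "total": 19.99}))
```

Every downstream step now consumes one schema, so a change in a single source touches one mapping entry instead of a jungle of interlocking scripts.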
Bottom Line

Sculley et al. go into much greater detail on potential causes of technical debt, and are much more thorough in their recommendations for how to proactively address them. If you still have questions, read the full paper yourself, discuss it with your team, and above all stay vigilant when developing and depending upon a machine learning solution for your business decisions!
