

March 30, 2018

Understanding New AI With Old Math, and Why We Should Care


We’ve seen The Matrix, I, Robot, 2001: A Space Odyssey, The Terminator, and other films imagining that AI will destroy humanity. We’ve heard Elon Musk and other prominent businessmen decry AI as menacing. We’ve read how dangerous the black box of AI can be, and why its “explainability” is crucial to society. We’ve seen what can go wrong when an algorithm doesn’t have the data it needs. Indeed, warnings about the black box of AI and our blind trust in its decisions come from every direction, and we are being implored to better understand what goes on under the AI hood.

Beyond supporting researchers’ ethical imperative to protect society from bad algorithms, better understanding what’s under the hood is becoming critical simply to keep advancing the field of AI. Ali Rahimi delivered a divisive lecture at NIPS 2017, condemning current deep learning practice as “alchemy” and imploring researchers to adopt a more scientific, first-principles approach going forward.

To these ends, many efforts have been made to elucidate how neural networks “think.” Google, for instance, has been proactive in creating Lucid, which helps visualize how CNNs see and reason over images. In fact, there have been many recent strides towards understanding computer vision, which benefits both the advancement of, and the ethical constraints on, technologies such as self-driving vehicles. Unfortunately, despite the tremendous advances in natural language processing (NLP) brought by deep learning and word embeddings, our understanding of deep NLP is largely heuristic, and the field remains something of a “black box.”

Excella AI Research recently published a paper on arXiv addressing exactly this issue. The paper, “The emergent algebraic structure of RNNs and embeddings in NLP,” reveals that word embeddings, the mathematical representations of words that enable machines to process language, naturally form a Lie group, a well-understood mathematical structure. The paper further shows that recurrent neural networks (RNNs), a popular and powerful class of neural networks for processing sequences of data, learn what is called a representation of the group together with a vector for the words to act on. This representation performs something called “parallel transport” on the vector, which can be pictured as moving a stick (or anything else that has an orientation) along a curve.
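Schematically (in notation chosen for this post, not necessarily the paper’s), the picture is that each word corresponds to a group element, and reading a sentence composes those elements to transport the network’s state:

```latex
% Schematic sketch only; the notation here is ours, not taken from the paper.
% Each word w_t is assigned a Lie-algebra element A_{w_t}; its group element
% \exp(A_{w_t}) acts on the hidden state h_{t-1} through the learned
% representation \rho, so an entire sentence composes these actions.
\[
  h_t = \rho\!\left(\exp(A_{w_t})\right) h_{t-1},
  \qquad
  h_T = \rho\!\left(\exp(A_{w_T})\right) \cdots \rho\!\left(\exp(A_{w_1})\right) h_0 .
\]
```

In this view, the product of group elements is what “parallel transports” the initial vector h_0 along the path traced out by the sentence.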

This discovery connects the nigh-inscrutable realm of deep NLP to mathematics that has been studied and well understood for over a century. Now we have the tools to examine RNNs and word embeddings using a more familiar framework – math.

Computer science continually advances by abstracting away the nitty-gritty to expose the more functional, mathematical elements of an algorithm. This is a large and powerful abstraction, with some serious implications.

  • First, allaying public fears of the dangers of AI requires a better understanding of the neural networks behind decision-making. In NLP, by studying the Lie group structure, we can analytically follow – or even predict! – how an RNN behaves as it reads or generates a sentence, and we can predict or reverse-engineer how a word acts on the network’s understanding of a sentence (a toy sketch of this appears after this list). This is the machine equivalent of knowing someone so well you can predict what they’re thinking.
  • Second, any new, fundamental discovery begets new technology. In the paper, we propose a new class of recurrent-like neural networks that solve a much more general set of differential equations, which should be capable of modeling and generating more complex sequential data/language.
  • Third, words are typically represented as vectors in deep learning, and heuristic arguments are made to constrain the relative numerical values of these vectors. The heuristics are iteratively improved with the release of each new word embedding scheme, but none is explicitly the “correct” way to embed words – hence why they remain heuristics. This paper gives insight into the natural embedding scheme words seek: groups. A Lie group-based embedding scheme can assist neural networks in understanding language more accurately.
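As a toy illustration of the group-action picture behind the first and third points above (entirely schematic: the vocabulary, matrices, and dimensions below are made up, and a real system would learn them from data), the following sketch assigns each word a matrix exponential and composes those “actions” to transport a sentence-state vector, word by word:

```python
# Toy sketch of the Lie-group view of an RNN update (illustrative only).
# Each word is assigned a made-up Lie-algebra matrix A_w; the group element
# exp(A_w) acts on the hidden state, so reading a sentence composes actions.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 4
vocab = ["the", "cat", "sat"]

# Hypothetical "embeddings": one Lie-algebra matrix per word (learned in practice).
algebra = {w: 0.1 * rng.standard_normal((dim, dim)) for w in vocab}

def act(word: str, state: np.ndarray) -> np.ndarray:
    """Apply the group element exp(A_word) to the current sentence state."""
    return expm(algebra[word]) @ state

state = np.ones(dim)  # initial sentence-state vector
for word in ["the", "cat", "sat"]:
    state = act(word, state)
    print(word, state.round(3))
```

Because the composition of these per-word actions is itself a group element, the effect of an entire phrase can be reasoned about (or even pre-computed) before the network reads it, which is the sense in which the group structure makes an RNN’s behavior analytically predictable.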

These impacts and their benefits to data science are immediately clear. Within the business realm, these improvements translate to better results across all NLP-based products.

  • For intelligent assistants and chatbots, this means more accurate machine understanding of received messages (spoken or written), and the capability for the machine to respond with more thoughtful, dynamically generated text.
  • For fraud detection, better classification and comparison of text from audit reports and social media posts improves the identification of fraudulent activity.
  • For sentiment analysis, better language understanding sharpens assessments of the public’s perception of a company on social media.

Broadly, this means improvements to everything from computer-generated content like Twitter bots to automatic content moderation and text summarization.

In these exciting times of rapid AI and deep learning development, it is critical to advance thoughtfully, both to sustain the field’s momentum and to ensure AI enters our society responsibly. We have taken exciting steps towards this goal, and we eagerly push forward to stay at, and help define, the cutting edge. If you have more questions, please get in touch!
