Can AI Explain Its Choices? Google Brain Scientists Developing a Translator for Artificial Intelligence
Machine learning technology has allowed deep-learning networks to teach themselves how to perform highly complicated tasks like driving, spotting insurance fraud, and making complex health diagnoses. That complexity, however, prevents people from understanding much, if not all, of how the decision processes work. Researchers at Google Brain are working to mitigate this problem through “translation” and “interpretability” research intended to bridge the gap between AI and humans. In one example, the team developed Testing with Concept Activation Vectors (TCAV), a system that lets researchers ask a specific AI how strongly a given concept or variable influenced its reasoning. For example, how heavily is the concept of “stripes” weighted in an AI trained to identify zebras in images? While TCAV was designed and tested for image-recognition AI, it can be adapted for use with other AI models.
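The core idea behind TCAV can be sketched in a few lines. A minimal, simplified illustration, not the authors' implementation: here the concept direction is taken as the difference of mean activations between concept examples and random examples (the published method fits a linear classifier instead), and the concept's influence is scored as the fraction of inputs whose class score increases along that direction. All function names and data below are hypothetical.

```python
import numpy as np

def concept_activation_vector(concept_acts, random_acts):
    """Simplified CAV: unit direction in activation space pointing from
    random examples toward concept examples (e.g., 'stripes' images).
    The real TCAV method derives this from a trained linear classifier."""
    cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return cav / np.linalg.norm(cav)

def tcav_score(class_gradients, cav):
    """Fraction of inputs whose class score (e.g., 'zebra' logit) would
    increase if the activations moved in the concept direction, i.e.
    whose directional derivative along the CAV is positive."""
    return float(np.mean(class_gradients @ cav > 0))

# Toy example with 2-D activations: the concept direction is [1, 0].
cav = concept_activation_vector(
    concept_acts=np.array([[1.0, 0.0], [1.0, 0.0]]),
    random_acts=np.zeros((2, 2)),
)
# Gradients of the class score w.r.t. activations for three inputs:
# two align with the concept direction, one opposes it.
score = tcav_score(np.array([[1.0, 0.0], [0.5, -1.0], [-1.0, 0.0]]), cav)
print(score)  # 2 of 3 inputs -> roughly 0.667
```

A score near 1 would suggest the concept (here, “stripes”) consistently pushes the model toward the class (“zebra”); a score near 0.5 would suggest little systematic influence.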
A lead scientist with the project described it as follows: “There are two branches of interpretability. One branch is interpretability for science: if you consider a neural network as an object of study, then you can conduct scientific experiments to really understand the gory details about the model, how it reacts, and that sort of thing. The second branch of interpretability…is interpretability for responsible AI…the goal of the second branch of interpretability is: Can we understand a tool enough so that we can safely use it? And we can create that understanding by confirming that useful human knowledge is reflected in the tool.”