
Why did I get turned down for a loan? Why is my insurance so expensive? Algorithms are the answer: black-box formulas that determine so many aspects of today’s life. Critics call them WMDs – Weapons of Math Destruction. But maybe there is relief in sight from XAI.

Algorithm – (from Wikipedia) a word derived from the name of the 9th-century mathematician al-Khwārizmī (c. 825 CE) that has come down to us through the centuries to mean a series of steps that processes input in such a way as to produce a desired outcome. Coupled with Big Data (another gift from our increasingly digital world), algorithms have come to surround and saturate our lives. They cover everything from recommending things to buy or watch to whether you will get a job, keep the one you have, or even go to jail.
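To make the definition concrete, here is a minimal sketch of such a step-by-step recipe in Python. The function, rules, and thresholds are all invented for illustration; no real lender’s model looks like this:

```python
# A toy "algorithm": a fixed series of steps that turns input data
# into an outcome. All rules and thresholds here are hypothetical.
def loan_decision(income: float, credit_score: int, debt: float) -> str:
    """Return 'approve' or 'deny' based on simple hand-written rules."""
    debt_to_income = debt / income if income > 0 else float("inf")
    if credit_score < 620:        # step 1: screen on credit score
        return "deny"
    if debt_to_income > 0.43:     # step 2: screen on debt-to-income ratio
        return "deny"
    return "approve"              # step 3: everything else passes

print(loan_decision(income=50_000, credit_score=700, debt=10_000))  # approve
print(loan_decision(income=50_000, credit_score=590, debt=10_000))  # deny
```

The point is that every step is written down by a human, which also means every human assumption is baked in.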

Designed by humans, they are opaque and often unintentionally embody human biases and prejudices. Author Cathy O’Neil exposes their harmful effects in her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. (Here is also her TED Talk.) Beyond her numerous examples, she points out that algorithms are also driving class distinctions in our society: the masses get measured and processed by computer, while the rich elite gets personal review and attention. It’s pretty sobering reading, and things are getting worse.

O’Neil describes our current world, where most of these algorithms are designed by humans (hence their flawed nature). If we could hold the designers’ feet to the fire and expose their hidden assumptions, we would stand a chance of fixing some of the issues. But as organizations increasingly turn to Artificial Intelligence (AI) to analyze big data and predict outcomes, they cannot even explain how the AI reaches its conclusions.

These are truly black boxes. Advocates promote them as tireless assistants that will make our lives and work easier while boosting our performance and accuracy. But they raise troubling questions. Should a doctor accept a diagnosis made by an AI when the machine cannot explain how it arrived at that diagnosis?

AIs are not programmed in the traditional sense. They are fed a mass of big data as input, given a goal to achieve, and then turned loose. No one describes the steps – a program – that they should follow to reach the goal; instead, they learn from the inputs. Feels a little spooky, doesn’t it? But that’s how Google DeepMind’s AlphaZero learned to play world-class chess in four hours.
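For a feel of what “learning from the inputs” means in practice, here is a minimal sketch using scikit-learn (my choice of tool; the post names no particular library). The data is random and purely illustrative; the point is that no human writes the decision steps:

```python
# A minimal sketch of "learning from inputs" instead of being programmed.
# Assumes scikit-learn and NumPy are installed; the data is made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # 1000 examples, 5 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden rule the model must discover

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)   # no steps are spelled out; the model infers them from data

print(model.predict(X[:5]))  # predictions made with no human-written rules
```

The programmer supplies data and a goal (predict `y`); the model builds its own internal logic, which is exactly what makes it hard to interrogate afterward.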

AIs appear to see patterns that we humans do not pick up. In one notorious case, a researcher’s AI appeared to predict whether an individual was straight or gay just from a picture! It cannot explain how it does this, but it appears to be uncannily accurate.

That’s where a new trend is emerging. It’s called XAI, or Explainable Artificial Intelligence. Deep machine learning performs an enormous number of iterative steps as it seeks patterns in the data provided as input. The XAI concept is for the machine to explain the chain of steps that led it to its conclusion.
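Here is a toy sketch of the idea, using a decision tree, one of the few model types that can print the chain of comparisons behind its answers. The feature names (“income”, “age”, “debt”) and the data are made up for illustration:

```python
# A sketch of what an XAI-style explanation can look like: instead of a
# bare prediction, surface the decision rules. Uses scikit-learn's
# decision tree; data and feature names are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The "explanation": a human-readable chain of the comparisons the model
# makes on its way to a conclusion.
print(export_text(tree, feature_names=["income", "age", "debt"]))
```

A shallow tree explains itself almost for free; getting a comparable account out of a deep neural network with millions of weights is the hard, open part of XAI.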

Image Courtesy DARPA

An organization called AI Now, made up of researchers from Google Open Research, Microsoft Research, and New York University, recently issued a call for 10 changes the AI community needs to make in 2017. At the top of that list: “Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education should no longer use ‘black box’ A.I. and algorithmic systems.”

I would add quite a few more actors beyond public agencies: those who assess creditworthiness, job qualifications, and the like. Which ones would you recommend adding?

It’s funny but kind of tragic. We have trusted the “secret sauce” of algorithms designed by humans to operate on big data sets, but when it comes to trusting machines – look out. Sigh! Maybe it’s a roundabout way to get there, but hopefully the rise of the machines will help us shine some sunlight on this whole algorithm mess.
