The future of “human-in-the-loop” computing
Lukas Biewald recently wrote an opinion piece arguing that “human-in-the-loop computing” is the future of machine learning. He focuses on several examples of how it happens right now, so I wanted to think about how it will evolve over time.
With artificial intelligence, the goal is to identify tasks we can teach computers to do, and then use computers to perform those tasks automatically. Even though “artificial intelligence” sounds like a lofty phrase, the scope of tasks AI can do is actually fairly narrow. Often, AI is really just about expressing a problem in a structured way with input variables, and then efficiently exploring combinations of those variables until we find a solution. Perhaps this illustrates why we can easily teach computers to predict the price of a living space in New York City given its characteristics (bedrooms, bathrooms, square footage, proximity to Manhattan), but we can’t easily teach a computer to answer “if you stick a pin into a carrot, does it make a hole in the carrot or in the pin?” [1]
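To make that contrast concrete, here is a minimal sketch of the price-prediction task, assuming scikit-learn is available and using made-up numbers purely for illustration:

```python
from sklearn.linear_model import LinearRegression

# Each row: [bedrooms, bathrooms, square footage, miles to Manhattan]
X = [
    [1, 1,  600, 2.0],
    [2, 1,  850, 5.0],
    [3, 2, 1200, 1.0],
    [2, 2,  950, 8.5],
]
y = [650_000, 700_000, 1_500_000, 550_000]  # observed prices (invented)

model = LinearRegression().fit(X, y)

# The "intelligence" here is just a structured mapping from input
# variables to an output, found by an efficient search over parameters.
print(model.predict([[2, 1, 900, 3.0]]))  # price estimate for a new listing
```

The carrot-and-pin question has no such structure to exploit, which is exactly the point.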
Every time we implement artificial intelligence for a task, the general public no longer perceives that task to be “artificial intelligence” (this is called the AI effect). I think a corollary is that as soon as we solve a task with AI, it becomes automatable. In the article, Biewald shows how, right now, our best AI systems are used to automate some tasks while humans intervene to augment the AI’s knowledge. However, the aim should not be for AI and humans to work side by side on the same tasks.
The nut we have yet to crack with AI is how to replicate common-sense decision-making and inference, which humans are very good at. So until we achieve common sense in AI, humans will always be better at these simple analytical tasks of drawing inferences and conclusions. And tasks that are automatable are no longer value-creating tasks, so businesses ought to put their human capital toward the work that is not automatable.
(As a side note, an implicit assumption I make in these arguments is that once AI is achieved for a task, that task becomes automatable to everyone as the algorithms become widely known. In the short run, businesses can sustain a competitive advantage by being the only ones with AI to perform a task, because, ostensibly, they can do that task much more efficiently than competitors who don’t have that AI. But even with intellectual property protections, competitors are likely to replicate AI algorithms in the long run.)
I conclude by saying that AI and humans should be doing fundamentally different things (AI = automatable tasks, humans = non-automatable tasks). But human-in-the-loop computing implies that AI and humans are on the same “loop”, so to speak. Why, then, is it here to stay? We should strive to automate more tasks, but until we get strong AI, the fact remains that we need to train AI for these more advanced tasks, and the way we train AI is by having humans provide human-classified training examples.
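In code, that cycle might look something like the sketch below. Everything in it (the model, the confidence threshold, the labeling function) is a hypothetical stand-in for illustration, not any particular system’s API:

```python
import random

CONFIDENCE_THRESHOLD = 0.9  # invented cutoff for "the AI is sure enough"

class ToyModel:
    """Stand-in for a real classifier."""
    def predict_with_confidence(self, example):
        return "cat", random.random()  # (label, confidence)

    def retrain(self, labeled_examples):
        pass  # a real system would update its parameters here

def ask_human_to_label(example):
    return "dog"  # stand-in for a human annotator's judgment

def human_in_the_loop_step(model, examples):
    newly_labeled = []
    for example in examples:
        label, confidence = model.predict_with_confidence(example)
        if confidence < CONFIDENCE_THRESHOLD:
            # Below the threshold, a human exercises the common-sense
            # judgment the model lacks...
            label = ask_human_to_label(example)
            # ...and that judgment becomes new training data.
            newly_labeled.append((example, label))
        # Above the threshold, the task is handled automatically.
    model.retrain(newly_labeled)

human_in_the_loop_step(ToyModel(), ["img1.jpg", "img2.jpg", "img3.jpg"])
```

The humans and the AI share a loop, but they do different things inside it: the AI handles what is already automatable, and the humans handle (and label) what is not.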
We’re seeing artificial intelligence pop up in many areas of our day-to-day lives, and as computers get smarter and smarter, we should expect to see more of it. Even when AI completely automates a task away, the AI will still be there while we work on higher-level tasks: beside us, watching us, learning as we exercise the cognitive faculties that make us human.
[1] This example is stolen from Ernest Davis and Gary Marcus’s review article in Communications of the ACM, September 2015.