The deceptively simple things AI can't do today


Imagine you'd like to create a Pinterest-style search engine featuring 10,000 healthy vegan recipes from hundreds of different blogs. For each recipe, you want to grab the link to the publication for attribution, get a nice image, identify the main ingredients, source the nutritional data and craft a short description.

If you ask a human (or in this case, ideally, a team of humans), the task is pretty simple. The brief won't last longer than the time it takes to read the introduction of this article. The human operator will figure out where to find vegan blogs, how to select the healthiest recipes and will collect all the requested data. It will just take time, a lot of time if you want to curate 10,000 recipes. 

You could of course scrape the raw data (though you'd still need to identify which sources to scrape), and your operators would have to pick the relevant pieces of information out of the scraped content: there won't be any consistency in the layout of the recipes, so you can't point your scraper at a fixed location for the ingredients, the nutritional facts, and so on. Moreover, you'd probably end up with a huge pile of useless data, making it easier to just ask the operator(s) to find their way around the open web, using their own judgment.

Would it be possible to use AI for simple research tasks? 

The seemingly trivial scenario I've just outlined is still beyond the reach of any form of artificial intelligence. The concept of "figuring things out" in an ocean of unstructured data can't be addressed by an AI which doesn't understand what it's doing.

A machine can already automate a series of clearly identified, repetitive, narrow tasks. That's what UiPath or Automation Anywhere are selling via their RPA (Robotic Process Automation) offerings (it's basically what you can do with Automator on your Mac, for simple scenarios). It's also the type of workflow you can automate with Zapier or Integromat, connecting multiple applications.

In our example, if the ingredients were always presented in a very similar way, with a clear combination of key:value, along with their nutritional data, you could automate the collection process and send the data to a table. But if the information is scattered around the pages, your dumb machine won't be able to curate it. What we need to achieve such a feat is a form of cognitive automation.
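To make the contrast concrete, here is a minimal sketch (in Python, with a made-up page fragment) of the only case a non-cognitive scraper handles well: a rigid key:value layout that never varies. The moment the layout changes, this approach collects nothing.

```python
import re

# Hypothetical page fragment where every ingredient and nutritional fact
# follows a strict "key: value" convention on its own line.
page = """
Ingredients
tofu: 200 g
soy sauce: 2 tbsp
calories: 310 kcal
protein: 18 g
"""

# One regex suffices precisely because the layout never varies.
pairs = dict(re.findall(r"^([\w ]+):\s*(.+)$", page, flags=re.MULTILINE))

print(pairs["tofu"])      # 200 g
print(pairs["calories"])  # 310 kcal
```

Note that the scraper extracts "calories" exactly as it extracts "tofu": it has no notion that one is a nutrient count and the other an ingredient, which is the whole point of the article.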

Decently educated humans just know what "vegan" or an "ingredient" is; they know how to identify healthy recipes, and they can quickly write a short teaser to introduce a recipe (which is harder than summarising the instructions with some statistical model).

Humans bring to the task a wealth of personal experience. They can carry out their simple mission with a quick glance at the pages. No need for a long configuration. 

How to train an AI to emulate human experience? 

The latest cutting-edge artificial intelligence developed by OpenAI, GPT-3, a model with 175 billion parameters, can write intelligible text from a short prompt. The output is based on the probabilities of "what usually comes next" in a piece of text, learned from a massive ingested corpus. The model can be configured to emulate a specific style. But the machine doesn't understand a single word of what it is regurgitating, because the AI lacks an essential feature: consciousness. To a certain extent, the machine could fake comprehension by paraphrasing itself, but it would never, at this stage of AI evolution, be able to truly explain the gist of its sentences.
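The "what usually comes next" idea can be illustrated with a toy bigram counter, a deliberately tiny, hypothetical stand-in for the statistics a model like GPT-3 learns at a vastly greater scale:

```python
from collections import Counter, defaultdict

# A miniature "corpus"; real models ingest hundreds of billions of words.
corpus = ("the vegan recipe uses tofu . the vegan recipe uses lentils . "
          "the vegan dish uses tofu").split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    # Pick the most frequent continuation; the "model" has no idea
    # what any of these words mean.
    return following[word].most_common(1)[0][0]

print(most_likely_next("vegan"))  # "recipe" (seen twice, vs "dish" once)
```

The continuation is chosen purely from frequencies, which is why such a system can sound fluent while having nothing that resembles comprehension.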

The question is: how do you achieve a state of consciousness sufficient to perform simple research tasks? Do you need full-blown consciousness to figure things out as humans do? There's probably an intermediary phase where you could identify and mimic the skills required to spot specific information in a pile of unstructured data, through supervised training, attempting to reconcile what's on the pages with a set of examples. It could be good enough, a smart trick, but there would be a lot of silly mistakes due to a lack of genuine topic comprehension. You might end up with a "cup" or a "fork" in the ingredients of your recipe. I'm not even elaborating on the short-introduction part of the brief, which requires creativity, derived from experience, gained through years of self-awareness / consciousness.

Does AI need a body to develop consciousness (and general intelligence)? 

Some scientists argue that developing a conscious AI would even require a body (not necessarily made of flesh and bone) to physically interact with the outside world. For us humans (and for animals in general), it's an essential part of our personal development. The information perceived through our five senses shapes our understanding of the world from a very young age. The emotions we experience through our body are also critical to our cognitive abilities. Not all animals develop self-awareness, but all have sufficient intuition to cope with the uncertainties of an open environment. To accomplish even a simple research assignment with ease, a machine would need to base its judgment on a wealth of experience. But mechanical intuition would probably not be enough to understand the brief and execute it properly. See how long it takes to train a dog to perform a few tricks; try a word that isn't part of the training instructions and the dog will just wag its tail, looking at you, confused.

We're still very far from being able to ask an AI agent to figure things out and find its own way through the open web to collect, curate and annotate data. But it's an exciting venture I'd love to explore! Contact me via email if you'd like to join.




