Google’s DeepMind made a breakthrough with its ‘Gato’ model. It still ain’t even close to a human brain.

Whew… Armageddon is averted.  The reports of the death of humankind are greatly exaggerated.  For now…

 

Let’s get to the story and offer our take.

 

Google’s DeepMind released its new ‘Gato’ model, and the claim is that it amounts to human-level AI.  One of the model’s developers proclaimed “The Game is Over!”  Humans have been defeated by robots.  AI can do anything humans can.

 

 

First, what does it mean to “achieve human-level AI”?  Last time we checked, a teenager can learn to drive in less than a week, yet after more than a decade of research we still don’t have a fully self-driving car.

 

Second, upon closer examination, the Gato model is cool.  But it’s nowhere close to what humans can do.

 

Gato is truly a Jack of all trades, a master of none.  It can perform 604 different tasks with a single model.  That’s cool.  But it performs almost every one of those tasks at a mediocre level.  Well, I guess that would make it human. 😊

 

Its chat dialogue is unnatural.  Any human could quickly guess they are talking to a non-human.

 

Gato’s image captioning is poor and sometimes contradictory.

 

The mediocrity of the Gato algorithm is almost by construction.  It treats all tokens the same, whether they are words in a chat or movement vectors in a block-stacking exercise.  It’s all the same.  It is one neural network that can work with multiple kinds of data to perform multiple kinds of tasks.
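To see what “it’s all the same” means in practice, here is a minimal sketch (not DeepMind’s actual code) of the serialization idea: every modality is flattened into one shared stream of integer token IDs so a single model can consume text and continuous actions alike.  The vocabulary layout, function names, and bin count below are our own illustrative assumptions.

```python
# Illustrative sketch (assumed, not DeepMind's implementation):
# serialize different modalities into ONE shared token-ID space,
# so a single sequence model can consume them all.

TEXT_VOCAB = {"stack": 0, "the": 1, "red": 2, "block": 3}
NUM_TEXT_TOKENS = len(TEXT_VOCAB)
NUM_BINS = 16  # bins for discretizing continuous values (assumed)

def tokenize_text(words):
    """Map words to token IDs in the shared vocabulary."""
    return [TEXT_VOCAB[w] for w in words]

def tokenize_continuous(values, lo=-1.0, hi=1.0):
    """Discretize continuous values (e.g. joint torques) into bins,
    offset past the text IDs so all modalities share one ID space."""
    tokens = []
    for v in values:
        frac = (min(max(v, lo), hi) - lo) / (hi - lo)
        bin_id = min(int(frac * NUM_BINS), NUM_BINS - 1)
        tokens.append(NUM_TEXT_TOKENS + bin_id)
    return tokens

# One flat sequence: a text instruction followed by robot-arm actions.
sequence = tokenize_text(["stack", "the", "red", "block"]) \
         + tokenize_continuous([-0.5, 0.0, 0.9])
```

The upside is generality: one network, one training loop, any task.  The downside is exactly the mediocrity described above, since nothing in the token stream is specialized for any particular domain.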

 

Another of Gato’s weaknesses is its lack of adaptability.  Just like GPT-3, Gato is a transformer model.  It is very general, also by construction.  However, it is weak at transferring learning from one situation to another, the way humans do.

 

We must also talk about the ethical issues with Gato.  As you know, we’ve done a significant amount of work on best practices for AI in healthcare, as part of IFCC’s Working Group on Artificial Intelligence and Genomic Diagnostics (WG-AIGD).

 

We strongly believe health equity and freedom from bias are critical parts of the future of the digital health industry.  The authors of the DeepMind paper state, “Additionally, while cross-domain knowledge transfer is often a goal in ML research, it could create unexpected and undesired outcomes if certain behaviors (e.g. arcade game fighting) are transferred to the wrong context… The ethics and safety considerations of knowledge transfer may require substantial new research as generalist systems advance.”

 

DeepMind seems to be doing everything to spite OpenAI.  We get it.  Healthy competition is always good.  But shouldn’t a new model at least address an “old” model’s issues?  18 months ago, DeepMind critiqued GPT-3 for making simple errors and even for suggesting to “kill” someone.

 

It’s time for DeepMind now to put its own model to a rigorous test.

 

We do applaud the fact that Gato is much leaner than GPT-3, using “only” 1.18 billion parameters in its main version vs GPT-3’s 175 billion parameters.  However, cost remains a massive issue: as of now, only large corporations can afford models like GPT-3 and Gato.

 

It’s not easy.  But we are going to get “there” one day.  That day is not today…

 

Humankind is still going strong.

 

Perhaps a ‘León’ algorithm would beat humans one day. 😉

 

WellAI Team

wellai.health
