In AI, You Want to Be a Jazz Band

As we continue developing the latest research in Artificial Intelligence, we like to draw parallels with other areas of science and with life in general.  We think there is a perfect metaphor from music that explains the AI market.  Since helping people fight their health problems is our passion, we’ll focus on AI in healthcare.


The AI market currently sits overwhelmingly at two opposite extremes – a high school band and a 7,500-person orchestra.  Both extremes are perfectly acceptable and have their audiences.  However, there is not much in the middle.


We’d like to argue that being in the middle – i.e. being a jazz band – not only has merit but may be a better approach to Artificial Intelligence and Machine Learning.


1) A high school band (aka rules-based tech companies) – think WebMD, Babylon or Infermedica.


Imagine a high school band.  It’s a group of talented but inexperienced students.  Everyone admires them until they go to college, and then not many remember they were ever in a band – with the exception of their most loyal customers: their parents.


The high school band is a rules-based tech company.  It has a pre-set repertoire (a decision tree) that it plays at concerts typically attended by close friends and family.  Changing the repertoire (the decision tree) takes an exponentially larger amount of time and effort, because the students now have to learn new sheet music – and that’s not even their ‘thing’: they have other classes to prepare for, they have SATs, and so on.


Regarding rules-based approaches to machine learning, we liked this LinkedIn post by Laszlo Sragner.


The author makes the point that most machine learning projects take a “rules-based system” as the baseline, on the argument that “it’s simple” and “can address most problems”.  Both arguments are wrong.  Rules-based trees become unruly and computationally expensive very quickly.  And applying a rules-based system without thinking about what problem you are actually solving is exactly how we ended up in the state digital health is in right now.


To summarize, rules-based models:

  • are easy to code at first,
  • become very computationally expensive as they grow,
  • are hard to change (these models failed when COVID started).
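To make the brittleness concrete, here is a toy sketch of a rules-based triage tree – a hypothetical illustration we wrote for this post, not any vendor’s actual logic.  Every path is hand-written, so accommodating a new disease (say, COVID-19 in 2020) means re-authoring every branch that touches fever or cough:

```python
def triage(symptoms: set) -> str:
    """Toy rules-based triage: every path is a hand-written rule.

    Adding a new condition (e.g., COVID-19) means revisiting every
    branch that mentions fever or cough -- the tree does not learn,
    it has to be re-authored.
    """
    if "fever" in symptoms:
        if "cough" in symptoms:
            if "shortness_of_breath" in symptoms:
                return "urgent care"
            return "see a doctor"
        return "rest and monitor"
    if "rash" in symptoms:
        return "see a dermatologist"
    return "self-care"


# Three hard-coded paths; a fourth symptom would multiply the branches.
print(triage({"fever", "cough"}))                         # see a doctor
print(triage({"fever", "cough", "shortness_of_breath"}))  # urgent care
```

Even at three symptoms the nesting is awkward; real symptom checkers track hundreds, which is why these trees become unruly so quickly.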



2)  A 7,500-person orchestra (aka BigTech’s ‘monster models’) – think Facebook, OpenAI, Google.


Now imagine a 7,500-person orchestra performing at a large stadium.  It’s very impressive and breathtaking.  But what is it trying to achieve?  An impression of something enormous?  Or beautiful music?  It certainly doesn’t achieve the latter: people in the front rows may not enjoy it much because the sound is too loud and distracting noise comes straight from the loudspeakers, while people in the upper levels of the stadium may not hear well because of the echo.


That’s exactly what’s happening with the latest trillion-parameter AI models.  Their goal is “one size fits all” – to solve a whole range of problems with one big shiny model, much like the massive show at the stadium.  These models are so enormous they have been nicknamed “monster AI models”.  They are good for some problems, but certainly not for all of them.


It’s complexity for the sake of complexity.  It’s the race for the most supersized AI, which began in 2020 when OpenAI released its famous GPT-3 model with 175 billion parameters – more than 100 times the size of its predecessor, GPT-2.


But then 2021 – the year of monster AI models – came along and dwarfed GPT-3 in both size and, in many cases, accuracy.  For example, the Beijing Academy of Artificial Intelligence recently announced its Wu Dao 2.0 model with 1.75 trillion parameters!  Not to be outdone, there are rumors that GPT-4 will have 100 trillion parameters.
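A back-of-the-envelope calculation shows why these parameter counts matter in practice.  Assuming half-precision weights (2 bytes per parameter) – a common but not universal choice, so treat the numbers as rough estimates – just holding the weights in memory requires:

```python
def inference_memory_gb(params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to store the model weights.

    Assumes fp16 (2 bytes per parameter) and ignores activations,
    optimizer state, and serving overhead, so real requirements
    are higher still.
    """
    return params * bytes_per_param / 1e9


# GPT-3: 175 billion parameters -> ~350 GB of weights alone,
# far more than any single GPU of its era could hold.
print(inference_memory_gb(175e9))    # 350.0

# Wu Dao 2.0: 1.75 trillion parameters -> ~3,500 GB.
print(inference_memory_gb(1.75e12))  # 3500.0
```

Serving such a model means sharding it across dozens of accelerators – the “army of servers” problem that smaller, task-specific models avoid entirely.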


As with the orchestra at the stadium, the ‘jack of all trades’ approach cannot solve every problem.  Medicine turned out to be especially tricky: when GPT-3 was used as a chatbot to help with health issues, many of its answers were outright wrong, according to an AI ethics study by DeepMind.


Another problem is the culture within BigTech companies.  If you are a Caltech Ph.D. in data science working for Google, how do you prove yourself to your boss?  Certainly not by arguing for an older generation of convolutional neural nets, even if that is the best model for the problem.


To summarize, the BigTech monster models:

  • are computationally expensive (even their pre-trained versions),
  • are unwieldy and are not meant for smaller problems.



3)  A jazz band (aka an ensemble of neural nets and optimization) – think WellAI.


Now imagine a jazz band.  It achieves the goal of delivering beautiful music, but with far more flexibility and virtuosity.


WellAI sits between the two camps described above.  WellAI’s models are not rules-based but pure machine learning and optimization.  We treated the problem of scientific diagnostics as a collection of smaller tasks, and therefore used an ensemble of interconnected neural nets, each with its own role in solving the problem – just like a jazz band, where each musician plays their own instrument and dynamically adjusts to a new melody.


To summarize, the ensemble of interconnected neural nets:

  • is computationally inexpensive – because the big problem was split into smaller tasks, we don’t need an army of servers and people to support them,
  • is very dynamic (unlike rules-based systems) – new medical knowledge is constantly added to the system,
  • is flexible – it can easily be trained for specific tasks like pediatrics or dermatology, or for various languages,
  • is accurate – because it was trained for a specific task, and because of some proprietary mathematical innovations.
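The division of labor can be sketched in a few lines.  This is a deliberately simplified illustration of the ensemble idea, not WellAI’s actual architecture: each “specialist” here is a stand-in scoring function where a real system would use a small trained neural net, and the names and scores are invented for the example.

```python
# Hypothetical specialists: in a real ensemble each would be a small
# neural net trained on its own sub-domain (dermatology, respiratory, ...).
def dermatology_specialist(symptoms: set) -> dict:
    return {"eczema": 0.8 if "rash" in symptoms else 0.1}


def respiratory_specialist(symptoms: set) -> dict:
    return {"flu": 0.9 if {"cough", "fever"} <= symptoms else 0.2}


SPECIALISTS = [dermatology_specialist, respiratory_specialist]


def ensemble_predict(symptoms: set) -> str:
    """Each specialist scores only its own sub-domain; the coordinator
    combines the scores -- here, by taking the highest-confidence one."""
    scores = {}
    for model in SPECIALISTS:
        scores.update(model(symptoms))
    return max(scores, key=scores.get)


print(ensemble_predict({"cough", "fever"}))  # flu
print(ensemble_predict({"rash"}))            # eczema
```

Because each specialist is small and independent, one can be retrained (say, for pediatrics, or for a new language) without touching the others – the flexibility the bullet points above describe.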






Please check out this and other latest WellAI blogs, articles, opinions, videos and press releases on AI trends and digital health innovations at


Stay healthy!  Stay knowledgeable about your health.


WellAI Team
