The eternal fight between ConvNets and Transformers continues. Well, an eternity in machine learning is about 5 years. 😊
Researchers at Facebook AI recently published a nice paper titled “A ConvNet for the 2020s.”
What we gathered from this study is what we have been preaching all along: if you are a data scientist, don’t throw yourself into days and weeks of testing the most “recommended” algorithm out there. Evaluate the problem. Think about the best and most effective way to solve it. Throwing “the latest and the greatest” at the problem may impress your boss, and that’s OK. But be humble. Sometimes a least squares regression is the best solution.
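To make the point about simple baselines concrete, here is a minimal sketch of fitting a straight line with ordinary least squares using NumPy. The data points are made up purely for illustration:

```python
import numpy as np

# Toy data, roughly following y = 2x + 1 with a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Design matrix: one column for the slope, one column of ones for the intercept.
A = np.column_stack([x, np.ones_like(x)])

# Solve min ||A @ [slope, intercept] - y||^2 in closed form.
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

A handful of lines like these often make a perfectly good baseline to beat before reaching for a deep network.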
In this Facebook AI study, the authors challenge the assumption that in computer vision, Vision Transformers (ViTs) are inherently superior to ConvNets for image classification and recognition. As the world moves to super-high-definition images right on your iPhone, Transformers have become computationally expensive and have started to lose ground to some of the latest ConvNet architectures. Sometimes a hybrid approach works best.
In our ongoing educational series, Daniel Satchkov, head of machine learning at WellAI, briefly talks about the beauty and the elegance of Convolutional Nets and their applications in dermatology and radiology, and also explains how cats see the world. 😊
We are at the very beginning of understanding applications of mathematics and data science in medicine and other fields of life!
Stay healthy! Stay knowledgeable about your health.
Happy and Healthy 2022!