The changing AI landscape: a fireside chat with Dan Teodosiu and Mohan Mahadevan

At our latest portfolio webinar, our Executive In Residence Dan Teodosiu sat down with Mohan Mahadevan, Chief Science Officer at Tractable, for a fireside chat on the evolution of AI – including the paradigm shifts in machine learning, and how foundational models have changed the current AI landscape.

Mohan holds a PhD in Theoretical Physics and has over 24 years of experience developing ML and computer vision systems. Before Tractable, he led machine learning and computer vision research at Onfido, Amazon, and KLA-Tencor.

I) What have the key evolutions in machine learning been?

Over the last 25 years, there have been four main shifts in machine learning paradigms:

  1. In the late 90s, we handcrafted features using signal processing and other classical methods, and then picked the best decision-boundary method (such as SVMs, decision trees, etc.). This depended heavily on non-scalable individual ingenuity to create impactful, interpretable, correctable and elegant solutions.
  2. From about 2000 to 2013, we evolved to generalised feature extractors like SIFT; specialised methods for making ML models robust and capable, such as gradient-boosted trees and out-of-distribution detection and correction; and specialised pipeline composition for high-performance ML in production – what we might call hand-crafted neuro-symbolic AI.
  3. From about 2013 to 2018, deep learning methods jointly optimised feature engineering and decision boundaries directly from data in extremely capable ways. This was driven by large datasets, large models, and large and fast computation. The era saw developments such as RNNs, LSTMs, ResNet backbones, Mask R-CNNs and deep RL (AlphaGo). This was the era of trying to build good underlying data representations.
  4. We had the first glimpses of foundational models by 2017-2018 (“Attention Is All You Need”, 2017) and the first useful zero-shot and few-shot methods. From about 2019, the norm changed: ML models are now mostly built by starting from transformer-based foundational models in NLP and CV and using their embeddings to build out a solution (see the sketch after this list). Transformers discover and exploit long-range correlations very effectively, and stacking transformer layers builds multiple levels of hierarchy, with large models becoming immensely capable. We moved away from representation learning and toward representation tuning.
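
As a minimal sketch of what “representation tuning” looks like in practice – assuming the Hugging Face transformers and scikit-learn libraries; the model name and the tiny labelled dataset are placeholders, not recommendations:

```python
# Minimal sketch: build a classifier on top of a pretrained transformer's
# embeddings instead of learning a representation from scratch.
# Assumes: pip install torch transformers scikit-learn. The model name and
# the labelled examples below are placeholders.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "distilbert-base-uncased"  # any transformer encoder works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()  # the foundational model stays frozen; we only tune on top of it

def embed(texts: list[str]) -> torch.Tensor:
    """Mean-pool the last hidden state into one fixed-size vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)         # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)          # (batch, dim)

# Placeholder labelled data: a handful of well-curated examples can go a long
# way once the representation is already well learnt.
texts = ["invoice for bumper repair", "photo of cracked windscreen",
         "policy renewal reminder", "quote for wing mirror replacement"]
labels = [1, 1, 0, 1]

clf = LogisticRegression(max_iter=1000).fit(embed(texts).numpy(), labels)
print(clf.predict(embed(["estimate for dented door panel"]).numpy()))
```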

Hypercharge this, and we get to GPT.

But throughout all this evolution, one thing has remained unchanged: the value of data. One can only build on top of these models if well-curated, “right” data in the “right” volumes is available. Foundational models have not changed the value of the data moat.

II) What are some key recommendations for building applications?

  1. You have to build applications on top of foundational models; otherwise you are likely to quickly fall behind in performance, robustness of representation, and breadth and depth of capability.
  2. You have to be able to respond fast and adopt newer, larger and more capable models as they are released. Speed and automation in experimentation with newer models are more critical than ever. Think modularly so you can take advantage of the best foundational models while staying independent of any particular one (see the sketch after this list).
  3. The quality of the data and labels used to fine-tune systems or evaluate new model capabilities matters. As we move towards fine-tuning from well-learnt data representations, it is important not to lead these representations astray with noisy data. The right data is more important than more data.
  4. Stay on top of your compute cost budget: processing power is expensive. For products that were perhaps running on CPUs in the cloud, there is now no alternative but to use GPUs, TPUs or other expensive cloud compute.
  5. Robust evaluation and performance-measurement frameworks are essential, with sound evaluation criteria both in a sandbox setting and in production. They must provide the right metrics and support the design of corrective actions, exception handling and active-learning loops. MLOps development is indispensable.
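
A rough sketch of how recommendations 2 and 5 can fit together in code, assuming a simple embedding-based task; the Embedder protocol, class names and datasets are illustrative assumptions rather than a prescribed design:

```python
# Sketch for recommendations 2 and 5: hide the choice of foundational model
# behind a small interface, and score every candidate on the same well-curated
# held-out set before adopting it. All names here are illustrative.
from typing import Protocol
import numpy as np
from sklearn.linear_model import LogisticRegression

class Embedder(Protocol):
    """The only contract the application depends on; any foundational model,
    open-source or hosted behind an API, can implement it."""
    def embed(self, texts: list[str]) -> np.ndarray: ...

def evaluate(model: Embedder,
             train: tuple[list[str], list[int]],
             holdout: tuple[list[str], list[int]]) -> float:
    """Fit a lightweight head on frozen embeddings and report holdout accuracy:
    the sandbox metric used to decide whether a newer model is worth adopting."""
    head = LogisticRegression(max_iter=1000).fit(model.embed(train[0]), train[1])
    return float(head.score(model.embed(holdout[0]), holdout[1]))

# Usage sketch: comparing candidate foundational models becomes a loop, not a
# rewrite (LocalEmbedder and ApiEmbedder are hypothetical implementations).
# for candidate in (LocalEmbedder("model-a"), ApiEmbedder("model-b")):
#     print(type(candidate).__name__, evaluate(candidate, train_set, holdout_set))
```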

For all of the above, the data moat has become even more significant. Modelling based on public data offers no moat; we are specifically referring to private data as the moat here. The relevant questions are: How do you bootstrap to get this data? How quickly can you get to a reasonable and representative volume? How do you measure and maintain the quality of the private data and labels?

III) How can companies and teams adapt and stay on top of things?

This is a tough environment, with so many researchers and so much happening in the world of AI and foundational models. You have to assume that LLMs are going to evolve, and evolve rapidly – for example, representations spanning multiple modalities (images, video, audio, text), or the latest foundational models in computer vision, such as SAM, just released by Meta.

To avoid becoming obsolete in weeks or months, there are several actions that could be very useful:

  1. A strong data moat provides (some) insurance even if the underlying LLM technology changes.
  2. Be paranoid: evaluate in-house to predict which elements of your products are likely to be commoditised by more capable models across NLP, CV or RL, and plan scenarios accordingly.
  3. Build infrastructure that increases the speed of evaluating and adopting the latest foundational models, so that you can remain at the forefront (see the sketch after this list).
  4. Focus on delivering high-performance solutions for handling the long tail as a unique selling point.
  5. Be prepared to cut your losses and pivot if a capability is now covered by a more capable LLM or other foundational models.
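
A sketch of what actions 2 and 3 might look like as a lightweight harness – the capability names, scores and threshold below are hypothetical, and the per-capability evaluation functions are assumed to exist in-house:

```python
# Sketch of actions 2 and 3: routinely run newly released foundational models
# against the in-house evaluation sets and flag any product capability that a
# candidate model already matches or beats. All names and numbers are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Capability:
    name: str
    production_score: float             # current product on the in-house set
    eval_fn: Callable[[str], float]     # scores a candidate model on that set

def commoditisation_report(candidate_model: str,
                           capabilities: list[Capability],
                           margin: float = 0.02) -> list[str]:
    """Return the capabilities where the candidate is within `margin` of (or
    above) the production system, i.e. the ones to scenario-plan around."""
    at_risk = []
    for cap in capabilities:
        score = cap.eval_fn(candidate_model)
        if score >= cap.production_score - margin:
            at_risk.append(f"{cap.name}: candidate {score:.3f} vs production "
                           f"{cap.production_score:.3f}")
    return at_risk

# Usage sketch (model id, scores and eval functions are hypothetical):
# report = commoditisation_report("new-open-model-v2",
#     [Capability("damage localisation", 0.91, eval_localisation),
#      Capability("cost estimation", 0.84, eval_cost)])
# for line in report:
#     print(line)
```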

You also need your ML engineers to be system-thinkers – able to solve problems as a system rather than in isolation – and to be self-critical. Additionally, some capabilities are built very well externally, so there is no need to rely overly on internal capacity: it is good to adopt some of the technologies built by FAANGs and by emerging players like Stability.ai or TII with its Falcon-40B models.
