What can't AI do?

[Image: an AI-generated picture struggling to render words]

    Going by the picture above, it certainly struggles a fair amount with rendering words in images. These artifacts, strange portions of an AI's output such as a hand with more than five fingers or an image that simply isn't what was asked for, are known as hallucinations. In my last post, I pointed out that machine learning models have a theoretical limit to what they can learn. To be clear, not all AI is machine learning, but all machine learning is AI: machine learning is the subset of AI in which models learn patterns from data rather than following hand-written rules.

    Recently, Computerphile, a lovely YouTube channel, interviewed Dr. Mike Pound of the University of Nottingham about a new paper asking whether generative AI has already peaked, or whether we will simply see further and further diminishing returns.


    I will not go in depth on the video, as you can watch it for yourself, nor on the paper itself, but I would like to pull a few graphs from the paper and discuss them, because they remind me of another study, this one about evolution.

    The paper itself is about whether AI models will be able to achieve what is known as "zero-shot" performance, where a model can identify something or perform a task it was never explicitly trained on, on its first try. As Dr. Pound puts it, with "enough data about cats and dogs, the elephant is assumed"; that is, some emergent capability should appear that is not in the training data. However, the paper suggests that the amount of data required for this is staggeringly impractical for models in their current state.
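    To make "zero-shot" a bit more concrete, here is a minimal sketch of how zero-shot classification is typically evaluated with an embedding model such as CLIP. The embeddings below are made-up stand-ins, not the output of a real model; in practice, an image encoder and a text encoder would produce them.

# A minimal sketch of zero-shot classification, the kind of evaluation the
# paper measures. The embeddings here are invented stand-ins; a real system
# (e.g., CLIP) would compute them with an image encoder and a text encoder.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical text embeddings for candidate labels the model was never
# explicitly trained to classify.
label_embeddings = {
    "cat": np.array([0.9, 0.1, 0.2]),
    "dog": np.array([0.8, 0.3, 0.1]),
    "elephant": np.array([0.1, 0.9, 0.7]),
}

# Hypothetical embedding of an input image.
image_embedding = np.array([0.2, 0.85, 0.6])

# Zero-shot prediction: pick the label whose text embedding is closest to
# the image embedding. No task-specific training happens at any point.
prediction = max(label_embeddings,
                 key=lambda label: cosine_similarity(image_embedding, label_embeddings[label]))
print(prediction)  # "elephant"

    The point is that no task-specific training happens anywhere; the model matches an image to labels purely from what it absorbed during pretraining, which is exactly why pretraining concept frequency matters so much.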

Figure 1: zero-shot performance vs. log-scaled pretraining concept frequency (Udandarao et al., 2024)

    Figure 1 shows that the log-scaled pre-training frequency of a concept has a linear relationship to the model's zero-shot performance on that concept (Udandarao et al., No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance, 2024). In other words, to achieve linear improvements in zero-shot performance, we need to exponentially increase the training data, and given the amount of data already fed to machine learning models, we may be watching the exponential improvement of AI turn into sigmoidal growth and plateau.
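    To see what that log-linear relationship implies in plain numbers, here is a small sketch in Python. The slope and intercept are invented purely for illustration, not taken from the paper; the shape of the relationship is what matters.

# A toy illustration of the log-linear trend described above: if zero-shot
# performance grows linearly with the *log* of concept frequency, then each
# fixed gain in performance costs a multiplicative (exponential) increase
# in data. The slope and intercept below are invented for illustration.
import math

def zero_shot_performance(concept_frequency: float,
                          slope: float = 5.0,
                          intercept: float = 10.0) -> float:
    """Hypothetical performance (%) as a linear function of log10(frequency)."""
    return intercept + slope * math.log10(concept_frequency)

for frequency in [1e3, 1e4, 1e5, 1e6, 1e7]:
    print(f"{frequency:>12,.0f} examples -> {zero_shot_performance(frequency):5.1f}% performance")

# Every 10x increase in data buys the same fixed +5.0 points here, which is
# exactly the "exponential data for linear gains" problem.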
    
    Now, I mentioned a study about evolution earlier because sigmoidal growth does not mean improvement stops outright; it means the curve flattens dramatically after the exponential phase. This is similar to a long-standing question in evolutionary biology: is there a limit to how much improvement evolution can achieve? The long-term evolution experiment (LTEE), which has been running continuously for a few decades, has put E. coli through tens of thousands of generations in an unchanging environment and found that the fitness gains "fit the proposed power law model, and, indeed, fit within predictions of the model from earlier data. These results suggest that, contrary to previous thinking, adaptation and adaptive divergence can potentially increase indefinitely, even in a constant environment" (Scharping, Could evolution ever yield a 'perfect' organism? 2015).
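    The difference between those two growth shapes matters here, so here is a quick numerical comparison, again with arbitrary made-up parameters: a logistic (sigmoidal) curve saturates at a hard ceiling, while a power law like the one the LTEE data fit keeps climbing, just ever more slowly.

# Comparing the two growth shapes mentioned above, with arbitrary invented
# parameters: logistic growth flattens toward a hard ceiling, while a power
# law keeps increasing indefinitely, only ever more slowly.
import math

def sigmoid(t: float, ceiling: float = 100.0,
            rate: float = 0.1, midpoint: float = 50.0) -> float:
    """Logistic growth: fast at first, then saturates at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def power_law(t: float, scale: float = 10.0, exponent: float = 0.3) -> float:
    """Power-law growth: slows down but never reaches a ceiling."""
    return scale * t ** exponent

for t in [10, 100, 1_000, 10_000]:
    print(f"t={t:>6}: sigmoid={sigmoid(t):7.2f}, power law={power_law(t):7.2f}")

# The sigmoid is effectively pinned at 100 long before t=10,000, while the
# power law has passed it and is still rising.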
    
    I would guess that our current AI models will progress at a similar pace unless we develop a completely different kind of model. Granted, in that same experiment a sudden burst of rapid evolution took place in which a completely new trait emerged, and something similar could happen with current AI models.

References

Scharping, N. (2015, December 17). Could evolution ever yield a “perfect” organism? Discover. https://web.archive.org/web/20151220192433/http://blogs.discovermagazine.com/d-brief/2015/12/16/the-search-for-the-perfect-organism/#.VncAdvLP1qY 

Udandarao, V., Prabhu, A., Ghosh, A., Sharma, Y., Torr, P., Bibi, A., Albanie, S., & Bethge, M. (2024). No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance. arXiv. https://arxiv.org/pdf/2404.04125







