AI Teaching AI Risks

When AI is used to train other AI systems, a phenomenon called “model collapse” becomes a concern. Model collapse occurs when generative AI models are trained on content produced by other AI systems rather than by human creators. Over successive generations, these models degrade: they gradually lose the knowledge contained in human-generated data and instead mimic patterns they have already seen.

In correspondence with Cosmos, Ilia Shumailov, a machine learning researcher and co-author of a paper on the topic, offers the following analogy: imagine a model trained on a dataset of 90 yellow items and 10 blue items. Because yellow items dominate, the model starts to distort the blue ones, rendering them greenish, and over time its memory of blue fades entirely. With each successive round of synthetic data, rare cases vanish and the outputs drift further from reality, eventually culminating in nonsense.
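The analogy can be illustrated with a toy simulation. Here the “model” is simply the empirical distribution of its training data, and each generation is trained on samples drawn from the previous one; the specific counts and generation number are illustrative assumptions, not figures from the paper. Because sampling noise compounds and a category that hits zero can never come back, the minority class tends to disappear:

```python
import random

random.seed(0)

# Generation 0: the "real" dataset from the analogy,
# 90 yellow items and 10 blue items.
data = ["yellow"] * 90 + ["blue"] * 10

def next_generation(data, size=100):
    """A trivial stand-in for a generative model: learn the empirical
    distribution of the current dataset, then emit a synthetic
    dataset by sampling from it."""
    return random.choices(data, k=size)

for gen in range(50):
    data = next_generation(data)
    # Once "blue" drops to zero it can never return: the model has
    # permanently forgotten that part of the distribution.

print(data.count("yellow"), data.count("blue"))
```

Run with different seeds, the blue count performs a random walk with an absorbing barrier at zero, which is the essence of the tail-loss behavior the researchers describe.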

The obvious solution is to avoid training AI on content generated by other AIs. However, AI-generated content has already proliferated across the internet. As Ross Anderson, a security expert and co-author of Shumailov’s paper, aptly notes: just as humanity has littered the oceans with plastic waste and saturated the atmosphere with carbon dioxide, the digital realm risks being flooded with mundane, uninformative content.


The question arises: Will AI inevitably spiral out of control? Any AI model that begins producing gibberish would likely be deactivated by the tech company that initially deployed it. Nonetheless, Aditi Raghunathan, a computer scientist from Carnegie Mellon University, warns of subtler dangers. She underscores that the real peril lies in inconspicuous flaws, such as biases that seep into AI systems as they conform to the preferences of the majority.

What is model collapse in AI?

In the realm of large language models (LLMs) like ChatGPT, researchers have identified an alarming issue of irreversible defects and degeneration, termed “model collapse.” A recent research paper, “The Curse of Recursion: Training on Generated Data Makes Models Forget,” shows that incorporating model-generated content during training leads to permanent flaws in the resulting models, causing the tails of the original content distribution to disappear.
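One way to see why tails are lost is a back-of-the-envelope calculation for the simplest case the paper analyzes theoretically: a Gaussian refit by maximum likelihood at each generation. The MLE variance estimate from n samples is biased low in expectation, E[σ̂²] = σ²·(n−1)/n, so each refit on the previous model’s samples shrinks the expected variance, thinning the tails generation after generation. The sample size and generation count below are illustrative assumptions:

```python
# Expected variance of a Gaussian refit by maximum likelihood on
# n samples drawn from the previous generation's fit. Each refit
# multiplies the expected variance by (n - 1) / n, so the tails of
# the distribution shrink geometrically over generations.
n = 100          # samples per generation (assumed)
variance = 1.0   # variance of the generation-0 "real" data

history = []
for generation in range(200):
    variance *= (n - 1) / n
    history.append(variance)

print(round(history[-1], 3))  # prints 0.134
```

After 200 refits the expected variance has fallen to roughly 13% of the original, even though each individual refit looks nearly unbiased; this compounding is what makes the damage cumulative and hard to notice early on.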

The researchers behind the study, including Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson, pinpoint this phenomenon as “Model Collapse.” Their work indicates that this issue can manifest across a range of learned generative models, including Variational Autoencoders, Gaussian Mixture Models, and Large Language Models (LLMs).

In their research, the team examines the underlying causes of the phenomenon and shows that it occurs across a wide range of generative models. Addressing model collapse matters because these models are trained on vast amounts of data scraped from the internet. LLMs and generative AI, widely assumed at their recent public debut to represent unceasing progress, may in fact conceal a deeper problem of degenerative AI.

The emergence of Model Collapse within systems like LLMs challenges the earlier assumption of unfettered advancement and prompts experts to discuss the potential inevitability of these systems undergoing deterioration.
