What is an AI winter and is one coming?

AI winter is a term describing a period of reduced funding and interest in the research and development of artificial intelligence systems. 

This usually follows a period of overhype and under-delivery on the promised capabilities of AI systems. Does this sound like today’s AI? 

Over the past few months, we’ve observed several key generative AI systems failing to live up to the promises of investors and Silicon Valley executives, from the recent launch of OpenAI’s GPT-4o model to Google’s AI Overviews to Perplexity’s plagiarism controversy and more.

While such periods are typically temporary, they can impact the industry’s growth. 

This article tackles:

A brief history of AI winters and the reasons each one occurred.

Characteristics and lessons learned from past AI winters.

Brief history of AI winters and the reasons each one occurred

The field of AI has a rich (albeit quite short) history, marked by periods of intense excitement followed by waves of disappointment. These periods of decline are what we now call AI winters.

The first one occurred in the 1970s. Early AI projects like machine translation and speech recognition failed to meet the ambitious expectations set for them. Funding for AI research dried up, leading to a slowdown in progress. 

Several factors contributed to the first AI winter. 

In a nutshell, researchers over-promised what AI could achieve in the short term. 

Even now, we don’t fully understand human intelligence, making it hard to replicate in AI.

Another key factor was that the computing power available at the time was insufficient to handle the growing demands of the AI field, which inevitably halted progress in the area. 

Some progress was observed in the 1980s with the development of expert systems, which successfully solved specific problems in limited domains. This period of excitement lasted until the late 1980s and early 1990s when another AI winter arrived.

This time, the reasons were more closely tied to the demise of a particular computing technology, the LISP machine, which was displaced by cheaper, more efficient general-purpose hardware. 

Simultaneously, expert systems failed to meet expectations when prompted with unexpected inputs, leading to errors and erosion of trust. 

Another notable failure of the era was the Japanese Fifth Generation Computer Systems project.

This collaboration between Japan’s computing industry and government aimed to build a new generation of AI-oriented hardware, operating systems and programming techniques. It ultimately failed to meet most of its goals.  

Despite research in AI continuing throughout the 1990s, many researchers avoided using the term “AI” to distance themselves from the field’s history of failed promises. 

This is quite similar to a trend observed today, with many prominent researchers carefully specifying the particular area they work in and avoiding the umbrella term. 

AI interest grew in the early 2000s due to machine learning and computing advances, but practical integration was slow.

Despite this period being referred to as the “AI spring,” the term “AI” itself remained tarnished by past failures and unmet expectations. 

Investors and researchers alike shied away from the term, associating it with overhyped and underperforming systems. 

As a result, AI was often rebranded under different names, such as machine learning, informatics or cognitive systems. This allowed researchers to distance themselves from the stigma associated with AI and secure funding for their work.

In the 2010s, IBM’s Watson became a prime example of failed AI integration, following the company’s promise to revolutionize healthcare and diagnostics. 

Despite its success on the game show Jeopardy!, the AI super project faced significant challenges when applied to real-world healthcare. 

The Oncology Expert Advisor, developed in collaboration with the MD Anderson Cancer Center, struggled to interpret doctors’ notes and apply research findings to individual patient cases. 

A similar project at Memorial Sloan Kettering Cancer Center encountered problems due to the use of synthetic data, which introduced bias and failed to account for real-world variations in patient cases and treatment options. 

When Watson was implemented in other parts of the world, its recommendations were often irrelevant or incompatible with local healthcare infrastructures and treatment regimens. 

Even in the U.S., it was criticized for providing obvious or impractical advice. 

Ultimately, Watson’s failure in healthcare highlights the challenges of applying AI to complex, real-world problems and the importance of considering context and data limitations.

Meanwhile, several AI-related trends emerged. These niche technologies gained buzz and funding but quickly faded after failing to live up to the hype.

Think of:

Chatbots. 

IoT (internet of things).

Voice-command devices.

Big data.

Blockchain.

Augmented reality.

Autonomous vehicles. 

All of these areas of research and development still hold a ton of potential, but investor interest in each peaked, and then waned, at different points in the past. 

Source: Google Trends

Overall, the history of AI is a cautionary tale about the dangers of hype and unrealistic expectations, even as it demonstrates the field’s resilience and progress. Despite the setbacks, AI technologies have continued to evolve. 

Dig deeper: No, AI won’t change your marketing job: A contrarian perspective

Characteristics and lessons learned from past AI winters

Generative AI is the most recent iteration of the cycle of AI breakthrough, hype, investment and technology integration across many areas of life and business. 

Let’s examine whether it is currently headed toward an AI winter. But first, allow me to briefly recap the lessons learned from past AI winters. 

Each AI winter shares the following key stages: 

Hype cycle

AI winters often follow periods of intense hype and inflated expectations.

The gap between these unrealistic expectations and the actual capabilities of AI technology leads to disappointment and disillusionment.

Technical barriers

AI winters frequently coincide with technical limitations.

Whether it’s a lack of computational power, algorithmic challenges or insufficient data, these barriers can significantly impede progress.

Financial drought

As enthusiasm for AI wanes, funding for research and development dries up.

This lack of investment can further stifle innovation and exacerbate the slowdown.

Backlash and skepticism

AI winters often witness a surge in criticism and skepticism from both the scientific community and the public.

This negative sentiment can further dampen the mood and make it difficult to secure funding or support.

Strategic retreat

In response to these challenges, AI researchers often shift their focus to more manageable, less ambitious projects.

This can involve rebranding their work or focusing on specific applications to avoid the negative connotations associated with AI.

Then a niche breakthrough occurs, starting the cycle all over again.

AI winters aren’t just a temporary setback; they can really hurt progress.

Funding dries up, projects get abandoned and talented people leave the field. This means we miss out on potentially life-changing technologies.

Plus, AI winters can make people suspicious of AI, making it harder for even good AI to be accepted.

Since AI is becoming increasingly integrated into national economies, our daily lives and many businesses, a downturn hurts everyone.

It’s like hitting the brakes just as we start making progress toward achieving some of the world’s biggest tech-related goals like AGI (artificial general intelligence).

These cycles also discourage long-term research, leading to a focus on short-term gains.

Despite stalling progress, AI winters offer valuable learning experiences. They remind us to be realistic about AI’s capabilities, focus on foundational research and ensure diverse funding sources.
