Title: AI Experts Clash Over Exaggerated Threats of Human Extinction, Big Tech Accused of Fearmongering

Subtitle: Andrew Ng’s Remarks Spark Debate Among Leading Figures in Artificial Intelligence

Renowned AI scientist Andrew Ng, founder of Google Brain, has sparked a public debate among the biggest names in artificial intelligence by questioning the validity of claims that AI poses an existential risk to humanity. Ng alleges that Big Tech is exaggerating these threats for its own gain, challenging the views of prominent AI leaders such as Demis Hassabis of DeepMind and Sam Altman of OpenAI.

In an interview with The Australian Financial Review, Ng expressed his belief that certain large tech companies are fabricating fears of AI-induced human extinction to avoid competition from open-source initiatives. He explained that such fearmongering has become a tool for lobbyists pushing legislation that is detrimental to the open-source community.

While Ng refrained from naming specific individuals, prominent voices warning of AI’s catastrophic risks include Elon Musk, who was once Ng’s student, as well as Sam Altman, Demis Hassabis, Geoffrey Hinton, and Yoshua Bengio. These figures have made claims about the potential dangers of AI, particularly in light of recent advances in generative AI tools like ChatGPT.

Geoffrey Hinton, a British-Canadian computer scientist widely regarded as one of the godfathers of AI, responded to Ng’s comments and defended the notion of AI posing an existential threat. He highlighted his departure from Google, emphasizing his freedom to openly discuss this critical issue.

Yann LeCun, Meta’s chief AI scientist and another AI godfather closely associated with Hinton, aligned himself with Ng’s perspective. LeCun accused Hinton and Yoshua Bengio of inadvertently aiding those seeking to restrict AI research and development by supporting limitations on open research, open-source code, and open-access models. He warned that excessive regulation aimed at mitigating AI risks could stifle growth and collaboration within the open-source AI community, leading to a concentration of power in the hands of a few companies.

Meredith Whittaker, president of messaging app Signal and chief advisor to the AI Now Institute, dismissed the claims of AI being an existential risk as a “quasi-religious ideology” detached from scientific evidence. Whittaker accused Big Tech of leveraging this ideology to advance its own interests, diverting attention from more immediate real-world problems like copyright infringement and job displacement.

The clash between AI experts highlights the ongoing debate over the possible dangers of AI and the motives behind competing AI narratives. As the field continues to evolve, these differing perspectives from industry leaders will shape the trajectory of AI research, policymaking, and the technology’s implications for society.

By Smith Steave