Opinion: The risks of AI could be catastrophic. We should empower company workers to warn us | CNN

Editor’s Note: Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School and the author of the book “They Don’t Represent Us: Reclaiming Our Democracy.” The views expressed in this commentary are his own. Read more opinion at CNN.

CNN

In April, Daniel Kokotajlo resigned his position as a researcher at OpenAI, the company behind ChatGPT. He wrote in a statement that he disagreed with how the company is handling issues related to safety as it continues to develop the revolutionary but still not fully understood technology of artificial intelligence.


Lawrence Lessig

On his profile page on the online forum “LessWrong,” Kokotajlo, who had worked in policy and governance research at OpenAI, expanded on those thoughts, writing that he quit his job after “losing confidence that it would behave responsibly” in safeguarding against the potentially dire risks associated with AI.

And in a statement issued around the time of his resignation, he blamed the culture of the company for forging ahead without heeding warnings about the dangers it might be unleashing.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo wrote.

OpenAI pressed him to sign an agreement promising not to disparage the company, telling him that if he refused, he would lose his vested equity in the company. The New York Times has reported that equity was worth $1.7 million. Nevertheless, he declined, apparently choosing to reserve his right to publicly voice his concerns about AI.

When news broke about Kokotajlo’s departure from OpenAI and the alleged pressure from the company to get him to sign a non-disparagement agreement, the company’s CEO Sam Altman quickly apologized.

“This is on me,” Altman wrote on X (formerly known as Twitter), “and one of the few times I’ve been genuinely embarrassed running openai; I did not know this was happening and I should have.” What Altman didn’t reveal is how many other company employees and executives might have been forced to sign similar agreements in the past. In fact, for many years, according to former employees, the company has threatened to cancel employees’ vested equity if they didn’t promise to play nice.


Altman’s apology was effective, however, in tamping down attention to OpenAI’s legal blunder of requiring these agreements. The company was eager to move on, and most in the press were happy to oblige. Few news outlets reported the obvious legal truth that such agreements were plainly illegal under California law. Employees had for years thought themselves silenced by the promise they felt compelled to sign, but a self-effacing apology by a CEO was enough for the media, and the general public, to move along.

We should pause to consider just what it means when someone is willing to give up perhaps millions of dollars to preserve the freedom to speak. What, exactly, does he have to say? And not just Kokotajlo, but the many other OpenAI employees who have recently resigned, many now pointing to serious concerns about the dangers inherent in the company’s technology.

I knew Kokotajlo and reached out to him after he quit; I’m now representing him and 10 other current and former OpenAI employees on a pro bono basis. But the facts I relate here come only from public sources.

Many people refer to concerns about the technology as a question of “AI safety.” That’s a terrible term to describe the risks that many people in the field are deeply concerned about. Some of the leading AI researchers, including Turing Award winners Yoshua Bengio and Geoffrey Hinton, the computer scientist sometimes referred to as “the godfather of AI,” fear the possibility of runaway systems creating not just “safety risks,” but catastrophic harm.


And while the average person can’t imagine how anyone could lose control of a computer (“just unplug the damn thing!”), we should also recognize that we don’t actually understand the systems that these experts fear.

Companies operating in the field of AGI (artificial general intelligence, which broadly speaking refers to theoretical AI research attempting to create software with human-like intelligence, including the ability to perform tasks it was not trained or developed for) are among the least regulated, inherently dangerous companies in America today. No agency has the legal authority to monitor how the companies develop their technology or the precautions they are taking.

Instead, we rely upon the good judgment of these corporations to ensure that risks are adequately policed. Thus, as a handful of companies race to achieve AGI, the most important technology of the century, we are trusting them and their boards to keep the public’s interest first. What could possibly go wrong?


This oversight gap has now led a number of current and former employees at OpenAI to formally ask leading AI companies to pledge to foster an environment in which employees are free to criticize the companies’ safety precautions.

Their “Right to Warn” pledge asks companies:

First, it asks companies to commit to revoking any “non-disparagement” agreement. (OpenAI has already promised to do as much; reports are that other companies may have similar language in their agreements that they’ve not yet acknowledged.)

Second, it asks companies to pledge to create an anonymous mechanism to give employees and former employees a way to raise safety concerns to the board, to regulators and to an independent AI safety organization.

Third, it asks companies to support a “culture of open criticism,” to encourage employees and former employees to speak about safety concerns so long as they protect the corporation’s intellectual property.

Finally, and perhaps most interestingly, it asks companies to promise not to retaliate against employees who share confidential information when raising risk-related concerns; in exchange, employees pledge to channel their concerns first through a confidential and anonymous process, if and when the company creates one. This is designed to create an incentive for companies to build a mechanism that protects confidential information while enabling warnings.


Such a “Right to Warn” would be unique in the regulation of American corporations. It is justified by the absence of effective regulation, a condition that could well change if Congress got around to addressing the risks that so many have described. And it is necessary because ordinary whistleblower protections don’t cover conduct that is not itself regulated.

The law, especially California law, gives employees wide latitude to report illegal activities; but when little is regulated, little is illegal. Thus, so long as there is no effective regulation of these companies, it is only the employees who can identify the risks that the company is ignoring.

Even if the AI companies endorsed a “Right to Warn,” no one should imagine that it would be easy for any current or former employee to call out an AI company. Whistleblowers are not favorite co-workers, even if they are respected by some. And even with formal protections, the choice to speak out inevitably has consequences for their future employment opportunities — and friendships.

Obviously, it is not fair that we rely upon self-sacrifice to ensure that private corporations are not putting profit above catastrophic risks. This is the job of regulation. But if these former employees are willing to lose millions for the freedom to say what they know, maybe it is time that our representatives built the structures of oversight that would make such sacrifices unnecessary.


