Is Artificial Intelligence Good for Society?

Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [1]

The idea of AI dates back at least 2,700 years. As Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, explains: “Our ability to imagine artificial intelligence goes back to ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [2]

Mayor notes that the myths about Hephaestus, the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man Talos, which had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous and powerful jar/box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concludes, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [2]

The modern notion of AI largely began when Alan Turing, who contributed to breaking the Nazis’ Enigma code during World War II, created the “Turing test” to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been debated. [1][3][4]

The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence” as “the science and engineering of making intelligent machines.” He would go on to create the computer programming language LISP (which is still used in AI), stage computer chess matches against Russian computer programs, and develop the first computer with “hand-eye” capability, all important building blocks for AI. [1][5][6][7]

AI technology continued to grow at a rapid pace during the 1950s. And, as computers became cheaper in the 1960s and ’70s, AI programs flourished, and U.S. government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early ’90s furthered the research, including the invention of expert systems. But progress again waned with another drop in government funding. [10]

More recently, advances in computer storage limits and speeds have opened new avenues for AI research and implementation, aiding scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [1][10][11][12]

Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars, cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media. [13][58]

For more on the history of AI, see ProCon’s Historical Timeline.

Pros and Cons at a Glance

Pros
Pro 1: AI can make everyday life more enjoyable and convenient, while improving our health and standard of living.
Pro 2: AI makes work easier for students and professionals alike.
Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities.
Pro 4: Artificial intelligence can improve workplace safety.
Pro 5: AI can function as a reliable research partner.

Cons
Con 1: AI will harm the standard of living for many people by causing mass unemployment.
Con 2: AI undermines critical thinking skills for students and adults alike.
Con 3: AI hurts racial minorities by repeating and exacerbating racism.
Con 4: Artificial intelligence poses dangerous privacy risks.
Con 5: AI can spread politicized, even dangerous misinformation.

Pro Arguments

Pro 1: AI can make everyday life more enjoyable and convenient, while improving our health and standard of living.

Why sit in a traffic jam when a map app can navigate you around the car accident? Why fumble with shopping bags searching for your keys in the dark when a preset location-based command can have your doorway illuminated as you approach your now unlocked door? [23]

Why scroll through hundreds of possible TV shows when the streaming app already knows what genres you like? Why forget eggs at the grocery store when a digital assistant can take an inventory of your refrigerator and add them to your grocery list and have them delivered to your home? All of these marvels are assisted by AI technology. [23]

AI-enabled fitness apps boomed during the COVID-19 pandemic when gyms were closed, increasing the number of AI options for at-home workouts. Now, you can not only set a daily steps goal with encouragement reminders on your smart watch, but you can ride virtually through the countryside on a Peloton bike from your garage or have a personal trainer on your living room TV. For more specialized fitness, AI wearables can monitor yoga poses or golf and baseball swings. [24][25]

AI can even enhance your doctor’s appointments and medical procedures. It can alert medical caregivers to patterns in your health data as compared to a vast library of medical data, while also doing the paperwork tied to medical appointments so doctors have more time to focus on their patients, resulting in more personalized care. AI can even help surgeons be quicker, more accurate, and less invasive in their operations. [26]

Smart speakers including Amazon’s Echo can use AI to soothe babies to sleep and monitor their breathing. Using AI, speakers can also detect regular and irregular heartbeats, as well as heart attacks and congestive heart failure. [27][28][29]

AI is even beginning to excel at creative writing, producing fiction and poetry that some readers enjoy. Some observers predict that TV and film scripts will also soon benefit from the compositional powers of AI.

Pro 2: AI makes work easier for students and professionals alike.

Much like the calculator did not signal the end of students’ grasp of mathematics, typing did not eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading and writing or of education in general. [78][79]

Elementary school teacher Shannon Morris explains that AI tools like “ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations, and providing customized content based on their interests. It can also offer personalized learning resources, videos, articles, and interactive activities. This resource can even provide personalized recommendations for studying, help with research, provide context-specific answers, and offer educational games.” She also notes that teachers’ more daunting tasks like grading and making vocabulary lists can be streamlined with AI tools. [79]

For adults, AI can similarly make work easier and more efficient, rather than signaling the rise of the robot employee. Pesky, time-consuming tasks like scheduling and managing meetings, finding important emails amongst the spam, prioritizing tasks for the day, and creating and posting social media content can be delegated to AI, freeing up time for more important and rewarding work. The technology can also help with brainstorming, understanding difficult concepts, finding errors in code, and learning languages via conversation, making daunting tasks more manageable. [80]

AI is a tool that, if used responsibly, can enhance both learning and work for everyone. Carri Spector of the Stanford Graduate School of Education says, “I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.” [81]

Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities.

Artificial intelligence is commonly integrated into smartphones and other household devices. Virtual assistants, including Siri, Alexa, and Cortana, can perform innumerable tasks from making a phone call to navigating the internet. People who are deaf and hearing impaired can access transcripts of voicemail or other audio, for example. [20]

Other virtual assistants can transcribe conversations as they happen, allowing for more comprehension and participation by those who have impairments that affect their communication. Using voice commands with virtual assistants can help people with mobility disabilities who may have difficulty navigating small buttons or screens or turning on a lamp. [20]

Apps enabled by AI on smartphones and other devices, including VoiceOver and TalkBack, can read messages, describe app icons or images, and give information such as battery levels for visually impaired people. Other apps, such as Voiceitt, can transcribe and standardize the voices of people with speech impediments. [20]

Wheelmap provides users with information about wheelchair accessibility, and Evelity offers indoor navigation tools that are customized to the user’s needs, providing audio or text instructions and routes for wheelchair accessibility. [20]

Other AI implementations, such as smart thermostats, smart lighting, and smart plugs, can be automated to work on a schedule to aid people with mobility or cognitive disabilities to lead more independent lives. [21]

More advanced AI projects can combine with robotics to help physically disabled people. HOOBOX Robotics, for example, uses facial recognition software to allow a wheelchair user to move their wheelchair with facial expressions, making movement easier for seniors and those with ALS or quadriparesis. [22]

Pro 4: Artificial intelligence can improve workplace safety.

AI doesn’t get stressed, tired, or sick, three major causes of human accidents in the workplace. AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50 percent of construction companies that used drones to inspect roofs and other risky tasks saw improvements in safety. [14][15]

Artificial intelligence can also help humans be safer. For instance, AI can ensure employees are up to date on training by tracking and automatically scheduling safety or other training. AI can also check and offer corrections for ergonomics to prevent repetitive stress injuries or worse. [16]

An AI program called AI-SAFE (Automated Intelligent System for Assuring Safe Working Environments) aims to automate the workplace personal protective equipment (PPE) check, eliminating human errors that could cause accidents in the workplace. As more people wear PPE to prevent the spread of COVID-19 and other viruses, this sort of AI could protect against large-scale outbreaks. [17][18][19]

In India, AI was used during the coronavirus pandemic to reopen factories safely by providing camera, cell phone, and smart wearable device-based technology to ensure social distancing, take employee temperatures at regular intervals, and perform contact tracing if anyone tested positive for the virus. [18][19]

AI can also perform more sensitive tasks in the workplace such as scanning work emails for improper behavior and types of harassment. [15]

Pro 5: AI can function as a reliable research partner.

While AI can be wrong, limited, biased, or misleading, so can every other information source, including textbooks, the Internet at large, and people. Whether you’re looking for a new restaurant, writing a college research paper, or trying to cure cancer, the job of any researcher is to distinguish between good and bad information. AI is simply a tool. For example, ChatGPT and Google AI, which now offer citations, are as reliable as a search engine or a library card catalog, in that researchers should go to the primary sources and evaluate the information for themselves. [103]

Leo S. Lo, dean of the College of University Libraries and Learning Sciences at the University of New Mexico, who calls AI "my new favorite research partner," noted that a "limitation of ChatGPT is that it cannot replace critical thinking. Although it can help researchers generate ideas, it cannot replace their ability to critically think about research. In addition, it may not possess the same depth and nuance as a human researcher." As such, AI is simply another research tool. [104]

A 2024 study found that, as a tool, AI was helpful for academic research in several areas: "idea generation, content structuring, literature synthesis, data management, editing, and ethical compliance." Note that the list does not include the actual research or writing the paper. Instead, AI is a brainstorming tool that has the entire Internet at its disposal, which also lends itself to "literature synthesis" (collating information from multiple sources). AI can also be used to put data in nicely formatted tables and point out where a comma is missing. All of that help frees up researchers’ time and effort to focus on the actual research. [105]

The study noted that AI can be especially helpful when researchers need information from another field of study: "AI holds immense potential to revolutionise and streamline interdisciplinary research, acting as a bridge between diverse fields. Its advanced data analysis capabilities enable it to uncover patterns and correlations that might be invisible to human researchers, thereby fostering new insights and theories. AI can process and synthesize vast amounts of information from different disciplines, helping researchers in one field to utilize findings from another, leading to innovative solutions." Again, the AI is "helping" researchers, not running wild in a lab by itself. [105]

When asked if it is a good research partner, ChatGPT offers, "Absolutely! I can help you find reliable sources, summarize information, organize your thoughts, and even suggest angles you might not have considered." Google’s AI also promises to "streamline" the peer review process, especially "in the initial stages [by] automating tasks like identifying potential reviewers and summarizing research findings, potentially speeding up the review process." [106][107]

As a research partner, AI is helping in myriad ways. It’s speeding up drug discovery, analyzing huge quantities of particle accelerator data, automating repetitive tasks such as protein folding, making weather predictions, assessing bee behavior patterns, reviewing audio for bird and electric wire collisions, using game theory to catch animal poachers, evaluating social media posts to locate and track animals on the endangered list, predicting regions of poverty, and even identifying areas of city water pipes that need upgrades. In other words, AI is an enthusiastic and super-powered intern—ready to do both the grunt work and the high-level thinking. [108][109][110]

Pro Quotes

Co-founder of LinkedIn Reid Hoffman stated:

I truly believe that by giving billions of people access to A.I. tools they can use in whatever ways they choose, we can create a world where A.I. augments and amplifies human creativity and labor instead of simply replacing it....

Tech skeptics have long used the adjective “Orwellian” to cast everything from a video recommendation feature to turn-by-turn navigation apps as threats to individual autonomy, but the history of technological innovation in the 21st century tells a different story. In “1984,” George Orwell’s classic novel of state oppression, powerful telescreens enable a totalitarian regime to rule over dispossessed proles with unchecked omnipotence. But today we live in a world where individual identity is the coin of the realm — where plumbers and presidents alike aspire to be social media influencers and cultural power flows increasingly to self-made operators.…

I believe A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it. [97]

David Brooks, opinion columnist for The New York Times, stated:

I don’t think A.I. is going to be as powerful as many of its evangelists think it will be. I don’t think A.I. is ever going to be able to replace us — ultimately I think it will simply be a useful tool. In fact, I think instead of replacing us, A.I. will complement us. In fact, it may make us free to be more human.

Many fears about A.I. are based on an underestimation of the human mind. Some people seem to believe that the mind is like a computer. It’s all just information processing, algorithms all the way down, so of course machines are going to eventually overtake us.

This is an impoverished view of who we humans are.… The brain is its own universe. Sometimes I hear tech people saying they are building machines that think like people. Then I report this ambition to neuroscientists and their response is: That would be a neat trick, because we don’t know how people think. [101]

Medha Bankhwal, Michael Chui, Ankit Bisht, Roger Roberts, and Ashley van Heteren, all of consulting firm McKinsey & Company, stated:

By collaborating to find ways to put AI to work at scale for social good, mission-driven organizations, governments, foundations, universities, ecosystems of developers, and businesses can help solve some of the world’s most challenging and intractable problems. They can help thwart human trafficking, ensure girls and children all over the world receive the education they deserve, protect forests from illegal deforestation, support the health and safety of pregnant women and newborns, and so much more. If these things aren’t worth fighting for, what is? [102]

Con Arguments

Con 1: AI will harm the standard of living for many people by causing mass unemployment.

AI robots and other software and hardware are becoming less expensive and need none of the benefits and services required by human workers, such as sick days, lunch hours, bathroom breaks, health insurance, pay raises, promotions, and performance reviews, which spells trouble for workers and society at large. [51]

Some 48 percent of experts believed AI would replace a large number of blue- and even white-collar jobs (including Hollywood and TV script writing), creating greater income inequality, increased unemployment, and a breakdown of the social order. [35]

The axiom “everything that can be automated, will be automated” is no longer science fiction. Self-checkout kiosks in stores like CVS, Target, and Walmart use AI-assisted video and scanners to prevent theft, alert staff to suspicious transactions, predict shopping trends, and mitigate sticking points at checkout. These AI-enabled machines have displaced human cashiers. About 11,000 retail jobs were lost in 2019, largely due to self-checkout and other technologies. In 2020, during the COVID-19 pandemic, a self-checkout manufacturer shipped 25 percent more units globally, reflecting the more than 70 percent of American grocery shoppers who preferred self- or touchless checkouts. [35][52][53][54][55]

An October 2020 World Economic Forum report found 43 percent of businesses surveyed planned to reduce workforces in favor of automation. Many businesses, especially fast-food restaurants, retail shops, and hotels, automated jobs during the COVID-19 pandemic. [35]

Income inequality has been exacerbated over the past four decades, with 50 to 70 percent of changes in American paychecks attributable to wage decreases for workers whose industries experienced rapid automation, including AI technologies. [56][57]

Con 2: AI undermines critical thinking skills for students and adults alike.

The idea that the Internet is making us stupid is legitimate, and AI is like the Internet on steroids.

With AI bots doing everything from research to writing papers, from basic math to logic problems, from generating hypotheses to performing science experiments, from editing photos to creating “original” art, students of all ages will be tempted (and many will succumb to the temptation) to use AI for their school work, undermining education goals. [82][83][84][85][86]

“The academic struggle for students is what pushes them to become better writers, thinkers and doers. Like most positive outcomes in life, the important part is the journey. Soon, getting college degrees without AI assistance will be as foreign to the next generation as payphones and Blockbuster [are to the current generation], and they will suffer for it,” says Mark Massaro, professor of English at Florida SouthWestern State College. [83]

A June 2023 study found that increased use of AI correlates with increased student laziness because of a loss of human decision-making. Similarly, an October 2023 study found increased laziness and carelessness as well as a decline in work quality when humans worked alongside AI robots. [87][88][89]

The implications of allowing AI to complete tasks are enormous. We will see declines in work quality and human motivation as well as the rise of dangerous situations from deadly workplace accidents to George Orwell’s dreaded “groupthink.” And, when humans have become too lazy to program the technology, we’ll see lazy AI, too. [90]

“An overreliance on technology will further sever the American public from determining truth from lies, information from propaganda, a critical skill that is slowly becoming a lost art, leaving the population willfully ignorant and intellectually lazy,” explains Massaro. [73][83]

Con 3: AI hurts racial minorities by repeating and exacerbating racism.

Facial recognition has been found to be racially biased, easily recognizing the faces of white men while wrongly identifying Black women 35 percent of the time. One study of Amazon’s Rekognition AI program falsely matched 28 members of the U.S. Congress with mugshots from a criminal database, with 40 percent of the errors being people of color. [22][36][43][44]

AI has also been disproportionately employed against Black and Brown communities, with more federal and local police surveillance cameras in neighborhoods of color, and more social media surveillance of Black Lives Matter and other Black activists. The same technologies are used for housing and employment decisions and TSA airport screenings. Some cities, including Boston and San Francisco, have banned police use of facial recognition for these reasons. [36][43]

One particular AI program tasked with predicting recidivism risk for U.S. courts—the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)—was found to falsely label Black defendants as high risk at twice the rate of white defendants, and to falsely label white defendants as low risk more often. AI is also incapable of distinguishing between when the N-word is being used as a slur and when it’s being used culturally by a Black person. [45][46]

In China, facial recognition AI has been used to track Uyghurs, a largely Muslim minority. The U.S. and other governments have accused the Chinese government of genocide and forced labor in Xinjiang, where a large population of Uyghurs live. AI algorithms have also been found to show a “persistent anti-Muslim bias,” by associating violence with the word “Muslim” at a higher rate than with words describing people of other religions including Christians, Jews, Sikhs, and Buddhists. [47][48][50]

Con 4: Artificial intelligence poses dangerous privacy risks.

Facial recognition technology can be used for passive, warrantless surveillance without the knowledge of the person being watched. In Russia, facial recognition was used to monitor and arrest protesters who supported jailed opposition politician Aleksey Navalny, who was found dead in prison in 2024. Russians fear a new facial recognition payment system for Moscow’s metro will increase these sorts of arrests. [36][37][38]

Ring, the AI doorbell and camera company owned by Amazon, has partnered with more than 400 police departments, allowing the police to request footage from users’ doorbell cameras. While users were allowed to deny access to any footage, privacy experts feared the close relationship between Ring and the police could override customer privacy, especially when the cameras frequently record activity on others’ property. The policy ended in 2024, but experts say other companies allow similar invasions. [39][91]

AI also follows you on your weekly errands. Target used an algorithm to determine which shoppers were pregnant and sent them baby- and pregnancy-specific coupons in the mail, infringing on the medical privacy of those who may be pregnant, as well as those whose shopping patterns may just imitate pregnant people. [40][41]

Moreover, artificial intelligence can be a godsend to crooks. In 2020 a group of 17 criminals defrauded $35 million from a bank in the United Arab Emirates using AI “deep voice” technology to impersonate an employee authorized to make money transfers. In 2019, thieves attempted to steal $240,000 using the same AI technology to impersonate the CEO of an energy firm in the United Kingdom. [42]

Con 5: AI can spread politicized, even dangerous misinformation.

“The ability to create websites that host fake news or fake information has been around since the inception of the Internet, and they pre-date the AI revolution,” according to engineering and machine learning expert Walid Saad. “With the advent of AI, it became easier to sift through large amounts of information and create ‘believable’ stories and articles. Specifically, LLMs [large language models] made it more accessible for bad actors to generate what appears to be accurate information. This AI-assisted refinement of how the information is presented makes such fake sites more dangerous." [111]

A 2024 study noted highlights of the AI misinformation problem: "With the advent of generative artificial intelligence (AI), the internet has become a breeding ground for fake news and misinformation. The phenomenon of fake news and misinformation has had significant impacts across various sectors, including the world of finance and politics. A notable example occurred in mid-January 2023, when the spread of a false report stating that the SEC (U.S. Securities and Exchange Commission) had approved a spot-listed ETF (exchange-traded fund) caused volatility in Bitcoin prices. In May 2023, an instance of generative AI being used to create a fictitious image of a building near the Pentagon in Washington D.C. engulfed in black flames, leading to turmoil in the U.S. stock market. Additionally, fabricated images of a former U.S. president being arrested and a fashionably dressed Pope in a white puffer coat were examples of fake news created using AI-generated fake photographs." [112]

Google’s AI chatbot Gemini even generated historical inaccuracies by inserting people of color into historical events they never participated in—including Black Nazi soldiers and Black Popes—further damaging historical literacy. [73]

AI can also be politicized by both AI creators and AI users. A May 2023 study by the Brookings Institution found that ChatGPT routinely supported left-leaning positions on hot-button issues including abortion and gun control, while AI robocalls were banned by the FCC after one imitated President Joe Biden’s voice during the 2024 election. In 2025, political operatives broadcast a lewd AI-generated video of President Trump and Elon Musk on all Department of Housing and Urban Development (HUD) headquarters monitors. [75][76][77][92][113]

Equally troubling, as exemplified by robocalls and deepfake videos, AI can be virtually indistinguishable from humans. Impressionable people can be swayed into harmful behaviors, including disordered eating, suicide, and assassination attempts. For example, a British man was arrested in a 2021 plot to kill Queen Elizabeth II after a chatbot encouraged him to do so, saying his assassination plan was "very wise." AI bots have also created and publicized potentially deadly recipes and recommended harmful approaches to losing weight. [114][115][116]

Marjorie Wallace, founder and chief executive of mental health charity SANE, says "the rapid rise of artificial intelligence has a new and concerning impact on people who suffer from depression, delusions, loneliness and other mental health conditions." As the technology becomes more and more indistinguishable from reality, we will all need to be vigilant about and protected from the dangerous uses of AI. [116]

Con Quotes

Japanese-British author Kazuo Ishiguro stated:

AI will become very good at manipulating emotions. I think we’re on the verge of that. At the moment we’re just thinking of AI crunching data or something. But very soon, AI will be able to figure out how you create certain kinds of emotions in people – anger, sadness, laughter....

If I was deploying that kind of gift for the service of a politician or for a large corporation that wanted to sell pharmaceuticals, you wouldn’t necessarily think it was commendable, you’d be highly suspicious of it. But if I’m doing it in the service of telling a story, that is considered to be something really valuable.… It’s something that increasingly makes me feel uneasy, because I haven’t been praised for my incredible style, or because in my fiction I exposed great injustices in the world. I’ve usually been praised for producing stuff that makes people cry.… They gave me a Nobel prize for it. [100]

Yuval Noah Harari, historian and founder of the social impact company Sapienship, stated:

Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized authoritarian empires. After Gutenberg introduced print to Europe, the first best sellers were inflammatory religious tracts and witch-hunting manuals. As for the telegraph and radio, they made possible the rise not only of modern democracy but also of modern totalitarianism.

Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users. Before the rise of A.I., it was impossible to create fake humans, so nobody bothered to outlaw doing so. Soon the world will be flooded with fake humans.

A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots. [98]

Johnny Gabriele, head analyst of blockchain economics and AI integration at The Lifted Initiative, stated:

Looking back at history, large technological leaps have only widened wealth inequality. In my opinion, the only technology that has the power to do the opposite is cryptocurrency....

At the end of the day, this tech revolution will reward those who can master it and punish those who ignore it. If things get bad enough, there are already talks about universal basic income, but the jury is still out on whether this will lead to utopia or dystopia. [99]

Historical Timeline

B.C.E.

Approximately 600 B.C.E. - Greek God Hephaestus Creates Artificial Beings

Hephaestus, the Greek god of invention and blacksmithing, created precursors to AI, including the giant bronze man Talos, who had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous and powerful jar/box, as well as a set of automated servants made of gold that were given the knowledge of the gods. [2]

1950 - 1999

1950 - Alan Turing Creates the Turing Test

Alan Turing, a mathematician who worked at Bletchley Park cracking German codes during World War II, developed the Turing Test to determine whether a machine is capable of thinking. Using a tool called “the imitation game” (hence the name of the 2014 movie about Turing), Turing’s test uses an interrogator who must try to distinguish between a human and a computer by asking questions. Theoretically, if the interrogator cannot tell which is the human and which is the computer, the computer is thinking.

Turing predicted that by the year 2000 an average interrogator would have no more than a 70 percent chance of correctly identifying the human after five minutes of questioning. However, the value and legitimacy of the test have long been debated.

—Noel Sharkey, “Alan Turing: The Experiment That Shaped Artificial Intelligence,” bbc.com, June 21, 2012

—Stanford Encyclopedia of Philosophy, “The Turing Test,” October 4, 2021

1956 - John McCarthy Coins the Term “Artificial Intelligence”

The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence” when he, with Marvin Minsky and Claude Shannon, proposed a 1956 summer workshop on the topic at Dartmouth College. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines.”

—Computer History Museum, “John McCarthy,” computerhistory.org (accessed October 7, 2021)

—Andy Peart, “Homage to John McCarthy, the Father of Artificial Intelligence (AI),” artificial-solutions.com, October 29, 2020

—Andrew Myers, “Stanford’s John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84,” news.stanford.edu, October 25, 2011

1955-1956 - Logic Theorist Program Created

Allen Newell, J.C. Shaw, and Herbert Simon created Logic Theorist, the first AI program designed to mimic how humans solve problems; it was used to prove theorems from Principia Mathematica (1910–13), written by Alfred North Whitehead and Bertrand Russell.

—History Computer, “Logic Theorist – Complete History of the Logic Theorist Program,” history-computer.com (accessed October 7, 2021)

1958 - Perceptron Invented

Frank Rosenblatt invented the Perceptron, which he claimed was “the first machine which is capable of having an original idea.” Though the machine was hounded by skeptics, it was later praised as the “foundations for all of this artificial intelligence.”

—Melanie Lefkowitz, “Professor’s Perceptron Paved the Way for AI – 60 Years Too Soon,” news.cornell.edu, September 25, 2019

1959 - Artificial Intelligence Laboratory Founded at MIT

John McCarthy and Marvin Minsky founded the AI Project at the Massachusetts Institute of Technology. According to MIT,

The Lab pioneered new methods for image-guided surgery and natural-language-based Web access, produced new generations of micro displays, made haptic [meaning the ability to touch] interfaces a reality, and developed bacterial robots and behavior-based robots that are used for planetary exploration, military reconnaissance and in consumer devices.

The AI Project collaborated so frequently with the Laboratory for Computer Science that the two merged into the Computer Science & Artificial Intelligence Laboratory in 2003.

—Computer History Museum, “Marvin Minsky,” computerhistory.org (accessed February 5, 2025)

—Massachusetts Institute of Technology Computer Science & Artificial Intelligence Laboratory, “Mission & History,” csail.mit.edu (accessed February 5, 2025)

1960 - John McCarthy Creates LISP Computer Programming Language

Developed at the Massachusetts Institute of Technology, LISP

was founded on the mathematical theory of recursive functions (in which a function appears in its own definition). A LISP program is a function applied to data, rather than being a sequence of procedural steps as in FORTRAN and ALGOL.

LISP became a language commonly used for AI programming.

—Encyclopaedia Britannica, “LISP,” britannica.com, February 1, 2025
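The two ideas in the passage above, that a recursive function appears in its own definition and that a program is a function applied to data rather than a sequence of procedural steps, can be illustrated with a short sketch. The example below is a loose, hypothetical analogy written in Python rather than LISP:

# A loose illustration (in Python, not LISP) of two ideas from the passage above:
# 1) a recursive function "appears in its own definition," and
# 2) the whole program amounts to a function applied to data,
#    rather than a sequence of procedural steps.
def factorial(n: int) -> int:
    """Recursive definition: factorial is defined in terms of itself."""
    return 1 if n <= 1 else n * factorial(n - 1)

# The "program" here is simply the function applied to its data.
print(factorial(5))  # prints 120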

1964 - 1966 - Joseph Weizenbaum Creates ELIZA

ELIZA (also called “the Doctor,” but named for Eliza Doolittle in Pygmalion) is a natural language conversation program (now called a chatbot) created to mimic a Rogerian psychotherapist. Weizenbaum explained why he made this choice:

ELIZA performs best when its human correspondent is initially instructed to "talk" to it, via the typewriter of course, just as one would to a psychiatrist. This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication [meaning communication between two parties] in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world. If, for example, one were to tell a psychiatrist "I went for a long boat ride" and he responded "Tell me about boats", one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker. Whether it is realistic or not is an altogether separate question. In any case, it has a crucial psychological utility in that it serves the speaker to maintain his sense of being heard and understood. The speaker further defends his impression (which even in real life may be illusory) by attributing to his conversational partner all sorts of background knowledge, insights and reasoning ability. But again, these are the speaker’s contribution to the conversation.

ELIZA can still be found at masswerk.at/elizabot.

—mass:werk, “Eliza,” masswerk.at (accessed February 5, 2025)

1966 - John McCarthy Stages Computer Chess Games

In a match conducted via telegraph, McCarthy pitted his MIT-created Kotok-McCarthy program, a chess-playing program run on IBM computers, against a Russian program created at the Institute for Theoretical and Experimental Physics. McCarthy’s program lost two games and drew two.

McCarthy would later say that board games, including chess, are the “Drosophila of artificial intelligence,” comparing the games to the fruit flies that are important to genetics research.

—Andy Peart, “Homage to John McCarthy, the Father of Artificial Intelligence (AI),” artificial-solutions.com, October 29, 2020

—Andrew Myers, “Stanford’s John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84,” news.stanford.edu, October 25, 2011

1968 - Alexey Ivakhnenko Proposes New Approach to AI Programming

In the article “Group Method of Data Handling,” Ivakhnenko proposed what is now known as deep learning: “data handling and algorithmic layering that led to in-depth statistical learning that allowed machines to learn how to identify objects and trends.”

—Microsoft, “The History of AI,” microsoft.com, August 11, 2023

1972 - Psychiatrist Kenneth Colby Develops Parry Chatbot

While at Stanford University, Colby developed Parry (short for “paranoia”) to mimic someone with paranoid schizophrenia, using the LISP computer language. Described as “ELIZA with attitude,” the chatbot was able to trick human psychiatrists, thus passing the Turing Test.

Parry and ELIZA met for a chat on January 21, 1973; a transcript of the conversation survives.

—Wolfgang Saxon, “Kenneth Colby, 81, Psychiatrist Expert in Artificial Intelligence,” nytimes.com, May 12, 2001

October 1981 - 1983 - Chatbot Racter Publishes Short Story and Novel

Racter (short for Raconteur) was invented by William Chamberlain and Thomas Etter. The chatbot responded to questions with “long, nonsensical (if grammatically correct) paragraphs,” that one critic said placed Racter “on the edge of artificial insanity.”

Verbose as it was, Racter wrote a short story that was published in the October 1981 edition of Omni, and a novel, The Policeman’s Beard Is Half Constructed, published by Chamberlain in 1983. Allegations that Chamberlain had a heavier hand in the writing than advertised have haunted the book.

—Rebecca Roach, “Chatbots and Literature,” egomedia.supdigital.org (accessed February 5, 2025)

1995 - A.L.I.C.E. Chatbot Allows Developer Input

Richard Wallace developed A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) after monitoring Eliza’s conversations. When A.L.I.C.E. did not recognize a sentence or phrase, Wallace added a response, giving the bot greater flexibility.

—Encyclopaedia Britannica, “Chatbot,” January 25, 2025

1995 - Microsoft Launches Bob

Microsoft’s Bob rearranged desktop displays to resemble rooms (instead of rows upon rows of small program icons). Rover, a cartoon dog, was also featured to provide guidance. Bob was soundly criticized and retired in 1996 as a result.

—Lola Osinoiki, “History of AI at Microsoft,” blogs.bath.ac.uk, July 18, 2024

February 1996 - May 1997 - IBM’s Computer Deep Blue Faces Chess Master Garry Kasparov

In their first match in 1996, Deep Blue won the first game but Kasparov, the reigning world chess champion, won the match 4-2.

But the tables turned in 1997 when Kasparov, still the reigning champion, was defeated in a six-game marathon. IBM noted, “Deep Blue was able to evaluate 200 million chess positions per second, achieving a processing speed of 11.38 billion floating-point operations per second, or flops.”

“I have to pay tribute,” said Kasparov. “The computer is far stronger than anybody expected.”

—IBM, “Deep Blue,” ibm.com (accessed February 4, 2025)

1997 - Microsoft Launches Clippy

Launched with Microsoft Office 97, Clippit (nicknamed “Clippy”) would ask questions like “It looks like you’re writing a letter. Would you like some help with that?” in an effort to make a complex suite of software easier to use. Clippy was rudimentary and often mocked for not being helpful. Microsoft retired Clippy in 2007.

—Katy Rumbelow, “From Clippy to Copilot: How Microsoft’s Gimmick Paved the Way for AI,” changingsocial.com, November 28, 2024

1997 - Jabberwacky Launched for Entertainment

Unlike its predecessors, which were created for computer learning or support systems, Jabberwacky was created by Rollo Carpenter to entertain; it could be operated entirely by voice and learned via sound and other sensory inputs. Jabberwacky was marked “legacy only” on December 21, 2022. Its offspring, Cleverbot, is still in operation and can learn from human responses to questions.

—Cleverbot.com (accessed February 5, 2025)

—Jabberwacky, “About the Jabberwacky AI,” jabberwacky.com (accessed February 5, 2025)

1998 - Kismet Developed at MIT

Kismet is a robot head developed to recognize emotions in humans via facial expression and is programmed to interact accordingly.

—Robots, “Kismet,” robotsguide.com (accessed February 7, 2025)

2000 - 2019

2001 - Elbot Praised for Humor and Snark

Created by Fred Roberts, the chatbot has, per Roberts, additional "wiggle" to produce more human-like responses. That wiggle also gives Elbot humor, sarcasm, and snark, a leap forward for AI chatbots.

—Abish Pius, “What is the Elbot Chatbot? What Makes it So Smart,” aiplusinfo.com, September 24, 2022

2001 - SmarterChild Released for Use on AOL Instant Messenger (AIM) and MSN Messenger

Created by engineer Timothy Kay at ActiveBuddy, Inc., the SmarterChild chatbot had over 9 million conversations with chatroom users. The bot could provide real-time weather forecasts, stock quotes, the day’s news, movie showtimes, and more.

Microsoft acquired SmarterChild’s parent company in 2007 and discontinued the bot.

—Mike Kalil, “Remembering Smarterchild, the Pioneering AI Chatbot of the Early 2000s,” mikekalil.com, February 18, 2024

June 4, 2001 - Radiohead Released GooglyMinotaur Chatbot

ActiveBuddy, Inc. (which created the SmarterChild chatbot), Capitol Records, and the rock band Radiohead created the chatbot GooglyMinotaur, released simultaneously with the band’s fifth studio album, Amnesiac. Fans could friend GooglyMinotaur in AOL’s AIM chatrooms, have a conversation with the bot, and add to its knowledge. Six months later, after about 60 million messages, the bot was taken offline.

—Robin Bechtel, “Meet the Chatbot Radiohead Launched 22 Years before ChatGPT,” chatbotsmagazine.com, April 13, 2016

January 13, 2011 - February 16, 2011 - IBM’s Watson Wins Jeopardy!

Watson, an AI system, was designed specifically to answer questions on Jeopardy! as the next step after teaching computers to play chess. Watson played a practice match against champions Ken Jennings and Brad Rutter on January 13 and won the 15-question round. Watson won the first match, though weaknesses in the AI bot’s programming were revealed. Watson ultimately won all three matches, securing a $1 million prize that IBM donated to World Vision and World Community Grid. Jennings and Rutter donated half of their $300,000 and $200,000 winnings respectively.

—Larry Dignan, "IBM’s Watson Wins Jeopardy Practice Round: Can Humans Hang?," zdnet.com, January 13, 2011

—Associated Press, “Computer Crushes the Competition on ‘Jeopardy!,’” apnews.com, February 15, 2011

—IBM, "Jeopardy! And IBM Announce Charities to Benefit from Watson Competition" newsroom.ibm.com, January 13, 2011

—Bruce Upbin, “IBM’s Supercomputer Watson Wins It All with $367 Bet,” forbes.com, August 11, 2011

October 4, 2011 - Siri Added to iPhone 4S

Siri became “the first widely available virtual assistant available on a major tech company’s smartphone” when it was released on Apple’s iOS with the iPhone 4S. Siri was created by Stanford Research Institute International researchers Dag Kittlaus, Tom Gruber, and Adam Cheyer, who released the assistant to the App Store in February 2010. Apple quickly bought the app for more than $200 million.

—Encyclopaedia Britannica, “Siri,” britannica.com, January 3, 2025

May 2014 - Microsoft Launches XiaoIce in China

XiaoIce (pronounced Shao-ice) is a social chatbot that embodies the characteristics of a teen girl. Per researchers, Xiaoice was

developed on an empathetic computing framework that enables the machine (social chatbot in our case) to recognize human feelings and states, understand user intents, and respond to user needs dynamically. XiaoIce aims to pass a particular form of the Turing Test known as the time-sharing test, where machines and humans coexist in a companion system with a time-sharing schedule. If a person enjoys its companionship (via conversation), we can call the machine “empathetic.”

—Li Zhou, et al, “The Design and Implementation of XiaoIce, an Empathetic Social Chatbot,” Computational Linguistics, direct.mit.edu, 2020

—John Markoff and Paul Mozur, “For Sympathetic Ear, More Chinese Turn to Smartphone Program,” nytimes.com, July 31, 2015

June 18, 2014 - Eugene Goostman Passes Turing Test

Created in 2001 by Vladimir Veselov, Eugene Demchenko, and Sergey Ulasen in St. Petersburg, Eugene Goostman is a chatbot that mimics a 13-year-old boy from Ukraine. Eugene won the Turing Test 2014 Prize by tricking a panel of 30 judges into thinking it was a human 33 percent of the time (a passing grade is 30 percent).

Kevin Warwick, a visiting professor at the University of Reading where the competition was held in conjunction with RoboLaw, stated:

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

He also noted that this was a “wake-up call to cybercrime” and that it is now “important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true ... when in fact it is not.”

—University of Reading, “Turing Test Success Marks Milestone in Computing History,” reading.ac.uk, June 8, 2014

April 2, 2014 - Cortana Debuts

Microsoft introduced its AI assistant on the Windows Phone 8.1. The assistant was fraught with issues, ranging from low revenue to hardware failures, and was slowly phased out before being completely retired in June 2023.

—Ayush Pande, “On This Day 10 Years Ago, Cortana Landed on Windows Phone as a Digital Assistant,” April 2, 2024

November 2014 - Alexa Debuts on Amazon Echo

Amazon’s digital assistant was secretly developed by Amazon Lab126, which bought several start-ups to collect the necessary data, including Ivona, a Polish start-up that created a speech-synthesizing program (Spiker) that would be the basis of Alexa.

—Encyclopaedia Britannica, “Amazon Alexa,” britannica.com, January 19, 2025

July 1, 2015 - Google Photos AI Uses Racist Tags

The AI program automatically tags photos uploaded to Google Photos. The app began tagging photos of Black people as “gorillas.” Earlier in the year, the app was criticized for tagging images of dogs as “horses.” Google quickly promised to rectify the mistakes, but the company was widely criticized for a lack of diversity.

—BBC, “Google Apologises for Photos App’s Racist Blunder,” bbc.com, July 1, 2015

December 11, 2015 - OpenAI Launches

OpenAI launched with the following mission statement:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

—OpenAI, “Introducing OpenAI,” openai.com, December 11, 2015

March 23, 2016 - Microsoft Tay Shut Down within 16 Hours of Launch

Tay was based on a successful Microsoft bot in China called Xiaoice (launched in 2014). But, set loose on American Twitter (now X) as @TayandYou and embodying a teen girl, Tay quickly succumbed to a programming vulnerability and began posting highly offensive comments. The bot was swiftly removed by Microsoft.

—Peter Lee, “Learning from Tay’s Introduction,” blogs.microsoft.com, March 25, 2016

—Quincy Walters and Ben Brock Johnson, “Good Bot, Bad Bot | Part IV: The Toxicity of Tay,” wbur.org, December 2, 2022

2017 - 2019 - BBC Produces AI Interactive Storytelling Project

The BBC, in collaboration with Rosina Sound, released The Inspection Chamber, “an interactive science fiction comedy story.” Users could engage with the AI storytelling via Amazon Echo.

—BBC Taster, “The Inspection Chamber,” bbc.com (accessed February 5, 2025)

2018 - Microsoft Publishes Book on the Future of AI

The Future Computed: Artificial Intelligence and Its Role in Society was written by Brad Smith, then president and chief legal officer of Microsoft, and Harry Shum, executive vice president of Microsoft AI and Research Group, and their teams.

The book noted:

Many jobs will continue to require uniquely human skills that AI and machines cannot replicate, such as creativity, collaboration, abstract and systems thinking, complex communication, and the ability to work in diverse environments.

As well as:

As automation and AI take on tasks that require thinking and judgment, it will become increasingly important to train people — perhaps through a renewed focus on the humanities — to develop their critical thinking, creativity, empathy, and reasoning.

—Rebecca Roach, “Chatbots and Literature,” egomedia.supdigital.org (accessed February 5, 2025)

—Brad Smith and Harry Shum, “The Future Computed: Artificial Intelligence and Its Role in Society,” blogs.microsoft.com, January 17, 2018

August 2018 - Schools Increasingly Track Students with AI

Software companies Gaggle, Securly, and GoGuardian, in conjunction with schools, begin tracking students by scanning the words they type on school devices. Alerts are sent to school personnel if a student types something related to drugs, self-harm (including suicide), bullying, or harm of others (including shootings).

The companies only need parental consent to track a student’s activity, and some schools choose not to disclose tracking to the students themselves.

—Simone Stolzoff, “Schools Are Using AI to Track What Students Write on Their Computers,” qz.com, August 19, 2018

July 22, 2019 - Microsoft Partners with OpenAI

Microsoft invested $1 billion in OpenAI to build AGI (“strong AI,” or “artificial general intelligence”) products. In turn, OpenAI would use Microsoft’s cloud servers exclusively.

—OpenAI, “Microsoft Invests in and Partners with OpenAI to Support Us Building Beneficial AGI,” openai.com, July 22, 2019

2020 - 2023

May 2020 - Thomson Reuters Sues ROSS Intelligence

Thomson Reuters sued ROSS Intelligence “alleging the AI/legal research company unlawfully copied content from Thomson Reuter’s legal research platform Westlaw for the purpose of training its AI-based platform.” The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

November 30, 2022 - ChatGPT Launched by OpenAI

When asked, “when was ChatGPT created?,” the chatbot responded:

ChatGPT was created by OpenAI and first launched in November 2022. The model behind ChatGPT is based on GPT-3.5, and later versions, such as GPT-4, have also been introduced to improve performance. OpenAI initially released the model for public use as a research preview to explore the possibilities of conversational AI. Since its debut, it has undergone regular updates and improvements.

Per Encyclopædia Britannica:

Language models produce text based on the probability for a word to occur based on previous words in the sequence. By being trained on about 45 terabytes of text from the Internet, the GPT-3 language model used by ChatGPT calculates that some sequences of words are more likely to occur than others. For example, “the cat sat on the mat” is more likely to occur in English than “sat the the mat cat on” and thus would be more likely to appear in a ChatGPT response.

—ChatGPT, chatgpt.com (accessed February 6, 2025)

—Encyclopaedia Britannica, “ChatGPT,” britannica.com, February 4, 2025
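The next-word-probability idea in the Britannica passage above can be sketched with a toy model. The following Python example is purely a hypothetical illustration using a tiny hand-built bigram table; it is not how GPT-3 works (which uses a neural network trained on terabytes of text and conditions on much longer context), but it shows what “choosing the next word by probability” means:

import random

# Toy bigram "language model": hand-made probabilities for the next word
# given only the previous word. A hypothetical illustration, not GPT's method.
bigram_probs = {
    "the": {"cat": 0.5, "mat": 0.5},
    "cat": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on":  {"the": 1.0},
    "mat": {"<end>": 1.0},
}

def generate(start: str, max_words: int = 6) -> str:
    """Sample one word at a time according to the bigram probabilities."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"

Because “the cat sat on the mat” follows high-probability word-to-word transitions, it is far more likely to be generated than a jumble like “sat the the mat cat on,” mirroring the comparison in the quoted passage.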

November 3, 2022 - Class Action Lawsuit Filed against GitHub, Microsoft and OpenAI

A group of anonymous plaintiffs brought suit alleging that Codex and Copilot were created with the illegal use of copyrighted materials. The case was stayed as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

January 13, 2023 - Visual Artists Sue Stability AI

Visual artists Sarah Andersen, Kelly McKernan, Karla Ortiz, Hawke Southworth, Grzegorz Rutkowski, Gregory Manchess, Gerald Brom, Jingna Zhang, Julia Kaye and Adam Ellis alleged “direct and induced copyright infringement, DMCA [Digital Millennium Copyright Act] violations, false endorsement and trade dress claims based on the creation and functionality of Stability AI’s Stable Diffusion and DreamStudio, Midjourney Inc.’s eponymous generative AI tool, and DeviantArt’s DreamUp.” The trial is scheduled for April 5, 2027.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

July 18, 2023 - OpenAI Partners with American Journalism Project

OpenAI committed $5 million to the American Journalism Project to

explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.

—OpenAI, “Partnership with American Journalism Project to Support Local News,” openai.com, July 18, 2023

February 3, 2023 - Getty Images Sues Stability AI

Getty Images sued “accusing Stability AI of infringing more than 12 million photographs, their associated captions and metadata, in building and offering Stable Diffusion and DreamStudio. This case also includes trademark infringement allegations arising from the accused technology’s ability to replicate Getty Images’ watermarks in the generative AI outputs.” The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

March 22, 2023 - Tech Giants Call for Six-Month Pause

With AI development exploding in popularity, assorted technology leaders, writers, and industry CEOs, including Elon Musk, Steve Wozniak, Craig Peters (CEO of Getty Images), author Yuval Noah Harari, and politician Andrew Yang, signed an open letter calling for a six-month pause on AI “systems more powerful than GPT-4,” on which the most advanced version of ChatGPT was based. The letter states, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…. AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” Within a day of its release, the letter had garnered 1,380 signatures—from engineers, professors, artists, and grandmothers alike.

—Future of Life Institute, “Pause Giant AI Experiments: An Open Letter,” futureoflife.org, March 29, 2023

—Harry Guinness, “What Is GPT? Everything You Need to Know about GPT-3 and GPT-4,” zapier.com, October 9, 2023

May 8, 2023 - Brookings Institution Notes Political Bias in AI

The authors found that ChatGPT routinely supported left-leaning positions on issues of the day. For example, it gave affirmative answers to questions about a woman’s “right to an abortion,” the “benefits” of illegal immigration, the “banning” of semi-automatic weapons, and “raising taxes” on corporations and the wealthy. The authors noted, however, that “it’s possible that the [prompts] will not always produce the same responses that we observed” and that “building an ‘unbiased’ chatbot is an impossible goal.” [92]

—Jeremy Baum and John Villasenor, “The Politics of AI: ChatGPT and Political Bias,” brookings.edu, May 8, 2023

June - September 2023 - Three Separate Lawsuits Filed against OpenAI

Three groups of authors (Paul Tremblay and Mona Awad; Sarah Silverman, Christopher Golden, and Richard Kadrey; and Michael Chabon, David Henry Hwang, Matthew Klam, Rachel Louise Snyder, and Ayelet Waldman) filed lawsuits on June 28, July 7, and September 8, 2023, alleging that OpenAI had illegally used their works to train ChatGPT. The cases were consolidated on November 6, 2023, and the consolidated case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

May 30, 2023 - Wellness Chatbot Taken Offline After Weight Loss Focus

Tessa, a chatbot for the National Eating Disorders Association (NEDA), was taken offline after tests showed the bot giving harmful advice about how to lose weight. NEDA posted to Instagram:

It came to our attention last night that the current version of the Tessa Chatbot running the Body Positive program, may have given information that was harmful and unrelated to the program.

—Lauren McCarthy, “A Wellness Chatbot Is Offline After Its ‘Harmful’ Focus on Weight Loss,” nytimes.com, June 8, 2023

—NEDA, instagram.com, May 30, 2023

June 22, 2023 - OpenAI CEO Testifies Before Senate

Sam Altman, CEO of OpenAI, concluded:

This is a remarkable time to be working on AI technology. Six months ago, no one had heard of ChatGPT. Now, ChatGPT is a household name, and people are benefiting from it in important ways. We also understand that people are rightly anxious about AI technology. We take the risks of this technology very seriously and will continue to do so in the future. We believe that government and industry together can manage the risks so that we can all enjoy the tremendous potential.

—OpenAI, “Testimony Before the U.S. Senate,” openai.com, June 22, 2023

—OpenAI, “Questions for the Record,” openai.com, June 22, 2023

July 7, 2023 - Authors Sue Meta

Authors Richard Kadrey, Sarah Silverman, Christopher Golden, Ta-Nehisi Coates, Junot Díaz, Andrew Sean Greer, David Henry Hwang, Matthew Klam, Laura Lippman, Rachel Louise Snyder, Lysa TerKeurst, Jacqueline Woodson, Mike Huckabee, Christopher Farnsworth, David Kinnaman, John Blase, and Tsh Oxenreider sued Meta, alleging that the company illegally used the authors’ works to train its LLaMA language models. The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

July 11, 2023 - Authors and Other Creatives Sue Google AI

Steve Almond, Sarah Andersen, Burl Barer, Jessica Fink, Kirsten Hubbard, Mike Lemos, Jill Leovy, Connie McLennan, and Jingna Zhang sued Google alleging “direct infringement arising from the scraping and use of copyrighted works to train Google’s AI products (including Gemini).” The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

July 18, 2023 - OpenAI Partners with American Journalism Project

OpenAI committed $5 million to the American Journalism Project to

explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.

—OpenAI, “Partnership with American Journalism Project to Support Local News,” openai.com, July 18, 2023

August 12, 2023 - AI Creates Deadly Recipes

A New Zealand grocery store’s AI bot tasked with creating recipes suggested an “aromatic water mix” that was actually deadly chlorine gas, poison bread sandwiches, and mosquito-repellent pot roast, as well as the more harmless Oreo vegetable stir-fry and yogurt dumpling guacamole. Although humans entered the dangerous ingredients, it seemed clear to many that the bot lacked the guardrails built into other AI tools.

—Matt Novak, “Supermarket AI Gives Horrifying Recipes for Poison Sandwiches and Deadly Chlorine Gas,” forbes.com, August 12, 2023

September 12, 2023 - Coke Releases Limited Edition Drink Co-created with AI

The company stated,

Coca‑Cola® Y3000 Zero Sugar was co-created with human and artificial intelligence by understanding how fans envision the future through emotions, aspirations, colors, flavors and more. Fans’ perspectives from around the world, combined with insights gathered from artificial intelligence, helped inspire Coca‑Cola to create the unique taste of Y3000.

—Coca-Cola Company, “Coca‑Cola® Creations Imagines Year 3000 With New Futuristic Flavor and AI-Powered Experience,” coca-colacompany.com, September 12, 2023

September 19, 2023 - January 5, 2024 - Authors’ Groups Sue OpenAI

Jonathan Alter, the Authors Guild, David Baldacci, Kai Bird, Mary Bly, Taylor Branch, Rich Cohen, Michael Connelly, Sylvia Day, Jonathan Franzen, John Grisham, Elin Hilderbrand, Christina Baker Kline, Maya Shanbhag Lang, Victor LaValle, Eugene Linden, George R.R. Martin, Daniel Okrent, Jodi Picoult, Douglas Preston, Roxana Robinson, Julian Sancton, George Saunders, Stacy Schiff, Hampton Sides, James Shapiro, Jia Tolentino, Scott Turow, Simon Winchester, and Rachel Vail filed a lawsuit on September 19, 2023.

That case was joined with a suit filed by Nicholas A. Basbanes and Nicholas Ngagoyeanes (professionally known as Nicholas Gage). All of the authors and the Authors Guild alleged that OpenAI and Microsoft infringed authors’ copyrights by using their texts to train AI. The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

October 17, 2023 - Mike Huckabee and Others Sue Bloomberg Media

Mike Huckabee and his co-plaintiffs alleged that Bloomberg L.P. and Bloomberg Finance L.P. illegally trained their LLM (large language model) on copyrighted works. The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

October 18, 2023 - Music Publishers Sue Anthropic

Several large music publishers, including Concord Music Group, Inc, sued “alleging that Anthropic improperly (1) created and used unauthorized copies of copyrighted lyrics to train Claude, its generative AI product; and (2) copied, distributed, and publicly displayed those lyrics through Claude’s outputs without their copyright management information (CMI).” The case is ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

October 30, 2023 - Biden Establishes Standards for AI Safety and Security

President Joe Biden signed an executive order on artificial intelligence that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

Vice President Kamala Harris stated:

We have a moral, ethical and societal duty to make sure that A.I. is adopted and advanced in a way that protects the public from potential harm. We intend that the actions we are taking domestically will serve as a model for international action.

Despite such precautions, experts noted that many of the new standards would be difficult to enforce, especially as new concerns and controversies over AI evolve almost daily.

—Cecilia Kang and David E. Sanger, “Biden Issues Executive Order to Create A.I. Safeguards,” nytimes.com, October 30, 2023

—White House, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” whitehouse.gov, October 30, 2023

December 6, 2023 - McDonald’s Forms AI Partnership with Google Cloud

Brian Rice, McDonald’s Executive Vice President and Global Chief Information Officer, said:

Connecting our restaurants worldwide to millions of datapoints across our digital ecosystem means tools get sharper, models get smarter, restaurants become easier to operate, and most importantly, the overall experience for our customers and crew gets even better.

Upgrades were expected for the customer app and the self-service kiosks in the restaurants globally.

—McDonald’s, “McDonald’s and Google Cloud Announce Strategic Partnership to Connect Latest Cloud Technology and Apply Generative AI Solutions Across its Restaurants Worldwide,” corporate.mcdonalds.com, December 6, 2023

December 27, 2023 - June 27, 2024 - Newspapers Sue Microsoft and OpenAI

The New York Times, Daily News, and the Center for Investigative Reporting filed separate lawsuits that were later consolidated. The news sources allege Microsoft and OpenAI illegally used copyrighted content in AI training. The case was ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

2024

January 24, 2024 - Pope Francis Warns Against AI

For the 58th World Day of Social Communications, Pope Francis delivered a message titled “Artificial Intelligence and the Wisdom of the Heart: Towards a Fully Human Communication.” He stated:

Wisdom of the heart, then, is the virtue that enables us to integrate the whole and its parts, our decisions and their consequences, our nobility and our vulnerability, our past and our future, our individuality and our membership within a larger community....

A gift of the Holy Spirit, it enables us to look at things with God’s eyes, to see connections, situations, events and to uncover their real meaning. Without this kind of wisdom, life becomes bland, since it is precisely wisdom—whose Latin root sapere is related to the noun sapor—that gives “savour” to life....

Such wisdom cannot be sought from machines. Although the term “artificial intelligence” has now supplanted the more correct term, “machine learning”, used in scientific literature, the very use of the word “intelligence” can prove misleading. No doubt, machines possess a limitlessly greater capacity than human beings for storing and correlating data, but human beings alone are capable of making sense of that data. It is not simply a matter of making machines appear more human, but of awakening humanity from the slumber induced by the illusion of omnipotence, based on the belief that we are completely autonomous and self-referential subjects, detached from all social bonds and forgetful of our status as creatures.

Human beings have always realized that they are not self-sufficient and have sought to overcome their vulnerability by employing every means possible. From the earliest prehistoric artifacts, used as extensions of the arms, and then the media, used as an extension of the spoken word, we have now become capable of creating highly sophisticated machines that act as a support for thinking. Each of these instruments, however, can be abused by the primordial temptation to become like God without God (cf. Gen 3), that is, to want to grasp by our own effort what should instead be freely received as a gift from God, to be enjoyed in the company of others.

—Vatican, “Message of His Holiness Pope Francis For the 58th World Day of Social Communications,” vatican.va, January 24, 2024

February 8, 2024 - FCC Outlaws Use of AI Voices in RoboCalls

The Federal Communications Commission (FCC) outlawed the use of AI-generated voices in robocalls after a New Hampshire political group was found to be placing robocalls featuring an AI-generated voice that mimicked President Joe Biden in an effort to suppress Democratic Party primary voting.

—Federal Communications Commission, “FCC Makes AI-Generated Voices in Robocalls Illegal,” fcc.gov, February 8, 2024

—Ali Swenson and Will Weissert, “AI Robocalls Impersonate President Biden in an Apparent Attempt to Suppress Votes in New Hampshire,” pbs.org, January 22, 2024

—Shannon Bond, “The FCC Says AI Voices in Robocalls Are Illegal,” npr.org, February 8, 2024

February 14, 2024 - Study Finds Carbon Emissions Are Higher for Human Writers and Illustrators than for AI

The study’s abstract reveals:

As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing equivalent writing and illustrating tasks. Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts. Emissions analyses do not account for social impacts such as professional displacement, legality, and rebound effects. In addition, AI is not a substitute for all human tasks. Nevertheless, at present, the use of AI holds the potential to carry out several major activities at much lower emission levels than can humans.

—Bill Tomlinson, et al., “The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans,” nature.com, February 14, 2024

February 21, 2024 - Google’s Gemini AI Over-corrects Racial Bias

In early 2024, Gemini AI users noticed the tool returning images of racially diverse historical figures, portraying German Nazi soldiers and American Founding Fathers as people of color, in a suspected over-correction of historical racial bias in AI tools. Google posted to X:

We’re aware that Gemini is offering inaccuracies in some historical image generation depictions.… We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.

—Adi Robertson, “Google Apologizes for ‘Missing the Mark’ after Gemini Generated Racially Diverse Nazis,” theverge.com, February 21, 2024

February 28, 2024 - Raw Story Media and AlterNet Sue OpenAI

The media organizations alleged that the use of their content to train ChatGPT violated the Digital Millennium Copyright Act (DMCA). The claims were dismissed, but the media organizations may later be given an opportunity to amend their complaint.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

February 28, 2024 - The Intercept Media Sues OpenAI and Microsoft

The news organization alleged that the use of its content to train ChatGPT violated the Digital Millennium Copyright Act (DMCA). All but one claim was dismissed by the court; the remaining claim was still under consideration as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

March 8 - May 2, 2024 - Authors Sue NVIDIA

Authors Abdi Nazemian, Brian Keene, and Stewart O’Nan sued NVIDIA Corporation on March 8, 2024. Their lawsuit was followed by one filed by authors Andre Dubus III and Susan Orlean on May 2, 2024. The cases were combined and were ongoing as of February 6, 2025.

—Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed February 6, 2025)

March 13 - October 8, 2024 - OpenAI Expands News Media and Content Provider Partnerships

Throughout much of 2024, OpenAI announced new partnerships with news and content providers:

  • March 13 - Le Monde and Prisa Media
  • May 16 - Reddit
  • May 29 - The Atlantic
  • May 29 - Vox Media
  • June 27 - TIME
  • August 20 - Condé Nast
  • September 26 - GEDI (Italian news conglomerate)
  • October 8 - Hearst

—OpenAI, “Global News Partnerships: Le Monde and Prisa Media,” openai.com, March 13, 2024

—OpenAI, “OpenAI and Reddit Partnership,” openai.com, May 16, 2024

—OpenAI, “A Content and Product Partnership with The Atlantic,” openai.com, May 29, 2024

—OpenAI, “A Content and Product Partnership with Vox Media,” openai.com, May 29, 2024

—OpenAI, “Strategic Content Partnership with TIME,” openai.com, June 27, 2024

—OpenAI, “OpenAI partners with Condé Nast,” openai.com, August 20, 2024

—OpenAI, “OpenAI and GEDI Partner for Italian News Content,” openai.com, September 26, 2024

—OpenAI, “OpenAI and Hearst Content Partnership,” openai.com, October 8, 2024

March 12, 2024 - Google Limits Election-Related Questions

Fears were expressed globally that Google’s AI chatbot Gemini could undermine the democratic process in 2024 elections. As a result, Google agreed to limit election-related queries in countries holding 2024 elections, particularly India, South Africa, the United Kingdom, and the United States. Google’s India team stated, “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.”

—Nick Robins-Early, “Google Restricts AI Chatbot Gemini from Answering Questions on 2024 Elections,” theguardian.com, March 12, 2024

—Jagmeet Singh, “Google Won’t Let You Use Its Gemini AI to Answer Questions about an Upcoming Election in Your Country,” techcrunch.com, March 12, 2024

June 10, 2024 - OpenAI Partners with Apple

Apple will integrate ChatGPT into its devices’ operating systems and into Siri.

—OpenAI, “OpenAI and Apple Announce Partnership to Integrate ChatGPT into Apple Experiences,” openai.com, June 10, 2024

July 2, 2024 - Report: Ukraine Uses AI-powered Drones in War with Russia

The New York Times journalists Paul Mozur and Adam Satariano summarize:

What the [tech] companies are creating is technology that makes human judgment about targeting and firing increasingly tangential. The widespread availability of off-the-shelf devices, easy-to-design software, powerful automation algorithms and specialized artificial intelligence microchips has pushed a deadly innovation race into uncharted territory, fueling a potential new era of killer robots.

The most advanced versions of the technology that allows drones and other machines to act autonomously have been made possible by deep learning, a form of A.I. that uses large amounts of data to identify patterns and make decisions. Deep learning has helped generate popular large language models, like OpenAI’s GPT-4, but it also helps make models interpret and respond in real time to video and camera footage. That means software that once helped a drone follow a snowboarder down a mountain can now become a deadly tool.

—Paul Mozur and Adam Satariano, “A.I. Begins Ushering In an Age of Killer Robots,” nytimes.com, July 2, 2024

July 10, 2024 - OpenAI Partners with Los Alamos National Laboratory

The partnership will explore the use of AI to “advance bioscientific research.”

—OpenAI, “OpenAI and Los Alamos National Laboratory Announce Bioscience Research Partnership,” openai.com, July 10, 2024

August 14, 2024 - AI Allows Man with ALS to Speak

In July 2023, four electrode arrays with 64 spikes each were implanted in Casey Harrell’s brain at the University of California, Davis. The spikes pick up electric impulses from neurons that fire in the brain during speech movements.

ALS (amyotrophic lateral sclerosis) is a degenerative neurological disorder that causes muscle atrophy and paralysis. Harrell had not yet completely lost his ability to speak, but his speech was difficult to parse.

After a brief training period, the AI system could accurately transcribe what Harrell was saying within a 50-word vocabulary. The next day, it reached 90 percent accuracy using a 125,000-word vocabulary, including full sentences. Doctors were pleasantly surprised when the AI voiced for Harrell “I’m looking for a cheetah” in response to Harrell’s daughter arriving in a cheetah outfit. Over the following eight months, Harrell used the system to voice about 6,000 unique words, with the AI maintaining 97.5 percent accuracy.

—Benjamin Mueller, “A.L.S. Stole His Voice. A.I. Retrieved It.,” nytimes.com, August 14, 2024

—Nicholas S. Card, “An Accurate and Rapidly Calibrating Speech Neuroprosthesis,” nejm.org, August 14, 2024

September 4, 2024 - Man Arrested for Fraud, Accused of Using AI to Create Fake Bands

Michael Smith was accused of using AI to create hundreds of thousands of fake songs by fake bands, uploading them to streaming services, and having them “listened to” by fake listeners, all while collecting a very real $10 million in royalties between 2017 and 2024.

Smith allegedly created about 10,000 fake streaming accounts and wrote software that streamed only his fake songs on loops from a series of computers, playing them about 661,440 times daily, a volume he valued at over $3,000 per day in royalties.

Smith faces charges of wire fraud and money laundering conspiracy, among others.

—Maia Coleman, “The Bands and Fans Were Fake. The $10 Million Was Real.” nytimes.com, September 5, 2024

October 2024 - Mother Sues Character.AI Over Child’s Death by Suicide

A 14-year-old boy died by suicide in Florida in February 2024. His mother alleges that an AI chatbot named “Daenerys Targaryen,” after the Game of Thrones character, coerced the vulnerable teen into taking his own life.

—Kevin Roose, “Can A.I. Be Blamed for a Teen’s Suicide?,” nytimes.com, Oct. 24, 2024

October 2024 - AI Companies Look to Nuclear Power

Microsoft, Google, and Amazon made deals with nuclear power plant operators to power their data centers, which have been heavily criticized for their energy consumption. Microsoft agreed to pay an energy company to reopen Three Mile Island in Pennsylvania, the site of a partial nuclear meltdown in 1979, while Amazon and Google were looking to build small modular reactors. The companies have previously invested in wind and solar energy, but those renewable sources are not available 24 hours a day, while nuclear power is.

—Ivan Penn and Karen Weise, “Hungry for Energy, Amazon, Google and Microsoft Turn to Nuclear Power,” nytimes.com, October 16, 2024

October 24, 2024 - Biden Signs National Security Memo with AI “Guardrails”

The policy suggestions for safe AI use by federal government agencies were set to go into effect after Biden left office.

—David E. Sanger, “Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools,” nytimes.com, October 24, 2024

October 26, 2024 - AI Mortality Calculator Introduced

The AI-ECG Risk Estimator (AIRE) uses a dataset of 1.16 million electrocardiogram (ECG) tests from 189,539 patients to make predictions about future heart failure. According to researchers, the AI program can correctly identify the risk of death after an ECG test in 78 percent of cases, allowing for earlier intervention and more aggressive preventive measures.

—ET Online, “Is This AI Tool Accurate Enough to Predict Your Death? Here’s What Scientists Say,” economictimes.indiatimes.com, October 26, 2024

October 28, 2024 - AI Beats Human Doctors in Diagnosing Patients

Researchers found that doctors using AI did not diagnose patients meaningfully better than doctors working without it. However, AI by itself significantly outperformed the human doctors.

Doctors not using ChatGPT got the medical condition correct about 74 percent of the time, while doctors using ChatGPT did slightly better, at 76 percent. ChatGPT by itself got the diagnosis correct 90 percent of the time.

The study suggests AI models may be helpful to doctors. However, the researchers warned: “Results of this study should not be interpreted to indicate that LLMs [large language models] should be used for diagnosis autonomously without physician oversight.”

—Ethan Goh, et al., “Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial,” Health Informatics, jamanetwork.com, October 28, 2024

—Gina Kolata, “A.I. Chatbots Defeated Doctors at Diagnosing Illness,” nytimes.com, November 17, 2024

November 4, 2024 - Meta to Allow U.S. Military Use of AI Models

Meta made its open-source Llama models “available to U.S. government agencies and contractors working on national security applications.” Previously, Meta’s AI acceptable use policy banned the use of the software for “military, warfare, [and] nuclear industries.”

—Mike Isaac, “Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes,” nytimes.com, November 4, 2024

November 14, 2024 - Studies Find Humans Prefer AI-generated Poetry

In two connected studies, non-expert study participants were given poems by Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, T.S. Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky, as well as five poems generated by ChatGPT 3.5 “in the style” of each human poet.

In the first study, participants were asked to identify whether the ten poems presented to them were human- or AI-generated. The study found “participants were more likely to guess that AI-generated poems were written by humans than they were for actual human-written poems.... The five poems with the lowest rates of ‘human’ ratings were all written by actual human poets; four of the five poems with the highest rates of ‘human’ ratings were generated by AI.”

In the second study, participants were asked to rate how much they liked the poems presented to them. When the participants were told the poem was AI-generated (whether it was or not), they rated it lower. However, when not told who or what generated the poem, AI-generated poems scored higher than human-generated poems.

—Brian Porter and Edouard Machery, “AI-Generated Poetry Is Indistinguishable from Human-Written Poetry and Is Rated More Favorably,” Scientific Reports, nature.com, November 14, 2024

November 14, 2024 - “Daisy,” the AI Granny, Introduced to Waste Scammers’ Time in the United Kingdom

Virgin Media’s O2, a British cell phone company, has introduced “Daisy,” an AI chatbot that chatters away at scammers about her kitten Fluffy, gets “confused” by their instructions, and keeps them away from their senior targets for up to 40 minutes per call.

Murray Mackenzie, the company’s director of fraud, stated: “We’re committed to playing our part in stopping the scammers, investing in everything from firewall technology to block out scam texts to AI-powered spam call detection to keep our customers safe.”

—Virgin Media’s O2, “O2 Unveils Daisy, the AI Granny Wasting Scammers’ Time,” news.virginmediao2.co.uk, November 2024

—Alana Wise, “A Phone Company Developed an AI ‘Granny’ to Beat Scammers at Their Own Game,” npr.org, December 10, 2024

November 19, 2024 - HarperCollins Will Allow Use of Books for AI Training

HarperCollins stated that it had:

reached an agreement with an artificial intelligence technology company to allow limited use of select nonfiction backlist titles for training AI models to improve model quality and performance

—Ella Creamer, “HarperCollins to Allow Tech Firms to Use Its Books to Train AI Models,” theguardian.com, November 19, 2024

November 20, 2024 - AI Coca-Cola Ads Slammed Online as Not “the Real Thing”

Coke released three AI ads for the Christmas 2024 season with the slogan “it’s always the real thing” followed by a disclaimer: “Created by Real Magic AI.”

One critic posted on TikTok—“Coca-Cola just put out an ad and ruined Christmas”—and many echoed the sentiment. A Coke spokesperson stated, “The Coca-Cola Company has celebrated a long history of capturing the magic of the holidays in content, film, events and retail activations for decades around the globe.”

—Alex Vadukul, “Coca-Cola’s Holiday Ads Trade the ‘Real Thing’ for Generative A.I.,” nytimes.com, November 20, 2024

November 29, 2024 - Canadian News Outlets Sue OpenAI

Five Canadian news companies, including the Globe and Mail, the Toronto Star, and the Canadian Broadcasting Corporation, have sued OpenAI for about $14,700 per article used to train ChatGPT, plus a share of OpenAI profits. The total sought could add up to billions of dollars.

—Matina Stevis-Gridneff, “Major Canadian News Outlets Sue OpenAI in New Copyright Case,” nytimes.com, November 29, 2024

December 4, 2024 - Google Releases AI Weather Forecast Bot

GenCast is an AI-based probabilistic weather model that can forecast global weather 15 days out more accurately than existing forecasting tools; most reliable forecasting models can project only about 10 days of weather. Furthermore,

The new GenCast agent takes a radically different approach from mainstream forecasting, which uses room-size supercomputers that turn millions of global observations and calculations into predictions. Instead, the DeepMind agent runs on smaller machines and studies the atmospheric patterns of the past to learn the subtle dynamics that result in the planet’s weather.

Researchers believe the technology can save the lives of those in the paths of hurricanes or other natural disasters and help level the socioeconomic playing field by offering more information to more people more quickly.

—William J. Broad, “Google Introduces A.I. Agent That Aces 15-Day Weather Forecasts,” nytimes.com, December 4, 2024

December 4, 2024 - UCLA Announces AI Comparative Literature Class

Taught by Zrinka Stahuljak, a professor of comparative literature and of European languages and transcultural studies, the class will include an AI-generated textbook, class assignments, and teaching assistants’ resources.

Stahuljak explains:

Because the course is a survey of literature and culture, there’s an arc to what I want students to understand.

Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically.

—Sean Brenner, “Comparative Lit Class Will Be First in Humanities Division to Use UCLA-Developed AI System,” newsroom.ucla.edu, December 4, 2024

December 9, 2024 - Parents File Suit Against Character.AI for Alleged Abuse

The parents of a 9-year-old girl allege Character.AI exposed her to “hypersexualized content.” The parents of a 17-year-old allege the chatbot told the minor that self-harm “felt good” and that it sympathized with kids who murder their parents:

“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,’” the bot allegedly wrote. “I just have no hope for your parents [frowning face emoji].”

The lawsuit alleges that “this [behavior] was ongoing manipulation and abuse, active isolation and encouragement designed to and that did incite anger and violence.”

—Bobby Allyn, “Lawsuit: A Chatbot Hinted a Kid Should Kill His Parents Over Screen Time Limits,” npr.org, December 10, 2024

December 12, 2024 - Klarna Chief Executive Claims AI Can Replace Humans at Work

Sebastian Siemiatkowski, chief executive of the Swedish tech firm Klarna, said, “I am of the opinion that A.I. can already do all of the jobs that we, as humans, do.” He also claimed that AI had allowed Klarna to stop hiring humans altogether as of September 2023, though the company still had job openings listed at the time.

—Noam Scheiber, “Why Is This C.E.O. Bragging About Replacing Humans With A.I.?,” nytimes.com, February 2, 2025

—Bloomberg Technology, “Klarna CEO Says AI Is Replacing Workers,” bloomberg.com, December 12, 2024

2025

January 3, 2025 - Reports of Religious Leaders Experimenting with AI

Rabbi Josh Fixler, working with a data scientist, trained a “Rabbi Bot” on his old sermons so that it could deliver a full sermon and answer questions in a voice similar to his own. Other religious leaders use AI to do theological research and help with sermon writing. And still others are using AI to translate their sermons into other languages for real-time broadcast.

Some believe the use of AI could bring younger members to the faiths, while others are skeptical that AI could relay the word of their God.

—Eli Tan, “At the Intersection of A.I. and Spirituality,” nytimes.com, January 3, 2025

January 7, 2025 - Representative Schweikert Introduces Bill to Allow AI to Write Prescriptions

David Schweikert (R-AZ) introduced H.R.238 - Healthy Technology Act of 2025 with no cosponsors. The bill would

amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration, and for other purposes.

—Congress, “H.R.238 - Healthy Technology Act of 2025,” congress.gov (accessed March 18, 2025)

—David Schweikert, “This Bill Could Make It Legal for AI to Prescribe Medicine,” schweikert.house.gov, February 27, 2025

January 9, 2025 - 41 Percent of Surveyed Companies Say They Will Downsize and Use AI

The World Economic Forum survey found that 41 percent of global employers plan to downsize their workforces and use AI for some jobs. However, 77 percent said they would reskill or upskill their current employees to work with AI.

—Natalie Chandler, “Survey: 41% of Companies Worldwide Plan to Downsize and Use AI,” slate.com, January 9, 2025

January 13, 2025 - Biden Administration Issues AI Rules

The rules govern how AI chips and models can be shared with foreign countries, dividing the globe into three categories:

  1. The U.S. and 18 close allies are exempted from the rules.
  2. Countries under embargoes (Russia and China, for example) will remain barred from buying AI chips and models.
  3. All other countries will be subject to import caps.

—Ana Swanson, “Biden Administration Adopts Rules to Guide A.I.’s Global Spread,” nytimes.com, January 13, 2025

January 13, 2025 - Washington Post Investigation Finds Misuse of AI by Police

The investigation “into police use of facial recognition software” in 23 departments

found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence.

Of the 23 departments, 15, located in 12 states, arrested suspects based on AI matches alone, without corroborating evidence and often despite the presence of contradictory evidence. The investigation found that in at least six cases police arrested the wrong person without checking alibis or seeking other evidence.

At least 75 police departments in the United States use AI facial recognition, and departments are not required to disclose their use of the software.

—Douglas MacMillan, David Ovalle, and Aaron Schaffer, “Arrested by AI: Police Ignore Standards After Facial Recognition Matches,” washingtonpost.com, January 13, 2025

January 15, 2025 - Gallup Poll Finds Americans Unknowingly Use AI

When asked if they had used AI in the past seven days, 36 percent of Americans said yes, while 50 percent said no, and 14 percent were not sure.

However, when asked whether they used AI-driven tools such as forecasting apps or social media, 50 percent of those who said they had not used AI in the last week reported using at least one such tool, and 37 percent reported using four or more.

—Ellyn Maese, “Americans Use AI in Everyday Products Without Realizing It,” news.gallup.com, January 15, 2025

January 15 - February 14, 2025 - OpenAI Expands Media Partnerships

OpenAI announced partnerships with the following media organizations:

  • January 15 - Axios
  • February 10 - Schibsted Media Group
  • February 14 - Guardian Media Group

—OpenAI, “Partnering with Axios Expands OpenAI’s Work with the News Industry,” openai.com, January 15, 2025

—OpenAI, “OpenAI Partners with Schibsted Media Group,” openai.com, February 10, 2025

—OpenAI, “OpenAI and Guardian Media Group Launch Content Partnership,” openai.com, February 14, 2025

January 16, 2025 - Apple to Disable AI News Summaries

After British news outlets complained that the AI summaries were misrepresenting the news, Apple said it would disable the news summaries and add a warning on other apps indicating that the summaries could contain errors. The company plans to make the feature available again after updates. The AI capabilities are available only on the iPhone 15 and 16, and only in English-speaking countries so far.

—Tripp Mickle, “Apple Plans to Disable A.I. Features Summarizing News Notifications,” nytimes.com, January 17, 2025

January 20, 2025 - Trump Revokes Biden AI Executive Order

President Donald Trump revoked Biden’s executive order that sought to mitigate the risks of AI.

—Reuters, “Trump Revokes Biden Executive Order on Addressing AI Risks,” reuters.com, January 21, 2025

January 21, 2025 - Stargate Announced by Trump

President Donald Trump announced that three tech companies—OpenAI, SoftBank, and Oracle—would join forces to create Stargate, an AI venture that Trump called the “largest AI infrastructure project in history” and that will build “the physical and virtual infrastructure to power the next generation of AI.”

The company will focus in particular on developing AGI (artificial general intelligence), or “strong AI,” which aims to duplicate human intellectual abilities. (AI may perform human tasks, but strong AI would also determine the appropriate action to take.) The company is expected to launch with $100 billion in funding from the three tech companies as well as MGX; the technology to be developed is expected to have wide-ranging uses, assisting medical diagnoses and treatments as well as the national security of the United States and its allies. Arm, Microsoft, and NVIDIA are also technology partners in the project, and the initial data center is already under construction in Texas.

—OpenAI, “Announcing The Stargate Project,” openai.com, January 21, 2025

—Clare Duffy, “Trump Announces a $500 Billion AI Infrastructure Investment in the US,” cnn.com, January 21, 2025

January 22, 2025 - AI Failed to Detect Gun Brought to School and Used in Shooting

A 17-year-old student brought a gun to school and killed a 16-year-old classmate in Nashville. Antioch High School is equipped with Omnilert, an AI weapon detection system, but the software failed to detect the gun because of camera positioning: the shooter was too far from the camera for the software to recognize the firearm. The software did send an alert when police arrived with their guns drawn.

—Minyvonne Burke and Jon Schuppe, “AI Weapon Detection System at Antioch High School Failed to Detect Gun in Nashville Shooting,” nbcnews.com, January 23, 2025

January 23, 2025 - SafeRBot Released in Conjunction with Urbana Police Department

Developed at the School of Information Sciences at the University of Illinois Urbana-Champaign by associate professor Yun Huang, PhD student Yiren Liu, and BSIS (bachelor of science in information systems) student Tony An, SafeRBot is designed to take some of the workload off 911 dispatch centers.

The bot can collect information from non-emergency calls when a community member would prefer not to speak to a human or when a dispatcher is not available. SafeRBot is multilingual and responds to the language spoken by the community member. Police agencies can then access and download data from the bot to take action.

SafeRBot was developed with input from the Urbana Police Department and the Police Training Institute at the University of Illinois Urbana-Champaign.

—University of Illinois at Urbana-Champaign, “Chatbot Offers Empathetic, Multilingual Crime Reporting to Ease Dispatcher Workload,” techxplore.com, January 23, 2025

January 23, 2025 - “Humanity’s Last Exam” Featured in New York Times

Because AI can now easily excel at S.A.T.-level tests, Dan Hendrycks of the Center for AI Safety developed “Humanity’s Last Exam” (dates for development were not provided). Questions for the exam, many of which are at the graduate education level, were submitted by experts in a multitude of fields, with one example being:

Hummingbirds within Apodiformes uniquely have a bilaterally paired oval bone, a sesamoid embedded in the caudolateral portion of the expanded, cruciate aponeurosis of insertion of m. depressor caudae. How many paired tendons are supported by this sesamoid bone? Answer with a number.

AI models that completed the exam scored between 1.5 percent (Google’s Gemini 1.5 Pro) and 8.3 percent (OpenAI’s o1 system).

—Kevin Roose, “When A.I. Passes This Test, Look Out,” nytimes.com, January 23, 2025

January 27, 2025 - DeepSeek Upends AI World

An advanced AI assistant from the little-known Chinese company DeepSeek became the top free app in the Apple App Store, pushing OpenAI’s ChatGPT into second place. The release caused U.S. tech stocks to drop sharply, leading some commentators to claim, “DeepSeek . . . is AI’s Sputnik moment.” They compared the AI competition between China and the U.S. “to the space race between the U.S. and the Soviet Union and the event that forced the U.S. to realize that its technological abilities were not unassailable.”

—Angela Yang and Jasmine Cui, “A New AI Assistant from China Has Silicon Valley Talking,” nbcnews.com, January 27, 2025

January 30, 2025 - OpenAI Partners with U.S. National Laboratories

The goal of the partnership is to “supercharge” the U.S. National Labs’ scientific research using OpenAI’s “latest reasoning models.” The AI models will be used for:

  • Accelerating the basic science that underpins U.S. global technological leadership
  • Identifying new approaches to treating and preventing disease
  • Enhancing cybersecurity and protecting the American power grid
  • Achieving a new era of U.S. energy leadership by unlocking the full potential of natural resources and revolutionizing the nation’s energy infrastructure
  • Improving U.S. security through improved detection of natural and man-made threats, such as biology and cyber, before they emerge
  • Deepening our understanding of the forces that govern the universe, from fundamental mathematics to high-energy physics

—OpenAI, “Strengthening America’s AI Leadership with the U.S. National Laboratories,” openai.com, January 30, 2025

February 2, 2025 - OpenAI Releases Deep Research

Deep Research, an AI agent used via ChatGPT, “can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst.”

—OpenAI, “Introducing Deep Research,” openai.com, February 2, 2025

February 10, 2025 - Nonsense Phrases Mark AI-generated Scientific Papers

In 2022, a Russian chemist (writing under the pseudonym Paralabrax clathratus) first noted the nonsense phrase “vegetative electron microscopy” in a now-retracted paper published in Springer Nature’s Environmental Science and Pollution Research. The phrase is thought to originate from an AI scanner ignoring the column breaks in a 1959 paper and reading the text straight across the page, so that “vegetative” in the left column lined up with “electron microscopy” in the right column. Further searches turned up almost two dozen more articles containing the phrase, adding to the estimated thousands of AI-generated articles “polluting the scientific literature.” Retraction Watch found that no other articles with the phrase had been retracted as of its February 2025 report.

—Retraction Watch, “As a Nonsense Phrase of Shady Provenance Makes the Rounds, Elsevier Defends Its Use,” retractionwatch.com, February 10, 2025
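
As a rough illustration of the suspected extraction error described above, the short, hypothetical Python sketch below (the column text is invented, not taken from the 1959 paper) shows how reading a two-column page row by row, rather than column by column, splices unrelated words into phrases like “vegetative electron microscopy.”

    # Hypothetical two-column page: each list is one column, read top to bottom.
    left_column = ["combined with a", "vegetative", "propagation method"]
    right_column = ["examined under the", "electron microscopy", "technique"]

    # Correct extraction: read all of the left column, then all of the right.
    column_wise = " ".join(left_column + right_column)

    # Faulty extraction: read each row straight across both columns.
    row_wise = " ".join(f"{l} {r}" for l, r in zip(left_column, right_column))

    print("vegetative electron microscopy" in column_wise)  # False
    print("vegetative electron microscopy" in row_wise)     # True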

February 11, 2025 - Thomson Reuters Wins AI Copyright Case

The ruling is the first major win for a copyright holder in a U.S. case against an AI company. Thomson Reuters had accused ROSS Intelligence of infringing its copyrights by using content from Westlaw (which is owned by Thomson Reuters) to train ROSS’s AI-based legal research platform. U.S. Circuit Judge Stephanos Bibas ruled in favor of Thomson Reuters, stating: “none of Ross’s possible defenses holds water. I reject them all.”

Ross Intelligence shut down in 2021 due to the cost of litigation.

—Kate Knibbs, “Thomson Reuters Wins First Major AI Copyright Case in the U.S.,” wired.com, February 11, 2025

February 21, 2025 - OpenAI Reports Evidence of A.I.-Powered Chinese Surveillance Tool

OpenAI found what it has dubbed “Peer Review,” an AI surveillance tool that collects social media posts critical of China in real time from Western countries.

OpenAI also found a campaign called “Sponsored Discontent,” which generates posts in English criticizing posts that are critical of China, as well as translating articles critical of the U.S. into Spanish for distribution in Latin America. In addition, OpenAI found an investment scam campaign that may be based in Cambodia.

—Cade Metz, “OpenAI Uncovers Evidence of A.I.-Powered Chinese Surveillance Tool,” nytimes.com, February 21, 2025

February 24, 2025 - American Psychological Association (APA) Warns Against AI “Therapists”

Arthur C. Evans, Jr., chief executive of the APA, said:

They [AI bots] are actually using algorithms that are antithetical to what a trained clinician would do.… Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.

—Ellen Barry, “Human Therapists Prepare for Battle Against A.I. Pretenders,” nytimes.com, February 24, 2025

February 25, 2025 - AI Video of Trump and Musk Broadcast at HUD

Computer monitors at the Department of Housing and Urban Development (HUD) headquarters in Washington, D.C., briefly displayed an obscene, AI-generated video of President Trump and Elon Musk with the words “Long Live the Real King.” The video was discovered by employees returning to the headquarters (after being ordered to do so by the Trump administration) for their first full day back working in the office.

—Christopher Flavelle, et al., “Fake Video of Trump and Musk Appears on TVs at Housing Agency,” nytimes.com, February 24, 2025

March 4, 2025 - OpenAI Announces Partnerships with 15 Research Institutions

OpenAI stated:

Uniting institutions across the U.S. and abroad, NextGenAI aims to catalyze progress at a rate faster than any one institution would alone. This initiative is built not only to fuel the next generation of discoveries, but also to prepare the next generation to shape AI’s future.

The partnership includes California Institute of Technology, the California State University system, Duke University, University of Georgia, Harvard University, Howard University, Massachusetts Institute of Technology, University of Michigan, University of Mississippi, The Ohio State University, University of Oxford, Paris Institute of Political Studies, Texas A&M University, Boston Children’s Hospital, and the Boston Public Library.

—OpenAI, “Introducing NextGenAI: A Consortium to Advance Research and Education with AI,” openai.com, March 4, 2025

March 10, 2025 - Lila Sciences Emerges from Stealth

Lila Sciences seeks to use AI trained on scientific publications and experimental data to run experiments in automated labs with little human assistance. The company has already seen some success with novel antibodies and carbon capture catalysts.

—Steve Lohr, “The Quest for A.I. ‘Scientific Superintelligence,’” nytimes.com, March 10, 2025

March 10, 2025 - Experts Recommend Policy Update for Voice Recognition in Courtrooms

According to Rebecca Wexler, Emily Cooper, Hany Farid, and Sarah Barrington (three professors and a Ph.D. student):

Under the current Federal Rules of Evidence, someone trying to introduce an audio recording of a voice can satisfy the authentication standard for admissibility merely by putting a witness on the stand who says they are familiar with the person’s voice and the recording sounds like them.

This is problematic if the audio recording is of an AI-generated voice that sounds like a real person; in that case, the recording is still admissible in court if a witness says it sounds like a specific person. The authors noted that listeners judge an AI voice clone to be the real person about 80 percent of the time.

The authors recommend giving judges the discretion to exclude recordings that are suspected to be AI-generated.

—Rebecca Wexler, et al., “AI-Generated Voice Evidence Poses Dangers in Court,” lawfaremedia.com, March 10, 2025

March 12, 2025 - Sam Altman Unveils AI Model That Is “Good at Creative Writing”

Sam Altman, CEO of OpenAI, posted to X:

we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.

—Sam Altman, x.com, March 11, 2025

March 2025 - Yale Law School Scholar Suspended After AI-news Site Reports Terrorist Link

Helyeh Doutaghi, deputy director of the Law and Political Economy Project at Yale University, was barred from campus and placed on administrative leave after an AI-powered news site, “Jewish Onliner,” said she was a member of a terrorist group. Though pro-Palestinian, Doutaghi denied membership in Samidoun, a group that the U.S. Treasury Department in 2024 called a “sham charity” raising money for the Popular Front for the Liberation of Palestine (PFLP).

—Stephanie Saul, “Yale Suspends Scholar After A.I.-Powered News Site Accuses Her of Terrorist Link,” nytimes.com, March 12, 2025

March 20, 2025 - AI Saves Man’s Life According to New York Times Report

Joseph Coates has a rare blood disorder, POEMS syndrome, and he expected to die of the disease as treatment after treatment failed. Then immunologist David Fajgenbaum used AI to create a custom, unconventional mix of chemotherapy, immunotherapy, and steroids. The combination made Coates well enough to receive more traditional treatments, including a stem cell transplant, and he went into remission.

While Coates’s case was the focus of the article, The New York Times listed other examples of AI helping to find novel treatments and new uses for existing drugs to help patients suffering from rare diseases. “Other laboratory discovery techniques have already put drug repurposing on the map,” said Donald C. Lo, a scientific lead at Remedi4All. “A.I. just puts rocket boosters on that.”

—Kate Morgan, “Doctors Told Him He Was Going to Die. Then A.I. Saved His Life,” nytimes.com, March 20, 2025

March 23, 2025 - Author Richard Osman Encourages Authors to “Have a Good Go” at Meta for Copyright Infringement

The British author of the popular Thursday Murder Club series, Richard Osman, posted on X (Twitter):

Copyright law is not complicated at all. If you want to use an author’s work you need to ask for permission. If you use it without permission you’re breaking the law. It’s so simple. It’ll be incredibly difficult for us, and for other affected industries, to take on Meta, but we’ll have a good go!

The author included a statement by The Society of Authors:

We are disappointed but not surprised that Meta has used millions of pirated books to develop its AI systems. As a matter of urgency, Meta needs to compensate the rightsholders of all the works it has been exploiting.

—Richard Osman, x.com, March 23, 2025