Artificial Intelligence (AI)

Is Artificial Intelligence Good for Society?

Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [1]

The idea of AI dates back at least 2,700 years. As Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, explains: “Our ability to imagine artificial intelligence goes back to ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [2]

Mayor notes that the myths about Hephaestus, the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man, Talos, which had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concludes, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [2]

The modern notion of AI largely began when Alan Turing, who contributed to breaking the Nazis’ Enigma code during World War II, created the Turing test to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been debated. [1][3][4]

The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence” when he, with Marvin Minsky and Claude Shannon, proposed a 1956 summer workshop on the topic at Dartmouth College. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines.” He later created the computer programming language LISP (which is still used in AI), hosted computer chess games against human Russian opponents, and developed the first computer with “hand-eye” capability, all important building blocks for AI. [1][5][6][7]

The first AI program designed to mimic how humans solve problems, Logic Theorist, was created by Allen Newell, J.C. Shaw, and Herbert Simon in 1955–56. The program was designed to solve problems from Principia Mathematica (1910–13), written by Alfred North Whitehead and Bertrand Russell. [1][8]

In 1958 Frank Rosenblatt invented the Perceptron, which he claimed was “the first machine which is capable of having an original idea.” Though the machine was dismissed by skeptics, it was later praised as the “foundations for all of this artificial intelligence.” [1][9]

As computers became cheaper in the 1960s and ’70s, AI programs such as Joseph Weizenbaum’s ELIZA flourished, and U.S. government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early ’90s furthered the research, including the invention of expert systems by Edward Feigenbaum and Joshua Lederberg. But progress again waned with another drop in government funding. [10]

In 1997 Garry Kasparov, reigning world chess champion and grandmaster, was defeated by IBM’s Deep Blue AI computer program, a major event in AI history. More recently, advances in computer storage capacity and speed have opened new avenues for AI research and implementation, aiding scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [1][10][11][12]

Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars, cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media. [13][58]

With the field growing by leaps and bounds, on Mar. 29, 2023, prominent figures in technology and public life, including Elon Musk, Steve Wozniak, Getty Images CEO Craig Peters, author Yuval Noah Harari, and politician Andrew Yang, published an open letter calling for a six-month pause on AI “systems more powerful than GPT-4.” (GPT-4, short for “Generative Pre-trained Transformer 4,” is an AI model that can generate human-like text and images.) The letter states, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…. AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” Within a day of its release, the letter had garnered 1,380 signatures from engineers, professors, artists, and grandmothers alike. [59][62]

On Oct. 30, 2023, President Joe Biden signed an executive order on artificial intelligence that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” Vice President Kamala Harris stated, “We have a moral, ethical and societal duty to make sure that A.I. is adopted and advanced in a way that protects the public from potential harm. We intend that the actions we are taking domestically will serve as a model for international action.” [60][61]

Despite such precautions, experts noted that many of the new standards would be difficult to enforce, especially as new concerns and controversies over AI evolve almost daily. AI developers, for example, have faced criticism for using copyrighted work to train AI models and for politically skewing AI-produced information. Generative programs such as ChatGPT and DALL-E 3 claim to produce “original” output because developers have exposed the programs to huge databases of existing texts and images, material that includes copyrighted works. OpenAI, Microsoft, Anthropic, and other AI companies have been sued by The New York Times; by countless authors, including Jodi Picoult, George R.R. Martin, Sarah Silverman, and John Grisham; by music publishers, including Universal Music Publishing Group; and by numerous visual artists as well as Getty Images, among others. Many companies’ terms of service, including Encyclopaedia Britannica’s, now require that AI companies obtain written permission to data mine for AI bot training. [63][64][65][66][67][68][69][70]

Controversy arose yet again in early 2024, when Google’s AI chatbot Gemini began skewing historical events by generating images of racially diverse 1940s German Nazi soldiers and Catholic popes (including a Black female pope). Republican lawmakers accused Google of promoting leftist ideology and spreading disinformation through its AI tool. Globally, fears have been expressed that such technology could undermine the democratic process in upcoming elections. As a result, Google agreed to correct its faulty historical imaging and to limit election-related queries in countries with forthcoming elections. Similarly, the Federal Communications Commission (FCC) outlawed the use of AI-generated voices in robocalls after a New Hampshire political group was found to be placing robocalls featuring an AI-generated voice that mimicked President Joe Biden in an effort to suppress Democratic Party primary voting. [71][72][73][74][75][76][77]

(This article first appeared on ProCon.org and was last updated on September 9, 2024.)

Pros and Cons

Pro 1: AI can make everyday life more convenient and enjoyable, improving our health and standard of living.
Con 1: AI will harm the standard of living for many people by causing mass unemployment as robots replace people.

Pro 2: AI makes work easier for students and professionals alike.
Con 2: AI can be easily politicized, spurring disinformation and cultural laziness.

Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities.
Con 3: AI hurts racial minorities by repeating and exacerbating human racism.

Pro 4: Artificial intelligence can improve workplace safety.
Con 4: Artificial intelligence poses dangerous privacy risks.

Pro Arguments

Pro 1: AI can make everyday life more convenient and enjoyable, improving our health and standard of living.

Why sit in a traffic jam when a map app can navigate you around the car accident? Why fumble with shopping bags searching for your keys in the dark when a preset location-based command can have your doorway illuminated as you approach your now unlocked door? [23]

Why scroll through hundreds of possible TV shows when the streaming app already knows what genres you like? Why forget eggs at the grocery store when a digital assistant can take an inventory of your refrigerator and add them to your grocery list and have them delivered to your home? All of these marvels are assisted by AI technology. [23]

AI-enabled fitness apps boomed during the COVID-19 pandemic when gyms were closed, increasing the number of AI options for at-home workouts. Now, you can not only set a daily steps goal with encouragement reminders on your smart watch, but you can ride virtually through the countryside on a Peloton bike from your garage or have a personal trainer on your living room TV. For more specialized fitness, AI wearables can monitor yoga poses or golf and baseball swings. [24][25]

AI can even enhance your doctor’s appointments and medical procedures. It can alert medical caregivers to patterns in your health data as compared to a vast library of medical data, while also doing the paperwork tied to medical appointments so doctors have more time to focus on their patients, resulting in more personalized care. AI can even help surgeons be quicker, more accurate, and less invasive in their operations. [26]

Smart speakers including Amazon’s Echo can use AI to soothe babies to sleep and monitor their breathing. Using AI, speakers can also detect regular and irregular heartbeats, as well as heart attacks and congestive heart failure. [27][28][29]

Pro 2: AI makes work easier for students and professionals alike.

Just as the calculator did not signal the end of students’ grasp of mathematics, typing did not eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading and writing, or of education in general. [78][79]

Elementary school teacher Shannon Morris explains that AI tools like “ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations, and providing customized content based on their interests. It can also offer personalized learning resources, videos, articles, and interactive activities. This resource can even provide personalized recommendations for studying, help with research, provide context-specific answers, and offer educational games.” She also notes that teachers’ more daunting tasks like grading and making vocabulary lists can be streamlined with AI tools. [79]

For adults AI can similarly make work easier and more efficient, rather than signaling the rise of the robot employee. Pesky, time-consuming tasks like scheduling and managing meetings, finding important emails amongst the spam, prioritizing tasks for the day, and creating and posting social media content can be delegated to AI, freeing up time for more important and rewarding work. The technology can also help with brainstorming, understanding difficult concepts, finding errors in code, and learning languages via conversation, making daunting tasks more manageable. [80]

AI is a tool that, if used responsibly, can enhance both learning and work for everyone. Carrie Spector of the Stanford Graduate School of Education says, “I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.” [81]

Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities.

Artificial intelligence is commonly integrated into smartphones and other household devices. Virtual assistants, including Siri, Alexa, and Cortana, can perform innumerable tasks, from making a phone call to navigating the internet. People who are deaf or hearing impaired can access transcripts of voicemail or other audio, for example. [20]

Other virtual assistants can transcribe conversations as they happen, allowing for more comprehension and participation by those who have impairments that affect their communication. Using voice commands with virtual assistants can help people with mobility disabilities who may have difficulty navigating small buttons or screens or turning on a lamp. [20]

Apps enabled by AI on smartphones and other devices, including VoiceOver and TalkBack, can read messages, describe app icons or images, and give information such as battery levels for visually impaired people. Other apps, such as Voiceitt, can transcribe and standardize the voices of people with speech impediments. [20]

Wheelmap provides users with information about wheelchair accessibility, and Evelity offers indoor navigation tools that are customized to the user’s needs, providing audio or text instructions and routes for wheelchair accessibility. [20]

Other AI implementations, such as smart thermostats, smart lighting, and smart plugs, can be automated to work on a schedule to aid people with mobility or cognitive disabilities to lead more independent lives. [21]

More advanced AI projects can combine with robotics to help physically disabled people. HOOBOX Robotics, for example, uses facial recognition software to allow a wheelchair user to move their wheelchair with facial expressions, making movement easier for seniors and those with ALS or quadriparesis. [22]

Pro 4: Artificial intelligence can improve workplace safety.

AI doesn’t get stressed, tired, or sick, three major causes of human accidents in the workplace. AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50% of construction companies that used drones to inspect roofs and perform other risky tasks saw improvements in safety. [14][15]

Artificial intelligence can also help humans be safer. For instance, AI can ensure employees are up to date on training by tracking and automatically scheduling safety or other training. AI can also check and offer corrections for ergonomics to prevent repetitive stress injuries or worse. [16]

An AI program called AI-SAFE (Automated Intelligent System for Assuring Safe Working Environments) aims to automate the workplace personal protective equipment (PPE) check, eliminating human errors that could cause accidents in the workplace. As more people wear PPE to prevent the spread of COVID-19 and other viruses, this sort of AI could protect against large-scale outbreaks. [17][18][19]

In India AI was used in the midst of the coronavirus pandemic to reopen factories safely by providing camera, cell phone, and smart wearable device-based technology to ensure social distancing, take employee temperatures at regular intervals, and perform contact tracing if anyone tested positive for the virus. [18][19]

AI can also perform more sensitive tasks in the workplace, such as scanning work emails for improper behavior and types of harassment. [15]

Con Arguments

Con 1: AI will harm the standard of living for many people by causing mass unemployment as robots replace people.

AI robots and other software and hardware are becoming less expensive and need none of the benefits and services required by human workers, such as sick days, lunch hours, bathroom breaks, health insurance, pay raises, promotions, and performance reviews, which spells trouble for workers and society at large. [51]

Forty-eight percent of experts surveyed by the Pew Research Center believed AI will replace a large number of blue- and even white-collar jobs, creating greater income inequality, increased unemployment, and a breakdown of the social order. [35]

The axiom “everything that can be automated, will be automated” is no longer science fiction. Self-checkout kiosks in stores like CVS, Target, and Walmart use AI-assisted video and scanners to prevent theft, alert staff to suspicious transactions, predict shopping trends, and mitigate sticking points at checkout. These AI-enabled machines have displaced human cashiers. About 11,000 retail jobs were lost in 2019, largely due to self-checkout and other technologies. In 2020, during the COVID-19 pandemic, a self-checkout manufacturer shipped 25% more units globally, reflecting the more than 70% of American grocery shoppers who preferred self- or touchless checkouts. [35][52][53][54][55]

An Oct. 2020 World Economic Forum report found 43% of businesses surveyed planned to reduce workforces in favor of automation. Many businesses, especially fast-food restaurants, retail shops, and hotels, automated jobs during the COVID-19 pandemic. [35]

Income inequality has been exacerbated over the last four decades, as 50 to 70 percent of changes in American paychecks were driven by wage decreases for workers whose industries experienced rapid automation, including AI technologies. [56][57]

Con 2: AI can be easily politicized, spurring disinformation and cultural laziness.

The idea that the Internet is making us stupid is legitimate, and AI is like the Internet on steroids.

With AI bots doing everything from research to writing papers, from basic math to logic problems, from generating hypotheses to performing science experiments, from editing photos to creating “original” art, students of all ages will be tempted (and many will succumb to the temptation) to use AI for their school work, undermining education goals. [82][83][84][85][86]

“The academic struggle for students is what pushes them to become better writers, thinkers and doers. Like most positive outcomes in life, the important part is the journey. Soon, getting college degrees without AI assistance will be as foreign to the next generation as payphones and Blockbuster [are to the current generation], and they will suffer for it,” says Mark Massaro, professor of English at Florida SouthWestern State College. [83]

A June 2023 study found that increased use of AI correlates with increased student laziness because of a loss of human decision-making. Similarly, an Oct. 2023 study found increased laziness and carelessness, as well as a decline in work quality, when humans worked alongside AI robots. [87][88][89]

The implications of allowing AI to complete tasks are enormous. We will see declines in work quality and human motivation, as well as the rise of dangerous situations ranging from deadly workplace accidents to Orwellian “groupthink.” And, when humans have become too lazy to program the technology, we’ll see lazy AI, too. [90]

Google’s AI chatbot Gemini even generated politically motivated historical inaccuracies by inserting people of color into historical events they never participated in, further damaging historical literacy. “An overreliance on technology will further sever the American public from determining truth from lies, information from propaganda, a critical skill that is slowly becoming a lost art, leaving the population willfully ignorant and intellectually lazy,” explains Massaro. [73][83]

Con 3: AI hurts racial minorities by repeating and exacerbating human racism.

Facial recognition has been found to be racially biased, easily recognizing the faces of white men while wrongly identifying Black women 35% of the time. One study of Amazon’s Rekognition AI program falsely matched 28 members of the U.S. Congress with mugshots from a criminal database, with 40% of the errors being people of color. [22][36][43][44]

AI has also been disproportionately employed against Black and Brown communities, with more federal and local police surveillance cameras in neighborhoods of color, and more social media surveillance of Black Lives Matter and other Black activists. The same technologies are used for housing and employment decisions and TSA airport screenings. Some cities, including Boston and San Francisco, have banned police use of facial recognition for these reasons. [36][43]

One particular AI program tasked with predicting recidivism risk for U.S. courts, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was found to falsely label Black defendants as high risk at twice the rate of white defendants, and to falsely label white defendants as low risk more often. AI is also incapable of distinguishing between when the N-word is being used as a slur and when it’s being used culturally by a Black person. [45][46]

In China facial recognition AI has been used to track Uyghurs, a largely Muslim minority. The U.S. and other governments have accused the Chinese government of genocide and forced labor in Xinjiang, where a large population of Uyghurs live. AI algorithms have also been found to show a “persistent anti-Muslim bias,” by associating violence with the word “Muslim” at a higher rate than with words describing people of other religions including Christians, Jews, Sikhs, and Buddhists. [47][48][50]

Con 4: Artificial intelligence poses dangerous privacy risks.

Facial recognition technology can be used for passive, warrantless surveillance without the knowledge of the person being watched. In Russia facial recognition was used to monitor and arrest protesters who supported jailed opposition politician Aleksey Navalny, who was found dead in prison in 2024. Russians fear a new facial recognition payment system for Moscow’s metro will increase these sorts of arrests. [36][37][38]

Ring, the AI doorbell and camera company owned by Amazon, has partnered with more than 400 police departments, allowing the police to request footage from users’ doorbell cameras. While users were allowed to deny access to any footage, privacy experts feared the close relationship between Ring and the police could override customer privacy, especially when the cameras frequently record activity on others’ property. The policy ended in 2024, but experts say other companies allow similar invasions. [39][91]

AI also follows you on your weekly errands. Target used an algorithm to determine which shoppers were pregnant and sent them baby- and pregnancy-specific coupons in the mail, infringing on the medical privacy of those who may be pregnant, as well as those whose shopping patterns may just imitate pregnant people. [40][41]

Moreover, artificial intelligence can be a godsend to crooks. In 2020 a group of 17 criminals defrauded a bank in the United Arab Emirates of $35 million, using AI “deep voice” technology to impersonate an employee authorized to make money transfers. In 2019, thieves attempted to steal $240,000 using the same AI technology to impersonate the CEO of an energy firm in the United Kingdom. [42]

Discussion Questions

  1. Is artificial intelligence good for society? Explain your answer(s).
  2. What applications would you like to see AI take over? What applications (such as handling our laundry or harvesting fruit and fulfilling food orders) would you like to see AI stay away from? Explain your answer(s).
  3. Think about how AI impacts your daily life. Do you use facial recognition to unlock your phone or a digital assistant to get the weather, for example? Do these applications make your life easier or could you live without them? Explain your answers.

Take Action

  1. Consider Kai-Fu Lee’s TED Talk argument that AI can “save our humanity.”
  2. Listen to AI expert Toby Walsh discuss the pros and cons of AI in his interview at Britannica.
  3. Examine the “weird” dangers of AI with Janelle Shane’s TED Talk.
  4. Consider how you felt about the issue before reading this article. After reading the pros and cons on this topic, has your thinking changed? If so, how? List two to three ways. If your thoughts have not changed, list two to three ways your better understanding of the “other side of the issue” now helps you better argue your position.
  5. Push for the position and policies you support by writing U.S. senators and representatives.

Sources

  1. IBM Cloud Education, “Artificial Intelligence (AI),” ibm.com, June 3, 2020
  2. Aaron Hertzmann, “This Is What the Ancient Greeks Had to Say about Robotics and AI,” weforum.org, Mar. 18, 2019
  3. Imperial War Museums, “How Alan Turing Cracked the Enigma Code,” iwm.org.uk (accessed Oct. 7, 2021)
  4. Noel Sharkey, “Alan Turing: The Experiment That Shaped Artificial Intelligence,” bbc.com, June 21, 2012
  5. Computer History Museum, “John McCarthy,” computerhistory.org (accessed Oct. 7, 2021)
  6. Andy Peart, “Homage to John McCarthy, the Father of Artificial Intelligence (AI),” artificial-solutions.com, Oct. 29, 2020
  7. Andrew Myers, “Stanford’s John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84,” news.stanford.edu, Oct. 25, 2011
  8. History Computer, “Logic Theorist – Complete History of the Logic Theorist Program,” history-computer.com (accessed Oct. 7, 2021)
  9. Melanie Lefkowitz, “Professor’s Perceptron Paved the Way for AI – 60 Years Too Soon,” news.cornell.edu, Sep. 25, 2019
  10. Rockwell Anyoha, “The History of Artificial Intelligence,” sitn.hms.harvard.edu, Aug. 28, 2017
  11. Victoria Stern, “AI for Surgeons: Current Realities, Future Possibilities,” generalsurgerynews.com, July 8, 2021
  12. Dan Falk, “How Artificial Intelligence Is Changing Science,” quantamagazine.org, Mar. 11, 2019
  13. European Parliament, “What Is Artificial Intelligence and How Is It Used?,” europarl.europa.eu, Mar. 29, 2021
  14. Irene Zueco, “Will AI Solve Your Workplace Safety Problems?,” pro-sapien.com (accessed Oct. 13, 2021)
  15. National Association of Safety Professionals, “How Artificial Intelligence/Machine Learning Can Improve Workplace Health, Safety and Environment,” naspweb.com, Jan. 10, 2020
  16. Ryan Quiring, “Smarter Than You Think: AI’s Impact on Workplace Safety,” ehstoday.com, June 8, 2021
  17. Nick Chrissos, “Introducing AI-SAFE: A Collaborative Solution for Worker Safety,” gblogs.cisco.com, Jan. 23, 2018
  18. Tejpreet Singh Chopra, “Factory Workers Face a Major COVID-19 Risk. Here’s How AI Can Help Keep Them Safe,” weforum.org, July 29, 2020
  19. Mark Bula, “How Artificial Intelligence Can Enhance Workplace Safety as Lockdowns Lift,” ehstoday.com, July 29, 2020
  20. Carole Martinez, “Artificial Intelligence and Accessibility: Examples of a Technology that Serves People with Disabilities,” inclusivecitymaker.com, Mar. 5, 2021
  21. Noah Rue, “How AI Is Helping People with Disabilities,” rollingwithoutlimits.com, Feb. 25, 2019
  22. Jackie Snow, “How People with Disabilities Are Using AI to Improve Their Lives,” pbs.org, Jan. 30, 2019
  23. Bernard Marr, “The 10 Best Examples of How AI Is Already Used in Our Everyday Life,” forbes.com, Dec. 16, 2019
  24. John Koetsier, “AI-Driven Fitness: Making Gyms Obsolete?,” forbes.com, Aug. 4, 2020
  25. Manisha Sahu, “How Is AI Revolutionizing the Fitness Industry?,” analyticssteps.com, July 9, 2021
  26. Amisha, et al., “Overview of Artificial Intelligence in Medicine,” Journal of Family Medicine and Primary Care, ncbi.nlm.nih.gov, July 2019
  27. Sarah McQuate, “First Smart Speaker System That Uses White Noise to Monitor Infants’ Breathing,” washington.edu, Oct. 15, 2019
  28. Science Daily, “First AI System for Contactless Monitoring of Heart Rhythm Using Smart Speakers,” sciencedaily.com, Mar. 9, 2021
  29. Nicholas Fearn, “Artificial Intelligence Detects Heart Failure from One Heartbeat with 100% Accuracy,” forbes.com, Sep. 12, 2019
  30. Aditya Shah, “Fighting Fire with Machine Learning: Two Students Use TensorFlow to Predict Wildfires,” blog.google, June 4, 2018
  31. Saad Ansari and Yasir Khokhar, “Using TensorFlow to keep farmers happy and cows healthy,” blog.google, Jan. 18, 2018
  32. M Umer Mirza, “Top 10 Unusual but Brilliant Use Cases of Artificial Intelligence (AI),” thinkml.ai, Sep. 17, 2020
  33. Bernard Marr, “10 Wonderful Examples Of Using Artificial Intelligence (AI) For Good,” forbes.com, June 22, 2020
  34. Calum McClelland, “The Impact of Artificial Intelligence - Widespread Job Losses,” iotforall.com, July 1, 2020
  35. Aaron Smith and Janna Anderson, “AI, Robotics, and the Future of Jobs,” pewresearch.org, Aug. 6, 2014
  36. ACLU, “Facial Recognition,” aclu.org (accessed Oct. 15, 2021)
  37. Pjotr Sauer, “Privacy Fears as Moscow Metro Rolls out Facial Recognition Pay System,” theguardian.com, Oct. 15, 2021
  38. Gleb Stolyarov and Gabrielle Tétrault-Farber, “‘Face Control’: Russian Police Go Digital against Protesters,” reuters.com, Feb. 11, 2021
  39. Drew Harwell, “Doorbell-Camera Firm Ring Has Partnered with 400 Police Forces, Extending Surveillance Concerns,” washingtonpost.com, Aug. 28, 2019
  40. David A. Teich, “Artificial Intelligence and Data Privacy – Turning a Risk into a Benefit,” forbes.com, Aug. 10, 2020
  41. Kashmir Hill, “How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did,” forbes.com, Feb. 16, 2012
  42. Thomas Brewster, “Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find,” forbes.com, Oct. 14, 2021
  43. ACLU, “How is Face Recognition Surveillance Technology Racist?,” aclu.org, June 16, 2020
  44. Alex Najibi, “Racial Discrimination in Face Recognition Technology,” harvard.edu, Oct. 4, 2020
  45. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” propublica.org, May 23, 2016
  46. Stephen Buranyi, “Rise of the Racist Robots – How AI Is Learning All Our Worst Impulses,” theguardian.com, Aug. 8, 2017
  47. Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” nytimes.com, Apr. 14, 2019
  48. BBC, “Who Are the Uyghurs and Why Is China Being Accused of Genocide?,” bbc.com, June 21, 2021
  49. Jorge Barrera and Albert Leung, “AI Has a Racism Problem, but Fixing It Is Complicated, Say Experts,” cbc.ca, May 17, 2020
  50. Jacob Snow, “Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots,” aclu.org, July 26, 2018
  51. Jack Kelly, “Wells Fargo Predicts That Robots Will Steal 200,000 Banking Jobs within the Next 10 Years,” forbes.com, Oct. 8, 2019
  52. Loss Prevention Media, “How AI Helps Retailers Manage Self-Checkout Accuracy and Loss,” losspreventionmedia.com, Sep. 28, 2021
  53. Anne Stych, “Self-Checkouts Contribute to Retail Jobs Decline,” bizjournals.com, Apr. 8, 2019
  54. Retail Technology Innovation Hub, “Retailers Invest Heavily in Self-Checkout Tech amid Covid-19 Outbreak,” retailtechinnovationhub.com, July 6, 2021
  55. Retail Consumer Experience, “COVID-19 Drives Grocery Shoppers to Self-Checkout,” retailcustomerexperience.com, Apr. 8, 2020
  56. Daron Acemoglu and Pascual Restrepo, “Tasks, Automation, and the Rise in US Wage Inequality,” nber.org, June 2021
  57. Jack Kelly, “Artificial Intelligence Has Caused A 50% to 70% Decrease in Wages—Creating Income Inequality and Threatening Millions of Jobs,” forbes.com, June 18, 2021
  58. Keith Romer, “How A.I. Conquered Poker,” nytimes.com, Jan. 18, 2022
  59. Future of Life Institute, “Pause Giant AI Experiments: An Open Letter,” futureoflife.org, Mar. 29, 2023
  60. Cecilia Kang and David E. Sanger, “Biden Issues Executive Order to Create A.I. Safeguards,” nytimes.com, Oct. 30, 2023
  61. White House, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” whitehouse.gov, Oct. 30, 2023
  62. Harry Guinness, “What Is GPT? Everything You Need to Know about GPT-3 and GPT-4,” zapier.com, Oct. 9, 2023
  63. Michael M. Grynbaum and Ryan Mac, “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” nytimes.com, Dec. 27, 2023
  64. Darian Woods and Adrian Ma, “Artists File Class-Action Lawsuit Saying AI Artwork Violates Copyright Laws,” npr.org, Feb. 3, 2023
  65. Dan Milmo, “Sarah Silverman Sues OpenAI and Meta Claiming AI Training Infringed Copyright,” theguardian.com, July 10, 2023
  66. Olafimihan Oshin, “Nonfiction Authors Sue OpenAI, Microsoft for Copyright Infringement,” thehill.com, Nov. 22, 2023
  67. Matthew Ismael Ruiz, “Music Publishers Sue AI Company Anthropic for Copyright Infringement,” pitchfork.com, Oct. 19, 2023
  68. Alexandra Alter and Elizabeth A. Harris, “Franzen, Grisham and Other Prominent Authors Sue OpenAI,” nytimes.com, Sep. 20, 2023
  69. Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed Feb. 26, 2024)
  70. Encyclopaedia Britannica, “Encyclopaedia Britannica, Inc. Terms of Use,” corporate.britannica.com (accessed Feb. 26, 2024)
  71. Josh Hawley, “Hawley to Google CEO over Woke Gemini AI Program: ‘Come Testify to Congress. Under Oath. In Public.,’” hawley.senate.gov, Feb. 28, 2024
  72. Adi Robertson, “Google Apologizes for ‘Missing the Mark’ after Gemini Generated Racially Diverse Nazis,” theverge.com, Feb. 21, 2024
  73. Nick Robins-Early, “Google Restricts AI Chatbot Gemini from Answering Questions on 2024 Elections,” theguardian.com, Mar. 12, 2024
  74. Jagmeet Singh, “Google Won’t Let You Use Its Gemini AI to Answer Questions about an Upcoming Election in Your Country,” techcrunch.com, Mar. 12, 2024
  75. Federal Communications Commission, “FCC Makes AI-Generated Voices in Robocalls Illegal,” fcc.gov, Feb. 8, 2024
  76. Ali Swenson and Will Weissert, “AI Robocalls Impersonate President Biden in an Apparent Attempt to Suppress Votes in New Hampshire,” pbs.org, Jan. 22, 2024
  77. Shannon Bond, “The FCC Says AI Voices in Robocalls Are Illegal,” npr.org, Feb. 8, 2024
  78. Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, 2020
  79. Shannon Morris, “Stop Saying ChatGPT Is the End of Education—It’s Not,” weareteachers.com, Jan. 12, 2023
  80. Juliet Dreamhunter, “33 Mindblowing Ways AI Makes Life Easier in 2024,” juliety.com, Jan. 9, 2024
  81. Carrie Spector, “What Do AI Chatbots Really Mean for Students and Cheating?,” acceleratelearning.stanford.edu, Oct. 31, 2023
  82. Aki Peritz, “A.I. Is Making It Easier Than Ever for Students To Cheat,” slate.com, Sep. 6, 2022
  83. Mark Massaro, “AI Cheating Is Hopelessly, Irreparably Corrupting US Higher Education,” thehill.com, Aug. 23, 2023
  84. Sibel Erduran, “AI Is Transforming How Science Is Done. Science Education Must Reflect This Change,” science.org, Dec. 21, 2023
  85. Kevin Dykema, “Math and Artificial Intelligence,” nctm.org, Nov. 2023
  86. Lauren Coffey, “Art Schools Get Creative Tackling AI,” insidehighered.com, Nov. 8, 2023
  87. Sayed Fayaz Ahmad et al., “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education,” Humanities and Social Sciences Communications, ncbi.nlm.nih.gov, June 2023
  88. Tony Ho Tran, “Robots and AI May Cause Humans To Become Dangerously Lazy,” thedailybeast.com, Oct. 18, 2023
  89. Dietlind Helene Cymek, Anna Truckenbrodt, and Linda Onnasch, “Lean Back or Lean In? Exploring Social Loafing in Human–Robot Teams,” frontiersin.org, Oct. 18, 2023
  90. Brian Massey, “Is AI The New Groupthink?,” linkedin.com, May 11, 2023
  91. Associated Press, “Ring Will No Longer Allow Police to Request Users’ Doorbell Camera Footage,” npr.org, Jan. 25, 2024