Mohammad Hosseini and Kristi Holmes: After 3 years of ChatGPT, it’s clear Illinois needs more AI safeguards

Three years ago, ChatGPT entered our world and changed the way many of us interact with our phones, computers, work and life in general. What began as a fun novelty essentially kick-started an artificial intelligence boom. Today, numerous AI tools are embedded in research, classrooms, hospitals, law firms and other industries, promising efficiency and creativity. As we mark this anniversary, we must ask: At what cost and at whose expense do we realize this efficiency and creativity?

From a financial perspective, the rise of ChatGPT and other AI tools has fueled huge investments in technology and data centers. In fact, George Saravelos of Deutsche Bank recently wrote that without these investments, the U.S. economy “would be close to, or in, recession this year.” He added that because this level of investment is unsustainable, and because companies that have invested in AI are not yet seeing the expected return on their investments, the current economic boom will likely be short-lived.

Even so, the owners and shareholders of companies that build AI tools and infrastructure have had big wins. Share prices have surged since the start of the AI boom: Microsoft, from $249.65 on Nov. 30, 2022, to about $485; Nvidia, from $16.91 to about $180; Meta, from $117.38 to about $635; and Amazon, from $96.54 to about $230, as of last week. Concurrently, a firm that tracks announcements of major layoffs has reported that AI was the second-most frequently cited cause of workforce reductions in October, resulting in 31,039 job cuts in the U.S. AI has created new jobs, but because the returns on AI investments have so far fallen short of expectations, the newly hired may lose those jobs sooner than expected.

As tech giants pour billions into building data centers to run AI models, AI’s environmental footprint is ballooning: Unprecedented volumes of electricity and water are being consumed, and tens of millions of tons of carbon dioxide are being released. AI companies rarely disclose detailed data about the energy, water and carbon costs of their operations, and much of the footprint is not directly measurable, meaning that even the staggering figures publicly discussed are almost certainly undercounts.

Whatever the true totals, these costs are disproportionately borne by the communities near these facilities. Residents near major data centers report rising utility prices, stressed water supplies and land-use conflicts. Further upstream are developing countries, where the necessary rare earth metals are extracted.

The negative impacts of AI on developing nations — unequal access, increased inequality and a digital divide — have been a concern for quite some time. AI is also hurting developed countries. For example, there is rising alarm over the negative impacts of generative AI on trust-based relationships, such as those between citizens and politicians or between patients and doctors. Research shows that AI tools risk undermining the patient-physician relationship by eroding empathy, shared decision-making and trust. Likewise, AI-generated deepfakes can undermine democracy by making it harder for citizens to distinguish truth from manipulation.

Another disturbing trend pertains to the mental health consequences of excessive AI use. Reports of AI-induced psychosis — an inability to distinguish reality from nonreality — and suicides linked to prolonged AI chatbot interactions are mounting. These cases often involve vulnerable users, such as minors and people with preexisting mental health conditions, who use AI chatbots such as ChatGPT excessively and form parasocial relationships with them. After a while, these users start treating chatbots as trusted confidants or therapists.

Despite these risks, there is a lack of federal regulation to build more safeguards and sanction malicious users. Congress has debated AI governance but mostly opted for deregulation to limit impacts on innovation. In this vacuum, states such as California and Colorado have passed legislation to set a higher bar for transparency, bias audits and consumer rights.

Illinois, too, has been active. In a progressive move, the General Assembly passed and Gov. JB Pritzker signed into law legislation to limit the use of AI in therapy and psychotherapy services. Currently, four bills — H.B. 3506, S.B. 1929, S.B. 1792 and S.B. 2203 — are moving through committees in Springfield. H.B. 3506 would require AI developers to produce, implement and publicly post a safety and security protocol. It also would mandate that developers publish every 90 days a risk assessment report outlining emerging risks, mitigation steps and significant model changes. At least once a year, companies would need to hire an independent third-party auditor to verify compliance. H.B. 3506 also includes whistleblower protections, rules for redacting sensitive information and civil penalties for violations, though these penalties are capped at $1 million.

This penalty ceiling raises a critical question: What happens when AI systems cause harms that far exceed that amount? By limiting liability, the bill could end up offering disproportionate protection to the very firms whose technologies impose the greatest risks.

These bills are a good start but do not address overuse, disclosure or environmental costs. Future laws in Illinois could require AI providers not only to warn that outputs may be inaccurate (which some currently do), but also to display concise notices about the risks of excessive use and overreliance on AI and publish environmental labels that estimate energy use, water consumption and carbon dioxide per model and per user session. Warnings akin to those on addictive substances, perhaps: “Prolonged use of AI chatbots may affect mental health. This tool is not a substitute for professional care.”

Another missing piece is disclosure. Mandatory content-labeling laws are needed to require that all AI-generated text, images, audio, videos and virtual forms be explicitly marked in all contexts, including on social media. This transparency would help users distinguish human communication from synthetic content. It also potentially would help close a dangerous regulatory gap by ensuring that companies bear responsibility for identifying AI-generated content, enabling meaningful oversight, forensic auditing and legal accountability when AI is used to deceive or defraud.

Such measures could curb misuse without stifling innovation and communicate that there are social and planetary costs to AI. 

Mohammad Hosseini, Ph.D., is an assistant professor in the Department of Preventive Medicine at Northwestern University’s Feinberg School of Medicine. Kristi Holmes, Ph.D., is a professor of preventive medicine and the director of Galter Health Sciences Library at Northwestern’s Feinberg School of Medicine.


December 1, 2025