Generative AI encompasses a range of legitimately incredible tools with impressive utility. It can be intuitive and fun; I’ve personally used ChatGPT for practical, frivolous, and tedious matters. For these reasons it’s seismically overhauling what seems like every possible industry. As a society, we may not be able to appreciate the full effect of these changes until enough time has elapsed for us to look back at what’s been wrought.

Generally speaking, AI is a polarizing topic among the public, eliciting strong feelings. Fear and panic are often based on assumptions about what AI is, how it works, and what it is capable of. Since these models are evolving in real time, understanding of what they can and can’t do may be outdated by the time the public catches up. Most people have a general awareness of generative AI; there has been extensive, breathless media coverage over the past few years, both positive and critical. What I don’t think is being discussed with appropriate urgency are the repercussions of this technology. My delight in using ChatGPT soured when I learned about its environmental costs.

Who bears these costs once these modalities are embedded in our social, informational, and educational architecture? Are those costs too dear? Who gets to decide? How can the public know what they’re engaging with when the private industry promoting it (in gauzy, heartwarming Super Bowl ads, no less) obscures facts about the energy required to keep it going? Those demands will only increase as the technology is adopted in greater numbers and assimilated into workflows and daily life.
Fister and Head discuss how ChatGPT is reshaping information structures, drawing a parallel to Wikipedia, which also profoundly influenced the information infrastructure of education and society at large. Like ChatGPT today, it prompted hand-wringing within the educational establishment over its potential to facilitate plagiarism. The authors conclude that ChatGPT should not be banned from the classroom, and I agree, in the context of it purely as a tool.
Likening it to Wikipedia misses the mark, though; generative AI is fundamentally different from preceding waves of tech breakthroughs in its capacity for environmental degradation and its propensity for tailoring results based on “customer satisfaction rather than facts.” Despite acknowledging these realities, the authors conclude that “we should consider how we can play a greater role in deciding what happens to our knowledge environments rather than leaving it up to a handful of big tech companies.”
Certainly. Yet if it’s the case that society must accept this technology as part of the greater information ecosystem, must we also accept it on the terms private industry is offering? Terms that point to destruction, little oversight, and potential for great abuse through the proliferation of disinformation?
Why is the environmental factor relevant to generative AI’s use in LIS? As a metadiscipline, LIS is fundamentally bound up with the interconnected nature of knowledge and its transmission. I would argue that it’s a moral imperative for information professionals to advocate for responsible stewardship of such tools, for the sake of our planet and those who live here (everyone). The tools we use demand scrutiny not just in how they function but in what they cost us to use.
References (listed in order of appearance):
https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/
https://link.springer.com/article/10.1007/s11023-024-09705-w
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
https://www.youtube.com/watch?v=-7e6g11BJc0
https://www.insidehighered.com/opinion/views/2023/05/04/getting-grip-chatgpt