There have been a few information revolutions in the long arc of human history, from the printing press to the radio to the internet. Generative AI, which allows users to generate fully developed content with just a few words, may be the next in that list. After the explosive growth of large language models (LLMs) like ChatGPT and Claude and image generators like DALL-E and Midjourney over the past few years, as well as models that create video (Sora), music (Suno), and even podcasts (NotebookLM), it's hard to deny that another such shift is underway. Even if you never seek out the tools yourself, they have been integrated into our daily lives. Most Google search results pages are topped with an AI summary, and if you use Windows 11, Microsoft's Copilot is built into the operating system.
More worryingly, AI-generated output has proliferated on the internet. In a world where it is trivial for a tech-savvy person to set up a social media account that replies to people using an LLM, it can be hard to tell whether the account responding to you is actually a human. Because image generators can create not only stylized art but also photorealistic images, it is easier than ever to create images that look real but aren't. While most complex video generation still looks obviously AI-made, at the current breakneck pace of progress, it might not be long before videos become difficult to trust as well.
All of this adds up to an extremely treacherous information landscape. How do you know what’s credible when so many things can be generated at the drop of a hat? Libraries, as information institutions, have a responsibility to at the very least stay abreast of developments in this area.
AI can of course be an incredible tool, and libraries can run programs to help their patrons use it to their advantage. For example, academic libraries can run workshops on how to use LLMs as a personal tutor, including best practices for double-checking the information they provide. English-as-a-second-language teachers can, as part of their curriculum, show students how to use LLMs as a conversation partner that corrects their grammar and teaches new vocabulary in real time.
However, libraries, particularly any staff who do research, must be aware of the traps created both by directly using LLMs to find information and by the prevalence of AI-generated content online, and must do their best to pass these insights on to patrons. Perhaps we can do this through dedicated AI literacy programming that goes into depth on how to recognize AI-generated content and how to double-check the information LLMs produce. Perhaps we can do it more playfully, by weaving that AI literacy into lighter programs that make use of these tools. However we do it, it is imperative that libraries remain a source of unbiased, reliable information in the minds of our patrons.
Hi Alice – This is a great post. I have been really interested in AI throughout my MLIS journey and agree there are opportunities and risks. I will carry your description of “an extremely treacherous information landscape” past this class – it’s a great description. It’s not bad, it’s not good – it is something to be navigated with care, awareness, and intention.