
A fun, if impossible, challenge is to imagine how technology will ultimately affect how people interact with information. Douglas Adams famously said that trying to predict the future is a mug’s game, but whether our exact predictions come to pass is not so important as having made them. Drawing a horizon gives us a sense of agency. It allows us to engage in planning even when we know the plans will change by the time we get there.
The NMC Horizon Report (2017) successfully predicted the rise of AI almost to the year: their projection was four to five years out, and ChatGPT was released in 2022. They had also previously listed machine learning, which is essentially what today's AI is with less gloss. Their observation that "short term trends often do not have an abundance of concrete evidence pointing to their effectiveness" certainly resonates here at the end of 2025, as many predict the AI bubble will pop.
The advent of AI in higher education is billed as a way to patch the problems introduced by increasing scale. Remote learning allows schools to admit more students, but at the cost of an individualized student experience. Learning management systems like Blackboard and Canvas are more efficient, but instructors cannot possibly respond meaningfully to an order of magnitude more students. The assembly-line experience makes the education more alienating, though perhaps this is inevitable given that online education is less embodied.
Either way, the prospect AI offers education is more individualized attention and a lower effective student-teacher ratio. (The actual ratio may, paradoxically, rise if fewer instructors are needed for even greater numbers of students.)

A new technological affordance comes with two questions for librarians and teachers: (1) the general problem everyone faces of how to apply the technology to generate value in their field, and (2) the additional question of how to teach patrons and students the digital literacy they need to live in a world where that technology now exists. This is true even if they choose not to use it on ethical or practical grounds (EDUCAUSE, 2025).
In teaching digital literacy around AI, the inability of everyday hardware to run the largest LLMs may actually be a pedagogical feature rather than a limitation. Desktop or mobile hardware must run simpler models, but this lets the experimenter identify failure cases more easily. Students may thus better understand the importance of a human in the loop, and the risk-benefit calculus of using AI, if they can more easily watch it fail. This makes smaller models arguably better candidates for a learning lab environment. Running the models locally in the library also has privacy benefits for patrons, since chats are not shared with technology companies rapacious for data.
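As a concrete illustration, a learning lab workstation could query a small model through a local inference server rather than a cloud API. This is a minimal sketch, assuming an Ollama server on its default port (localhost:11434) and a small example model name (llama3.2); both are assumptions, not requirements of the setup described above.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation; no data leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> str:
    """Build the JSON body for a single, non-streaming generation request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ask_local_model(model: str, prompt: str) -> str:
    """Send the prompt to the locally running model and return its reply."""
    request = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

# Example usage (requires a running Ollama server with the model pulled):
#   print(ask_local_model("llama3.2", "How many r's are in 'strawberry'?"))
```

Deliberately tricky prompts like the counting question above are where small models tend to fail visibly, which is exactly the teachable moment a learning lab can exploit.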
References
Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., Langley, K., & Wolfson, N. (2017). NMC Horizon Report: 2017 library edition. The New Media Consortium.
Robert, J., Muscanell, N., McCormack, M., Pelletier, K., Arnold, K., Arbino, N., Young, K., & Reeves, J. (2025). 2025 EDUCAUSE Horizon Report: Teaching and learning edition. EDUCAUSE.
Image Credits
“img_7818” by Michael Hicks, CC BY 2.0
Arvo, J., & Kirk, D. (1987). Fast ray tracing by ray classification. ACM Siggraph Computer Graphics, 21(4), 55-64.
Livestreaming the Library slides



