Cognitive Surplus and AI

Photo by Steve Johnson on Unsplash

Ever since @michael introduced the idea of cognitive surplus in Module 3, I’ve been wondering how AI might fit into that equation. When Shirky first wrote about cognitive surplus in 2010, he described how the digital tools that emerged with Web 2.0 allowed people to use their free time to shift from passive information consumers to active creators and collaborators (Shirky, 2010). Generative AI tools like ChatGPT may take the idea of cognitive surplus even further by freeing up more time and mental energy, potentially expanding people’s capacity to create and contribute.

Tools like ChatGPT have proven incredibly popular—CNN (2023) reported that ChatGPT reached one million users within five days of its launch—but they are not without problems. Most notably, AI still generates inaccurate information that sounds plausible, a problem euphemistically referred to as “hallucinations” (ColdFusion, 2023). Many people also worry about AI’s broader impacts: threats to academic integrity, a widening digital divide, potential job displacement, and even speculative fears about malevolent AI systems.

As AI technology evolves, it’s worth keeping Shirky’s concerns about cognitive surplus in mind. He argued that economic incentives can distort collaborative projects by corrupting people’s intrinsic motivations (Shirky, 2010). Fisher and Head (2023) illustrate Shirky’s point. They note that Wikipedia, a nonprofit, crowd-sourced product of Web 2.0, ultimately became a reliable and trusted source of information. It’s a clear example of what cognitive surplus can produce when intrinsic motivation drives participation. In contrast, social media platforms, also products of Web 2.0, have been shaped by profit motives, resulting in the spread and amplification of disinformation. As AI companies rush to monetize their platforms, we need to be aware of how economic pressures could influence the development and use of these tools. Shirky’s warning is still relevant: the true potential of cognitive surplus may depend on whether these technologies serve the public good or commercial interests.

References

CNN. (2023, January 25). Hear professor’s prediction on the future of AI tools [Video]. YouTube. https://www.youtube.com/watch?v=I4psRiE_YaM&t=3s

ColdFusion. (2023, March 27). AI is evolving faster than you think [GPT-4 and beyond] [Video]. YouTube. https://www.youtube.com/watch?v=DIU48QL5Cyk&t=237s

Fisher, B., & Head, A. J. (2023, May 4). Getting a grip on ChatGPT. Inside Higher Ed. https://www.insidehighered.com/opinion/views/2023/05/04/getting-grip-chatgpt

Shirky, C. (2010, June). How cognitive surplus will change the world [Video]. TED Conferences. https://www.ted.com/talks/clay_shirky_how_cognitive_surplus_will_change_the_world


2 Responses to Cognitive Surplus and AI

  1. Hi @ginamik! I’m struck by how well you’ve connected cognitive surplus to the AI landscape, and by your comparison between Wikipedia and social media platforms. It perfectly illustrates how economic incentives can shape technological outcomes and whether these tools will ultimately serve the public good or commercial interests. We must consider not just what AI can do, but what values should guide its development.

  2. @ginamik I agree with @angela that you really connected one of the foundational ideas of this class to artificial intelligence. You offer a positive yet thoughtful way forward for using AI that could benefit so many.

    Some good citations in this post as well!
