Rutgers University · Spring 2026

Understanding, Designing, and Building Social Media

For three decades, social media meant humans connecting with humans. Now AI is rewriting that equation—augmenting how we communicate, generating the content we consume, and increasingly acting as participants in social spaces. This seminar examines what we're losing, what we're gaining, and what we might still choose to build.

14 Weeks · Seminar Format
No Programming Required
Graduate Seminar
Kiran Garimella

Professor, Rutgers University

Social Media in the Age of AI

Social media was built on a simple premise: people connecting with people. Platforms like Facebook, Twitter, and YouTube became the infrastructure for human expression, community formation, and public discourse. But artificial intelligence is now fundamentally transforming this landscape through three distinct shifts:

01 Augmentation

AI modifying our language and self-presentation. From Smart Reply suggesting your responses to Grammarly reshaping your voice, algorithms increasingly mediate how we express ourselves to others.

02 Generation

AI creating the content we consume. Deepfakes blur the line between real and synthetic. Generative models can produce infinite feeds of plausible content. Authenticity becomes a design problem.

03 Agency

AI acting as social participants. Bots, agents, and NPCs now occupy roles once reserved for humans—as moderators, companions, collaborators, and community members.

To understand these shifts, we must first understand what they're disrupting. The course unfolds in three parts:

Part I · The Human Baseline · Weeks 1–6

We establish the sociological vocabulary: third places, identity and self-presentation, online communities, and social network theory. These concepts form the foundation for understanding what AI is transforming.

Part II · The AI Turn · Weeks 7–10

We examine how AI disrupts the baseline: algorithmic curation and its politics, AI-mediated communication, synthetic media and the authenticity crisis, and the challenges of content moderation at scale.

Part III · The Future of Sociality · Weeks 11–13

We turn to design and speculation: social simulacra, generative agents, the "dead internet" hypothesis, and what human connection might mean when the humans are increasingly hard to find.

Course assignments ask you to critique platform design choices and to develop ideas for new online communities. No programming skills are required, only a willingness to think critically about the digital spaces we inhabit and to imagine alternatives.

Weekly Topics & Readings

Week 1 · Readings

1 Ray Oldenburg. 1989. The Great Good Place. Chapters 1 and 2.
2 Robert D. Putnam. 2015. Bowling alone: America's declining social capital. In The city reader (pp. 188-196). Routledge.
3 Sherry Turkle, "Connected but Alone" TED Talk

Week 2 · Readings

1 Erving Goffman (1956). The Presentation of Self in Everyday Life. Harmondsworth. Pages 1–58: introduction and most of Chapter 1.
2 Hancock, J. T., Naaman, M., & Levy, K. (2020). "AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations." Journal of Computer-Mediated Communication.
3 Bernie Hogan (2010). The Presentation of Self in the Age of Social Media: Distinguishing Performances and Exhibitions Online. Bulletin of Science, Technology & Society, 30(6), 377–386.

Week 3 · Readings

1 William Whyte. 1964. How to Live in a City (video).
2 danah boyd and Nicole B. Ellison. 2007. Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210–230.
3 Barry Wellman et al. 2003. The social affordances of the Internet for networked individualism. Journal of Computer‐Mediated Communication, 8(3).
Class Activity: Research Pitch 1

Week 4 · Readings

1 Jim Hollan and Scott Stornetta. "Beyond being there." SIGCHI Conference on Human Factors in Computing Systems, pp. 119–125. 1992.
2 Mark S. Ackerman. "The intellectual challenge of CSCW: the gap between social requirements and technical feasibility." Human–Computer Interaction 15, no. 2-3 (2000): 179-203.
3 Amy Jo Kim. Nine Principles for Community Design.
Class Activity: Research Pitch 2

Week 5 · Readings

1 Sara Kiesler, Robert Kraut, Paul Resnick, and Aniket Kittur. "Regulating behavior in online communities." Building successful online communities: Evidence-based social design 1 (2012).
2 Lawrence Lessig (2006). Code: Version 2.0. Chapters 3 & 7.
3 Catherine Grevet and Eric Gilbert. "Piggyback prototyping: Using existing, large-scale social computing systems to prototype new ones." CHI 2015.
Due: Project Topic

Week 6 · Readings

1 Mark S. Granovetter (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.
2 Jeffrey Travers and Stanley Milgram (1969). An Experimental Study of the Small World Problem. Sociometry, Vol. 32, No. 4, pp. 425-443.
3 Eric Gilbert & Karrie Karahalios (2009). Predicting tie strength with social media. SIGCHI Conference on Human Factors in Computing Systems (pp. 211-220).
Due: Assignment 1

Week 7 · Readings

1 Langdon Winner (2017). Do artifacts have politics? In Computer Ethics (pp. 177-192). Routledge.
2 Crawford, K. (2021). Atlas of AI. Chapter: "Affect."
3 Gary King, Jennifer Pan, and Margaret E. Roberts. "How Censorship in China Allows Government Criticism but Silences Collective Expression." American Political Science Review 107, no. 2 (2013): 326–43.
Due: Project Proposal

Week 8 · Readings

1 Thomas Erickson and Wendy A. Kellogg. "Social translucence: an approach to designing systems that support social processes." ACM TOCHI 7, no. 1 (2000): 59-83.
2 Bonnie A. Nardi, Steve Whittaker, and Erin Bradner. 2000. Interaction and outeraction: instant messaging in action. CSCW '00. pp. 79–88.
3 Mieczkowski, H., et al. (2021). "AI-Mediated Communication: Language Use and Interpersonal Effects in a Referential Communication Task." Proceedings of the ACM on Human-Computer Interaction (CSCW).
Due: Assignment 2

Week 9 · Readings

1 Donath, J. (2020). "The Robot Dog Fetches for Whom?" In Robot Ethics 2.0. Routledge.
2 Paris, B., & Donovan, J. (2019). "Deepfakes and Cheap Fakes." Data & Society.
3 Shagun Jhaver et al. (2018). Online Harassment and Content Moderation: The Case of Blocklists. ACM TOCHI, 25, 2, Article 12.

Week 10 · Readings

1 James Grimmelmann (2015). The virtues of moderation. Yale JL & Tech., 17, 42.
2 Gillespie, T. (2020). "Content moderation, AI, and the question of scale." Big Data & Society.
3 Sap, M., et al. (2019). "The Risk of Racial Bias in Hate Speech Detection." ACL 2019.

Week 11 · Readings

1 Park, J. S., et al. (2023). "Generative Agents: Interactive Simulacra of Human Behavior." UIST '23, ACM.
2 Park, J. S., et al. (2022). "Social Simulacra: Creating Populated Prototypes for Social Computing Systems." UIST '22, ACM.
3 Yutong Zhang et al. (2025). "The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being." arXiv.
Due: Project Preliminary Draft

Week 12 · Readings

1 Bak-Coleman, J. B., et al. (2021). "Stewardship of global collective behavior." PNAS.
2 Argyle, L. P., et al. (2023). "Out of One, Many: Using Language Models to Simulate Human Samples." arXiv.
3 Törnberg, P. (2024). "Simulating Social Media Using Large Language Models to Evaluate Alternative News Feed Algorithms." arXiv.
Due: Assignment 3

Week 13 · Readings

1 Maggie Appleton (2024). "The Expanding Dark Forest and Generative AI."
2 Erik Hoel (2024). "The Semantic Apocalypse" (or "Here lies the internet, murdered by generative AI"). The Intrinsic Perspective.
3 Ted Chiang (2023). "ChatGPT Is a Blurry JPEG of the Web." The New Yorker.

Week 14 · No Readings

Class Activity: In-class presentations of final projects. Each student will get roughly 10 minutes to present.
Due: Project Presentations & Final Report