Future of Social Media Research Recap
2025-11-24

A couple weeks ago I was in the UK for the Oxford Internet Institute's workshop on the Future of Social Media Research. Common themes: the difficulties of accessing social media data, the influence of AI, cross-platform studies, polarization/misinformation, etc. Many great conversations and many impressive presentations by impressive people. Here I'm just going to highlight a few -- those I found myself taking a lot of notes for, and a few quick hits at the bottom.
Extra birds in this post because I don't get to Europe all that often and even the common birds are exciting. :)


Petter Törnberg's opening keynote, "Studying Digital Media in the Post-Social Media Era," did what a good opening keynote should do: deliver some provocative big-picture frames for attendees/speakers to reference and debate. We're at the tail end of the "social media paradigm," he argued, and it's time to think about what a "post-social media paradigm" might look like. What is the social media paradigm? It's a view of social media as a window onto "real behavior," with platforms serving as stages for expression/performance but especially as public spheres (a Habermasian framing that's present even in the critical arguments about social media becoming echo chambers). The social media paradigm also includes the platform economy, platform capitalism, and the tensions between participation and capitalist logic.

Why is it the end of the social media paradigm? Well, for one, people just don't like or trust social media or the companies behind them anymore, and are leaving the big platforms or lurking more and posting less. Social media is less about deliberation and participation and more about consumption and narrative. The rise of generative AI means platforms aren't even dependent on users to generate content anymore. Meanwhile, group chats and private messaging are booming (it's not that people stopped being social, after all).

So how to approach the post-social media paradigm? Törnberg floated a few ideas. For one, return to old theories and methods related to broadcast and audiences rather than participation and contribution. The relevant unit is content (exposure, algorithmic visibility, audience reception), not relationships between people. Chatbots need to be understood not just as a source of information but as a new medium for communication, one that produces, circulates, and shapes meaning. Likewise, platform capitalism gives way to AI capitalism: the attention economy recedes, and new forms of lock-in replace network effects.

It's a useful compilation of patterns, and I find most useful the basic idea that the way we talk about platforms in scholarship still prioritizes the "social" part of social media when both the infrastructural and behavioral sociality of TikTok are so radically different from, say, Friendster's. I'd agree the tools and language of mass communication are more relevant now than they were a decade ago. But I'm not sure we're at the end of the social media paradigm. In our work on YouTube and TikTok, we see a whole lot more uploads, not fewer, even before the platforms succumbed to sloppification. More to the point, when you look behind algorithmic curation on YouTube, you see many videos clearly not meant for a wide audience. Harshita Snehi and I recently wrote a paper on the "Ethics of Accidental Vlogs," inspired by the experience of encountering so much private-in-public content on YouTube, and we're working on another paper about a pattern of small-group or friends/family content that seems especially pronounced in India. The ways people use these platforms vary considerably across cultures, with combinations of public, private, live, uploaded, text, and video that defy any sort of clear shift even within a single platform. Regardless, it's certainly a useful polemic to get people thinking about new constraints, exigencies, and opportunities.


There's been a lot written about the movement from massive one-size-fits-all platforms to smaller, niche ones, but less research about those alternatives. So I was happy to see Mareike Lisker's presentation on how researchers handle Mastodon data, where each instance sets its own rules about data use. As above, ethical handling of user-generated content is something I've been thinking about a lot lately, and the results of her study were surprising. Basically, most researchers just didn't respect those instance-level rules. Some even published or licensed their datasets.

Julia Ebner presented her work analyzing the language of extremists to understand whether/how people who go on to commit acts of violence talk differently from those who merely express extremist ideas. She looked at manifestos for this project, but it's something that can (and should) be applied to a range of extremist forums. Among the features of language associated with violence are violence justification, existential threat, outgroup othering, and identity fusion (the merging of personal and group identity).

One of the more surprising findings was from Jan Zilinsky's study of partisanship in LLM political advice. Basically, he asked chatbots for political recommendations, testing the conditions under which they would recommend one of the 2024 US presidential candidates. The surprise was that GPT-5 was the outlier in multiple tests. It really didn't want to provide a recommendation, whereas the others, when provided with a list of preferences, would oblige. Across various combinations of statements, Claude and GPT-4o appeared to lean Democratic, while GPT-5 was easily the most Republican-leaning.


Andreu Casas tracked 20k YouTube channels for removals. 2.17% were suspended by the end of the tracked year, typically for vague reasons. Looking superficially at politics, you might conclude that conservative channels were suspended more often, but if you actually analyze the content, toxicity and misinformation are what strongly correlate with removal, not political leaning.

Dario Landwehr had a clever study examining platform announcements about changes to their content moderation policies, asking whether the effects of announced changes are visible in practice. The metric is raw moderation actions, without a denominator (so no percentage change can be computed), but there's a positive association between shifts in moderation activity and announcements around the dates of the announcements (less so thereafter). The most significant shifts, though, take place without any associated announcement.

For my part, I gave a short talk on "The Quotidian Web," which is what we've come to call part of our long-term research project on YouTube and TikTok at the Initiative for Digital Public Infrastructure. It's a mixed-methods approach to studying large video platforms and the everyday uses that wind up getting obscured by infrastructures and interfaces tuned for the logics of platform capitalism. Ethan Zuckerman and I have a paper pending that lays it all out -- so hopefully more on that soon.


Additional brief notes:

  • People who talked to a chatbot trained to reduce affective polarization had their animosity reduced by about 2% on average. Effective strategies: asking people to engage in self-reflection, emphasizing shared values, acknowledging/mirroring feelings. Least effective: providing evidence/statistics (though the chatbot struggled to do that concretely anyway). (Thursgood, Mosleh, Voelkel, & Rand).
  • Jasmin Riedl's presentation on digital sexualized violence against women politicians during the 2025 German federal election highlighted a shortcoming of DSA-mandated transparency data: it's insufficient for real-time tracking/analysis. So her lab pays for access to the Twitter API (side note: first time I've heard that in some time!).
  • Counterspeech by celebrities may be effective in reducing the posting and reposting of hate content, with effects lasting three months (Eaman Jahani).
  • Hans Hanley built some tools to study narrative networks across news outlets. Basically, the tools determine whether multiple stories are talking about the same thing (similar, it seems, to what e.g. GDELT does?), clustering related stories and describing each cluster. He used Media Cloud for source discovery but not for analysis/scraping.
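Hanley's actual pipeline wasn't described in detail in the talk, but the core operation -- deciding whether two stories from different outlets cover the same event, then grouping them -- can be sketched with a toy example. The Jaccard-over-tokens similarity and the 0.3 threshold below are my own illustrative assumptions (real systems typically use embeddings), not his method:

```python
# Toy cross-outlet story matching: group headlines whose token sets
# overlap enough. The similarity measure and threshold are assumptions
# for illustration, not Hanley's actual approach.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two texts, from 0.0 to 1.0."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_stories(headlines: list[str], threshold: float = 0.3) -> list[list[str]]:
    """Greedy clustering: attach each headline to the first cluster
    whose seed headline is similar enough, else start a new cluster."""
    clusters: list[list[str]] = []
    for h in headlines:
        for c in clusters:
            if jaccard(h, c[0]) >= threshold:
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters

headlines = [
    "Senate passes sweeping climate bill",
    "Climate bill passes Senate after late vote",
    "Local team wins championship in overtime",
]
print(cluster_stories(headlines))  # two clusters: the climate pair, the sports story
```

A production version would swap the token overlap for sentence embeddings and a proper clustering algorithm, but the shape of the problem -- pairwise similarity plus grouping -- stays the same.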