IESE Insight
Social media are dividing us. Here’s how we can fight back
These are the business and policy choices to help make social media healthier for democracy.
In 2010, I had the privilege of going to Stanford and Berkeley in California for a year, and it was a time when social media platforms like Facebook and Twitter were taking off. They were fun, popular and inclusive — everybody was using them, especially social activists. The pro-democracy movements that rippled across North Africa and the Middle East in 2010 and 2011 were famously facilitated by social media, earning the Arab Spring the nickname of the Twitter Revolution. Much of my early research on social media was on positive developments like these.
In 2026, the picture is rather different. Social media platforms, once hailed as champions of democracy and dialogue, have become commodified spaces in which their business models incentivize hate speech, misinformation, polarization and the political fragmentation of society, benefiting corporate and political elites while eroding democracy.
Not everyone agrees. The Journal of Management Studies recently invited me and colleagues to debate whether social media platforms were indeed a threat to democracy. My view is that they are, for the same reasons that others argue they are not: the assumption that the market will sort things out and that more technology will solve the problem. In both cases, I believe the market and technological perspectives are insufficient to protect democracy. Here’s why — and what managers and policymakers can do to address the threats.
Market, technological and political perspectives on social media
Social media have been primarily viewed through two lenses, both central to democracy:
- the market perspective, where social media serve as a “marketplace of ideas” — a concept put forward by the U.S. Supreme Court in upholding constitutionally protected free speech, based on the principle that, by allowing freewheeling public debate, the best ideas will invariably rise to the top.
- the technological perspective, where decentralized technological solutions provide access to information that informs citizens and enables them to engage freely in the decision-making process, regardless of their backgrounds, making social media open, inclusive and participatory.
Both perspectives have merit in relation to democracy — yet they also have flaws, failing to account for the reality of platforms as they exist today.
Algorithms favor emotional content, undermining quality discourse and turning the “marketplace of ideas” into polarized echo chambers in which the best ideas, let alone truthful ones, have little chance of rising to the top. And though technology may provide a space for debate, the mere existence of openness and participation aren’t enough; good technology must encourage quality argumentation, not just arguments.
I point to a third perspective — the political one — because we cannot ignore the extent to which social media platforms have evolved into political actors today. Their influence and control of the public sphere can actually manipulate the public discourse in ways that undermine the democratic norms and ideals that the market and technological perspectives purport to support.
How the politicization of social media degrades democratic norms
Not all social media are created equal. Mastodon and Bluesky, for example, are not yet in the same league as YouTube, Facebook, Instagram, X and TikTok, which account for the vast majority of daily social media users. It’s among these Big Tech players where I see the biggest threats, precisely because of their massive concentration of power.
Their business models create formidable barriers that don’t easily allow people to leave. Their profitability depends on commoditizing people’s social interactions, and to maximize those, the algorithm rewards and reinforces sensational or extreme content most likely to trigger a reaction. Whistleblowers from social media companies have alleged that managers told them to allow more questionable content into users’ feeds because more outrage drove engagement.
We also cannot ignore how malicious agents have exploited these endemic features to interfere in elections and sway outcomes in their favor, undermining the integrity of free and fair elections.
These platforms are not impartial or agnostic, despite their claims to the contrary. Decisions to reinstate previously banned figures and to cut back on content moderation are political choices. Elon Musk’s takeover of Twitter to align it with overtly conservative causes and to throw his weight behind Donald Trump’s reelection put paid to the view of social media as a neutral town square.
The market perspective may have sufficed 20 years ago, when platforms were simply straightforward places of information exchange and technological infrastructures were indeed a digital means of empowering citizen participation cheaply, easily and quickly. Yet these perspectives, which largely informed early regulation and legislation of the digital space, failed to consider how these platforms would eventually shape the public discourse and collectively inform (or distort) public opinion.
Everyone is entitled to their own private opinions. Yet each of those individual opinions is meant to come together with others to form a collective decision for the common good — that’s the democratic ideal. With today’s social media, these individual opinions are not converging on a collective understanding; rather, they are coalescing around different bubbles in which people hold distinct, atomized opinions. And those opinions aren’t formed from being exposed to many quality inputs (the marketplace of ideas). Social media have removed information depth and analysis, so we hold fast to ideas that we haven’t really thought about in any meaningful, educated way.
There are many decisions that need to be made collectively, but without any shared reality or collective understanding, how do we decide laws, for example? What is acceptable in our neighborhoods? Do we build a road? Do we create a park? Do we plant trees?
Before social media, these kinds of debates took place in public forums, with diverse inputs solicited from across the community; after a legitimately recognized process of deliberation, a collective decision would be made, which was perhaps not some people’s first choice, but the majority accepted it.
This normative-building process, fundamental to deliberative democracy, has been upended by powerful and partisan social media. These companies own the infrastructure via which debates are mediated and opinions formed, giving them an outsized influence in co-creating public opinion and sidestepping democratic processes altogether.
What can we do?
Recognizing social media’s increasingly central political role in democratic processes, my coauthors and I propose several concrete actions for improving platforms’ democratic capacity. Besides wanting platforms to boost transparency, accountability, openness and inclusivity, our suggested actions are aimed at enhancing two things in particular:
- conduciveness to argumentation: a platform’s capacity to foster productive deliberation and constructive dialogue.
- consequentiality: the capacity to transform the above into public policies for positive societal impact.
Our recommendations apply to managers and policymakers, as well as to individual users of social media.
Urgent actions for managers
Companies that run or design social media platforms obviously bear the biggest responsibility. But all of us are implicated to a greater or lesser extent, whether through advertising or other business dependencies, and should be aware of the political dimension of our social media involvements. This raises more than reputational concerns.
Here are some specific actions:
Ensure effective credential verification systems
Platforms must have better systems of expertise validation, akin to the old Twitter’s blue-check system, which unfortunately has become a paid-for mockery under X. During the COVID-19 pandemic, we saw Facebook verifying medical experts and taking down or flagging false medical claims, though as platforms rely more heavily on AI systems and less on human moderators and validators, their credentialing processes are getting hit or miss. Admittedly, the balance between expertise and widespread public participation is a difficult one to get right. But agreeing on a proper credentialing system, and taking it out of the unknown “black box” in which it exists now, would help.
Encourage constructive rather than reactive engagement
Social media platforms need to privilege affordances that promote constructive engagement. Fundamentally, platforms must redesign their algorithmic architectures to avoid amplifying emotionally inflammatory content. Again, there is a balance to strike between filtering out hate speech and allowing diverse viewpoints. Duet features, which allow side-by-side, back-and-forth interactions, can be better employed to help users engage in actual dialogue, building upon each other’s ideas through conversation, rather than mere reactive forms of engagement.
Implement smarter recommendation algorithms
Recommendation algorithms, like those Amazon uses to suggest other products you might like, need to do more than simply inject random opposing viewpoints into users’ feeds. They must identify and promote content that can bridge different communities, connecting neighboring conversations but in a coherent way and with an awareness of the broader public discourse.
Promote sustained engagement
Platform architectures are currently driven by short-term attention metrics and churn. Trending topics and viral content take precedence over sustained, reasoned discourse around consequential issues critical for a healthy society to function well. Platforms could identify substantive topics of public relevance, and then create spaces and mechanisms that promote extended engagement, such as rewarding users who contribute thoughtfully and meaningfully to the conversation over time, rather than just contributing quick, reductive emojis.
Urgent actions for policymakers
As raised by the previous points, democratic governance involves a series of inherent tensions — between expert and non-expert participation, between diversity and consensus, between centralization and decentralization, between flexibility and stability. It’s a question, not of eliminating these tensions, but of balancing and managing them well for healthy societies. It is good that everyone can freely participate in social media — but is it good that all voices carry equal weight regardless of knowledge, while the status of expertise is actively undermined? Should the voices of influencers, whose authority derives from popularity and charisma, be heard more and get more attention than those of doctors, sociologists, scientists or professors?
The tensions raised by democratizing voices are familiar for public policymakers, governments and similar institutional stakeholders operating in democratic societies. In line with their experience in navigating these issues in other realms of public life, I propose the following:
Change the definition of platforms
Reconceptualizing social media platforms as democratic institutions like any other would require the development of a social contract between platform businesses and diverse stakeholders, beyond shareholders, to safeguard their public service function.
Strengthen regulatory frameworks
While the EU’s Digital Services Act establishes mandatory oversight mechanisms and includes civil society in platform governance, it inadequately addresses algorithmic control. If platforms are reconceptualized as democratic institutions, not just as private businesses focused on maximizing profits for shareholders, then this necessitates different regulatory oversight and governance mechanisms beyond those required for traditional technological infrastructures.
Establish public-service social media
Emulating the model of the BBC or other national public broadcasters would make deliberation, inclusivity and civic engagement integral features, with a clear public-service mandate enshrined in their license to operate. These media could be governed through independent oversight bodies that have statutory powers over both content and algorithmic architectures.
Break them up
How about breaking up Big Tech? This would reduce the concentration of power and the algorithmic amplification of polarizing content. This is not as radical as it sounds. Think of the many public utilities that were privatized, then broken up and returned to public control due to failures in service delivery or negative externalities on society. If we see an industry that is clearly dysfunctional, as many social media platforms are today, then there is a long historical precedent of government intervention to right those wrongs, from Standard Oil in the early 1900s to AT&T in the 1980s. It is ultimately the government’s role and duty to decide on such matters, and if social media platforms are not abiding by certain rules, then it is in the government’s remit to stop platforms from engaging in illegal, harmful or uncompetitive practices, as in the recent Google antitrust case (which is under appeal).
In another recent lawsuit, Google and Meta were found negligent for knowingly engineering social media to be as addictive as possible, to the detriment of young users’ mental health. Some say such verdicts are growing signs that the tide may be turning against social media companies.
Many countries are closely watching Australia’s world-first ban on under-16s accessing social media (including Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X and YouTube) as they consider passing something similar in their own countries. Spain announced its own planned ban in February 2026, aiming to improve on some of the weaknesses detected from the Australian experience related to age verification checks, while also criminalizing the manipulation of algorithms that amplify harmful content.
In 2024, Brazil’s Supreme Court banned X for spreading misinformation about its presidential election, closing the X office and freezing accounts for Elon Musk’s satellite and internet provider Starlink. Despite initially kicking up a fuss, Musk eventually backed down, paid a $5 million fine and agreed to comply with judicial orders before X’s service was allowed to resume. This shows that legal means do exist and can work to enforce social media compliance with democratic norms — if governments are bold enough to exercise their authority in this space.
Actions we can take at the individual level
Government action brings me to my final point: the role of individual action in effecting change. If democratic governance is meant to represent the will of the people, then many of the reforms I suggest begin from citizens demanding more than they are currently getting. Indeed, restricting children’s use of social media comes from grassroots action and letter-writing campaigns that bring such issues to the attention of elected representatives. Whether we impose some or no restrictions depends on the broader social consensus. For that we need:
Citizen literacy
There needs to be more education on the mechanisms of social media platforms and the impact of individuals’ online interactions.
Conscious engagement
Individuals can be much more conscious of their contributions to digital discussions and of their influence on the broader public discourse.
Digital detoxes
In line with the previous point, movements like Digital Minimalism could be encouraged. This is not Luddism but a call to be more intentional and mindful in your personal use of technology and social media platforms, recognizing their addictive capacity and potentially toxic effects.
Lifelong learning
Each of us must continue to develop our digital competencies, particularly in relation to cybersecurity and data protection. There should be many more opportunities, not just in our workplaces but in public libraries, schools, civic centers and other nonprofit community organizations, for everyone to avail themselves of workshops and training programs on digital competencies, especially for marginalized groups, to help us bridge digital divides.
Ultimately, it’s time we reframe our view of social media. These platforms have outgrown their roots as mere communication tools with occasional political spillovers — they are now part of the machinery of democracy itself. As such, we must treat platform design choices as political decisions, subject to the same scrutiny we would give any other publicly responsible body.
Given social media’s role in shaping how societies form opinions, debate issues and come to collective judgments, it’s more important than ever that we ask not whether these digital spaces allow people to come together and say whatever they like, but whether they enhance citizens’ deliberative capacity, facilitate true democratic discourse and strengthen the state of our democracies overall.
MORE INFO: “Social media is a threat for democracy! A political perspective for analyzing and diminishing harm” by Itziar Castello, Elanor Colleoni, Andreas Georg Scherer and Hannah Trittin-Ulbrich is published in the Journal of Management Studies (2025).
The other sides of the debate:
- “Make social media social again: how platform interoperability can fix social media and future-proof democracy” by J.P. Vergne.
- “Are social media platforms a threat to democracy? An ecosystem governance perspective” by Carmelo Cennamo and Jovana Karanovic.
This article is based on a session that Itziar Castello-Molina delivered at IESE Business School in Barcelona, organized by IESE’s Institute for Sustainability Leadership in February 2026.
This article is included in IESE Business School Insight online magazine No. 172 (May-Aug. 2026).
