Social media has become central to how we inform and are informed. Caught between freedom of expression and a lack of accountability, an urgent question arises: how can transparency and rigor be ensured without compromising public debate?

Over the past two decades, social media has evolved from simple platforms for interaction among friends into genuine hubs of information, entertainment, and influence, profoundly transforming how individuals consume and share content.

What began with forums and networks such as Hi5 or Facebook has evolved into an ecosystem dominated by digital giants such as Meta, which owns Facebook, Instagram, and WhatsApp, as well as TikTok, X, and YouTube. These platforms now shape opinions, influence consumer behavior, and, in many cases, have a direct impact on social and political processes. In this context, for millions of people, particularly among younger generations, social media is no longer just a space for online socialization but has largely replaced traditional media as the primary source of information. This rapid growth has brought with it a structural transformation in how information circulates in contemporary society. The centrality of these platforms in everyday informational life gives them a level of power comparable to, and sometimes greater than, that of traditional media outlets.

However, unlike newspapers, radio, and television, which operate under clear legal frameworks and institutional oversight, social media platforms often continue to present themselves as mere technological infrastructures rather than media actors with editorial responsibility. This raises an unavoidable question: if these platforms inform, influence, and impact the public sphere so directly, why do they continue to escape the same level of accountability required of traditional media?

In traditional media, there is a well-established regulatory framework that seeks to balance freedom of expression with social responsibility. In Portugal, for example, entities such as the ERC (Entidade Reguladora para a Comunicação Social, the national media regulator) are tasked with supervising content, ensuring pluralism, combating disinformation, and protecting vulnerable audiences. This framework includes mechanisms such as the right of reply, rules regarding editorial responsibility, limits on misleading advertising, and legal provisions against hate speech, all aimed at ensuring that the media environment operates within minimum standards of rigor, transparency, and accountability. On social media, by contrast, this balance is considerably more fragile. The algorithms responsible for organizing and distributing content often prioritize whatever generates more interaction, more clicks, more shares, and more viewing time, creating incentives for sensationalist, polarizing, or emotionally extreme content to gain greater visibility.

In this context, the discussion around regulation should not be interpreted as an attempt at censorship or a limitation of public debate, but rather as a response to structural problems that have become evident over the past decade. These include the rapid spread of disinformation, the opacity of algorithmic systems that determine visibility, and the absence of clear mechanisms for institutional accountability. Today, false or misleading content can reach millions of people within minutes, without any structure comparable to the editorial responsibility that characterizes traditional media. Unlike a newspaper, which is institutionally accountable for the content it publishes, responsibility on social media is dispersed among individual users, automated systems, and internal policies that are often not transparent.

At the same time, any attempt at regulation must recognize a fundamental principle: social media platforms also function as spaces for individual expression, civic participation, and public debate, which makes user autonomy a central component of their democratic value. Regulating these platforms cannot mean controlling opinions, limiting pluralism, or restricting public debate. The real challenge is to find mechanisms that introduce appropriate levels of transparency, accountability, and contextualization of information without undermining the freedom of expression that defines the digital environment.

Some steps have already been taken in this direction, particularly in the European context. The European Union has recently approved the Digital Services Act, a legislative framework that establishes new obligations for large digital platforms, requiring greater transparency regarding how recommendation algorithms function, more effective mechanisms for reporting illegal content, and stronger measures to protect minors. In addition, this legislation requires platforms to assess and mitigate systemic risks associated with the spread of disinformation or the manipulation of public opinion, formally recognizing the impact these digital infrastructures can have on the functioning of democratic societies.

In Germany, for example, the NetzDG (Network Enforcement Act) requires digital platforms to remove manifestly illegal content, such as hate speech or incitement to violence, within as little as 24 hours of notification, under penalty of fines that can reach tens of millions of euros.

Although these solutions are far from perfect and continue to generate debate among academics and policymakers, they demonstrate that it is possible to introduce rules into the digital space without eliminating freedom of online participation. The path forward will likely involve further developing these types of regulatory mechanisms, adapting traditional media regulation tools to the digital context.

Among the most discussed proposals is the need for greater algorithmic transparency, allowing researchers, regulators, and society at large to understand how certain content is promoted or amplified by platforms. At the same time, there have been calls for the creation of contextualization systems for widely disseminated content, particularly on sensitive topics such as public health, electoral processes, or social conflicts. The aim of these measures would not be to prevent content from being published, but rather to ensure that potentially misleading information is accompanied by factual context or credible sources.

Additionally, a true digital right of reply could be developed, allowing false or misleading information that has achieved significant reach to be corrected proportionally to its visibility, replicating mechanisms that already exist in traditional media. At the same time, platforms could be required to publish regular reports on the criteria guiding content moderation, content promotion, and the overall functioning of their recommendation systems, thereby increasing transparency for users, researchers, and regulatory bodies.

Ultimately, social media platforms are no longer merely private spaces for interaction among individuals. They have become genuine digital public squares with enormous social, economic, and political impact, shaping how information circulates and how opinions are formed in contemporary societies. Ignoring this reality means allowing one of the largest communication systems in history to operate with minimal public scrutiny. Regulating social media as media outlets does not imply limiting freedom of expression, but rather ensuring that such freedom coexists with accountability, transparency, and the protection of the digital public sphere.

André Alves, Marketing Director of CATÓLICA-LISBON