
“In a big setback for internet freedom in Brazil, the country’s supreme court has unilaterally swept away its law roughly paralleling Section 230, and opened the way for platform liability for many kinds of disapproved online speech,” notes a legal scholar at the Cato Institute.
As David Inserra, who studies technology and restrictions on free expression, explains:
Brazil’s Supreme Court finalized its decision to fundamentally undermine Brazil’s liability protections for platforms hosting online speech. The ruling continues a series of decisions by Brazilian courts to act as unaccountable prosecutors, judges, and legislators, who have seized the right to determine what online speech is allowed. And particularly relevant to Americans, it clearly shows the importance of liability protection to resisting censorship and promoting free expression online.
The 8–3 ruling scraps major portions of the Marco Civil da Internet, a law passed by Brazil’s Congress and signed by President Dilma Rousseff in 2014. The law’s core provision generally held that websites and platforms were not liable for user-generated content posted on their sites—a shield against what is commonly known as intermediary liability—with a few exceptions. The Marco Civil was inspired by Section 230 of the US Communications Decency Act.
With this ruling, the Brazilian Supreme Court has decided that the Marco Civil was allowing too much speech the Court considered dangerous. And even though the Brazilian Congress recently debated and rejected changing the Marco Civil, the Court decided that “to protect fundamental rights and democracy,” the laws duly passed by the elected representatives of the people of Brazil need to be replaced by a system that places less value on free speech.
In its ruling, “the Court institutes a broad liability regime that holds online platforms liable” for many kinds of speech, such as “anti-democratic acts” and “hate speech.” The duty to restrict such speech could give ideological interest groups veto power over various viewpoints, because even in the U.S., progressives have labeled facts and conservative views “hate speech” and “disinformation.” For example, the taxpayer-funded Global Disinformation Index classified a factually accurate blog post by a black lawyer as “white supremacy content” and “disinformation” because it pointed out that the black crime rate is higher than the crime rate for other races. Yet the federal Bureau of Justice Statistics has confirmed that very point: homicide rates “for blacks were more than 7 times higher than the rates for whites” between 1976 and 2005, according to its publication Homicide Trends in the United States. A later version of that same publication noted that “Blacks are disproportionately represented as both homicide victims and offenders….The offending rate for blacks (34.4 per 100,000) was almost 8 times higher than the rate for whites (4.5 per 100,000).”
In 2022, Facebook classified as “hate speech” the statement by Congresswoman Marsha Blackburn that “Biological women have no place in women’s sports.”
Progressives have claimed that commonplace views on racial or sexual subjects are “hate speech.” That includes criticism of feminism, affirmative action, homosexuality, or gay marriage, as well as various opinions about how to address sexual harassment or alleged racism in the criminal justice system.
Inserra adds:
At any point in the future, the Court can find that a platform has allowed too much “hate speech” on its platform and hold it liable. The Court also creates a notice and takedown regime for any other unlawful acts or fake accounts. Anyone can report such content to a platform, and the platform must promptly remove it or otherwise be liable.
What this means is that platforms must proactively remove a lot of speech that the court considers most harmful, and they must remove even more categories of “illegal” speech whenever someone reports it to the platforms. There are many problems with this approach. It forces platforms to review millions of posts every day to discover items not against their policies but against the laws of Brazil. Moderators and AI enforcement tools will be expected to be experts in Brazilian law, and because that simply isn’t realistic, companies will be forced to aggressively remove any content that they think might come close to violating the law. Various types of mental health content could be removed for fear of being viewed as supporting self-harm. Content that is critical of Israel or Hamas might be considered hateful speech and therefore removed.
Leftists celebrated on the progressive website Bluesky, saying that censorship is needed to suppress speech they think undermines “human dignity” by hurting their feelings. One wrote, “Here, human dignity is a constitutional right. Whether you are online or not, you have to abide by these principles.”
Both normal people and experts risk running afoul of broad bans on “hate speech.” For example, in 2019, Twitter applied its “rules against hateful conduct” to briefly ban an expert on sexuality for stating in passing that transsexualism is a mental disorder. Twitter did so even though the “bible of psychiatry,” the DSM-5, indicated at the time that transsexualism is a disorder, and the expert had chaired the group that worked on that section of the DSM-5. An expert sharing his professional opinion was thus deemed hate speech.