Social-Media Platforms Agree to New European Rules on Online Posts
EU's new code of practice on disinformation aims to prevent advertising from appearing alongside posts deemed to be intentionally false or misleading
Facebook owner Meta Platforms Inc., Twitter Inc. and other social-media platforms have agreed to abide by tougher European Union standards for policing online postings, offering a preview of the type of rules big tech companies will face under a coming digital-content law.
The EU's code of practice on disinformation, unveiled on Thursday, replaces an earlier, voluntary set of guidelines for addressing online content that officials view as deliberately false or misleading.
Officials said they intend to make portions of the new code obligatory for major platforms under the new law, the Digital Services Act.
Under the new code -- agreed to by an array of companies also including ByteDance Ltd.'s TikTok and Alphabet Inc.'s Google -- social-media platforms will be expected to take steps to prevent advertising from appearing alongside what policy makers describe as intentionally false or misleading information.
Platforms will also be expected to provide users with more tools for identifying such content online.
Platforms that have volunteered to abide by the new code before elements of it become mandatory will be expected to submit, early next year, a first report explaining how they have implemented it.
The new code of practice is part of a broader effort by the EU to rein in the power of large technology companies, ranging from how they handle user data to how they treat competitors to what they do with potentially harmful content.
Earlier this year, the EU agreed on a separate law called the Digital Markets Act, which imposes fairness obligations on a handful of big tech platforms, backed by the potential to levy large fines.
Digital content has been an area of particular focus -- and debate. Europe is seeking to take a leading role in addressing what policy makers say is a deluge of fake information on topics ranging from Covid-19 to the war in Ukraine, which they add can be amplified through social-media platforms.
But defining harmful falsehoods, and deciding what to do about them, is difficult.
Some EU officials expressed concern in 2021 about the suspension of then-U.S. President Donald Trump from platforms including Twitter and Facebook.
Officials say such issues are addressed in the new Digital Services Act, which requires companies to have robust appeals mechanisms to challenge content removals.
Representatives from several major tech companies said they welcomed the EU's new standards.
Meta, which also operates Instagram, said the company would continue to use research and technology to combat the spread of false information. Twitter said it remained committed to tackling the issue, including through the EU's new code.
Google called the code an important instrument in fighting against deliberately false and misleading information.
"The global pandemic and the war in Ukraine have shown that people need accurate information more than ever and we remain committed to making the Code of Practice a success," a Google spokesman said.
A TikTok representative said the company participated in drafting the new code and would continue its efforts "to combat disinformation and promote authentic online experiences for our communities."
EU officials said the code of standards would be linked to the bloc's new Digital Services Act, which European lawmakers and member states agreed to earlier this year.
The law, which could take effect for the largest online platforms as early as next year, sets out new rules for removing illegal content. It will also require the largest social-media platforms to conduct risk assessments on content that regulators view as potentially harmful.
The largest platforms -- defined as those with more than 45 million users in the EU -- that repeatedly break the code and fail to properly address risks could face fines of up to 6% of their global annual revenue once the new law comes into effect, officials said.
"We now have very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU in all countries and in all its languages," said Vera Jourova, the EU's vice president for values and transparency.
One of the aims of the new code of practice is to limit financial incentives for deliberately spreading false information by making it harder for those who spread the material to profit from related advertising revenues, the EU said.
Companies are also expected to show what they are doing to tackle fake accounts and to provide users with tools to recognize and report deliberately false or misleading information.
A new task force that includes civil society groups and regulators will act as a watchdog and evaluate companies' compliance.