Twitter announced a new feature on Tuesday that allows users to flag content that may contain misinformation, a scourge that only intensified during the pandemic.
“We’re testing a feature that allows you to report Tweets that seem misleading, as you see them,” the social network said from its @TwitterSafety account.
— Twitter Safety (@TwitterSafety) August 17, 2021
Starting Tuesday, some users in the United States, South Korea, and Australia will see an option to select “It’s misleading” after clicking “Report Tweet.”
Users can then be more specific, flagging whether the tweet potentially contains misinformation about “health,” “politics,” or “other” topics.
“We’re evaluating whether this is an effective approach, so we’re starting small,” the San Francisco-based company said.
“We may not take action on, and cannot respond to, every report in the experiment, but your input will help us identify trends and thereby improve the speed and scale of our broader misinformation work.”
Like Facebook and YouTube, Twitter is frequently criticized for not doing enough to combat the spread of misinformation.
But the platform lacks the resources of its Silicon Valley neighbors, so it often relies on experimental techniques that are cheaper than hiring an army of moderators.
Such efforts have increased as Twitter tightened its misinformation rules during the COVID-19 pandemic and the U.S. presidential race between Donald Trump and Joe Biden.
In March, for example, Twitter began banning users who had been warned five times for spreading false information about vaccines.
The network began labeling Trump’s tweets with banners warning of misleading content during his 2020 re-election campaign, and the then-president was eventually banned from the site for posts that incited violence and disparaged the election results.
Concerns about misinformation surrounding the COVID-19 vaccine became so acute that Biden said in July that Facebook and other platforms were “killing people” by allowing false information about the vaccine to spread.
He later walked back the remark, clarifying that it is the false information itself that may harm or even kill those who believe it.