Twitter announced a new feature on Tuesday that allows users to flag content that may contain misinformation, a scourge that only intensified during the pandemic.

“We are testing a feature that allows you to report seemingly misleading tweets, as you see them,” the social network said from its safety account.

Starting Tuesday, some users in the United States, South Korea, and Australia will see the option to select “It’s misleading” after clicking “Report Tweet.”

Users can then be more specific and mark misleading tweets as potentially containing misinformation about “health,” “politics,” or “other.”

“We are evaluating whether this is an effective approach, so we are starting small,” said the San Francisco-based company.

“We may not take action on every report in the experiment, nor can we respond to each one, but your input will help us identify trends and thereby increase the speed and scale of our broader misinformation work.”

Like Facebook and YouTube, Twitter is often criticized for not doing enough to combat the spread of misinformation.

But the platform does not have the resources of its Silicon Valley neighbors, so it usually relies on experimental techniques that are cheaper than recruiting an army of moderators.


Such efforts have increased as Twitter strengthened its misinformation rules during the COVID-19 pandemic and the US presidential election between Donald Trump and Joe Biden.

For example, in March Twitter started blocking users who had been warned five times for spreading false information about vaccines.

The network began tagging Trump’s tweets with banners warning of misleading content during his 2020 re-election campaign, and the then-president was eventually barred from the site for posting messages that incited violence and discredited the election results.

Moderators are ultimately responsible for determining which content actually violates Twitter’s terms of use, but the network said it hopes to eventually use a system that relies on both human and automated analysis to detect suspicious posts.

Concerns about misinformation surrounding the COVID-19 vaccine have become so rampant that Biden said in July that Facebook and other platforms were “killing” people by allowing false information about the vaccine to spread.

He later walked that statement back, clarifying that it is the false information itself that may harm or even kill those who believe it.