Expanding on an existing feature that warns users who attempt to retweet content flagged as “misleading information,” Twitter will now issue the same warning when users attempt to like similarly tagged content.
As the 2020 presidential election approached, Twitter announced in September that it planned to launch a number of policies aimed at reducing misinformation around vote totals, committing to put in place “additional warnings and restrictions on Tweets with a misleading information label from US political figures (including candidates and campaign accounts).”
The platform quickly delivered on that promise, flagging one of President Donald Trump’s tweets with a warning that “some or all of the content shared in this Tweet is disputed” just hours before general election voting began.
In a tweet, Twitter said these prompts and others like them helped reduce Quote Tweets of misleading information by 29%, prompting the platform to unveil similar friction designed to slow users’ propensity to “Like” tweets that contain falsehoods.
Prompting users to pause and think before clicking “Like” or “Retweet” is part of a wider range of features Twitter recently unveiled to curb the spread of disinformation. When users try to retweet a tweet containing a link to an article they haven’t read, for example, the site now displays a message encouraging them to, you know, actually read the article before blindly sharing it with their followers.
The decision to add warning tags to “Like” actions was first reported by Jane Manchun Wong, a Hong Kong-based software engineer known for uncovering new features that apps like Twitter, Instagram and TikTok are beta testing by reverse engineering their code.
Although Twitter initially said the new features would be in place “at least” through Election Day, the fact that the company is still launching more of them over three weeks later suggests that a longer-term approach to de-amplifying some content may be in the works.