Twitter Is Testing A Self-Edit Function So You Can Take Back The Mean Stuff You Said
Back when the world was young (January 2020), Twitter CEO Jack Dorsey shut down users’ requests for an edit button. In a video interview with Wired, a Twitter user asked, “[C]an we get that edit button in 2020?”
And Dorsey replied, “No.”
The CEO explained how Twitter started as an SMS service and the company wanted to keep this vibe. “When you send a text, you can't really take it back,” he said. And though Twitter has considered various workarounds, including a one-minute window for edits, Dorsey declared that “we’ll probably never do it.”
And thus you’ll have to live with the ghosts of your angry, stupid, or misspelled tweets for the rest of your life.
But now that the world is burning (May 2020), it seems Dorsey and his team have had a change of heart. Recently, Twitter began testing a self-edit option with select iOS users, announcing the experiment in, of course, a tweet from its Twitter Support account:
“When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.” — Twitter Support (@TwitterSupport), May 5, 2020
Think of the prompt like an invisible angel or your disembodied conscience or your Mom, asking you: “Are you sure you want to say that to Chad? I know Chad is a Chad, but maybe you can ignore the dumb sh*t he posted.”
And then you’ll say, “You’re right Twitter! I will change my hurtful words. Instead of, ‘Suck it, Chad,’ I will say, ‘I will pray for your soul, Chad.’”
The reminder to review your tweet appears when Twitter’s artificial intelligence and machine learning detect harmful, abusive, or hateful language. The social media giant will probably lean on its Hateful Conduct Policy to identify questionable content; that policy covers violent threats, wishing harm on someone, inciting fear, and more.
As the world relies on social media, especially Twitter, to sort out the many points of view about how to get ourselves out of this mess, a little time to rethink what you say may help keep the conversation peaceful and keep you sane.