March 21, 2021
Someone did an experiment where, whenever a user made a personal attack, the social media platform popped up a prompt saying: "Your comment may contain content that could harm others. Do you really want to post it?" After seeing this prompt, many people chose not to post.
Another scholar created several bot accounts on Twitter. They automatically searched for tweets containing the word "nigger" and used certain criteria to determine whether a tweet was posted by a white person and whether it was abusive toward black people. If it was, the bot would post a reply saying, "Hey, remember: when you harass people with that kind of language, you're hurting real people." He found that the prompting actually worked to some extent; some people stopped posting such racist, hateful content after being reminded.
Why are so many people angry on social media? (pt. 7)