British teenager Molly Russell died in 2017 at the age of 14. Before her death, she had searched online for images of suicide and self-harm. That alone is painful enough, but it later emerged that such images had been "recommended" to her by her favourite social media platforms. Molly's Instagram feed was full of them.
Even months after Molly's death, Pinterest was still sending automated emails to her account, its algorithm continuing to recommend fresh self-harm images, among them photographs of cut thighs and cartoons of girls hanging themselves. Molly's father accused Instagram and Pinterest of allowing such graphic images to be posted and of delivering them to Molly's feed, helping her to take her own life.
The harm in "recommending"
Molly's father's public account of this tragedy has fuelled the argument that social media platforms such as Instagram and Pinterest are worsening a "mental health crisis" among young people, and that social media may have helped create a "suicide generation". The suicide rate among British teenagers has doubled in eight years.
Following Molly's death, calls for change have grown louder. UK Health Secretary Matt Hancock, for example, warned that social media companies must purge such content, and that firms that fail to do so could face prosecution. In response to this harsh criticism, Instagram banned "graphic images of self-harm". This goes a step beyond its previous rules, which prohibited only content glorifying self-harm and suicide.
But there is a deeper problem that simple bans cannot address. Social media platforms do not merely host this troubling content; they recommend it to the people most vulnerable to it.
"Recommending" content and merely hosting it are completely different things, and a growing body of academic research supports this distinction. Whether the subject is self-harm, hoaxes, terrorist recruitment, or conspiracy theories, platforms have done more than make such content easy to find: through recommendations, they actively amplify it.
Algorithms that don't judge
Our research team examined how content promoting eating disorders is surfaced as "recommendations" to users of Instagram, Pinterest, and Tumblr. Despite having explicit rules against posts that encourage self-harm, and despite blocking certain hashtags, these platforms' algorithms keep serving such content.
Social media users receive "recommendations" in the name of a personalised, more enjoyable experience.
Search for interior-design ideas and your feed will soon show sample paint swatches and suggest amateur interior designers to follow. The same mechanism applies to eating disorders: the more you search for accounts that promote them, or follow accounts that post self-harm images, the more the platform learns your "interest" and pushes you deeper.
As Molly's father realised, this recommendation system does not judge. It shows users what they appear to like, whether or not they truly want it, and it keeps doing so even when the content violates the platform's own community guidelines.
Whether you are searching for graphic images of self-harm or simply following a user who talks about their depression, the recommendation system fills your feed with "suggestions", and in doing so it reshapes your mental health.
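The feedback loop described above can be illustrated with a toy sketch. This is not any platform's actual code; the tags, posts, and scoring rule are invented for illustration. It shows only the core point: a naive engagement-based recommender ranks content purely by overlap with a user's past interactions, with no notion of whether that content is harmful or helpful.

```python
# Toy sketch of an engagement-driven recommender (hypothetical, not any
# platform's real system). It ranks candidate posts by how many tags they
# share with the user's interaction history -- and never judges the content.
from collections import Counter

def recommend(history, candidates, k=2):
    """Return the k candidate post IDs whose tags best match the tags
    the user has already engaged with."""
    interest = Counter(tag for post in history for tag in post)
    scored = sorted(
        candidates.items(),
        key=lambda item: -sum(interest[t] for t in item[1]),
    )
    return [post_id for post_id, _ in scored[:k]]

# A user who engaged with self-harm-adjacent posts gets more of the same,
# while recovery-oriented or unrelated content is pushed down.
history = [{"sad", "selfharm"}, {"selfharm", "depression"}]
candidates = {
    "post_a": {"selfharm", "depression"},  # harmful, but high overlap
    "post_b": {"recovery", "support"},     # helpful, but low overlap
    "post_c": {"interior", "paint"},       # unrelated
}
print(recommend(history, candidates))  # the harmful post ranks first
```

Nothing in the scoring function asks whether a tag is harmful; "relevance" is just engagement similarity, which is exactly the lack of judgment the text describes.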
The harm bans can do to communities
Content you never particularly wanted to see gets recommended, and keeps multiplying. The Instagram Explore page, the Pinterest home feed, the Tumblr dashboard: each quickly turns a social media account into a funhouse mirror. It doesn't just reflect your state of mind; it magnifies and distorts it.
Of course, if platform bans worked perfectly, only the desirable content social media ought to provide would ever be recommended. Clearly that is not the case, and not for lack of effort. Content moderation is extraordinarily difficult.
The boundary between acceptable and harmful content is always blurry. Reviewers, often inexperienced, must distinguish content that promotes self-harm from content that encourages recovery in just a few seconds. Among the huge volume of new posts uploaded every day, some inevitably slip past moderation. And self-harm is only one symptom of mental illness: even if Instagram keeps its promise to crack down on depictions of self-harm, other troubling content remains untouched.
Moreover, bans are not merely imperfect; they can be harmful in themselves. Many users struggling with self-harm and suicidal thoughts find genuine, practical support online. Social media can give these users a supportive community, valuable advice, relief, and a sense of acceptance.
And these communities sometimes share images that look shocking to outsiders: as proof that someone is suffering, as a scream for help, or as a mark of respect for surviving a brush with death. A blanket ban risks wiping these communities out.
Now is the time for more discussion
The problem is not just about removing harmful content. Platforms need to recognise that what counts as appropriate content varies from person to person; that searching for something yourself is different from being nudged to see more of it; and that what is good for one individual may not be good for everyone.
There is not enough discussion about recommendation systems, perhaps because they are so ubiquitous, or because most users don't understand how they work.
You might snort at Facebook recommending Amazon products you've already bought, or roll your eyes when Spotify decides, after a single James Blunt song, that you must be a fan. But researchers have begun discussing recommendation systems far more seriously.
In her book "Algorithms of Oppression: How Search Engines Reinforce Racism", Safiya Umoja Noble criticises Google's search engine for amplifying racist ideas.
Related article: Google search that creates political conflicts, deep-rooted problems with its algorithms