As someone born after the turn of the millennium, I am, if nothing else, a product of the social media age. Whether it was having an email account from the time I turned 8, lying about my age to make a Facebook account at 10, or being completely addicted to Instagram and Snapchat by middle school, social media has been ever-present throughout my life. And while Millennials and older generations did have social media to some extent, I think people in and around my generation were the first to have social media be so integral to their lives from such a young age. I mean, everyone I knew had a smartphone by the start of middle school, and even those who didn't somehow managed to have an Instagram. But being the first generation to grow up like this wasn't without its hiccups: especially early on, older people didn't understand how significant social media was, which left a lot of room for cyberbullying to fester. People have since started to take notice and act, but even with an entire generation having grown up with and experienced this, no one really knows what to do.
Now, you may at this point be asking yourself: what do my childhood and cyberbullying have to do with disability on social media? Well, a few weeks ago it came out that the popular social media app TikTok was censoring and "shadow banning" people with disabilities. On its face, this action in and of itself is horrid and inexcusable, but what I found especially interesting was TikTok's justification for doing so. They claimed it would "protect users with a high risk of bullying," and while ByteDance (TikTok's parent company) has since come out and apologized, the fact remains that they, likely in earnest, believed they were doing more good than harm in "protecting" the users they censored.
More than anything else, I think this gets at one of the most interesting dichotomies in our society today, one that goes far beyond the world of disabled users on TikTok: should social media be a free and unrestricted place for us to share our lives, or do platforms (and now, increasingly, governments) have a responsibility to protect us from ourselves online, so to speak? It's not an easy question. On a platform like TikTok, it's impossible to gauge what's best for a user. By posting a video to a public platform like that, you're inherently putting yourself out there for the world. And for a lot of disabled people, that can be empowering: knowing that people out there see you and value you. But at the same time, the internet is a cruel and unforgiving place, where even the most benign thing can spiral into hate and controversy. This goes beyond disability; something as simple as disliking a genre of music or a musician can be enough to get you targeted by a mob of unforgiving fans.
With this ban, TikTok clearly made an unacceptable choice. Many people with experience receiving hate online (myself included, on occasion) know what they're getting into and have judged for themselves that posting the content is worth it. But others may have no idea, and even with the best comment filtering and moderation, there's still a good chance that a post that blows up will draw responses ranging from personal insults to threats of violence or assault.
In an ideal world, TikTok wouldn't have to police content like this because everyone would be perfectly empathetic to everyone else; in a slightly less ideal world, it could carefully review every post and comment for hate or discrimination. But given how fast these platforms grow, both seem practically impossible. Even thinking about content moderation might remind you of recent articles about content moderators, covering everything from horrible working conditions and PTSD-like trauma from seeing hate all day to gross mishandling of sensitive information.
So while in the long term we should look toward solutions that don't rely on these social media companies making the right decision, for now we can't expect our government (in the US) to step in either. Social media companies have proven time and time again that when we ask them to police content, they decisively fail. Whether it's Facebook's gross mishandling of fake news or YouTube's struggle to keep extremist content off its platform, we have seen failure after failure. I don't mean to say social media companies should stop trying to remove harassment and grossly offensive content, but their incompetence at any more nuanced content moderation shows they cannot be allowed to make choices on behalf of users.
More than anything, social media companies need to stop trying to make these decisions for us and instead give users more control over these choices. TikTok and similar sites might offer a private profile, but many people want to show off their content, so if TikTok really believes some of these public profiles will be targeted for harassment, it can provide tools that let users choose how their content is shared. Anything less strips away our dignity and autonomy. And when TikTok makes those choices for users along lines of social and political identity, it is not only infringing on some of our essential human rights but doing so in racist, sexist, ableist, and otherwise discriminatory ways.
And while I'll leave a fuller discussion of the actual implications of such discrimination in cases like TikTok's for another time, I think it's not much of a stretch to say we should expect more than this of the platforms that are so key to our everyday lives. I grew up on social media like Facebook at a time when the first seeds of these larger challenges were being planted, and it was already near unbearable for many. So if we want the next generation of social media to do better by us, we must expect better from them, and if the government won't step in to regulate, then we as individuals must keep watch for any attempts by platforms to undermine us.