For many adults, the most dangerous thing about social media is the potential to accidentally like your former high school crush’s photo (especially if it’s a ways down their Instagram grid). But for children, the risks are far more complicated. Today’s most popular social media platforms — Facebook, YouTube, and Twitter, to name a few — can’t be trusted to protect kids because they weren’t made for children in the first place. Rather, these platforms are meant to increase followers and engagement (read: revenue). And in this race for growth, creating kid-safe content is low on the priority list.
The biggest threats are easy to identify: cyberbullying, inappropriate content, and the sharing of personal information (name, birthday, location, etc.) have obvious negative impacts on kids, teens, and adults alike. Statistics Canada reported that one in five people has been a victim of online abuse, which includes having private photos shared or receiving threats via direct messages. Though young kids aren’t necessarily exposed to this exact activity yet, it’s only a matter of time before they move from Disney’s Club Penguin to Reddit. Many children are also at risk of phishing scams; fraudsters aren’t selective when it comes to ripping people off. And even so-called “positive” online behavior can have consequences: social validation, measured in likes and comments, can fuel anxiety and erode children’s self-esteem. But those are just the things we can see at the user-interface level.
There’s so much lurking behind the scenes that it can be hard to quantify just how complicated the web really is. Even something as basic as a user profile can be an issue: how are kids supposed to tell real profiles from fake ones, and genuine interaction from baiting trolls? If a billion-dollar company like Twitter struggles to identify bots running multiple accounts, what chance does a 10-year-old have? And let’s not forget every social platform’s opaque recommendation algorithms. An innocent desire to watch Dora the Explorer can lead to recommended YouTube videos with NSFW language or violent content. It’s a frustrating reality that some creators go out of their way to trick kids (and parents) with misleading titles, and popular cartoon characters have appeared in thousands of bizarre and inappropriate parody videos.
Avoiding inappropriate content is like playing internet Whack-a-Mole: you never know when something will pop up or where it will come from. Transparency and control are the keys to online safety; we shouldn’t have to question the integrity of what kids are exposed to online. Unfortunately, most “solutions” are reactive, pushing the burden onto users: YouTube, for example, relies on viewers to flag violent or discriminatory content after it has already been seen. Other social platforms similarly expect parents to hunt down and manually adjust privacy controls buried deep within account menus.
What’s more, tech giants have conditioned users to trade privacy for access: many games require social media accounts to earn playtime, free apps are saturated with in-app purchases, and advertisements are repetitive and targeted. Every scrap of data is harvested to create ads that are indistinguishable from genuine user-generated content, especially for kids who are just beginning to explore the online world.
Keeping kids safe on social media is a tricky task. Young kids should only have to worry about connecting, playing, and sharing, while parents should feel confident letting their children experience the best of technology, risk-free.