Facebook has revealed it’s going to take steps to reduce the amount of anti-vaccine information users of its social media platform come into contact with. This follows a recent upsurge in the number of measles cases in the US. For a while now, many people have believed vaccines to be potentially harmful rather than a preventive measure. There is some medical evidence that vaccines can have negative side effects, but only in rare cases. However, many believe that vaccines are considerably more dangerous than the current medical evidence suggests. The increase in measles cases is thought to be linked to anti-vaccine posts going viral: parents reading these posts have concluded that vaccines are in fact dangerous and have not had their children vaccinated, which has in turn led to the rise in measles cases.
This raises the issue of social media networks and how much influence they have over the content we read on their platforms. With vaccines, there is a chance, albeit apparently a very small one as mentioned above, that they can do harm. So surely you can understand some parents not wanting to have their children vaccinated? On the other hand, if a lack of vaccination results in an increase in diseases like measles, are social media networks doing good by removing anti-vaccine content and essentially keeping people convinced that vaccines are safe?
Social media networks are all about enabling people around the world to communicate on absolutely anything: bringing people from all walks of life together, getting people’s voices heard and generating conversations. Of course, there’s always the argument that, in the name of free speech, nothing posted on social media should ever be removed or edited. However, in recent years there have been calls for networks such as Twitter and Facebook to start censoring some of the content posted on their platforms. Should these networks just let people post whatever they want, or should they intervene? And if they’re going to start intervening and censoring (or policing) content, which content should get censored, and where does it go from there? These networks have seemingly taken it upon themselves to monitor their content and prevent the spread of information they find harmful, misleading or even fake. But there’s so much information out there that could be seen as harmful, misleading or fake, so aren’t the social media networks setting themselves a huge, nigh-on impossible task?
Ultimately, Facebook may well be doing good by censoring anti-vaccination posts. If fewer people contract measles because of Facebook’s censoring, surely that’s a good thing? But what would happen if one day Facebook censored content that actually turned out to be helpful? Imagine if Facebook’s decision to censor content resulted in large numbers of people falling ill. Would Facebook be held accountable? It’s one of the tricky situations of the social media age: censoring can apparently be helpful, but if you’re going to start censoring content in any way, you have to be extremely careful how you approach it.