The Supportive Role of Tech Platforms in Disease Outbreaks
Saskia V. Popescu
Saskia V. Popescu, PhD, MPH, MA, CIC, is a hospital epidemiologist and infection preventionist. In that role, she has performed infectious disease surveillance and worked on preparedness and Ebola-response practices. She holds a doctorate in Biodefense from George Mason University, where her research focused on the role of infection prevention in facilitating global health security efforts. She is certified in Infection Control and has worked in both pediatric and adult acute care facilities.
Vaccine-preventable disease outbreaks are on the rise, and social media has a responsibility to help rein them in
Measles has been making a comeback in recent years and, with the growth of the anti-vaccine movement, it’s poised to become even more common. With outbreaks ongoing and 159 confirmed cases reported across 10 US states since the start of 2019, much of the attention has been directed toward vaccine exemptions. Many states are looking to rein in personal- or philosophical-belief exemptions from vaccination for school-age children, which 17 states currently allow. In fact, the debate on vaccine exemptions has become increasingly partisan.
Fueling the debate is the anti-vaccine (or anti-vaxxer) movement on social media platforms like Facebook and Twitter. As these platforms have increased in popularity, so, too, has the ability to spread misinformation related to health care. It’s become a real problem, and now many are calling for the tech industry to own its role.
For public health proponents, it can be frustrating and exhausting trying to correct the misinformation in these anti-vaxxer posts. A recent investigation by The Guardian found that even neutral search terms (think “vaccination” or “immunizations”) on social media yield a startling amount of anti-vaccine content on both Facebook and YouTube.
But the road to changing social media algorithms is tough, and many tech platforms are leery of doing so. Megan Garcia, senior fellow at New America and director of growth for New America's National Network, recently drew attention to this in an article, noting that “platforms have issued responses on the spectrum from all-out content banning (Pinterest) to taking modest steps to address unscientific information on vaccines (YouTube) to treading water while they examine the issue (Facebook). Rep. Adam Schiff, a Democrat from California, sent a letter earlier this month to Facebook and Google expressing his concern that companies are ‘surfacing and recommending’ anti-vaccination content. Amazon, when confronted by CNN with the prevalence of anti-vaccination content, referred reporters to its content guidelines page, which says it provides customers with a ‘variety of viewpoints’ but reserves the right ‘not to sell certain content, such as pornography and other inappropriate content.’"
The rise of these vaccine-preventable diseases in the age of social media and tech has revealed a very real relationship between online misinformation and disease spread that needs to be addressed. Garcia proposed several options for managing this rather novel situation—from an international collaboration between the World Health Organization and the tech industry to share best practices and help provide better information, to companies owning the responsibility of moderating their own content, as Facebook has done with terrorist propaganda. Another potential option is to use artificial intelligence and human screening practices to identify unscientific content, much as platforms already screen for malware.
“The public health/science community is well positioned to help individual tech companies or any consortia that develop determine which research has been deemed credible and which has been debunked. Some of this is apparent already via organizations like the [US Centers for Disease Control and Prevention], but it always helps to have a credible third party reaffirm sound science,” Garcia told Contagion® regarding how the public health and science communities can support better information-sharing. Regarding the role of politics and guiding how tech and private companies might make these larger decisions, she noted that “technology companies are operating in an environment in which they have tried to stay out of politics, but have been found to be wittingly and unwittingly allowing users to post content that has been shown to have an impact on voters. This is clear, for example, with the use of Russian bots to spread divisive messages in the 2016 US campaign for president. In this complicated environment, any issue that has political interest is profoundly difficult for technology companies to address without being labeled political. However, I make the case that during times of public health emergencies, tech companies have a responsibility to respond in a measured way.”
Garcia notes that options such as those she suggested become viable if the partisanship within the vaccine debate can be set aside, but she also points out that “large technology companies risk being labeled partisan whenever they choose to moderate any content, but that has not stopped them from rightly creating programs to limit terrorist propaganda and child pornography. They should be equally empowered to take steps to limit public health emergencies, especially when they primarily impact children.”