Consumers want social media companies to take more responsibility for data privacy and misinformation on their platforms


Social media can reveal a lot about individuals, from their location and the people they interact with, to their hobbies, beliefs, political views, and interests. All of this information can be used to build profiles which, in turn, can be used commercially to target advertising and sales at that person.

A survey by Cheetah Digital and Econsultancy found that 52% of respondents considered personalised, targeted advertising on social media sites, based on their recent shopping on other sites, to be “creepy” (1).

With this in mind, and with growing public awareness around privacy, social media platforms and the companies that operate them have come under increasing scrutiny.

How much data social media companies gather, how they package and sell it to third parties, and how Big Tech firms have rapidly expanded into huge entities with large global bases of users and advertisers - these are all valid concerns for consumer privacy.

When platforms use our own data in algorithms to show us more relatable content and ads, it can be obvious to us - sometimes we can even trace it back to a specific web search or conversation we’ve had on the platform…this evidence that a company knows all about us can create a sense of uneasiness that also contributes to people believing that a company is intruding.
— Madeleine Peterson, Trends Researcher at WhistleOut (2)

According to research by WhistleOut, 85% of Americans think at least one of the tech giants is listening to their conversations, and 57% are unsure what those companies do with the data they gather on users (3).

Privacy concerns were at the root of this mistrust: many respondents said they had deleted apps for that reason, and 57% believed the US federal government should ban major tech companies that violate user privacy.

This mistrust is widespread across Big Tech. For example, when WhatsApp changed its privacy policy and required users to accept new terms to continue using the app - including terms covering how it shares data with its parent company, Meta - there was a significant backlash, and many users considered moving their communications to apps like Telegram and Signal (4)(5).

On the flip side, in 2021 Apple’s iOS 14.5 introduced a new feature, App Tracking Transparency, which compels app developers to ask users for permission before tracking their activity across other apps and websites or selling that data to other companies (6). Whilst the change received a lot of blowback from companies and app developers, who argued it would make targeting users harder and more expensive, it was largely welcomed by consumers, who favoured the transparency and the option to opt out of being tracked (7).
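In practice, this works through Apple’s AppTrackingTransparency framework: an app has to trigger the system permission prompt before it can track the user, and the user’s answer is enforced by the operating system. The snippet below is a minimal sketch of that request in Swift (the view controller name and the handling of each status are illustrative assumptions, not Apple’s sample code); the app must also declare an NSUserTrackingUsageDescription message in its Info.plist.

```swift
import AppTrackingTransparency
import UIKit

// Minimal sketch of requesting tracking permission under App Tracking Transparency.
// Class name and print messages are illustrative assumptions.
final class ConsentViewController: UIViewController {

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        // Shows the system prompt the first time it is called; on later calls it
        // simply reports the decision the user already made.
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                print("User allowed tracking across other apps and websites")
            case .denied, .restricted:
                print("Tracking not permitted; fall back to non-personalised ads")
            case .notDetermined:
                print("User has not been asked yet")
            @unknown default:
                break
            }
        }
    }
}
```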

Breaches, scandals, investigations, ties to government surveillance and threatened bans all contribute to the American public’s increasing distrust of Big Tech. With events in 2020 such as the election and the pandemic, users are also paying close attention to how companies are handling not only user data, but the spread of misinformation on a platform.
— Madeleine Peterson, Trends Researcher at WhistleOut (8)

Corporate responsibility around misinformation ties in closely with data privacy, because the spread of misinformation is often rooted in how user data is handled. Data gathered through cookie tracking and search algorithms can be combined with a user’s location, age, gender, and other browsing information to tailor content that not only appeals to the individual but also filters out contrary information, reinforcing confirmation bias. This “filter bubble” means that people get minimal exposure to news and information that does not align with their pre-existing notions and beliefs (9).
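To make that mechanism concrete, the sketch below shows a deliberately simplified, hypothetical version of interest-based ranking in Swift (the types, topic labels, and scoring are illustrative assumptions, not any platform’s actual algorithm): posts are scored purely by how much they overlap with a user’s inferred interests, so content that contradicts those interests naturally sinks to the bottom of the feed.

```swift
// Hypothetical sketch of how ranking by inferred interests creates a "filter bubble".
struct Post {
    let title: String
    let topics: Set<String>          // topics inferred from the post's content
}

struct UserProfile {
    // Topic -> affinity inferred from past clicks, searches, location, etc.
    let topicAffinity: [String: Double]
}

// Score a post by how strongly it overlaps with the user's inferred interests.
func relevanceScore(of post: Post, for user: UserProfile) -> Double {
    post.topics.reduce(0.0) { $0 + (user.topicAffinity[$1] ?? 0.0) }
}

// Rank the feed purely by personal relevance; items with no overlap sink to the bottom.
func personalisedFeed(_ posts: [Post], for user: UserProfile) -> [Post] {
    posts.sorted { relevanceScore(of: $0, for: user) > relevanceScore(of: $1, for: user) }
}

// Example: a user whose history skews towards one political viewpoint.
let user = UserProfile(topicAffinity: ["party_a": 0.9, "football": 0.4])
let feed = personalisedFeed([
    Post(title: "Party A rally draws record crowd", topics: ["party_a"]),
    Post(title: "Party B unveils new policy", topics: ["party_b"]),
    Post(title: "Weekend football results", topics: ["football"])
], for: user)

feed.forEach { print($0.title) }   // Party A content ranks first; Party B content last
```

Real recommendation systems weigh far more signals, but the underlying principle is the same: the more a ranking optimises for predicted engagement with an individual profile, the less that individual sees of anything outside it.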

Consequently, several of the large social media platforms have unwittingly become a breeding ground for thought silos, exacerbating political differences and, in some cases, even radicalising users (10).

The 2018 Cambridge Analytica scandal, in which it emerged that the firm had collected data on tens of millions of Facebook users without their knowledge or consent back in 2015, was a major reputational hit for Meta and affected its consumer base. In the months after the news broke, activity on Facebook dropped by almost 20% and confidence in the company fell by as much as 66% (11)(12). Additionally, Pew Research found that more than 1 in 4 Americans, particularly young people, had deleted the Facebook app from their phones (26% of respondents), while 54% of adults had adjusted their privacy settings (13).

Meta consequently launched a campaign to regain public trust by becoming much more transparent about its data collection, sharing, and protection practices, and by giving consumers more control over how their data is used. Nevertheless, its reputation has arguably still not recovered, a problem only exacerbated by further revelations from the Facebook whistleblower, Frances Haugen, in 2021 (14).

As consumers become more clued up on data rights, they are taking the initiative to seek out companies with good track records of protecting their data and to disengage from those with poor ones.


This is part of a five-part series, “Consumers are moving to services that protect their data and privacy”, which explores consumer attitudes towards data privacy, social media, and video surveillance in an age where technology relies more and more on personal and biometric data.

Click here for:

Part 1: People are realising the benefits of “invisible” methods of authentication for data security

Part 2: Consumers are reaching a “tipping point”: personal data management is a priority - laws and company practices should reflect that

Part 4: Attitudes towards AI: there remains a lack of trust and general awareness from consumers


References: 

  1. https://www.ana.net/miccontent/show/id/aa-consumer-privacy 

  2. https://www.forbes.com/sites/jenniferhicks/2020/10/27/heres-how-2020-created-a-tipping-point-in-trust-and-digital-privacy/?sh=2e020d204fc5 

  3. https://www.whistleout.com/CellPhones/Guides/americans-think-companies-are-spying 

  4. https://www.digitaltrends.com/mobile/is-now-the-time-to-dump-whatsapp/ 

  5. https://www.bbc.co.uk/news/technology-55684595 

  6. https://www.theverge.com/2021/4/27/22405474/apple-app-tracking-transparency-ios-14-5-privacy-update-facebook-data 

  7. https://appleinsider.com/articles/21/07/27/apple-sees-positive-customer-reaction-to-app-tracking-transparency 

  8. https://www.forbes.com/sites/jenniferhicks/2020/10/27/heres-how-2020-created-a-tipping-point-in-trust-and-digital-privacy/?sh=2e020d204fc5 

  9. https://www.forbes.com/sites/jenniferhicks/2020/10/27/heres-how-2020-created-a-tipping-point-in-trust-and-digital-privacy/?sh=2e020d204fc5 

  10. https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR453/RAND_RR453.pdf

  11. https://www.theguardian.com/technology/2019/jun/20/facebook-usage-collapsed-since-scandal-data-shows

  12. https://www.nbcnews.com/business/consumer/trust-facebook-has-dropped-51-percent-cambridge-analytica-scandal-n867011

  13. https://www.forbes.com/sites/jeanbaptiste/2018/09/10/more-than-1-in-4-americans-have-deleted-facebook-survey-reveals/?sh=2f40a5cb4947

  14. https://www.bbc.co.uk/news/live/world-us-canada-58805645/page/2

Images:

Image 1: Shutterstock

Image 2: https://www.article19.org/resources/social-media-councils-consultation/

