
Importance of Data Privacy

Yes, your data is used to sell you shoes. But it also may be used to sell you an ideology.

When I tell people I write about data privacy, I usually get something along the lines of these two responses:

“Is Facebook listening to me? I got an ad for parrot food, and the only possible explanation is that Facebook heard my friend tell me about his new pet parrot, because he mentioned that exact brand, which I’d never even heard of before.”

(No, Facebook isn’t.)

Here’s the other:

“I’m sure that’s important to someone, but I don’t have anything to hide. Why should I care about data privacy?”

A ton of personal and granular data is collected about us every day through our phones, computers, cars, homes, televisions, smart speakers — anything that’s connected to the internet, basically, as well as things that aren’t, like credit card purchases and even the information on your driver’s license. We don’t have a lot of control over much of this data collection, and we often don’t realize when or how it’s used. That includes how it may be used to influence us.

Maybe that takes the form of an ad to buy parrot food. But it may also take the form of a recommendation to watch a YouTube video about how globalist world leaders and Hollywood stars are running a pedophile ring that only President Trump can stop.

“Internet platforms like YouTube use AI that delivers personalized recommendations based on thousands of data points they collect about us,” Brandi Geurkink, a senior campaigner at the Mozilla Foundation who is researching YouTube’s recommendation engine, told Recode.

Those data points include your behavior across Google’s other products (Google being YouTube’s parent company), like your Chrome browsing habits, as well as your behavior on YouTube itself: how far you scroll down a page, which videos you click on, what’s in those videos, how much of them you watch. All of that is logged and used to inform increasingly personalized recommendations, which may be served up through autoplay (on by default) before you can click away.

She added: “This AI is optimized to keep you on the platform so that you keep watching ads and YouTube keeps making money. It’s not designed to optimize for your well-being or ‘satisfaction,’ despite what YouTube claims. As a result, research has demonstrated how this system can give people their own private, addictive experience that can easily become filled with conspiracy theories, health misinformation, and political disinformation.”
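To make the dynamic Geurkink describes a bit more concrete, here is a minimal, purely illustrative sketch in Python of what an engagement-optimized ranker can look like: candidate videos are scored by the user’s logged watch time per topic multiplied by how long viewers typically stay on each video, and nothing in the objective measures accuracy or well-being. Every name, field, and weight below is a made-up assumption for illustration, not YouTube’s actual system.

```python
# Toy sketch of engagement-optimized ranking.
# All field names, topics, and weights are hypothetical illustrations,
# not a description of YouTube's real recommendation system.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    topic: str
    avg_watch_seconds: float  # how long viewers typically stay on this video

def score(candidate: Candidate, user_topic_watch_time: dict[str, float]) -> float:
    """Score a candidate by predicted engagement: the user's logged watch
    time for the topic multiplied by the video's typical watch duration."""
    affinity = user_topic_watch_time.get(candidate.topic, 0.0)
    return affinity * candidate.avg_watch_seconds

def recommend(candidates: list[Candidate],
              user_topic_watch_time: dict[str, float],
              n: int = 3) -> list[Candidate]:
    # Rank purely by expected time-on-platform; nothing here rewards
    # truthfulness or user well-being, which is the point of the critique.
    return sorted(candidates,
                  key=lambda c: score(c, user_topic_watch_time),
                  reverse=True)[:n]

if __name__ == "__main__":
    history = {"pets": 120.0, "conspiracy": 900.0, "news": 45.0}
    pool = [
        Candidate("v1", "pets", 200.0),
        Candidate("v2", "conspiracy", 600.0),
        Candidate("v3", "news", 300.0),
    ]
    for c in recommend(pool, history):
        print(c.video_id, c.topic)
```

The point of the toy example is the objective, not the arithmetic: a ranker rewarded only for keeping a particular user watching will surface whatever that user has already spent the most time on, regardless of what it is.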

The real-world harm this can cause became pretty clear on January 6, when hundreds of people stormed the Capitol building to try to overturn the certification of an election they were baselessly convinced Trump had won. This mass delusion was fed by websites that, research has shown, promote and amplify conspiracy theories and election misinformation.

“The algorithmic amplification and recommendation systems that platforms employ spread content that’s evocative over what’s true,” Rep. Anna Eshoo (D-CA) said in a recent statement. “The horrific damage to our democracy wrought on January 6th demonstrated how these social media platforms played a role in radicalizing and emboldening terrorists to attack our Capitol. These American companies must fundamentally rethink algorithmic systems that are at odds with democracy.”

For years, Facebook, Twitter, YouTube, and other platforms have pushed content on their users that their algorithms tell them those users will want to see, based on the data they have about their users. The videos you watch, the Facebook posts and people you interact with, the tweets you respond to, your location — these help build a profile of you, which these platforms’ algorithms then use to serve up even more videos, posts, and tweets to interact with, channels to subscribe to, groups to join, and topics to follow. You’re not looking for that content; it’s looking for you.

This is good for users when it helps them find harmless content they’re already interested in, and for platforms because those users then spend more time on them. It’s not good for users who get radicalized by harmful content, but that’s still good for platforms because those users spend more time on them. It’s their business model, it’s been a very profitable one, and they have no desire to change it — nor are they required to.

“Digital platforms should not be forums to sow chaos and spread misinformation,” Sen. Amy Klobuchar (D-MN), a frequent critic of Big Tech, told Recode. “Studies have shown how social media algorithms push users toward polarized content, allowing companies to capitalize on divisiveness. If personal data is being used to promote division, consumers have a right to know.”

But that right is not a legal one. There is no federal data privacy law, and platforms are notoriously opaque about how their recommendation algorithms work, even as they’ve become increasingly transparent about what user data they collect and have given users some control over it. But these companies have also fought attempts to stop tracking when it’s not on their own terms, or haven’t acted on their own policies forbidding it.

Over the years, lawmakers have introduced bills that address recommendation algorithms, none of which have gone anywhere. Rep. Louie Gohmert (R-TX) tried to remove Section 230 protections from social media companies that used algorithms to recommend (or suppress) content with his “Biased Algorithm Deterrence Act.” A bipartisan group of senators came up with the “Filter Bubble Transparency Act,” which would force platforms to give users “the option to engage with a platform without being manipulated by algorithms driven by user-specific data.” Meanwhile, Reps. Eshoo and Tom Malinowski (D-NJ) plan to reintroduce their “Protecting Americans from Dangerous Algorithms Act,” which would remove Section 230 protections from platforms that amplify hateful or extremist content.

For their part, platforms have made efforts to curb some extremist content and misinformation. But these efforts came only after years of allowing it largely unchecked — and profiting from it — and they have had mixed results. The measures are also reactive and limited; they do nothing to stop or curb newly developing conspiracy theories or misinformation campaigns. Algorithms apparently aren’t as good at rooting out harmful content as they are at spreading it. (Facebook and YouTube did not respond to requests for comment.)

It’s pretty much impossible to stop companies from collecting data about you — even if you don’t use their services, they still have their ways. But you can at least limit how algorithms use it against you. Twitter and Facebook offer reverse-chronological feeds, where tweets and posts from people you follow show up in the order they were posted, rather than giving priority to the content and people the platforms think you’re most interested in. YouTube has an “incognito mode” that it says won’t use your search and watch history to recommend videos. There are also more private browsers that limit data collection and prevent sites from linking you to your past visits or data. Or you can just stop using those services entirely.

And, even with algorithms, there is agency. Just because a conspiracy theory or piece of misinformation makes its way into your timeline or suggested videos doesn’t mean you have to read or watch it, or that you’ll automatically and immediately believe it if you do. The conspiracies might be much easier to find (even when you weren’t looking for them); you still choose whether or not to go down the path they show you. But that path isn’t always obvious. You might think QAnon is stupid, but you will share #SaveTheChildren content. You might not believe in QAnon, but you’ll vote for a Congress member who does. You might not fall down the rabbit hole, but your friends and family will.

Or maybe an algorithm will recommend the wrong thing when you’re at your most desperate and susceptible. Will you never, ever be so vulnerable? Facebook and YouTube know the answer to that better than you do, and they’re willing and able to exploit it. You may have more to hide than you think.
