
Mark Zuckerberg’s write-up defends Facebook’s murky data-sharing practices: Report


Mark Zuckerberg’s latest attempt to explain Facebook’s data-sharing practices is notable for its omissions as well as what it plays up and plays down.

In a Wall Street Journal op-ed published 24 January and titled “The Facts About Facebook,” the CEO doubles down on previous talking points while leaving out, for example, a potential Federal Trade Commission investigation into the company’s privacy practices.

Here’s a look at Zuckerberg’s claims in the op-ed:

Do you really trust Facebook to safely handle your privacy?

ZUCKERBERG: “We don’t sell people’s data.” — 24 January 2019.

THE FACTS: Sure, Facebook doesn’t technically “sell” your information. Instead, it rents it out, gives it away and sometimes just doesn’t know how to protect it, as we’ve seen with Cambridge Analytica and other mishaps.

And while Facebook doesn’t sell user data directly to third parties, it makes money from the information. Advertisers choose the types of users they want to reach, based on their location, age and even their political leanings and presumed ethnicity. Facebook identifies which users fit the criteria and shows those people the ads. So technically the information stays with Facebook, but it’s used to do the advertiser’s bidding.
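To make the distinction concrete, here is a minimal sketch, in Python, of how this kind of targeting can keep data in-house. Everything in it (the profile fields, the matching function, the example users) is a hypothetical illustration, not Facebook’s actual system: the advertiser submits criteria, the platform resolves them to users entirely server-side, and only the ad impressions go out.

```python
# Hypothetical sketch of "renting out" data: the advertiser supplies
# criteria, the platform matches internally, and the raw profile data
# never leaves the platform. Not Facebook's real system.
from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    location: str
    age: int
    interests: set[str]

# Assumed internal profile store; advertisers never read from it directly.
PROFILES = [
    User(1, "Mumbai", 34, {"politics", "cricket"}),
    User(2, "Delhi", 22, {"fashion"}),
]

def match_audience(criteria: dict) -> list[int]:
    """Resolve advertiser criteria to user IDs, entirely server-side."""
    return [
        u.user_id
        for u in PROFILES
        if u.location == criteria["location"]
        and u.age >= criteria["min_age"]
        and criteria["interest"] in u.interests
    ]

# The advertiser only specifies whom to reach and pays for impressions;
# the information stays with the platform but does the advertiser's bidding.
audience = match_audience({"location": "Mumbai", "min_age": 18, "interest": "politics"})
for uid in audience:
    print(f"show ad to user {uid}")
```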

And as the Cambridge Analytica scandal revealed, Facebook has been sharing data with third parties. In that case, a political data-mining firm, Cambridge Analytica, managed to get data on as many as 87 million Facebook users through a personality-quiz app that was purportedly a research tool. With apps, Facebook isn’t selling data — it’s giving the data to makers of apps for free.
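The app platform works differently, and a contrasting sketch shows the gap. The function and field names below are assumptions for illustration, not Facebook’s real API: once an app is granted permissions, copies of profile fields leave the platform for the developer’s own servers, where Facebook can no longer control them.

```python
# Hypothetical sketch of the app-platform model: requested profile
# fields are copied out to the app developer, free of charge.
def grant_app_access(user: dict, requested_fields: list[str]) -> dict:
    """Return copies of the requested profile fields to the app."""
    return {field: user[field] for field in requested_fields if field in user}

user_profile = {"name": "A. User", "age": 34, "likes": ["politics"], "friends": [2, 3]}

# A quiz app asks for profile data; what it receives now lives outside
# the platform -- the gap Cambridge Analytica exploited.
exported = grant_app_access(user_profile, ["name", "likes", "friends"])
print(exported)
```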

ZUCKERBERG: “People consistently tell us that if they’re going to see ads, they want them to be relevant.”

THE FACTS: Zuckerberg doesn’t say how the question was posed to people or how the user surveys were conducted.

He does say that to comply with new European data rules, Facebook had asked users for permission to use data to improve ads. In such cases, he says, “the vast majority agreed because they prefer more relevant ads.”

But framing the issue as one of relevance to users glosses over Facebook’s business model of allowing companies to target advertisements based on people’s information.

In a recent survey of US Facebook users, the Pew Research Center found that more than half of users are “not comfortable” with Facebook compiling information about their interests for ad targeting. In a 2012 survey, Pew found that about two-thirds of internet users “disapprove of search engines and websites tracking their online behaviour in order to aim targeted ads at them.”

ZUCKERBERG: “Another question is whether we leave harmful or divisive content up because it drives engagement. We don’t. … The only reason bad content remains is that the people and artificial-intelligence systems we use to review it are not perfect.”

THE FACTS: Facebook does have policies against clearly defined harmful content such as hate speech, abuse and incitement to violence. It’s true that Facebook’s human and AI review systems don’t catch all of it.

But there are also gray areas: posts that are divisive but don’t run afoul of anti-harassment policies, as well as news stories that aren’t outright fake but are misleading. Such posts typically remain available on Facebook.
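Why such posts survive review follows from how a rules-plus-classifier pipeline behaves: explicit policy violations are removed, high-confidence machine flags are removed, and everything else stays up. A minimal sketch, with invented terms, thresholds and scores rather than anything Facebook has disclosed:

```python
# Hypothetical moderation pipeline showing why gray-area content stays up.
BANNED_TERMS = {"slur_example"}   # stand-in for clearly defined policy violations
REMOVE_THRESHOLD = 0.9            # assumed AI confidence cutoff for removal

def review(post: str, ai_harm_score: float) -> str:
    if any(term in post for term in BANNED_TERMS):
        return "removed: explicit policy violation"
    if ai_harm_score >= REMOVE_THRESHOLD:
        return "removed: flagged by AI with high confidence"
    return "stays up"             # divisive-but-compliant content lands here

# A misleading-but-not-fake story scores below the cutoff and remains.
print(review("misleading but technically accurate story", ai_harm_score=0.55))
```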
