Facebook Algorithms Continue to Sell Racism… Why?



FACEBOOK ALGORITHM CREATES RACIST GROUP LISTS AND SELLS THEM AS MARKETING TARGETS

Can Facebook claim ignorance? Can it chalk this up to the massive swamp of data that churns through its systems, only to be understood well after the fact? Or perhaps basic decency just keeps getting screwed by the algorithm, like so much else these days. Whatever you think, and whatever Facebook says until it actually opens the doors to its systems and internal oversight, the company keeps making money by selling the wrong things in the wrong ways. Last week ProPublica broke the story that Facebook’s ad-targeting system was offering to sell ads aimed at users it had categorized as “Jew Haters.” To be a little clearer: Facebook’s system identified users who hate Jews and then offered to sell advertisers access to them as a group.

SCENARIO SHOWS THE SAME CATEGORIZATION SYSTEM THAT ALLOWED MICRO-TARGETED MARKETING IN THE PRESIDENTIAL CAMPAIGN, ALL BEHIND A FACEBOOK CURTAIN

Sure. We know what you’re thinking: this has nothing to do with the kind of shenanigans involving the data Facebook just handed over to the Mueller investigation into Russia’s interference in last fall’s national election.

Anyway, as these things are now wont to do, “Jew Haters” became a hot topic on Twitter after the ProPublica piece hit the virtual world, and within a day Facebook announced that “Jew Haters” and similar racist groups that had been ranked within its algorithm had been removed from its advertising queue. Then Facebook continued its pattern of offering a lame, distant, and content-free explanation.


FACEBOOK HIDES BEHIND “THE SYSTEM MADE ME DO IT” APOLOGY, HAL 9000 STARTS QUERYING DAVE

As Facebook explains, the categories were algorithmically determined based on what users themselves put into the Employer and Education fields. Enough people had listed their occupation as racist bile like “Jew Hater,” their employer as “Jew Killing Weekly Magazine,” or their field of study as “Threesome Rape” that Facebook’s algorithm, toothless by design, compiled them into targetable categories.
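To make that failure mode concrete, here is a minimal sketch of how naively aggregating raw self-reported strings into ad categories produces exactly this result. It is not Facebook’s actual pipeline; the field names, threshold, and sample data are illustrative assumptions.

```python
from collections import Counter

# Hypothetical self-reported profile entries (illustrative data only).
profiles = [
    {"employer": "Acme Corp", "field_of_study": "Biology"},
    {"employer": "Jew Killing Weekly Magazine", "field_of_study": "History"},
    {"employer": "Jew Killing Weekly Magazine", "field_of_study": "History"},
]

MIN_AUDIENCE = 2  # hypothetical threshold for a string to become a sellable category


def build_ad_categories(profiles, min_audience=MIN_AUDIENCE):
    """Naively turn raw self-reported strings into targetable ad categories.

    Note what is missing: there is no review step and no check that the
    string is a real employer or field of study, so any phrase typed in by
    enough users becomes a sellable audience.
    """
    counts = Counter()
    for profile in profiles:
        for field in ("employer", "field_of_study"):
            value = profile.get(field, "").strip()
            if value:
                counts[(field, value)] += 1
    return {key: n for key, n in counts.items() if n >= min_audience}


print(build_ad_categories(profiles))
# {('employer', 'Jew Killing Weekly Magazine'): 2, ('field_of_study', 'History'): 2}
```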

FACEBOOK SYSTEM CAN IDENTIFY OFFENSIVE TERMS, YET NONE PRECLUDED FROM MARKETING SCHEME

Facebook’s response is repetitious in emphasizing that users themselves self-reported the data and that it removed the categories as soon as it became aware of them. But claiming ignorance of its own algorithms lets Facebook dodge more obvious questions: What does it tell us about Facebook that Nazis can proudly self-identify on its platform? Why can’t these algorithms determine that words like “rape,” “bitch,” or “kill” aren’t valid occupational terms? Facebook says its AI can detect hate speech from users, so why, seemingly, did Facebook choose not to point that AI at its ad utility?
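Screening the most blatant phrases does not require sophisticated AI. The sketch below is a hypothetical denylist check, not anything Facebook has described, and the term list and function names are assumptions; it simply shows how little code it takes to keep the crudest terms out of a targeting menu.

```python
import re

# Illustrative denylist; a real screen would need far more than keyword matching.
OFFENSIVE_TERMS = {"rape", "bitch", "kill", "hater", "nazi"}


def is_valid_category(name: str) -> bool:
    """Reject ad-category names containing obviously offensive terms."""
    tokens = re.findall(r"[a-z]+", name.lower())
    return not any(token in OFFENSIVE_TERMS for token in tokens)


print(is_valid_category("Jew Hater"))             # False: blocked from the ad menu
print(is_valid_category("Software Engineering"))  # True: allowed
```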

Despite a user base of two billion people, Facebook as a company has very few human faces. There’s COO Sandberg, CEO Zuckerberg, and not many others. So when a company of this size, one this reliant on automation, makes a mistake as huge as embedding anti-Semitism within its revenue schemes, there’s no one to blame. Even the apology is uncredited, with no human contact listed, save for the nameless press@fb.com boilerplate.

FACEBOOK AUTOMATION A BUG, OR CONVENIENT FEATURE TO HIDE BEHIND?

Zuckerberg and his cohorts made algorithmic decision-making the heart of Facebook’s ad-targeting revenue scheme, then enshrouded those systems in a black box. And as Facebook’s user base has grown, so have its blind spots.

Last year, lawyers filed a class-action suit against Facebook over concerns that its ad-targeting scheme violated the Civil Rights Act. In addition to ad targeting based on self-reported data, Facebook also compiled data to place users into categories they may not even be aware of. In October, ProPublica revealed that, based on data like friend groups, location, and likes, Facebook put users into categories analogous to race, each called an “ethnic affinity.”


PREVIOUSLY, FACEBOOK HAL 9000 WAS CATEGORIZING USERS BY “ETHNIC AFFINITY”

Advertisers could then either target or exclude users based on their affinity, a grave concern in a country that outlaws denying people housing and employment based on their race. Facebook ended “ethnic affinity” targeting after the backlash. Unlike with the “Jew Hater” debacle, where Facebook said it didn’t know what its algorithms were doing, here it claimed it couldn’t foresee the disproportionate impact of those algorithms. Call that algorithmic idiocy.
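A minimal sketch, assuming a simplified data model rather than Facebook’s actual ad API, shows why the exclude option is the dangerous half: one flag turns an inferred affinity from a marketing segment into a way of shutting a group out of a housing or job ad entirely.

```python
from dataclasses import dataclass, field


@dataclass
class Campaign:
    name: str
    include_affinities: set = field(default_factory=set)  # only show to these groups
    exclude_affinities: set = field(default_factory=set)  # never show to these groups


def eligible(campaign: Campaign, user_affinities: set) -> bool:
    """Return True if the ad may be shown to a user with these inferred affinities."""
    if campaign.exclude_affinities & user_affinities:
        return False
    if campaign.include_affinities and not (campaign.include_affinities & user_affinities):
        return False
    return True


# A housing ad that quietly excludes one inferred group never reaches them at all.
housing_ad = Campaign("apartment listing", exclude_affinities={"African American (US)"})
print(eligible(housing_ad, {"African American (US)"}))  # False
print(eligible(housing_ad, set()))                      # True
```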

SHOW ME THE MONEY: FACEBOOK’S SYSTEM WILL MARKET TO NAZIS, TO ANYONE AT ALL, AS LONG AS REVENUE IS CREATED AND THE PUBLIC IS KEPT IN THE DARK

Why do Facebook’s algorithms keep abetting racism? The more specific answer is hidden inside Facebook’s black box, but the broader answer may be: it’s profitable. Each user is a potential source of revenue for the company. The more they use the site, the more ads they engage with, the more shareable content they produce, and the more user insight they generate for the social media giant. When users reveal themselves as racist, anti-Semitic, and so on, what obligation does Facebook have to remove them and frustrate its own revenue structure? Does removing or censoring users violate their First Amendment rights?

In both the original ProPublica report and the follow-up from Slate, researchers have called for a public database of Facebook’s ad-targeting categories and a broader de-automation push across the company. At this point, Facebook can no longer deny the sore need for an ethical and moral compass somewhere within its advertising business; the company’s algorithms and its racist and anti-Semitic controversies are linked. It’s time for an enormous paradigm shift toward accountability, out in the open, not another tepid half step from Facebook within the comfort of its black box.
