Facebook’s ad system seems to discriminate by race, research shows


Research carried out by a team led by two computer scientists, Muhammad Ali and Piotr Sapiezynski of Northeastern University, has concluded that Facebook’s own systems are influenced by the race and gender of its users when presenting them with ads.

According to The Economist, the research was published yesterday by researchers from Northeastern University in Boston, the University of Southern California and Upturn, a Washington-based advocacy group. Though the research has not yet been through a peer-review process, six experts contacted by The Economist for comment said it appeared sound.

Last week, the American government sued Facebook for allowing advertisers to exclude whole categories of people from seeing ads for housing—couples with children, non-Americans, non-Christians, disabled people, Hispanics, and so on. The US Department of Housing and Urban Development (HUD) said this violated the Fair Housing Act, which bans discrimination against certain “protected” groups. Messrs Ali and Sapiezynski’s research appears to add weight to HUD’s claim.

The Economist reported that Facebook has tried to clean up its act, shutting down tools that allowed advertisers to target Facebook users based on age, gender, and zip code, but HUD is nonetheless seeking “appropriate relief” for Facebook’s past actions. HUD’s lawsuit also accused Facebook itself of discriminating against minorities through the algorithms it uses to run its advertising business. These are the same algorithms Facebook uses to maximise click-throughs and views, and therefore revenue.

Messrs Sapiezynski and Ali tested Facebook’s systems by paying for advertisements and observing to whom they were delivered. They ran hundreds of pairs of ads, each pair identical in all but one characteristic. They found, for instance, that an ad with the same image was delivered to fewer black people if it claimed to refer to a property for sale rather than one for rent.
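To make the method concrete, the sketch below shows the kind of comparison such a paired experiment involves: two ads differing in a single characteristic are run, and the demographic breakdown of the audience each reached is compared. The impression logs, group labels, and numbers here are invented for illustration; the researchers’ actual data and tooling are not reproduced here.

```python
# Hypothetical sketch of a paired-ad comparison: two otherwise identical ads
# are run, then the demographic make-up of each delivered audience is compared.
# All data below is invented for illustration.
from collections import Counter

def audience_share(impressions, group):
    """Return the fraction of logged impressions delivered to `group`."""
    counts = Counter(impressions)
    return counts[group] / sum(counts.values())

# Toy impression logs (illustrative numbers only).
for_sale_ad = ["white"] * 850 + ["black"] * 150   # "property for sale" variant
for_rent_ad = ["white"] * 730 + ["black"] * 270   # "property for rent" variant

print(f"for-sale ad, share delivered to black users: {audience_share(for_sale_ad, 'black'):.0%}")
print(f"for-rent ad, share delivered to black users: {audience_share(for_rent_ad, 'black'):.0%}")
```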


They also show that the race of people depicted in images affected which groups were more likely to see the ads. An ad for cheap houses for sale, which depicted white families, was delivered to an audience that was 85% white. An identical ad that contained pictures of black families was served to an audience comprising around 73% white users. This suggests that fewer black people saw ads for cheap or affordable housing when those ads used pictures of white people.

The researchers also found a disparity based on gender: ads for supermarket-attendant and janitorial jobs tended to be delivered to women, whereas ads for lumberjack jobs were more likely to be delivered to men.

“Even a well-meaning advertiser might end up reaching a mostly white and/or mostly male audience,” said Mr Sapiezynski, summing up his research. “That’s because Facebook’s opaque algorithms, trained on historically biased data, predict that those people will be most interested.”

The research offers compelling evidence that Facebook is using “machine vision”, whereby powerful computers scan images and recognise what they depict. This is something that has long been assumed but never proven. The researchers established the use of machine vision by changing the transparency of the images they used in their ads, so that they were visible to machines but not to humans. Otherwise identical ads with different pictures of black and white families were still routed to different groups of people.
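As an illustration of the transparency trick this paragraph describes, the sketch below blends an image onto a white background at very low opacity, so it looks blank to a person while its pixel values still carry the original picture for a machine-vision classifier. The 2% opacity and the file names are assumptions for illustration, not the researchers’ actual parameters or code.

```python
# Illustrative sketch (not the researchers' code) of making an image nearly
# invisible to humans while keeping its pixel data readable by machine vision:
# blend it onto a plain white canvas at very low opacity.
from PIL import Image  # requires the Pillow package

def make_near_invisible(src_path, dst_path, alpha=0.02):
    """Blend the image at `src_path` onto white at `alpha` opacity.

    At alpha=0.02 (an assumed value) the result looks blank to a person,
    but the faint pixel differences still encode the original image.
    """
    img = Image.open(src_path).convert("RGB")
    white = Image.new("RGB", img.size, (255, 255, 255))
    Image.blend(white, img, alpha).save(dst_path)

# Hypothetical file names, for illustration only.
make_near_invisible("white_family.jpg", "white_family_faint.jpg")
make_near_invisible("black_family.jpg", "black_family_faint.jpg")
```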

Advertising relies to a large extent on trying to reach particular groups of people. Sellers of luxury watches want to sell to rich people, for instance, who are more likely to be white than black. But the ability of algorithms to reach the intended audience by sifting vast amounts of personal data is causing growing dismay. This is especially true of Facebook because of its scale relative to traditional media. Moreover, its advertising systems are too complex to be understood at a glance. This makes it harder to draw a clear line between ads that are clearly discriminatory and those that are merely discomfiting.


Christian Sandvig, director of the Center for Ethics, Society, and Computing at the University of Michigan, said the research showed that Facebook is making “drastic, important, and potentially illegal editorial decisions all the time by using algorithmic systems to identify audiences”. Mr Sandvig was not involved in the work.

Facebook appears to accept the findings. In a statement, Elisabeth Diana, a Facebook spokeswoman, said: “We stand against discrimination in any form. We’ve made important changes to our ad-targeting tools and know that this is only a first step. We’ve been looking at our ad-delivery system and have engaged industry leaders, academics, and civil-rights experts on this very topic—and we’re exploring more changes.”

The researchers take pains to point out that they are not making sweeping claims about Facebook’s entire ad-delivery system, given that they monitored its behaviour in only a few situations. There is no suggestion that Facebook designed its systems to discriminate intentionally. But its machine-learning software, in the process of training itself on the data of Facebook’s users in order to tailor ads to their interests, appears to have absorbed some of their prejudices.

The worry extends beyond Facebook to all systems that rely on machine learning, including the majority of digital-content providers. “This paper is telling us that if your parents never went to college, it is quite likely that an algorithm will look at your pattern of clicks and associations and conclude that you are not interested in college. If you’re black, it will decide that you are less interested in buying a house,” says Mr Sandvig.


Technology companies tend to claim that they are shielded from liability for these kinds of harmful effects. Section 230 of the Communications Decency Act states that digital platforms are not responsible for the unlawful behavior of their users. But the research appears to show that Facebook’s own systems are contributing to discrimination.

The results throw doubt on Facebook’s claims to be blind to race, says David Garcia, a researcher at the Complexity Science Hub in Vienna, Austria. “Perhaps there is no table in the Facebook databases called ‘race’, but these results suggest that some race-related discrimination in advertisement is taking place,” he says.

After two years of abysmal public relations for Facebook, the research is another blow to the company. Last month Mark Zuckerberg, Facebook’s boss, tried to take back the initiative by calling for extensive regulation of digital-technology firms. He argued, for instance, that tech firms “shouldn’t make so many important decisions about speech on our own”. But he was noticeably quiet on the matter of Facebook’s advertising model. There, regulation may come sooner than he expects—and probably not in the form he is hoping for.

 

Posted by Oyedeyi Samson with reports from The Economist