Facebook is reviewing its ad targeting system after investigators found they could direct ads to self-described anti-Semites.
Researchers at ProPublica found several ad categories for people who had declared that they “hated” Jews.
Facebook said its algorithms automatically created the categories when analysing user interests.
It said it would trim the number of categories available and check the list before letting advertisers see it.
The ProPublica researchers found the anti-Jewish categories while conducting a larger investigation into the way Facebook targets adverts at users.
To find out whether the classifications were real, ProPublica bought ads that combined the three anti-Semitic categories with several others covering far-right topics.
ProPublica said it had to combine several categories because Facebook would not let it buy adverts aimed at the small number of users who had described themselves as anti-Semitic. One category contained only two Facebook users.
The three adverts it prepared, which promoted ProPublica's own work, were approved and posted to the news feeds of people who had indicated an interest in the right-wing topics.
The data informing Facebook's advertising categories was generated automatically, the ProPublica reporters said, drawn both from content people explicitly shared on the site and from what their activity revealed about them.
In a statement, Rob Leathern, product management director at Facebook, said it had now removed the “targeting fields”. The social network said no-one appeared to have used the ad categories before ProPublica uncovered them.
Mr Leathern said Facebook did not allow hate speech to appear on its site.
“Our community standards strictly prohibit attacking people based on their protected characteristics, including religion,” said Mr Leathern, “and we prohibit advertisers from discriminating against people based on religion and other attributes.”
However, he said, there were times when information appeared on Facebook that violated its standards.
He said it was building “guardrails” into its processes to stop offensive self-reported profile traits being used as ad categories.
“We know we have more work to do,” he said.