Sunday, December 6, 2020

Facing Up to AI

"Facebook’s AI mistakenly bans ads for struggling businesses" by Sarah Frier Bloomberg, November 29, 2020

New York-based businesswoman Ruth Harrigan usually sells her honey and beeswax products in souvenir shops, but with COVID-19 pausing tourism, she’s been almost entirely dependent on Facebook ads to drive online sales. On Nov. 11, this new financial lifeline was abruptly cut when the social media company blocked her HoneyGramz ad account for violating its policies. She couldn’t imagine what about her tiny honey-filled gifts would have triggered the problem. Friends told Harrigan to just wait a couple of days and the problem might resolve itself.

She waited until she lost an estimated $5,000 in revenue and "was getting a little anxious thinking, 'Oh my God, Black Friday is around the corner, most of my sales for the year happen in November and December and that's it, and if I'm shut down any longer than this, it'll cripple me.'" -- and then she will $tarve.

Harrigan is one of millions of small business advertisers who have come to rely on Facebook because the coronavirus has shut down many traditional retail channels. The social media giant has provided new sales opportunities for these entrepreneurs, but also exposed them to the company’s misfiring content-moderation software, limited options for customer support and lack of transparency about how to fix problems.

Facebook’s human moderators have focused on election and COVID-19 misinformation this year, so the company has leaned more on artificial intelligence algorithms to monitor other areas of the platform. That’s left many small businesses caught in Facebook’s automated filters, unable to advertise through the service and frustrated because they don’t know why.

Even if an ad account gets restored, businesses lose crucial momentum. Facebook’s advertising algorithm takes a couple of weeks to figure out which users may be interested in an ad, to refine the targeting.
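To see why that reset is so costly, here is a minimal sketch of a learning phase of this kind, written as a toy epsilon-greedy bandit. The audience segments, click rates, and the algorithm itself are all assumptions for illustration; Facebook has not published how its delivery system actually works.

    import random

    # Toy model of an ad-delivery "learning phase": the system tries audience
    # segments, observes clicks, and gradually concentrates delivery on the
    # segment that responds best. All names and rates are invented.
    SEGMENTS = ["ages_18_24", "ages_25_34", "ages_35_54"]
    TRUE_CLICK_RATE = {"ages_18_24": 0.010, "ages_25_34": 0.030, "ages_35_54": 0.015}

    def run_learning_phase(impressions, epsilon=0.1):
        """Epsilon-greedy exploration over segments; returns observed click rates."""
        shows = {s: 0 for s in SEGMENTS}
        clicks = {s: 0 for s in SEGMENTS}
        for _ in range(impressions):
            if random.random() < epsilon or sum(shows.values()) == 0:
                segment = random.choice(SEGMENTS)  # explore a random segment
            else:
                # exploit the best click rate observed so far
                segment = max(SEGMENTS, key=lambda s: clicks[s] / shows[s] if shows[s] else 0.0)
            shows[segment] += 1
            if random.random() < TRUE_CLICK_RATE[segment]:
                clicks[segment] += 1
        return {s: round(clicks[s] / shows[s], 4) if shows[s] else 0.0 for s in SEGMENTS}

    # A ban wipes this accumulated state; a reinstated account starts from zero
    # and delivers poorly until the estimates are rebuilt, which takes weeks.
    print(run_learning_phase(10_000))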

“Facebook almost doesn’t realize the impact of their own algorithm and what that means,” said Jessica Grossman, chief executive of digital marketing firm In Social. There seemed to be no logic to the account bans imposed on In Social’s clients, she added. A pizza vending machine company, a reusable water bottle company, a coffee delivery service, a business coach and a hair weave company were all suspended.

Try arguing with it.

"We know it can be frustrating to experience any type of business disruption, especially at such a critical time of the year," Facebook said in a statement. "While we offer free support for all businesses, we regularly work to improve our tools and systems, and to make the support we offer easier to use and access. We apologize for any inconvenience recent disruptions may have caused," but while business owners agree that Facebook is a lifeline during the pandemic, they say it’s also an unreliable partner. Facebook’s ban on political ads around the US election, for instance, affected companies that have no connection to politics, like a business selling bracelets to benefit refugees. A seed company was also blocked for sharing a picture of Walla Walla onions — which were “overtly sexual,” according to Facebook’s AI.

GFP Delivered, a Chicago-based produce company advertising a way for people to avoid the grocery store during COVID-19, had its Facebook ads shut down for two months without clear explanation, according to owner George Fourkas. He said he was able to fix the problem only after reaching out to old college friends who work at Facebook. 

The more things change..... it's still all about who you know.

Facebook has been automating content moderation for years, a transition it highlights in a quarterly report detailing how much content the company removes. In more nuanced categories such as “hate speech,” Facebook removed almost 95 percent of violating posts automatically in the third quarter, up from just 53 percent two years ago, but that increase comes with more corrections. Facebook removed 22 million posts for hate speech in the third quarter, more than 3 times as many as a year earlier. The number of posts it later restored jumped by 40 percent.

Advertisers have been particularly hurt by these automated decisions in recent months. The overreaction by Facebook’s AI is a side effect of the company taking more responsibility for the content on its platform, according to Guy Rosen, Facebook’s vice president of integrity. “As we take more action, we remove more content, there’s more opportunities also for those to be in error,” he said during a recent press call.

Everything comes with side effects, even the va¢¢ines.

That’s what HoneyGramz’s Harrigan was told happened to her account. She eventually got desperate enough to Google names of Facebook employees who might help. She found Rob Leathern, the company’s director of ad products, and sent him a message on Twitter. Miraculously, he responded. A few hours later, Facebook sent an e-mail restoring her account.

"They just said they turned it off in error," Harrigan said. "They didn't give me any feedback. They just reset the whole thing as if it never happened," but Harrigan won’t forget. She printed off the e-mail and pinned it to her office whiteboard. “It was really, really scary,” she said.....

--more--"

Can you imagine what will happen when they take over health care?

It will literally be the death of you.

"Facebook to start policing antiBlack hate speech more aggressively than antiwhite comments, documents show" by Elizabeth Dwoskin, Nitasha Tiku and Heather Kelly Washington Post, December 3, 2020

Facebook is embarking on a major overhaul of its algorithms that detect hate speech, according to internal documents, reversing years of so-called “race-blind” practices.

Those practices resulted in the company being more vigilant about removing slurs lobbed against white users while flagging and deleting innocuous posts by people of color on the platform.

The overhaul, which is known as the WoW Project and is in its early stages, involves re-engineering Facebook’s automated moderation systems to get better at detecting and automatically deleting hateful language that is considered “the worst of the worst,” according to internal documents describing the project obtained by The Washington Post. The “worst of the worst” includes slurs directed at Black people, Muslim people, people of more than one race, the LGBTQ community, and Jewish people, according to the documents. 

Gonna sew your lips shut, so to speak.

As one way to assess severity, Facebook assigned different types of attacks numerical scores weighted by their perceived harm. For example, the company’s systems would now place a higher priority on automatically removing statements such as “Gay people are disgusting” than “Men are pigs.”
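As a rough illustration of what score-weighted triage could look like, here is a short sketch. The categories and numbers are hypothetical stand-ins; the internal documents describe the approach, but The Post did not publish the actual values.

    # Hypothetical severity table: each class of attack gets a harm score, and
    # the removal queue is worked highest score first. Categories and weights
    # are invented for illustration, not taken from Facebook's documents.
    HARM_SCORES = {
        "slur_against_protected_group": 0.9,  # e.g. "Gay people are disgusting"
        "generic_contempt": 0.3,              # e.g. "Men are pigs"
    }

    def triage(flagged_posts):
        """Sort flagged posts so higher-harm categories are acted on first."""
        return sorted(flagged_posts, key=lambda post: HARM_SCORES.get(post["category"], 0.0), reverse=True)

    queue = [
        {"text": "Men are pigs", "category": "generic_contempt"},
        {"text": "Gay people are disgusting", "category": "slur_against_protected_group"},
    ]
    for post in triage(queue):
        print(f'remove priority {HARM_SCORES[post["category"]]:.1f}: {post["text"]}')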

Facebook has long banned hate speech — defined as violent or dehumanizing speech — based on race, gender, sexuality, and other protected characteristics. It owns Instagram and has the same hate speech policies there, but before the overhaul, the company’s algorithms and policies did not make a distinction between groups that were more likely to be targets of hate speech versus those that have not been historically marginalized. Comments like “White people are stupid” were treated the same as anti-Semitic or racist slurs.

In the first phase of the project, which was announced internally to a small group in October, engineers said they had changed the company’s systems to deprioritize policing contemptuous comments about “whites,” “men,” and “Americans.” Facebook still considers such attacks to be hate speech, and users can still report them to the company; however, the company’s technology now treats them as “low-sensitivity” — or less likely to be harmful — so that they are no longer automatically deleted by the company’s algorithms. That means roughly 10,000 fewer posts are now being deleted each day, according to the documents.
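Here is a minimal sketch of what that "low-sensitivity" gate might look like in code, assuming the tiering the documents describe; the group list and routing labels are illustrative, not Facebook's actual configuration.

    # Sketch of the "low-sensitivity" routing the documents describe: attacks on
    # some groups remain policy violations, but they are no longer deleted
    # automatically -- they only reach review if a user reports them. The tier
    # list and outcome labels below are assumptions for illustration.
    LOW_SENSITIVITY_TARGETS = {"whites", "men", "americans"}

    def route_detected_attack(target_group, user_reported):
        """Return what the automated pipeline does with a detected attack."""
        if target_group.lower() in LOW_SENSITIVITY_TARGETS:
            # Still hate speech under policy, but no automatic deletion.
            return "queue_for_human_review" if user_reported else "leave_up"
        return "auto_delete"

    print(route_detected_attack("men", user_reported=False))           # leave_up
    print(route_detected_attack("men", user_reported=True))            # queue_for_human_review
    print(route_detected_attack("Black people", user_reported=False))  # auto_delete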

Yes, it is okay to hate a certain group of people, but none other.

The shift is a response to a racial reckoning within the company as well as years of criticism from civil rights advocates that content from Black users is disproportionately removed, particularly when they use the platform to describe experiences of discrimination. 

That's where the print algorithm must have cut it off.

Some civil rights advocates said the change was overdue.

“To me this is confirmation of what we’ve been demanding for years, an enforcement regime that takes power and historical dynamics into account,” said Arisha Hatch, vice president at the civil rights group Color of Change, who reviewed the documents on behalf of The Post but said she did not know about the changes.

“We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” said Facebook spokeswoman Sally Aldous. “Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial.”

It's always about them at bottom!

Because describing experiences of discrimination can involve critiquing white people, Facebook’s algorithms often automatically removed that content, demonstrating the ways in which even advanced artificial intelligence can be overzealous in tackling nuanced topics.

Imagine what it will do when it views humans themselves as a virus.

Oh, wait, they made a bunch of movies about just that!

“We can’t combat systemic racism if we can’t talk about it, and challenging white supremacy and white men is an important part of having dialogue about racism,” said Danielle Citron, a law professor specializing in free speech at Boston University Law School, who also reviewed the documents, “but you can’t have the conversation if it is being filtered out, bizarrely, by overly blunt hate speech algorithms.” 

Yeah, strange how we can't talk about the unmentionable Jewi$h $upremaci$m that dominates our lives today, and if you do mention it you are removed like a Jew in Nazi Germany!

In addition to deleting comments protesting racism, Facebook’s approach has at times resulted in a stark contrast between its automated takedowns and users’ actual reports about hate speech. At the height of the nationwide protests in June over the killing of George Floyd, an unarmed Black man, for example, the top three derogatory terms Facebook’s automated systems removed were “white trash,” a gay slur, and “cracker,” according to an internal chart obtained by The Post and first reported by NBC News in July. During that time period, slurs targeted at people in marginalized groups, including Black people, Jewish people, and transgender people, were taken down less frequently.

Whenever I hear someone say "white trash" I always say to them that's a terrible thing to say because it's the equivalent of the N-word, but I'm proud to be called a cracker.

As protests over Floyd’s death sparked national soul searching in June, Facebook employees raged against the company’s choices to leave up racially divisive comments by President Trump, who condemned protesters. They also debated the limits of personal expressions of solidarity, like allowing Black Lives Matter and Blue Lives Matter slogans as people’s internal profile pictures. Black employees met with senior executives to express frustration over the company’s policies.

In July, Facebook advertisers organized a high-profile boycott over civil rights issues, which put pressure on the company to improve its treatment of marginalized groups. It was also bitterly criticized by its own independent auditors in a searing civil rights report, which found Facebook’s hate speech policies to be a “tremendous setback” when it came to protecting its users of color. More than a dozen employees have quit in protest over the company’s policies on hate speech. An African American manager filed a civil rights complaint against the company in July, alleging racial bias in recruiting and hiring.

My feeling is as long as it doesn't cross into action, it's an opinion.

Complaints by Black users continue, with some saying they are seeing posts removed with increased frequency even as the WoW project gets underway.

In one instance in November, Facebook-owned Instagram removed a post from a man who asked his followers to “Thank a Black woman for saving our country,” according to screenshots posted by social media users at that time. The user received a notice that said “This Post Goes Against Our Community Guidelines” on hate speech, according to the screenshots.

--more--"

Also see:

"With vaccines against COVID-19 on the verge of being rolled out around the world, Facebook said it will start removing false claims about the immunizations that have been debunked by public health experts. The move announced Thursday adds to Facebook’s policy of taking down misinformation about the deadly virus that could lead to imminent physical harm. The type of posts that could be removed on Facebook or Instagram include false claims about the safety, efficacy, ingredients, or side effects of the vaccine, Facebook said. These could include claims that the vaccines contain microchips or anything else not on the official ingredient list. In October, Facebook said it would ban ads that discourage people from getting vaccines in general, not just for COVID."

The New World Order and Great Reset is all about the vaccine, folks, so don't take it.