Facebook, after getting raked over the coals for the role it played in spreading disinformation and propaganda during the 2016 U.S. presidential election, outlined several new steps it’s taking ahead of the 2020 election season.

Among its new measures: Facebook has banned paid ads that suggest voting is useless or that urge people not to vote. The company also is launching a new U.S. presidential candidate ad-spending tracker and is adding more advertising-spend details at the state and regional level to help users analyze advertiser and candidate efforts to reach voters geographically.

Facebook also said it will more prominently label content on Facebook and Instagram that has been rated false (or partly false) by a third-party fact-checker.

However, the company has said it will not ban political ads that include false claims — which CEO Mark Zuckerberg, in a widely criticized speech last week, couched in the context of the company’s commitment to protecting free speech. “This isn’t about money,” he said on a call with reporters Monday. “Banning political ads would favor incumbents… I don’t think that’s what we want to do.”

Among other election-protection initiatives, Facebook announced that it has updated its policy on taking action against “coordinated inauthentic behavior,” saying the change will improve its ability “to counter new tactics and bad actors.”

Facebook claims that in the last year, it has disrupted more than 50 individual campaigns from multiple nation-states trying to interfere in elections. Since 2016, it said, it has blocked some 200 such attempts. Most of the attempts have originated from Russia, while Facebook is also seeing increasingly sophisticated attacks from Iran and China, Zuckerberg said on the call Monday.

“The existence of this activity shows actors are continuing to try to influence elections… in the U.S. and around the world,” Zuckerberg said. On Monday, Facebook announced it removed four separate networks comprising dozens of accounts, pages and groups attempting to spread misinformation in the U.S., North Africa and Latin America. Three of those originated in Iran and one came out of Russia, the company said.

In addition, Facebook announced that it will invest $2 million — although that’s a minuscule fraction of its total revenue — to support media-literacy projects both on Facebook and elsewhere.

Starting next month, the company said, it will show the confirmed owners of a Facebook Page and will label state-controlled media on their respective pages and in its ad library, and will make it clear if an ad ran on Facebook, Instagram, Messenger or Audience Network.

The company also is launching Facebook Protect, a program aimed at providing better security for the accounts of elected officials, candidates, their staffs and others “who may be particularly vulnerable to targeting by hackers and foreign adversaries.” Participants who enroll Pages in Facebook Protect will be required to turn on two-factor authentication, and their accounts will be monitored for hacking.

Facebook’s latest attempts to reduce election interference come after Zuckerberg last year was hauled before Congress to explain how his company’s platform was used by Russian-funded trolls to try to influence the 2016 U.S. election. According to the social-media giant, Russia-linked content reached an estimated 126 million people during the 2016 campaign season and into 2017.

Per a New York Times report, Facebook had information about Russia’s efforts to spread propaganda across the platform as early as the spring of 2016 — an allegation Facebook has denied.

Meanwhile, data on millions of Facebook users improperly ended up in the possession of Cambridge Analytica, a now-defunct political consulting firm that used the information to target voters during the 2016 presidential election on behalf of Donald Trump’s campaign.

This Wednesday (Oct. 23), Zuckerberg is slated to testify before the U.S. House Committee on Financial Services about Facebook’s Libra cryptocurrency plans.
