Meta has been making big plans in preparation for Kenya’s upcoming elections in August, but it might not be enough.

Election campaigns are often riddled with hate speech, misinformation, and malicious attacks on candidates. In response, the company has set up a dedicated Kenyan Elections Operation Centre, which has rolled out policies and products to limit or remove fake news across its platforms: Facebook, Instagram, WhatsApp, and Messenger. 

Meta has also teamed up with third-party fact-checkers in Kenya—AFP, PesaCheck, and Africa Check—which will fact-check posts that make claims about the elections. 

So what’s wrong?

Well, Meta works hard, but disinformers work harder. 

Last week, a report by advocacy group Global Witness and legal non-profit Foxglove revealed that Facebook is still unable to detect hate speech in ads on its own platform, just weeks before Kenya's August 9 presidential election. 

Kenya's National Cohesion and Integration Commission (NCIC) took note of the report and, on July 29, gave Facebook seven days to get its act together or face suspension. 

Meta-statising efforts

In response, Meta said it has built more advanced detection technology, grown its global safety and security team to more than 40,000 people, and hired more reviewers to moderate content across its apps in more than 70 languages, including Swahili.

In fact, this year alone, it has removed over 79,000 pieces of content from Kenya that violated its hate speech and violence policies. 

Zoom out: After the NCIC's directive, Kenya's ICT minister Joseph Mucheru said in a Twitter post that the country has no plans to shut down Facebook or impose an internet ban, but whether that holds remains to be seen.