Two Kenyan cabinet bosses have issued strong statements opposing an ultimatum handed down to Meta last week by the nation’s cohesion watchdog. The interior minister and minister for information, communication, and technology said separately that Kenya will take no action against Facebook ahead of next week’s national elections.
On Friday, the National Cohesion and Integration Commission (NCIC) said it would move to have Meta suspended from operating in Kenya unless the company took further action within seven days to stem the flow of hate speech and misinformation focused on the country’s upcoming election.
The NCIC, founded in 2008 to mitigate inter-ethnic conflict in the wake of unprecedented post-election violence, issued the ultimatum at a news conference Friday held jointly with members of the human rights group Global Witness. The nonprofit was present to disclose independent findings that showed Facebook had repeatedly approved ads designed to instigate ethnic violence among Kenya’s more than 40 tribes.
Reuters reported Monday, however, that the Kenyatta government would not move to suspend Facebook ahead of the August 9 elections, citing remarks by Joe Mucheru, Kenya’s minister for information, communication and technology.
“We do not have a plan to shut down any of these platforms,” Mucheru told the news service, adding that NCIC “should have consulted widely because they don’t have the power to shut anybody down.”
The NCIC said that if Facebook failed to heed its warning, it would recommend suspension of the service to Kenya’s communications authority, which oversees telecommunications and e-commerce in the country.
In a tweet on Saturday, Interior Minister Fred Matiang’i said that shutting off access to the platform would infringe upon the free speech rights of Kenya’s more than 11 million Facebook users. Matiang’i further sought to distance the Kenyatta administration from the NCIC’s statements by claiming the warning to Meta had been issued by its commissioners in a “personal capacity.”
“[W]e welcome the constitutional right of citizens to express themselves on matters of national interest without fear of victimization,” he said.
As Gizmodo reported Friday, Global Witness and Foxglove, another not-for-profit based in the U.K., conducted multiple tests designed to gauge Facebook’s ability to detect and prevent ads designed to instigate violence in Kenya along ethnic lines. Facebook repeatedly failed the tests in the country’s two most common languages: English and Swahili.
The groups attempted to publish ads which they described as “dehumanising, comparing specific tribal groups to animals and calling for rape, slaughter, and beheading.” Facebook repeatedly approved the ads, which the groups took down before any users could see them.
Facebook’s human rights record has been roundly criticised in numerous countries, including Kenya, which has a history of conflicts surrounding its elections since the founding of its multi-party system in the early 1990s.
Following general elections in 2007, unprecedented violence swept the country in all but two of its provinces. An official inquiry the following year determined that in some cases violence had been instigated by local business and political leaders. While it occurred spontaneously in some areas, investigators found it had been planned in others.
Documents leaked by Facebook whistleblower Frances Haugen show that Facebook is aware of its impact in the country, though it remains largely optimistic that social media in general will play a mostly positive role in Kenya’s electoral process. One document reviewed by Gizmodo notes that while some posts are “laced with hatred and intimidation,” others contain “peace messages” designed to counter hatred. It goes on to say Facebook has been used to “incite violence, but also to mitigate it.”
The same document, dated around November 2018, states that areas not serviced by local peace committees — a tool deployed to defuse tensions and manage conflict in various forms since the 1990s — are “more likely to be triggered by ethnic based content.”
In a statement last week, Meta said it worked with “dedicated teams of Swahili speakers” to help remove harmful content “quickly and at scale.” But asked by Gizmodo how many Swahili speakers it employs to help moderate content, Meta declined to say.
“We don’t typically provide a breakdown of how many people we have reviewing content in a particular country or language as this wouldn’t show the whole picture,” a spokesperson said, adding that “a lot of reports for violating content” are image based and therefore “don’t require local language expertise.”
Meta also said it employed a “team of subject matter experts working on the election,” including those with “expertise in misinformation, hate speech, elections and disinformation.” Asked specifically how many subject matter experts it had hired to help safeguard Kenyan elections, it again declined to provide a number.
“We have specialist central teams working on complex issues — like misinformation and terrorism — huge teams of people who are focused on developing the automation that is proactively detecting violating content in different countries around the world. Those teams would not be reflected in the number of local language content reviewers either,” they said.