CEOs Believe Their Executive Teams Lack AI Savviness
Only 44% of CIOs are deemed by their CEOs to be “AI-savvy”, according to a survey by Gartner.
Abusive comments, harassment, and incitement to violence easily slip past online platforms’ content moderation tools, according to a new report from the EU Agency for Fundamental Rights (FRA). It shows that most online hate targets women, but people of African descent, Roma, and Jews are also affected.
A lack of access to platforms’ data and understanding of what constitutes hate speech hampers efforts to tackle online hate. FRA called for more transparency and guidance to ensure a safer online space for all. FRA’s online content moderation report looks at the challenges of detecting and removing hate speech from social media. It highlights that there is no commonly agreed definition of online hate speech. Online content moderation systems are also not open to researchers’ scrutiny. This makes it difficult to get a full picture of the extent of online hate and hampers efforts to tackle it.
The analysis of posts and comments published on social media platforms between January and June 2022 reveals that out of 1,500 posts already assessed by content moderation tools, more than half (53%) are still considered hateful by human coders. Women are the main targets of online hate across all researched platforms and countries. Most hate speech towards women includes abusive language, harassment, and incitement to sexual violence. People of African descent, Roma, and Jews are most often targets of negative stereotyping. Almost half (47%) of all hateful posts are direct harassment.
To tackle online hate, FRA says the EU and online platforms should ensure a safer online space for all, provide more guidance, capture all forms of online hate, test technology for bias, and guarantee access to data for independent research. To prevent online hate, FRA says platforms should pay particular attention to protected characteristics like gender and ethnicity in their content moderation and monitoring efforts. Very large online platforms should include misogyny in their risk assessment and mitigation measures under the Digital Services Act (DSA).
It is not always clear what is considered hate speech and what is protected under freedom of speech. The EU and national regulators should provide more guidance on identifying illegal online hate. To ensure that different types of online hate are detected, the European Commission and national governments should create and fund a network of trusted flaggers, involving civil society. The police, content moderators, and flaggers should be properly trained, to ensure that platforms do not miss or over-remove content.
Between January and June 2022, FRA collected almost 350,000 posts and comments based on specific keywords. Human coders assessed about 400 random posts from each country to determine whether they were hateful. Forty random posts were then assessed in more detail by coders and legal experts. The report shows the different types of hate speech found across the countries, target groups, and platforms covered.
“The sheer volume of hate we identified on social media clearly shows that the EU, its Member States, and online platforms can step up their efforts to create a safer online space for all, with respect for human rights including freedom of expression. It is unacceptable to attack people online just because of their gender, skin color, or religion,” said Michael O’Flaherty, Director of FRA.