of comments on company profiles contain hate, negative reviews, spam and scams.
of social ad comments contain harmful content, but many go unchecked by marketers.
that’s how much time social media managers spend moderating 10,000 comments.
Human moderators would need 2,000 hours a month to check all comments on our social networks. elv.ai saves us €8,500 monthly.
With AI-only solutions, we still had to check comments manually because hate speech slipped through. That’s why we switched to elv.ai to ensure our comments are protected.
In Refresher, as in other media, we have long struggled with growing toxicity, conspiracies, misinformation and various attacks in the digital space. That’s why I’m glad that elv.ai is helping us moderate comments on social media, so we can focus on journalistic work and collaborations with clients.
For the Representation of the European Commission in Slovakia, communication with Slovak citizens – whether in virtual or real contact – is key. It is important for us to discuss with people the topics that interest them, to answer their questions, or to provide them with the latest opinions on hot topics that move our society. We also try to keep in touch with people who do not share the same views, we are open to polite discussion, but we do not want to give space on our social networks to comments containing vulgarities, insults, hoaxes, dangerous misinformation and other illegal content. For this reason, we are very happy to partner with Elv.Ai, making the discussions under our posts a safe space for open exchange of opinions and sharing of facts for all those interested in a real discussion. With the help of Slovak elves and their latest AI technology, we will continue to create a space for constructive dialogue and develop a respectable debate on our social networks about the European Union and its priorities.
Working with elv.ai allows us to better protect our followers and clients from harmful content, vulgarisms and misleading information. We see it as our social responsibility to create access to facts for policyholders and the general public, as well as a safe space for discussion.
Especially during elections, we see a significant number of comments under our posts, and unfortunately it is not always possible to moderate all the vulgar and inappropriate reactions. As a project focused on factual information, we have partnered with elv.ai to build an environment of factual discussion and mutual respect in our comment sections as well.
Disinformation and hate speech online are among the main topics we have been dealing with at DigiQ for a long time. We are extremely pleased that elv.ai has managed to find an effective way to moderate comments on social media. We believe that through our cooperation we will be able to contribute even more to the cultivation of online discussions.
Given our shared goals of understanding the spread of misinformation, I reached out to Elves to see if they would be interested in a research collaboration. We are now working together on a project that will identify not only the prevalence and content of misinformation on Slovak Facebook, but also the type of content that causes misinformation in the first place. As a researcher, I am excited to work on this project with such a great team!
The Department of Journalism of the Faculty of Arts of the Comenius University is the oldest university-level journalism school in Slovakia. In its mission to educate future journalists and media workers, one of the priorities of the department is to be mindful of the principles of democracy and to always stand on the side of truth. The current information crisis brings with it many challenges for journalists and the education system, which is why it is necessary to join forces with real experts. The cooperation with elv.ai opens up a unique space for incorporating innovative technologies into academic research and education. The common goal is to participate in research on communication in the specific conditions of the online space, but also to cultivate it and combat misinformation and hate speech.
The rules were created based on the existing community guidelines from Meta and are also based on the current DSA legislation. All our trained moderators, as well as the AI model, follow the moderation manual. We hide comments if they show the following characteristics:
1. Vulgarisms
Comments containing rude and pejorative words together with the “censored” form of these expressions – sh*t, f*ck, etc.
2. Insults
They target specific people and specific groups of people. These include, for example, dehumanizing expressions that compare people to animals, diseases, objects or dirt. This category also covers racism and comments attacking gender or sexual orientation.
3. Violence
Comments that condone violent acts, encourage others to carry them out, or threaten a specific person in some way.
4. Hoaxes, misinformation, harmful stereotypes
Comments that attempt to obscure, deny, or spread false claims about events for which verified information already exists. This also includes classic conspiracy theories, stereotypes, and myths that attack specific groups of people (for example, Jews or Roma).
5. SCAMs, frauds
This group includes comments published by bots or fraudsters in order to deceive or profit from ordinary users.
If we are not sure of our decision, or the comment is borderline, we always prefer to approve it.
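For illustration only, the decision flow described above — hide a comment when it clearly falls into one of the five categories, approve it when in doubt — could be sketched as a simple rule-based filter. The category checks and threshold here are hypothetical placeholders, not elv.ai’s actual model:

```python
# Minimal sketch of the moderation decision flow described in the manual above.
# The classifier and confidence threshold are illustrative, not elv.ai's model.

HIDE_CATEGORIES = {"vulgarism", "insult", "violence", "hoax", "scam"}

def moderate(comment: str, classify) -> str:
    """Return 'hide' or 'approve' for a comment.

    `classify` is a hypothetical function returning (category, confidence)
    for the most likely violation, or (None, 0.0) if none is detected.
    """
    category, confidence = classify(comment)
    # Hide only when a listed category is detected with clear confidence;
    # borderline cases are approved, as the manual prescribes.
    if category in HIDE_CATEGORIES and confidence >= 0.8:
        return "hide"
    return "approve"

# Toy classifier for demonstration purposes only.
def toy_classify(comment: str):
    if "sh*t" in comment or "f*ck" in comment:
        return ("vulgarism", 0.95)
    return (None, 0.0)

print(moderate("What a f*ck-up", toy_classify))      # hide
print(moderate("I politely disagree", toy_classify))  # approve
```

In practice the classification step is done by trained moderators and the AI model following the moderation manual; the sketch only shows the approve-by-default rule for uncertain cases.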
Moderation increased the share of polite comments for our clients and encouraged more users to join the discussion. After 5 months of moderation, the rate of harmful comments fell from 21% to 10%, while the total number of comments grew from 36,000 to 41,000.
We collaborated with CulturePulse on the emotion analysis feature. CulturePulse technology focuses on more than 90 dimensions of human psychology and culture, including morality, conflict and social issues.
Their AI model is built not only on traditional machine learning processes, but also on shared evolutionary patterns that recur across cultures. The algorithm draws on over 30 years of clinically validated cognitive science to decode the beliefs that influence people’s behavior.
We fight against hoaxes and misinformation to protect brands on their social networks.