Choose the right plan for your business

Try 2 weeks for free with any plan
Annually (-15%)

Starter

€ 69

/ month
(2-week free trial)

Essential

€ 245

/ month
(2-week free trial)

Expert

€ 635

/ month
(2-week free trial)

Enterprise

Custom

Connect your API

Only Essential Data Required

Detailed Plan Comparison

Features (Starter / Essential / Expert / Enterprise)
AI moderation
Human moderation
Moderation by your team
Human moderation response time: 3 hours / 1 hour / 30 min / 10 min
Moderating paid posts
Moderation customization (stopwords)
Social media profiles: 2 / Unlimited / Unlimited / Unlimited
Number of ad accounts: 2 / 5
Number of comments: 3 000 / 15 000 / 50 000 / Unlimited
Languages: 1 / 3 / Unlimited / Unlimited
User blocking
Automated comment liking
Suggested comments for reply
Engagement automations
AI comment reply
Audience insights
Watchlist of top fans & haters
Custom reporting
API integration
2FA
Disqus integration
Service level agreement
Personal onboarding
Account manager
Email and phone support
Price: € 69 / € 245 / € 635 / Custom
FAQ

Any questions?

How does human moderation work?
Our moderators are native speakers who moderate content in their own language. They decide whether a comment should be approved or hidden when the AI model is not 100% sure. We also have moderators dedicated to reviewing all comments hidden by the AI. All of them are trained in Elv's moderation guidelines.

The rules were created based on Meta's existing community guidelines and on the current DSA legislation. All our trained moderators, as well as the AI model, follow the moderation manual. We hide comments that show the following characteristics:

1. Vulgarisms

Comments containing rude and pejorative words, including "censored" forms of these expressions (sh*t, f*ck, etc.).

2. Insults

These target specific people or groups of people. They include, for example, dehumanizing expressions that compare people to animals, diseases, objects, or dirt. We also include racism and comments attacking people for their gender or sexual orientation.

3. Violence

Comments that approve of violent acts, encourage such acts through their content, or threaten the person concerned in some way.

4. Hoaxes, misinformation, harmful stereotypes

Comments that try to obfuscate, deny, or spread false news about events for which verified information already exists. This also includes classic conspiracy theories, stereotypes, and myths that attack specific groups of people (for example, Jews or Roma).

5. SCAMs, frauds

This group includes comments published by bots or fraudsters with the intent of deceiving ordinary users or profiting at their expense.

If we are not sure of our decision, or if we think a comment is borderline, we always prefer to approve it.
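The workflow described above can be sketched as a simple decision function. This is a minimal illustration, not Elv's actual implementation: the category names, the confidence threshold, and the function names are all assumptions made for the example.

```python
# Illustrative sketch of the moderation flow described above.
# The AI model scores a comment; confident decisions are applied
# automatically, uncertain ones are escalated to a native-speaker
# moderator. Threshold and category names are assumptions.

HIDE_CATEGORIES = {"vulgarism", "insult", "violence", "misinformation", "scam"}

def moderate(ai_category: str, ai_confidence: float) -> str:
    """Return 'approve', 'hide', or 'human_review' for a comment."""
    if ai_confidence < 0.99:           # model is not "100% sure"
        return "human_review"          # escalate to a human moderator
    if ai_category in HIDE_CATEGORIES:
        return "hide"                  # hidden comments are also reviewed later
    return "approve"

def human_decision(ai_category: str, is_borderline: bool) -> str:
    """A human moderator approves borderline comments by policy."""
    if is_borderline:
        return "approve"
    return "hide" if ai_category in HIDE_CATEGORIES else "approve"
```

The key point the sketch captures is the stated policy: uncertainty goes to a human, and borderline comments are approved rather than hidden.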

Moderation increased the share of polite comments for our clients by encouraging more users to join the discussion. After 5 months of moderation, the rate of harmful comments fell from 21% to 10%, while the total number of comments grew from 36,000 to 41,000.

We collaborated with CulturePulse on the emotion analysis feature. CulturePulse technology focuses on more than 90 dimensions of human psychology and culture, including morality, conflict and social issues.

Their AI model builds not only on traditional machine learning processes, but also on shared evolutionary patterns that occur across cultures. The algorithm uses over 30 years of clinically proven cognitive science to decode the beliefs that influence people's behavior.

Save 30% of Your Social Media Manager's Time

Connect your social media accounts in 2 minutes.

Copyright © 2024 All rights reserved ELV.AI
