
Skills an AI-powered Moderator Must Master

and which one is the hardest to automate
Artem Tinchurin
Co-founder at Yasna and Fastuna
When you create an IT system that is supposed to automate human work, it's important to have a clear answer to this question:

"What human skills should a robot be able to emulate, exactly?"
It's also important to set the bar appropriately.
Our team came up with the following skill set.
An average (good!) moderator:
1. Creates a safe space for conversation
2. Follows the guide, asks questions, and listens attentively
3. Probes to obtain meaningful and relevant answers
4. Probes for detailed responses
5. Evaluates the relevance and completeness of answers received
6. Keeps the conversation on track and focused on the topic
7. Refrains from expressing personal opinions or making judgments
8. Manages personal biases
This is a skill set shared by most moderators. But a minority of outstanding moderators go beyond that.
An outstanding moderator:
1. Possesses the ability to navigate and address unconscious matters
2. Is capable of seeing the big picture while attending to details and peculiarities
3. Can read between the lines, prioritize, and pinpoint what truly (!) matters
At Yasna.ai, we are now working to reliably replicate the work of an average (good!) moderator. Granted, that is a hard enough nut to crack - just look at the list of skills!

In my view, we've made significant progress on this path.
Which skill is the hardest to master?
Perhaps skill #5: "Evaluates the relevance and completeness of answers received."

For example:

You ask a respondent what they did not like about a certain service, and they answer, "I didn't like the delivery." Obviously, this is a relevant answer, but the automoderator must somehow determine that it is not a sufficient one.

Let's assume that part is not too difficult. The automoderator then clarifies what exactly they didn't like about the delivery and receives the answer "I didn't like the courier."

From there, it's easy to imagine the chain: "What exactly didn't you like about the courier?" > "His clothes"; "What was wrong with the clothes?" > "They were dirty"; and so on. There is also a possible parallel thread: "Was there anything else you didn't like?"
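To make the loop concrete, here is a minimal Python sketch of this ask-evaluate-clarify chain. Everything in it is illustrative rather than Yasna's actual implementation: the word-count check in is_sufficient is a deliberately naive stand-in for the hard judgment call, the depth cap is an arbitrary assumption, and in a real system the follow-up questions would be generated by a language model, not a fixed template.

```python
from typing import Callable

MAX_PROBE_DEPTH = 3  # assumption: cap the delivery -> courier -> clothes chain somewhere

def is_sufficient(answer: str) -> bool:
    """Toy stand-in for the genuinely hard part: judging completeness.
    Here it is a bare word-count heuristic; a real judge would weigh the
    answer's meaning against the guide question, not its length."""
    return len(answer.split()) >= 12

def probe_loop(ask: Callable[[str], str], question: str) -> list[tuple[str, str]]:
    """Ask, evaluate, clarify: repeat until the answer looks complete
    or the depth cap is hit. `ask` poses a question to the respondent
    and returns their reply."""
    transcript = []
    current = question
    for _ in range(MAX_PROBE_DEPTH + 1):
        answer = ask(current)
        transcript.append((current, answer))
        if is_sufficient(answer):
            break  # judged complete: move on to the next guide question
        # In practice a language model would phrase the follow-up;
        # this template only marks where that call would go.
        current = "What exactly didn't you like about that?"
    return transcript

# Replaying the chain from the article with canned answers:
canned = iter([
    "I didn't like the delivery.",
    "I didn't like the courier.",
    "His clothes were dirty, and he was late, so I missed a meeting I had planned.",
])
for q, a in probe_loop(lambda _: next(canned), "What didn't you like about the service?"):
    print(f"Q: {q}\nA: {a}")
```

Even this toy version shows where the difficulty concentrates: the entire loop hinges on is_sufficient.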


When should an automoderator consider an answer sufficient and move on? And what about a human moderator: how would they solve this problem?