How do we moderate content rendered with WSL Studio and the API?

At WellSaid Labs, we consider it an honor that we get to help innovative people amplify their creativity with our AI voices, and with that honor comes a responsibility we take very seriously: ensuring that our voices are used ethically.

Why do we moderate content?

WellSaid moderates content to protect listeners, voice actors, our community, the general public, and our employees, in keeping with our company values.

What kind of content is prohibited?

There is a range of voice content that neither our actors nor we want to be associated with, including sexually explicit content, abusive language, extreme obscenity, hate speech, unlawful language, and impersonation.

These are the kinds of content we moderate:

  • Sexually explicit content: We do not produce content that is pornographic in nature.
  • Abusive language: This includes language depicting physical violence or threats.
  • Extreme obscenity: Our voice actors prefer their avatars not be used for this language.
  • Hate speech: Renderings of racist or other hate speech will not be tolerated.
  • Unlawful language: Speech that violates federal laws is not allowed in WSL Studio.
  • Impersonation: We do not allow language that attempts to impersonate others without consent.

We at WellSaid Labs understand that these moderations involve complexities for specific use cases, such as healthcare education and story writing, and our content moderation process takes those use cases into account.

What happens if I try to render prohibited content?

WellSaid uses sophisticated content moderation tools built to scan for even subtle variations of prohibited content right at the moment of rendering.

When a user attempts to render prohibited text, our content moderation software prevents the audio from being produced until it has been reviewed.

Language that violates our Terms of Service alerts our content moderation team, and the account is frozen until we determine whether a violation occurred or further investigation is needed.

Are there additional safeguards?

Yes, we continue to manually spot-check borderline content to verify that our automated methods catch harmful language without unnecessarily flagging valid content.

*This safeguard applies to all of WellSaid Labs' voice products: Studio, API, and Custom Voices. Please review the content moderation guidelines in our Terms of Service. WellSaid Labs is upfront, before doing business with anyone, that violations of our ethical code will not be tolerated.

Note: Our goal of creating voices worth listening to and our commitment to "AI for good" motivate us to continually improve content moderation to protect our community.

For additional information and clarification, please see our blog post on content moderation.