See the blog post associated with this FAQ.

What about false positive rates with the filter?

Our false positive rates are industry-leading, but not zero. We expect the impact of false positives to lessen once our new moderation process is fully spun up, so that we can review automatic suspensions in a timely fashion and lift bans from incorrectly suspended users. Overall, we’ve noticed that a user who experiences one false positive is more likely than the general user population to experience future false positives.

What else do you have planned to increase safety?

We are continually working closely with OpenAI to make our safety tools, including our filters, more robust. We’re also working on features that give you more control over what content you see and from which users.

When will Explore come back?

We have created a new home page where we hope to deliver all of the most valued features of Explore and more. We hope to make this an even better experience than Explore was, while providing a safe environment for users. We’ve still got a lot of work ahead, but we hope to have the major features ready within the next three months.

Why have some comments/questions been removed?

We continue to moderate content on all of our platforms, and comments or questions that seem like spam or otherwise violate our terms of service or community guidelines have been removed.

Additionally, some feature requests submitted to the Feature Request Board contained an inordinate amount of spam, harassment, or sexually explicit content in either the post or in a large percentage of the comments. Our team has temporarily removed those from public visibility until we can dedicate more time to keeping up with the moderation needs of those topics. We continue to value constructive feedback and suggestions from the community.

In the meantime, if your idea is not publicly visible but you did receive confirmation that it was successfully submitted, please do not resubmit it. We currently have many duplicates of some ideas, which will need to be merged before they become public. Adding duplicates of pre-existing ideas slows down the process of moderating and publishing ideas.

Why can’t you just keep unpublished content completely private (like Signal)?

Services like Signal rely on each device encrypting messages until they reach their destination. For the AI to respond to your actions, your messages and the AI’s messages cannot remain encrypted while they are being processed. We could only encrypt content end-to-end if each device you use to play AI Dungeon could run its own instance of GPT-3, which currently isn’t possible.
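As a rough illustration of this constraint, here is a toy sketch (not real cryptography, and not AI Dungeon's actual pipeline; the cipher and function names are hypothetical): the model can only respond to readable text, so the server must be able to decrypt before processing, which is exactly what end-to-end encryption forbids.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a same-length key."""
    return bytes(b ^ k for b, k in zip(data, key))

def ai_respond(plaintext: str) -> str:
    """Stand-in for the language model: it can only act on plaintext."""
    return f"> The AI responds to: {plaintext}"

# On your device: encrypt the action before it leaves.
message = b"open the dungeon door"
key = secrets.token_bytes(len(message))
ciphertext = xor_cipher(message, key)

# The server receives only ciphertext. For the model to respond, the
# server must hold the key and decrypt first -- so the content is never
# hidden from the server the way it is with a true end-to-end service.
plaintext = xor_cipher(ciphertext, key).decode()
print(ai_respond(plaintext))
```

If the model could somehow run entirely on your device, the key would never need to leave it; that is the scenario described above that isn't currently possible with GPT-3.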

There’s something I don’t understand or would like clarification about in the Content Policy, Community Guidelines, ToS or Privacy Policy.

We expect to make several revisions to these documents in the near future, and your feedback is welcome. Please send it to [email protected].