Content Ratings: Hidden Ways They Impact Your AI Dungeon Experience

"The art of life is a constant readjustment to our surroundings." - Kakuzo Okakaura

Content ratings. You’re about to read 5,000 words about content ratings (plus a few other related topics). It’s the kind of feature that can be easily overlooked but actually has a profound impact on your experience on AI Dungeon. Content ratings are part of a delicate balancing act that allows us to create more value for our players and community and enables AI Dungeon to thrive. Without them, many of our latest improvements wouldn’t have been possible, including the recent doubling of context and the addition of new AI models to our lineup.

Today, as we launch a new content rating system—together with related features like new home page content, first-party content, and broader content rating preferences—we want to share with you how these changes not only address specific feedback from many of you in the community but also set the stage for us to do even more exciting things with AI Dungeon!

We spent hundreds of hours researching and planning these features and wrote over 30,000 words of analysis and strategy along the way. We’d like to share some of that research, analysis, and rationale with you. We’ll also share principles and values that influenced our approach to these new features to add context for how they will improve your experience on AI Dungeon.

Unique Aspects of AI Dungeon

Although our name and visual identity are inspired by the idea of an AI-based Dungeons and Dragons dungeon master, the experience you can have on AI Dungeon isn’t limited to fantasy or adventure-style narratives. AI allows the game experience to be adapted to different themes, and you can find scenarios spanning countless themes, including sci-fi, slice-of-life, fan fiction, and more.

Some user-generated content features more mature themes, such as intense violence, horror, or sex. While this content can be popular, it can also make players feel uncomfortable or uninterested. What each of us finds acceptable or fun is a deeply personal preference. Even amongst our players who enjoy mature content, we’ve found that they often take issue with other types of mature content and disagree about what they feel is publishable on our platform.

The longer we’ve spent learning about our players and their preferences, the more we’ve realized that there isn’t an easy solution to this challenge. Content is varied, preferences are personal.

These challenges are present on many content and social platforms as well. Companies like Facebook, YouTube, and X/Twitter spend millions of dollars on content moderation and personalization algorithms to manage the diversity of content and user preferences.

We don’t have the vast resources these companies do, so we need to be scrappy and efficient in our approach to this problem. We want to build AI Dungeon into a generational company and platform, and we know the choices we make have an impact on our potential for growth and sustainability.

Lessons we’ve learned

Given our unique product and community, the lessons we’ve learned from the past few years of managing our platform carry significant weight in our design process.

Things that went well

Prior to the Phoenix redesign, it was very difficult for new creators to break through and rise in popularity. The home page at the time was dominated by hand-picked creators and AI Dungeon originals. As a result, we had a few extremely popular creators. With Phoenix, we started featuring new content from the community and from emerging creators, and we’ve seen a growing number of new creators find success on AI Dungeon and bring interesting new scenarios to players.

Our communications with creators have also improved. Our community and support team work directly with creators to discuss content discoverability and moderation. As part of this communication improvement, we started a Creator Program where qualifying creators would have more direct access to our community and moderation team. Our creators know who they can turn to if they have any questions or concerns. Members of our executive team spend a significant amount of one-on-one time discussing creator needs directly. We know our creators are vital to the AI Dungeon community, and we want to do everything we can to help them create more engaging and dynamic content.

We’ve also strengthened our moderation process. We’ve landed on an approach to content moderation that our small team can support and that meets the needs of our community. Players report content they find objectionable, and our moderation team reviews each report manually. These reports not only alert us to problematic content, but also help us understand what kinds of content make players feel uncomfortable or unsafe. This feedback has helped us make improvements to our Community Guidelines.

Our moderation team also does periodic reviews of new content to help find content needing moderation before the community sees it. We’ve implemented better notifications so creators understand why content has received moderation actions, and our team is available to discuss any questions or concerns. We’ve also made several iterations to our Community Guidelines to clarify what is publishable. This process has been more effective and transparent than past approaches.

Downsides with our past approaches

Our old content rating system, a binary choice between “Safe for Work” and “Not Safe for Work”, was inadequate. It had three key shortcomings that frustrated many of you. First, it wasn’t granular enough, which meant that even the SFW category would feature content that made players uncomfortable. Second, the meaning of NSFW is understood differently across the community, which led to confusion about how to rate content. Generally, NSFW is understood as sexually explicit content, but we were using the NSFW label to cover other mature themes like violence and gore. Third, creators didn’t like having their content labeled NSFW when it merely featured mature themes, since the label is widely read as implying sexual content.

We also didn’t adequately account for creator incentives with the previous iteration of the home page. Naturally, creators want their content viewed by as many players as possible. However, the home page could only feature SFW content, a well-intentioned design decision we made with player safety in mind. Ironically, this decision contributed to making the home page less inviting or comfortable for some players over time as creators published content that technically met our criteria for SFW content but still made players uncomfortable. In collaboration with our creator community, we iterated on our guidelines several times, and it only felt like the issue became worse. We’ll talk more about how this experience helped us formulate key principles we implemented in the new system.

We also learned about the “minority rule”, a principle described by Nassim Nicholas Taleb, in which a small minority of a population can have a disproportionate impact on a community. In his article, Taleb describes how people who eat only kosher food may be a small minority in a given area, yet influence all local restaurants and grocery stores to sell only kosher food. A kosher (or halal) eater will never eat nonkosher (or nonhalal) food, but a nonkosher eater isn’t barred from eating kosher. Similarly, someone who can drive a manual transmission vehicle can also drive an automatic, but not the other way around. When automatic vehicles were introduced, drivers who couldn’t handle a manual transmission were in the minority, but over time most cars manufactured have become automatics.

We’ve observed this phenomenon on AI Dungeon as well. Players who enjoy NSFW content can also enjoy SFW content, but someone who enjoys only SFW content will feel uncomfortable or uninterested when seeing NSFW content, and may stop using the platform. Even if NSFW content is a minority of the content available, its presence, and the fact that it isn’t inviting to the majority of players, will cause more of those players to leave, shifting the content balance even further toward NSFW.
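The feedback loop above can be sketched as a toy simulation. All numbers here are illustrative assumptions, not real AI Dungeon data: the NSFW share of surfaced content tracks the audience mix, and a fraction of the SFW-only players who encounter NSFW content leave each period.

```python
def simulate(steps, sfw_only=900, enjoys_any=100, leave_rate=0.5):
    """Toy minority-rule model: each step, the NSFW share of surfaced
    content tracks the audience mix, and a fraction of the SFW-only
    players who encounter NSFW content leave the platform."""
    for _ in range(steps):
        nsfw_share = enjoys_any / (enjoys_any + sfw_only)
        leavers = int(sfw_only * nsfw_share * leave_rate)
        sfw_only -= leavers
    return sfw_only, enjoys_any

# Even though no NSFW-preferring players ever join, the SFW-only
# audience shrinks each step, so the NSFW share of the remaining
# audience steadily grows.
```

Running a few steps of this toy model shows the audience mix drifting toward the minority’s preference even though the minority never grew, which matches the dynamic Taleb describes.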

The unfortunate reality was that our home page sometimes featured content that made many people uncomfortable. We’ve appreciated all the feedback and feature requests to improve our home page, whether you shared it with us directly in emails and DMs, in player surveys, or in public discussions about home page content in our Reddit and Discord communities. Even some of our most prolific creators of NSFW content told us how disappointing it was to share AI Dungeon with friends, only to have them be turned off by content they saw on their first visit. As we conducted user research, we found that new users experienced the same frustration as our current players. One user said they liked the game but just didn't feel like they'd fit into our community. A university class of UX students did a semester-long research project on AI Dungeon and found that the majority of their testers felt uncomfortable at least once during their play experience. It's clear that showing players content they are uncomfortable with causes them to leave, even if they might otherwise enjoy AI Dungeon.

Other considerations

The content created on AI Dungeon also factors into vendor relationships. We’ve discussed recently how maintaining good vendor relationships with multiple providers has allowed us to give you new features and value. For example, we’ve added new AI models at lower costs, and doubled context size available for many models. This also lowers the risk of being too dependent on a single provider, which we learned the hard way can lead to frustration and pain for players. The content we support often comes up in discussions we have with new vendors. However, we’re not simply capitulating to the content policies of vendors—we advocate for creative freedom and, most importantly, player privacy. One of our partners told us recently that a privacy change we insisted be included in our contract has now been added as a company policy that benefits all of their other customers. We work hard to find a balance between supporting your creative freedom, while also being able to work with a variety of vendors.

There are also implications for distribution to consider. Many of you use (and some of you even subscribe to) AI Dungeon through our iOS and Android apps. In the past, we offered AI Dungeon on Steam. Each of these platforms has its own policies about user-generated content, content ratings, and safety filters. We researched and considered these policies as we worked out a strategy that made sense for us. Some platforms intentionally avoid the app stores (both to sidestep the policies and to keep a smaller technical footprint), but we know many of you enjoy our native apps, and for some of you, paying through the app stores is the only way to subscribe. We want to continue providing that option to you.

We've also seen how our content impacts our brand image, which can affect opportunities for growth or partnerships we think you'd like. For instance, we've had interest from third-party IP providers and book authors who'd like to publish interesting content on AI Dungeon or collaborate with us. We've also had contact with streamers and content creators who want to partner with us for events, giveaways, or collaborations. Having an inclusive and welcoming brand is important to be able to collaborate with other people who can add value to the community. Some of you have even expressed concern that our content and brand image could impact fundraising. We're not exploring fundraising anytime soon, but our content and brand would factor into that process if we ever pursued it. As we make AI Dungeon more broadly appealing and our brand reputation more universally agreeable, we can entertain more opportunities for growth and create interesting experiences for our community.

Content Rating Systems

Before designing our new content rating system, we spent considerable time analyzing and researching content rating systems on other platforms to understand how they work, how they are received by communities, and the intentional or unintentional effects of their implementations.

Most content rating systems fall into one of two buckets: age-based rating systems, or content tags/warning systems.

Age-based rating systems assign content to specific categories based on the intended audience's age. The advantage of these systems is they are easy to understand—it’s intuitive that not all content is appropriate (and can even be harmful) for kids and teens. Age is an intuitive concept for everyone. It’s also used frequently in popular rating systems like the ESRB for games, MPA for movies, TV parental guidelines, app store ratings, and more. However, there are some downsides to age-based rating systems. As broad categories, they lack nuance. Although they have more granularity than our binary SFW/NSFW system, there are still a limited number of categories, and they don’t always fit all content neatly.

Tag-based systems offer more granular detail by providing specific tags to indicate or describe the type of content. Tags or warnings can be issued for violence, sexual content, strong language, etc. Archive of Our Own has an extremely robust tag-based system for authors of fan fiction. Some of these systems are specific and granular, but that comes at a cost. It’s harder for creators to understand all the different rating options, and content can be tagged incorrectly. Players must spend more cognitive effort figuring out which tags and content they are comfortable with, and creators spend more time getting all the right tags set up. Yet our experience using Hive AI and Llama Guard makes us optimistic that AI can play a role in adding consistency and ease of use to tag-based systems in the future.
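As a hypothetical sketch of how AI could assist here (the category names and threshold are our own illustration, not Hive’s or Llama Guard’s actual API), a moderation model’s per-category scores could be thresholded into player-facing content warnings:

```python
WARNING_THRESHOLD = 0.8  # illustrative cutoff, not a real production value

def suggest_tags(scores, threshold=WARNING_THRESHOLD):
    """Keep only the categories the model flags with high confidence,
    sorted so the suggested warning list is stable for display."""
    return sorted(tag for tag, score in scores.items() if score >= threshold)

# A creator could then confirm or adjust the suggestions, keeping a
# human in the loop while the AI handles consistency.
tags = suggest_tags({"violence": 0.93, "strong_language": 0.41, "gore": 0.85})
```

Letting creators confirm or edit AI-suggested tags, rather than tagging from scratch, could address both the consistency problem and the extra-work problem at once.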

Many products and industries use a hybrid approach. For instance, Steam shows age ratings and content warnings when browsing games. In fact, the ESRB ratings are frequently hybrid, showing both the age rating and specific content warnings. It wouldn’t be surprising if we ended up with a hybrid system over time.

Insights from other platforms

We didn’t just study the technical aspects of content rating systems; we also looked at how they impact the communities they serve, especially on content platforms. Platforms are delicate ecosystems, and even small changes can bring outstanding benefits or have devastating effects on the community.

One key insight we gained from talking with leaders from other content platforms is that using algorithms for content discovery inevitably leads to surfacing content that doesn't meet users’ expectations. Algorithms are never 100% accurate, and they may incorrectly label, serve, or flag content. In some cases (as we’ve seen on many social media platforms), algorithms can over-index on engagement metrics and end up promoting ever more extreme and provocative content, rather than content that people truly value. We learned that major platforms like Pinterest relied more heavily on curation in their early days, until they developed sophisticated algorithms. Even Facebook and Twitter were self-curated in the early days: you manually curated your list of friends and people you followed, and you’d only see content from those users. Algorithms are useful at scale but require significant investment and dedicated teams to execute effectively. At our stage, we were advised to rely more on curation.

Content policies and enforcement go hand in hand. X/Twitter has made changes to its policies that have transformed its user base and brand image. They removed many of their old policies with the goal of encouraging free speech and balanced discussion. However, they laid off a significant portion of their moderation team around the same time, which caused some users to report that enforcement was slow and inconsistent. Some of their creators felt there was an increase in hate speech and attacks, leading them to reduce their X/Twitter usage or leave altogether. Other users returned to the platform, attracted by the new policies. Brands and advertisers have also reconsidered their presence on X/Twitter as a result of the changes. Today, X/Twitter feels like a completely different platform and community than it did just a few years ago.

In general, large, dramatic changes can be bad for platforms. After banning adult content to respond to child safety concerns, Tumblr lost 30% of its traffic in three months and alienated other artists, writers, and creators—especially those from marginalized communities. Twitch is an example of a company that has evolved its policies more successfully. From their dress code changes in 2018 to gambling restrictions in 2021, they’ve done a better job of making incremental changes, communicating them clearly, listening to feedback, and adjusting. For instance, in 2021 they implemented policies to address “hot tub streams”, some of which they’ve adjusted a few times since launching to address user feedback and changing platform needs. We also believe in this approach of making small, measured changes, listening for feedback from players, and adjusting.

Finally, we’ve seen that these kinds of changes can be much more frustrating to communities when platforms have monetization programs for creators. We’re interested in letting our creators earn money from their content, and we’d much rather find the right balance of creative freedom and player preferences now before people make a living off of their work on AI Dungeon.

Our new strategy

As you can see from the highlights we’ve shared from our research, designing a successful content rating system and strategy requires careful thought and planning. It's challenging to find a balance between multiple conflicting goals. We’re starting to sound like a broken record, but the question that guides us through these difficult challenges is, “What will give our players the most value?”

Let’s explore the next steps we want to take to improve the content experience on AI Dungeon.

Key principles

First, we wanted to focus on strategies to improve content quality overall. Content ratings are really only one factor in what makes content interesting and fun to play, and we believe we can make additional improvements to raise the bar of quality on content overall.

Second, we want to make small, measured changes, monitor for feedback, and iterate. We don’t think we need to solve every problem now. We want to continue to make changes and monitor their effects to make sure we’re moving in the right direction for our players, community, and company. As always, listening and responding to your feedback helps us make a better product.

Third, we’d like inclusion to be a core value of our community. Obviously, our team is ultimately responsible for building systems to help players feel welcome on the platform. But we believe all of you—players and creators—can work together to make AI Dungeon inviting and approachable for everyone. We’re seeing this already. When we launched content ratings to Beta a few weeks ago, our top creators spent time reviewing their content, considering the audiences represented by the new ratings, and updating their content to match. The more we recognize and respect other players’ preferences, the better everyone’s experience can be, and the more the AI Dungeon community can thrive and grow.

Finally, AI Dungeon allows you to create nearly any kind of content you’re interested in. With our Walls Approach, players are free to create nearly any kind of content in unpublished, single-player mode. This isn’t changing—it’s an approach we’re committed to. For published content, the preferences of our broader community play an important role in deciding how that content is represented and distributed.

A principled approach to Content Policies and Moderation

Over the last few years, we’ve worked through dozens of iterations and changes to our Community Guidelines, attempting to make them clearer and better aligned with the expectations players have when visiting AI Dungeon. We’ve learned that defining universally applicable policies is difficult, if not impossible. One approach is to make rules extremely specific, with detailed definitions, examples, edge cases, and exceptions. To accomplish that, our team has had in-depth discussions around riveting topics like “What does ‘bestiality’ mean?” You can probably imagine how unusual it would be to discuss things like this at work with your colleagues.

Despite our best efforts, we frequently encountered cases where our Community Guidelines didn’t feel like they were serving our community well. Our moderation team frequently faced two types of difficult cases:

  • Content that didn’t violate any specific rule, but felt wrong or inappropriate
  • Content that did violate one of our rules, but seemed allowable and safe

In both instances, trying to adhere to a strict rule felt like we were doing a disservice to the community by either allowing content that shouldn’t have been allowed or by holding back good content that didn’t follow the letter of the law.

The truth is context matters a lot. As communications between our moderation team and creators improved, we found better success by simply working directly with creators on questions about their specific content. Together, we’ve been able to figure out the audience their content is intended for, and make sure it’s rated correctly. In some cases, creators will make changes we recommend in order to move the content into a different rating category.

Our new content policies are more principled and less specific. This has led to some confusion and frustration from creators. We’ve had people ask questions like, “If someone gets shot in my scenario, does it have to be rated ‘M’?” The answer is, it depends, and that can be frustrating. However, by working with you one-on-one, we think it will help us find the right rating for your content together in a way that honors your creativity and how the community will respond to it.

We want to make sure you know our moderation team is available to discuss your specific content, explore how a given theme fits into your story, and give you feedback. GuardianLynx and Rogue2 on Discord, and our support email at support@aidungeon.com, can answer any questions you have. We’re more than happy to work with you.

Content Rating System

We decided to base our content rating system on the ESRB ratings. Heroes, a new game mode we’re creating, introduces a number of game mechanics to AI Dungeon. And even though the current AI Dungeon is used in multiple ways, it’s still primarily regarded as a game. Because of that, it makes sense for us to adopt a convention widely understood in gaming for our content rating system.

As an age-based system, the ESRB ratings are easily understandable and provide additional granularity compared to our previous rating system. We can still formalize a tag-based system for rating content to turn it into a hybrid system down the road.

Adopting the ESRB system helps us to make AI Dungeon more accessible, friendly, and inclusive to new players. The new ratings identify audiences and player preferences first, in contrast to our old system, which simply indicated the presence or absence of NSFW subject matter. Players can more easily identify with a given category and feel confident the content they find will align with their specific preferences. Parents will appreciate having an “E” category for kids to play. “Teen” lets you engage with a wider variety of content within safe parameters. “Mature” lets creators include themes like violence, gore, strong language, and some types of sexual content.

The new ratings will let everyone who visits AI Dungeon feel at home with content they enjoy. They will also create more distinct categories for creators to write for and help them be discovered by audiences who will most appreciate the content they create.

About Unrated and Unpublishable Content

Our content rating system has two key differences from the ESRB ratings: “Unrated” and “Unpublishable” content.

Unpublishable content is not a new concept for AI Dungeon. Our Community Guidelines specify subject matter that is not publishable on AI Dungeon. These guidelines have served the community well—we’re not making any major changes to these policies. Visit our guidebook to learn more about our Community Guidelines.

The “Unrated” category is new to AI Dungeon and unique to our system. It has two purposes.

First, the “Unrated” category is the default rating for published content unless a creator explicitly labels it as “E”, “T”, or “M”. In other words, “Unrated” can mean “Not yet rated”. Creators must intentionally choose a rating that best fits their scenario. This means players can be confident that the content in “E”, “T”, and “M” represents what they’d expect within those ratings.

Second, the “Unrated” category is used for any publishable content (i.e. doesn’t violate our Community Guidelines) that doesn’t fit within the “E”, “T”, and “M” ratings. If you have questions about whether your content should be labeled as “Unrated”, please reach out to GuardianLynx and Rogue2 on Discord, and our support email at support@aidungeon.com.
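To make the two purposes concrete, here is a minimal sketch of the rating model in Python. The names and the default-to-Unrated behavior mirror the description above, but the code itself is our illustration, not AI Dungeon’s actual implementation:

```python
from enum import Enum
from typing import Optional

class ContentRating(Enum):
    E = "Everyone"
    T = "Teen"
    M = "Mature"
    # Default rating, and the home for publishable content outside E/T/M.
    # (Content violating the Community Guidelines is unpublishable and
    # never reaches this step at all.)
    UNRATED = "Unrated"

def publish(scenario: dict, rating: Optional[ContentRating] = None) -> dict:
    """Published content defaults to Unrated unless the creator explicitly
    chooses E, T, or M, so "Unrated" can simply mean "not yet rated"."""
    scenario["rating"] = rating if rating is not None else ContentRating.UNRATED
    return scenario
```

Because nothing lands in “E”, “T”, or “M” without an explicit choice, players browsing those categories can trust that a creator deliberately rated the content for that audience.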

Some of you have shared feedback about this category's dual purpose and have suggested we create separate ratings for each function: one for “not yet rated” and another for adult content. We explored this option during our research, but discovered there is far more upside to a single “Unrated” category.

As we thought about the implications of creating a prominent category for adult content, we realized that a multipurpose “Unrated” category lets you continue to publish and enjoy the same diverse and varied content you’re playing today while also balancing player preferences, vendor relationships, and community growth. It lets us keep AI Dungeon available on multiple platforms and payment systems. It means that network administrators at schools, universities, businesses, government offices, and even internet providers can allow AI Dungeon on their networks, so you can enjoy AI Dungeon wherever you connect to the internet. And it means we can continue working with multiple high-quality tech partners and keep lowering AI costs so that we can provide you with more value.

Recent examples of those benefits include new AI models (including models for the Free tier), doubled context, and more context per credit with Ultra models. Looking forward, we anticipate others, like potentially offering the memory system to free players, partnering with interesting IP owners or authors for unique content on AI Dungeon, and growing our team so we can accelerate our speed of development.

It’s also worth noting that these are the same reasons why some content is not publishable on our platform at all.

This approach and strategy will bring the most value to you today and in the future.

Content Quality

We’re really excited about the work we’re doing to improve content quality on AI Dungeon.

We’ve already announced that we hired WanderingStar, one of our top community creators. He’s joined our team as our Lead Narrative Designer and will spearhead the work on content quality. We're targeting three main areas right now.

  1. First-Party Content—WanderingStar is already developing some new content that will be published as AI Dungeon Originals. He is an expert in crafting stories that work well with AI, and we’re all looking forward to playing them when they are released. We expect to publish new content regularly. Also on the roadmap is improving the Quick Start prompts.
  2. Curation—The new home page will now feature manually curated sections. We can easily update these sections with new content or add entirely new sections. We could have themes, holiday contests, and more.
  3. Scenario Workshop—Our volunteer moderators set up a new channel for us on our Discord server called “scenario-workshop”. It’s a place where creators can ask questions and get feedback on the content they’re working on. Crafting a great scenario on AI Dungeon is a skill, and our community has some great coaches to help you improve your craft. Other creators, including WanderingStar, chime in and offer feedback on how to improve content. We’re already seeing success from this new channel!

We’d love feedback and suggestions on how we can improve the quality of our content.

Ideas for the future

The changes we’ve described above are important next steps to helping you find great content to play on AI Dungeon. We are also exploring other ideas to take this even further. Some ideas being considered are:

  • Content tagging and content warnings—We could strengthen the “E,” “T,” and “M” ratings with specific tags that better indicate to players what content they can expect for a scenario. We’d like to explore using AI to provide this feature, to maximize labeling consistency and add minimal extra work for creators.
  • Creator tools and improvements—You’ve requested, and we’ve identified, a number of improvements and changes to make creating content easier and more efficient. We want you to be able to focus on great content and not have to worry about the platform getting in your way. Things like desktop views, Story Card improvements, and better scenario creation flows are up for consideration.
  • Personalization algorithms—As we grow and scale, we can see ourselves investing in developing and maintaining personalized feeds or a recommendation engine to help players find content that suits their interests and preferences.

These ideas are all very early, and we’d love to hear whether you’d like to see them in AI Dungeon, along with anything you think we should keep in mind as we consider them more seriously in the future.


If you’re still here, you deserve an achievement badge for reading several thousand words on a feature many people probably haven’t even thought about. We hope you found it interesting to hear highlights from the research, analysis, and design considerations behind content ratings and the other related features. Long posts have started to become a trend. Please let us know if you enjoy reading things like this, and we’ll keep writing them.

We’re excited to see how the new content ratings improve your experience on AI Dungeon. Please reach out and share your feedback and suggestions. We will continue to evolve and adjust to meet your needs. Let us know if you have any other ideas to help you find great content on AI Dungeon. Thanks for being a part of our community. Happy adventuring!