A Message to Our Community: Building the future of Social Interactions on AI Dungeon

Latitude is working on building an entirely new Explore and social experience to enable users to create and find amazing AI-generated experiences safely. As part of this, we are taking down the Explore page as it is currently implemented in AI Dungeon so we can rebuild it safely for the long run.

Many of us have seen the ebbs and flows of the Internet, watching our beloved childhood games, forums, and websites disappear overnight, especially those of us who have been online since the very early days. Like many other games and startups, AI Dungeon began as a hackathon project, created by our CEO, Nick, back in May 2019. Since then we’ve released AI Dungeon 2, officially formed our company 16 months ago, and doubled in size here at Latitude since January. We have exciting new developments we’re elated to share with players and creators soon. We also want to discuss the work we have been doing to improve the safety of our game and our communities, and to ensure Latitude and AI Dungeon stick around for generations to come.

Through building AI Dungeon, we’ve faced challenges typically only seen by larger, established companies. We value our community and have heard your requests, comments, and concerns. This has led to challenging discussions amongst ourselves about where we want to be, who we want to serve, and how we’ll get there. We’ve hit the part of our main storyline quest where we’ve decided to face off against the terrifying bosses now rather than wait for them to grow into tougher fights. We’re doing this because of our deep commitment at Latitude to amplify our most aspirational human qualities through our games and to foster communities you feel safe in. This requires building trust with our users and community. We believe trust must be earned, and we will proactively work to earn and keep your trust as we continue to grow.

Our goal with AI Dungeon is to connect players and creators. For features like Explore, this means we have a responsibility to create the best possible experience for you to share content you’re thrilled to show the world and discover content that appeals to you, and to provide a safe environment for our community to interact. We’ve realized Explore has not lived up to those expectations, and we need to make changes to improve it. We have a plan for doing just that.

This is why we’re taking down the Explore page and social features on AI Dungeon as currently implemented today. We are aggressively working to update our systems, community guidelines, and policies to rebuild the experience for Explore and other social features safely and apply our approach to safety moving forward as we aim to become the predominant global marketplace for AI games. Here are some of the ways we plan to achieve this:

Community Guidelines

We will set the tone and level-set expectations for our users by drafting new community guidelines and policies that reflect the type of community we want to foster. Community guidelines and policies are living documents that change with new advances in technology, changes in the societies we live in, and new things we learn from direct experience or knowledge sharing.

We want to get to know our users better and form strong connections. In establishing and maintaining a community with psychological safety and trust, we must reject or remove users who demonstrate that they do not respect the community, our values, or our boundaries. At the same time, we recognize even the best of us will make mistakes. We do not want to be punitive and apply zero-tolerance policies to every case; instead, we seek to be educational, helping members of our community learn from their mistakes and restore their relationships with the broader community, to the benefit of all users moving forward.

Safety Features

We will be adding features and tools that give users control over their experience and empower them to shape it on our platform. This will include features such as the ability to mute or block other users.
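As a rough illustration of what mute and block controls can look like under the hood (a minimal sketch using assumed names and data structures, not a description of our actual implementation), content from muted or blocked users is simply filtered out of what you see, and blocking prevents interaction in both directions:

```typescript
// Illustrative sketch only: types and function names here are assumptions.

type UserId = string;

interface SafetyPreferences {
  mutedUsers: Set<UserId>;   // their content is hidden from you
  blockedUsers: Set<UserId>; // content hidden and interaction prevented both ways
}

interface PublishedScenario {
  id: string;
  authorId: UserId;
  title: string;
}

// Filter a feed so a viewer never sees content from users they muted or blocked.
function filterFeedForViewer(
  feed: PublishedScenario[],
  prefs: SafetyPreferences
): PublishedScenario[] {
  return feed.filter(
    (item) =>
      !prefs.mutedUsers.has(item.authorId) &&
      !prefs.blockedUsers.has(item.authorId)
  );
}

// Prevent direct interaction (e.g. comments) when either side has blocked the other.
function canInteract(
  viewer: UserId,
  author: UserId,
  viewerPrefs: SafetyPreferences,
  authorPrefs: SafetyPreferences
): boolean {
  return (
    !viewerPrefs.blockedUsers.has(author) &&
    !authorPrefs.blockedUsers.has(viewer)
  );
}
```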

Moderation

We are making key improvements to our moderation tools and systems for both moderators and users. We will be providing training and, later on, continued education to stay up to date on industry best practices in areas such as content moderation and psychological safety. This will include using our machine learning capabilities to expand our ability to investigate and respond to incidents while protecting our moderators and employees.
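As a rough illustration of how machine learning can assist moderation (a minimal sketch with assumed thresholds, names, and actions, not a description of our actual systems), a classifier score can route flagged content either to automatic hiding pending review or to a human review queue, reducing how much of the most harmful material moderators must see directly:

```typescript
// Illustrative sketch only: the classifier, thresholds, and actions are assumptions.

interface FlaggedContent {
  contentId: string;
  text: string;
  reportCount: number; // user reports received so far
}

interface TriageDecision {
  contentId: string;
  action: "auto_hide_pending_review" | "human_review" | "no_action";
}

// A score in [0, 1] would come from a machine learning model; here it is a placeholder.
type Classifier = (text: string) => number;

function triage(
  items: FlaggedContent[],
  score: Classifier,
  highThreshold = 0.95,
  lowThreshold = 0.5
): TriageDecision[] {
  return items.map((item) => {
    const s = score(item.text);
    if (s >= highThreshold) {
      // High-confidence matches are hidden immediately but still reviewed by a human.
      return { contentId: item.contentId, action: "auto_hide_pending_review" };
    }
    if (s >= lowThreshold || item.reportCount > 0) {
      // Borderline scores or any user report go to the human review queue.
      return { contentId: item.contentId, action: "human_review" };
    }
    return { contentId: item.contentId, action: "no_action" };
  });
}
```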

Transparency

Building trust requires being transparent about our processes, to the extent we can do so safely with our community and in compliance with the laws of the United States. In future follow-ups we will provide clear expectations of what the process looks like when material gets flagged or reported: how it gets reviewed and by whom, and what specific actions such as takedowns or permanent enforcement actions (e.g. bans) look like. We will also be working on establishing a transparency report and a cadence for publishing reports on a wide array of our safety and transparency efforts.

Collaboration

When Latitude first started, Nick stated, “...our community will no doubt continue to be a driving force behind the decisions we make and our ability to execute them.” We will continue to engage with users and our community to better understand what people need and want from their experience. In addition, we will be seeking out partnerships and collaborations with legal counsel, nonprofits, and other stakeholders as we develop our Trust & Safety program here at Latitude and participate in knowledge sharing across the industry.

These changes and improvements reflect an early, long-term investment in our users, our community, AI Dungeon, and our company. Safety, like security, is an ever-evolving landscape. We are proud to take these next steps as a company and community, committed to protecting our users and employees, and we recognize the work that remains to be done. We’re working to realize our dreams of the Internet we want to see ourselves in, with the games we want to play now and in the future, powered by AI. In the days ahead, we welcome any additional feedback, concerns, or questions about the work we’ve shared here at safety@latitude.io.

Thank you for joining us in this journey. We are here to stay.