<aside> 👋🏽
This is an overview of how Playlab approaches trust and safety. Have feedback or ideas? Reach out to [email protected].
</aside>
We incorporated as a nonprofit to better align our incentives and hold ourselves accountable for impact and safety.
Right now, Playlab is invite-only. The primary way to join Playlab is through professional opportunities offered by us or trusted partners like Teach For America / Teach For All, Relay GSE, Berkeley, and Leading Educators.
We think that the AI models used in public education should be transparent and open to interrogation for bias and interpretability. In the short term, we use closed AI models so that our community can experience frontier technology and explore how it can be used to impact teaching and learning. In the long term, we aim to prioritize fully open-source models.
Every app in Playlab benefits from additional bias and alignment guidance provided to the AI models. Most user inputs and all model outputs have automated moderation enabled. Outputs that fail moderation end the conversation, and users can manually flag outputs for issues related to bias, appropriateness, and hallucination. Every time someone creates a new app in Playlab, they start from a template that encourages them to consider bias.
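As a rough illustration of how a moderation gate like this can work, here is a minimal sketch of a moderation-checked chat turn plus a manual flagging hook. The function names, the keyword-based check, and the data shapes are illustrative assumptions, not Playlab's actual implementation.

```python
# Sketch of a moderation-gated chat turn. All names here are hypothetical
# placeholders; a real system would call a dedicated moderation model.

BLOCKED_TERMS = {"example-blocked-term"}  # stand-in for a real moderation check
FLAG_CATEGORIES = {"bias", "appropriateness", "hallucination"}


def passes_moderation(text: str) -> bool:
    """Stand-in moderation check based on a simple keyword list."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)


def handle_turn(conversation: list, user_input: str, generate_reply) -> dict:
    """Run one chat turn, ending the conversation if moderation fails."""
    # Check the user's input before it reaches the model.
    if not passes_moderation(user_input):
        return {"status": "ended", "reason": "input failed moderation"}

    reply = generate_reply(conversation, user_input)

    # Every model output is checked before it is shown to the user;
    # a failed check ends the conversation.
    if not passes_moderation(reply):
        return {"status": "ended", "reason": "output failed moderation"}

    conversation.append({"role": "assistant", "content": reply})
    return {"status": "ok", "reply": reply}


def flag_output(flags: list, message_id: str, category: str) -> None:
    """Record a manual user flag on a model output."""
    if category not in FLAG_CATEGORIES:
        raise ValueError(f"unknown flag category: {category}")
    flags.append({"message_id": message_id, "category": category})
```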
Our approach to product development includes red teaming; testing higher-risk releases with a smaller subset of users we co-design with; in-product disclosures; and ongoing professional learning. We are also dedicating resources to developing improved age-appropriate and education-appropriate moderation models.
Through professional learning, courses, content, and coaching, we support our community members in designing guardrails for their specific projects. App usage is reviewable and inspectable by the app’s creators, enabling teachers to understand how students are using the resources they are given.
We’re still very early in this AI wave. We encourage Playlab community members and partners to pilot and test how Playlab apps might drive impact in their context, both so they can test for and guard against harm and bias, and so they can prioritize the projects that move impact forward.
In collaboration with partners like the Chan Zuckerberg Initiative and Leading Educators, we’re developing rubrics and tooling to evaluate the quality, impact, and safety of apps built on Playlab. Our community can use these to improve their own projects and to gauge the quality and safety of projects created by others.