Model Testing & Evaluation

Using AI Squared to audit your AI models

Auditing models before and while they are in production is critical for limiting the harms that stem from poor model performance (e.g., bias, inaccuracy). AI Squared empowers you to instrument the models you use in the browser with feedback widgets, so you can capture feedback on model performance and usefulness from across the model's user base.
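As a rough sketch of what a feedback widget boils down to, the TypeScript below renders a thumbs-up / thumbs-down prompt next to a model's output and captures the user's response as a structured record. The `FeedbackRecord` shape and `renderFeedbackWidget` helper are hypothetical illustrations, not the AI Squared API:

```typescript
// Hypothetical shape of a single piece of model feedback.
// These field names are illustrative, not the AI Squared schema.
interface FeedbackRecord {
  modelId: string;        // which model produced the output
  prediction: string;     // the output the user saw
  helpful: boolean;       // thumbs up / thumbs down
  correction?: string;    // optional user-supplied correction
  timestamp: number;      // when the feedback was given
}

// Render a minimal thumbs-up / thumbs-down widget next to a model
// output and hand the resulting record to a caller-supplied callback.
function renderFeedbackWidget(
  container: HTMLElement,
  modelId: string,
  prediction: string,
  onFeedback: (record: FeedbackRecord) => void,
): void {
  for (const helpful of [true, false]) {
    const button = document.createElement("button");
    button.textContent = helpful ? "👍" : "👎";
    button.addEventListener("click", () => {
      onFeedback({ modelId, prediction, helpful, timestamp: Date.now() });
    });
    container.appendChild(button);
  }
}
```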

A blog post detailing how we made the .air file in the video above, as well as general information about using AI in the browser, can be found here:

Running AI Squared Locally

If you run the local version of the AI Squared Chrome extension, you can provide feedback on models and log that feedback to a database directly in the extension. Your data stays yours - perfect for quick experimentation or for hobbyists!
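As a minimal sketch of what logging feedback locally can look like in an extension context, the snippet below appends records to `chrome.storage.local` (a standard Chrome extension storage API). It reuses the hypothetical `FeedbackRecord` type from the earlier sketch, and the `"feedbackLog"` storage key is likewise an illustrative choice:

```typescript
// Append a feedback record to local extension storage so the data
// never leaves the user's machine. Assumes the hypothetical
// FeedbackRecord type from the earlier sketch and the @types/chrome
// type definitions; the "feedbackLog" key is an assumption.
async function logFeedbackLocally(record: FeedbackRecord): Promise<void> {
  const { feedbackLog = [] } = await chrome.storage.local.get("feedbackLog");
  feedbackLog.push(record);
  await chrome.storage.local.set({ feedbackLog });
}
```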

Collecting Feedback with the AI Squared Platform

If you're part of a large organization, you likely have a team performing model piloting. In this case, all feedback and corrections provided by users of the AI Squared extension are passed to a database within an on-premises deployment of the AI Squared platform. This lets you aggregate model feedback across tens, hundreds, or even thousands of AI testers, ensuring that your enterprise-grade AI is ready for production.
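The platform's ingestion interface isn't documented in this section, but conceptually each extension forwards its feedback records to a central endpoint where they can be aggregated. A rough sketch, assuming a hypothetical `/api/feedback` endpoint and bearer-token auth on your on-premises deployment:

```typescript
// Forward a feedback record to an on-premises AI Squared deployment.
// The endpoint path, payload shape, and bearer-token auth are all
// assumptions for illustration, not a documented API.
async function sendFeedbackToPlatform(
  platformUrl: string,
  token: string,
  record: FeedbackRecord,
): Promise<void> {
  const response = await fetch(`${platformUrl}/api/feedback`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(record),
  });
  if (!response.ok) {
    throw new Error(`Feedback upload failed: ${response.status}`);
  }
}
```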
