Feedback Gallery

Collect user feedback on model performance, data accuracy, UI design, and everything else.

Any inference pipeline can be instrumented to collect feedback from end users. This feedback is highly configurable and is best thought of as a flexible survey widget that can be associated with any of the other steps in a single inference pipeline. The feedback can then be aggregated within the AI Squared platform for analysis. Each type of feedback that can be gathered is described below, along with a brief example.
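First, as a rough illustration of how a feedback widget might be associated with a pipeline step, consider the sketch below. It is only an illustrative sketch in Python: the structure and every field name (`steps`, `attach_to_step`, `kind`, `questions`, and so on) are assumptions made for this example, not the platform's actual configuration format.

```python
# A minimal, illustrative sketch of a feedback widget attached to an inference
# pipeline. Every field name here is hypothetical, not the AI Squared schema.
inference_pipeline = {
    "name": "example-pipeline",
    "steps": [
        {"name": "run-model", "type": "analytic"},      # produces the prediction
        {"name": "show-results", "type": "rendering"},  # displays the prediction
    ],
    "feedback": {
        "attach_to_step": "show-results",   # survey widget tied to one step
        "kind": "simple",
        "questions": ["Was this prediction helpful?"],
    },
}
```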

Simple feedback

Simple feedback is the basic AI Squared feedback mechanism. It lets you specify questions to ask based on the prediction (i.e., the output) of the analytic. This method supports user-provided questions as well as open text fields where users can give unstructured feedback.
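As a rough sketch of what such a configuration could contain, the snippet below pairs a prediction-oriented question with an open text field. The field names (`kind`, `questions`, `type`, `options`) are hypothetical and shown only for illustration, not the platform's actual schema.

```python
# Hypothetical simple feedback configuration; field names are illustrative only.
simple_feedback = {
    "kind": "simple",
    "questions": [
        {
            "text": "Does this prediction look correct?",  # asked about the analytic's output
            "type": "single_choice",
            "options": ["Yes", "No"],
        },
        {
            "text": "Anything else we should know?",       # open text for unstructured feedback
            "type": "open_text",
        },
    ],
}
```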

Binary feedback

Binary feedback allows users to provide feedback on the accuracy of a prediction in cases where an analytic has two possible outcomes (e.g., a binary classification problem).
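A hedged sketch of a binary feedback configuration, using a churn classifier purely as an illustrative example (the field names are again hypothetical, not the platform's schema):

```python
# Hypothetical binary feedback configuration for a two-outcome analytic.
binary_feedback = {
    "kind": "binary",
    "question": "Is this customer actually likely to churn?",
    "options": ["Yes", "No"],   # mirrors the two possible outcomes of the analytic
}
```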

Multiclass feedback

Multiclass feedback is similar to binary feedback, but lets users choose a correction from a longer list of options. See the Model Testing and Evaluation guide for an example of this feedback mechanism.
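An illustrative multiclass sketch, with a hypothetical label set and field names, might simply enumerate all of the possible corrections:

```python
# Hypothetical multiclass feedback configuration; the options mirror the
# analytic's full label set so a user can supply the correct label.
multiclass_feedback = {
    "kind": "multiclass",
    "question": "Which category best fits this support ticket?",
    "options": ["Billing", "Technical issue", "Account access", "Other"],
}
```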

Step feedback

Step feedback is designed to solicit feedback about the UI of any of the rendering steps used to create a visualization.
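A hypothetical step feedback sketch could reference a single rendering step by name and ask about its presentation; the step name and field names below are illustrative only:

```python
# Hypothetical step feedback configuration targeting one rendering step.
step_feedback = {
    "kind": "step",
    "step": "show-results",   # the rendering step whose UI is being evaluated
    "question": "Is this visualization easy to read?",
    "options": ["Yes", "Somewhat", "No"],
}
```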

Model feedback

Model feedback is designed to allow end users to provide qualitative feedback about the entire 'model', here used interchangeably with 'inference pipeline'. Rather than asking about the accuracy or appearance of an inference pipeline, these questions focus on its impact on the end user.
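A final illustrative sketch, again with hypothetical field names, shows how model feedback might focus on impact rather than correctness:

```python
# Hypothetical model feedback configuration focused on overall impact rather
# than accuracy or appearance.
model_feedback = {
    "kind": "model",
    "questions": [
        {
            "text": "Does this model save you time in your day-to-day work?",
            "type": "single_choice",
            "options": ["Yes", "No"],
        },
        {
            "text": "How could this model be more useful to you?",
            "type": "open_text",
        },
    ],
}
```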
