Our Technology

Creating an inference pipeline with AI Squared

AI Squared makes it easy to integrate information into an end user's workflow. To do this, you build out a sequence of steps, which we refer to as an inference pipeline: a chain of steps that run in order.

Note that each step passes its output to the next. Also note that not every inference pipeline requires every step; the examples in the tutorial library range from pipelines that use every step to ones that need only a handful.
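
As a purely conceptual sketch (this is not the AI Squared API), the Python snippet below shows the idea: a pipeline is an ordered list of steps, and each step receives the previous step's output.

```python
# Conceptual illustration only -- not the AI Squared API.
# A pipeline is an ordered list of steps; each step consumes the previous step's output.

def harvest(_):
    # Gather raw input (a hard-coded stand-in for page or user content)
    return "The quick brown fox"

def preprocess(text):
    # Lowercase and tokenize before handing the data to the analytic
    return text.lower().split()

def analytic(tokens):
    # Stand-in "model": score the input by token count
    return {"score": len(tokens)}

def postprocess(result):
    # Turn the raw model output into something human-readable
    return f"Token count: {result['score']}"

pipeline = [harvest, preprocess, analytic, postprocess]

data = None
for step in pipeline:
    data = step(data)  # each step's output feeds the next

print(data)  # -> "Token count: 4"
```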

Steps in the Inference Pipeline

AI Squared inference pipelines are composed of a series of steps, which you can adjust to the specific needs of your use case.

Harvesting

The first step in the typical inference pipeline is gathering some information from the end user (e.g. a prompt for a chatbot) or from a webpage or webapp (e.g. a customer's name in their CRM record or an image embedded in a webpage), which is then passed to a model or used as a query parameter. AI Squared provides several harvesters for this purpose.

Harvesters Gallery
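
As a purely illustrative example (not an actual AI Squared harvester), the snippet below shows the kind of extraction a harvester performs: pulling an email address out of page text with a regular expression.

```python
import re

# Conceptual example of a "harvester": pull a candidate value out of page text.
# The AI Squared extension does this in the browser; here we use a regex on a string.

page_text = "Customer: Jane Doe <jane.doe@example.com> opened ticket #4821"

emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", page_text)
print(emails)  # ['jane.doe@example.com'] -- this value would be passed to the next step
```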

Preprocessing

After information is harvested, it might need to be processed before a model or analytic can use it (e.g. a body of text might need to be tokenized or have special characters removed).

Preprocessors Gallery
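
For illustration only (not an AI Squared preprocessor), here is the sort of transformation a preprocessing step might apply: removing special characters and tokenizing the harvested text.

```python
import re

# Conceptual preprocessing step: strip special characters and tokenize
# before the text is sent to a model or analytic.

def preprocess(text: str) -> list[str]:
    cleaned = re.sub(r"[^A-Za-z0-9\s]", "", text)  # remove special characters
    return cleaned.lower().split()                 # simple whitespace tokenization

print(preprocess("Hello, world! This is AI Squared."))
# ['hello', 'world', 'this', 'is', 'ai', 'squared']
```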

Analytic

Using AI Squared is all about connecting people with the models and data they need within their workflow, and this step is the heart of that. Here, we define the machine learning model or analytic we want to use, which can run locally in the browser or be deployed to a remote endpoint (e.g. SageMaker). We can also connect to remote databases using ReverseML.

Analytics Gallery
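
For the remote-endpoint case, the sketch below shows what calling a SageMaker endpoint from Python looks like using boto3. The endpoint name and payload shape are hypothetical, and in an actual inference pipeline this call is made by the configured analytic step rather than by hand-written code.

```python
import json
import boto3

# Illustrative only: invoke a remotely deployed SageMaker endpoint.
# "my-text-classifier" and the payload format are hypothetical.

client = boto3.client("sagemaker-runtime")

response = client.invoke_endpoint(
    EndpointName="my-text-classifier",
    ContentType="application/json",
    Body=json.dumps({"inputs": ["hello world"]}),
)

prediction = json.loads(response["Body"].read())
print(prediction)
```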

Post-Processing

In some cases, the output from a model or analytic needs to be transformed before it makes sense to a human. This step handles that transformation (e.g. applying a label map to the output of a classification model).

Post-Processors Gallery
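
As a conceptual illustration (not the AI Squared post-processing API), applying a label map to a classifier's output might look like this:

```python
# Conceptual post-processing step: map raw class probabilities from a
# classification model onto human-readable labels.

LABEL_MAP = ["negative", "neutral", "positive"]  # hypothetical label map

def postprocess(probabilities: list[float]) -> str:
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return LABEL_MAP[best]

print(postprocess([0.05, 0.15, 0.80]))  # -> "positive"
```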

Rendering

At this point in the process, we have the information we want to make available to end users. This step lets us define how, when, and where that information is integrated into the end user's workflow, providing off-the-shelf and fully customizable rendering components that make it easy to visualize relevant information.

Rendering Components Gallery
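
The actual rendering components are configured rather than hand-coded, but as a rough illustration, rendering amounts to turning the post-processed result into something visual placed alongside the harvested content, for example a small badge:

```python
# Conceptual illustration of a rendering step: wrap the post-processed result
# in a snippet of markup that could be shown next to the harvested element.

def render_badge(label: str) -> str:
    color = {"positive": "green", "neutral": "gray", "negative": "red"}.get(label, "gray")
    return (
        f'<span style="background:{color};color:white;'
        f'padding:2px 6px;border-radius:4px;">{label}</span>'
    )

print(render_badge("positive"))
```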

Feedback

Now that end users can view the information you are providing, you can gather their input on its accuracy and relevance, as well as on how it was presented. This is invaluable for, e.g., A/B testing models, analytics, and UI designs with smaller study groups before rolling an inference pipeline out to the entire end user population.

Feedback Gallery
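
As a made-up illustration of how such feedback could be used, the snippet below tallies end-user ratings for two pipeline variants, the kind of comparison you might run during an A/B test:

```python
from collections import Counter

# Conceptual feedback aggregation: tally ratings for two pipeline variants.
# The feedback records here are fabricated for illustration.

feedback = [
    {"variant": "A", "rating": "helpful"},
    {"variant": "A", "rating": "not helpful"},
    {"variant": "B", "rating": "helpful"},
    {"variant": "B", "rating": "helpful"},
]

tally = Counter((entry["variant"], entry["rating"]) for entry in feedback)
print(tally)  # e.g. Counter({('B', 'helpful'): 2, ('A', 'helpful'): 1, ...})
```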

Utilizing an Inference Pipeline

Once the inference pipeline is created, using either the Python API or the no-code environment in the platform, it is compiled into the AI Squared file format (.air). From there, it can be managed in the AI Squared platform and used directly by users of the AI Squared extension.
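
As a rough sketch of the Python route, the example below is based on the open-source aisquared package as the author understands it; the class names, parameters, and defaults shown are assumptions and may differ across versions, so treat it as an outline of the flow rather than a reference.

```python
import aisquared  # assumes the open-source aisquared Python package is installed

# NOTE: class and parameter names below are assumptions and may not match
# your installed version of the package -- consult its documentation.

# Harvest all text from the page
harvester = aisquared.config.harvesting.TextHarvester(how="all")

# Send the harvested text to a remotely deployed model (hypothetical URL)
analytic = aisquared.config.analytic.DeployedAnalytic(
    url="https://example.com/predict",
    input_type="text",
)

# Map the raw model output onto human-readable labels
postprocesser = aisquared.config.postprocessing.BinaryClassification(
    label_map=["negative", "positive"]
)

# Render the result within the document
renderer = aisquared.config.rendering.DocumentRendering()

# Assemble the steps and compile them into a .air file
config = aisquared.config.ModelConfiguration(
    name="sentiment-example",
    harvesting_steps=harvester,
    preprocessing_steps=None,
    analytic=analytic,
    postprocessing_steps=postprocesser,
    rendering_steps=renderer,
)
config.compile()  # writes sentiment-example.air
```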
