Sales Opportunity Score Dashboard

Building a Relationship Manager's Best Friend With No Code

Imagine your goal is to empower salespeople to make the best decisions possible: what would that solution look like? For several AI Squared customers, the answer has been to combine analytics such as propensity scores and product recommendations with useful information about sales prospects in a single dashboard, integrated directly into a CRM tool.

This example shows you how to build this dashboard in the no-code editor in the AI Squared platform. For an overview of how to build this same use case with our Python API, click here:

Sales Opportunity Score Dashboard

Here's a view of the dashboard that we are building in this example:

Harvesting

The first step of the AI Squared process is harvesting - pulling information from the end-user environment to help define the rest of the integration.

  • The first step in the Harvesting phase is to instantiate an object from the appropriate harvesting class, based on our predictive model. Because we are dealing with text data in this scenario, we select the Text Harvester class from the available harvesting options.

  • After selecting the harvester class, the next step is to define a regular expression (regex) that extracts the lead's first and last name from the end-user environment. The SOS dashboard is keyed on this lead name, so the regex acts as an extractor for this specific pattern within the data.

  • The lead name takes the form of a first name followed by a last name; accordingly, two examples of this name format are provided where the regex pattern is specified.

  • The regex flags parameter can be used to alter the behavior of the regex search. In this case, we use 'gu'. The 'g' flag is the global search flag, which causes the regex engine to find all matches rather than stopping after the first match. The 'u' flag, on the other hand, enables full Unicode matching. Instead of treating high-code-point characters as two separate units, it treats them as a single character.
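The behavior described above can be sketched in Python. The pattern below is hypothetical (your actual regex will depend on your data); in the browser the harvester uses a JavaScript regex with the 'gu' flags, while in Python 3 patterns are Unicode-aware by default (mirroring 'u') and `re.findall` scans the whole text for every match (mirroring 'g'):

```python
import re

# Hypothetical pattern for a capitalized first name followed by a last name.
# The real harvester regex depends on how names appear in your environment.
LEAD_NAME_PATTERN = r"[A-Z][a-z]+ [A-Z][a-z]+"

text = "Lead: Jane Doe met with John Smith about a renewal."

# re.findall returns every non-overlapping match, like the 'g' flag in JS.
names = re.findall(LEAD_NAME_PATTERN, text)
print(names)  # ['Jane Doe', 'John Smith']
```

Note that "Lead:" is not matched because the colon breaks the first-name/last-name pattern; only full two-word names are extracted.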

Analytics

For this example, we are making use of 2 machine learning models (computing the propensity score and product recommendations) and a data source containing static information about the prospective customer. For simplicity, we consider the 2 models as running periodic batch inference, with their outputs merged with the static data source, resulting in a single CSV file in S3 that we need to pull information from.
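The batch step described above can be sketched as follows. This is an illustrative, stdlib-only Python sketch of merging the two models' outputs with the static prospect data into one CSV; all column names and values here are hypothetical, not the platform's schema:

```python
import csv
import io

# Hypothetical static prospect data and batch model outputs, keyed by lead name.
static = {
    "Jane Doe": {"Industry": "Finance"},
    "John Smith": {"Industry": "Retail"},
}
propensity = {"Jane Doe": 0.87, "John Smith": 0.42}
recommendations = {"Jane Doe": "Premium Plan", "John Smith": "Starter Plan"}

# Join everything on the lead name into one row per prospect.
rows = []
for name, attrs in static.items():
    rows.append({
        "Lead_Name": name,
        **attrs,
        "Propensity_Score": propensity[name],
        "Recommended_Product": recommendations[name],
    })

# Write a single CSV; in practice this file would be uploaded to S3.
buf = io.StringIO()
writer = csv.DictWriter(
    buf,
    fieldnames=["Lead_Name", "Industry", "Propensity_Score", "Recommended_Product"],
)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```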

  • In the Analytics stage, we instantiate an object from the 'Reverse ML' workflow. Rather than running a model in the browser, this workflow looks up precomputed predictions from the merged data source described above.

  • We set the data input type as 'Text', in accordance with the data type we're working with. This makes sure that the Analytics class processes the input data appropriately.

  • We then supply the bucket name parameter. This is the name of the cloud storage bucket (here, the S3 bucket from the batch step) where our data file is stored.

  • In our model, we use a specific column designated as 'Lead_Name'. This column serves as a key identifier as it contains the first and last names of the leads, forming the primary basis for generating the SOS dashboard.

    • Note that in the harvesting step we are regexing customer names - we use the identified customer name to map to the row in the data source we are connecting to with ReverseML.

  • We can then implement filters if needed, by specifying a column name along with its corresponding value. These filters include or exclude certain data points during processing in the analytics stage.

  • In the given scenario, an 'input' filter is implemented: the column selected is 'Lead_Name' and the filter type is 'input'. This means the filter operates on the data points in the 'Lead_Name' column, accepting user-defined input values, which allows for a more targeted analysis by narrowing down the dataset to specific lead names.

  • Finally, a data preview is presented. This consists of a snapshot of the data, which includes the column names and their corresponding values. This assists in verifying the correctness of the data before proceeding.

Here is what the analytic step of the configuration editor should look like, including the data preview:

Pre- and Post-Processing

No pre- or post-processing is required for this use case (that is more typical of machine learning use cases with online inference). You can leave these blank:

Rendering

We'll now make use of the container rendering class, as well as several of the AI Squared rendering components, to visualize our data.

  • During the rendering phase, outputs generated by our model are surfaced in the browser interface. The class type determines the form of rendering; here we opt for Word Rendering. This ultimately decides where model predictions appear and how they are visually represented on the interface.

  • In this particular scenario, a dashboard is created to visualize the results. As shown in the image above, several containers are added to hold different data results: for example, the Email Stats container displays emails sent, emails opened, and emails bounced. The dashboard also includes tables and charts: the firm market data is displayed as a table, while core recommendations, non-core recommendations, sales opportunity, and events are visualized as doughnut charts. Again, these visualization settings can be customized based on your preferences.
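The layout described above can be sketched as plain data. The no-code editor produces an equivalent configuration for you; the field names below are illustrative, not the platform's actual schema:

```python
# Hypothetical sketch of the dashboard layout: one entry per container,
# mirroring the components described above.
dashboard = {
    "containers": [
        {"title": "Email Stats",
         "metrics": ["Emails_Sent", "Emails_Opened", "Emails_Bounced"]},
        {"title": "Firm Market", "component": "table"},
        {"title": "Core Recommendations", "component": "doughnut"},
        {"title": "Non-Core Recommendations", "component": "doughnut"},
        {"title": "Sales Opportunity", "component": "doughnut"},
        {"title": "Events", "component": "doughnut"},
    ]
}

print(len(dashboard["containers"]))  # 6
```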

Feedback

For use cases where end-user feedback is solicited to help data and product teams monitor the value and accuracy of a dashboard or individual metrics within that dashboard, feedback can be easily added with AI Squared feedback components.

  • In the feedback phase, the 'Simple Feedback' and 'Model Feedback' classes are chosen, which provide the ability to generate queries for user feedback.

  • This step creates a feedback mechanism that enables end-users to provide essential insight into the model's performance and the output results.
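The kinds of prompts these components collect might look like the following; the wording and structure here are hypothetical, chosen only to illustrate dashboard-level versus model-level feedback:

```python
# Hypothetical feedback prompts: one about the dashboard as a whole
# (Simple Feedback), one about a specific model output (Model Feedback).
simple_feedback = {
    "question": "Was this dashboard useful?",
    "choices": ["Yes", "No"],
}
model_feedback = {
    "question": "Was the propensity score accurate for this lead?",
    "choices": ["Yes", "No"],
}

print(simple_feedback["question"])
```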
