REST API call from Einstein Analytics Dashboard
https://absyz.com/rest-api-call-from-einstein-analytics-dashboard/
Tue, 26 May 2020

In Einstein Analytics you can create lenses and dashboards from the datasets available in your Analytics Studio. Suppose a dataset is created from a dataflow that is scheduled to run every hour: the dataset will not reflect changes made in the last few seconds or minutes. What if you want to show live data from Salesforce or an external system, or handle complex logic that is not possible within Analytics itself? Using the apex step type in the dashboard JSON, we can display manipulated or real-time data.

Using SOQL from Dashboard

Scenario 1: The user wants to show real-time Salesforce data on the Analytics dashboard. This is simple to achieve: we create an apex step in the dashboard and use an @RestResource Apex class to fetch the data. Create an Apex class named AnalyticsDashboardStep with the REST resource URL mapping accountdata. Define an @HttpPost method that returns the value to the dashboard JSON. Your code will look as shown below.

@RestResource(urlMapping='/accountdata')
global with sharing class AnalyticsDashboardStep {
    @HttpPost 
    global static String fetchAccount(String selectedIndustry) { 
         //selectedIndustry - attribute value passed from the analytics Dashboard
         //return the output
    } 
}

Define wrappers that shape the data for the dashboard JSON. The WrappedData wrapper maps the queried Account fields to the column names used in the dashboard. The ReturnMetadata wrapper defines the data type of each returned column, and the ChartFormatJSON wrapper combines the data (rows) and metadata (columns).

    public class WrappedData{
        public String Account_Name;
        public String Account_Id;
        public String Account_Industry;
        public String Account_AccountSource;
        public Decimal Account_AnnualRevenue;
        public WrappedData(){}
        public WrappedData(Account data){
            this.Account_Name = data.name;
            this.Account_Id = data.Id;
            this.Account_Industry = data.Industry;
            this.Account_AccountSource = data.AccountSource;
            this.Account_AnnualRevenue = data.AnnualRevenue;
        }
    }
    public class ReturnMetadata {
        public List<String> strings; // columns that return as text
        public List<String> numbers; // columns that return as numeric
        public List<String> groups;  // columns that return as groups
        public ReturnMetadata(List<String> strings, List<String> numbers, List<String> groups) {
            this.strings = strings;
            this.numbers = numbers;
            this.groups = groups;
        }
    }
    public class ChartFormatJSON {
        public List<WrappedData> data;
        public ReturnMetadata metadata;
        public ChartFormatJSON(List<WrappedData> data) {
            this.data = data;
            this.metadata = new ReturnMetadata(new List<String>{'Account_Id','Account_Name','Account_Industry'}, 
                                                new List<String>{'Account_AnnualRevenue'}, new List<String>{'Account_Name'});
        }   
    }

With the wrappers in place, implement the @HttpPost fetchAccount method declared above. The method below queries a list of accounts and returns the serialized result to the dashboard. For the complete class, check this link.

    @HttpPost 
    global static String fetchAccount(String selectedIndustry) {
        List<Account> dataDisplay = new List<Account>();
        List<WrappedData> wrpData = new List<WrappedData>();
        // If the Industry is not selected from the interaction step
        if (selectedIndustry == null) {
            dataDisplay = [select Id,Name,Industry,AnnualRevenue,AccountSource from account order by AnnualRevenue desc];
        }else{
            dataDisplay = [select Id,Name,Industry,AnnualRevenue,AccountSource from account where industry=:selectedIndustry order by AnnualRevenue desc];
        }
        for(Account acc : dataDisplay){
            wrpData.add(new WrappedData(acc));
        }  
        //Serialize the wrapper that you have created with account data
        return JSON.serialize(new ChartFormatJSON(wrpData));
    }

Once your Apex class is ready, move to Analytics Studio and create a dashboard with an apex step. Create a toggle widget whose filter values are Account Industry values. Follow the steps below to build the dashboard.
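The toggle can be driven by a static step in the dashboard JSON. A minimal sketch, assuming a step named Industry_1 with illustrative industry values (verify the exact static-step schema against your Analytics release):

"Industry_1": {
	"type": "static",
	"values": [
		{ "display": "Agriculture", "value": "Agriculture" },
		{ "display": "Banking", "value": "Banking" }
	],
	"selectMode": "single"
},

A toggle widget bound to this step then exposes the industry values for selection.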

In the new dashboard, once you have completed the above steps, press Ctrl+E on Windows or Command+E on Mac to edit the dashboard JSON. Add the apex step as shown below. GetChartData is the query name in the dashboard. In the query parameter, the body holds the Apex input parameter selectedIndustry, which carries the value selected on the dashboard. Set the path parameter to the REST resource name (accountdata) and the type parameter to apex.

"GetChartData": {
	"query": {
		"body": {
			"selectedIndustry": "Agriculture"
		},
		"path": "accountdata"
	},
	"type": "apex"
},

To pass the selected industry value dynamically, use interaction (binding) syntax, which can be generated from the advanced editor of the GetChartData query. Replace the hard-coded value with “{{cell(Industry_1.selection, 0, \”Industry\”).asString()}}”. You can use either the Result or the Selection type of interaction.

"GetChartData": {
     "query": {
          "body": {
                "selectedIndustry": "{{cell(Industry_1.selection, 0, \"Industry\").asString()}}"
          },
          "path": "accountdata"
     },
     "type": "apex"
},

Click Done after completing the JSON edit. The GetChartData query appears in the query list on the right-hand side of the dashboard. Drag the query onto the dashboard canvas. To get the full dashboard JSON click here.

Dashboard Output:

[Image: dashboard output]

REST API call from Dashboard

Scenario 2: Similarly, the user wants to show live updates from an external system on the Analytics dashboard, so we make a REST API callout from Apex to fetch the details whenever the dashboard loads or refreshes. Here we take COVID-19 as an example and show the number of updated cases in India, using the https://api.covid19india.org API to fetch the COVID details. You can choose any API according to your needs.

This is similar to scenario 1, except that here the Apex class makes a REST API callout and returns the data in the same format the dashboard requires. Add the URL to Remote Site Settings.
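If you prefer deploying the remote site through the Metadata API instead of Setup, it can be sketched as a RemoteSiteSetting metadata file (the description text is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<RemoteSiteSetting xmlns="http://soap.sforce.com/2006/04/metadata">
    <description>COVID-19 data API</description>
    <isActive>true</isActive>
    <url>https://api.covid19india.org</url>
</RemoteSiteSetting>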

Create two custom labels with the details mentioned below:

  • Custom Label 1
    • Name: CovidBaseEndpoint
    • Value: https://api.covid19india.org/
  • Custom Label 2
    • Name: CovidStateWise
    • Value: /data.json

[Image: custom label setup]

The next step is to create an Apex class that makes an HTTP request and processes the response. The getStateWiseData method makes the API request, and data_val serializes the result for the dashboard. Notice the PackagedReturnItem wrapper, where the columns are categorized as strings, numbers, and groups in ReturnMetadata.

@RestResource(urlMapping='/covid')
global with sharing class CovidData {
    @HttpPost  // POST handler invoked by the dashboard apex step
    global static String data_val() {
        CovidStatusCoreData1 data = getStateWiseData();
        return JSON.serialize(new PackagedReturnItem(data.statewise));
    }
    
    public static CovidStatusCoreData1 getStateWiseData() {
        String BaseEndpoint = System.Label.covidBaseEndpoint; //Retrieve the endpoint and statewise variable from custom label
        String StateWise = System.Label.covidStateWise;
        HttpResponse resp = makeAPICallout(BaseEndpoint,StateWise);
        CovidStatusCoreData1 response = (CovidStatusCoreData1)System.JSON.deserialize(resp.getbody(), CovidStatusCoreData1.class);
        if (response != null) {
            return response;
        }
        return null;
    }
    public static HttpResponse makeAPICallout(String BaseEndpoint,String StateWise) {
        Http h = new Http();			//Make a request with the parameters set
        HttpRequest req = new HttpRequest();
        String endpoint = BaseEndpoint + StateWise;
        req.setEndpoint(endpoint);
        req.setMethod('GET');
        HttpResponse res = h.send(req);		// Send the request, and return a response
        if (res.getStatusCode() == 200 ) {
            return res;
        }
        return null;
    }
    public class ReturnMetadata {
        public List<String> strings; 
        public List<String> numbers; 
        public List<String> groups;  
        public ReturnMetadata(List<String> strings, List<String> numbers, List<String> groups) {
            this.strings = strings;
            this.numbers = numbers;
            this.groups = groups;
        }
    }
    public class PackagedReturnItem {
        public List<StateWiseData> data;
        public ReturnMetadata metadata;
        public PackagedReturnItem(List<StateWiseData> data) {
            this.data = data;
            this.metadata = new ReturnMetadata(new List<String>{'state','statecode','lastupdatedtime'}, 
                                               new List<String>{'active','recovered','deaths','confirmed','deltaconfirmed','deltadeaths','deltarecovered'}, 
                                               new List<String>{'state'});
        }   
    }  
    public class CovidStatusCoreData1 {
        public List<DailyKeyValues> key_values;
        public List<StateWiseData> statewise;
    }
    public class DailyKeyValues {
        public String confirmeddelta;
        public String counterforautotimeupdate;
        public String deceaseddelta;
        public String lastupdatedtime;
        public String recovereddelta;
        public String statesdelta;
    }
    public class StateWiseData {
        public Integer active;
        public String confirmed;
        public String deaths;
        public String recovered;
        public String state;
        public String statecode;
        public String lastupdatedtime;
        public String deltaconfirmed;
        public String deltadeaths;
        public String deltarecovered;
    }
}

Create an apex step named GetChartData in the dashboard JSON to display the data. After adding the apex step, click Done in the dashboard editor, then place the query on a table widget as shown below.

"GetChartData": {
	"query": {
		"body": {},
		"path": "covid"
	},
	"type": "apex"
},

[Image: state-wise COVID table]

In the final step, we create a static step to filter by states in India. To show different chart types on the dashboard, return an appropriately shaped wrapper from the Apex class: set the ReturnMetadata string, number, and group columns correctly so that the chart widget renders the expected output. Likewise, create Apex classes to fetch and display the data in the different chart types shown in the dashboard below. Refer to the link for the Apex classes and dashboard JSON. Using the dashboard inspector, you can check the output of each lens, which helps identify the performance and time taken by each query. Click Show Details for a query to see the details in the right-hand panel.
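As a sketch, the static step backing the state filter might look like the following (the step name and state values are illustrative; verify the schema against your Analytics release):

"StateFilter_1": {
	"type": "static",
	"values": [
		{ "display": "Kerala", "value": "Kerala" },
		{ "display": "Maharashtra", "value": "Maharashtra" }
	],
	"selectMode": "single"
},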

There you go! The last step is to verify your dashboard; check the clip below.

[Video: dashboard verification clip]

NOTE: When you work with an API apex step in an Analytics dashboard, remember that Salesforce imposes limits, which you can refer to here. Firstly, a maximum of 100 concurrent Analytics API calls per org. Secondly, a maximum of 10,000 Analytics API calls per user per hour.

Introduction to ChatBots
https://absyz.com/introduction-to-chatbots/
Mon, 10 Dec 2018

A chatbot is a "chat robot" used instead of live agents to communicate with customers and end users to resolve their queries. A chatbot is an application that simulates human conversation, either aloud or via text message. Instead of having a conversation with a person, such as a support agent, a customer can converse with a computer. Whether through typing or talking, a chatbot can connect with a customer and influence the customer relationship.


What is a chatbot?

A chatbot is a service powered by predefined business rules, scripts, and artificial intelligence, exposed through a conversational interface. With the recent rise of artificial intelligence, chatbots have become a lot smarter and understand customers more accurately than a bot could ten years ago. More and more businesses now look to chatbots as a practical channel for providing instant service to their customers. Chatbots can ensure less agent handle time, less customer wait time, and a better user experience.

Why use chatbots?

Chatbots provide one-to-one service immediately and reduce the time wasted analysing repeated queries. Because chatbots can deflect easy cases, agents can devote more time to complex issues that require creativity or teamwork. Bots can instantly welcome customers with a branded greeting in a chat window and direct them to the resources they need faster than a human could, saving customers time by answering straightforward questions over and over again.

How does chatbot work?

A chatbot is very smart, but you need to teach it what to do. The Einstein Bot Builder is a point-and-click setup tool that lets us build dialogs in a bot. It supports different types of tasks that can be configured in a dialog: you can Ask a Question to gather information, and Send a Response to return output to the chat window. Apart from the greeting, the bot responds to questions asked by customers, accepting one or more inputs from the customer.


A chatbot is trained with certain content. A chatbot is made up of the following elements:

  • Dialog: The basic unit of conversation between your customers and the bot. Dialogs are built from a developer-defined list of Messages, Questions, Actions, and Rules.
  • Messages are static pieces of text, such as "Welcome" or "Hello".
  • Questions ask for customer input and store it for later use.
  • Actions call Apex code with parameters and store the results.
  • Rules act as a conditional engine, letting developers make runtime decisions based on customer data; depending on the condition, the bot decides which action to take.
  • Intent: Intents are the phrases and words you provide for each dialog. Whenever a customer types a message, the bot framework predicts the customer's intent using the Einstein Platform's Intent service, and the system selects the dialog most likely to match the customer's input.
  • Entity: Entities define the data type (e.g. String, Currency, Date) as well as optional selection values. A slot is assigned to an entity so that the system knows how to handle the slot's input and output; think of an entity as a 'bucket' of slots.
  • Slots: Slots store dynamically created data within the context of a dialog; they act like variables. For example, a customer's response to a question can be stored in a slot and passed as a parameter to an Action. Developers can also use the familiar merge syntax {!slotName} in messages to interpolate slot data.
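For instance, assuming a slot named OrderNumber was filled by an earlier question, a response message can interpolate it with merge syntax (the slot name is illustrative):

    Your order {!OrderNumber} has been confirmed. Is there anything else I can help you with?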

How to create a chatbot?

To create a chatbot, we first set up the prerequisites, then create the bot and train it using the building blocks described below.

Prerequisites:

Creating a chatbot requires the following:

- Service Cloud license
- Live Agent license
- Lightning Experience enabled

Once the prerequisites are in place, we can start creating the chatbot and define its dialogs and entities in Salesforce.


Let’s Start creating a simple chatbot.

Step 1: Click Create Bot, specify the bot name and the greeting message displayed to welcome the customer while chatting, and create the menu that initiates the chat and captures the customer's queries.


Once the chatbot is created successfully, it is time to train the bot with the data used to communicate with the customer. Bot messages are used to communicate and understand the customer's queries and requirements; customer responses are stored in slots, and the responses trained into the chatbot as messages and intents are presented to the customer.

Step 2: As discussed before, messages, dialogs, and intents are used to train the bot, so we have to create the elements that train the chatbot to communicate with the customer.


Here we can create a dialog group, then create dialog intents and add them to the group. I am creating a dialog group called Order_List and adding dialogs to it.


I have created a message where the bot asks the customer "Do you want to place an order?", which accepts a yes/no answer, and the bot responds based on the customer's choice.


To perform customized actions we create an Action, which can invoke Apex code or a flow, or send an email based on conditions. The values returned by the Apex code are added to the action and displayed by the bot.
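As a sketch, an Action usually points at an Apex class exposing an @InvocableMethod; the class, method, and message below are hypothetical placeholders, not the bot's actual implementation:

[sourcecode language="java"]
public with sharing class OrderStatusAction {
    // Hypothetical bot action: receives order numbers, returns status text.
    @InvocableMethod(label='Get Order Status')
    public static List<String> getOrderStatus(List<String> orderNumbers) {
        List<String> results = new List<String>();
        for (String orderNumber : orderNumbers) {
            // A real implementation would query an Order record here.
            results.add('Order ' + orderNumber + ' is being processed.');
        }
        return results;
    }
}
[/sourcecode]

The bot maps dialog slots to the method's input parameters and displays the returned values in the chat window.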

 

We created simple statements for the chatbot to use when communicating with a customer; the bot's responses to specific customer questions are fed into the question definitions. Here the bot displays the welcome message "Hi, welcome", continues with the question "Do you want to place an order?", and proceeds based on the customer's input.


On "Yes" it displays the menu for selecting the order. End-chat dialogs are also created: the bot displays "Goodbye!" and asks the customer to click the End Chat button to end the chat.


Confused dialogs indicate that the chatbot was unable to get the expected answer from the customer; in this situation the chat can be restarted or transferred to an agent at the customer's request. The bot says "Sorry, I didn't understand" on unexpected answers: for example, if the customer answers "I want some pizza" instead of yes or no when asked to place an order, the bot displays this confused message to clarify the customer's requirement.


After training, the bot has to be activated, and we can test the chat internally using the chatbot preview.


Conclusion:

A chatbot replaces a live agent for routine work, decreasing wasted time and improving the customer experience: it resolves frequently asked customer queries and transfers complex issues that need more attention to a live agent on demand. A chatbot can be created and trained according to client requirements; complex scenarios can be added through actions, intents, messages, and dialog intents, and the bot can be published with customized functionality to provide effective, on-demand customer support.

Grocery Stock Maintenance Using Einstein Object Detection
https://absyz.com/grocery-stock-maintenance-using-einstein-object-detection/
Tue, 29 May 2018

Suppose a grocery store wants to update the daily stock of refrigerated juice cartons of different brands. This cannot be done manually every day by counting each carton of every brand. Imagine a system that identifies the daily stock for you: this is where Einstein Object Detection helps the grocery owner. Einstein identifies each brand and its bounding box, from which we can categorize the cartons, count them per brand, and generate a report for the owner.

For Einstein to identify objects, we first train it with a dataset. To create the dataset, collect images with combinations of the brands. Here I have taken four brands of juice cartons: Real, Tropicana, Natural, and Nescafe. The images in the dataset should cover permutations of these cartons.


Create a folder containing all these images plus a .csv file. The .csv should contain the height, width, x coordinate, and y coordinate of each carton, in this format: Box1 {"height":494,"y":410,"label":"Tropicana","width":284,"x":11}. The image name (for example juice1.jpg) goes in the first column of the sheet, with the corresponding bounding boxes in the above format. Refer to the images below while creating the .csv file.
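A minimal sketch of the labels .csv, with one row per image and each box column holding the JSON string for one carton (the coordinate values and header names here are illustrative; verify the exact column naming against the Einstein Vision documentation):

image,box1,box2
juice1.jpg,"{""height"":494,""y"":410,""label"":""Tropicana"",""width"":284,""x"":11}","{""height"":500,""y"":405,""label"":""Real"",""width"":280,""x"":310}"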

Save all the images and the .csv file in the folder, zip it, and upload it to AWS to get a downloadable link used to train the model. To learn how to upload to AWS and get the link, refer to https://blogs.absyz.com/2018/02/13/einstein-vision-real-estate-app/.

Using this link, we train Einstein.

In the Einstein package, modify a few lines of code in the EinsteinVision_PredictionService class.

[sourcecode language="java"]
private static String BASE_URL = 'https://api.einstein.ai/v2';
private String PREDICT = BASE_URL + '/vision/detect';
// Throughout the code, the URL type of the HTTP body part should be image-detection
EinsteinVision_HttpBodyPartDatasetUrl parts = new EinsteinVision_HttpBodyPartDatasetUrl(url, 'image-detection');
[/sourcecode]

To get the predicted boundaries in the output, add the code below to the probability class.

[sourcecode language="java"]
@AuraEnabled
public BoundingBox boundingBox {get; set;}
public class BoundingBox {
@AuraEnabled
public Integer minX {get; set;}
@AuraEnabled
public Integer minY {get; set;}
@AuraEnabled
public Integer maxX {get; set;}
@AuraEnabled
public Integer maxY {get; set;}
}
[/sourcecode]

In this Apex class, a wrapper is created for the image and the record; the wrapper is used because the input image has to be displayed back to the user. The method below returns the predicted value.

[sourcecode language="java"]
@AuraEnabled
public static objects__c getPrediction(Id objectId, String fileName, String base64) {
    wrapperClass returnwrapperClass = new wrapperClass();
    objects__c obj = new objects__c();
    Blob fileBlob = EncodingUtil.base64Decode(base64);
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    List<ContentDocument> documents = new List<ContentDocument>();
    for (EinsteinVision_Dataset dataset : datasets) {
        if (dataset.Name.equals('juice')) {
            EinsteinVision_Model[] models = service.getModels(dataset);
            EinsteinVision_Model model = models.get(0);
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            EinsteinVision_Probability probability = result.probabilities.get(0);
            String resultedProbablity = '';
            // Count the number of detected cartons per brand label
            Map<String, Integer> items = new Map<String, Integer>();
            for (Integer i = 0; i < result.probabilities.size(); i++) {
                if (!items.containsKey(result.probabilities.get(i).label)) {
                    items.put(result.probabilities.get(i).label, 1);
                } else {
                    Integer count = items.get(result.probabilities.get(i).label);
                    items.put(result.probabilities.get(i).label, count + 1);
                }
            }
            for (String i : items.keySet()) {
                resultedProbablity = resultedProbablity + ' ' + i + ' -- ' + ' ' + items.get(i);
            }
            obj = [SELECT Id, Results__c FROM objects__c WHERE Id = :objectId];
            obj.Results__c = resultedProbablity;
            update obj;
            returnwrapperClass.objectRecord = obj;
            // Store the uploaded image as a file
            ContentVersion contentVersion = new ContentVersion(
                Title = fileName,
                PathOnClient = fileName + '.jpg',
                VersionData = fileBlob,
                IsMajorVersion = true
            );
            insert contentVersion;
            documents = [SELECT Id, Title, LatestPublishedVersionId, CreatedDate FROM ContentDocument ORDER BY CreatedDate DESC];
            // Create the ContentDocumentLink record to attach the file to the record
            ContentDocumentLink cdl = new ContentDocumentLink();
            cdl.LinkedEntityId = objectId;
            cdl.ContentDocumentId = documents[0].Id;
            cdl.ShareType = 'V';
            insert cdl;
        }
    }
    return obj;
}
[/sourcecode]

Here I have created a component that is accessed from a mobile device through the Salesforce1 app. First we take a photo with the phone camera; second, we upload the image to Einstein; finally, we get the output, which can be displayed and further processed into monthly stock and sales reports.

[sourcecode language="html"]
<aura:component implements="force:appHostable,flexipage:availableForAllPageTypes,force:hasRecordId" access="global" controller="EinsteinVision_Admin">
    <aura:attribute name="contents" type="object" />
    <aura:attribute name="Objectdetection" type="objects__c" />
    <aura:attribute name="files" type="Object[]"/>
    <aura:attribute name="image" type="String" />
    <aura:attribute name="recordId" type="Id" />
    <aura:attribute name="newPicShow" type="boolean" default="false" />
    <aura:attribute name="wrapperList" type="object"/>

    <lightning:card iconName="standard:event" title="Object Detection">
        <aura:set attribute="actions">
            <lightning:button class="slds-float_left" variant="brand" label="Upload File" onclick="{! c.handleClick }" />
        </aura:set>
    </lightning:card>
    <aura:if isTrue="{!v.newPicShow}">
        <div style="font-size:20px;">
            <h1>Result1 : {!v.Objectdetection.Results__c}</h1>
        </div>
        <div class="slds-float_left" style="height:500px;width:400px">
            <img src="{!v.image}"/>
        </div>
    </aura:if>
    <div>
        <div aura:id="changeIt" class="change">
            <div class="slds-m-around--xx-large">
                <div role="dialog" tabindex="-1" aria-labelledby="header99" class="slds-modal slds-fade-in-open">
                    <div class="slds-modal__container">
                        <div class="slds-modal__header">Upload Files
                            <lightning:buttonIcon class="slds-button slds-modal__close slds-button--icon-inverse" iconName="utility:close" variant="bare" onclick="{!c.closeModal}" alternativeText="Close window." size="medium"/>
                        </div>
                        <div class="slds-modal__content slds-p-around--medium">
                            <div class="slds-box">
                                <div class="slds-grid slds-wrap">
                                    <lightning:input aura:id="fileInput" type="file" name="file" multiple="false" accept="image/*;capture=camera" files="{!v.files}" onchange="{! c.onReadImage }"
                                                     label="Upload an image:"/>
                                </div>
                            </div>
                        </div>
                        <div class="slds-modal__footer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</aura:component>
[/sourcecode]

In controller.js we handle the user input and pass it to Apex, which returns the probability with boundaries.

[sourcecode language="javascript"]
({
    onUploadImage: function(component, file, base64Data) {
        var action = component.get("c.getPrediction");
        var objectId = component.get("v.recordId");
        action.setParams({
            objectId: objectId,
            fileName: file.name,
            base64: base64Data
        });
        action.setCallback(this, function(a) {
            var state = a.getState();
            if (state === 'ERROR') {
                console.log(a.getError());
            } else {
                component.set("v.Objectdetection", a.getReturnValue());
                var cmpTarget1 = component.find('changeIt');
                $A.util.addClass(cmpTarget1, 'change');
                component.set("v.newPicShow", true);
            }
        });
        $A.enqueueAction(action);
    },
    onGetImageUrl: function(component, file, base64Data) {
        var action = component.get("c.getImageUrlFromAttachment");
        var objId = component.get("v.recordId");
        action.setParams({
            objId: objId
        });
        action.setCallback(this, function(a) {
            var state = a.getState();
            if (state === 'ERROR') {
                console.log(a.getError());
            } else {
                if (a.getReturnValue() != '') {
                    component.set("v.image", "/servlet/servlet.FileDownload?file=" + a.getReturnValue());
                }
            }
        });
        $A.enqueueAction(action);
    }
})
[/sourcecode]

helper.js

[sourcecode language="javascript"]
({
    onUploadImage: function(component, file, base64Data) {
        var action = component.get("c.getPrediction");
        var objectId = component.get("v.recordId");
        action.setParams({
            objectId: objectId,
            fileName: file.name,
            base64: base64Data
        });
        action.setCallback(this, function(a) {
            var state = a.getState();
            if (state === 'ERROR') {
                console.log(a.getError());
                alert("An error has occurred");
            } else {
                component.set("v.Objectdetection", a.getReturnValue());
                var cmpTarget1 = component.find('changeIt');
                $A.util.addClass(cmpTarget1, 'change');
                component.set("v.newPicShow", true);
            }
        });
        $A.enqueueAction(action);
    }
})
[/sourcecode]

The final output is tested from the mobile Salesforce1 app.

In case of any doubts feel free to reach out to us.

Einstein Intent and Einstein Sentiment Analysis on Facebook Posts
https://absyz.com/einstein-language-facebook-integration/
Thu, 01 Mar 2018

Einstein Language

Einstein Language is used to build natural language processing into your apps and unlock insights within text. It contains two NLP services:

Einstein Intent

Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish. Leverage the Einstein Intent API to analyze text from emails, chats, or web forms to:

  • Determine which products prospects are interested in, and send customer inquiries to the appropriate sales person.
  • Route service cases to the correct agents or departments, or provide self-service options.
  • Understand customer posts to provide personalized self-service in your communities.
Einstein Sentiment

Classify the sentiment of text as positive, negative, or neutral to understand the feeling behind it. You can use the Einstein Sentiment API to analyze emails, social media, and chat text to:

  • Identify the sentiment of a prospect’s emails to trend a lead or opportunity up or down.
  • Provide proactive service by helping dissatisfied customers first or extending promotional offers to satisfied customers.
  • Monitor the perception of your brand across social media channels, identify brand evangelists, and monitor customer satisfaction.
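As a sketch, a sentiment callout from Apex might look like the following; CommunitySentiment is the prebuilt Einstein sentiment model, but the token handling is simplified and the helper class name is hypothetical:

[sourcecode language="java"]
public with sharing class EinsteinSentimentService {
    // Classify a piece of text with the prebuilt sentiment model.
    public static String getSentiment(String text, String accessToken) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://api.einstein.ai/v2/language/sentiment');
        req.setMethod('POST');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
        req.setBody('modelId=CommunitySentiment&document=' + EncodingUtil.urlEncode(text, 'UTF-8'));
        HttpResponse res = new Http().send(req);
        // The response JSON contains probabilities labeled positive, negative, and neutral.
        return res.getBody();
    }
}
[/sourcecode]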

 

Suppose I have a Facebook page for an e-commerce site. If users share feedback through posts or comments on the page, we can retrieve the posts and comments into Salesforce, find the intent of each post, and find the sentiment of each comment.

This can be achieved by following these steps:

Step 1: Create a Facebook page.

Step 2: Create a Facebook app using a Facebook Developers account.

To connect with Facebook we use the Graph API, and to use the Graph API we need a Facebook Developers account created with our Facebook credentials.

To create a Facebook app:

  • Log in to https://developers.facebook.com/
  • Click My Apps -> Add a New App.

Enter the basic information and make the app Live; if the app is not Live, we cannot use it to communicate with Facebook.

Now click Tools & Support and go to Graph API Explorer, where we can generate an access token and test the app.

Steps to create an access token:
  • Select your app from the Application drop-down.
  • Click Get Token and select Get User Access Token.
  • Select the required permissions and click Get Access Token.
  • To get a page access token, click Get Token and select your page under Page Access Tokens.
  • Your page name now appears in the drop-down list; click it and select Request publish_pages.

 

step 3. Create an object in Salesforce to save posts and their comments.

Create an object with fields to store the post, comment, sentiment, and intent.

step 4. Create a remote site setting in Salesforce.

Go to Remote Site Settings -> New Remote Site.

Enter a site name, set the remote site URL to https://graph.facebook.com, check the Active checkbox, and save.

step 5. Make a callout to Facebook to retrieve posts and comments.

In the Einstein_Facebook class we fetch posts and comments from Facebook, extract them from the returned JSON, and create records.

 

[sourcecode language="java"]
public class Einstein_Facebook {

    public static void sendRequest() {
        // Get the access token from a custom label.
        String accessToken = Label.Facebook_Access_Token;
        HttpRequest request = new HttpRequest();
        request.setEndpoint('https://graph.facebook.com/v2.12/391648194596048?fields=posts{message,comments,type}&access_token=' + accessToken);
        request.setMethod('GET');
        // Make the request.
        Http http = new Http();
        HTTPResponse response = http.send(request);
        // The response body contains the posts and comments.
        String data = response.getBody();

        // Extract posts and comments from the JSON received in the response.
        List<String> lines = data.split(',');
        List<String> posts = new List<String>();
        List<String> postAndComments = new List<String>();
        for (String l : lines) {
            posts.add(l.substringBetween('{"message":"', '"'));
            String substring = l.substringBetween('"message":"', '"');
            if (substring != null) {
                postAndComments.add(substring);
            }
        }

        // Create a map of posts to their comments.
        Map<String, List<String>> postComments = new Map<String, List<String>>();
        for (String p : posts) {
            if (p != null) {
                postComments.put(p, new List<String>());
            }
        }
        String key;
        List<String> comments = new List<String>();
        for (String message : postAndComments) {
            if (message != null) {
                // If the message is a post, it becomes the current key...
                if (postComments.containsKey(message)) {
                    key = message;
                } else {
                    // ...otherwise it is a comment on the current post.
                    comments = postComments.get(key);
                    comments.add(message);
                    postComments.put(key, comments);
                }
            }
        }

        // Query the existing posts and comments.
        List<Einstein_Facebook__c> existingPosts = [SELECT Id, Comments__c, Posts__c FROM Einstein_Facebook__c];
        List<Einstein_Facebook__c> listDelete = new List<Einstein_Facebook__c>();
        for (Einstein_Facebook__c post : existingPosts) {
            // Collect posts that already exist so they can be replaced.
            if (postComments.containsKey(post.Posts__c)) {
                listDelete.add(post);
            }
        }
        delete listDelete;

        // Iterate through the map to create records.
        List<Einstein_Facebook__c> postCommentList = new List<Einstein_Facebook__c>();
        for (String post : postComments.keySet()) {
            for (String comment : postComments.get(post)) {
                Einstein_Facebook__c facebookComment = new Einstein_Facebook__c();
                facebookComment.Posts__c = post;
                facebookComment.Comments__c = comment;
                postCommentList.add(facebookComment);
            }
        }

        // Insert the list of posts and comments.
        insert postCommentList;
    }
}
[/sourcecode]
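The grouping step relies on each post appearing in the message stream before the comments that belong to it. As a rough sketch of that same logic outside Apex, here it is in plain Java (the class name and sample messages are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PostGrouper {
    // Group an ordered message stream into post -> comments, assuming each
    // post appears in the stream before the comments that belong to it
    // (the same assumption the Apex class makes).
    static Map<String, List<String>> group(Set<String> posts, List<String> messages) {
        Map<String, List<String>> grouped = new LinkedHashMap<>();
        String currentPost = null;
        for (String m : messages) {
            if (posts.contains(m)) {
                currentPost = m;                      // a new post starts here
                grouped.putIfAbsent(currentPost, new ArrayList<>());
            } else if (currentPost != null) {
                grouped.get(currentPost).add(m);      // comment on the current post
            }
        }
        return grouped;
    }

    public static void main(String[] args) {
        Set<String> posts = new HashSet<>(Arrays.asList("Great store!", "Slow delivery"));
        List<String> stream = Arrays.asList(
                "Great store!", "Totally agree", "Love it",
                "Slow delivery", "Took two weeks");
        System.out.println(group(posts, stream));
        // {Great store!=[Totally agree, Love it], Slow delivery=[Took two weeks]}
    }
}
```

Note that if Facebook ever returned a comment before its post, the ordering assumption breaks; parsing the Graph API JSON with a real JSON parser would be the more robust option.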

step 6. Write a trigger to get the intent and sentiment of posts and comments.

Trigger to fire after insert on Facebook posts and comments:

[sourcecode language="java"]
trigger EinsteinFB_Trigger on Einstein_Facebook__c (after insert) {
    for (Einstein_Facebook__c post : Trigger.New) {
        String postId = post.Id;
        Einstein_Handler.getProbabilityforFB(postId);
    }
}
[/sourcecode]

Handler for the trigger:

Here we use the EinsteinVision_Admin and EinsteinVision_Sentiment classes to get the intent and sentiment of posts and comments, which we explained in detail in our previous blogs on Einstein Intent and Einstein Sentiment.

[sourcecode language="java"]
global class Einstein_Handler {

    @future(callout=true)
    public static void getProbabilityforFB(String recordId) {
        Einstein_Facebook__c fd = [SELECT Posts__c, Comments__c FROM Einstein_Facebook__c WHERE Id = :recordId];

        String intentLabel = EinsteinVision_Admin.getPrediction(fd.Posts__c);
        String sentimentLabel = EinsteinVision_Sentiment.findSentiment(fd.Comments__c);

        fd.Feedback_Type__c = intentLabel;
        fd.Sentiment__c = sentimentLabel;
        update fd;
    }
}
[/sourcecode]

This is a report showing the analysis of Facebook posts and comments.

facebook report

Einstein Vision – Real Estate App https://absyz.com/einstein-vision-real-estate-app/ https://absyz.com/einstein-vision-real-estate-app/#comments Tue, 13 Feb 2018 09:53:48 +0000 https://teamforcesite.wordpress.com/?p=8429

Suppose that on a real estate website we search for properties and get related results. In Salesforce, Einstein can help with such large amounts of data: Einstein Vision provides image classification and object identification.

In the same way, a real estate app is built using Einstein Vision, with a scenario where a user searches for a property on Property.com by choosing the type of house. Based on the user's input, the related images are displayed. Here we have thousands of images to process and display, which is difficult to do manually but can be achieved with predictive algorithms. In this scenario, Einstein plays a vital role, predicting the images much as a human would. For more background, you can go through the Trailhead project that provides the managed package used in this demo (https://trailhead.salesforce.com/en/projects/build-a-cat-rescue-app-that-recognizes-cat-breeds).

Steps to Follow:
  1. AWS
  2. Train Dataset
  3. S3 Link

1. AWS storage is used for two reasons. First, storing a huge number of images inside Salesforce is difficult, whereas AWS places no practical limit on the amount of data stored. Second, Einstein is trained from a downloadable zip link in URL format. In AWS we create a bucket in S3 where the files are stored. Here I have created a zip folder and a common folder. The zip folder is used to train the datasets and should be more than 12 MB. The more images you add, the more accurate Einstein's predictions will be. One important thing when you create the files inside AWS is to make each and every file publicly accessible.

aws

Now the link is ready to train the dataset (https://s3.amazonaws.com/sfdc-einstein-demo/newmodifiedhouses2.zip). The zip folder contains sub-folders as shown below,

subfolder

2. For Einstein, each sub-folder name is a label, and the images inside it are the dataset examples. The data is trained by passing the link; after training, a model is created covering all of the dataset's labels.

einsteinVision

We pass the URL to the Apex class on click of the Create Dataset button. The file is downloaded to MetaMind, where Einstein processes the analysis. To learn more about MetaMind, refer to the given link (https://metamind.readme.io/docs/introduction-to-the-einstein-predictive-vision-service).

[sourcecode language="java"]
// method1 in awsFileTest.apex
@AuraEnabled
public static void createDatasetFromUrl(String zipUrl) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    service.createDatasetFromUrlAsync(zipUrl);
    System.debug(service);
}
[/sourcecode]

On refreshing the dataset, we get the list of labels and the number of files given to train Einstein.

[sourcecode language="java"]
// method2 in awsFileTest.apex
@AuraEnabled
public static List<EinsteinVision_Dataset> getDatasets() {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    return datasets;
}
[/sourcecode]

Einstein predicts using the models created after training the dataset. We can also delete a trained dataset and add a new one.

[sourcecode language="java"]
// method3 in awsFileTest.apex
@AuraEnabled
public static String trainDataset(Decimal datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Model model = service.trainDataset(Long.valueOf(String.valueOf(datasetId)), 'Training', 0, 0, '');
    return model.modelId;
}

// method4 in awsFileTest.apex
@AuraEnabled
public static void deleteDataset(Long datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    service.deleteDataset(datasetId);
}
[/sourcecode]

On training the dataset, a dataset model with an ID is generated.

[sourcecode language="java"]
public static List<EinsteinVision_Model> getModels(Long datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Model[] models = service.getModels(datasetId);
    return models;
}
[/sourcecode]

3. We use the S3 Link app from AppExchange to iterate over the file names inside AWS. S3 Link is basically a link between Salesforce and AWS. The app helps us import and export files from AWS, where importing a file means importing only the file's details along with a redirect link to view or download the image. In a callout (to AWS) we can only hard-code the destination file name, and with many files it is not practical to hard-code them all. To install the app, follow the guidelines in the given link (https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3000000CW1OXEA1).

s3 link

Here I make a callout to AWS while iterating over the image names, and receive each image as a blob, because Einstein needs the actual (original) image to compare when computing the probability of each possible type of house.

[sourcecode language="java"]
@AuraEnabled
public static List<awsFileTestWrapper.awswrapper> getImageAsBlob() {

    List<NEILON__File__c> fList = [SELECT Name FROM NEILON__File__c];
    Map<Blob, String> bList = new Map<Blob, String>();
    for (NEILON__File__c nm : fList) {
        Http h = new Http();
        HttpRequest req = new HttpRequest();
        String firstImageURL = 'https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/' + nm.Name;
        // Replace any spaces with %20.
        firstImageURL = firstImageURL.replace(' ', '%20');
        req.setEndpoint(firstImageURL);
        req.setMethod('GET');
        // For a PDF file the Content-Type would be 'application/pdf'.
        req.setHeader('Content-Type', 'image/jpg');
        req.setCompressed(true);
        req.setTimeout(60000);

        HttpResponse res = h.send(req);
        // The status is useful for dealing with error situations.
        System.debug('Response status for file: ' + res.getStatus());
        // getBodyAsBlob (added in Spring '12, API v24) returns the file as a Blob.
        Blob image = res.getBodyAsBlob();
        bList.put(image, nm.Name);
    }

    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    List<awsFileTestWrapper.awswrapper> listaws = new List<awsFileTestWrapper.awswrapper>();

    for (EinsteinVision_Dataset dataset : datasets) {
        EinsteinVision_Model[] models = service.getModels(dataset);
        EinsteinVision_Model model = models.get(0);
        for (Blob fileBlob : bList.keySet()) {
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            // The first entry is the label with the highest probability.
            EinsteinVision_Probability probability = result.probabilities.get(0);
            awsFileTestWrapper.awswrapper aws = new awsFileTestWrapper.awswrapper();
            aws.filename = bList.get(fileBlob);
            aws.mylabel = probability.label;
            aws.prob = probability.probability;
            listaws.add(aws);
        }
    }
    return listaws;
}
[/sourcecode]
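The space-to-%20 replacement above is the minimal encoding needed for these S3 keys. Sketched in plain Java (the class name is hypothetical), alongside a more general alternative that lets java.net.URI percent-encode the whole path:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class S3Url {
    // Minimal mirror of the Apex step: raw spaces are not legal in a URL,
    // so the S3 file name is percent-encoded before the callout.
    static String encodeSpaces(String url) {
        return url.replace(" ", "%20");
    }

    // A more general alternative: let java.net.URI percent-encode the whole path.
    static String encodePath(String host, String path) throws URISyntaxException {
        return new URI("https", host, path, null).toASCIIString();
    }

    public static void main(String[] args) throws URISyntaxException {
        String base = "https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/";
        System.out.println(encodeSpaces(base + "my house.jpg"));
        System.out.println(encodePath("s3.amazonaws.com", "/sfdc-einstein-demo/commonhouses/my house.jpg"));
        // both print https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/my%20house.jpg
    }
}
```

The URI-based form also handles other characters that are illegal in a URL path, not just spaces.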

Einstein returns a label and probability for each image; result.probabilities.get(0).probability is the highest probability for that particular image. I pass the filename, label, and probability to the Lightning component controller. Hence, a list of wrappers is used.

[sourcecode language="java"]
// awsFileTestWrapper.apex
public class awsFileTestWrapper {
    public class awswrapper {
        @AuraEnabled public String mylabel;
        @AuraEnabled public String filename;
        @AuraEnabled public double prob;
    }
}
[/sourcecode]

In the controller, the callout is made while iterating over the file names, and we fetch the images from AWS to display to the user.

aura cmp.PNG

The values from the Apex controller are sent to the JavaScript controller.

[sourcecode language="javascript"]
// controller.js
({
    extractfile: function(component, event, helper) {
        var val = component.find("select").get("v.value");
        var names = [];
        var probs = [];
        component.set("v.IsSpinner", true);
        var action1 = component.get("c.getImageAsBlob");
        action1.setCallback(this, function(response) {
            var ret = response.getReturnValue();
            // Keep only the files whose predicted label matches the user's selection.
            for (var i = 0; i < ret.length; i++) {
                if (ret[i].mylabel == val) {
                    names.push(ret[i].filename);
                    probs.push(ret[i].prob);
                }
            }
            component.set("v.IsSpinner", false);
            component.set("v.contents", names);
            component.set("v.probability", probs);
        });
        $A.enqueueAction(action1);
    },
})
[/sourcecode]

The final output shows the images and probabilities to the user, as shown below.

output

Feel free to contact us for any doubts, or if you need the code shown in the screenshots.

References:
  1. https://developer.salesforce.com/blogs/developer-relations/2017/05/image-based-search-einstein-vision-lightning-components.html
  2. https://andyinthecloud.com/2017/02/05/image-recognition-with-the-salesforce-einstein-api-and-an-amazon-echo/
  3. https://metamind.readme.io/docs/prediction-with-image-file

 

Einstein Sentiment Analysis https://absyz.com/einstein-sentiment-analysis/ https://absyz.com/einstein-sentiment-analysis/#comments Fri, 09 Feb 2018 06:07:31 +0000 https://teamforcesite.wordpress.com/?p=8444

Einstein Sentiment predicts whether a review or message is positive, negative, or neutral. Using this, companies can categorize customer attitudes and take appropriate action to build their insights. As discussed earlier for Marketing Cloud (https://teamforcesite.wordpress.com/2018/02/07/marketing-cloud-social-studio-series-macros/), sentiment analysis can also be achieved using Einstein.

Here the user enters a message and Einstein finds its sentiment, which is useful to companies: positive feedback is recognized as appreciation, while negative feedback prompts action. This scenario is helpful when handling a large number of clients, for example with Facebook comments, client replies, and so on.

To start with Einstein, first set up your environment with the help of the Trailhead module (https://trailhead.salesforce.com/modules/einstein_intent_basics/units/einstein_intent_basics_env). For one account email ID only one key is generated; it is stored in Files, and the access token generated from it is valid for a limited time. Create two Apex classes to generate a JWT access token; you can refer to the Trailhead project (https://trailhead.salesforce.com/projects/predictive_vision_apex/steps/predictive_vision_apex_get_code).

[sourcecode language="java"]
// JWT.apex
public class JWT {

    public String alg {get;set;}
    public String iss {get;set;}
    public String sub {get;set;}
    public String aud {get;set;}
    public String exp {get;set;}
    public String iat {get;set;}
    public Map<String, String> claims {get;set;}
    public Integer validFor {get;set;}
    public String cert {get;set;}
    public String pkcs8 {get;set;}
    public String privateKey {get;set;}

    public static final String HS256 = 'HS256';
    public static final String RS256 = 'RS256';
    public static final String NONE = 'none';

    public JWT(String alg) {
        this.alg = alg;
        this.validFor = 300;
    }

    public String issue() {
        String jwt = '';
        JSONGenerator header = JSON.createGenerator(false);
        header.writeStartObject();
        header.writeStringField('alg', this.alg);
        header.writeEndObject();
        String encodedHeader = base64URLencode(Blob.valueOf(header.getAsString()));

        JSONGenerator body = JSON.createGenerator(false);
        body.writeStartObject();
        body.writeStringField('iss', this.iss);
        body.writeStringField('sub', this.sub);
        body.writeStringField('aud', this.aud);
        Long rightNow = (DateTime.now().getTime() / 1000) + 1;
        body.writeNumberField('iat', rightNow);
        body.writeNumberField('exp', (rightNow + validFor));
        if (claims != null) {
            for (String claim : claims.keySet()) {
                body.writeStringField(claim, claims.get(claim));
            }
        }
        body.writeEndObject();

        jwt = encodedHeader + '.' + base64URLencode(Blob.valueOf(body.getAsString()));

        if (this.alg == HS256) {
            Blob key = EncodingUtil.base64Decode(privateKey);
            Blob signature = Crypto.generateMac('hmacSHA256', Blob.valueOf(jwt), key);
            jwt += '.' + base64URLencode(signature);
        } else if (this.alg == RS256) {
            Blob signature = null;
            if (cert != null) {
                signature = Crypto.signWithCertificate('rsa-sha256', Blob.valueOf(jwt), cert);
            } else {
                Blob privateKey = EncodingUtil.base64Decode(pkcs8);
                signature = Crypto.sign('rsa-sha256', Blob.valueOf(jwt), privateKey);
            }
            jwt += '.' + base64URLencode(signature);
        } else if (this.alg == NONE) {
            jwt += '.';
        }
        return jwt;
    }

    public String base64URLencode(Blob input) {
        String output = EncodingUtil.base64Encode(input);
        output = output.replace('+', '-');
        output = output.replace('/', '_');
        while (output.endsWith('=')) {
            output = output.substring(0, output.length() - 1);
        }
        return output;
    }
}
[/sourcecode]
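The base64URLencode method implements standard base64url encoding: '+' becomes '-', '/' becomes '_', and the '=' padding is stripped, as the JWS spec requires. Outside Apex, the same transformation can be sketched in plain Java, which ships a URL-safe encoder (the class name is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Url {
    // Equivalent of JWT.base64URLencode: base64 with '+' -> '-', '/' -> '_',
    // and the trailing '=' padding stripped.
    static String base64UrlEncode(byte[] input) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(input);
    }

    public static void main(String[] args) {
        String header = "{\"alg\":\"RS256\"}";
        System.out.println(base64UrlEncode(header.getBytes(StandardCharsets.UTF_8)));
        // prints eyJhbGciOiJSUzI1NiJ9 -- the familiar first segment of an RS256 JWT
    }
}
```

This is why JWT segments can be concatenated directly into a URL-encoded request body without further escaping.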

To generate a new access token each time, the JWTBearerFlow class is used:

[sourcecode language="java"]
// JWTBearerFlow.apex
public class JWTBearerFlow {

    public static String getAccessToken(String tokenEndpoint, JWT jwt) {
        String access_token = null;
        String body = 'grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=' + jwt.issue();
        HttpRequest req = new HttpRequest();
        req.setMethod('POST');
        req.setEndpoint(tokenEndpoint);
        req.setHeader('Content-type', 'application/x-www-form-urlencoded');
        req.setBody(body);
        Http http = new Http();
        HTTPResponse res = http.send(req);

        if (res.getStatusCode() == 200) {
            System.JSONParser parser = System.JSON.createParser(res.getBody());
            while (parser.nextToken() != null) {
                if ((parser.getCurrentToken() == JSONToken.FIELD_NAME) && (parser.getText() == 'access_token')) {
                    parser.nextToken();
                    access_token = parser.getText();
                    break;
                }
            }
        }
        return access_token;
    }
}
[/sourcecode]
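The hard-coded grant_type above is simply the URL-encoded form of the URN urn:ietf:params:oauth:grant-type:jwt-bearer. A plain-Java sketch (the class name is hypothetical) showing where the %3A sequences come from:

```java
import java.net.URLEncoder;

public class BearerBody {
    // The grant_type value is the URN urn:ietf:params:oauth:grant-type:jwt-bearer;
    // its ':' characters must travel as %3A in a form-encoded body, which is
    // exactly what the Apex class hard-codes.
    static String tokenRequestBody(String assertion) throws Exception {
        String grant = URLEncoder.encode("urn:ietf:params:oauth:grant-type:jwt-bearer", "UTF-8");
        return "grant_type=" + grant + "&assertion=" + assertion;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tokenRequestBody("header.payload.signature"));
        // prints grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=header.payload.signature
    }
}
```

The JWT assertion itself needs no further encoding because base64url output contains only URL-safe characters.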

Now the sentiment is analyzed through the probabilities returned by Einstein. In the class below we use the JWT Apex class to pass the key and generate a new access token (https://metamind.readme.io/docs/what-you-need-to-call-api). The Apex method returns a string with the labels and their probabilities.

[sourcecode language="java"]
// EinsteinVision_Sentiment.apex
@AuraEnabled
public static String findSentiment(String text) {
    ContentVersion con = [SELECT Title, VersionData
                          FROM ContentVersion
                          WHERE Title = 'einstein_platform'
                          OR Title = 'predictive_services'
                          ORDER BY Title LIMIT 1];

    String key = con.VersionData.toString();
    key = key.replace('-----BEGIN RSA PRIVATE KEY-----', '');
    key = key.replace('-----END RSA PRIVATE KEY-----', '');
    key = key.replace('\n', '');
    JWT jwt = new JWT('RS256');
    jwt.pkcs8 = key;
    jwt.iss = 'developer.force.com';
    jwt.sub = 'xxx@xxx.com'; // Update with your own email ID
    jwt.aud = 'https://api.metamind.io/v1/oauth2/token';
    jwt.exp = String.valueOf(3600);
    String access_token = JWTBearerFlow.getAccessToken('https://api.metamind.io/v1/oauth2/token', jwt);

    Http http = new Http();
    HttpRequest req = new HttpRequest();
    req.setMethod('POST');
    req.setEndpoint('https://api.einstein.ai/v2/language/sentiment');
    req.setHeader('Authorization', 'Bearer ' + access_token);
    req.setHeader('Content-type', 'application/json');
    String body = '{"modelId":"CommunitySentiment","document":"' + text + '"}';
    req.setBody(body);
    HTTPResponse res = http.send(req);
    String fullString = res.getBody();
    // Strip the outer JSON so only the label/probability pairs remain.
    String removeString = fullString.removeEnd('],"object":"predictresponse"}');
    String stringBody = removeString.removeStart('{"probabilities":[');
    return stringBody;
}
[/sourcecode]
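The removeEnd/removeStart trimming assumes the response body has the exact shape {"probabilities":[...],"object":"predictresponse"}. The same extraction can be sketched in plain Java (the class name and sample body are hypothetical), pulling the labels with a regex instead of fixed prefix and suffix strings:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SentimentParser {
    // Pull the label values out of a predict response with a regex instead of
    // relying on fixed prefix/suffix strings as the Apex method does.
    static List<String> labels(String body) {
        List<String> out = new ArrayList<>();
        Matcher m = Pattern.compile("\"label\":\"([^\"]+)\"").matcher(body);
        while (m.find()) {
            out.add(m.group(1));
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical sample body shaped like an Einstein Sentiment response.
        String body = "{\"probabilities\":[{\"probability\":0.91,\"label\":\"positive\"},"
                + "{\"probability\":0.06,\"label\":\"negative\"},"
                + "{\"probability\":0.03,\"label\":\"neutral\"}],\"object\":\"predictresponse\"}";
        System.out.println(labels(body)); // prints [positive, negative, neutral]
    }
}
```

A regex (or a real JSON parser) keeps working even if the service adds fields around the probabilities array.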

Using the probabilities, I display them as a chart with Chart.js; refer to the given link (http://www.chartjs.org/docs/latest/getting-started/). Chart.js is saved as a static resource in Salesforce and used in the component.

lightng.png

The response from the Apex controller is handled in JavaScript, where we split the data into two lists that are passed as values to the chart data.

[sourcecode language="javascript"]
({
    extractfile: function(component, event, helper) {
        var val = component.find("select").get("v.value");
        var action1 = component.get("c.findSentiment");
        action1.setParams({ text: val });

        action1.setCallback(this, function(response) {
            var ret = response.getReturnValue();
            component.set("v.probability", ret);

            // The returned string alternates label and probability entries,
            // e.g. '{"label":"positive","probability":0.91},...'
            var list = ret.split(',');
            var labels = [];
            var values = [];
            for (var i = 0; i < list.length; i++) {
                if (i % 2 == 0) {
                    labels.push(list[i].match(':"(.*)"')[1]);
                } else {
                    values.push(list[i].match(':(.*)}')[1]);
                }
            }
            component.set("v.labels", labels);
            component.set("v.values", values);

            var data = {
                labels: [labels[0], labels[1], labels[2]],
                datasets: [
                    {
                        fillColor: '#b9f6ca',
                        strokeColor: '#b9f6ca',
                        data: [values[0], values[1], values[2]]
                    }
                ]
            };
            var options = { responsive: true };
            var element = component.find('chart').getElement();
            var ctx = element.getContext('2d');
            var myBarChart = new Chart(ctx).Bar(data, options);
        });
        $A.enqueueAction(action1);
    },
})
[/sourcecode]

The result of the above data is shown below:

output for sentiment.png

To know more about Einstein, keep visiting our blog. For any doubts, feel free to reach out to us.
