REST API call from Einstein Analytics Dashboard

In Einstein Analytics we can create lenses and dashboards from the datasets available in Analytics Studio. A dataset is typically built by a dataflow that is scheduled, say, every hour, so it will not reflect changes made in the last few seconds or minutes. What if you want to show live data from Salesforce or an external system on your dashboard, or display a complex calculation that analytics alone cannot handle? Using the apex step type in the dashboard JSON, we can display manipulated or real-time data.

Using SOQL from Dashboard

Scenario 1: The user wants to show real-time Salesforce data on the analytics dashboard. This is simple to achieve: we create an apex step in the dashboard and expose an Apex class with the @RestResource annotation that fetches the data. Create an Apex class named AnalyticsDashboardStep with the REST resource name accountdata, and define a @HttpPost method that returns the value to the dashboard JSON. Your code will look as shown below.

@RestResource(urlMapping='/accountdata')
global with sharing class AnalyticsDashboardStep {
    @HttpPost 
    global static String fetchAccount(String selectedIndustry) { 
         //selectedIndustry - attribute value passed from the analytics Dashboard
         //return the output
    } 
}

Next, define wrappers that shape the data for the dashboard JSON. The WrappedData wrapper maps the queried Account fields to the column names the dashboard expects. The ReturnMetadata wrapper defines the data type of each returned column, and the ChartFormatJSON wrapper combines the data (rows) with the metadata (columns).

    public class WrappedData{
        public String Account_Name;
        public String Account_Id;
        public String Account_Industry;
        public String Account_AccountSource;
        public Decimal Account_AnnualRevenue;
        public WrappedData(){}
        public WrappedData(Account data){
            this.Account_Name = data.name;
            this.Account_Id = data.Id;
            this.Account_Industry = data.Industry;
            this.Account_AccountSource = data.AccountSource;
            this.Account_AnnualRevenue = data.AnnualRevenue;
        }
    }
    public class ReturnMetadata {
        public List<String> strings; // columns that return as text
        public List<String> numbers; // columns that return as numeric
        public List<String> groups;  // columns that return as groups
        public ReturnMetadata(List<String> strings, List<String> numbers, List<String> groups) {
            this.strings = strings;
            this.numbers = numbers;
            this.groups = groups;
        }
    }
    public class ChartFormatJSON {
        public List<WrappedData> data;
        public ReturnMetadata metadata;
        public ChartFormatJSON(List<WrappedData> data) {
            this.data = data;
            this.metadata = new ReturnMetadata(new List<String>{'Account_Id','Account_Name','Account_Industry'}, 
                                                new List<String>{'Account_AnnualRevenue'}, new List<String>{'Account_Name'});
        }   
    }
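
For reference, the payload that JSON.serialize produces from ChartFormatJSON, and that the dashboard's apex step consumes, looks roughly like this (the field values here are only illustrative):

{
  "data": [
    {
      "Account_Name": "Acme",
      "Account_Id": "001XXXXXXXXXXXXXXX",
      "Account_Industry": "Agriculture",
      "Account_AccountSource": "Web",
      "Account_AnnualRevenue": 500000
    }
  ],
  "metadata": {
    "strings": ["Account_Id", "Account_Name", "Account_Industry"],
    "numbers": ["Account_AnnualRevenue"],
    "groups": ["Account_Name"]
  }
}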

With the wrappers in place, populate them in the @HttpPost fetchAccount method declared above. The method below queries a list of accounts and returns it, serialized, to the dashboard. For the complete class, check this link.

    @HttpPost 
    global static String fetchAccount(String selectedIndustry) {
        List<Account> dataDisplay = new List<Account>();
        List<WrappedData> wrpData = new List<WrappedData>();
        // If the Industry is not selected from the interaction step
        if (selectedIndustry == null) {
            dataDisplay = [select Id,Name,Industry,AnnualRevenue,AccountSource from account order by AnnualRevenue desc];
        }else{
            dataDisplay = [select Id,Name,Industry,AnnualRevenue,AccountSource from account where industry=:selectedIndustry order by AnnualRevenue desc];
        }
        for(Account acc : dataDisplay){
            wrpData.add(new WrappedData(acc));
        }  
        //Serialize the wrapper that you have created with account data
        return JSON.serialize(new ChartFormatJSON(wrpData));
    }
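
Before wiring up the dashboard, you can sanity-check the resource directly: Apex REST classes are exposed under /services/apexrest/, so a POST like the one below, sent from Workbench's REST Explorer or any other REST client against your org, should return the serialized account data. The industry value here is only an example.

POST /services/apexrest/accountdata
Content-Type: application/json

{ "selectedIndustry": "Energy" }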

Once your Apex class is ready, move to Analytics Studio and create a dashboard with an apex step. Create a toggle whose filter values are Account Industry, then follow the steps below to build the dashboard.

In the new dashboard, once you have completed the above steps, press Ctrl+E on Windows or Cmd+E on Mac to edit the dashboard JSON and add the apex step shown below. GetChartData is the query name in the dashboard. In the query's body you set an input parameter named selectedIndustry that holds the value selected on the dashboard; the path parameter is the REST resource name (the urlMapping from @RestResource) and the type parameter is apex.

"GetChartData": {
	"query": {
		"body": {
			"selectedIndustry": "Agricultrue"
		},
		"path": "accountdata"
	},
	"type": "apex"
},

If you want to pass the selected industry value dynamically, use interaction (binding) syntax, which can be generated from the advanced editor of the GetChartData query. Set the query value to “{{cell(Industry_1.selection, 0, \”Industry\”).asString()}}” to pass the value dynamically. You can use either the Result or the Selection type of interaction.

"GetChartData": {
     "query": {
          "body": {
                "selectedIndustry": "{{cell(Industry_1.selection, 0, \"Industry\").asString()}}"
          },
          "path": "accountdata"
     },
     "type": "apex"
},

Click Done after completing the JSON edit. You can now see the GetChartData query in the query list on the right-hand side of the dashboard; drag it onto the dashboard canvas. To get the full dashboard JSON, click here.

Dashboard Output:


REST API call from Dashboard

Scenario 2: The user wants to show live updates from an external system on the analytics dashboard, so we make a REST API callout from Apex to fetch the details whenever the dashboard loads or refreshes. As an example we use COVID-19 data to show the number of reported cases in India, fetched from the https://api.covid19india.org API; you can choose whichever API suits your need.

This is similar to Scenario 1, except that the Apex class now makes a REST API callout and returns the response in the same format the dashboard requires. Remember to add the URL to Remote Site Settings.

Create two custom labels with the details mentioned below:

  • Custom Label 1
    • Name: CovidBaseEndpoint
    • Value: https://api.covid19india.org/
  • Custom Label 2
    • Name: CovidStateWise
    • Value: /data.json


The next step is to create an Apex class that makes the HTTP request and processes the response. The getStateWiseData method calls the API, and the data_val method serializes the result for the dashboard. Notice the PackagedReturnItem wrapper, where the columns are categorized as strings, numbers, and groups in ReturnMetadata.

@RestResource(urlMapping='/covid')
global with sharing class CovidData {
    @HttpPost  // Exposes this method so the dashboard's apex step can invoke it
    global static String data_val() {
        CovidStatusCoreData1 data = getStateWiseData();
        return JSON.serialize(new PackagedReturnItem(data.statewise));
    }
    
    public static CovidStatusCoreData1 getStateWiseData() {
        String BaseEndpoint = System.Label.covidBaseEndpoint; //Retrieve the endpoint and statewise variable from custom label
        String StateWise = System.Label.covidStateWise;
        HttpResponse resp = makeAPICallout(BaseEndpoint,StateWise);
        CovidStatusCoreData1 response = (CovidStatusCoreData1)System.JSON.deserialize(resp.getbody(), CovidStatusCoreData1.class);
        if (response != null) {
            return response;
        }
        return null;
    }
    public static HttpResponse makeAPICallout(String BaseEndpoint,String StateWise) {
        Http h = new Http();			//Make a request with the parameters set
        HttpRequest req = new HttpRequest();
        String endpoint = BaseEndpoint + StateWise;
        req.setEndpoint(endpoint);
        req.setMethod('GET');
        HttpResponse res = h.send(req);		// Send the request, and return a response
        if (res.getStatusCode() == 200 ) {
            return res;
        }
        return null;
    }
    public class ReturnMetadata {
        public List<String> strings; 
        public List<String> numbers; 
        public List<String> groups;  
        public ReturnMetadata(List<String> strings, List<String> numbers, List<String> groups) {
            this.strings = strings;
            this.numbers = numbers;
            this.groups = groups;
        }
    }
    public class PackagedReturnItem {
        public List<StateWiseData> data;
        public ReturnMetadata metadata;
        public PackagedReturnItem(List<StateWiseData> data) {
            this.data = data;
            this.metadata = new ReturnMetadata(new List<String>{'state','statecode','lastupdatedtime'}, 
                                               new List<String>{'active','recovered','deaths','confirmed','deltaconfirmed','deltadeaths','deltarecovered'}, 
                                               new List<String>{'state'});
        }   
    }  
    public class CovidStatusCoreData1 {
        public List<DailyKeyValues> key_values;
        public List<StateWiseData> statewise;
    }
    public class DailyKeyValues {
        public String confirmeddelta;
        public String counterforautotimeupdate;
        public String deceaseddelta;
        public String lastupdatedtime;
        public String recovereddelta;
        public String statesdelta;
    }
    public class StateWiseData {
        public Integer active;
        public String confirmed;
        public String deaths;
        public String recovered;
        public String state;
        public String statecode;
        public String lastupdatedtime;
        public String deltaconfirmed;
        public String deltadeaths;
        public String deltarecovered;
    }
}
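
Because Apex tests do not allow real callouts, a small HttpCalloutMock is handy if you want test coverage for this class. The sketch below is only illustrative: it assumes the two custom labels from the previous step exist in your org, and the mocked body contains made-up numbers.

@IsTest
private class CovidDataTest {
    // Returns a canned statewise payload instead of calling the real API
    private class CovidMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setStatusCode(200);
            res.setBody('{"statewise":[{"active":100,"confirmed":"150","deaths":"10",' +
                        '"recovered":"40","state":"Total","statecode":"TT",' +
                        '"lastupdatedtime":"26/05/2020 06:00:00","deltaconfirmed":"5",' +
                        '"deltadeaths":"0","deltarecovered":"2"}]}');
            return res;
        }
    }

    @IsTest
    static void testDataVal() {
        Test.setMock(HttpCalloutMock.class, new CovidMock());
        Test.startTest();
        String payload = CovidData.data_val();
        Test.stopTest();
        // The serialized wrapper should contain both the rows and the column metadata
        System.assert(payload.contains('"data"'));
        System.assert(payload.contains('"metadata"'));
    }
}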

Create an apex step named GetChartData in the dashboard JSON to display the data. After adding the step, click Done and drop the query onto a table widget as shown below.

"GetChartData": {
	"query": {
		"body": {},
		"path": "covid"
	},
	"type": "apex"
},


In the final step, we created a static step to filter by Indian states. To show a different type of chart on the dashboard, return the appropriate wrapper from the Apex class: set the ReturnMetadata string, number, and group columns correctly so that the output renders properly when you place the query in a chart widget. Likewise, create Apex classes to fetch data for the other chart types shown in the dashboard below; refer to the link for the Apex classes and the dashboard JSON. Using the dashboard inspector you can check the output of each lens, which helps identify the performance and time taken by each query. Click Show Details for a query to see its details in the right-hand panel.

There you go! The last step is to verify your dashboard; check the clip below.

[wpvideo RHIjNB4j]

NOTE: When you work with the apex step in an analytics dashboard, remember that Salesforce imposes certain limits (you can refer to them here): the maximum number of concurrent Analytics API calls per org is 100, and the maximum number of Analytics API calls per user per hour is 10,000.

Einstein Analytics: Creating Date Fields in Recipes for Toggles
Originally Posted on December 4, 2018; Last updated on November 5, 2020.

Einstein Analytics by Salesforce is a cloud-based, AI-powered advanced analytics tool that helps you explore data quickly and, with the Salesforce Analytics Query Language (SAQL), query and manage your datasets and customize dashboards programmatically. Formerly known as Wave Analytics Cloud, it integrates very well with the Salesforce platform. We have already covered Wave Analytics in detail in our previous blog post “Introduction to Salesforce Wave Analytics Cloud”.

Update: Einstein Analytics now has a new name, Tableau CRM; the tool remains the same. Read about this major update here.

An Einstein Analytics dashboard is built up of many widgets; a widget is similar to a component on a Salesforce dashboard, and even a date filter or list filter is a widget in EA. For this tutorial we will use the Toggle widget, which lets viewers filter the dashboard results by a date or a dimension.

In the following example, we will create a filter for users on the date field, bucketed as Quarterly and Yearly.

To begin, we will create a Dataset. In EA, a Dataset is a specific view of a data source based on how you’ve customised it. We’ve already covered steps to create a Dataset in “How to create a Dataset, Lens and Dashboard in Wave Analytics”. After creating the dataset, create a Dataset Recipe as shown below. You can find the created recipe in the Dataflows & Recipes from the sidebar menu.

Datasets in Einstein Analytics / Tableau

Create Recipe Option – Einstein Analytics / Tableau

Provide Recipe Name – Einstein Analytics / Tableau

List of Dataflows & Recipes – Einstein Analytics

Create a Bucket

Click on the Recipe and you will be taken to a table that shows all the data. We will build the filter on the Opportunity Close Date field, so create a bucket to categorise its values.

Recipe Editor / Customisation Pane

Row for the Bucket to be Created

Creating Bucket from Data fields – Recipe – Einstein Analytics

Two options are shown while creating a bucket; choose the Relative option. A drop-down list appears with the options Year, Quarter, Month, Week or Day. Below that you can choose your range and provide names for the start and end of the selected period.

Bucket Customisation Pane – Select Relative.

Now create two buckets: By Year and By Quarter. Select each range and give it the name you want to display. As you create each bucket, add it to the dataset and run the update as shown below. The dataset recipe will be updated and can be monitored from the Data Manager; click the Monitor tab in the left-side panel to check its status.

Additional Conditions – Bucket

Select Bucket Remaining Values to Other and Apply changes to Bucket

Dataset Recipe table preview – selected columns

Steps to update attributes for column in dataset recipe

Updating Attributes for Column in Dataset

Saving changes to the Dataset

Select our Derived fields from Close Date

Running the Recipe – One time or Scheduled

Find Recipe after successful creation

Create a Dashboard

The dashboard is where users explore and analyse the widgets, so go ahead and create a blank dashboard and rename it to something more specific and descriptive.

Create a Step

Create a step to back the toggle. First click Create Step and choose the dataset you created at the start. In the left-side panel, under Bars, choose the Quarterly bucket (whatever you named it above). Add a condition under Filters, choosing Quarterly equals Current Quarter and Last Year's Same Quarter. After following these steps, click Done; you can now see the created step on the dashboard.
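
For reference, a step like this compiles to a query behind the scenes. A rough SAQL equivalent is shown below; the dataset and field names here are assumptions, so substitute the names of your recipe dataset and your Quarterly bucket column:

q = load "Opportunity_with_Buckets";
q = filter q by 'CloseDate_Quarter' in ["Current Quarter", "Last Year Quarter"];
q = group q by 'CloseDate_Quarter';
q = foreach q generate 'CloseDate_Quarter' as 'CloseDate_Quarter', count() as 'count';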

Creating step for a Toggle – Einstein Analytics

Add Bars to the Custom Step

Ensure correct Bars are selected

Select Filters to Eliminate Unnecessary Dimensions

Select the required Dimensions

Click on Done after verifying correct selection of Dimensions in Step

Step is visible in the Right pane of Toggle Designer

Create Toggle

On the left side panel, select the Toggle option. Drag and drop the created step on the Toggle box. The options that are added to the filter are displayed on the Toggle.

Select Toggle from Left Pane – Einstein Analytics

Drag Toggle to the designer pane and Select the Step Created earlier

Ensure both filter custom dimensions are present.

Create the same toggle for the current year and last year. Follow the same steps given above to create a step for Yearly.

Step – Current Year

Create a Step

Now we create a step to show a chart of Account Industry data. Click Create Step and, under Bars, select AccountId.Industry. Under Bar Length, replace the default Count measure with Sum of Amount. Rename the step and drag it onto the dashboard as shown below, then save your dashboard and click Preview to see how the toggle works.
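
As with the toggle step, this chart step also boils down to a small query. A rough SAQL equivalent, again with an assumed dataset name, would be:

q = load "Opportunity_with_Buckets";
q = group q by 'AccountId.Industry';
q = foreach q generate 'AccountId.Industry' as 'AccountId.Industry', sum('Amount') as 'sum_Amount';
q = order q by 'sum_Amount' desc;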

Creating a new Step AccountId.Industry selecting our Dataset Recipe

Select Bars – AccountId.Industry

Select Bar Length – Sum of Amount

Rename Step, ensure the fields are reflecting correct data, then hit “Done”

Drag Industry Step onto the Dashboard

Save the Toggle Dashboard and Hit Preview

In Preview, you can see the output change as you select the toggle options, as shown below.

Preview – Einstein Analytics / Tableau – Current Quarter

So that's it! If you are still facing problems, you can touch base with us over email at team@absyz.com. We at ABSYZ are one of the largest Salesforce integrators and a Salesforce partner company providing end-to-end Salesforce implementation services, not just for Tableau CRM but also for Salesforce Marketing Cloud and Salesforce Health Cloud. Do reach out to us for any help with integrations.

Google Assistant Integration with Salesforce – Part 1

Everyone has come across Google Assistant, Google's voice-controlled AI smart assistant that helps people communicate. Integrating Google Assistant with Salesforce makes communication easier and reduces manual work. Here we do a simple integration with Salesforce to create, delete and fetch the details of a particular record.

First, sign up for Dialogflow with your Gmail account. Click on Sign Up for Free; you will be redirected to a page asking you to sign in with Google. Sign in with your existing Gmail account and you will be logged in to dialogflow.com. Then create an agent (a project) for custom development.

Four terminologies to know:

  1. Intent
  2. Entities
  3. Fulfillment
  4. Integrations

Intent:

An intent maps user input commands to related actions in your app. Intents let users specify what they want to do, and Dialogflow figures out which activity matches what was said. Click on Intents on the left and then click Create Intent as shown below.


Entities:

Entities are used to extract parameter values from user input; for any important piece of data you want to capture from the user, you create a corresponding entity. It is not necessary to model every possible concept as an entity; entities are created only for the actionable data that is needed. To create an entity, check the images below.


Fulfillment:

Fulfillment lets us decide how to respond in our conversations; it is the interface between your conversational application and the logic that fulfills the action. In our case that logic lives in Salesforce, so we need to create a Site in the Salesforce org as shown below (a Site custom URL is used because the Apex class must be publicly accessible from outside the org). Enter the site domain URL in the Fulfillment Webhook URL and append the REST resource name.
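
Since the Apex class in this post uses urlMapping='/Dialogflow', the webhook URL is the site domain followed by /services/apexrest/Dialogflow. For example (the domain below is only a placeholder):

https://yourcompany.secure.force.com/services/apexrest/Dialogflow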


Integration:

Integration means using Dialogflow's Actions on Google integration to test your Dialogflow agent in the Actions on Google simulator. Click on Integrations in the left menu and select Integration Settings. Enable Auto-Preview Changes so that Dialogflow propagates changes to the Actions Console and Assistant simulator automatically. Now you are all set to test your custom app.


We create an Apex class with the @RestResource annotation because we are exposing it as a REST resource; a class annotated with @RestResource must be declared global. In the brackets we provide the REST resource name (the urlMapping). The class extracts the data from the incoming JSON, so from the key terms you can determine whether the user wants to insert, update or delete records.

[sourcecode language="java"]
@RestResource(urlMapping='/Dialogflow')
global class restCall {
    @HttpPost
    global static String createRecords() {
        // Response from Google Assistant / Dialogflow arrives as JSON in the request body
        String request = RestContext.request.requestBody.toString();
        // Deserialize the JSON into the wrapper classes below
        mapurl orp = (mapurl) JSON.deserialize(request, mapurl.class);
        String str = orp.result.metadata.intentName;

        // Check whether the intent refers to an Account
        if ((str.contains('New') || str.contains('Add') || str.contains('Create')) && str.contains('Account')) {
            Account acc = new Account();
            acc.Name = orp.result.parameters.Name;
            acc.Phone = orp.result.parameters.Phone;
            acc.Email__c = orp.result.parameters.Email;
            insert acc;
        }
        // Check whether the intent refers to a Contact
        else if ((str.contains('New') || str.contains('Add') || str.contains('Create')) && str.contains('Contact')) {
            Contact con = new Contact();
            con.LastName = orp.result.parameters.Name;
            con.Phone = orp.result.parameters.Phone;
            con.Email = orp.result.parameters.Email;
            insert con;
        }
        String s = 'Success';
        return s;
    }

    // Wrapper classes to get the values from the JSON
    global class mapurl {
        global result result;
    }
    global class result {
        global parameters parameters;
        global metadata metadata;
        global String resolvedQuery;
    }
    global class parameters {
        global String Phone;
        global String Name;
        global String Email;
    }
    global class metadata {
        global String intentName;
    }
}
[/sourcecode]
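
For reference, the wrapper classes above expect the incoming Dialogflow request body to be shaped roughly like this; all values here are illustrative:

{
  "result": {
    "resolvedQuery": "Create a new account named Acme",
    "metadata": { "intentName": "Create Account" },
    "parameters": { "Name": "Acme", "Phone": "9999999999", "Email": "info@acme.com" }
  }
}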

Clicking Test from the integration settings redirects you to a page called the simulator. Google lets you test in the browser without an actual Google Home device, using the Google Home Web Simulator. Since you have not yet named your custom app, the initial command will be: “Talk to Test app”. The setup to name your application and take the app live will be covered in our next blog.

Grocery Stock Maintenance Using Einstein Object Detection

Suppose a grocery store wants to update the daily stock of refrigerated juice cartons of different brands. Counting each brand's cartons manually every day is not practical. Imagine instead having a system that identifies the daily stock for you: this is where Einstein Object Detection helps the grocery owner. Einstein identifies each brand and its bounding box, from which we can categorize the cartons, count them per brand, and generate a report for the grocery owner.

Before Einstein can identify objects, we must first train it with a dataset. To create the dataset, collect images containing combinations of the brands. Here I have taken 4 brands of juice cartons: Real, Tropicana, Natural, and Nescafe. The images in the dataset should contain these cartons in various combinations.


Create a folder, save all these images into it, and create a .csv file inside the folder. The .csv should contain the height, width, x and y coordinates of each carton, in this format: Box1 {“height”:494,”Y”:410,”label”:”Tropicana”,”width”:284,”x”:11}. The image name, for example juice1.jpg, goes in the first column of the sheet and the corresponding bounding boxes follow in the format above. Refer to the images below while creating the .csv file.
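
As a rough illustration only (the image names, labels and coordinates below are made up, and the exact column headers should be taken from the Einstein Vision documentation), the rows of the .csv look something like this:

image,box1,box2
juice1.jpg,"{""height"":494,""y"":410,""label"":""Tropicana"",""width"":284,""x"":11}","{""height"":500,""y"":400,""label"":""Real"",""width"":290,""x"":320}"
juice2.jpg,"{""height"":480,""y"":395,""label"":""Nescafe"",""width"":275,""x"":15}"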

The final folder contains all the images and the .csv file. Zip the folder and upload it to AWS to get a downloadable link for training the model. To learn how to upload to AWS and get the link, refer to (https://blogs.absyz.com/2018/02/13/einstein-vision-real-estate-app/).

Using the link we have to train Einstein

In the Einstein package, modify some lines of code in the Einstein Prediction Service class.

[sourcecode language="java"]
private static String BASE_URL = 'https://api.einstein.ai/v2';
private String PREDICT = BASE_URL + '/vision/detect';
// Throughout the class, the dataset type passed to the HTTP body part should be 'image-detection'
EinsteinVision_HttpBodyPartDatasetUrl parts = new EinsteinVision_HttpBodyPartDatasetUrl(url, 'image-detection');
[/sourcecode]

To get the predicted boundaries in the output, add the code below to the probability class.

[sourcecode language="java"]
@AuraEnabled
public BoundingBox boundingBox {get; set;}

public class BoundingBox {
    @AuraEnabled
    public Integer minX {get; set;}
    @AuraEnabled
    public Integer minY {get; set;}
    @AuraEnabled
    public Integer maxX {get; set;}
    @AuraEnabled
    public Integer maxY {get; set;}
}
[/sourcecode]

In the Apex class below, a wrapper is created for the image and the record; the wrapper is needed because the input image has to be displayed back to the user. The following method returns the predicted values.

[sourcecode language="java"]
@AuraEnabled
public static objects__c getPrediction(Id objectId, String fileName, String base64) {
    wrapperClass returnwrapperClass = new wrapperClass();
    objects__c obj = new objects__c();
    Blob fileBlob = EncodingUtil.base64Decode(base64);
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    List<ContentDocument> documents = new List<ContentDocument>();
    for (EinsteinVision_Dataset dataset : datasets) {
        if (dataset.Name.equals('juice')) {
            EinsteinVision_Model[] models = service.getModels(dataset);
            EinsteinVision_Model model = models.get(0);
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            EinsteinVision_Probability probability = result.probabilities.get(0);
            String resultedProbablity = '';
            // Count how many cartons of each brand were detected
            Map<String, Integer> items = new Map<String, Integer>();
            for (Integer i = 0; i < result.probabilities.size(); i++) {
                if (!items.containsKey(result.probabilities.get(i).label)) {
                    items.put(result.probabilities.get(i).label, 1);
                } else {
                    Integer count = items.get(result.probabilities.get(i).label);
                    items.put(result.probabilities.get(i).label, count + 1);
                }
                Integer j = i + 1;
            }
            for (String i : items.keySet()) {
                resultedProbablity = resultedProbablity + ' ' + i + ' -- ' + ' ' + items.get(i);
            }
            obj = [select Id, Results__c from objects__c where Id = :objectId];
            obj.Results__c = resultedProbablity;
            update obj;
            returnwrapperClass.objectRecord = obj;
            // Store the uploaded image as a file and link it to the record
            ContentVersion contentVersion = new ContentVersion(
                Title = fileName,
                PathOnClient = fileName + '.jpg',
                VersionData = fileBlob,
                IsMajorVersion = true
            );
            insert contentVersion;
            documents = [SELECT Id, Title, LatestPublishedVersionId, CreatedDate FROM ContentDocument ORDER BY CreatedDate DESC];
            // Create a ContentDocumentLink record
            ContentDocumentLink cdl = new ContentDocumentLink();
            cdl.LinkedEntityId = objectId;
            cdl.ContentDocumentId = documents[0].Id;
            cdl.ShareType = 'V';
            insert cdl;
        }
    }
    return obj;
}
[/sourcecode]

Here I have created a component that is accessed from a mobile device using the Salesforce1 app. First, we take a photo through the mobile device. Second, we upload the image to Einstein. Finally, we get the output, which can be displayed and further processed to create monthly reports on stock and sales data.

[sourcecode language=”html”]
<aura:component implements=”force:appHostable,flexipage:availableForAllPageTypes,force:hasRecordId” access=”global” controller=”EinsteinVision_Admin”>
<aura:attribute name=”contents” type=”object” />
<aura:attribute name=”Objectdetection” type=”objects__c” />
<aura:attribute name=”files” type=”Object[]”/>
<aura:attribute name=”image” type=”String” />
<aura:attribute name=”recordId” type=”Id” />
<aura:attribute name=”newPicShow” type=”boolean” default=”false” />
<aura:attribute name=”wrapperList” type=”object”/>

<lightning:card iconName=”standard:event” title=”Object Detection”>
<aura:set attribute=”actions”>
<lightning:button class=”slds-float_left” variant=”brand” label=”Upload File” onclick=”{! c.handleClick }” />
</aura:set>
</lightning:card>
<aura:if isTrue=”{!v.newPicShow}”>
<div style=”font-size:20px;”>
<h1>Result1 : {!v.Objectdetection.Results__c}</h1>
</div>
<div class=”slds-float_left” style =”height:500px;width:400px”>
<img src=”{!v.image}”/>
</div>
</aura:if>
<div>
<div aura:id=”changeIt” class=”change”>
<div class=”slds-m-around–xx-large”>
<div role=”dialog” tabindex=”-1″ aria-labelledby=”header99″ class=”slds-modal slds-fade-in-open “>
<div class=”slds-modal__container”>
<div class=”slds-modal__header”>Upload Files
<lightning:buttonIcon class=”slds-button slds-modal__close slds-button–icon-inverse” iconName=”utility:close” variant=”bare” onclick=”{!c.closeModal}” alternativeText=”Close window.” size=”medium”/>
</div>
<div class=”slds-modal__content slds-p-around–medium”>
<div class=” slds-box”>
<div class=”slds-grid slds-wrap”>
<lightning:input aura:id=”fileInput” type=”file” name=”file” multiple=”false” accept=”image/*;capture=camera” files=”{!v.files}” onchange=”{! c.onReadImage }”
label=”Upload an image:”/>
</div>
</div>
</div>
<div class=”slds-modal__footer”>
</div>
</div>
</div>
</div>
</div>
</div>
</aura:component>
[/sourcecode]

In controller.js we handle the user input, pass it to Apex, and receive the probability with bounding boxes in return.

[sourcecode language="javascript"]
({
    onUploadImage: function(component, file, base64Data) {
        var action = component.get("c.getPrediction");
        var objectId = component.get("v.recordId");
        action.setParams({
            objectId: objectId,
            fileName: file.name,
            base64: base64Data
        });
        action.setCallback(this, function(a) {
            var state = a.getState();
            if (state === 'ERROR') {
                console.log(a.getError());
            } else {
                component.set("v.Objectdetection", a.getReturnValue());
                var cmpTarget1 = component.find('changeIt');
                $A.util.addClass(cmpTarget1, 'change');
                component.set("v.newPicShow", true);
            }
        });
        $A.enqueueAction(action); // queue the server-side call
    },
    onGetImageUrl: function(component, file, base64Data) {
        var action = component.get("c.getImageUrlFromAttachment");
        var objId = component.get("v.recordId");
        action.setParams({
            objId: objId
        });
        action.setCallback(this, function(a) {
            var state = a.getState();
            if (state === 'ERROR') {
                console.log(a.getError());
            } else {
                if (a.getReturnValue() !== '') {
                    component.set("v.image", "/servlet/servlet.FileDownload?file=" + a.getReturnValue());
                }
            }
        });
        $A.enqueueAction(action);
    }
})
[/sourcecode]

helper.js

[sourcecode language=”javascript”]
({
onUploadImage: function(component, file, base64Data) {
var action = component.get(“c.getPrediction”);
var objectId = component.get(“v.recordId”);
action.setParams({
objectId: objectId,
fileName: file.name,
base64: base64Data
});
action.setCallback(this, function(a) {
var state = a.getState();
if (state === ‘ERROR’) {
console.log(a.getError());
alert(“An error has occurred”);
} else {
component.set(“v.Objectdetection”, a.getReturnValue());
var cmpTarget1 = component.find(‘changeIt’);
$A.util.addClass(cmpTarget1, ‘change’);
component.set(“v.newPicShow”,true);
}
});
}
})
[/sourcecode]

The final output is tested from the mobile salesforce one app.

In case of any doubts feel free to reach out to us.

Salesforce Integration with LinkedIn

Posting messages to social media through Salesforce is easy, as Salesforce provides all the flexibility needed for developing customer relationships. Posting to and getting information from social media helps increase the number of customers and achieve targets.

Suppose a company has a LinkedIn page where users normally log in and upload a status or an image. Instead, we can promote, improve sales, hire people and much more directly from Salesforce. Here we upload posts to and retrieve posts from LinkedIn using its REST API. LinkedIn provides API calls for developers, documented at (https://developer.linkedin.com/docs), and developers can also test the callouts using the LinkedIn REST Console.

First, create a LinkedIn account and proceed to create a company page as shown below.

Now to access LinkedIn we need to create an app. For that, you need to keep your account active.

Click on My Apps to create an app

Fill in the details and submit.

After submitting, go to Application Settings -> Authorization, where you will get the client key and client secret key. Provide permissions for the application and add an authorized redirect URL for OAuth 2.0 as shown below.


To register the user, go to the REST API console, click on the Authentication drop-down menu and select OAuth 2.0. You are then asked to allow the app to access your basic LinkedIn profile information; click Allow. If you are unable to get to this page, refer to (https://developer.linkedin.com/docs/oauth2). Enter the URL in the console to get an access token. Currently, all access tokens are issued with a 60-day lifespan.

There are different ways of accessing LinkedIn, such as Sign In with LinkedIn, posting on LinkedIn, managing company pages and adding details to profiles. Here we perform POST and GET methods against a company page. This link has all the details on how to manage a company page: (https://developer.linkedin.com/docs/company-pages).

From Salesforce I provide status, link, and image fields for the POST to LinkedIn. The image is given as a URL. When the button is clicked, the post is published to LinkedIn.

POST Method

First, add the endpoint to Remote Site Settings as specified in the code. Identify your company id from the URL of your company page, which looks like https://www.linkedin.com/company/12345678/. Send the parameters to LinkedIn as JSON in the callout; in the response, we get the id of the status update.

[sourcecode language="java"]
public static void postRequest(String post, String link, String image) {
    String NewLine = '\n';
    String NewLinereplace = ' ';
    String newPost = post.replace(NewLine, NewLinereplace);
    if (image != null) {
        // Pass the values in JSON format
        String text = '{"visibility": { "code": "anyone" },"comment": "' + newPost + '","content": {"submitted-url": "' + link + '","submitted-image-url":"' + image + '"}}';
        HttpRequest req = new HttpRequest();
        // https://api.linkedin.com - add the link to remote site settings
        // 12345678 - give your company id
        // oauth2_access_token - provide your access token
        req.setEndpoint('https://api.linkedin.com/v1/companies/12345678/shares?format=json&oauth2_access_token=AQW...QQqA&format=json');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(text);
        // Make the request
        Http http = new Http();
        HTTPResponse res = http.send(req);
        system.debug('response' + res.getBody()); // in the response we get the post id
    } else {
        // Pass the values in JSON format (no image)
        String text = '{"visibility": { "code": "anyone" },"comment": "' + newPost + '","content": {"submitted-url": "' + link + '"}}';
        HttpRequest req = new HttpRequest();
        // https://api.linkedin.com - add the link to remote site settings
        // 12345678 - give your company id
        // oauth2_access_token - provide your access token
        req.setEndpoint('https://api.linkedin.com/v1/companies/12345678/shares?format=json&oauth2_access_token=AQW...QQqA&format=json');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(text);
        // Make the request
        Http http = new Http();
        HTTPResponse res = http.send(req);
        system.debug('response' + res.getBody()); // in the response we get the post id
    }
}
[/sourcecode]
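
Concretely, the body built by the code above ends up looking like this (the values are illustrative):

{
  "visibility": { "code": "anyone" },
  "comment": "We are hiring Salesforce developers!",
  "content": {
    "submitted-url": "https://www.example.com/careers",
    "submitted-image-url": "https://www.example.com/images/hiring.png"
  }
}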

GET Method

For the GET, we make an API callout with the company id and access token.

[sourcecode language="java"]
public static void getRequest() {
    HttpRequest req = new HttpRequest();
    // https://api.linkedin.com - add the link to remote site settings
    // 12345678 - give your company id
    // oauth2_access_token - provide your access token
    req.setEndpoint('https://api.linkedin.com/v1/companies/12345678/updates?oauth2_access_token=AQW...QQqA&format=json');
    req.setMethod('GET');
    // Make the request
    Http http = new Http();
    HTTPResponse res = http.send(req);
    // The response body contains the information in JSON format, including id and message
    system.debug('response' + res.getBody());
}
[/sourcecode]

From Salesforce

We get Status, Link, and Image URL from the user.

[sourcecode language=”java”]
<aura:component controller=”PosttoSocialMediaEXT” implements=”force:appHostable, flexipage:availableForAllPageTypes, flexipage:availableForRecordHome, force:hasRecordId, forceCommunity:availableForAllPageTypes, force:lightningQuickAction” access=”global”>
<aura:attribute name=”status” type=”String”/>
<aura:attribute name=”link” type=”String”/>
<aura:attribute name=”image” type=”String”/>
<aura:attribute name=”LinkedIn” type=”Boolean” default=”false”/>
<lightning:card title=”Social Media Post”>
<div class=”slds-align_absolute-center”>
<aura:set attribute=”title”>
<div style=”width:40%;”>
<div class=”slds-align_absolute-center”>
<p style=”font-family:serif;font-size: 40px;”>Social Media Post</p>
</div>
</div>
</aura:set>
<div style=”width:40%;”>
Post Message:
<lightning:textarea label=”” name=”myTextArea” value=”{!v.status}”
maxlength=”1000″ />
Post Link:
<lightning:textarea label=”” name=”myTextArea” value=”{!v.link}”
maxlength=”300″ />
Post Image Link:
<lightning:textarea label=”” name=”myTextArea” value=”{!v.image}”
maxlength=”300″ /><br/>
<div>
<lightning:input type=”checkbox” label=”Add To LinkedIn” name=”LinkedId” checked=”{!v.LinkedIn}” />
</div>
<div class=”slds-align_absolute-center”>
<lightning:button variant=”brand” label=”Submit” onclick=”{! c.handleClick }” />
</div>
</div>
</div>
</lightning:card>

</aura:component>
[/sourcecode]

From Salesforce, through javascript controller, we pass the values to apex methods.

[sourcecode language="javascript"]
({
    handleClick : function(component, event, helper) {
        var action = component.get("c.postStatus");
        action.setParams({
            Post : component.get("v.status"),
            link : component.get("v.link"),
            image : component.get("v.image"),
            linkdIn : component.get("v.LinkedIn")
        });
        action.setCallback(this, function(response) {
            var state = response.getState();
            if (state === 'SUCCESS') {   // comparison, not assignment
                var toastEvent = $A.get("e.force:showToast");
                toastEvent.setParams({
                    "title": "Success!",
                    "message": "The Status has been updated successfully."
                });
                toastEvent.fire();
            }
        });
        $A.enqueueAction(action);
    }
})
[/sourcecode]

OUTPUT

The user fills in the details.

The post appears on LinkedIn as shown below.

If you have any questions please feel free to post it in the comment.

Einstein Intent Analysis Using Einstein Language on Salesforce Chatter

Einstein Intent

Einstein Intent helps categorize unstructured text into labels using the Intent API, for a better understanding of what the user is trying to accomplish. In Salesforce, the Einstein Intent API helps understand customer inquiries, making it easy to automatically route leads, escalate service cases and personalize marketing, and it is especially helpful for prioritizing customer service inquiries. Einstein Language can be brought into Salesforce by using Einstein Intent and Einstein Sentiment analysis in Chatter: in the e-commerce example discussed below, Einstein Intent is applied to Chatter posts and Einstein Sentiment to the comments.

For a better understanding of Einstein Intent, you can refer to the Intent Basics Trailhead module (https://trailhead.salesforce.com/en/modules/einstein_intent_basics). The steps to use Einstein Intent are:

  1. Create Dataset
  2. Train the Dataset
  3. Predict

Create Dataset:

Create a .csv file with one column of sample text and another column of labels, as shown below. You can create your own dataset and use a downloadable link for the training data; to learn how to create a downloadable link, refer to our earlier blog. The more data you give, the more accurately Einstein is trained. Here an e-commerce example is used with the labels: Billing, Password Help, Shipping Info, Order Change and Sales Opportunity.
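
A few illustrative rows (the text here is made up; your own examples will differ):

I was charged twice for my last order,Billing
I forgot my password and cannot log in,Password Help
When will my package arrive?,Shipping Info
I want to change the size of the shoes I ordered,Order Change
Do you offer discounts for bulk purchases?,Sales Opportunity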


Train the Dataset:

We give the downloadable link in the URL field and train the dataset. For a step-by-step walkthrough of training the data and building the component where models are created to predict text, refer to the earlier blog. After you follow those steps, go to Setup -> Apex Classes and replace the code, because we are now calling the Intent API to train the data. You will find classes like these.


In the EinsteinVision_PredictionService class, the intent is passed to the API through the language endpoints instead of the vision endpoints. Change the endpoint URLs as follows:

[sourcecode language="java"]
private static String BASE_URL = 'https://api.einstein.ai/v2';
private String DATASETS = BASE_URL + '/language/datasets';
private String LABELS = '/labels';
private String EXAMPLES = '/examples';
private String TRAIN = BASE_URL + '/language/train';
private String MODELS = '/language/models';
private String PREDICT = BASE_URL + '/language/intent';
private String API_USAGE = BASE_URL + '/apiusage';
private static String OAUTH2 = BASE_URL + '/oauth2/token';
public String data;
[/sourcecode]

Change the dataset type value to 'text-intent' in all the methods:

[sourcecode language="java"]
EinsteinVision_HttpBodyPartDatasetUrl parts = new EinsteinVision_HttpBodyPartDatasetUrl(url, 'text-intent');
[/sourcecode]

The parameter to pass is now just a string, so change the argument names wherever necessary to pass the model id and the data.

[sourcecode language="java"]
public EinsteinVision_PredictionResult METHOD (String modelId, String data) {
    EinsteinVision_HttpBodyPartPrediction parts = new EinsteinVision_HttpBodyPartPrediction(modelId, data);
[/sourcecode]

In the EinsteinVision_HttpBodyPartPrediction class, change the parameters of the constructor.

[sourcecode language="java"]
public EinsteinVision_HttpBodyPartPrediction(String modelId, String data) {
    this.modelId = modelId;
    this.data = data;
}
[/sourcecode]

Now the apex class is ready to use for Einstein Intent. The sample data can be downloaded from this link and the same can be specified to train the data (http://einstein.ai/text/case_routing_intent.csv).


Predict:

Now, when I give it a piece of text, Einstein Intent finds the probability of which department it belongs to and categorizes the data. Pass the name of the CSV file you saved, or the dataset name shown after training in the image above.

[sourcecode language="java"]
@AuraEnabled
public static String getPrediction(String Comments) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    String Prob;
    for (EinsteinVision_Dataset dataset : datasets) {
        if (dataset.Name.equals('case_routing_intent.csv')) {
            EinsteinVision_Model[] models = service.getModels(dataset);
            EinsteinVision_Model model = models.get(0);
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, Comments);
            EinsteinVision_Probability probability = result.probabilities.get(0);
            Prob = result.probabilities.get(0).label;
        }
    }
    return Prob;
}
[/sourcecode]

The component below gets the text from the user and displays the predicted label.

[sourcecode language=”html”]
<aura:component controller=”EinsteinVision_Admin” implements=”force:appHostable, flexipage:availableForAllPageTypes, flexipage:availableForRecordHome, force:hasRecordId, forceCommunity:availableForAllPageTypes, force:lightningQuickAction” access=”global”>
<aura:attribute name=”CsvURL” type=”String”/>
<aura:attribute name=”Probablity” type=”Object”/>
<center>
<lightning:card >
<aura:set attribute=”title”>
<p style=”font-family:serif;font-size: 40px;”>User Feedback</p>
</aura:set>
<div style=”width:40%;”>
<ui:inputText class=”textbox” aura:id=”select” placeholder=”Enter your comments”/><br/>
<br/>
<lightning:button onclick=”{!c.extractfile}”>search</lightning:button><br/>
<br/>
</div>

<p style=”font-family:serif;font-size: 20px;”><ui:outputTextArea value=”{!v.Probablity}”/></p>
</lightning:card>
</center>
</aura:component>
[/sourcecode]

Pass the data from the user to the Apex controller using controller.js and handle the returned response.

[sourcecode language=”java”]
({
extractfile : function(component, event, helper) {
var action = component.get(“c.getPrediction”);
var text = component.find(“select”).get(“v.value”);

action.setParams({
Comments: text
});
action.setCallback(this, function(response) {
component.set(“v.waiting”, false);
var state = response.getState();
if (state === ‘ERROR’) {
var errors = response.getError();
if (errors) {
if (errors[0] && errors[0].message) {
return alert(errors[0].message);
}
} else {
return console.log(“Unknown error”);
}
}
var result = response.getReturnValue();
component.set(“v.Probablity”,result);
});
component.set(“v.waiting”, true);
$A.enqueueAction(action);
}
})
[/sourcecode]

The intent of the text that we get is the output shown below.


Salesforce Chatter

A real-world place to use Einstein Language in Salesforce is Chatter. Here we post a message and calculate its intent, and for each comment on the post we calculate the sentiment (positive, neutral or negative). When a user posts a message on Chatter and a comment is added to it, a trigger fires and calls the intent and sentiment methods. I have created a custom object to store the posts and comments in custom fields. In the after insert trigger, we feed the text to the Apex methods to get the type and the sentiment of the text.

 

Trigger to fire after insert on FeedComment:

[sourcecode language="java"]
trigger Einstein_ChatterComment on FeedComment (after insert) {
    String Body;
    for (FeedComment comment : Trigger.New) {
        Body = comment.Id;
    }
    Einstein_ChatterHandler.getProbability(Body);
}
[/sourcecode]
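
Note that the loop above keeps only the last comment's id, so in a bulk insert only one comment is analyzed. A bulk-safe variant is sketched below; it assumes you also change the handler to accept a list of ids (the getProbabilities method shown here is hypothetical):

[sourcecode language="java"]
trigger Einstein_ChatterComment on FeedComment (after insert) {
    List<Id> commentIds = new List<Id>();
    for (FeedComment comment : Trigger.New) {
        commentIds.add(comment.Id);
    }
    // Hypothetical handler method that loops over all the comment ids
    Einstein_ChatterHandler.getProbabilities(commentIds);
}
[/sourcecode]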

Handler for the trigger:

[sourcecode language="java"]
global class Einstein_ChatterHandler {
    @future(callout=true)
    public static void getProbability(String body) {
        FeedComment fd = [select CommentBody, FeedItemId from FeedComment where id = :body];
        FeedItem fi = [select Body from FeedItem where id = :fd.FeedItemId];

        String intentLabel = EinsteinVision_Admin.getPrediction(fi.Body);
        String sentimentLabel = EinsteinVision_Sentiment.findSentiment(fd.CommentBody);

        Einstein__c ein = new Einstein__c(Chatter_Message__c = fi.Body, Comment__c = fd.CommentBody, Feedback_type__c = intentLabel, Emotion__c = sentimentLabel);
        insert ein;
    }
}
[/sourcecode]

Finally, we create reports to view the analyzed data as shown below. The report is grouped by sentiment for each department.


Einstein Language can also be applied to social media to understand customer opinions; that will be the topic of an upcoming blog. Please feel free to contact us with any doubts or queries.

Einstein Vision – Real Estate App

Suppose that on a real estate website we search for properties and get related results. When this comes to Salesforce, we have Einstein, which can help with large amounts of data. Einstein offers image classification and object detection through Einstein Vision.

In the same way, a real estate app is built using Einstein Vision: a user searches for a property on Property.com by choosing the type of house, and as soon as the input is given, the related images are displayed. There are thousands of images to process and display, which is difficult to do manually but can be achieved with predictive algorithms. In this scenario Einstein plays a vital role, acting like a human in classifying the images. For more background, you can go through the Trailhead project, which provides a managed package that is useful for this demo (https://trailhead.salesforce.com/en/projects/build-a-cat-rescue-app-that-recognizes-cat-breeds).

Steps to Follow:
  1. AWS
  2. Train Dataset
  3. S3 Link

1. AWS storage is used for two reasons. First, storing a huge number of images inside Salesforce is difficult, whereas AWS storage places no practical limit on the amount of data stored. Second, Einstein is trained from a downloadable zip link in URL format. In AWS we create a bucket in S3 and store the files in that bucket. Here I have created a zip folder and a common folder. The zip folder is used to train the dataset and should be more than 12 MB; the more images you add, the more accurate Einstein's predictions will be. One important thing when you create files inside AWS is to make every file publicly accessible.


Now the link is ready to train the dataset (https://s3.amazonaws.com/sfdc-einstein-demo/newmodifiedhouses2.zip). The zip folder contains sub-folders as shown below.


2. For Einstein, each sub-folder name is a label and the images inside the sub-folders are the examples. The dataset is trained by passing the link; after training, a model is created for the dataset's labels.


We pass the URL to the Apex class on click of the Create Dataset button. The file is downloaded to MetaMind, where Einstein processes it. To learn more about MetaMind, refer to (https://metamind.readme.io/docs/introduction-to-the-einstein-predictive-vision-service).

[sourcecode language="java"]
//method1 in awsFileTest.apex
@AuraEnabled
public static void createDatasetFromUrl(String zipUrl) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    service.createDatasetFromUrlAsync(zipUrl);
    system.debug(service);
}
[/sourcecode]

On refreshing the dataset we get a list of labels and the number of files that we give to train Einstein.

[sourcecode language="java"]
//method2 in awsFileTest.apex
@AuraEnabled
public static List<EinsteinVision_Dataset> getDatasets() {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    return datasets;
}
[/sourcecode]

Einstein predicts using the dataset models created after training. We can also delete a trained dataset and add a new one.

[sourcecode language="java"]
//method3 in awsFileTest.apex
@AuraEnabled
public static String trainDataset(Decimal datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Model model = service.trainDataset(Long.valueOf(String.valueOf(datasetId)), 'Training', 0, 0, '');
    return model.modelId;
}
//method4 in awsFileTest.apex
@AuraEnabled
public static void deleteDataset(Long datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    service.deleteDataset(datasetId);
}
[/sourcecode]

On training the dataset, a dataset model with an id is generated.

[sourcecode language="java"]
public static List<EinsteinVision_Model> getModels(Long datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Model[] models = service.getModels(datasetId);
    return models;
}
[/sourcecode]

3. We use the S3 Link app from AppExchange to iterate over the file names stored in AWS. S3 Link is basically a link between Salesforce and AWS. The app lets us import and export files from AWS; importing a file brings in only the file's details and provides a link to view or download the image. In a callout (to AWS) we would otherwise have to hardcode the destination file name, and with many files that is not possible. To install the app, follow the guidelines in the given link. (https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3000000CW1OXEA1)


Here I make a callout to AWS for each image name and receive the image as a blob, because Einstein needs the actual (original) image to compute the probability of each possible type of house.

[sourcecode language="java"]
@AuraEnabled
public static List<awsFileTestWrapper.awswrapper> getImageAsBlob() {

    List<NEILON__File__c> fList = [SELECT Name FROM NEILON__File__c];
    System.debug('flist ' + fList);
    Map<Blob, String> bList = new Map<Blob, String>();
    for (NEILON__File__c nm : fList) {
        Http h = new Http();
        HttpRequest req = new HttpRequest();
        String firstImageURL = 'https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/' + nm.Name;
        //Replace any spaces with %20
        System.debug('firstImageURL' + firstImageURL);
        firstImageURL = firstImageURL.replace(' ', '%20');
        req.setEndpoint(firstImageURL);
        req.setMethod('GET');
        //If you want to get a PDF file the Content Type would be 'application/pdf'
        req.setHeader('Content-Type', 'image/jpg');
        req.setCompressed(true);
        req.setTimeout(60000);

        HttpResponse res = null;
        res = h.send(req);
        //These next three lines can show you the actual response for dealing with error situations
        String responseValue = '';
        responseValue = res.getStatus();
        System.debug('Response Body for File: ' + responseValue);
        //This is the line that does the magic. We can get the blob of our file. This getBodyAsBlob method was added in the Spring 2012 release and version 24 of the API.
        Blob image = res.getBodyAsBlob();
        System.debug('blob' + image);
        // bList.add(res.getBodyAsBlob());
        bList.put(res.getBodyAsBlob(), nm.Name);
    }
    System.debug('blob list' + bList);

    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    List<awsFileTestWrapper.awswrapper> listaws = new List<awsFileTestWrapper.awswrapper>();

    for (EinsteinVision_Dataset dataset : datasets) {

        EinsteinVision_Model[] models = service.getModels(dataset);
        EinsteinVision_Model model = models.get(0);
        Set<Blob> bList2 = bList.keySet();
        for (Blob fileBlob : bList2) {
            System.debug('blob in loop ' + fileBlob);
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            EinsteinVision_Probability probability = result.probabilities.get(0);
            System.debug('1.' + result.probabilities.get(0).label + '----' + result.probabilities.get(0).probability
                + ' 2.' + result.probabilities.get(1).label + '----' + result.probabilities.get(1).probability
                + ' 3.' + result.probabilities.get(2).label + '----' + result.probabilities.get(2).probability
                + ' 4.' + result.probabilities.get(3).label + '----' + result.probabilities.get(3).probability
                + ' 5.' + result.probabilities.get(4).label + '----' + result.probabilities.get(4).probability);
            awsFileTestWrapper.awswrapper aws = new awsFileTestWrapper.awswrapper();
            aws.filename = bList.get(fileBlob);
            //aws.content = fileBlob;
            aws.mylabel = result.probabilities.get(0).label;
            aws.prob = result.probabilities.get(0).probability;
            listaws.add(aws);
        }
    }
    System.debug('values are' + listaws[0].filename);
    return listaws;
}
[/sourcecode]
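
The manual replacement of spaces with %20 above handles the most common case, but file names can contain other characters that are not URL-safe. A minimal alternative sketch (not part of the original post) that URL-encodes just the file name with the standard EncodingUtil class, then converts the '+' form-encoding of spaces to '%20' so it is valid inside a URL path:

[sourcecode language="java"]
// Hedged alternative to the manual replace(' ', '%20') above.
// EncodingUtil.urlEncode encodes spaces as '+', which suits query strings but not URL paths,
// so convert '+' back to '%20' for the S3 object key.
String fileName = 'my house photo.jpg';   // stands in for nm.Name inside the loop above
String encodedName = EncodingUtil.urlEncode(fileName, 'UTF-8').replace('+', '%20');
String imageUrl = 'https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/' + encodedName;
System.debug(imageUrl);
[/sourcecode]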

Einstein returns a label and a probability for each image. result.probabilities.get(0) holds the highest-ranked label and its probability for that particular image. I pass the file name, label, and probability back to the Lightning component controller, which is why a list of wrappers is used.

[sourcecode language="java"]
//awsFileTestWrapper.apex
public class awsFileTestWrapper {
    public class awswrapper {
        @AuraEnabled public String mylabel;
        @AuraEnabled public String filename;
        @AuraEnabled public Double prob;
    }
}
[/sourcecode]
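
A quick way to sanity-check the wrapper output before wiring up the component is an anonymous Apex snippet. This is illustrative only and assumes getImageAsBlob lives in the awsFileTest class, as the method comments earlier suggest.

[sourcecode language="java"]
// Hedged sketch: inspect the first prediction returned by getImageAsBlob in anonymous Apex.
List<awsFileTestWrapper.awswrapper> results = awsFileTest.getImageAsBlob();
if (!results.isEmpty()) {
    System.debug(results[0].filename + ' -> ' + results[0].mylabel + ' (' + results[0].prob + ')');
}
[/sourcecode]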

In the component's controller, the callout iterates over the file names, and the images fetched from AWS are displayed to the user.

[Image: aura cmp.PNG]

The values from the Apex controller are sent to the JavaScript controller.

[sourcecode language="java"]
//controller.js
({
    extractfile: function(component, event, helper) {
        alert('button clicked');
        var val = component.find("select").get("v.value");
        alert('value' + val);
        var names = [];
        var probs = [];
        component.set("v.IsSpinner", true);
        var action1 = component.get("c.getImageAsBlob");
        action1.setCallback(this, function(response) {
            var ret = response.getReturnValue();
            var name = '';
            var prob = '';
            for (var i = 0; i < ret.length; i++) {
                if (ret[i].mylabel == val) {
                    name = ret[i].filename;
                    names.push(name);
                    prob = ret[i].prob;
                    probs.push(prob);
                }
            }
            component.set("v.IsSpinner", false);
            component.set("v.contents", names);
            component.set("v.probability", probs);
        });
        $A.enqueueAction(action1);
    },
})
[/sourcecode]

The final output shows the images and their probabilities to the user, as seen below.

[Image: output]

Feel free to contact us with any doubts, or if you need the code shown in the screenshots.

References:
  1. https://developer.salesforce.com/blogs/developer-relations/2017/05/image-based-search-einstein-vision-lightning-components.html
  2. https://andyinthecloud.com/2017/02/05/image-recognition-with-the-salesforce-einstein-api-and-an-amazon-echo/
  3. https://metamind.readme.io/docs/prediction-with-image-file

 

]]>
https://absyz.com/einstein-vision-real-estate-app/feed/ 2
Einstein Sentiment Analysis https://absyz.com/einstein-sentiment-analysis/ https://absyz.com/einstein-sentiment-analysis/#comments Fri, 09 Feb 2018 06:07:31 +0000 https://teamforcesite.wordpress.com/?p=8444

Einstein Sentiment predicts whether a review or message is positive, negative, or neutral. Using this, companies can categorize customer attitudes and take appropriate action to build their insights. As discussed earlier for Marketing Cloud (https://teamforcesite.wordpress.com/2018/02/07/marketing-cloud-social-studio-series-macros/), sentiment analysis can also be achieved using Einstein.

Here the user enters a message and Einstein determines its sentiment, which companies can treat as appreciation when it is positive and act on when it is negative. This is helpful when handling a large number of clients, for example with Facebook comments, client replies, and other real-time feedback.

To start with Einstein, first set up your environment with the help of the Trailhead module (https://trailhead.salesforce.com/modules/einstein_intent_basics/units/einstein_intent_basics_env). Only one key is generated per account email id; it is stored in Files, and the access token generated from it is valid for a limited time. Create two Apex classes to generate a JWT access token; you can refer to the Trailhead project (https://trailhead.salesforce.com/projects/predictive_vision_apex/steps/predictive_vision_apex_get_code).

[sourcecode language="java"]
//JWT.apex
public class JWT {

    public String alg {get;set;}
    public String iss {get;set;}
    public String sub {get;set;}
    public String aud {get;set;}
    public String exp {get;set;}
    public String iat {get;set;}
    public Map<String,String> claims {get;set;}
    public Integer validFor {get;set;}
    public String cert {get;set;}
    public String pkcs8 {get;set;}
    public String privateKey {get;set;}

    public static final String HS256 = 'HS256';
    public static final String RS256 = 'RS256';
    public static final String NONE = 'none';

    public JWT(String alg) {
        this.alg = alg;
        this.validFor = 300;
    }

    public String issue() {
        String jwt = '';
        JSONGenerator header = JSON.createGenerator(false);
        header.writeStartObject();
        header.writeStringField('alg', this.alg);
        header.writeEndObject();
        String encodedHeader = base64URLencode(Blob.valueOf(header.getAsString()));

        JSONGenerator body = JSON.createGenerator(false);
        body.writeStartObject();
        body.writeStringField('iss', this.iss);
        body.writeStringField('sub', this.sub);
        body.writeStringField('aud', this.aud);
        Long rightNow = (DateTime.now().getTime() / 1000) + 1;
        body.writeNumberField('iat', rightNow);
        body.writeNumberField('exp', (rightNow + validFor));
        if (claims != null) {
            for (String claim : claims.keySet()) {
                body.writeStringField(claim, claims.get(claim));
            }
        }
        body.writeEndObject();

        jwt = encodedHeader + '.' + base64URLencode(Blob.valueOf(body.getAsString()));

        if (this.alg == HS256) {
            Blob key = EncodingUtil.base64Decode(privateKey);
            Blob signature = Crypto.generateMac('hmacSHA256', Blob.valueOf(jwt), key);
            jwt += '.' + base64URLencode(signature);
        } else if (this.alg == RS256) {
            Blob signature = null;
            if (cert != null) {
                signature = Crypto.signWithCertificate('rsa-sha256', Blob.valueOf(jwt), cert);
            } else {
                Blob privateKey = EncodingUtil.base64Decode(pkcs8);
                signature = Crypto.sign('rsa-sha256', Blob.valueOf(jwt), privateKey);
            }
            jwt += '.' + base64URLencode(signature);
        } else if (this.alg == NONE) {
            jwt += '.';
        }
        return jwt;
    }

    public String base64URLencode(Blob input) {
        String output = EncodingUtil.base64Encode(input);
        output = output.replace('+', '-');
        output = output.replace('/', '_');
        while (output.endsWith('=')) {
            output = output.substring(0, output.length() - 1);
        }
        return output;
    }
}
[/sourcecode]

To generate a new access token each time, the JWTBearerFlow class is used.

[sourcecode language="java"]
//JWTBearerFlow.apex
public class JWTBearerFlow {

    public static String getAccessToken(String tokenEndpoint, JWT jwt) {

        String access_token = null;
        String body = 'grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=' + jwt.issue();
        HttpRequest req = new HttpRequest();
        req.setMethod('POST');
        req.setEndpoint(tokenEndpoint);
        req.setHeader('Content-type', 'application/x-www-form-urlencoded');
        req.setBody(body);
        Http http = new Http();
        HTTPResponse res = http.send(req);

        if (res.getStatusCode() == 200) {
            System.JSONParser parser = System.JSON.createParser(res.getBody());
            while (parser.nextToken() != null) {
                if ((parser.getCurrentToken() == JSONToken.FIELD_NAME) && (parser.getText() == 'access_token')) {
                    parser.nextToken();
                    access_token = parser.getText();
                    break;
                }
            }
        }
        return access_token;
    }
}
[/sourcecode]

Now the sentiment is analyzed through the probabilities returned by Einstein. In the class below we use the JWT Apex class to pass the key and generate a new access token (https://metamind.readme.io/docs/what-you-need-to-call-api). The method returns a string containing the labels and their probabilities.

[sourcecode language="java"]
//EinsteinVision_Sentiment.apex
@AuraEnabled
public static String findSentiment(String text) {
    ContentVersion con = [SELECT Title, VersionData
                          FROM ContentVersion
                          WHERE Title = 'einstein_platform'
                          OR Title = 'predictive_services'
                          ORDER BY Title LIMIT 1];

    String key = con.VersionData.toString();
    key = key.replace('-----BEGIN RSA PRIVATE KEY-----', '');
    key = key.replace('-----END RSA PRIVATE KEY-----', '');
    key = key.replace('\n', '');
    JWT jwt = new JWT('RS256');
    jwt.pkcs8 = key;
    jwt.iss = 'developer.force.com';
    jwt.sub = 'xxx@xxx.com'; // Update with your own email ID
    jwt.aud = 'https://api.metamind.io/v1/oauth2/token';
    jwt.exp = String.valueOf(3600);
    String access_token = JWTBearerFlow.getAccessToken('https://api.metamind.io/v1/oauth2/token', jwt);
    String keyaccess = access_token;

    Http http = new Http();
    HttpRequest req = new HttpRequest();
    req.setMethod('POST');
    req.setEndpoint('https://api.einstein.ai/v2/language/sentiment');
    req.setHeader('Authorization', 'Bearer ' + keyaccess);
    req.setHeader('Content-type', 'application/json');
    String body = '{\"modelId\":\"CommunitySentiment\",\"document\":\"' + text + '\"}';
    req.setBody(body);
    HTTPResponse res = http.send(req);
    String fullString = res.getBody();
    String removeString = fullString.removeEnd('],"object":"predictresponse"}');
    String stringBody = removeString.removeStart('{"probabilities":[');
    return stringBody;
}
[/sourcecode]
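
Stripping the JSON prefix and suffix with removeStart and removeEnd works for this fixed response shape, but it is brittle. A minimal alternative sketch (not part of the original post) that parses the raw response body (res.getBody()) with JSON.deserializeUntyped, assuming the {"probabilities":[...],"object":"predictresponse"} shape shown above:

[sourcecode language="java"]
// Hedged sketch: parse the sentiment response with JSON.deserializeUntyped instead of string surgery.
// Assumes the response shape {"probabilities":[{"label":...,"probability":...}],"object":"predictresponse"}.
public static Map<String, Double> parseSentiment(String responseBody) {
    Map<String, Double> labelToProbability = new Map<String, Double>();
    Map<String, Object> parsed = (Map<String, Object>) JSON.deserializeUntyped(responseBody);
    for (Object entry : (List<Object>) parsed.get('probabilities')) {
        Map<String, Object> probability = (Map<String, Object>) entry;
        labelToProbability.put((String) probability.get('label'),
                               Double.valueOf(String.valueOf(probability.get('probability'))));
    }
    return labelToProbability;
}
[/sourcecode]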

Using the probabilities, I display them as a chart with Chart.js (see http://www.chartjs.org/docs/latest/getting-started/). The chart.js file is saved as a static resource in Salesforce and used in the component.

[Image: lightng.png]

The response from the Apex controller is handled in JavaScript, where we split the data into two lists (labels and values) that are passed into the chart data.

[sourcecode language="java"]
({
    extractfile: function(component, event, helper) {
        var val = component.find("select").get("v.value");
        alert('value' + val);
        var action1 = component.get("c.findSentiment");
        action1.setParams({ text: val });

        action1.setCallback(this, function(response) {

            var ret = response.getReturnValue();
            component.set("v.probability", ret);
            alert('probability ' + component.get("v.probability"));

            var line = component.get("v.probability");
            var list = line.split(',');
            var temp = 0;
            var labels = [];
            var values = [];
            for (var i = 0; i < list.length; i++) {
                if (i % 2 == 0) {
                    list[i] = list[i].match(':"(.*)"')[1];
                    temp = temp + 1;
                    labels.push(list[i]);
                } else {
                    temp = temp + 1;
                    list[i] = list[i].match(':(.*)}')[1];
                    values.push(list[i]);
                }
            }
            component.set("v.labels", labels);
            component.set("v.values", values);

            var label = component.get("v.labels");
            var value = component.get("v.values");
            var data = {
                labels: [label[0], label[1], label[2]],
                datasets: [
                    {
                        fillColor: '#b9f6ca',
                        strokeColor: '#b9f6ca',
                        data: [value[0], value[1], value[2]]
                    }
                ]
            };
            var options = { responsive: true };
            var element = component.find('chart').getElement();

            var ctx = element.getContext('2d');

            var myBarChart = new Chart(ctx).Bar(data, options);
        });
        $A.enqueueAction(action1);
    },

})
[/sourcecode]

The result of the above code is shown below.

[Image: output for sentiment.png]

To know more about Einstein, keep visiting our blog. For any doubts, feel free to reach out to us.

]]>
https://absyz.com/einstein-sentiment-analysis/feed/ 2