Replay: Building Einstein for Flow with Ajaay Ravi


Today on the Salesforce Admins Podcast, we talk to Ajaay Ravi, Senior Technical Product Manager at Salesforce. Join us as we chat about AI, Einstein for Flow (which at the time of the recording was called Flow GPT), and why admins should pay close attention.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Ajaay Ravi.

Do Androids dream of bohemian furniture?

Ajaay learned a lot about AI when he led a team at Amazon tasked with building technology to recommend furniture. The only problem was that they were all engineers and didn’t know the first thing about interior design. They brought in some experts who could tell them what individual components made something a particular style, and used their knowledge to train the AI by giving it high-quality data in bite-sized pieces that it could understand.

What’s important to understand here is that any AI model requires training. And to do that, you need to break a concept like “furniture style” down into tags like “upholstery,” “seat,” “legs,” “paisley,” etc. Then you can give it a group of tagged images to try to teach it a broader concept, like “Bohemian.” Finally, you test it to see if it can identify new images that have Bohemian furniture in them, give the model feedback on how it did, and start the loop again.
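Here's a minimal sketch of that loop in Python using scikit-learn. The tags, data, and numbers are invented for illustration; this is not the actual Amazon or Salesforce pipeline, just the tag-train-test-feedback pattern Ajaay describes.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative only: each row is one piece of furniture described by tags
# (1 = tag present, 0 = absent); the label says whether designers called it Bohemian.
# Tag columns: [paisley, rattan_seat, turned_legs, tufted_upholstery]
features = [
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = Bohemian, 0 = some other style

# Train on part of the tagged data, hold the rest back as "unseen" test items.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.33, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on unseen items:", model.score(X_test, y_test))

# Feedback loop: corrected examples get added to the training data,
# and the model is retrained for the next round.
X_train = X_train + [[1, 1, 0, 1]]
y_train = y_train + [1]
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
```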

Einstein for Flow

For Salesforce, Ajaay has been building Einstein for Flow. The goal is to create a tool where you can just describe the automation you need and it will build you a flow—automagically. They’re still in the testing and training phase but the possibilities are tantalizing.

Depending on what type of user you are, you might use Einstein for Flow in several different ways. For those that are already experienced with Flow, you can leverage it to eliminate some steps and work faster. And for people newer to the ecosystem, Ajaay hopes it can remove barriers to unlocking the full potential of the platform.

Learning to crawl

Just like with the interior design tool Ajaay built earlier in his career, Einstein for Flow needs some time to learn. For now, they’re focused on building simple flows of five steps or less with minimal decision elements and branches. But it’ll only get better as they keep working on it and getting feedback from test users.

Be sure to check out the full episode for more about what makes for a good prompt, what you can do to get ready for Einstein for Flow, and how AI can “hallucinate.”

Full show transcript

Mike:
Hey, Salesforce Admins. I am bringing back an episode that you might’ve missed in July, where we talked about what was then called Flow GPT. It’s now called Einstein for Flow. And now that we’re past Dreamforce, we can kind of dive into it a little bit more.

This is a really cool product that Salesforce is working on. And way back in July, I had the privilege of speaking with Ajaay Ravi, who is an engineer with Salesforce and, really, the tip of the spear working on all of this. We talk about AI and GPT and what is now Einstein for Flow, and really just why you as a Salesforce Admin should pay attention to it. So I want to call that out. Also, because we recorded this back in July, we do say “Einstein GPT” a lot, or “Flow GPT.” Of course, that naming convention has been changed, so we couldn’t really go back through and change that. But this is a good episode. Dig into it.

Before you get into it, though, I just want to be sure you’re doing one thing and you’re following the podcast. So if you’re just listening to this for the first time and you’re not subscribed on iTunes or Spotify, go ahead and click that follow or subscribe button. And then, that way, every time a new podcast drops, which is Thursday morning, it’ll be automatically put on your phone. And so you don’t have to worry about it. And then you can just get in your car and ride to work or the train or ride your bike or take your dog for a walk. But anyway, this is a really fun conversation about Einstein for Flow with Ajaay.

Ajaay Ravi:
Thank you, Mike. Happy to be here.

Mike:
We’re talking about AI and GPT, and I feel like you are the right person to do that, so let’s start off by just giving everybody a background on your history of AI and GPT and how you got to Salesforce.

Ajaay Ravi:
Yeah, absolutely. Primarily, I have an engineering background and I used to design hardware chips, primarily networking and storage chipsets, and then I moved into a management career after doing my MBA. My initial brush with AI was at my previous job at Amazon, where I was a product manager. I created a product called Shop by Style, which is live on Amazon Home. Essentially, it deals with how you sell furniture, heavy, bulky home furnishings, to customers over the phone and online. Traditionally, those are things that customers would like to walk into a store, touch, see, feel, and then figure out if it passes the squish test or not, and then purchase it. But our focus was on how do we actually get customers to imagine what a particular piece of furniture will look like inside their own house, and how do we enable every single person to become an interior designer of sorts and give them the skill set to do that?

So that’s where we put together a team of computer vision scientists and we said, “Let’s go and let’s teach an AI model how to recognize style and stylistically put furniture together,” and we realized that none of us knew anything about style or interior design, given that all of us had engineering backgrounds, and we were like, “Oops, what do we do now?” And then, to be very honest with you, I did not know what Bohemian meant, and what is the difference between industrial and glam and California casual and…

Mike:
Farmhouse chic. Don’t forget about that.

Ajaay Ravi:
Absolutely. Farmhouse chic.

Mike:
Yep.

Ajaay Ravi:
So one of my first design [inaudible 00:03:01] was to hire some interior designers, actually new college grads fresh out of school. And they had two things. One was they had to come and teach blokes like us what style was about, and educate us on design and design concepts. And then they helped go through every single product or home furnishings product that Amazon sold, and tagged every single image with what style it pertained to. And that’s when I realized that a single piece of furniture could also pertain to multiple different styles, and you had to break it down into the components. You had to see what kind of arm it had, what kind of upholstery, what kind of leather, what kind of back, what kind of seat, what kind of cushioning. Oh my God, they did a wonderful job.

So that was one of our first inputs into our computer vision models, and then we started teaching the computer how to discern style. And then we put together this product called Shop by Style, where now the AI could automatically go in and if there was a product that a seller or anybody posted on Amazon, it would automatically look at the images that they post and determine what kind of style it pertained to. A particular piece of furniture could pertain to multiple styles too. And then the best piece was depending on what it is that you’re browsing, it could also recommend stylistically appropriate complementary products to go with it. For example, if you were looking at a couch, it could help you complete the look by suggesting a coffee table and an end table and a lamp to go with it. So that was my first foray into everything AI-related. And essentially, we used things like AR and VR.

Then how do we show these pictures on a phone and get you to just take a picture of your room from your phone, and then superimpose these images, at scale, on that picture, and help you visualize, if you purchased all these things and put them wherever you wanted in your house, how it would look and complete the look for you? So yeah, that’s where I started. And then, after my time at Amazon, I moved into Salesforce. Seattle was getting a little too…

Mike:
Rainy?

Ajaay Ravi:
… rainy and cold and snowy for my liking.

Mike:
Oh, no.

Ajaay Ravi:
So as I like to tell people, I’ve spent three years in Cincinnati in the Midwest and I’ve paid my dues to the Midwest, and given that Seattle was starting to get snowy, I wanted to get out of there. So yeah, that brought me back to my origins here in the Bay Area. And then, yeah, it’s been a wonderful journey with Salesforce. I started off as a product owner on the Marketing Cloud Einstein team, where we were building AI-driven products to help marketers reach their customers at the right time, on the right channel, with the right frequency to help them be successful and help sell more products to their customers.

Mike:
Well, I can speak on behalf of maybe thousands of bachelors across the world that you’ve helped at Amazon design cool living rooms as a thank you, because that’s generally how I shop when I’m on Amazon is I find one thing I like and then everything else that goes with it. Yes, I’ll take all of that. I need basically that room. So thanks for inventing that and making us look good.

Ajaay Ravi:
Yeah. Well, you are very welcome. Like I said, I had an absolutely wonderful team that worked the magic behind the scenes, so I can’t take all the credit for it.

Mike:
Yeah, so I feel like I should just upfront let you know, I’m going to ask a lot of stupid questions or questions that I feel are stupid, but are questions that I think everybody’s trying to figure out. In your description, you talked about furniture and having these interior designers come in and tag the images with the style. And when Salesforce launched a lot of Salesforce Einstein and the Einstein product and Next Best Action, a lot of the discussion for admins was, “Okay, we need to make sure our data’s good so that we can train the data model,” and it sounds like that was something you were working on.

I bring that up because the juxtaposition now that I feel is, and I did this the other day, I went to ChatGPT. I didn’t train it on a data model. I just asked it, write a one paragraph summary of X, Y, and Z, and it did that. So, long question short, can you help me understand the difference between when you were training that data model to show only Bohemian-style furniture versus what’s happening when I just type in a text prompt with ChatGPT?

Ajaay Ravi:
Absolutely. Yeah, that is actually a fantastic question. So whether you’re typing a text-based input into ChatGPT or it is something more sophisticated and targeted that you’re looking for, any kind of AI model that runs behind the scenes requires training. Even these LLMs, or large language models, like OpenAI’s ChatGPT, are also pre-trained on multiple different kinds of inputs and responses and text; it’s just that we don’t go in and personally train those models. Those models are already pre-trained and hosted by OpenAI, and we just access them.

Mike:
Okay.

Ajaay Ravi:
So the way I like to talk about models is this: I have a two-year-old and a four-month-old at home. It’s like trying to teach them anything about how to communicate or language or anything in general. If I go to my younger one, who’s four months old, and I try to talk to him, sometimes I try to talk world politics or philosophy, he just blinks back at me and all he says is, “Baba.”

Mike:
That’s all sometimes I say too, when people talk world politics with me.

Ajaay Ravi:
Well, that is true. Unfortunately, I’m not good at talking baby language, I guess, when my wife drops me off with him. I need to find topics to talk to him about, so I just scroll the news and then I’m like, “Hey, do you want to hear about a new piece of news today?” Anyway, that’s how my interactions go with him. It was the same with my two-year-old as well when he was an infant. But then you have to constantly keep, one, talking to them, and two, keep introducing them to these new simpler concepts because their mind is like a blank slate, and then they start forming these connections and maps as you keep trying to do things, as you keep trying to tell them things.

And you have to keep repeating. It’s not like you tell them something just once and they get it right off the bat. Another example is my toddler: recently, I told him he shouldn’t be kicking the ball on the road when there’s nobody present. And I thought he would understand that, hey, balls should not be kicked on the road. And what he understood was that particular ball, which was a soccer ball that he had, should not be kicked on the road. So he brings his basketball next and he is like, “Dad, can I kick this ball on the road?”

Mike:
I like your kids.

Ajaay Ravi:
Yeah. And then I realized my instruction to him was wrong because I did not specify the fact that any kind of ball should not be kicked on the road. I just said, “Don’t kick that ball on the road.” So that’s how it comes in. That’s how you think about it when you’re training models. You have a certain kind of input that you’re training the models on. For example, let’s go back to training that model to recognize furniture style. So essentially, if you look at training, it consists of two pieces. One is you have a training data set and the other one is you have a testing data set. So you first train a model to recognize or do something, and then you test it against various other inputs to see how it responds to your test scenario. The thing is, you can’t train it for every scenario, including the ones in your test, because then tomorrow, if there is a scenario that comes up that it doesn’t know about or that it hasn’t seen, it will react in a way that you don’t expect it to react.

So what I mean by that is, going back to this ball example, I needed to tell my son repeatedly that he cannot kick the soccer ball on the road without supervision, he cannot kick the basketball on the road, and he cannot kick his bouncy ball on the road. So I had to tell him one after the other, patiently answer his questions. And then what I did was I just brought a tennis ball and then I gave it to him and I said, “What do you want to do with this?” And then he finally was like, “No, I’m not going to kick it on the road because dad said no kicking any ball on the road.” I was like, “Okay, you finally got it,” so that’s essentially the thing. So you try to train a model by giving it a few inputs. For example, what we did was we took 300 pictures of different furniture types that were Bohemian, and then we broke it down and you have to do what is called labeling or tagging.

And then we labeled each one of those parts. For example, what is the arm? What was the seat, the cushion, the upholstery, the legs, the back, the arch of the back? All of that. And then we gave it to the model and we said, “Hey, if you see this tag and this image, and you can understand that this tag pertains to this image, these are what constitute Bohemian.” So it repeatedly looked through all those 300 sets of images and tags that we provided, and it formed a neural network where it was like, “Okay, I think I now understand what Bohemian looks like.” And the next step after the training is testing: now we give it 600 new images that it has not seen, and we tell it, “Now go and find out if this is Bohemian or not.” And then of course you expect it to get it right at least a fair number of times.

And then every time it gets it wrong, you provide feedback. You tell it, “Hey, for this image, you got it wrong.” And then it learns from that feedback. So over time, the accuracy improves. That is what training constitutes. It’s the same thing for the large language models. If you look at OpenAI, whatever text you’re typing in today, there are billions of users across the world who have been typing in everything from “Write me a poem for Mother’s Day” to what I recently did, which was ask for an overnight oats recipe that did not use nuts, so different kinds of inputs from billions of people around the world. It’s continually learning. And the best part is that feedback is also captured. Every time you think that ChatGPT hasn’t gotten something right, you hit that thumbs down button so ChatGPT knows; the model, which was trained on some training data, is now effectively being tested against new data, and it learns as it keeps going and keeps getting better.

And it’s the same thing with the internal models that we have in Salesforce as well. One of the coolest things that we’re working on right now is called Einstein GPT for Flow, because, as we know, Flow is a very nuanced and sophisticated product that our admins use. So we were just thinking about, “Hey, there is this new GPT feature where you can just talk to it in natural language and it can do something really technical and cool and just give you an output. Why don’t we take that and try to help our customers and just have them describe a flow that they want to create?” And then we just automatically create it for them. They don’t have to go through the steps of opening Flow Builder, bringing in different elements, actually configuring them, connecting them, and going into the property editors and making sure that they have the right information, because all of this is time-consuming.

What if we just had them describe in two sentences or three sentences what it is that they’re trying to do and then voila, just go ahead and automatically create it for them? So that I think is a long-winded answer to your question.

Mike:
No, it’s good. And you took us there because I did want to talk about why Salesforce is paying attention to GPT and what we’re trying to do with Flow GPT, because I feel like, at least so far, my interaction with this new tech is, “Tell me how many pizzas to order for a birthday party,” or like you did, “Help me find a recipe.” And I’m thinking, so what are we trying to do here at Salesforce with that? But you tipped the answer: have the person describe, essentially, the automation they need created in Salesforce. So what exactly are we trying to do with Flow GPT? Do we want it to create the entire flow? Do we want it to just get us started? Or return a nice image of a pirate cat?

Ajaay Ravi:
That is a lovely way of putting it. I would like a pirate cat. So what we are trying to do is just, in very simple terms, make life easy for our customers. And that could mean different things for different people. For certain people, it might mean reducing the amount of time that they spend in trying to create these flows. For certain people, the barriers to entry or adoption are pretty high, where they feel like they’re new users and there are just a million options available to them. They don’t know where to start. And there are some users who are like, “I manage multiple things during my day. I wear so many different hats, so I need some kind of personal assistant who can help do some mundane tasks if I just ask them to go do it, without me doing it.”

So there are these different kinds of use cases. So with Flow, what we are trying to solve is whatever it is for you that makes your life easy. Let’s say you’re an admin and let’s say you are somebody who’s an expert user of Flow, how do we help save some time for you? So before I go there, there’s one thing that I would like to mention. With anything that is AI- and training-related, as they say, Rome wasn’t built in a day, so you have to crawl, walk, and then run. It takes time initially for that training to happen, and the testing is the slow crawling phase where the model starts learning. And then once the model slowly starts getting better, you can start adding more complexity to the point where you can start walking, and then finally you can start running.

For example, with my two-year-old, I can only talk to him about balls in the road and kicking the ball right now. I still can’t go and talk to him about algebra and trigonometry. He’s not going to understand that. So my natural progression should be, I talk about this, and the next thing is I’ve taught him how to count from one to 10, and he can go up to 11 today, but that’s all he can do. So he first has to figure out how to get to about a hundred, how to do some basic additions and subtractions, and then once you get these initial few foundational years right, the running phase is very simple, because at that point he would start grasping things much faster, and then algebra and trigonometry would become much easier.

Similarly, here, our crawl phase is we are going to start with simpler flows. Can we go ahead and reliably create flows that are four or five steps long, for example, that do not have multiple decision elements, multiple branches, just have simple formulas and things like that, or maybe even simple MuleSoft connectors, and do something very simple? For example, as an admin, if there are, let’s say, five different kinds of flows that you wanted to create, out of which two are simple, we can help save some time for you by creating those simple flows, if you just come in and say, “Hey, I want to send an email when a lead is converted to an opportunity and just send that email to this email address,” done. We can go ahead and create it for you. We don’t want you to go into Flow Builder, open it up, and spend all the time doing it when you have other things to do with your time.

And like I said, there are some people who would want to come in and who are like, “Hey, Flow has a million options.” The crawl here is basic flows: “I want to create some basic flows. I want to know how these flows are created, but what if I can just describe what I want and you can create a simple flow for me, and then I can go and look it up, and maybe I understand how it actually works?” So that is essentially what we’re trying to do here: make life easy and simple for our customers.

Mike:
Yeah. No, that makes sense. As I’ve heard you describe this, and the ball analogy with your children I think is perfect, there’s one idea I just want to bounce off you and see if I’m correct: can you tell me how important it will be for admins to correctly articulate what they want Flow GPT to do?

Ajaay Ravi:
That is an excellent question. So with anything GPT-related, your output, or whatever you get out of it at the end of the day, is only going to be as good as what your input was or what you told it that you wanted. And given that natural language is something that is very subjective, and different people can understand things in different ways, it is very, very important that we get that input right. We call that an input prompt, because “prompt” is the term used for anything that’s natural language-related. So there are two things here. One is, as we are training, we are trying to train the model to look for certain keywords in that input prompt and make certain decisions. For example, if it looks at “Create an email when a lead is converted into an opportunity,” the model recognizes the word lead there and says, “Okay, fine,” so it is going to be a record-triggered flow on the object called Lead, and it makes those assumptions as it looks at the prompt.

In this initial crawl phase, it is very important for us to make sure that the admins we are working with give us as much detail as possible in those prompts, because the more specific they are, the higher the chances of getting a more reliable output. And then, like I said, the second piece that is more important is feedback. So we are trying to collect feedback from customers. We don’t want to take up too much of their time. Something as simple as a thumbs up, thumbs down mechanism. If you think that we really got the flow correct and we got the output that you expected or wanted, just give us a thumbs up so we know, or if you think we were off target, give us a thumbs down. What we would do is take your feedback, then look at the prompt that you had and see where we missed. Why did we misunderstand your prompt? How can we get better?

The other thing that would be absolutely great is if you were also to go in and create the right flow that you expected for the input prompt that you wrote, and give us permission to access that. That would be absolutely wonderful, because not only do we know that we got it wrong, but we also have the right one that you were expecting or wanted, so we can train our models even better. But again, that’s a long shot. I’m not even going to go all the way there. As long as you can give me feedback to say it was thumbs up or thumbs down, then I can start training the model to better understand what your intent is and have it stop what is called hallucinating in the GPT world. Hallucination is when the model can start just going into some kind of loop, and then the output is not always technically correct and it doesn’t get you what it is that you wanted.
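To make the thumbs up, thumbs down idea concrete, here is a small hypothetical sketch of what capturing that feedback could look like. The field names and structure are invented for illustration; this is not how Einstein for Flow actually records feedback, just the general pattern of pairing a prompt, the generated output, and the user's verdict so misses can be reviewed and corrected examples can feed the next round of training.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def record_feedback(prompt: str, generated_flow: dict, thumbs_up: bool,
                    corrected_flow: Optional[dict] = None) -> dict:
    """Bundle the prompt, the generated flow, and the user's verdict so the
    team can review misses and use corrected flows as new training examples."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "generated_flow": generated_flow,
        "thumbs_up": thumbs_up,
        "corrected_flow": corrected_flow,  # optional: the flow the user actually wanted
    }

entry = record_feedback(
    prompt="Send an email when a lead is converted to an opportunity",
    generated_flow={"type": "record-triggered", "object": "Lead", "steps": 3},
    thumbs_up=False,
)
print(json.dumps(entry, indent=2))
```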

Mike:
Wow.

Ajaay Ravi:
And then at the same time, there are two things here. As far as we’re talking about Flow GPT or Einstein GPT for Flow, one is we should be able to reliably get you a flow that can be rendered on the Flow canvas. And then the second piece is how accurate was that? So right now in the first stage, what we are targeting is to be able to reliably get you a flow. So whatever input prompt that you type in, and of course this is subject to ethical and legal guidelines. So whatever input prompt you type in, we want to make sure that we can take it, translate it, create a flow, and then render the flow on the canvas.

That in itself is a very hard problem. And once we solve that, the next step is, did we get that correctly? Did we understand your intent? Then how do we tune ourselves to make sure that every time there is a prompt, we actually understand it correctly? And actually, we can even take this one step further. There is something called prompt engineering that is being discussed these days, which is, when a customer or an admin types their prompt, how do we help? Think about it as fill in the blanks. Can we provide you with a template of sorts where you just come in and say what kind of inputs and what kind of outputs to use? You just fill in certain blanks. Or can we add certain deterministic descriptions to those prompts, above and below what you write, in order to make sure that we pass on the right information to the model? The reason there is a lot of research and work going on around prompt engineering these days is that people have come to realize that the input prompt is so important to getting correct outputs.
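As a rough illustration of the fill-in-the-blanks idea, here is a hypothetical prompt template in Python. The wording and fields are invented and are not the actual Einstein for Flow prompt format; they simply show how deterministic text wrapped around an admin's description can give the model the same structure every time.

```python
# Hypothetical template -- not the real Einstein for Flow prompt format.
TEMPLATE = """You are generating a Salesforce flow definition.
Trigger object: {trigger_object}
Trigger condition: {condition}
Action to perform: {action}
Respond only with the flow's elements, in order, and nothing else."""

def build_prompt(trigger_object: str, condition: str, action: str) -> str:
    """Fill the blanks so every request reaches the model in the same structure."""
    return TEMPLATE.format(
        trigger_object=trigger_object, condition=condition, action=action
    )

print(build_prompt(
    trigger_object="Lead",
    condition="the lead is converted to an opportunity",
    action="send an email to sales-ops@example.com",
))
```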

Mike:
Yeah, no, and I’ll actually link to a podcast. We talked with Sarah Flamion at Salesforce about prompt engineering and the importance of it back in June, so you can see that podcast episode. I think as we wrap up, Ajaay, I’d love to know, I can’t get my hands on Flow GPT. Admins can’t. That’s a thing that’s coming. Absent that, from your perspective, what advice would you give admins to get ready for Flow GPT?

Ajaay Ravi:
That is a great question. Yeah. Whenever you see that Flow GPT is available, please do go ahead and use it as much as possible, because the more you use it and the more input prompts you give it, the better it will become over the days and the years. So eventually, there will come a point when you have a really complicated use case that could probably take days to set up, which Flow GPT can go and do for you in seconds. But it takes all of us coming together with those simpler use cases at the outset, at the start, and then providing that feedback. Even if it is as simple as just indicating thumbs up or thumbs down, please do do that. And yeah, we will make your life simpler and better as we go together.

Mike:
You make it sound so easy, so go out and don’t learn to kick a ball down a street. That’s what I heard.

Ajaay Ravi:
That’s precisely what it is, if you’re on the street.

Mike:
This has been fun. You are a fountain of knowledge. You probably have forgotten more about GPT than most of us know to date, so I appreciate you taking time to be on the podcast with us.

Ajaay Ravi:
Absolutely, Mike. It was lovely talking to you.

Mike:
So, that was fun. I learned a lot. I didn’t know AI could hallucinate. This is a new thing. Learning together, folks. Now, if you enjoyed the episode, can you do me a favor? Just share it with one person. If you’re listening on iTunes, all you need to do is tap the dots and choose Share Episode. Then you can post it to social, you can text it to a friend. If you’re looking for more great resources, your one stop for literally everything admin is admin.salesforce.com. We even include a transcript of the show. And be sure to join our conversation in the Admin Trailblazer group, which is on the Trailblazer Community. Don’t worry, there are a lot of links to the things I mentioned. All of that is in the show notes for this episode. So until next week, we’ll see you in the cloud.
