Today on the Salesforce Admins Podcast, we’re replaying our episode with Sarah Flamion, Research Architect on Salesforce’s Research & Insights Team. Join us as we chat about what recent advances in generative AI mean for admins and the Salesforce Platform.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Sarah Flamion.

What is generative AI?

Generative AI is a blanket term for algorithms that can generate new content: text, images, code, voice, video, and more. It does that based on what it has learned from existing data.

Sounds complicated, but one of the coolest things about generative AI is that its interface is natural language. Thanks to natural language processing (NLP), you can describe what you want it to make in plain English and it will spit something out at you.

The human in the loop

One thing that’s important to understand about generative AI is that it’s not an encyclopedia, it’s a completion system. Fundamentally, the way it works is to identify patterns and then predict the next thing in the sequence.
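To make the "completion" idea concrete, here's a minimal, purely illustrative Python sketch. The word probabilities and the temperature knob below are made up for this example (they aren't taken from any real model), but they show how the next word in a sequence gets predicted rather than looked up.

```python
import random

# Toy next-word probabilities a model might assign after "Today is a rainy ..."
# These numbers are invented for illustration, not pulled from a real model.
next_word_probs = {"day": 0.72, "morning": 0.15, "afternoon": 0.08, "cat": 0.05}

def complete(probs, temperature=1.0):
    """Sample the next word. Low temperature favors the most likely word;
    higher temperature lets less likely words through more often."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

print("Today is a rainy", complete(next_word_probs))       # usually "day"
print("Today is a rainy", complete(next_word_probs, 2.0))  # occasionally something else
```

Run it a few times and you'll occasionally get a different word back, which is also why the same prompt can produce different outputs from a generative model.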

A new field is emerging called prompt engineering, which is focused on how to talk to these models to get better results. You can adapt the model to specific knowledge by “grounding” it with data that isn’t public, for example, your brand voice or information about your industry. You can also give it feedback on its responses, which gives it a chance to learn and improve thanks to “the human in the loop.”
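As a rough picture of what "grounding" a prompt might look like, here's a hypothetical sketch in Python. The brand-voice text, the customer name, and the send_to_model() helper mentioned in the comments are placeholders invented for the example, not a real Salesforce or vendor API; the point is simply that non-public knowledge travels inside the prompt.

```python
# Hypothetical example of grounding a prompt with non-public context.
brand_voice = (
    "Our brand voice is warm and concise, avoids jargon, "
    "and always thanks the customer by name."
)
customer_name = "Alex"  # placeholder value

prompt = f"""Follow these brand guidelines when writing:
{brand_voice}

Task: Write a short follow-up email to {customer_name} about their open support case."""

# send_to_model(prompt) would call whichever large language model you use.
print(prompt)
```

The same idea scales up: instead of a hard-coded string, the grounding text could come from your own records, which is what lets the model work with information it was never trained on.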

The main takeaway from all of this is that generative AI “supercharges the things that the humans can do,” Sarah says. You can make an image and then have the model give you three variations on it, or get a quick first draft for the opening of a piece of content you need to write.

Jobs to be done and Salesforce

For Salesforce, there is a lot of potential to make users’ lives easier. You might be able to automatically log calls based on an AI-generated transcript of the conversation, or clean up old data that was perhaps sloppily entered.

Sarah and her team often look at their research in terms of Jobs to Be Done. Businesses generally have a list of jobs they’re trying to accomplish and then use tools, like Salesforce, to help them do those jobs. The thing is, the tools might change but most businesses will have the same Jobs to Be Done, even over decades. Generative AI stands to shake that up, both in terms of what businesses need to get done and who can do those jobs.

Be sure to listen to the full episode for our in-depth conversation with Sarah and why change management just might be the most important skill for admins in the future.

Full show transcript

Mike: Hey, Salesforce Admins. Boy, last week was busy with Dreamforce. And even if you didn’t get to go, I’m sure you saw a lot of the news around AI and generative AI and GPT, and boy, just a lot of things going on. So in the spirit of making sure that you’re staying up-to-date, I’m going to rebroadcast an episode that I did not that long ago, literally just a month or so ago, with one of the generative AI specialists that we have here at Salesforce. So listen in. This is a fantastic conversation that I have with Sarah.

So Sarah, welcome to the podcast.

Sarah Flamion: Thanks. I’m happy to be here.

Mike: Yeah. Before we get started, just give us a brief history of Sarah’s journey to Salesforce.

Sarah Flamion: Sure. So I have worked at a couple different large enterprise organizations. I worked at General Electric back when they had a Capital Services division. And then, I worked at a company called MedPlus, which is part of Quest Diagnostics, a big lab, working on their electronic medical record. So I like big software.

And I interviewed with Salesforce. I was interested in switching to a more established research team. And I interviewed on the day that ExactTarget got acquired by Salesforce.

Mike: Oh, wow.

Sarah Flamion: I tell people I’ve been with Salesforce exactly as long as ExactTarget has.

Mike: Exactly.

Sarah Flamion: Exactly. That’s right.

Mike: Perfect. Well, I appreciate you sharing that with us, but we really want to… I really, really, really, really, really want to dig into generative AI because I feel like I can’t shake a stick or go to any social post without people talking about it or my friends talking about it in tech. So I’ll just start off with the obvious question. What is generative AI?

Sarah Flamion: Well, it’s a great question. I think a lot of people are still on the learning curve about it. When we talk about generative AI, we are generally referring to algorithms that can create or generate, that’s where the name comes from, new content. And that can be text. It could be pictures. It could be code. It could be voice. It could be video, and it does all that based on what it’s learning from existing data.

And the newer cooler thing about generative AI is that it’s creating that content in a way that’s controlled by natural language. So people can just conversationally describe what output they want, and they don’t have to have special expertise to get that out.

Mike: Okay. That makes sense. So hearing that from you, I’m wondering why is everybody freaking out about this?

Sarah Flamion: Sure. So it might be worth explaining a few more things about generative AI to help it make sense. If you think about AI, we use this word called a model. It’s basically like a computer program or an algorithm that can be trained on large amounts of data, and it can learn patterns and relationships between those, and then use that to do things.

Generative AI has these large language models. They sometimes call them foundation models, but they’re different in that they’re huge. So all these models have little parameters that can be configured to tweak how they behave and function.

And the foundation models behind generative AI have billions of parameters. And they’ve been modified so they can ingest just massive quantities of data. So you’ll hear the term GPT a lot. That is a famous model. The G is for generative.

The P means pre-trained, and that is kind of the secret here, where it’s trained on this huge broad set of data, not unique to any particular task. And it can learn in a, they call it, self-supervised way, but it can learn without a human having to control all the learning. And then, the T is for transformer. That’s a capability that allows the model to understand relationships.

But as for why people are freaking out about it: we have an AI research team that published a blog post called If You Say It, You Can Do It, the Age of Conversational AI. And in it, they talk about how the world is primed for this type of advancement.

I think all of us have had the experience of information overload where there’s just more to consume than time. So there’s more emails and Slack messages and texts and podcasts, articles that look interesting than any human could really ingest.

And we’re all also working a lot. There’s increasing workloads and this pressure to do more with less. And in the article, they refer to trapped potential, but you’ve probably worked with people who have ideas, but maybe not the skills to execute them. So they have an idea for an app but maybe can’t make it happen, or something they want to do at work, but they lack some sort of skill that’s required. So we’re sort of ready for this new kind of fundamental way to interact with our tools. And then, you get this fundamentally new tool. So the timing was right.

Generative AI is different from previous kinds of AI in that it can generate new content. So it’s not something that previously existed. It can make net new things, and it can make them in unstructured ways, so long texts or images or videos. So I’m sure you’ve seen examples in the past of AI being able to identify photos.

So you can classify a photo as a dog or a landscape. And now, it can make a new dog or a new landscape based on what it’s seen. So if you ask it, it can generate this new thing. And it also can be used for a wide range of tasks.

So I commented on the broad data that it’s trained on, but AI in the past was pretty narrow. So it was designed to do a thing like language translation or image recognition. And shifting from that narrow to this really wide possibility is a pretty fundamental change. So the same tool can be used to do just this incredible array of things, and that’s pretty transformative.

You’ll also hear people talk about it being multimodal. So that’s just a fancy way of saying it can process and generate output that isn’t text, which makes it a lot easier to interact with as a human person. We’re much more multimodal in the way we talk. We use words, but also we make sounds, and we’re using visuals and gestures. And so, the more conversationally we can interact with the technology, the more natural it is.

And I think as part of that conversational thing, you’ll see that if you’ve played with any of the tools, you can ask it to do something for you. You can ask it to produce output. And then, you can interact with it, and you can say like, “Well, actually, that’s not what I wanted. I thought it would be, can you make it less formal or, I want this image to be more in the style of this other thing,” and it’ll adjust it. So it’s adapting based on what you’re telling it, which again makes it feel more conversational.

Mike: On our previous podcast at the end of May, I talked with Josh Burke, who is big into Midjourney and generating images with it. And I said, “Can you just do a demo for me?” And I was so fascinated that he was able to just put in… I think he put in a cat on a pirate ship.

Sarah Flamion: Right.

Mike: And it came back with that. And then, he’s like, “No, but change this and make it in the style of Van Gogh.” And I thought to myself, “Oh, well, this is going to be interesting, what it comes back with.” And it understood that.

Sarah Flamion: Yeah. And you’ll see… So OpenAI introduced ChatGPT in late 2022. And the public-facing version hit 100 million users in two months, which made it the fastest-growing app of all time. And I think the conversational nature of it is really what is behind that. It’s so easy to do.

My sister was telling me about her daughter, who is a kindergartner and has been writing stories and wants pictures for them, but they’re very specific pictures. And she’s been using Midjourney to create a purple cat in high tops skateboarding in the neighborhood.

Mike: Totally [inaudible 00:08:39].

Sarah Flamion: Yeah. So it’s such a low lift that you can really play around with it. Yeah. So it’s been a pretty fascinating thing to be a part of.

Mike: So without getting too technical, just thinking ahead here, what are some other concepts that I or a Salesforce Admin should understand about generative AI?

Sarah Flamion: Sure. I think, fundamentally, one thing that is worth understanding is that it’s not a big knowledge repository. It’s not just like a giant encyclopedia where you go and find the answer. It’s really more of a completion system. So the tool is predicting the next element of a sequence based on the information it’s been trained on.

So if you give it a sentence like “today is a rainy,” it can predict the next plausible text, like “day.” It doesn’t just always pick the highest-probability word; that makes it sound really flat and kind of weird. Those parameters that we talked about kind of change that. But it’s probably important to understand that what it’s really doing is going out and completing sequences rather than acting as a repository.

There’s also some language that you’ll hear. So if you’re looking around at generative AI, you’ll hear people talk about prompts. And those are just natural language descriptions of the task to be accomplished. So the example I was giving earlier: give me a picture of a cat who’s purple and is wearing high tops and is skateboarding.

You’ll probably hear the term prompt engineering a lot. That’s kind of a new field that is out there now, which is where you design and formulate these really effective instructions, which we’re calling prompts to guide the output or the behavior of the model that you’re using.

Mike: And when you say-

Sarah Flamion: Oh, go ahead.

Mike: Oh, sorry.

Sarah Flamion: No, go ahead.

Mike: I just wanted to make sure I understood that. When you say the prompts to guide the behavior, you mean the behavior of the AI, not the behavior of the person putting in or giving a text?

Sarah Flamion: That’s right. The behavior of the AI. So you are telling the generative AI model what you want from it, and there are techniques that are increasingly being learned about how to do that in really effective ways. And the instructions that you’re giving it are called prompts, and the technique of getting those prompts to be better and more effective is called prompt engineering.

Mike: Gotcha. Okay. Wanted to make sure, because-

Sarah Flamion: Sure.

Mike: … to old-school Mike of 21st-century technology, prompts are like, “How do you get the right input from the person on the keyboard?”

Sarah Flamion: Yeah. It’s also worth knowing that generative AI models are adaptable. They can constantly be tweaked. So there’s lots of parameters that can be adjusted. But you can also take specific knowledge. So knowledge from a domain, like finance or healthcare or an organization or a particular task, you can inject it right into the prompts. That’s called grounding them, or you can also adapt the model to specific knowledge.

So you can train it. You can take a model that’s been pre-trained. And then, you can do some additional training basically on an industry or a specific subject matter. And that lets you use the generative AI in ways that aren’t… on information that isn’t public. So you can say like, “I want you to generate an email that uses my brand voice,” or you want to use a huge amount of data to perform a task without entering it all into the prompt.

So that adaptability is really interesting and is super powerful. It’s also kind of continuously getting better. So these models can be fine-tuned, where you can teach it basically what a good prompt is and what a good output for that prompt is. So when you see this type of prompt, this is good output; that’s kind of called demonstration data.

You can also train it on big sets of data, or you can get human feedback on it. So you can have people enter a prompt and get back maybe four different outputs and say, “This is the best one. This is the second best one.” And it will teach the models how to improve, so that real human feedback can also be taken in as input by the model and then used to make it better.

So we’re in the early stages really, but these models are just continuously learning and getting better. And so, it’s going to be a really exciting space as we watch what’s possible as they learn.

Mike: As I try and learn. This might be a dumb question. Actually, it feels dumb, but it might not be. Is it possible that the generative AI is also building its own prompts as it’s learning because you gave that example of it comes back with four and you’re like, “Ooh, I like two the best?” Does that then create its own prompt or am I using that term-

Sarah Flamion: I’m actually not sure.

Mike: Okay.

Sarah Flamion: I think typically, when we talk about training, it’s more in terms of understanding the right sequences of things to put together and reaction to a particular instruction.

Mike: Yeah. Okay. So just trying to put two and two together. Man, that’s so cool. Well, we talked about pictures a lot. But what are some other things that generative AI is kind of really good at?

Sarah Flamion: So a lot of the capabilities are around jump-starting things. So you’ve probably experienced that if you’ve messed around with it on your own. You can create a first version of something, a quick draft. You can use it to create content at scale. So like, “I want a bunch of product descriptions for this product catalog.”

It can be used to transform content. That’s pretty interesting. You can take content and then transform it into something really different, like a real remix. So I want this image, but I want these three different variations of it, or, personally, I was reading a blog recently where it was asking for product recommendations. And people in the comments were posting products that they loved.

And then, somebody used generative AI to go through all the comments and take out all the recommendations and put them in a categorized spreadsheet. So you can really transform things in different ways. It does an amazing job of summarizing data. So in a business context, maybe you’re in a really long meeting. Then, you want to summarize it, or there’s a huge set of text and you want it to summarize and tell you, “What are three things I need to know from this big set of information?”

Mike: So in the realm of why does this matter to me, I’m hearing it’s really good at expanding or synthesizing information.

Sarah Flamion: It is, yes. And it can just do these things at a speed and a scale that humans really can’t, which just supercharges the things that the humans can do. If you’re hitting writer’s block, you can quickly get a starter for a paragraph to help you get going again or if you make an amazing image, you can quickly make it into three variations.

So there’s lots of work that is just really manual for us to do, where now, we have this super powerful tool that really anybody can harness to do these things. So before, you had to have this really specialized capability in order to write code or generate these variations or summarize data. Somebody had to go through it and pick all that out. And so, having these tools at your disposal to do that is just really powerful.

Mike: Yeah. No. I can’t help but think back to… So the early to mid-90s, my mom was in a call center. And she used to get these call scripts of, “Okay, so if a customer says this, then do this.” Right?

Sarah Flamion: Yeah.

Mike: And the good call center people, like my mom was, could memorize those, and she didn’t even have to flip through. She knew when somebody said something exactly what to go to next.

And I remember early versions of working in a call center where we would try to write prompts for people. I guess I’m thinking of that as you’re telling me this because we’re talking prompts, but probably in a different way of asking generative AI to do something, make a cat on a pirate ship, and then come back with stuff. I guess outside of that, what are the ways that this feels kind of new and different?

Sarah Flamion: So we’ve talked about it a lot. But I think the conversational nature of it feels very new. It can democratize a lot of what can be done because you just have to be able to speak regular language to invoke it. I think that introduces some weirdness also. And you see that play out in the media a lot, that as we’re building this, typically, when we’re having a conversation, we’re conversing with a sentient being on the other side.

Mike: Sure.

Sarah Flamion: And it’s hard to remember that, in this case, we are having a conversation, what feels like a conversation. But the thing that we’re conversing with is not a human. So there’s a lot of work that’s going into thinking about how we push back against that natural human tendency to think of the thing we’re conversing with as a person. It’s interesting too in that it has variable output.

So if you give the same prompt a couple times, you can get different outputs. And that feels really different than a lot of technology today where we sort of lean on consistency and are used to the idea that if you do A and B, you get C. So that’s kind of a different thing.

And then, I think we talked about the adaptability earlier, but you’ll hear this concept sometimes called human in the loop. But it’s the idea that feedback from humans can continuously improve the model, and that’s kind of a fun new aspect of this. So it can be explicit like maybe you thumbs up or thumbs down some output that you got to let it know if it was good, or it can be implicit.

So we can track how users interact with generated content. Are they just immediately accepting it? Are they making edits? Are they ignoring it? How are they interacting with it? And you can make some inferences and do some research around that too.

So I think those are all things about generative AI that feel different and that, like I said, it’s all pretty new. So there’s a lot of fun research to be done about how to help people interact with these things and what those patterns should be.

Mike: Yeah. I mean, the first point that you hit on for me, that you can give it the same prompt and get something different back, is really like, “Whoa.” That to me feels like, “Okay, there’s something going on. Who’s the wizard behind the curtain there?” So shifting a little bit towards Salesforce and some of the stuff that you do on the team, what do we already know about our customers and AI that could apply here?

Sarah Flamion: So our research team has invested a lot of time in understanding something we call jobs to be done. The idea behind the jobs to be done theory is based on this belief that people buy or hire products and services to get a specific job done. They’re trying to do a thing in order to achieve a particular outcome or set of outcomes.

And we do research in that way because it helps give us a framework for discovering and defining the jobs and needs and then understanding how our customers think about the success of those.

So as we’re thinking about generative AI, there’s a couple ways that those jobs to be done factor in. The first one is kind of obvious. We’re going to help people achieve the existing jobs to be done, but in better ways, so they can get to the success metrics that they’re looking for, just more effectively or more easily.

So sometimes you’ll hear people call that augmenting. You’re helping someone achieve their desired outcomes with more satisfaction and more efficiency, reducing manual workarounds, that kind of thing. When we talk to customers just generally about the value they expect to realize based on AI and generative AI, that comes up a lot.

Mike: Yeah. I can understand that. I mean, that also sounds a little broad, right?

Sarah Flamion: Right.

Mike: We’re going to help do things more effectively and efficiently. Do you have an example or a few examples you could give us?

Sarah Flamion: Oh yeah. There’s lots of them. So some of the things that kind of immediately jump to mind: most systems, like Salesforce, work best when they have really good data being entered into them. But if you’ve ever entered data into them, you know it can be time-consuming and manual. So using a tool like this to make that data entry easier would be a good example of augmentation.

Another example is we talked about summarization. So maybe you want to summarize a conversation that you had with a customer or a meeting, or you want to summarize a really good resolution to a support issue. So your mom is handling a case, and she has a great resolution, and you want to summarize that so you can share it broadly. We have a lot of developers in our ecosystem. Developers can use tools like these to help write code or generate test cases.

It can be great for inspiration. So you can get a first draft of an email that you want to send, or an image that you want to use in a marketing campaign, or maybe a presentation or a proposal. You can get past that writer’s block or you can brainstorm a bunch of ideas.

So the other day, I had an idea for something I wanted to do inside the company, and I needed a name for it. So I was brainstorming using ChatGPT, what are some good names for this? And then, you can take that, and you can polish it or kind of adjust it.

And the nice thing is, because it is a tool, you can just scrap ideas that are bad. It gave me a whole bunch of ideas that weren’t great. And it took me two seconds. So I can just say, “No, not like that. Here’s what I was thinking, something like this.” And then, I got a bunch of better ideas. You don’t have to feel badly about taking or leaving the content.

Also, we talked about generating content at scale. It can also help people look at huge amounts of data and ask questions of it, interrogate the data with more natural language or spot patterns. So without writing queries or code, you’re just speaking what you’re looking for and being able to find it.

And then, it can enhance conversations. So you gave the example earlier of having the service call or support call. You can harness these huge bodies of information, information about the brand or the customer, and you can inject that into the context of the conversation itself to make suggestions.

Mike: Well, I don’t know about that whole developer and test cases stuff, because everything I know of developers, they love writing test cases.

Sarah Flamion: Yes.

Mike: Tried to say that with a normal face just to see.

Sarah Flamion: So it just really can amplify what you’re able to do.

Mike: Yeah.

Sarah Flamion: And then, I think if we go back to the jobs to be done, everybody has things that they do that are more fulfilling to them than others. So you’re looking at your workday, and you’re recognizing that pieces of it are more fun, or more interesting or more engaging to you.

And so, personally, I’m really optimistic that generative AI is going to help people free up time to spend on those more fulfilling jobs to be done. So if the data entry piece is a necessary task that you have to do but isn’t your favorite part, we can help expedite that so that you have more time to think about interesting ways to connect with your customers or build proposals for new projects, just more interesting and rewarding ways.

So that to me is what is most exciting about all of this. I am obviously a big proponent of Salesforce. So we have seen the incredible things that our trailblazers can do with the tools they have today. And if we can make it easier for them to complete the kind of rote, mundane, time-consuming tasks and surface information to them that might have been hard for them to get before, it just feels like the creativity that that could unlock is going to be amazing.

Mike: I mean, you’re saying the exact same words I’ve been saying for years about all of what we call declarative tools that we build on the platform. What do you want your developer spending time doing? Do you want them writing mundane validation rules or just using our validation rules? Do you want them writing these insane business processes, or would you rather they go write some really cool stuff, and, not to put this on the admins, have the admins building the flows that are the meat and potatoes, getting everything done?

Sarah Flamion: Exactly. I also think it’s interesting to think about, we know the jobs to be done that people are trying to do today. And typically, if you read about jobs to be done, they’re typically pretty evergreen. So they don’t change a lot. The ways that you accomplish them might. But the job you’re trying to do might be the same for 50 years.

But with generative AI, it feels like there’s a possibility to start to see really new jobs to be done. So things that just weren’t even in the realm of possibility before are now something that you might think about being part of your role. So just as a personal example, I have three kids, and they really enjoy laughing at how old I am and talking about all the things that we have every day now that weren’t possible when I was growing up. So the fact that we didn’t have cell phones or navigation systems just kind of blows their mind.

Mike: They’ll never know. They’ll never know the excitement of seeing the blinking red light on the answering machine.

Sarah Flamion: Exactly, or how cool I thought I was when I could print out MapQuest directions and fold them up and put them in my dashboard.

Mike: That was the thing.

Sarah Flamion: That was the thing, right? But if I think about when I used to write school reports, I would look stuff up in an encyclopedia or go to the library-

Mike: For hours.

Sarah Flamion: … and I would have a limited amount of sources.

Mike: Hours.

Sarah Flamion: It would take forever. And now, if I look at what they’re doing, my daughter did a report last year about paleo artistry, which she learned about by asking questions of the internet about how we know what dinosaurs look like. And she found all these interactive, really interesting resources, but she also was able to find a paleo artist through social media that she could interview.

And just the amount of expansion and what is possible for them versus what was possible when I was growing up really is pretty breathtaking. And it bends my mind a little bit to think about what is going to be possible for my grandkids because of what is emerging now.

Mike: Is paleo artistry the art of drawing dinosaurs as to what we thought they looked like, or-

Sarah Flamion: It is. Yes. They’re artists and they kind of combine that artistry with science to take the scientific information that we have and then render what that would look like.

Mike: Okay.

Sarah Flamion: And there’s a lot that goes into all the plants in the background have to be-

Mike: Right. Certain kind-

Sarah Flamion: … contextually appropriate. It was a pretty fascinating little report.

Mike: I’ve never heard that word. This is so cool. Okay. Come on, do a podcast about generative AI, learn about paleo artistry.

Sarah Flamion: Paleo artistry. Yeah. But I think about just the stuff that they’re going to be able to understand and learn about and know because you can just so easily access this information kind of… It’s pretty amazing.

Mike: I mean, I-

Sarah Flamion: I also think it’s going to be interesting.

Mike: You and I wouldn’t have run across it unless the word was in the encyclopedia.

Sarah Flamion: That’s exactly right. And then, you only get one perspective, whoever wrote that article.

Mike: Sure. Hopefully, they liked paleo artists.

Sarah Flamion: Yeah. We’re at a pivotal point, I think. You talked about it earlier, but the people who do the jobs today have to have particular skills to do the jobs. And I think with generative AI, we’re going to see some of the job performers changing, so new roles. People who had ideas but couldn’t tackle that job before might be able to now, or new groups within an organization working together in different ways. I think that kind of organizational shaping is going to be a really interesting aspect of research as well.

Mike: I mean, you look at how organizations are structured now versus 50 years ago, when things were very different based on just the jobs that we were doing. Yeah. It’s also crazy to think that most likely the kids born now will have a job that doesn’t exist maybe for another 20 years.

Sarah Flamion: Right.

Mike: Who knows? So as you’re going through this and you’re reading about the research, what kind of concerns do customers have around some of these capabilities?

Sarah Flamion: So I think anyone who’s thinking responsibly about these technologies is also thinking about the risks and the concerns and how we best handle those.

Salesforce has been in the AI space a long time. So we pioneered AI for enterprise in 2013 with Einstein. And I think our latest count, we have over 60 AI features. So we’ve had lots of opportunities as researchers to talk to customers about AI. Our customers understand the importance of accuracy and quality.

And so, I think there’s some worry that generative technologies are going to churn out content that isn’t great, so low-quality code or wrong answers to questions or mediocre marketing content. And I think they’ve seen stories on the news about generative AI making toxic or really strange, kind of weird content. I think there’s concerns about security and privacy of data and the data of their customers. And customers talk about how they want to have agency in what’s going on. So they don’t want to just totally lose control of their craft and their domain or see a decline in their skill set.

At Salesforce, we talk a lot about how trust is such an important value for us. And I think we’re going to have to really lean into that as we bring this generative AI tech to market. I know we’re not deep diving today into Salesforce, but it’s worth the admins who are listening knowing that there are lots of smart people really invested in how we build this in the most responsible and thoughtful way possible.

And that includes not just how the technology is being built. So we are building some of our own models, but also how it’s being leveraged by our systems, how those interfaces work, how we’re training it, how we’re going to use data cloud technology to ensure that we’re basing these capabilities on really good clean data. And then, especially from a researcher perspective, how we make sure that the people, the humans who are using it are informed and engaged in the right way.

Mike: Yeah. All very valid. So as we kind of wrap up, a couple questions. I think I see admins being on the frontline of helping organizations understand AI and probably GPT inside of Salesforce. What suggestions do you have for them?

Sarah Flamion: I think our knowledge and understanding of change management is going to be more important than ever. So just because we can augment all these jobs and have all these amazing new capabilities, people won’t just adopt it, because we have, as humans, a preference for doing things the way we do them now, what we call status quo bias.

And, sometimes, we also talk about this psychological inertia, which is the idea that it’s difficult to get people truly invested in change once they’ve kind of solidified the way that they’re doing it now, their ideas and their habits.

So if you’re trying to combat those tendencies and drive adoption, I think, first, admins need to think about the culture of their organization. Every organization has kind of different feelings about things, and you need to use that to frame how you’re talking about rolling out these capabilities.

So an example of that might be if you have a team with a lot of high trust in data, they might respond really well to a sort of data-oriented framing: metrics about the productivity improvements for particular roles, for example. If you have a team who’s maybe less data-oriented but much more relationship-driven, maybe you’re highlighting how AI might free up some of their time by taking care of some day-to-day tasks so they have more time for relationship building. So I think some of it is around framing.

I think the second thing is really focusing on value. So the more obvious the value and the clearer it is, the more you can push back against that status quo bias. So highlighting the solutions that are available, benchmarking before and after changes, so you get that social proof of what is possible.

I think prioritizing really role-relevant information to help people super clearly understand what’s in it for me. So rather than kind of blanket statements about success, really identifying this is what’s in it for you as a service agent, or this is what’s in it for you as a marketing specialist.

I think being clear about how we bridge from output of the models to action and then making sure there’s lots of feedback mechanisms. I probably lean toward that because I’m a researcher, but you want your teams to be able to share input and talk about what they’re loving and what they’re nervous about and ideas that they see for how to use it so that you can kind of meet them where they are.

And then, the last recommendation we’d have is to share success stories and be really honest about the time commitment required to adopt these changes. There’s a little bit of a learning curve with any new thing. And I think the more honest you can be about the payoff and also what it takes to get there, the better. And then, we generally suggest rather than focusing on one major long-term goal, look for kind of smaller targets where you can quickly see success, and iterate as you go.

Mike: Your second point was huge because it was exactly the bailiwick that I feel I’ve done a lot as an admin, which was focusing on here are the tasks that this takes off your plate so you can do this part of your job better because you’re already great at it. Maybe that was the salesperson in me, but that was completely how I sold some workflow rules and validation rules to salespeople.

Sarah Flamion: Exactly.

Mike: You don’t have to worry about doing this. It’s going to take care of it for you, so you can spend more time on the phone because that’s what you’re good at.

Sarah Flamion: And our admins have such a pulse on where the folks in their organization are struggling and where they really shine. So they’re the perfect people to help craft that narrative. They can really point to specific examples and say it in ways that their teams can hear.

Mike: Yeah. No. 100%. Sarah, you taught me, and I think everybody listening, more about generative AI than I knew heading into it. So this has been a very productive time. One last question I love to ask, and it’s totally fun. It has nothing to do with everything we talked about.

Sarah Flamion: All right.

Mike: This is why I put it at the end, but I think it’s always fun to find out the hobbies that people have when they work at Salesforce, because a lot of times, working in technology, there’s nothing physical or tangible. I often the met the builder of the house that I live in. He said, “It was a very proud moment when I finished your house because I got to drive by and show my son that I had finished this house.”

And I often think admins can’t drive by and show their son, “Look, son, here’s the flow I built today.” And as a product manager, it can be the same way. So I was wondering if you would be so kind as to share with us any fun hobby that you may have.

Sarah Flamion: Yeah. I spend a lot of time cooking. I really enjoy it, probably for the same reason you just pointed out. I like the tactile output of it. It brings me joy. There’s a creative element to it. You can start with a recipe and add your own ideas and come out with something really new.

And then, I’m a mom. So there’s also-

Mike: There’s a-

Sarah Flamion:… a productive aspect to it. Yeah. It feeds my family. So I really enjoy that. I’ve also got some kiddos who are pretty artistic. And so, we’ve been spending a lot of time doing art. So I’ve been learning a lot of new techniques and tools from them, but that’s been a pretty fun side gig as well.

Mike: Okay. I love it. I also enjoy cooking. I can follow a recipe, but don’t send me to the store with those shows on the Food Network where they sent… Here’s $50 and you got to make a dinner. I just come back with a pre-made dinner. I need a recipe. I’m a good [inaudible 00:37:43].

Sarah Flamion: Really, I’m not a great baker. I think the precision is not what I’m looking for. I like a little of this and a little of that and maybe [inaudible 00:37:49]

Mike: Exactly. I have learned that there’s a difference between cooking and baking. I am not a chef, but I like to cook because I can add a little extra, and it doesn’t mess everything up. But baking, everything, it’s a leveled cup. I forgot to level it. Well, it’s ruined. Awesome. I’ll still eat the brownies anyway. Not a fan. Sarah, thanks so much for coming on the pod today and giving us a lot of generative AI knowledge.

Sarah Flamion: It was great. It was my pleasure. Thank you so much for inviting me. And I certainly encourage anyone who’s listening to share their feedback with us. It’s a space where there’s lots of research to be done and still to do. So if you are at Dreamforce or Connections or you’re part of our research program, we really do want to hear from you because there’s a lot that we’re all collectively figuring out, and I think there’s a lot of possibilities. So I’m looking forward to the creative input from your audience.

Mike: We’ll see what the next year is like.

Okay. So I don’t know about you, but I know a ton more about GPT and generative AI than when this podcast started. So I also feel like I could finally get the Jeopardy question right as to what GPT stands for. So if you enjoyed this episode, can you do me a favor and just share it with one person? If you’re listening on iTunes, just tap the dots and choose share episode. Then, you can post it on social. You can text it to a friend. I have a feeling you’re really going to want to do that with this episode. Just saying.

Now, if you’re looking for more great resources, your one stop for everything admin is admin.salesforce.com, including a transcript of this show. And be sure to join our conversation in the Admin Trailblazer Group in the Trailblazer Community. Don’t worry, link is in the show notes. And, of course, until next week, we’ll see you in the cloud.

Love our podcasts?

Subscribe today on iTunes, Google Play, SoundCloud and Spotify!
