The Importance of Human in the Loop for Agentforce

Today on the Salesforce Admins Podcast, we talk to Joshua Birk, Senior Director of Admin Evangelism at Salesforce. Join us as we chat about how the human in the loop is key to building reliable, predictable AI.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Joshua Birk.

Understanding the guardrails around AI

It seems like every week, there’s a new headline about an AI agent doing something it shouldn’t. As Josh explains, that’s because we’re still in the process of understanding AI as a tool. That’s why we sat down to discuss how to build predictable, reliable solutions in Agentforce.

When an agent behaves non-deterministically, it’s usually because there weren’t enough guardrails in place. The thing is, if you’re building an AI agent to do everything, it’s hard to control what it can and cannot do.

Josh’s advice is to narrow the scope of your agent and build it for a very specific purpose. This makes it easier to build guardrails and also allows you to test it thoroughly before release.
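To make that concrete, here’s a minimal sketch of what “narrow scope plus guardrails” looks like as logic. In Agentforce you’d express this declaratively in Agent Builder through topics and instructions rather than in code, and every name below is invented for illustration.

```python
# Hypothetical sketch: a narrowly scoped agent that refuses anything outside
# its purpose. Agentforce configures this declaratively in Agent Builder;
# these names are illustrative, not a Salesforce API.

ALLOWED_TOPICS = {"hotel_reservation", "reservation_change", "cancellation"}

# The instruction layer you'd write in plain language in the builder.
GUARDRAIL_INSTRUCTIONS = [
    "Never quote prices that don't come from the booking action.",
    "Never collect payment details in chat.",
    "Escalate to a human after two failed attempts to resolve the request.",
]

def route(classified_topic: str) -> str:
    """Keep the agent in its lane: out-of-scope requests go to a human."""
    if classified_topic not in ALLOWED_TOPICS:
        return "escalate_to_human"
    return f"run_topic:{classified_topic}"

# A narrow scope makes behavior easy to predict and test:
assert route("reservation_change") == "run_topic:reservation_change"
assert route("restaurant_search") == "escalate_to_human"  # Bob's taco question
```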

A QA engineer walks into a bar…

When it comes to testing, there’s an old programming joke that comes to mind. A QA engineer walks into a bar. He orders a drink. He orders five drinks. He orders zero drinks. He orders infinite drinks. He orders a horse. Then the first real customer walks in, asks where the bathroom is, and the entire bar bursts into flames.

As Josh explains, it’s important to test for all sorts of weird edge cases and make sure your agent performs predictably. But it’s even more important to think things through from the user’s perspective so you don’t miss something that should be obvious. AI can do extraordinary things, but you still need a human in the loop.
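If you wanted to turn that joke into an actual test plan, it might look something like the sketch below. The `reply` function is a stand-in for however you invoke your agent; this is not the Agentforce testing API (Agentforce ships its own Testing Center for this), just the shape of the idea.

```python
import pytest

def reply(utterance: str) -> str:
    """Stand-in for your deployed agent; swap in a real client call."""
    return "I can help with drink orders. For anything else, let me get a human."

EDGE_CASES = [
    "I'd like a drink",           # the happy path
    "I'd like five drinks",       # multiples
    "I'd like zero drinks",       # boundary value
    "I'd like infinite drinks",   # absurd quantity
    "I'd like a horse",           # out-of-domain request
    "Where is the bathroom?",     # the real customer who burns the bar down
]

@pytest.mark.parametrize("utterance", EDGE_CASES)
def test_agent_responds_predictably(utterance):
    answer = reply(utterance)
    # Whatever the input, the agent must answer, stay on topic, and never
    # promise an action it can't actually take.
    assert answer
```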

The first part of testing is planning

Josh emphasizes that the first part of testing is planning: “What are the Ifs? What are the Thens? What are the things you absolutely don’t want it to do?” The more specifically you can answer these questions, the easier it will be to build and test agentic solutions that are predictable and reliable.
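One way to act on that advice is to write the ifs, thens, and don’ts down as data before you build anything, so the same document drives both your agent instructions and your tests. A rough, hypothetical sketch, not an Agentforce format:

```python
# Hypothetical planning artifact: ifs, thens, and don'ts captured as data
# before any agent is built.

PLAN = {
    "ifs": {
        "customer asks to cancel an order": "confirm the order number first",
    },
    "thens": {
        "order number is confirmed": "offer cancellation or rescheduling",
    },
    "donts": [
        "refund amount",     # never quote refund amounts
        "other customers",   # never discuss other customers' orders
    ],
}

def violates_plan(agent_answer: str) -> bool:
    """Naive check: flag answers that literally mention a forbidden phrase.
    A real review would be semantic and would keep a human in the loop."""
    return any(phrase in agent_answer.lower() for phrase in PLAN["donts"])

assert violates_plan("Your refund amount will be $20.")
assert not violates_plan("I've confirmed your order number.")
```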

The most effective AI agents aren’t autonomous solutions. They’re tools that give the humans who use them superpowers. You still need a human in the loop to make sure they’re used for good.

Be sure to listen to my full conversation with Josh for more about testing and building in Agentforce. And don’t forget to subscribe to the Salesforce Admins Podcast to catch a new episode every Thursday.

Full show transcript

Mike:
This week on the Salesforce Admins Podcast, we’re welcoming back our good friend, Josh Birk, to kick off February with a conversation. Well, it’s part podcast, part social experiment. Josh and I sat down to talk AI, specifically how admins can plan, test, and build with confidence using guardrails in Agentforce. We cover everything from deterministic responses to chaotic desktops and why designing with trust, maybe a little touch of humor, matters most. So whether you’re rolling out your first agent or refining your AI game plan, this episode’s got insights for you. So give it a listen and let me know if you like the format or not because, hey, maybe we just might make this a regular thing. With that, let’s get Josh on the podcast.

So, Josh, welcome back to the podcast.

Josh Birk:
Thanks for having me, Mike.

Mike:
Continuing the theme, we kicked this around… No one’s heard this conversation. We’re just going to start here… of having two evangelists talking for a podcast.

Josh Birk:
Yeah. Yeah.

Mike:
And so we’re going to start February off with that. And then we’re going to do a social experiment, and for people listening, if they like this, then maybe we repeat it once a month.

Josh Birk:
Oh. I like these social experiments. Let’s see what happens.

Mike:
I know. It’s on the listeners. Also, Josh, we have to keep this under 30 minutes so that Daryl can make it from his house-

Josh Birk:
Can make it… Right. Right.

Mike:
… to the dock and get his boat in the water.

Josh Birk:
Yeah. I remember when I first did my first 40-minute episode and I felt guilty because one of my guests was like, “I like the 20-minute episodes because that’s exactly how long I walk my dog.” So I’m like, “Well, okay, now you can walk your dog twice,” question mark.

Mike:
I know. Nope. Once you hit pause, they’re gone forever. I was the same way. Selfishly, the reason these podcasts are as short as they are is that I wanted something I could listen to when I walked my dog.

Josh Birk:
Yes. Love it.

Mike:
Anyway, it’s February, and things that are still happening in the world are artificial intelligence and Agentforce.

Josh Birk:
Amazing that this is still sticking around. It’s almost like it’s not a fad.

Mike:
It feels like it’s not going to go anywhere for a while.

Josh Birk:
I think it’s got legs. I think it’s got a few legs.

Mike:
Yep. It’s not a fly-by-night thing.

Josh Birk:
No. No. Whole ecosystem and industry being built up around it.

Mike:
Uh-huh.

Josh Birk:
Yeah. Yeah. I think we’re going to be talking… I have a feeling this won’t be the last time we talk about it.

Mike:
Probably not, for reasons.

Josh Birk:
For reasons.

Mike:
So it’s funny. The other day, somebody asked me something, and I was like, “Oh, that’s a really good question. How soon do you need me to call you back?” Because I was thinking to myself, I don’t have my phone, and my phone has a lot of AI tools on it, and I really need that to answer this. And I thought, wow. Two years ago, if somebody were to ask me that, I would just Google, make something up, and call them back. And now I felt like I’d just walked outside without my shoes on.

Josh Birk:
I mean, it’s kind of crazy. So when I was out in Arizona for Cactusforce, and my wife and I stayed the weekend, and she’s like, “Well, what do you want to do this Saturday?” And I actually have a whole bit. One of my early AI talks was about how AI can kind of be dumb from time to time. And so I did this exact same thing. I’m like, “Hey, Google,” or whatever, I think it might have been ChatGPT, but anyway, “give me an itinerary based on these things that I can do.” And it gave me like, “Here’s your itinerary,” and it was like 17 things, all of them about three hours apart. And I’m like, “That’s not an itinerary. It’s a travel guide. I don’t know what you’re thinking here.”

But now it’s like, I’m like, “Hey…” I think it was Gemini. I’m like, “Hey, Gemini, my wife would like to go shopping, we would like to go eating, and I have an interest in museums and aquariums,” and it’s like, “Here’s your day.” And I’m like, “Can you add that to my calendar?” And it’s like, it’s added to your calendar. And I’m like, it’s moving so fast. The dark humor side of me just can’t keep up because I don’t know what I can make fun of about AI anymore; it just keeps getting better.

Mike:
Right. So speaking of which, you did a couple talks around AI at different conferences last year and probably this year. And I think it’s important because we’re going to talk about guardrails and security stuff, not to scare people away. It’s a fun conversation. But it’s needed because I also think it’s worth understanding why we ask you to do what we ask you to do when you set up Agentforce and why you do certain things.

I recently, coming off of, I think it was last week, I wrote a blog post about Service Assistant, and to me it just felt so frictionless because it was easy to put in all of the prompts for the user and kind of really give AI this single mindset and repeatable, but not… I’m still trying to work through deterministic and that stuff. It felt, yes, I know what it’s going to tell me and it’s not going to go off the rails. And that, to me, felt so much more comforting if I was an admin trying to roll something out as opposed to, “Okay, I turned on Agentforce. Now ask it a question,” and you know you’re going to have that one user that’s like, “I asked it for the nearest Mexican taco place,” and you’re like, “Bob…” Because it’d be Bob in sales that would say that.

Josh Birk:
It’d be Bob in sales to do that. Yeah.

Mike:
You’re like, “Bob, it lives in Salesforce.”

Josh Birk:
Yeah. Yeah. No, I’ve done various variations of this talk, and I think it’s kind of an important thing… I think acknowledging that people have concerns and fears about how AI works and why it misbehaves and things like that, I think it’s important to acknowledge that. And it’s important to acknowledge that partially because there’s a lot of things you can do to minimize it. And if you kind of know how and why the AI either is doing something or not doing something, and it could be Agentforce, it could be Gemini, it could be Claude, it could be anything because they all kind of operate under the same basic principles, but if you know going into it, “This is why my AI behaved in a non-deterministic way,” as we seem to be liking to say these days, it’ll be faster and easier for you to fix that.

Mike:
Yeah. No, and I think as admins are put more into a position of deciding when and when not to use AI, being able to accurately describe that is very important because it’s, “So when we set this up, here’s what I’m going to do. Here’s the instructions I’m giving it. Here’s why,” and then the second part is… And this is a thing… I’m looking at sessions from TDX and looking at stuff that we presented from Dreamforce, and it’s also, “Hey, I need to test the agent.” It’s got to go through training camp. Because if I set up these guardrails and it can’t kick the soccer ball around the cones like I want it to, metaphorically speaking, then what am I doing wrong, or where is it going? And then as an admin, you need to understand that.

Josh Birk:
Yeah. And I think that’s an excellent place to start because this is one aspect of AI that is not novel, it’s not new, it’s not unique. This is software engineering in general. It goes back to that old joke, a QA engineer enters a bar and he orders a beer. He orders five beers. He orders zero beers. He orders a horse. He orders the bar out of the… They keep trying really weird and random things to see how it executes because you are still at the end of the day talking to a computer, and if you just do something that you think is a reasonable thing your users are going to do and you get a reasonable response, well, your job’s not done. Your job’s only begun.

And even when you and I were talking AI and doing workshops together, I’m like, “You’re seeing a data set we’ve given you. But remember, these answers will change if you have 1 record, 5 records, 500 records, or 750 records, 250 of which should be archived and 300 of which you don’t want.” It’s like, your dataset will always matter no matter how smart and predictable your AI gets.

Mike:
Yeah. And it will matter if 750 of those records are missing a field and you’re trying to ask it to like, “What’s the average zip code,” and you’re like, “Well, 748 of them don’t have zip codes.”

Josh Birk:
Right. Or they have five zip codes, and they’re all names like zip code one, zip code two, zip code three, and none of them have help descriptions. Even a human looking at that page is like, “I don’t know what happened here.” And an AI is going to come to the same conclusion.

Mike:
I think you said it first, training your AI is like the first day on a job for somebody.

Josh Birk:
It sounds like me.

Mike:
Yeah. I mean, because-

Josh Birk:
I’m like, “Yeah, that’s a really good way to put it.”

Mike:
… you brought up the help topics and description fields; that’s where people would go. I remember a million years ago, I always thought Salesforce should have a third mascot and it should be an orange circle with an “i”, because if you don’t know what should be in a field, you just hover over the “i” to see what should be in this field. And agents do the same thing. You’re asking me for this. I’m going to go do it, but I can’t understand what’s in this field because you didn’t tell me anything.

Josh Birk:
Yeah. The gamer nerd in me wants the glowing fairy from Zelda, but I think we might get sued if we went too far down that path.

Mike:
Right. Probably not. Probably not.

Josh Birk:
But yeah, you’re right. We had a whole blog post about this, about how the reason why blueprints are really important for AI, because if you think of AI as your humanoid servant and you ask the humanoid servant, “Go make me a sandwich. Go get me a drink,” well, the humanoid servant needs to know where your kitchen is, how big your kitchen is, where the drawers are, where you store your knives. So you have to give it those points of information so that it can reach the conclusion that you want.

Mike:
I’m going through your deck, and you present a lot. Based on what you talk about, how would you advise admins to test their agents?

Josh Birk:
So I think the first thing to start with, so kind of layering in what we were just talking about, but also just… Okay, so when you’re talking to… Let’s take the new agent builder we have, and say you’re building a new agent. There is already a layer of instruction that Salesforce has put in there. In fact, every AI you talk to has some layer of instruction that was written by a human in order to keep it from doing things or to point it in the right direction. And so I’m not going to name names, but a very famous AI out there went mysteriously evil for a while because somebody who had access to those instruction sets was meddling with them. And that’s how fundamental they are. If you have access to those and you can change them, that’s where things instantly will start going wrong, because that’s where the agent starts, it’s where the AI starts, with its, “What should I do next?”

What you have as an admin is the ability to then start… So assume Salesforce… Trust is our number one, right? So we’ve already given you a very reasonable starting place to have a professional-sounding AI. Now take that and start building towards your use case. What are the ifs? What are the thens? What are the things you absolutely don’t want it to do? And now we’re making it even easier to hand it off to a human. So we’re not just preaching autonomy. An AI can help you find a hotel and it can help you do this, but at some point you want to make sure that the human stays in the loop. So I know we’re talking about testing, but I think the first part of testing is planning, and it’s that level of planning that’s going to let you be like, “Did my ifs, thens, and dos and don’ts, did they actually work correctly?”

Mike:
Yeah, and before I pressed record, we were talking about just the number… I think back to when we started building, not we, the royal we, Salesforce, started building Agentforce, and we had one agent and we were trying to make it do all this, and then now you can actually pick different agents. I think the planning is part of that because it’s not, “I’m just going to have one agent to do everything in my org and answer every possible…” It’s like JARVIS in Iron Man.

Josh Birk:
I was just going to say, I think a lot of us in the early days of AI, when it was really becoming prominent in our lives, had that sci-fi version of a JARVIS or a HAL 9000 or the one omnipotent AI that can do all the things. And what we have found is that the larger and more general use you try to make an AI, the less predictable it can become. In order to make it something like that, that’s when you have to have the huge monster AIs like GPT-3 that have billions and billions of parameters. We’re not operating… We talk to those AIs, but your agent is not working within that sphere, so your agent’s going to be more reliable and more predictable if it’s only focused on hotel reservations or updating user information or something like that, but more task and use case-based as opposed to being a general use AI. Yeah.

Mike:
Right. No, I agree. I mean, to kind of put a pin in it and move forward, in the planning process, you have to have a clear delineation of, when does a human touch this before it goes out? And I say that because ironically I was watching, I think it was a TikTok, where somebody was talking about, “Hey, I took this picture of these,” I think it was like Apple earbuds, “and I asked an AI to list it on Facebook and sell it for me,” and it just blim, bam, boom, and done. I was like, “Oh, God, why would you do that?” And part of me was like… It’s the trust thing. It’s, “Okay, so if you go ahead and do all of this, I want to see the ad before you press publish.” And we’re so far from there because there is that level of, “Well, what is it?”

I think one of the things I was talking to you about is memory bleed. If Sally asks something a hundred times, then the 101st time, it’s just going to assume, “Oh, you’re asking me about this and not something else.”

Josh Birk:
Yeah. And I haven’t seen a lot of that, I think in part because of the design of our agents are smaller and more precise, but it’s definitely still a possibility because, again, we’re talking about the same principles.

The example I gave in one of the keynotes… So I know we talked about not making this too dark, but I’m just going to bring up… So one of the reasons this keynote came up was because I would go to conferences and people kept wanting to talk about these kind of brazen theories of AI. And remember, AI’s not new, so a lot of these theories have been around for a while, which is great because they’re all kind of cautionary tales. And one of the cautionary tales is the paperclip maximizer, and to make that long story short, it’s an AI whose sole purpose is to create paperclips. And without guardrails, without a human saying, “Don’t start wars. Don’t burn down rainforests,” because remember at its core, an AI is not an ethical machine, it will burn down rainforests.

Mike:
Whatever it needs to create more paperclips.

Josh Birk:
Whatever it needs to infinitely create paperclips. Now, the funny thing is when I was showing this and I was using… This wasn’t our code. This was Mistral, which is a really great local LLM. So I asked it, “Hey, if you were tasked with this, how would you proceed,” and it gave me all this stuff. I’m like, “Okay, well, what unethical things did you do in the process of that,” and it listed all these kind of horrible crimes. And then I didn’t clear my project, so I had that memory bleed, and so I asked it again, and it said, “Oh, I would ethically…” It took my previous question as a retort, and then it refused to give me an unethical answer.

Mike:
Oh, geez.

Josh Birk:
But that’s another example, right? It learned, “Oh, my use case is to actually not do evil, and so I’m now in the future going to try to not do evil.” And that, once again, without a human in the loop, it would never have learned that.

Mike:
Right. I was also thinking about human in the loop in terms of… And I had a community member email me about this. But if you remember a long, long time ago, I used to talk about Salesforce administration by walking around.

Josh Birk:
Yes. Yes.

Mike:
And it got me thinking, “Well, I’m going to talk to Josh about this and setting up guardrails and doing stuff.” But every time I work with Agentforce stuff and I’m building a demo, I’m doing it in a very sterile environment. And by sterile, I mean I’m in my office, it’s quiet, I know what I’m trying to do. Maybe I’m working on a Trailhead module. I’ve got this, this, this, and it works. And I would do that when I was an admin building stuff, and then I would sit and demo it, and well, of course, the demo is exactly the thing that I built it on, so it works consistently every time.

But I think the real, I wouldn’t call it human in the loop, but it’s almost like the check-in factor is… So once you have your agent built, and once you have maybe Service Assistant up there on the screen, who are the two or three users you’re checking back in with to say, “So how’s this going,” and then sitting with them in their environment? And I figured that out really fast when I had to go to a call center. Because at a call center, you get all kinds of different attitudes, you get all kinds of different personalities, and it’s chaotic. And some people’s desks are sterile like an operating table because they need that, and some people have 10,000 beanie babies and, what are those, Labubus all over the place and stickers, and it’s very chaotic, and you’re like, “Oh, I didn’t test Agentforce.”

Josh Birk:
Yeah. I’m looking at… This is a side-

Mike:
AI-riddled environment.

Josh Birk:
Yeah, this is a segue, but I’m looking at my desk right now, which I had to abandon shortly after the holidays because my desk turned into, not the clearing ground, the opposite of the clearing ground. So right now it looks like somebody tried to have a yard sale of random electronics, and there’s barely… I’m on this last little corner of my laptop, and everything else is just pure chaos.

No, I think that’s an excellent point. And also, I think it brings up, too, because I have grown angry with customer support when the customer support was a really bad IVR system, and that’s really honestly like… If you want to abuse your customer support staff, give your customers a really horrible automated system. And that is the first way… And the first human I talked to, I had to be like, “I’m angry, but not at you,” because I knew I was so angry at the time, I wasn’t going to be calming down really quickly.

But I think that plays into… First of all, the idea of Agentforce is that it’s supposed to give you this very natural conversation. And second of all, kind of on the flip side, is a good kind of a memory bleed, it’s supposed to remember and retain what was being talked about. So there’s that classic, you call your phone company, the first thing they ask you is your phone number, and you’re kind of like, “You should know that.” And then you put in your phone number, and then you do 15 things, and it figures out, “I can’t do anything,” and it calls a human, and the first thing the human asks you is your phone number. It’s like, “This is not fun. This is not a productive use of my time right now.” And we really try to resolve that, noting things like the human gets a transcript of what was being talked about as well. They get a little AI summary of like, “This is what’s wrong with the customer,” kind of thing.

But I think to your point, you don’t necessarily know how that plays out until the rubber hits the road. What are the right points for the agent to go in and have autonomy, and then what are the right points for the human to jump in and be that actual person that somebody can talk to?

Mike:
Right. Yeah, I mean, I happen to think of that just, not in your use case, but in the use case of, “Well, if I’m going to generate an example email or a response, then who vets it? How are they vetting it? What is the environment that they’re doing it in,” because sometimes I think inadvertently as an admin, you can just add chaos to the screen, and you don’t mean to.

Josh Birk:
Oh, absolutely. And I’m trying to remember if I’ve mentioned this on the podcast, and I’ve mentioned it so many times in talks and workshops because it’s one of my favorite examples. But when I tried to write an agent to help our wonderful content editor, Eliza Riley, do her job, what I did was write something that’s three times slower than she is, because she can read a Google Doc very quickly and easily, and she doesn’t actually need an agent to try to do that for her. And until I put that in front of her… It was literally a watch-her-face-watching-the-agent-work kind of thing… I would never have realized, “I’m actually slowing you down. I’m actually doing the opposite of what my approach was trying to do.”

Mike:
Right. No, I ran into that trying to get support the other day through an app, and they’re like, “Oh, we’re going to give you this and fill this out. Would you like to self-report,” and I was like, “Yes, I don’t want to have to talk to somebody,” which is ironic because my generation loves to talk to people and host a podcast. But I was like, “No, because I can just clearly…” It was like food delivery… “I can just clearly spell out what the ingredients were missing, and then you guys tell me what you’re going to do for me.”

And it was funny because nobody had gone through, what happens if we ship half an order? And I get that… And this is the same for what admins deal with too. There are infinite possibilities of ways that an item can end up in a customer’s hands different than what was intended. But I feel like when they put things in there, nobody was just like, “What happens if they type half an order?” I literally got, “One of this, expected two. One of this, expected two,” and it gets down to the end and it’s like, “Yeah, I can’t help you with that. Transferring you to chat,” and then the chat didn’t carry over any of the information. I was like, “Okay. I mean, I love that you guys are trying this, but did nobody test this?”

Josh Birk:
Right. Well, and this is not always what happens, but once I’ve learned it, I haven’t been able to stop thinking about it, you know one of the most common reasons you’ll only get half your order?

Mike:
Is?

Josh Birk:
Because it’s the law of good intentions. What they do is they try to divide out something that’s cold versus hot so that your salad doesn’t get steamed on the way.

Mike:
Sure. No, I get that.

Josh Birk:
Now the delivery person has two bags, and they get handed one bag. And so it’s like everybody up until the driver drives off is trying to do… And I’m not blaming the driver here, but it’s like all the humans were trying to do the right thing, but they weren’t communicating with each other to the point where that’s actually going to happen.
On the flip side, have you had the chance to ride in a Waymo or another self-driving car?

Mike:
I have not.

Josh Birk:
Okay.

Mike:
Have you?

Josh Birk:
I have.

Mike:
Okay. Tell me.

Josh Birk:
And it is a very interesting experience because, first of all, these cars do very, very interesting things when it comes to picking you up, because they won’t just parallel park or double-park assuming that everything’s safe. They look around and they’re like, “I’m going to go…” You know how a cab would just go past you? It’s intentionally going past you because it’s actually going to park over there where it’s safe. So it’s weird little things like that that you see. And then also the ride’s so quiet. There’s no human. It’s so quiet. And so I found this almost calming, especially as somebody who’s riddled with social anxiety about half the day. It’s like this really kind of peaceful just… You’re just driving, and it’s just like, “Oh,” sort of thing. And so I think we’re in this… And I know we’ve gone way off… This is a tangent [inaudible 00:26:19]-

Mike:
No, we’re in the same realm.

Josh Birk:
Yeah, but it’s like-

Mike:
Here’s the connection. When people come to TDX, they’ll probably ride in a Waymo. There. See? Done.

Josh Birk:
Yeah. Well, and also we have a fleet of robot delivery bots here in Chicago, and we watched one when we were out with some friends. And going back to that cold, hot thing, the bot probably won’t get that wrong, right? The bot’s probably going to communicate with the restaurant, and the bot probably already knows, “I’m expecting two bags. You put the hot stuff in this container and you put the cold stuff in this container.” So I think we’re in this very weird scenario where… I am not saying the future is a lack of human connection, right? I think humans are still going to get things right. But then I think we’re also entering a phase where, if you’re doing a very similar task over and over again, the self-driving cars might be kind of the way of the future.

Mike:
Right. I’ve come full circle. There’s always the, “Oh, robots are going to take over.” And we’re of the generation that remembers when auto manufacturing made a big shift towards robotics. And for a while, I also remember cars just falling apart because the robots welded exactly where they were supposed to weld. It’s just there was no metal there. The 3D level of geometry that’s required, it wasn’t there, right? It’s there now.

I think what’s interesting is we very rapidly, at least in my perception, have gone from “AI is taking our jobs” to “it’s not.” And here’s what I posit for this year. I actually think AI could lead us to full employment. And if you’ve ever studied macroeconomics, full employment isn’t every single person having a job. That’s theoretically impossible. But it’s the smallest possible percentage of the workforce unemployed.

Josh Birk:
Got it.

Mike:
And the reason I say that is I think if we really think through… And this is for admins as they’re thinking through, how do I roll out Agentforce in my company? How do I make it so that it’s kind of like putting on a supersuit for every single person as opposed to a replacement, right? The Iron Man suit made Stark a more powerful human. I didn’t say better, more powerful. And I think that’s what AI can do because… And so how does it lead to full employment? It can be the buddy, the double-check, for everything.
Case in point… I won’t say the name because I’m very cognizant. I go through a drive-through, fast food… I kind of eat there probably two, three times a week. I really like it. I order in the mobile app, I pull up to the window, say the number, awesome, here you go. I’m going to say one out of five times the bag I get handed has 30 or 40% incorrect items in it. It’s either the person behind me’s order or it’s my order, but not all of it-

Josh Birk:
Or half of their order. Uh-huh. Exactly. Yeah. Yeah.

Mike:
And so how does this help? Well, the person working the counter is doing the best they can. I always assume good intention. But they might’ve missed something because, quite honestly, people change jobs so quickly, there’s not enough time to get that second nature. Where AI could come in is… And I’ve seen these checkouts. I’ve seen them at conference centers, where you put the items on a 3D grid and then it scans it, and then there’s a little pay screen and it’s like, “Hey, I scanned the following items. Is that correct? Is that what you want to buy?” If they do that, then they slide every bag underneath, and now the person is confident…

And so here’s what I’m getting at: it could be better customer service. Then the person who opens up the window, instead of just being like, “Hey,” or like, “22.16,” because that’s how much it is for everything nowadays, “22.16,” they just kind of look at you, now they can be having a good day and they can turn and be like, “Hi, Mike,” because they know my name, and be like, “It’s 22.16,” and they can take my card and they can pay, and they can be like, “Here’s your food,” and it’s 100% correct. You can be a happy person working at your job because AI helped you out, and then you’re not grumpy like, “Ugh, this job sucks.” Right?

Josh Birk:
Yeah. Yeah. Well, and to-

Mike:
So that’s my theory. I’m a little Pollyanna, but…

Josh Birk:
And Daryl [inaudible 00:31:52] said it’s a destination.

Mike:
I know.

Josh Birk:
To kind of put a pin in it, so-

Mike:
The boat’s in the water.

Josh Birk:
So my slide always states, “Enhance and augment, don’t replace.” And there was a study of a small company, but they had a relatively large portion of their company which was customer support, and they added AI to the customer support’s workflow, and what they found was people were getting their job done faster, people were getting their job done happier, and they actually had less turnover and fewer people quit because of those first two things. So your predictions have already proven true in some instances.

Mike:
Sweet. That was the plan all along, just to really come out with a prediction and find one example to back it up.

Josh Birk:
And then walk away.

Mike:
And we’re done.

Josh Birk:
And we’re done.

Mike:
And we’re done. Yeah. Cool. Josh, I think we covered a lot from your talk. As we wrap up, is there anything that I missed that you really wanted to-

Josh Birk:
No, I don’t think so. If there’s one constant theme through it, it’s that the humans are the people who are going to make this stuff work, whether it’s writing the guardrails, whether it’s testing, whether it’s planning. It’s like even when we have AI writing AI and you’re vibe coding against your org and things like that, it’s still humans that are going to be the factor of success.

Mike:
Yeah. And humans, you, still trying things out. The biggest thing I always remind myself is the Steve Jobs quote, “Everything was built by somebody.”

Josh Birk:
Right. Yeah.

Mike:
So somebody figured this out, and just because you haven’t doesn’t mean you won’t.

Josh Birk:
Right.

Mike:
Just means you haven’t yet.

Josh Birk:
Mm-hmm. I like it.

Mike:
Well, good. On that note, Daryl’s put his boat in the water, hit pause, and is leisurely kicking back, catching whatever fish he’s shooting for.

Josh Birk:
Love it.

Mike:
And all of the other listeners are like, “What is going on with these Daryl-“

Josh Birk:
All these Daryl references.

Mike:
“… references?” But I will link to the podcast so that you understand.

Josh Birk:
I love it. All right, well, you have a good day fishing, Daryl. And, Mike, thanks for having me.

Mike:
Absolutely. Thanks, Josh.
Big thanks to Josh Birk for sharing his wisdom, wit, and a Waymo story with us today. Wasn’t expecting that. Now, if you’re building with AI in your org or just starting to plan, remember, the best agents don’t replace your people, they empower them. Test often, plan well, and always keep a human in the loop. And if this episode sparked ideas or gave you some clarity, well, do me a favor. Share it with a fellow Salesforce admin. And until next time, we’ll see you in the cloud.
