Today on the Salesforce Admins Podcast, we talk to Sri Srinivasan, Senior Director of Information Security at Salesforce. Join us as we chat about his recent presentation at TDX and how to build secure, reliable AI experiences with Agentforce.
You should subscribe for the full episode, but here are a few takeaways from our conversation with Sri Srinivasan.
A quick heads up before we dive in: This episode may include forward-looking statements—aka things we’re excited about that may not be here just yet. So, as always, make your purchasing decisions based only on what’s currently available. For the full legal scoop, check out salesforce.com.
Five questions to ask when you’re building with Agentforce
I caught up with Sri fresh off his TDX presentation about secure Agentforce implementation to pick his brain on how admins should think about security and AI.
For Sri, there are five things to think about in order to build secure AI agents:
- What is the agent’s role and scope?
- What data will the agent have access to?
- Which actions should be public and which should be private?
- Do you need to build any extra guardrails?
- Which channels will the agent use?
As always with security, the key concept here is the principle of least privilege. Running through Sri’s questions helps you build an agent that can’t do anything you don’t want it to do.
What’s coming next for security in Agentforce
Sri also gives us a sneak peek at the new tools his team is piloting to help admins build secure AI agents. You’ll be able to look at metrics like instruction adherence, coherence, how factual the responses are, and how grounded the agent is.
They’re also trying to simplify how user permissions work with AI agents in order to make it easier to keep things limited and secure. It’s easy to turn things on and off when you’re trying to get something to work, but you need to revisit your permissions from time to time and apply the principle of least privilege.
The role of admins in the future of Agentforce
Finally, I asked Sri about how admins fit into the future of AI on Salesforce. “Admins are key to everything that we do,” he says, “they understand everything that’s happening within their environment. They know which actions, what permissions, what they do, and agents are just another avenue to expose and interact with this crux of it.”
How fast would you drive a car with no brakes? Sure, Agentforce is a sports car in terms of everything it can do. But it’s up to admins to build the brakes and make sure that AI agents are only doing the things you want them to do. And that starts by understanding the systems and data behind them and then asking the right questions.
There’s a lot more great stuff in my conversation with Sri, so be sure to listen to the full episode. And don’t forget to subscribe to the Salesforce Admins Podcast to catch us every Thursday.
Full show transcript
Mike Gerholdt:
This week on the Salesforce Admins Podcast, we’re talking with Sri Srinivasan about secure, reliable AI experiences with Agentforce. Now, Sri is a leader on the security compliance customer trust team at Salesforce, where he helps customers understand and implement security best practices. Of course, before we get into this episode, be sure to follow the Salesforce Admins Podcast wherever you get your podcasts. That way you get a new episode every Thursday delivered right to your phone or your mobile device. So with that, let’s get into our conversation with Sri. So Sri, welcome to the podcast.
Sri Srinivasan:
Thanks for having me here, Mike. Super excited for it.
Mike Gerholdt:
Well, I love the presentation that you gave at TDX, and I’m sure more people would love to hear about it too, which is why I wanted to have you come back on, because everything now is Agentforce and security is always top of mind. I’ve always preached security ever since I started at Salesforce. I’ve had, I think, Laura Pelkey on quite a few times. But that was the compass of what you talked about at TDX. But I’m jumping ahead. Let’s talk about you a little bit. Tell me kind of where you got started and how you got to Salesforce.
Sri Srinivasan:
Let me try to make it sweet and sharp. So I have always been in security. I have a master’s in information management specializing in security. I worked for Big Four accounting firms, but not doing accounting. I did security for them, data security and data privacy. Then I ended up working for a little gaming company where I really got involved in security due diligence. It was a small company based out of Reno, but they were not really small in what they did. They handled gaming systems, gaming interactions, and lotteries all across the world. So that got me exposed to different systems, and more specifically to fraud and how systems can be hacked to do things that they shouldn’t be doing. That’s where I got more interested in understanding the lay of the land of security. I spent about five or six years there.
Then I got an opportunity to work for one of the biggest tax preparers in the United States. I ran their cyber fraud operations group for two years down there, and then my business teams, product teams came over to me and said, “Sri, you’ve been on the other side yelling at us to do a better job. Why don’t you come on this side and do that?” So I spent a couple of years on the product side as well.
Then during COVID, I was looking back at my life when we had lots of time at home, and I realized I’ve done a lot of the security functions in total: audit, GRC, red teaming, blue teaming, security operations center, fraud operations. One thing that I thought I did not have was that customer-facing experience, and this great opportunity came about at Salesforce. My role currently at Salesforce is to interact with customers. My team, security compliance customer trust, is the front-facing team for all customer inquiries around security, compliance, and trust. So that’s how I got here, and I’ve been here for about five years or so, almost five. It feels like I just started yesterday, and it’s amazing. Every time I meet a customer, I just feel excited.
Mike Gerholdt:
Yeah. I mean, if I had to go back in time and pick a career in tech, I feel like security is one where you’re always going to have a job, because if there’s a lock out there, I promise you there’s probably somebody trying to break it.
Sri Srinivasan:
Yep. I hear you. And the frustrating part about it is that it’s oftentimes not people trying to break the lock. It’s just people forgetting to lock their locks, and then figuring out, “Hey, how did somebody get in?” Well, you didn’t lock it in the first place.
Mike Gerholdt:
Yep. Oh, man. Speaking the truth. So it feels like there’s kind of two eras. Well, I mean, we talk about different waves at Salesforce, but to me there’s the pre-AI era and then there’s the post-AI era. And for a long time, up until I saw your presentation, I kind of didn’t think about security with AI, because most of everything that we do on the platform is just so secure, but let’s talk about what your presentation at TDX was. So kind of in a nutshell, bring us into that presentation and what you talked about.
Sri Srinivasan:
Sure. I think the intent was this: AI is the hype word right now. Everybody’s talking about LLMs, everyone’s talking about how to protect those LLMs, but that’s just the tip of the iceberg. There’s so much more when it comes to implementing AI right. Salesforce provides you with a very secure platform, but it is only as secure as you implement it. So that was kind of the crux of the presentation, where we articulated the shared responsibility model in terms of what is expected of you as a customer, you as an admin: what are the five or 10 things you want to ask anybody in your organization who wants to come up with an AI solution?
And we wanted to break it down from a business case perspective. If you look at all of our top tracks around Agentforce, we break it down into role, data, actions, guardrails, and channels. Those are the things that your business users are very familiar with. If we can build security into those aspects, then by nature of it we’re building security into the product itself, rather than coming in at the end and saying, “Now I’m going to do a security review and I’m going to add security on top of it.”
So that’s what we were focusing on during the presentation. Things around being very cognizant of what the role of the agent is, what the scope of the agent is, what it will do, what it won’t do. What data will it have access to, and where is that data coming from? Do we need to bring that data into the Salesforce system? Do we need the agent to have access to it? Other critical things, such as least privilege, access controls, and designing your actions securely. Those are the things we spoke about during our presentation, and most of it, if you took it out of context and put it on paper, wouldn’t be new. All of this is standard security practice, but the way it’s applied, the lens through which you look at it, is a little different when it comes to Agentforce.
Mike Gerholdt:
Yeah, I think it’s always interesting as we delve into new tech to think of security as really telling the agent what it should and shouldn’t do. Wow. Because most people, I think, probably encapsulate security into permissions and profiles and data access, but what it should and shouldn’t do is also security, right? One of the things you mentioned in your answer was guardrails. And I’m wondering, can you give us some examples of what could go wrong if guardrails aren’t set properly?
Sri Srinivasan:
Yeah. So I’m trying to give a better example here because I don’t want it to be something that is future-looking, but rather what’s in the product today. So when you give your agent guardrails, a simple one could be your agent instructions: under no circumstances should I ask for your email address or customer name or your order number, because I have all that information. If I can validate Mike, I have all that information. I shouldn’t be asking you to give me that information and assume it is right. So that’s a very simple guardrail that you can put into your system, right? Another set of guardrails could be: you shall not perform these actions without having the user verified. You need to know who the user is before you can go and reset their password, or you need their second factor. You need to do step-up authentication before you can trigger these actions, things of that sort. And with our Agent Builder, you can provide these as natural language instructions, and the system will know.
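To make Sri’s example concrete, here’s an illustrative sketch of the kind of natural-language guardrail instructions you might add to a topic in Agent Builder. The wording and scenario are hypothetical, not copied from any shipped template:

```
Never ask the customer for their email address, account number, or order number.
Look these values up from the verified user's records instead.
Do not reset a password or make any account change until the user's identity has
been verified through the designated identity verification action.
If identity verification fails, apologize and offer to transfer the conversation
to a human service representative.
```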
Mike Gerholdt:
One thing I was thinking of, and I’m going to ask a silly question, because when we talk security, I feel like I’m the person that has to ask the silly questions. So I’m going to do that. You mentioned setting a guardrail that the agent shouldn’t have access to, or shouldn’t ask for, the order number, because the person looking at the screen has the order number. Why is that important if we’re not passing an order number to an agent? Why would we withhold data from an agent?
Sri Srinivasan:
So we’re not actually withholding the data from the agent. The reason we don’t want to explicitly ask the user for certain information is not that we can’t ask. We can ask, but we shouldn’t trust that information. It’s the innate concept of trust but verify. I can ask you for this information, but that’s not a great user experience, because I already know your account number, your order number. I have all that information. What I don’t have is knowledge of who’s on the other side of the system. So that’s more important for me when I say you shall not ask for it. The reason I explicitly state that is because I don’t need this information from you. I already have it. What I need from you is to validate who you are. Once I know you’re Mike and this is the associated user ID, I have all the other information. Are you able to connect those dots?
Mike Gerholdt:
Yeah, no, I totally get it now. I guess I was envisioning guardrails as those bumpers you put up when you go bowling and you’re not very good at it, so you roll a strike every time. And it is kind of that, but it’s also there to make sure the conversation and the agent are flowing in a natural manner so that you’re actually being productive, is the way I hear it. So that totally makes sense.
Sri Srinivasan:
100%. And what we have, currently in pilot, is instruction adherence. Basically, our Atlas Reasoning Engine has supervisory elements that are constantly looking at those conversations and getting metrics around key aspects such as instruction adherence, coherence, how factual it is, and how grounded it is. These are then used to decide what the user experience should be.
For example, if there is an instruction that says you shall not ask for the password through the portal, and the system asks for the password anyway, then the instruction adherence will be low and the response will be ungrounded, because it’s doing something that is not grounded in its instructions. So we can set the system to say, “Block those transactions, don’t do it,” and the agent would say, “Hey, sorry, I cannot help you here.” Whereas in other cases, maybe we can say, “We don’t have enough information,” and build the system in a way that it starts asking for more information so it has everything it needs to help you. So these are some things that are coming out. These are guardrails that apply while the system is executing.
Mike Gerholdt:
Yeah, no, it makes sense. I mean, you wouldn’t have to teach somebody the history of math, algebra, geometry, and trigonometry if all you were going to ask them is what is two plus two? That totally makes sense to me now. I’m rethinking guardrails in a completely different way. I just learned something on my own podcast. Light bulb moment. Let’s talk about that. You mentioned you’re talking with a lot of customers. Tell me, and you don’t have to be specific, what were some of the aha moments that stood out when you were working with admins or customers? There’s always that moment where you finally make a really good egg and you’re like, “Oh my God, I know how to fry eggs now.” I’m using that as an example because I’m cooking eggs, but can you give me that? I feel like I get to see it a lot with some of the workshops, but it’s probably a little bit different for everybody.
Sri Srinivasan:
So right off the bat, one of the biggest aha moments that I have experienced with admins is when I run these AI workshops at the world tours, and it is really eye-opening for them to look at the middle section of Agent Builder. When they start looking at the reasoning, they now know why an agent did what it did. One of the biggest reasons why agents are a little complicated and different is that, under the hood, agents use LLMs, right? And we all know what LLMs are famous for, right? They’re non-deterministic. What do I mean by non-deterministic? I mean that the same input can give you different outputs at different times. And earlier, about a year and a half ago, one of the bigger problems with LLMs was that they hallucinate. It’s still a problem, but we have figured out how to address it. We provide the model with more data, we ground it with more truth, so that it is working within this construct. We have RAG, and we have a lot of other things that we provide to solve that problem.
But the other problem of being non-deterministic is still there, right? And that is why, when you start looking at the reasoning sections in Agent Builder, our Atlas Reasoning Engine is basically telling you: here’s the utterance that was provided (by utterance, I mean what the user typed), here’s the topic I chose based on that utterance, and once I chose the topic, here’s the action I chose and executed. But before I executed the action, I made a plan for executing it. And if I did execute the actions, here are the runtime guardrails I would have triggered or violated, and hence I chose not to provide this answer, or hence I chose to go on to the next step.
So when admins look at it, it instantly clicks in their mind. “Okay, this is how the agent worked.” And that also allows them to understand, “Oh, if I were to tweak this one word, maybe the agent would react a different way.” And then they go in and they try that and they’re like, “Whoa, wow. Now I’ve actually cracked the code of agents.” That has personally been one of the biggest aha moments for me.
Mike Gerholdt:
Yeah, for me, I love when I help admins and myself build a prompt, especially when we do grounding and then bind it to a field on a page. That’s always very simple, and they press this, I call it the sparkle button, and they get a response back and it’s like, “Wow.” And it’s consistent enough, but it’s not like a chatbot, right? It’s not like an email template. It’s a little bit different every time. And that feeling that AI is scary and hard to do, you can see it sort of start to melt away.
Sri Srinivasan:
Right. 100%. And in those same scenarios, I’ve seen admins go really crazy when they use the dropdown in Prompt Builder and say, “Oh, so Sri, I can actually bring in data from an Apex class?” I’m like, “Yeah, you can.” And now they’re able to relate AI to the things that are near and dear to them: actions, flows, and Apex classes. Admins, that’s their bread and butter, they know that in and out. So when they look at that and think, “Oh, it’s as simple as using this in the AI world,” I feel they get very empowered and they’re like, “Okay, let me go play with it more now.”
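As a concrete illustration of what Sri is describing, here’s a minimal sketch of invocable Apex of the kind that can back a custom agent action or ground a prompt template. The class, object, and field choices are placeholders, not a prescribed pattern:

```apex
// Hypothetical invocable Apex that an agent action (or Prompt Builder) could call.
public with sharing class OrderStatusAction {

    public class Request {
        @InvocableVariable(required=true description='Id of the order to summarize')
        public Id orderId;
    }

    public class Response {
        @InvocableVariable(description='Plain-language status summary returned to the agent')
        public String statusSummary;
    }

    @InvocableMethod(label='Get Order Status' description='Returns the current status of an order')
    public static List<Response> getStatus(List<Request> requests) {
        // Collect the requested Ids and query once, respecting the running user's access
        Set<Id> orderIds = new Set<Id>();
        for (Request req : requests) {
            orderIds.add(req.orderId);
        }
        Map<Id, Order> orders = new Map<Id, Order>(
            [SELECT Status FROM Order WHERE Id IN :orderIds WITH SECURITY_ENFORCED]
        );

        List<Response> results = new List<Response>();
        for (Request req : requests) {
            Order o = orders.get(req.orderId);
            Response res = new Response();
            res.statusSummary = (o == null)
                ? 'No matching order was found.'
                : 'The order is currently ' + o.Status + '.';
            results.add(res);
        }
        return results;
    }
}
```

Because the agent runs this as its designated user, `with sharing` and `WITH SECURITY_ENFORCED` keep the response limited to records and fields that user is actually allowed to see, which is the least-privilege idea Sri keeps coming back to.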
Mike Gerholdt:
Yep. Let’s touch on permissions for a little bit, because I know you covered that in your presentation. What are some common pitfalls? I know I’ve gotten questions at the Agentforce NOW Tour about setting up permissions and giving people access to Agentforce, but what are some real easy things that most people trip over?
Sri Srinivasan:
So one thing to understand when it comes to permissions is that every time you create a service agent, that agent runs as its own designated user. We are going to be releasing employee agents pretty soon. Again, forward-looking statements apply. Employee agents actually run as the underlying user executing them. So if you’re in your CRM, the panel on the right-hand side, the Einstein Copilot panel we used to call it, that you can start interacting with, those are kind of like the employee agents, where it runs as Mike. Whereas if I create a service agent and you’re interacting with it through any of the different channels, through WhatsApp or through an Experience Cloud site, you have to designate a running user. Oftentimes, folks will create a brand new user. The good thing is that this user comes with no permissions, but the downside is that it will not be able to do anything. So similar to your standard setup, profiles, licenses, permissions, object- and record-level access, all of those need to be assigned to this user.
And sometimes what folks forget, what admins forget, is that you have organization-wide defaults and role hierarchy that could override this. And over time, because they’re like, “Oh, this doesn’t work. Maybe add this, maybe add this,” that user could end up having excess permissions. So it’s always important to review this agent user’s access periodically to make sure it’s appropriate, and to make sure that only the right folks can even edit the permissions for these agent users.
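To put that periodic review into practice, one low-effort check is to list everything assigned to the agent’s designated user. Here’s a small anonymous Apex sketch; the username is a placeholder for whatever user your agent actually runs as:

```apex
// Hypothetical username for the agent's designated running user
User agentUser = [SELECT Id FROM User WHERE Username = 'agentforce.service@example.com' LIMIT 1];

// List every permission set assigned to that user so anything it no longer needs can be pruned.
// The user's profile also shows up here, as an assignment where IsOwnedByProfile = true.
for (PermissionSetAssignment psa : [
        SELECT PermissionSet.Label, PermissionSet.IsOwnedByProfile
        FROM PermissionSetAssignment
        WHERE AssigneeId = :agentUser.Id]) {
    System.debug(psa.PermissionSet.Label +
        (psa.PermissionSet.IsOwnedByProfile ? ' (granted via profile)' : ''));
}
```

Run something like this from time to time, compare the output against what the agent’s role and scope actually require, and remove anything that crept in during troubleshooting.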
Mike Gerholdt:
Yeah, I always think of, especially when you’re getting started, who are you going to roll it out to? Because you don’t need to just turn it on and let everybody ask Agentforce whatever question they feel like, because you’re not set up for that. That’s probably not the business case, and a lot of the prep is sitting down and saying, “So we have this really powerful tool, we have this really fast car. Where are we going to drive it?” It’s not like you necessarily have to drive the really fast car everywhere you go, and not everybody needs to have keys to it. So let’s end on, and this is not a forward-looking statement, but I want to get your opinion on this. What do you think the role of admins will be in terms of shaping the future of AI at Salesforce?
Sri Srinivasan:
That’s a very interesting question. Admins are key to everything that we do. Admins literally understand the Salesforce ecosystem. From my vantage point, I look at admins as the know-it-all people, because they understand everything that’s happening within their environment. They know which actions, what permissions, what they do, and agents should be considered as just another avenue to expose and interact with the crux of it. So admins should take it on themselves to make sure that we are interacting with these agents in the right way and that the right guardrails are in place. And one thing I want to quickly come back to: you brought up the example of a sports car, right? So let me ask you, what makes the car go fast?
Mike Gerholdt:
Could be a number of things, but usually it’s the motor.
Sri Srinivasan:
Well, I would say it’s the brakes. If I gave you a Bugatti and told you I took the brakes off, would you drive it at a hundred miles an hour?
Mike Gerholdt:
Well, you’re asking the wrong person.
Sri Srinivasan:
So that’s where I feel the admins come in. They have to put on the brakes to make sure that agents are doing the right things, agents are responsible, agents are ethical, agents are doing what they’re supposed to do. Yes, Salesforce does provide a lot of this out of the box, but you also need to do your part. If the underlying data is biased, then the response will also be biased. So admins play a very important role, maybe not in developing all these things, but in being the trusted advisors to their implementation teams and asking them the right questions.
Mike Gerholdt:
I love your example, because fast is only relative if you understand slow. And fast only matters if you can stop. Great way to frame security. See why I have smart people like you on the podcast? You’ve got to educate us more.
Sri Srinivasan:
I’m happy to be here, and thanks for this opportunity. One thing I also want to emphasize is that, like everything else, protecting data is a partnership. Salesforce provides you with the tools, the foundations, out-of-the-box topics, the guardrails. When we spoke about guardrails, we have runtime guardrails, detective guardrails, and corrective ones too, so that we can actually look at things and make changes to the system. We have all of those, self-improving guardrails included, and we’re providing that as a platform, but it’s on you as the customer and admin to understand and use them. One of the biggest things that we just released was the Agentforce Testing Center. That’s something very different, because like I said, how do you test non-deterministic outputs? Yes, I can put 20 people in front of the agent and have them test for 10 hours a day or for a month, and we would be able to cover a lot of use cases. But think about it, you’ve built your own agent, right, Mike?
Mike Gerholdt:
Oh, yeah. Quite a few times.
Sri Srinivasan:
How long does it take for you to build your first agent?
Mike Gerholdt:
So that’s a trick question. It doesn’t take long to turn it on, but I wouldn’t consider it built in that time.
Sri Srinivasan:
So if Salesforce has enabled folks to quickly churn out agents, we would not be able to reap that benefit if we then go tell organizations, “You’ve churned out 50 agents in a matter of three days. Now go ahead and spend 50 days per agent to test them.” It doesn’t scale. So that’s where Agentforce Testing Center comes in. It allows you to use AI to start generating test cases and to evaluate your agents. So that’s a great addition. And as admins, we should be aware of that and leverage it to make sure that our agents are secure and do what they say they do.
Mike Gerholdt:
Yeah. No, I agree. I mean, does it take long to spin it up and do some configuration? No, probably faster than most. But is it ready for prime time? You can think of all the car builders. They can build cars really fast, but then they got to test them. And just because it’s built doesn’t mean it’s ready for people to use it. So it’s a great analogy.
Sri Srinivasan:
Yeah, and we’re bringing in more. One of the other things that I spoke about during my presentation was the need to bring in user context. You need to do user validations and verifications. We are coming out with newer features where we’re going to enable agent variables. These are secure, session-scoped variables that capture and store data, and they can only be set by action outputs. So this is something you can start using to improve your trust in your agents as well. The other thing you can do is have filtering rules with your agents. Basically, you say that certain topics can only be unlocked if some other activity happened. For example, you can only start handling refunds when the user is verified. Those are some things that are coming out very shortly. We call them agent variables, action bindings, and filtering rules. These are keywords. I’m just calling them out so that after hearing this podcast you can go in, search for them, and you should be able to find more details in our help articles.
Mike Gerholdt:
Yeah, and at our upcoming events. I’m sure there’ll be more about that at Dreamforce. People ask me why this or why that, and you have to think of it this way: when you’re building an agent, you’re building something at an enterprise level, and it’s almost like the difference between commercial grade and household grade. When you buy a mower, there’s one for the homeowner who’s maybe going to mow a couple of times a month, and then there are mowers made for people who own businesses and are going to run the mower eight hours a day, five days a week, 200 hours a month, and really put it through its paces. That’s the difference between some of the stuff that we build and some of the low-lift stuff that’s out there. So I appreciate that perspective.
Sri Srinivasan:
Yep. If we still have time, the only other thing that I want to let folks know is that we just started a pilot with our Interaction Explorer, and as admins, this is something we all like to do. How many times have we gone in and looked at our history or logs? We love that part, right? So Interaction Explorer takes the hard work out. It gives you all the information in one pane of observability across all your Agentforce interactions: how many users, how many sessions, what the quality score is, how many interactions per moment, what the top-ranking topics are, and what the top-ranking tags are.
By tags, I mean we have taken all these interactions, used AI to generate tags, and grouped them together. What that allows you to do is trace and cluster sessions with granular log data, and then click down and inspect configurations at each and every level so that you can optimize your agents. I can actually go down to a specific conversation you had with an agent at a specific time, look at that specific interaction, the message that you typed, look in the background and say, “How long did the agent take? What did it spend its time on? How much time did it spend on utterances? How much time did it spend on trust-related activities? How much time did it take to execute another action?” All that information is available to you so that you can make a much smarter decision on your agent enablement and also on how your agent is being consumed.
Mike Gerholdt:
No, I like it. Thanks for coming on the podcast, Sri. I mean, I feel like we went through all levels and I bet if I had a two-hour podcast, you and I could talk all day.
Sri Srinivasan:
I would love to.
Mike Gerholdt:
There’s always something to talk about, right?
Sri Srinivasan:
Yes, there’s definitely a lot. Again, folks, keep an eye out for our security blogs coming out and our security sessions at these world tours and other events. If you ever run into me, feel free to stop by and talk to me. I love to talk to customers. I love to talk to fellow admins.
Mike Gerholdt:
Okay, I don’t know about you, but my brain’s doing cartwheels, in probably the best possible way. Just a huge thanks to Sri for coming on the podcast. I know he worked really hard on his TDX presentation, and security and AI is always a difficult thing to wrap your head around. But hey, he showed up, nailed the seatbelt onto your self-driving car, and came loaded with car analogies. So I thought it was fun.
And if this episode made you go, “Ooh, that’s what guardrails do,” do me a favor and send it to your favorite admin friend out in the community. To do that, just tap those three dots and boom, share away. You can put it on social, you can put it on the Trailblazer Community, you can text it to a friend. And if you’re hungry for more Salesforce Admins Podcast, be sure to go to admin.salesforce.com. That’s where you’ll find the links to resources that Sri shares with us, including other podcasts and a full transcript of this entire episode. So that’s always good. I appreciate that. One last thing: if you’re looking to bounce ideas around, ask questions, and interact with other Salesforce admins, pop over to the Admin Trailblazer group in the Trailblazer Community. There’s a lot of good stuff going on. But hey, until next time, you keep those agents in line and we’ll see you in the cloud.