Data Ethics and AI for Admins with Kathy Baxter and Rob Katz

Today on the Salesforce Admins Podcast, we talk to Kathy Baxter, Principal Architect, and Rob Katz, VP of Product Management, both in the Office of Ethical and Humane Use of Technology at Salesforce.

Join us as we talk about data ethics, AI ethics, what it all means for admins, and why responsible AI and data management protect the bottom line.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Rob Katz and Kathy Baxter.

With great power comes great responsibility

We brought Rob and Kathy on the pod because they work in the Office of Ethical and Humane Use of Technology at Salesforce. They’re there to work on questions around data ethics and AI ethics. In practice, they set up systems and processes that make it as easy as possible to do the right thing and as difficult as possible to do the wrong thing.

While these issues are important to us here at Salesforce in terms of thinking about the platform we create, they’re also fundamental to everything you do as an admin. We become capable of more and more with each new release, but we have to make sure we use that power responsibly in a way that builds trust with customers. 

Thinking through data ethics

So, what are data ethics? “They’re guideposts about the gathering, protection, and use of personally identifiable information,” Rob says. Many services these days use your data to deliver personalized experiences. It makes sense, however, that users should have some measure of control over how much information they’re sharing and transparency on how it’s being used. Data ethics is putting those principles into action. As Rob explains, it’s “applied privacy by design.”

This is important because, over the past few years, we’ve moved from a world of data scarcity to a world of data surplus. It’s become less about how we can get more data and more about how we can get the right data. It’s all about improving the signal-to-noise ratio.

For example, if someone signs up for your birthday promotion, you may also end up with their birth year when all you needed was the month and day to send them a “happy birthday” email. While that might not seem like a big deal, it can inadvertently lead to creepy behavior from your organization when you start segmenting your list by age and targeting customers based on information they don’t know you have. An understanding of data ethics helps you collect only the information you need, and decide whether to keep it, restrict it, or discard it when you’re finished.
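
To make “take what you need” concrete, here’s a minimal sketch in Python of capturing a birthday without ever storing the year. The function and field names are illustrative assumptions, not part of any Salesforce schema.

```python
from datetime import date

def capture_birthday(submitted: date) -> dict:
    """Keep only what the birthday promotion needs: month and day.

    Dropping the year at capture time means age can never leak into
    later list segmentation or model features built from this record.
    """
    return {"birth_month": submitted.month, "birth_day": submitted.day}

# The year is consumed by the date picker but never persisted.
record = capture_birthday(date(1985, 3, 14))
assert record == {"birth_month": 3, "birth_day": 14}
```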

How AI ethics enters the picture

You also have to think through how your automations and algorithms work within that framework, which is where AI ethics comes in. This comes down to asking questions to determine if you’re using data responsibly: Are you collecting data that is representative of everyone your AI system is going to impact? Did you get consent for collecting this information in the first place? If your AI makes money off of other people’s data, how do you pay them back fairly? “All of those things are necessary to create responsible AI that works for everyone,” Kathy says.
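
The first of those questions, whether your data represent everyone the system will impact, can be checked directly. Here’s a minimal sketch, assuming pandas and hypothetical group labels, that compares each group’s share of the training data against its share of the population the system serves.

```python
import pandas as pd

def representation_gap(training: pd.Series, population: pd.Series) -> pd.Series:
    """Share of each group in the training data minus its share in the
    impacted population. Negative values flag under-represented groups."""
    train_share = training.value_counts(normalize=True)
    pop_share = population.value_counts(normalize=True)
    return train_share.sub(pop_share, fill_value=0).sort_values()

# Hypothetical example: one region is missing from the training data.
training = pd.Series(["north", "north", "south", "south", "south"])
population = pd.Series(["north", "south", "east"] * 2)
print(representation_gap(training, population))
# "east" shows roughly -0.33: present in the population, absent in training.
```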

Even if regulations like the EU AI Act don’t apply to your industry, there is a good chance that you could lose revenue, customers, and employees if you aren’t thinking about data ethics and how to create AI responsibly. A recent survey by DataRobot, conducted in collaboration with the World Economic Forum, found that 36% of the companies surveyed had suffered losses due to AI bias in one or several algorithms. “Good, responsible AI is good business,” Kathy says.

Our guests also pointed us to some great resources you can go through right after listening to the podcast, including a trailmix. This episode is absolutely jam-packed with smart people talking about important topics, so make sure to listen to the full episode.

Love our podcasts?

Subscribe today or review us on iTunes!

Full show transcript

Mike Gerholdt: Welcome to the Salesforce Admins Podcast, where we talk about product, community, and career to help you become an awesome admin. And this week we are taking the awesome level up a few notches, let me tell you. We have two guests on the pod and I’m excited for both of them. These people are so incredibly intelligent and thoughtful and I’m so glad they’re in the world. Kathy Baxter, you may remember, is a returning guest. She is the Principal Architect in the Office of Ethical and Humane Use of Technology here at Salesforce. And also joining us is Rob Katz, who is the VP of Product Management in the Office of Ethical and Humane Use of Technology.

Folks, we’re talking data ethics. We’re talking AI ethics. Don’t turn this off. This is stuff that falls in our bailiwick as admins. This is stuff that we need to pay attention to. This is stuff that we will clearly have an impact on as we build more and more apps that collect more and more data. Wait until you hear from Rob how much data will be collected, and the amount of information and things we need to think through that Kathy brings to light.

It’s a fun conversation, and I did close it by having them give you one thing you should do next. I’m going to go do those things right now; I think they’re so incredibly helpful. I hope you enjoy this episode. I really had a great time. I loved having Kathy back on, and, teaser: you will find out about a podcast episode that will come out in February. I’m just going to leave that hanging there, because let’s get Kathy and Rob on the podcast. So Kathy and Rob, welcome to the podcast.

Kathy Baxter: Thank you for having me.

Rob Katz: Thanks for having us.

Mike Gerholdt: It’s great. Kathy, we’ve had you back. Rob, first-time guest, longtime listener. I’ll just say that because it’s always the first thing people want to get out of the way when they’re on a podcast. That being said, I’m actually going to start off with you, Rob, because we’re talking a whole lot about data ethics and data and ethical AI. But let’s lay that first foundation and talk about data ethics. Can you start by telling us what you mean by that term, data ethics?

Rob Katz: Sure. Thanks, Mike, and thanks for having me. So I get this question a lot: what is data ethics? Data ethics are guideposts about the gathering, protection, and use of personally identifiable information. Practically speaking, that’s about transparency and control over the data that we as consumers share with companies and organizations about ourselves in return for personalized experiences. And when it comes to data ethics, more and more the onus is on an organization to handle the data they have about their stakeholders, whether those are consumers or other folks, ethically, because we as users are overwhelmed with all these opt-outs and privacy and preference centers and cookie consents. So data ethics in a nutshell is applied privacy by design.

Mike Gerholdt: Got you. Okay. Now before we jump into how that applies to Salesforce, Kathy, can you help us understand this in the context of AI?

Kathy Baxter: Absolutely. You can’t have AI ethics without data ethics. Data ethics is the foundation. And in terms of AI, that includes making sure that you are collecting data that are representative of everyone that your AI system is going to impact. It also means getting consent. We’ve seen too many AI models that are built on data that was simply scraped off the web. Whether it’s from an artist or a writer, content that may be protected under copyright, or that simply was never intended to be scraped and to fuel an AI, is being used to do just that. And by scraping the data off the web, you’re probably also capturing a whole lot of toxicity and bias.

So how we treat data, how we collect it, how we get consent, how we pay people back for the data of theirs that powers the AI companies are now making lots of money off of, how we label it. All of those things really matter. So first and foremost, we need to ensure that we have good data: that it’s representative, that it was ethically collected, and that it has been screened for systemic bias, toxicity, and quality. All of those things are necessary to create responsible AI that works for everyone.

Mike Gerholdt: So I started there because I think this is so fundamental, even for people like me that have been in the ecosystem and remember all of those different bots and ways we used to gather data, as you said, Kathy. I remember being in sales meetings with the sales managers, like, “Well, how do we just scrape LinkedIn for a whole bunch of lead information?” So that leads me a little bit to talking in the context of Salesforce. But before we do, help me and everyone understand the Office of Ethical and Humane Use of Technology, because that’s where both of you sit. And to me it feels like, wow, this is such an important part of what Salesforce is devoting its time to. So help me understand that office’s purpose. That’ll set the tone for where we’re headed today.

Rob Katz: Kathy, do you want to take that one? It’s your Salesforce anniversary after all today.

Mike Gerholdt: Oh, it is. Congrats on your Salesforce anniversary, or Salesforce-iversary as we call it.

Kathy Baxter: Yeah. Today is seven years. It’s fantastic. One of the things that really attracted me to Salesforce in the first place was the company’s stance on issues of social justice and pay equity, and a recognition that we are part of the broader community and society. What we create in the world isn’t just about giving value back to shareholders; it’s about what we are putting out in the world, and our values of trust and customer success and equality and now sustainability. All of that created a culture where, when Marc announced in 2016 that we would be an AI-first company, I started asking questions about ethics: how do we make sure that we are building our chatbots responsibly, that we don’t create a chatbot that spews hate and disinformation? How do we make sure that we are giving our customers the tools they need to use our AI responsibly?

In 2016, there was a lot of head nodding, like, “Yeah, those are good questions.” I pitched this as a role to our then chief scientist, Richard Socher, and he pitched it to Marc Benioff, and they were both like, “Yeah, this is totally what we need.” At the same time, Marc said, “We also need an office. We need a chief ethical use officer.” The hunt began, and Paula Goldman was hired to become our Chief Ethical and Humane Use Officer. So this is really about creating not just a culture, because I think the culture has been here, but creating the practice and the awareness, and putting the processes in place that enable our teams to make it as easy as possible to do the right thing and as difficult as possible to do the wrong thing.

Mike Gerholdt: That’s really good. So Rob, can you help us put that a little bit more into context? I think Kathy set you up there, but Salesforce as a company, we need to pay attention to this. There’s also Salesforce as a platform.

Rob Katz: Absolutely. So as a platform, how are we building the platform and shipping it? And then how are we helping our ecosystem configure it in a way that makes it easy to do the right thing and hard to do the wrong thing? That is what Kathy and I do on a day-to-day basis with our technical teams, our product teams, our design teams, and increasingly with our systems integrators, ISV partners, folks on the AppExchange, and, yes, admins and data specialists and users. Because how the Salesforce platform is used, as you know, is highly configurable. So let’s work together to ensure that we’re using it in line with these best practices and principles.

Mike Gerholdt: Totally makes sense. And I think for a lot of us, it’s a big platform. It’s easy to think of marketing, and I think you actually used that example: those opt-out forms gather a lot of data. So as we continue the descent from 30,000 feet, we know that data ethics is a good foundation. Let’s set the tone: I’m an admin sitting here listening to this, and I want to start good data ethics in my organization. Kathy, I’m going to get to you next, because this is going to help me lead into building AI ethically. Rob, what are some of the questions or areas that an admin should start thinking about, or red flags they should watch for, as they’re building apps?

Rob Katz: So when it comes to data ethics, it’s important to remember that Salesforce helps you build a single source of truth to connect with your customers. And you were talking about how the world has evolved; the current state of the world in terms of data, I would argue, has shifted. We see that in the statistics. As we heard from Bret Taylor on the Dreamforce main stage in the keynote, the average company has 976 IT systems, and their data are going to double again by 2026. So we have moved from a world of data scarcity, where every single data point mattered and going to scrape LinkedIn for sales leads seemed like a good idea, to a world of data surplus. And in a data surplus world, we as admins need to think a little differently.

We need to think less about “How can I go get more data and then cross my fingers and hope that I can make sense of it?” and more about “How can I get the right data? How can I get the right signal-to-noise ratio?” Because when you have too much data and a bad signal-to-noise ratio, you can have unintended consequences or unexpected, creepy outcomes. So I’ll give you an example. Say you’re trying to run a birthday promotion, and you want to connect with your consumers whose birthday it is, or even whose birth month it is, and say, “Hey, happy birthday. We’d like to send you a free…” I’m going to say free coffee, because I’m in Seattle and there’s a little local coffee shop here that likes to give me a free drink on my birthday.

Mike Gerholdt: I mean, free coffee, can’t turn that down.

Rob Katz: Never. So I’m opting in. I’m saying, “Yeah, love my free coffee, here’s my email address, here’s my birthday.” Now the form. Admin, we’re setting up the form. What do you need there? You need month; you might need day if you want to do it specifically on that person’s birthday. And the way you’d typically set it up would be…

Mike Gerholdt: Wait, hang on.

Rob Katz: Go ahead.

Mike Gerholdt: I’m an aggressive admin, why don’t I put month and year?

Rob Katz: Well, there you go. Now you know exactly how old I am, and I’m not too particular about that. But you don’t need year for that birthday promotion. You just need month and day. When you bring year in, you’ve now captured data about somebody that can indicate how old they are. And as a result, when those data are later used for other things, like segmentation of an ad audience or in a machine learning model, it can lead to potential age discrimination creeping into the predictions that are made or the segmentation that’s created. And that is an application of data ethics: take what you need, not what you can. Take what you need, and you get a better signal-to-noise ratio, and you can still deliver a great birthday promotion without needing to collect their year.

Mike Gerholdt: It’s just so ingrained that when you say, “Give me a date,” it’s month, day, year, or day, month, year in Europe, because that’s how we’re trained. Even the little calendar field, when you create a date field in Salesforce, includes year, I think. So, wow. Things to think about.

Kathy Baxter: And now the challenge with this, the corollary on the AI ethics side: what if we want to do an analysis to see whether there is potential bias in my model against a protected group? Am I making biased decisions based on age? Because you might not have collected year, but maybe there’s another proxy in there. For example, maybe the year you graduated college, or the year you first started working, or how many years of industry experience you have. It’s difficult to do fairness assessments if you don’t have that data.

So in some cases, companies may decide that they do want to collect that data because they explicitly want to do fairness and bias assessments on those fields. But they put it behind walls, and only, say, the fairness and ethics data scientists can actually see those fields and use them when modeling. And they’re only used to do fairness assessments; they’re not used to make predictions. So just to underline what Rob was saying about take what you need: be mindful. There may be cases where there’s a sensitive variable that you want to collect, but handle it with extreme care, know exactly what you’re going to do with it, and put barriers around it so that it’s not going to be used in unfair and unsafe ways.
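
Here’s a minimal sketch of the kind of audit Kathy describes, assuming a pandas DataFrame of model decisions joined with a restricted, hypothetical age_band column. The sensitive field is used only to measure outcomes, never as a model input.

```python
import pandas as pd

def selection_rates(decisions: pd.DataFrame, sensitive: str, outcome: str) -> pd.DataFrame:
    """Per-group selection rate plus its ratio to the best-treated group.

    The sensitive column lives behind restricted access and is joined in
    only for this audit. A common rule of thumb (the four-fifths rule)
    flags any group whose ratio falls below 0.8.
    """
    rates = decisions.groupby(sensitive)[outcome].mean().rename("selection_rate")
    return rates.to_frame().assign(ratio_to_max=rates / rates.max())

# Hypothetical audit data: 1 = offer extended, 0 = not.
audit = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "offer":    [1,       1,       1,       0,       0,     1],
})
print(selection_rates(audit, "age_band", "offer"))
# The 35-54 and 55+ bands show a 0.5 ratio, well below 0.8, worth investigating.
```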

Mike Gerholdt: It’s like we were meant to podcast together, Kathy, because I was literally turning to you to ask exactly that. So we’re collecting this data and we’re trying to do right, but now I want to turn some of our AI onto it. Marketing’s asking me, “Well, help me understand if targeting these promotions on, to Rob’s point, the day of their birthday versus just any day in that month is helping me.” But one thing maybe I didn’t pick up on was that I’m collecting year. So when we’re building our AI, what are some things we need to think about if maybe we’re walking in and this data is already being collected?

Kathy Baxter: I think there are so many customers or companies that really don’t know, again going back to Rob’s point, all of the data that they have going back years, decades. And if they just turn the bucket upside down, dump all of this data into a model, and tell the AI, “Hey, find something for me,” the AI will find something for you. It could be spurious correlations; it could be very biased patterns in your data. You don’t know what you might get from it. So be exceedingly mindful about which variables you are using.

And you mentioned marketing campaigns. We actually recommend that you don’t make marketing or targeted advertising decisions based simply on demographics. That’s the tendency: you market makeup towards women and dresses towards women. But we have increasing numbers of people who identify as male who really like makeup, and we also have people who don’t identify on the gender binary. So how do you target your ads at the people who would really like to see those makeup ads, and not waste your dollars on the individuals who have no interest in them? Just because someone identifies as female, it’s not a guarantee that they’re going to want makeup.

So build trust with your customers instead of relying on third-party cookies, which at some point are going to disappear; you’re not going to be able to depend on those massive data brokers that have been hoovering up data and making predictions about you. You’re going to have to build trust with your customers to give you that first-party data: What are you clicking on? What are you searching on? But also zero-party data: can you earn enough trust to give a form to your customers and have them indicate, “I like makeup. Here are my skincare woes. These are the types of things that I’m looking for in my skincare regime”? If you can build that trust, demonstrate value, and give people control over their data, you’re going to be able to make much more accurate targeting decisions. So not only does it give you value, but you’re going to give value back to your customers.

Mike Gerholdt: So far, you’ve done a really good job of giving me examples that aren’t inside of a heavily regulated industry, because I do feel like the awareness in regulated industries is already there. But not every executive that an admin works for or takes requirements from is in a regulated industry. So how do we help them understand red flags, especially around building ethical AI, Kathy?

Kathy Baxter: Yeah. Unfortunately, I have heard more than once from executives or admins at other companies, “I’m not in a regulated industry, and I’m not creating high-risk AI that would fall under the definitions the proposed EU AI Act says will be regulated or carry additional requirements. So why should I care?” No one wants to make biased decisions, but if an exec isn’t convinced that they need to invest in responsible AI, especially in these difficult financial times where cutting costs and making as much money as possible become really, really critical, it’s going to be difficult for an admin to convince them that they should invest in this.

So what I would say to an admin who, after listening to this podcast, really wants to convince their executives that this is something they should be investing in: good, responsible AI is good business. And just to give our listeners a couple of stats, I would recommend they take a look at a survey that was published at the beginning of this year, done by DataRobot in collaboration with the World Economic Forum.

Of the 350 companies they surveyed in the US and UK, talking to executives and IT decision makers, they found that 36% had suffered losses due to AI bias in one or several algorithms. And of those, 62% had lost revenue, 61% had lost customers, 43% had lost employees, and 35% had incurred legal fees from litigation. So even if there isn’t a particular regulation that applies to your AI, if you aren’t thinking about data ethics and how to create AI responsibly, there is a good chance that you could lose revenue, customers, and employees.

Mike Gerholdt: Well, my ears perked up. And I feel bad, because sometimes the podcast train runs over marketers a lot, because we love to point fingers at the cookies. Which, by the way, kudos to whoever came up with the idea of calling it a cookie. But Rob, when we were talking about this podcast, you mentioned something that made me do an about-face, and it’s around some of our other areas, like sales and service. Can you help me understand data ethics use cases that maybe haven’t piqued our interest? Because we’re thinking, well, marketers are collecting all this stuff with their forms and AI, but you brought up a few use cases around sales and service that I hadn’t thought of.

Rob Katz: Well, thanks. So let’s talk about field service technicians as an example. Someone’s using Salesforce Field Service to manage their fleet of folks who are out there in the flesh doing field service appointments for customers, maybe servicing cable boxes, HVAC units, handling updates on security systems, installing new dishwashers, you name it. Well, what do you know about how those folks use data in order to do their jobs? One thing they might have in the system is their next appointment’s gate code or key code for the building, or something like that, so they can get in and do the work.

Well, how about we set a deletion period so that those gate codes or key codes, especially for one-time appointments like getting a new dishwasher installed, are automatically deleted? My gate code or lockbox code doesn’t need to be stored in a Salesforce system for anyone to see forever; that is potentially a breach of my private information, and it’s a risk. On the other hand, you may want to keep that information handy if you’re doing a recurring service appointment, like landscaping.

So it’s about how you think about data: whether it should be retained or deleted, and who should have access to it. And these are all things that you can do inside of the system using features like field-level security and audits. You can handle time-to-live and automatic deletion as well.
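
A minimal sketch of the retention policy Rob outlines, in plain Python with hypothetical field names. On the platform itself you would reach for the features he mentions, such as field-level security plus scheduled automation, rather than a standalone function like this.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)  # illustrative window for one-time visits

def scrub_access_codes(appointments: list[dict]) -> list[dict]:
    """Null out gate/key codes on completed one-time appointments past the
    retention window; recurring appointments (e.g., landscaping) keep theirs."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    for appt in appointments:
        done = appt.get("completed_at")
        if done and not appt.get("recurring") and done < cutoff:
            appt["gate_code"] = None  # scrub the code, keep the record
    return appointments

# Hypothetical record: a dishwasher install completed 30 days ago.
visit = {"completed_at": datetime.now(timezone.utc) - timedelta(days=30),
         "recurring": False, "gate_code": "4821"}
assert scrub_access_codes([visit])[0]["gate_code"] is None
```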

Mike Gerholdt: Wow. You’re right, I hadn’t actually thought of that. That does make me think about all those companies now that… I mean, my garage door has a code to it, and there have been a few times I’ve had service people come by: “Well, just give me your code.” Yeah, no, not going to do that, because I mostly forget how to change the code, and then you forget the code. But also, I don’t need it lingering around in your system.

Rob Katz: Exactly.

Mike Gerholdt: Exactly. Good point. Rob and Kathy, as we wrap up, we’ve literally scratched the surface on this. I could talk to both of you for a long time, and I find all of this so compelling to understand, because it’s something that admins work with every single day, and it also expands the level of responsibility that we have. It’s one more question to ask, but it’s one more question in the right direction. And Kathy, as you pointed out, it’s about keeping customers and keeping employees. The one thing that I would ask of each of you, and I always bring this up with the people I coach for presentations: so you’re done listening to this podcast. Kathy, what is the next thing you would love for a Salesforce admin to do?

Kathy Baxter: I think there are a couple of really easy things they can do. We have a white paper on our ethical AI maturity model. It walks through how we actually stood up our ethical AI practice at Salesforce, and I validated it with my peers at a number of other companies who have similar ethical AI, responsible AI, responsible innovation (insert your favorite name here) practices. And they confirmed: yes, this looks similar to how we stood up our practice. So that’s something that a Salesforce admin, or others at the company, could take a look at and ask, how might we apply this to our company? We also have a number of Trailhead modules and Trailmixes that we can put in the episode description for them to check out.

But if they are trying to convince an executive that this is something the company needs to do, I would also encourage them to look at EqualAI’s responsible governance of AI badge program, which is specifically targeted at executives. It’s not how to build AI; it’s not how to do bias assessments and mitigation. This is for executives: what should you be on the lookout for? Is your company building or implementing AI responsibly? In full transparency, I’m a board member for EqualAI and one of the instructors for the program, so I’m biased in that recommendation. But nevertheless, I wouldn’t be involved if I didn’t think this was a good use of executives’ time to help stand up responsible AI practices at their company.

Mike Gerholdt: Yeah. Rob, same question. So I’m an admin, I just finished this. I’m excited, I want to go dive in. What would be the first thing you would want me to do tomorrow?

Rob Katz: Ask yourself not just whether you can, but whether you should, when it comes to that field, that new object, or how you’re configuring those requirements you got. And if you want to learn a little more, practically and tactically, from an ethical personalization and trusted marketing perspective, we will link to a Trailmix in the show notes that can give you some very specific dos and don’ts and suggestions. But for anyone, whether you’re working on marketing or not: just because you can doesn’t mean you should. And in a world of data surplus, less is now more. That is a new way to think about it that I hope is helpful.

Mike Gerholdt: Very helpful and I appreciate that. Kathy, it was great to have you back again as a guest. I hope we make this a little more frequent than every few years.

Kathy Baxter: Yes, I would like that very much.

Mike Gerholdt: And Rob, it was great to have you on. The virtual podcast door is always open, if you’d like to come back and help admins become better at data ethics, we would appreciate that.

Rob Katz: It was great to be here. And as a preview, we have a new feature coming out in the February release and I would love to talk about it with you on the podcast as we’re getting closer.

Mike Gerholdt: I love that. When a guest puts you on the spot like that, I’ve got to book it. So I guess we’re going to book you now. Look for that episode coming out in February. It’s going to be awesome. I don’t know what it’s going to be called, but Rob will be the guest.

Rob Katz: Awesome. Thanks Mike.

Mike Gerholdt: Thank you both.
So it was great having Kathy back. See, I told you it was a fun discussion. And I bet you weren’t thinking along the same lines about sales and service having data ethics in the same way the marketing examples do, because that example caught me off guard too. It totally makes sense, though, and I’m glad they were on to help us be better humans. There are a ton of resources in the show notes. So when you’re back, and you’re not driving or walking your dog or running, I know that’s what a lot of you do, click through when you’re in front of your computer or on your phone and go through some of those resources. Boy, that Trailmix is super helpful.

And of course we have a ton of resources for everything Salesforce admin. If you just go to admin.salesforce.com, you can find them there. I linked all the resources that Rob and Kathy talked about in the show notes, and there’s a full transcript there as well. You can stay up to date with us on social. We are @SalesforceAdmns on Twitter. My co-host Gillian Bruce is on Twitter as well; you can give her a follow @GillianKBruce. And while you’re over there, you can send me a tweet and give me a follow, @MikeGerholdt. So with that, stay safe, stay awesome, and stay tuned for the next episode. We’ll see you in the cloud.

