
Unlocking Diversity in Tech: A Deep Dive with Kat Holmes & Josh Birk

Today on the Salesforce Admins Podcast, Admin Evangelist Josh Birk sits down with Kat Holmes, Chief Design Officer and EVP at Salesforce.

Join us as we chat about diversity, accessibility, and her book, Mismatch: How Inclusion Shapes Design.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Kat Holmes.

What is a mismatch?

I brought Josh on the podcast to host this special deep dive episode of the Salesforce Admins Podcast, and we couldn’t think of a better guest than Kat Holmes. At Salesforce, she’s in charge of User Experience. But she’s also the author of the amazing book, Mismatch: How Inclusion Shapes Design.

The title of the book comes from the World Health Organization. In 2011, they redefined disability as “a mismatched interaction between the features of a person’s body and the features of the environment in which they live.” As Kat explains, thinking of design as a way to solve mismatches leads to innovative solutions you wouldn’t otherwise find.

The problem with designing for the “average user”

For decades, designers have tried to make things for the “average user.” Kat takes us through the fascinating history of the bell curve, which goes back to a 19th-century Belgian astronomer who set out to apply the principles of statistics and probability to sociology. The problem, as she points out, is all of the different types of users that this approach leaves out.

Kat’s favorite example is the keyboard. It’s an interface that’s incredibly efficient and enables pretty much everything we do with computers. But it was actually invented to help a blind Italian countess write letters without the need to dictate everything. And there are tons of other examples, like bendy straws and curb cuts. These designs solved one person’s specific mismatch problem but ended up benefiting all sorts of other people, too.

Designing with inclusion and the potential of AI

When you’re building something, Kat recommends recognizing the abilities on your team and thinking about who might be excluded. As she puts it, “What abilities are missing that are important to the design we’re making?” Then, find a way to include someone with those different abilities in your process.

We also get into AI and what the future holds. As it becomes easier and easier for admins to build things, it’s more important than ever to factor things like accessibility and inclusion into the equation. And there’s a lot of potential to adapt the interface to the user to give each person a different experience.

There’s so much more in this deep dive episode, so be sure to take a listen. And be sure to subscribe so you don’t miss out.

Full show transcript

Mike Gerholdt:
This week on the Salesforce Admins podcast, well, it’s our Deep Dive episode. I said we’re launching something new for April, and with a deep dive comes a guest host. Hey Josh, how are you?

Josh:
Hi, Mike. I’m doing pretty good. How are you?

Mike Gerholdt:
I’m excited because I listened in on this episode and I can’t wait to see if this is the pilot episode of where the Deep Dive series is going. Buckle up, folks because it’s going to be awesome.

Josh:
Right? I honestly think maybe we should just, I don’t know if we’re going to do better than this. This was a… And I hate saying things like when people are like, “Oh, who was your favorite guest?” I’m like, “I don’t like picking the favorite of my children.” Kat’s going to get into the top five right away. I never thought I would talk about diversity when it comes to everything from the iPhone to bendy straws. Just almost [inaudible 00:00:58].

Mike Gerholdt:
Yeah, it’s fascinating. Let’s get into the episode with Kat.

Josh:
Today on the Salesforce Admin podcast, we are going to talk to Kat Holmes about diversity, inclusivity, and AI. Kat, welcome to the show.

Kat Holmes:
Thanks for having me.

Josh:
So let’s talk about your early years. In one of your talks, you speak about growing up in Oakland, and how that led you to thinking about and eventually promoting inclusion. Can you expand on that a little bit? What about Oakland fermented this for you?

Kat Holmes:
Yeah, in the way back machine. So growing up in a city that is incredibly diverse, all the way through my schooling, all of my community engagements, we really learned a lot about many different ways that people live. But the thing that was really interesting for me, all the way through college, so I went to college in the Bay Area as well. I never learned about the fundamentals of accessibility as part of my training as an engineer. I also studied pre-med. We just didn’t learn about ways that people experience disability in the world. So it’s kind of ironic, you’d come up in this environment where you have all these kind of movements that had happened, right? Free Speech Movement, there was the Black Panthers, and at the same time, we never learned about the Disability Rights Movement, which also started in Berkeley in the 19… I’m going to say ’50s and ’60s by students, Ed Roberts, students that really, they created some of the first accessible sidewalks in the United States-

Josh:
Oh, wow.

Kat Holmes:
… right here in Berkeley. And I just never knew it even though I was going to school right there on campus.

Josh:
Gotcha. Now, you’ve talked about how when you were 16 you encountered racism, and I believe even neo-Nazism for the first time. And that left you, and I believe I’m quoting you here, “Activated and angry.” And I have to say, as somebody who has used the written word to try to exact revenge on his enemies, I can appreciate it. But when you say activated and angry, what actions were you taking? What led you closer to activism?

Kat Holmes:
The encounter I had, this was when I was a junior in high school. It was right off of the school campus, and I was physically and verbally assaulted by a group of neo-Nazis. I’m going for lunch. And it was a pretty shocking… I had also just moved from Oakland to a suburb, and this is where this encounter happened. So it was really shocking to my system. But the thing that really got to me is that I told the administrators of the school, the principal, and their response was that there was nothing they could do about it because it was off of school grounds, so therefore it was perfectly legal, and that’s the part that angered me the most.

Because that sense of responsibility, here’s the adults in the environment that are, I thought, there to provide my safety. And what I was really hearing is I only do that within a certain boundary. And the way I got activated was writing. New student in the school, and took the time to write an intense, feeling-filled, sixteen-year-old article that was published in the newspaper about my experience. And so when I think about, for me, and it means many different things for different folks, but for me it was about saying what was true and saying what my experience was and what was true about that. And so finding ways to activate people through our experiences, really, to share those experiences. And that’s what I really have taken through my entire life.

Josh:
Did it feel like you were taking the power back?

Kat Holmes:
I felt like I could make myself visible is the way to say it.

Josh:
Got it.

Kat Holmes:
In the moment where I felt very much like people were trying to keep me invisible.

Josh:
Got it. Moving on a few years, what exactly did you study at UC Berkeley?

Kat Holmes:
I studied orthopedic biomechanics and material science engineering. So my goal was to design prosthetic limbs for people, and I tried to find a way to eke that out of a combination of majors.

Josh:
I got to ask, and I am going to throw in an anecdote here, because my father-in-law actually is blind and has no hands. So prosthetic limbs is something we… I think we have a few in the house here, actually. Why prosthetic limbs? Where were you going with that?

Kat Holmes:
I had been really interested in materials and mechanics for a lot of my young adult life. One of the things that struck me was prosthetics. We often try to replicate a human [inaudible 00:06:13] to try to make some material look like skin or some material shaped like bone or nail. And I thought there were so many other kinds of materials that were more expressive or unique that actually when you pair them up with somebody, you ask them what their preference is, they may choose a really amazing leather over a polymer. So quite honestly, it was just curiosity, following curiosity, connecting with people that I knew in my life who used prosthetics, but also just there had to be a better way to do this.

Josh:
Gotcha. Gotcha. Now at Microsoft, you were, I believe, if I’m correct, you were involved in designing their first-ever smartphone. Which I have to say, I think might’ve been my first-ever smartphone, might’ve been exactly that smartphone, and I remember it pretty clearly because it had this wonderful keyboard that was this very nice, tactile keyboard. And I know that a lot of people out there probably think this sounds weird because we live in… This is before the age of the iPhone where touch screen basically started ruling the world. What was that like at that time? Because smartphones were really just basically being invented. And so what kind of challenges were you facing when it came to designing a product for something that didn’t exist before?

Kat Holmes:
Just to clarify, I did not work on Windows Mobile, and Windows Mobile was a really relatively successful platform for Microsoft. I came in right about the time that the iPhone came out, so 2007. And it was this existential moment for Microsoft because like you said, there’s this physical world, BlackBerrys, and Nokia phones, and some of those great tactile keyboards that you’re talking about. And then the emergence of the iPhone was the pinch and zoom on a map.

Being able to still take a phone call, even though you’re taking photos, amazing. And the first phone that I worked on for Microsoft actually ended up being a spectacular failure, it was called Kin. I don’t know if anybody knows it, but we had a blast building this phone, and it was about tactility. It was really a phone for teenagers, and it’s because Facebook was one of the first apps on the iPhone. It was just emerging as well. And so we thought, wouldn’t it be cool if you could create a full on social media app just for teenagers all built into the phone?

So learning a lot about that time, what I’ll say is the top lesson for me is we poured money, our hearts and souls. We developed beautiful hardware with a company [inaudible 00:08:57] Sharp. But we missed what the success of the iPhone was going to be. And that was the developer ecosystem, the App Store. So you can build the best phone in the world, but the game had changed and we hadn’t realized it. The game was all about activating a tremendous ecosystem of applications and developers that could build on this platform. And so we were still thinking of it as a device-centered world when really it was a platform game.

Josh:
Yeah. Well, and to your credit, I think Apple itself, because for the first year, I want to say of the iPhone, they’re just like, “Oh, no, if you want to do anything custom to this, you have to do it through a website. We’re not going to let you past our Ivory Palace into the App Store.” And then somebody course corrected and here we are now in the middle of history.

Kat Holmes:
Well, that’s where I did then transition into Windows Phone. And so I did help build that product and that platform. And that was a really fun experience, a really interesting experience. I think we pushed the boundaries and the design of user interfaces for mobile, and that did change the game for a lot of companies and how they thought about mobile design.

Josh:
Nice. Can you give me a couple of specifics there? What were some corners that you turned that you feel we might be still seeing today?

Kat Holmes:
If you remember the iPhone in 2007 when it came out, I think we used the term lickable for the advertisements, it looked like pieces of candy. They were shiny, they looked like they had [inaudible 00:10:36] in the phone. And it’s that kind of thing we use the term affordances for in design. The shape of the button says, “Push here,” because it’s so clearly indicating that it wants to be touched.

One of the first things that we did with Windows Phone was flat UI is what we called it. And we took all of those affordances out, but it’s because we wanted the content itself to come through, people’s photos. An application’s top metrics, maybe it’s biometrics from your health app. We want that content to come through on the icon or, we now think of them as widgets, but at the time it was very revolutionary to say, “What if the icon was the photo? What if the icon was the biometric data?” And so on a home screen for a user, they’d look at this unique, only looks like their phone, doesn’t look like anybody else’s, flat window into all of their content. And that was pretty revolutionary at the time.

Josh:
To actually surface that detail right up to the phone so that you can just glance at it and be like, “Oh, it’s Tuesday.”

Kat Holmes:
It’s right there. And we still see that. I think the iPhone and its widgets in particular, but many developers have tried to bring what’s the most important thing a user wants to know, both so they can glance and go, but also to maybe entice them to come into the app.

Josh:
Right. One of my favorite T-shirts is from Apple’s WWDC where they announced the App Store, and the icons on it are literally the location and date and time of the WWDC announcement.

Kat Holmes:
Oh, that’s cool.

Josh:
Yeah, they lifted that for sure.

Kat Holmes:
I want a cool T-shirt like that. I have so many cool T-shirts from my 25, 30 years in tech. That’s maybe the best part of working in tech is you get cool T-shirts.

Josh:
You get cool T-shirts. I have found that every now and then I have to double check myself and make sure I don’t have more than three Salesforce logos at a time. And then I just feel like that guy at that concert. So, yeah. Speaking of Salesforce, how would you describe your current job?

Kat Holmes:
I am the Chief Design Officer and Executive VP for our user experience team. So I lead product experience, which means anything that at the end of the day ends up in front of an end user, whether it’s through our amazing admin community, architects, developers, we’re thinking about the platform that you use to build that, but also the end experience that people are going to interact with.

Josh:
Got it. Now in your book Mismatch: How Inclusion Shapes Design, you talk about design leaning toward the average person. How are you defining a mismatch here, and what are some examples of design that aren’t intentionally being inclusive because they’re designing for the average person?

Kat Holmes:
Yeah, the first thing I’ll say is that in all my training as an engineer, in addition to not learning about accessibility, I also was taught the myth of there being an average person. So I’ll get to that in a moment. When I think of… So the term mismatch, I borrowed from the World Health Organization’s definition of disability, and they dramatically redefined it in 2010, they defined it as a mismatched interaction between the features of a person’s body and the features of the environment in which they live. And I loved that as an engineer, as a designer, because it meant that it was my responsibility, in the choices that I make for the product, to make sure that I was considering different types of abilities that somebody might have when they come to use that product. The responsibility sits with me, as a product maker.

Josh:
Got it.

Kat Holmes:
And so some examples of mismatches might be stairs at the front of a library. It’s a public library, but somebody who uses a wheelchair, who has limited mobility, would not be able to access that front entry. So another great example is the keyboard. This is a mismatch for anybody who has limited use of their hands, or doesn’t have hands, completely unworkable for interacting with a computer.

And what I love about these examples of mismatches, it means that we can identify who might be experiencing the greatest mismatch when they come to interact with our program or application. We need to make sure that it works for voice as well as keyboards, or it needs to be for different types of audio in addition to tactility. But what I love about this also is that it’s not about trying to create one solution for all people. You often hear the term universal design. That really means creating one environment that works for everybody.

What I love about the keyboard is it was actually invented by a blind countess from Italy, and an inventor named Pellegrino Turri. And the two of them worked together to create a device that she could use to type letters on her own, rather than dictating to somebody else who’d write it for her. So they invented this device originally for someone who is blind, but it went on to benefit so many more people. We’ve used this device multiple times today, all of us. And in that, they’ve created an inclusive design. It started first with somebody who’s highly excluded from some sort of activity. And that solution that they created benefited many more people.

And so when I think about coming back to your point on the average person, the misinformation that I was certainly taught in engineering is that there’s a bell curve of human abilities, or any kind of human dimension.

And if you think about that bell curve, that the middle of that bell curve is the average human. This is a concept that was created by Adolphe Quetelet, he was a Belgian astronomer in the mid-1800s. And he was actually super jealous, is the way I read it. Super jealous of Isaac Newton, right? Isaac Newton had created these laws governing, deciphering what was happening in the heavens like why does the moon move this way? And Quetelet, who was also an astronomer, he had a pretty curious bombing of his observatory and could not practice astronomy during the Belgian Revolution.

Josh:
Oh my gosh.

Kat Holmes:
So he turned all of his ambitions to be as famous as Newton towards human society, and he started measuring human bodily dimensions. He created the body mass index that we still use today. It actually used to be called the Quetelet Index, to determine is a person healthy based on weight and height, which is a pretty crude measurement. He also developed the foundation of IQ tests, and he also developed really dangerous frameworks that underlie eugenics.

And the challenge with what Quetelet did is he gathered data for as many people as he could, but in the mid-1800s, really hard to believe that he had a true global sample of human [inaudible 00:18:36]. He had a nice Belgian, maybe a couple of countries over, sample. So he took all of his data and he was astonished to find his data fit to a curve, a normal curve, which in mathematics we know as the curve where there’s a point where the tangent reaches a perpendicular. So he was astonished that it fit this normal curve. And he took the middle of that line and he said, “Well, that curve right in the middle must be the perfect person.” [inaudible 00:19:07] perfect person.

Josh:
Oh, God.

Kat Holmes:
And that became the foundation for saying any deviation from the center of that curve was some kind of abnormality or error. So taking mathematics and applying it to humans can be very powerful in some ways and can be very dangerous in others. But it’s why we refer to people as normal; it actually comes from a mathematical background. And what I was taught as an engineer is if you design something for the average, you’re going to hit 80% of the population. And then there’s edge cases. I like to talk about edge cases.

There’s 20%. That’s an edge case. All you have to do is really look around at humanity, or do some research of your own, to know that that is just not true. That’s not actually how the world is. But it’s so deeply entrenched. It happens maybe at large sets of data, like large public health issues, and you find anomalies, and that’s good indicators. But when it comes down to one person’s experience sitting in front of whatever technology you’re configuring or building or designing, it actually just isn’t true. So that’s where inclusive design becomes a much more interesting paradigm.

Josh:
It’s fascinating to me that when we say the word average, and we apply that to a person, that we are probably describing a 20- to 30-something-year-old white male in Belgium.

Kat Holmes:
Yes.

Josh:
It’s slightly terrifying, too, to be kind of honest. And speaking of this, I honestly just want to bring this up because when I was reading about it, it shocked me that this even exists. You talk about Robert Moses, who apparently, and I’m actually struggling to say this to be honest, utilized a racist lens in some of his urban planning, which, I’m like, that’s supervillain-level stuff right there. What’s an example of this? I think a lot of what we’re talking about is sort of designed through intention and it’s good intention. We don’t think about the average person being a 30-year-old white male in Belgium, so people don’t intend to exclude people. But here we have an example of somebody who did. What’s the story there?

Kat Holmes:
It’s a really fascinating study, and you always have to remember the context of the time and place. But Robert Moses was the… the term they gave him was the master builder of New York City. He was a city planner, but he had wide-ranging control and power over the design of New York City. And the practices that he employed, and some of these are documented in a book called The Power Broker, is thinking about the types of transit that people had access to or didn’t have access to. And so he’d say, “Hey, the tunnels leading out of Manhattan, heading out to the beaches,” Long Beach, let’s say. So the height of an average public bus, let’s say is X, and the height of an average car is Y. So he would design the tunnels coming out of the city to be low enough that a public bus couldn’t pass underneath it.

In effect, it created limited access to those public spaces outside of the city. But the inherent, nefarious part is, people who predominantly relied on public transportation, or exclusively relied on public transportation, tended to be Black or African-American families or families of low income. And so it can happen intentionally, and it can happen unintentionally when you think about, oh, I have a car, so I’m going to just make this tunnel to fit my car. And not really think about somebody who maybe doesn’t and somebody who maybe uses other modes of transportation, and that you’re in fact creating this physical barrier to participating in public spaces outside of the city.

So that’s a great example of sometimes it is nefarious, and sometimes it is accidental or unintentional. I think as people who are problem solvers, we come to this discipline or our jobs because we like solving interesting problems, or we think about how we can solve these and make the world a better place. And it’s that kind of intentionality that fascinates me because when we bring attention to it, you can’t unlearn it. [inaudible 00:23:48] oh, I didn’t realize I created something that made it uncomfortable for somebody else. Just [inaudible 00:23:53]. How can I be a better problem solver?

Josh:
And to flip that script completely to the other end, give me a little bit of backstory. Once again, it was fascinating to learn, why do we have bendy straws?

Kat Holmes:
The story behind the bendy straw is super fascinating. The first design actually came from a man who was watching his four-year-old niece try to drink a milkshake at a counter. And this is the old soda fountain days, and they had straight paper straws in those days, and she kept tipping the glass and spilling the milkshake while she was trying to drink out of this straight straw. So he went home and he put a nail inside of one of these paper straws and he wrapped a wire around the outside and created a flexible joint in the straw and then ended up patenting it. And that’s how we have bendy straws.

Josh:
That’s awesome. That is awesome. Okay, so let’s talk specifics about if I am a designer, how can I identify and address these kind of potential exclusions while I’m working?

Kat Holmes:
The best way to identify this is really first looking at our own abilities, like what abilities… Often the products that we make, there’s teams that are working together. So looking across that group and saying what abilities are represented? And it might be, oh, okay, we all have 20/20 vision, we all are right-handed. We all speak a particular language. These are the abilities that we represent. Now, what abilities are missing that would be really important to the design that we’re creating?

And that might be, okay, somebody who has low vision, or somebody who speaks a different language. And it doesn’t mean you have to solve every scenario, every potential language, every potential ability. But what are you making? And who’s going to need to use it? Are you designing something that’s going to be in healthcare? Do you potentially need to think about somebody who is not well? Somebody who maybe has a different cognitive state, maybe they’re in an emergency situation? If that’s the case, then how can we think about including people in your team who have either experienced that or are experiencing that difference in ability and bring them in as experts to advise and learn from, or even co-design that product with you. So that’s really the starting point is recognizing exclusion and then asking yourself who’s missing? Really seeking out their expertise.

Josh:
And what’s the importance in collaborating directly with people who either have experience or are possibly experts in different forms of disabilities?

Kat Holmes:
There’s a couple of lenses I think are really important. One is, we often do research in design and we think of it more as user or usability research, or we’re putting something in front of a person and asking, how do you think this works? Or does this work for you? We’re treating people a little bit more like a subject, a research subject, which is different than starting before we’ve designed anything, and going to someone who has a different set of abilities than we do, and asking them, how would you solve this problem? Or have you already solved this problem in some way, in your home or in your work? And learning from the workarounds that people already have, or the considerations before you even create any solution is incredibly insightful to the process. And so it gives us a way of A, thinking differently about expertise. I’m not the expert as the designer. The expert is the person who’s experienced exclusion, but still somehow is making a living using the product that I created.

Josh:
Got it.

Kat Holmes:
And then I think the other part’s just, quite candidly, ego, just to check my ego as a designer. Collaboration has a way of opening up the creative process. And I think that keeping our egos in check is a really important factor, and bringing other people to the process and letting them be the experts to lead the way is a really great way to do that.

Josh:
So to paraphrase, don’t design a solution and then take it to somebody and be like, “How bad is this for you?” But bring them into the process so that by the time you get to the point where we’re trying the solution, you’ve already brought their feedback in.

Kat Holmes:
Well said. Yeah.

Josh:
Thanks. Now let’s move that kind of conversation to AI because that’s how the world’s revolving these days. So when we talk about AI in collaboration, how do you think people should think about AI itself?

Kat Holmes:
That’s a ginormous question. There’s two lenses I’ll put on for this conversation. I think of AI as a tool that can help us think about the things that we’re not recognizing ourselves. What are other considerations I’m not considering? How do I think more broadly than my own experience? I think AI is a great tool to help us expand the starting points. I do this often just with our own tools, with Einstein or some of the other tools in the world that are AI-related. But it’s just, “Hey, I’m thinking about getting started on this. Where are different considerations that I might have?” So it could be a way of expanding beyond our own biases.

I think the other lens is thinking of AI as a user of what we’re designing. So there’s a whole bunch of behaviors, AI or different types of machine learning, different types of generative and predictive, even machine learning, are going to bring to our applications or businesses that we’re building. So if we think of AI as a user that itself is trying to solve some set of problems. It’s going to encounter certain kinds of errors, it’s going to need to make certain kind of adjustments on the fly. The more we can understand what kind of goals and what kind of barriers AI is going to encounter when they work with the data that we are providing, or working with the applications we’re providing, the more we’re going to be able to design this positive cycle of access and also safe parameters around what AI can access, what it can and can’t do. And so it might be a nuance, but thinking about AI as a tool, versus thinking about AI as a user, I think gives us really two interesting places to design from.

Josh:
Gotcha. Because I think one of the things, it’s very hard, and this is one of the reasons in my own AI talks, I always tell people, just go try it because it’s really hard to describe why it’s a new style of interface, simply because it’s conversational and it’s interactive. What sort of design challenges come up with something that’s having more of a conversation with you than just pressing a submit button?

Kat Holmes:
The interesting thing about AI is that we’re kind of in love with this conversational moment of AI, ChatGPT welcomed us to a really broad and accessible kind of AI through conversation. But most of machine learning and AI applications that I’ve worked with, and I’ve worked with different types of interactions since about 2010, a lot of them aren’t conversational.

Josh:
Got it.

Kat Holmes:
And even in our devices, our smartphones, we may have different types of machine learning or AI that is vision-based, object recognition, or audio-based or tactile. So there’s many different kinds of interaction models that come along with processing information through AI. And the unique design challenges, I think one of the biggest ones comes back to the mismatches we were talking about earlier.

AI could give us a tool to be much more adaptive, to meet people where they are, whether that is, we were talking a lot about physical abilities earlier, whether the person can see or hear, but what about cognitive differences? And that’s a whole frontier that I think is fascinating. There’s so many different ways that people learn or process information or want information presented to them. Can AI help us adapt a design or an interface or an application to meet people where they are? If they’re a novice versus an expert, wouldn’t it be interesting to think about the differences in experience that AI could create to meet people where they are? So that’s one design challenge.

And then another prominent one, that many leaders in this field are thinking about, is the biases in AI itself. And there’s a lot more, I think, visibility and awareness of this now than there was, say, five years ago, certainly 10 years ago. But the training sets of data, or when I go into Midjourney and I say, “Create an image of a doctor treating patients.” [inaudible 00:33:58]. What’s the doctor look like and what does the patient look like? And has this algorithm been trained predominantly on sets of data that favor certain races or certain experiences, genders. So that kind of bias is a very small example, but a lot of companies have learned early lessons in this. I think Tay at Microsoft being trained overnight, within hours by the Twitter community, formerly known as the Twitter community. And it just went sideways within hours. And so that risk of what we’re teaching and how that shapes the design at the end of the day is a huge challenge as well.

Josh:
I kind of feel like the world should actually kind of thank Tay for being such a horrible, awful example of how things can go wrong.

Kat Holmes:
That’s true. It happened in a relatively safe sandbox.

Josh:
Right. No doubt here, it’s basically speaking Hitler. We all can agree, let’s not do that.

Kat Holmes:
[inaudible 00:35:11]. Thank you, Tay.

Josh:
Thank you, Tay. And I really appreciate it because I’ve talked to women of color who are kind of in a generation where they grew up with the concept of what an engineer looks like, and it’s that crew cut guy with glasses and a shirt and a pocket protector in an IBM [inaudible 00:35:31]. And they didn’t think they would be an engineer because they never saw anybody who looked like them be an engineer. And I feel like we just have that history that AI has. I don’t know how AI is even going to try to catch up to it.

Kat Holmes:
The opportunity is there. The opportunity to create a different reflection of reality is there. And it really comes down to the choices that we make in the design of our AI. And who is designing that AI at the end of the day. Can we really broaden… One of the things I love is I think the skillset to become an AI designer will dramatically change because the things that I learned in engineering school, I learned FORTRAN, so that’s not super helpful anymore. But if we don’t need to learn some of these technologies that are going to turn over anyways, what is the important thing to learn about the design of AI and then what skills are needed? And that could open up the field dramatically to a wider range of people.

Josh:
Yeah. And it’s one of the things I’m really excited about with Salesforce because the idea that an admin could use their preexisting skills as a flow builder to then also be an AI builder is very exciting to me. Do you have any tips for some… I think our community’s really in the shallow end of this. They’re slowly getting into the waters of it. When it comes to thinking of solutions for their users, do you have any suggestions or tips for lining up what we can do with AI with a user’s skills or job or role?

Kat Holmes:
Being in the shallow end is I think where everybody is. There’s maybe a very small population that really, really is deep in these waters. Most of the population hasn’t even put their toe in yet. So if you’re in the shallow end, welcome.

Josh:
You’re in good company.

Kat Holmes:
… [inaudible 00:37:29]. And please keep learning and keep walking a little bit further in because this is the first wave of us who, coming into those shallow waters, are going to say, “This is how we apply it to life.” This is where it makes a difference. And I think our admin community understands the work that people are trying to get done on a daily basis. They understand the challenges people encounter. And when we designed Prompt Builder, for example, we were really thinking about the community that understands what an end user is trying to do. We’re thinking about the admin community who can say, “These are the most important mundane tasks that need to be repeated and automated or supported by AI.”

And so I think the most important advice is lean into that understanding who’s using your products or who’s using Salesforce at the end of the day. And help us understand what more will serve the people, and the use cases that they have, in better ways. And going back to inclusive design, think about folks beyond, think about the edge cases or think about the folks who maybe are experiencing challenges without using Salesforce today, and how can we really make this a turning point using AI tools to make sure that we’re doing a better job going forward.

Josh:
Yeah. Okay. I’m going to throw a hypothetical to you and we’re going to pretend you have infinite time and money. Where do you think… One of the things I think is very interesting is that the hardware curve, I feel, is still advancing. We’re just now getting things like AR goggles that are associated with AI. What are some edge cases where you think AI could really help with inclusivity? For instance, I was having a conversation with a friend and I was like, “Well, I have a nephew who is autistic, and he might benefit from glasses that could actually identify social cues that maybe his brain isn’t wired for.” Where do you think we might be going with this?

Kat Holmes:
There’s this interesting debate, I think, between infinite time and resources to make trillions on infinite computing power, versus reaching as many people as possible with something that’s beneficial.

Josh:
Got it.

Kat Holmes:
I would lean towards reaching as many people as possible with something beneficial. We may be in a place with what we have today to transform a lot of lives if we can really connect the potential of the technology to what people are trying to achieve. So with infinite time and money, I think there’s tremendous diversity in human… This is such an obvious statement, but it’s one that we haven’t really taken to heart as technologists. There’s infinite diversity in human lives. And understanding unique medical needs, diagnosing those, giving people the power to diagnose them for themselves, or to at least understand some of what’s happening in their lives.

I think about medical, I think about cognitive learning styles, education around the world, just thinking about how I learned versus I have an 11-year-old, it’s my youngest kid who’s learning on YouTube, so fast, guitar virtuoso overnight. And I’m like, “Oh, how’d you do that?” Well, they’ve been watching YouTube videos and [inaudible 00:41:26]. So the learning, the medical applications, and then I think, one of the things I’m really interested in is how language models are going to become local to devices. How are we going to get really personal, device-driven AI that can be a close companion, or just the applications of being able to embed that in different environments? And that’s where I think about climate science. And could we combine sensor technology with local AI device technology and think about climate science differently on a global pattern.

And so do we put all our money into computing power for one great AI, or do we think about the diversification of many different kinds? And I’d say the past 20 years has taught us that there’s tremendous power in diversification of applications, like we said in the beginning through the iPhone, that whole ecosystem. Many, many small things can sometimes solve a problem equally or better than one ginormous thing. And that’s how I’d apply my money: towards the small and the mighty.

Josh:
I love it. Kat, thank you so much for the great time and conversation. This was a lot of fun.

Kat Holmes:
Thank you. It was really good to dive into these topics. I appreciate it.

Josh:
Thank you very much.
I want to thank Kat for the great conversation and information. And as always, thank you all for listening. Mike, how do you think we did?

Mike Gerholdt:
I think it was amazing. I also got into some of the discussions that you were talking about, especially around architecture. I think a lot of times we, as admins, think of, “Oh, well, how does this apply to tech?” Well, how does it apply everywhere? We’re design thinkers everywhere. And some of this is really opening up. I mean, you’ve exposed me to the whole making-ChatGPT-do-illustrations thing, and now I’m asking it stuff. Like, that’s fascinating. That’s not what I was thinking in my head, but that’s clearly what other people, or a machine, was thinking.

Josh:
Yeah. And I’m really glad that we got Kat to really describe how admins are going to really be in a driver’s seat. They have a really important role based on what they’re already doing. Based on the solutions that they’re already building and their relationship with current users.

Mike Gerholdt:
Yep, absolutely. And of course, any of the resources that Kat or Josh mentioned we’ll include in the show notes, which can be found on admin.salesforce.com, including a transcript of the entire show. And be sure to join our Trailblazer community because we’ll post there to discuss it. So with that, we’ll see you in the cloud.

Love our podcasts?

Subscribe today on iTunes, Google Play, SoundCloud and Spotify!
