Elephant in the Room: Diversity in Digital Experience

Women of Sitecore put focus on key considerations for personalization, user testing, and AI during Sitecore Symposium 2022

By Monica Lara.

5-minute read


The technology powering today’s leading brand experiences is influencing everything from serving content to users across the globe to hyper-personalizing experiences based on individual data. With one in three marketers also currently experimenting with the immersive experiences of the future, what should we take into consideration when designing our experience strategies to ensure diversity, inclusion, and representation?

Six Women of Sitecore panelists surfaced key considerations and shared real-life examples of experiences gone wrong during a two-part session at Sitecore Symposium 2022. Explore detailed answers on personalization, user testing and personas, and automated processing by panelists: Deepthi Katta, Technical Director, Verndale; Amy Turrin, Sr. Product & Engineering Manager, Kimberly-Clark; Jaina Baumgartner, Director of Digital Strategy, RDA; Kimberly McCabe, Sr. Director of Solutions, ICREON; and Daniela Millitaru, Sr. Sales Engineer, Sitecore.

Q1: What ethical considerations should be made when using personalization?

Kimberly McCabe: Let’s take all the security, personally identifiable information (PII), and the regulated parts of the data and put that aside. I’d say that’s table stakes. We have to do that. For this conversation: what is the most important consideration that is never spoken about? Whose ethics are we talking about? This is an important question that we need to ask because, when we’re talking about personalization, the person we’re personalizing to is important.

A lot of our ethics come from our cultural backgrounds, and our cultural backgrounds are also formed by who raised us, where we are from, where we were born. These are things that we need to become more aware of as we become a more global society and we personalize. When we’re talking at a table about what ethical considerations we need to make, we need to have some kind of framework for how we talk about them and understand what we believe to be ethical. We really need to think about the personal point of view.

As personalization evolves, companies are going to end up bringing on board something like a Chief Ethics Officer to consider how our ethics ultimately impact the individual consumer.

Daniela Millitaru: I see it from two points of view: the ethicality of the brand and the ethicality of the consumer. Take, for example, brands that we know exploit children in the way they make their clothing, versus the people who buy it. I don’t believe, in this case, there is any form of ethical personalization if the business itself is not ethical.

Amy Turrin: We also always have to consider the culture.

For example, a situation that did happen: a brand knew, based on all the data it had about someone, that they were expecting a child, which is wonderful news for some people. And it’s terrible news for others. So, this brand reached out and sent them some diapers, which seems pretty benign on the surface. It seems like a good gesture.

But without having conversations about the ethics or the impacts of it, and considering only the cultural situation that you’re in, this person may have been in a society where maybe they didn’t want to have a baby, and now they’ve been exposed. And that kind of ‘outing’ can happen pretty easily. It can happen to people who are expecting a baby, people who are dealing with their sexuality, people who are trying to understand something about themselves.

Personalization, when it knows enough about you, can take a bit of a leap. It can say, ‘hey, we know this stuff about you, what can we do with it?’ ‘I know, I can sell you this thing.’ And if you take that leap too far without knowing the person’s intention, you can’t make those leaps very safely.

Jaina Baumgartner: Why are you personalizing? This is extremely important in this conversation. And the answer should be because you are trying to help them in some way. So how are you going to help them do something? Not personalize to personalize, not personalize to sell – personalize to help. And if you think about it that way, you will sell, eventually, but helping should be your number one priority. And second, personalize in a way that identifies with the person, because that helps them feel seen.

Q2: Why should we ensure diversity with user testing and development of personas?

Deepthi Katta: If we think about why personalization actually began, the intent behind it was that one solution doesn’t fit all. And my translation of that is one set of content doesn’t please all. So, if you don’t have a diverse set of personas for user testing, what we are building is not a well-rounded solution. It doesn’t really matter what it is – an e-commerce application or a fun website – we have to make sure that we reach the right audience. No matter what product you’re testing, ensuring that you cover a wide set of users is a pillar you cannot skip.

Kimberly McCabe: A lot of the time we’re looking at things through our own lens and we have cognitive fixedness, which is driving our understanding of the world around us. And sometimes what that means is that we have the best of intentions, and we think that we’re doing something great for all kinds of minorities, but we haven’t actually asked people who are being impacted. So, we're still being biased. It’s difficult because you can't know what you don't know. We need to be a little bit more open about our ability to ask people and ask that person what matters to them.

Amy Turrin: If we don’t have that diversity at the table, or somebody actually in those conversations, then all of your personas are based on who you do have in the room. And if you don’t have diverse perspectives, you’ve just got biases on display. You’re just echoing whatever you know. I think we are all trying to do our very, very best to understand our customers, but unless we actually invite the humans we’re talking about to the table, we’re going to have a skewed perspective; it’s just going to be what we think the customer wants, which again could be really wrong.

That’s where you end up seeing products put out and campaigns run where you know what they were trying to do, but it’s clear they never asked a woman, never asked a person of color, never asked a queer person. They’re just saying, ‘you want this product, we want to sell it to you, and this is what we think you’d like to see’, and then you get cancelled.

Jaina Baumgartner: A lot of us work in the world of building websites, and I would say even user testing may already be too far down the road, because if you find out something then, it’s really hard to go back and redo everything that you’ve built. So it’s extremely important to actually have user interviews and bring people to the table as early as possible in the conversation. Really seek to understand what they’re seeing, what they want, how they can be helped, what you can bring to the table, and what kind of experience you can create that would genuinely help them in their journey, so that everything down the road maps back to it.

Daniela Millitaru: Diversity of backgrounds also brings diversity of thinking, and different ways of thinking normally bring innovation, so we can get ‘there’ faster. I really dislike capitalism, but you would also make more money, because you would not waste effort: if you don’t have diverse user testing, you end up creating a product that’s not for everyone. You would like your audience to be everyone, right? Because you would like there to be as many people as possible, so that they can buy as much as possible. And if you don’t have that diversity in user testing, you might have to go back to the drawing board and consider those personas that you hadn’t really considered, and that’s time, and time is money.

Q3: What considerations need to be made as we rely more on AI and automated processing to make human decisions?

Jaina Baumgartner: Over 60% of marketers say that AI is critical to executing their personalization strategies for the next year. So, in discussing what it means to have AI, machine learning, and personalization, there are two main forms of this, and it’s mainly used in recommender systems. The first form is collaborative filtering, where you find users similar to you and then recommend something based on what those similar users used or did. Then there is content-based filtering, where you take your own past history and behavior and make recommendations based on that.
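To make that distinction concrete, here is a minimal sketch of the two approaches in Python with NumPy. Nothing here is Sitecore-specific: the toy interaction matrix, the item features, and the function names are illustrative assumptions, not a production recommender.

```python
import numpy as np

# Toy user-item interaction matrix: rows are users, columns are items.
# 1.0 means the user engaged with the item, 0.0 means no interaction.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
], dtype=float)

# Toy item feature matrix, e.g. one content tag per column for each of the 5 items.
item_features = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
], dtype=float)


def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def collaborative_recommend(user_idx, interactions, top_n=2):
    """User-based collaborative filtering: score unseen items by how much
    users similar to this one engaged with them."""
    target = interactions[user_idx]
    sims = np.array([cosine(target, other) for other in interactions])
    sims[user_idx] = 0.0              # ignore the user's own row
    scores = sims @ interactions      # similarity-weighted sum of others' interactions
    scores[target > 0] = -np.inf      # never re-recommend items already seen
    return np.argsort(scores)[::-1][:top_n]


def content_based_recommend(user_idx, interactions, item_features, top_n=2):
    """Content-based filtering: build a profile from the items this user already
    engaged with, then rank unseen items by similarity to that profile."""
    target = interactions[user_idx]
    profile = target @ item_features  # aggregate features of consumed items
    scores = np.array([cosine(profile, f) for f in item_features])
    scores[target > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_n]


print("Collaborative picks for user 0:", collaborative_recommend(0, interactions))
print("Content-based picks for user 0:",
      content_based_recommend(0, interactions, item_features))
```

Note that both scoring functions rank items purely by similarity to what is already known about the user, which is exactly the echo-chamber dynamic described next.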

The problem with both of these ways of doing machine learning or AI-based personalization is that they’re fundamentally optimizing for similarities, on the assumption that similarities are ‘good’ and differences are ‘bad’. Because of that, they limit our growth and we fail to see anything that’s new, interesting, or offers another perspective; it’s basically a dangerous AI feedback loop, an echo chamber. This is not just a social problem. It’s a business problem too, because a typical user has 3 to 5 preferences, and if you just personalize to the one that you find out first, you’re not able to cross-sell to your customer. A person that you are able to cross-sell to will spend 5 to 10 times more with you than someone that you don’t cross-sell to.

Amy Turrin: As we think about AI, automated processing, making decisions, and thinking about those models, we can start heading into an uncanny valley: ‘hey, it’s so close, we can kind of figure out who you are’. It’s unnerving to be in that uncanny valley where you know it knows who you are and knows what you want. And for those people who maybe do not fit the mold of what that uncanny valley was aiming for, it’s a little bit upsetting.

I have a good example of this: I had a baby recently and I am firmly mummy. She can’t say it yet, but I’m firmly mummy. As I am not the most feminine of people, I like to do some different types of shopping for my shoes. So, Facebook did a wonderful job of trying to understand who I am. They knew that I had a baby, they knew that I wanted to buy brown shoes, they knew that I wanted to buy these fun socks, they knew that I might buy diapers sometimes, but what they didn’t think to ask was who this person is beyond those things. So they made the leap, and their wonderful, probably most advanced machine-learning AI that we have decided: this person is the daddy. I’m just getting constant ads for menswear. I think that’s how they got there, but you end up taking these leaps, and you end up taking a disturbing leap if you fall outside of those things.

Daniela Millitaru: I wanted to draw more attention to areas where having AI and automated processing make human decisions can really kill someone or put someone in prison, and that’s the legal and medical fields. Recently, ProPublica evaluated an AI-assisted risk assessment system used to predict a defendant’s future risk of recidivism. Because of implicit bias, out of the 7,000 outcomes analyzed, they found a strong tendency to assign a higher risk to Black people; that is extremely, extremely dangerous.

Also, in the medical system, people could be killed if you have an AI deciding on some medication, for example. If there’s no human presence, it could kill. There needs to be consideration of the ethicality of the practice as well, and we also have to look at unconscious bias.

Kimberly McCabe: I’ve seen research about these algorithms, and it makes me think about the role of actuaries; actuaries decide the risk level for insurance, whether it’s life insurance, health insurance, or car insurance. They’re looking at it and saying what gender you are, what car you drive, what group you fall into, and we’re paying based on these things, which we now clearly see as biased and which we’re concerned about with AI and machine learning. But does it make you look at things in other industries differently? Do other industries start making you wonder whether we were doing it really wrong all along, and how what we’re doing in tech influences those other industries going forward?

Deepthi Katta: There's no looking back, there's only looking forward. The world is going to do AI and that's why these discussions are very important to make sure that the right things are considered in that model and the end user is not feeling emotionally distressed because in this world we already have people suffering from a lot of problems, and we don't want AI to be another one.


Monica Lara

Sr. Content Marketing Manager

Sitecore