[00:00:36] Chicago Camps: How do you define personalization in the context of behavior change and what are your primary strategies you employ to achieve it?
[00:00:44] Amy Bucher: I think at its heart, personalization is really talking to each person a lot like the way you would if you were talking to them one on one, right? It’s really understanding who is in front of you and tailoring all of the words out of your mouth to meet the context, the things that they’ve said, the things that you know about them.
[00:01:01] Most of my work in my career has been digital, and of course, in a digital setting, and I work in healthcare as well, we don’t know everything about a person. It’s not quite to the level of having that conversation and being able to see, oh, their facial expression changed a little bit, I should modulate what I’m saying to make sure that I’m not getting the message across the wrong way.
[00:01:19] But there’s a lot that we can know about people that we can use to speak to them differently than we speak to everybody else. So I think it’s a lot more than segmentation. I see segmentation a lot out there in the market. And I think a lot of companies, when they talk about personalization, kind of the best that they’re doing is segmentation where they’ve got maybe seven, eight different groups of people.
[00:01:39] And oftentimes they’re thinking about them in terms of behavioral attributes or attitudes. This is a person who is not comfortable with technology. This is a person who has a lot of health conditions and really needs a lot of support in managing those. But in reality, people change over time and across contexts.
[00:01:56] I might be a really terrible patient for myself, but if I’m taking care of someone else, maybe I’m really on top of it. And so I need to be spoken to a little bit differently in those two contexts. Or if I’m doing something for the very first time, I might be really afraid of it. I’ve never done it before, I’m unsure.
[00:02:09] And again, in healthcare, I’m talking about things like cancer screenings, your very first colonoscopy, or your very first mammogram. A lot of people are afraid because they’ve never done it before, they’ve maybe heard jokes about it. But the second time you do it… You may still not want to do it, but your reasons aren’t going to be uncertainty.
[00:02:25] And so personalization is really recognizing that this isn’t a person who’s going through this for the first time. I maybe don’t need to deliver to them the very fundamental education about this is what happens, because they know that. Maybe instead what I need to do is help them understand how to make it go better than it did the last time. The way I really think about it today in my job at Lirio, we write messaging for people around their healthcare, and every message we write includes a behavior science technique that we’ve selected specifically because it addresses a problem that we know people experience around the health behaviors we’re asking them to take part in. And so our personalization, really at its heart, is about giving each person the right message for what they may be experiencing right now.
[00:03:04] Recognizing that’s going to change over time, it might change between behaviors, it’s going to look different than the other people around them. And there’s other elements as well. I think it’s as simple as putting in people’s first names and their doctor’s information and stuff so it feels relevant, but I think the really important thing, from that behavioral perspective, is that
[00:03:21] we’re trying to solve a problem that they have instead of giving them something that’s like a bland informational packet that anybody could be reading.
[00:03:27] Chicago Camps: Can you explain the role of reinforcement learning in achieving personalization? And what are the key benefits and challenges of using AI for this purpose?
[00:03:37] Amy Bucher: Lirio, my current company, uses a type of artificial intelligence called reinforcement learning. Specifically, our platform is called behavioral reinforcement learning. And so what reinforcement learning does, it’s a subset of AI, so different than a large language model or natural language processing, which I think people are a little bit more familiar with in general. What we do with reinforcement learning is we designate specific outcomes that the algorithm is rewarded for achieving, and we can set different reward levels.
[00:04:05] Like we can basically say, this is the jackpot, this is the thing you really want to achieve, and this is a nice little prize that’s not the big thing. Think of it almost like designing a video game: we can designate what the big boss is and then the level bosses. So when we’re designing for healthcare, the health behavior is the jackpot.
[00:04:22] I mentioned mammograms, somebody actually finishing that mammogram, it’s done. That’s what the algorithm is rewarded for most heavily. We’re sending people those behavioral messages, like I mentioned, and what the algorithm is trying to do is find the right message to get the person to take action.
[00:04:36] So we know it’s important that people interact with those messages. They have to read them if they want to benefit from the good content that we’ve crafted, that’s hopefully going to help them figure out how to work around their barriers. So we do reward our algorithms a little bit for that, but we make sure the biggest reward is on the behavior.
[00:04:52] Cause otherwise we’d end up in like a Facebook engagement flywheel, which is not what we want. So what I really like about reinforcement learning is, first of all, it lends itself really nicely to use by behavioral scientists or behavioral designers. We can really help specify what are those behavioral outcomes that are important.
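The reward structure described above, a big reward for the completed health behavior and a much smaller one for message engagement, can be sketched in a few lines. This is purely an illustrative toy under invented names and values, not Lirio’s actual platform:

```python
# Toy sketch of the reward scheme described above: the completed health
# behavior is the "jackpot" and merely opening a message earns a small
# prize. Function name and values are hypothetical, not Lirio's system.

def reward(opened_message: bool, completed_behavior: bool) -> int:
    """Score one interaction for a reinforcement-learning agent."""
    r = 0
    if opened_message:
        r += 1    # small prize: engagement matters, but only a little
    if completed_behavior:
        r += 10   # jackpot: e.g. the mammogram actually got done
    return r

# Weighting the behavior ~10x the engagement keeps the agent from
# optimizing for clicks alone (the "engagement flywheel" problem).
print(reward(opened_message=True, completed_behavior=False))  # 1
print(reward(opened_message=True, completed_behavior=True))   # 11
```

The key design choice is simply the ratio between the two rewards: as long as the behavior dominates, an agent that maximizes cumulative reward cannot win by chasing opens alone.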
[00:05:09] A challenge is those have to be things that have some kind of data attached. The algorithm has to recognize somehow that the behavior occurred in order to learn from it. And healthcare behaviors that involve showing up at an appointment work really well. So whether that’s a vaccination or seeing your doctor or getting a cancer screening.
[00:05:23] In all of those cases, there’s something in your health data that says this happened. Things that are a little bit more challenging, not impossible, but require a little bit more design finesse, like lifestyle things. How do I know that you actually exercised today or took your medication? So we’re doing a lot of work designing around how do we detect those behaviors and bring them into the system.
[00:05:44] But what I really like about it is I do think it helps us overcome some of the bias that might be present if we had designed the whole thing by hand because it is training on behavioral responses. So I might make an assumption the first time I talk to you, to get back to personas, I might be using a persona the first time I talk to you and be like, okay, Russ is similar to the Billy persona, let’s try this.
[00:06:05] If we’re wrong, you’re not going to do the thing. You’re not going to complete the behavior that the algorithm is being rewarded on. And the algorithm is not going to repeat that strategy. Because it now knows that you are not like the Billy persona. And so over time, and with more interactions, it gets better and better at figuring out how you are as an individual.
[00:06:23] And it does it even more so if it can ask you to do different behaviors. Because now it can start to tease apart what are some of the things that are characteristics of you that you carry across situations. And in healthcare, health insurance might be one of those. If you have cost sensitivity, that might affect all your other behaviors.
[00:06:38] But what are the things that are situationally specific? So if you’re afraid of needles, that will show up for some behaviors, but not others. So that, I think, is a really powerful thing about using reinforcement learning for behavior, in that we’re able to get this map of who a person is across behaviors. If you think about the way that an intervention or an experience is designed, reinforcement learning lets us design really broad.
[00:07:01] Like we’re putting all of the ingredients into our approach that we think any individual we encounter could experience. And so we’re casting a big net when we do our initial research and when we’re developing our content; we put things into our libraries that we know will only work for a small subset of people.
[00:07:16] But if you’re the person who needs to read that, this is the thing for you. So we can work with these broad populations and then figure out how to personalize. The other approach, which I’ve also done in my career, is you start with a really specific audience. I really want to help this very well designed group of people, and I’m going to design exactly the right thing for their needs.
[00:07:33] I’m almost picturing like an inverted pyramid that reinforcement learning allows us to design within. It’s really intricate. One of the other things is I’ve been interested in AI for a number of years. That was one of the reasons why I was so interested in joining Lirio and working with a science team that specializes in AI, but I was also a little skeptical, because you never know what’s going on in there.
[00:07:52] And so one of the things that we do sometimes is we’ll pick some people in our dataset, de-identify them, and actually look: what messages did they get? And what’s the behavioral science encoded in those messages? What actually worked to get them to take action? And I cannot think of a time where there’s been a storyline in there that didn’t make sense to me as a behavioral science expert.
[00:08:11] There’s one that we use all the time: a woman with a health system that we were working with who hadn’t had a mammogram for years and years. We kept reaching out to her, started to get her to interact with the messaging, and there were a couple times she clicked the schedule button but didn’t finish.
[00:08:24] And then she finally had her mammogram, she was diagnosed with early stage breast cancer, she was treated, she’s doing well, she’s actually had a mammogram every year since then, so she’s gotten back on track. But when I looked at the messages that she received, she got a couple in the beginning that used behavioral science techniques that are warm and fuzzy.
[00:08:39] There were some around autonomy, like you have control and power and you can do this great thing for yourself. And there were some around women like you, the sisterhood, and she didn’t bite on those. She never really showed any interest in those messages. But any message that was like, here are practical tips to get this thing done.
[00:08:54] Here’s how you fit this mammogram into a really busy week. Here’s how you get someone in your social network to help you so that you can find time to do this. Those were the ones she interacted with.
[00:09:03] Chicago Camps: How do you approach personalization when dealing with limited data and what are the ethical considerations and how do you ensure that personalization doesn’t become intrusive?
[00:09:13] Amy Bucher: Limited data, first of all, we do this a lot. There’s a couple of ways. I’m not an AI expert. I think Dunning-Kruger is debunked now, but I still believe in it because of my lived experience. I work for a company now with AI scientists, and I feel like I know less than I ever did, because I know enough now where I’m like, oh, I don’t know anything.
[00:09:30] These products are built with something called transfer learning, where basically we do know something about people. Think about when you go on Amazon or Netflix and other recommender systems. It’s not exactly the same, so asterisks all over the sky that I’m not an AI expert when I’m saying this.
[00:09:44] So if you are one and you’re listening and think I’m an idiot, you might be right. But we’re able to essentially know a little. We can be like, Amy looks a lot like this other person that we’ve communicated with in other contexts; let’s see if she really is like this other person. So we’re able to do a little bit of that, and there is work behind the scenes to make sure that it’s more abstract learnings that are being used.
[00:10:07] We’re not hoarding people’s data around things like that; it’s really about the learnings. The other thing we can do is, as I mentioned as well, there’s research that sometimes says that certain demographics are associated with certain behavioral barriers, things like that. If we do know very limited things about, for example, your demographics, and we also know that those demographics might somehow influence the behavior, we can use that to do what I call a warm start of the AI, and basically say, hey, Amy’s in her 40s, this puts her at higher risk for this, why don’t you give this a try first?
[00:10:36] But really, once we start interacting with a person, anything we did know about them is really quickly replaced with what they’re actually doing. It might be a little bit slower to learn at the start, but I think we can get to a place with minimal data where we’re starting to understand who that person is.
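The warm-start idea, seeding the system with population-level research and then letting a person’s actual responses take over, can be sketched as a tiny Thompson-sampling bandit. This is a hypothetical illustration; the strategy names, prior counts, and class are invented for the sketch and are not Lirio’s implementation:

```python
# Hypothetical sketch of a "warm start": seed a per-person bandit with
# prior successes/failures borrowed from population-level research, then
# let the individual's observed behavior quickly dominate the choice.
import random

class WarmStartBandit:
    def __init__(self, priors):
        # priors: strategy name -> (prior_successes, prior_failures)
        self.counts = {s: list(p) for s, p in priors.items()}

    def choose(self):
        # Thompson sampling: draw from each strategy's Beta posterior
        # and pick the strategy with the highest draw.
        draws = {s: random.betavariate(a + 1, b + 1)
                 for s, (a, b) in self.counts.items()}
        return max(draws, key=draws.get)

    def update(self, strategy, completed_behavior):
        # Each real observation shifts the posterior, so the borrowed
        # prior is gradually replaced by what this person actually does.
        self.counts[strategy][0 if completed_behavior else 1] += 1

# e.g. research (hypothetically) favors practical tips for this demographic
bandit = WarmStartBandit({"practical_tips": (3, 1), "autonomy_support": (1, 3)})
bandit.update(bandit.choose(), completed_behavior=True)
```

The prior counts make early choices lean on population research ("a little bit slower to learn at the start"), while every update adds individual evidence that eventually outweighs those borrowed counts.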
[00:10:51] So that’s the first piece. Then there are a lot of ethical considerations. Of course, there’s data privacy rules and regulations that we pay a lot of attention to. We’ve taken a lot of care to ensure that the company has gone through our SOC 2 and HITRUST certifications and has all the firewalls in place to protect the data that we do use.
[00:11:10] I think a bigger piece is making sure that we have permission to use the data that we use and to talk to people the way that we talk to people. And so we spend a lot of time up front in designing our interventions and working with our clients to make sure all of that is done appropriately. We always white label with our clients, so we don’t show up as ourselves.
[00:11:29] We show up under the client brand, and part of that is because they’re working with us to speak to their population in a different way, but it also means that when we show up in somebody’s inbox or on their phone, they know who it’s coming from. There’s a trusted brand that stands behind it where they already have a relationship.
[00:11:45] We’re not doing like mass marketing type stuff. These are typically places where people are already receiving care, have already established a relationship. And I think that goes a long way. I also think the creepy question is connected here too, because if there’s data that we’re not supposed to have and that somehow surfaces in the way that we talk to people, that’s where it gets creepy.
[00:12:03] Like, how did you know that I bought that last month? And we’re not messaging people in a way where I feel like that comes through. We use people’s first names. I do think that using people’s first names is more powerful in certain channels than in others. So one thing that I am paying a lot of attention to is what modality are we using?
[00:12:22] And one of the things that happens with phones, phone numbers turn over. So you might move to a new city, you might decide to get a new phone, someone else gets that phone number. So if I text message you and say, Hey Russ, it’s time to get your flu shot, but you’ve moved and you’ve given your phone number to someone else, someone’s going to get that and go, who’s Russ?
[00:12:37] That’s not me. And then they might not get their flu shot, even though they too could benefit from that. So I’m starting to feel that for certain types of behaviors and certain types of modalities, personalization to that extent, using names, using personal variables, might be counterproductive. But in an email, where email addresses don’t get shifted around that way, we have data internally that finds using first name is extremely powerful in terms of getting people to interact with the message and in terms of getting them to take action.
[00:13:04] So it’s also thinking about how does this show up? Is it creepy on a text message versus an email? In an app, we’re not worried about that, because you’re in the app, like that’s your account, and so at that point we have more permission to talk about some personal characteristics. So I guess how tied to you the communication channel is has some influence on how much we can show up with the personalization and not seem creepy or wrong.
[00:13:26] Chicago Camps: You work with two science teams and a technology team at Lirio. How do you navigate and negotiate requirements across these interdisciplinary teams?
[00:13:35] Amy Bucher: I lead our behavioral science team, and then there is an artificial intelligence machine learning team. And then we have our platform team, and the three of us work really closely together in what we call The Factory
[00:13:46] to create our product. And so at the executive level, there’s four of us and we stay very close. We talk every day, and one of the things that I have learned, and it’s not just at Lirio actually, is that the higher I’ve gotten in my career and the more I’ve worked across disciplines, you really have to be willing to sound stupid in some of these conversations.
[00:14:06] And I’m really fortunate my colleagues who lead these other functions are wonderful, and I have a really good relationship with them where I trust them as people as well as trusting them as colleagues. So it’s not too much of a challenge at this point to ask the dumb question or to try to rephrase things in the wrong vocabulary and make sure I understand.
[00:14:24] And they’re very patient with me as I am with them because they’re doing the same thing back to me. There’s a lot of let me put this in my language and make sure that I understand it. It’s not been without its challenges. Very early in my time at Lirio, the first project I worked on that I felt I led the design of was for colonoscopies.
[00:14:41] And one of the things that we designed in there, colonoscopies are a little unique among cancer screenings because of the bowel cleanse; the prep you have to do is to stop eating and take this horrible laxative, and it’s all time based. There’s a certain point in time at which these things happen. So in behavioral science, there’s this idea of a just-in-time message that you send: just in time, I get the message, I do the thing.
[00:15:01] And it’s supposed to help address what they call the intention-action gap. So people intend to do something, but they forget or get distracted, and it doesn’t happen. So we designed all this, and in our minds, the behavioral team, this is very straightforward. Like we know the date and time of the colonoscopy.
[00:15:16] It’s not a big deal to just go back 36 hours and send this message. Turned out it was a big deal. So it took the platform team a long time to build that functionality correctly. And now it’s great. It’s up and running. It’s doing what we intend it to do. And it’s a very flexible tool in our toolkit.
[00:15:30] But I didn’t realize that I needed to have a conversation very early in my design process with platform team to say, here’s my behavioral requirements. How does the technology work with this? So that’s my failure story, but it was the start of my time at Lirio. And I think I’m glad because since then we’ve gotten really good.
[00:15:50] We understand now that we need to have cross-functional meetings early in our design process. One of our Lirio maxims that people say all the time is share before you’re ready. A lot of people will just be like, I’m sharing before I’m ready. And in that instance, I’m not going to pull out the red pen.
[00:16:03] This is for my initial reaction. This is for my gut. Where are the roadblocks? What do we need to work on together? That’s been really helpful. And sometimes if the team isn’t ready to socialize something, I’ll just go to my colleagues, my Factory leadership colleagues and say, Hey, we’re thinking of doing this.
[00:16:18] How does this land? So those sorts of dynamics are really helpful. And I just really believe it’s so helpful to have people at the top of the teams who are willing to work with each other. I think we’re all pretty good at putting our egos aside. That can be really hard sometimes with people who are very talented and in leadership positions, but I think fortunately the four of us are pretty good at it.
[00:16:38] Chicago Camps: Can you shed some light on the different types of AI and how they might be suitable for various design needs?
[00:16:45] Amy Bucher: I talked a little bit about reinforcement learning that we use at Lirio and really how that helps us define a behavioral outcome and then figure out different ways to get people to arrive at that behavioral outcome. That’s one tool; large language models and generative AI are another. And I think those are the ones that are really stealing the headlines right now: ChatGPT or Bing AI, and I know Bard is Google’s, right? And so those are really good in terms of generating novel content that might correspond to a question.
[00:17:14] Now, we haven’t used a lot of that in my job. We’ve experimented a lot with it. We really want to understand the role that it might play, but we have some concerns about it. So we have humans who generate all of our content, and they’ve played around a little bit with some of the generative AI to understand the role that it might play.
[00:17:30] We were wondering if it might be helpful in terms of a first draft, for example, that we could edit. But it’s unclear whether you can own work product that is contributed to by generative AI, which, from a business perspective, is clearly not a place that we want to run into before we really understand the implications. And we’ve also found in our experimentation that it can be quite repetitive, so a single version of something might look really good, but then as you try to generate additional versions, you’re like, oh, this is not a human, and it’s pretty easy to see.
[00:17:58] So that’s one. There’s recommender systems, as I mentioned, and I think those actually can be really powerful, and I see a future for us potentially starting to use those sorts of things as well. Reinforcement learning can do some of this. And again, not an AI expert, so I can’t really speak to it, but recommenders are like what you see when you shop on a website: people like you used this sort of thing.
[00:18:19] We do joke a lot about the role of those things in healthcare, because you don’t want to be like, hey, you really enjoyed your Lipitor, people like you also enjoy Crestor. Or, hey, you had a knee replacement last week, get another one this week for half price. It doesn’t quite work in healthcare. But if you think about lifestyle management and things like that, you can start to see a role for, oh, you’re starting a fitness program, people like you really enjoy yoga, and given your level, this is the right instructor to start with.
[00:18:46] There’s use cases like that where I think we might see recommender systems play a good role. And then another thing that I’m really enchanted with is natural language processing and how we might use that. So if you think about bidirectional messaging with somebody, being able to use natural language processing to get the gist of what they’re saying, identify keywords or topics, and start to correspond back to them in the right way.
[00:19:08] That, I think, is intriguing. The whole thing, though, is you really want to manage risks. So if we’re having AI generate content, or even using NLP to try to figure out what a person’s saying, you really need it to be right, because the risk of delivering somebody the wrong message is so high. Whether they take action on something that’s not really good for them, or they miss a recommendation that would have been incredibly helpful or even life saving.
[00:19:33] Chicago Camps: What are some of the most significant ethical considerations you encounter in your work, especially when dealing with AI and personal data?
[00:19:41] Amy Bucher: I think a really big ethical consideration we deal with is who are you to tell me to do this thing? Why do we presume to recommend to somebody that they take action in a specific way around their healthcare?
[00:19:56] And I’m really sensitive to that one. My training is as a behavioral scientist, specifically within motivational psychology. And there’s this concept in motivational psychology called, what is it, volitional non-adherence. So basically somebody chooses not to do the thing. They’re not interested, right?
[00:20:11] I don’t want to take that medication. I don’t want to lose weight. I don’t want to change my lifestyle. And all of the behavioral science around that says the best course of action is you don’t force somebody who’s in that state. You let them not do a thing. Because if you force somebody to take action, you might get them to do it for a little while, but they probably won’t sustain it over time.
[00:20:31] And they’re going to feel resentful. They’re not going to be willing to meet you where you are the next time. It’s counterproductive. You’re going to get a short term win, but long term loss. And I think about that a lot in my work, because we are asking people, sometimes without them expecting it. A lot of the people that we communicate with have really not done much for their healthcare in a long time.
[00:20:52] A lot of the people we work with, a lot of the companies we work with, ask us to deal with their very disengaged populations. And so this might be someone who hasn’t seen a doctor in 3, 4, 5 years. And… I think we really have to be okay with them still not being ready to do that. We can provide them the reasons we can have that gentle conversation with them.
[00:21:12] But ultimately, if they don’t want to take action, I think that’s something we just have to learn from and move on. And I think that is an ethical thing. So I’ve worked with clients in past lives where they’re like, let’s impose a penalty on people who don’t do this, and I was a little uncomfortable with that, because even though we know from a clinical perspective that, yeah, this is the thing for this person to do, it’s just not going to be productive if we force it.
[00:21:34] So that’s a really big one for me.