Tent Talks Featuring Heidi Trost – When AI Knows Everything: Privacy, Security, and the Future of UX

Tent Talks Featuring: Heidi Trost
Heidi Trost
Author
Human-Centered Security
Heidi Trost is a UX leader who helps cross-disciplinary teams improve the security user experience. With a background in UX research, Heidi does this by helping teams better understand the people they are designing for, as well as the security threats that may negatively impact people and systems.

Heidi Trost, a UX leader and host of the *Human-Centered Security* podcast, will explore the fascinating—and sometimes unsettling—ways AI is reshaping security, privacy, and user experiences. With AI agents becoming increasingly capable of making decisions and taking actions on our behalf, we face a future where traditional user interfaces fade into the background. What does this mean for designing systems that are both secure and user-friendly?

Heidi will share her unique perspective on how UX design can address the challenges posed by AI-driven technologies, helping users understand and manage their own privacy while mitigating security risks. Whether you’re a designer, researcher, or technologist, this conversation will spark new ideas and leave you inspired to rethink how we design for a more secure future.

Session Notes

Session Overview

In this Tent Talks session, Heidi Trost dives deep into the evolving relationship between AI, privacy, and the future of UX. She introduces a helpful mental model involving three key players in the cybersecurity ecosystem: Alice (the user), the threat actor (the adversary), and Charlie (the design of the system). Through this lens, Heidi explores how invisible interfaces and AI agents are shifting the landscape of privacy and security, often creating tension and confusion for users like Alice.

Heidi emphasizes that while AI can enhance usability and offer powerful new capabilities, it also opens up major risks—especially when users are unaware of how their data is being used or what rights they have. She calls for UX designers to become advocates for Alice, learning enough about the underlying technology to design responsibly and communicate clearly. Throughout, she stresses the importance of trust, transparency, and cross-functional collaboration to build safer, more user-friendly systems.

How do less visible interfaces change perceptions of privacy and security?

  • Introduced a model with three roles: Alice (user), threat actor, and Charlie (system design).
  • AI-powered tools like transcription at a doctor’s visit or smart glasses can provide value but also raise privacy concerns.
  • Users often don’t know what rights they have or how their data is being used.
  • Trust is key—users behave differently based on how much they trust the system, even when that trust is misplaced.
  • Invisible interfaces make it harder to know when data is being collected, creating new security and ethical concerns.

What are the biggest risks with AI agents acting on users’ behalf?

  • AI agents can access email, financial accounts, and more—making life easier for users but also for threat actors.
  • The broader the access, the bigger the attack surface (see the sketch after this list).
  • Onboarding and setup must balance ease of use with friction that promotes awareness.
  • Advocates for “secure by default” settings—like Firefox’s built-in safe browsing—as best practice.
  • Good UX needs to clearly explain choices and risks specific to users’ context, which security often fails to do.
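
A minimal sketch of the access point above, in Python. This is not from the talk or the book; the scope names, the AgentGrant structure, and the confirm callback are illustrative assumptions about how an AI agent’s privileges could be kept narrow by default, with a small amount of deliberate friction for riskier actions.

    # Hypothetical sketch: deny-by-default scopes for an AI agent acting on Alice's behalf.
    # Scope names and the confirm() callback are illustrative, not any real product's API.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class AgentGrant:
        allowed_scopes: set[str] = field(default_factory=lambda: {"email.read"})
        # Actions that still require Alice to confirm, even when granted.
        confirm_scopes: set[str] = field(default_factory=lambda: {"payments.send"})

    def perform(action: str, grant: AgentGrant, confirm: Callable[[str], bool]) -> str:
        """Deny by default; add friction only where the risk warrants it."""
        if action not in grant.allowed_scopes:
            return f"denied: the agent was never granted '{action}'"
        if action in grant.confirm_scopes and not confirm(action):
            return f"blocked: Alice declined '{action}'"
        return f"ok: performing '{action}'"

    if __name__ == "__main__":
        grant = AgentGrant(allowed_scopes={"email.read", "payments.send"})
        print(perform("email.read", grant, confirm=lambda a: True))
        print(perform("payments.send", grant, confirm=lambda a: False))   # friction point
        print(perform("settings.change", grant, confirm=lambda a: True))  # never granted

The point of the sketch is the shape, not the code: the smaller the allowed_scopes set, the smaller the attack surface, and the confirmation step is the “right amount of friction” Heidi describes in the transcript.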

Can AI help users understand privacy, or does it create false security?

  • Answer is both—it depends on how Charlie (system design) shows up.
  • Currently, Charlie is like an annoying coworker who interrupts Alice with jargon and unclear warnings.
  • AI has potential to become a helpful sidekick, like Daniel Miessler’s concept of a digital perimeter protector.
  • Danger lies in over-reliance; users might trust AI too much and stop questioning or verifying.

Advice for UX designers building AI-driven experiences:

  • Learn the dynamics of Alice, Charlie, and threat actors—security is a constant game of reaction and adjustment.
  • Understand enough about the tech to ask the right questions and push back on bad decisions.
  • Don’t gather or store more data than needed—reduce risk at the source.
  • Prepare for multimodal experiences: voice, gestures, facial expressions, and text.
  • Communicate clearly what the system is doing and why, without overwhelming users.
  • Make system limitations visible—users need to know what AI can and can’t do.
  • Allow for reversibility: let users undo mistakes the AI makes (see the sketch after this list).
  • Embrace cross-functional collaboration—design alone can’t solve this, but it must lead the way.
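
The reversibility item maps to the “user control and freedom” usability heuristic Heidi cites in the transcript. As a minimal sketch only, assuming a hypothetical action log and made-up action names (nothing here comes from the talk or any real product), one way to let Alice undo what the agent did:

    # Hypothetical sketch: record each agent action alongside the step that undoes it,
    # so Alice has recourse when the AI makes a mistake on her behalf.
    from typing import Callable

    class ReversibleLog:
        def __init__(self) -> None:
            self._history: list[tuple[str, Callable[[], None]]] = []

        def record(self, description: str, undo: Callable[[], None]) -> None:
            self._history.append((description, undo))

        def undo_last(self) -> str:
            if not self._history:
                return "nothing to undo"
            description, undo = self._history.pop()
            undo()
            return f"undid: {description}"

    if __name__ == "__main__":
        inbox_filters: list[str] = []
        log = ReversibleLog()

        # The agent creates an email filter and records how to reverse it.
        inbox_filters.append("archive newsletters")
        log.record("created filter 'archive newsletters'",
                   undo=lambda: inbox_filters.remove("archive newsletters"))

        print(log.undo_last())  # Alice reverts the agent's change
        print(inbox_filters)    # []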

Notable Quotes

  • “You can’t lose data that you don’t gather—or don’t keep.”
  • “Charlie is the security UX—and UX people, you are in charge of Charlie.”
  • “Trust changes how Alice behaves—even if the trust is misplaced.”
  • “The holy grail is building in security and privacy so Alice doesn’t have to think about it.”
  • “Help Charlie help Alice.”
  • “The Venn diagram of engineering, design, security, law, and product—that’s where the magic happens.”

Reference Materials

  • Human-Centered Security by Heidi Trost
  • Daniel Miessler – Security researcher and writer (danielmiessler.com)
  • Firefox – Example of secure defaults in UX design

Session Transcript

[00:00:34] Chicago Camps: How do you think the shift to less visible user interfaces impacts the way users perceive security and privacy?

[00:00:41] Heidi Trost: There’s one thing that I want to stress before I talk about that, and it will help in understanding the rest of what I’m going to talk about.

And the thing that I wanna talk about first are the three players. And that’s what makes solving for security really difficult, whether it’s AI or anything else. So I wanna introduce these three players in the security ecosystem so we have a foundation and we can build on that.

So think about this dynamic cybersecurity ecosystem. And these three players are all influencing and being influenced by one another. And I’m so sad that all you folks can’t see me with my hand gestures ’cause it makes it so much more interesting.

Number one, we have Alice, the user. I like to tell stories. I like to give names and tell stories, so I have Alice. When you think of your end user, think of Alice. When you think of Alice, think of your end user.

Then think of the threat actor. The threat actor is a person or group of people who are taking advantage of human or technical vulnerabilities for their own motivations. So a threat actor is thinking, how do I trick Alice?

How do I trick Alice into giving me money, into giving me assets, into giving me access, into believing something that I want her to believe? And they have all sorts of tactics and techniques to do this. But it all boils down to: how do I get Alice to give me money, to give me assets, to give me access, or to believe something that I want her to believe?

And then there’s the design of the system. So that’s three. I say the design of the system is Charlie. He is the third player in this cybersecurity ecosystem. Charlie is where the security user experience impacts Alice. If you think of a horizontal line, like a timeline, everything above that horizontal line is where Charlie is impacting Alice.

He’s bubbling to the surface and saying, Alice, you have to do this. Alice, remember to do that. And then everything below that horizontal line is the stuff that happens underneath the surface, right? Alice isn’t really aware of what is happening. It’s all those bits and bytes. But where security bubbles to the surface, an action Alice has to take, a decision she has to make, that is Charlie.

And UX people: you are in charge of Charlie. You are in charge of designing the security user experience. Okay? Just wanted to get that out of the way first. We have these three players in the cybersecurity ecosystem: Alice, the user; the threat actor; and Charlie, the design of the system.

The relationship between Alice and Charlie is really important. Okay. So how do I think the shift to less visible user interfaces impacts the way that users perceive privacy and security?

I’m worried about a few things. So when I think of privacy, I’m thinking about when Alice goes to her doctor’s office and the doctor is recording and transcribing Alice’s entire visit. I actually just heard people at the coffee shop talking about this scenario, and they were a little upset about it. The doctor views this experience of recording and transcribing Alice’s visit as a benefit, right? It helps them give Alice better care, maybe leveraging AI to come up with different treatment plans for Alice or to give Alice a better outcome for her health. But Alice might be like, why would I give you permission to record and transcribe my entire visit?

Of course you have the threat actors, who are thinking, oh, the entire visit is being recorded, I wanna grab all of this data. Right? The more data the better. Where security and privacy bubble to the surface is that Alice may not have her rights explained to her, and the doctor and the different people working at the doctor’s office aren’t always empowered to help Alice understand what her rights are.

Can she say no? What happens if she does say no? How is her data processed? How is it shared? Who will it be shared with? What’s gonna happen with the data? Is it just gonna live there forever? What are these third parties that are accessing this data?

Often the security user experience does a really bad job of explaining those things to the people providing the data, so to Alice. In terms of privacy, that’s one of the scenarios that I’m thinking about: gobbling up all of this data. In some ways it could be used for good. It’d be awesome if Alice could improve her healthcare experience and her health outcomes.

But it’s also really scary because we don’t really know what’s happening, or we may not know what’s happening to our personal data.

Another example, and I talked about this a little bit in the presentation that I gave for Chicago Camps earlier this month, is smart glasses.

Smart glasses are a device, but they are powered by AI. You become increasingly comfortable giving away control of your data and control of all of the different services that you use, because it provides so much value, right?

To someone like Alice, who might be a college freshman, she’s thinking, these smart glasses can help me record all of my lectures. It’s gonna help me do better in school. It’s gonna help me make friends. It’s gonna help me navigate these awkward social situations, where the AI might be able to tell me, hey, given this person’s reaction, or given this person’s facial gestures, maybe you should respond in this way.

For Alice, that’s a lot of value. To the person on the receiving end, that’s a little weird. You’re recording me, right? So again, we’re becoming more and more comfortable giving away the data, giving away the control, because it’s providing so much value.

And I see that.

And observe your own behavior too, as you’re thinking about this. I try to reflect on what I’m becoming more and more comfortable with, and as someone who has a lot of background in UX research, I’m observing how my users are reacting. So think about your Alice, your user, and how she’s gonna perceive this value, what she’s gonna be more comfortable with, and what she’s not gonna be comfortable with, right?

Where there are potential roadblocks, where Alice is saying, why do you need that? That’s weird. It goes back to trust. So Alice may trust that these services are doing their due diligence with her data, even when they’re not. Alice will behave differently if she trusts the service versus if she distrusts it, or she may place trust where trust shouldn’t be given. She misplaces trust.

And then there’s the fact that I may not even know, as the person talking to you, that I’m being recorded. So at what point am I giving my consent? And I don’t know what’s happening as that stuff’s being uploaded to the cloud somewhere, and all of a sudden that’s my data and I have no control over it.

The question, shifting to these less visible user interfaces, is: when do you know your data is being collected? And you lose control over what’s being done with it. The threat there, the risk there, is that the data that’s collected can be used against the user, can be used against that person, at a later point in time. And we may not even be aware of some of the ways that it can be used, right?

So I said the threat actor is looking to get access to systems, to manipulate Alice, to coerce her, to stalk her, to do all of these negative things.

Before I had an iPhone I wasn’t thinking about, oh, my face could potentially be used to unlock my phone. That just never occurred to me until that actually became a reality. Right? And then all of a sudden that reality gets a little scarier. 

[00:08:38] Chicago Camps: As AI agents become capable of making decisions and taking actions on our behalf, what are the biggest security risks users face? How can UX design help mitigate those risks?

[00:08:52] Heidi Trost: Privacy still stands as one of my big items. The second thing that I’m thinking about, that I’m worried about are taking advantage of account privileges. You have this AI agent. An AI agent is something that is taking action on behalf of Alice, your end user.

So this AI agent is connected to Alice’s email, to her social media, to her financial accounts, and for Alice, it’s incredibly convenient. It helps her immensely, it saves her a lot of time, but it’s also making it really convenient for the threat actors who wanna take advantage of all of that connectedness, right?

My prediction is that things will just become more and more connected because it’s just easier, right? Just connect everything. You’re in an Apple ecosystem, you’re in an Android ecosystem, and just connecting all those things together is gonna make your life so much easier.

So if that AI agent has the same privileges, is able to do the same things that Alice is able to do on those accounts, maybe it can change account settings, like security and privacy settings. It can send messages. Maybe it can make purchases, connect to other accounts, and more.

So basically it can do the same things that Alice can do, and if we don’t put controls in place to prevent that, you can start to see how that would unravel. Right? Any account that’s connected to Alice’s email, the AI agent could take over.

All of a sudden the AI agent is using your Venmo to send people money or to make purchases. Your attack surface is broader, and the threat actor is incentivized to keep these actions hush-hush, right? Because that is what your AI agent is supposed to do: help you and do things behind the scenes so you don’t have to worry about them.

And maybe initially we were giving just tiny little tasks to the AI agent to do. But you can see that in the future we’re going to be relying on the AI agent more and more, and we’re gonna be giving it more and more access. And I like to say, if you don’t have access to something, you can’t do something bad with it.

Access is a fundamental security principle, right? Limiting access is a fundamental security principle because if you don’t have access to it, you can’t break it, you can’t steal it, you can’t do something bad with it. So that is one of the things that I’m worried about.

Alice sees the massive benefit of granting access, of granting privileges, of allowing the AI agent to take these actions, but it’s also exposing her to a lot of risks.

So onboarding and setup are gonna become even more important. This is something I talk about in my book, Human-Centered Security. I almost devoted a whole chapter to it, but not quite; I walk through a case study because it really is that important. It’s getting Alice to the value really fast, because you want her to use your product, but it’s also making sure you introduce the right amount of friction and get her to stop and think about what she needs to do, and thinking about secure defaults.

There’s just stuff you can build in so Alice doesn’t even have to think about it, but where she does need to make a decision, making that crystal clear. So let me just take a step back and try to explain that a little bit better.

If you can build the security and the privacy controls into your product so Alice doesn’t have to think about it, that would be the holy grail, right? The tier below that, the second best, would be to think about what safe defaults you can implement.

A safe default, for example: you download Firefox for the very first time, and by default it has these safe browsing controls built in. You don’t have to think about them. It’s automatically going to try to protect you from malicious sites, from malicious downloads. You don’t have to check a box. It’s already there. It’s secure by default, and it’s really hard to turn off.

The tier below that, the third best, would be to very succinctly explain to Alice what her choices are and what the risks are as they apply to her and her specific situation. That’s really hard, and like you said, that’s service design. That means content designers; we need you really badly to explain those things. That’s one thing that security does pretty poorly right now: explaining risk in a way that people can identify with and that is specific enough to them in their circumstances.

[00:13:24] Chicago Camps: What role should AI systems play in helping users understand and manage their own privacy? Do you think AI can foster greater awareness or does it create a false sense of security?

[00:13:36] Heidi Trost: My answer is both right. Charlie, the design of the system, secuirty user experience has the potential to do both.

And if you remember what I said at the very beginning, the dynamic between Alice and Charlie is very important.

Right now there’s a tension between Alice and Charlie, the security user experience. Charlie is like the most annoying coworker you’ve ever had. He just piles stuff up on your desk. He comes in at the most inopportune times and says, Alice, you need to fix this.

He uses jargon, uses technical terms, he uses acronyms. He’s just really obnoxious and annoying. And guess what? Alice doesn’t like him, doesn’t take him seriously, is exasperated when he uses terms and acronyms that she can’t understand and basically feels like he’s a roadblock for her to accomplish the tasks that she needs to accomplish.

He’s not helpful. He’s not a helpful teammate. But Charlie, especially with AI, and I’m thinking AI agents, can be a much more helpful teammate, and hopefully that relationship can be reconciled, so Charlie is able to do more on Alice’s behalf, be more helpful, anticipate, and actually watch out for Alice.

So there is a guy named Daniel Miessler. And you should follow his stuff if you haven’t before. He talks about how your AI assistant can be like your little sidekick, right? So they’re watching out for you and he talks about it from the perspective of both physical and digital security. Your sidekick is actually watching like your perimeter around you physically. And it’s also looking at your perimeter around you in the digital world and could potentially say, Hey Alice, this email that you just got doesn’t seem right or it can just filter it out just based on what Charlie knows.

So I think, again, check out Daniel Miessler’s work. He’s an expert in this and has thought about it a lot more than I have. I’m using his example. But I think there is a lot of potential for that relationship between Alice and Charlie to improve security and privacy outcomes, because Alice finally has someone who she can rely on to help navigate this very complex, currently very difficult user experience of security.

So that’s the positive. The negative is the over-reliance on AI systems, or placing too much trust or misplacing trust. That means that Alice isn’t necessarily checking what Charlie is doing. It also means that Alice might be susceptible to manipulation.

So if the AI agent is manipulated, and Alice trusts Charlie, then they have this almost sidekick relationship, and she is potentially more susceptible to being manipulated by her AI agent. And guess who loves that?

Threat actors.

[00:16:39] Chicago Camps: Looking forward, what advice would you give UX designers who are starting to design for a future where traditional UIs are replaced by AI driven experiences?

[00:16:49] Heidi Trost: Number one, I want you to understand the dynamics of those three players. And the reason that I think that’s so important is because of like a ping pong, like everything the one does, the other one has a reaction to it and it never ends like that back and forth is nonstop. So like you’re continuously having to modify the security user experience because threat actors are doing something you didn’t anticipate, users are doing something you didn’t anticipate. They’re reacting to the design of a system which just is never ending.

The other thing is learn the technology, right? I really think that’s important for UX people. I think there is a ton of opportunity for the field of UX to improve the security user experience specifically around AI, but you have to understand it at least to some level.

Be comfortable pushing back. Let me just say, you can’t lose data that you don’t gather, right? Or don’t keep. So if you feel like this just doesn’t seem right, feel comfortable voicing those concerns with your product team.

And the final piece, and I’m gonna build off of this last one, is help Charlie help Alice, right? You are the designers, you are the system designers. Help Charlie, the security user experience, help Alice.

And what do I mean by that?

What are like some specific things that you can think about and you can bring back to your product team? So I want you to remember that the experience is gonna be multimodal. And that’s both awesome and is gonna require a lot of service design and content designers and like different skill sets.

So if you’re thinking we’re gonna be using speech and gestures and facial expressions and the written word, all of these things, that means you have a lot on your plate in terms of how best to communicate to Alice in all those different ways, and also thinking about the different circumstances that she’s going to be in when she encounters these experiences.

It might also mean that depending on how Alice is experiencing these things, that she might be more or less likely to make a mistake, right? Maybe she’s in the middle of the grocery store in the checkout line. Maybe she’s carrying groceries to her car and has got her kid on the other arm, and you’re having to think about those experiences and where she’s more likely to potentially make a mistake or just be a human being human.

The other thing I want product teams to think about is: how can you help Alice understand and keep track of what’s being done, what her AI agent is doing, without overloading her? That’s the key, right? We can only take on so much.

We can only oversee so much. I just see this playing out in my head. It’s gonna be awesome in one respect: with the AI agent, we’re able to offload all of this stuff. But guess what we’re also gonna do? We’re just gonna take on more stuff.

So this is gonna get harder and more challenging as the AI does more, takes on more tasks on behalf of Alice. So think about how and where those actions need to be visible to Alice and how she can intervene.

The other thing is to help Alice understand what the system limitations are. Right now, security doesn’t do a great job of succinctly saying, you can reasonably expect this of us. And I think we’re seeing that even with ChatGPT, right? And other LLMs.

What can you reasonably expect this system to be able to do? And what are the things that you really need to check? And that’s just gonna become more and more important.

I think this is one of the things where we definitely need content designers because I think it is going to be very difficult to achieve. Alice will want to leverage AI and squeeze every bit out of it that she possibly can, but there are gonna be limitations and it’s gonna be very important that the system communicates those limitations to her.

The last thing is giving Alice the ability to revert, to fix a mistake that the AI agent may have made, or the AI agent went wild and Alice needs to go back and fix things. And this is a usability heuristic: basically, that Alice has some recourse, that she’s able to go back and fix something. So thinking about those things, right?

So the first thing I said was multimodal, and thinking about how best to communicate to Alice. The second thing, communicating to Alice what’s being done and why, but not overloading her. Then helping Alice understand what the system’s limitations are, and finally giving her a way to go back in time to fix things that might have gone wrong.

If you think about a Venn diagram of all of these different disciplines, engineering, design, security, law, privacy, product management, right? Those circles in the Venn diagram all need to overlap, and that’s where you can improve the security user experience.

So it’s not necessarily that one team has the secret sauce. I firmly believe that you need that overlap. The design team doesn’t know enough about security to be the only source in improving the security user experience. And they also need engineering and they also need their legal team and that sort of thing.

I definitely agree that design needs to play a bigger role, and I think there’s a huge opportunity, designers, for you to help improve the security user experience. That is the entire reason that I wrote this book.

‘Cause remember, you are designing the system!

Event Details
Tent Talks Featuring: Heidi Trost
Free
March 24, 2025, 5:00 pm – 6:00 pm