
David Dylan Thomas can help you tame the unconscious biases that can undermine your design decision-making.
These biases are strong. You may never conquer them all. But recognizing them and accounting for them in your content strategy and design work can mitigate the hazards they present.
You need to be on your toes at every turn to account for these cognitive biases. They can affect the products and experiences you design, your collaborations with your team, and your own behavior. Dave’s new book shows you how to deal with each of these challenges.
Dave and I talked about:
- the importance of understanding how people make decisions and how much of that process is unconscious and irrational
- how his Cognitive Bias Podcast led to the insights that inform his book
- an example of using anonymized resumes to remove bias from hiring processes
- how to re-introduce friction into design processes to slow down your thinking so that you have a chance to make less-biased decisions
- the importance of adopting design practices that check your biases – e.g., “Red Team, Blue Team” or speculative design
- the hazards of focusing on the positive outcomes of our design work and ignoring the many possible negative outcomes
- the story of Abraham Wald and how he brilliantly figured out where to put armor on warplanes, leading to insight about “survivorship bias”
- how cognitive biases manifest in general, in end-user designs, in internal design processes, and in your own personal behavior
- how the fear of loss is twice as powerful as the prospect of gain, illustrating the bias of “loss aversion”
- how the design of real-life and virtual spaces primes people for different behaviors
- the three key biases to consider when looking at your personal behavior:
  - notational bias
  - confirmation bias
  - déformation professionnelle, the bias of seeing the world through the lens of your job
Dave’s Bio
David Dylan Thomas, author of the book Design for Cognitive Bias from A Book Apart, serves as Content Strategy Advocate at Think Company and is the creator and host of the Cognitive Bias Podcast. He has developed digital strategies for major clients in entertainment, healthcare, publishing, finance, and retail. He has presented at TEDNYC, SXSW Interactive, Confab, LavaCon, UX Copenhagen, Artifact, IA Conference, Design and Content Conference, and the Wharton Web Conference on topics at the intersection of bias, design, and social justice.
Follow Dave on the Web
- daviddylanthomas.com
- Twitter: @movie_pundit
Links Mentioned in the Podcast
- Design for Cognitive Bias book
- Design for Community, Derek Powazek
- Joyful: The Surprising Power of Ordinary Things to Create Extraordinary Happiness, Ingrid Fetell Lee
Video
Here’s the video version of our conversation:
Podcast Intro Transcript
We human beings like to think that we’re rational creatures, carefully looking at an array of objective factors before we make a decision. In a professional setting like a content strategy or design practice, we may feel like we’re at the pinnacle of this rationality. In fact, we’re operating on auto-pilot about 95 percent of the time, making decisions based on biases that are hard-wired into our thinking. Dave Thomas can help you understand and tame these cognitive biases and make better design and business decisions.
Interview Transcript
Larry:
Hi, everyone. Welcome to Episode Number 80 of the Content Strategy Insights Podcast. I’m really happy today to have with us Dave Thomas.
Larry:
David Dylan Thomas was with us two years ago, shortly after Confab 2018, where he and I talked. So welcome back, Dave, I’m excited to see your new book. It’s called Design for Cognitive Bias. So tell us a little bit about the book, and what folks can expect from it.
Dave:
Sure. Well, first off, I'm really happy to be back. I can't believe it's been two years.
Dave:
Well, so I wrote a book called Design for Cognitive Bias, building on some of the ideas, I think, that we chatted about way back when. And the basic idea is understanding that, as designers or content strategists, basically, people who make things, part of our job is helping people make decisions. And in order to do our jobs well, we really need to understand how people make decisions.
Dave:
For the past few years, I’ve been studying bias and how it informs how we make decisions. So I’ve tried to take what I’ve understood about bias, and combine it with what I understand about content strategy and user experience, to try to give us tools to be more responsible in the way that we use design, and the way that we use content strategy.
Larry:
Right. And I love that responsibility and ethics really underlie a huge amount of your work. Tell us, though: a lot of conventional economics, and a lot of the social sciences, are based on this idea that human beings are these rational critters. We’ve just got these big brains that we use all the time. And so, we’re good to go.
Larry:
We’re set, right? So we’re pretty rational creatures, and these biases are just minor inconveniences. Is that the case?
Dave:
Yeah. So I have a section of the book called The Myth of the Rational User. The thing we have to understand is that something like 95% of human cognition goes on underneath the hood, below the level of conscious thought. So most of the decisions you’re making, even right now, are not conscious. They’re on autopilot.
Dave:
Just now, you took a sip of your tea. You could have thought very carefully about, “Well, how exactly do I want to hold that tea? And how long do I want to sit for, and exactly where do I want to put the tea down when I put it down?” If you thought carefully about every single one of those decisions, we wouldn’t get through this podcast, right? You’d barely get through the day.
Dave:
So a lot of these things are just on autopilot, and that’s usually a good thing, but every now and then, those shortcuts we’re taking lead to errors, and we call those errors cognitive biases.
Larry:
Well, and that’s how I first discovered you. I loved the Cognitive Bias Podcast, and I’d urge anybody who’s listening to go back and listen to it.
Larry:
It was great, very short overviews of all these cognitive biases. But you’ve now taken all of those, and rounded them up, and thought about their implications for design practice, and what to do with them.
Larry:
Can you talk just a little bit more about the scope of those biases? And maybe a little bit about this fundamental irrationality that, I think, some people need a little work to get their heads around.
Dave:
Sure. Well, it’s funny. I mean, and thanks for your kind words about the podcast.
Dave:
I’d been doing that for a little while, when I was asked by the city of Philadelphia to come and speak about accessibility for an internal panel they were doing. And they wanted to include cognitive bias in that realm of accessibility. So it forced me to start to think about the intersection between UX and bias.
Dave:
One of the first places I saw this was anonymized resumes. Unfortunately, in experiment after experiment, if you take two identical resumes, and the only difference is the name at the top of the resume, then if it’s a male-dominated field, the resume with the male name will keep going, and the resume with the female name will just stay on the pile.
Dave:
That’s a bias. That’s a shortcut, where the person looking at that resume has some pattern built up in their head that, if it’s a Web designer job, Web designer equals male. And even though, if you asked them, in their conscious thought, “Hey, do you think men are better designers than women?” They’d say, “No, of course, that’s ridiculous.”
Dave:
But the pattern that’s been built up like that, that shortcut, when they’re looking at that resume, just at a glance, they start to give them the side eye. And so, the design approach for that might be to say, “Well, hey, from a content and strategy perspective, that’s not actually useful information in the first place. The name isn’t helping you decide who you should hire, so why don’t we just take that out? Why don’t we just remove that design element, and any other design elements that might be biasing you, in a way that’s not useful, and just leave the important information, and redesign the resume to be less biased, or to not trigger your biases so easily?”
Dave:
That’s really the prime example, when I think of that intersection: we understand that there’s this bias that people are going to default to, without even realizing it. What can we, as content strategists and designers, do to rethink the tool, in this case, the resume, to better serve a less biased outcome?
Larry:
Right. I mean, so much of design talk is about making frictionless, easy experiences, but a lot of this is about slowing people down a little bit, and getting them to think about whether a name is even relevant to filling this job position. So is that one of the main principles, just figuring out ways to derail or sidetrack these biases a little bit?
Dave:
It’s really undermining two notions we have about design. So one is, as you said, about frictionless design where we think, “Oh, you want the most frictionless experience possible.” But the fact of the matter is, bias comes directly from thinking too fast.
Dave:
One of the foundational works on bias is called Thinking, Fast and Slow, by Daniel Kahneman. And the whole upshot is that when you’re thinking very quickly, that’s when you’re most likely to run into these errors. When you really process and slow down your thinking, that’s when you’re less likely to fall for some of this stuff.
Dave:
Part of it is understanding you should be putting in speed bumps to improve the quality of your user’s decisions. And then the other piece is, we think as designers, we’re supposed to be artfully revealing information, but what’s most useful for the user sometimes is to conceal information.
Dave:
That’s where the anonymized resume comes in. It challenges that notion of, “Oh, you have to give the user all of the information, and do it in this really cool progressive-disclosure way,” when, in fact, one of the decisions you have to make as a designer and as a content strategist is, “What shouldn’t I show? What’s actually going to make this worse, if I show it?” So I think those are the two things, from a design perspective, that we have to rethink when we’re designing for cognitive bias.
Larry:
That’s really interesting. Because I’ve studied design for years, and thinking back to my early days, I don’t recall that ever being raised as a concern: being thoughtful, and to what you said earlier about being ethical, about what you choose to show. Like that example you used of the Philadelphia hiring process, one of the follow-ons to that was, “Okay, we’re going to black out the name so that you don’t see that.”
Larry:
But the first thing people do is go look at their GitHub repo, and then immediately you have all of that information again. So they had a technical fix for that. Can you talk a little bit about that, in the-
Dave:
Oh, sure. So, actually following on that, when the city of Philadelphia asked me to speak about bias, I ended up talking to a guy who, at the time, was their chief data officer. And he said, “Oh, it’s funny, you brought that up, because we tried a round of blind hiring for our web developer position.”
Dave:
They realized that first they’d have to have someone physically print out the resume, and black it out with a marker, someone who had no stake in the hiring process, like an intern. And then, you would have the hiring manager look at that blacked-out resume, and pick some resumes that they liked. But what you do if you’re hiring a web developer, typically, is look at their GitHub profile.
Dave:
GitHub is a code repository. And you can see a developer’s portfolio, so to speak, by going there. And so, they would go there and look it up. And immediately, if you go to their GitHub profile, there’s their name and all the other personal information that you were trying to conceal before.
Dave:
So, very cleverly, they created a Chrome plugin that would actually redact all that information as it loaded, so you wouldn’t see it. And then they took that code and put it back on GitHub. It’s actually there now, if you want to use it for your own anonymized hiring.
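For readers curious what a plugin like that involves, here’s a minimal sketch of a redacting Chrome content script in TypeScript. The selectors and the blur approach are illustrative assumptions, not the code the city actually published; their real plugin is the one to grab from GitHub.

```typescript
// content-script.ts: a sketch of a content script that hides identifying
// elements on a GitHub profile page as it loads.
// The selectors below are illustrative guesses, not Philadelphia's code.

// Profile-page elements that can reveal a candidate's identity.
const IDENTIFYING_SELECTORS: string[] = [
  '.p-name',     // display name
  '.p-nickname', // username
  '.avatar',     // profile photo
  '.p-label',    // employer, location, and similar details
];

function redact(root: ParentNode): void {
  for (const selector of IDENTIFYING_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.textContent = '[redacted]';  // wipe the text
      el.style.filter = 'blur(10px)'; // hide images and anything left over
    });
  }
}

// Redact what is on the page now, then keep redacting as GitHub
// loads more content dynamically.
redact(document);
new MutationObserver(() => redact(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```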
Larry:
Nice. I’m trying to think of other examples like that. There are so many examples in the book; for every cognitive bias you talk about, you have a good example of it. Because putting the guardrails and speed bumps in, that’s one thing. Are there more proactive practices that arise from your insights around cognitive biases?
Dave:
Sure. I mean, a lot of the proactive things you need to do really come down to process, right? Like, how are you actually designing the things you’re doing?
Dave:
So there’s things I talk about, like red team/blue team, where we have our typical processes around, “Oh, we’re going to do some research. And then we’re going to create some wire frames. And then we’re going to put it into design, and then we’re going to develop it and test it.”
Dave:
And there are opportunities there to avoid decisions that lean into confirmation bias, or thinking you’ve hit on the solution, when actually there’s a much better solution you haven’t explored yet, or thinking you’ve hit on the solution, but you’ve not thought about the potential harm. So, red team/blue team is a practice that’s been around for a while. It’s used by the military, it’s used by journalists.
Dave:
The basic idea is, you do that first round of review, where it’s like, “Okay, I’m going to do the research. I’m going to come up with maybe some wire frames, get the idea into a place where it can be expressed.” But then, before I go any further, a red team is going to come in, and really take apart the work I’ve done. And not a design critique in the sense of, “Oh, I’m going to look at the aesthetics or the flow.”
Dave:
I’m going to look at it more like, “Hey, is there a better design you’ve left on the table, because you’re so in love with your original idea? Or is there harm that hasn’t even occurred to you, that might be caused by this thing, because again, you’ve been so narrowly focused on making this one solution work?”
Dave:
There are any number of procedures like that. Speculative design is another good one, where, if you built that in, it’s a good way to proactively say, “Rather than put it out in the world, and wait for all the bad e-mails to come in, saying, ‘Why didn’t you do this,’ I’m going to proactively try to anticipate, ‘Okay, what could go wrong? What’s the unhappy path?'”
Dave:
I think it’s ironic. We spend so much time, as designers, thinking about the happy path and, “Okay, how are we going to delight our users with this?”
Dave:
When in fact, it should be patently obvious that there are far more things that can go wrong than can go right. So we should actually be spending way more time thinking about what could go wrong, and designing for that. And then less time, not no time, but less time, on having things go right, because that’s actually not going to be the majority of cases, necessarily.
Larry:
Oh, that’s right. The rose-colored glasses tinting so many of our decisions. Just last night, I was reminded of Barbara Ehrenreich’s book about the negative impacts of positivity culture in America, the book that came out about 10 years ago.
Larry:
But there’s some analogy there, it seems: we always start with the best of intentions and the best hope, but it’s about consciously adopting, not a negative, but a critical mindset, as you go through. Is that a good way to think about that?
Dave:
Yeah. I mean, the bias that comes to mind is called survivorship bias. And there’s a great little story about this guy, Abraham Wald, who was a genius mathematician in Europe during World War II. He was Jewish and the Nazis were moving in, I believe he was in Austria, and he fled to the States. And while he was there, he was asked by the military, along with a bunch of other scientists, to help them figure out where to best put armor on planes, to keep them from getting shot down so much.
Dave:
So they had a bunch of planes there. And the planes had bullet holes in all these different places. And the other scientists in the room were basically saying, “Hey, obviously you’ve got all these bullet holes, we should just be putting more armor there.” And Abraham Wald said, “Wait, that’s exactly where you shouldn’t put more armor.”
Dave:
What he realized was, they were looking at the planes that made it back. What they needed to be looking at were the planes that got shot so badly that they couldn’t make it back. Those were the ones that were critically injured. And those pretty obviously were hit everywhere else. So if you picture the plane, and all the places that didn’t have bullet holes, oh, those are the most essential places. You need to put more armor there.
Dave:
The team in that room was looking at the survivors. They were looking at the best-case scenario, when what they really needed to be thinking about was the worst-case scenario. And as designers, we’re very good at thinking about, and selling, frankly, the best-case scenario. But we need to devote as much time, if not more, to thinking about those bad cases, if we really want resilient, useful design.
Larry:
I love that. And are there analogies? Like, if you were evaluating a portfolio of products that some company had, how would you do that, as a digital designer? To go in and look at a product history, and go, “Well, this worked, this didn’t.” How do you infer those bullet holes and where the armor should be in your digital design?
Dave:
I mean, I think that there are exercises and ways to think about that, again, proactively as you’re doing the work. I had a client once, where we were trying to … This was specifically not around functionality, but really around voice and tone: to come up with the different voice and tone characteristics of error messaging.
Dave:
We went through the product flow and we spent two, three hours, specifically just going point by point, “Okay. What could go wrong here? Okay. What could go wrong here? What is the user feeling at this point, and how are they going to feel if this goes wrong, versus if that goes wrong?” We were basically coming up with a library of potential emotions, on which we could then build a voice and tone guide. But the way to get there was to literally think about every single thing that could go wrong, and document that.
Dave:
To me, that’s what I’m thinking of when you’re talking about that portfolio example. It’s sort of, “Hey, it’s great that you showed me all the designs you have around the good use cases. I also want to see your designs around the bad use cases. And what choices did you make to account for those?”
Larry:
Right, okay. I get that. I love that. So much of what we’re talking about is the end product design, but all of this happens in the context of internal relationships, the various stakeholders, both the clients that you’re working with, but also your internal team, and all that.
Larry:
One of the things I love about the book is the way you organized it, so that that gets equal weight with the designs. How you do it is equally important to talk about as what you’re doing. Can you talk a little bit about that?
Larry:
I was trying to infer, as I went, but I couldn’t really put it together. Are there certain families of biases that apply more at the customer-facing level, versus at the internal level?
Dave:
Yeah.
Larry:
Or, yeah?
Dave:
It’s funny you put it that way. Just to clarify, the book is structured first to ground the reader in: okay, what is bias? What am I talking about?
Dave:
Then we move really into just user biases, and how to mitigate those. Then we move into your co-workers’, your stakeholders’ biases as a designer, as a strategist. And then, at the end, we talk about your biases personally.
Dave:
And I think you’re right. As you move through the book, I hadn’t noticed this before, but I think, user biases? There’s hundreds of them. The stakeholder biases tend to fall into like three or four main buckets. Then, finally, when we’re talking about your biases, there’s basically three biases, that are really, really dangerous, that I end up talking about.
Dave:
But for the stakeholder biases, it’s interesting. I think that a lot of what I’m talking about there is understanding that, just as your users are making decisions 95% of the time without thinking, the same is true of stakeholders.
Dave:
You have to understand that, if you’re having trouble convincing your boss of something or convincing your client of something, they’re not just being obstinate for the sake of being obstinate. There’s probably something underneath the hood. A lot of times, it comes down to things like incentivization.
Dave:
So if they’re not going to get their bonus, unless they ship 15 products this quarter, maybe that’s why they’re averse to research. Because it’s going to take up time, it takes time away from that. And if you don’t understand that, as a designer and as a collaborator, you’re going to have a difficult time, talking to them and convincing them of the value of research, for example.
Dave:
Or if you’re trying to make an argument about something, it’s good to understand loss aversion. I talk a lot in that section about loss aversion, and this idea that, generally, it hurts more to lose something than it feels good to gain something. It hurts about twice as much to lose $5 as it feels good to find $5.
Dave:
Oftentimes, when we’re approaching our bosses, our clients, people we answer to, and we want them to take a risky choice, they’re coming at that from a fear of losing more than from how good it would feel to win something. And that manifests itself like this: when you’re trying to get a client to make a risky decision, it’s better to point out the downside of not making that decision than it is to point out the upside of making it.
Dave:
For example, if I had to convince you to abandon this legacy CMS that nobody likes, but you keep using because of sunk cost fallacy and status quo bias: it’s the devil you know, and it feels risky to try something new. I could say, “Oh, well, this is how much money you’re going to make if you switch CMSes, and there are going to be unicorns tap dancing down the hallway, it’s going to be wonderful.”
Dave:
That won’t be as effective as saying, “Oh, if you keep with this bad CMS, here’s how much money you’re going to lose. Here’s how many people are going to quit, and you’re going to have to rehire and retrain people on this terrible CMS.” That’s going to be a more convincing argument for people who are feeling risk-averse.
Dave:
Again, it just goes back to this phenomenon called loss aversion. So there’s a handful of biases that it’s very likely the organization you’re dealing with is going to succumb to. If you understand that, it’s easier to navigate, and figure out how to frame your argument in a way that’s more helpful.
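For reference, Dave’s “twice as much” figure lines up with the standard prospect-theory value function from Tversky and Kahneman; the parameter values below are their published 1992 estimates, not numbers from the book:

```latex
v(x) =
\begin{cases}
  x^{\alpha}              & \text{if } x \ge 0 \\
  -\lambda\,(-x)^{\alpha} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx 0.88, \quad \lambda \approx 2.25
```

Plugging in the $5 example: finding $5 feels like v(5) ≈ 4.1, while losing $5 feels like v(-5) ≈ -9.3, roughly the two-to-one asymmetry Dave describes.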
Larry:
Got it. Hey, and another kind of framing: you just reminded me that, I think at the very end of that section on stakeholder and team stuff, you talk about the physical design of meeting spaces and workshop spaces. Can you talk a little bit about that, and how that fits in?
Dave:
Yeah. Yeah, so one of the core things I’ve been talking about for a while now, and I credit a guy named Derek Powazek for this: he literally wrote the book on design for community back in the day. It’s called Design for Community, from the early 2000s.
Dave:
One of the concepts he introduced me to is this notion that we’d like to think that online conflict is happening, because people show up, and they’re just angry, and they want to fight. And then that’s why Twitter is terrible and Facebook is terrible.
Dave:
That’s part of it, obviously. But one of the things he points out is, the design, the actual design of those online spaces, and the design of physical spaces, can actually influence the kind of conversation people are willing to have. It’s really fascinating to see the science on this.
Dave:
For example, if I’m having a meeting and there is a briefcase in that room that everyone can see, people are going to act more competitively than if, instead of a briefcase, it’s a backpack, right? We’re primed to think of business, and competition, and cutthroat behavior if we see a briefcase. And we embody that. It’s just interesting to see that play out.
Dave:
Another thing is money. The second you bring up money, people act less prosocially and more competitively. And so, there’s a section of the book devoted to a case study around an organization that had to figure out how to divvy up federal funding for an at-risk population in a major metropolitan city.
Dave:
They had lots of stakeholders who had to make this decision, from lots of different walks of life, and they would always just make the same decision every time. Whatever the budget was last year, they’d be, “Okay, let’s spend it the same way again,” because it was the safe choice.
Dave:
They weren’t really collaborating, they weren’t really engaging with the material. So they redesigned the physical space that this group was meeting in, to allow for more green space, to make it more movable. Because if we feel confined, we tend to make safer decisions. We don’t tend to be as creative.
Dave:
My favorite move was that they took a coat rack that was in the back of the room, that nobody used, and moved it to the front of the room. And then, to get people to actually use the coat rack, they would seed it with their own work sweaters.
Dave:
So when people walked in, there was always already a couple of things on the coat rack, and that would free them up to actually use it. And it seemed like a small thing, but the reason they did that was, when you first walked in, it looked like that space was a space that was used by human beings, right?
Dave:
You’ve been in these meeting rooms where it’s like, this room was designed for corporate staff meetings. This wasn’t designed for human beings to use.
Larry:
Yup.
Dave:
So all of those things led to a warmer environment. And from an agenda standpoint, they didn’t bring up money until the very, very end. They changed the agenda, too, to make it more prosocial. And the cumulative effect of all of that: it was the same people, it was the same topic.
Dave:
But these framing choices, these design choices, got those people to a point where they could look at those needs, without the context of the previous budget, but just look at what was needed and say, “Oh, this thing used to be needed. We’ve actually taken care of that. Now here’s this new emerging need. Let’s shift the budgeting over to this new emerging need,” which was kind of the problem to begin with, right?
Larry:
And that only happened because of the work on the physical space they were in. I love that. That sounds like a whole other book.
Dave:
Oh, I’m sure you could. You totally could. And in fact, to be fair, there’s a book by Ingrid Fetell Lee called Joyful, that was part of the inspiration for them changing the design of that space.
Dave:
So credit where credit is due, that was actually part of what inspired them. So there is literally a whole other book to be written about the design of spaces.
Larry:
Yeah. Hey, I want to make sure we also get to the last part of the book, where you talk about accounting for your own biases in your professional conduct and practice. Tell me a little bit about that. And again, you said there are just about three predictable biases that come up in …
Dave:
Yeah. The ones that I really focus on are notational bias, which is really just how your own prejudices can find their way into how you structure content.
Dave:
The most obvious example is, if I’m creating a form that’s supposed to gather personal information, and when it comes to gender, in my head, I only think there are two genders, male and female.
Dave:
If I create the form that way, I’m going to erase all these identities without even really thinking about it. So that’s notational bias. There has to be a way to check your own prejudices there.
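As a concrete sketch of what checking notational bias in a form might look like, here’s a small TypeScript field definition. The field shape and the option wording are illustrative assumptions, drawn from common inclusive-form guidance rather than from the book:

```typescript
// A sketch of a gender field that tries not to erase identities.
// The option wording is illustrative, not prescribed by the book.
interface SelectField {
  id: string;
  label: string;
  required: boolean;          // first question: do we need this data at all?
  options: string[];
  allowSelfDescribe: boolean; // opens a free-text input when chosen
}

const genderField: SelectField = {
  id: 'gender',
  label: 'Gender (optional)',
  required: false,
  options: [
    'Woman',
    'Man',
    'Non-binary',
    'Prefer to self-describe',
    'Prefer not to say',
  ],
  allowSelfDescribe: true,
};
```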
Dave:
There’s confirmation bias, which we talked a little bit about before: “Oh, I found the perfect answer, I found the perfect design, so I’m just going to go down that path.”
Dave:
And the true scientific method, for example, doesn’t just say, “I’m going to test something, and if I’m right, great, let’s move on.” The true scientific method says, “Okay, I’m going to test something. And if I’m right, I’m going to now do everything I can to prove myself wrong.” I’m going to say, “If I’m wrong, what else might be true? Let me test that.” And that’s a much more rigorous way to find the better solution.
Dave:
There are examples I give in the book of how easy it is to simply think you’ve got the perfect solution, and miss a completely obvious, better solution. And really, the way to avoid that outcome is just to have other people, other perspectives in the room: preferably, perspectives of people less powerful than you, people who are going to be impacted by that design, who don’t have as much power as you.
Dave:
In fact, if you can give them power by not only using them as research subjects, but actually making them stakeholders, where the design doesn’t ship unless they’re happy? That’s where, inevitably, the end of the book really becomes about design ethics and design justice.
Dave:
You can’t talk about this and have that not be the inevitable outcome. So really, it’s almost like, at the end of the book, I’m like, “Got you. This is actually about design ethics.”
Dave:
But that, I think, is where you need to get to. The book tries to give you some useful, constructive ways to invite other perspectives, and give power and honor to perspectives that are not your own, in order to create this more ethical, helpful, less harmful design.
Larry:
You just reminded me of an old cartoon, a guy at a vending machine that says “ethics,” and he’s trying to reach up and steal something out of it. That’s sort of your approach in this book: you’re sneaking the ethics in. That’s great.
Larry:
Yeah. Well, Dave, we’re coming up on time. Well, actually, one thing I wanted to ask about that. I can’t remember if this is exactly how you put it, but if you’re facing confirmation bias in a design process, you’d go to red team/blue team, or something like that.
Larry:
But near the end, you talked about Black Mirror. Whenever you have a good idea, and are getting confirmed in your own head about, “This is a great idea, we’re going to do this,” pause, and write a Black Mirror episode about it.
Dave:
Yeah.
Larry:
And then revisit it. I mean, talk a little bit about that.
Dave:
Yes. I mentioned speculative design earlier, and that’s really the formal name for what a show like Black Mirror does. Black Mirror, if you haven’t seen it, is like Twilight Zone for tech. It takes some near-future tech, and then tells a story about how an actual human being would use it, and the outcome’s usually terrible.
Dave:
I argue that anyone working on a new technology should, by law, have to write a Black Mirror episode about it. But it really is a useful exercise, where you can take a design and put it through its paces, or take a course of action, and really think through the outcome.
Dave:
Superflux did a thing with the United Arab Emirates back in the day, where the UAE was trying to figure out, “Are we going to continue down the path of fossil fuels, or should we invest in renewables?” And Superflux came in and said, “Okay, let’s think about what your air quality is going to be like, five years out, 10 years out, 15 years out.” And they didn’t just figure out what it would be like, they bottled it.
Dave:
Then they made them breathe it. And by 10 years out, it’s unbreathable. And so, by the end of that engagement, they were like, “Yeah, we’re going to invest,” I think they said $150 billion, in renewables.
Dave:
So honestly, it’s a fun way to create the unhappy path: you just tell a story about it, the same way that you tell user stories about, “Oh, this is what it looks like when the future goes well.” We do user stories for that. We should also do user stories for, “Oh, this is what happens when the user does something terrible,” you know?
Larry:
Oh yeah. A lot of people would be won over by that example of, “And here’s how your world will smell if you pursue this path.”
Dave:
Yeah.
Larry:
That’s super powerful, yeah. Well, hey, Dave, I’m going to close-
Dave:
Oh, I-
Larry:
Oh, I’m sorry. Go ahead. Yes?
Dave:
No, I just want to say, the other thing is, I said there were three biases, and the one that is really important, that I close with, is called déformation professionnelle. It’s this idea of seeing the world through the lens of your job.
Dave:
And so, where I build to at the end, spoiler alert, I guess, is that we as designers need to define our jobs beyond, “Just make cool stuff.” We really need to say, “Okay, how can we design things in a way that lets us be more human to each other?” And again, I give some examples of how that can go very, very wrong, and why we need to be very concerned about that.
Dave:
But that’s the other key bias, I think, we can fall into as designers and content strategists: “Oh, I’ve done my job if I create a really cool-sounding strategy that’s going to get all of this engagement, right? I’ve done my job.” If you define your job that narrowly, you’re at risk of introducing something harmful without even realizing it.
Larry:
Right. And that seems super powerful, and potentially really dangerous. It’s like the Nuremberg thing: “I’m just doing my job.”
Dave:
Oh yeah.
Larry:
So yeah. Well, hey, Dave, we’re coming up close to time. I always like to give my guests a chance: is there any last thing, anything that hasn’t come up, or that’s just on your mind about design and cognitive biases, or content strategy, or-
Dave:
Sure. So I would say, in general, if you want to keep up with all things Dave, daviddylanthomas.com has links to my book: how you can get it, how to book me as a speaker, and how to just get in touch. But it also has a link to an event I have coming up.
Dave:
If this launches on the launch date for the book, which is August 25th, then on August 28th, the Friday at the end of that week, I am doing a launch party, a digital launch party where you can come. The price of the ticket actually includes a digital copy of the book.
Dave:
It’s only 10 bucks. And you can come, and I’m going to do a book reading. I’m going to do an interview with a guy named Alex Hillman, who just created his own book called Tiny MBA. We’re going to do Q&A.
Dave:
We’re going to do this fun thing called an idea exchange. It’s going to be a lot of fun. So if you are free on Friday, August 28th, come to the launch party.
Larry:
Great. And I’m looking forward to it. Assuming I can stay on schedule with production, I plan to drop this on the book launch date, on the 25th, so folks should have some notice.
Dave:
Awesome.
Larry:
And I’ll promote that, I’ll announce that ahead of time too. Also, one last thing: if people just want to follow you in general, what’s the best way? Twitter, LinkedIn, do you have a preferred medium?
Dave:
Yeah. I’m on Twitter at @movie_pundit, and my Twitter and my LinkedIn are also on daviddylanthomas.com. If you go there, it’s probably the one-stop shopping for all things Dave. So I would just go there.
Larry:
Great. I’ll put those in the show notes as well. Well, thanks so much, Dave. It was fun.
Larry:
I feel like I’ve made a big loop here, coming back around. So thanks so much for coming back.
Dave:
Thank you. It’s my pleasure.