Designing voice conversations requires new skills and new ways of thinking about how people interact with your digital product.
Rebecca Evanhoe and Diana Deibel are experts in this new approach to interaction design.
Like many content strategists, they are learning on the fly. And they are constantly studying the steady stream of research in their field. They’ll share their discoveries in a book, Conversations With Things, in the spring of 2021.


We talked about:
- Rebecca and Diana’s backgrounds
- the differences between designing for voice interfaces and for graphical interfaces
- the higher expectations that people have for conversational voice interactions
- some of the challenges of designing for voice interactions: navigation, lack of standardization, lower impatience thresholds in voice interactions, etc.
- how conversation design fits in the UX field
- the principles and concepts that underlie conversation design: personas (of the voice agent), prompts, training data, etc.
- the implications for user trust in systems that use AI (artificial intelligence)
- conventions around the transition from AI-driven bots to human agents in a conversation system
- the ethics of conversation design
- practices to instill trust in voice interfaces and allay user privacy concerns
- racial and gender and geographical biases that can creep into conversation design
- the rapid evolution of the research that underlies conversation design
- the challenges of managing the content associated with conversation design
- the value of the contributions of humanities disciplines to conversation design
- the need for more diverse perspectives in conversation design, and the need to try harder to accomplish this
Diana and Rebecca’s Bios
Diana Deibel, Design Director at Grand Studio in Chicago, is an award-winning Brazilian-American writer and VUI designer with a background in fictional dialogue. She has designed multi-channel voice-first products, chatbots for healthcare, insurance, and HR operations, smart speaker skills, and large IVR systems. She is a national speaker and VUI consultant who has set up voice practices for Fortune 100 companies, among others. In addition to conversation design, she has written and produced for a variety of networks and creatives, including Animal Planet and Blue Man Group. She co-created two TV pilots, now in pre-development with One Bowl Productions, and has had several plays produced with the Modern-Day Griot Theatre Company in Brooklyn (under the name Diana de Souza). She loves learning, puns, and leading workshops on dialogue to help others find their voices.
Rebecca Evanhoe, author and conversation designer, has been developing technology that you talk to since 2011 at companies like Amazon Web Services, Mobiquity, and Shadow Health. She has created virtual patient characters for chat-based learning games, bots for fun and service, and interactive experiences for Alexa and Google Home platforms. Along with her experience in voice and conversation, she earned an MFA in creative writing. She teaches conversation design as a visiting assistant professor at Pratt Institute, and she leads workshops in a variety of writing genres, from creative to technical to UX. Her fiction can be found in the O. Henry Prize Collection, Harper’s Magazine, Vice, NOON, and Gulf Coast, among others.
Books Mentioned in the Podcast
- Conversations With Things (their new book, coming in 2021)
- Weapons of Math Destruction
- Algorithms of Oppression
Video
Here’s the video version of our conversation:
Podcast Intro Transcript
Most of the content and interaction designers that you encounter in the content strategy field come from design, copywriting, journalism, and similar careers. In those fields, content presentation design happens in a GUI – a graphical user interface. In the new field of conversation design, the interactions happen in a VUI – a voice user interface. Rebecca Evanhoe and Diana Deibel are experts in this new type of design. They’ve got some great insights into how you can create truly conversational voice interactions.
Interview Transcript
Larry:
Hi, everyone. Welcome to episode number 71 of the Content Strategy Insights podcast. I’m really delighted today to have with us Rebecca Evanhoe and Diana Deibel. Rebecca is a freelance conversation designer, and Diana is a design director at Grand Studio. Rebecca, I want you to tell the folks a little bit more about yourself… Oh, wait. One other thing I want to mention is that the reason you both came to my attention is that you’re working on a book right now. It’s called Conversations With Things, and it’s coming out, probably what, about a year from now, I think you said. I just want to note that folks should keep an eye peeled for that book. Rebecca, you’re showing up first on my screen; tell the folks a little bit more about your background and then how you got into conversation design.
Rebecca:
Okay. Like many conversation designers, I have a bit of an unusual pathway into the work. My undergrad degree is in chemistry and I have an MFA in fiction writing, so I’m a bit of a right-brain/left-brain person, like a lot of people in this field. I got my start working for a startup in Gainesville, Florida, when I was still in graduate school. It was a conversation design role, but it was very much DIY at a tiny little startup, and I didn’t know I was doing conversation design. From there, when the industry started growing and Alexa became a public figure, I realized I had a skillset that suddenly was in very high demand, and from there I worked at a couple of different places – Mobiquity, and most recently I was at AWS.
Larry:
Nice. And Diana, tell us a little bit about your path.
Diana:
Yeah, thanks for having us. Like Rebecca, I have a kind of circuitous path here, and a bit accidental as well. My degree is in playwriting, which seemed like a long shot at the time, but it turned out to be really useful. After I graduated, I did the thing of moving to New York City, which is what you do when you have any kind of theater interest and are able to, and just worked a bunch of odd jobs until I found myself working in theater and documentary production, which evolved into a lot of audio work – a lot of sound editing and clip editing. When I moved to Chicago, I continued that documentary work, but there’s less entertainment industry in general in Chicago than there is on the coasts.
Diana:
So eventually I found my way into, similar to Rebecca, a smaller startup that was health based – a health tech company. The role that I signed on for there originally was as a health writer, which was, again like Rebecca’s, not at all what it sounds like or what the job actually entailed. It was writing interactive multimedia patient education that, over the course of a couple of years, evolved into VUI products where we were doing IVR outreach. It started out very small, like reminders and things like that, and evolved into longitudinal conversations to triage patients on behalf of nurses. So it got pretty complicated towards the end.
Larry:
Hey, I want to stop you right there because you’re starting to throw out three letter acronyms. So I want to slow down, bring our folks up to speed. So VUI, V-U-I that’s like a voice user interface, sort of an analogy to GUI, is that correct?
Rebecca:
Yeah, that’s right.
Diana:
Yeah.
Larry:
Okay. And then IVR, you use that term. Is that…
Diana:
Interactive voice response. It’s essentially a robocall, or one of those awesome phone banks that you reach when you’re calling, like, an airline or a bank or, God help you, your internet company.
Larry:
I think we all know that horror. Yeah, that’s great. We were talking before we went on the air about how refreshing it is for me to have folks who come from entirely different backgrounds into this world, because almost everybody else coming to the… not everybody, but a lot of the people I’ve talked to on this podcast are coming from the GUI, the graphical user interface – how to display a story or a diagram or something on paper, in words. So the VUI… how do you say that? view-ey, voo-ey?
Diana:
VUI like rhymes with gooey.
Larry:
VUI rhymes with gooey. Okay, that makes perfect sense. Tell me, do either of you have experience operating in a GUI? I mean, you do just browsing the web and stuff, but can you talk a little bit about the differences in designing for a VUI versus a GUI?
Diana:
Yeah, sure. So I work at… Grand Studio is a product design consultancy, and we do a lot of weird things, but most of them result in at least some form of a digital product. Sometimes that encompasses other channels or service design as well, but the bread and butter is really digital product design.
Diana:
So I actually kind of backed into that from VUI, which is not, I think, the traditional path, but I’ve found there are a lot of corollaries in terms of process. If you’re coming from a human-centered design base, it’s the same approach either way, just a different output. To me, there are a lot more affordances in screen design than there are in VUI design, simply because we’ve had a lot longer to learn how to interact “appropriately” with a visual digital product. We haven’t had that kind of timeframe with conversational products, and as humans we expect a lot more from something that claims to be conversational and have a conversational UI. We’re a lot more forgiving around screen design.
Larry:
That’s so interesting to me, because conversation is probably the oldest form of human communication, and printed stuff, or stuff displayed on a screen, is so much newer. So it almost seems like you’re tugged at both ends: there’s this obligation to be true to human nature, but you’re new at it and you don’t have all the history and the legacy and the tools that GUI designers have. How do you cope with that?
Rebecca:
I think that question is still being answered. One of the big pain points of designing for voice interfaces is navigation. There’s a lot of discussion about whether you should give your user a menu verbally, whether you should keep it open-ended, at what point you might introduce a menu. There’s not a lot of standardization, so users aren’t sure if they can say things like, “Skip ahead,” or “Go back,” or “Start over.” Sometimes that works and sometimes it doesn’t, and it depends on the chatbot or the voice assistant that you’re talking to. The example that I like to use is when you’re searching a website and you don’t see the information that you want right away: by now we all know to start looking for those three little lines that they call the hamburger menu and kind of go from there, or maybe you’ll go back out to Google and try your search again and see if you can hit a more specific page.
Rebecca:
So when we’re doing that with a GUI, we know alternate ways to try to find the information we need if it’s not working the first time. With voice there’s no standardization, so we don’t know how to find that information, and there’s the added pain point that people get frustrated much more easily with voice interfaces. The frustration mounts really quickly, so there’s a lot at stake for current conversation designers to work around.
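To make that lack of standardization concrete, here’s a minimal Python sketch – purely illustrative, not any platform’s actual API – of the kind of “global command” layer a conversation designer might specify so that phrases like “go back” and “start over” work at any point in a dialog:

```python
# Hypothetical global-command layer for a voice app; not a real platform's API.
GLOBAL_COMMANDS = {
    "go back": "NAVIGATE_BACK",
    "start over": "RESTART",
    "skip ahead": "SKIP_STEP",
    "help": "READ_MENU",
}

def route_utterance(utterance: str, expected_intent: str) -> str:
    """Check global navigation phrases first; otherwise defer to the current step."""
    normalized = utterance.strip().lower()
    if normalized in GLOBAL_COMMANDS:
        return GLOBAL_COMMANDS[normalized]
    # Anything else is interpreted by whatever the dialog currently expects.
    return expected_intent

if __name__ == "__main__":
    print(route_utterance("Start over", "CHOOSE_FLIGHT"))    # -> RESTART
    print(route_utterance("the 3 pm one", "CHOOSE_FLIGHT"))  # -> CHOOSE_FLIGHT
```

Until conventions like this are shared across assistants, users have no way to know which of those phrases any given voice interface will honor.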
Larry:
I can picture that because the expectation is like, “I am just having a conversation. Why can’t you just answer my darn question?”
Rebecca:
Exactly.
Larry:
Yeah, oy. Right. So there aren’t conventions, there’s no hamburger menu… Well, kind of related to that, I think, is that neither of you has mentioned explicitly the practice of UX design. Would you both identify as UX designers, and with the interaction design part of that field?
Rebecca:
I would say a firm yes for me.
Larry:
And Diana?
Diana:
Yeah. It’s so funny, because everybody defines “UX designer” a different way. So short answer, yes. Longer answer, maybe… like, it depends on the definition that’s coming out of that.
Larry:
Yeah. No, because obviously you’re both designing conversational experiences. Anyhow, I think one of the common themes across content strategy is that labels and terminology are completely inconsistent. You’re just doing the work, and people will call you whatever they want, so you have something to put on your LinkedIn profile.
Larry:
One thing I love about just reading the table of contents for your book at the Rosenfeld Media site – it’s very principles-based – is that you start right off the bat with the principles. Can one or both of you talk about the principles that underlie conversation design? I’m inferring from some of the things you’ve said the need for a principles-based approach, but I’m wondering what those principles are and how they manifest in your work.
Rebecca:
Yeah.
Diana:
You want to take that one?
Rebecca:
Sure. So one thing that we’re finding as we gather our thoughts and do the research for this book is that, particularly because this field is so interdisciplinary – there are linguists, there are playwrights, fiction writers like me, there are people with data backgrounds, there are developers – everybody has different ideas about what these principles are. So I like to start by saying that there are some unique artifacts that conversation designers develop as part of what we do. One of them is the persona, and that is distinct from the user personas UX people talk about. When conversation designers say persona, they’re talking about the personality or character of the system – the chatbot or the virtual assistant. So there’s the persona component.
Rebecca:
We do a lot of work actually writing the prompts – writing what a conversational interface responds with. We do a lot of work with, or at least are involved with, training data and how that impacts the user’s experience. Also, the bulk of our work is looking holistically at all the different conversational pathways that a user could take. That includes both “happy” paths, of which there are often many, and error paths – how to repair, how to recover from errors. So those are the key components, I think, that are unique to conversation design.
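As a rough illustration of those happy and error paths, here’s a hedged sketch – the prompt copy and structure are invented for this example, not drawn from the book – of the escalating “repair” reprompts a designer might write for a single question:

```python
# Hypothetical prompt copy for one question, with escalating "repair" prompts.
PROMPTS = {
    "ask_travel_date": [
        "What day would you like to fly?",                       # happy path
        "Sorry, I didn't catch that. What day are you flying?",  # repair 1: re-ask
        "You can say a date, like 'March 3rd'. What day?",       # repair 2: add guidance
    ],
}

def prompt_for(question: str, error_count: int) -> str:
    """Pick a prompt variant based on how many times recognition has failed."""
    variants = PROMPTS[question]
    # After the last variant, keep using the most explicit repair prompt.
    return variants[min(error_count, len(variants) - 1)]

if __name__ == "__main__":
    for errors in range(4):
        print(prompt_for("ask_travel_date", errors))
```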
Rebecca:
The processes that we use are more general UX: starting with user research, iterating on prototypes, doing lots of usability testing. So there’s a snapshot of some of the work of conversation design as a foundation. Diana, do you want to talk about some of the principles in our book? I know we talk a lot about trust, things like that.
Diana:
Yeah, that is probably the biggest one. Anytime you’re going to use an AI to do anything, there has to be trust that the AI is going to serve up whatever it is that you’re asking of it, whether that’s served up in a conversation, on a screen, or in any other capacity. Beyond that, there also has to be a very distinct line in the sand between what is the responsibility of the AI and what is the responsibility of the human. That extends beyond, let’s say, conversational partners – people on opposing sides of that conversation. It also pertains to the people who are essentially on the same side as the AI, who are helping provide that service.
Diana:
Think of a chatbot: you might be talking to the chatbot, and it’s powered by an AI – it’s a machine, it’s a system – and then at some point you get switched over to a human who, in that same channel, in that same modality, is conversing with you in a very similar way. You have to have those lines in the sand from a system perspective – when does the system stop and the human take over? – and then a very clear external-facing perspective: as the user of this product, when am I talking to a bot and when am I talking to a human?
Larry:
Are there conventions around that? I’m trying to think of examples where it’s like, “Hey, you’re talking to a human.” Is there anything like that that’s…
Diana:
Very clearly… I’ve had experience with this before. Let’s go back to the IVR, the phone bank. If you call in to, let’s say, change a flight, you’ll probably get hit with that “Press 1 for this, press 2 for that” system first, and then eventually it will say, “Okay, I’m going to transfer you now to someone who can help you,” and then you get somebody: “Hi, this is so-and-so, and I can help you. I see that you’ve been giving us some information.”
Diana:
There’s some sort of transition that indicates when you’re going to leave the bot portion of the interaction and when you’ve fully passed into the human portion. That’s a very basic version, but there should always be some kind of flag wave of, “We’re done here,” and, “Hello, welcome to this side of things.”
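Here’s a small, purely illustrative sketch of that handoff convention – the escalation rule and the wording are invented for the example, not drawn from any production system:

```python
# Invented escalation rule and handoff wording; real systems vary widely.
def needs_human(nlu_confidence: float, error_count: int) -> bool:
    """Escalate on low understanding confidence or repeated failures."""
    return nlu_confidence < 0.4 or error_count >= 2

def handoff_messages(agent_name: str) -> list:
    return [
        # The bot waves the flag Diana describes: "we're done here."
        "I'm going to transfer you now to someone who can help you.",
        # The human marks the other side of the line in the sand.
        f"Hi, this is {agent_name}. I can see the details you've already given us.",
    ]

if __name__ == "__main__":
    if needs_human(nlu_confidence=0.25, error_count=1):
        for line in handoff_messages("Sam"):
            print(line)
```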
Larry:
It just seems like that would be a really important cue for humans to know, so they don’t feel like they’re constantly in a Turing test or something.
Rebecca:
Absolutely, and there are groups that are publishing ethical standards. As a snapshot of where that is: Microsoft has published ethical guidelines, and a bunch of other groups, like the AI Now Institute, for example, are looking at these sorts of things. All of them are saying a bot should never pretend that it’s a human. It should always be explicit about being an artificial mind.
Rebecca:
I think Google Duplex blurred those lines – that was the technology that would call and schedule hair appointments or restaurant reservations as a synthetic voice, as an artificial intelligence, but it was so convincing that the people answering the phone didn’t know that it was not a real person.
Larry:
That was interesting, because the prompts that made you think it was human – it was introducing just sort of ers and ums and things that human beings would do. So that’s an interesting one, because it seems like if you were just making this up out of thin air, you’d think, “Oh, that’ll make people comfortable,” but it turns out that it doesn’t, it sounds like.
Rebecca:
Absolutely. Our cell phones know more about us than the current iteration of in-home virtual assistants do, but there is something about how intimately the human brain interacts with language and with communication that makes people much more creeped out by things that they’re talking to. They feel more uncanny. I’ve heard people say over and over, “I don’t want an Alexa because she’s always listening,” which technically isn’t true, but there is something a lot more gut-level with conversational interfaces that makes people extra easily spooked, I think.
Larry:
Yep. What you just said reminds me of the trust issue – that we trust Alexa to just be listening for the prompt to start. What’s the mechanism by which you instill that trust?
Diana:
Well, that’s why Alexa has things like… if we’re specifically talking about Alexa here – obviously all of the devices have some sort of indicator – Alexa specifically has that blue ring to give you an indicator beyond just an aural ding. I can’t make the sound, it’s not quite a ding, but it gives you different ways of perceiving that it’s listening. That said, you might not always be aware when that comes on. You may miss the aural cue – your back may be turned to it, or it may be positioned in another room or something.
Diana:
So, to be the counterpoint to Rebecca’s “It’s not that bad, it’s not listening” – I’m definitely more of a tinfoil-hat person. We leave ours unplugged all the time. My husband won’t even allow the Google Hub in the house; it sits in my office. So we’re on the far end of that spectrum, but I think there is some legitimate concern, particularly from people who have experienced the negative effects of surveillance, that this is going to extend beyond what they’re comfortable sharing privately. Because of the idea that it has to listen, it has to send things up to the cloud – even if it’s indicating that it’s listening, we’ve all seen and heard the horror stories of moments where something has gone wrong and a device has listened when it shouldn’t have been listening, or sent something somewhere it shouldn’t have been sending it. So I think there’s a fair skepticism of a device’s ability to protect a user’s privacy.
Larry:
I think I can almost picture a whole other conversation about the ethics of this. I think back to that… what was the Microsoft thing [Tay] that learned, within like 12 hours, how to be just a racist troll?
Rebecca:
Yeah.
Larry:
Were there lessons from that that informed these sorts of ethical…
Diana:
Well, I would say if you haven’t read Weapons of Math Destruction, put that on your reading list right away…
Larry:
I’ll put it in the show notes too, yeah.
Diana:
It’s all about, essentially, how we teach AI via algorithms. Whatever we’re feeding it has a bias, and even if we’re trying to do our best to go beyond that bias – our own bias – we always have one. The machine can only be as good as the people who design it and the information that it’s fed.
Rebecca:
Absolutely.
Diana:
Rebecca, do you have a…
Rebecca:
Yeah, the book I’ll plug is Algorithms of Oppression, which I talk about almost every day. There’s the human bias that comes in, but there’s also just the simple fact – and Diana said this too – that the data itself is not always reliable, and the data is not always as robust as it needs to be.
Rebecca:
So an example of what that looks like in conversation design is that these assistants tend to work better for white people. They tend to work better for males. They tend to work better for people like me from the Midwest – I’m from Kansas, I don’t have a Northeastern accent, I have what people call the “neutral accent,” although a neutral accent isn’t actually a thing.
Rebecca:
So there are certain populations these work better for, and it’s because the training data is not representing enough people. Another great example: a lot of people who have Down syndrome could really benefit from voice assistants, because they can help with daily tasks and practicing things and all kinds of stuff. But because of physical differences in their mouths, their sound waves are different, and these devices haven’t been trained on those types of sound waves.
Rebecca:
Diana, what’s the name of that? Is it Project Listen? [Project Understood] There’s a group working on this – the Canadian Down Syndrome Society is working with Google to correct it. That’s just one example of a type of bias where the training data wasn’t robust; it didn’t include all people.
Larry:
Interesting. So the issues of inclusion are super important here. If you’re going to have a conversation with somebody, you’d better understand how their vocal cords work and how they might be limited. Yeah, that’s so interesting, and you guys are figuring that out as you go, it sounds like.
Rebecca:
A lot of the research is unfolding in real time. Each week, some big study comes out and we’re like, “Oh, we have to put this in the book,” and our understanding is certainly evolving. The whole field is changing on a weekly basis, which is really exciting.
Larry:
Yeah. It sounds like an interesting field. One thing I want to make sure we get to – and I don’t know exactly how to stitch it in with what we’ve talked about – but basically, like everybody else in the content world, we’re all thinking about some kind of content management system where the content that we design and create and manage gets tucked away. A lot of what you’ve talked about involves answers that get served up. Is there a content management system for voice content, or how does the backend of this work?
Rebecca:
The million-dollar question. Most conversation designers that I talk to – and my own experience bears this out – sort of stitch together a bunch of different tools, and it’s a common complaint that there isn’t that sort of one-stop shop where you’re storing all of your assets: the training data, the prompts, the bots. There are videos, and there’s visual data too, that come along in these interactions. A lot of times those things are all stored in different places and are hard to keep in sync. Diana, I know you have experience with this as well.
Diana:
Yeah, and to the broader question, there are banks of training data for voice, for NLU [natural language understanding] systems, that have come from the IVR legacy of just collecting phone calls and people talking, which is why we have such a narrow focus on what voices are accepted by machines or not accepted by machines.
Diana:
Then beyond that, there’s been work, especially by some of the bigger companies like Google and Amazon, to create repositories of different voices, or of different inputs if it’s chat – that’s what Project Understood, the Down syndrome project, is. So it can depend on what you’re using – if you’re using one of these platforms, these repositories that already exist – in terms of the content management, what you have to pull from in terms of recognition, and what you might have to train.
Diana:
Then in terms of how it gets connected, you have to do the work of tagging it and saying, “This phrase belongs to this question” – or what we call an intent. Like, “I intend to ask you about delivery times for my pizza.” I might say that a number of different ways, or there might be a number of subtopics beneath it, but as designers we have to tag all of that to make sure it gets bucketed in the right place, so that the system doesn’t answer you about pizza toppings if that’s not what you actually mean for it to be answering.
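As a toy illustration of that tagging work, here’s a deliberately naive Python sketch – the intent names and phrases are invented, and real systems use trained NLU models rather than word overlap:

```python
# Toy training data: sample phrases tagged ("bucketed") under intents.
TRAINING_DATA = {
    "pizza.delivery_time": [
        "when will my pizza arrive",
        "how long until delivery",
        "what time is my pizza coming",
    ],
    "pizza.toppings": [
        "what toppings do you have",
        "can I add mushrooms",
    ],
}

def classify(utterance: str) -> str:
    """Pick the intent whose tagged phrases best overlap the user's words."""
    words = set(utterance.lower().split())
    best_intent, best_score = "fallback", 0
    for intent, phrases in TRAINING_DATA.items():
        for phrase in phrases:
            score = len(words & set(phrase.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

if __name__ == "__main__":
    print(classify("how long till my pizza gets here"))  # -> pizza.delivery_time
```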
Larry:
It’s a little mind-blowing to think about all the possible questions, all the possible answers, and then the metadata to structure the data so that you can get back to it – and then have that happen in real time. I’m impressed that this whole thing even works. It’s pretty amazing.
Larry:
I noticed we’re already coming close to time. These things always go so quickly. One thing I like to do in the podcast is give my guests a chance, if there’s anything last – anything that’s come up in the conversation that you want to comment on, or just anything that’s on your mind about conversation design or UX design or design in general – before we wrap up. Do either or both of you have anything you want to make sure we get to?
Rebecca:
One thing I always like to bring up: I think this field is such a perfect example of where the tech industry needs people who’ve studied the humanities. I always make a plug for that. So many different disciplines have been studying how people interact with each other, how people talk to each other, how people emote together – linguistics, fiction writing, playwriting, poetry. People with humanities perspectives are really valuable in this space, and I think tech companies are starting to recognize and appreciate that, which is always exciting to me. I also like to use that as a call to invite people: maybe you have a writing background or a philosophy background. If this work interests you, you can probably find a way into it.
Larry:
Great.
Diana:
Yeah, and I would build… Oh sorry, go ahead.
Larry:
No, I just want to say, I love that, because half my friends are humanities majors. So there’s work for them in this field. Great.
Rebecca:
Yeah, we’re employable again.
Larry:
Yeah.
Diana:
To build on that, I would also say that in general in the design industry – I don’t think this is surprising news to anyone – we have a real diversity problem. We just don’t have enough different perspectives and voices involved, particularly when you’re building something from the ground up, like we are with voice design. Of course, it has been around since essentially the ’50s, and there is a history here, but this latest push of using natural conversation, finding new ways to use the technology, and pushing beyond what we’ve had really requires a more inclusive outlook. We simply cannot keep designing with only the people who have been designing – and that’s not to say people don’t have experience or value to provide to the field if they’ve been working in it – but we need new people. We need new perspectives, and we need more diversity, quite frankly.
Larry:
Yeah. I think the tech field in general certainly has issues with that. I also do a lot of community organizing work in open source software and things like that, and the thing that comes up in all those places over and over again is not only this idea of inclusion – not just accounting for diversity in your customers, but diversity within your team and the folks actually doing the work. One way I heard it put was, “It’s not that we built this for you; we built this with you.”
Rebecca:
Absolutely.
Diana:
Yeah, co-creation’s huge, but you’re never going to reach all your customers if you don’t have representation of your customers within your team. It’s not going to happen.
Larry:
Yeah, absolutely. Do either or both of you have ideas about this – both on the hiring side, how people can cast a wider net and be more inclusive in their recruiting, and also for people in underrepresented groups who want to break into this? Any thoughts on that from either of you?
Rebecca:
I have strong opinions. To be frank, there are Black experts in user experience design and conversation design. There are Native American people who are experts in this field, and people just aren’t doing the work to find them. We all just need to try harder – I say “we”; I’m a white person. We just need to put more time into it. It’s sort of astounding to me. The question gets asked a lot: “What can people do to hire more inclusively?” And the answer is: keep looking until you have a diverse set of highly qualified candidates, and you’ll find them.
Diana:
I would also say I’m not a D&I [diversity and inclusion] expert by any means – I too am a white Latina, so I get that privilege as well. But what I have learned by talking to just a few D&I experts is: if you’re concerned about hiring, look at your pipeline, look at your process, and see what you are doing.
Diana:
There are probably a lot of things you’re doing that exclude a lot of different kinds of people – not necessarily about race – and if you’re able to sit down with somebody who is an expert in diversity and inclusion and go through what you’re doing, you’ll be able to pinpoint where those pinches in the pipe are, and you can open that up a little bit more to everyone.
Larry:
Great.
Rebecca:
One thing I also like to say: it’s not enough to hire – to bring Black people and people of color into these companies, into these workspaces – if they are unsafe in those spaces. If the culture is oppressive, if the culture is racist, you’re not going to retain people. Hiring is one piece of the puzzle, and then culturally making workspaces that are safe for everyone, where everyone can flourish – that’s another big piece that companies need to be working on.
Larry:
Yeah. I’m settled in for the long haul on that, on advancing that. I think it’s not an episodic thing – like, “Oh great, we recruited some people,” or “We’ve changed this one thing.” It’s like, “No, you really…” And to what you just said about culture: there’s some fundamental cultural change that needs to happen with this.
Larry:
One last thing. Where can folks reach you if they want to follow you? Are there places you’re active on social media or what’s the best way for people to keep in touch with you?
Rebecca:
For me, Rebecca Evanhoe, you can follow me on Twitter. My handle is @Revanhoe, the letter R and then my last name.
Larry:
And Diana?
Diana:
And Twitter’s good for me too. I’m Diana. I’m @Dianadoesthis on Twitter.
Larry:
Well, thanks so much to both of you. I really enjoyed the conversation. I feel like we could talk forever, but I like to give my folks… I think most people are pulling into the driveway now, having finished their commute.
Rebecca:
Sounds good. Thanks for having us. This has been really fun and we appreciate your questions very much.
Diana:
Yeah. Thank you, Larry.