IA Summit 2015 Main Conference Talk
As information architects and user experience designers we work hard to understand our users and their needs so we can design compelling and usable content for them. Often this involves audience segmentation, psychographics, and personas. Unfortunately that’s not enough to accurately model potential behaviors in certain scenarios.
Behavioral economics is a field of cognitive science and psychology research that helps us understand how and why humans behave in unpredictable ways in particular scenarios. In this talk you’ll learn four principles that will help you identify when your audience might behave in unexpected ways and how to account for these cases by applying a few easy to understand principles of cognitive science.
Through IA and UX examples you’ll walk away knowing how to apply solid scientific concepts to your design and architecture process.
Learn why people turn their air conditioning down to 65 when they want it to be 72 and how IA and UX might solve that.
Learn how changing the ordering of pricing on your website can significantly affect conversions as well as average ticket price and why.
Learn these and other interesting insights that mere content inventories, psychographics and personas would never reveal.
Robert Neal: I’m going to start. I’m glad that we already got the audience warmed up. I’m going to start off with a really easy math question, and I want you all to yell out the answer as soon as you get it, as soon as you know what it is.
There’s a pond covered with lily pads, and the patch of lily pads doubles in size every day. If it takes 48 days for the pond to be completely covered, how long does it take for the pond to be half covered?
47? Good. There are two answers to this that I think are correct. Some people answered 24, and some people answered 47. What’s happening here, and what this talk is really about, is dual system theory, which is what underpins applied behavioral economics. It’s the idea that there are really two ways the brain operates.
The brain operates, first, in an intuitive, very fast, cognitively efficient mode; in dual system theory, that’s called system one. It can also act in a very slow, methodical, cognitively and energetically heavy way; that’s called system two.
Our system one is full of cognitive shortcuts that allow us to make really quick inferences and really quick judgments, and to do so very efficiently from a cognitive perspective. But a lot of times, that means we get the wrong answer.
If you said 47, that’s mathematically correct, but a lot of times people say 24, which I think of as cognitively correct. It’s what you most likely get if you’re not into math and you’re using system one. I’ll explain more about how that works in a bit, but this is a good illustration.
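The doubling arithmetic behind the puzzle is easy to verify mechanically. Here is a minimal Python sketch (the starting patch size and the unit of coverage are arbitrary) confirming the system two answer:

```python
# Work backwards: if coverage doubles every day and the pond is full
# on day `full_day`, it was half covered exactly one day earlier.
def half_coverage_day(full_day):
    return full_day - 1

# Sanity check by simulating the doubling forward.
history = {0: 1}  # day -> coverage, in arbitrary units
for day in range(1, 49):
    history[day] = history[day - 1] * 2

assert history[47] == history[48] / 2  # half covered on day 47
assert history[24] != history[48] / 2  # the intuitive answer, 24, is way off
print(half_coverage_day(48))  # 47
```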
I’m going to talk about this topic by way of something that happened to me recently. If you’re not familiar with Uber, Uber is a ride sharing app, it’s a ride sharing platform. The way it works is anybody who has a car can sign up to be an Uber driver. As an Uber driver, you’re a contractor for Uber. Uber doesn’t have any employees that are driving cars and they don’t own any cars, they’re just a platform for allowing people to get rides, like a taxi, except that it’s ride sharing.
I was in Phoenix a few months ago. I went back to San Francisco and took the BART. BART is our Bay Area Rapid Transit, it’s like our subway. I took BART into the city, and then I needed to get up the hill to my house, so I pulled up my app and I hailed an Uber. When I pulled up my app, it turns out that it’s Monday at nine o’clock in the morning. I got a screen similar to this that said, “Look, there’s surge pricing right now and you’re going to have to pay 2.25 times the regular fare.”
I wasn’t too surprised by this. I don’t know if you guys have been paying attention to the news, but Uber got a lot of flak for this, maybe six months ago or so, when they rolled it out. There was a lot of hoopla in the media, with people saying, “Oh, these rates are crazy. I don’t want to pay them.”
When you understand how Uber works, you get why they have to do this. Since Uber doesn’t have employees, they can’t tell their employees, “You really need to be available during these high-peak times, these high-demand times.”
Uber had to come up with a solution that was a win/win/win for their customers, for their drivers, and for themselves. They don’t want it to be the case that when a customer opens up the app there are no drivers available. That’s a really bad customer experience. The person’s going to be less likely to open the app in the future to hail an Uber.
They also need to have enough people available to actually be drivers at that time. What they did is they said, “If we increase the fare, it’ll cause more drivers to be available when people open the app during these high-demand times.
“We’ll make more money, because we’re increasing the fares, and customers will be happy because they’ll have drivers available whenever they want. The one downside, of course, being that the cost is slightly higher for the customer, but the customer should be happy that they’re getting a driver at all.”
They actually did a really good job of solving some user experience problems with this, and there are two in particular that I want to point out.
One is, “Agree button fatigue.” I’m sure we’re all familiar with “Agree button fatigue.” You don’t read anything. You just hit the “Agree” button.
An app could say, “We’re going to take all your private data, sell it to advertisers, not give you a cut, and give it to the NSA for free,” and we’re just like, “Agree.” That’s “Agree button fatigue.”
The other thing is sticker shock. They solve these both really well with this interaction pattern that we have here, which is instead of just hitting “Agree, 2.25x, charge it to me,” you actually have to type in “225,” which means that it’s different each time. It’s not like you can just do it automatically and type in the same number each time you encounter surge pricing. You know exactly what your multiplier’s going to be, and it also takes a little bit more cognitive effort than just hitting “Agree.”
They did a really good job of solving both these problems, but there’s a bigger problem that they faced and that they could’ve solved with behavioral economics. I’ll help motivate that by talking a little bit about traditional economics.
In traditional economics, there’s this idea of an indifference map. (We’re only going to pay attention to points A and B here.) An indifference map is just a set of points you can plot where you face two choices and you’re completely indifferent between them.
In traditional economics, they typically use this specific graph, which is, “Income to Leisure,” so income to vacation days. If I was offered a job, and they would say, “We’ll give you $100,000 and 4 weeks of vacation, or we’ll give you $50,000 and 10 weeks of vacation,” it might turn out that I’m completely indifferent between the two.
Either one sounds good. What’s interesting about traditional economics is that it takes indifference maps to be constant. So imagine that in six months’ time my boss came to me and said, “You remember you were indifferent between those two choices. You picked $50,000 and 10 weeks of vacation. Will you now switch to $100,000 and four weeks of vacation?”
This is where traditional economics breaks down, because when I accepted the $50,000 and 10 weeks, I set a reference point for myself. After that, any decision where I’m giving something up feels like a loss, even though I was completely indifferent just a few months prior.
If they say, “Take the $100,000 and four weeks,” I’m losing six weeks of vacation. I don’t feel indifferent to it at all. I want to go through a couple of situations here, to help illustrate that. This is interesting. Let’s just focus on situation one right now. Imagine I give you $1,000 right now. There’s a lot of you, so I’d have to have a lot of money.
Imagine I give each of you $1,000 cash and say, “Look, now I’m going to give you two options.” The first option is a 50 percent chance to either win an additional $1,000 or win nothing at all. The second option is, I’ll just give you $500 for sure. How many of you would pick the first option, the 50 percent chance at $1,000 or nothing? OK, got a few.
How many would pick the $500 for sure? A whole bunch. OK, great. Now, let’s imagine a second scenario. In the second scenario, I give you all $2,000 cash. You’re $2,000 richer than you were. Then I’ll give you a similar choice, except this time you’re losing money. I say, “I’ll give you a 50 percent chance to lose $1,000 or lose nothing, or a 100 percent chance that you’ll lose $500.”
In that case, how many would choose the first option? Yeah. The second option? Good. All right. You guys are all really risk averse. [laughs] We saw a little of it here, but the room was a bit skewed. Typically, the balance is that in situation one, most people take the $500 for sure, and in situation two, they take the 50 percent chance to lose $1,000 or nothing.
In the first case, they’re not losing anything; they’re happy to lock in a sure gain. In the second case, they’re so worried about losing that they gamble rather than accept a sure $500 loss. What’s interesting is that if you run this study enough times, you see these numbers consistently far apart: people choose the sure thing when it’s framed as a gain and the gamble when it’s framed as a loss.
Yet the outcomes are identical. Choosing option A in situation one has exactly the same expected outcome as choosing option A in situation two: a 50 percent chance of ending up with either $1,000 or $2,000. Choosing option B gives you $1,500 for sure in both situations. People don’t make decisions based on outcomes; they make them based on how you present the information.
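The equivalence of the two situations can be checked with a few lines of arithmetic. This is a minimal sketch; the probabilities and dollar amounts come straight from the two scenarios above:

```python
def expected(outcomes):
    """Expected final wealth, given (probability, final_amount) pairs."""
    return sum(p * amount for p, amount in outcomes)

# Situation one: you start with $1,000.
s1_gamble = expected([(0.5, 1000 + 1000), (0.5, 1000 + 0)])  # option A
s1_sure   = expected([(1.0, 1000 + 500)])                    # option B

# Situation two: you start with $2,000.
s2_gamble = expected([(0.5, 2000 - 1000), (0.5, 2000 - 0)])  # option A
s2_sure   = expected([(1.0, 2000 - 500)])                    # option B

# Every option, in both situations, leaves you with the same
# expected wealth: $1,500. Only the framing differs.
assert s1_gamble == s2_gamble == s1_sure == s2_sure == 1500.0
```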
The way we’re presenting the information is by setting a reference point. The reference point is either $1,000 or $2,000. The critical thing here that I want to have as a theme throughout is, “Look, you really need to design for what you want people to think, instead of what you think the outcomes are going to be.”
In fact, and this is not an endorsement, I don’t remember the company name, but there’s a company out here that had a shirt saying something like, “For users who aren’t Spock.” I don’t know if you saw that; there’s a little booth out there, for user testing. That’s a good observation: users aren’t the rational agents that traditional economics treats people as.
They have a lot of other things going on in contexts. They take a lot of cognitive shortcuts, to get them to where they want to be. You never want to think like, “I have a user. They’re going to follow this logical hierarchy. Look at this perfectly logical navigation system I made. Users are going to flow exactly how I expect them to.”
You really have to think like, “How do human brains work,” and design for what you want people to think based on what you know about human psychology. Bring this back to Uber. What Uber did is, they set a reference point, some base rate. Then, they told everybody, “We’re going to charge you more money.”
Everybody’s experiencing that as a loss. “We’re going to charge you 2.25 times the base rate.” How could they have fixed this? This is my recommendation to Uber. Anybody here work at Uber? This is free consulting.
Robert: Instead of presenting it as a loss, Uber could just present the cost. Whatever the cost is at that time, don’t call it surge pricing. That’s ridiculous. Call it current pricing, and just show the current price. Most people don’t walk around thinking, “I know what the base rate of Uber’s ridesharing service is.”
I use Uber all the time, and I couldn’t tell you how much it costs per mile or anything like that. Instead of showing me that it’s going to cost 2.25 times the base rate, just show me what the cost is. Remove that reference point, and whatever that cost is, for me, that’s the cost. I don’t feel like I’m losing something.
I have two versions here, because Uber has a nice feature where you don’t have to set the destination. The one on the left is a little nicer, but it only works if the user sets the destination, because then they can show the cost of the entire trip. Otherwise, they would just show the cost per mile. Normally it’s a dollar per mile, but as a user I don’t know that. I just see that this is the cost per mile.
If they had taken a step back and thought, “OK. Let’s use behavioral economics and think about this. How are our users going to respond to these costs?” They might have come up with something like this. There’s more that you can do. Even given the same principle in behavioral economics, we can leverage that more for Uber’s advantage.
Now, we have a reference point. We’re showing just this cost, $34. We’re setting a reference point. How can we use that to our advantage? One way you could do it is by showing what the projected cost will be over the next hour. We have a reference point. We’re saying, “Hey, guess what? It’s going to fall over the next hour.”
The user now gets to make a decision. “Is it worth my time to wait 20 minutes and save some money, or do I just want to accept the price now?” You’re using that reference point, to give people an option, so they feel like they’re making a good choice, and they have all the information they need to make that choice. If they want to, they can save a little money. If 20 minutes is worth $10 to them, they can save that money.
What if the prices go up? In fact, you get just as good of a benefit. If the prices are going up over the next hour, I’m feeling like I’m getting a good deal right now. If I know it’s $34 now, and in 20 minutes, it’s going to be $50, I feel like, “OK. Great. I’m getting a really good deal. I’m going to accept the $34.”
Because you set the reference point to the current price instead of some arbitrary base rate, you get a lot of benefit whatever happens afterwards. It’s cool. It’s a fun little exercise that I ran into after experiencing Uber’s surge pricing.
Now I want to go through a couple of principles in detail, hopefully in a way you can bring back and use in your own practice. We’ll start with a principle called “attribute substitution.” The best way to get at this is through a visual analogy. With visual processing, if I ask you which of these figures is larger in a two-dimensional sense, what happens?
Our visual system uses the context of the picture to give us an answer about which one’s larger. In a three-dimensional context, the one in the back is clearly larger. But what I really want to know is which one is larger in a two-dimensional context, and it’s almost impossible for most people to override that three-dimensional context and give that answer.
It feels like the one in the back is bigger. In fact, as you probably guessed, they’re both exactly the same size. What’s happening is that your brain is answering a similar question, but not the one that was asked. Daniel Kahneman says it this way: “The essence of attribute substitution is that respondents offer a reasonable answer to a question that they have not been asked.”
Daniel Kahneman is the godfather of behavioral economics; he won the Nobel Prize in Economics. You see this a lot in the research he’s done. One of the places I notice it most is with Nest thermostats. I’m going to use the Nest thermostat because it’s a well-designed product. It has great affordance: it’s round, you walk up, you grab it, you turn it, and it sets the temperature.
I have a couple of Nests, so I feel like I’m allowed to pick on them a little bit. They’ve solved a lot of cool design problems. Thermostats are super boring, and Nest is a lovely product that you actually want to use. But there’s an interesting interaction pattern that some people have with thermostats, and that they still have with Nests. It’s this.
Some people, when they use thermostats, instead of setting the temperature to what they want their house to be, will set it much lower, because they want the house to cool down faster. I lived in Phoenix for a long time. You get home, it’s 120 degrees outside and 90 degrees in your house.
You just want to crank the AC all the way down so that it gets cold as quickly as possible. Nest has good feedback: as you turn the dial, the number changes to reflect what the temperature is going to be when it’s done running. But air conditioners in houses are not like air conditioners in cars. In a house, the air flows at a constant rate and a constant temperature.
Setting it lower than the temperature you want isn’t going to cool your house any faster, in case any of you are among those people who turn the air conditioner down lower than it needs to be. You might ask, “Well, what’s going on here?” What’s going on is attribute substitution.
The people using the Nest thermostat are answering a different question than the one the Nest thermostat is asking. The thermostat is asking, “What temperature do you want the house to be?” The person is answering, “How quickly do I want my house to get cold?” It’s a reasonable answer to a question that wasn’t asked. How might you solve this problem?
I’m throwing this in here because I’ve been asked about it before, and it’s not a perfect solution. Here’s one way they might solve it: as you turn the dial, it also shows how long it’s going to take to reach that temperature. (You can’t see that very well on the slide, I understand.)
That estimate stays consistent relative to the other temperatures around it, so the feedback loop the user gets is: even if you turn the dial lower than the temperature you actually want, the time it takes to reach the temperature you desire stays the same. You need to add that little bit of information, because you can predict that users will be answering a different question than the one you’re asking them.
If you understand behavioral economics or dual system theory, you can predict these behaviors and design for them.
All right. Another math question. If you have five machines that can make five widgets in five minutes, how long does it take a hundred machines to make a hundred widgets? Good. Great. Again, I’ll say there are two right answers.
There’s the system two answer, which is five minutes, and there’s the system one answer, which is quick and intuitive: 100 minutes. Who thought 100 but didn’t want to say it? Nobody’s going to raise their hand. OK, great. Honest people. I love it. The system one answer is 100 minutes, and the system two answer is five minutes. I want to talk about another principle.
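The widget puzzle yields to the same kind of mechanical check. A small sketch, assuming the machines work in parallel at identical rates:

```python
def minutes_needed(machines, widgets):
    """Five machines make five widgets in five minutes, so each machine
    makes one widget every five minutes, and machines run in parallel."""
    minutes_per_widget_per_machine = 5
    return (widgets / machines) * minutes_per_widget_per_machine

print(minutes_needed(5, 5))      # 5.0 (the given fact)
print(minutes_needed(100, 100))  # 5.0, not the intuitive 100
```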
That was just an exercise to make sure you’re all awake. Now I want to talk about another principle, and this one is interesting to me. One of the things that’s interesting about it, and about dual system theory and behavioral economics in general, is that these principles apply to experts making decisions in their own fields just as much as to laypersons or average consumers.
In this case, researchers took some judges, each with about 20 years of experience, and gave them a die and a caseload. The judges would read through cases and roll the die. The die only had two numbers on it, threes and nines: a 50 percent chance of rolling a three and a 50 percent chance of rolling a nine.
They had to read through a case, roll the die, and then sentence the person in the case. It turns out that if you run this with enough judges and enough cases, the judges who rolled a three handed down shorter sentences than the judges who rolled a nine.
This seemingly unrelated task of rolling a die caused people to behave in a different way than they would normally behave as experts in their field. When I read studies like this, I’m always a little bit in awe. I don’t believe it. I expect you all to have that same skepticism. I’m going to tell you one other study that’s similar, which is also very interesting.
Another study of this same principle had people fill out a survey to evaluate a product, rating how valuable they thought it was. Before they valued the product, though, each person had to write down the last four digits of their social security number.
For you Americans out there, you know the last four digits of your social security number are a pretty arbitrary number, with no relation to anything in the world other than some database somewhere.
It turns out that people whose last four digits fell in a higher range actually valued the product higher than people whose last four digits fell in a lower range. If your last four digits were 2738, you valued the product lower than if they were 9632. Completely unrelated things having a pretty huge effect. This is called anchoring.
Anchoring is pretty interesting, and I had the opportunity to use it recently. I was working for an online videogame company that sells virtual currency. Virtual currency is pretty interesting because it doesn’t really have any comparable value in the world other than what the company tells you it has. You go buy a hundred virtual coins, and they just set a price. You really don’t have any other point of reference.
They asked me to figure out a way to increase sales. I was looking at their sales funnel and they presented their pricing something like this. They had three packages. One was 9.99, one was 34.99, and one was 59.99. I’m looking at this and I’m thinking about what I know about how people make decisions. One thing that stood out to me was that presenting the lowest price first really has a strong anchoring effect.
You can just imagine a user looking at this. They see 9.99 and think, “OK, that’s the price. Virtual currency, I don’t really know how much it should cost; 9.99 is what you’re telling me it costs. That’s my reference point.” They look at the next one, 34.99; it’s a little more than 9.99. Then they get to 59.99 and think, “Oh, that’s a lot of money. That’s a lot of money for me to spend. I could just spend 9.99.”
We decided to run a test where we reversed the order. The hypothesis was: if the anchor is 59.99, then when you get to 34.99, you think, “Oh, that’s cheaper.” When you get to 9.99, you think, “That’s dirt cheap. There’s no reason for me not to buy virtual currency today.”
Our expectation was that sales would increase on two metrics: average ticket price and number of transactions. Average ticket price should go up because the smaller packages now seem much cheaper than they did in the previous presentation. The number of transactions should go up because people who previously saw 9.99 as just a price now see it as really cheap. In fact, both metrics did go up. It worked out really well. Really cool, really interesting. That’s anchoring.
One more math question for you all; you’re really good at math. You have a baseball bat and a baseball. Together they cost $1.10. If the bat costs a dollar more than the ball, how much does the ball cost?
Audience Member: 10 cents.
Robert: Good. We got both right answers. For those of you who are system one thinkers, 10 cents. For those of you who have heard this question before and said five cents, good job.
10 cents is a really appealing answer, but it can’t be right. If the ball is 10 cents and the bat is a dollar more, the bat is $1.10, and $1.10 plus 10 cents is $1.20, not $1.10. Five cents is right: if the ball is five cents and the bat is a dollar more, the bat is $1.05, and the total is $1.10.
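The algebra can be written out explicitly. A quick sketch, working in cents to avoid floating-point rounding:

```python
# Let ball be the ball's price in cents; the bat costs ball + 100.
# Then ball + (ball + 100) = 110, so 2 * ball = 10 and ball = 5.
TOTAL = 110       # $1.10, the combined price
DIFFERENCE = 100  # the bat costs $1.00 more than the ball

ball = (TOTAL - DIFFERENCE) // 2  # 5 cents
bat = ball + DIFFERENCE           # 105 cents

assert ball + bat == TOTAL
# The intuitive system one answer, 10 cents, fails the same check:
assert 10 + (10 + DIFFERENCE) != TOTAL
print(ball)  # 5
```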
What’s really interesting about what this illustrates is just how powerful system one is. It’s able to give us an answer to a math problem without us doing any actual math. To actually do the math, you have to do some algebra: assign the variables and solve for the unknown.
Our brain says, “Look, I think I’ve got this. I’m not going to do any math but 10 seems right.” This is the essence of dual system theory which is that, look, a lot of times, our brain is taking cognitive shortcuts. We need to know about that. We need to design for it.
Daniel Kahneman says if system one is involved, the conclusion comes first and the arguments follow. Our brain gives us an answer. Then, we come up with reasons about why we got that answer. It’s not the other way around. It really feels like it’s the other way around but it’s not.
I’m going to talk about another principle: the framing effect. (The previous one was anchoring.) The framing effect is really interesting, too. Again, I find it interesting because experts get the wrong answers in the fields they’re supposed to be experts in.
In this case, researchers took two groups of doctors and gave them the same medical cases and the same medication. The first group was told, “Look, if you prescribe this medication, there’s a 10 percent chance the patient will die.” The second group was told, “Look, if you prescribe this medication, there’s a 90 percent chance the patient will survive.”
It turns out that doctors were much more likely to prescribe the medication under the second framing, even though the outcomes are identical. In both cases there’s a 10 percent mortality rate, and in both cases there’s a 90 percent survival rate. Framing the same outcome in different ways changes their behavior pretty drastically. These are people you trust your life with, so don’t ever trust them again. Just kidding.
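The two framings describe literally the same probability; only the sign of the description changes. As a tiny arithmetic check:

```python
# "10 percent chance the patient will die" versus
# "90 percent chance the patient will survive": identical outcomes,
# different presentation.
mortality_framing = 0.10
survival_framing = 0.90

# Survival probability is just one minus mortality probability.
assert abs((1 - mortality_framing) - survival_framing) < 1e-9
```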
An application of this came up in the same company that I was working with before. After we increased their sales, they said, “Hey.” This is how companies work. You increase their sales. They say, “Increase them more.” I was like, “OK, yeah, sounds good.”
We had this layout. We had everything set up. They asked us to increase the sales more. We took a look at it. One thing that we noticed is that they were just giving their consumers the information that they thought was important which was the price and what the packages included.
One thing that’s important, and that I mentioned earlier, is that you really want to design for what you want people to think. One thing they were missing, which every retailer knows is important, is why a customer should pick one package over another. We asked ourselves, “How can we frame this to get people to purchase more and to value the packages in a certain way?” All we did was add some information that said, “Look, you’re saving money when you pay more.” The larger the package you pick, the more you save.
What’s interesting about this is not the fact that we added these things; that’s not a novel idea. People have been doing it in retail forever. The outcome for the user is exactly the same, yet by introducing this new information, we changed their behavior pretty drastically. We saw something like a 15 percent uptick in sales just by telling people they were saving money they were already saving in the first place. They just didn’t know it.
Just by introducing that information and framing it that way, we were able to have a pretty significant effect on users’ behavior. Don’t forget this. This is important. That’s it. That’s all I’ve got. I can talk more, and I took out some slides, but I thought I’d leave a healthy amount of Q&A time available. If there’s no Q&A, that’s fine, too.
Audience Member: Hi. Could you just re-explain the framing? That was the one that I got the least.
Robert: I’m trying to think of another study. The one that I talked about was the doctors. The essence of framing effect is really just this. You have some information. You’re trying to communicate to a user. You can change how the user perceives information and consequently their behavior just based on how you present the information.
The crux of it is really this. That same information can be presented in two different ways with the same outcome and people perceive it as completely different information. Just by framing it in a particular way, you’re able to cause people to think of it as if it’s different information, as if it’s new or novel information.
One thing that we always talk about is don’t leave inferences up to the person reading or digesting your information. You always want to make sure that any inferences that you expect them to make, you make explicitly. That’s really just framing that information in the way that you want them to perceive it.
That can also be used negatively. You see this with commercials and stuff a lot where they frame statistics in a certain way or percentages so that it sounds like their product is really great. When you think about the percentages, you think, “It’s not really that great,” but you’re not supposed to think about it. The idea is they want you to just use those cognitive shortcuts to make it seem like their product is really great.
Audience Member: Thank you. I’m really interested in this stuff. I’m wondering what you would recommend in terms of next steps. I already listen to Freakonomics and Planet Money and all of those kinds of things. What are some of your books or blogs that you would recommend?
Robert: Good question. I’ll post a couple on Twitter so that you follow me. No, I’m kidding. I will post them on Twitter. Daniel Kahneman’s book is really great. I love Daniel Kahneman for two reasons: he’s a great writer, super clear, but he’s also a great academic. The things he writes are actually true, which is more than you can say for a lot of writers. His book is “Thinking, Fast and Slow.” I’ll pull up his name here: Daniel Kahneman.
Another good book, not great like Daniel Kahneman’s, but another good book, is “Nudge” by Richard Thaler. There’s one other book; it escapes me now. I’ll put it on Twitter. I want to say…
Audience Member: There’s another really good one, [indecipherable 28:26] .
Robert: Dan Ariely has a lot of books. The other one I’m thinking of is by Daniel Gilbert. I’ll put it on Twitter, though; I can’t remember it off the top of my head. Daniel Kahneman’s “Thinking, Fast and Slow” is like a canon for behavioral economics. You could read that and nothing else and you’d do well. It’s also a really long book, but don’t let that scare you. It’s really approachable.
Audience Member: That was a good framing.
Robert: I don’t make any money off it, so I don’t care.
Audience Member: Appreciated the talk, loved the concepts behind or the thinking behind it all. Totally see how this applies to pricing and scenarios where money, it’s a loss, etc, etc. I’m struggling to find some examples outside of the shopping cart, pricing, just standard interface decisions that could also play on either side. Do you have some examples you might share outside of money?
Robert: Yeah, great. One of the slides I cut, because I didn’t know if I’d have enough time, covers another interesting principle called base rate neglect. The reason I cut it is that it’s a little difficult to explain. The idea is that when you’re presented with a question about statistics, it’s very difficult to pay attention to base rates in the population rather than just focusing on the question being asked.
I’ll try to describe this, then I’ll use an example in just a second to illustrate it, because the description alone is going to be hard to follow. The standard test for this is presenting people with a character. Let’s say that we have somebody named Mary. She graduated from Berkeley. She lives alone, likes to read, and is very active in her community. You say, “How likely is it that she is a bank teller?”
People will say, “Oh, maybe she’s 10 percent likely.” Then you ask, “How likely is it that she’s a bank teller and a feminist?” Based on the description, people are like, “Well, 20 percent likely.” What’s interesting about that is that there’s no mathematical way you could be more likely to be a bank teller and a feminist than just a bank teller.
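The math behind the bank teller example is just the conjunction rule of probability: the chance of two things both being true can never exceed the chance of either one alone. A minimal sketch, with made-up numbers (none of these figures come from the talk):

```python
# The conjunction rule behind the "bank teller" question:
# P(A and B) = P(A) * P(B given A), which is never more than P(A).
# All numbers below are illustrative assumptions.

p_teller = 0.10                  # assumed P(Mary is a bank teller)
p_feminist_given_teller = 0.50   # assumed P(feminist | bank teller)

# The probability of the conjunction.
p_both = p_teller * p_feminist_given_teller

# The conjunction is always <= either event on its own,
# yet people intuitively rate it as MORE likely.
assert p_both <= p_teller
print(p_both)  # 0.05 -- half as likely, never more likely
```

However rich the description of Mary gets, adding a second condition can only shrink the probability, which is exactly what intuition gets wrong here.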
What’s happening here is you’re neglecting that base rate and you’re only paying attention to the details of the story. Nike actually uses this in a good way. They do something like this. This isn’t their actual screenshot. If you use Nike Plus, you can go on to their website and they tell you things like, “Compared to other people your age…” I hope this doesn’t give away my age.
Based on other people your age, you run as much as they do or less, or you run as many times a week as they do or fewer. What they’re conveniently omitting, which is really good for motivating people, is that they’re only comparing you to other people who use Nike Plus. The other people who use Nike Plus are all somewhat active people, just like you.
If they compared you to the population of the US, you would probably be faring much, much better, but your motivation to keep doing it would be much, much less. They’re actually using base rate neglect in your favor to help you out. They’re omitting that purposefully and then just presenting information relative to the population that’s going to benefit you the most. That’s an example outside of making money. A really good one. I really like what Nike Plus has done.
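The Nike Plus point boils down to which reference population you compute a ranking against. A tiny sketch of that idea, with entirely made-up mileage numbers (not Nike data):

```python
# How the choice of reference population changes the feedback a user
# sees, per the Nike Plus example. All numbers are illustrative.
from bisect import bisect_left

def percentile(value, population):
    """Fraction of the population that runs fewer miles/week than `value`."""
    ranked = sorted(population)
    return bisect_left(ranked, value) / len(ranked)

weekly_miles = 10
active_users = [8, 12, 15, 20, 25]          # hypothetical Nike Plus users
general_public = [0, 0, 0, 1, 2, 5, 8, 12]  # hypothetical broader population

print(percentile(weekly_miles, active_users))    # 0.2 -- demotivating
print(percentile(weekly_miles, general_public))  # 0.875 -- flattering
```

The same 10 miles a week looks mediocre against other active users and impressive against the general public; picking the first comparison is what keeps the user motivated to improve.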
Audience Member: I’m curious once you tell stakeholders how they can use this to control users, how do you get them to do good, not evil, with this? If you’re a financial institution, and you could use this to encourage people to put savings in, which might have a long-term ROI, or you could encourage them to take out more loans, which would have a short-term higher return? How do you convince them, how do you frame that argument?
Robert: I don’t know if any of you are familiar with dark patterns in UX, but it’s a similar problem. There are things that you can do that improve people’s lives, and things you can do to improve your business that have the opposite effect on people’s lives. For us, it’s just a decision. We don’t go around telling people how to manipulate users into doing things they don’t want to do.
In fact, our policy is we help users do things that they would want to do if they had full knowledge of the situation. Dark patterns in UX are things like when you’re stuck in a conversion funnel and you can’t get out. The only way you can get out is by giving your credit card information, or something like that.
How a company uses information is really up to them. How we use it is up to us. We can’t tell companies you have to be ethical, that’s a decision the company has to make on their own.
Host: This will be our last question.
Audience Member: The place where I work is really involved in financial education. Could you speak about how behavioral economics could be used specifically for education, particularly where we’re trying to move people towards the good and inform them about what is actually in their best interest?
We’re finding that even when we try to use best practices and incorporate things from behavioral economics, we still run up against those cognitive differences in the brain.
Robert: Good. Yeah. One thing that we always tell our clients is that education is one of the worst ways to change people’s behavior. Education has been a big tool for a long time. People thought, “If people just know this information, then they’ll change their behavior.” The US government tried that with how people eat. Financial institutions have tried it before, and it’s very, very ineffective.
I highly recommend you read “Nudge.” Richard Thaler is very big on things like getting people to make investment decisions that are in their best interest. There are a lot of problems in things like letting employees pick their 401(k)s. A lot of times there are too many options, or the options aren’t clear. He’s written a lot on how you can more effectively get people to save for their retirement by using behavioral economics.
Some of it is outside of behavioral economics, in things like gamification or persuasive psychology. Simple does a really good job of making saving a lot easier. They’re doing that in part by framing. If any of you have used Simple as a bank account, it’s really cool. They have introduced something which, now that they’ve introduced it, seems super obvious. I forget what they call it. It’s something like “Spendable Balance,” or something like that.
Whereas most bank accounts just have “available balance,” Simple actually predicts what your bills are going to be based on your past behavior. It takes out anything from any savings goals that you have, and it gives you your balance as the available balance minus those things.
By presenting this lower balance, it affects how much you spend on a day-to-day basis, because you feel like you have less money, which contributes very heavily to you hitting your savings goals.
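The calculation described above is simple arithmetic; the power is in which number gets presented. A minimal sketch of that framing, with made-up amounts and a hypothetical function name (this is not Simple’s actual implementation):

```python
# Sketch of a "Spendable Balance"-style calculation as described in
# the talk: available balance minus predicted bills and savings goals.
# The function name and all dollar amounts are illustrative assumptions.

def spendable_balance(available, predicted_bills, savings_goals):
    """The number shown to the user: what is truly safe to spend."""
    return available - sum(predicted_bills) - sum(savings_goals)

# Example: $1,200 available, but rent and utilities are predicted to
# come due, and $150 is already committed to a savings goal.
balance = spendable_balance(
    available=1200.00,
    predicted_bills=[800.00, 120.00],  # rent, utilities (predicted)
    savings_goals=[150.00],
)
print(balance)  # 130.0 -- the lower number is what frames spending
```

Showing $130 instead of $1,200 as the headline number is the framing move: the user’s day-to-day spending anchors to the smaller figure, which protects the bills and the savings goal.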
They’re doing something similar to what Nike’s doing, by saying, “Look, don’t pay attention to those numbers over there, just pay attention to this one key number, and it’ll help you a lot.” I recommend checking out Richard Thaler, he’s written a lot about that, but if you’re not familiar with Simple, that’s a really great example of using the presentation of information to affect users’ behavior pretty dramatically.
I think they’re doing some studies on how presenting the information in that way affects savings rates. Their members, it seems, save at a much higher rate than members of other banks.
Feel free to ask other questions afterwards if you want, and thanks for coming.