Fuqua Insights Podcast: How Can We Make Smarter Decisions?

Professor David Brown explains how simple strategies can guide better decisions even when information is incomplete

Big Data, Podcast

New job postings appear daily. Real estate markets update constantly with fresh listings. In an environment where alternatives continuously multiply and options can seem endless, the hardest decision is knowing when to stop searching and commit.

In this episode, David Brown, the Snow Family Business Professor of Decision Sciences at Duke University’s Fuqua School of Business, discusses how people and organizations can make better decisions when information is scarce or costly.  

Building on economist Martin Weitzman’s classic “Pandora’s Box Problem,” Brown and his co-author, Fuqua Ph.D. student Cagin Uru, found that straightforward search rules perform nearly as well as complex algorithms. Their research shows a surprisingly simple solution: commit upfront to searching a specific number of alternatives based on search costs, then simply rank what you've seen and choose the best.

What makes their approach practical and appealing is its simplicity: it requires only the ability to rank alternatives you've seen and the discipline to stop searching at the right point, not probability calculations or complex data analysis. This applies broadly, from navigating job searches to booking flights to hiring contractors.

The conversation also explores when sophisticated algorithms are truly necessary. Their research shows that, across several search settings, their simple, transparent rules perform nearly as well as those based on more complex approaches (e.g., AI), raising questions about when algorithmic solutions are worth the investment. 

00:03

Welcome to Duke Fuqua Insights, a podcast where we explore faculty research and the actionable takeaways for business leaders at every level.

00:14

Tanner Morgan

We make decisions every day without complete information, whether it's a job seeker scanning vague salary postings, a manager looking for the right supplier, or a shopper comparing online prices. So how do you choose wisely when you're essentially searching in the dark? I'm Tanner Morgan, a recent Fuqua grad, and today I'm delighted to welcome Professor David Brown to the show. Professor Brown is an expert in decision sciences, and his research focuses on data-driven decision making, often focusing on how businesses can address problems involving uncertainty or complex trade-offs. His latest paper shows how simple strategies can guide smart choices, even when you know almost nothing about what's out there. Professor, thanks so much for joining me.

David Brown

Thanks for having me, Tanner. I'm delighted to be here. It's a great opportunity.

Tanner Morgan

So, what first got you interested in this problem of kind of--quote, unquote--searching in the dark? Was there a particular moment or observation that sparked this research?

David Brown 01:14

Yeah, so, well, first I want to acknowledge I have a fantastic co-author on this work. His name is Cagin Uru. He's a Ph.D. student at Fuqua. He's been an absolute joy to work with, and I credit him for a lot of the heaviest lifting on this work. And I'm grateful to be at Fuqua and Duke for a lot of reasons, including the opportunity to work with fantastic students. So back to your question. One thing that's fun about my research is it's methodological in focus, which means I broadly think about methods for making decisions under uncertainty, and that's application agnostic, but the tools, you know, aren't very compelling unless you have applications that you can demonstrate them on. And so about 10 years ago, I came across this paper on optimal search. I was working on a project with a faculty colleague at the time, and the paper was called "Optimal Search for the Best Alternative." It was written by an economist who was then at MIT; he later moved to Harvard. His name was Martin Weitzman. So this paper is from 1979, and this is not Weitzman's most famous work. His actual most famous work was really on the economics of climate change, thinking about whether we should be using carbon taxes or cap-and-trade policies. Anyway, I'd heard of this paper, but I'd never studied it in detail, and in this paper, he posed a problem. It's a model of search that's called Pandora's problem.

And I'm a little rusty on my Greek mythology, but you've heard of opening Pandora's box. I think there's some story involving Prometheus and Zeus, some urn that was buried underground. I forget all those details. But Pandora's problem, abstractly, is describing a model of search, and this is the problem that Weitzman posed. So you can think of a decision maker, you know, searching, and they're trying to find the best alternative from a set of options, okay? And Weitzman calls these alternatives boxes, hence the name, you know, Pandora's box. And at each point in time, the decision maker can pay a cost, a search cost, and reveal the value of the box, and that's kind of discovering the value of that alternative. And so at each point in time, the decision maker has this choice: do I want to pay another cost and learn more about my alternatives that I haven't explored, or take the best option I've found so far? So you can write this problem mathematically as a sequential decision problem, but that's not very helpful. It's kind of daunting to solve. And the beautiful thing that Weitzman did is he devised this solution called Pandora's rule that's actually very simple, and that was kind of his main contribution. So my sense is this paper was well known in kind of the micro theory community in econ for a number of years, but it really took off 15 or 20 years later when this thing called the internet came around and people were thinking about e-commerce and recommender systems, consumer search, and so it got a lot of traction from folks in marketing, computer science, and, broadly, my field, operations research. So I came across this paper. I thought it was beautiful. And the other aspect that's really interesting about this Pandora's problem is its richness. There are lots of variations to the problem, to this model of search, that one could think about, and that actually changes things fundamentally; Pandora's rule is no longer optimal there.
And so that's kind of the history. Cagin and I have another project related to sequential search, and that then led us to this kind of data-driven search model that we're going to talk about today.
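Pandora's rule, as Brown describes it, gives each unopened box a reservation value and keeps searching only while the best value found so far is below the best remaining reservation value. Below is a minimal sketch, assuming box values are drawn Uniform(0, 1) purely for illustration (the distribution, costs, and function names are this example's assumptions, not from the paper); for that distribution the reservation value has a simple closed form.

```python
import math
import random

def reservation_value_uniform(cost):
    """For a box whose value is Uniform(0, 1), the reservation value z
    solves cost = E[(X - z)^+] = (1 - z)^2 / 2, giving z = 1 - sqrt(2c)."""
    return 1.0 - math.sqrt(2.0 * cost)

def pandoras_rule(costs, rng):
    """Open boxes in decreasing reservation-value order; stop as soon as
    the best value found beats every remaining reservation value.
    Returns the net payoff: best value found minus total search cost."""
    order = sorted(costs, key=reservation_value_uniform, reverse=True)
    best, total_cost = 0.0, 0.0
    for cost in order:
        if best >= reservation_value_uniform(cost):
            break                       # best-so-far beats all remaining boxes
        total_cost += cost              # pay to open the box...
        best = max(best, rng.random())  # ...and reveal its value
    return best - total_cost

rng = random.Random(0)
payoffs = [pandoras_rule([0.02] * 10, rng) for _ in range(10_000)]
print(sum(payoffs) / len(payoffs))  # average net payoff across simulations
```

The striking part of Weitzman's result is that this greedy, index-based stopping rule is optimal for the classic problem, even though the underlying sequential decision problem looks daunting.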

Tanner Morgan 04:52

So your research mentions job seekers, housing markets, even art auctions. What makes decision making in these settings so challenging from a data perspective?

David Brown 05:02

Yeah, so a couple things here. First of all, when we talk about data, we can get very philosophical about what that means, and different people, you know, disagree and have different views on this. But one way to think about data is as a numerical encoding of information. And this is true even if we think about, and we talk a lot with generative AI about, unstructured data like images, video, language. There's got to be underlying numbers representing that information. So ultimately, if you take that as true, that implies you have to be able to measure these quantities. One thing is, in the settings you mentioned, there's a fair amount of subjective judgment in how you might value alternatives, right? So to get data on jobs you're exploring or homes you might purchase, you know, you have to be able to somehow quantify those subjective values, and that's challenging. I think in the paper, we even mention the term feng shui when thinking about houses.

The second aspect is that if you think about these settings, like searching for a home or a job, often these are inherently multi-attribute problems, right? There's a lot of different criteria you're thinking about, and you have to not only be able to measure those attributes, which themselves could be subjective, but somehow combine them together, right? So what are your sort of relative weights on these different attributes? And there is a whole literature on multi-attribute decision making, and you can do all these things, but you know, there's a challenge there.

And I would say the third aspect here is that the traditional models of search, like Pandora's rule that I mentioned, coming from the classic Martin Weitzman paper, typically involve not just being able to kind of quantify some of these values, but also kind of looking forward. You have to think about how likely it is I'm going to see values of certain magnitudes, think about a probability distribution on those values, and that can be difficult to produce, particularly in more subjective or qualitative settings. So what's kind of nice about the rules we develop in this paper is, A, we don't need that probability distribution. It's fully data-driven; that's kind of the punchline of the title there. And B, actually, if you dig in, we don't even need the values themselves. We just need a ranking. So you just need to be able to make relative comparisons. Sometimes that's referred to as ordinal decision making. So you just need to kind of look at everything you've explored in the past and say, which one of these do I like best?

Tanner Morgan 07:38

In your study, you find that some very simple strategies, like committing upfront to checking a set number of options, can actually perform quite well. Why do these rules work even when the situation feels really uncertain?

David Brown  

Fantastic question. I'm going to struggle to give you a fully precise answer, because I'm still working on and building my own intuition, but with that preemptive excuse out of the way, let me take a crack at it. So the way the approach we developed in the paper works is, what we're saying is: search a fixed number of alternatives and then take the best thing you find at the end of that. Okay, and that's it. So there's no being adaptive. There's no, you know, trying to blend data in some smart way. And the number of alternatives you search, let's call that n, is determined solely on the basis of how costly search is; it has nothing to do directly with the values you might receive or any distribution on those values. So you need to kind of have an understanding of your search costs. And when you have that, that gives you an n, and that prescribes, kind of, how many times you should be searching.

So this search effort, if you will, is carefully chosen. It's not coming out of thin air. And I think the high-level intuition for your question is that the number of times you search, the number of alternatives you search, your n, if chosen the right way, balances the total cost of your search against the anticipated value you're going to be getting from whatever alternative you find best during that search. And so I guess maybe the surprising result of the paper is that this kind of fixed search process ends up being a good approximation for your performance. Think of the value you get, net of all your search costs, compared to what you'd get with the smartest search algorithm you could imagine. That could be something that takes all the data of past values explored and uses algorithmic machinery to guide your decisions. In fact, this fixed search effort does about as well as that, or relatively close to that. So I think maybe the short answer to your question, in a nutshell, is that the time or duration of a committed search process can provide a good proxy for the fully optimal search, which might be adaptive and data dependent and more complex.
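The committed-search idea Brown describes here can be made concrete with a toy calculation. This sketch assumes i.i.d. Uniform(0, 1) values purely for illustration (the paper's actual rule needs no distribution, and these function names and costs are this example's assumptions); the point is only that the committed budget n is driven by the search cost, shrinking as search gets more expensive.

```python
def expected_net_value(n, cost):
    """Expected net payoff of committing to n searches when values are
    i.i.d. Uniform(0, 1): E[max of n draws] = n / (n + 1), minus n * cost."""
    return n / (n + 1) - n * cost

def committed_n(cost, n_max=1000):
    """The committed search budget: the n maximizing expected net value.
    It depends only on the per-search cost, not on any observed values."""
    return max(range(1, n_max + 1), key=lambda n: expected_net_value(n, cost))

print(committed_n(0.02))  # cheaper search: commit to a larger budget
print(committed_n(0.05))  # costlier search: stop sooner
```

Once n is fixed, the searcher only needs ordinal comparisons, ranking the n alternatives seen and taking the best, which matches the "no probability distribution, just a ranking" property discussed earlier.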

Tanner Morgan 10:05

So a big part of your model is how costly searching can be, whether that's time, money or even attention. How should managers think about balancing search effort against these costs?

David Brown 10:19

Yeah, and I think that comes back to the previous point, and maybe this is really our fundamental punch line: it's search effort. You should think about your search costs, and firms, and individuals as well, should be disciplined in their search budget, and that budget should be influenced and determined by your search costs.

Strictly speaking, if you dig into the paper, it's a little more subtle than that. We start with the case where the searcher (this could be, again, a firm or an individual) really knows nothing about what they're going to see, completely searching in the dark about how likely it is the next alternative will be really good, great, mediocre, whatever. We analyze that case. However, we also look at more restricted settings where the searcher might be comfortable making some relatively loose assumptions, like the tails of the distribution aren't really crazy, for lack of a better term; you're not going to see something extremely huge. And so if you're comfortable making certain assumptions that the distribution of whatever it is you're going to see, those values, isn't too wild, then you might think about, you know, recalibrating your search effort. And typically, in those settings, you'd search a little bit less because you're not hoping for something really amazing. So the analysis in the paper shows if those assumptions are correct, you can do even better, actually. But the basic premise and lesson is the same: the search effort you want to think about should really be tied to your search costs.

Tanner Morgan 11:55

So how would this apply to consumers? Say I'm shopping for airline tickets, a new apartment, or, in a recent case for me, someone to do some renovations on my home. How might this change how I search?

David Brown 12:07

Yeah, I think it comes back to the search effort piece.

We've all been in that situation where we're looking for something, we're searching, and you're constantly wondering: is there something else out there that I haven't seen that's better? You know, MBA students searching for jobs, almost from day one you're thinking about that, right? And you go through the job search, and it can be super stressful. Do I take the best thing I've found so far, or am I settling, right? Is there, to date, some unseen option with really high value, and maybe if I just keep trying, I'll go out there and find it? And that can be true with jobs. You mentioned apartments. There's probably a joke in here about online dating. We didn't use that example in the paper.

So you might call that like the jackpot or the lottery ticket distribution, or maybe the boom or boring distribution.

So one of the things that's intriguing in the analysis, in the fully general case, is that in the paper there's a bunch of math, and when the dust settles, there are actually two kinds of value distributions that emerge as limiting cases. There's the fully boring case, where there's really not much uncertainty at all about what you're going to see; kind of everything's the same. And you may as well, in that case, not really search, just take the first thing you find. And the other case that pops up is the boom or boring one that I just mentioned, where things are usually kind of all the same, but there's this really great value out there, and if you search a really long time, you're eventually going to get there. And so those are actually the true extreme cases, and they pop right out of the analysis. And so how do the methods we develop help? Well, the issue with this boom or boring situation is that to find that boom, that really great option, you're going to have to search probably a really long time, and that's going to be costly, and you don't want to pay all those costs searching forever. So the rules on the fixed search effort that we develop essentially prescribe how long you should be searching before you give up on the boom. This is a long answer to your question about buying airline tickets, by the way. But I guess, coming back to your example there with consumers, my hope is maybe this gives some kind of theoretically grounded support for how they should be thinking about searching in practical situations, you know, kind of committing to a certain amount of search effort or a certain number of alternatives. And methodologically speaking, it's, I guess, not fundamentally different than the story for managers or firms. It's just that the stakes in a lot of those situations maybe aren't quite as large.
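The boom-or-boring trade-off above can be simulated directly: chasing the rare boom indefinitely eventually lets search costs swallow the prize, which is why a fixed budget that "gives up on the boom" at some point makes sense. The distribution parameters and names below are hypothetical, chosen only to make the trade-off visible.

```python
import random

def boom_or_boring(rng, p_boom=0.01, boring=1.0, boom=50.0):
    """'Boom or boring' values: almost every alternative has a common
    value, but a rare jackpot is out there."""
    return boom if rng.random() < p_boom else boring

def committed_search(n, cost, rng):
    """Commit to exactly n searches, keep the best value, pay n * cost."""
    return max(boom_or_boring(rng) for _ in range(n)) - n * cost

rng = random.Random(1)
avg_net = {}
for n in (1, 10, 100, 1000):
    sims = [committed_search(n, cost=0.05, rng=rng) for _ in range(2000)]
    avg_net[n] = sum(sims) / len(sims)
    print(n, round(avg_net[n], 2))
```

In this toy setup the average net payoff rises with the search budget at first, as more searches make finding the boom likely, then collapses once the accumulated search costs roughly equal the jackpot itself.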

Tanner Morgan 14:40

How do you think that this research connects to the growing reliance on algorithms and AI for recommendations? Are humans and machines solving the same problem in different ways? I know as you were answering I thought of Google Flights, for example, right for searching for airline tickets.

David Brown 14:56

So this is an excellent question. Yeah, so first, sort of one little disclaimer. The term AI is kind of being thrown around all over the place, and sometimes I think that creates some challenges for society. You know, we don't know what we're talking about; one person says AI and means one thing, and another person means something else. So I don't know if we have this magic wand that's going to solve everything, but there are certainly powerful tools out there, and things like generative AI are a big part of that. So I think it helps to think about specific technologies. But, slash end rant, I'll get off my soapbox there. You mentioned algorithms, so let's use that; I like that framing, and I like the analogy to Google Flights. Actually, there are two parts to what we show.

So first, we're showing that the fixed search effort rules we develop perform well, and they perform well compared to Weitzman's Pandora's rule, which requires a full kind of probabilistic view of the world. We touched upon that already. The other thing we show in the paper is that you might say, well, look, this fixed search effort business, that seems fine, and I guess it does okay. It seems to do relatively well, but maybe there's something else out there. Maybe I could be smarter. I could take all this data and use an algorithm and do a lot better. And we can call that an algorithm; we can call that AI, more broadly defined, I guess. And that's important, not just from a theoretical standpoint but also from a practitioner standpoint. You know, people would like to know if we can be doing better by smartly combining data and using that. In the paper, we actually precisely quantify how well any data-driven algorithm can do compared to this fixed search effort rule. There are precise results in the paper, but the punch line is: often not a whole lot better. And that was maybe one of the more surprising aspects of the work. So I guess, coming back to your question, in this particular search setting, the use of complex algorithms that blend data in smart, sophisticated ways might not buy you a whole lot. It might buy you a little. But there are other factors to think about in those situations, such as the fact that you're dealing with sort of black-box machinery that may be less interpretable. And if this is a higher-stakes decision, probably more than, you know, booking an airline ticket, this might require investments in the compute to drive those algorithms, and whether that's worth it, you know, it might not be, right?

Tanner Morgan 17:29

As you worked through your research, was there a result that really surprised you?

David Brown  

So I think the simplicity of the search rules was kind of surprising. I didn't expect that these sort of, for lack of a better term, naive search rules, with just this fixed amount of search effort or fixed number of alternatives, could perform as well as they do. And that seems surprising, especially when you consider scenarios like that boom or boring scenario, where, you know, you could have something out there that's really great but just really unlikely. So a priori, it seems a little hard to guard against that case. But that was the main thing that surprised me.

Tanner Morgan 18:11

So, Professor, where do you see this line of research going next? Are there new contexts, like AI-assisted decision making or rapidly changing markets, where you'd like to test these ideas?

David Brown  

Yeah, I think there are a lot of interesting directions looking ahead. So coming back to the AI piece. One thing we're hearing a lot about, and this ties back to your previous question, is AI agents. This is not just, you know, making predictions with AI. This is actually using it to drive decision making, and this has often been framed around sort of more mundane tasks, like, you know, managing calendars, booking flights. I saw a paper recently from some colleagues at Columbia on, you know, AI shopping for you, filling up your grocery basket, right? And Cagin and I have been talking about some of the implications there. I think the other aspect that's maybe a little more subtle, and maybe a refinement of my response to the previous question, is that if you're using an algorithm to handle the search, or take the lead on the search, then maybe search costs are actually lower there, right? Because it's not so much your time, it's not your mental effort. And maybe with the lowering of the search costs, you know, that can change the equation. Maybe with this algorithmic approach it would be okay searching longer, leading to efficiency gains. So that would kind of be a twist on things.

There's also a whole literature on decision making with what's called advice, and this is in kind of the computer science and operations research literature. It doesn't mean, you know, someone giving you advice directly, although it's sort of that in spirit. Here, the idea is more abstract, and it's intended to capture some kind of initial information, typically provided by a machine, a computer, an algorithm. This could be some kind of market research for the situation in question, or simply some samples or initial data that might be relevant for the values you could see in the future. And so rather than this complete cold start, where you say we don't really know anything about the search, we actually do have this information through this advice. And so how much better can we do in those settings?

And then I think the last would be sort of the more meta question, which is, you know, when should we be using algorithms? When should we be using AI to drive decision making? We're hearing a lot about all the powerful things you can do with this new set of technologies. But we're also hearing concerns about things like energy consumption and our electricity grid being overwhelmed by, you know, so many people using AI. So maybe the meta question is: there are certain problems where we really don't need algorithmic solutions, and then there are other settings where they can be really helpful. And thinking about how we want to invest, our AI resource allocation, if you will. So that's maybe a bigger-picture question.

Tanner Morgan 21:12

Professor, thank you so much for joining us today.

David Brown

Thank you so much, Tanner. It was a lot of fun.

21:24

Duke Fuqua Insights is produced by the Fuqua School of Business at Duke University. You can learn more at fuqua.duke.edu/podcast.

 

Bio

David B. Brown is the Snow Family Business Professor in Decision Sciences and the Faculty Director for the Center for Energy, Development and Global Environment (EDGE) at Duke University's Fuqua School of Business.

Professor Brown's research focuses on designing and analyzing algorithms for decision problems involving uncertainty. Professor Brown is actively working with researchers at Duke and several other institutions to improve the efficiency and reliability of electricity grid operations in the face of uncertainty in demand and renewable energy sources.

His recent research also includes developing and analyzing solution techniques for problems such as network revenue management, dynamic pricing in shared-vehicle systems, stochastic scheduling, and sequential search. Professor Brown is an Area Editor for the Decision Analysis Area at the journal Operations Research, and the Institute for Operations Research and the Management Sciences (INFORMS) has recognized his research with several awards. At Fuqua, he has taught Decision Models, Data Analytics and Applications, Probability and Statistics, and Convex Optimization, and he has won teaching awards in multiple programs.

Professor Brown received Bachelor's and Master's of Science degrees in Electrical Engineering from Stanford University and has been on the faculty at Fuqua since receiving his Ph.D. in Electrical Engineering and Computer Science from MIT.

This story may not be republished without permission from Duke University’s Fuqua School of Business. Please contact media-relations@fuqua.duke.edu for additional information.

Podcast Article