Fuqua Insights Podcast: Can 1% Improvements Transform Your Business?

Professor Sharique Hasan explains how startups that embrace experimentation see dramatic performance gains—and why some companies miss out

Podcast, Strategy

A single decision improved by 1% might seem trivial. But make 300 small improvements over a year, and the compounding effect becomes transformative. A/B testing allows companies to systematically test different approaches and optimize performance, but research shows that the startups that could benefit most are the least likely to use it. 

In this episode, Professor Sharique Hasan of Duke University’s Fuqua School of Business discusses his paper “Experimentation and Start-up Performance: Evidence from A/B Testing,” which focuses on how startups use A/B testing to drive performance. Based on data from more than 35,000 startups, Hasan and his coauthors found that those adopting A/B testing experience significantly higher performance over time—sometimes doubling outcomes after a year.

Hasan explains that while the impact is strongest for smaller and non–Silicon Valley startups, these firms often lack the resources to implement A/B testing effectively. For them, he introduces the concept of “experimental thinking” as a more accessible alternative: a mindset of comparing options rigorously, asking the right causal questions, and framing decisions with clear counterfactuals.

Drawing from both large-scale quantitative analysis and rich qualitative insights from tech practitioners, Hasan describes how small, compound decisions can lead to transformative outcomes. 

SPEAKERS

Sharique Hasan, Emma Salomon

(music)

Emma Salomon  00:04

Companies make hundreds of decisions every week—what features to build, which customers to target, how to price their products. But how do you know if you're making the right call? And what kind of impact do these decisions have on performance? Professor Sharique Hasan of Duke's Fuqua School of Business studied how startups use experimentation to make better decisions. His research on A/B testing analyzed over 35,000 startups to understand whether testing ideas systematically actually improves performance. I'm Emma Salomon, Director of Strategic Communication at Fuqua, and today I'm joined by Professor Hasan to explore what he learned.

Let's start with the big picture: What sparked your interest in A/B testing and its impact on startup performance?

Sharique Hasan  01:09

Thanks, Emma, for having me on the podcast. When I was in graduate school, I was doing experiments, social science experiments, where you're interested in understanding whether some social science theory is true versus some other theory. I started doing field experiments, trying to look at the impact of social connections on people's outcomes, behavior, and performance. Then I got my first job. It was in the Bay Area, and there I was at my desk doing my social science experiments. But after work, I met lots of people working in tech companies who were also doing experiments, and they were also trying to figure out, how do we do experiments well? And I thought that was interesting: this methodology of experimentation, which I thought was just for poor academics, was beginning to get used pretty extensively at Google, at what was then Facebook, at Amazon. That sparked my interest. Wow, they're using experimentation. Does it work? Does it do what they think it's going to do, which is increase their performance? That planted the seed in my mind: what's the impact of experimentation on startups and on big companies, and how does it affect their performance?


Emma Salomon  02:31

And why is this topic important for aspiring entrepreneurs or future business leaders?

Sharique Hasan  02:37

Entrepreneurs and business leaders have to make lots of decisions, and it's not one or two big decisions, it's literally thousands of small decisions. And how do they make them? Usually they have various methodologies, and some are not really methodologies at all: what have they done before, did it work, their gut feeling, heuristics, something they heard worked for someone else. So they're making a whole bunch of decisions, and they have some decision rules about whether to go with path A versus path B. Sometimes those decisions don't work out, and you don't even know whether they worked out or not. You want to make good decisions, and you want to make them in a way that is cost effective. Experimentation allows you to run small tests and reduce the risk, because you're not making one big choice. You're testing it on a small group of people, 200 customers, 300 customers, and learning: does this small decision get people to buy my product more, or does it get them to stay and watch the video a little bit longer, or whatever the decision may be? Capital One was actually an early user of this. They would send different mailers to different households, thousands and thousands of different versions, to make sure people actually opened the mail. And that's a big decision; mailers are really expensive, so you want to test first, on a small scale, whether this works before you go big. I think experimentation, done repeatedly, allows you to be much more sure that you're making the right decision, and those things add up.
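To make that concrete, here is a minimal sketch of the kind of small test Hasan describes: show the current page to a few hundred customers and a variant to a few hundred more, then ask whether the difference in conversion rates is bigger than chance. The counts (36 of 300 versus 54 of 300) are hypothetical, invented for illustration.

```python
# A minimal sketch of a small A/B test: compare conversion rates between
# two groups of ~300 customers with a two-proportion z-test. Counts are
# hypothetical, not from the paper.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal tail probability via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical outcome: 36/300 convert on the current page, 54/300 on the variant.
p_a, p_b, z, p = two_proportion_z_test(36, 300, 54, 300)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```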


Emma Salomon  04:26

Your research shows that A/B testing leads to significant performance gains over time, up to 100% after a year. Why do you think this impact is so strong?

Sharique Hasan  04:34

I think it relates to a couple of things. You have to think about how people were making decisions before. We talked to a major tech company about their rollout of A/B testing inside the company. They hadn't done it before, and you'd be surprised; it's a company whose product you probably use a lot. The way they made product decisions was very, very interesting. They had something they called HiPPO: the highest-paid person's opinion. At a product meeting, the boss gets to decide: this is what I like, we should go with this. Or a very charismatic product manager would weave a great story about how this is going to be amazing, the customer is going to love it, and it would go through purely based on charisma. Or important changes would get shut down by the ads division, because ads is how you make money. They would say, well, this is going to hurt ads. We don't know whether it's going to hurt ads, we don't know how much it's going to hurt ads, but ads, because they're important, because they're the revenue engine for the company, gets to shut it down. And they knew this was preventing them from making product changes that were going to be delightful to users, and they were facing competition from another big tech company that you probably know, and they had to do something. So they implemented a whole one-and-a-half-year-long process where basically every decision was going to be made through A/B testing. They rolled out a big A/B testing platform, they taught people how to think in terms of experimentation, and they found massive increases in performance. Because now you have to go to the meeting and say, well, this is what my test said. You can argue about the statistics, but you're no longer arguing about someone's story; you're arguing about the data. And you do that not just once but in every single meeting, so you're going to make better decisions on average than if you were essentially just doing story time at the product meetings.

Emma Salomon  06:50

One surprising finding is that smaller and less well-funded startups seem to benefit the most from A/B testing. What might explain why they're also the least likely to adopt it?

Sharique Hasan  06:59

So I want to preface this by saying it's not an experimental study, and it's a little bit ironic: we're studying experimentation without doing an experiment. People get to select into whether they start doing experimentation inside their firms, and we deal with this causal-inference issue quite a bit in the paper. But there is a difference in terms of who adopts A/B testing. It's the bigger, venture-backed startups based in Silicon Valley, and I think it just became part of the culture there. Everyone knew that if you were going to make consumer-facing products, products that people wanted to use, you had to do A/B testing. If you're outside that mainstream Silicon Valley tech ecosystem, you didn't even know how to set this up. Tech skills include programming, product design, and so on. Product design people are qualitative people; they do user interviews. And the software engineers are software engineers. This was a completely new skill set that really wasn't prevalent, at least at the time we wrote the paper: statistics, and rigorous statistical thinking, particularly experimental statistics. As a result, when you're a smaller startup, maybe based in Georgia, or even in Chicago, that skill set is just unavailable to you. So in some ways it's not surprising they didn't do it, because it's a pretty unique skill set, and very few companies had a lot of that talent. Facebook, for instance, had an entire core data science group whose whole job was to literally A/B test every single thing.


Emma Salomon  08:49

So what can scrappy founders or early-stage operators do to overcome these hurdles?

Sharique Hasan  08:55

I would break experimentation into two buckets. One is actually running A/B tests rigorously, with a solid sample size, testing between treatment and control. The other is experimental thinking, and I think you can separate those two. Anyone can engage in experimental thinking, which means being rigorous about what the causal story is. Why do you think X will matter? And what are you comparing it against? Even if you're not running a formal experiment, just framing the decision as "I'm testing between these two things, all else equal, and I think this one is going to win out" opens up a lot of performance, because now you're being formal and rigorous about why you're making the decision you're making. So I always tell founders I talk to who don't have the resources, who don't even have data people coming to their platform: have a causal story about why you think this change will matter for your performance. If X, this change you're making, happens, then what do you expect to happen, and especially when, or except when? What are the scope conditions? Once you have that story, and you have lots of stories framed that way, you can be much more rigorous about the why, and when you do get the data, this just becomes natural. Every decision is framed this way, and you can consistently make good decisions. One more thing, on why the performance gains become so large: decisions compound. You make one good decision, you get a 1% improvement. The next decision, you get a 1% improvement on that. And the next, a 1% improvement on that. Do that every day for a year, and you get your original performance times 1.01 to the power of 365. That's gigantic. Most people don't realize these small things compound so significantly, and that becomes a significant barrier to entry, because it's very easy to copy companies that do one or two big things, but nearly impossible to copy companies that make lots of little changes that compound. Those are completely unobservable, even to the customers. The product feels a certain way, or looks a certain way, but you don't know how they got there, because it was a lot of little changes rather than a few big ones.
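The compounding arithmetic Hasan sketches is easy to check; here is a two-line illustration against a purely illustrative baseline of 1.0:

```python
# One 1% improvement per daily decision, compounded over a year,
# starting from an arbitrary baseline of 1.0.
baseline = 1.0
after_one_year = baseline * 1.01 ** 365   # a 1% gain, 365 times over
print(f"{after_one_year:.1f}x the original performance")  # ~37.8x
```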

Emma Salomon  11:41

Your paper talks about failing fast and scaling fast as outcomes of experimentation. Can you break down how A/B testing helps startups know when to pivot or double down?

Sharique Hasan  11:52

In the absence of feedback loops where you learn whether something worked, you're going to continue doing things that don't work. You can persist, as long as you have pots of money, potentially from a venture capitalist, going down the same path and doing the same old stuff that produces no outcomes, just because you have money to fund it, and you can weave whatever stories you like. With A/B testing, you're now getting data: this isn't working, it's not working again, it's not working again. Okay, we've found evidence it hasn't worked 75 times, let's stop now. You're not wasting your time on things that don't work, and you can start allocating your attention to things that do. And it's not just attention; it's effort, it's money, it's other resources. So what we find is that A/B testing is indeed improving the mean performance of a startup, but what it's really doing is cutting out bad paths that aren't working. Now you know very fast that they're not working, so you don't waste your time on them. It's like that Pavlovian dog: you pull the lever, food comes out. The lever hasn't been working, no food's coming out, so let's pull a different lever now, and you start allocating to other things that are working. So it's really the shift of attention, effort, time, and resources from bad paths to good paths.

Emma Salomon  13:22

How do the findings here challenge or build on classic ideas of business strategy, like Porter's deliberate strategy versus Mintzberg's emergent strategy?

Sharique Hasan  13:31

I think Porter's view is a very big-picture view. I've actually written about Porterian strategy on my Substack, and Porterian strategy is really, I would say, paranoia-based: my competitors are coming after me and trying to steal my margins; even my collaborators, my supply chain, are trying to squeeze me. It's very macro; it's thinking about the external forces trying to eat your lunch. That's how we teach it, and that's how a lot of people teach it. Whereas I think A/B testing is really saying, look, very few firms are in a position where anyone's trying to eat their lunch, because they have no lunch to take. So then you think, okay, how do I actually make … it's a philosophical stance. I personally think there's a lot of performance lost because we make bad decisions consistently within organizations. A/B testing really focuses on organizational decisions rather than market and competitive dynamics, which do matter, but I think they matter for a smaller subset of firms that are really well managed to begin with. For most firms, there's a lot of room for improvement just by making good decisions internally. So I think that's a philosophical difference.

Emma Salomon  14:55

And you use a fascinating mix of qualitative insights from practitioners and large-scale quantitative data. How did that combination help you tell a more complete story about experimentation?

Sharique Hasan  15:08

We combined lots of different data sources. We had daily page views for 35,000 websites from all across the world. We had bounce rates. We got literally every single press release from these startups about new product introductions. We bought data tracking every single change in their underlying technology stack over this period. We even downloaded their code bases, every single day, from internet archives, and looked at changes in the code base. And we found our results. Now you have these results, and you think, okay, this is the story we think we're telling; that's what we do in quantitative research. I'm primarily a quantitative researcher, but my entire co-author team knew people in this space. This was their job; they were the heads of experimentation at every top firm you can think of. So we said, hey, why don't we just talk to people and see what they think, how it's going to change things inside their company, which we can't get from the data. We can say, hey, it affects performance, but why? I think that's the really interesting question: why is this happening? We talked to folks at a major website that has social media profiles for people who are looking for jobs, and we talked to their experimentation team. They said, yeah, lots of our engineers have ideas. How do we aggregate those ideas? Well, they can submit an A/B test they want to run. Now, rather than relying on the product manager for ideas, you're benefiting from literally thousands of people whose ideas can be tested and implemented. Wow, that's really interesting. You reduce the cost of testing, make it easy for people to test ideas, and now you can test a lot more ideas than you used to. That's something interesting you can't get from the data. So I do that now for all my research: I go and talk to lots and lots of people, because the data tells a story, and you try to be as convincing as possible, but you also need to know the why.

Emma Salomon  17:16

For MBA students heading into product strategy or analytics roles, what's your advice on fostering a culture of experimentation inside an organization?

Sharique Hasan  17:25

I'd go back to thinking experimentally. Be clear about what we call the counterfactual: compared to what? Compared to what? I think asking that question is the foundation of experimental thinking. And then have a clear mechanism: why do you think this matters? Those two things together are really powerful, and it's surprising, actually, that people don't think that way; that's not how they're taught to think. As you go into organizations, the other thing I would do is think about the decisions themselves. We make consequential decisions, but we don't actually think about them as decisions. So really be thoughtful about the decisions you're making and why you're making them. Combine those things and you have a decision, a counterfactual, and a reason why deciding this way is going to move the needle. Then you already have the conceptual framework, and you can think about, okay, how do I do this cheaper, faster, more often? How do I do it as a system that can incorporate lots of people's diverse ideas? That's what I would recommend.

Emma Salomon  18:46

If you were designing a course module or workshop for MBA students on experimentation in business, what would be the key takeaway or activity you'd include? 

Sharique Hasan  18:55

I used to teach a class on Yelp, which implemented a big experimentation platform. I've now shifted to talking about the New York Times. The New York Times was declining in terms of ad revenue. They were facing tons of competition from upstarts on the right (Breitbart) and on the left (Vox, Huffington Post) taking away important ad revenue. So how do you get ad revenue? You get people to click on your articles, and ads show up. Well, how do you get people to click on your articles? They have to be interested in them. So what the New York Times did is this: when you submit an article as an author, you write several titles for it, and the editor probably adds more. Those titles get plugged into a system, and the system randomly serves the different titles to different people. The article stays the same, the content stays the same, but the title is a little bit different. You do this and you find the title that gets, let's say, a 2% increase in clicks relative to some other title. Now, that's 2%. The front page of the website has 20 articles; 20 articles, each with a 2% improvement, is millions of dollars, right? You do that every single day, and now you're generating massive amounts of revenue that comes literally from a system you've designed. So what I ask students to do is design titles for an article they've read, and then we have people randomly get a different title and decide whether they would read it or not. We can do the test right in class, starting with generating the different titles. With a class of 70 or 80 students, you get 80 different titles. That's incredible. And then, literally in a matter of a few minutes, you can find out which one is best, and you've unlocked even a 2% gain. Do this systematically, and that's a pretty powerful lesson, I think.
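Here is a minimal simulation of the exercise Hasan describes. The three titles and their click probabilities are invented for illustration; each simulated reader is randomly assigned one title for the same article, mirroring the random serving he outlines.

```python
# Simulate a headline A/B test: each reader sees one randomly assigned title
# for the same article; we track click-through rates per title.
# Titles and true click probabilities below are hypothetical.
import random
from collections import defaultdict

random.seed(7)
true_ctr = {"Title A": 0.10, "Title B": 0.12, "Title C": 0.102}
titles = list(true_ctr)

clicks = defaultdict(int)
views = defaultdict(int)
for _ in range(30_000):                     # simulated readers
    title = random.choice(titles)           # random assignment, same article
    views[title] += 1
    clicks[title] += random.random() < true_ctr[title]

for title in titles:
    print(f"{title}: {clicks[title] / views[title]:.1%} CTR on {views[title]} views")
# The winning title's ~2-point edge, applied across 20 front-page slots every
# day, is the compounding revenue effect Hasan describes.
```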

Emma Salomon  21:09

And as we wrap up, I'm curious: what are some of the common misconceptions about A/B testing that business leaders or students should unlearn?

Sharique Hasan  21:17

That it's only for tech companies, I would say, is a big one. You can apply this to lots of different businesses where you have a choice and can make changes at a low cost: try it out. A/B testing can be done in the grocery aisle. Should I put this thing here or there, at the top or the bottom of the shelf? We can think about this in manufacturing, where experimental thinking can be really useful. You might not be able to run an exact experiment, but on the factory line: should I move this part of the production process here versus there? You can think about an auto dealership trying to figure out how to get people to come in and do their oil change with them rather than Jiffy Lube. What is the email I send? How often should I send it? It can be for nonprofits: how do I get people vaccinated? Walmart actually does A/B testing consistently with Walmart pharmacy to get people to take the flu vaccine. In America, about 20,000 to 30,000 people die of the flu every year, and research has found that just a slight tweak, sending two messages with a particular structure rather than 25 other different messages sent over SMS, increased the number of vaccinations by about 20,000. That's incredible. So you can apply this everywhere you're making decisions and there are different options for doing it.


Emma Salomon  22:55

Well, thank you again for joining us. You've been listening to Professor Sharique Hasan from Duke University's Fuqua School of Business.

(music) 

 

Bio  

Sharique Hasan is an Associate Professor of Strategy at Duke University’s Fuqua School of Business and an Associate Professor of Sociology (by courtesy). His research focuses on entrepreneurship, innovation and technology, and their social and community implications.

He is the deputy editor of the journal Organization Science, and has published in top journals including Management Science, Strategic Management Journal, American Sociological Review, Organization Science, Strategy Science, and Administrative Science Quarterly. He also serves as a Board Member and Co-Scientific Director of the Innovation Growth Lab, a think-tank focused on broadening the impact of innovation and entrepreneurship through research and policy.  
 
With other Duke researchers, he created an AI-powered tool, scientifiq.ai, that evaluates the commercial potential of science. Hasan earned his Ph.D. from Carnegie Mellon University and has held positions at Stanford. He is the author of the Superadditive Substack, where he writes about innovation, strategy, and organizations.

This story may not be republished without permission from Duke University’s Fuqua School of Business. Please contact media-relations@fuqua.duke.edu for additional information.
