Experimenting with custdev

Note to the peer review board:) 

  1. I have not read Kevin's post yet, but I have great respect for him - I will read it and incorporate my thoughts into the final post that I put on Medium - Blogger is just the platform I use to fuck it, ship it.
  2. The title is intentionally neither for nor against custdev - I have no interest in those link-bait titles. Other candidates were "Custdev: what problem does it solve?" or something corny like "To custdev or not to custdev."
  3. I thought I had more… anyway, I would love to hear your thoughts - I think this is an important conversation.

Does custdev work? It is a REALLY important question. The answer is…

I don't know, and neither do you. 

The reason you don't know is that the question as framed above is unknowable, and the question as framed below has not been tested (at least not well enough to answer).

The question above is unknowable until we answer the following:
  1. What is the definition of custdev?
  2. What is the definition of "work" (in the context of each experiment)?
  3. For whom?
To keep our variables to a minimum, let's assume Steve Blank's definition (note to the peer review board: this is where I googled Steve's definition, looking for something about an interview that does not mention the solution and aims to learn about the 5 characteristics of an early evangelist - I landed on Eric's blog post first instead).

I did find this quote from Steve -- and I had a line in my notes for this post: "No self-respecting southerner uses instant grits, and no self-respecting lean scientist tries to determine the efficacy of custdev by having a debate on Slack."

“In a startup no facts exist inside the building, only opinions.” 
The question depends on the FOR WHOM, and the answer depends on running a nearly flawless experiment… so let's continue to variables 2 and 3.


Custdev will have different yields for a first-time entrepreneur, an accelerator MD, an enterprise product team, an enterprise shareholder, and an enterprise coach.


The JTBD for custdev can be as varied as confirming a problem, honing a segment, filtering a pool of applicants, or humbling a seasoned C-suite of executives.


We are getting very meta here, trying to run the perfect experiment. What is the MINIMUM we need to do to LEARN the thing we want to learn - in this case, that PROPER custdev does the job promised for the segment in question?

So before we design the experiment, we still need to pick the segment and the job. Again, meta - but if we are going to run the perfect experiment, we should pick the segment with the biggest pain. In my opinion, the people with the biggest pain are those for whom the problem is most unknown and the cost of building is extremely high. When people try to debunk custdev by asking whether FB or TWITTER did it, my answer is (note to peer review board - I want to say something here about how the risk is lower when you can ship your own MVPs, move 10x faster than anyone else, and already have access to markets, or something along those lines) that MZ could build things 1000 times faster than you and had low risk. Other examples where the actual (not perceived) market risk is low could be cases where pure customer development has either diminishing marginal returns OR returns lower than other types of experiments (a high opportunity cost).

The "perfect" segment to try this out on is one with a high proclivity for sunk-cost bias, high market risk, and a relatively slow feedback loop with customers…

Picking the segment - the for whom - will be easy, especially when we challenge all the practitioners out there to stop asking this question without testing it. Defining WORKING is going to be a bit tougher. I have been thinking lately that custdev is a vaccine against bad ideas, not a cure for good ones. You can measure working in any of the following ways:

note to the peer review board, I need to tidy this up:)
  1. absence of a negative -- less money wasted, faster to recognize x, …
  2. presence of a positive -- faster to revenue, faster to…


The perfect experiment, at this point in the conversation, is unfortunately ANY experiment - anything that is not just an opinion, a debate inside the building. I admire that this guy seems to be running ACTUAL experiments rather than just giving opinions. That said, I think his success criterion of "performance at a pitch competition" sinks the credibility of the first experiment - but it looks like he is doing more, and I look forward to seeing whether he runs any that specifically control for custdev and not just experimentation.

We are also using the milvalchal as a way to control for custdev as a CAUSALLY related variable, but I admit we too can do better by going faster, segmenting better, and… That said, if the measure of success is it NOT BEING A WASTE OF TIME to do 25 interviews… then


The perfect experiment probably goes something like this: someone like Techstars blindly makes half their teams do PROPER CUSTDEV (with the rigor controlled by folks like JB, BORIS, JUSTIN, NAMEDROP, ETC) and measures success not only by demo-day funding but by profit 5 years later, or some sort of 3-year growth rate of a non-vanity metric.
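As a thought experiment, the randomized design above could be sketched in code. Everything here is hypothetical - the helper names (`assign_cohorts`, `uplift`, `permutation_p_value`) and the outcome metric are invented for illustration - but it makes the shape of the experiment concrete: blind assignment, a non-vanity outcome measured later, and a check that any uplift beats chance.

```python
import random
import statistics

def assign_cohorts(teams, seed=42):
    """Blindly split a batch of teams into a custdev arm and a control arm."""
    rng = random.Random(seed)
    shuffled = list(teams)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (custdev arm, control arm)

def uplift(custdev_outcomes, control_outcomes):
    """Difference in mean outcome, e.g. a 3-year growth rate of a non-vanity metric."""
    return statistics.mean(custdev_outcomes) - statistics.mean(control_outcomes)

def permutation_p_value(custdev_outcomes, control_outcomes, n_iter=10_000, seed=0):
    """How often a random relabeling of teams produces an uplift at least this
    large - i.e. how plausible the observed uplift is if custdev did nothing."""
    rng = random.Random(seed)
    observed = uplift(custdev_outcomes, control_outcomes)
    pooled = list(custdev_outcomes) + list(control_outcomes)
    k = len(custdev_outcomes)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if uplift(pooled[:k], pooled[k:]) >= observed:
            hits += 1
    return hits / n_iter
```

The point of the sketch is not the statistics - it is that every piece of the debate (who counts as "doing custdev", what outcome counts as "working", and for whom) has to be pinned down as a parameter before the experiment can run at all.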

That is unlikely to happen, so the burden of this experiment is on you…

note to the peer review board - I am hitting publish because I promised myself I would… this post is by no means finished… if you got this far, please help me finish it - I think it is important!
