Here is a presentation that Anthony gave about a year ago about Lean Startups. I think it’s still relevant today, although my understanding has subsequently improved.
Here’s a breakdown of the slides, with a short description of them.
Why Do I Care?
Why is this interesting to me?
Interested in creating great products for people to use and not wasting my time
Have been following this community for a while, and thought I would share what I have learned and see if others are interested in this topic
What Is A Startup?
Widely accepted definition, and the one I’ll use is:
An organization delivering a new product/service under extreme uncertainty.
Context is king
Cannot understand where a practice is useful and apply it correctly unless you know the right contexts to use it in
Most of the case studies that I talk about apply to small product companies, but the implications are much bigger
Would like to plant some seeds for future applications and conversations
Types of risks organizations typically face
Technology risk – can you build it at all?
Phase III trial -> 4.5 years and $50 million
Something that is not currently technically feasible
If you develop and distribute a cure for cancer, the only question is how many can you make?
Industries / startups affected: biotechnology, health care, energy
Market risk – will anyone buy it?
I don’t have the problem you are solving
I don’t need your solution badly enough to buy it
Do people want what you are building?
How have things in the valley traditionally worked?
Case Study: Webvan
There were many large failures (Webvan story)
Enter Steve Blank and the Customer Development model
Blank had several companies that succeeded and failed, and tried to come up with a good model for why the companies had the outcomes that they did
Redefined startup as: an organization formed to search for a repeatable and scalable business model.
Not necessarily just for products aimed at end consumers, as Steve Blank’s original study focused on enterprise customers like banks
– not repeatable? you’re consulting / contracting
– not scalable? you can’t scale returns
Use Webvan as example of problems
Customer Development Model
The part on the top of the CD Model slide image is what companies typically do
What they should be doing instead is learning and iterating
Form hypotheses and test them. Two are critical: problem and product concept
Another one that is important is pricing
Blank’s model states that there are various things to do and validate at each step of the process
Customer discovery – achieving problem/solution fit (does the customer think this is a problem?)
Customer validation – achieving product/market fit (a repeatable and scalable business model)
Customer creation – driving demand (crossing the chasm)
Company building – scaling the company
At each step, consider making an exit
Get out of the building
A startup is not a smaller version of a big company
Theory of market types (reminds me of Blue Ocean Strategy)
Creating a new market
Long timeline; be prepared to sustain a slow burn rate
You do not yet understand the customers well
Bringing a new product to an existing market
Resegmented market (niche, different cost / performance points)
Lots of competition, might fail quickly
Major point is that if you use the incorrect market type model to make decisions, you will not do well
Find a market for the product as specified, not the other way around
Learning and iterating versus linear execution
Product Adoption Lifecycle
Most companies used to focus on “crossing the chasm” (Geoffrey Moore)
However, most companies aren’t learning enough to even get to the chasm
Visionary customers can “fill in the gaps” on missing features if the product solves a real problem
The minimum viable product (MVP) is one of the more potentially ambiguous concepts in Lean Startup theory
The MVP is the minimum set of features needed to learn from earlyvangelists – visionary early adopters
An MVP could be something as simple as:
– a survey targeted to potential customers
– landing page that describes a proposed solution
– a mock up of a solution (paper prototype, HTML prototype)
– an extremely buggy, barely working product that contains only the most important features and doesn’t scale to more than ten users
There is also the technique of giving users a link that takes them to a page asking them about the proposed feature
Think of how this could change the way we build applications
Instead of eliciting feature requests and such, have links to things that users might want and record when they try to explore them
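This kind of link for an unbuilt feature is sometimes called a “fake door” test. A minimal sketch of the bookkeeping might look like the following; the feature names and the `record_interest` helper are invented for illustration, and a real web app would do this in a route handler that also shows a “coming soon” page:

```python
# Minimal sketch of a "fake door" test: count clicks on links for
# features that don't exist yet, to gauge demand before building them.
from collections import Counter

clicks = Counter()

def record_interest(feature_name):
    """Log that a user clicked a link for a not-yet-built feature."""
    clicks[feature_name] += 1
    return "Thanks for your interest! This feature is coming soon."

# Simulate a few users exploring proposed features.
record_interest("export-to-pdf")
record_interest("export-to-pdf")
record_interest("dark-mode")

# Demand signal: which unbuilt feature draws the most clicks?
print(clicks.most_common(1))  # → [('export-to-pdf', 2)]
```

The counts, not the page itself, are the product of this MVP: they tell you which feature to build next.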
That last kind of MVP – the barely working product – is exactly what Eric Ries created at IMVU
He took a class by Steve Blank when Customer Development was still just theory and applied it to his startup
Leverage a commodity technology stack – what would have cost big bucks can be had for cheap now
Ries contends that:
Traditional waterfall development takes a known problem and generates a predictable or planned solution
Agile development takes a known problem and generates an unpredictable solution based on feedback
Lean startup tries to find an unknown solution to an unknown problem
OODA loop applied to startups
We are trying to increase throughput not of features, but of validated learning
When you learn, take what you learned and change your product accordingly
Consider these three pivot types:
Pivot on customer segment
Perhaps your consumer product is gaining traction with enterprise customers
Perhaps your product aimed at teens is actually popular among moms over 45
Pivot on customer problem
Solve a different problem for the same customer segment
Pivot on a specific feature
PayPal found users loved the email-payments feature of its service, so it pivoted to focus on that
Isn’t all of this wasteful?
Incorporate thread of MTDF and waste being critical to learning
Beyond continuous integration
What it looks like to ship one piece of code to production:
Run tests locally (unit, Selenium)
Everyone has a complete sandbox
Continuous Integration Server
All tests must pass
Monitor cluster and business metrics in real-time
Reject changes that move metrics out of bounds
Alerting and predictive monitoring (Nagios)
Monitor all metrics that stakeholders care about
If any metric goes out of bounds, wake someone up
Use historical trends to predict acceptable bounds
Key is that this infrastructure gets built up over time as your needs and understandings change (through the 5 whys below)
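The “reject changes that move metrics out of bounds” step can be sketched as a simple guardrail. Everything below is illustrative: the metric history, the three-sigma threshold, and the function names are assumptions, not any particular company’s implementation:

```python
# A minimal sketch of a deploy-time metric guardrail, assuming historical
# samples of a business metric (e.g. signup rate) are available.
import statistics

def within_bounds(historical, current, sigmas=3.0):
    """Accept the deploy only if the current metric stays within
    `sigmas` standard deviations of its historical mean."""
    mean = statistics.mean(historical)
    stdev = statistics.stdev(historical)
    return abs(current - mean) <= sigmas * stdev

# Historical signup rates hover around 0.10; a post-deploy drop to 0.02
# moves the metric out of bounds, so the change is rejected.
history = [0.10, 0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11]
print(within_bounds(history, 0.10))  # → True, deploy proceeds
print(within_bounds(history, 0.02))  # → False, roll back
```

In practice the bounds come from historical trends per metric, which is why the infrastructure has to evolve with your understanding of the business.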
When customers see a failure
Fix the problem for customers
Use 5 Whys to improve defenses at each level
“Great test coverage is not enough. Continuous Deployment requires much more than that. Continuous Deployment means running all your tests, all the time. That means tests must be reliable. We’ve made a science out of debugging and fixing intermittently failing tests. When I say reliable, I don’t mean “they can fail once in a thousand test runs.” I mean “they must not fail more often than once in a million test runs.” We have around 15k test cases, and they’re run around 70 times a day. That’s a million test cases a day. Even with a literally one in a million chance of an intermittent failure per test case we would still expect to see an intermittent test failure every day. It may be hard to imagine writing rock solid one-in-a-million-or-better tests that drive Internet Explorer to click ajax frontend buttons executing backend apache, php, memcache, mysql, java and solr. I am writing this blog post to tell you that not only is it possible, it’s just one part of my day job.”
Important to note that this can be done with desktop / mobile products as well by having them fetch updates regularly, etc.
Triggers in code to ensure that functionality that is under development is not run by end users
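A minimal sketch of such a trigger (a feature flag) is below. The flag store and the checkout flows are invented for illustration; real systems read flags from config files or a flag service rather than a module-level dict:

```python
# A minimal sketch of a trigger in code: the new code path ships to
# production but end users never run it until the flag is flipped.
FLAGS = {"new_checkout": False}  # deployed, but switched off for users

def legacy_checkout_flow(cart):
    return f"legacy checkout of {len(cart)} item(s)"

def new_checkout_flow(cart):
    return f"new checkout of {len(cart)} item(s)"

def checkout(cart):
    # Only run the in-development path when the trigger is enabled.
    if FLAGS.get("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book", "pen"]))  # → legacy checkout of 2 item(s)
FLAGS["new_checkout"] = True      # flip on for internal testers or a cohort
print(checkout(["book", "pen"]))  # → new checkout of 2 item(s)
```

This is what lets you deploy continuously while functionality is still half-built: the code is in production, but behind a closed door.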
What are the advantages?
What are the pitfalls?
Key component: Analytics
Back in the day, it was tough to figure out how you got leads and what the effectiveness of marketing was
Newspaper or magazine ads might have had a special code or number, but overall, not good at figuring out ROI
Lemonade Stand advertising example – It was pretty easy to figure out the algorithm for how much marketing was good
Overview of Google Analytics
Three keys of metrics (the “three A’s”)
Actionable: avoid vanity metrics like page hits; instead, focus on metrics that provide results you can act on or that answer hypotheses you have
Accessible: you and everyone else understand what the metric measures
Auditable: are the metrics correct and verifiable?
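The difference between a vanity metric and an actionable one is easy to see with per-cohort numbers. The cohort data below is invented purely for illustration:

```python
# Contrast a vanity metric (total page hits) with an actionable,
# auditable one (per-cohort signup conversion).
cohorts = {
    "week1": {"visits": 1000, "signups": 50},
    "week2": {"visits": 4000, "signups": 80},
}

total_hits = sum(c["visits"] for c in cohorts.values())
print(f"total hits: {total_hits}")  # vanity: 5000, up and to the right

for name, c in cohorts.items():
    print(f"{name}: {c['signups'] / c['visits']:.1%} conversion")
# actionable: conversion fell from 5.0% to 2.0% even as traffic quadrupled
```

The hit count only goes up; the conversion rate tells you something went wrong between week 1 and week 2 that you can investigate and act on.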
Search Engine Marketing to test viability and naming of products
4HWW example – selling shirts with and without market testing
4HWW example – how Ferriss figured out the name for his book
Statistical hypothesis testing
A/B and multivariate split testing
Huffington Post example
Pros: can tweak small changes and optimize designs
Cons: increasing the number of variables can make results harder to interpret or push the required sample sizes beyond what you can get
Tests should be transparent to users
Can’t really use it for huge changes, as users will notice
Hard to figure out which of two larger solutions is better
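Deciding whether an A/B split is signal or noise is a standard two-proportion hypothesis test. The sketch below uses only the standard library; the conversion counts are invented for illustration:

```python
# A minimal sketch of evaluating an A/B split with a two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: both variants convert equally."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF, Phi(x) = (1 + erf(x/sqrt(2))) / 2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A converts 90/1000 visitors; variant B converts 120/1000.
z, p = two_proportion_z(90, 1000, 120, 1000)
print(round(p, 3))  # small p ⇒ the difference is unlikely to be chance
```

This is what “statistical hypothesis testing” means in practice for a split test: you don’t ship B just because its number is bigger, you ship it when the p-value says the gap is unlikely to be sampling noise.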
Triggers in code
Lean does not necessarily mean cheap
Lean does not mean having a small team
Lean startups do not necessarily follow traditional Lean principles
Implications For You
Useful for people’s side projects, because fast feedback keeps them from becoming discouraged
Don’t just put things out there – find a way to learn from those that are using your product first, the earlyvangelists
A/B testing to get the most bang for your effort
Consider writing a book as a process that needs information to succeed
Almost everything you do, from open source to writing blogs is a product
Use these techniques to learn about your “customers”
Implications For Contracting Companies
A client with significant market risk in their application might benefit from these techniques
Almost no one is doing these sorts of things well, so if we found a client who wanted it and had someone who could do it, we could really help them out
If we are trying to put out a product, we might consider some of the things I talked about
Kanban is not truly delivering continuously to the customer the way it does for other organizations
If you release every two weeks, it is zero to two weeks before your client will see a correction
Changes that cause problems in production are not seen until long after they have been implemented
Talked to CMS about this, and the main reason that we do Kanban is for internal quality, reducing WIP, etc.
Can stop building things that are never/rarely used, understand what is the most used
Do you really need this sub sub sub page in the application? Would it be better to fix a bug on the third-most used page on the site?
Better alerts for when things go awry in the system
If someone has set up an alert system for when business metrics go out of normal bounds, shouldn’t we have better alert systems?
Why should the customer have to email you when something goes wrong? Shouldn’t you already know?
Do you know when people can’t sign up for a service by linking it to business metrics?
Allows us to hedge against over-delivering because the customer sees things in production as soon as they are fixed
Recruiting or advertising could use multivariate testing techniques to refine our message
Appendix / out-takes
Required reading on Continuous Deployment:
Overview of process
From Dave McClure
Essentially a framework for understanding user acquisition and retention, useful for establishing good baseline metrics for improving your flow
A: Acquisition – where / what channels do users come from?
A: Activation – what % have a “happy” initial experience?
R: Retention – do they come back & re-visit over time?
R: Referral – do they like it enough to tell their friends?
R: Revenue – can you monetize any of this behavior?
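The AARRR stages form a funnel, so the useful numbers are the stage-to-stage conversion rates. A small sketch, with invented event counts:

```python
# Sketch of computing stage-to-stage conversion through the AARRR funnel.
funnel = [
    ("acquisition", 10000),  # landed on the site
    ("activation", 4000),    # had a "happy" initial experience
    ("retention", 1200),     # came back and re-visited
    ("referral", 300),       # told their friends
    ("revenue", 150),        # monetized
]

# Pair each stage with the one before it and report the conversion rate.
for (stage, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {n / prev:.0%} of previous stage")
```

Tracking these rates over time gives you the baseline: a pivot or feature change should move one of them, and the funnel shows you which stage is leaking the most.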