There’s not really a typical SaaS business, but there are patterns we see a lot in our consultancy work. This article’s about one of them.
For this article, I’m going to categorise the firms we’ve worked with into one of three stereotypes. First, startups: firms that want to offer a service, based on software, that doesn’t already exist. Even if the firm is established, if the product is brand new, I’m calling it a startup. Second, migrations: this is where an existing offering isn’t SaaS, but someone wants it to be.
(There’s a third category, because we do also provide consultancy outside of SaaS… but this article is about SaaS businesses and a common transition point in the SaaS product lifecycle).
Everything starts broken
Early on, finding gaps is easy. If you’re making a new product, you start with a blank page. The gaps that matter might look like stories, or faults, or customer queries. But gaps they are.
Providing software as a service offers you the opportunity, perhaps even the luxury, to deliver and release changes as often as you like. You see it, you solve it, and you ship it. There are always more bugs on the backlog, no doubt, and new stories to unwrap and features to implement.
If you’ve mainly experienced this kind of workflow, it’s worth remembering that making software wasn’t always like this. For a long time in the evolution of IT, releasing code was about aiming for perfection: rigorous testing, then time-consuming publication. Shipping software meant, at the very least, formal signed-off packages - and very commonly physical media, with processes for making really, really sure that what you put in the box worked like you hoped it would.
Move fast then fix things
With SaaS, you can iterate fast on new components. Automated testing and releasing makes this very much a practical proposition. Because delivering incremental improvements isn’t difficult, the first moment you can offer things to a customer can be as early as the first moment that basic functions work - even if you have a long list of gaps and partially met needs.
You want your product to get beyond that buggy, roughshod initial phase, because you want customers to love it - and their finance teams to sign off on renewals. When you get there, though, things change.
Working code had better stay working
The most obvious difference is around how much regressions matter. In the early days, they usually don’t. A product that now has nine critical bugs might not seem obviously worse than one with eight, especially if the extra bug is rare and it arrived as part of delivering a key feature that customers were crying out for.
The time comes when assuring quality becomes obviously more important than delivering change at pace. Maybe the trigger is turning on those governance features you’d planned for and then parked; maybe the team needs to start talking about code quality and systems development guidelines.
Automation helps a lot, although one of our principles at The Scale Factory is to be very cautious about automating any process that the team can’t already carry out manually. You can grow a continuous integration pipeline organically and develop it into fully automated continuous deployment. The folks doing deployments will definitely prefer it if you do.
What does the new work look like at this point? Although no two firms are alike, typically you’ll be adding or expanding your integration tests, making changes to shorten feedback loops (finding faults earlier means cheaper fixes), or providing more guidance around code review.
And, just to be clear, I’d apply this principle equally to application code and infrastructure code such as Terraform.
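To make the integration-test idea concrete, here’s a minimal sketch. The `OrderService` class and its methods are entirely illustrative - a toy in-memory stand-in for a real service boundary, which in practice would involve a test database or API client - but the shape of the test is the point: exercise a whole workflow end to end, so a regression anywhere along it fails fast and cheaply.

```python
# A toy stand-in for an application service boundary. In a real SaaS
# codebase this would be backed by a database, queue, or HTTP API;
# the names here are illustrative, not from any real framework.
class OrderService:
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, customer, items):
        # Guard clause: an order with no items is a defect worth
        # catching at the boundary, not in production.
        if not items:
            raise ValueError("an order needs at least one item")
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = {"customer": customer, "items": list(items)}
        return order_id

    def get_order(self, order_id):
        return self._orders[order_id]


def test_order_round_trip():
    # Integration-style: exercise the whole create-then-read path,
    # rather than asserting on one unit in isolation.
    svc = OrderService()
    order_id = svc.create_order("acme", ["widget"])
    assert svc.get_order(order_id)["customer"] == "acme"
    assert svc.get_order(order_id)["items"] == ["widget"]
```

A test like this runs in milliseconds under a runner such as pytest, which is exactly what “shortening feedback loops” means in practice: the fault shows up on the developer’s machine, not in a customer query.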
Don’t start ’til you get enough
This article is about the switch from focusing on velocity to focusing on value.
Despite the title, I’m not advocating that new teams put all their work into new features and forget about finding bugs or fixing them. What I am saying is: before you reach a certain point, the best place to focus is on adding features, and beyond that point, the story becomes more complicated.
You should invest in making sure you don’t - and can’t - ship obviously buggy code. Automation always helps and there’s a baseline of continuous testing that is worth doing from day one. So, find the right set of shortcuts and take them. Simple or no integration tests? Maybe. Code review that tolerates and merges minor defects? With care, yes. Security defects that put customer data at risk? Well, no. Extra testing here is worth putting in early. It’s a balancing act all along.
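What might that day-one baseline look like? Here’s one hedged sketch: a toy request handler (the function and the token check are hypothetical, purely for illustration) with two tests - a happy-path smoke test, plus one security test asserting that customer data can’t be read without authentication. The second kind is the one worth writing early, per the argument above.

```python
# Toy request handler, illustrative only: routes that touch customer
# data require a valid token; everything else is open.
VALID_TOKEN = "valid-token"  # stand-in for a real auth check


def handle_request(path, token=None):
    if path.startswith("/customer/") and token != VALID_TOKEN:
        # Refuse to leak customer data to unauthenticated callers.
        return 401, None
    return 200, {"path": path}


def test_happy_path_smoke():
    # Day-one smoke test: the service answers at all.
    status, body = handle_request("/health")
    assert status == 200


def test_customer_data_requires_auth():
    # Day-one security test: no token means no customer data.
    status, body = handle_request("/customer/42")
    assert status == 401 and body is None
```

The smoke test is cheap insurance against shipping something obviously broken; the auth test is the “extra testing worth putting in early”, because that’s the class of defect you can’t afford to merge with a shrug.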
Every time you add a feature, some (maybe all) customers benefit - and you enable more sales. Some improvements can be incredibly well received, and those are worth making. Bugs can be far worse. Any time you ship, or trigger, a critical bug, you run the risk of disappointing every existing customer. The transition point I’m talking about is when managing that risk becomes just as important as the risk of not delivering the features on your backlog.
We’ve been an AWS SaaS Services Competency Partner since 2020. If you’re building SaaS on AWS, why not book a free health check to find out what the SaaS SI Partner of the Year can do for you?
This blog is written exclusively by The Scale Factory team. We do not accept external contributions.