Please note that this post, first published over a year ago, may now be out of date.
Have you got a legacy SaaS workload in the cloud that is costing you an arm and a leg? Does it take a long time to get features to market? Is your engineering team spending lots of time on tasks far removed from value centres? If that sounds familiar, I’d suggest reading on.
Typically, legacy SaaS workloads have undergone a lift-and-shift from an on-premises data centre: copying the on-premises architecture (almost) exactly into the cloud. Lift-and-shift migrations are a cheap and fairly simple way to get your application into the cloud. However, retaining the same architecture in the long term means you are not taking full advantage of cloud native architecture, and are not realising the full potential of your migration to the cloud.
“Cloud native is the software approach of building, deploying, and managing modern applications in cloud computing environments.” Embracing a cloud native architecture can increase efficiency, reduce costs, and ensure availability of services; let’s explore this more.
A key advantage of public cloud computing (and cloud native) over on-premises solutions is automatic scaling for demand. If you lifted-and-shifted the architecture from on-premises, you may have a fixed number of virtual machines that you provisioned to meet peak demand, and these are active even when your service is not under heavy load. Provisioning for that peak demand could (and likely will) mean large unnecessary costs. On-demand cloud infrastructure can scale up and down to match your needs and customer load, reducing costs in low-traffic periods. Some cloud architectures, such as serverless ones, are billed at zero when they’re not being actively used.
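As a minimal sketch of what that scaling looks like in practice, assuming you already have an EC2 Auto Scaling group (the name `web-asg` here is a hypothetical placeholder), a target-tracking policy tells AWS to add and remove servers to keep average load near a target:

```shell
# Hypothetical example: attach a target-tracking scaling policy to an
# existing Auto Scaling group, so capacity follows demand automatically.
# "web-asg" is a placeholder name; the 50% CPU target is illustrative.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0
  }'
```

With a policy like this in place, the group scales out during busy periods and scales back in quietly overnight, with no engineer on hand to resize anything.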
With legacy SaaS workloads you have to manage the entire software stack in-house, which takes a lot of engineering effort and is therefore costly. Managing the entire software stack in-house only really makes sense when you have significant economies of scale.
Whatever the scale of your business, it can make far more sense to use managed services. With a managed service, a public cloud provider such as AWS operates the majority of the software stack for you, leaving you to configure the top layers to your requirements. The economies of scale of public cloud allow those providers to do a cost-efficient and top-notch job at the lower levels of the software stack. It rarely makes good financial sense to hire an expensive engineer to manage the entire software stack when AWS might be able to sell you a managed service that delivers the same value for a fraction of the cost.
SaaS businesses that use managed cloud services often find they have a large advantage over the competition. In a 2017 survey by Frost & Sullivan, 76% of businesses that have already migrated to the cloud said cloud managed services are an essential part of their IT strategy. Businesses surveyed realised advantages in areas such as the predictability of IT costs, workload performance, faster delivery of applications, reduced capital expenditure, and shorter time-to-market.
Hiring in-house experts who specialise in public cloud can be tricky, as those skills are in demand. If you’re finding that, or you don’t have those roles internally, consider getting some external advice. Even though the principles and the underlying technologies are the same, public cloud is not like buying a basic utility such as water or electricity; the skills are different. The Scale Factory specialises in helping customers who use AWS, because that lets our team focus on deep understanding of the infrastructure services you’re going to use.
Database management can be particularly costly in comparison to using a managed database solution such as Amazon RDS. Managing a complete database solution effectively is a full time job, or even a team. With RDS, your team is just a few clicks (or terminal commands) from being able to provide durability across multiple data centres, automated snapshots, durable automated back-ups (with AWS Backup), and insight into bottlenecks in database performance.
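As a hedged sketch of those “few terminal commands”, here is roughly what provisioning a resilient managed database might look like (the instance name, class, and sizes are illustrative placeholders, not recommendations):

```shell
# Hypothetical example: provision a Multi-AZ PostgreSQL instance on RDS
# with automated daily backups retained for 7 days. All identifiers and
# sizes here are placeholders for illustration.
aws rds create-db-instance \
  --db-instance-identifier app-db \
  --engine postgres \
  --db-instance-class db.t3.medium \
  --allocated-storage 50 \
  --master-username dbadmin \
  --manage-master-user-password \
  --multi-az \
  --backup-retention-period 7
```

The `--multi-az` flag gives you a standby replica in a second data centre, and `--manage-master-user-password` hands credential storage and rotation over to AWS Secrets Manager; replicating either of those capabilities in-house is a project in its own right.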
Does your legacy workload involve managing long-lived servers? The cloud changes the game here. On-premises, you want servers that are durable, both to get the best return on capital and to avoid the expense of racking up a replacement. Cloud servers are just as reliable, but the cost of setting one up can be much lower. The focus shifts from keeping a server healthy towards being able to set up a replacement quickly. Once that process is in place, you can use it to deal with all kinds of operational events - such as handling a security incident - and not just for the case when an on-premises server physically packs up.
Modern cloud native architecture has solved the problem of managing servers at scale using an approach called immutable infrastructure. Immutable infrastructure means that once a server is deployed, not only do you stop managing the software on it: you actually can’t make changes at all.
Once you have achieved immutable infrastructure via automation, there is no longer any need for servers to be long-lived, and they can be re-built regularly. AWS auto-scaling also knows how to replace a server that has failed, completely automatically. The paradigm changes from turning it off and on again: you turn a failed server off, and let the cloud automation self-heal by bringing up a working replacement. Re-building services regularly has many knock-on benefits that make the change worthwhile: it encourages engineers to reduce build time, which eventually leads to lower cycle time and less re-work, and feedback in pre-production environments arrives more rapidly.
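To make the idea concrete, a rollout under immutable infrastructure might look something like the sketch below, assuming an Auto Scaling group backed by a launch template (the names `web-template`, `web-asg`, and the AMI ID are hypothetical placeholders; you’d bake the new machine image beforehand with a tool such as Packer or EC2 Image Builder):

```shell
# Hypothetical example: deploy a change by replacing servers rather than
# patching them in place. First, point the launch template at a freshly
# baked machine image (the AMI ID below is a placeholder).
aws ec2 create-launch-template-version \
  --launch-template-name web-template \
  --source-version '$Latest' \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0"}'

# Then ask the Auto Scaling group to cycle its instances onto the new
# version, keeping at least 90% of capacity healthy throughout.
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name web-asg \
  --preferences '{"MinHealthyPercentage": 90}'
```

No one logs in to upgrade a running server; the old servers are simply retired and replaced, which is exactly the same mechanism that handles a hardware failure or a compromised instance.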
When services can be re-built consistently and quickly via automation, it immediately improves your disaster recovery story too. You no longer need to focus so much on the infrastructure layer and application layer - the automation takes care of those. Instead, you can focus on details such as customer communications, or on how you’d restore the data layer and codebase. In the cloud, it’s routine to configure your architecture to span multiple geographically distributed data centres, for increased durability.
Our team knows how to manage the risks around moving data to the cloud. We also know how risky it can be if you don’t have a cloud copy of your critical data. Book a free chat to find out how we can help.
This blog is written exclusively by The Scale Factory team. We do not accept external contributions.