The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation, was created in 2015 and is a key force behind cloud native technologies. The CNCF currently provides the following definition:
“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.”
— CNCF Cloud Native Definition v1.0
AWS also has its own take on the meaning of a cloud native approach, which is similar but not identical.
People often mistakenly conflate containers and Kubernetes with cloud native; in fact, containers and Kubernetes are just two of many cloud native technologies. Cloud native is really about making the best use of utility computing and API-driven infrastructure to deliver maximum business value. Legacy SaaS workloads that aren’t yet using a cloud native approach are probably excessively costly; I’ve written about that previously: Cut cloud costs by going cloud native.
The vision and benefits
Before the evolution and definition of the cloud native approach, IT professionals would often set up pet servers – manually configured virtual or physical servers. The primary problem with these servers was that they were not reproducible, because their state was often unknown. If a server had to be replaced – whether for business reasons or for disaster recovery – the risk was excessively high, as the new pet might not be exactly the same as the old one. Maintaining pet servers is also very time intensive.
Teams addressed this with configuration management tools such as Ansible and Puppet, which configure servers to match a specification. In a modern cloud native deployment workflow, however, these tools are often an unnecessary extra step.
The cloud native approach solves this problem by encouraging immutable infrastructure and automation, and by providing technologies to achieve them. Immutable infrastructure does not change after it is deployed. Instead, the team - or, ideally, an automated management tool - starts a replacement system that takes over. Once the new component has taken over, the old system can be shut down. In the cloud, buying and disposing of computer systems is so simple that a computer can do it for you.
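As a minimal sketch of that replace-rather-than-modify pattern, the Python snippet below uses boto3 (the AWS SDK for Python) to launch a fresh EC2 instance from a newly built machine image, and only terminates the old instance once the new one passes its status checks. The image and instance IDs are placeholders; in practice you would usually let an Auto Scaling group or a deployment tool drive this for you.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical placeholders: a freshly baked image and the instance it replaces.
NEW_IMAGE_ID = "ami-0123456789abcdef0"
OLD_INSTANCE_ID = "i-0123456789abcdef0"

# Launch a replacement instance from the new, immutable image.
response = ec2.run_instances(
    ImageId=NEW_IMAGE_ID,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
new_instance_id = response["Instances"][0]["InstanceId"]

# Wait until the replacement passes its status checks before touching the old one.
ec2.get_waiter("instance_status_ok").wait(InstanceIds=[new_instance_id])

# At this point traffic would be switched over (for example by a load balancer);
# the old instance is then disposable.
ec2.terminate_instances(InstanceIds=[OLD_INSTANCE_ID])
```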
A further consequence of immutable infrastructure is that new, repeatable environments can be created rapidly; this cuts time-to-market and is also a great help when recovering from a disaster. Maintaining many customer-specific environments also becomes much less time intensive.
The cloud lets you manage non-production infrastructure easily too. This is great for testing: instead of sharing a single preproduction environment that stays running all the time, you can reach a point where teams or colleagues can create - or destroy - a test environment quickly, easily, and on demand. Self-service test environments help cut cycle time, reduce the risk of changes, and boost development teams’ confidence in making small, regular changes. Bugs and rework are reduced because developers can quickly get good feedback on their code without waiting for real customers to use a service.
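As a rough illustration, assuming your environment is described by a CloudFormation template (the stack name and template URL below are made up), a short boto3 script is enough to create a complete test environment on demand and tear it down again afterwards:

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical values: a template describing a whole environment, and a
# per-branch or per-developer stack name.
STACK_NAME = "test-env-feature-123"
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/environment.yaml"


def create_test_environment():
    """Spin up a complete, disposable test environment."""
    cfn.create_stack(
        StackName=STACK_NAME,
        TemplateURL=TEMPLATE_URL,
        Capabilities=["CAPABILITY_IAM"],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)


def destroy_test_environment():
    """Tear the environment down again once testing is finished."""
    cfn.delete_stack(StackName=STACK_NAME)
    cfn.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)
```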
Some sectors, for example financial services or pharmaceuticals, require their SaaS providers to keep their environments strongly isolated. Cloud native technologies allow companies to achieve this strong security isolation at low cost. For sectors with less stringent security requirements, cloud native technologies also enable resources to be pooled to reduce costs.
Businesses that achieve a cloud native architecture can feasibly (and often do) test a release entirely via automation; this decreases the overall cost of releases and means releases can happen more frequently.
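The shape of that automation varies, but even a small automated smoke test run against every release catches obvious regressions early. The sketch below, assuming a hypothetical /health endpoint on a newly deployed staging service, uses pytest and requests:

```python
import requests

# Hypothetical base URL of the newly deployed release.
BASE_URL = "https://staging.example.com"


def test_service_is_healthy():
    """A release should go no further if its health check fails."""
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_homepage_renders():
    """Basic smoke test: the main page responds and contains expected content."""
    response = requests.get(BASE_URL, timeout=5)
    assert response.status_code == 200
    assert "Welcome" in response.text  # placeholder assertion
```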
Cloud native technologies enable elastic infrastructure – infrastructure that scales with customer demand. When no customers are using a service, it could scale down to one or even zero replicas (in the cloud, you typically run more than one replica of the same component for reliability - and APIs make it easy to do that). At peak time, on the other hand, there could be many running replicas to match the high level of demand. As cost is usually based on the number of replicas - servers, containers, and so on - that are running, elastic infrastructure is great for reducing costs.
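For example, on AWS an EC2 Auto Scaling group can be given a target tracking policy so that the number of replicas follows demand automatically; the group name, size limits, and target value below are assumptions chosen for illustration:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group running the service's replicas.
GROUP_NAME = "web-service-asg"

# Allow the group to shrink to a single replica off-peak and grow to ten at peak.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=GROUP_NAME,
    MinSize=1,
    MaxSize=10,
)

# Track average CPU utilisation: AWS adds or removes replicas to keep it near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP_NAME,
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```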
AWS’s definition of cloud native also includes adopting managed services where appropriate; this is often a good idea as it reduces the total cost of ownership. Firms like AWS can maintain the lower parts of the software stack more efficiently than most companies, due to economies of scale.
If you haven’t already, consider adopting a cloud native approach to improve the resilience, manageability, and observability of your infrastructure, and to enable engineers to deliver high-value changes regularly.
Keeping on top of all the latest features can feel like an impossible task. Is practical infrastructure modernisation an area you are interested in hearing more about? Book a free chat with us to discuss this further.