System Architecture: Traditional and Serverless CMS Approaches

You're starting a new software project, or you're concerned that your current solutions may have significant limitations that you'd like to remove. You've heard about some dev teams choosing a serverless CMS architecture, you've heard lots of talk about "cloud native", and every third article on Hacker News seems to be about Kubernetes. You begin to wonder if and how any of these new or established solutions can help alleviate your project's pain points.

So what are those teams actually doing? How is it different from the way you've built applications in the past? What do all those words really mean? And how do you know what the responsible choice is for you? Unicon is here to help you sort it out!

Defining Traditional and Serverless CMS Approaches

All major cloud providers offer services that replace traditional software operational models. Some such services simply offer exactly the same capabilities as local/on-prem tools but relieve you of having to think (as much) about day-to-day maintenance, backups, or redundancy. Others dramatically change the operational and runtime models, in that there isn’t any service except when it is actually being used. The latter is referred to as “serverless” because of this “pay per use” model, as opposed to a traditional (or cloud-managed) server’s “pay for uptime” model.

The traditional approach to online software development is what most developers are historically accustomed to: pick a technology stack and development framework, stand up your environment locally, and build out your app using your favorite development tools. When ready, stand up essentially the same architecture on your favorite hosting provider. The key here is that the developer assumes that, once deployed, the entire stack is “always on”, that there is some minimum runtime that is always available and is mostly bootstrapped and configured in a way that is fairly consistent between local and shared tiers.

The serverless approach requires a bit more forethought, or at least a different kind, as the aim of a serverless architecture is not just to leverage certain provider-specific offerings, e.g. Amazon's managed database services, but to rely on the cloud provider to provision your application stack's infrastructure and runtime(s) just in time. This impacts the developer experience because the cloud, or something that acts like the cloud, is required to support iterative code-test-debug development cycles that historically tended to be relatively loosely coupled from the target infrastructure.

Traditional Approach

While serverless isn’t exactly new (AWS Lambda debuted in November 2014, nearly seven years ago at this writing), the traditional approach is exactly what it sounds like and tends to be what most developers are familiar with. Again, the baseline assumptions are that:

  • The core development cycle relies on locally deployed resources (usually a web server, the application, and a database), and
  • In "real" environments those resources, or at least the application, are "always on", albeit possibly with variable instance counts depending on load.

Containerization (which we'll dig into further later in this article) helps overcome local setup issues caused by differences between operating systems. But in general, all the tools and services needed to run the application have very high-fidelity local equivalents, and the application itself is thought of as just another one of those services, either running all on its own or deployed into some already-running application support runtime, e.g. a webserver. What makes this approach "traditional" is that the vast majority of developers have first-hand experience with this way of working and have well-developed mental models for thinking about how their local environment relates to the target deployment environment.

But while “traditional” may mean “widely understood and adopted”, it obviously doesn’t mean “perfect.” The following table outlines several of its tradeoffs, especially as it relates to the serverless approach we’ll discuss next.

|  | Advantages | Disadvantages |
| --- | --- | --- |
| Team | Get started quickly with familiar languages, tools, and local development. | Front-end developers often find it challenging to stand up complex backends locally. |
| Business | Established hosting pricing models and development-team familiarity make pricing and time estimates easier to establish up front. Features requiring server-only software processes are possible. | Difficult to bring hosting costs down on small, low-volume, and/or batch-processing projects. Pay for uptime, even if the service is not utilized. |
| Performance | Often, simply bumping the specs on the hosting server leads to a performance increase. | Bumping the specs on a hosting server increases the hosting costs. |
| Security | Can implement your own custom security models, controls, and processes. | Dealing with constant monitoring, manual access controls, patching against vulnerabilities, etc. can be cumbersome. |
| Infrastructure | Can use a simple infrastructure design, like a monolithic app/db server that performs multiple functions/roles. | Monolithic multi-role servers can be a single point of failure, and are more difficult to scale out. |

Serverless CMS Approach

As the name implies, a serverless approach attempts to do away with an “always up” server. Instead, the architecture and deployment strategy of a serverless application emphasizes just-in-time provisioning, scaling, and use of cloud services, especially “Functions as a Service" (FaaS) offerings such as AWS’s Lambda, Google Cloud Platform’s (GCP) Cloud Functions, and Microsoft Azure’s Functions.

Beyond the "pay per use" pricing itself, one of the major benefits of serverless is that it potentially reduces a project's operational costs by eliminating the need to support and scale a server yourself. Instead, AWS, GCP, and Azure all handle the details of operational maintenance of their service offerings for you.

Let's think about a simple file manager application: You have a user interface that allows for adding, editing, and removing files. You have a backend, written in any of a number of languages, that handles moving files from your computer to another storage location. In a traditional development environment, "storage" is on the server, and the "server" is a virtual machine (VM) running the appropriate operating system (OS) to support it. The choice of programming language for the backend informs the type of server that needs to run, and the server requirements inform the operating-system requirements.

With serverless, each cloud provider offers a serverless equivalent to everything that goes into our file manager application. For example, AWS’s S3 storage solution can both host our user interface and store our uploaded files. The mechanics of handling the file uploads, editing, and deleting can be written in your language of choice, and made available to the front end via Functions as a Service. You can even create as many functions as needed for your application, in different programming languages. In the end, you can create this file manager application with just a couple of cloud services.
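
To make that concrete, here is a minimal sketch (in TypeScript, using the AWS SDK v3) of what the file manager's upload function might look like as a Lambda handler. The bucket name, environment variable, and query parameter below are our own illustrative assumptions, not a prescribed design.

```typescript
// Minimal sketch of the file manager's "upload" function as a Lambda handler.
// The bucket name and query parameter are illustrative assumptions.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import type { APIGatewayProxyHandler } from "aws-lambda";

const s3 = new S3Client({});
const bucket = process.env.UPLOAD_BUCKET ?? "file-manager-uploads"; // hypothetical

export const handler: APIGatewayProxyHandler = async (event) => {
  // The front end asks for a short-lived URL it can upload a file to.
  const fileName = event.queryStringParameters?.fileName;
  if (!fileName) {
    return { statusCode: 400, body: "fileName query parameter is required" };
  }

  // Pre-sign a PUT so the browser uploads directly to S3; the function
  // itself stays short-lived, and we only pay while it runs.
  const command = new PutObjectCommand({ Bucket: bucket, Key: fileName });
  const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 300 });

  return { statusCode: 200, body: JSON.stringify({ uploadUrl }) };
};
```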

If we host our traditional application on a VM and never use it, we're still stuck paying the monthly bill for that server's uptime, whereas with our serverless application we pay nothing, since no requests for those services are being made. Serverless services are also built to scale: where our serverless application scales easily, handling any additional load requested of it, our traditional application incurs both the initial setup and configuration costs of scaling out, and its hosting costs also increase to support the new demand.
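
A back-of-the-envelope sketch of that cost model is below. The prices are illustrative placeholders rather than current published rates, which vary by region and change over time.

```typescript
// Back-of-the-envelope cost model: always-on VM vs. pay-per-use functions.
// All prices are illustrative placeholders, not current published rates.
const vmMonthlyCost = 30; // hypothetical small VM, billed for uptime

const requestsPerMonth = 50_000;
const pricePerMillionRequests = 0.2; // hypothetical FaaS request rate
const gbSecondsPerRequest = 0.25;    // e.g. a 512 MB function running ~0.5 s
const pricePerGbSecond = 0.0000167;  // hypothetical FaaS compute rate

const faasMonthlyCost =
  (requestsPerMonth / 1_000_000) * pricePerMillionRequests +
  requestsPerMonth * gbSecondsPerRequest * pricePerGbSecond;

console.log(`VM:   $${vmMonthlyCost.toFixed(2)}/mo`);
console.log(`FaaS: $${faasMonthlyCost.toFixed(2)}/mo`); // ~$0.22 at this volume
// And at zero requests, the FaaS bill is $0 while the VM bill is unchanged.
```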

|  | Advantages | Disadvantages |
| --- | --- | --- |
| Team | Get started quickly in the cloud, creating and deploying proofs of concept. Infrastructure "source of truth": all developers work with the same infrastructure configuration (even if not the same exact instances). Not limited by programming-language knowledge. | Learning curve in understanding, using, and tying together cloud resources. Little to no local development; for the most part, it's all done in the cloud. |
| Business | Low upfront and long-term costs via the pay-per-use model. Rapid POC-to-MVP-to-production cycles. Growing list of services that offer a pay-per-use model. | Some traditional development functionalities don't have a serverless equivalent yet (like AWS Elasticsearch). Certain projects may require software that can only be implemented on a traditional server. |
| Performance | Cloud service providers continually refine the performance of their FaaS offerings, without developers needing to do anything to take advantage. Cloud services are optimized to work best together. | FaaS offerings have historically had cold- and warm-start times that impact performance. |
| Security | No manual management of OS updates, app-server security patches, etc. | Requires some knowledge of your serverless provider's security controls to properly configure access controls, app and user roles, etc. |
| Infrastructure | Can easily segregate, deploy, and scale individual app components using various FaaS and other native managed services (for databases, etc.), including nearly limitless auto-scaling. | "Black box" effect of not having direct low-level access to infrastructure components; AWS mitigates this for the most part via a comprehensive admin UI, CLI tools, and monitoring options. |

Serverless versus Traditional: a False Dilemma

The notion that you might have to choose between a serverless or traditional approach is a false dilemma. The end-goal for any application architecture is to strike a balance between application performance, development velocity, and operational costs. To that end, architects and team leads often end up mixing and matching various serverless offerings with features and functionality that can only be achieved on a server and/or with traditional development tools and workflows.

For example, local cloud emulation tools like AWS SAM and LocalStack help streamline the serverless development experience by providing options for running application code without having to connect to "live" cloud environments, creating a sort of hybrid "traditional serverless" workflow. Alternatively, some teams choose to largely eliminate local development outright, opting to move code-test-debug cycles entirely into cloud environments. Infrastructure as Code (IaC) frameworks like Pulumi can help accelerate this cloud-centric workflow by defining complete cloud environments via general-purpose programming language tools and concepts that developers are already comfortable with.
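
As a taste of that cloud-centric workflow, here is a minimal Pulumi program in TypeScript that declares a bucket and an inline function. The resource names are illustrative, and a real stack would also define IAM roles, triggers, logging, and so on.

```typescript
// Minimal Pulumi sketch: cloud infrastructure declared in ordinary TypeScript.
// Resource names are illustrative; a real stack would also wire up IAM,
// event triggers, logging, etc.
import * as aws from "@pulumi/aws";

// A bucket for the application's uploads.
const uploads = new aws.s3.Bucket("file-manager-uploads");

// A Lambda defined inline from a plain TypeScript callback.
const listFiles = new aws.lambda.CallbackFunction("list-files", {
  callback: async () => ({
    statusCode: 200,
    body: JSON.stringify({ message: "hello from the cloud" }),
  }),
});

// Stack outputs that other tools (or developers) can consume.
export const bucketName = uploads.id;
export const functionName = listFiles.name;
```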

Perhaps the biggest single innovation in this space has been the emergence and widespread adoption of containerization, especially via Docker. "Containers" are essentially full-fledged, executable software packages that bundle a primary service together with all of its runtime dependencies, such that it can be deployed and run reliably regardless of the details of the target host.

Containerization provides a great way to streamline local environment setup and mirror the production environment, regardless of the developer’s host operating system. Where a developer may be working on a Windows OS, and the target hosting environment might be Linux, implementing a containerization tool like Docker goes a long way to ensuring that the code a developer writes and tests locally as a containerized application runs the exact same way in the production environment. This is because it’s the container itself that gets deployed, and by definition, the container includes everything it needs to produce a valid and consistent application runtime (ignoring complicating issues like file system mounts and environment-sourced configs). Thus containerizing the environment setup not only has the benefit of streamlining the development workflow, it also sets up your applications for container orchestration.

Container orchestration is essentially about managing container lifecycles, which sounds straightforward enough, but it's here where the line between traditional and serverless approaches begins to blur. Through orchestration, utilizing tools like Kubernetes (k8s for short), container-oriented environments can free developers from worrying about how or where their applications are deployed. These tools also typically abstract away the process of locating other applications and services upon which they depend. And with the Knative project, Kubernetes can even support "scale to zero", only spinning up containers when needed, thereby bringing your containerized application or service pretty close to serverless's major selling point of "pay per usage".
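
To keep with the TypeScript examples above, here is a sketch of what handing a container to an orchestrator can look like, declared via Pulumi's Kubernetes provider rather than raw YAML. The image and names are placeholders, and a Knative Service for scale-to-zero would be declared along similar lines.

```typescript
// Sketch of a Kubernetes Deployment declared via Pulumi's Kubernetes provider.
// The image and names are placeholders, not a real registry or app.
import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "file-manager" };

const deployment = new k8s.apps.v1.Deployment("file-manager", {
  spec: {
    replicas: 2, // the orchestrator keeps this many containers running
    selector: { matchLabels: appLabels },
    template: {
      metadata: { labels: appLabels },
      spec: {
        containers: [{
          name: "file-manager",
          image: "example.registry.io/file-manager:1.0", // hypothetical image
          ports: [{ containerPort: 8080 }],
        }],
      },
    },
  },
});

export const deploymentName = deployment.metadata.name;
```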

Even without containerization, there’s a very large (and growing) list of serverless services and software that help bring the overall price tag of a traditional application down. And again, the key thing is to strike the right balance. It isn’t necessary to go “all-in” on serverless right away. For example, you might consider switching from native file system storage to a serverless storage solution, but keep the rest of your application and deployment architecture as-is. Or you might decide to do away with database management and scaling headaches by switching to a serverless database. Or you might even extract your rarely used APIs and deploy them as FaaS endpoints to take advantage of potentially zero-cost low-traffic windows.
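
As one sketch of that last option, extracting a rarely used endpoint can be as small as re-homing its handler logic behind a function. The route and report logic below are hypothetical.

```typescript
// Sketch: a rarely used report endpoint, extracted from an always-on app
// into a standalone function. Route and names are hypothetical.
import type { APIGatewayProxyHandler } from "aws-lambda";

// The same business logic the traditional app's route handler called.
async function generateQuarterlyReport(): Promise<string> {
  return JSON.stringify({
    report: "quarterly",
    generatedAt: new Date().toISOString(),
  });
}

// Previously something like app.get("/reports/quarterly", ...) in Express.
// Now it only costs anything when someone actually requests a report.
export const handler: APIGatewayProxyHandler = async () => ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: await generateQuarterlyReport(),
});
```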

Blurring the Line

Take a look at Webiny, a serverless, headless content management system (CMS). This system runs on an almost entirely serverless AWS architecture, the exception (at least as of Webiny v5.5) being its Elasticsearch dependency, which requires a small, always-on AWS instance. For Webiny, the need to deliver search results quickly while maintaining a peppy user experience outweighs the need to be 100% serverless (though the Webiny team is actively looking into alternatives).

Another example of a traditionally architected system with serverless add-ons is Strapi, a headless CMS that runs (in the traditional sense) on a server. A popular plugin for Strapi is its S3 connector, which has Strapi upload and store files in an S3 bucket (AWS's serverless file-storage solution) rather than on the server itself.

While we're talking about headless content management systems, the moniker "headless" implies we have front-end applications that retrieve content via queries against the CMS's APIs. Even if that CMS is deployed under a traditional model, serverless options are still available for the front end(s). For example, a React application can be built and deployed via AWS CodeBuild, then hosted out of an AWS S3 bucket without issue, happily consuming the "traditionally" deployed CMS APIs.
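
Concretely, the front end's relationship to the CMS is just HTTP, whichever way the CMS itself is deployed. The endpoint URL and content shape below are hypothetical; Strapi, Webiny, and their peers each expose their own REST or GraphQL APIs.

```typescript
// Sketch: a statically hosted front end querying a headless CMS over HTTP.
// The endpoint and Article shape are hypothetical stand-ins.
interface Article {
  id: string;
  title: string;
  body: string;
}

async function fetchArticles(): Promise<Article[]> {
  const response = await fetch("https://cms.example.com/api/articles");
  if (!response.ok) {
    throw new Error(`CMS request failed: ${response.status}`);
  }
  return response.json();
}

// Usage: render or log the titles, regardless of how the CMS is hosted.
fetchArticles().then((articles) =>
  articles.forEach((article) => console.log(article.title))
);
```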

Final Thoughts

Moving into the serverless world can be challenging to architect, challenging to understand, and sometimes unnerving to development teams accustomed to traditional development tools and deployment models. The important thing to remember is that, as with all technical dilemmas, there is no silver bullet; each of the options on the table has both strengths and weaknesses. And, especially if you’re already working with a traditionally deployed application, the nice part about adopting serverless is that you frequently don’t need to go all-in to find out whether or not you made the right choice. So onward to incremental greatness, and may all your services scale to zero when not in use!

Special thanks to our Vice President of Technology, Dan McCallum, for his insights into this article.

Unicon has been focused on the serverless space, migrating systems from server to serverless, for the better part of the last decade, and we've been building traditionally architected applications for 28+ years. We'd love to see what solutions we can discover and deliver for you. Give us a call!


Phillip Ball

User Experience Architect
Phillip Ball is a Learning Experience Design Solutions Architect who has been with Unicon for over 10 years. Mr. Ball has worked on many projects for many clients, exercising and enhancing his skills in User Experience Design and Software Development. Throughout the course of his career at Unicon, Mr. Ball has worked closely with decision makers in the edtech space, ensuring they have the details they need in order to make informed decisions. He particularly enjoys the implementation of design, taking pride in knowing his work has an impact on the learning experience. Mr. Ball holds a Bachelor of Arts degree, with a focus on Visual Communication. He enjoys researching new trends in technology, and how they might benefit Unicon's work in education.