Drawn/blueprint-style illustration of the layered monolith architecture pattern.

Patterns have become indispensable tools in software development. They provide a high-bandwidth mechanism for communicating ideas; however, as our industry has noted (and as we explored in depth in our previous post), applying architecture patterns rarely yields a consistent set of outcomes for any given project. A powerful solution is to apply the idea of design-by-constraint. This resolves the prevalent semantic ambiguities of our current architecture pattern set and enables a more reliable and deterministic means of inducing architectural capabilities in systems.

In the dynamic world of architecture, be it for buildings or software, two dominant philosophies emerge. The first imagines a designer starting from scratch, a void on a whiteboard, and meticulously crafting an architecture, piece by piece, with familiar components until it embodies the system’s aspirations. The alternative perspective envisions a designer commencing with an all-encompassing view of the system’s requirements, unshackled by constraints. As the design evolves, constraints are strategically imposed on system elements, fine-tuning the design canvas and allowing the forces that mold system behavior to move fluidly, in perfect sync with the system’s essence. While the former accentuates unbridled creativity, the latter champions understanding and discernment within the system’s milieu. It’s this latter philosophy that Tailor-Made Software Architecture reveres, focusing on constraints as composable design elements to both precisely define our architectures and enable fine-grained control of the capabilities and characteristics the architecture elicits. Architectural constraints possess a high degree of reusability, and they can typically be applied to almost any pattern or candidate architecture. Over the next several posts, we will define the eight common patterns in terms of their core constraints and potential optional constraints, and highlight how the composition of these constraints results in a predictable set of architectural capabilities. We begin with the humble layered monolith.

This post is part of a series on Tailor-Made Software Architecture, a set of concepts, tools, models, and practices to improve fit and reduce uncertainty in the field of software architecture. Concepts are introduced sequentially and build upon one another. Think of this series as a serially published leanpub architecture book. If you find these ideas useful and want to dive deeper, join me for Next Level Software Architecture Training in Santa Clara, CA March 4-6th for an immersive, live, interactive, hands-on three-day software architecture masterclass.

The Ball of Mud

When diving into architectural patterns, it’s useful to start with familiar analogies. Many of us have encountered the “big ball of mud” in software development.

The term, which has become a formal anti-pattern, was popularized in a 1997 paper by Brian Foote and Joseph Yoder:

A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-bailing-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated.

The overall structure of the system may never have been well defined.

If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.

-Brian Foote and Joseph Yoder

I started programming somewhere in the 1980s. I first learned BASIC on my Apple ][. There was no concept of modules, classes, or even multiple files. I had little exposure to languages or tools outside of what happened to be on the ROM of my first PC, so this was what I knew. As the programs I wrote increased in complexity, GOTO and GOSUB/RETURN statements proliferated. Weighing in at close to 12000 lines, the largest BASIC program I ever wrote could certainly be described as a “Ball of Mud.”

By the late 1990s I was building software professionally in Visual Basic 6 (Hey, don’t judge me - it was a different time! I was young, I needed the money!). Although there was a concept of classes, VB6 was not an OO language. Most structure was achieved through the use of modules (which were essentially static classes), but the IDE encouraged a number of bad habits. Adding an event handler to a button on a form would scaffold a method in a module where I could add behavior. I, and the teams I worked with, would typically just implement everything we wanted to happen when that button was clicked directly inside the click handler. Need to access the database? No problem, just spin up an ADO recordset right there in the click handler in the form code. Maybe we refactored common functionality to a “utils” module, but the code was truly a ball of mud. We had no separation of concerns, no real modularity, no thought to maintainability, no concept of testability. We had an “accidental architecture.” Although this was around 25 years ago at the time of this writing, it’s possible I have already lost all credibility with you. If you’re still with me, I’ll continue.

The Null Style

Building on the idea of unstructured design, the Null style, as articulated by Fielding, is essentially its architectural counterpart, serving as a constraint vacuum. In architectural terms, the Null style describes a system with no clear boundaries between components, reminiscent of the unorganized approach of a “big ball of mud” (although even in mud systems, there are usually some emergent constraints). This concept will be our starting point as we explore various architectural patterns in this series.

Enter The Layered Monolith

The layered monolith is one of the oldest and most ubiquitous architecture patterns–so ubiquitous, in fact, that it has been called the “de-facto architecture.” Informally defined, the layered monolith is this:

“Layered Monolith: Single deployment with functionality grouped by technical categories”

The basic idea is that we take the evolving form of the monolith and add structure by introducing the concept of “layers,” with each layer defined by the technical area it concerns itself with.

Since any given pattern label can be a fairly broad umbrella, we’ll define the layered monolith by its core architectural constraints. Like all architecture patterns, the layered monolith is abstract. A project may be sufficiently simple that this abstract pattern is enough; more often, however, we will extend it by adding more constraints. Common variations are monolithic MVC web applications, monolithic client-server applications (e.g. a monolithic, fat-client Angular SPA that contains some amount of business logic), or n-tier applications (where the monolithic architecture is sliced horizontally multiple times, e.g. into a client application, an API monolith, a business-logic monolith, etc.). In subsequent posts in this series, we’ll look at variations on the pattern in the form of related architectural styles that modify the set of defining constraints.

For this abstract definition of the Layered Monolith architecture pattern, the core constraints are as follows:

  • Monolithic Build Artifact
  • Monolithic Deployment Granularity
  • Technical Partitioning
  • Separation of Concerns
  • Shared Database

In the following sections, we’ll examine each constraint and how it elicits certain capabilities while weakening others.
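Before we do, here’s one way to picture how these scores combine: a minimal sketch in Java, assuming a simple additive model in which each constraint contributes a positive or negative delta to a capability. The names and numbers below are illustrative, not a formal part of the pattern.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal sketch of composing constraint scores, assuming a simple
// additive model. Constraint names and score values are illustrative.
public class CapabilityProfile {

    // Each constraint maps capability -> score delta (e.g. +2, -0.5).
    record Constraint(String name, Map<String, Double> deltas) {}

    public static Map<String, Double> compose(List<Constraint> constraints) {
        Map<String, Double> profile = new HashMap<>();
        for (Constraint c : constraints) {
            c.deltas().forEach((capability, delta) ->
                    profile.merge(capability, delta, Double::sum));
        }
        return profile;
    }

    public static void main(String[] args) {
        var monolithicBuild = new Constraint("Monolithic Build Artifact",
                Map.of("Simplicity", 2.0, "Scalability", -2.0, "Fault-tolerance", -3.0));
        var sharedDatabase = new Constraint("Shared Database",
                Map.of("Simplicity", 1.0, "Cost", 1.5, "Elasticity", -2.5));

        // Composing constraints yields the candidate architecture's profile.
        System.out.println(compose(List.of(monolithicBuild, sharedDatabase)));
    }
}
```

The point is not the arithmetic but the mental model: adding or removing a constraint shifts the overall capability profile in a predictable direction.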

Constraint: Monolithic Build Artifact

As the name “monolith” (mono: single, lith: stone) suggests, the system is compiled into a single binary artifact. This constraint, like every constraint we will introduce in this series, brings a set of trade-offs. Let’s start by looking at the capabilities that are induced by this constraint.

A picture of the Uluru monolith in Northern Territory, Australia - Image CC-BY

Simplicity (+2)

If an entire system resides within a single binary, we avoid an entire category of complexity that distributed systems introduce. Those challenges are best exemplified by the fallacies of distributed computing. In summary, the fallacies are:

  1. The network is reliable
  2. Latency is zero
  3. Bandwidth is infinite
  4. The network is secure
  5. Topology doesn’t change
  6. Transport cost is zero
  7. The network is homogeneous

Basically, the fallacies describe the complexities that emerge when a system becomes distributed. In short, there is a lot less that developers have to worry about with a monolith.

In addition, the build process is generally much simpler. It can be as simple as performing a build in an IDE.

It is generally easier to get started building a monolithic app. We can begin to write code that creates user value without too much thought.

Working in the monolithic codebase is also generally simpler. The entire codebase can be indexed by an IDE, providing useful IntelliSense; there is direct visibility into every part of the system; and since everything is in a single codebase, changes can be coordinated in a single commit.

Administration of a single app is also radically simplified. There is much less to monitor or manage.

Performance (+2)

As implied by the fallacies of distributed computing, one often unexpected consequence of distributed architectures is the performance penalty incurred through network latency and limited bandwidth. In-memory calls are, thus, faster than network calls. That said, the potential benefits are limited by the hardware available to a given application (even when writing multi-threaded, high-performance code). Resources are shared and difficult to scale out (as we can with other, distributed patterns).
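As a rough illustration (hypothetical service and URL, not from any real system), the same logical operation looks very different depending on whether it stays in-process or crosses the network:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PricingExample {

    // Inside a monolith: a direct, in-process call. No network involved.
    static double quoteInProcess(OrderService orders, String orderId) {
        return orders.priceFor(orderId);
    }

    // The same logical operation once pricing lives in a separate service:
    // a network round trip that can be slow, time out, or fail outright.
    static double quoteOverNetwork(HttpClient http, String orderId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://pricing.internal/orders/" + orderId + "/price"))
                .GET()
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        return Double.parseDouble(response.body());
    }

    interface OrderService {
        double priceFor(String orderId);
    }
}
```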

Cost (+1)

A related consequence of the simplicity induced by this constraint is a corresponding reduction in cost. Up-front design efforts are reduced, as are infrastructure requirements.

Deployability (+1)

Deployment of a monolithic binary can be as simple as copying a directory to a server or using a deploy feature in development tooling (although friends don’t let friends right-click deploy). The improvement to this capability is modest at best. We certainly can’t deploy with the same velocity as more granular architectures, given the size of the deployment (the entire application, even for a small change), and the start-up time of an application degrades in proportion to the size of the codebase.

Testability (+0.5)

Compared with the Null style, testability can be slightly improved, but only slightly. There are few, if any, external dependencies to be aware of, and in theory the entire codebase can be tested with minimal coordination cost.

Agility (-1)

A monolithic build artifact means that any change requires a redeployment of the entire system. Testing scope is larger, and coordinating releases is more difficult.

Scalability (-2)

This constraint introduces scalability challenges. The only available avenues towards scale revolve around either scaling up the hardware the application is running on or scaling out. The latter is severely constrained by the fact that the entire application must be replicated, not merely the handful of components responsible for the lion’s share of the load.

Abstraction (-2)

One key trade-off of the single, monolithic codebase is that abstraction generally becomes a secondary concern (if it is a concern at all). Without care, the code becomes tightly coupled, with a high degree of connascence. Other constraints will balance this somewhat, but in the context of just this constraint, abstraction is degraded.

Elasticity (-3)

In the same way this constraint degrades scalability, elasticity is even more affected. Quickly responding to bursts in load becomes challenging, as the entire application must be replicated, and the coarse granularity of the application degrades startup times.

Fault-tolerance (-3)

Since the entire system resides in a single binary, fault-tolerance is adversely affected. Generally the system as a whole is healthy, or it is not. In more fine-grained architectures it’s possible for components to fail without bringing down the entire system.

Constraint: Monolithic Deployment Granularity

Although this constraint is superficially similar to a monolithic build artifact, there is a key distinction: a monolithic build artifact requires monolithic deployment granularity, but the reverse is not necessarily true. Notable examples are the microservices mega-disasters I alluded to in earlier posts. Many times I have seen “microservices” architectures where all services must be deployed at the same time (often in a specific order). At that point, however, those systems can no longer be called microservices, since independent deployability is a core constraint of that style; they become distributed monoliths. This constraint cannot co-exist with the independent deployability constraint.

Simplicity (+1)

It is easier to reason about the deployment process at this granularity.

Cost (+0.5)

Deployment pipelines at this granularity are also generally cheaper to produce and maintain.

Deployability (-1)

Deployability is generally degraded, as every change, even a minor one, requires a full redeployment of the system, which reduces velocity and introduces risk.

Agility (-2)

As detailed under the deployability score, velocity is reduced and risk is increased, which reduces organizational agility.

Constraint: Technical Partitioning

This constraint introduces some structure in terms of how components of the system are organized. In this case, components are grouped by their technical categories. As the depiction of the abstract style indicates, these are usually along the lines of UI/presentation, business logic, persistence, and database, but this constraint can apply to both monolithic and distributed topologies.

Generally, layers are considered either open or closed. Closed layers abstract any layers below them, meaning they must act as an intermediary. Open layers are free to be bypassed when it makes sense (the sinkhole antipattern describes a scenario where a layer doesn’t apply any meaningful changes or validation to the data as it passes through).
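Here’s a minimal sketch, with hypothetical names, of a closed business layer: the presentation layer talks only to the service layer, which mediates all access to persistence.

```java
// Presentation layer: depends only on the business layer below it.
class OrderController {
    private final OrderService orders;

    OrderController(OrderService orders) {
        this.orders = orders;
    }

    String handleGetOrder(String orderId) {
        // A closed business layer means the controller never talks to
        // OrderRepository directly; the service is the intermediary.
        return orders.findOrder(orderId).summary();
    }
}

// Business layer: the "closed" layer that mediates access to persistence.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    Order findOrder(String orderId) {
        Order order = repository.load(orderId);
        // Business rules and validation live here, not in the controller
        // or the repository. If this layer adds no value for a given
        // request, that's the sinkhole symptom mentioned above.
        return order;
    }
}

// Persistence layer: knows how to load and store orders, nothing more.
interface OrderRepository {
    Order load(String orderId);
}

record Order(String id, String summary) {}
```

An open business layer would simply allow OrderController to call OrderRepository directly when the service adds no value.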

Cost (+2)

The structure this constraint introduces keeps development relatively inexpensive: the layering is familiar and well understood, developers can specialize in a single technical layer, and common technical concerns are handled in one place rather than duplicated throughout the codebase.

Testability (+0.5)

Because this constraint adds some structure to our software components, testing scope becomes better defined, as do the interfaces between layers.

Abstraction (+0.5)

Because one layer must interact with another, better interfaces and abstractions are often put in place. As a result, this constraint generally provides a slight improvement to abstraction within the system.

Deployability (-1)

Deployability is degraded here, whether this constraint is applied to a monolithic or a distributed technically partitioned system. Generally, any single change requires modifications to all layers, which increases testing and regression-testing scope and reduces velocity while increasing deployment risk.

Configurability (-2)

Technical partitioning might introduce tight coupling between different parts of the application. This can make it difficult to change or configure one part without affecting others.

Evolvability (-2)

The large change scope, reduced deployment velocity, and increased change risk also adversely affect evolvability. The risk surface area is much larger than in domain-partitioned systems.

Constraint: Separation of Concerns

This constraint further narrows the technical partitioning constraint by being more prescriptive about how layer boundaries and modularity are defined. It specifies that code is organized not simply by technical area but also by logical concern.
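As a rough sketch (hypothetical package names), the source tree under this constraint ends up sliced by technical layer and, within each layer, by logical concern:

```
src/main/java/com/example/shop/
├── presentation/
│   ├── ordering/OrderController.java
│   └── billing/InvoiceController.java
├── business/
│   ├── ordering/OrderService.java
│   └── billing/BillingService.java
└── persistence/
    ├── ordering/OrderRepository.java
    └── billing/InvoiceRepository.java
```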

Cost (+2)

Development and maintenance costs are reduced by adding this level of modularity to the code. Developers may develop deep domain expertise in the business logic (or a subset of it), which further reduces cost.

Testability (+1)

This constraint further reduces testing scope for any given change.

Agility (+1)

Agility is improved, as the code generally has better boundaries, reduced testing scope, and potentially reduced change scope.

Simplicity (+1)

This constraint generally improves the simplicity of developing and maintaining the code. It is a well-defined way to develop software, and it improves the understandability of the system’s components as well.

Evolvability (+0.5)

Evolvability is slightly improved as a consequence of the factors detailed above.

Constraint: Shared Database

This constraint states that the entire application utilizes a common database. While this is often a default of some abstract styles and patterns, it still strengthens and weakens capabilities and should be explicitly noted.
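A minimal sketch of what this looks like in code, using plain JDBC and hypothetical repository and table names: every component, regardless of concern, is wired to the same DataSource and the same schema.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Every component shares the same database: one DataSource, one schema.
class SharedDatabaseWiring {

    static Repositories wire(DataSource sharedDatabase) {
        // Both concerns point at the same physical database. Convenient and
        // cheap, but any schema change is now visible to (and risks breaking)
        // every component that reads these tables.
        return new Repositories(
                new OrderRepository(sharedDatabase),
                new InvoiceRepository(sharedDatabase));
    }

    record Repositories(OrderRepository orders, InvoiceRepository invoices) {}
}

class OrderRepository {
    private final DataSource db;
    OrderRepository(DataSource db) { this.db = db; }

    boolean exists(String orderId) throws SQLException {
        try (Connection conn = db.getConnection();
             var stmt = conn.prepareStatement("SELECT 1 FROM orders WHERE id = ?")) {
            stmt.setString(1, orderId);
            try (var rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}

class InvoiceRepository {
    private final DataSource db;
    InvoiceRepository(DataSource db) { this.db = db; }
    // ...queries against the same shared schema...
}
```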

Cost (+1.5)

Generally, a single, shared database reduces licensing, hosting, and development costs. It also reduces data-storage redundancy, as there is much less need to replicate data to make it visible to other application components.

Simplicity (+1)

Administration is simplified by virtue of having a single database to manage. Design is also simplified, as all data modeling can be done at the application level rather than the domain level.

Deployability (+0.5)

Deployment is generally straightforward, as changes to a single database have reduced coordination costs. The improvement is modest, however, as database changes at this scale can affect availability and introduce risk if schemas that other components rely on change.

Configurability (-0.5)

Configurability is reduced as any changes must be applied system-wide. A one-size-fits-all approach is generally required under this constraint.

Fault-tolerance (-0.5)

A single database becomes a single point of failure. Although most database management systems offer high-availability configuration options, if the one (and only) database is unavailable, the entire system is unavailable.

Scalability (-0.5)

Databases are notoriously difficult to scale. Multiple databases responsible for different parts of the data provide some level of parallelism and increase total capacity; a single database may be limited to scaling up.

Agility (-1)

Database changes potentially require coordination with all teams and must be regression tested across all components. It can be very difficult to tell which teams are using which tables. Consequently, any change introduces risk, which reduces change velocity.

Evolvability (-1)

The high coordination cost and broad testing scope also degrade evolvability.

Elasticity (-2.5)

Because the database is a single, shared resource, the system as a whole becomes less elastic: there is a ceiling to the single database’s capacity.

Summary

The layered monolith pattern is the foundation for a simple, inexpensive application architecture. However, this is just the base layer. The real potential lies in how we expand, adjust, and tailor this (or any) foundational architecture. That said, remember that this is an abstract pattern, and these scores only tell the story of its core constraints in this abstract form. Any weakness can be offset by any number of additional constraints to yield a more capable architectural style.