Software takes more than “the right technology”

SitePen
Feb 26, 2021 · 8 min read

Carpenters have many tools to pick from: drills, impact drivers, circular saws, and miter saws. Each is well suited to a specific purpose, and a great carpenter knows when to reach for each one. However, the best carpenter is not necessarily the one who uses the most tools, but the one who best knows how to use the tools at hand to get the job done quickly and accurately.

Projects often spend a lot of time discussing what technologies to use. These conversations are important, but technology is ultimately about enabling productivity. Looking into these conversations gives us insight not only into what might be the right technology for a project, but also into what challenges a team is concerned about. Good engineering teams can distill problems down to their core concerns. This helps them know whether a technology choice is a true solution or a scapegoat for deeper issues.

Teams rightfully make decisions based on their past experience. A newly assembled team might reach for the technologies they are most familiar with. A team that also supports a legacy application may want a future project to avoid the maintenance burden they currently carry. People are attracted to new and exciting tools they hear mentioned by their peers. With so many tools available, it’s easy for teams to overstate the importance of which tools they use and miss the adjustments that can be made with the tools they already have.

Looking at common technology choices

Recognizing the human elements of decision making, let’s look at some commonly debated tools to identify the different strategies they employ, as well as adjustments that can optimize the tools teams may already be using. These comparisons are not meant to be comprehensive or an endorsement of one option over the other. Rather, the goal is to see where teams have room for improvement.

Microservices vs Monoliths

The term microservices can be over-simplified to mean breaking a service’s implementation into individually deployable applications. This is a response to issues some companies have had with large, monolithic applications that are deployed as a single executable. Microservices are touted for the flexibility that comes with keeping services independent of each other, while monoliths are recognized for their simplicity. These drastically different approaches to software have lots of tradeoffs.

Pros-Microservices

  • Flexibility getting started
  • Limit drag of technical debt
  • Can be deployed/scaled independently

Cons-Microservices

  • Lots of boilerplate/config to start
  • Coordinating between services is more complicated/expensive
  • Maintaining consistency is difficult

Pros-Monoliths

  • Simple to extend/expand
  • Take advantage of existing architecture
  • It’s easy to reuse existing code

Cons-Monoliths

  • Build/test/startup time can get slow
  • Harder to introduce new technologies/architectures
  • Code easily becomes tightly coupled

Both of these system architectures are accepted, valid approaches to scaling software, and each has its place. As software scales, tightly coupled parts of a codebase make changing individual parts of an application harder. Microservices encourage splitting an application into independent parts and defining clear APIs for those services to communicate through, keeping the codebase loosely coupled. If your team needs services to be independent and you have the resources to handle the cost of managing independent services, microservices are a great choice.

However, some teams adopt microservices hoping to achieve their benefits while still writing code that is tightly coupled AND taking on the overhead of managing multiple services, giving them the worst of both worlds. If a software team can be more disciplined about encouraging loose coupling in a monolith, they can benefit from better separation while avoiding the management costs of microservices. For teams using microservices, knowing how to keep service dependencies loosely coupled can also make their deployments and development easier. Each approach has its limits, but a full understanding of the goals and tradeoffs is key to reaping the benefits of a technological choice.
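
To make “loose coupling in a monolith” a little more concrete, here is a minimal TypeScript sketch. The billing module and every name in it are hypothetical, not a prescribed design; the point is only that callers depend on a small interface rather than on an implementation.

```typescript
// A hypothetical billing module inside a monolith. The rest of the
// application depends only on this interface, never on the class below,
// so the module could later be extracted into its own service with
// minimal changes to its callers.
export interface BillingService {
  createInvoice(customerId: string, amountCents: number): Promise<string>;
}

// The concrete implementation stays private to the billing module.
class InMemoryBillingService implements BillingService {
  private invoices = new Map<string, { customerId: string; amountCents: number }>();

  async createInvoice(customerId: string, amountCents: number): Promise<string> {
    const id = `inv_${this.invoices.size + 1}`;
    this.invoices.set(id, { customerId, amountCents });
    return id;
  }
}

// The only things other modules import are the interface and this factory.
export function createBillingService(): BillingService {
  return new InMemoryBillingService();
}
```

If the team later does decide to split out a service, the same boundary gives it a head start: the interface already describes the API, and callers only need a different implementation behind the factory.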

Monorepos vs Polyrepos

Similar to the previous discussion, monorepos and polyrepos (also called multirepos) are different approaches to splitting up code, focusing more on where code is stored than how it’s deployed. Instead of storing each application in its own location or repository, monorepos bring applications into one location to improve consistency and make sharing code between them easier. Tech giants like Google and Facebook famously use extremely large monorepos to store code across the company, but monorepos can also be applied on a much smaller scale, like tying together a few related projects for a single application.

Pros-Monorepo

  • Easy to share code/config/infrastructure
  • Easier to enforce consistency through tooling
  • Cross-project changes can be made at once

Cons-Monorepo

  • More complicated infrastructure
  • Download/build/test times grow as projects are added
  • Managing code ownership isn’t built-in

Pros-Polyrepo

  • Simple to get started
  • Flexible to each project’s needs

Cons-Polyrepo

  • Each application needs its own CI/CD infrastructure
  • Hard to make cross-project changes
  • Hard to share code
  • Hard to enforce consistency

Again, both approaches are trying to answer the same question: how do you manage lots of projects? Polyrepos are often the default because each team can manage their code as needed. This gives a lot of flexibility but also means that sharing code between projects is more difficult. Monorepos co-locate code from different projects to make it easier to manage those cross-project dependencies. To do so, there needs to be consistency between projects so that a common set of infrastructure can be used to build, test, and deploy code.

For many organizations, this consistency is a key benefit of monorepos. Configuration and tooling are the same across many projects, making it easier for engineers to switch between projects and for processes to be automated. The downside is that much more complicated tooling is required to manage building multiple projects at the same time. But regardless of which approach a team takes, they can still work to improve the process of starting new projects, sharing common code, maintaining inter-project consistency, and speeding up build times. For monorepos, this may mean investing in DevOps infrastructure or improving coordination between teams that use an internal library. For polyrepos, improvements could include having engineers work together and document patterns for consistency, or making it easier for teams to contribute to a shared library. A strategy like InnerSource can help teams collaborate on shared libraries but will need people’s time and clear communication to be successful.
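
As a small illustration of the shared-code benefit, here is what a shared helper might look like in a TypeScript monorepo. The package name @acme/validation, the file layout, and the functions are all hypothetical; in a polyrepo, the same helper would typically be published as a separately versioned package instead.

```typescript
// packages/validation/src/index.ts (a hypothetical shared package)
export function isValidEmail(value: string): boolean {
  // Deliberately simple check, for illustration only.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// apps/signup/src/form.ts (an application in the same repository)
// The import resolves through workspace tooling rather than a published
// package, so a change to isValidEmail and all of its callers can land
// in a single cross-project commit.
import { isValidEmail } from "@acme/validation";

export function canSubmit(email: string): boolean {
  return isValidEmail(email);
}
```

The tradeoff from the cons list still applies: something has to build and test both packages together, which is where the more complicated monorepo tooling comes in.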

Cloud vs On-premise

Cloud-based services are a rapidly growing market and have attractive features for software teams. In many scenarios, cloud-based computing services are a direct replacement for managing complex servers in-house. Other kinds of cloud offerings are more specific to a single technology or have a unique API that would require reworking or replacing significant parts of an application. For teams looking to scale quickly, or those without significant existing server management infrastructure, cloud computing is often an easy choice. For teams that already have on-premise infrastructure, the decision gets a little more complex.

Pros-Cloud

  • Less software, hardware, and security maintenance
  • Quickly scalable
  • Uptime is usually better

Cons-Cloud

  • Dependence on a cloud service
  • New APIs for existing teams to learn
  • Different price structure

Pros-On-premise

  • Often already existing
  • More control over security, software, and hardware

Cons-On-premise

  • Complex technical maintenance
  • Need extra capacity to scale
  • Complex costs

Migrating teams to a cloud provider takes coordinated work. The primary challenges are not usually technical, but organizational. Because a migration to the cloud could be implemented using many different approaches, technical teams will need to work with business partners to identify the best approach and what modifications will be needed to change how an application is hosted. Development teams need to work closely with architecture, security, and DevOps groups to plan and execute a proper migration. As with many business requests, the question is not “Can it be done?” but rather “How difficult is it?”

If a team chooses a cloud deployment strategy, it’s important to learn the options available from that cloud provider, both to maximize the value of the service and to mitigate the risks. For teams choosing on-premise hosting, the equivalent work, such as knowing the company’s process for upgrades and for provisioning new servers, looks different but is arguably even more important. In both of these scenarios, clear communication is key. No matter how an application is hosted, great teams know their responsibilities and how to deliver uptime and meet the performance demands of users.
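
One common way to soften the “dependence on a cloud service” risk listed above is to keep provider-specific calls behind a thin interface. The sketch below assumes a hypothetical object-storage need and invented names; it does not use any particular provider’s SDK.

```typescript
// A minimal abstraction over object storage. Application code sees only
// this interface; only the implementations know whether files live in a
// cloud bucket or on an on-premise file server.
export interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | undefined>;
}

// In-memory stand-in so the sketch stays self-contained. A real project
// would add a CloudObjectStore backed by the chosen provider's SDK and an
// OnPremObjectStore backed by the local file system or network storage.
export class InMemoryObjectStore implements ObjectStore {
  private objects = new Map<string, Uint8Array>();

  async put(key: string, data: Uint8Array): Promise<void> {
    this.objects.set(key, data);
  }

  async get(key: string): Promise<Uint8Array | undefined> {
    return this.objects.get(key);
  }
}

// Application code depends only on ObjectStore, so moving between hosting
// models becomes a wiring change rather than a rewrite.
export async function saveReport(store: ObjectStore, name: string, body: string): Promise<void> {
  await store.put(`reports/${name}`, new TextEncoder().encode(body));
}
```

An interface like this doesn’t remove the switching cost entirely, but it concentrates it in one place instead of spreading provider-specific calls throughout the application.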

Rewrites vs Upgrades

Many existing projects suffer under the weight of technical debt: past technical choices that encumber future progress. Some technical debt can be such an impediment that it is easier to start over with a rewrite than to make updates to the existing application. But where is the line for when to throw out the existing code and start anew?

Pros-Rewrite

  • Can rethink assumptions to make improvements
  • Can take advantage of the latest technology available
  • More attractive projects for both developers and managers

Cons-Rewrite

  • Has higher initial development costs to reimplement base functionality
  • Managing two products side-by-side is difficult
  • Migrating data requires additional thought

Pros-Upgrade

  • Can build on existing work/team/infrastructure
  • Can get new features to users more quickly
  • Existing technologies are often better understood

Cons-Upgrade

  • Need to address existing technical debt to be productive
  • Legacy technologies may have a shrinking community
  • Fixing fatal flaws may be cost-prohibitive

Rewrites are fundamentally about weighing up-front costs against future benefits. When talking about technology decisions, these costs and benefits can be hard to quantify. Benefits could be better application performance for users or quicker development for engineers. The costs are most often the development time needed and the opportunities lost while waiting for the new application’s functionality to catch up to the old one.

There are risks on both sides. Rewrites are often chosen because they are flashy rather than because they are needed. If the rewrite doesn’t learn from the legacy application, it may run into the same technical limitations the legacy application did. It’s important to identify the technical debt, how to fix it, and how to avoid it in the future; otherwise, the application will need another rewrite before long. For legacy applications, it’s important not to fall into the sunk cost fallacy of believing that the cost already paid justifies future costs. Sometimes a poorly designed application will be harder to fix than to replace, but past costs don’t determine what will be better in the long run.

Summary

Technology choices are important, but when picking between a few industry-accepted options, the success of a project usually depends more on the implementation of that choice. Looking at the pros and cons of a few options isn’t just helpful for making a decision; it also gives a roadmap for what problems may arise down the line. Many potential problems can be mitigated when teams know the limitations of their tools. This is why software engineering is more than just writing code: it is knowing what the tools do for the team, and understanding how to write code that is flexible enough to change as requirements change.

Even though circular saws and table saws can sometimes do the same task, an experienced carpenter will know how to use both and which they prefer for a given job. Software teams need to make similar decisions. The tool that is easiest for one team may be different from what another group would consider the best tool. Each choice will be different, but a thorough understanding of the problem space helps maximize the value of whatever tool is selected.

Originally published at https://www.sitepen.com on February 26, 2021.
