
Software architecture in agile teams: refactoring myths and truths

Software architecture: refactoring myths

One of the strangest conversations of my entire career happened during a quote presentation at a large German automotive manufacturer. These presentations are typically designed to grill all the bidders and see how well the technical and financial aspects of the quote hold up when confronted with critical questions.

This particular question, however, said more about the person asking it than about the project. A young man working for the customer’s project team asked me, “how can you guarantee that with all this ‘refactoring’ you’re talking about, your architecture stays intact?”

At that moment I had to bite my tongue to stop myself from answering, “it would help if you knew what the word ‘refactoring’ actually means”. Luckily, I was professional enough to swallow my frustration.

There are a few very common but nonetheless critical misconceptions apparent in that customer’s question:

  1. Software architecture is some kind of set-in-stone master plan
  2. If you ever need to change the architecture you’re not a good architect
  3. Refactoring is a fancy way of saying “repairing a botched job”
  4. Refactoring endangers the architecture
  5. Change is bad

Depending on how long you’ve been working in the software industry, you might be surprised at how many people still think this way.

Agile vs. Command & Control

Despite what many people still think, agile software development is not a new thing – and for good reason: the idea of using short cycles of implementation and verification to make sure you are on the right track is just common sense.

The greatest achievement of the agile manifesto in my opinion is the emphasis on people over processes. At the time it came out, I thought it was a bold move for two reasons:

  1. Because a lot of industry-standard processes had been written, each trying to define the one true way of developing software.
  2. Because the whole claim had the air of anarchy — something many companies, especially large corporations, were not terribly comfortable with.

When it comes to agile development, from my experience you have the following types of people:

  1. Those who understand and live agile development
  2. Those who use the word ‘agile’ as an excuse for unplanned chaos
  3. Those who are forced to use it by managers who don’t understand that agile requires personal commitment
  4. Those who outright reject it because they think that a project must be planned in full detail far in advance
    (if you want to have some fun, ask them whether they think the Soviet Union’s five-year plans were a good idea)
  5. Those who reject it because they’re convinced that leadership means giving commands that need to be executed without question

Groups 3 through 5 are what I like to call the Command & Control Crowd.

In my eyes the second group is the worst, because they spoil the reputation of agile by using it as a fig leaf to cover their incompetence and lack of discipline. In a talk about the future of programming, Robert C. Martin makes the case that Alan Turing already predicted in the 1940s that discipline would be one of the deciding factors for the success of computer programming. Agile requires programmers to be very disciplined people.

Command & Control, on the other hand, tries to impose discipline as an external factor. Guess which works better?

Down the waterfall

Riding the waterfall

The Waterfall Development Model (and later iterations derived from it, like the V-Model) was a direct result of a mindset in which ‘all upfront design’ is considered the only responsible choice. Additional support came from customers of software makers who insisted on fixed prices. After all, how can you offer a fixed price for a large project if you don’t plan the whole five years ahead?

Irritatingly, the same customers mostly ignored the fact that it would be neither feasible nor economical to create a complete multi-layer design as part of the offer process.

Fun fact: most people still don’t know that one of the first papers describing strict top-down development was written by Dr. Winston W. Royce in 1970 to discuss both the benefits and the dangers of that development method. But selective quoting made people believe it was outright promoting the idea. In fact, people liked to copy the diagrams without the surrounding text – a practice that unfortunately is still alive and well today. The term ‘waterfall’ was only coined for this method later, in 1976.

Here is a quote [see section “Computer program development functions”, p. 329] which shows that Dr. Royce was more than aware of the shortcomings of the model:

I believe in this concept, but the implementation described above is risky and invites failure. The problem is illustrated in Figure 4. The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance. Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required.

He presented his findings in the ’70s but even today, a lot of people still champion the idea of ‘stable design’.

In my experience, the lasting appeal of the waterfall development model doesn’t lie in its questionable benefits. It lies in the fact that it’s simple to understand – and inspires a false perception of structure and control.

Non-trivial things change

I like to say that reality doesn’t do you the favor of sticking to your plan. The more complex your product and the environment it lives in are, the more likely it has to evolve in order to survive.

Dodo

Think of dodos as a typical example of a failed software project. Fat, flightless, pigeon-like birds, happily hopping around their remote island until the first real predator (read: humans) showed up and ate them. The same thing happens to software if you count on your initial assumptions staying correct over the product’s lifetime.

There are a number of things that can change beyond your control:

  • Customer interest shifts (hype cycle)
  • Technology becomes obsolete (J2EE)
  • The core services you depend upon for your software projects are discontinued, even if big players are behind them (see the list of discontinued Google projects)
  • It gets harder to find developers for your languages and frameworks (Tiobe index)

About the only product that comes to my mind that hasn’t changed over the years is the supermarket trolley token: little plastic disks typically handed out as advertising gifts. And even there, someone will probably be able to prove me wrong.

For everything more complex, change is inevitable. Even something as simple as an LED (light emitting diode) changed drastically over the years, giving us new colors, better efficiency, brighter light and longer life span.

In my eyes the most prominent quality aspect of a software architecture is how easy it is to change it.

The right time to make decisions

When is the best point to make a decision? As early as possible, or as late as possible? If you were inclined to answer “early”, think again. The later in a project you are, the more you know about it – and information is the basis for an educated decision.

Brexit is a perfect example of what happens if people make decisions before they have all the important information.

An uninformed decision: Brexit is what you get if you decide with insufficient information

That’s why it is a software architect’s job to design a product in a way that allows the team to delay or revise decisions with the least possible amount of impact.

To give you an example, I once worked on a project that created server-side software in Java. At that time J2EE was still the common and agreed-upon standard on which everyone built. One year into the project, we realized that J2EE did nothing for us but waste perfectly good computing resources. It introduced tight coupling without good reason. And overall, it made the software harder to maintain than it needed to be.

Luckily for us, our architect had put a facade layer in place that allowed us to throw out all the dead weight of J2EE and implement the same software as a simple servlet.
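To make that concrete, here is a minimal sketch of what I mean by such a facade layer. The names (GreetingService, DefaultGreetingService, GreetingServlet) are made up for illustration – the real project was far bigger – but the principle is the same: the business logic only ever talks to a small interface we own, and the container-facing code is a thin adapter that can be swapped out.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The facade: business code depends only on this small interface,
    // never on EJB, WebSphere or any other container API.
    interface GreetingService {
        String greet(String name);
    }

    // Plain implementation, testable without any application server.
    class DefaultGreetingService implements GreetingService {
        @Override
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Thin servlet adapter. Replacing WebSphere with Tomcat (or an EJB
    // with a plain servlet) only touches this layer, not the domain code.
    public class GreetingServlet extends HttpServlet {
        private final GreetingService service = new DefaultGreetingService();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String name = req.getParameter("name");
            resp.getWriter().write(service.greet(name != null ? name : "world"));
        }
    }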

Our customer, though initially reluctant to drop the de-facto industry standard, was more than happy with the change when the software license bill dropped significantly after replacing IBM WebSphere Application Server with Apache Tomcat.

If you still cringe at the original premise, let’s rephrase it a little: the best time to make a decision is at the latest responsible moment.

Refactoring as the backbone of a healthy architecture

Building software is often compared to constructing a large building. I think that metaphor is deeply flawed.

You can’t just:

  • move a large building around,
  • exchange its foundation,
  • reorder its floors,
  • build it from the roof down,
  • replace it with smaller buildings,
  • push buildings around it out of the way to gain more space.

But with software you can do all of these things.

One point that the metaphor gets right though is that if you do extensive construction work, you inevitably face the rubble and the dirt as a byproduct. If you don’t clean up regularly, the building becomes an inhospitable landfill.

The same is true for software development. With changing demand come changes in the feature set. And if you try to force those changes into your product without proper design adaptations, your code gets harder to read and maintain.

This is where refactoring comes in

In a nutshell, ‘refactoring’ is the art of improving your software internally without changing the feature set. I intentionally didn’t use the phrase “without changing the behavior”. That’s because resource usage and speed are often influenced by refactoring, and you could very well consider that a change in behavior.
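A deliberately tiny, made-up example of what that means in practice: the feature – calculating a gross price – is identical before and after, only the internal structure changes so the tax rule has a name and a single place to live.

    class PriceCalculator {

        static final double REDUCED_VAT = 0.07;
        static final double STANDARD_VAT = 0.19;

        // Before: the tax rule is buried inside one expression.
        double priceBefore(double net, boolean reducedRate) {
            return net + net * (reducedRate ? 0.07 : 0.19);
        }

        // After: same inputs, same outputs, same feature set. The rule
        // now has a name and one place to change when the tax law does.
        double priceAfter(double net, boolean reducedRate) {
            return net + vatFor(net, reducedRate);
        }

        private double vatFor(double net, boolean reducedRate) {
            return net * (reducedRate ? REDUCED_VAT : STANDARD_VAT);
        }
    }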

While in most cases a lower memory footprint and faster operation are welcome side effects of refactoring, there are cases where you need to be careful. Real-time applications, for example, do not take kindly to being faster. They need to be on time, not as fast as possible. That being said, if exact timing depends on your code being slow, you’re in a different kind of trouble.

A price tag on maintenance

Project leads don’t particularly like the idea of refactoring. It takes up time and resources which, in their minds, would be better spent on adding new features. Worst of all, customers don’t normally pay for refactoring.

If you plan to convince a project manager of the benefits of refactoring, put a price tag on maintenance. Take a set of features from your backlog that require a common refactoring and estimate how much each would cost if you had to implement, test and document them individually on the current design. Then estimate the cost of the refactoring itself plus implementing the same features on top of the refactored software. If the refactoring is really necessary, the latter total should be lower. Be prepared to explain, at a level a stakeholder can understand, why you think the costs go down.
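To pick purely illustrative numbers: say five backlog features would each take eight days to implement, test and document on the current design – 40 days in total. If the refactoring itself takes ten days and cuts each of those features down to four days, the refactored route costs 10 + 5 × 4 = 30 days. Presented like that, the refactoring stops being “extra work” and becomes the cheaper of two offers.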

You can often use statistics from your ticket tracker to show how work on features gets slower and slower without refactoring.
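If your tracker can export tickets with start and done dates, even a throwaway script is enough to make that trend visible. Here is a rough sketch – the CSV layout (id,started,done with ISO dates) is an assumption, so adapt it to whatever your tracker actually exports:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import java.util.*;

    // Average cycle time (started -> done) per quarter from a CSV export.
    // A steadily rising average is usually the clearest sign that missing
    // refactoring is slowing feature work down.
    public class CycleTimeTrend {
        public static void main(String[] args) throws Exception {
            Map<String, List<Long>> byQuarter = new TreeMap<>();
            for (String line : Files.readAllLines(Path.of(args[0]))) {
                String[] cols = line.split(",");
                if (cols.length < 3 || cols[0].equals("id")) continue; // skip header
                LocalDate started = LocalDate.parse(cols[1]);
                LocalDate done = LocalDate.parse(cols[2]);
                String quarter = done.getYear() + "-Q" + ((done.getMonthValue() - 1) / 3 + 1);
                byQuarter.computeIfAbsent(quarter, q -> new ArrayList<>())
                         .add(ChronoUnit.DAYS.between(started, done));
            }
            byQuarter.forEach((quarter, days) -> System.out.printf(
                    "%s: %.1f days average over %d tickets%n",
                    quarter,
                    days.stream().mapToLong(Long::longValue).average().orElse(0),
                    days.size()));
        }
    }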

Summary

The constraints of your project change over time – often for reasons outside of your control. Good software architects make sure that the design stays modifiable. That allows hard-to-change decisions to be made at the latest responsible point in time. Regular refactoring keeps the development team from coding itself into a corner – the point where maintenance effort rises exponentially and progress comes to a halt. And let’s face it, taking the time to do all this refactoring has got to be better than having to explain to the business why their project won’t deliver on time.

Sebastian Bär