In the September issue of MSDN Magazine, in an article entitled "Software Disasters: Recovery and Prevention Strategies", Dino Esposito provides the following strategy for recovering from a deteriorated software system:
1. Stopping new development.
2. Isolating sore points to layers and arranging regression tools.
3. Refactoring.
While a significant proportion of my career has been spent working on shiny new architectures for greenfield projects using leading/bleeding edge technologies, I have also served my time working on existing software platforms, some of which have had lifespans longer than my software career. This experience has taught me to have respect:
- Respect for the original business idea and intellectual capital behind the software solution: a solution with enough legs that the product was still conferring a business advantage on its users when I worked on it.
- Respect for the original developers who created the product with sufficient skill for it to still be operating many years later, having been through many iterations of enhancements.
- Respect for the software and IT infrastructure teams who have continued to grow, and at times nurse, the software product through sometimes unhealthy times.
And I am sure that much of this occurred with tight timescales and high business pressure.
So as I look back with 20/20 hindsight I try to be mindful of this, rather than being judgemental. And I am inclined to disagree with the term "software disaster". But I do accept that developers working on such a system find themselves in a regression minefield which limits the lifespan of the product.
While I can appreciate Dino's perspective on recovering from a "muddy" position, I don't fully agree with the practicalities of his strategy. After all, if the piece of software is muddy enough to warrant applying the strategy, chances are that the requests for enhancements and defect resolutions continue unabated. Calling a halt to all enhancements while code improvements are made (and defects are introduced!) is unlikely to go down well with the business users and/or product owners, and is also not necessarily advantageous in a highly competitive market (either for the businesses using the software, or for the software product itself). I would like to propose an alternative approach: "traffic light" coding accompanied by the "piranha principle" of product improvement. Why such names? Well, they are the sort of concepts that business-oriented people can appreciate and grasp hold of. Let's look at them in turn:
Traffic Light Coding
The code is divided into three "zones" matching the colours of a traffic light.
The Red Zone
- No separation of concerns
- Difficulty in identifying where to make a change
- Making a change in one area of the system is likely to break another area of the system
- NOT implemented in accordance with industry and organisation "good practice" standards
- NOT implemented in the organisation's current preferred technology
The Amber Zone
- Clear separation of concerns
- Implemented in accordance with industry and organisation "good practice" standards
- NOT implemented in the organisation's current preferred technology
The Green Zone
- Implemented in the organisation's current preferred technology
- Implemented in accordance with industry and organisation "good practice" standards
- Maintainable and extensible
Zoning areas of code in this manner helps product managers understand that changes in some areas of the application will take longer and be more expensive than changes in others. Over time, as changes are made, the aim is to promote code from one zone to the next, until the entire application is in the green zone. At this point, Dino's recommendations for prevention can be applied, i.e.:
- Have a domain expert (or become one)
- Learn and apply sane principles of software development and common best practices
- Understand the lifespan of your software product and design and develop according to that
The "Piranha Principle"
As for scheduling these improvements: that is where the "piranha principle" comes into play. A school of piranhas can strip an animal down to its bare skeleton in a matter of minutes through a large number of small bites. So too, the hope is to promote red zone code to green zone code through a large number of small changes. Although feature enhancement could be paused to make this large number of small changes, that is not done here; enhancements continue to be permitted for the reasons mentioned earlier. Rather, every time a component has a change made to it, whether a bug fix or an enhancement, an additional "small bite" is taken to move the code towards promotion from one zone to the next. Since the developer already has to understand the code to make the change, the improvement is likely to add only a small part to the development time. The area already needs to be retested, so the impact on QA resourcing is also minimised.
This has two implications:
- Areas with high code churn due to enhancements or defects are promoted more quickly.
- More stable areas are promoted more slowly, if at all.
This delivers real business benefits: areas which need more maintenance become more maintainable and areas which are stable do not have any additional business cost.
How does this work?
Let's take a theoretical page of ASP code created using a RAD approach. Reviewing the code reveals the following:
- Inline SQL query generation, with string concatenation, creating a SQL injection vulnerability
- ASP code interleaved with HTML
- Inline CSS
- Inline javascript
So it is identified as belonging to the "red zone".
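To make the red-zone diagnosis concrete, here is a minimal sketch of the inline-SQL problem and the shape of the eventual fix. The original page would be VBScript in classic ASP; JavaScript is used here purely for illustration, and the table, parameter, and function names are hypothetical.

```javascript
// Red-zone pattern: building SQL by concatenating raw user input.
function buildQueryUnsafe(userName) {
  // Any quote in userName breaks out of the string literal: SQL injection.
  return "SELECT * FROM Users WHERE Name = '" + userName + "'";
}

// A later "piranha bite" replaces concatenation with a parameterised
// query (or a stored procedure call), so input can never alter the SQL text.
function buildQuerySafe(userName) {
  return {
    sql: "SELECT * FROM Users WHERE Name = @name",
    params: { name: userName }
  };
}

var hostile = "x'; DROP TABLE Users; --";
console.log(buildQueryUnsafe(hostile));
// The concatenated text now smuggles in a second, destructive statement;
// in the parameterised form the same input stays inert data.
```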
Over the next few months, code churn is high and the following piranha activities are conducted:
- SQL queries are converted to stored procedures, removing the SQL injection vulnerability
- Inline styles are moved to a single style tag at the top of the page
- Javascript functions are moved to a separate file - improving performance through browser file caching and reduced page weight
- ASP code is decoupled from HTML page structure
- CSS styles are moved to a separate CSS file - improving performance through browser file caching and reduced page weight
- Javascript is rewritten using jQuery to reduce browser compatibility issues
- Javascript is reformatted in JSON style, enhancing maintainability
- Stored procedures are replaced by an object model and an ORM
- The ASP page is replaced by an MVC Razor page
As you can see, the earlier activities lay the groundwork for later activities, isolating the changes and reducing the risk associated with each change.
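As an illustration of one of those bites, the "JSON style" reformat amounts to gathering the page's loose global functions into a single object literal, which gives the script one namespace and a readable inventory of its behaviour. This is only a sketch; the object and member names are hypothetical.

```javascript
// Before the reformat, the page's script block might hold loose globals:
// calcGross(), checkOrder(), showError()... After the reformat they live
// on one object literal, written in the "JSON style" the article mentions.
var orderPage = {
  taxRate: 0.2,

  // Net price plus tax, rounded to two decimal places.
  grossPrice: function (net) {
    return Math.round(net * (1 + this.taxRate) * 100) / 100;
  },

  // Simple required-field check used before submitting the form.
  isComplete: function (order) {
    return Boolean(order.customer && order.items && order.items.length > 0);
  }
};

console.log(orderPage.grossPrice(10)); // 12
console.log(orderPage.isComplete({ customer: "Acme", items: [{ sku: 1 }] })); // true
```

The behaviour is unchanged; only the structure improves, which is exactly what makes this a low-risk bite to take alongside a functional change.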
As a developer, I like to improve the condition of each area of code I work on, leaving it easier to understand and easier to change, hopefully making things easier for the developer who comes after me. It is my hope that this article will provide hope and a pathway out for my fellow developers who find themselves working on a "Software Disaster".