The Recipe for Disaster Success: Part 2
The first half of this two-part series highlighted why ongoing maintenance is vital for the health of large-scale enterprise systems, especially heading into a substantial upgrade or replacement project as the system reaches end-of-life. We inverted the problem and began describing a recipe for the worst possible enterprise system upgrade project a business could ever want. Now, what other ingredients could we add to the recipe for even more fun?
Disclaimer: The following recipe is intended as satire – not to be taken literally!
Add more layers
Are engineers interacting directly with product owners? Who wants that! Add extra layers into the hierarchy, such as analysts with conflicting information or multiple levels of product owners with differing ideas on future direction. Make sure it’s almost impossible to schedule meetings with all responsible stakeholders, so quorums are never reached and developers spend multiple sprints trying to find out what the actual feature requirement is. They’ll only find out whether the feature is correct once the production release is ready to go out and a product owner suddenly raises concerns over how it works for their area.
Is engineering deploying value directly to customers? Too risky! Add in some gatekeepers external to your business unit (maybe even a third-party company) that control how and when deployments have to happen. Make sure they follow the recipe step of No Automation in the processes they enforce on your delivery groups – and be sure to never work with them to improve the shared processes.
To make things even better, ensure the gatekeepers control access to portions of the production stack, blocking delivery teams from investigating the root causes of any production issues. This ensures problems take ages to find and keep recurring, because delivery teams eventually work around the symptoms rather than fixing the root cause.
Another great idea is to add more environments. You might say having pre-production environments helps with validating changes and deployments before they reach production – but why not make all your environments slightly different? Add infrastructure or middleware specifics that require per-environment configuration beyond access credentials.
Better yet, use completely different data sets in each environment, so you never know whether a change behaves the same way as the old implementation or whether it will work against the live data. Plus, without consistent data or automation, every deployment to any environment becomes a dice roll. Who doesn’t love an extra challenge under tight deadlines!
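For contrast, here is a minimal sketch of the kind of guardrail that keeps environments honest: a small script that compares each environment’s configuration against production and flags anything that differs beyond credentials and endpoints. This is purely illustrative and not from the original series; the file paths, key names, and allowed-difference list are all hypothetical.

```python
import json
from pathlib import Path

# Keys that are expected to differ between environments (hypothetical names).
ALLOWED_DIFFERENCES = {"db_user", "db_password", "api_endpoint"}

def load_config(path: str) -> dict:
    """Read a flat JSON configuration file for one environment."""
    return json.loads(Path(path).read_text())

def config_drift(reference: dict, candidate: dict) -> set:
    """Return config keys that differ between two environments, ignoring allowed ones."""
    keys = set(reference) | set(candidate)
    return {
        key for key in keys
        if key not in ALLOWED_DIFFERENCES and reference.get(key) != candidate.get(key)
    }

if __name__ == "__main__":
    prod = load_config("config/prod.json")  # hypothetical file layout
    for env in ("dev", "test", "staging"):
        drift = config_drift(prod, load_config(f"config/{env}.json"))
        if drift:
            print(f"{env} has drifted from prod on: {sorted(drift)}")
        else:
            print(f"{env} differs from prod only where expected")
```

A check like this run in CI makes configuration drift visible long before a deployment turns into a dice roll.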
Change your strategy midway through
You can go one of two ways here. The first approach is not to have a strategy at all. No strategy means everything your delivery groups do can be considered successful, even when they are going around in circles or following massively wasteful tangents. As long as budgets are spent, who’s to say how they should be apportioned?
The second approach is to change your strategy midway through the project. Add more expectations on your delivery teams – something along the lines of “corporate responsibility” or “greater alignment” that helps implement the Add More Layers recipe item through more interactions with external groups. Ideally, these would be external teams that are newly formed, understaffed, and without a clear strategy of their own.
And make sure your delivery project is blazing the trail with any new corporate strategy, acting as a guinea pig to figure out new ways of collaborating and sharing assets. Better still, make sure cross-team requirements and asset reuse expectations change multiple times throughout your project! Perhaps even back to a previous way of working that was abandoned for no apparent reason?
Overload your delivery teams
Make sure you don’t have enough team members to undertake all the above points. People can’t ask questions if they don’t have the time to ask them. Not going to reach delivery milestones? Dynamically scale up extra teams that require your core members to take even more time out of their schedules to train the new teams and validate their output!
Once a milestone is eventually reached, or once it’s clear that the external teams were never very good to begin with, scale back to your original core team (they probably could have handled all the work anyway!). Or perhaps look for a new third-party delivery partner that is somehow “more strategic” than the previous one, and repeat the process.
A good metric to aim for at project completion is a greater than 100% churn rate – the delivery team you end with should be completely different from the group you started with due to burnout and turnover. Ideally, that cycle repeats multiple times for every single position within the team.
Impose a fixed deadline
The most successful projects end on a hard cutoff point, where an extension to the timeline would incur massive regulatory penalties or unwanted licensing costs. But make sure you constantly defer the uplift work in favor of other “high priority” business-as-usual features, and only start the migration project as close to the fixed deadline as possible.
You want to aim for the maximum number of changes in the shortest possible time frame. Doing so means the business can also get more of what it wants before technology interferes with the pesky work of ensuring the system can adequately support the new features.
Don’t learn anything
Bury your head in the sand and don’t acknowledge any delivery problems. Don’t track any metrics, so the actual cost of issues and inefficiencies stays unknown. Always look forward, never backward – don’t analyze what went wrong in previous iterations or across the entire project, so nothing ever has to change for the next one.
Better yet, move on to a new position, so there are no consequences to your project’s failings. Or, at the very least, if anyone does raise any issues, it’ll be the problem of whoever your replacement ends up being!
Conclusion 🥴😵‍💫😵🤯
Hopefully, this facetious look at how NOT to operate software delivery highlights any anti-patterns that may be occurring within your projects. It is easy to step back and laugh at how preposterous the above issues are – but they’re not so funny when you’re experiencing them and their consequences every day.
Identify and correct problems early
If several of the above points are recognizable in your project, understand that serious work is required to fix things. But keep the faith that fixing these issues will lead to a much more streamlined and efficient software delivery process that your teams will actually be happy to work with. Ideally, anyone on the team can call out such anti-patterns and have work scheduled to correct the course as early as possible. The sooner a problem is identified and addressed, the cheaper and simpler the fix is.
Plus, it’s easier to fix issues incrementally over time rather than attempting a big-bang approach after years of neglect. Having a continuous stream of measurable improvements will also help with morale, as the team can see that some progress is happening over time. Hopefully, such visibility will encourage them to attempt more improvements until things snowball and all major issues are dealt with.
Learn from mistakes and don’t repeat them
Decision-makers also need to learn from their project’s issues and understand the true cost of deferring or ignoring any problems. Issues will stop being swept under the rug and instead be fixed proactively once dollar figures become visible for all the time wasted dealing with preventable problems after the fact. Or, at the very least, such cost visibility can hopefully break the cycle in future projects.
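As a rough illustration of how that kind of dollar figure might be surfaced, even a back-of-the-envelope calculation is often enough to get a decision-maker’s attention. Every number below is a made-up placeholder, not data from the article:

```python
# Back-of-the-envelope sketch: making the cost of preventable rework visible.
# Every figure below is a hypothetical placeholder, not data from the article.
hours_lost_per_incident = 6    # average engineer-hours burned per preventable issue
incidents_per_month = 10       # preventable issues hitting the delivery teams each month
loaded_hourly_rate = 120       # fully loaded cost per engineer-hour, in dollars

monthly_cost = hours_lost_per_incident * incidents_per_month * loaded_hourly_rate
print(f"Preventable rework costs roughly ${monthly_cost:,.0f} per month, "
      f"or ${monthly_cost * 12:,.0f} per year.")
```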
Enterprise software delivery is complex enough, and technology leadership should be working continuously to fix issues and simplify their teams’ delivery processes. Decision-makers should aim to remove as many roadblocks as possible and prevent minor issues from developing into major concerns that negatively impact their next critical delivery project.