Deployment without downtime?

Downtime is costly to the business. As developers, avoiding it benefits us both in terms of efficiency and personal well-being. For example, if I can make changes to a shared environment without downtime, I get my freedom back: I don't have to ask for an outage window or wait until late at night to do it.

With the introduction of Automation Script, most of the business logic and front-end changes we need to push to production nowadays can be done without downtime. Some of them are:

  • Automation Script
  • Escalation
  • Application Design
  • Conditions
  • Workflows

However, Database Configuration changes still need Admin Mode or a restart. 

In recent years, many of us have switched to DBC scripts to deploy changes. Although this approach takes more time to prepare than other methods, such as using Migration Manager or doing it by hand, it has proven very reliable and allows faster deployment with much less risk.

Many of us have probably realised that, for small changes, we can run a DBC script directly while the system is live, but we still need a quick restart afterwards. It doesn't matter whether it's a small environment that takes 5 minutes to restart or a massive cluster that needs 30 minutes: a restart is downtime, and any deployment that involves downtime is treated differently, with days or weeks of planning and rounds of approval and review.

For development, a colleague showed me a trick: instead of a restart, we can simply turn Admin Mode on and then off again. As part of this process, the Maximo cache is refreshed and the changes take effect. This works quite well in some cases. However, it is still downtime and can't be used for Production; on a big cluster, turning on Admin Mode often takes more time than a restart.

Another colleague hinted at a different method, and this is what I ended up with. I have been using it for a while now and can report that it is quite useful. Not only has my productivity improved, it has also proven valuable the few times I didn't have to approach the cloud vendor to ask for downtime or a restart.

The approach is very simple: when I have a change that requires a restart, I script it using DBC. If the change is small, I can get away with Update/Insert SQL statements written directly against the configuration tables, such as:

  • MAXATTRIBUTE/MAXATTRIBUTECFG
  • MAXOBJECT/MAXOBJECTCFG
  • SYNONYMDOMAIN
  • MAXLOOKUPMAP
  • Etc.

Next, I create a "super complex" automation script with no launch point, shown below.
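Here is a minimal sketch of the idea, written as a Jython automation script with no launch point. The cache names and the two-argument reloadMaximoCache(name, refreshAllServers) signature are my reading of the MXServer API; verify them against the Javadoc for your Maximo version before relying on this.

    # Jython automation script, no launch point.
    # Reloads the Maximo caches so that configuration written directly to the
    # database takes effect without a restart or Admin Mode.
    from psdi.server import MXServer

    mxServer = MXServer.getMXServer()

    # The second argument set to True asks Maximo to refresh the cache on
    # every server in the cluster, not just the one running this script.
    mxServer.reloadMaximoCache("MAXIMODD", True)   # data dictionary (objects/attributes)
    mxServer.reloadMaximoCache("MAXVARS", True)    # MAXVARS settings

After running the DBC script or SQL, triggering this script refreshes the cached metadata, and the change becomes visible without a restart.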
Please note, this is not a bulletproof approach officially recommended by IBM. As such, if you use it for Production, make sure you understand the change and its impact. I only use it for small changes in areas where there is little or no risk of users writing data while the change is being applied. For a major deployment, for example a change to the WORKORDER table, applying it during business hours is a bad idea. For non-production environments, I don't see much risk involved.

A man who doesn't work at night is a happy man.


UPDATE
  • This works well for clustered environments. The "True" parameter passed to the function means Refresh All Servers in the cluster. If you want to refresh only one server, set it to False and run the script from that specific server by accessing the script via the server's own 908x port (see the sketch below).
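For illustration, here is a hypothetical way to call such a script against one specific cluster member from outside Maximo, assuming your version exposes automation scripts through the /maximo/oslc/script/{scriptname} endpoint with native MAXAUTH authentication. The host, port, credentials and the script name NODOWNTIME_REFRESH are made up for this example.

    # Hypothetical sketch: invoke the no-launch-point script on one specific
    # server by calling that server's own 908x port instead of the load balancer.
    # Endpoint, header, script name and port are assumptions - adjust to your setup.
    import base64
    import requests

    server = "https://mxnode01.example.com:9080"   # one specific cluster member
    auth = base64.b64encode(b"maxadmin:password").decode()

    resp = requests.get(
        server + "/maximo/oslc/script/NODOWNTIME_REFRESH",
        headers={"MAXAUTH": auth},
        verify=False,  # only for internal/dev use
    )
    print(resp.status_code, resp.text)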

