Recently I was involved in a project where we needed to upgrade Maximo to the latest version and install several big add-ons, including SP, HSE (the Oil & Gas add-on), and Spatial (which is the same as Utilities). This was the first time I had seen a system with that many add-ons installed. The database in use is SQL Server.
Although we followed all the recommended steps for the upgrade, such as running integrity checks before and after the upgrade, updating statistics, and rebuilding all indexes, the system was still a lot slower after the upgrade than before. To analyze the problem, I tested and compared the performance of several different queries on the WORKORDER and TICKET tables, some intended to use indexes and some intended to force a full table scan. For the queries that used indexes, the performance gap was smaller (the upgraded version was 2-3 times slower); the queries that required a full table scan were significantly slower (10-20 times slower).
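To give a sense of what that comparison looked like, here is a rough T-SQL sketch (not the exact queries we ran). It assumes the standard Maximo WORKORDER columns WONUM, STATUS and DESCRIPTION; SET STATISTICS IO reports the page reads for each statement, which is where the gap between an indexed lookup and a full scan shows up.

    -- Rough sketch of the comparison, not the exact queries we ran.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Indexed access: WONUM is normally indexed in Maximo,
    -- so this should resolve with an index seek and few page reads.
    SELECT wonum, status
    FROM   dbo.workorder
    WHERE  wonum = '1001';

    -- Full scan: a leading-wildcard search on DESCRIPTION cannot use an
    -- index seek, so SQL Server has to read every page of the table.
    SELECT wonum, status
    FROM   dbo.workorder
    WHERE  description LIKE '%pump%';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;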
Based on that result, we concluded that the slowdown was caused by some sort of I/O bottleneck. With the help of the DBA, we defragmented all the tables in Maximo, which restored the system's performance to a level similar to what we had before the upgrade.
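For reference, the defragmentation pass essentially amounts to rebuilding the indexes on every table. Below is a minimal sketch of how that could be scripted; it is not the exact script our DBA used, and it assumes the tables have clustered indexes (a heap would need ALTER TABLE ... REBUILD instead). Something like this should only be run in a maintenance window, ideally after testing the runtime on a copy of the database.

    -- Minimal sketch: rebuild every index on every user table in the
    -- current (Maximo) database. Rebuilding also refreshes statistics
    -- on the rebuilt indexes.
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql = @sql
        + N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
        + N' REBUILD;' + CHAR(10)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    EXEC sys.sp_executesql @sql;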
Since I'm not super experienced with SQL Server, I never knew this was something we could do to Maximo tables. My take is that the upgrade and the installation of several large add-ons executed thousands of insert/update statements, which caused significant fragmentation of the data stored on disk, so the database had to perform far more I/O operations to retrieve the same data.
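If you want to confirm that fragmentation is actually the problem before rebuilding anything, SQL Server exposes it through sys.dm_db_index_physical_stats. A query like the one below, run inside the Maximo database, lists the most fragmented indexes; the usual rule of thumb is to REORGANIZE between roughly 5% and 30% fragmentation and REBUILD above that.

    -- List the most fragmented indexes in the current database.
    SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
            i.name                              AS index_name,
            ips.avg_fragmentation_in_percent,
            ips.page_count
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE   ips.page_count > 1000               -- ignore tiny tables
    ORDER BY ips.avg_fragmentation_in_percent DESC;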
Another interesting thing I learned from this project is how long the UPDATEDB process takes. After trying and implementing several tweaks, the whole process still took more than 5 hours, which was a real problem because that downtime window is far too long for the business to accept. The DBA then rebuilt the indexes and defragmented the tables, and after that UPDATEDB took only 3 hours to complete.
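If a full index rebuild does not fit in the downtime window, a lighter option is to refresh statistics only: sp_updatestats updates statistics on every table that has had modifications since the last statistics update, and stale statistics alone can make the optimizer pick bad plans during UPDATEDB. The database name below is just a placeholder for your Maximo database.

    USE maximo;   -- placeholder database name, substitute your own
    GO
    EXEC sys.sp_updatestats;
    GO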
A lot to learn, and a lot of fun, working on projects like this.