IBM Sustainability Software - Ideas Portal


Status Planned for future release
Created by Guest
Created on Jun 11, 2025

Manage 9.0 Significant Performance Issues for SQL Server customers during upgrade

As part of the upgrade to Manage 9.0, Maximo changed the data types used on SQL Server platforms from datetime to datetime2, text to varchar(max), and so on. While these changes are positive, their impact during the upgrade has been extreme. The updatedb step (whether coming from 7.6.1.3 or Manage 8.7/MAS 8.11) has run for days in some larger customer environments, and the tempdb and log drives have had to be significantly extended or the updatedb fails.

For each datetime column, Maximo finds any indexes that reference it (the data type cannot be changed while an index references the column), drops those index(es), then alters the table, for example:

2305050 ms    alter table asset alter column eq23 datetime2 null;
-- Success.

2018871 ms    alter table asset alter column estendoflife datetime2 null;

Each of these alters runs for an extended period on larger tables. The two examples above are a snippet from one of our upgrades, where each column on the table took over 2,000 seconds to alter (more than 33 minutes each). And there are many more date fields; a single table sometimes has 40+ of them.

Then after all the columns have been altered, the indexes are recreated. 
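In SQL terms, the per-table pattern updatedb follows looks roughly like this (a simplified sketch; the index name asset_ndx1 and the indexed column are illustrative, not taken from an actual upgrade log):

```sql
-- 1. Drop any Maximo-metadata indexes that reference the datetime columns
drop index asset_ndx1 on asset;                          -- illustrative index name

-- 2. Alter each datetime column individually (this is the slow part)
alter table asset alter column eq23 datetime2 null;
alter table asset alter column estendoflife datetime2 null;

-- 3. Recreate the dropped indexes once all columns are converted
create index asset_ndx1 on asset (eq23);
```

Every ALTER in step 2 rewrites the column for the whole table, which is why the duration and tempdb/log pressure scale with both row count and the number of date columns.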

In addition to the duration and tempdb/log utilization issues, these alters will sometimes fail when indexes have been created outside of Maximo. Because the updatedb process only considers indexes that exist in Maximo metadata, any index on the column that is not in the Maximo metadata will cause the alter to fail. While this is not a supported configuration, it happens frequently.
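One way to catch this before the upgrade is to compare the SQL Server catalog against the Maximo metadata. This is a hypothetical sketch: it assumes Maximo's index metadata is in maxsysindexes with an upper-cased name column, which may need adjusting for your environment:

```sql
-- List indexes present in SQL Server but unknown to Maximo metadata
select t.name as tablename, i.name as indexname
from sys.indexes i
join sys.tables t on t.object_id = i.object_id
where i.name is not null                                 -- skip heaps
  and not exists (select 1 from maxsysindexes m
                  where upper(m.name) = upper(i.name))
order by t.name, i.name;
```

Dropping these indexes (or registering them in Maximo) ahead of time avoids ALTER failures partway through updatedb.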


Maximo has a process to rebuild tables during configdb by setting the maxobjectcfg changed field to 'M'. What we've seen on these larger tables is that, rather than letting updatedb alter columns individually, a full rebuild of the table using configdb takes a fraction of the time and drastically reduces tempdb and log utilization. As a specific example, a customer's updatedb had been running for 34 hours before we killed the process. By rebuilding the largest tables (WORKORDER, WOSTATUS, ESCSTATUS, etc.) in advance of the updatedb, the entire process (configdb and updatedb) completed in under 8 hours on the same database server.

There are some limitations we noticed with the configdb process:
1) It can't be used on audit tables.
2) Some data type conversions are not supported: the restorefrombackup step that moves the data back into the rebuilt table fails when it attempts these conversions.

We currently run a query like this to generate the update statements:

select 'update maxobjectcfg set changed=''M'' where objectname='''+upper(name)+''';'
from sys.tables
where exists (select 1 from sys.columns                    -- has at least one datetime column
              where object_id=tables.object_id
                and type_name(columns.user_type_id)='datetime')
  and not exists (select 1 from sys.columns                -- no legacy types that break restorefrombackup
                  where object_id=tables.object_id
                    and type_name(columns.user_type_id) in ('text','varbinary','image','binary'))
  and 500000 <= (select sum(rows) from sys.partitions      -- larger tables only
                 where partitions.object_id=tables.object_id)
  and exists (select 1 from maxtable                       -- Maximo table, not an audit table
              where tablename=upper(name) and isaudittable=0)
order by name;
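The generated statements simply flag each qualifying table for a full rebuild; for example, using one of the large tables mentioned above:

```sql
update maxobjectcfg set changed='M' where objectname='WORKORDER';
-- Running configdb afterwards rebuilds WORKORDER in a single pass,
-- instead of updatedb altering each datetime column individually.
```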


This identifies larger tables (at least 500,000 rows) that contain datetime columns but none of the other legacy types that are problematic with restorefrombackup. That said, I believe these legacy formats could migrate cleanly into the new data types, so it would be better if restorefrombackup supported them.

We also manually migrate data out of the audit tables and truncate them to address audit table performance, since we can't use the full rebuild process on them.

This affects every SQL Server upgrade, but is most severe for larger databases (>100 GB).

Idea priority Urgent
Needed By Yesterday (Let's go already!)
  • Guest
    Jun 13, 2025

    I have faced the exact same issue. What I did was run the ALTER command directly on the database for each DATETIME attribute before starting the upgrade. I had to drop all the indexes/statistics/views related to those attributes in advance.

    I executed multiple ALTER statements in parallel for different tables to reduce the time; the UPDATEDB process does them one by one, which is why UPDATEDB takes so much longer.

    Once the ALTERs completed manually, I started the upgrade process from 7.6 to 9.0, and it completed in 3-4 hours.
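    A statement generator for this manual approach might look like the following. This is a sketch, not the commenter's actual script; it preserves nullability but does not handle the index/statistics/view drops mentioned above:

```sql
-- Emit one ALTER per remaining legacy datetime column
select 'alter table '+t.name+' alter column '+c.name+' datetime2 '
       + case when c.is_nullable=1 then 'null' else 'not null' end + ';'
from sys.columns c
join sys.tables  t on t.object_id = c.object_id
where type_name(c.user_type_id) = 'datetime'
order by t.name, c.name;
```

    The emitted statements can then be split into batches and run in parallel across different tables, as described above.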


    This definitely impacts the upgrade time significantly. IBM must provide some alternative for handling this scenario.


    Note: it took 2-3 days for me on a 500 GB database, and the same thing happens for DATE attributes as well.