Excessive temporary storage while updating CLOB fields via SQLRPGLE program
Posted: Wed Jan 01, 2025 7:25 pm
Summary of the Issue:
A batch job running under the HTTP server on IBM i, which processes database updates to CLOB columns using the JSON_UPDATE SQL function, is consuming an excessive amount of temporary storage (close to 8 GB).
Explanation of the process:
We have designed a new database file with a CLOB field intended to store JSON data.
The service that updates the JSON elements in the CLOB field is coded in SQLRPGLE and plugged into the end of an IBM i job running under the IBM i HTTP server, which processes REST API calls from online applications.
On average, each API call involves updating around 10 to 15 fields within the individual JSON elements stored in the CLOB field.
We have implemented this as follows.
Step 1: Read the JSON data from the CLOB column in the new file and store it in a global variable declared as SQLTYPE(CLOB).
Code:
dcl-s jsonData SQLTYPE(CLOB : 75000);
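For completeness, the read itself is along these lines; the object names (JSONTBL, JSONDOC, ID) and the key variable recordId are placeholders for illustration, not our actual names:
Code:
// Sketch of the Step 1 read; JSONTBL, JSONDOC and ID are assumed
// names, not the actual file/field names.
dcl-s recordId packed(9 : 0);   // assumed key value for the row

exec sql
  select JSONDOC
    into :jsonData
    from JSONTBL
   where ID = :recordId;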
Step 2: For every field that needs to be updated within a single API call, use the JSON_UPDATE SQL function to manipulate the SQLTYPE(CLOB) variable.
Code:
exec sql
  values(JSON_UPDATE(:jsonData, 'SET', trim(:path), trim(:newValue)))
    into :jsonData;
Note: We invoke this function repeatedly, once for every field being updated by the API call (see the loop sketch below).
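To make that repetition concrete, the calling pattern is roughly the following; the arrays fieldPaths/fieldValues and the counter fieldCount are illustrative assumptions, not our actual variables:
Code:
// Illustrative per-field loop; fieldPaths, fieldValues and
// fieldCount are assumed names.
dcl-s fieldPaths varchar(128) dim(15);
dcl-s fieldValues varchar(256) dim(15);
dcl-s fieldCount int(10);
dcl-s path varchar(128);
dcl-s newValue varchar(256);
dcl-s i int(10);

for i = 1 to fieldCount;
  path = fieldPaths(i);
  newValue = fieldValues(i);
  // each iteration rewrites the full CLOB through JSON_UPDATE
  exec sql
    values(JSON_UPDATE(:jsonData, 'SET', trim(:path), trim(:newValue)))
      into :jsonData;
endfor;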
Step 3: Once all the fields are updated, we transfer the JSON value from the global variable back to the database with an SQL UPDATE (sketched below).
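For illustration, that final write-back looks roughly like this, again using the placeholder names JSONTBL, JSONDOC and ID:
Code:
// Sketch of the Step 3 write-back; JSONTBL, JSONDOC and ID are
// assumed names.
exec sql
  update JSONTBL
     set JSONDOC = :jsonData
   where ID = :recordId;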
While implementing this approach, we observed that the JSON_UPDATE manipulation causes a sudden spike in the job's temporary storage (it jumps from 450 MB to 3 GB, seemingly at random, during any of the updates). We are measuring this in the temporary storage buckets numbered above 65535, which are meant for job temporary storage.
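For reference, we are reading the bucket sizes with a query along these lines via the QSYS2.SYSTMPSTG service (the qualified job name here is just an example):
Code:
-- Job-scoped temporary storage buckets (numbers above 65535)
-- for one job; the qualified job name is an example.
select BUCKET_NUMBER, BUCKET_CURRENT_SIZE, BUCKET_PEAK_SIZE
  from QSYS2.SYSTMPSTG
 where JOB_NAME = '123456/QTMHHTTP/MYJOB'
   and BUCKET_NUMBER > 65535;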
What I am failing to understand is whether this is caused by the repeated invocation of JSON_UPDATE, whether it could be a memory leak, or whether there is a PTF we should check for.
Any thoughts please.