Large Data Structure and Program Dumps

I am using a vendor-supplied SDK for a REST service. To use the SDK, I pass several Data Structures. Some are Array Data Structures with arrays as subfields, some are just Array Data Structures, and some are plain Data Structures. For example, one Array Data Structure is defined with Dim(3750) and has several subfields defined with Dim(10). Another is simply an Array Data Structure with Dim(3750). As a visual, it looks like this:

D DS1                     Dim(3750)
D  subfield1      10A     Dim(10)
D  subfield2       7  4   Dim(10)

D DS2                     Dim(3750)
D  subfield1      40A
D  subfield2      15  4

There are many more subfields; I am just trying to provide a brief visual. We have subprocedures whose parameters mirror DS1 and DS2. I wrote a Service Program to act as a front end to the vendor's SDK, with parameters defined likeds(DS1) and likeds(DS2), so I can pass back what the SDK returns without a lot of extra coding. Another developer wrote subprocedures, kept in a copybook, that parse what is returned in the Data Structures in order to feed information to our ERP package. Again, the parameters to those subprocedures are defined using likeds.
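For reference, a free-form sketch of how such structures and likeds parameters might be declared (all names here are illustrative, not the real ones, and the actual definitions have many more subfields):

```rpgle
**free
// Illustrative templates mirroring the fixed-form D-specs above.
dcl-ds ds1_t qualified template;
  subfield1 char(10) dim(10);
  subfield2 packed(7:4) dim(10);
end-ds;

dcl-ds ds2_t qualified template;
  subfield1 char(40);
  subfield2 packed(15:4);
end-ds;

// Hypothetical front-end procedure in the Service Program: parameters
// defined with likeds so the SDK result can be passed straight through.
dcl-pr GetVendorData;
  outDs1 likeds(ds1_t) dim(3750);
  outDs2 likeds(ds2_t) dim(3750);
end-pr;
```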

The norm for our programs is to produce a program dump when something goes awry. This is the default behavior of the vendor's ERP, and since we modify some of their programs and adopt some of their standards, it has become our norm as well. Since adding code for this new REST service to our own custom programs and to modified versions of the ERP vendor's programs, a failure takes forever to produce a program dump, and we usually have to answer a message saying the program dump has reached the maximum spooled file pages. Usually we just answer NOMAX and move on.

Hopefully that is enough background; now to my issue. We now get program dumps that can run to 9,000+ pages after the message is answered, which I assume is due to all the large Array Data Structures in our various subprocedures. We are currently in test mode, and I am trying to come up with a solution to address the large program dumps. Some of the programs this REST service was added to are time sensitive: if one sits in MSGW for a while, it delays the jobs waiting behind it, we get a snowball effect, and I, or someone on my team, gets a call in the middle of the night. Or it's an interactive job that takes forever to end because it is writing out a 5,000-page program dump, and the user gets impatient and closes the session instead of waiting. Either way, someone will be asking us to fix it, quickly. Any thoughts on how I can solve this issue?

1 answer

  • answered 2018-04-14 14:18 Charles

    IMO, if program dumps are routine enough to cause problems, then you've got other, more serious problems.

    If you're still relying on a job going to MSGW and being answered manually, you've got yet another problem.

    Your program, particularly a web service program, should gracefully handle any reasonably possible error.
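    For instance, a MONITOR group around the SDK call lets the program recover in place instead of failing into MSGW. A minimal sketch, where CallVendorSdk, request, response, and LogError are hypothetical names:

```rpgle
**free
// Sketch: wrap the vendor SDK call in a MONITOR group so a failure
// is handled here rather than percolating up to an unhandled escape.
monitor;
  CallVendorSdk(request : response);
on-error;
  // Log the failure and return a controlled error to the caller.
  LogError('Vendor SDK call failed, status ' + %char(%status));
endmon;
```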

    Global error handling should take care of everything else, dumping the program, saving the job log and notifying your team.
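    One way to sketch that last-resort handler: DUMP(A) produces the formatted dump on demand (the A extender forces it even without the DEBUG keyword), and the procedure then ends cleanly so the job never sits in MSGW. DoTheRealWork and NotifyTeam are hypothetical names standing in for your logic and your alerting mechanism:

```rpgle
**free
// Sketch of a global catch-all in the entry procedure: one controlled
// dump, a notification, and a clean return instead of MSGW.
dcl-proc ProcessRequest;
  monitor;
    DoTheRealWork();       // hypothetical: the actual program logic
  on-error;
    dump(a);               // single controlled program dump, here only
    NotifyTeam(%status);   // hypothetical: message queue, e-mail, etc.
  endmon;
  return;
end-proc;
```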

    Read through Chapter 7, "Exception and error handling", in the IBM Redbook Who Knew You Could Do That with RPG IV?