Discussion: running OS/VS COBOL in CICS 5.3 with DB2
Rob Schramm
2018-01-10 15:21:35 UTC
Does anyone have experience with the MacKinney product that allows OS/VS COBOL
to be run in a new CICS 5.x region?

Thanks,
Rob Schramm
--
Rob Schramm

Pommier, Rex
2018-01-10 16:03:58 UTC
Rob,

I can't comment on the OS/VS COBOL product, but I can give glowing reviews of other MacKinney products. We run MLI, and both the product and the support have been stellar. If their support for the COBOL product is on a par with the MLI support, you will have no issues with it.

Rex

Timothy Sipples
2018-01-11 12:12:51 UTC
A couple of quick comments from me:

1. IBM eliminated Single Version Charge (SVC) time limits. If, for example,
you have an OS/VS COBOL application that's still lagging behind, you could
keep a CICS Transaction Server 2.3 AOR running it until you can get it
pulled forward, and surround that laggard AOR with CICS Transaction Server
5.3 (or better yet 5.4) regions for everything else you run. CICS TS 2.3,
the last release that supported OS/VS COBOL applications, has reached End
of Service, of course. So has OS/VS COBOL for that matter. However, CICS
has long supported freely intermixing interoperating releases, except where
specifically documented otherwise, and there shouldn't be any financial
obstacle to doing that now.

2. It's nearly certain you're taking a longer path length through the OS/VS
COBOL execution than you would with an optimized Enterprise COBOL Version 6
alternative. Try to make the trek if you can, as soon as you can. There's
considerable reward in that, especially if this OS/VS COBOL code is either
contributing to your monthly peak 4HRA utilization, or elongating your batch
execution time when batch is your peak demand driver. (A rough illustration
of the 4HRA point follows below.)

3. If the problem is that you lost the source code, reasonable source
recovery might be possible. (There are some previous discussions about
that.)
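
As a rough illustration of the 4HRA point, here's a minimal sketch in plain
Python with invented MSU numbers and interval lengths (not anything SCRT or
RMF actually produces): shaving CPU inside the peak four-hour window is what
moves the monthly peak rolling average.

# Rough illustration only -- hypothetical MSU figures, not SCRT/RMF output.
from collections import deque

def peak_4hra(msu_per_interval, interval_minutes=5):
    """Peak rolling four-hour average (4HRA) of MSU consumption."""
    window_len = (4 * 60) // interval_minutes          # 48 five-minute intervals
    window, peak = deque(maxlen=window_len), 0.0
    for msu in msu_per_interval:
        window.append(msu)
        if len(window) == window_len:                  # only complete 4-hour windows
            peak = max(peak, sum(window) / window_len)
    return peak

# A made-up day: 300 MSU background plus a four-hour online peak at 450 MSU.
day = [300.0] * 120 + [450.0] * 48 + [300.0] * 120
print(peak_4hra(day))        # ~450: the peak window sets the sub-capacity metric

# Shave ~10% off whatever runs inside that window (e.g. cheaper compiled code)
# and the monthly peak 4HRA -- the billing metric -- drops with it.
day_tuned = [300.0] * 120 + [405.0] * 48 + [300.0] * 120
print(peak_4hra(day_tuned))  # ~405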

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z and LinuxONE, AP/GCG/MEA
E-Mail: ***@sg.ibm.com

Rob Schramm
2018-01-17 14:17:38 UTC
Tim,

I am looking for the QDM (quick and dirty method) to pull a shop onto a
more supported OS. There is no appetite for reworking code to accommodate
the new COBOL. If we were actually able to run on something newer, there
might be a path for a small subset of apps to go through an actual conversion.

Rob
--
Rob Schramm

Timothy Sipples
2018-01-11 12:43:41 UTC
Post by Jesse 1 Robinson
Losing XCF connection to a sysplex member would be a whole
nother level of impact that I've never been willing to sign
up for even though our network today is far more reliable
than it was 20 years ago.
Isn't losing XCF connectivity something worth planning for? It's rare, but
I suppose it could happen no matter what the distance.

Isn't it always best to weigh various risks, sometimes competing ones, and
try to get as much overall risk reduction as you can? You're in southern
California, and there are earthquakes and fires there, I've noticed. (Maybe
plagues of locusts next? :-)) One would think there's some extra California
value in awarding an extra point or two to distance there. Japan's 2011
Tōhoku earthquake and tsunami triggered some business continuity rethinking
there, and it has altered some decisions about data center locations,
distances, and deployment patterns. The risk profile can change. And, as
you mentioned, networks have improved a lot in 20 years while the risks
California faces seem to be somewhat different. It's always worth
revisiting past risk calculations when there's some material change in the
parameters -- "marking to market."

If losing XCF connectivity would be that devastating, why have XCF links
(and a Parallel Sysplex) at all? It is technically possible to eliminate
those links. You just might not like the alternative. :-)

You're also allowed to do "some of both." You can stretch a Parallel
Sysplex and run certain workloads across the stretch, while at the same
time you can have a non-stretched Parallel Sysplex and run other workloads
non-stretched. That sort of deployment configuration is technically
possible, and conceptually it's not a huge leap from the classic remote
tape library deployments.

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z and LinuxONE, AP/GCG/MEA
E-Mail: ***@sg.ibm.com

Mike Schwab
2018-01-11 15:21:20 UTC
One company had data centers in Miami and New Orleans. Miami shut
down for a hurricane, and wasn't back up before Katrina hit New
Orleans.
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Rob Schramm
2018-01-11 17:00:17 UTC
SFM, and planning for what your surviving system should do, should always be
done. And yes, early on there was a failure of one of the two dark fiber
connections, and the sysplex timers were not connected properly to allow for
continued service.

Planning planning planning.
--
Rob Schramm

retired mainframer
2018-01-11 17:18:43 UTC
Post by Rob Schramm
Planning planning planning.
To which you should add testing testing testing.

And once the developers of the plan have succeeded in making it work, it should be tested again with many of the least experienced people in the organization. Murphy will guarantee that they will be the only ones available when it really hits the fan. (It is amazing how differently a pro and a rookie read the same set of instructions.)

Jesse 1 Robinson
2018-01-11 20:43:02 UTC
To clarify. We have *no* XCF connection between primary and backup data centers. All DASD is mirrored continuously via XRC, but the DR LPARs are 'cold'. They get IPLed only on demand: for (frequent) testing and for (godforbid) actual failover.

When we got into serious DR in the 90s, channel technology was ESCON, and network technology was ISV CNT. Parallel sysplex synchronization was governed by external timers (9037). When we started with parallel sysplex, loss of the timer connection would kill the member that experienced it first. Then IBM introduced a change whereby the entire sysplex would go down on timer loss. This technology did not bode well for running a single sysplex over 100+ km. Network connectivity was far too flaky to bet the farm on. Now we have FICON over DWDM. Way more reliable, but sysplex timing would still be an issue AFAIK.

In our actual sysplexes (prod and DR), boxes are literally feet apart connected by physical cables du jour. I cannot recall a complete loss of XCF connectivity ever in this configuration. I'm still not clear on how a 'geographically dispersed sysplex' (original definition, not 'GDPS') would work. Critical data sets must be shared by all members. One of each set of mirrored pairs must be chosen as 'The Guy' that everyone uses. If The Guy suddenly loses connection to the other site--i.e. site disaster--how will the surviving member(s) at the other site continue running without interruption? If there is an interruption that requires some reconfig and IPL(s), then what's the point of running this way in the first place?

We commit to a four-hour recovery (including user validation) with data currency within seconds of the disaster.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Peter Hunkeler
2018-01-16 07:47:51 UTC
Post by Jesse 1 Robinson
I'm still not clear on how a 'geographically dispersed sysplex' (original definition, not 'GDPS') would work.
You say "original definition". I seem to remember, but might be wrong, that the term GDPS was coined when sysplexes were al contained within a single building or in buildings near by. GDPS was taking sysplexes with members in data centers up to a few kilometers apart. Apart from the longer distance between members, they were sysplexes as usual. No XRC involved.

--Peter Hunkeler



Alan Watthey, GMAIL
2018-01-14 05:53:52 UTC
Kees,

It all helps and it's always nice to know others are doing it successfully.
Your comments on SMCFSD are particularly interesting (you say you don't use it
at all), as I'm sure some cleaning up is possible there.

I read somewhere that IBM expect nearly all requests to become asynchronous
eventually. Processors are becoming quicker whereas the speed of light
isn't, so the heuristic algorithm used will deem spinning too costly for
shorter and shorter distances.
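
To put rough numbers on that, here is a back-of-the-envelope sketch in plain
Python; the ~5 microseconds per km figure is the usual nominal one-way latency
of light in fibre, and real links add DWDM and switching overhead on top.

# Back-of-the-envelope only: nominal ~5 microseconds per km one way in fibre
# (light in glass travels at roughly 200,000 km/s); link overhead ignored.

def cf_round_trip_us(distance_km, us_per_km_one_way=5.0):
    return 2 * distance_km * us_per_km_one_way

for km in (0.1, 5, 16, 18, 100):
    print(f"{km:>5} km  ->  ~{cf_round_trip_us(km):7.1f} microseconds round trip")

# At 16-18 km the fibre alone costs ~160-180 microseconds per request. A CP that
# spins synchronously for that long wastes ever more cycles as processors get
# faster while the speed of light stays put -- which is why the heuristic
# converts distant sync CF requests to async, and why the break-even distance
# keeps shrinking.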

Groetjes,
Alan


-----Original Message-----
From: Vernooij, Kees (ITOPT1) - KLM [mailto:***@KLM.COM]
Sent: 11 January 2018 11:03 am
Subject: Re: SYSPLEX distance

If this helps:
We run a parallel sysplex with sites at 16 - 18 km (2 separate routes with
some difference in distance) with active systems and CFs at both sites,
without problems.
Most Sync CF Requests to the Remote CFs are converted to Async.
To minimize the Async/Remote CF delays, we configure structures over the CFs
in such a way that the most busy or most important structures are in the
busiest or the most important site.
We do not use System Managed Coupling Facility Structure Duplexing. All our
applications are able to recover their structures well.
SMCFSD's inter-CF communication would add a number of elongated delays to
each CF update request. The advantage of SMCFSD is that each site has a copy
of the structure, and the intelligence can choose the nearest (= fastest) CF for
read requests.

Kees.

Parwez Hamid
2018-01-14 08:05:46 UTC
Abstract: Asynchronous CF Lock Duplexing is a new enhancement to IBM’s parallel sysplex technology that was made generally available in October 2016. It is an alternative to the synchronous system managed duplexing of coupling facility (CF) lock structures that has been available for many years. The new Asynchronous CF Lock Duplexing feature was designed to be a viable alternative to synchronous system managed duplexing. The goal was to provide the benefits of lock duplexing without the high performance penalty. It eliminates the synchronous mirroring between the CFs to keep the primary and secondary structures in sync.

https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102720

Jesse 1 Robinson
2018-01-16 17:39:35 UTC
My recollection is that the term 'GDPS' was coined at a time when IBM had the *ambition* to run a single sysplex with members at a considerable distance apart. That ambition was too optimistic for the technology of the day, so 'GDPS' was redefined. A remnant of that shift is the difficulty of finding an actual spelling out of the acronym in GDPS doc.

GDPS as presented to my shop around Y2K had morphed into a service offering (not a 'product') for managing a sysplex and simplifying recovery of it elsewhere. That's how we use it. Whatever the supporting technology, DASD mirroring is key to GDPS. We actually implemented mirroring (XRC) before we obtained GDPS, which greatly simplified what had previously been RYO procedures.

I've asked this question earlier in this thread. If you have a truly 'dispersed' sysplex with XCF functioning properly over a great distance, how do you survive the total loss of one glass house? At any given time, all members of the sysplex must be using one copy of DASD or another. As long as all remains sweetness and light, it doesn't much matter where the active copy resides and where the mirrored copy. But loss of one side implies that only one copy of the DASD remains accessible. How do the surviving sysplex members continue running seamlessly when the DASD farm suddenly changes? Of course you can re-IPL the surviving members and carry on. But that's basically what we do with 'cold' members. What is the advantage of running hot sysplex members in the remote site?

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Glenn Miller
2018-01-17 01:18:22 UTC
Hi Skip,
One possible option for the survivability of a 24x7 z/OS environment would be to place the Primary DASD Control Unit(s) at a different site (a 3rd site) from the Mainframe CECs. Then, if one or the other Mainframe CEC "glass house" is unusable, the other (in theory) continues to operate. Also, if you happen to be really lucky, the site for the Primary DASD Control Unit(s) would be at or near "halfway" between the 2 Mainframe CEC sites. Take that another step further and place standalone Coupling Facilities at that 3rd site as well.

Of course, the Primary DASD Control Unit(s) are still a single point of failure. So, extend this "design" to a 4th site and place duplicate DASD Control Unit(s) at that 4th site. This is where GDPS/HyperSwap would provide near-immediate "switchover" from one DASD site to the other.
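
For what it's worth, here is a toy sketch in plain Python (invented volume and
site names, nothing like the real z/OS internals) of the idea that makes such a
switchover transparent: applications keep using stable device handles while
HyperSwap changes which physical copy those handles point at.

# Toy model of the HyperSwap idea only -- not real z/OS internals. Each "device"
# the applications use is a stable handle (think UCB) whose active physical
# target can be swapped from primary to secondary storage without the
# applications being re-IPLed or even noticing.

class MirroredVolume:
    def __init__(self, volser, primary, secondary):
        self.volser = volser
        self.primary, self.secondary = primary, secondary
        self.active = primary                      # all I/O goes here today

    def hyperswap(self):
        """Redirect I/O to the mirrored copy; the handle itself never changes."""
        self.active = self.secondary

volumes = [MirroredVolume("PRD001", "DASD@site3", "DASD@site4"),
           MirroredVolume("PRD002", "DASD@site3", "DASD@site4")]

# Primary DASD site lost: swap every volume; running work keeps the same volsers.
for vol in volumes:
    vol.hyperswap()
print([(v.volser, v.active) for v in volumes])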

This is just one possible option.

Glenn
