Discussion:
Setting up a new parallel sysplex
Fred Glenlake
2018-02-05 17:33:12 UTC
I am looking for documentation on how to set up a full parallel sysplex from a "sham-plex". We are running all the usual cast of characters: IMS, DB2, CICS, MQ, etc. We have a basic sysplex in place to be able to qualify for IBM sysplex pricing. The management team has decided we should have a full parallel sysplex so we can do z/OS maintenance, etc. without taking an outage to the client applications. They would also like to reduce the amount of time it takes us to bring up our DR systems. It might mean we will have a physically separate processor in another location/city; that has not been decided yet.

It has been a while since I have worked with setting up a sysplex, so I was hoping someone could direct me to where I might find documentation on setting up a full parallel sysplex with CICSPLEX, IMSPLEX, DB2PLEX, JESPLEX, etc. I have the IBM Redbook on sysplex considerations (all 500+ pages of it) and I am starting to go through it. What I have not found yet is something that would help me determine what to do first, and what I can do ahead of time to prepare or position for the new full parallel sysplex. Perhaps there is a document, manual, or informational APAR that describes how to get from a basic sysplex to a full parallel sysplex.

Many thanks in advance for any pearls of wisdom.

Fred G.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
Burrell, Todd
2018-02-05 17:37:59 UTC
I believe there is a manual that comes with z/OS called "Setting Up a Sysplex". That is probably a good place to start, and I am sure there are numerous SHARE presentations on this as well.

Jesse 1 Robinson
2018-02-05 20:23:07 UTC
Much of what you have to do depends on history. For example, we have one 'bronzeplex' (the parlor term for 'shamplex') that long ago used to contain completely separate systems converted--again long ago--to separate parallel sysplexes. Then along came IBM with the message that we had to create a single sysplex or lose the PSLC discount. So we bolted the sysplexes together without attempting to resolve the myriad conflicts resulting from decades-long isolation: a jillion duplicate names and altogether different access rules and management policies.

If that resembles your history, you have a lot of work to do to integrate everything together. Virtually *all* names of all kinds need to be unique, else you cannot manage the sysplex from a single perspective. That is a *major* effort totally apart from the 'sysplex' aspect.

If OTOH you already have unique names, the merge is a lot simpler. Depending on what you have already, I would suggest starting with a brand new set of couple data sets. The reason: if you have to fall back to the bronzeplex, you don't want to have to recreate what you have now that works. If you are ready to plan these steps, we can (collectively) suggest a sequence.
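
For reference, a fresh set of couple data sets is formatted with the IXCL1DSU utility, and a COUPLExx parmlib member points the systems at them. A minimal sketch only; the sysplex name, data set names, volser, and counts below are placeholders, not values from this thread:

//FMTCDS   EXEC PGM=IXCL1DSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINEDS SYSPLEX(PLEXNEW)
    DSN(SYS1.XCF.CDS01) VOLSER(CDS001)
    MAXSYSTEM(4)
    CATALOG
    DATA TYPE(SYSPLEX)
      ITEM NAME(GRS) NUMBER(1)
      ITEM NAME(GROUP) NUMBER(100)
      ITEM NAME(MEMBER) NUMBER(200)
/*

The matching COUPLExx member would then name the new sysplex and its primary/alternate couple data sets, along the lines of:

COUPLE SYSPLEX(PLEXNEW)
       PCOUPLE(SYS1.XCF.CDS01)
       ACOUPLE(SYS1.XCF.CDS02)
DATA   TYPE(CFRM)
       PCOUPLE(SYS1.XCF.CFRM01)
       ACOUPLE(SYS1.XCF.CFRM02)

The CFRM, LOGR, SFM, and other couple data sets are formatted the same way with their own DATA TYPE sections; "Setting Up a Sysplex" has the full ITEM lists.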

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Allan Staller
Sent: Monday, February 05, 2018 9:41 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: Setting up a new parallel sysplex

Additional info. Do z/OS first followed by subsystems in order of preference.

TEST TEST TEST TEST TEST lest bad things happen.

HTH,

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Allan Staller
Sent: Monday, February 5, 2018 11:37 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: Setting up a new parallel sysplex

MVS "Setting up a SYSPELX" Redbook "merging systems into a sysplex.

Fred Glenlake
2018-02-05 21:09:48 UTC
Thank you Mr. Staller for the manual recommendation. I have downloaded it (another 500+ pages of fun reading), and yes, TEST TEST TEST is definitely the plan.

Thank you Mr. Robinson for your input. I am a relatively new hire (6 months) and I am still discovering where my predecessors hid all the skeletons. However, what I have learned thus far leads me to believe my site has fairly good unique naming conventions and standards, so I am hoping that adding a new LPAR and merging it with the existing production LPAR will not run into duplicate-name issues. If you or anyone else could suggest a sequence of events to follow to get from a "see Spot run"/bronzeplex sysplex to a full parallel sysplex, that would be greatly appreciated. Not messing up what is already in the current couple data sets, and defining new ones to house all the new definitions, would I think be near the very beginning of the process of defining a new parallel sysplex. I also need to make sure we have enough storage available in the coupling facility for the new structures (and DASD for the new couple data sets).

My own plan is to go through the process using our sandbox LPAR first, joining it with another test LPAR, before even considering trying this out with the real production LPAR. (Again, thank you Mr. Staller for the TEST TEST TEST advice; it was already carved in stone in my hard head.)
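
For reference, once the sandbox and test LPARs are joined, a few standard display commands show whether the small sysplex is healthy; nothing here is site-specific:

  D XCF,COUPLE          sysplex name, couple data sets, timing values
  D XCF,SYSPLEX,ALL     member systems and their status
  D XCF,CF              coupling facility connectivity
  D XCF,STRUCTURE       which structures are defined and allocated
  D GRS                 whether GRS is running in STAR mode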

I appreciate your advice and suggestions very much.

Thank you again,

Fred G.

Vernooij, Kees - KLM , ITOPT1
2018-02-06 14:57:27 UTC
In addition to that, if you are implementing System Logger CF log streams: be wary if you read advice about putting more than one log stream in a CF structure. As I understand it, those are old recommendations, and it is often difficult to convert later to one log stream per CF structure.

Kees.
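
The "one log stream per CF structure" choice is made in the LOGR policy: LOGSNUM on the structure definition caps how many log streams can map to that structure. A rough IXCMIAPU sketch, with invented names and sizes:

//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  DEFINE STRUCTURE NAME(LOG_TEST_001)
         LOGSNUM(1)
         MAXBUFSIZE(65532)
         AVGBUFSIZE(4096)
  DEFINE LOGSTREAM NAME(TEST.SYSTEM.LOG01)
         STRUCTNAME(LOG_TEST_001)
         HLQ(LOGR)
         LS_SIZE(4096)
         HIGHOFFLOAD(80) LOWOFFLOAD(20)
         STG_DUPLEX(NO)
/*

Defining new structures with LOGSNUM(1) from the start avoids the awkward conversion later.
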
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Allan Staller
Sent: 06 February, 2018 14:46
Subject: Re: Setting up a new parallel sysplex
I would attempt to make any decisions regarding the CFRM policy first. This is the one act with the most potential for "harm". Decide which structures you will want in the CF (GRS, JES, ...) and use CFSIZER to obtain sizing estimates. Decide on shared spool vs. a multi-JESplex. HSM is of particular interest since it requires reconfiguration to run properly in a multi-image environment.
Alter the CF LPAR(s) accordingly and then generate a new CFRM policy. Then proceed with "sysplexification" of each component.
This would also be a great time to review "new" features introduced in earlier releases of z/OS for inclusion.
HTH,
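
For reference, the CFRM policy itself is defined with IXCMIAPU and activated with SETXCF. A bare-bones sketch, with every value a placeholder: the CF identification (TYPE/MFG/PLANT/SEQUENCE/PARTITION) comes from the real CF LPAR, and structure sizes should come from CFSIZER rather than being guessed:

//DEFCFRM  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(CFRMPOL1) REPLACE(YES)
    CF NAME(CF01)
       TYPE(002964) MFG(IBM) PLANT(02)
       SEQUENCE(000000012345)
       PARTITION(0E) CPCID(00)
       DUMPSPACE(2000)
    STRUCTURE NAME(ISGLOCK)
       SIZE(33792)
       PREFLIST(CF01)
    STRUCTURE NAME(IXC_DEFAULT_1)
       SIZE(32768)
       PREFLIST(CF01)
/*

followed by something like SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRMPOL1 once the CFRM couple data sets are in place.
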
fred glenlake
2018-02-06 16:34:52 UTC
Hi Kees and Allan,

Thank you both for your suggestions. I greatly appreciate all the input I am receiving as I get through the "Insomnia Cure"....I mean the IBM Red Book on Setting up a Sysplex. I will definitely include your input in the notes I am making as I read through the book.

Fred G.

Jesse 1 Robinson
2018-02-07 20:52:25 UTC
I know this is not OP's first bronco ride in a sysplex rodeo, but I have a few more suggestions.

-- Create a new unique name for the combined sysplex.

-- Pick one (least critical) system and start there as the first member of the new sysplex.

-- Once the new sysplex is running with this one member, add in a second member.

-- Continue until all desired members have joined the new sysplex.

-- All along the way make extensive use of symbolics in names. One suggestion for DSN is the suffix '$SYS&SYSCLONE'. '$SYS' (or whatever you choose) is a kind of marker that several similar data sets exist in the sysplex with &SYSCLONE identifying the owning member.
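
For reference, &SYSCLONE and any home-grown symbols are set per system in IEASYMxx, and substring notation lets &SYSCLONE fall out of the system name. A sketch with made-up names only:

SYSDEF   SYSCLONE(&SYSNAME(3:2))
         SYMDEF(&SYSR2='RESA02')
SYSDEF   HWNAME(CEC1) LPARNAME(PRODA)
         SYSNAME(SYSA)

A data set suffix like the '$SYS&SYSCLONE' suggestion above then resolves differently on each member, so parmlib members, procs, and policies can stay identical across the sysplex.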

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Chuck Kreiter
2018-03-01 13:37:59 UTC
We recently did an upgrade to z14's and have seen some unexplained problems.
It appears (unconfirmed as of yet, but should be soon) to be related to some
CA products. We should have confirmation later today or tomorrow. Our
upgrade was z12's to z14's and we are running z/OS 2.2. If I get
confirmation, I'll pass along more details.

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On
Behalf Of fred glenlake
Sent: Monday, February 26, 2018 11:11 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: Hardware upgrade z13 to z14....Yikes

Hi List,

My management must be in line to cash in on performance bonuses, because they have decreed we will upgrade our z13's to z14's in 90 days. This was a total out-of-left-field surprise; perhaps our hardware vendor had a "Presidents Day" sale on, right next to the slacks and shirts and CECs??

I am just starting to review the IBM considerations for going to z14's; there is lots to consider and read, which I do not mind. I wondered if any list members had already moved to z14's and could share any land mines they encountered while upgrading.

At the 10,000 foot level, I am thinking that to get this done quickly hardware-wise we drop the new CECs next to the existing ones. Hook them up to power, HMCs, etc. Then grab a couple of cables from the existing CECs for DASD and tape and swing them over. Use the existing IOCP and IOCDS as the basis of the new IOCP/IOCDS; update serial numbers, models, etc. Then bring up our sysprog LPAR on the new CEC, get that one going, and sort out software keys/licenses, issues with first IPLs on the new CEC, etc. Once all the work is done in terms of getting ready for the rest of the LPARs, go for the big bang one weekend: bring down the rest of the LPARs, drop the cables, swing over to the new CEC, hook up and IPL the remaining LPARs. Assuming no phat-thumb checks, it should work. Of course there are a ton of considerations to review and check: coupling facility stuff, software stuff, compatibility maintenance, etc. However, all things being equal, I am thinking this approach is likely the safest in terms of risk avoidance and getting this done within the 90 days. We could go the move-one-LPAR-at-a-time route, but that would mean more work and it would take longer, especially with our change management processes (bless their little hearts).
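
For what a sketch is worth, the carried-over IOCP source mostly needs the processor information changed; where the cabling really does move one-for-one, the CHPID/CNTLUNIT/IODEVICE statements can come across largely intact. A heavily simplified fragment, every number and name invented, and in practice this would be driven through HCD rather than hand-edited IOCP:

  ID       MSG1='Z14 IOCDS',SYSTEM=(3906,1)
  RESOURCE PARTITION=((CSS(0),(PRODA,1),(SYSPRG,2)))
  CHPID    PATH=(CSS(0),40),SHARED,PCHID=11C,TYPE=FC
  CNTLUNIT CUNUMBR=4000,PATH=((CSS(0),40)),UNIT=2107
  IODEVICE ADDRESS=(8000,64),CUNUMBR=(4000),UNIT=3390B

The machine type in the ID statement and the PCHID assignments are the pieces that necessarily change when the definitions move to the new CEC.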

Any alternate suggestions or comments would be appreciated as I am sure
there will still be a few land mines with my name on them waiting in the
woods.

FredG.




fred glenlake
2018-03-01 14:34:40 UTC
Hi Chuck and list members,

I would be interested in knowing once you figure out the cause. My upgrade is actually much more complicated than first presented. We are going from two z13's, with about 6 LPARs on each, to one z14, so not only are we upgrading, we are merging as well, just to make things more interesting.

We are going to run into duplicate definitions for channels, devices, etc. that we will need to rectify when we do the odd 30 minutes of planning....Ha. The new z14 of course came with other new hardware: HMC, routers, etc. I personally have not had the experience of gluing two into one; all of the upgrades I have worked on have been one-for-one....up until now.

I would be interested in hearing if anyone else has merged two to one or three to one and their experiences.

FredG.

Feller, Paul
2018-03-01 16:07:55 UTC
Well, we did two zEC12s to one z13 last summer. In our case we did not have to move all the LPARs over in one weekend. Also, both CECs had been in the same sysplex, so there were no duplicate data sets, devices, or other stuff. Our big issue was software contracts. The one zEC12 was small, and we had to fight with vendors over MSU pricing for the software that had been on the small box and now ran on a big box. We put all the LPARs that had been on the small box into a capacity group so we could cap them. We also took the time to eliminate a few LPARs in the process.

The main thing is we worked it out so that we did not have to do everything in one weekend. We had about a two-week time frame to get everything done.

Thanks..

Paul Feller
AGT Mainframe Technical Support

fred glenlake
2018-05-03 16:17:23 UTC
Hello,

We are going from one production LPAR and one test LPAR to two sysplexes, one plex for production and one plex for test. Currently the RACF databases are shared (yeah, not ideal), but they will be split (prod and test on their own databases) once we are sysplexed.

In preparation for the split and the new sysplexes I want to split up the databases ahead of time. I am new to sysplexes so excuse the silly questions.

Currently my primary and backup RACF databases are on DASD, shared between prod and test. I am going to move them to non-shared DASD so prod has its own databases and test has its own. In a sysplex, should the RACF databases reside on DASD shared by all the systems in that sysplex (so both prod LPARs in the plex), or should they reside in the coupling facility, or ??

Are there any tools that will help me get to my end state, split up the databases, report on the databases, etc.?? Normally I just use the RACF utilities ICH***** but perhaps other sites use different tools I could look into.

Any suggestions, comments are most welcome.

FredG.


Lizette Koehler
2018-05-03 16:54:08 UTC
If you have not already joined, there is a RACF list where you might also like to ask this question.

To join, if you have not done so, go to this URL

RACF http://www.listserv.uga.edu/archives/racf-l.html

Lizette
Jesse 1 Robinson
2018-03-01 17:31:48 UTC
I'm not clear on the scope of 'merging'. If you are just moving LPARs wholesale from two CECs to one CEC, it should not be too complicated. In particular, if you're careful, there should be minimal impact on users.

If you're consolidating LPARs, then you're in a whole new ballgame. That requires a great deal of planning and preparation.

In either case, you have to choose between push-pull and 'incorporation'. In the latter case, you add the new CEC into your configuration, move things gradually, then remove the old CEC. In the former case, you have to deal with a big bang event. Keep a back-out plan in mind in case the new configuration falls horribly flat. Over the years I've come to prefer a 'fix forward' strategy that determines to correct problems and move on. Nevertheless, you could encounter a problem so severe that the business would not put up with the ensuing outage.

So many variables.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


fred glenlake
2018-03-01 18:00:01 UTC
Hi Jesse,

As I understand the current plan as of typing this response, we will merge/move the LPARs from the z13's to the z14. There is no plan to collapse LPARs into a smaller number of LPARs, so no need to break out the abacus and slide rules to figure out which workloads need to merge and all that mess. Those are my current marching orders, and I am working towards that. Having said that, a couple of weeks ago the marching orders were to build a new sysplex; then last week it was to upgrade from a single z13 to a z14; and this week it has morphed into merging/moving all LPARs from the two z13's into one z14. The beauty of management directions and the right to change their minds.... 😊

Fred G.

Jerry Whitteridge
2018-05-03 17:15:12 UTC
We have separate DBs for each sysplex BUT keep them in sync using RRSF, so password changes and profile updates flow from the initial system to all the others. In the case of the Sandbox sysplex we allow updates IN but not OUT, which lets us test RACF changes in the Sandbox sysplex without propagating them to the other sysplexes. This does mean that you need to maintain a consistent naming convention where Prod is Prod and Dev is Dev across the enterprise (e.g. don't reuse a Prod DS name in Dev and grant different accesses because it's Dev).
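
For reference, the plumbing behind that kind of RRSF setup is a set of TARGET commands, usually kept in the RACF parameter library. A rough sketch only, with invented node names, prefixes, and addresses; note that the in-but-not-out behaviour comes from which direction automatic direction is enabled, which is not shown here:

TARGET NODE(PRODNODE) LOCAL PREFIX(SYS1.RRSF) WORKSPACE(VOLUME(RRSF01)) OPERATIVE
TARGET NODE(SANDNODE) PREFIX(SYS1.RRSF) WORKSPACE(VOLUME(RRSF01)) PROTOCOL(TCP(ADDRESS(sandbox.example.com))) OPERATIVE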

Jerry Whitteridge
Delivery Manager / Mainframe Architect
GTS - Safeway Account
602 527 4871 Mobile
***@ibm.com

IBM Services
Elardus Engelbrecht
2018-05-03 20:36:54 UTC
Wait a moment please. First things first - do NOT share any RACF DBs across two or more Sysplexes.

Ensure that each Sysplex has its own set of RACF DBs. Each LPAR inside the Sysplex can share that RACF DB or use its own RACF DB. I recommend that ONE RACF DB be used by all LPARs inside a Sysplex.

From what you said, I believe the safest way is - Make an exact copy of the RACF DBs to be used on the other Sysplex.

Say you have two RACF DBs (Primary and Backup) on Volser A and B. Copy them to Volser C and D and ensure that one Sysplex is using A and B and another Sysplex is using C and D.

In this way you can have 'prod has its own databases and test has its own.'

Then, when everything is fine and you have IPLed and verified that each Sysplex is using its own RACF DBs, you can get rid of unneeded profiles.

About 'splitting' - IBM uses the word 'splitting' for the RACF DB in another, rather different sense. Let me explain.

In your sense of 'splitting', do not use IRRUT400 to do a 'split'. That type of 'split' using IRRUT400 just spreads your profiles across more than one data set inside a RACF DB, but all of those data sets are still used as ONE RACF DB (inside one Sysplex). That type of split is more for performance and resizing.

In your scenario, the only way to 'split' is to make an identical copy and then, on each Sysplex, get rid of the profiles that Sysplex does not need. Say you have IDs Prod1 and Test1 on both copies. You then delete Prod1 from the test Sysplex RACF DB and delete Test1 from the Prod RACF DB.

When everything is in order and you have verified that each Sysplex has its own RACF DBs, then you can set up XCF so the RACF structures can be used. If you need guidance, please e-mail me privately, or you can post on RACF-L for more guidance.
Post by fred glenlake
Are there any tools that will help me get to my end state, split up the databases, report on the databases, etc.?? Normally I just use the RACF utilities ICH***** but perhaps other sites use different tools I could look into.
zSecure (and Vanguard) can help you there, but to make the copies and to set up ICHRDSNT and the other modules you need the RACF utilities: IRRMIN00 (for templates), IRRUT200 (for making exact copies), IRRUT400 (to reorganize the RACF DB indexes during a copy), and IRRDBU00 (for RACF DB unloads and reporting).

Just ensure that all Volsers used by RACF are Non-SMS Volsers (DSORG=PSU) and of course not shared by both Sysplexes.
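
For reference, the IRRUT200 copy itself is a small job; a sketch with invented data set names (the SYSUT1 output data set should be preallocated to match the size and attributes of the input database):

//COPYRACF EXEC PGM=IRRUT200
//SYSRACF  DD DISP=SHR,DSN=SYS1.RACFDS.PRIMARY
//SYSUT1   DD DISP=OLD,DSN=SYS1.RACFTST.PRIMARY
//SYSUT2   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 INDEX
 END
/*

The copy goes to SYSUT1, and the INDEX (or MAP) function doubles as a consistency check on the result.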

For clarification:
I have two Sysplexes, Prod and Sandbox. Each Sysplex has numerous LPARs, but each Sysplex has its own RACF DBs (Primary and Backup). These sets are on different non-SMS volsers and are not shared between the Sysplexes. Each Sysplex's RACF DBs are cataloged in its own Sysplex Master Catalog. Each Sysplex has its own ICHRDSNT module.

Think about 'isolating' or think about putting each Sysplex in separate prison cells where nothing is shared at all and you're a heavy handed guard taking no bribes at all. ;-)
Post by fred glenlake
Any suggestions, comments are most welcome.
Post your questions on RACF-L. I'll check you out there. ;-)

Groete / Greetings
Elardus Engelbrecht

Mark Zelden
2018-05-03 22:43:58 UTC
My client has a mixture of things, but nothing is ever shared with a sandbox LPAR - not even
via RRSF "one way". It really doesn't seem dangerous to do it one way, but I still
prefer to isolate things in a sandbox as completely as possible.

One business unit with 2 large sysplexes has separate RACF databases, but RRSF keeps
things in sync. Both have sysplex communications enabled in the DSNT and CF
structures.
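
For reference, enabling RACF sysplex communication / data sharing also implies CFRM definitions for the RACF cache structures; a sketch with placeholder sizes, assuming the usual IRRXCF00_Pnnn / IRRXCF00_Bnnn naming convention (one pair per primary/backup data set in the ICHRDSNT):

  STRUCTURE NAME(IRRXCF00_P001) SIZE(8192) PREFLIST(CF01,CF02)
  STRUCTURE NAME(IRRXCF00_B001) SIZE(8192) PREFLIST(CF01,CF02)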

Another business unit has one RACF database shared between 2 sysplexes. MII is the
integrity manager and SYSZRACF is excluded, so the DB is protected with RESERVEs.

Another business unit has a 2-system basic sysplex and is in GRS ring mode; the RACF DB is shared between both systems. I just checked, and SYSZRACF is converted to a global ENQ.
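
Whether SYSZRACF stays a hardware RESERVE or becomes a global ENQ is a GRSRNLxx decision; a sketch of the two alternatives just described (only one of them would be coded in any given sysplex):

/* MII as the integrity manager: keep SYSZRACF as a hardware RESERVE */
RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(SYSZRACF)

/* GRS managing serialization: convert the RESERVE to a global ENQ */
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSZRACF)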

Another business unit has a prod / devl LPAR (both monoplexes). They share a RACF
DB. Since there is no GRS ring, the DB is protected with RESERVE. There is a sandbox
version of this business unit also, but it has its own RACF DB.

There are also 2 sandbox parallel sysplexes each with 2 LPARs that are "clones" of the
first 2 environments I wrote about - one with GRS, the other with MII. Both those sysplexes
have their own RACF DBs, have sysplex communications enabled in the DSNT
and CF structures.

Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
ITIL v3 Foundation Certified
mailto:***@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://search390.techtarget.com/ateExperts/