How to move DFDSS DUMPS on tape with large block sizes to TMM DASD?
Fred Schmidt
2011-02-18 06:08:15 UTC
Hi folks,

We are in the process of migrating from 3590 to 3592 tape. We would like to use HSM's Tape Mount Management (TMM) to stack data on the new tapes, thus taking advantage of their much greater capacity.

However, much of the data we currently have on 3590 tape is backups in DFDSS DUMP format with a blocksize of 229360 (LBI). The fine DFDSS manual says that COPYDUMP is the only supported method for copying DUMP datasets, and that it cannot be used to change the blocksize of a DUMP dataset. This means we cannot copy this data from tape to DASD, because DASD does not support block sizes larger than 32760. Therefore we cannot move it to TMM.
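For illustration, the only supported copy is a COPYDUMP job along these lines (dataset names, volsers and esoterics are made up, not our real ones):

//COPYDUMP EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPEIN   DD DSN=BKUP.DSS.DUMP001,DISP=OLD,
//            UNIT=3590-1,VOL=SER=A00001,LABEL=(1,SL)
//* The output inherits the input's 229360-byte blocks, so a DASD
//* target like this fails: DASD tops out at BLKSIZE=32760.
//DASDOUT  DD DSN=BKUP.DSS.COPY001,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  COPYDUMP INDDNAME(TAPEIN) OUTDDNAME(DASDOUT)
/*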

I have considered restoring the datasets in these dumps and then re-dumping them to TMM. However, since most of the datasets in the dumps still exist, they would have to be restored to a non-SMS volume as uncataloged datasets. Given that we have some 22,000 datasets on tape, many of them DFDSS DUMPs, that is starting to look very ugly.

So, does anybody have a more practical approach?

Regards,
Fred Schmidt
Data Centre Services
NT Government, Australia




----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
Mike Schwab
2011-02-18 06:40:19 UTC
http://ibmmainframes.com/about35024.html
Restore to disk and dump.

How about copying from tape to tape using your old tape management
system? Or just relying on the cataloged datasets?
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Fred Schmidt
2011-02-18 07:45:41 UTC
Post by Mike Schwab
http://ibmmainframes.com/about35024.html
Restore to disk and dump.
How about copying from tape to tape using your old tape management
system? Or just relying on the cataloged datasets?
Copying tape to tape completely negates the benefit of the new, larger tapes unless we stack the data onto fewer tapes, and that is exactly what we want to use TMM for. Yes, there are other tape-stacking products available, but they cost money, and that makes them unlikely for us. TMM is supposed to be IBM's solution.

Relying on the cataloged datasets means keeping the old tapes. We want to
convert to the new tapes and get rid of the old tapes.

Fred.

R.S.
2011-02-18 08:13:57 UTC
The most practical approach: use the proper tools. HSM (or FDR/ABR) backups can easily be RECYCLEd, not to mention the ease of the backup process itself. Of course, HSM would have to have been implemented in advance, so that advice does not help in this case.

For this case I would suggest two approaches:

1. WAIT. Start using the new tapes for new backups and let the old backups expire. The longer you wait, the more backups expire.

2. The remaining dumps (or everything, if you don't want to wait) can be COPYDUMPed without affecting the BLKSIZE, or restored to DASD with RENAME (see the sketch below). The latter option involves a manual renaming process, at least at the HLQ level, but "manual" (non-HSM) backups require that kind of knowledge anyway.
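
A minimal sketch of the RESTORE-with-RENAME option (the filter, new HLQ, volser and dataset names are examples only):

//RESTORE  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPEIN   DD DSN=BKUP.DSS.DUMP001,DISP=OLD,
//            UNIT=3590-1,VOL=SER=A00001,LABEL=(1,SL)
//* RENAMEU(RST) gives every restored dataset the new HLQ 'RST',
//* so the copies do not clash with the still-existing originals.
//SYSIN    DD *
  RESTORE DATASET(INCLUDE(**)) -
          INDDNAME(TAPEIN) -
          OUTDYNAM(WORK01) -
          RENAMEU(RST) -
          CATALOG
/*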

BTW: Are you sure the blocksize on 3590 is 229360? The largest blocksize available on Jaguar is 256 KB. The block size limit (I believe the keyword is BLKSZLIM) can be up to 2 GB, but that is a theoretical (system software) limit; the SCSI reference for Jaguar specifies that the blocksize can be up to 2 MB.

So, if your blocksize really is 229360, then you don't need to reblock the dumps!
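
On the DD statement it is coded like this (a sketch only, names invented):

//TAPEOUT  DD DSN=BKUP.DSS.NEW001,DISP=(NEW,CATLG),
//            UNIT=3592-1,BLKSZLIM=256K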
--
Radoslaw Skorupka
Lodz, Poland


Fred Schmidt
2011-02-20 23:34:28 UTC
Sigh. Maybe I didn't make the situation clear.

We already have HSM. We already use TMM. We want to use TMM with our new ATL
(3592 tape) to hold the DFDSS DUMP datasets currently in our old ATL (3590
tape). The problem is that these datasets have a blocksize of 229360 (and
yes, I am sure of that). This blocksize is far too big for DASD, which is
where the data has to land for TMM to move it to tape. COPYDUMP does not
allow the blocksize to be changed, and COPYDUMP is the only supported way
to copy DUMP datasets. So we appear to be stuck with no way of moving this
data to TMM, other than restoring the data and re-dumping it.

Any ideas better than restoring and re-dumping would be gratefully received.

Waiting for the old backups to expire is not an option, as some of these
have to be kept for 7 years and we need to move the old ATL out now.

Fred

McKown, John
2011-02-21 00:15:48 UTC
I do understand that you want to use TMM. However, in this specific case, why not just COPYDUMP from the old ATL to the new ATL without using TMM? I would guess that you will need to do the dataset stacking from the old volumes onto the new volumes "by hand". I'd likely do this in a single step with UNIT=AFF and VOL=REF= in the JCL, so that there is only a single mount of each new tape volume. I don't know how many volumes you have, which would have a definite impact on how hard this would be to actually do. Your tape catalog may help: it should have the block count for the existing COPYDUMP tapes, so you can get a decent estimate of how to stack them onto the higher-capacity 3592 tapes.

I know it's not what you want to do, but it may be better than doing a bunch of RESTOREs and re-DUMPs.
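
Something along these lines, stacking two dumps onto one new cartridge (all dataset names and volsers invented):

//COPY     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//IN1      DD DSN=BKUP.DSS.DUMP001,DISP=OLD,
//            UNIT=3590-1,VOL=SER=A00001,LABEL=(1,SL)
//IN2      DD DSN=BKUP.DSS.DUMP002,DISP=OLD,
//            UNIT=AFF=IN1,VOL=SER=A00002,LABEL=(1,SL)
//* RETAIN plus UNIT=AFF/VOL=REF keeps the new cartridge mounted;
//* LABEL=(2,SL) appends the second dump as file 2.
//OUT1     DD DSN=BKUP.DSS.NEW001,DISP=(NEW,CATLG),
//            UNIT=3592-1,VOL=(,RETAIN,SER=B00001),LABEL=(1,SL)
//OUT2     DD DSN=BKUP.DSS.NEW002,DISP=(NEW,CATLG),
//            UNIT=AFF=OUT1,VOL=REF=*.OUT1,LABEL=(2,SL)
//SYSIN    DD *
  COPYDUMP INDDNAME(IN1) OUTDDNAME(OUT1)
  COPYDUMP INDDNAME(IN2) OUTDDNAME(OUT2)
/*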

--
John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone *
***@healthmarkets.com * www.HealthMarkets.com

Fred Schmidt
2011-02-21 07:52:21 UTC
Yes, that sounds like a better approach than RESTORE and DUMP. We'll give it
a whirl. Thanks John.

Regards, Fred
