Discussion:
SDB (system determined Blksize)
Lizette Koehler
2017-05-19 18:02:32 UTC
List -

I have gone through a few manuals and cannot determine the answers to the
following questions. Any guidance is appreciated.

1) Can the SDB be adjusted from half-track to another setting (quarter or
full)?
2) Are there any new best practices for SDB that have changed in the last 20
years?
3) Is Half-track still considered OPTIMUM?

Since the storage arrays are so fast, I would think that full-track blocking
might not have much of a performance impact any more.

I have been working offlist with a couple of people on JCL, BLKSIZE, and
SPACE sizing. I am trying to figure out how to make the formula more accurate.
When I calculate space I use full-track blocking. However, I find that when SDB
uses half-track, my math is off.

Any thoughts or help would be great.


Lizette Koehler
statistics: A precise and logical method for stating a half-truth inaccurately

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
Allan Staller
2017-05-19 18:58:41 UTC
IBM for many years supplied a macro or subroutine called TRKCALC that can be used for space calculations. I don't know if it's still around. GIYF.

The actual formula used is:

Physical records per track = 1729 / (10 + K + D)

D = 9 + (DATALEN + 6*((DATALEN + 6)/232) + 6) / 34

K = 0 if non-keyed
K = 9 + (KEYLEN + 6*((KEYLEN + 6)/232) + 6) / 34 (if keyed)
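For anyone who wants to experiment, here is a small Python sketch of that calculation. It assumes, consistent with published 3390 capacity tables, that the inner divisions round up and the final division rounds down:

```python
import math

def records_per_track_3390(datalen, keylen=0):
    """Physical records per 3390 track, per the formula above.
    Assumption: inner divisions round up, the final one rounds down."""
    def overhead(length):
        return 9 + math.ceil((length + 6 * math.ceil((length + 6) / 232) + 6) / 34)
    d = overhead(datalen)
    k = overhead(keylen) if keylen else 0
    return 1729 // (10 + k + d)

print(records_per_track_3390(27998))  # half-track blocking -> 2 per track
print(records_per_track_3390(32760))  # max 32K blocksize   -> 1 per track
print(records_per_track_3390(4096))   # 4K blocks           -> 12 per track
```

The last value agrees with the well-known figure of twelve 4K blocks per 3390 track, which is a decent sanity check on the rounding assumptions.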



1) I would imagine it can, but I am baffled as to where it would be specified. The most likely options are in the CDS or IGDSMSxx.
2) See # 3.
3) SLED devices generated the recommendation for half-track blocking for the following reasons:
a) Maximizing the use of space.
b) Minimizing physical I/O to the device.

With the advent of RAID devices, the physical I/O portion has much less importance. There might be some IOS-related performance overhead due to the increased number of I/O requests. With the reduction in DASD prices, does the percentage of utilization matter as much as it used to?

I have not heard of any research in this area. Is Pat Artis still active, or has he retired?

HTH,





::DISCLAIMER::
----------------------------------------------------------------------------------------------------------------------------------------------------

The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only.
E-mail transmission is not guaranteed to be secure or error-free as information could be intercepted, corrupted,
lost, destroyed, arrive late or incomplete, or may contain viruses in transmission. The e mail and its contents
(with or without referred errors) shall therefore not attach any liability on the originator or HCL or its affiliates.
Views or opinions, if any, presented in this email are solely those of the author and may not necessarily reflect the
views or opinions of HCL or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification,
distribution and / or publication of this message without the prior written consent of authorized representative of
HCL is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately.
Before opening any email and/or attachments, please check them for viruses and other defects.

----------------------------------------------------------------------------------------------------------------------------------------------------

Charles Mills
2017-05-19 21:20:15 UTC
Was just talking about Dr. Artis with an old friend.

He is teaching at Virginia Tech:
https://www.aoe.vt.edu/people/advisory/pat-artis.html

I think Performance Associates is still around: http://perfassoc.com/

Charles


Jesse 1 Robinson
2017-05-19 21:21:28 UTC
SDB was invented to automatically optimize two competing efficiencies: I/O overhead and physical disk storage. To wit, the fewer I/Os the better; and the less wasted track space the better.

For I/O, the ideal operation is reading or writing an entire 'data component' in one operation. For disk storage, the ideal is to occupy every available bit on a track.

To maximize track occupancy, one logical record per block probably achieves maximum efficiency but is horrible in the I/O arena. For I/O efficiency, the largest possible block that fits on a track achieves the fewest I/Os but may result in egregious wasted space.
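The I/O side of that tradeoff is easy to put rough numbers on. A toy Python illustration (the data set size and the 349-records-per-block figure are hypothetical; 349 x 80 = 27,920 is simply the largest multiple of an 80-byte LRECL that fits in a half-track block, and the model ignores buffering and chained scheduling):

```python
LRECL = 80
RECORDS = 1_000_000  # records in a hypothetical sequential data set

for recs_per_block in (1, 349):  # 1 record/block vs. near-half-track blocking
    blksize = LRECL * recs_per_block
    blocks = -(-RECORDS // recs_per_block)  # ceiling division
    print(f"BLKSIZE={blksize}: {blocks} blocks to read or write")
```

Three orders of magnitude fewer blocks means three orders of magnitude fewer physical transfers in the unbuffered worst case, which is why the small-block end of the spectrum is "horrible in the I/O arena."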

For exceptional cases, you can override SDB by coding explicit blocksizes, but that seems like a lot of work. Setting out to supplant the IBM-provided SDB algorithms with 'something smarter' seems like even more work with questionable ROI.

BTW I have no idea how to answer your actual question. Relying on my college survival strategy, I'm trying to supply an interesting and maybe even useful answer to a different question. ;-)

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Steve Smith
2017-05-19 21:30:03 UTC
Considering the many layers of emulation and fakery between application and
actual recorded media these days, I highly doubt these considerations
matter much at all. While it makes sense to maximize BLKSIZE, that's a
low-order optimization, only worth doing because it's free.

It's rather a shame z/OS is still stuck with emulating a very obsolete disk
technology.

sas
Gerhard Adam
2017-05-19 21:44:46 UTC
z/OS doesn't emulate 3390s; the disk technology does. It does so for good reason: the biggest issue with DASD is differing geometries, which would affect space allocation and the blocksizes that can be used.

Since there is no performance penalty for emulating a 3390, there is zero incentive for anyone to represent their disks as anything except a 3390.

Adam

John McKown
2017-05-22 14:38:52 UTC
Post by Steve Smith
Considering the many layers of emulation and fakery between application and
actual recorded media these days, I highly doubt these considerations
matter much at all. While it makes sense to maximize BLKSIZE, that's a
low-order optimization, only worth doing because it's free.
It's rather a shame z/OS is still stuck with emulating a very obsolete disk
technology.
I agree. There is just a slight crack in the wall right now: "z/OS FBA
Services". The link is supposedly
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieaa800/fbasvcs.htm
but I'm getting "503 Service Temporarily Unavailable" (#include
"long.obscene.rant.h")

As I recall, this gives you "raw" access to a LUN. You cannot use it as a
disk volume with data sets on it, nor create & mount a UNIX filesystem upon
it. It has no data management support. The closest analogy is using an LDS
to read / write one or more bytes at some offset.
--
Advertising is a valuable economic factor because it is the cheapest way of
selling goods, particularly if the goods are worthless. -- Sinclair Lewis


Maranatha! <><
John McKown

Gerhard Adam
2017-05-19 21:42:02 UTC
I'm not sure why you think these are competing efficiencies. A large physical block reduces waste on the track by requiring fewer Inter-Block Gaps (IBGs), and it reduces the number of I/Os required to read the data.

The problem is that the maximum blocksize was limited by software to 32K. Since this isn't an even divisor of the track capacity on a 3390, the best blocksize would be a half track. SDB provided the means of letting the system calculate this without requiring the programmer to do the calculation.

However, such an "optimum" blocksize is only relevant when processing sequential files. VSAM uses the CISIZE, so blocksize is irrelevant. Most PDSs will have more short blocks because members are rarely large enough to fill a half-track block. In load libraries, most of the records are irregular sizes, significantly smaller than the blocksize, so again it doesn't do anything.

Adam

R.S.
2017-05-19 21:27:33 UTC
Just curious: the formulas can give fractional values. How to round them?
OK, I assume the physrec/trk should be rounded down, but what about D?

Regards
--
Radoslaw Skorupka

Anne & Lynn Wheeler
2017-05-19 21:45:21 UTC
Post by R.S.
Just curious: the formulas can give fractional values. How to round them?
OK, I assume the physrec/trk should be rounded down, but what about D?
remember CKD disks haven't been manufactured for decades, all being
simulated on industry standard FBA devices ... originally 512 byte fixed
block ... but industry moving to 4k byte fixed block.

In between are 4k FBA that simulate 512 byte.
https://en.wikipedia.org/wiki/Advanced_Format

where you might have CKD simulated on 512 byte FBA simulated
on 4096 byte FBA

gets even more complex when RAID is layered over the top.
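A minimal model of what that 512-on-4K emulation costs, sketching the read-modify-write penalty described on the Advanced Format page (the function and costs are an illustrative model, not any drive's actual firmware behavior):

```python
PHYS = 4096     # Advanced Format physical sector
LOGICAL = 512   # emulated logical sector

def physical_sector_ops(lba, count=1):
    """Physical-sector operations needed to write `count` 512-byte logical
    sectors at logical block address `lba`. Toy model: an unaligned write
    costs a read plus a write per touched physical sector."""
    first = (lba * LOGICAL) // PHYS
    last = ((lba + count) * LOGICAL - 1) // PHYS
    touched = last - first + 1
    aligned = (lba * LOGICAL) % PHYS == 0 and (count * LOGICAL) % PHYS == 0
    return touched if aligned else 2 * touched

print(physical_sector_ops(0, 8))  # aligned 4 KiB write   -> 1 op
print(physical_sector_ops(1, 1))  # unaligned 512 B write -> 2 ops (RMW)
```

Stack CKD-on-512-on-4096 and each layer can add its own alignment penalty, which is the point being made here.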
--
virtualization experience starting Jan1968, online at home since Mar1970

Ron Hawkins
2017-05-19 23:49:35 UTC
Lizette (OP),



I would not recommend going to 32K as a blocking factor. Generically
speaking, all three vendors emulate a CKD track, allocating up to 64KiB of
space for every track.



Whether you use a regular formatted volume or thin provisioning (a DP-VOL in
Hitachi speak), if you write a 32K block to an emulated 3390 track, the
balance of the track will be wasted. Ipso facto, about 41% of the capacity
goes unused relative to half-track blocking:



1 - (32768 / (27998 * 2)) ≈ 41%
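That arithmetic in a few lines of Python (27,998 being the 3390 half-track block, so 2 x 27,998 = 55,996 usable bytes per track at half-track blocking):

```python
half_track = 27998        # 3390 half-track block size
usable = 2 * half_track   # 55,996 bytes/track at half-track blocking
block_32k = 32768         # only one such block fits on an emulated track
wasted = 1 - block_32k / usable
print(f"{wasted:.0%} of the track's capacity is wasted")  # -> 41%
```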



Performance-wise you will see almost no difference between a half-track and
a 32K-block I/O. The normal defaults and limits for BAM chain length will
apply, so you may pick up a few more bytes with the default of 5 buffers and
32K blocks, but without zHPF there will be no benefit from a BUFNO set
greater than 7, as the BAM chain limit is 240KiB.



SDB also takes into account the format of DSORG=PS or PS-E files, so you
will get the appropriate blocking factor for the LRECL and format. The
underlying data set will use a physical block size based on the maximum
block specified for the data set, so unlike VSAM the physical block size is
not remapped from the CISZ.



I really don't see any upside to changing this from half-track blocking. If
you want to speed up BAM I/O, then make sure you are using zHPF and set
BUFNO=255. I don't think your application will notice the difference between
6.8MiB and 7.9MiB per SSCH, the TCW cost per SSCH for the controller is the
same, and you may realize just how much you really wanted SSD or FMD drives.



Ron



Charles Mills
2017-05-20 17:39:39 UTC
Post by Ron Hawkins
if you write a 32K block to an emulated 3390 track, the balance of the
space will be wasted

Is that true? (Serious question -- everything I know about DASD management
could be written in one paragraph of an e-mail.) Sure, it wastes "virtual"
space on the emulated 3390 track, no doubt, but aren't modern storage arrays
smart enough not to waste the real disk space that you are paying for on
empty 3390 track space?

Charles


Gerhard Adam
2017-05-20 18:08:40 UTC
I don't see how the space would not be wasted. Where would it be assigned or accounted for? If you ignored such waste, you could have more capacity available than the volumes you've defined.

Sent from my iPhone
Charles Mills
2017-05-20 20:32:11 UTC
Consider for example "flash copy" and similar technologies. The DASD
subsystem is able to make a "copy" of an entire volume without using any
significant amount of actual honest-to-gosh disk space.

It's a little hard to explain the technology in a quick e-mail paragraph, but
basically the controller makes a "pretend" copy of the disk by making a
duplicate copy of an "index" to all of the volume's tracks. Whenever a track
changes, it creates the new track image in new disk space and updates the
index to point to that track. This lets companies make an internally
consistent backup of an entire DB2 volume while only having to "freeze" DB2
for a second or so.
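A toy data-structure sketch of that copy-on-write idea (purely hypothetical, not any vendor's actual FlashCopy internals):

```python
class Volume:
    """Toy copy-on-write volume: an index maps track numbers to track images."""
    def __init__(self, tracks=None):
        self.index = dict(tracks or {})

    def snapshot(self):
        # The "copy" duplicates only the index, not the track images.
        snap = Volume()
        snap.index = dict(self.index)
        return snap

    def write(self, trk, data):
        # A changed track gets a new image; snapshots keep pointing
        # at the old one.
        self.index[trk] = data

vol = Volume({0: "old-data"})
snap = vol.snapshot()       # near-instant "copy" of the whole volume
vol.write(0, "new-data")
print(vol.index[0], snap.index[0])  # -> new-data old-data
```

The snapshot costs only an index copy up front; real track space is consumed only as tracks are subsequently changed.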

Charles


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On
Behalf Of Gerhard Adam
Sent: Saturday, May 20, 2017 11:09 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: SDB (system determined Blksize)

I don't see how the space would not be wasted. Where would it be assigned
or accounted for? If you ignored such waste, you could have more capacity
available than the volumes you've defined.

Sent from my iPhone
Post by Ron Hawkins
if you write a 32K block to an emulated 3390 track, the balance of the
space will be wasted
Is that true? (Serious question -- everything I know about DASD
management could be written in one paragraph of an e-mail.) Sure, it
wastes "virtual"
Post by Ron Hawkins
space on the emulated 3390 track, no doubt, but aren't modern storage
arrays smart enough not to waste the real disk space that you are
paying for on empty 3390 track space?
Charles
-----Original Message-----
On Behalf Of Ron Hawkins
Sent: Friday, May 19, 2017 4:47 PM
Subject: Re: SDB (system determined Blksize)
Lizette (OP),
I would not recommend going to 32K as a blocking factor. Generically
speaking, all three vendors emulate a CKD track, allocating up to 64KiB
of space for every track.
Whether you use a regular formatted volume, or thin provisioning
(DP-VOL in Hitachi speak), if you write a 32K block to an emulated
3390 track, the balance of the space will be wasted. Ipso facto, you
will use 41% more space for the same amount of data than at half-track.

John Eells
2017-05-22 10:48:16 UTC
Post by Ron Hawkins
I would not recommend going to 32K as a blocking factor.
Except, of course, for load libraries, the significant exception to this
rule.
--
John Eells
IBM Poughkeepsie
***@us.ibm.com

Paul Gilmartin
2017-05-19 23:59:22 UTC
Post by Gerhard Adam
z/OS doesn't emulate 3390's, the disk technology does. It also does so, for good reason, because the biggest issue with DASD is differing geometries. That would affect space allocation and the blocksizes that can be used.
At the very least, it's z/OS that imposes a maximum block size of 32760
while the 3390 supports much larger.
Post by Gerhard Adam
Since there is no performance penalty for emulating a 3390, there is zero incentive for anyone to represent their disks as anything except a 3390.
I'm skeptical that layer(s) of emulation incur no performance penalty.
Wouldn't a hypothetical emulated device supporting two 32760-byte
blocks per track, or one 65535-byte block (the CCW count field) do
better?

Or eliminate an emulation layer and expose the underlying FBA to
the (systems) programmer. I believe recent OS releases have
(very limited) support for this. An enhanced QSAM could make this
transparent to the application programmer, even as QSAM does for
z/OS UNIX files.

-- gil

Rob Schramm
2017-05-20 00:22:26 UTC
Turbo tune / Ralph Bertrum continues to make money by efficiently blocking
and buffering data sets (VSAM mostly) and saving cycles. So it may not
be as worthless as you'd expect.

Rob Schramm

On Fri, May 19, 2017, 8:00 PM Paul Gilmartin <
Post by Gerhard Adam
Post by Gerhard Adam
z/OS doesn't emulate 3390's, the disk technology does. It also does so,
for good reason, because the biggest issue with DASD is differing
geometries. That would affect space allocation and the blocksizes that can
be used.
At the very least, it's z/OS that imposes a maximum block size of 32760
while the 3390 supports much larger.
Post by Gerhard Adam
Since there is no performance penalty for emulating a 3390, there is zero
incentive for anyone to represent their disks as anything except a 3390.
I'm skeptical that layer(s) of emulation incur no performance penalty.
Wouldn't a hypothetical emulated device supporting two 32760-byte
blocks per track, or one 65535-byte block (the CCW count field) do
better?
Or eliminate an emulation layer and expose the underlying FBA to
the (systems) programmer. I believe recent OS releases have
(very limited) support for this. An enhanced QSAM could make this
transparent to the application programmer, even as QSAM does for
z/OS UNIX files.
-- gil
--
Rob Schramm

Gerhard Adam
2017-05-20 02:32:37 UTC
Post by Paul Gilmartin
I'm skeptical that layer(s) of emulation incur no performance penalty.
Wouldn't a hypothetical emulated device supporting two 32760-byte blocks per track, or one 65535-byte block (the CCW count field) >do better?
What files would benefit? Other than sequential files, what files will benefit from a larger blocksize versus simply leaving things as they are and using more buffers?

Bear in mind that such a change as you're proposing would also require that every file be redefined, copied, and all accompanying JCL changed for space/blocksize considerations. After all that work, what actual improvement would one expect to see? I don't see anything that warrants much excitement for the effort involved.

Adam

Paul Gilmartin
2017-05-20 18:31:37 UTC
Post by Gerhard Adam
I don't see how the space would not be wasted. Where would it be assigned or accounted for? If you ignored such waste, you could have more capacity available than the volumes you've defined.
Post by Ron Hawkins
if you write a 32K block to an emulated 3390 track, the balance of the
space will be wasted
Is that true? (Serious question -- everything I know about DASD management
could be written in one paragraph of an e-mail.) Sure, it wastes "virtual"
space on the emulated 3390 track, no doubt, but aren't modern storage arrays
smart enough not to waste the real disk space that you are paying for on
empty 3390 track space?
It depends on the implementation. Some virtual disks use a compressed back end.
It's likely that such a design would not store the track balance, or, if it did
store it, would compress it nearly to oblivion. The virtual capacity may exceed the real storage.

Some VM/370 (or so) paging systems depended on rotational latency -- they'd
actually do a read without a search. I know of one early solid state disk product
(ccd-based) that needed delays artificially inserted to accommodate that.

So, your virtual disk can be not slow enough.

-- gil

Paul Gilmartin
2017-05-21 02:45:12 UTC
Post by Charles Mills
Consider for example "flash copy" and similar technologies. The DASD
subsystem is able to make a "copy" of an entire volume without using any
significant amount of actual honest-to-gosh disk space.
It's a little hard to explain the technology in a quick e-mail paragraph but
basically the controller makes a "pretend" copy of the disk by making a
duplicate copy of an "index" to all of the volume's tracks. Whenever a track
changes, it creates the track image in new disk space and updates the index
to point to that track. Lets companies make an internally consistent backup
of an entire DB2 volume while only having to "freeze" DB2 for a second or
so.
The technique is known as "Copy on Write". CoW is also used by quality
implementations of fork(), by ZFS (not zFS; the real one; GIYF), by btrfs,
and by old StorageTek products, Iceberg and EchoView.

In a thread on TSO-REXX a couple days ago, I hinted at how this might be
crafted with a file granularity in a UNIX filesystem by using "pax -lrw" to
create the "index" to the "pretend" copy. This might be a use for PDSE
generations.

-- gil

Charles Mills
2017-05-21 13:52:31 UTC
As I said, I am no expert. My point was simply to give an example to illustrate the answer to
Post by Gerhard Adam
Where would it be assigned or accounted for? If you ignored such
waste, you could have more capacity available than the volumes
you've defined.
and illustrate that defined apparent 3390 space could be greater than actual occupied hardware space.

Good discussion of CoW here: http://stackoverflow.com/questions/628938/what-is-copy-on-write

Charles

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Paul Gilmartin
Sent: Saturday, May 20, 2017 7:46 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: SDB (system determined Blksize)
Post by Gerhard Adam
Consider for example "flash copy" and similar technologies. The DASD
subsystem is able to make a "copy" of an entire volume without using
any significant amount of actual honest-to-gosh disk space.
It's a little hard to explain the technology in a quick e-mail
paragraph but basically the controller makes a "pretend" copy of the
disk by making a duplicate copy of an "index" to all of the volume's
tracks. Whenever a track changes, it creates the track image in new
disk space and updates the index to point to that track. Lets companies
make an internally consistent backup of an entire DB2 volume while only
having to "freeze" DB2 for a second or so.
The technique is known as "Copy on Write". CoW is also used by quality implementations of fork(), by ZFS (not zFS; the real one; GIYF), by btrfs, and by old StorageTek products, Iceberg and EchoView.

Jesse 1 Robinson
2017-05-21 16:17:15 UTC
The first (maybe only) hardware I know of that claimed no wasted space was STK Iceberg, which was touted as being so virtual that an emulated 3390 track actually left no unused track bits. I never worked with one, but I heard horror stories about *all* the data getting wasted when the Iceberg lost its brains and couldn't find anything. ;-(

For any sort of conventional emulation, I stand by my earlier point about the tradeoff between massive and minuscule blocksize. Agreed that this applies to sequential processing, but there's plenty of that in current applications, especially for those intent on eliminating all tape in favor of all DASD. There's a tremendous amount of overhead in performing I/O. The more I/Os you do, the longer an application will run for a given volume of data. Every (sequential) I/O transfers one physical record, aka block. Hence the larger the block--physical or emulated--the fewer I/Os you have to perform for a given file.

So why not define all blocks as 32K? For other than FB, that makes sense, and SDB algorithms take that into account. For RECFM FB, however, there would be an egregious amount of unusable space on each 3390 track. Nothing in z architecture can handle splitting a fixed block across tracks. Leftover space on a track cannot be used for anything else. So generally SDB recommends half-track blocking for FB data. Maximum data transfer per I/O, minimum wasted track space.
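As a sketch of that half-track rule for RECFM=FB (assuming the commonly quoted 27,998-byte half-track figure for 3390):

```python
HALF_TRACK = 27998  # half of a 3390 track's usable capacity, bytes

def sdb_fb_blksize(lrecl: int) -> int:
    """Largest multiple of LRECL that still fits in half a 3390
    track -- roughly what SDB chooses for RECFM=FB."""
    return (HALF_TRACK // lrecl) * lrecl

for lrecl in (80, 133, 1024):
    print(lrecl, sdb_fb_blksize(lrecl))  # 27920, 27930, 27648
```

Two such blocks per track keeps the leftover space to a few bytes while still moving close to a full half-track per I/O.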



.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Charles Mills
Sent: Sunday, May 21, 2017 6:53 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: SDB (system determined Blksize)

As I said, I am no expert. My point was simply to give an example to illustrate the answer to
Post by Gerhard Adam
Where would it be assigned or accounted for? If you ignored such
waste, you could have more capacity available than the volumes you've
defined.
and illustrate that defined apparent 3390 space could be greater than actual occupied hardware space.

Good discussion of CoW here: http://stackoverflow.com/questions/628938/what-is-copy-on-write

Charles

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Paul Gilmartin
Sent: Saturday, May 20, 2017 7:46 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: SDB (system determined Blksize)
Post by Gerhard Adam
Consider for example "flash copy" and similar technologies. The DASD
subsystem is able to make a "copy" of an entire volume without using
any significant amount of actual honest-to-gosh disk space.
It's a little hard to explain the technology in a quick e-mail
paragraph but basically the controller makes a "pretend" copy of the
disk by making a duplicate copy of an "index" to all of the volume's
tracks. Whenever a track changes, it creates the track image in new
disk space and updates the index to point to that track. Lets companies
make an internally consistent backup of an entire DB2 volume while only
having to "freeze" DB2 for a second or so.
The technique is known as "Copy on Write". CoW is also used by quality implementations of fork(), by ZFS (not zFS; the real one; GIYF), by btrfs, and by old StorageTek products, Iceberg and EchoView.


Paul Gilmartin
2017-05-21 19:36:15 UTC
Post by Jesse 1 Robinson
So why not define all blocks as 32K? For other than FB, that makes sense, and SDB algorithms take that into account. For RECFM FB, however, there would be an egregious amount of unusable space on each 3390 track. Nothing in z architecture can handle splitting a fixed block across tracks. Leftover space on a track cannot be used for anything else. So generally SDB recommends half-track blocking for FB data. Maximum data transfer per I/O, minimum wasted track space.
The same considerations apply to RECFM=VB, assuming in both cases that LRECL is small
compared to track capacity. (Of course, for RECFM=FB BLKSIZE must be a multiple
of LRECL. A surprising case is RECFM=FB, LRECL=7000.)
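To unpack why LRECL=7000 is surprising, here is a sketch using the records-per-track formula posted at the top of the thread (keyless case, divisions rounded up; this is my arithmetic, so double-check it before relying on it):

```python
import math

def recs_per_track(datalen: int) -> int:
    # 3390 records-per-track formula from earlier in the thread,
    # keyless case; each division rounds up to the next 34-byte cell.
    d = 9 + math.ceil((datalen + 6 * math.ceil((datalen + 6) / 232) + 6) / 34)
    return 1729 // (10 + d)

LRECL = 7000
half_track_blk = (27998 // LRECL) * LRECL           # 21000 -- the SDB choice
print(half_track_blk, recs_per_track(half_track_blk))  # 2 blocks -> 42000 bytes/track
print(LRECL, recs_per_track(LRECL))                    # 7 blocks -> 49000 bytes/track
```

So for LRECL=7000, half-track blocking actually stores less data per track than the unblocked BLKSIZE=LRECL would.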

-- gil

Joel C. Ewing
2017-05-22 13:19:05 UTC
Post by R.S.
Just curious: the formulas can give fractional values. How to round them?
OK, I assume the physrec/trk should be rounded down, but what about D?
Regards
--
Radoslaw Skorupka
The missing piece of information that should have been with the
equations for K and D is "Each equation is rounded up to an integer
value." (from 3390 Reference Summary) - the equivalent of applying the
mathematical Ceiling function. Space on a real IBM 3390 track was
allocated in cells of 34 bytes, so if your data for key or data portion
needed a little bit of another cell the entire next cell on the track
had to be allocated.
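Joel's rounding note, applied to the formula quoted at the top of the thread, gives a small sketch like this (the keyed case is transcribed from that same post, so treat the transcription as an assumption):

```python
import math

def _cells(length: int) -> int:
    # Track space for one field, rounded up to whole 34-byte cells
    # at each step, per the 3390 Reference Summary rule Joel cites.
    return 9 + math.ceil((length + 6 * math.ceil((length + 6) / 232) + 6) / 34)

def recs_per_track_3390(datalen: int, keylen: int = 0) -> int:
    d = _cells(datalen)
    k = _cells(keylen) if keylen else 0
    return 1729 // (10 + k + d)

print(recs_per_track_3390(27998))  # 2  -- half-track blocks
print(recs_per_track_3390(32760))  # 1  -- maximum BLKSIZE
print(recs_per_track_3390(4096))   # 12
print(recs_per_track_3390(80))     # 78
```

The keyless results match the usual published 3390 capacities (e.g. twelve 4KiB blocks or seventy-eight 80-byte blocks per track).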
Joel C. Ewing
--
Joel C. Ewing, Bentonville, AR ***@acm.org

Tom Marchant
2017-05-22 14:25:27 UTC
Post by Jesse 1 Robinson
The first (maybe only) hardware I know of that claimed no wasted
space was STK Iceberg, which was touted as being so virtual that
an emulated 3390 track actually left no unused track bits. I never
worked with one, but I heard horror stories about *all* the data
getting wasted when the Iceberg lost its brains and couldn't find
anything. ;-(
I worked with an RVA, which was an IBM-branded Iceberg. It worked
very well for us. IIRC, it allocated space on the disk for logical tracks in
sectors, and would allocate the number of sectors required for the
data on each track. The data on the track was compressed before
allocating space for it.

It used RAID 6 in the disk array, with two parity disks in each raid
group, improving reliability of the back end.

One consequence of the compression was that there was no update
in place. When a track was updated, the entire updated track was
written to a new location on disk, and the index for that track was
updated to point to the new location.

Flashcopy was achieved by copying the indexes to the data and
incrementing the count of logical tracks represented by those sectors.

There are still manuals for the RVA:
https://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/handheld/Connected/Shelves/CP6BKS03
--
Tom Marchant

Mike Schwab
2017-05-23 05:07:22 UTC
Every time you deal with a block you execute a bit of code to deblock
the records. To fill a Mod 9 volume to test TDMF, I copied a large
dataset with a small block size to a dataset with half track blocking
on a spare volume, then copied the first data set to several other
datasets to fill the volume. The first copy to the half track
blocksize took about 10 minutes. The subsequent several copies ran
much faster and also took about 10 minutes.

On Fri, May 19, 2017 at 1:03 PM, Lizette Koehler
Post by Lizette Koehler
List -
I have gone through a few manuals and cannot determine the answer to the
following questions. Any guidance is appreciated
1) Can the SDB be adjusted from half-track to another setting (quarter or
full)?
2) Are there any new best practices for SDB that have changed in the last 20
years?
3) Is Half-track still considered OPTIMUM?
Since the storage arrays are so fast, I would think that maybe full track would
not be that much of a performance impact any more.
I have been working offlist with a couple of people with JCL and BLKSIZE and
SPACE Sizing. I am trying to figure out how to make the formula more accurate.
When I calculate space - I use full trace blocking. However, I find when SDB
uses half-track, my math is off.
Any thoughts or help would be great.
Lizette Koehler
statistics: A precise and logical method for stating a half-truth inaccurately
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Charles Mills
2017-05-23 13:13:17 UTC
The subsequent several copies ran much faster and also took about 10 minutes.
But it was a much faster 10 minutes?

Charles


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Mike Schwab
Sent: Monday, May 22, 2017 10:08 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: SDB (system determined Blksize)

Every time you deal with a block you execute a bit of code to deblock the records. To fill a Mod 9 volumes to test TDMF I copied a large dataset with a small block size to a dataset with half track blocking on a spare volume, then copied the first data set to several other datasets to fill the volume. The first copy to the half track blocksize took about 10 minutes. The subsequent several copies ran much faster and also took about 10 minutes.

Mike Schwab
2017-05-23 21:59:13 UTC
About a dozen copies at 1/2 track block size vs 1 copy from small
block size to 1/2 track in the same elapsed time.
Post by Charles Mills
The subsequent several copies ran much faster and also took about 10 minutes.
But it was a much faster 10 minutes?
Charles
-----Original Message-----
Sent: Monday, May 22, 2017 10:08 PM
Subject: Re: SDB (system determined Blksize)
Every time you deal with a block you execute a bit of code to deblock the records. To fill a Mod 9 volumes to test TDMF I copied a large dataset with a small block size to a dataset with half track blocking on a spare volume, then copied the first data set to several other datasets to fill the volume. The first copy to the half track blocksize took about 10 minutes. The subsequent several copies ran much faster and also took about 10 minutes.
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Jesse 1 Robinson
2017-05-23 22:11:31 UTC
If I understand the details:

First copy operation read a whole bunch of smallish blocks and wrote out SDB blocks at 2/track. Lots of overhead on reads, much less so on writes. I would expect this result.

Second copy operation both read and wrote SDB blocks, so minimal overhead on both sides. I would expect this result.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Mike Schwab
Sent: Tuesday, May 23, 2017 3:00 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: SDB (system determined Blksize)

About a dozen copies at 1/2 track block size vs 1 copy from small block size to 1/2 track in the same elapsed time.
Post by Charles Mills
The subsequent several copies ran much faster and also took about 10 minutes.
But it was a much faster 10 minutes?
Charles
-----Original Message-----
On Behalf Of Mike Schwab
Sent: Monday, May 22, 2017 10:08 PM
Subject: Re: SDB (system determined Blksize)
Every time you deal with a block you execute a bit of code to deblock the records. To fill a Mod 9 volumes to test TDMF I copied a large dataset with a small block size to a dataset with half track blocking on a spare volume, then copied the first data set to several other datasets to fill the volume. The first copy to the half track blocksize took about 10 minutes. The subsequent several copies ran much faster and also took about 10 minutes.
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?


Pommier, Rex
2017-05-24 13:13:26 UTC
The point Charles was making (albeit tongue-in-cheek) was that Mike's comment about the second copies was ambiguous. The first time I read it, I had the exact same thought as Charles - it took 10 minutes to do the first copy then each of the following copies took 10 (faster) minutes. When I re-read it, I saw that Mike was meaning that the combined total of the several copies took a total of 10 minutes. I, for one, got a chuckle out of Charles' reply.

Rex

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Jesse 1 Robinson
Sent: Tuesday, May 23, 2017 5:12 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: SDB (system determined Blksize)

If I understand the details:

First copy operation read a whole bunch of smallish blocks and wrote out SDB blocks at 2/track. Lots of overhead on reads, much less so on writes. I would expect this result.

Second copy operation both read and wrote SDB blocks, so minimal overhead on both sides. I would expect this result.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Mike Schwab
Sent: Tuesday, May 23, 2017 3:00 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: SDB (system determined Blksize)

About a dozen copies at 1/2 track block size vs 1 copy from small block size to 1/2 track in the same elapsed time.
Post by Charles Mills
The subsequent several copies ran much faster and also took about 10 minutes.
But it was a much faster 10 minutes?
Charles
-----Original Message-----
On Behalf Of Mike Schwab
Sent: Monday, May 22, 2017 10:08 PM
Subject: Re: SDB (system determined Blksize)
Every time you deal with a block you execute a bit of code to deblock the records. To fill a Mod 9 volumes to test TDMF I copied a large dataset with a small block size to a dataset with half track blocking on a spare volume, then copied the first data set to several other datasets to fill the volume. The first copy to the half track blocksize took about 10 minutes. The subsequent several copies ran much faster and also took about 10 minutes.
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

