Discussion:
Temporary Data Sets
Buckton, T. (Theo)
2017-10-03 13:38:14 UTC
Hi,

Can somebody direct me to documentation on temporary data sets? I need to understand the characteristics and restrictions of these &&TEMP data sets:
- Space limits;
- Can a temporary data set extend over more than one volume? If so, would a multi-volume temp data set pose a problem if it is referenced again by a different DD statement in the same job step?

Regards
Theo


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
Blaicher, Christopher Y.
2017-10-03 13:53:43 UTC
Temporary data sets are just like permanent data sets; they simply go away at the end of the step if you specify DISP=(NEW,DELETE), or at the end of the job if you specify DISP=(NEW,PASS).
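As a minimal sketch of this (the program, step, ddnames, and input data set name are hypothetical, not from the thread), a temporary data set created and passed in one step, then consumed and deleted in the next:

```jcl
//* Hypothetical sketch: pass a temporary data set between steps.
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=&&TEMP,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(5,5)),RECFM=FB,LRECL=80
//* &&TEMP survives STEP1 only because of PASS; DELETE below
//* removes it when STEP2 ends.
//STEP2    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=&&TEMP,DISP=(OLD,DELETE)
//SYSUT2   DD SYSOUT=*
```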

Chris Blaicher
Technical Architect
Mainframe Development
P: 201-930-8234 | M: 512-627-3803
E: ***@syncsort.com

Syncsort Incorporated
2 Blue Hill Plaza #1563
Pearl River, NY 10965
www.syncsort.com

Data quality leader Trillium Software is now a part of Syncsort.


Paul Gilmartin
2017-10-03 13:54:23 UTC
The JCL Ref. says:

REF=*.ddname
...
| VOL=REF obtains the volume serial numbers ...

I assume use of plural implies multiple volumes are supported.

| from the referenced ... earlier DD statement. ...
| No other information is retrieved by VOL=REF; in particular it does not obtain
| the volume sequence number, volume count, or data set sequence number.

Ugh. So the programmer must keep the data set sequence number on a
sticky note on a desktop.

But I suppose that VOL=REF is superfluous for catalogued data sets and
that multiple temp data sets are rarely stacked on volumes.
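To illustrate the VOL=REF usage quoted above (the ddnames, unit, and space values here are made up), a second temporary allocation placed on the volumes of an earlier DD:

```jcl
//* Hypothetical sketch: VOL=REF copies only the volume serial
//* numbers from the referenced DD, per the JCL Reference text
//* quoted in this message.
//DD1 DD DSN=&&WORK1,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(CYL,(10,10))
//DD2 DD DSN=&&WORK2,DISP=(NEW,PASS),VOL=REF=*.DD1,SPACE=(CYL,(10,10))
```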

-- gil

Steve Smith
2017-10-03 14:09:54 UTC
He would use a DSN referback, not VOL=REF. And no, multi-volume isn't an issue. As C. Blaicher said above, the only differences are that the system generates the DSN and tries to get rid of the data set automatically.

sas

N.B. By "DSN" I mean "dataset name", not "dataset", which I'm annoyed I have to say.
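A sketch of the DSN referback Steve describes (step names, ddnames, and programs are hypothetical):

```jcl
//* Hypothetical sketch: DSN=*.stepname.ddname refers back to the
//* data set allocated by an earlier DD, with no VOL=REF needed.
//STEP1 EXEC PGM=MYPGM1
//OUT   DD DSN=&&TEMP,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(TRK,(30,30))
//STEP2 EXEC PGM=MYPGM2
//IN    DD DSN=*.STEP1.OUT,DISP=(OLD,DELETE)
```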


Tom Marchant
2017-10-03 14:09:00 UTC
AFAIK there are no special limits for temporary data sets that are
imposed by the system, but there may be restrictions imposed by
the way SMS is set up in your shop. Check with your storage
administrators.
--
Tom Marchant

Tom Marchant
2017-10-03 14:14:36 UTC
IIRC, when referring to a temporary data set in another DD statement
in the same step, VOL=REF is required.
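If Tom's recollection is right, a same-step re-reference might look like this sketch (the program, ddnames, and space values are hypothetical; check the JCL Reference for your release before relying on it):

```jcl
//* Hedged sketch: two DDs in one step referencing the same
//* temporary data set, with VOL=REF back to the first allocation.
//STEP1 EXEC PGM=MYPGM
//DD1   DD DSN=&&TEMP,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(TRK,(15,15))
//DD2   DD DSN=&&TEMP,DISP=SHR,VOL=REF=*.DD1
```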
--
Tom Marchant

R.S.
2017-10-03 16:11:07 UTC
There are no such disadvantages or restrictions on using temporary data sets. However, if you are concerned about them, you can skip temporary data sets altogether and use a regular MVS data set with DISP=DELETE on the last DD statement referencing it.

BTW: in the case of a multi-step temporary data set, one can use VOL=REF or DSN=&&SOME. IMHO the latter is much more convenient.
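The alternative described above as a sketch (the data set name and programs are hypothetical): a regular, cataloged data set that the last referencing DD deletes:

```jcl
//* Hypothetical sketch: a non-temporary data set used as scratch
//* space; the final DISP=(OLD,DELETE) cleans it up at step end.
//STEP1 EXEC PGM=MYPGM1
//OUT   DD DSN=HLQ.WORK.SCRATCH,DISP=(NEW,CATLG,DELETE),
//         UNIT=SYSDA,SPACE=(CYL,(5,5))
//STEP2 EXEC PGM=MYPGM2
//IN    DD DSN=HLQ.WORK.SCRATCH,DISP=(OLD,DELETE)
```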
--
Radoslaw Skorupka
Lodz, Poland




Lizette Koehler
2017-10-03 17:56:20 UTC
What problem are you seeing with TEMP datasets?

Is it something your storage admin might have in the ACS Code that is causing an issue?

As others have stated, temporary data sets are just like named data sets; they just go away at step or job end, depending on the JCL.


Lizette
Edward Finnell
2017-10-04 04:48:19 UTC
For decades MVS has honored the concept of public, storage, and private DASD. There are numerous SHARE papers on how to configure DASD subsystems to reduce contention and optimize throughput; WSC under Ray Wicks produced many of them. One of my favorites was 'The Big Pitcher'. Properly administered, SMS can enhance the basic concepts and augment them with storage overflow.

If we had more info on the problem, better suggestions could be provided. One of the old tricks was to preallocate sortwks and pass them through the life of the job. No need to worry about VOL=REF.

Anne & Lynn Wheeler
2017-10-04 06:11:00 UTC
back when CKD were real ... (rather than various kinds of simulation on industry fixed-block disks ... all that rotational positioning, arm motion, and track-length detail is now fiction) ... I was increasingly pointing out that disk wasn't keeping up with computer technology, and by the early 80s was saying that disk relative system throughput had declined by a factor of ten since the 60s (disk throughput increased 3-5 times, while processor and memory throughput increased 40-50 times).

Some disk division executive took exception and assigned the division performance group to refute the statements. After several weeks they eventually came back and effectively said that I had slightly understated the problem. The analysis was then respun as disk configuration recommendations for improving system throughput ... SHARE presentation B874. Old post with part of the early-80s comparison:
http://www.garlic.com/~lynn/93.html#31
old posts with pieces of B874
http://www.garlic.com/~lynn/2001l.html#56
http://www.garlic.com/~lynn/2006f.html#3

note that memory is the new disk ... current latency for a cache miss and memory access, measured in processor cycles, is similar to 60s disk latency measured in 60s processor cycles ... it is part of the motivation for out-of-order execution, branch prediction, speculative execution, and hyperthreading ... work that can go on while waiting on a stalled instruction (waiting for memory on a cache miss) ... these show up in z196 (accounting for at least half the performance improvement over z10) ... much of this has been in other platforms for decades.

trivia: the 195 pipeline had out-of-order execution ... but no branch prediction or speculative execution ... so conditional branches stalled the pipeline, and most applications would only run at half the 195's rated performance. I got dragged into a proposal to hyperthread the 195 ... two instruction streams simulating a multiprocessor ... two simulated processors each running programs at around half throughput ... which would then keep the 195 running at rated speed. It was never done ...

IBM hyper/multi threading patents mentioned in this post about the end
of ACS/360
https://people.cs.clemson.edu/~mark/acs_end.html

from Amdahl interview in the above:

IBM management decided not to do it, for it would advance the computing
capability too fast for the company to control the growth of the
computer marketplace, thus reducing their profit potential. I then
recommended that the ACS lab be closed, and it was.

... snip ...

end of the article has some of the acs/360 features that show up more
than 20yrs later in es/9000.
--
virtualization experience starting Jan1968, online at home since Mar1970

Paul Gilmartin
2017-10-04 06:25:52 UTC
Temporary data sets are a major convenience; one of the few facilities of
Classic OS that I miss in UNIX. But the Passed Data Set Queue is a sorry
kludge. A smarter interpreter would simply keep temp data sets until the
last job step referencing them, then scratch them, similar to the processing
of data set ENQs nowadays. The programmer should be able to reference
a temp data set multiple times in a step without the VOL=REF circumvention.

I believe the current recommendation is not to preallocate sortwks; DFSORT
knows best.

-- gil
