Discussion:
Sample JCL for file transfer using NJE/TCPIP
Nathan Astle
2017-05-20 06:37:04 UTC
Hi

Could someone please help me with the sample JCL for transferring a
file(SMF) Using NJE TCPIP link.


Regards
Nathan

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
Edward Finnell
2017-05-20 09:07:17 UTC
What have you tried?

//JOBCARD JOB
//STEP1 EXEC PGM=IKJEFT01,REGION=4096K,DYNAMNBR=16
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
TRANSMIT nodename.userid DSN('my.big.honkin.smf')
//*
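
On the receiving side, a matching batch RECEIVE pulls the file off spool. A minimal sketch (the dataset name is invented; in batch, the line after RECEIVE answers its prompt with the target dataset name):

```jcl
//JOBCARD JOB
//STEP1 EXEC PGM=IKJEFT01,REGION=4096K,DYNAMNBR=16
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
RECEIVE
DA('my.received.smf')
//*
```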


In a message dated 5/20/2017 1:38:12 A.M. Central Daylight Time,
***@GMAIL.COM writes:

Could someone please help me with the sample JCL for transferring a
file(SMF) Using NJE TCPIP link.


Nathan Astle
2017-05-20 10:53:03 UTC
Hi Ed

Apologies

I was trying to understand whether, if XMIT is used instead of FTP, I will
have to run a TSO RECEIVE command on the target LPAR to make the file
available on the target LPAR.






Lizette Koehler
2017-05-20 17:17:18 UTC
TRSMAIN and TSO XMIT both take the input file and build it into fixed-length records. The output file contains the information needed to reconstruct the file in its original format (LRECL, BLKSIZE, DSORG, RECFM, and so on), which is used to restore the file to its original state.


So some tools that can be used to set up a file to be transmitted:
TSO XMIT
TRSMAIN
DFDSS
FDR products

The SEQ and PDS or PDSE files are easy enough to work with

VB, VBS files or SVC Dump datasets (for example) provide more of a challenge


The preferred method for transmitting files from mainframe to another location will depend on your shop's standards.

When I need to move SMF or SVC Dumps, I will sometimes TRSMAIN the file. Then I can use FTP or other transmission product to send the file where it needs to go. When it lands on another Mainframe host, I can run TRSMAIN again to unterse it. Note: TRSMAIN has pack and super-pack options
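
As a rough illustration of the terse step (dataset names invented; check the TRSMAIN level in your shop for exact parms), a PACK job might look like:

```jcl
//TERSE    EXEC PGM=TRSMAIN,PARM='PACK'
//SYSPRINT DD SYSOUT=*
//INFILE   DD DISP=SHR,DSN=MY.SMF.DAILY
//OUTFILE  DD DSN=MY.SMF.TERSED,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(100,100),RLSE)
```

On the receiving host you run the same program with PARM='UNPACK', with INFILE pointing at the tersed file and OUTFILE at the dataset to restore.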

If I have a PDS/PDSE/SEQ file, I will sometimes use TSO XMIT. The file is written to an output dataset. Then the output dataset is transmitted to a new host and RECEIVEd.


Once the file resides on your target system, it can remain in the TRSMAIN or TSO XMIT form until you need it. Then you will need to run a process to restore the file. TRSMAIN is UNPACK, TSO XMIT is RECEIVE.






TSO XMIT (Transmit) takes a file and compresses it (if possible)

You can transmit the file and place it on JES2 SPOOL. Or you can XMIT node.id filename OUTDSN('output file name')

The OUTDSN can then be transmitted with FTP or other transmission products.



It is used as a way to transport files between z/OS systems. Even when there is a non-mainframe in the middle.

So like TRSMAIN, it can provide a transport file.

Note: Some shops restrict the number or amount of data that can be XMIT'd via spool, so using OUTDSN is the preferred mechanism.
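
Put concretely (a sketch; the node, userid, and dataset names are invented), the two ends under IKJEFT01 would be:

```jcl
XMIT NODENAME.USERID DA('MY.PDS') OUTDSN('MY.PDS.XMIT')
```

Then ship MY.PDS.XMIT with FTP in binary, and on the target:

```jcl
RECEIVE INDSN('MY.PDS.XMIT')
DA('MY.PDS.RESTORED')
```

where the second line answers RECEIVE's restore-parameters prompt.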


Here is a nice write-up of TRSMAIN

http://jensd.be/271/linux/terse-unterse-and-transfer-datasets-between-zos-and-other-platforms-via-ftp


Here are some helpful tips on VSAM

http://www.mainframetips.com/transfer-zos-vsam-file-via-internet/



If this does not help clarify the concepts, let us know what else you need to know.



Lizette





Robert Prins
2017-05-20 17:38:41 UTC
On 2017-05-20 17:17, Lizette Koehler wrote:
> TRSMAIN and TSO XMIT both take the input file and build it into fixed length
> records. The output file contains the information needed to reconstruct the
> file in its original format. So LRECL, BLKSIZE, DSORG, RECFM, etc... which
> it will use to restore the file to its original state.
>
> Here is a nice write up of TRSMAIN
>
> http://jensd.be/271/linux/terse-unterse-and-transfer-datasets-between-zos-and-other-platforms-via-ftp

Additionally, there are versions of Terse for various white-box OSes at
https://groups.yahoo.com/neo/groups/hercules-390/files

Look for tersepc.zip

Robert
--
Robert AH Prins
robert.ah.prins(a)gmail.com

i***@FOXMAIL.COM
2017-10-12 02:41:32 UTC
Hi all

We will transfer a large number of sequential files from the mainframe to Red Hat Linux 6.5.

Normally we use FTP or XCOM to transfer files.

Could you tell us whether there is a better way to transfer files from the mainframe to Red Hat Linux 6.5

to save transfer time? For example: compress the file on the mainframe, transfer the file, and uncompress it on Linux.

Thanks a lot!

Regards,

Jason Cai

Rob Schramm
2017-10-12 02:49:06 UTC
I know a lot of performance work has been done on FTP. I don't know about
XCOM. There were some nice presentations on FTP performance.

Rob Schramm


i***@FOXMAIL.COM
2017-10-12 02:53:16 UTC
Rob

Could you share some nice presentations on FTP performance ?

Thanks a lot!

Jason Cai

From: Rob Schramm
Date: 2017-10-12 10:50
To: IBM-MAIN
Subject: Re: Transfer a large number of sequential file from mainframe to redhat linux V6.5
I know ftp has done a lot of work in performance. I don't know about
xcom. There were some nice presentations on FTP performance.

Rob Schramm



Elardus Engelbrecht
2017-05-20 11:42:25 UTC
Nathan Astle wrote:

>I was trying to understand if XMIT is used versus FTP then I will have to run a TSO RECEIVE command from the target LPAR to ensure the file is available in Target LPAR

E Finnell gave you a good answer. (I used that JCL years ago, before TCP/IP was in use.)

FTP is faster (no checkpointing; my decision not to use it), while TRANSMIT / RECEIVE clutters up your JES2 spool but has good checkpointing features.

Yes, I am using FTP to transfer my SMF data, RACF DB, etc. from several LPARs to one LPAR, simply for archival purposes.

Sorry, but I can't help you with your original question, simply because I have standardised on using FTP.

I can supply a sample JCL for FTP of SMF data if you wish to experiment with that.
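
In outline, such a job is just the batch FTP client with its subcommands in the INPUT DD (the hostname, credentials, and dataset names below are invented):

```jcl
//FTPSTEP  EXEC PGM=FTP,PARM='(EXIT'
//SYSPRINT DD SYSOUT=*
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
target.host.example
myuser
mypass
binary
put 'MY.SMF.TERSED' smf.trs
quit
/*
```

Because SMF data is RECFM=VBS, you would normally terse (or otherwise flatten) it first, as discussed elsewhere in this thread, rather than FTP the raw dataset.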

Groete / Greetings
Elardus Engelbrecht

Paul Gilmartin
2017-05-20 13:29:28 UTC
On Sat, 20 May 2017 06:43:22 -0500, Elardus Engelbrecht wrote:
>
>Yes, I am using FTP to transfer my SMF data, RACF DB, etc. from several LPARs to one LPAR, simply for archival purposes.
> ...
>I can supply a sample JCL for FTP of SMF data if you wish to experiment with that.
>
I'll not ask for JCL; I won't use it. But I'm curious. I believe SMF is RECFM=VBS;
RACF (I'm less sure) is VSAM. What FTP options work for those? Must index, data,
and cluster be transferred separately, or does FTP know what to do? Should
the receiving data sets be pre-allocated? Etc.? Do you verify that those archives
are functional?

A Google search suggests it's tricky. I suspect it might be true likewise for
database stores on non-mainframe systems.

Thanks,
gil

Paul Gilmartin
2017-05-20 18:19:33 UTC
On Sat, 20 May 2017 10:18:40 -0700, Lizette Koehler wrote:
>
>TSO XMIT (Transmit) takes a file and compresses it (if possible)
>
I was unaware of a compression facility, and I don't see it mentioned
in the Ref., so I remain skeptical. But I noticed an ENCIPHER option
of which I had previously been unaware.

But I believe OUTDDNAME can be ALLOCATEd to a POSIX pipe that
could feed compress.

>Note: Some shops restrict the number or amount of data that can be XMIT'd via Spool. So using OUTDSN is preferred mechanism
>
I thought this restriction applied even with OUTDD/OUTDSN specified.
And TRANSMITting a PDS involves an IEBCOPY unload which could
exhaust space and I see no way the programmer can control that.

I could imagine TRANSMIT OUTDDNAME() | compress | ssh ***@node "cat >file".

The Ref. mentions that VSAM, ISAM, keyed, and user labels are not supported.
I don't understand "user labels". Is this this the exception that proves the
rule, implying that other sorts of labels are supported?

The output of TRANSMIT is not suitable for use as SYSIN because "/*" may
occur in columns 1-2 of OUTDSN.

>Here is a nice write up of TRSMAIN
>
>http://jensd.be/271/linux/terse-unterse-and-transfer-datasets-between-zos-and-other-platforms-via-ftp
>
>
>Here is some helpful tips on VSAM
>
>http://www.mainframetips.com/transfer-zos-vsam-file-via-internet/
>
Good summary of TRANSMIT and TRSMAIN.

-- gil

Elardus Engelbrecht
2017-05-21 10:11:03 UTC
Paul Gilmartin wrote:

>>Yes, I am using FTP to transfer my SMF data, RACF DB, etc. from several LPARs to one LPAR, simply for archival purposes.
>>I can supply a sample JCL for FTP of SMF data if you wish to experiment with that.

>I'll not ask for JCL; I won't use it.

This is fine. You can run a script or other software to do your FTP. (or FTPS or SFTP)


>But I'm curious. I believe SMF is RECFM=VBS;

Yes.


>RACF (I'm less sure) is VSAM.

No, it is PSU (PS and Unmovable). Other attributes are mandated by IBM.


>What FTP options work for those?

It is tricky. I will post the FTP settings needed to FTP such weird datasets. Or you can search IBM-MAIN; I believe they have already been posted in the past.


>Must index, data, and cluster be transferred separately, or does FTP know what to do.

I am not sure about FTPing such VSAM clusters. But if I want to FTP a 'difficult' dataset or group of datasets, I would rather use DFDSS DUMP or TERSE and then transfer that dataset. On the receiving end, I would DFDSS RESTORE or UNTERSE it.
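
A sketch of the DFDSS DUMP side (dataset names invented; the output file then travels by FTP in binary):

```jcl
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUT      DD DSN=MY.DSS.DUMPFILE,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(200,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(MY.VSAM.CLUSTER)) -
       OUTDDNAME(OUT)
/*
```

On the receiving end, the same program runs a RESTORE DATASET(INCLUDE(**)) INDDNAME(...) CATALOG against the transferred dump file.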


>Should the receiving data sets be pre-allocated?

Preferably and optional, just to make your work easier for scheduled regular FTP tasks.


> Do you verify that those archives are functional?

Of course! I use those datasets for later processing. I hate nasty surprises if I find a broken dataset or two.


>A Google search suggests it's tricky. I suspect it might be true likewise for database stores on non-mainframe systems.

Thanks. It is indeed tricky and there were posts in the past about transfers of such 'difficult' datasets.

Thanks for your comments.

Groete / Greetings
Elardus Engelbrecht

Edward Gould
2017-05-22 14:31:54 UTC
>
> I am not sure about FTP such VSAM clusters. But if I want to FTP a 'difficult' dataset or groups of datasets, I would rather use DFDSS DUMP or TERSE and then transfer that dataset. On the receiving end, I would DFDSS RESTORE or UNTERSE it.
>


For simplicity's sake I would recommend Export to seq, FTP, Import.
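
That is, something like this on the sending side (names invented; TEMPORARY leaves the source cluster intact):

```jcl
//EXPORT   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SEQOUT   DD DSN=MY.EXPORTED.SEQ,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  EXPORT MY.VSAM.CLUSTER OUTFILE(SEQOUT) TEMPORARY
/*
```

FTP the sequential file in binary, then run IDCAMS again on the target with IMPORT INFILE(...) to rebuild the cluster.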

Ed
Jesse 1 Robinson
2017-05-21 17:11:05 UTC
For a truly first class solution--at a first class price--there's Connect Direct (formerly NDM). Connect Direct offers guaranteed delivery even across IPL outages. It also works between z/OS and UNIX. The product has to be functional at both ends, likely increasing the cost further.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Paul Gilmartin
2017-05-21 19:18:41 UTC
On Sun, 21 May 2017 05:12:00 -0500, Elardus Engelbrecht wrote:
>
>>RACF (I'm less sure) is VSAM.
>
>No, it is PSU (PS and Unmovable). Other attributes are mandated by IBM.
>
"Unmovable" would seem to imply uncopyable; the copy would have to go
in a different place. But there must be some provision for backing it up,
and little point in trying to move it to another system with such as FTP.

Why not VSAM? Performance? Antiquity? It feels as if RACF has a
built-in DB engine.

-- gil

Jesse 1 Robinson
2017-05-21 20:28:35 UTC
RACF data base is not required to be PSU. PS with RECFM F is fine. RACF predates VSAM.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Clark Morris
2017-05-21 23:33:53 UTC
[Default] On 21 May 2017 13:28:35 -0700, in bit.listserv.ibm-main
***@SCE.COM (Jesse 1 Robinson) wrote:

>RACF data base is not required to be PSU. PS with RECFM F is fine. RACF predates VSAM.
>
Since VSAM came in with virtual storage, are you saying RACF was on
OS/360?

Clark Morris
Elardus Engelbrecht
2017-05-22 06:21:02 UTC
Ok - this thread is now really drifting. ;-)

Clark Morris wrote:

>Since VSAM came in with virtual storage, are you saying RACF was on OS360?

I don't know how to answer you, but RACF V1.1 came out in September 1976. I'm not sure which operating system(s) were then active.


Jesse Robinson wrote:

>RACF data base is not required to be PSU. PS with RECFM F is fine. RACF predates VSAM.

It is true. PSU is not required, but recommended. Wait until someone moves it, say with space management; then you get weird abends, and an IPL is probably waiting.

From Security Server RACF System Programmer's Guide this quote:

"Guideline: Make a RACF database unmovable. If an active database is moved from where RACF thinks it is, for example, by a DFSMSdss DEFRAG operation on the volume, results are unpredictable. Requests for RACF services might fail, and profile updates might be lost. If you choose to make a RACF database movable, you should put procedural controls in place that
guarantee that the RACF database is not moved unless an RVARY INACTIVE command is issued."


Paul Gilmartin wrote:

>"Unmovable" would seem to imply uncopyable; the copy would have to go in a different place. But there must be some provision for backing it up, and little point in trying to move it to another system with such as FTP.

I should have said: unmovable because of IBM recommendations. See above.

I should also have said that when I FTP SMF data or the RACF DB, I run FTP against a _COPY_ of that dataset to avoid any interference with the live dataset(s).


>Why not VSAM? Performance? Antiquity? It feels as if RACF has a built-in DB engine.

Ask Big Blue why not! But the RACF DB is a sort of database built specifically for speed and use by the RACF subsystem. Running IRRUT200 will quickly confirm that "RACF has a built-in DB engine". The RACF DB is split into segments and blocks identified by relative byte addresses and index entries, so the layout is more or less similar to VSAM or another type of indexed data.

Now, we are back to the scheduled discussions of transfers of SMF with JCL or so... ;-)

Groete / Greetings
Elardus Engelbrecht

Paul Gilmartin
2017-10-12 03:20:21 UTC
On Thu, 12 Oct 2017 10:42:25 +0800, ibmmain wrote:
>
> We will transfer a large number of sequential file from mainframe to redhat linux V6.5.
>
>Normally we use FTP or XCOM to transfer file.
>
> Could you tell us wherther there is the best way to transfer file from mainframe to redhat linux V6.5 for
>
Are the files Classic data sets or z/OS UNIX files?

Are the files binary or text?

If binary, does it matter if record boundaries are lost?

If text, are you sure they have no embedded control characters which
may introduce spurious line breaks?

Do they have embedded packed decimal data which need special treatment?

There's little to be done with program objects/load modules.

Might the files be NFS mounted on Linux?

>saving the transfer's time? For example: compress the file on the mainframe, transfer the file, and uncompress it on Linux.
>
Which compression technique(s) are you considering?

For UNIX files only, pax will perform EBCDIC-->ASCII conversion (large
variety of code pages) and has a compression option compatible with Linux.

.zip is fairly portable. You may find a port of zip to z/OS that meets your needs
and can be extracted on Redhat.
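
For the pax route, a hedged sketch via BPXBATCH (paths invented; note that the exact -z semantics, LZW compress versus gzip, differ between pax implementations, so check both man pages):

```jcl
//PAXSTEP  EXEC PGM=BPXBATCH,
// PARM='SH pax -wzf /tmp/mydata.pax.Z /u/myuser/data'
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
```

On the Linux side, something like pax -rzf mydata.pax.Z (or uncompress followed by pax -rf) would unpack it.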

-- gil

W Mainframe
2017-10-12 03:57:36 UTC
A suggestion, which may or may not be better: you could try converting your volume to a CCKD image using the Hercules utilities. Once it is converted, you can take advantage of compression and save transfer time. I did the same thing some time ago, with success. By the way, there is one problem: you need to move your datasets to the same volume. Make sense?

Dan


i***@FOXMAIL.COM
2017-10-12 06:24:55 UTC
Hi

We could move our datasets to the same volume and convert the volume to a CCKD image using the Hercules utilities.

After we transfer the CCKD image to Linux, how will the CCKD image be used by Linux?

Thanks a lot!

Best Regards

Jason Cai


>A suggestion, which may or may not be better: you could try converting your volume to a CCKD image using the Hercules utilities. Once it is converted, you can take advantage of compression and save transfer time. I did the same thing some time ago, with success. By the way, there is one problem: you need to move your datasets to the same volume. Make sense?

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
David Mingee
2017-10-19 02:44:06 UTC
Permalink
Raw Message
Hello, another option would be to add the line MODE C before the put or mput line in your FTPs. This does compression only during the FTP, which could speed up the transfers.
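
For illustration, the subcommand would sit in the FTP input stream ahead of the transfer commands; a sketch only (host, credentials, and data set names are placeholders), with the caveat that both ends of the connection must support compressed mode:

```
//INPUT    DD *
linuxhost
USERID
PASSWORD
MODE C
cd /the/receiving/directory
put 'MY.UNLOADED.TABLE' my.unloaded.table
QUIT
/*
```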

John McKown
2017-10-19 12:43:47 UTC
Permalink
Raw Message
On Wed, Oct 18, 2017 at 9:45 PM, David Mingee <***@prodigy.net> wrote:

> Hello, another option would be to add the line MODE C before the put or
> mput line in your FTP'S. This does compression only during the FTP. This
> could speed up the FTP's.
>

This would require that the FTP server, on Linux in the OP's case,
supports MODE C. From what I've read, this seems to be a z/OS-only option.


--
I just child proofed my house.
But the kids still manage to get in.


Maranatha! <><
John McKown

R.S.
2017-10-19 13:39:11 UTC
Permalink
Raw Message
What about NFS?

More generally: what is the workflow?
Usually the transfer of many big files is not the goal in itself. Maybe the
files should be opened and read by some application. NFS (or DFS/SMB)
would save time by simply avoiding the transfer before the
applications start reading data.

BTW: I know many cases where the files were transferred from node to
node several times (a multi-hop FTP server chain) and the simplest and
cheapest tuning method was to cut the chain and transfer directly from
source to target.
Or just use file sharing, as I suggested above.


--
Radoslaw Skorupka
Lodz, Poland





i***@FOXMAIL.COM
2017-10-12 06:07:29 UTC
Permalink
Raw Message
Hi

> Are the files Classic data sets or z/OS UNIX files?

The files are classic data sets produced by unloading z/OS DB2 tables.

> Are the files binary or text?

The files are text

> Which compression technique(s) are you considering?

We are open to any compression technique.

Thanks a lot!

Jason Cai

John McKown
2017-10-12 12:18:49 UTC
Permalink
Raw Message
On Thu, Oct 12, 2017 at 1:08 AM, ***@foxmail.com <***@foxmail.com>
wrote:

> Hi
>
> > Are the files Classic data sets or z/OS UNIX files?
>
> The files are Classic data sets from unloading zos db2 table.
>
> > Are the files binary or text?
>
> The files are text
>
> > Which compression technique(s) are you considering?
>
> Any compression technique(s) we are considering
>
> Thanks a lot!
>
> Jason Cai
>
>
My first thought is to "tune" your network. I'm assuming you are talking
from z/OS to Linux via TCP/IP over Ethernet. From what little I know, most
seem to use an MTU of 1500. You might get better throughput if you could
configure the MTU to be larger. This is often called "jumbo frames" (
https://en.wikipedia.org/wiki/Jumbo_frame ).

Whether to compress or not is basically a trade off between how long it
takes to compress-transfer-uncompress vs. just transfer. This will depend
on the power of the boxes on each end and the "size of the pipe" between
them.
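
That trade-off can be put into rough numbers; a back-of-the-envelope sketch (Python; all throughput and ratio figures below are hypothetical):

```python
def transfer_seconds(size_bytes, link_bytes_per_s, ratio=1.0,
                     compress_bytes_per_s=None, uncompress_bytes_per_s=None):
    """Rough wall-clock estimate: optional compress, send, optional uncompress."""
    t = (size_bytes * ratio) / link_bytes_per_s        # time on the wire
    if compress_bytes_per_s:
        t += size_bytes / compress_bytes_per_s         # sender CPU time
    if uncompress_bytes_per_s:
        t += size_bytes / uncompress_bytes_per_s       # receiver CPU time
    return t

GB = 10**9
plain_fast = transfer_seconds(10 * GB, 125 * 10**6)                     # 1 Gb/s link
zipped_fast = transfer_seconds(10 * GB, 125 * 10**6, 0.3, 50e6, 200e6)
plain_slow = transfer_seconds(10 * GB, 12.5 * 10**6)                    # 100 Mb/s link
zipped_slow = transfer_seconds(10 * GB, 12.5 * 10**6, 0.3, 50e6, 200e6)
print(f"1 Gb/s:   plain {plain_fast:6.0f}s vs compressed {zipped_fast:6.0f}s")
print(f"100 Mb/s: plain {plain_slow:6.0f}s vs compressed {zipped_slow:6.0f}s")
```

With these made-up figures, compression loses on the fast link (the CPU time dominates) and wins on the slow one, which is the general pattern.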

Assuming a "large pipe" (that is, 10 Gb/s or larger):
I am not certain how you generate the list of files to be transferred. But
what I would possibly do is multiple concurrent FTP jobs running at the
same time. For example: job 1 in the job stream finds all the DSNs to be
transferred. For each DSN, it creates an FTP "put" control card. Assuming
REXX as the language of choice, each control card is recorded in a stem
variable. Something like: ftp_control.0=<number of ftp control cards>;
ftp_control.1="put ...."; and so on. Now determine the number of concurrent
FTPs you want to do. Divide the number of ftp control cards by this number.
Create a normal batch job where each job runs a single FTP step which has
"n" ftp_control cards in it. Submit each job to z/OS using the internal
reader. Have enough initiators running to run those jobs. Let them all (or
a subset) run at one time. The extreme of this is to have each job do a
single FTP. And have "n" initiators running those jobs. You could generate,
say, 20 FTP jobs, and run 5 at a time by having 5 initiators set up &
dedicated to running just those jobs (by dedicating a specific JOBCLASS to
this purpose).

Anyway, I was just thinking that instead of serially compressing,
transferring, and uncompressing the data; it might be faster with a "large
pipe" to do multiple transfers concurrently.
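
The fan-out itself — dividing the control cards among a fixed number of concurrent jobs — is simple to sketch generically (Python; the DSN names are invented):

```python
def split_round_robin(items, n_jobs):
    """Deal a list of DSNs into n_jobs roughly equal batches, one per FTP job."""
    batches = [[] for _ in range(n_jobs)]
    for i, item in enumerate(items):
        batches[i % n_jobs].append(item)
    return batches

# Invented DSNs standing in for the unloaded tables.
dsns = [f"DB2.UNLOAD.TABLE{i:03d}" for i in range(1, 21)]
for jobno, batch in enumerate(split_round_robin(dsns, 5), start=1):
    cards = "".join(f"put '{dsn}'\n" for dsn in batch)
    print(f"--- job {jobno} ({len(batch)} transfers) ---")
    print(cards, end="")
```

Each batch becomes the "put" control cards of one submitted FTP job; the same partitioning logic carries over directly to the REXX approach described above.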


--
I just child proofed my house.
But the kids still manage to get in.


Maranatha! <><
John McKown

i***@FOXMAIL.COM
2017-10-12 16:03:14 UTC
Permalink
Raw Message
Hi John

Could you give us a sample of the ftp job or the rexx?

Thanks a lot !

Jason Cai


Clark Morris
2017-10-12 17:18:18 UTC
Permalink
Raw Message
[Default] On 12 Oct 2017 09:03:14 -0700, in bit.listserv.ibm-main
***@FOXMAIL.COM (***@foxmail.com) wrote:

>Hi John
>
> Could you give us a sample of the ftp job or the rexx?
>
>Thanks a lot !
A major consideration, at least on the z Series side, is the CPU and
MSU cost of doing the transfer with compression versus the cost of
doing it without compression.

Clark Morris

John McKown
2017-10-12 16:58:21 UTC
Permalink
Raw Message
On Thu, Oct 12, 2017 at 11:04 AM, ***@foxmail.com <***@foxmail.com>
wrote:

> Hi John
>
> Could you give us a sample of the ftp job or the rexx?
>

I don't know that I can give any REXX code which would be generic enough
to be helpful. Where do you get your list of DSNs to be transferred? Where
do you want the files to be stored on the Linux box? Do you want the file
name on the Linux box to be the same as the DSN on the z/OS system? If not,
how do you map the z/OS DSN to the Linux file name?

The REXX program would need to be able to read, or generate, this list. The
following "snippet" of REXX may be of some help as to how to do this.

/* REXX PROGRAM DB2FTP */
/* DO SOMETHING TO GENERATE THE DSN. STEM VARIABLES */
/* */
/* SET UP THE TOP OF THE JOB IN THE JOB. STEM */
JOB.1='//DB2FTP JOB CLASS=F,MSGCLASS=H'
JOB.2="//SENDFILE EXEC PGM=FTP,PARM='LINUX (EXIT'"
JOB.3='//SYSPRINT DD SYSOUT=*'
JOB.4='//OUTPUT DD SYSOUT=*'
JOB.5='//INPUT DD *'
JOB.6='USERID' /* CHANGE TO THE PROPER VALUE */
JOB.7='PASSWORD' /* CHANGE TO THE PROPER VALUE */
JOB.8='cd /the/receiving/directory' /* EVERY FILE GOES HERE */
JOB.9='PUT DSN' /* PLACE HOLDER, REPLACED LATER */
JOB.10='QUIT'
JOB.11='/*'
JOB.12='//'
JOB.13='/*EOF'
JOB.0=13
/* AT THIS POINT, THE DSNS TO BE TRANSFERRED ARE IN THE
DSN. STEM WITH DSN.0 BEING THE NUMBER OF ENTRIES 1..?
SO CREATE & SUBMIT A JOB FOR EACH DSN .
*/
DO I=1 TO DSN.0 /* SUBMIT A JOB FOR EACH DSN */
JOB.9="put '"DSN.I"'" /* REPLACE THE DSN IN THE PUT COMMAND */
"EXECIO "JOB.0" DISKW INTRDR (FINIS STEM JOB."
END
/* END OF THIS PART */

I don't know where the actual list of DSNs would come from, so I don't know
how to set up the DSN. stem variables. You need to modify the "cd" command
in JOB.8 to be the correct directory in Linux. The names of the files in
this directory will be the same as the DSN on z/OS. I don't try to "map"
them to something "reasonable".

This could be run in batch TSO with JCL similar to:

//SUBMITS EXEC PGM=IKJEFT01,REGION=0M
//SYSTSPRT DD SYSOUT=*
//SYSEXEC DD DISP=SHR,DSN=dsn.containing.rexx.program
//INTRDR DD SYSOUT=(*,INTRDR)
//SYSTSIN DD *
%DB2FTP
/*
//

You will need to change the values in the CLASS= and MSGCLASS= to be
correct. You will need to have multiple initiators of the proper class
started. And, critically, the class specified in the CLASS= must have the
DUPL_JOB attribute set to NODELAY. The z/OS (JES2) command to do this for
(example) class F would be: $TJOBCLASS(F),DUPL_JOB=NODELAY. The default is
DELAY and if left at that, the job will run one at a time. Or you need to
change the REXX somehow to guarantee that the JOB name is unique.



>
> Thanks a lot !
>
> Jason Cai
>
>


--
I just child proofed my house.
But the kids still manage to get in.


Maranatha! <><
John McKown

Paul Gilmartin
2017-10-12 04:13:01 UTC
Permalink
Raw Message
On Thu, 12 Oct 2017 03:54:44 +0000, W Mainframe wrote:

>A suggestion... Could be better or not... You could try converting your volume to a CCKD image using the Hercules utilities. Once converted, you will take advantage of compression and save time on the transfer. I did the same thing, with success. BTW there is one problem: you need to move your datasets to the same volume. Make sense?
>
Is this useful only if OP runs Hercules on his RedHat?

>On Thu, 12 Oct 2017 10:42:25 +0800, ibmmain wrote:
>>
>>  We will transfer a large number of sequential files from mainframe to Red Hat Linux V6.5.
>>
>>Normally we use FTP or XCOM to transfer file.
>>
>> Could you tell us whether there is a best way to transfer files from mainframe to Red Hat Linux V6.5 for

-- gil
