Discussion:
RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
John McKown
2017-06-14 14:29:39 UTC
Permalink
This is just a kind of "speculation" on my part. It is to avoid problems
when doing transfers of data between z/OS and Intel-based platforms,
i.e. when I want to do a binary transfer of a file from Linux or <blech>
Windows to z/OS for processing, perhaps because of the complexity of the
data and the "non z/OS end" people being uncooperative about translating
their data to something like non-binary XML or JSON.

I would like a way to specify that specific integer variables be in
"Little-Endian" format instead of the IBM z's "Big-Endian". It seems to
me that this should be "simple": just use the "Load Reverse" and "Store
Reverse" instructions instead of the normal "Load" and "Store"
instructions. There are 2-, 4-, and 8-byte variants of these instructions.

In addition to the above, I am wondering about the
reading/writing/processing of character data in ASCII instead of EBCDIC.
I know of the ASCII compile option (which I can't review right now due
to the abominable KC being unavailable - I am really PISSED at IBM for
this unreliability). Well, enough of that digression. Does the ASCII
compile option allow for reading, writing, and processing of ASCII char
data? Of course, what I'd really like is a "simple" (not iconv) way to
intermix ASCII & EBCDIC characters. And, yes, I know that I'm opening up
a whole can of mega-worms with this "easy desire".
--
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
Anthony Giorgio
2017-06-14 14:46:21 UTC
Permalink
John,

When I write C programs that have to move data between little and
big-endian systems, I generally use a byte-swapping function to
manipulate the specific fields. If you know that data coming into z/OS
is little-endian, then you could just perform the translation once on
each field, and then treat it as a native value for the rest of its life.

There are some standard C library functions that perform little-to-big
endian byte swaps (htons, htonl), but they are more intended for an x86
machine working with network data. However, GCC seems to have some
extensions that you can call to perform this byte swapping, as detailed
here:

https://stackoverflow.com/a/105339/9816

I don't know if XLC has something similar, as I haven't used it in over
a decade.
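As a sketch of that convert-once-on-input approach (the `WireRecord` layout and field names here are invented for illustration), assembling each field byte by byte gives a conversion that is correct on either kind of host:

```cpp
#include <cstdint>

// Assemble a 32-bit value from little-endian bytes; correct on any
// host, because it never reinterprets memory through a wider pointer.
static uint32_t load_le32(const unsigned char *p) {
    return (uint32_t)p[0]         | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

// Hypothetical record as it arrives from the Intel side: two 32-bit
// little-endian fields, back to back.
struct WireRecord { uint32_t id; uint32_t amount; };

// Convert each field exactly once on input; from here on the struct
// holds native values.
static WireRecord from_wire(const unsigned char *buf) {
    WireRecord r;
    r.id     = load_le32(buf);
    r.amount = load_le32(buf + 4);
    return r;
}
```

After `from_wire`, the rest of the program treats the fields as ordinary native integers, which is exactly the "treat it as a native value for the rest of its life" idea.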
--
Anthony Giorgio
Advisory Software Engineer
IBM z Systems Platform Performance Manager
Twitter: @a_giorgio

Bernd Oppolzer
2017-06-14 14:47:51 UTC
Permalink
What you really would need is an attribute on the variable definition
(in addition to a compile option) which tells whether a variable is
BIGENDIAN or LITTLEENDIAN or, in the case of a char variable or
string, what encoding it has. PL/I, AFAIK, has all that.

If you mix BIGENDIAN and LITTLEENDIAN variables in an
expression, there is no problem; they can be processed together,
and each one is stored in its proper format.

When assigning char strings or single chars of different
encoding, the compiler provides the translation.

The compiler option supplies the default when no ENDIANNESS
or encoding scheme has been specified on the variable definition.

HTH,
kind regards

Bernd
Charles Mills
2017-06-14 15:19:33 UTC
Permalink
Would be a very amusing C++ exercise to build some structs littleend16_t and so forth and develop overloads for +, =, ++, etc. If you did it right it would be very cute.

I guess something ditto for a string_ascii_t type but I am less enamored of that exercise.
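A minimal sketch of what such a struct could look like; the name `littleend32_t` and the convert-on-every-access design are assumptions for illustration, not an existing type. The implicit conversion to `uint32_t` is what lets +, ==, and friends work without writing every overload by hand:

```cpp
#include <cstdint>

// Sketch of a little-endian integer type: storage is kept byte-reversed
// relative to the host, and every access converts, so in expressions it
// behaves like a plain uint32_t. On a big-endian machine like z, the
// bytes in memory would sit in Intel order.
class littleend32_t {
    uint32_t raw_;  // byte-reversed relative to host order

    static uint32_t swap(uint32_t v) {
        return (v << 24) | ((v & 0xFF00u) << 8) |
               ((v >> 8) & 0xFF00u) | (v >> 24);
    }
public:
    littleend32_t(uint32_t native = 0) : raw_(swap(native)) {}
    operator uint32_t() const { return swap(raw_); }   // read access
    littleend32_t &operator=(uint32_t v) { raw_ = swap(v); return *this; }
    littleend32_t &operator++() { return *this = uint32_t(*this) + 1; }
};
```

Whether a compiler would collapse swap() into a single Load Reverse instruction is a separate question from whether the class behaves correctly.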

Charles

Tony Harminc
2017-06-14 16:13:26 UTC
Permalink
Post by Charles Mills
Would be a very amusing C++ exercise to build some structs littleend16_t
and so forth and develop overloads for +, =, ++, etc. If you did it right
it would be very cute.
For all these proposals other than actual built-in knowledge of the
endianness by the compiler, I think the question is whether the compiler
will generate good code to do the conversions. There have been Load/Store
Reversed instructions in various sizes since the original z/Architecture,
but I'll bet that the kind of C++ structs/classes you mention won't end
up generating them.

Tony H.

Charles Mills
2017-06-14 16:51:52 UTC
Permalink
Might be able to implement using __lrv() and friends.

Charles
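A hedged sketch of that idea, with a portable shift-based fallback: `__lrv()` is the z/OS XL C builtin from <builtins.h> (its exact signature should be checked against IBM's documentation before relying on it), and GCC's `__builtin_bswap32()` plays the same role elsewhere:

```cpp
#include <cstdint>

// Byte-reverse a 32-bit value, preferring a compiler builtin when one
// is known to be available. Under z/OS XL C the equivalent would be the
// __lrv() family from <builtins.h>, which maps directly to the LRV/LRVR
// hardware instructions mentioned in this thread.
static inline uint32_t byterev32(uint32_t v) {
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_bswap32(v);      // single bswap/rev instruction
#else
    return (v << 24) | ((v & 0xFF00u) << 8) |
           ((v >> 8) & 0xFF00u) | (v >> 24);
#endif
}
```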


John McKown
2017-06-14 17:00:28 UTC
Permalink
Post by Charles Mills
Might be able to implement using __lrv() and friends.
Charles
Excellent! Many thanks, got the pointer to:
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.cbcpx01/cbc1p2374.htm
for all those "builtins.h" routines.
--
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown

Paul Gilmartin
2017-06-14 16:39:22 UTC
Permalink
Post by John McKown
I would like a way to specify that either specific integer variables be in
"Little-Endian" format instead of the IBM z's "Big Endian". It seems to me
that this should be "simple" by just using the "Load Reverse" and "Store
Reverse" instructions instead of the normal "Load" and "Store" instructions.
There are 2, 4, & 8 byte variants of these instructions.
Sounds like a new data type [modifier] with automatic or explicit
coercions. Coercing functions could be set to no-op, conditioned on
the platform endianness.
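That "no-op conditioned on the platform endianness" coercion can be sketched at compile time with the GCC/clang predefined `__BYTE_ORDER__` macro (other compilers would need their own test):

```cpp
#include <cstdint>
#include <cstring>

// Little-endian-to-native coercion: compiles to nothing on a
// little-endian host and to a byte swap on a big-endian one.
static inline uint32_t le32_to_native(uint32_t wire) {
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
    return wire;                       // host matches wire format: no-op
#else
    return (wire << 24) | ((wire & 0xFF00u) << 8) |
           ((wire >> 8) & 0xFF00u) | (wire >> 24);
#endif
}

// Load raw wire bytes in host order (the input to the coercion above).
static inline uint32_t load_raw(const unsigned char *p) {
    uint32_t v;
    std::memcpy(&v, p, sizeof v);
    return v;
}
```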
Post by John McKown
In addition to the above, I am wondering about the
reading/writing/processing of character data in ASCII instead of EBCDIC. I
know of the ASCII compile option (which I can't review right now due to
the abominable KC being unavailable right now - I am really PISSED at IBM
for this unreliability). Well, enough on that digression. Does the ASCII
compile option allow for reading, writing, and processing of ASCII char
The I/O is handled by autoconversion by the kernel, provided you tag
your files and draw the needed pentagrams on your cell wall.
Post by John McKown
data? Of course, what I'd really like is a "simple" (not iconv) way to
intermix ASCII & EBCDIC characters.
Need to supply compiler option; XPLINK; and draw more pentagrams in
environment variables. Then things such as sprintf() work surprisingly
well with all ASCII arguments. Can't then mix with EBCDIC, AFAIK.
Sockets is documented as supported; I haven't tried. Environment
variables work surprisingly well.

Documented library deficiencies: Curses, X11, ...
Post by John McKown
... And, yes, I know that I'm opening up a
whole can of mega-worms with this "easy desire".
-- gil

Frank Swarbrick
2017-06-14 20:28:21 UTC
Permalink
There are big-endian machines other than z. Shouldn't you investigate how the issue is dealt with outside of z before asking for z exclusive language extensions?

Paul Gilmartin
2017-06-14 21:43:41 UTC
Permalink
Post by Frank Swarbrick
There are big-endian machines other than z. Shouldn't you investigate how the issue is dealt with outside of z before asking for z exclusive language extensions?
Yes. But big-endian is a vanishing breed. Motorola 68K is gone; PowerPC is
mostly gone, and its endianness was selectable. There's little interest in
SPARC. Others?

Dismayingly, big-endian may come to be perceived as the same sort
of lunatic fringe as EBCDIC, and support will evaporate with the scarcity
of testing platforms. But the EBCDIC nightmare can be avoided: Linux
runs fine on z hardware.

-- gil

Frank Swarbrick
2017-06-14 21:57:21 UTC
Permalink
I won't try to justify EBCDIC, but big-endian rules! :-)

Clark Morris
2017-06-15 21:18:00 UTC
Permalink
[Default] On 14 Jun 2017 14:57:21 -0700, in bit.listserv.ibm-main
Post by Frank Swarbrick
I won't try to justify EBCDIC, but big-endian rules! :-)
Unfortunately, little-endian, which comes from the same warped thinking
that went into the COND JCL statement, seems to be ubiquitous.
Little-endian is illogical and a royal pain in so many ways. The
developers of it should be ashamed of themselves.

Clark Morris
Frank Swarbrick
2017-06-15 22:04:37 UTC
Permalink
The following link gives a few reasons why little-endian might be preferred: https://softwareengineering.stackexchange.com/questions/95556/what-is-the-advantage-of-little-endian-format. As a human I still prefer big-endian, regardless of any perceived advantages for little-endian!


Frank

architecture - What is the advantage of little endian ... <https://softwareengineering.stackexchange.com/questions/95556/what-is-the-advantage-of-little-endian-format>
softwareengineering.stackexchange.com
There are arguments either way, but one point is that in a little-endian system, the address of a given value in memory, taken as a 32, 16, or 8 bit width, is the same.
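The point in that excerpt can be made concrete with byte buffers, so the result is the same on any host:

```cpp
#include <cstdint>

// With 42 laid out in little-endian byte order, the 8-, 16-, and 32-bit
// reads that start at the SAME lowest address all see 42. In a
// big-endian layout, the narrower reads would have to start at
// different offsets.
static uint32_t read_le(const unsigned char *p, int nbytes) {
    uint32_t v = 0;
    for (int i = nbytes - 1; i >= 0; --i)
        v = (v << 8) | p[i];          // byte i carries weight 256^i
    return v;
}
```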




John McKown
2017-06-16 01:32:13 UTC
Permalink
On Thu, Jun 15, 2017 at 5:05 PM, Frank Swarbrick <
Post by Frank Swarbrick
The following link gives a few reasons why little-endian might be
preferred: https://softwareengineering.stackexchange.com/questions/95556/what-is-the-advantage-of-little-endian-format. As a human I still
prefer big-endian, regardless of any perceived advantages for little-endian!
I must disagree with the "as a human" portion of the above. It is more a
"as a speaker of a Western European language using Arabic numbering"
(in UNICODE these are called "European digits"). We got our writing
direction, left to right, from the Romans (I'm not sure where they got
it). But we got our positional numbering system from the Hindus via the
Arabs (thus the "Arabic Numerals"). We write the most significant digit
on the left because the Arabs did it that way. But the Arab languages
are written right to left. So, from their viewpoint, they are reading
the least significant digit first. I.e. Arabic Numerals are written
"little endian" in Arabic. Europeans just wrote them in the same
physical direction because that's how they learned it. Using "little
endian" is actually easier. How we do it now: 100 + 10 = 110. In our
minds we must "align" the trailing digits (or the decimal point). But if
it were written 001 + 01, you could just add the digits in the order in
which we write them, without "aligning" them in your mind. In the
example, add the first two 0s together. Then add the second 0 & second
1. Finally "add" the last 1 just by writing it out. In a totally logical
universe, the least significant digit (or bit, if we are speaking
binary) should be the first digit (or bit) encountered as we read. So
the number one in an octet (aka byte) would be written 0x10 in hex, or
b'10000000' in binary. And just to round out this totally off topic
weirdness, we can all be glad that we don't write in boustrophedon style
(switching directions every line); ref: http://wordinfo.info/unit/3362/ip:21
--
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown

David W Noon
2017-06-16 15:42:38 UTC
Permalink
On Thu, 15 Jun 2017 20:33:13 -0500, John Mckown
(***@GMAIL.COM) wrote about "Re: RFE? xlc compile option
for C integers to be "Intel compat" or Little-Endian" (in
Post by John McKown
On Thu, Jun 15, 2017 at 5:05 PM, Frank Swarbrick <
Post by Frank Swarbrick
The following link gives a few reasons why little-endian might be
preferred: https://softwareengineering.stackexchange.com/questions/95556/what-is-the-advantage-of-little-endian-format. As a human I still
prefer big-endian, regardless of any perceived advantages for little-endian!
I must disagree with the "as a human" portion of the above. It is more a
"as a speaker of a Western European language using Arabic numbering"
(in UNICODE these are called "European digits")
. We got our writing direction, left to right, from the Romans (I'm not
sure where they got it). But we got our positional numbering system from
the Hindus via the Arabs (thus the "Arabic Numerals"). We write the most
significant digit on the left because the Arabs did it that way. But the
Arab languages are written right to left. So, from their view point, they
are reading the least significant digit first. I.e. Arabic Numerals are
written "little endian" in Arabic. Europeans just wrote it the same
physical direction
because that's how they learned it. Using "little endian" is actually
easier.
This would only be reflective of little-endian ordering if it used full
bit reversal. Computers use bits, so any Arabic ordering would require
all the bits to be reversed, not the bytes.
Post by John McKown
How we do it now: 100 + 10 = 110. In our minds we must "align" the
trailing digits (or the decimal point). But if it were written 001 + 01,
you could just add the digits in the order in which we write them without
"aligning" them in your mind. In the example, add the first two 0s
together. Then add the second 0 & second 1. Finally "add" the last 1 just
by writing it out. In a totally logical universe, the least significant
digit (or bit if we are speaking binary) should be the first digit (or bit)
encountered as we read. So the number one in an octet
​ (aka byte)​
, in hex, would be written 0x10 or in binary as b'10000000'.
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal), and the
valid sizes of those registers are determined by the architecture.

In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.

As someone who was programming DEC PDP-11s more than 40 years ago, I can
assure everybody that little-endian sucks.
Post by John McKown
And just to
round out this totally off topic weirdness, we can all be glad that we
don't write in boustrophedon style
​ (switch directions every line) ref: http://wordinfo.info/unit/3362/ip:21​
That's all Greek to me.
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
***@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

Frank Swarbrick
2017-06-16 17:27:04 UTC
Permalink
If it were a true comparison I would expect x01234578 to be stored as x87543210 rather than x78452301. I guess that's because I read hex thinking of each hex digit independently, rather than 1 byte (2 hex digits) at a time.

Frank
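The two readings can be checked mechanically: byte (2-hex-digit) reversal, which is what little-endian storage actually does, versus single-hex-digit (nibble) reversal:

```cpp
#include <cstdint>

// Byte reversal: what little-endian storage actually does to a word.
static uint32_t byte_reverse(uint32_t v) {
    return (v << 24) | ((v & 0xFF00u) << 8) |
           ((v >> 8) & 0xFF00u) | (v >> 24);
}

// Hex-digit (nibble) reversal: the "each digit independently" reading.
static uint32_t nibble_reverse(uint32_t v) {
    uint32_t r = 0;
    for (int i = 0; i < 8; ++i) {     // 8 hex digits in 32 bits
        r = (r << 4) | (v & 0xFu);    // peel low digit, push onto result
        v >>= 4;
    }
    return r;
}
```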

________________________________
From: IBM Mainframe Discussion List <IBM-***@LISTSERV.UA.EDU> on behalf of John McKown <***@GMAIL.COM>
Sent: Thursday, June 15, 2017 7:33 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian

On Thu, Jun 15, 2017 at 5:05 PM, Frank Swarbrick <
Post by Frank Swarbrick
The following link gives a few reasons why little-endian might be
preferred: https://softwareengineering.stackexchange.com/questions/
[Loading Image...

Newest Questions - Software Engineering Stack Exchange<https://softwareengineering.stackexchange.com/questions/>
softwareengineering.stackexchange.com
Q&A for professionals, academics, and students working within the systems development life cycle
Post by Frank Swarbrick
95556/what-is-the-advantage-of-little-endian-format. As a human I still
prefer big-endian, regardless of any perceived advantages for little-endian!
I must disagree with the "as a human" portion of the above. It is more a
"as a speaker of a Western European language using Arabic numering"
( in UNICODE these are called "European digits")
. We got our writing direction, left to right, from the Romans (I'm not
sure where they got it). But we got our positional numbering system from
the Hindus via the Arabs (thus the "Arabic Numerals"). We write the most
significant digit on the left because they Arabs did it that way. But the
Arab languages are written right to left. So, from their view point, they
are reading the least significant digit first. I.e. Arabic Numerals are
written "little endian" in Arabic. Europeans just wrote it the same
physical
direction
because that's how they learned it. Using "little endian" is actually
easier. How we do it now: 100 + 10 = 110. In our minds we must "align" the
trailing digits (or the decimal point). But if it were written 001 + 01,
you could just add the digits in the order in which we write them without
"aligning" them in your mind. In the example, add the first two 0s
together. Then add the second 0 & second 1. Finally "add" the last 1 just
by writing it out. In a totally logical universe, the least significant
digit (or bit if we are speaking binary) should be the first digit (or bit)
encountered as we read. So the number one in an octet
(aka byte)
, in hex, would be written 0x10 or in binary as b'10000000'. And just to
round out this totally off topic weirdness, we can all be glad that we
don't write in boustrophedon style
(switch directions every line) ref: http://wordinfo.info/unit/3362/ip:21
boustro- - Word Information <http://wordinfo.info/unit/3362/ip:21%E2%80%8B>
wordinfo.info
Greek: turning like oxen in plowing; alternate lines in opposite directions; zig-zag procedure
Post by Frank Swarbrick
Frank
architecture - What is the advantage of little endian ...
<https://softwareengineering.stackexchange.com/questions/95556/what-is-the-advantage-of-little-endian-format>
There are arguments either way, but one point is that in a little-endian
system, the address of a given value in memory, taken as a 32, 16, or 8 bit
width, is the same.
--
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN

Rob Scott
2018-05-09 15:35:55 UTC
Permalink
If a field in a control block is marked as being a programming interface, it does not matter what language is used to reference it; REXX "STORAGE" is just as valid as assembler "MVC".

What is being pointed out is that a REXX exec that uses "STORAGE" to access reverse-engineered or non-programming interface fields of control blocks is liable to break in some fashion in future releases or maintenance levels.

IPLINFO is just a REXX exec and it executes in problem state.

CSVAPF is the interface to retrieve the APF list in the same way that UCBSCAN is the interface to retrieve UCB info.

Are there OCO control blocks that you can access in memory to "bypass" the supported interface? Yes.
Is this a good idea - I would suggest not.

As stated before, my advice would be to convert the REXX usage of such non-GUPI fields to use some sort of external function that uses supported interfaces and returns the information using IRXEXCOM.

Rob


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Paul Gilmartin
Sent: Wednesday, May 9, 2018 3:58 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: AC(1)
<snip>
I believe you. The code that was shown was assembler. Regardless,
being an exec still means that the choice was made not to use an
intended programming interface.
</snip>
If a data area is described with "Programming Interface Information"
and then referenced via Rexx STORAGE calls, is that considered a choice
to not use an intended programming interface?
...
MVC is "an intended programming interface". A carelessly authorized program can do a lot of damage with MVC.
I am an automation administrator with regrettably zero assembler
programming skills, and tend to use such Rexx calls to alleviate the
painful process of MVS command output parsing to get information, if
available, when I can.
Might one use fork() (BPX1FRK, SYSCALL fork, ...) to run unvetted Rexx code such as IPLINFO safely unauthorized in a separate address space, returning results via a pipe or socket to an authorized caller?

(Is IPLINFO free of the constraints of TSO?)

-- gil

Jesse 1 Robinson
2017-06-15 21:41:50 UTC
Permalink
I guess I could use a bit of (gentle) education. S/360 was the first architecture I learned, so little-endian seems pretty natural. My occasional forays into big-endian mystified me (still) as to why it would be preferable to interpret an address from right to left, including literal street addresses. I don't read decimal numbers that way. Why is it any more sensible for binary (hex)? Or am I misremembering my hazy knowledge of big-endian?

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Clark Morris
Sent: Thursday, June 15, 2017 2:19 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by Frank Swarbrick
I won't try to justify EBCDIC, but big-endian rules! :-)
Unfortunately, little-endian, which comes from the same warped thinking that went into the COND JCL statement, seems to be ubiquitous.
Little-endian is illogical and a royal pain in so many ways. The developers of it should be ashamed of themselves.

Clark Morris
Post by Frank Swarbrick
________________________________
behalf of Paul Gilmartin
Sent: Wednesday, June 14, 2017 3:44 PM
Subject: Re: RFE? xlc compile option for C integers to be "Intel
compat" or Little-Endian
Post by Frank Swarbrick
There are big-endian machines other than z. Shouldn't you investigate how the issue is dealt with outside of z before asking for z exclusive language extensions?
Yes. But big-endian is a vanishing breed. Motorola 68K is gone;
PowerPC is mostly gone, and its endianness was selectable. There's
little interest in Sparc. Others?
Dismayingly, big-endian may come to be perceived as the same sort of
lunatic fringe as EBCDIC, and support will evaporate with the scarcity
of testing platforms. But the EBCDIC nightmare can be avoided: Linux
runs fine on z hardware.
-- gil
Clark Morris
2017-06-15 21:47:34 UTC
Permalink
[Default] On 15 Jun 2017 14:41:50 -0700, in bit.listserv.ibm-main
Post by Jesse 1 Robinson
I guess I could use a bit of (gentle) education. S/360 was the first architecture I learned, so little-endian seems pretty natural. My occasional forays into big-endian mystified me (still) as to why it would be preferable to interpret an address from right to left, including literal street addresses. I don't read decimal numbers that way. Why is it any more sensible for binary (hex)? Or am I misremembering my hazy knowledge of big-endian?
S/360 was big-endian. The z series is predominantly big-endian with
little-endian capabilities. DEC and Intel can be blamed for
little-endian.

Clark Morris
-----Original Message-----
Sent: Thursday, June 15, 2017 2:19 PM
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by Frank Swarbrick
I won't try to justify EBCDIC, but big-endian rules! :-)
Unfortunately, little-endian which comes from the same warped thinking that went into the COND JCL statement seems to be ubiquitous.
Little-endian is illogical and a royal pain in so many ways. The developers of it should be ashamed of themselves.
Clark Morris
Post by Frank Swarbrick
________________________________
behalf of Paul Gilmartin
Sent: Wednesday, June 14, 2017 3:44 PM
Subject: Re: RFE? xlc compile option for C integers to be "Intel
compat" or Little-Endian
Post by Frank Swarbrick
There are big-endian machines other than z. Shouldn't you investigate how the issue is dealt with outside of z before asking for z exclusive language extensions?
Yes. But big-endian is a vanishing breed. Motorola 68K is gone;
PowerPC is mostly gone, and its endianness was selectable. There's
little interest in Sparc. Others?
Dismayinglly, big-endian may come to be perceived as the same sort of
lunatic fringe as EBCDIC, and support will evaporate with the scarcity
of testing platforms. But the EBCDIC nightmare can be avoided: Linux
runs fine on z hardware.
-- gil
Charles Mills
2017-06-17 00:30:52 UTC
Permalink
(As others pointed out, your terminology is reversed. I will ignore that and get it "right.")

FWIW, humans do arithmetic little-endian. To do the sum

1234
+5678
-----

You say "4 plus 8 equals 12, put down the 2 and carry the 1, 1 plus 3 plus 7 equals 11, ..."

You do everything that way except division. (That's why thousand-digit division comes relatively easily to crypto. You just work your way through it left to right until you get to the end.)

Charles


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Jesse 1 Robinson
Sent: Thursday, June 15, 2017 2:42 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian

I guess I could use a bit of (gentle) education. S/360 was the first architecture I learned, so little-endian seems pretty natural. My occasional forays into big-endian mystified me (still) as to why it would be preferable to interpret an address from right to left, including literal street addresses. I don't read decimal numbers that way. Why is it any more sensible for binary (hex)? Or am I misremembering my hazy knowledge of big-endian?

Jesse 1 Robinson
2017-06-15 22:39:01 UTC
Permalink
Thanks for being gentle. I had it backwards. I owned a hobby machine based on a Z89 processor where I learned the 'opposite orientation'. I should have headed straight to Wikipedia today before advertising my ignorance. ;-(

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Clark Morris
Sent: Thursday, June 15, 2017 2:48 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by Jesse 1 Robinson
I guess I could use a bit of (gentle) education. S/360 was the first architecture I learned, so little-endian seems pretty natural. My occasional forays into big-endian mystified me (still) as to why it would be preferable to interpret an address from right to left, including literal street addresses. I don't read decimal numbers that way. Why is it any more sensible for binary (hex)? Or am I misremembering my hazy knowledge of big-endian?
S360 was big-endian. z series are predominantly big-endian with little-endian capabilities. DEC and Intel can be blamed for little-endian.

Clark Morris
-----Original Message-----
On Behalf Of Clark Morris
Sent: Thursday, June 15, 2017 2:19 PM
Subject: (External):Re: RFE? xlc compile option for C integers to be
"Intel compat" or Little-Endian
Post by Frank Swarbrick
I won't try to justify EBCDIC, but big-endian rules! :-)
Unfortunately, little-endian which comes from the same warped thinking that went into the COND JCL statement seems to be ubiquitous.
Little-endian is illogical and a royal pain in so many ways. The developers of it should be ashamed of themselves.
Clark Morris
Post by Frank Swarbrick
________________________________
behalf of Paul Gilmartin
Sent: Wednesday, June 14, 2017 3:44 PM
Subject: Re: RFE? xlc compile option for C integers to be "Intel
compat" or Little-Endian
Post by Frank Swarbrick
There are big-endian machines other than z. Shouldn't you investigate how the issue is dealt with outside of z before asking for z exclusive language extensions?
Yes. But big-endian is a vanishing breed. Motorola 68K is gone;
PowerPC is mostly gone, and its endianness was selectable. There's
little interest in Sparc. Others?
Dismayinglly, big-endian may come to be perceived as the same sort of
lunatic fringe as EBCDIC, and support will evaporate with the scarcity
of testing platforms. But the EBCDIC nightmare can be avoided: Linux
runs fine on z hardware.
-- gil
Mike Schwab
2017-06-16 01:29:51 UTC
Permalink
The original reason was math: process the low-order byte, determine
the carry, store the result, increment the address, and process the
next byte, instead of first determining the end address and decrementing.

On Thu, Jun 15, 2017 at 5:39 PM, Jesse 1 Robinson
Post by Jesse 1 Robinson
Thanks for being gentle. I had it backwards. I owned a hobby machine based on a Z89 processor where I learned the 'opposite orientation'. I should have headed straight to Wikipedia today before advertising my ignorance. ;-(
-----Original Message-----
Sent: Thursday, June 15, 2017 2:48 PM
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by Jesse 1 Robinson
I guess I could use a bit of (gentle) education. S/360 was the first architecture I learned, so little-endian seems pretty natural. My occasional forays into big-endian mystified me (still) as to why it would be preferable to interpret an address from right to left, including literal street addresses. I don't read decimal numbers that way. Why is it any more sensible for binary (hex)? Or am I misremembering my hazy knowledge of big-endian?
S360 was big-endian. z series are predominantly big-endian with little-endian capabilities. DEC and Intel can be blamed for little-endian.
Clark Morris
-----Original Message-----
On Behalf Of Clark Morris
Sent: Thursday, June 15, 2017 2:19 PM
Subject: (External):Re: RFE? xlc compile option for C integers to be
"Intel compat" or Little-Endian
Post by Frank Swarbrick
I won't try to justify EBCDIC, but big-endian rules! :-)
Unfortunately, little-endian which comes from the same warped thinking that went into the COND JCL statement seems to be ubiquitous.
Little-endian is illogical and a royal pain in so many ways. The developers of it should be ashamed of themselves.
Clark Morris
Post by Frank Swarbrick
________________________________
behalf of Paul Gilmartin
Sent: Wednesday, June 14, 2017 3:44 PM
Subject: Re: RFE? xlc compile option for C integers to be "Intel
compat" or Little-Endian
Post by Frank Swarbrick
There are big-endian machines other than z. Shouldn't you investigate how the issue is dealt with outside of z before asking for z exclusive language extensions?
Yes. But big-endian is a vanishing breed. Motorola 68K is gone;
PowerPC is mostly gone, and its endianness was selectable. There's
little interest in Sparc. Others?
Dismayinglly, big-endian may come to be perceived as the same sort of
lunatic fringe as EBCDIC, and support will evaporate with the scarcity
of testing platforms. But the EBCDIC nightmare can be avoided: Linux
runs fine on z hardware.
-- gil
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Paul Gilmartin
2017-06-16 03:08:35 UTC
Permalink
Post by Clark Morris
Post by Frank Swarbrick
I won't try to justify EBCDIC, but big-endian rules! :-)
Unfortunately, little-endian which comes from the same warped thinking
that went into the COND JCL statement seems to be ubiquitous.
Little-endian is illogical and a royal pain in so many ways. The
developers of it should be ashamed of themselves.
There's a lot of epistemology here. People firmly believe the scheme they
learned earliest is Natural Law, whether little-endian vs big-endian or
EBCDIC vs. ASCII.

In both cases there were in the day minor hardware economies to flouting
established convention: programmed arithmetic could be done low-to-high
and existing punched cards could be translated to EBCDIC with fewer gates
than to ASCII.

JCL COND isn't "warped thinking"; merely tunnel vision. An assembler
programmer thinking of branching around a block of code if the CC mask
matches thought likewise of bypassing a job step if COND matches.

-- gil

Jesse 1 Robinson
2017-06-16 17:26:27 UTC
Permalink
(Thankfully this topic has warped into Friday.) As to which endian is more 'natural', it occurred to me that one hurdle an English speaker has in learning German is the disconnect between some numerals and the corresponding verbiage: we write '24' but say 'vierundzwanzig'. This endian reversal is limited; numbers over one hundred still contain the big value(s) on the left.

I can't say I fully understand all the posts on this subject--I could not really explain endianness (Wikipedia term) fully to someone else--but I'm on surer footing than I was before. Thanks for that!

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of David W Noon
Sent: Friday, June 16, 2017 8:44 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian

On Thu, 15 Jun 2017 20:33:13 -0500, John Mckown
Post by John McKown
On Thu, Jun 15, 2017 at 5:05 PM, Frank Swarbrick <
Post by Frank Swarbrick
The following link gives a few reasons why little-endian might be
preferred: https://softwareengineering.stackexchange.com/questions/
95556/what-is-the-advantage-of-little-endian-format. As a human I
still prefer big-endian, regardless of any perceived advantages for little-endian!
I must disagree with the "as a human" portion of the above. It is more
an "as a speaker of a Western European language using Arabic numbering"
(in Unicode these are called "European digits"). We got our
writing direction, left to right, from the Romans (I'm not sure where
they got it). But we got our positional numbering system from the
Hindus via the Arabs (thus the "Arabic Numerals"). We write the most
significant digit on the left because the Arabs did it that way. But
the Arab languages are written right to left. So, from their viewpoint,
they are reading the least significant digit first. I.e. Arabic
numerals are written "little endian" in Arabic. Europeans just wrote
it the same physical direction because that's how they learned it.
Using "little endian" is actually easier.
This would only be reflective of little-endian ordering if it used full bit reversal. Computers use bits, so any Arabic ordering would require all the bits to be reversed, not the bytes.
Post by John McKown
How we do it now: 100 + 10 = 110. In our minds we must "align" the
trailing digits (or the decimal point). But if it were written 001 +
01, you could just add the digits in the order in which we write them
without "aligning" them in your mind. In the example, add the first
two 0s together. Then add the second 0 & second 1. Finally "add" the
last 1 just by writing it out. In a totally logical universe, the
least significant digit (or bit if we are speaking binary) should be
the first digit (or bit) encountered as we read. So the number one in
an octet ​ (aka byte)​ , in hex, would be written 0x10 or in binary as
b'10000000'.
This is not the way computers do arithmetic. Adding, subtracting, etc., are performed in register-sized chunks (except packed decimal) and the valid sizes of those registers are determined by architecture.

In fact, on little-endian systems the numbers are put into big-endian order when loaded into a register. Consequently, these machines do arithmetic in big-endian.

As someone who was programming DEC PDP-11s more than 40 years ago, I can assure everybody that little-endian sucks.
Post by John McKown
And just to
round out this totally off topic weirdness, we can all be glad that we
don't write in boustrophedon style (switch directions every line)
ref: http://wordinfo.info/unit/3362/ip:21
That's all Greek to me.
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
***@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*


Paul Gilmartin
2017-06-16 17:54:50 UTC
Permalink
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal) and the
valid sizes of those registers is determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for
little-endian.
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really? I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the
least significant bit. I'd call that a bitwise little-endian. And it gives an
easy summation formula for conversion to unsigned integers.
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I can
assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you
feel profound relief when you discovered the alternative convention?)

IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for
sharing numeric data with IBM systems, or big-endian, which was wrong
for sharing text data.

For those who remain unaware on a Friday:
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics

-- gil

David W Noon
2017-06-16 21:55:19 UTC
Permalink
On Fri, 16 Jun 2017 12:55:53 -0500, Paul Gilmartin
(0000000433f07816-dmarc-***@LISTSERV.UA.EDU) wrote about "Re: RFE?
xlc compile option for C integers to be "Intel compat" or Little-Endian"
Post by Paul Gilmartin
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal) and the
valid sizes of those registers is determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for
little-endian.
AFAIAA, there are no little-endian platforms that perform decimal
arithmetic as such, except on a byte-by-byte basis in a loop.

The nearest I can offer is the Intel 80x87 FPU. This can load a packed
decimal number [in little-endian order] into a floating point register,
converting to IEEE binary floating point as it goes; reflexively, it can
store a binary floater into packed decimal. However, all arithmetic is
done as floating point.

In fact, I have seen only 2 hardware platforms that perform packed
decimal arithmetic: IBM and plug-compatible mainframes; Groupe Bull /
Honeywell-Bull / Honeywell/GE / General Electric mainframes derived from
the GE-600 series -- although these did not get packed decimal until
they became the Honeywell H-6000 series.
Post by Paul Gilmartin
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really?
Yes.
Post by Paul Gilmartin
I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the
least significant bit. I'd call that a bitwise little-endian. And it gives an
easy summation formula for conversion to unsigned integers.
The endianness is determined by where the MSB and LSB are stored. On IBM
machines the MSB is in the left-most byte of the register and the LSB in
the right-most byte. That is big-endian.

Ascribing indices to the bit positions in either order makes no
difference. It is the order of *storage* that determines endianness.
Post by Paul Gilmartin
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I can
assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you
feel profound relief when you discovered the alternative convention?)
The computers perform their arithmetic in whatever byte order the
hardware designers choose.

My first system was a clone of an IBM 360. I felt dismay when I first
read a core dump from a PDP-11.
Post by Paul Gilmartin
IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for
sharing numeric data with IBM systems, or big-endian, which was wrong
for sharing text data.
Text data were not a problem as the data were written as a byte stream.
Binary data were where endian differences arose.

Fortunately, DEC realized that their design was crap and added a
hardware instruction to put 16-bit binary integers into big-endian
order; it had the assembler mnemonic SWAB (SWAp Bytes). The company I
worked for in the 1970s exchanged data between many PDP-11s and a
central IBM 370, usually without problems.
Post by Paul Gilmartin
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics
I have long enjoyed Swift (not the programming language).
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
***@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*



Jesse 1 Robinson
2017-06-16 18:18:42 UTC
Permalink
TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) numerals, transmitted via Arabic, were the progenitor of our modern big-endian bias, I'd like to point out that Roman numerals--remember them, you old dudes?--are apparently big-endian. Lord knows who invented that convoluted system, but it persisted in academia and in commerce for centuries.

Friday off topic. I read somewhere that at the time of American independence circa 1776, it was de rigueur for an educated person to be able to do *arithmetic* in Roman numerals. You could not otherwise claim to be properly schooled. A footnote on the whimsy of stodgy education standards.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Paul Gilmartin
Sent: Friday, June 16, 2017 10:56 AM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal) and the
valid sizes of those registers is determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for little-endian.
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really? I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the least significant bit. I'd call that a bitwise little-endian. And it gives an easy summation formula for conversion to unsigned integers.
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I
can assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you feel profound relief when you discovered the alternative convention?)

IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for sharing numeric data with IBM systems, or big-endian, which was wrong for sharing text data.

For those who remain unaware on a Friday:
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics

-- gil


Paul Gilmartin
2017-06-16 18:46:06 UTC
Permalink
Post by Jesse 1 Robinson
TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) via Arabic numerals were the progenitor of our modern big-endian bias, I'd like to point out that Roman numerals--remember them you old dudes?--are apparently big-endian. Lord knows who invented that convoluted system, but it persisted in academia and in commerce for centuries.
Roman numerals belong in the Archaeology department, not in the Mathematics
department. Except for copyright notices; we can only hope they get all
better soon.
Post by Jesse 1 Robinson
Friday off topic. I read somewhere that at the time of American independence circa 1776, it was de rigueur for an educated person to be able to do *arithmetic* in Roman numerals. You could not otherwise claim to be properly schooled. A footnote on the whimsy of stodgy education standards.
The abacus or soroban was a technology of choice in much of the
Eastern Hemisphere until the advent of pocket-sized calculators.

-- gil

Clark Morris
2017-06-17 02:23:56 UTC
Permalink
[Default] On 16 Jun 2017 11:18:42 -0700, in bit.listserv.ibm-main
Post by Jesse 1 Robinson
TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) via Arabic numerals were the progenitor of our modern big-endian bias, I'd like to point out that Roman numerals--remember them you old dudes?--are apparently big-endian. Lord knows who invented that convoluted system, but it persisted in academia and in commerce for centuries.
As I recall, 9 is IX, not VIIII, and 90 is XC, not LXXXX. Is anyone
energetic enough to verify this? I am not tonight.

Clark Morris
Post by Jesse 1 Robinson
Friday off topic. I read somewhere that at the time of American independence circa 1776, it was de rigueur for an educated person to be able to do *arithmetic* in Roman numerals. You could not otherwise claim to be properly schooled. A footnote on the whimsy of stodgy education standards.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
-----Original Message-----
Sent: Friday, June 16, 2017 10:56 AM
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal) and the
valid sizes of those registers are determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for little-endian.
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really? I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the least significant bit. I'd call that a bitwise little-endian. And it gives an easy summation formula for conversion to unsigned integers.
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I
can assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you feel profound relief when you discovered the alternative convention?)
IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for sharing numeric data with IBM systems, or big-endian, which was wrong for sharing text data.
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics
-- gil
Steve Smith
2017-06-17 02:52:16 UTC
Permalink
1-10 by 1: I, II, III, IV, V, VI, VII, VIII, IX, X
10-100 by 10: X, XX, XXX, XL, L, LX, LXX, LXXX, XC, C
100-1000 by 100: C, CC, CCC, CD, D, DC, DCC, DCCC, CM, M

Combine as needed. I don't torture myself doing math with Roman Numerals,
but they are cool for many purposes. Much to my surprise, the Super Bowl
is sticking with them through LI and beyond, which is pretty rare these
days.

sas
Post by Clark Morris
[Default] On 16 Jun 2017 11:18:42 -0700, in bit.listserv.ibm-main
Post by Jesse 1 Robinson
TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) via
Arabic numerals were the progenitor of our modern big-endian bias, I'd like
to point out that Roman numerals--remember them you old dudes?--are
apparently big-endian. Lord knows who invented that convoluted system, but
it persisted in academia and in commerce for centuries.
As I recall 9 is IX not VIIII and 90 is XC not LXXXX. Is anyone
energetic enough to verify this. I am not tonight.
Clark Morris
Post by Jesse 1 Robinson
Friday off topic. I read somewhere that at the time of American
independence circa 1776, it was de rigueur for an educated person to be
able to do *arithmetic* in Roman numerals. You could not otherwise claim to
be properly schooled. A footnote on the whimsy of stodgy education
standards.
Post by Jesse 1 Robinson
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
-----Original Message-----
Behalf Of Paul Gilmartin
Post by Jesse 1 Robinson
Sent: Friday, June 16, 2017 10:56 AM
Subject: (External):Re: RFE? xlc compile option for C integers to be
"Intel compat" or Little-Endian
Post by Jesse 1 Robinson
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal) and the
valid sizes of those registers is determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for
little-endian.
Post by Jesse 1 Robinson
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really? I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the
least sighificant bit. I'd call that a bitwise little-endian. And it
gives an easy summation formula for conversion to unsigned integers.
Post by Jesse 1 Robinson
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I
can assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you
feel profound relief when you discovered the alternative convention?)
Post by Jesse 1 Robinson
IIRC, PDP-11 provided for writing tapes little-endian, which was wrong
for sharing numeric data with IBM systems, or big-endian, which was wrong
for sharing text data.
Post by Jesse 1 Robinson
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_
and_politics
Post by Jesse 1 Robinson
-- gil
--
sas

CM Poncelet
2017-06-17 03:15:23 UTC
Permalink
FWIW I had an analog wall-clock in the late-50's / early-60's that
showed 4 as IIII - not IV. I cannot remember what its 9 was. Using
letters as numerals prevented the Romans and Greeks etc. from inventing
algebra. <grin> CP
Post by Clark Morris
[Default] On 16 Jun 2017 11:18:42 -0700, in bit.listserv.ibm-main
Post by Jesse 1 Robinson
TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) via Arabic numerals were the progenitor of our modern big-endian bias, I'd like to point out that Roman numerals--remember them you old dudes?--are apparently big-endian. Lord knows who invented that convoluted system, but it persisted in academia and in commerce for centuries.
As I recall 9 is IX not VIIII and 90 is XC not LXXXX. Is anyone
energetic enough to verify this. I am not tonight.
Clark Morris
Post by Jesse 1 Robinson
Friday off topic. I read somewhere that at the time of American independence circa 1776, it was de rigueur for an educated person to be able to do *arithmetic* in Roman numerals. You could not otherwise claim to be properly schooled. A footnote on the whimsy of stodgy education standards.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
-----Original Message-----
Sent: Friday, June 16, 2017 10:56 AM
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except packed decimal) and the
valid sizes of those registers is determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for little-endian.
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really? I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the least sighificant bit. I'd call that a bitwise little-endian. And it gives an easy summation formula for conversion to unsigned integers.
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I
can assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you feel profound relief when you discovered the alternative convention?)
IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for sharing numeric data with IBM systems, or big-endian, which was wrong for sharing text data.
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics
-- gil
Randy Hudson
2017-06-20 06:25:01 UTC
Permalink
Post by CM Poncelet
FWIW I had an analog wall-clock in the late-50's / early-60's that
showed 4 as IIII - not IV. I cannot remember what its 9 was. Using
letters as numerals prevented the Romans and Greeks etc. from inventing
algebra. <grin> CP
By convention, clocks with Roman numerals nearly all used IIII for 4.

Paul Gilmartin
2017-06-16 22:55:39 UTC
Permalink
Post by David W Noon
Post by Paul Gilmartin
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really?
Yes.
Post by Paul Gilmartin
I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the
least sighificant bit. I'd call that a bitwise little-endian. And it gives an
easy summation formula for conversion to unsigned integers.
The endianness is determined by where the MSB and LSB are stored. On IBM
machines the MSB is in the left-most byte of the register and the LSB in
the right-most byte. That is big-endian.
Ascribing indices to the bit positions in either order makes no
difference. It is the order of *storage* that determines endianness.
??? We're talking about *registers* here. See your first paragraph I quoted.

What do you mean by "the order of *storage*" of bits in a register
other than how one ascribes indices? If I rotate my laptop 180° on
my desk, have I swapped the left end and the right end?
Post by David W Noon
The computers perform their arithmetic in whatever byte order the
hardware designers choose.
If they operate serially, it's simplest if they start at the less significant end.
Post by David W Noon
My first system was a clone of an IBM 360. I felt dismay when I first
read a core dump from a PDP-11.
That's one data point confirming my conjecture that people perceive
the conventions they learned earliest as Natural Law.
Post by David W Noon
Fortunately, DEC realized that their design was crap and added a
hardware instruction to put 16-bit binary integers into big-endian
order; it had the assembler mnemonic SWAB (SWAp Bytes). The company I
worked for in the 1970s exchanged data between many PDP-11s and a
central IBM 370, usually without problems.
EBCDIC? Well, not if your data were entirely numeric. Hexadecimal floating point?
Post by David W Noon
Post by Paul Gilmartin
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics
-- gil

David W Noon
2017-06-16 23:28:08 UTC
Permalink
On Fri, 16 Jun 2017 17:56:42 -0500, Paul Gilmartin
(0000000433f07816-dmarc-***@LISTSERV.UA.EDU) wrote about "Re: RFE?
xlc compile option for C integers to be "Intel compat" or Little-Endian"
[snip]
Post by Paul Gilmartin
Post by David W Noon
The endianness is determined by where the MSB and LSB are stored. On IBM
machines the MSB is in the left-most byte of the register and the LSB in
the right-most byte. That is big-endian.
Ascribing indices to the bit positions in either order makes no
difference. It is the order of *storage* that determines endianness.
??? We're talking about *registers* here. See your first paragraph I quoted.
What do you mean by "the order of *storage*" of bits in a register
other than how one ascribes indices? If I rotate my laptop 180° on
my desk, have I swapped the left end and the right end?
The bytes are ordered, otherwise shift instructions would produce rather
random results. The bits are ordered within bytes; that ordering remains
fixed (i.e. bits are never reversed when loading/storing), even during
shift instructions as bits are shifted in and out.

You can see the byte ordering when coding an ICM or STCM instruction.
These instructions have a bit mask to select affected bytes and the
bytes are ordered in the same order as the bit mask -- and it is big-endian.
Post by Paul Gilmartin
Post by David W Noon
The computers perform their arithmetic in whatever byte order the
hardware designers choose.
If they operate serially, it's simplest if they start at the less significant end.
True, but a serial ALU can start at the LSB end, even on big-endian systems.
Post by Paul Gilmartin
Post by David W Noon
My first system was a clone of an IBM 360. I felt dismay when I first
read a core dump from a PDP-11.
That's one data point confirming my conjecture that people perceive
the conventions they learned earliest as Natural Law.
I never cited it as "Natural Law".
Post by Paul Gilmartin
Post by David W Noon
Fortunately, DEC realized that their design was crap and added a
hardware instruction to put 16-bit binary integers into big-endian
order; it had the assembler mnemonic SWAB (SWAp Bytes). The company I
worked for in the 1970s exchanged data between many PDP-11s and a
central IBM 370, usually without problems.
EBCDIC? Well, not if your data were entirely numeric. Hexadecimal floating point?
No.

Text was in ASCII and binary data as integers; these PDP-11s did not
have FPUs, so no floaters were used even in emulation.

Note also that we did not use DECtape drives to exchange with the
mainframe. We used a 9-track tape transport (spring tensioners, no
vacuum columns) that wrote at 800BPI. The tapes had ANSI labels.
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
***@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*



Bernd Oppolzer
2017-06-17 21:49:14 UTC
Permalink
What about some examples to make things clear?

500 decimal is 0x1f4 in hex (256 + 15 * 16 + 4)

in a big endian halfword (2 bytes), this looks like 01 F4

big endian fullword (4 bytes): 00 00 01 F4

when processed by a 32 bit machine (for example IBM mainframe),
both representations (2 bytes and 4 bytes) will be processed in a 32 bit
register,
that is, the value will be expanded to

00 00 01 F4

More interesting with negative values: -500 = FE 0C (2 bytes) = FF FF FE 0C (4 bytes)

The 2 Bytes value is expanded to 4 Bytes by propagating the leftmost bit
(which is 1
on negative values) to the left.

Now little endian (the PC representation):

the Intel CPUs started out with 16 bit registers and have since grown to
32 and 64 bit registers.
Binary values in registers are represented exactly the same, but in
main storage
(and in files etc.), binary data is stored in reverse order (bytes
swapped).

For example:

500 binary stored in 2 bytes looks like F4 01

500 binary stored in 4 bytes looks like F4 01 00 00

When loaded in a CPU register, the first byte (with the lowest address)
goes into the "right" position of the register (the least significant 8
bits)
and so on ...

Same goes for negative values, of course:

- 500 in 2 bytes = 0C FE

- 500 in 4 bytes = 0C FE FF FF


My explanation for these differences goes like this:

big endian solutions are natural for machines which provide a "word"
access to the
storage and which have (for example) 32 bit CPU registers from the
start. BTW,
we had a 48 bit machine in Germany; the 48 bit word could be accessed
alternatively
as two 24 bit halfwords; of course, both number representations were big
endian, too.

little endian solutions were inspired by computers with small word sizes,
such as the PDP-11 and the 8 bit 8080 and Z80, which were the ancestors
of the PC hardware (8086). Because of their restricted instruction set
and register size, there was no other solution than to process longer
numbers byte by byte, and in that case it is simpler if the least
significant bytes are stored first in storage. This was never
changed, although later more powerful processors were available.

BTW: when I was studying computer science in the late 1970s, we had to
write a program
for an 8080 processor, assemble it manually and then enter it into the
storage
of some sort of test platform via switches. But nobody told us before
about this
endianness topic. We had no idea of this, because we only knew big
endian machines
before. We only got some information about the instruction set, some
information
about how to handle the test platform and the problem which we had to
solve -
and 2 or 3 hours time to do it. Only 15 minutes before the exercise
started,
someone told us that all numbers have to be stored in the "opposite"
order :-(
all our preparation had to be reworked in a hurry ... that was no fun,
but in
the end we succeeded :-)

Kind regards

Bernd
Post by David W Noon
On Fri, 16 Jun 2017 17:56:42 -0500, Paul Gilmartin
xlc compile option for C integers to be "Intel compat" or Little-Endian"
[snip]
Post by Paul Gilmartin
Post by David W Noon
The endianness is determined by where the MSB and LSB are stored. On IBM
machines the MSB is in the left-most byte of the register and the LSB in
the right-most byte. That is big-endian.
Ascribing indices to the bit positions in either order makes no
difference. It is the order of *storage* that determines endianness.
??? We're talking about *registers* here. See your first paragraph I quoted.
What do you mean by "the order of *storage*" of bits in a register
other than how one ascribes indices? If I rotate my laptop 180° on
my desk, have I swapped the left end and the right end?
The bytes are ordered, otherwise shift instructions would produce rather
random results. The bits are ordered within bytes; that ordering remains
fixed (i.e. bits are never reversed when loading/storing), even during
shift instructions as bits are shifted in and out.
You can see the byte ordering when coding an ICM or STCM instruction.
These instructions have a bit mask to select affected bytes and the
bytes are ordered in the same order as the bit mask -- and it is big-endian.
Jesse 1 Robinson
2017-06-17 02:44:38 UTC
Permalink
Touché and touché. But 29 is XXIX and 190 is CXC. Talk about confused endian. No wonder the Roman Empire collapsed. News of Attila the Hun's onslaught was misreported over and over again. ;-(

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-***@LISTSERV.UA.EDU] On Behalf Of Clark Morris
Sent: Friday, June 16, 2017 7:25 PM
To: IBM-***@LISTSERV.UA.EDU
Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel compat" or Little-Endian
Post by Jesse 1 Robinson
TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) via Arabic numerals were the progenitor of our modern big-endian bias, I'd like to point out that Roman numerals--remember them you old dudes?--are apparently big-endian. Lord knows who invented that convoluted system, but it persisted in academia and in commerce for centuries.
As I recall 9 is IX not VIIII and 90 is XC not LXXXX. Is anyone energetic enough to verify this. I am not tonight.

Clark Morris
Post by Jesse 1 Robinson
Friday off topic. I read somewhere that at the time of American independence circa 1776, it was de rigueur for an educated person to be able to do *arithmetic* in Roman numerals. You could not otherwise claim to be properly schooled. A footnote on the whimsy of stodgy education standards.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
-----Original Message-----
On Behalf Of Paul Gilmartin
Sent: Friday, June 16, 2017 10:56 AM
Subject: (External):Re: RFE? xlc compile option for C integers to be
"Intel compat" or Little-Endian
Post by David W Noon
...
This is not the way computers do arithmetic. Adding, subtracting,
etc., are performed in register-sized chunks (except packed decimal)
and the valid sizes of those registers is determined by architecture.
I suspect programmed decimal arithmetic was a major motivation for little-endian.
Post by David W Noon
In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do
arithmetic in big-endian.
Ummm... really? I believe IBM computers number bits in a register with
0 being the most significant bit; non-IBM computers with 0 being the least sighificant bit. I'd call that a bitwise little-endian. And it gives an easy summation formula for conversion to unsigned integers.
Post by David W Noon
As someone who was programming DEC PDP-11s more than 40 years ago, I
can assure everybody that little-endian sucks.
But do the computers care? (And which was your first system? Did you
feel profound relief when you discovered the alternative convention?)
IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for sharing numeric data with IBM systems, or big-endian, which was wrong for sharing text data.
https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politic
s
-- gil
Tom Marchant
2017-06-19 16:11:59 UTC
Permalink
Post by David W Noon
I have seen only 2 hardware platforms that perform packed
decimal arithmetic: IBM and plug-compatible mainframes; Groupe Bull /
Honeywell-Bull / Honeywell/GE / General Electric mainframes derived from
the GE-600 series -- although these did not get packed decimal until
they became the Honeywell H-6000 series.
It's not strictly packed decimal arithmetic because there is no provision
for a sign, but the 6502 did decimal arithmetic. The add and subtract
instructions operate under control of the Decimal flag in the Status
Register. When the bit is set, the operation acts upon two decimal
digits in each byte.

This was an improvement over the use of Decimal Adjust that was used
on the 8080, 8085, z80, and 6800 processors to enable decimal
arithmetic.
--
Tom Marchant
