Discussion:
z/VM Live Guest Relocation (Was: z/OSMF....)
Timothy Sipples
2018-04-28 10:43:38 UTC
Permalink
Tempus fugit. I have (personally) seen z/VM do the same
thing circa 2015.
Yes, and others saw that clever trick years earlier. Here's the history.

In July, 2010, IBM issued a Statement of Direction announcing its intention
to add Live Guest Relocation to z/VM. IBM then made Live Guest Relocation
generally available as part of z/VM Version 6.2 Single System Image (SSI)
on November 29, 2011.

http://www.vm.ibm.com/ssi/

http://www.redbooks.ibm.com/redbooks/pdfs/sg248039.pdf

z/VM SSI is an optional z/VM feature. On April 10, 2018, IBM announced that
z/VM SSI will become part of the base z/VM product in the next release of
z/VM, Version 7.1. IBM expects to release z/VM 7.1 in the third quarter of
2018.

https://www.ibm.com/common/ssi/rep_ca/0/897/ENUS218-150/ENUS218-150.PDF

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z & LinuxONE,
Multi-Geography
E-Mail: ***@sg.ibm.com

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to ***@listserv.ua.edu with the message: INFO IBM-MAIN
Shane G
2018-04-28 13:02:36 UTC
Permalink
Post by Timothy Sipples
Yes, and others saw that clever trick years earlier.
When competitors achieve something it's a "clever trick" - when IBM finally catch up it's "innovative technology".

Now I remember why I unsubscribed. Enough.

Shane ...

Edward Gould
2018-04-28 14:29:53 UTC
Permalink
Post by Timothy Sipples
Yes, and others saw that clever trick years earlier. Here's the history.
In July, 2010, IBM issued a Statement of Direction announcing its intention
to add Live Guest Relocation to z/VM. IBM then made Live Guest Relocation
generally available as part of z/VM Version 6.2 Single System Image (SSI)
on November 29, 2011.
Timothy,

I do not follow VM, so I am not sure I follow you. From your note I gather that the extra-charge feature is being incorporated into the base product (this I understand).
Are you suggesting that a company just did a demonstration of what was a feature of VM and called it their own?

Ed
Timothy Sipples
2018-04-30 00:04:01 UTC
Permalink
Post by Edward Gould
Are you suggesting that a company just did a demonstration
of what was a feature of VM and called it their own?
I'll try again.

1. In mid-2010, IBM indicated that it planned to release z/VM Live Guest
Relocation.

2. In November, 2011, IBM released z/VM Live Guest Relocation. It shipped
(and is shipping) as part of the z/VM Single System Image feature.

3. In April, 2018 (earlier this month), IBM indicated that it plans to
include z/VM Single System Image in the base z/VM operating system, in the
next release of z/VM. (Standard disclaimers apply, i.e. that plans might
change.)

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z & LinuxONE,
Multi-Geography
E-Mail: ***@sg.ibm.com

David Crayford
2018-04-30 04:16:34 UTC
Permalink
PowerVM had live migration in 2007 [1]. VMware released VMotion in 2003
[2], so I guess the trailblazer was VMware.

[1] https://en.wikipedia.org/wiki/Live_Partition_Mobility
[2] https://en.wikipedia.org/wiki/VMware
Anne & Lynn Wheeler
2018-04-30 05:25:28 UTC
Permalink
Post by David Crayford
PowerVM had live migration in 2007 [1]. VMware released VMotion in
2003 [2] so I guest the trailblazer was VMware.
[1] https://en.wikipedia.org/wiki/Live_Partition_Mobility
[2] https://en.wikipedia.org/wiki/VMware
the internal world-wide sales&marketing (vm/370 based) HONE system had
multi-system single-system image, load-balancing and fall-over by 1978
... the largest was US HONE, which had consolidated datacenters in Palo
Alto in the mid-70s (trivia: when FACEBOOK moved into silicon valley, it
was into a new bldg built next to the old HONE datacenter). The US HONE
datacenter was then replicated in Dallas ... with load-balancing and
fall-over between the two complexes ... and finally a third replicated
in Boulder. They never got around to doing live migration (POK was
constantly putting heavy pressure on HONE to migrate to MVS ... by 1980
they were constantly being forced to dump huge amounts of resources into
repeated failed MVS migrations).

However, earlier in the 70s ... the commercial virtual machine CP67
service bureau spin-offs from the science center ... besides doing
multi-machine single system image (load-balancing & fall-over) ... had
also implemented live migration ... originally to provide 7x24 non-stop
operation ... initially for when systems and/or hardware were being
taken down for IBM service and maintenance.

Part of the enormous pressure that POK was putting on HONE: after
Future System failed and there was a mad rush to get products back into
the 370 pipeline, POK managed to convince corporate to kill the vm370
product, shut down the VM370 development group, and move all the people
to POK (or supposedly they would miss the MVS/XA customer ship date some
7-8yrs later). Eventually Endicott did manage to save the VM370 product
mission, but had to reconstitute a development group from scratch ...
some of the resulting code quality issues show up in the VMSHARE
archives
http://vm.marist.edu/~vmshare/

so it is 40 years since HONE had (virtual machine) single-system image
and load-balancing/fall-over capability within a datacenter and also
across datacenters ... and something like 45 years since the commercial
virtual machine service bureaus had live migration (around 30yrs before
VMware) ... but customers would never see such features from IBM because
of the enormous political pressure the MVS group exerted.

trivia: the last product that my wife and I did before leaving IBM in '92
was RS/6000 HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

While out marketing, I had coined the terms disaster survivability and
geographic survivability ... and was asked to write a section for the
corporate continuous availability strategy document ... but then the
section got pulled because both rochester (as/400) and POK (mvs)
complained that they couldn't meet the goals.

past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
past posts mentioning HA/CMP
http://www.garlic.com/~lynn/subtopic.html#hacmp
past posts mentioning continuous availability
http://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970

David Crayford
2018-04-30 07:41:14 UTC
Permalink
Great story. You should add some content to the Wikipedia page
https://en.wikipedia.org/wiki/Live_migration.
Edward Gould
2018-04-30 08:40:09 UTC
Permalink
Great story. You should add some content to the Wikipedia page https://en.wikipedia.org/wiki/Live_migration.
I would expect IBM to object to it unless she tones down the MVS part.
Ed
Seymour J Metz
2018-04-30 15:42:18 UTC
Permalink
What MVS part?


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3

Edward Gould
2018-05-01 05:06:55 UTC
Permalink
Post by Seymour J Metz
What MVS part?
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
There are scattered references to being con'ed (I think that is the usage) and items like this:
other trivia: my wife had been in the gburg JES group and was part of
the ASP "catcher" team turning ASP into JES3. She was then con'ed into
going to POK to be in charge of loosely-coupled architecture (mainframe
for cluster). While there she did peer-coupled shared data architecture
... past posts

etc etc...


Anne & Lynn Wheeler
2018-04-30 17:48:25 UTC
Permalink
Post by David Crayford
Great story. You should add some content to the Wikipedia page
https://en.wikipedia.org/wiki/Live_migration.
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation

need to get people from the two virtual machine based commercial online
service bureaus (spin-offs from the science center). One was started by
a co-op student who had worked for me on cp67 at the science center ...
and then went to the service bureau when he graduated (trivia: a couple
of years earlier, the same service bureau had tried to hire me when I
was an undergraduate, but when I graduated, I went to the science center
instead).

past posts mentioning science center, 4th flr, 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

trivia: HONE's availability issues were 1st shift, with all the branch
office people using the systems ... it wasn't concerned about 7x24
offshift service ... so it didn't have to worry about "live guest
relocation" as a workaround for standard mainframe downtime for service
and maintenance (evenings and weekends). posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

other trivia: my wife had been in the gburg JES group and was part of
the ASP "catcher" team turning ASP into JES3. She was then con'ed into
going to POK to be in charge of loosely-coupled architecture (mainframe
for cluster). While there she did peer-coupled shared data architecture
... past posts
http://www.garlic.com/~lynn/submain.html#shareddata

she didn't remain long ... in part because of 1) little uptake (except
for IMS hot-standby until much later sysplex & parallel sysplex) and 2)
constant battles with the communication group trying to force her into
using SNA/VTAM for loosely-coupled operation.

much later we did the high availability rs/6000 HA/CMP (cluster,
loosely-coupled) product ... but we still had lots of battles with the
communication group and other mainframe groups.
http://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler
2018-05-02 00:49:21 UTC
Permalink
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#79 z/VM Live Guest Relocation

note that the US consolidated HONE (branch office sales&marketing
support) systems running SSI in Palo Alto had eight 2-processor POK
machines ... "AP", only one processor with channels ... so it had
channel connectivity for eight systems with twice the processing power.
HONE apps were heavily APL applications, so they needed max. processing
power ... with relatively heavy I/O. The problem with putting larger
numbers in the complex was disk connectivity: IBM offered each disk
connected to a string-switch, which connected to two 4-channel 3830
controllers (a maximum of eight systems).

Part of my wife's problem was POK's growing resistance to increasingly
sophisticated loosely-coupled (cluster) work, given the burgeoning
cluster vm/4341s (both inside ibm and at customers). A vm/4341 cluster
had more aggregate processing power than a 3033, more aggregate I/O and
more aggregate memory, for less money, lower environmentals and much
smaller floor space.

In Jan. 1979 I was con'ed into doing an LLNL benchmark on an engineering
4341 (before customer ship); LLNL was looking at getting 70 4341s for a
compute farm (leading edge of the coming cluster supercomputing
tsunami). Inside IBM, there was a big upsurge in budget for internal
computing power ... however, datacenter floor space was becoming a
critical resource ... vm/4341 clusters were a very attractive
alternative to the POK 3033. The 4341 also didn't require raised floor
(along with FBA 3370) and could be placed out in departmental areas ...
customers (and IBM business units) were acquiring 4341s hundreds at a
time (leading edge of the distributed computing tsunami). The cluster
4341s and departmental 4341s were addressing the raised-floor bottleneck
(both at customers and inside IBM).

email from long ago and far away with extract from "Adessa" newsletter

Date: 08/26/82 09:35:43
From: wheeler

re: i/o capacity on 4341; from The Adessa Advantage, Volume 1, Number 1,
October 1981., Strategies for Coping with Technology:

... as of this writing, for roughly $500,000 you can purchase a processor
with the capacity to execute about 1.6 million instructions per
second. This system, the 4341 model group 2, comes with eight megabytes
of storage and six channels. Also at this time, a large processor like
the IBM 3033 costs about $2,600,000 when configured with sixteen
megabytes of memory and twelve channels. The processor will execute
about 4.4 million instructions per second.

... What would happen if the 3033 capacity for computing was
replaced by some number of 4341 model group 2 processors? How many of
these newer processors would be needed, and what benefits might result
by following such a course of action?

... three of the 4341 systems will do quite nicely. In fact, they can
provide about 10 per cent more instruction execution capacity than the
3033 offers. If a full complement of storage is installed on each of the
three 4341 (8 megs. at this time) processors then the total 24 megabytes
will provide 50 percent more memory than the 3033 makes available. With
respect to the I/O capabilities, three 4341 systems together offer 50
per cent more channels than does the 3033.

.. The final arbiter in many acquisition proposals is the price. Three
4341 group 2 systems have a total cost of about $1.5 million. If another
$500,000 is included for additional equipment to support the sharing of
the disk, tape and other devices among the three processors, the total
comes to $2 million. The potential saving over the cost of installing a
3033 exceeds $500,000.

- - - - - - - - - - - - - - - - - - - - - - - - -

of course Adessa offers a VM/SP enhancement known as Single System
Image (SSI) ... making it possible to operate multiple VM machines as
a single system.

... snip ...
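The newsletter's arithmetic can be sanity-checked with a short sketch. All figures come directly from the quoted text (October 1981 prices); nothing else is assumed:

```python
# Sanity check of the Adessa newsletter's 4341-vs-3033 comparison,
# using only the figures quoted above.

mips_4341, mem_4341, chan_4341, cost_4341 = 1.6, 8, 6, 500_000
mips_3033, mem_3033, chan_3033, cost_3033 = 4.4, 16, 12, 2_600_000

n = 3  # three 4341 model group 2 systems

print(f"MIPS:     {n * mips_4341:.1f} vs {mips_3033}  "
      f"({(n * mips_4341 / mips_3033 - 1) * 100:.0f}% more)")  # "about 10 per cent more"
print(f"Memory:   {n * mem_4341} MB vs {mem_3033} MB "
      f"({(n * mem_4341 / mem_3033 - 1) * 100:.0f}% more)")    # 50% more
print(f"Channels: {n * chan_4341} vs {chan_3033} "
      f"({(n * chan_4341 / chan_3033 - 1) * 100:.0f}% more)")  # 50% more

# "$500,000 ... for additional equipment to support the sharing of
# the disk, tape and other devices among the three processors"
sharing_gear = 500_000
total = n * cost_4341 + sharing_gear
print(f"Cost:     ${total:,} vs ${cost_3033:,} "
      f"(saving ${cost_3033 - total:,})")  # saving "exceeds $500,000"
```

The numbers line up with the newsletter's claims: three 4341s give roughly 9% more MIPS, 50% more memory and channels, and cost $2 million including the sharing gear, for a saving of $600,000 over the 3033.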

note: the Adessa company specialized in VM/370 software enhancements and
included some number of former IBM employees. However, live migration
implementation was still limited to a few (virtual-machine based)
commercial online service providers (the original two were spinoffs of
the ibm cambridge science center in the 60s). trivia: IBM San Jose
Research had also done a vm/4341 clusters implementation ... but lost to
VTAM/SNA (a battle my wife got tired of fighting) ... cluster operations
that had been taking much less than a second elapsed time became over 30
seconds with the move to VTAM/SNA (my wife also had enhancements for
trotter/3088, an eight-system CTCA, that reduced latency and increased
throughput, but couldn't get them approved).

Note that the 3033 was a quick&dirty effort kicked off after the failure
of FS (along with the 3081 in parallel) ... initially 168-3 logic
remapped to 20% faster chips ... various tweaks eventually got it to
4.4-4.5MIPS. The 303x external channel "director" was a 370/158 engine
with the integrated channel microcode and w/o the 370 microcode. The
engineering 4341 in the (San Jose) bldg. 15 product test lab ... with a
couple of tweaks ... was used for 3380 3mbyte/sec data-streaming testing
... something that wasn't even remotely possible with the 3033 (& 303x
channel director). There is Endicott folklore that POK was so threatened
by 4341s that at one point they convinced corporate to cut in half the
allocation of a critical 4341 manufacturing component.
--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler
2018-05-07 00:37:20 UTC
Permalink
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#79 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#80 z/VM Live Guest Relocation

some other trivia about the cp67 (precursor to vm370) commercial
spinoffs besides cluster, loosely-coupled, single-system-image, load
balancing and fall-over as well as live guest relocation.

other trivia: I recently posted scans of the 1969 "First Financial
Language" manual to facebook. I got my copy when one of the cp67
commercial spinoffs (from the science center and MIT lincoln labs) was
recruiting me ... and the person primarily responsible for the First
Financial Language implementation then made some comments. It turns out
that he had teamed up a decade later with Bricklin to form Software Arts
and implement VisiCalc.
https://en.wikipedia.org/wiki/VisiCalc

the other cp67 commercial spinoff from the same period (also a science
center spin-off) was likewise heavily into 4th generation reporting
languages ... it moved up the value chain with RAMIS, from Mathematica,
at NCSS
https://en.wikipedia.org/wiki/Ramis_software
and then NOMAD
https://en.wikipedia.org/wiki/Nomad_software
and the RAMIS follow-on, FOCUS
https://en.wikipedia.org/wiki/FOCUS
FOCUS also on another (virtual machine based) commercial online service
https://en.wikipedia.org/wiki/Tymshare

of course all these mainframe 4th generation languages were eventually
pretty much subsumed by SQL/RDBMS, which was developed on a VM370 system
at IBM San Jose Research as System/R ... some past posts
http://www.garlic.com/~lynn/submisc.html#systemr

and Tymshare trivia ... Tymshare started providing its CMS-based online
computer conferencing (precursor to listserv on the ibm-sponsored bitnet
in the 80s, and to modern social media) free to SHARE ... as VMSHARE in
Aug1976 (later also adding PCSHARE). vmshare archive
http://vm.marist.edu/~vmshare/

and vm/bitnet trivia (used technology similar to the IBM internal
network ... primarily VM-based)
https://en.wikipedia.org/wiki/BITNET
and vm/listserv reference
http://www.lsoft.com/products/listserv-history.asp

which is where this ibm-main group eventually originates
--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler
2018-04-30 16:32:48 UTC
Permalink
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation

Other CP/67 7x24 trivia. Initially, moving to 7x24 was some amount of
chicken & egg. This was back in the days when machines were rented and
IBM charged based on the system "meter" ... which ran whenever the cpu
and/or any channels were operating ... and datacenters recovered their
costs with "use" charges. Initially there was little offshift use, but
in order to encourage offshift use, the system had to be available at
all times. To minimize their offshift costs ... there was a lot of CP/67
work done to operate "dark room" w/o an operator present ... and to have
special CCWs that allowed the channel to stop when nothing was going on
... but start up immediately when there were incoming characters
(allowing the system to be up and available while the system meter
stopped when idle).

Note that for the system meter to actually come to a stop, the cpu(s)
and all channels had to be completely idle for at least 400milliseconds.
trivia: long after the business had moved from rent to purchase, MVS
still had a timer task that woke up every 400milliseconds ... making
sure that if the system was IPL'ed, the system meter never stopped.
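The meter behavior described above can be illustrated with a toy model. This is purely illustrative: the 400ms idle threshold is from the text; the function and timestamps are made-up, not anything from CP/67 or MVS:

```python
# Toy model of the rental-era "system meter": the meter could stop only
# after the CPU and all channels had been completely idle for a full
# 400 ms. A keep-alive task that touches the CPU at 400 ms intervals
# therefore never lets an idle gap exceed the threshold.

IDLE_THRESHOLD_MS = 400

def meter_ever_stops(activity_times_ms, horizon_ms):
    """True if any idle gap between activity timestamps exceeds the threshold."""
    times = sorted(activity_times_ms) + [horizon_ms]
    prev = 0
    for t in times:
        if t - prev > IDLE_THRESHOLD_MS:  # gap long enough for the meter to stop
            return True
        prev = t
    return False

# MVS-style keep-alive waking every 400 ms: the meter never stops.
keepalive = list(range(0, 10_000, IDLE_THRESHOLD_MS))
print(meter_ever_stops(keepalive, 10_000))   # False

# Sparse offshift activity: long idle gaps let the meter stop,
# which is what the CP/67 channel-stop CCW trick was exploiting.
print(meter_ever_stops([0, 5_000], 10_000))  # True
```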

with regard to MVS killing the VM370 product (with the excuse that they
needed the people to work on MVS/XA) ... the VM370 development group was
out in the old IBM SBC (service bureau corporation) bldg in Burlington
Mall (mass., after outgrowing the 3rd flr, 545tech sq space in
cambridge). The shutdown/move plan was to not notify the people until
just before the move ... in order to minimize the number that would
escape. However, the information leaked early ... and a lot managed to
escape to DEC (the joke was that the major contributor to the new DEC
VAX/VMS system development was the head of POK). There was then a witch
hunt to find the source of the leak ... fortunately for me, nobody gave
up the leaker.

past posts mentioning Future System product ... its demise (and
some mention of POK getting the VM370 product killed)
http://www.garlic.com/~lynn/submain.html#futuresys

not long after that, I transferred from the science center out to IBM
San Jose Research ... which was not long after the US HONE datacenter
consolidation up in Palo Alto. One of my hobbies from the time I
originally joined IBM was enhanced production operating systems for
internal datacenters ... and HONE was a long-time customer from just
about their inception (and when they started clones in other parts of
the world, I would get asked to go along for the install). I have some
old email from HONE about the head of POK telling them that they had to
move to MVS because VM370 would no longer be supported on high-end POK
processors (just low-end and mid-range 370s from Endicott) ... and then
later having to retract the statements. past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
some old HONE related email
http://www.garlic.com/~lynn/lhwemail.html#hone

in a previous post I mentioned VMSHARE ... TYMSHARE started offering its
CMS-based online computer conferencing free to SHARE starting in
August 1976. I cut a deal with TYMSHARE to get a monthly distribution
tape of all VMSHARE (and later PCSHARE) files for putting up on internal
IBM systems (also available over the internal network) ... including
HONE. The biggest problem I had was with the lawyers, who were afraid
IBMers would be contaminated by customer information. some old email
http://www.garlic.com/~lynn/lhwemail.html#vmshare

another run-in with the MVS group ... I was allowed to wander around the
San Jose area ... eventually getting to play disk engineer, DBMS
developer, HONE development, visit lots of customers, make presentations
at customer user group meetings, etc.

The bldg. 14 disk engineering lab and bldg. 15 disk product test lab had
"test cells" with stand-alone mainframe test time, prescheduled around
the clock. They had once tried to run testing under MVS (for some
concurrent testing), but MVS had a 15min MTBF in that environment
(requiring manual re-ipl). I offered to rewrite the input/output
supervisor to be bulletproof and never fail ... allowing anytime,
on-demand concurrent testing, greatly improving productivity. I then
wrote up an internal research report on all the work and happened to
mention the MVS 15min MTBF ... which brought down the wrath of the MVS
organization on my head. It was strongly implied that they attempted to
separate me from the company, and when they couldn't, they would make
things unpleasant in other ways.

past posts on getting to play disk engineer in bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

part of what I had to deal with was the new 3380 ... another MVS story
... FE had developed a regression test of 57 3380 errors that they would
typically expect in customer shops. Not long before 3380 customer ship,
MVS was failing (requiring re-ipl) in all 57 cases ... and in 2/3rds of
the cases there wasn't any indication of what caused the failure. old
email
http://www.garlic.com/~lynn/2007.html#email801015

While at SJR, I was also involved in the original SQL/relational
implementation, System/R. System/R was done on a modified VM370 running
on a 370/145. The official next-generation DBMS was EAGLE ... and while
the corporation was preoccupied with EAGLE, we managed to do tech
transfer "under the radar" to Endicott and get it released as SQL/DS.
Then when EAGLE imploded, there was a request about how long it would
take to port System/R to MVS. This was eventually released as DB2
(originally for decision support only; note IMS was sort of database1
and EAGLE would have been database2 ... but System/R became its
replacement). past posts mentioning System/R
http://www.garlic.com/~lynn/submain.html#systemr

previous posts mentioned that the last product we did at IBM was HA/CMP,
past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp
We were also doing commercial cluster scaleup with RDBMS vendors
and scientific/technical cluster scaleup with national labs.
reference to Jan1992 meeting in Ellison's conference room
on commercial cluster scaleup
http://www.garlic.com/~lynn/95.html#13

within a few weeks of the Ellison meeting, cluster scaleup was
transferred to Kingston, announced as IBM supercomputer, and we were
told that we couldn't work on anything with more than four processors.
Likely contributing factor was that the (mainframe) DB2 group had been
complaining that if we were allowed to go ahead, it would be at least
five years ahead of them. A few months later we depart the company.
some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa
17Feb1992 press, for scientific/technical "ONLY"
http://www.garlic.com/~lynn/2001n.html#6000clusters1
11May1992 press, surprised by national lab interest
http://www.garlic.com/~lynn/2001n.html#6000clusters2

trivia: later, two of the Oracle people (mentioned in the Ellison
meeting) have also left Oracle and are at a small client/server startup
responsible for something called the "commerce server". We are brought
in as consultants because they want to do payment transactions on their
server; the startup had also invented this technology they called "SSL"
that they want to use ... the result is now frequently called
"electronic commerce".

note in this time-frame, IBM had gone into the red and was being
reorganized into the 13 "baby-blues" in preparation for breaking up the
company ... reference behind paywall, but lives (mostly) free at wayback
machine
http://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html

although we had left the company, we get a call from the bowels of
Armonk asking if we can help with the breakup. Business units were using
MOUs to leverage supplier contracts that were frequently with other
divisions. With the breakup, these would be in other corporations and
the MOUs would have to be cataloged and turned into their own
contracts. Before we get started, a new CEO is brought in and reverses
the breakup.
--
virtualization experience starting Jan1968, online at home since Mar1970
