
BLU Discuss list archive



[Discuss] How do Linux guys back up Windows?



VM/370 was pretty good.  Of course it was not built as an OS but as a
hardware emulator.  (IBM initially built it because they couldn't afford a
mainframe for every development group, hardware and software alike, to use.)

I was a sysprog for several years at a big company where we had thousands
of users logged in and active, running anything from CMS to MVS under VM.
(CMS was a simple single-user OS that most interactive users used.)  We
often ran entire VM systems second level to do production work, especially
for disaster recovery testing, testing new releases of VM or other systems,
or on rare occasion for better computing security.

VM itself took about 5% of the hardware 'bandwidth' to run.  But as was
mentioned, it was very efficient at virtual memory.  As a test, the deepest
I ever ran VM under VM was six levels.  Even with a nice-sized mainframe,
it was terribly slow with that many levels of VM running under each other.

But as was mentioned for VS1, most OSes could detect that they were
running 'second level' (or deeper) and passed the work back up to the top
level VM to actually handle the paging/swapping (and there was a difference
between the two back then).
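
To make that handoff concrete, here is a minimal C sketch of the idea.
This is not IBM's code and every name in it is invented; it just shows a
guest kernel skipping its own page replacement when it knows a hypervisor
above it will do the job, which avoids 'double paging'.

    /* Hypothetical sketch of a guest OS delegating paging to the
     * hypervisor, in the spirit of VS1 under VM.  All names invented. */

    #include <stdbool.h>
    #include <stdint.h>

    extern bool running_under_hypervisor(void);     /* assumed probe      */
    extern void hypercall_page_out(uint64_t frame); /* assumed hypercall  */
    extern void local_page_out(uint64_t frame);     /* guest's own paging */

    void evict_frame(uint64_t frame)
    {
        if (running_under_hypervisor()) {
            /* Second level or deeper: hand the work up so only the
             * top-level VM actually moves pages to and from disk. */
            hypercall_page_out(frame);
        } else {
            /* First level on real hardware: page it out ourselves. */
            local_page_out(frame);
        }
    }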

VMs were not new with IBM either. I think it was Burroughs that used them
and published on it way before IBMs VM came about.

Oh, and to let the secret out, the way IBM did VM was to use a 'diagnose'
instruction, which was officially an unauthorized machine language
instruction, so it generated an interrupt and an interrupt service routine
took over (just as would happen when an I/O operation ended, a hardware
timer went off, or a virtual memory boundary was violated).  The interrupt
service routine looked at the instruction and the byte or two after it in
'user memory' to determine whether it was a 'real diagnose' or 'a user
problem', and took appropriate action.  The diagnose was initially an
instruction used by the IBM CE (customer engineer) in their hardware
diagnostic programs, but the software guys got in on the act too.
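
In modern terms that dispatch looks roughly like the C sketch below.  The
0x83 opcode really is DIAGNOSE on System/370; everything else here is
invented for illustration and is nothing like CP's actual assembler.

    /* Hedged sketch of a program-interrupt handler telling a guest's
     * DIAGNOSE 'hypercall' apart from an ordinary bad instruction. */

    #include <stdint.h>

    #define OP_DIAGNOSE 0x83   /* real S/370 opcode; the rest is made up */

    extern uint8_t *guest_memory;                     /* guest address space */
    extern void cp_diagnose_service(uint16_t code);   /* hypervisor service  */
    extern void reflect_program_check_to_guest(void); /* 'a user problem'    */

    void program_interrupt_handler(uint64_t next_insn_addr)
    {
        /* The interrupt leaves the address of the *next* instruction;
         * DIAGNOSE is four bytes, so back up to inspect the one that
         * trapped. */
        uint8_t *insn = &guest_memory[next_insn_addr - 4];

        if (insn[0] == OP_DIAGNOSE) {
            /* 'Real diagnose': the bytes after the opcode select
             * which CP service the guest wants. */
            uint16_t code = (uint16_t)((insn[2] << 8) | insn[3]);
            cp_diagnose_service(code);
        } else {
            /* Not a hypercall: reflect the program check back into
             * the guest, just as bare hardware would have. */
            reflect_program_check_to_guest();
        }
    }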

Oh, on each mainframe we typically ran around 2000 CMS users and an MVS or
two for batch or to take care of SNA networking.  RSCS was another virtual
system that ran the network communications.  VM did its own spooling and
drove printers and such directly, but it could let guest operating systems
like MVS or VS1 deal with that too.

And yes, we had a source code license, and we did on occasion read the
source code.  There were many places in the source where the documentation
was not 'kept up' or did not describe what the code really did, so we still
had to be able to read the assembler.  Most of VM had been rewritten from
assembler to SPL (System Programming Language, similar to PL/1) by the time
I got into the arena in the early to mid '80s.

My first UNIX was Amdahl's UTS running under VM.  It was a different animal
when working on block-mode terminals.  We did bring up UTS first level a
time or two for play, and it did scream.  In those years we did a lot of
seismic processing on Perkin-Elmer machines (small IBM 360-type clones),
and they ran their own Unix-like system, but with a more MVS-style JES JCL
(job control language) flavor.

One consultant we had on VM over the years was on the initial HASP
development team in Clear Lake City at NASA.  NASA had a lot of big-iron
mainframes in those days, but even their budget couldn't support what IBM
told them they needed, so NASA said they needed a way to share input and
output peripherals (mainly card readers and line printers in those days),
and IBM put together a small team to make it happen.  In just a few months
they came out with the first cut of HASP (Houston Automatic Spooling
Program).  It basically read all the cards to disk, then fed them to
computers (not just one) on demand.  It also took the 'printer output' as
it was generated on each computer and saved it to disk until the job was
done, then put it in a print queue.  That saved a lot of hardware, and
IBM's collective butt at NASA.  IBM was almost thrown out over it, but HASP
saved the day, went on to be a big cash cow, and saved mainframes for many
years.  It eventually was 'virtualized' and rolled into the basics of JES
in its flavors (Job Entry Subsystem, which also handled spooling printouts
and job-control networking).
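
The shape of that scheme is easy to sketch in C.  This is a toy outline of
the spooling idea only; every name below is invented and none of it comes
from HASP itself.

    /* Toy sketch of HASP-style spooling: the slow peripherals talk
     * only to disk-backed queues, and the computers pull input and
     * push output on demand.  All names invented for illustration. */

    #include <stdbool.h>

    typedef struct job { int id; } job;   /* card images live on disk */

    extern bool card_reader_has_deck(void);
    extern job  read_deck_to_disk(void);           /* cards  -> disk    */
    extern void enqueue(const char *queue, job j); /* disk-backed queue */
    extern bool queue_nonempty(const char *queue);
    extern job  dequeue(const char *queue);
    extern bool computer_idle(int cpu);
    extern void run_job(int cpu, job j);           /* output -> disk    */
    extern void print_next(void);                  /* disk -> printer   */

    void spool_cycle(int ncpus)
    {
        /* Input side: drain the reader straight to disk, so it
         * never sits waiting on a busy computer. */
        while (card_reader_has_deck())
            enqueue("input", read_deck_to_disk());

        /* Execution side: feed queued jobs to whichever computer
         * (not just one) is free; each job's output is spooled to
         * disk and lands on the print queue when the job finishes. */
        for (int cpu = 0; cpu < ncpus; cpu++)
            if (computer_idle(cpu) && queue_nonempty("input"))
                run_job(cpu, dequeue("input"));

        /* Output side: the printers drain the print queue at their
         * own pace, independent of the computers. */
        print_next();
    }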

Ahh, memories.


