Memories, Dreams and Refractions: Network Products Part I

In 1981 Business Application Systems (BAS) was making a version of the BASPort portable operating system (private-labeled as VOS) for SCI Systems, a corporation based in Huntsville, Alabama. That spring BAS agreed to sell the portion of the company developing VOS to SCI.

Flash back to the 1960s, when the integration of components for the Saturn V Instrument Unit was taking place in an IBM facility in Huntsville, Alabama. SCI, then called Space Craft Incorporated, had created two of the boxes to be bolted into the IU. My father was the engineer responsible for determining that those boxes worked properly (backed up at a less technical level by NASA engineers). In the course of working with SCI employees my dad got a good feel for the SCI corporate culture, strongly shaped by the asshole who ran the company, one Olin King. Dad’s reports left me with a crystal clear judgement about whether I’d sooner jump off a bridge or work for SCI.

So when news of SCI acquiring us was sprung, I started shopping for a new job. By what I now consider one of the biggest coincidences I’ve ever experienced, a month or two after the acquisition Bob Nichols and Steve Schleimer approached me about joining their new startup, Network Products, chartered to make data communications equipment. I jumped at it. Working with Steve again was going to be simply sublime, as he’d been my mentor at Data General (he authored the virtual machine of the commercial language system I worked on). And Bob and I had got along pretty well at BAS (Bob wrote the commercial system’s compiler). Steve had left Data General, where he had been a software architect and a developer on the Fountainhead project, while Bob had already left BAS. Also joining us were Steve Hafele, a hardware design engineer also from Fountainhead, and Steve Chewning, an ex-HP hardware design engineer.

Babymux

To be continued.

Memories, Dreams and Refractions: Encore Computer Part I

Steve Goldman was still at SCI Systems (which had acquired part of Business Application Systems, the first startup we both worked at) and I was at a second startup called Network Products when we were offered positions at a new startup called Encore Computer in 1984. This was less than a year after its founding and very early in its process of acquiring and pulling together a set of organizations:

  • Hydra Computer Systems, a maker of symmetric multiprocessor (SMP) computer systems and developer of a multiprocessor version of BSD Unix in Natick, Mass.
  • Resolution Systems, a maker of somewhat smart computer monitors in Marlboro, Mass. Like the Hydra group, its people were eventually laid off when the Marlboro plant was closed, after most of the company had come to reside in south Florida (Encore having swallowed the Gould Computer Systems Division whale using Japanese money).
  • Foundation Computers, a maker of a fourth generation application development language in Cary, NC. Sold to Unisys in 1985.
  • ?? (forgot name) A network appliance maker, in particular the original creator of the Annex network terminal server, based in Marlboro, MA. Sold by Encore to Xylogics in 1986 in one of the all-time bonehead business moves of the 20th century.
  • ?? (forgot name), a development shop porting Unix System V Release 4 (SVR4), based in San Diego, CA. Originally Larry and ?, though later perhaps two other guys for four total. They were laid off, discovering their layoff via an announcement by Ken Fisher at a trade show!
  • ?? (forgot name), a group of consultants associated with Carnegie Mellon in Pittsburgh, PA. These guys moved up to Marlboro or went on to other things as far as I could tell.

Although Steve Goldman and I were hired by Earl Gilmore1, cofounder of Foundation, our charter from the beginning was development of the third generation computer languages for Encore’s systems. So from the earliest days we worked in an office in Cary while telecommuting and reporting to Encore’s parent engineering management organization in Wellesley Hills, Mass. Soon after Foundation became part of Encore, it got a Vax 11/780 running BSD Unix. That was such a sweet single user system. Unfortunately about a dozen and a half of us shared it, so we each got about an eighteenth of a Vax MIP: a truly miserable situation. It was slower than my first PC (a Southwest Tech 6800). I recall working funny hours for the sake of getting a bigger piece of the computing pie. However, we also got onto Arpanet with domain encore.com, and by virtue of that and our development charter Steve and I gradually accumulated access to hardware at other sites via Arpanet as we also accumulated local hardware resources. As I type this I have not even a foggy memory of the earliest hardware resources apart from the 11/780 and our dumb terminals. In those earliest days Encore outdid Amdahl for original funding and way, way outdid Amdahl in profligate spending. I was disgusted to hear that the executive branch had hired dozens of salesmen way, way, WAY before having product to sell. What a cushy job, I imagined, to visit prospective clients and spin tales of Encore’s products-to-be more than a year before the hardware transitioned from vaporware.

Suddenly in 1985 the insane burn rate caught up with Encore and they sold Foundation to Unisys. Steve’s reaction was to load up his hang gliders and drive to Oregon and back with wife and friend, flying their gliders at various places along the way. Meanwhile, I found the cheapest office space in Cary and filled it with the furniture and equipment we’d accumulated. By pure luck a byproduct of my previous startup, Network Products, had been the creation of an inexpensive statistical multiplexer. This presented a set of eight asynchronous serial ports on each end of a synchronous leased phone line. I got a line and had one mux connected on the Massachusetts end (by this time at the Marlboro, MA site) with the other end connected at the Cary office. This gave us 960 bytes per second of bandwidth in each direction. I got two of the Resolution terminals that put three logical screens on one very large physical screen, with each logical screen connected to a separate mux port. The two remaining mux ports were connected to a system in Marlboro to drive a printer and a dialout modem on our end.

On the Marlboro end we were connected to a 16 processor Multimax (Encore’s SMP product name in the early days) that was the beta test system named ?. Another Multimax, named Pinochio, hosted the Unix kernel and hardware developers. Pinochio was the alpha test system and crashed many times a day, while our beta system made it through the average day with only one or two crashes in the months after the Multimax hardware and the BSD port were first brought up. This was the ultimate “eat your own dog food” experience, and the pressure on us to get the C compiler’s code generation right was enormous. One of the first parallel applications developed at Encore was our version of fsck. With 16 fsck processes running in parallel the filesystems could be fixed up fast to hasten the reboots after a crash!

Encore was intent on selling heavily into education markets, and at the time that meant support for the Pascal language was a must have. BSD Pascal, created by Bill Joy and others, had major drawbacks from the point of view of Steve and me. In retrospect, while BSD Pascal genuinely stunk like a skunk, an imaginary better management should have told us to hold our noses and bundle it while turning our attention to an implementation of Fortran that generated parallel code. But this assumes the imaginary management had a generous helping of precognition.

We got approval for development of a Pascal implementation using Oregon Software Pascal as a starting point, while embracing Green Hills for its C compiler (and for its Pascal, which we needed because the C compiler was itself written in a nonstandard Pascal dialect that the Oregon Software compiler would not compile). We got a source license from Oregon Software for free in return for giving them a new code generator and other improvements. Oregon Pascal had a very much stronger front end that would give students a fighting chance to correlate a compiler error message with the defect in their code that caused it, and it compiled a great deal faster, giving them merciful turnaround times.

I got to write the back end of the compiler, which translated the internal program representation into relocatable machine code targeting the National Semiconductor NS32k series of microprocessors. The NS32k architecture was the sweetest I’d ever encountered by a wide margin, being more regular than the Motorola 68k. One loopy detail that kept me busy was the fact that floating point literals in the instruction stream were stored in big endian byte order while the rest of the processor was little endian. Coupled with the fact that our development process required a lot of rehosting/retargeting steps, where some hosts were big endian and some little, I ended up with quite a collection of routines for manipulating instructions until we were finally running on the Encore machine and generating code directly for it. At this point the compiler was compiled by itself without any cross development tools. This compiler did not rely on an assembler: it output object files directly. To us this was a no brainer that gave a substantial performance edge. It’s funny how a large fraction of current mainstream compilers go through an assembly phase. In fact I can’t think of a single one that compiles to machine code without using an assembler.

We finished the compiler and got it certified as ANSI compatible, and it was an excellent tool for instruction at Encore sites like the University of Minnesota in Duluth. We had many friendly phone calls with the folks in Duluth and got first hand accounts of what winter was like alongside Lake Superior (making my winters spent in Massachusetts seem like equatorial vacations by comparison). Unfortunately for us all, Encore’s sales force was less than feeble and never established an education market that moved the sales needle.

But there was one interesting accomplishment stemming from the fact that Steve and I and others in the compiler group were able to stay entirely focused on our products: we ran out of bugs. We were able to put out release notes that declared “no known bugs”. After more recently working on Java for years with a bug pile the size of a mountain, it’s almost dreamlike to have been involved with nontrivial code bases with no bugs. Part of the reason we got bug-free and stayed bug-free was that we had insanely large regression test suite collections, and we required development of new tests for new features. And we had a big rule: nothing leaves the company without going through the regression tests. One of the Fortran tests was compiling and running a 737 simulator. I don’t recall how we got a copy of that, but I’m sure it wasn’t via a “front door”. We compiled the Pascal compiler through itself for multiple generations and compared the code generated for generation X with that of X-1. We were quality fiends in retrospect. It simply never got into our part of the Encore culture to ship broken software if we could avoid it, and we did our best to avoid it.

For C, C++ and Fortran we ported Green Hills compilers that already had a 32k code generator. The C++ compiler was quite involved, and luckily we had excellent folks in the group by that time, like Jonathan Polito. We also ported Green Hills Pascal in order to compile the others. But Green Hills Pascal wasn’t even as attractive as BSD Pascal, so we never considered developing it into a product.

I just recalled one side trip I took in the very early days with Green Hills Fortran. For reasons that seemed long lost, Green Hills supplied its own libm (math library) to link against. Actually, the reason was clear: it was to win benchmarks. One reason it won benchmarks was that the transcendental functions (e.g. cosine) were written to execute fast: they were implemented as Taylor series with very few terms. The precision was terrible. It was so bad some of our earliest customers flagged it as a show stopper. So I found a decent book and rewrote parts of libm to have very much better precision while still running as fast as possible. Benchmark performance suffered but our customer applications ran correctly.

One exciting piece of work we did with the Green Hills compilers was to make them automatically generate parallel code. That is, the compiler would see a loop, figure out the induction variable (the variable governing the iterations) and arrange to spawn threads of control for groups of iterations in parallel. There was some heavy duty global code analysis that had to happen to pull this off and I won’t go into it here. Steve did the heavy lifting while I was split between some of that development and more of the maintenance.

At some point it occurred to us to make the compilers themselves parallel. We split the compilers into five pieces and had each piece run on its own processor. This might well have led to a publication; at the time Steve and I never gave any thought to publishing about our development work, but that’s another story. Anyway, our parallel compilers did not perform as expected. This turned out to be because our use of shared memory between the processes tickled a weakness in Encore’s BSD Unix kernel, causing thrashing (most likely of TLBs). At the time we had only a vague understanding and called it the mystery overhead as we tried to get help from the kernel guys to understand it. We never figured out a way around it. It’s a real shame we didn’t publish our work, as it was possibly novel.

One other thing I did that I should have published was making the Pascal product automatically run user compilations in parallel, with scheduling. Using some shared memory magic the compiler processes communicated with each other and agreed on when to execute compilations, to get as many done as possible without overloading the system. So if there were twenty compiler users on an eight processor system, I kept the “load factor” (the number of concurrently active compiler processes) at or below some number that I’d empirically determined to be the “sweet spot” for throughput. I never even revealed that feature in the user documentation: it was just automatic and silent in its operation. This was 1986 or 1987.

Our part of the company responsible for the four languages grew from two people to a peak of 11 before shrinking down and going into caretaker mode.

Later we migrated from 32k to Motorola 88k products. I think that was the end of the road for the Oregon Software port, as I don’t remember them having an 88k back end and we didn’t create one. But at that point we were going after the simulation market with military contractors like Hughes Aircraft and nobody cared about Pascal in the field anymore.

At some point we upgraded the leased line to Marlboro from 9600 to 56k baud and with that boost of speed we were able to run X terminals. But we had a great deal of hardware in Cary by that time and were no longer at the mercy of the serial connection. The Network Products multiplexers never missed a beat. They just ran and ran, 24/7 until we were acquired by Sun Microsystems 12 years later. I was pretty satisfied as I’d written a large fraction of the firmware for those two boxes.

References:

  1. Forest Earl Gilmore was the director of software development at Data General in the mid 1970s and my second level manager when I worked there on a commercial language system. He was cofounder of Business Application Systems, which spun out of Data General in late 1977 with a charter to make a portable operating system (“BASPort”) and a rich set of business applications that could be “written once and run anywhere”, given the relatively easy ports of a virtual machine and kernel OS to each new hardware environment. Following that, Earl cofounded Foundation Computer Systems, a developer of a fourth generation computer language designed to make application development a drag and drop experience. Encore Computer acquired Foundation and later sold it to Unisys, at which point Earl left to be an independent entrepreneur. Tragically a very aggressive cancer took Earl’s life when he was in his mid 40s. Earl was an extraordinary human being.

Memories, Dreams and Refractions: Sun Microsystems Part II

I recently came across a Hacker News article that was just a pointer to a copy of the SunOS version 4.1.1 source code. This led me to want to add my first comment to Hacker News, but the UI of my browser (Harmonic) defeated me. Visiting the main web site (https://news.ycombinator.com) didn’t help. So I said to myself “heck with it, I’ll publish it here”.

Ironically, rev 4 of SunOS was the version that was a steaming pile of bugs, with the developers emulating the OS/360 misadventure: adding bugs as fast as fixes. It was this disaster that led to development of Sun's Software Development Framework (SDF). When I joined in '97 the kernel development standards were amazingly high and continuing to improve. PSARC (Platform Solaris Architecture Committee), headed by Glen Skinner, rode herd on the interfaces for most of the time I was at the company. The single coolest aspect of the Sun SDF was its acknowledgment that one size fits all doesn't work, together with its support for local dev process customization. It also included celebrations of milestone completions. It was written in a joyful tone, and my copy of this document is a personal treasure.

I joined Sun in 1997 when Encore Computer Inc, an early SMP pioneer that had morphed into a smart storage vendor, was acquired. Steve Goldman and I, based in a hole in the wall office in Cary, North Carolina, were chased out of the storage division of Sun at the end of 1998 when division director Janpieter Shreeder put out an edict that everybody working for him had to be in locations A-D. Jenny and I, with daughter Emily a babe in arms, visited the Broomfield, Colorado Sun site but decided not to move. We got our layoff papers, but as that was happening an angel employee behind the scenes informed us of an opportunity in the Java Technology Group (JTG) of the Solaris Software Division. We “interviewed” at the Burlington, Massachusetts site to join the runtime group within the Java virtual machine development organization. I put that in quotes because our reputations preceded us and the meeting was just a formality. I was very sick, getting over a severe sinus infection, but shared in the upbeat enthusiasm of the Java VM Runtime department that hired us.

The first assignment given to Steve and me was to evaluate the Hotspot virtual machine and the Solaris Java Exact VM. We did a very thorough job, and by the end the Exact VM and Hotspot VM camps hated us equally. We declared Hotspot the technology Solaris Software should adopt in place of Exact VM; one of the Sun Labs researchers quit on the spot and there were a lot of hard feelings. Steve and I regretted this being our introduction to the division, but we were old pros and respected for how we handled it. We and a few of the other runtime group members proceeded to port Hotspot to x86 Solaris before upper management realized they would want that to happen. That was an amusing summer and fall of upper management befuddlement, but our line manager Laurie Tolson was THE BEST and gave us her full support.

Janpieter had been director of Solaris Software before taking over Storage. During one visit to Burlington, Steve and I were told that in the former era, prior to Janpieter visiting that site, a manager had to visit the men’s rooms and make sure his picture was removed from the bottoms of the urinals. That’s how much they loved Janpieter.

A bit of History

A rogue’s gallery from Steve Goldman’s surprise anniversary party around 2003. Left to right: Eric Teagarden, Don Parce, Steve Goldman, Bob Leivian, and Pete Soper.

  • Eric – Business Application Systems, IBM, SAS
  • Don – Data General, Business Application Systems, SCI Systems, Foundation/Encore Computer, Sun Microsystems
  • Steve – Motorola, Texas Gulf, Business Application Systems, SCI Systems, Foundation/Encore Computer, Sun Microsystems
  • Bob – Data General, Business Application Systems, SCI Systems, Motorola Research
  • Pete – Data General, Business Application Systems, SCI Systems, Network Products, Foundation/Encore Computer, Sun Microsystems, Apex Proto Factory

Trouble in Arduino Paradise

(This is another article I wrote a year or more ago but never got around to publishing until now)

I’ve been helping an ecologist make a “compass bearing data logger” using an Arduino Uno. Actually, I’ve been doing most of the implementation while Erik has defined the requirements based on his many years of doing field work with other logging tools. (It is pure joy to have crisp requirements so you know your solution matches the problem at hand!) Erik picked the Uno, and it seemed like an excellent choice because it was very easy for him to combine bits and pieces from Adafruit to create a solution. We quickly became aware of off-the-shelf software libraries, either bundled with the Arduino IDE or available as add-ons from the main Arduino repositories, to support the hardware. At the start of the project it seemed unimaginable to exhaust the Uno’s memory capacity with such a simple application. Does that sound familiar?

After getting about 98% of the functionality in place, the IDE still reported only about 24 kilobytes of text usage alongside the Uno’s 32 kilobyte capacity figure. But after the addition of the last hundred or so lines of C++ the system became unstable. It wasn’t unstable in the usual sense that the new code didn’t work right the first time. (My batting average up to that point had been excellent, but there had been a number of surprises.) The system was unstable in the sense that only the first sliver of initialization code was executing, but it was executing over and over forever. The CPU was resetting after just a little bit of the application code had executed.

When I was much younger I might have thrashed with this for a long time, struggling to work out how some broken code fragment I’d added could explain the failure. Instead I got out the machete and gutted the bodies of several functions until the overall code size was similar to what it had been the last time the system ran properly. Sure enough, it ran properly again. Replacing stub code with full function bodies in various combinations proved that it was simply the amount of code involved that caused the instability. I should point out that this program has very little “variable” storage in relation to the Uno’s 2kb of RAM. That is, it has maybe a dozen scalar variables, one small character array for building file pathnames, and a couple of objects to do with the clock/calendar and compass chips and the SPI interface to the SD card used for the actual data logging. Also, there are no recursive routines, very few local variables, and very shallow call nesting, so stack demands are trivial too. In short, the bad magic was to do with undiagnosed overflow of “something” related to the amount of text (machine instructions produced by the C++ compiler).

Except that the toolchain puts initialized data, string constants included, into the data segment, which has to be mutable and therefore lives in RAM rather than flash; on the AVR it is all copied out of flash into RAM at startup. Duh. So I was overflowing RAM, causing the stack to walk over the top of variable storage as it nested during routine calls. (The standard Arduino remedies are the F() macro and the PROGMEM attribute, which keep string data in flash.)

The trouble in paradise is that in my world it’s just not acceptable that overflow of a statically allocated memory segment would go unnoticed by the tool chain. In my world this kind of misbehavior forces the Arduino IDE into the “piece of sh*t” bucket and I’m only persevering with this tool chain now for the sake of Erik’s target user group being able to make this logger with user-friendly tools. The Arduino IDE is fantastically user-friendly for making an Arduino blink LEDs. Going much beyond that in my experience has given appreciation for the “get what you paid for” adage.

But the other trouble is that it appears some combination of Linux, the USB library “RXTX”, and the Arduino IDE are conspiring to ruin my system’s uptime record. If I had a nickel for every time a failure to do with the USB connection between the IDE and my Uno has forced a reboot, I could buy several more TI MSP430 Launchpads. More on this here.

Raspberry Pi with little SD cards

If you have an old SD card that’s smaller than the 2gb needed to run the blessed Raspbian (Debian Wheezy) Linux distro made for the RPi, or, like me, have a 4gb SD card that a bad spot turned into an 870mb SD card, fear not. If you’ve got a thumb drive you can trivially carry on. Here’s how.

First thing is to copy the image file to both devices with commands like this, after:

1) BEING SURE you substitute your /dev/paths and

2) Unmounting the card and/or thumb drive before copying images over the top of them

sudo dd if=2012-07-15-wheezy-raspbian.img of=/dev/sdk    # MY thumb drive

sudo dd if=2012-07-15-wheezy-raspbian.img of=/dev/sdi    # MY SD card

If the SD card was 64mb or better, then you should have an intact copy of the FAT filesystem on the front of your card, and if you have a current Linux system it was automagically mounted. If not, mount it with something like mount -t vfat /dev/sdi1 /somepath and then cd into /somepath (or the actual filesystem, typically under /media).

Edit cmdline.txt to look like this:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/sda2 rootfstype=ext4 elevator=deadline rootwait

Then cd out of this filesystem and umount it, put the card into your Raspberry Pi and the thumb drive into one of its USB sockets (or a hub socket), and power it up. The command line above tells the system on the SD card to mount the root filesystem on the thumb drive instead of trying to use the (uncopied or corrupt/incomplete) 2nd partition of the SD card.

Linux can’t tolerate a spewing USB device

I’ve been spending a lot of time with TI MSP430 chips using the incredibly inexpensive LaunchPad eval board. This board ships with an MSP430 chip pre-programmed with a temperature measurement demo program that immediately spews temperature readings out its UART pins, which by default are routed through the USB interface to the PC. This causes my Linux system extreme indigestion. It appears that Linux can’t properly identify and set up to interact with a USB device that mindlessly sends data at it right from the get go. I’d gotten over the severe disappointment of the incredibly powerful TI Eclipse-based Code Composer Studio IDE for the MSP430 not supporting the LaunchPad board on Linux (a classic big-corporation screwup: Good luck getting core Arduino-lovers interested in MSP430, TI!). I thought I was being clever to run CCStudio on Windows within a VirtualBox VM session on my Linux system, but I could not get joy. I eventually found a clue about why the LaunchPad wouldn’t work and temporarily disconnected the serial port path from the chip, brought up the board, and put a tweaked version of the program into the chip that paused a few seconds before its spewage.

Coincidentally, a somewhat similar bug manifested in the Arduino Duo IDE when I had an application that output serial data in an uncoordinated fashion. In this case, the Java-based IDE crashed with an unkillable JVM process that hung the Linux USB device connected to the Arduino, forcing me to reboot my Linux system. I even had a case where the X11 session hung with a frozen mouse pointer and stopped keystroke echoes. Apart from a misbehaving NVidia graphics driver and power failures in the past (and a maddening keyboard “autorepeat” bug that I’ll be writing about later), my Linux system never goes down, and I never have to reboot for updates or package installs. So having to reboot Linux every few minutes while trying to figure out how to hold my face with the Arduino board and its IDE became obnoxious in short order. The two bugs below describe similar issues registered in the IDE’s developer bug database. I wish I could say anybody is doing anything about these bugs, even so far as explaining what’s really happening, how to work around them, etc. Unfortunately these bugs are unloved, and my mail to the developers’ list pleading for more information was ignored.

Relevant Arduino IDE bug reports 1 and 2.

Goals for my Raspberry Pi computers

I have two Raspberry Pi boards now, and both seem to be finding a long term role. The first I’ve been using for some time as a PC at Splat Space meetings. Connected to the meeting room’s LAN, a mouse, keyboard, and USB-based hard drive, this RP makes a usable stand-alone Linux system for keeping up with email and the like. A fellow Splat Spacer is making a case for this RP with his 3D printer (thanks, Geoff!). Instead of hauling all the pieces and parts and plugging them together for every meeting, I intend to strap the RP, hard drive, USB hub, etc to the back of the monitor. A wireless bridge is needed, too. I still have several left over from when our broadband was via Starband and I was my next door neighbor’s ISP.  This RP is named kludge-pc.

The other RP is destined to live on my home LAN. The original charter I had in mind for it when I ordered it in February was as a low power, battery-backed “overseer” to monitor the various bits and bobs in the house and make available clues as to what’s right or wrong with things. I especially want it to be able to diagnose common failure modes, such as when the wireless repeater gets unplugged for the sake of the vacuum cleaner and then we all wonder what happened to the Internet connection. But since deciding on this job for it, I’ve realized we badly need a local caching DNS server to make today’s URL-heavy web content less painful to access. Before we could get broadband at this house in the boondocks, I ran an autodial modem-based LAN, and a DNS server was key to making it tolerable. The problem with that (bind-based) setup was that it had to be highly available, and apart from it becoming painful to reboot while others were using computers in the house, I begrudged the watts of electricity. I can’t use my main Linux system because I can’t guarantee it being up all the time (of all things, the Arduino development IDE is capable of killing it: more on that in another post). But a 3.5 watt board like the Raspberry Pi will be perfect. Finally, the GPIO on this board in combination with the Adafruit Linux distro will naturally tie into various monitoring functions, such as how many hours a day the water pumps are actually running. The obvious name for this RP will be eye.

Windows 8 first try

I downloaded the Windows 8 release preview and set it up under VirtualBox 4.0.12 on my 4-core AMD system running Ubuntu 10.04. (I’m “this close” to switching that system to XUbuntu 12.04 now that I seem to have found an alternative to the new “improved” Gnome that is in fact UNUSABLE Gnome.) Windows 8 installed very quickly and with an impressively small number of inputs on my part. But when I tried to shut the VM down and save the session it hung badly, and for the first time in my experience VirtualBox had a process running that didn’t respond to a “force quit” GUI action. It did respond to a kill -9. This was with the 32 bit version of W8; I had specified “Windows/other” as the VM type, but maybe “Windows/Windows 7 (32 bit)” would be a better choice. I made that choice and reinstalled, and then did a shutdown of Windows (spending only 3-4 minutes trying to find the interface for this, giving up, and finding the secret on the net: hover around on the right side of the screen to get the “charm bar”, click on “settings”, then on “power”, then on “shutdown”). Now I can start up W8 each time under VirtualBox. However it’s still the case that trying to just save the VM state for a quick restart results in a VB failure dialog and another hard to kill process.

The Windows 8 user interface is astonishing and I’ll leave judgments of that for others. My goal now is to determine what the device support situation is.

Raspberry Pi round two

I ordered a Raspberry Pi in late February, soon after the gate opened, but didn’t get it until late May. My vendor Newark opened the gate again, this time with a quantity-ten limit, on July 5th, and I put in for 10 more as a group buy for fellow Splat Space enthusiasts. Newark published “late August” delivery as it opened the gate, gave us a crazy 162 day lead time right after the order was put in, then pulled it back to August 16. The 10 new boards arrived yesterday, three weeks after the order and three weeks ahead of schedule. They were triple-boxed, and this weekend we’ll be testing each one before sending it on to its new home.