Photo of Jason Robert Carey Patterson

Jason Robert Carey Patterson's
Computing Background

Childhood

Like many of my generation, my first computer was a Commodore 64. My family was desperately poor at the time, but my parents were forward-looking and somehow managed to scrimp and save to buy one when I was twelve years old (1984). On it I learned to play games really well, like a lot of boys that age do today, but I also had many ideas for better games, so I decided to learn to write my own. By reading books and trying things, I taught myself to program in BASIC, spending hours after school every day with the computer hooked up to our family TV screen.

Photo of Commodore 64

My first computer, a Commodore 64

I wrote a few simple games in BASIC (car racing, snow skiing), but I soon wanted to write better games, and interpreted BASIC just wasn't fast enough, so I taught myself to program in assembly language (machine code). I saved up to buy critical books (no World Wide Web back then!) and even hand-entered the code to a complete assembler, linker and sprite editor (with the help of my mother, who would read as I typed), since I couldn't afford expensive commercial programming tools (no open source back then, at least in the mainstream). As strange and tedious as it sounds (and was), learning machine and assembly language in that way turned out to be a huge benefit in the long term – that was where I first became interested in the "nitty-gritty" hardware aspects of how computers really worked, especially their processors and graphics systems. I was an impressionable pre-teen, just the perfect age for learning, and this was completely amazing stuff... "unreal", to use the word from those days!

I wrote several games in assembly language for the '64, including a sailing game with a "keel design" stage (it was America's Cup fever at the time, with the winged keel having just broken the longest winning streak in sporting history), and a never-completed jet-fighter game (not quite a flight simulator, but close, with an out-of-the-cockpit, pseudo-3D view). Writing games was a great learning exercise – they're short, simple projects that build skills in a fun way, and are exactly the kinds of projects that would make ideal assignments in a "programming 101" course, although for some reason schools don't seem to use games as their early projects, which I think is a mistake. I also wrote my first pieces of non-game software on the '64, including a simple drawing program controlled by a joystick (no mouse back then!), and a stock-market charting and analysis program called The Chartist. More than 25 years later, The Chartist was still being used, in hugely improved form. It wasn't finally retired until 2014!

Three years later (1987) our family upgraded to an Amiga 500, a tremendous step up both in performance and especially in functionality, with its amazing, mouse-driven graphical user interface, powerhouse 4096-color graphics system and multitasking OS. This further increased my interest in processors and graphics hardware, as well as kindling an interest in user interfaces and operating systems. I quickly learned 68k assembly language and then the C programming language, again spending most of what little money I had on a C compiler and the Deluxe Paint drawing program. I wrote a new version of The Chartist for the Amiga which was interactive, producing charts in windows on the screen rather than as printouts. My aborted jet-fighter game from the '64 served as the inspiration for teaching my brother to program in BASIC on the Amiga, where together we had lots of fun writing a 2D scroller jet-fighter game, like many popular arcade games of that era.

Photo of Amiga

My second computer, an Amiga

I learned a lot about CPU performance and graphics hardware by writing various graphical demos on the Amiga, which was the graphics pioneer of its day (in the consumer market, at least). This taught me just how amazingly fast a computer can be when you get everything just right, and how quickly that speed falls away when things are even a little bit off; how important it is to play to the strengths of the memory system and avoid its weaknesses; and when/why the graphics system is better than the CPU, and vice versa.
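
For readers who haven't felt this first-hand, here is a minimal illustration of "playing to the strengths of the memory system", written in modern C rather than anything Amiga-specific (the array size and loop structure are just made up for the example). Both functions compute the same sum, but the first walks memory in the order it is laid out and is typically several times faster than the second, purely because of how caches and prefetching work:

    #include <stdio.h>

    #define N 4096
    static float grid[N][N];    /* ~64 MB, far larger than any cache */

    /* Sequential access: reads memory in the order it is laid out,
       so the caches and hardware prefetcher can do their job. */
    static double sum_row_major(void)
    {
        double sum = 0.0;
        for (int row = 0; row < N; row++)
            for (int col = 0; col < N; col++)
                sum += grid[row][col];
        return sum;
    }

    /* Strided access: jumps N*sizeof(float) bytes between reads,
       missing the cache almost every time - same work, much slower. */
    static double sum_col_major(void)
    {
        double sum = 0.0;
        for (int col = 0; col < N; col++)
            for (int row = 0; row < N; row++)
                sum += grid[row][col];
        return sum;
    }

    int main(void)
    {
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }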

Having learned assembly language first, then C, gave me a deep, intimate insight into how compilers work and the code they really generate from programming languages. Normal students learning to program rarely learn these things from the bottom up as I did, and that's a great pity in many respects. Learning how the system really works "under the hood" is absolutely invaluable! That's why today I strongly believe all computer-science degrees should have a compilers course as a mandatory part of the degree – it turns hardware and programming languages from "magic" into nuts-and-bolts understanding, and the students come out the other side "enlightened" and far, far better programmers.

I was just becoming comfortable programming in C (and loving it, such a huge step up from assembly!) when I finished high school and was awarded a full scholarship to Bond University (one of only two awarded university-wide that year). That was very fortunate, because without a scholarship I could never have afforded to attend an expensive, elite university like Bond...

Bond University (Undergrad)

During my degree in computer science at Bond University I learned most of what I know about computers and programming. My interest and existing background knowledge, not to mention already having five and a half years of programming under my belt, meant everything just clicked.

I really enjoyed the whole degree, especially the courses on data structures, hardware, compilers, operations research (graph/network theory), 3D graphics, user interfaces and AI. In fact, I enjoyed the work so much that I did more than a standard workload and completed the degree more quickly than usual. In hindsight, I wouldn't recommend this – enjoy that wonderful time, don't rush it!

While at Bond, I was instrumental in getting Internet access for students. The university was reluctant, worried about security due to the paranoia caused by the movie WarGames, but I felt strongly that the Internet was the future and I organized a huge petition pleading for the university to allow students and staff to dial in from home. This was in 1990, several years before the World Wide Web really existed. 9600 bps modems were the state of the art (about 1 KB/sec!) and things like the Usenet newsgroups and anonymous FTP were all the rage, plus, of course, email. A few years later when the web arrived, Bond was ahead of its time, thanks in no small part to that petition!

Having access to the Internet allowed me to do a bit of open-source development too, including enhancing the hard-disk drivers of my OS kernel (elevator algorithms!) and doing some bug fixing in the driver for my graphics card (anyone remember Godzilla's Guide To Porting The X Server?). This was well before Linux existed, or the term "open source" for that matter, back when the GNU Project was the big new thing. It's been amazing to watch open source gain momentum over the years, and in general I'm a supporter (and contributor), although I have conflicting thoughts about open source, especially its commercial viability in the long term.
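
For anyone who hasn't met the term, the "elevator" in elevator algorithms refers to classic disk scheduling: rather than servicing read/write requests in arrival order, the driver sweeps the disk head in one direction, servicing every pending request along the way, then reverses, just like an elevator servicing floors. Here's a minimal sketch of the idea in C; it's a toy illustration only, with made-up request numbers, and has nothing to do with the actual driver code of that long-gone kernel:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    /* Elevator (SCAN/LOOK) scheduling: starting from the current head
       position and moving upward, service every pending request in
       cylinder order, then reverse and service the rest on the way back
       down.  This avoids the wild head movement of first-come-first-served
       servicing. */
    static void elevator_schedule(int head, int *req, int n)
    {
        qsort(req, n, sizeof(int), cmp_int);

        int first_above = 0;
        while (first_above < n && req[first_above] < head)
            first_above++;

        for (int i = first_above; i < n; i++)       /* sweep upward   */
            printf("service cylinder %d\n", req[i]);
        for (int i = first_above - 1; i >= 0; i--)  /* sweep downward */
            printf("service cylinder %d\n", req[i]);
    }

    int main(void)
    {
        int requests[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
        elevator_schedule(53, requests, 8);
        return 0;
    }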

By the end of the degree, my primary interests were still the same – programming languages, compilers, processors and graphics – but I had also been given the chance to explore many other areas, including AI which I very much enjoyed. I even wrote a chess game which debuted at one of the university's open days to much fanfare, along with a simpler tic-tac-toe game that the uni offered $100 to anyone who could beat (knowing all the while that it wasn't possible, of course). It's been great to see the AI field finally making progress recently, particularly with neural networks for image and speech recognition, because AI struggled so much for so long, with so many unfulfilled promises (famous AI saying: "A year spent in artificial intelligence is enough to make one believe in God" – Alan Perlis). Ironically, the general public now severely overestimate what is possible with AI, and have little understanding of its realistic limits in the foreseeable future (another AI saying: "AI stands for artificial idiot" – Vint Cerf). I just hope the current over-hyping and almost inevitable disappointments don't lead to another "AI winter" like the one in the 1980s and '90s.

My final undergraduate degree project was a visual, programmable microarchitecture simulator and debugger with a nice graphical user interface, intended to be used to assist in teaching the concepts of computer architecture, microarchitecture and microcode. Unfortunately, it was never used – the RISC revolution made microcode obsolete. Timing was against me there. If I had done the project just one year later, it would have been based on RISC ideas and thus useful going forward, not to mention being a much simpler project.

Photo of giving the valedictory address

Giving the valedictory address at Bond

My interest in computer graphics led me to go a bit further with my presentations than most students. I prepared several spectacular 2D and semi-3D animated graphics presentations and displayed most of them on the huge projected screens of Bond's lecture theaters. Nothing shows up boring, static slides better than a presentation with spinning and exploding animated titles, projected onto a cinema-sized screen, with backing music!

By the time I reached my honors year, I wanted to take on a big project related to compilers and processors, so I chose to write a C++ compiler targeting the MIPS RISC and Motorola 68k CISC architectures. This turned out to be much too big a project, and I ended up working like a slave to complete it in time (8 months). Even then, it ended up only compiling a subset of the C++ language (a large subset, although C++ circa 1991 was a much simpler language than it is today). Considering the amount of time I had to write it, my J++ compiler did an excellent job of the difficult problem of instruction selection for the Motorola 68040 processor – an extreme CISC design – using a novel and original instruction selection algorithm which I invented, in an otherwise simple, clean, RISC-style code generator. Its instruction selection was competitive with production compilers of the time, showing that it's possible to use low-level, RISC-like intermediate code and still perform excellent instruction selection for traditional, complex instruction sets (something we see today in projects like LLVM).
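
The details of that algorithm were never published, but the flavor of the problem is easy to show. A CISC target like the 68040 can fold what low-level, RISC-style intermediate code expresses as several separate operations (a shift, an add, a load) into a single instruction with a complex addressing mode, so the instruction selector's job is to spot such patterns and cover each one with a single instruction. Below is a deliberately tiny sketch in C, with a made-up three-address IR and a naive greedy matcher; it illustrates the general covering idea only, not the actual J++ algorithm:

    #include <stdio.h>

    /* A toy, RISC-like three-address IR, just enough to show the idea.
       OP_SHL: dst = src1 << src2;  OP_ADD: dst = src1 + src2;
       OP_LOAD: dst = memory[src1]. */
    typedef enum { OP_SHL, OP_ADD, OP_LOAD } Op;
    typedef struct { Op op; int dst, src1, src2; } Insn;

    /* Greedy pattern matching over the IR: a shift-by-2 feeding an add
       feeding a load is the classic array-indexing chain, and a CISC
       target can cover all three with a single scaled-index load.
       Anything else falls back to one simple pseudo-instruction per IR
       op.  (Register classes and exact 68k syntax are glossed over;
       real selectors also use trees/DAGs and cost models.) */
    static void select_insns(const Insn *ir, int n)
    {
        for (int i = 0; i < n; i++) {
            if (i + 2 < n &&
                ir[i].op   == OP_SHL  && ir[i].src2   == 2 &&
                ir[i+1].op == OP_ADD  && ir[i+1].src2 == ir[i].dst &&
                ir[i+2].op == OP_LOAD && ir[i+2].src1 == ir[i+1].dst) {
                printf("move.l (r%d,r%d.l*4),r%d   ; 68k-style scaled-index load\n",
                       ir[i+1].src1, ir[i].src1, ir[i+2].dst);
                i += 2;                             /* covered three IR ops */
            } else if (ir[i].op == OP_SHL) {
                printf("shl    r%d = r%d << %d\n", ir[i].dst, ir[i].src1, ir[i].src2);
            } else if (ir[i].op == OP_ADD) {
                printf("add    r%d = r%d + r%d\n", ir[i].dst, ir[i].src1, ir[i].src2);
            } else {
                printf("load   r%d = [r%d]\n", ir[i].dst, ir[i].src1);
            }
        }
    }

    int main(void)
    {
        /* r3 = r1 << 2;  r4 = r0 + r3;  r5 = load r4   (i.e. a[i]) */
        Insn ir[3] = {
            { OP_SHL,  3, 1, 2 },
            { OP_ADD,  4, 0, 3 },
            { OP_LOAD, 5, 4, 0 },
        };
        select_insns(ir, 3);
        return 0;
    }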

I was selected as university valedictorian and graduated from Bond in May 1992, less than two and a half years after starting there, setting the record for the fastest undergraduate honors degree in Australia (a 4-year degree completed in 2.33 years), as well as the GPA record at Bond (3.9 out of a maximum 4.0, or 97.5%). It was over all too quickly, but I had a great time at Bond, and I'll never forget it. Those were probably the best couple of years of my life.

Of course, at that point I was still very young. When I gave the valedictory address, for example, I'd only turned 19 just two months earlier. And being that young can be a problem...

To Move To America, Or Not?

Photo of Alpha processor

Alpha, the future of computer architecture in 1992

Towards the end of my undergraduate degree at Bond, I was contacted by some people from Digital Equipment Corporation's research labs with the idea of having DEC fund my PhD if I moved to the USA. DEC's offer was very tempting, both well paid and highly prestigious, with the likely outcome eventually being a position in DEC's research labs as DEC pushed forth with the Alpha architecture, which looked to be the future of processor hardware at the time (even Microsoft saw this, and ported Windows NT to Alpha).

I struggled greatly with the decision, going back and forth several times, and receiving conflicting advice from the many people I asked, but in the end I decided to decline the offer of a DEC-funded scholarship to attend Stanford University. At the time, I really didn't want to move to America. This was partially because I was a loyal Australian and wanted to help build a strong Australian computing industry, which did seem possible back then, before we understood the chasm/tornado dynamics of the software marketplace. It was also partially because I was only 19 years old at the time, and at that age, moving alone, permanently, to the other side of the world, didn't appeal – and everybody knew if I went I wouldn't be coming back.

Today, I sometimes regret that decision. Most of all, though, I regret having to make it at such a young age. That's why I'm very much against accelerated degree programs now. It is possible to get where you want to go too early, and that can be just as problematic as getting there too late. As the book Outliers argues so spectacularly, in life timing is everything. Had I been 22 years old, as would have been normal, I may well have felt differently, and the entire course of my life might have been very different. Certainly, I would have been far more ready to make such a major life decision.

Photo of family at airport

My family seeing me off at the airport, as I
head to America for yet another conference

Looking back now, of course, we have the hindsight of knowing the Alpha architecture did not succeed commercially, despite being the best-designed architecture with the fastest processors in the world, and indeed DEC no longer even exists. It is difficult to say exactly where I would have ended up, but it is fair to say things would have been very different.

I tried to make up for not moving to America by flying over to attend all of the SIGPLAN conferences on programming language design and implementation (SIGPLAN PLDI, the top conference in the languages/compilers field), as well as a couple of ASPLOS conferences, even though it was an incredible strain on my meager finances. I always found the conferences to be highly enjoyable, and I especially relished the chance to interact with other people "like me". In many ways, it's a great pity the conference system is fading today, replaced by the web, which is certainly faster and vastly cheaper, but also much less personal.

Queensland University of Technology (PhD)

Instead of moving to America, I chose to do my PhD at QUT under John Gough. At that time, QUT had the leading languages/compilers group in Australia, based around the "Gardens Point" compilers for the Pascal family of languages. I didn't personally work with those compilers though – my work was always fiercely independent.

My PhD research went very well indeed. I focused on the area of code optimization (making programs faster), in particular direct, whole-program optimization of executable files, and my thesis, entitled VGO – A Very Global Optimizer, was met with great acclaim from its examiners, including this fantastic quote by Charles Fischer, author of a leading textbook and former editor of ACM TOPLAS, the premier journal in the field...

This is quite simply the finest Ph.D. dissertation in experimental computer science that I have ever read. The quality of the writing is that of a polished research monograph. The quality of the research, both in depth and breadth, is extraordinary. The author displays a knowledge of compiler theory and practice worthy of a senior don, far beyond someone just establishing his credentials.     — Charles Fischer

Most of my PhD work remains confidential because I'm commercializing the work and making a product out of it. It's a serious chance to "change the world", so wish me luck!

Photo of John Gough

John Gough,
my PhD advisor

The only major piece of the work which is not confidential is the value range propagation algorithm, which I presented as a paper at the SIGPLAN PLDI conference in 1995. That algorithm is now used by many, possibly even most, production compilers, both for branch prediction and for other value/type propagation optimizations and analyses. As far as I know, I'm still the youngest person to ever present at SIGPLAN PLDI, the top conference in the languages/compilers field (I was 22 years old at the time).
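
The paper itself describes the real algorithm, but the core intuition is easy to sketch: track a numeric range for each variable, and when a branch condition can be wholly or partly decided from those ranges, you get both a static prediction and tighter ranges along each outgoing edge. The toy C fragment below shows just that single step, under a naive uniform-distribution assumption; the actual PLDI '95 algorithm also handles merges, loops and derived probabilities, none of which is shown here:

    #include <stdio.h>

    /* A value range: the variable is known to lie in [lo, hi]. */
    typedef struct { long lo, hi; } Range;

    /* Given x in [lo, hi], estimate the probability that "x < n" is
       taken, naively assuming a uniform distribution over the range.
       Ranges entirely below or above n give a definite 0% or 100%,
       which is also exactly the information an optimizer can use to
       delete the branch, or to narrow x on each successor edge. */
    static double prob_less_than(Range x, long n)
    {
        if (x.hi < n)  return 1.0;                 /* always taken */
        if (x.lo >= n) return 0.0;                 /* never taken  */
        double taken = (double)(n - x.lo);         /* values below n     */
        double total = (double)(x.hi - x.lo + 1);  /* all possible values */
        return taken / total;
    }

    int main(void)
    {
        Range x = { 0, 99 };    /* e.g. x = something % 100 */
        printf("P(x < 10)  = %.2f\n", prob_less_than(x, 10));   /* 0.10 */
        printf("P(x < 200) = %.2f\n", prob_less_than(x, 200));  /* 1.00 */
        return 0;
    }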

I was also the first person to use color slides at SIGPLAN PLDI, in keeping with my presentation style from Bond, though my PLDI presentation was conservative (no animations or "builds"; you can see the slides here). Organizing it was an enormous hassle, but I don't mind pushing the boundaries in the name of progress, so I went out of my way, with conference chair David Wall's help, to arrange a projector and make it happen. It certainly shook things up, and within a couple of years there wasn't a single overhead-based presentation to be found at those conferences anymore.

My PhD thesis included three "reader background" appendices, intended to ensure readers would really understand and "get" what I was talking about, and not be struggling with the basics and terminology of the modern compilers and hardware world. Due to surprising and unexpected popular demand, those appendices have since been extracted into separate articles in their own right, and have gone on to lead independent lives of their own.

Screenshot

Modern Microprocessors - A 90-Minute Guide! article at number 2 on Google :-)

One of those articles, Modern Microprocessors – A 90-Minute Guide! has grown and grown in popularity over the years, and is now one of the world's most popular and widely used introductory articles on processor (CPU) design, with over 800,000 readers and often in the Google top 10 for the word "microprocessors". It's even been as high as number 2. If it were a book, it would officially qualify as a best seller! It is updated regularly to keep it current, and is used by many university courses around the world. In fact, that article has now been read by far more people than belong to the ACM. It's been amazing, especially considering I almost didn't include that appendix in my thesis at all, thinking everybody probably already knew that stuff!

The other two introductory articles, Basic Instruction Scheduling (and Software Pipelining) and Register Allocation by Graph Coloring, are less widely read but are nonetheless used by many university compiler courses worldwide, because they're better than most textbook coverage. It's nice to know people must really like my writing style!

Lighterra

Partway through my PhD, when I began to feel my work had real commercial potential, I decided to switch my PhD to part time and start my own small company, Lighterra (originally called PattoSoft, first named after my nickname, "Patto", back when I was 12 years old). The idea was for the company to provide an income via some sideline system administration and networking work, while allowing me the time to pursue the commercialization of my PhD research.

Screenshot of Netscape

Netscape, THE web browser

At that time (late 1995), the World Wide Web was just beginning to grow, and I'd been completely blown away by seeing Lynx, a simple text-only web browser, 18 months earlier. I felt sure the web was the next big thing, which was confirmed by the release of Mosaic and then Netscape over the ensuing months. Recognizing the opportunity, I helped my parents start a small web-design company and set myself up to do system administration and networking work while I simultaneously completed my PhD and commercialized my PhD ideas. That was the plan, anyway.

Unfortunately, things didn't go quite to plan. Once again, timing wasn't on my side.

What began as a software company with a small networking and system-administration sideline quickly morphed into what was, effectively, a networking services company. Far from allowing plenty of free time to develop and commercialize my PhD research, the explosive success and ongoing evolution of the World Wide Web resulted in me being swamped with time-critical but uninteresting system administration, networking and security work instead, leaving little or no spare time. Technically, the work focused mainly on networking, security and UNIX system administration (primarily Solaris & Linux), but also included some custom software development for tasks such as enhancing internal search engines, hardening email daemons, and customizing DNS systems and content caching/mirroring systems. Additional, secondary tasks included teaching/training and consulting in web design, search engine optimization, web performance and video encoding.

Photo of installing server

Installing a server at the PEER1 data center
in LA as part of my HostEverywhere CDN

With great effort, I was able to squeeze in the time to complete my PhD by late 2000 (officially February 2001), but I continued to be swamped by other work and precious little was being done towards commercializing my PhD ideas. I was still determined to finish and release a product from the work, but I just couldn't get the time to do it. The whole situation was deeply frustrating and depressing, and if it hadn't been for the book Learned Optimism, I may well have given up on everything at that point. Years went by, and I hung on hoping things would improve, but nothing changed. Eventually, something had to give.

In early 2011, 10 years after finishing my PhD, and after 10 years of doing mainly system administration, I decided a major course correction was necessary. I considered switching to a teaching/research position, despite my quite critical opinion of the educational/academic system, but while I was still evaluating different universities, an unexpected opportunity presented itself doing 3D graphics for the mining industry. This allowed me to cease offering system administration, networking and security services, and eventually phase out the HostEverywhere CDN. The 3D graphics work gave me an income from more interesting work that took far less time and wasn't as time-critical and unbounded as system administration, leaving me the time, and the all-important freedom, to pursue my real work. And that is what Lighterra was originally intended to provide, all along.

So that's where I've come from, where I am today, and where I'm heading. Wish me luck!