
Dealers of Lightning

Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age is a good read on the role PARC played in the history of computing. The book is twenty years old now, but it still offers great insight. One bit that stood out to me was this excerpt:

Their first step was to do something PARC had never tried before: They analyzed how non-engineers would actually use a computer.

This survey was conducted back at Ginn, to which Mott returned with an Alto display, keyboard, and mouse. He installed them as a sort of dummy setup (the machine was nonfunctional) and invited editors to seat themselves in front of the equipment, imagine they were editing on-line, and describe what they expected it to do.

“They were a little skeptical,” he recalled. “But —surprise, surprise— what you got was them wanting the machine to mimic what they would do on paper.” They even described the processes in terms of the tools they had always used. That is why to this day every conventional word processor’s commands for deleting a block of text and placing it elsewhere in a file are called “cut” and “paste” —because Ginn’s editors, the first non-engineers ever to use such a system, were thinking about the scissors and paste pots they used to rearrange manuscripts on paper.

Skeuomorphism is often misunderstood in design. The recent period of “flatness” over the past 5-10 years was a shallow episode, one defined mostly by visual rebellion against Web 2.0-era design. If we take the “design is how it works” stance, then skeuomorphism never really went away. We never dropped “cut” or “paste.”

Favorite Programming Language Names

Here are my favorite programming language names. I tried to be unbiased and not factor in what I know about each language’s design, history, or surrounding community.

Programmers are notoriously bad at naming things, so it’s no surprise that this list of programming languages is full of boring names. Not that it really matters, but if a new language were released named K++#, I likely wouldn’t pay it any attention.

Honorable mentions: C, Ruby, ColdFusion, Simula, AWK

Rust

A great name for a programming language, and easily my favorite. The subtle hint that this is intended to be a systems language and “closer to the metal” is just perfect.

Ada

The name Ada stands on its own. Being named after the first computer programmer is just icing on top.

Lua

What does Lua mean? Don’t know, but I like it.

Swift

I haven’t used Swift enough to know if it’s a truly swift programming language, but the name checks out.

Update: Lua means “moon” in Portuguese.

Prototypes and Production

Great post by Jeremy Keith on prototyping. I particularly like this bit:

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

I’m a huge proponent of prototyping for all the reasons mentioned, but anytime a prototype morphs into the final product, it causes more headaches than shelving it and starting fresh would have.

Window Systems, Window Managers, Desktop Environments

If you’ve ever been confused by names like “desktop environment” and “window manager,” this excerpt from A History of the GUI is a nice summary of their origins:

The initial design goal of the X Window System (which was invented at MIT in 1984) was merely to provide the framework for displaying multiple command shells and a clock on a single large workstation monitor. The philosophy of X was to “separate policy and mechanism” which meant that it would handle basic graphical and windowing requests, but left the overall look of the interface up to the individual program.

To provide a consistent interface, a second layer of code, called a “window manager” was required on top of the X Window server. The window manager handled the creation and manipulation of windows and window widgets, but was not a complete graphical user interface. Another layer was created on top of that, called a “desktop environment” or DE, and varied depending on the Unix vendor, so that Sun’s interface would look different from SGI’s. With the rise of free Unix clones such as Linux and FreeBSD in the early 90s, there came a demand for a free, open-sourced desktop environment. Two of the more prominent projects that satisfied this need were the KDE and GNOME efforts, which were started in 1996 and 1997 respectively.

The Rise and Demise of RSS

Great post on the history of RSS:

There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.

I still use RSS every day. It’s my preferred way of reading news: Feedbin for syncing and managing feeds, Reeder for iOS, and ReadKit for macOS.

Learning From Terminals to Design the Future of User Interfaces

Really enjoyed the post Learning From Terminals to Design the Future of User Interfaces, specifically this point:

Modern applications and interfaces frustrate me. In today’s world every one of us has the awesome power of the greatest computers in human history in our pockets and at our desks. The computational capacity at our fingertips would have been unimaginable even to the most audacious thinkers of thirty years ago.

These powerful devices should be propelling our workflows forward with us gangly humans left barely able to keep up, and yet, almost without exception we wait for our computers instead of the other way around. We’ve conditioned ourselves to think that waiting 30+ seconds for an app to load, or interrupting our workflow to watch a half-second animation a thousand times a day, is perfectly normal.

He goes on to describe how the web emerged as a new application platform:

Somewhere around the late 90s or early 00s we made the decision to jump ship from desktop apps and start writing the lion’s share of new software for the web. This was largely for pragmatic reasons: the infrastructure to talk to a remote server became possible for the first time, good cross platform UI frameworks had always been elusive beasts [1], and desktop development frameworks were intimidating compared to more approachable languages like Perl and PHP.

The other reason was cosmetic: HTML and CSS gave developers total visual control over what their interfaces looked like, allowing them to brand them and build experiences that were pixel-perfect according to their own ends. This seemed like a big improvement over more limiting desktop development, but it led us to the world we have today where every interface is a different size and shape, and the common display conventions that we used to have to aid with usability have become distant memories of the past.

…and the issues we’re still dealing with today:

Web technology isn’t conducive to fast and efficient UIs, but that’s not the only problem we’re facing. Somewhere along the way UX designers became addicted to catchy, but superfluous, interface effects.

Think of all the animations that an average user sits through in a day: switching between spaces in Mac OS, 1Password’s unlock, waiting for iOS to show the SpringBoard after hitting the home button, entering full screen from a Mac OS app, or switching between tabs in mobile Safari.

Web technology could support really fast and efficient UIs, but a few things get in the way:

  • Developers reach for libraries and frameworks instead of writing minimal plain JavaScript, usually out of naivety, time constraints, or apathy toward the user (see the sketch after this list).
  • Third-party libraries get added to sites without hesitation to support tracking, advertising, and analytics.
  • Designers dream up fancy animations and layouts instead of focusing on usability.
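
To make that first point concrete, here is a minimal sketch of the kind of thing I mean: a tab switcher in a handful of lines of plain JavaScript, with no framework involved. The markup it assumes (buttons carrying a data-tab attribute, sections with matching ids) is hypothetical, purely for illustration.

  // Minimal tab switcher in plain JavaScript; no library needed.
  // Assumes hypothetical markup like:
  //   <nav>
  //     <button data-tab="one">One</button>
  //     <button data-tab="two">Two</button>
  //   </nav>
  //   <section id="one">...</section>
  //   <section id="two" hidden>...</section>
  document.querySelectorAll("[data-tab]").forEach((button) => {
    button.addEventListener("click", () => {
      document.querySelectorAll("section[id]").forEach((panel) => {
        // Show only the panel whose id matches the clicked button's data-tab.
        panel.hidden = panel.id !== button.dataset.tab;
      });
    });
  });

Everything in the sketch is a built-in browser API (querySelectorAll, dataset, the hidden property), so there is nothing extra to download, parse, and execute before the page becomes responsive.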

I think the web would have been better off as what it was originally intended to be, a linked-document browser, rather than an app-delivery platform.

From Interface Manager to Windows

Doing Windows, Part 2: From Interface Manager to Windows is a fascinating look at the early iterations of Microsoft’s Interface Manager, the project that would later become Windows.

I particularly liked these excerpts:

I like the obvious analogy of a restaurant. Let’s say I go to a French restaurant and I don’t speak the language. It’s a strange environment and I’m apprehensive. I’m afraid of making a fool of myself, so I’m kind of tense. Then a very imposing waiter comes over and starts addressing me in French. Suddenly, I’ve got clammy hands. What’s the way out?

The way out is that I get the menu and point at something on the menu. I cannot go wrong. I may not get what I want — I might end up with snails — but at least I won’t be embarrassed.

But imagine if you had a French restaurant without a menu. That would be terrible.

It’s the same thing with computer programs. You’ve got to have a menu. Menus are friendly because people know what their options are, and they can select an option just by pointing. They do not have to look for something that they will not be able to find, and they don’t have to type some command that might be wrong.