07 Oct 2018
There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
Really enjoyed the post Learning From Terminals to Design the Future of User Interfaces, specifically this point:
Modern applications and interfaces frustrate me. In today’s world every one of us has the awesome power of the greatest computers in human history in our pockets and at our desks. The computational capacity at our fingertips would have been unimaginable even to the most audacious thinkers of thirty years ago.
These powerful devices should be propelling our workflows forward with us gangly humans left barely able to keep up, and yet, almost without exception we wait for our computers instead of the other way around. We’ve conditioned ourselves to think that waiting 30+ seconds for an app to load, or interrupting our workflow to watch a half-second animation a thousand times a day, are perfectly normal.
He goes on to describe how the web spawned as a new application platform:
Somewhere around the late 90s or early 00s we made the decision to jump ship from desktop apps and start writing the lion’s share of new software for the web. This was largely for pragmatic reasons: the infrastructure to talk to a remote server became possible for the first time, good cross platform UI frameworks had always been elusive beasts, and desktop development frameworks were intimidating compared to more approachable languages like Perl and PHP.
The other reason was cosmetic: HTML and CSS gave developers total visual control over what their interfaces looked like, allowing them to brand them and build experiences that were pixel-perfect according to their own ends. This seemed like a big improvement over more limiting desktop development, but it led us to the world we have today where every interface is a different size and shape, and the common display conventions that we used to have to aid with usability have become distant memories of the past.
…and the issues that we’re now still dealing with:
Web technology isn’t conducive to fast and efficient UIs, but that’s not the only problem we’re facing. Somewhere along the way UX designers became addicted to catchy, but superfluous, interface effects.
Think of all the animations that an average user sits through in a day: switching between spaces in Mac OS, 1Password’s unlock, waiting for iOS to show the SpringBoard after hitting the home button, entering full screen from a Mac OS app, or switching between tabs in mobile Safari.
Web technology could have really fast and efficient UIs, but a few things get in the way:
- Third-party libraries get added to sites without hesitation to support tracking, advertising, and analytics.
- Designers dream up fancy animations and layouts instead of focusing on usability.
I think the web would have been better off as what it was originally intended to be, a linked document browser, rather than an app-delivery platform.
06 Sep 2018
Fascinating look at the early iterations of Microsoft’s Interface Manager, which would later become Windows: Doing Windows, Part 2: From Interface Manager to Windows
I particularly liked these excerpts:
29 Aug 2018
I like the obvious analogy of a restaurant. Let’s say I go to a French restaurant and I don’t speak the language. It’s a strange environment and I’m apprehensive. I’m afraid of making a fool of myself, so I’m kind of tense. Then a very imposing waiter comes over and starts addressing me in French. Suddenly, I’ve got clammy hands. What’s the way out?
The way out is that I get the menu and point at something on the menu. I cannot go wrong. I may not get what I want — I might end up with snails — but at least I won’t be embarrassed.
But imagine if you had a French restaurant without a menu. That would be terrible.
It’s the same thing with computer programs. You’ve got to have a menu. Menus are friendly because people know what their options are, and they can select an option just by pointing. They do not have to look for something that they will not be able to find, and they don’t have to type some command that might be wrong.
Great article by Laura Kalbag on why semantic HTML is better for usability and accessibility:
As developers, we like to use divs and spans as they’re generic elements. They come with no associated default browser styles or behaviour except that div displays as a block, and span displays inline. If we make our page up out of divs and spans, we know we’ll have absolute control over styles and behaviour cross-browser, and we won’t need a CSS reset.
Absolute control may seem like an advantage, but there’s a greater benefit to less generic, more semantic elements. Browsers render semantic elements with their own distinct styles and behaviours. For example, button looks and behaves differently from a. And ul is different from ol. These defaults are shortcuts to a more usable and accessible web. They provide consistent and well-tested components for common interactions.
It’s pretty common to see “div-itis,” where a semantic HTML element is nested many layers deep in a series of divs. I’m certainly guilty of this, but I’m conscious of it and try to look for ways to improve. Developers should really start paying attention to their HTML; it seems to be the first thing ignored.
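To make the contrast concrete, here’s a minimal sketch of the two approaches — the class names and markup are illustrative, not taken from Kalbag’s article:

```html
<!-- "div-itis": generic elements give browsers and assistive tech nothing to work with -->
<div class="nav">
  <div class="nav-item">
    <span class="link" onclick="location.href='/about'">About</span>
  </div>
</div>

<!-- Semantic equivalent: keyboard focus, landmark roles, and link behaviour come for free -->
<nav>
  <ul>
    <li><a href="/about">About</a></li>
  </ul>
</nav>
```

The span in the first version isn’t focusable, doesn’t announce itself as a link to screen readers, and ignores middle-click and keyboard activation; the anchor in the second version handles all of that by default.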
Good usability is good accessibility
This is my experience as well. In fact, when I previously performed in-person usability testing with blind participants, one individual even stated that oftentimes sites or apps are not usable at all, never mind accessibility issues.
26 Apr 2018
These vintage Soviet control rooms are stunning. Pre-GUI interfaces like these control centers should be studied by UI designers, as there is much to learn from their layouts, control types, and schematics.
12 Jan 2018
16 Oct 2017
Matthew Green, cryptographer at Johns Hopkins, writing on encryption in light of recent Kaspersky reports:
At the end of the day we, as a society, have a decision to make. We can adopt the position that your data must always be accessible—first to the company that made your software and secondly to its government. This will in some ways make law enforcement’s job easier, but at a great cost to industry and our own cybersecurity. It will make us more vulnerable to organized hackers and could potentially balkanize the tech industry—exposing every U.S. software firm to the same suspicions that currently dog Kaspersky.
Alternatively, we can accept that to protect user data, companies have to let it go—and the single most powerful tool technologists have developed to accomplish this goal is encryption. Software with encryption can secure your data, and in the long run this—properly deployed and verified—can help our software industry spread competitively across the world. This will not be without costs: It will make (some) crimes harder to solve. But the benefits will be real as well.
Software and service providers are not deploying encryption merely to frustrate the U.S. government. Providers know their business far better than the Justice Department does—when they choose to deploy encryption, it’s because their business depends on it. And while it may frustrate law enforcement, in this case Silicon Valley’s interests and consumers’ interests are aligned.
28 Sep 2017
This post from the Pixelmator blog is great, loved this part:
The app icon is a fundamental part of any app. I personally judge apps by their icons and I am very comfortable admitting that. The icon is a reflection of lots of things, including quality, beauty, innovation, platform nativeness, and even the developer’s values. All of this is visible from the very first glimpse. It’s incredibly rare for an app with a beautiful icon to be crap. Even more, app icons are of utmost importance in macOS, since we, Mac users, care a lot about how our apps look and feel.