Here are my favorite programming language names. I tried to be unbiased and not factor in my knowledge of the language design, history, or the surrounding community.
Programmers are notoriously bad at naming things, so it’s no surprise that this list of programming languages is full of boring names. Not that it really matters, but if a new language named K++# were released, I likely wouldn’t pay it any attention.
Rust is a great name for a programming language, and easily my favorite. The subtle hint that this is intended to be a systems language and “closer to the metal” is just perfect.
The name Ada stands on its own. Being named after the first computer programmer is just icing on top.
What does Lua mean? Don’t know, but I like it.
I haven’t used Swift enough to know if it’s a truly swift programming language, but the name checks out.
Update: Lua means “moon” in Portuguese.

09 Mar 2019
Great post by Jeremy Keith on Prototyping, I particularly like this bit:
Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.
I’m a huge proponent of prototyping for all the reasons mentioned, but anytime a prototype morphs into the final product, it causes more headaches than if we were to shelve it and start fresh.

24 Dec 2018
If you’ve ever been confused when seeing terms like “desktop environment” and “window manager,” this excerpt from A History of the GUI is a nice summary of their origins:
The initial design goal of the X Window System (which was invented at MIT in 1984) was merely to provide the framework for displaying multiple command shells and a clock on a single large workstation monitor. The philosophy of X was to “separate policy and mechanism” which meant that it would handle basic graphical and windowing requests, but left the overall look of the interface up to the individual program.
To provide a consistent interface, a second layer of code, called a “window manager” was required on top of the X Window server. The window manager handled the creation and manipulation of windows and window widgets, but was not a complete graphical user interface. Another layer was created on top of that, called a “desktop environment” or DE, and varied depending on the Unix vendor, so that Sun’s interface would look different from SGI’s. With the rise of free Unix clones such as Linux and FreeBSD in the early 90s, there came a demand for a free, open-sourced desktop environment. Two of the more prominent projects that satisfied this need were the KDE and GNOME efforts, which were started in 1996 and 1997 respectively.

20 Nov 2018
There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.

07 Oct 2018
Really enjoyed the post Learning From Terminals to Design the Future of User Interfaces, specifically this point:
Modern applications and interfaces frustrate me. In today’s world every one of us has the awesome power of the greatest computers in human history in our pockets and at our desks. The computational capacity at our fingertips would have been unimaginable even to the most audacious thinkers of thirty years ago.
These powerful devices should be propelling our workflows forward with us gangly humans left barely able to keep up, and yet, almost without exception we wait for our computers instead of the other way around. We’ve conditioned ourselves to think that waiting 30+ seconds for an app to load, or interrupting our workflow to watch half-second animations a thousand times a day, are perfectly normal.
He goes on to describe how the web spawned as a new application platform:
Somewhere around the late 90s or early 00s we made the decision to jump ship from desktop apps and start writing the lion’s share of new software for the web. This was largely for pragmatic reasons: the infrastructure to talk to a remote server became possible for the first time, good cross-platform UI frameworks had always been elusive beasts, and desktop development frameworks were intimidating compared to more approachable languages like Perl and PHP.
The other reason was cosmetic: HTML and CSS gave developers total visual control over what their interfaces looked like, allowing them to brand them and build experiences that were pixel-perfect according to their own ends. This seemed like a big improvement over more limiting desktop development, but it led us to the world we have today where every interface is a different size and shape, and the common display conventions that we used to have to aid with usability have become distant memories of the past.
…and the issues that we’re now still dealing with:
Web technology isn’t conducive to fast and efficient UIs, but that’s not the only problem we’re facing. Somewhere along the way UX designers became addicted to catchy, but superfluous, interface effects.
Think of all the animations that an average user sits through in a day: switching between spaces in Mac OS, 1Password’s unlock, waiting for iOS to show the SpringBoard after hitting the home button, entering full screen from a Mac OS app, or switching between tabs in mobile Safari.
Web technology could have really fast and efficient UIs, but a few things get in the way:
- Third-party libraries get added to sites without hesitation to support tracking, advertising, and analytics.
- Designers dream up fancy animations and layouts instead of focusing on usability.
I think the web would have been better suited to what it was originally intended for, a linked document browser, rather than an app-delivery platform.

06 Sep 2018
Fascinating look at the early iterations of Microsoft’s Interface Manager, which would later become Windows: Doing Windows, Part 2: From Interface Manager to Windows.
I particularly liked these excerpts:
I like the obvious analogy of a restaurant. Let’s say I go to a French restaurant and I don’t speak the language. It’s a strange environment and I’m apprehensive. I’m afraid of making a fool of myself, so I’m kind of tense. Then a very imposing waiter comes over and starts addressing me in French. Suddenly, I’ve got clammy hands. What’s the way out?
The way out is that I get the menu and point at something on the menu. I cannot go wrong. I may not get what I want — I might end up with snails — but at least I won’t be embarrassed.
But imagine if you had a French restaurant without a menu. That would be terrible.
It’s the same thing with computer programs. You’ve got to have a menu. Menus are friendly because people know what their options are, and they can select an option just by pointing. They do not have to look for something that they will not be able to find, and they don’t have to type some command that might be wrong.

29 Aug 2018
Great article by Laura Kalbag on why semantic HTML is better for usability and accessibility:
As developers, we like to use divs and spans as they’re generic elements. They come with no associated default browser styles or behaviour except that div displays as a block, and span displays inline. If we make our page up out of divs and spans, we know we’ll have absolute control over styles and behaviour cross-browser, and we won’t need a CSS reset.
Absolute control may seem like an advantage, but there’s a greater benefit to less generic, more semantic elements. Browsers render semantic elements with their own distinct styles and behaviours. For example, button looks and behaves differently from a. And ul is different from ol. These defaults are shortcuts to a more usable and accessible web. They provide consistent and well-tested components for common interactions.
It’s pretty common to see “div-itis,” where a semantic HTML element is nested many layers deep in a series of divs. I’m certainly guilty of this, but I’m conscious of it and try to look for ways to improve. Developers should really start paying attention to their HTML. It seems to be the first thing ignored.
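To make the difference concrete, here’s a small markup sketch of my own (not from Laura’s article) contrasting a div dressed up as a button with the real element:

```html
<!-- "div-itis": the div must re-create everything a button gives you for free.
     Without role, tabindex, and keyboard handling it is invisible to keyboard
     and screen-reader users. -->
<div class="btn" role="button" tabindex="0" onclick="save()">Save</div>

<!-- Semantic: focusable, activated by Enter/Space, and announced as a
     button by assistive technology, with zero extra work. -->
<button type="button" onclick="save()">Save</button>
```

Even with the ARIA attributes bolted on, the div version still needs JavaScript to handle Enter and Space key presses before it behaves like the one-line button below it.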
Good usability is good accessibility
This is my experience as well. In fact, when I previously performed in-person usability testing with blind participants, one individual even stated that oftentimes sites or apps are not usable at all, never mind accessibility issues.

26 Apr 2018