

Brief History of Interfaces: from abacuses to cyberpunk

The evolution of interfaces from their earliest iterations to the UX+UI we know today

The entire history of interfaces won’t fit into a single text, but maybe after this article you’ll be able to show off a little in front of your friends. Rather than quote Wikipedia yet again, we’ll keep it simple: an interface is something through which two objects interact. These can be machines, apps, people, and devices. Even our arms and legs, eyes and ears could be considered interfaces — through them, we interact with the world around us. In this article, we’ll mostly focus on HMI, Human-Machine Interfaces (but we’ll mention the rest of them too).

Before machinery

That’s also an interface

It used to be that we could only convey information statically. Cave paintings or abacuses: neither involved any further action, no active manipulation of data. Writing systems allowed us to save and transfer information, and for the first time, we could receive and share data without active physical interaction.

Only manuscripts, woodblock prints, and assorted driftwood. The invention of the printing press brought our ability to save and share data to a whole new level.

The industrial revolution and its consequences for interfaces

Interfaces as we commonly understand them showed up during the industrial revolution. All kinds of machines were invented, and these machines needed to be operated. For that, we needed universal and understandable patterns. This is the next step of interface evolution — these machines needed standardization, so they could be mass-produced and so that people could be taught to work them relatively easily.

Metropolis, 1927, Fritz Lang — in the city of the future someone still has to pull the levers

1804 is when the first coding happened — in the sense of giving the machine some sort of algorithm. The Jacquard loom became the first machine that could be programmed, an incredible breakthrough in terms of repeatable actions. The machine was programmed using punched cards.

Punched cards were the progenitors of programming, and soon enough all kinds of machines were using them, even player pianos. And in 1873 we saw the first iteration of the now-familiar QWERTY layout.

Sholes & Glidden — the keyboard didn’t have the digits 1 and 0, to save on manufacturing; the capital letters I and O were used instead. Some keyboards didn’t have the digit 1 all the way into the 1970s

Going digital

It would take quite some time to go through every stage of computing evolution, so let’s get to the fun part: the digital revolution. The 1980s marked a ubiquitous transformation from analog tech to digital. Here are the interfaces that already existed by then (among those between people and machines):

  • Gesture-based interfaces: the steering wheel, the joystick, etc. (analog ways to control planes, trains, and automobiles);
  • The command line, where instructions are given through keyboard input (DOS, BIOS);
  • The graphical user interface.

A GUI is one where functions are represented by graphical elements — the so-called WIMP: windows, icons, menus, pointer.

The first GUI was developed at the Xerox Palo Alto Research Center (PARC) for the Xerox Alto computer, built in 1973. It was not a commercial product and was meant mostly for scientific research.

Xerox Alto and Xerox 8010 Star

And then, in 1984, came the Macintosh System Software, "System 1".

In 1985, Amiga gets ahead of the pack and introduces a GUI that supports a whole four colors.

That very same year, 1985, Microsoft introduces Windows 1.0.

Ten years of progressive development go by, and Microsoft introduces Windows 95. The height of GUI fashion at the time, and it still has something to it (but maybe that’s nostalgia talking). It introduced multiple solutions that later became commonplace — the Start button, multitasking with the eponymous windows, and different states of elements as part of interaction and response.

In 2001 we see Mac OS X — it introduced solutions that became the core of every following iteration of the GUI in macOS.

And without a mouse?

The evolution of the GUI hinged on the quality of representation of the elements that had become commonplace in everyday work. Until smartphones showed up. This revolution was in the making for a long time, but widespread adoption takes more than just a new technology. It also has to be reliable, precise, and affordable.

Further ahead we see the technology getting smaller, thanks to improvements in batteries and microchips. We get wearable devices (watches, trackers). The tech gets adopted by all kinds of industries for mass production, and is even used in outer space.

Astronauts aren’t pulling levers
The first tablets in 2001: A Space Odyssey — Samsung even used this as an argument in a patent dispute with Apple

Functionally, the smartphone interface hasn’t changed much since the first iPhone (especially compared to the difference between Windows 1.0 and Windows 95). Hybrid interfaces appeared — combinations of touch screens and voice assistants. Take a look at how the UI changed visually: from the skeuomorphism of 2012 to the iOS of today:

What’s next

Voice is used more and more in SILK interfaces — Speech, Image, Language, Knowledge. Maybe someday we’ll see neurointerfaces that transmit data between neurons and machines through implants. It’s unlikely that we’ll see the end of the GUI anytime soon — but technology keeps moving forward, and when a technology gets advanced and affordable enough, it usually sweeps its predecessor off the map.

Johnny Mnemonic, 1995, Robert Longo
A Behance case study on the interfaces made for Blade Runner 2049

What do you think we’ll see in ten years, twenty, a hundred? Will it still be smartphones? Will we even have screens at all?
