
Past and Present Futures of User Interface Design

gyomu

I once worked in a design research lab for a famous company. There was a fairly senior, respected guy there who was determined to kill the keyboard as an input mechanism.

I was there for about a decade and every year he'd have some new take on how he'd take down the keyboard. I eventually heard every argument and strategy against the keyboard you can come up with - the QWERTY layout is over a century old, surely we can do better now. We have touchscreens/voice input/etc., surely we can do better now. Keyboards lead to RSI, surely we can come up with input mechanisms that don't cause RSI. If we design an input mechanism that works really well for children, then they'll grow up not wanting to use keyboards, and that's how we kill the keyboard. Etc etc.

Every time his team would come up with some wacky input demos that were certainly interesting from an academic HCI point of view, and were theoretically so much better than a keyboard on a key dimension or two... but when you actually used them, they sucked way more than a keyboard.

My takeaway from that as an interface designer is that you have to be descriptivist, not prescriptivist, when it comes to interfaces. If people are using something, it's usually not because they're idiots who don't know any better or who haven't seen the Truth, it's because it works for them.

I think the keyboard is here to stay, just as touchscreens are here to stay and yes, even voice input is here to stay. People do lots of different things with computers, it makes sense that we'd have all these different modalities to do these things. Pro video editors want keyboard shortcuts, not voice commands. Illustrators want to draw on touch screens with styluses, not a mouse. People rushing on their way to work with a kid in tow want to quickly dictate a voice message, not type.

The last thing I'll add is that it's also super important, when you're designing interfaces, to actually design prototypes people can try and use to do things. I've encountered way too many "interface designers" in my career who are actually video editors (whether they realize it or not). They'll come up with really slick demo videos that look super cool, but make no sense as an interface because "looking cool in video form" and "being a good interface to use" are just 2 completely different things. This is why all those scifi movies and video commercials should not be used as starting points for interface design.

fxtentacle

My experience with people trying to replace a keyboard is that they forget about my use cases and then they're surprised that their solution won't work for me. For example:

1. I'm in a team video conference and while we are discussing what needs to be done, I'm taking notes of my thoughts on what others said.

2. I'm working as a cashier and the scanner sometimes fails to recognize the barcode so I need to manually select the correct product.

Now let's look at common replacements:

A. Voice interface? Can't work. I would NOT want to shout my private notes into a team video call. The entire point of me writing them down is that they are meant for me, not for everyone.

B. Touch screen? Can't work. I can type without looking at the keyboard because I can feel the keys. Blindly typing on a touch screen, on the other hand, provides no feedback to guide me. Also, I have waited for cashiers suffering through image-heavy touch interfaces often enough to know that it's easily 100x slower than a numpad.

C. Pencil? Drawing tablet? Works badly because the computer will need to use AI to interpret what I meant. If I put in some effort to improve my handwriting, this might become workable for call notes. For the cashier, the pen sounds like one more thing that'll get lost or stuck or dirty. (Some clients are clumsy, that's why cashiers sometimes have rubber covers on the numpad.)

I believe everyone who wants to "replace" the usual computer interface should look into military aircraft first. HOTAS: "hands on throttle-and-stick". That's what you need for people to react fast and reliably do the right thing. As many tactile buttons as you can reach without significantly moving your hands. And a keyboard already gets pretty close to that ideal...

SAI_Peregrinus

Don't forget MFDs from military/aviation/marine interfaces. Buttons on the edges of the screen, and the interface has little boxes with a word (or abbreviation or icon) for what the button does just above each button on the screen. When the system mode changes, the boxes change their contents to match the new function of the buttons. So you get the flexible functions of a touch screen with the tactile feedback of buttons.

Some test equipment (oscilloscopes, spectrum analyzers, etc.) has the same thing.
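The MFD softkey pattern is easy to sketch in code: the physical buttons stay fixed, and only the on-screen labels next to them change with the system mode. A toy illustration (the modes and labels here are invented, not taken from any real avionics system):

```python
# MFD-style softkeys: physical buttons keep fixed positions; the labels
# drawn beside them on screen change when the system mode changes.
# All mode and label names below are illustrative.

SOFTKEY_PAGES = {
    "NAV": ["MAP", "WPT", "RNG+", "RNG-", "BACK"],
    "RADIO": ["COM1", "COM2", "VOL+", "VOL-", "BACK"],
}

def labels_for(mode):
    """Return the on-screen labels for the current mode."""
    return SOFTKEY_PAGES[mode]

def press(mode, button_index):
    """A physical button press is interpreted via the current mode's label."""
    return labels_for(mode)[button_index]

print(press("NAV", 0))    # the same physical key means MAP in NAV mode...
print(press("RADIO", 0))  # ...and COM1 in RADIO mode
```

The user gets tactile, eyes-mostly-free buttons, while the software keeps the flexibility of a touch screen.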

saagarjha

TI calculators do the same thing.

SilasX

>Voice interface? Can't work. I would NOT want to shout my private notes into a team video call. The entire point of me writing them down is that they are meant for me, not for everyone.

Heh, I had a weird nightmare about that. I was typing on my laptop at a cafe, and someone came up to me and said, "Neat, you're going real old-school. I like it!" [because everyone had moved to AI voice transcription]

I was like, "But that's not a complete replacement, right? There are those times when you don't want to bother the people around you, or broadcast what you're writing."

And then there was a big reveal that AI had mastered lip-reading, so even in those cases, people would put their lips up to a camera and mouth out what they wanted to write.

TeMPOraL

There are many times I really wished to use voice interface but in private. Some notes - both personal and professional - I feel I can voice better than type them out. Sometimes I can't type - it's actually a frequent occurrence when you have small kids. For all those scenarios, I wish for some kind of microphone/device that could read from subvocalizations or lip movement.

In a similar fashion, many times I dreamed about contact lenses with screens built-in, because there are many times I'd like to pull up a screen and read or write something, but I can't, because it would be disturbing to people around me, or because the content is not for their eyes.

jodrellblank

> "As many tactile buttons as you can reach without significantly moving your hands. And a keyboard already gets pretty close to that ideal..."

The DataHand came even closer: https://en.wikipedia.org/wiki/DataHand

But I'm not sure that's good; allowing small hand movements, made without looking, brings even more keys into reach. I can hit the control keys with the palms of my hands - and often do that with the palm of the knuckle under the pinky finger - and feel where they are by the gaps around them; similar with ESC and some of the F-keys, and backspace by its shape, etc. I don't know of a keyboard which is designed to maximise that effect, or how one would be.

SSLy

https://store.azeron.eu/azeron-keypads does this in a bit different (better?) way

inhumantsar

I'm 100% with you on this but I will admit there was one concept / short run project that actually looked like it was on the right track: The Optimus Maximus keyboard[1].

The keyboard itself was not good for a bunch of reasons, but the idea was gold. Individual, mechanical keys which could change their legends to suit the current context. You wouldn't have to memorize every possible layout before using it, and you could change the layout to suit whatever you're currently doing.

The closest equivalent I've seen would be the pre-mouse-era keyboards which could accept laminated paper legends for specific applications. The next closest, though in the opposite direction, would be modern configurable keyboards with support for multiple layers.

1: https://www.artlebedev.com/optimus/maximus/

gyomu

There's this great article on the Optimus Maximus, and how it directly led to the now popular Stream Deck (by Elgato, not the Steam Deck by Valve):

https://www.theverge.com/c/features/24191410/elgato-stream-d...

regularfry

The fundamental problem with a dynamic layout is that you need to look at it to know what the keys are. The one huge, underrated benefit of a static layout is that it's constant across the environments that you use it in, so it's (always to some degree, rarely perfectly) memorised. Qwerty doubly so, because so many people have it memorised. It avoids the problem with the Maximus that in order to take advantage of the dynamic layout, you really want to be able to see through your fingers. Your fingertips by default block the information you need.

I can see the Maximus being useful for switching between and learning new layouts - so if you want to give colemak a try you can, without needing to swap all your keycaps (even if that's possible on your keyboard), or swapping to blanks and forcing yourself to learn everything by heart. But I think the reason you don't see this idea repeated much is that it's self-defeating.

taeric

Do you have to be looking at it? My keyboard has blank keys. I used to use vim, which had a modal input scheme. My music keyboard has modes where different keys play different instruments.

I agree that having the legends is good for affordances when learning. But they oddly hurt training. Specifically, they make it harder to remove the visual from the feedback loop. When training typing a long time ago, you wouldn't even look at the screen until you'd typed it all.

mncharity

> you really want to be able to see through your fingers

As when graphic artists draw on, and interact with, a tablet, while watching a screen, rather than using a tablet display. Similarly, while I enjoy the feel of a thinkpad keyboard, I do wish it did whole-surface multitouch and stylus and richly dynamic UI as well. So I tried kludging a flop-up overhead keyboard cam, and displaying it raw-ish as a faint screen overlay, depth segregated by offsetting it slightly above the shutter-glasses-3D screen. In limited use (panel didn't like being flickered), I recall it as... sort of tolerable. Punted next steps included seeing if head-motion perspective 3D was sufficient for segregation, and using the optical tracking to synthesize a less annoying overlay. (A ping-pong ball segment lets a stylus glide over keys.)

warp

The flux keyboard seems to be a modern attempt at the same concept. They're taking pre-orders, I don't know if they've shipped any yet or how close they are to shipping.

https://fluxkeyboard.com/

kurthr

I may have worked for that company, but I came away with a different take.

People are User Interface bigots!

People get used to something and that's all they want. The amazing thing Apple was able to do was get people to use the mouse, then the scrollwheel, and then the touchscreen. Usually, that doesn't mean that you get rid of an interface that already exists, but when you create a new device you can rethink the interface. I used the scroll wheel for the iPod before it came out and it was not intuitive, but the ads showed how it worked, and once you used it 20-50x it just seemed right... and everything else was wrong! People would tell me how intuitive it was, and I would laugh, because without the ads and other people using it, it was not at all.

Now we're in a weird space, because an entire generation is growing up with swipe interfaces (and a bit of game controller), and that's going to shape their UI attitudes for another generation. I think the keyboard will have a large space, but with LLM prediction, maybe not as much as we've come to expect.

I could go on about Fitts testing and cognitive load and the performance of various interfaces, but frankly people ignore it.

m463

Strangely, Apple sucks at mice. A multi-button mouse with a scroll wheel is way better than any Apple mouse I've used (especially the round one).

That said, the touchpad on some of their laptops is pretty good when you can't carry a mouse, but nowhere near a good mouse.

(I have owned all their mice, all their trackpads, etc)

Their keyboards have gone downhill too. I like the light feel of current keyboards, but the loss of sculpted keys to center and cushion the fingers, and of key arrangements shaped to the hands, has really replaced function with form.

All the people who knew these kinds of truths have probably retired. Sigh.

juliendorra

The multiple-button mouse predates the one-button Apple mouse by 2 decades.

The one-button mouse paired to a GUI was an innovative solution: Xerox couldn't find a way to make a GUI work with one button only, as per their 1983 article on designing the Alto UI. They tried, and did a lot of HMI research, but were trapped in a local maximum in terms of GUI.

Jef Raskin and others who moved to Apple from PARC (Tesler, if I recall correctly) had seen how three buttons brought confusion even amongst the people who were themselves designing the UI!

So Raskin insisted that with one button, every user would always click the right button every time. He invented dragging diagonally to select items, and all the other small interactions we still use. Atkinson then created the ruffling drop-down menus, a perfect fit for a one-button mouse.

They designed all the interface elements you know today around and for the one-button mouse. That’s why you can still use a PC or Mac without using the ‘context’ command attached to the secondary button.

RossBencina

> People get used to something and that's all they want.

It's more than "getting used to." Learning to type (or to edit text fast using a mouse) is a non-trivial investment of time and energy. I don't think wanting to leverage hard-earned skills is bigotry, seems more like pragmatism to me. Unless the "new way" has obvious advantages (and is not handicapped by suboptimal implementation) the switching cost will seem too high.

dxdm

It's not just that people don't want to waste an investment in learning something; that investment can actually enable you far more than a more easily accessible interaction method would, and you stick with it because it's _better_.

Once you've learned to use the keyboard properly, it's simply faster for many applications. Having buttons at your fingertips beats pointing a cursor and clicking at a succession of targets. For example, I can jump through contexts in a tiling window manager and manipulate windows and their contents much faster with a keyboard than by wading through a stack of windows and clicking things with a mouse.

It all depends on what you're interacting with, and how often. I mostly have to deal with text, and do not need to manipulate arbitrary 2d or 3d image data.

But suggesting that I am simply too set in my ways to ditch the keyboard in favor of poking things with a pointy thing or talking into a box is just too reductive.

wagglecontrol

> Unless the "new way" has obvious advantages

I agree with this. The cases are rare. Still, there are cases like the current sad state of motion control in video game consoles where I have to argue the opposite. Pretty much everyone who's put in the time to play with motion controls outperforms those who haven't, and can play to satisfaction even without aim assist (which is relentlessly ubiquitous, for those unaware). But the tech started out kinda ass, and the Xbox still doesn't have a built-in gyroscope, so adoption is artificially stunted. The result? The masses still call it "waggle" with disdain.

skydhash

The scroll wheel is a step up over both the d-pad and the touch screen. I also had a Creative Zen which had a scroll strip, and it was great too. Why? Because interaction was a function of motion, and it gave great feedback. Same with the Apple touchpad. Yes, you still have to learn it, but it was something done in a few minutes and fairly visual.

There's a reason a lot of actually important interfaces still have a lot of buttons, knobs, lights, levers, and other human-shaped controls: they rely on more than the visual to convey information.

florbnit

> it's usually not because they're idiots who don't know any better or who haven't seen the Truth, it's because it works for them.

The reason we have touch screen phones today is exactly because Apple dared to challenge that assertion. We should not assume that what is out there now is the end goal. Users don't have a choice; they can only buy and use what's available to them in stores. The second touch screen phones were available, the entire market shifted in a short period - but the mantra at the time was, just like yours now, "physical keyboards are the only way". Who knows what could come from people who think outside the box in the future.

skydhash

I was recommending a laptop to someone and the only criteria he had were a number pad and a big screen, because he mostly uses Excel. I think input method is fairly context sensitive. Touch is the most versatile one as it acts directly on the output, but I still prefer a joystick for playing games, a MIDI keyboard for creating music, a stylus for drawing, and voice when I'm driving or doing simple tasks. Even a simple remote is better than mouse+keyboard for an entertainment center (as long as you're not entering text). We need to bring out the human aspect of interfaces instead of the marketing one (branding and aesthetics).

m463

> Even a simple remote is better than mouse+keyboard for an entertainment center

Where are you going to find a simple remote anymore?

The only simple things are the giant keys dedicated to marketing partners (like the YouTube or Netflix buttons).

I want a skip forward button!

plastic3169

The Apple TV remote has its problems, but at least it strives to be simple. Magically, it also controls my amplifier and projector (I don't know how; HDMI signals?) so I don't need to touch any other remotes on a daily basis.

ninalanyon

Before anyone bothers reinventing the keyboard, I would rather it were made practical to easily and reliably type accented and other characters on a UK keyboard in all applications. I use English and Norwegian regularly, with the occasional French, German, or Swedish word. I have been unable to find a simple method of configuring Linux Mint to support these other than by switching layouts every time I need an ø or an e-acute, etc.

I did once get the compose key to work but the settings didn't survive an upgrade and I have been unable to get them to work again in Firefox.

wooque

Use character composition. You type those characters by pressing the compose key (I've set it to Caps Lock) and then a sequence of characters. Much easier than switching keyboard layouts, and you can type other unusual characters like °, µ, €

ø = Compose -> / -> o

é = Compose -> ' -> e

° = Compose -> o -> o

µ = Compose -> m -> u

€ = Compose -> = -> e

https://en.wikipedia.org/wiki/Compose_key#Common_compose_com...

https://cgit.freedesktop.org/xorg/lib/libX11/plain/nls/en_US...

WhyNotHugo

The Compose approach is extremely handy if you need to type several languages (e.g.: Spanish, German and Pinyin).

I wrote a short article on it a while ago: https://whynothugo.nl/journal/2024/07/12/typing-non-english-...

I also keep a handy alias to quickly find how to write new symbols:

    alias compose='fzf < /usr/share/X11/locale/en_US.UTF-8/Compose'

agumonkey

takes a bit of time but you end up being fast enough without changing keyboard layout, pretty great

ninalanyon

The compose key defined in Linux Mint's own keyboard settings doesn't work in Firefox.

alexanderchr

MacOS developers have solved this problem pretty neatly:

https://support.apple.com/en-qa/guide/mac-help/mh27474/mac

jiriro

This is cool!:-)

How come this is not the first “tip” on a fresh Mac?

miniBill

It's very useful but it's sloooow

whstl

This is cool!

But I thought you were going to recommend pressing "fn" to switch layouts (I believe you can use either fn or ctrl+space on macOS).

I use it to switch between German (for chat/documentation) and English (for coding), and it's quite instant and second nature to me.

jiriro

Is there a similar trick for non-letter characters ?

ehecatl42

`~/.XCompose` is your friend.

I frequently input International Phonetic Alphabet glyphs, some polytonic Greek, some Spanish and some Old English. Nothing is more than three key-presses away after an AltGr.
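For anyone curious what that file looks like, here is a minimal illustrative `~/.XCompose` (X11; the sequences and glyphs below are examples of my own, and the compose key itself is enabled separately, e.g. with `setxkbmap -option compose:caps`):

```
# ~/.XCompose — illustrative custom sequences, not a canonical set
include "%L"                      # keep the system default compose table
<Multi_key> <i> <i>     : "ɪ"     # IPA small capital I
<Multi_key> <a> <e>     : "æ"     # ash, handy for Old English
<Multi_key> <n> <tilde> : "ñ"     # Spanish n-tilde
```

The `include "%L"` line pulls in the locale's default sequences first, so custom entries only add to (or override) what's already there.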

ninalanyon

I'll look into that. The compose key defined in Linux Mint's own keyboard settings doesn't work in Firefox.

bgan7

Thanks for sharing the keyboard story!

I agree that keyboards can be improved, but I think gradual changes—like making them split and wireless—are a better approach. I use a split keyboard myself and can comfortably do development with just 34–36 keys.

If the interface changes too much in a short time, it can become quite a hassle.

danielvaughn

My personal prediction is that nothing will replace the keyboard except direct brain-to-computer interfaces. The keyboard is an incredible tool that people take for granted.

ojschwa

I'm actually working on a voice controlled, tldraw canvas based UI – and I'm a designer. So I feel quite seen by this article.

For my app, I'm trying to visualise and express the 'context' between the user and the AI assistant. The context can be quite complex! We've got quite a challenge helping humans keep up with the speed and accuracy of reasoning and realtime models.

Having a voice input and output (in the form of an optional text to speech) ups the throughput on understanding and updating the context. The canvas is useful for the user to apply spatial understanding, given that users can screen share with the assistant, you can even transfer understanding that way too.

I'm not reaching for the future, I'm solving a real pain point of a user now.

You can see a demo of it in action here -> https://x.com/ojschwa/status/1901581761827713134

tony-allan

I know that most developers prefer keyboard shortcuts when developing software but I prefer using the mouse mostly because I cannot remember all of the shortcuts in a range of different environments.

Given my preference it would be interesting to explore a more tactile interface.

  - a series of physical knobs to skip back and forward by function, variable reference, etc
  - a separate touch screen with haptic feedback for common functions and jump to predefined functions in my code
  - a macro-pad with real buttons to do the above

Other thoughts

When watching videos, physical buttons and knobs would be good. I know professional video and audio engineers already use these technologies, but I've never tried them myself.

lolinder

> I prefer using the mouse mostly because I cannot remember all of the shortcuts in a range of different environments.

This is why one of the greatest changes in power user tooling in recent years is the "find anywhere" hotkey, which is now available almost everywhere.

Mouse interaction is slow and hardly a panacea for finding features buried in menus. "Find anywhere" type interactions with fuzzy search allow you to use the keyboard and highly mnemonic abbreviations to turn up what you're looking for. With a few exceptions, I tend to lean on them even for things that I use regularly, because it's easier to learn which few keystrokes will turn up the option I'm looking for than it is to rebind a fresh hotkey in each environment or, as you say, memorize the built-in one.
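The core of such a palette is a forgiving matcher. A toy sketch of the subsequence-style fuzzy matching these tools roughly use (the command names here are invented for illustration):

```python
# Toy "find anywhere" matcher: a command matches if the query's characters
# appear in order within it (a subsequence match), which is roughly how
# fuzzy command palettes behave. Real ones also rank results by score.

def fuzzy_match(query, candidate):
    """True if every character of query appears in order in candidate."""
    it = iter(candidate.lower())
    # `ch in it` advances the iterator until ch is found (or exhausts it),
    # so later characters must come after earlier ones.
    return all(ch in it for ch in query.lower())

COMMANDS = ["Toggle Word Wrap", "Format Document", "Fold All Regions"]

def palette(query):
    return [c for c in COMMANDS if fuzzy_match(query, c)]

print(palette("tww"))  # mnemonic abbreviation finds "Toggle Word Wrap"
```

This is why abbreviations like "tww" work: they don't need to be prefixes or whole words, just characters in the right order.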

vincnetas

Fun anecdote. I was launching the Remote Desktop app, which was unsurprisingly called:

"Microsoft Remote Desktop"

by just typing "Remo" into Spotlight. Then one day it stopped working. I thought I was going crazy, because I didn't remember uninstalling it. Then I found out that someone at Microsoft thought it was a great idea to rename the app to ...

"Windows App".

What the hell were they smoking?

fifticon

I wish the 'find anywhere' instead expanded and highlighted the correct place of that command in the menu. In that way, it would teach me gradually a mental map of the shape and set of commands I have available, ie I imagine it would let me notice 'neighbour commands'. I realise this is not the most efficient way to do this, but it would be 'incremental on the job learning'.

csb6

macOS does this when searching for a command in the Help menu - it will open up the context menu and highlight the place where the command is so you know where it is next time.

RossBencina

> "find anywhere" hotkey, which is now available almost everywhere

Almost everywhere? I'd love to see a list of other examples.

The only place I can name is the vscode Ctrl-Shift-P thing, and in that case it's a wholesale replacement for an explorable/discoverable UI (i.e. traditional menu bar).

Sure there are search boxes in other places, but usually that's literally for finding things, not for performing application domain commands/actions/manipulations, which is what I understood the parent to be describing.

skydhash

> one of the greatest changes in power user tooling in recent years

M-x (Alt+x) in Emacs has been here for ages. Even the command prompt in vim follows the same pattern. And while it's useful, I still prefer to bind commonly used commands to keybindings, especially when they can share a prefix.

lolinder

There's a reason I didn't say "innovation"—I knew that people would immediately point out it's been around forever. What's new is that it's in mainstream tooling.

swah

If OSes were optimized like RTS games, maybe the mouse could be plenty fast. Something like https://charmstone.app/ but for many actions.

zombot

And what is the "find anywhere" hotkey that works everywhere?

swah

OP is talking about the "Cmd+Shift+P" aka "Run command" command. Ideally it should list the shortcuts of the commands you run so next time you can use them directly.

thrdbndndn

I use the mouse a lot, even for (typing) coding.

I'm pretty "fluent" with navigation shortcuts, things like Ctrl/Alt/Shift combined with arrow keys, PgUp/PgDn/Home/End etc. and I do use them extensively. And yeah, constantly switching between the keyboard and mouse with my right hand is a bit annoying.

But still, in many cases, using the mouse is just faster. For example, jumping to a specific position in a source code file, scrolling and clicking gets me there much quicker than navigating with the keyboard alone.

(This is also one thing I really hate about using terminals: you can't just click to move the cursor quickly! Editing part of a long string without spaces is a pain in the ass, and it's something I have to do surprisingly often.)

When it comes to shortcuts, I prefer one- or two-key combinations whenever possible. Three-key shortcuts, however, depend on their layout: many just aren’t that convenient. Sometimes I’ll just click through the menu manually, even if I know the shortcut.

snide

> For example, jumping to a specific position in a source code file, scrolling and clicking gets me there much quicker than navigating with the keyboard alone

I say this with the intention of providing context, not to say the way you do things is bad. It's all user preference in the end and there is no wrong way.

Lots of folks consider your "fast" example with a mouse as their "slow" example that forced them into learning the more advanced features of their editor. For example, most Vim users can get to any character, partial string, parameter, line, paragraph, or function start - or what have you - within three quick keys on their home row. They do this quickly, and can immediately start doing other things right after because their hands never moved.

The mouse is fast because people don't need to memorize things. The keyboard is fast because the keyboard is fast.

It's like the old joke from the movie Heist. "What do you mean you don't like money? That's why they call it money".

thrdbndndn

No offence taken; there's always room to learn. Just curious, since I never use Vim, how exactly one navigates to (and/or selects) "aaa" part in the following string with just three keys in Vim?

    url = "https://example.com/keyword=aaa&name=john"

affinepplan

> I really hate about using terminals: you can't just click to move the cursor quickly!

alt-click?

thrdbndndn

Doesn't seem to work for any emulators I have (cmd, Windows Terminal, Git Bash, etc.) :(

submeta

There are compromises that should never have been made, just to please the eye. The BlackBerry keyboard worked fantastically well. It was killed because Steve believed that the touch screen is always better. Then there are the knobs in cars replaced by touchscreens. I hope there are no attempts at replacing physical keyboards with touchscreens in laptops. I cannot type on touchscreens. I love keyboards. Especially my mechanical keyboard. Love it.

Cthulhu_

I think Steve (et al, I will not attribute everything to one person) did have a point; it's 2025 and an on-screen keyboard makes different inputs easier, for a lot more languages than physical keyboards can, without needing separate production lines.

There's multiple input methods; typing, swiping/path writing, braille, dictation. It's not for everyone, sure, but for others it's ideal and preferable over a physical keyboard.

inetknght

> an on-screen keyboard makes different inputs easier

Sure, it can certainly make different inputs easier. But you'll be hard-pressed to convince me that those different inputs are better specifically because a touchscreen doesn't have the kind of physical feedback that comes from feeling the boundaries of different selections (eg, letters/characters) for that input.

With a keyboard, I can center my fingers without looking at them. I know exactly what input selections are nearby and in what directions; and I know exactly when my finger has crossed the boundary of one and is in a null-zone or is on another input. For many keyboards, I can even feel the embossing of the individual input selection (character).

crubier

> It was killed because Steve beleieved that touch screen is always better.

No, it was killed because people much preferred Steve's product to BlackBerry's, in large part thanks to trading off 5% of keyboard effectiveness for 50% additional screen space.

mystified5016

> I hope there are no attempts at replacing physical keyboards by touchscreens in laptops.

There have been many. Laptops that are just two touchscreens hinged together. Every review absolutely demolishes them for the obvious reasons.

mncharity

> For almost half a century now, we haven't really managed to come up with something better, and that's not for lack of trying.

In contrast, my impression is that deprioritization of trying is a defining characteristic. Patents - "yeah it's neat... you can't sell/buy/have/buildon anything like it". Narrowing optimization - "thinkpad is 2-key rollover, because business apps don't use 3, and it saves some cents". Software design-space badlands - "to tweak that, reimplement the full stack and apps". Unicorn dreams - "mass-market or nothing... most often nothing". So instead of a rich ferment of DIY and multi-scale commercial exploring for diverse viable niches, we have an innovation monoculture desert of "wait years for bigco to maybe find incredibly-challenging mass-market fit and afterward backfill niches". So a "we can't sell it because patents so here's a kit"-crippled but still creative AR/VR DIY community hits "Facebook bought Oculus for how much!?!" unicorn dreams and dies. So... etc.

WillAdams

More than anything, the interface which I want to normalize is holding a laptop as a book and using the touchscreen (and optional stylus) on one side, and the keyboard (say for drawing shortcuts/modifiers) on the other side.

Still surprised that this wasn't a standard for say the Voyager e-book reader.

Hopeful that the Lenovo Yogabook 9i will help to popularize this (and if it had Wacom EMR, I'd have one and be working on such concepts) --- annoyingly, my Samsung Galaxy Book 3 Pro 360 is just a bit too large for this, and the screen is so impossibly thin, trying it makes me worry about breaking it.

dataviz1000

The next big shift in interfaces is the move from tactile input (keyboard, mouse, touch screen, etc.) and visual screen output to non-tactile input (voice, brain implants, etc.) and non-visual output, mostly the automation of multistep tasks. Some attempts so far haven't been successful (Alexa, Siri), others look promising (OpenAI Operator), and it exists in sci-fi as Iron Man's JARVIS; nonetheless, it is definitely the future.

I worked on a browser automation virtual assistant for close to a year, injecting JavaScript into third-party webpages, like a zombie-ant fungus, to control the pages. The idea of tactile input and visual output is so hard-coded into the concept of an internet browser that when you try to rethink the input and output of the interface between the human and the machine, everything becomes a hack.
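To make the "automation as output" idea concrete, here's a minimal sketch (not the commenter's actual code, all names hypothetical) of the kind of thing an injected script ends up doing: automation "intents" are dispatched to handlers that manipulate the host page's DOM, with nothing ever rendered for a human to look at.

```javascript
// Each handler receives a `doc` (the injected page's document) so the
// dispatcher itself can be exercised outside a real browser.
const handlers = {
  click: (doc, { selector }) => {
    const el = doc.querySelector(selector);
    if (!el) throw new Error(`no element for ${selector}`);
    el.click();
    return `clicked ${selector}`;
  },
  fill: (doc, { selector, value }) => {
    const el = doc.querySelector(selector);
    if (!el) throw new Error(`no element for ${selector}`);
    el.value = value;
    // Fire an input event so frameworks watching the field notice the change.
    el.dispatchEvent(new Event("input", { bubbles: true }));
    return `filled ${selector}`;
  },
};

// Dispatch one intent, e.g. { action: "fill", selector: "#q", value: "hello" }.
function runIntent(doc, intent) {
  const handler = handlers[intent.action];
  if (!handler) throw new Error(`unknown action: ${intent.action}`);
  return handler(doc, intent);
}

// In a real injected script you'd pass the host page's `document`:
//   runIntent(document, { action: "click", selector: "#submit" });
```

The hack the commenter describes is visible even here: the page's own event model assumes a human typing and clicking, so the script has to synthesize those signals to keep third-party code (form validation, React-style frameworks) from ignoring the changes.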

After a decade working in UI, it was strange to be on a project where the output wasn't something visual or occasionally audio, but rather the output of the UI was automation.

ch4s3

I’m highly skeptical about voice for most interactions. It’s inherently inappropriate in most public settings.

rochak

+1. I see it being helpful for the differently abled but for everyone to just speak out loud their every action would drive everyone nuts in a public setting. Not to mention I can type “code” much faster than if I were to speak it.

regularfry

There are a few ideas about subvocal recognition kicking about which might change that. If your voice assistant is in an earpiece that can (somehow) read what your vocal muscles are doing without you actually needing to make a sound, it makes it practical to the extent that it could become the default. There's a lot of ocean between here and there, though. Particularly in the actual sensor tech. That's got to get to the point where you can wear it in public on a highly visible part of the body without feeling like a loon, and that's not trivial.

ch4s3

That maybe makes it vaguely less anti-social but still imprecise and frankly invasive. Typing by comparison is great. You can visualize the thoughts as you compose something and make edits in a buffer before submitting. The input serves as a proxy for your working memory. Screenless voice interfaces are strictly worse.

dataviz1000

We were putting this into classrooms where teachers were speaking all day anyhow. The system completely automated teaching tools, smart boards, and browsers. I don't think it gained a lot of traction, nonetheless, the company raised $100,000,000 to focus on the automation part of the product as a vertical.

My point is that as a UI developer, I was moving from output that all went to screens to output that is automated tasks. There are different types of output, and they almost all relate to the senses, which is where the interface between the human and the machine exists: screen to eyes, sound to ears, and haptic feedback on mobile devices to touch. Because I was in my space, the browser, I was using JavaScript and the browser APIs in the same way, but the end result was different.

Automation as an output is fundamentally different from all the UI I built the decade prior.

ch4s3

What’s the value prop here? That seems antithetical to actual education.

Shit writing on a white board has better physical feedback than some janky smart board.

SilasX

True, but we could move to lip-reading for that:

https://news.ycombinator.com/item?id=43400636

DonHopkins

On Star Trek TNG they have what appears to be a bank of Fujitsu Eagle disk drives on the bridge to the right of the elevator. Looks like 10, for a total of 4.7 GB, worth about $100,000! Those are so noisy, you'd think they'd put them in another room so they didn't have to work next to them.

https://en.wikipedia.org/wiki/Fujitsu_Eagle

https://www.syfy.com/syfy-wire/chosen-one-of-the-day-those-c...

TeMPOraL

Hah, I never made the connection :).

Though on the show, the computer core is far from the bridge, and it only makes cute sci-fi noises. As for the thing you refer to, I don't recall it being ever revealed what those are. They kind of look more like equipment lockers, but if they were, the placement doesn't make sense.

scyzoryk_xyz

“In a commercial for the Alto, we meet a man - some kind of upper middle management, presumably - going about his daily business. He works in a spacious private office and, using the Alto, he can read and send email and produce laser printouts. Eventually, the Alto conjures up a high resolution image of flowers. The man wonders why, and the computer replies - with text on screen - that it's the man's wedding anniversary. "I forgot," says the man, to which the Alto replies, "It's okay, we're only human."”

Sounds like fruit intelligence features

alt219

> Imagine having to raise your arm to swipe, pinch and tap across an ultra-wide screen several times per minute. Touch works best on small surfaces, even if it looks impressive on a bigger screen.

I regularly find myself wishing pinch zoom were available on my large multi-monitor setup, even if I only used it occasionally, i.e. to augment interactions, not as a replacement for other input methods. As a (poor) substitute, I keep an Apple trackpad handy and switch from the mouse to the trackpad to do zooming. Sadly, I've found not all macOS apps respond to Magic Mouse zooming maneuvers.

rifty

Surely I'm not the only one, but I've always felt that discussions about replacing the computer desktop experience have some serious historical blinders on.

Work surfaces like the desk had existed and been refined for millennia as human interactive spaces before being borrowed and codified to work on computers.

It's not bad to look into, but I don't think it's surprising to find that human physiology hasn't changed, and so the interface environment ideas are still useful in similar situations.

fedeb95

I don't agree that touchscreens are necessarily cheaper everywhere: take cars as an example. If a touchscreen makes me more prone to car crashes, because I have to look at it more often than with knobs, then in the end it costs more. It may take a while to fully reach better interfaces. Sitting at my desk, by contrast, I can afford to look at a touchscreen.

TeMPOraL

It's cheaper for the manufacturer. It shifts the UI work from expensive hardware design and manufacturing, which needs to be in sync with the rest of the design and manufacturing of the car, into the purely software realm that can be outsourced to some cheap no-name company. All the hardware complexity goes away; you only have a glass slab to fit somewhere.

Though arguably, the starker difference is with home appliances, where touch interfaces replaced buttons. There, there's no dynamic display (at best, only a light behind a button turning on or off), so you don't have touch screens; you have a much simpler touch-recognition technology that has zero moving parts and can pretty much be etched onto the board, which is a significant saving over mechanical buttons. And it looks cool, which helped with marketing initially.

fedeb95

That's correct; however, as consumers, my opinion is that we (that is, those of us who share my point of view, obviously!) should try to buy what we prefer regardless of what's cheaper for the manufacturer. If possible, of course, we should spend more for something we think is better in the end, and not fall for flashy designs that make it look like you're driving some kind of futuristic car.

About home appliances, specifically kitchens: in my country (Italy) there has been a widespread wave of electric stoves with touch interfaces, which are awful if you do intense cooking: everything gets covered in water and other substances, making it harder to cook. Maybe cooking quality and ease are not that relevant, considering the other benefits of having a touch screen, I don't know: but the design itself, understood as beauty that better conveys functionality, is in my opinion "broken", that is, it breaks this definition.

Of course this is all my biased opinion.