Computer People vs. Normal People

August 6, 2013

Have you noticed how there seem to be some people who just “get” computers and others who don’t? I call the former “computer people” while the latter are simply “normal.”

There’s nothing wrong with not being a computer person. In fact, I think the majority of the people in the world are not “computer people.” But here’s a true confession: I did not realize I actually was a computer person until much later in life. I thought I was like everyone else, until people started referring to me like I was “one of those people that understands computers.”

After pondering this for a great many years, I’ve finally acknowledged that I’m different, but more importantly, I understand the difference between normal people and computer people: “computer people” think like machines. They understand machines. They feel right at home with computers because computers are made by (and, except for a few exceptional attempts to the contrary which we’ll discuss shortly, for) people who enjoy controlling and operating machines. I think this is why it’s difficult for these two camps to understand each other. One side is baffled that understanding technology is so difficult for the other; the other thinks the first was born with some innate magical ability.

I thought everyone would be able to learn to understand computers as easily as I did… until I began the journey of writing The Ultimate PC Primer. It was an attempt to make the mysterious approachable for the commoner and was eye-opening for me, forcing me to put myself in others’ positions to see what they don’t understand. Has anyone else thought this way, about bridging the gap? Actually, yes, and I think it worked well for him and for his customers.

Steve Jobs was arguably one of the most successful designers at bridging “normal people” and modern computing capabilities, and at doing so in the public marketplace (not just in a research space). According to this article (Review: iOS 7 Gives Us Insight Into the Future of Mobility), he was a fan of skeuomorphic design. Skeuomorphism is when something mimics the materials or ornamental elements of something that exists in the real world (source).

I think this is partially why iOS and Apple’s mobile products experienced such rapid adoption amongst “normal people,” even those without much understanding of prior personal computing technology. Mental associations with familiar things are both comforting and illustrative for “normal people,” something I’ve found “computer people” often don’t understand. They don’t need to because it comes easily, naturally. They understand machines just fine without any “artificial” metaphors. But for all the “normal people,” the mysterious black box is more usable when it feels like something from a past experience in real life. In fact, there are some indications that connections to physical things are being craved more and more as our existence becomes increasingly virtual. Skeuomorphic design certainly plays off these desires nicely, but it can go overboard, as I (and others) have pointed out previously. Still, cleverly and subtly connecting a real world concept — either audial or visual — to a digital interface can be powerful and effective.

Since that iOS review article hints that skeuomorphic design is on its way out at Apple now, it will be interesting to see if the resulting design of computing devices once again starts to feel like it’s “by computer people, for computer people.”


Why it’s important to think outside the check box

April 28, 2011

Once again, I recently found myself in a consulting meeting, needing to explain the difference between check boxes and radio buttons. Those who follow my blog might ask, “Ben, why do you gripe about this so much?” So let me clarify why it’s important to think through this issue.

My issue is NOT about

1. expecting others to be able to describe computer technology correctly. (After all, the functionality of check boxes and radio buttons is retroactively obvious. Once you see the function of both explained side by side, it’s readily apparent what the differences are and why.) It has never been my desire to turn the general population into computer engineers, developers, or designers.

It’s only partially about

2. others being able to design appropriately — describing the appropriate interface element for the application — though this is crucially important. I have increasingly seen check boxes and radio buttons used interchangeably and the functionality behind them intentionally coded incorrectly, resulting in a radio button interface behaving like a set of check boxes and check boxes being leveraged like radio buttons. In fact, just a handful of weeks ago I was auditing some on-line training from a standard provider for that particular industry. All the exams were multiple-choice questions. The questions did not indicate if one response or multiple responses were required. Check boxes were used, yet most often, only one response was correct. But not always. Perhaps this was an intentional design choice. But having also just consulted with individuals designing a new website (who clearly did not know the difference and selected the wrong interface element), I can’t guarantee the choice was an informed one. In this case, had I not thought through the interface, applied my technology background, and then assessed the questions again, I might have failed the exam.
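The behavioral contrast is easy to state precisely. Here’s a minimal sketch in Python (the class names are hypothetical, purely for illustration — real UI toolkits enforce these rules for you): a radio group holds at most one selection, so choosing a new option replaces the old one, while check boxes toggle independently, so any subset of options may be selected at once.

```python
class RadioGroup:
    """Single-select: choosing one option deselects whatever was chosen before."""
    def __init__(self, options):
        self.options = list(options)
        self.selected = None  # at most one selection at any time

    def select(self, option):
        assert option in self.options
        self.selected = option  # replaces the previous choice


class CheckboxGroup:
    """Multi-select: each option toggles independently; zero, one, or many may be set."""
    def __init__(self, options):
        self.options = list(options)
        self.selected = set()

    def toggle(self, option):
        assert option in self.options
        if option in self.selected:
            self.selected.remove(option)
        else:
            self.selected.add(option)


radios = RadioGroup(["A", "B", "C"])
radios.select("A")
radios.select("B")               # "A" is no longer selected
print(radios.selected)           # B

boxes = CheckboxGroup(["A", "B", "C"])
boxes.toggle("A")
boxes.toggle("B")                # both remain selected
print(sorted(boxes.selected))    # ['A', 'B']
```

A designer who wires check boxes to behave like the first class (or radio buttons like the second) is signaling the wrong contract to the user — which is exactly the confusion described above.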

Foremost, my quandary is Read the rest of this entry »

Taking a lesson from Harry Potter

November 16, 2010

In Harry Potter and the Goblet of Fire, Harry walks to the opening of a small tent and peers inside, only to find the inside is actually the interior of an enormous house. In response, Harry smiles and says to himself, “I love magic.”

Technology users love “magic,” too, but most of us understand enough about how modern devices work to demystify the magic, making them just “cool,” not magic. But I think there’s a lesson we can learn from Harry Potter’s magic tent experience. For just as Harry wasn’t totally astonished to see what was inside the tent, neither are the technology-savvy surprised or confounded when adopting a next-generation technology bearing new interface concepts. So why are older adopters confounded? In particular, why are those who have never used modern technology more confused than ever, even though terrific new interface developments like touch-screen phones are eliminating some of the physical interface barriers of the past?

Back in Why Do “Old Folks” Need Technology Explained, Anyway, I mentioned how evolving technology usually piggybacks on previous technology in a stair-step fashion. This means that unless a user has experience with the previous technology reference point, a newer technology based on the previous one is not easier to understand but harder. For example, if a person doesn’t understand what a standard two-button computer mouse is for, then the newer mouse with a scroll wheel only makes it more daunting and less understandable. In light of this, let’s consider the new generation of smartphones with the touchable, swipable interfaces that I continually hear lauded as “intuitive!”

If you’ve followed this blog long enough, you’ve probably noted my hints that touchscreen technology is both a good development and yet presents many challenges. Lest you think I can’t make up my mind, let me forever clarify my position on touchable/swipable interfaces: for users past the initial mental hurdles associated with software-driven interfaces, touch screens will be wonderful. The ability to touch, swipe, and move content on the screen with a finger rather than through an intermediary (a mouse) offers great opportunities for more natural interfaces. Current smartphones are a great example. In light of this, today’s question to ponder is: why, then, are smartphones just as confusing for brand-new technology users as PCs? If their interfaces are so intuitive, why can’t everyone just pick one up and naturally understand how it works? After all, there’s no mouse, no clicking vs. double-clicking to explain, etc.

Here’s a scenario to challenge your thinking. Have you ever put a smartphone in front of a person who is not already a PC user? (Let that percolate for just a minute. Never used a PC. Not familiar with software interface concepts. Yes, such people are out there.) If you’ve done this (and have undoubtedly had to talk them through it), what was their reaction the first time you swiped the screen? I’ll bet it was something like this. First, a confused, blank stare at the device, followed by the inevitable “I don’t understand” sheepish look. And then: “Where’d everything go? What’s this stuff? Why is this on the screen now? Why is the stuff that was there before no longer there?”

It’s like magic. But why? Why do they act like what is no longer on the screen is gone forever? Why are they confused by something now appearing on the screen that wasn’t there before? I think it’s something stemming from the Software Generation Gap. For non-software users, a pre-software (solid state) interface is what it is and that’s all it ever is. So what’s the chief difference between this person and you? You have already mastered the previous-generation technology and understand software-driven interface concepts. You have already experienced and internalized what Harry Potter experienced with the magic-driven tent. With a software-driven device, paradoxically, the inside is larger than the outside.

Though a mouse and scrollbars are not the most intuitive interface conventions, you previously adopted, mastered, and mentally internalized them. Next-generation smartphone interfaces therefore make perfect sense to you, because your mental model already supports the fact that much more can exist outside the boundaries of the screen. The non-software user, on the other hand, can’t (yet) conceptualize what I’ve sketched here:

[Diagram: a phone’s off-screen real estate]

You see, you already understand that the inside is larger than the outside, so simply using a different technique to move what is off-screen onto/into the screen doesn’t require you to change your mental model. For you, the only difference is physical — that you’re now using your finger to bring what is off-screen into the screen space, rather than a clunky old mouse. Because you have already used scrollbars on a PC, you think nothing of accessing something off-screen. The freedom to interact sans mouse is what makes you think it’s a radically new, intuitive experience. But for as marvelous as touch-screen technology is, it is not even half of what makes touchable, swipable interfaces “intuitive.” The intuitive part comes from what’s already in your mind. You are unconsciously leveraging the mental concepts drawn from your PC software experience. I’d argue that this is what makes the smartphone’s interface seem so intuitive and quick to learn. Already possessing a solid mental model allows you to feel that the physical interface is more “natural” than the virtual one requiring a mouse.

So where does this leave us? I guarantee that once core software concepts are understood, it doesn’t matter so much whether the mechanism for sliding content through a screen/viewport is a mouse, trackpad, finger directly on-screen, hand gestures in the air, voice commands, or an eye-tracking technique. Humans can adapt remarkably well once they have a model for how they need to behave. The device itself isn’t the hurdle that prevents folks from understanding; it’s the concepts that present the gap. Until the gap can be bridged for newcomers, the digital world might as well be as magical as Harry Potter’s.
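That “inside is larger than the outside” model can be sketched in a few lines of Python (hypothetical names, purely illustrative): the state is just a window sliding over content taller than the screen, and every input mechanism — wheel, swipe, gesture — reduces to the same state change.

```python
class Viewport:
    """A visible window over content that is larger than the screen."""
    def __init__(self, content_height, viewport_height):
        self.content_height = content_height
        self.viewport_height = viewport_height
        self.offset = 0  # position of the window's top edge within the content

    def scroll(self, delta):
        # Clamp so the window can't slide past either end of the content.
        max_offset = self.content_height - self.viewport_height
        self.offset = max(0, min(max_offset, self.offset + delta))


view = Viewport(content_height=2000, viewport_height=500)

# Different mechanisms, identical state change:
view.scroll(+120)   # mouse scroll wheel
view.scroll(+300)   # finger swipe
view.scroll(-50)    # trackpad gesture
print(view.offset)  # 370
```

The input device only decides how far the window slides; the mental model — content persisting beyond the screen’s edges — never changes, which is the point of the paragraph above.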

Going mental: the day it all made sense

October 22, 2010

There have been a number of defining moments in my career, but for the purposes of this blog two stand out above the rest. The first was the day I, a computer novice, looked around at my colleagues and realized they considered me a computer expert. I remember thinking, “What in the world happened? Just a few years ago I was a complete newcomer. Why do I suddenly find myself being referenced as the expert?” That realization started a chain reaction of self-analysis and tracking back through time to figure out why I had such a good grasp of computing technology while others still seemed to be “hunting and pecking.” And that led me to the second big realization — the answer to my search: Read the rest of this entry »