Going mental: the day it all made sense

There have been a number of defining moments in my career, but for the purposes of this blog, two stand out above the rest. The first was the day I, a onetime computer novice, looked around at my colleagues and realized they considered me a computer expert. I remember thinking, “What in the world happened? Just a few years ago I was a complete newcomer. Why do I suddenly find myself being regarded as the expert?” That realization started a chain reaction of self-analysis, tracking back through time to figure out why I had such a good grasp of computing technology while others still seemed to be “hunting and pecking.” And that led me to the second big realization — the answer to my search:

I had built a solid mental model. Specifically, I had a broad model of computer concepts which I was leveraging in the face of specific unknowns, while others were still looking for the “right set of steps” every time they encountered a new situation. I recognized where the new challenge fit in the model, and as a result, I saw solutions. They just saw another problem that didn’t fit an “exact step” process. They saw yet more confusing obstacles.

That was the day I became determined to create a computer concepts mental model that would free people from needing step-by-step processes to guide them through the computing experience. That was also the day the idea of The Ultimate PC Primer was born — the idea to craft a guide that would provide a foundational mental model of key concepts rather than yet another short-lived, Point A to Point B, How-To guide for “computer dummies.” (Those books quickly become obsolete and require updates as software references change rapidly, but I’m guessing publi$her$ gladly benefit from $uch nece$$ary ver$ioning at the readers’ expense.)

Why invest in mental models? Usability expert Jakob Nielsen recently posted an article on how mental models impact user interface design, specifically for Internet sites. (I don’t think he’d object to his lessons in this particular article being extended beyond web sites.) Nielsen sums up by saying:

Mental models are a key concept in the development of instructions, documentation, tutorials, demos, and other forms of user assistance. All such information must be short, while teaching the key concepts that people need to know to make sense of the overall site.

That sounds a bit like the mission statement for The Ultimate PC Primer, yes? Nielsen also confirms:

There’s great inertia in users’ mental models: stuff that people know well tends to stick…

Obviously, my computing mental model has served me (and others I’ve worked with) well, but if, like me, you’re trying to design a mental model for something “new” to your user, Nielsen also warns…

…you face an immense design challenge: How do you explain the new concept such that users have a living chance of constructing a valid mental model…

especially since

…mental models are in flux exactly because they’re embedded in a brain rather than fixed in an external medium.

Now consider this: Newcomers to computing often don’t have any sort of mental model… other than their existing models of the non-software-based world. To illustrate the difficulty I faced writing The Ultimate PC Primer, imagine building and implanting a user’s entire mental model for modern PC usage from the ground up, and doing it so solidly that nearly everything the user will encounter via the PC fits the model (so that the newly acquired model doesn’t need constant updating). That’s why I chose an external medium — the real, non-software world — to build the model upon. Sure, all analogies break down at some point, but by the time they do — the point at which the user realizes his/her mental model needs updating — the user has likely achieved a foundational baseline with core computing concepts.

Newcomers aside, today there exists a glut of PC users who have dabbled enough to get something done with a computer but have no solid grasp of the key concepts behind the interface features they’re using. Nielsen notes:

Many of the usability problems we observe in studies stem from users having mixed-up mental models that confuse different parts of the system.

For example…

Users don’t just confuse search fields; many less-techy users don’t understand the differences between many other common features…

His thoughts fit perfectly with my observations. An incredible number of users I encounter have no idea what they’re looking at on the screen. Each new program might as well be an alien from a different planet, because the user doesn’t see the commonalities. The common features are present, but what the user “sees” is interpreted through the ingrained mental model instead.

Nielsen suggests that parts of an existing or flawed mental model can sometimes be corrected, but having worked as a designer/developer myself, I know that can be difficult to do when your product’s goal is supposed to be productivity, not education and correction. (Note, though, that I don’t disagree; it is possible. I’ve occasionally taught people to fish.) It often requires dealing with “mental collisions” when the existing model was learned some time ago and must essentially be unlearned.

I still maintain that constructing a solid, foundational, overarching mental model is the best way to help PC users. As for convincing clients to address it, users to admit it, and publishers to distribute it… I welcome your advice.


2 Responses to Going mental: the day it all made sense

  1. Ryan Herr says:

    Ben, you might also be interested in this blog post by a Microsoft employee:

    http://www.globalnerdy.com/2009/05/25/mental-models-mantras-and-my-mission/

    He, like many other folks, is talking about the shift from a PC-centric mental model to a web-centric mental model. He makes an analogy to the non-software world, using an interesting story about electric motors. And, he even mentions the statelessness of the web, one of your other favorite topics. 🙂

    So anyways, what are the implications (if any) for your Ultimate PC Primer?

    • Ben K says:

      Thanks for the link, Ryan. I certainly don’t discount what Joey DeVilla is talking about in his post. (In fact, I laughed in agreement when he described .NET making use of “tomfoolery,” and you know I’ve also often dialoged on the conflict of the stateless nature of the web vs. client expectations for stateful apps.) But I think he (by his own admission) is talking about mental models from the technologist’s standpoint. I’d claim he’s speaking more like an engineer or developer, as his main point is about the benefits and potential of new computing “power.”
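
      (For anyone who hasn’t followed those conversations: HTTP itself remembers nothing between requests, so a web app that wants to feel stateful has to fake continuity on top of the protocol, typically with a session cookie. Here’s a minimal sketch in Java of that bolt-on state; the class name, port, and cookie scheme are my own illustration, nothing from Joey’s post:)

          import com.sun.net.httpserver.HttpExchange;
          import com.sun.net.httpserver.HttpServer;
          import java.io.IOException;
          import java.io.OutputStream;
          import java.net.InetSocketAddress;
          import java.util.Map;
          import java.util.UUID;
          import java.util.concurrent.ConcurrentHashMap;

          // Every HTTP request arrives with no memory of earlier ones. This server
          // fakes a stateful experience by handing the browser a cookie and keeping
          // a per-session counter on its own side.
          public class StatelessWebDemo {
              private static final Map<String, Integer> sessions = new ConcurrentHashMap<>();

              public static void main(String[] args) throws IOException {
                  HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
                  server.createContext("/", StatelessWebDemo::handle);
                  server.start();
              }

              private static void handle(HttpExchange ex) throws IOException {
                  // Simplified cookie check: without one, this request is a stranger.
                  String cookie = ex.getRequestHeaders().getFirst("Cookie");
                  String id = (cookie != null && cookie.startsWith("session="))
                          ? cookie.substring("session=".length())
                          : null;
                  if (id == null || !sessions.containsKey(id)) {
                      id = UUID.randomUUID().toString();
                      sessions.put(id, 0);
                      ex.getResponseHeaders().add("Set-Cookie", "session=" + id);
                  }
                  int visits = sessions.merge(id, 1, Integer::sum);
                  byte[] body = ("Requests this session: " + visits + "\n").getBytes();
                  ex.sendResponseHeaders(200, body.length);
                  try (OutputStream os = ex.getResponseBody()) {
                      os.write(body);
                  }
              }
          }

      Strip away the cookie and the counter and every visit looks like the first; that gap between what the protocol provides and what users expect is the conflict I mean.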

      I absolutely concur that the mobile and cloud space is where the power of computing is heading, free from the “shackles” of a physical desk and a “FAT” box with on-site storage. I also agree that this could change how people actually consume computing. Joey mentions Sun’s intentions to run Java apps over the network. I’ve talked before about what an architectural game changer that would have been, but it would have largely been transparent to the end user because the UI wouldn’t have really changed. There’s no doubt of the IT advantages of such architectures, especially now that the “power” is there to support them. However, for the purposes of my efforts…

Joey seems to be looking beyond the architectural advantages, but I can’t tell how far. So here’s what I think: Cloud computing, thin clients, app stores, yada, yada, yada, right? Most common users, and especially newcomers, don’t care much where the computing is being done, where the data is physically stored, or where the software resides or is served from. They just want computing to work for them, and nowadays they want it to be “cooler” rather than “lamer,” as DeVilla notes. So, sure, in the future, computing architecture will most certainly be different. Average users won’t care unless it impacts their experience in a visible or felt way. It’s like the folks who just want to get in their car and go somewhere, without needing to know what is under the hood or how it works.

      So I think the big question is: does the change in “power” change the UI for the average user? Does driving a hybrid force you to re-learn how to drive? Despite interpretations of the physical form factor as being “the computer,” the UI is computer usage to non-computer people, just as the steering wheel and pedals are driving to most of us. How often do those change? I recall a friend buying a first-generation Prius years ago and sharing which interface features actually were different and broke several existing conventions. So I know they can change, but still not radically. DeVilla seemed to admit that the interface conventions in Windows Mobile have simply been shrunk down from the Windows desktop. That might not be cutting edge, but most users’ understanding of computing is still built around a mental model of those “Windows 95” UI concepts, so I guess I’m not surprised. It’s easy to make really new, cool stuff when no one yet has to understand how to use it. But that’s my challenge…

The Ultimate PC Primer is about helping real people who are not “gearheads” (meant affectionately, as I very much respect my gearhead friends) understand how to operate the car from the cabin, not from under the hood. As I’ve said before, most people understand that a computer is a machine — like any other — but what they don’t understand is how to operate it. For most non-technophile users, the interface is what a computer offers them, yet many still don’t grasp even Windows 95 interface concepts. It shows in their approaches to solving problems and in their lack of productivity and self-sufficiency. It exposes their mental models and how little they really understand the UI. The underlying architecture is taken for granted, whatever it is. It’s irrelevant or magic or both.

Now, does a new architecture mean the interface could change? Yes. As I noted in the last two years’ predictions, the arrival (finally) of touchable screens on consumer devices actually does start to change the interface. It potentially replaces the mouse (which some newcomers have significant difficulty understanding — trust me, I have stories), and I’d argue it’s one of the only major interface-concept “game changers” in decades. It starts to break down the clunky input and interaction barriers of the past, making the computer more “accessible” to how people think about manipulating things (using their existing real-world mental models). So in some ways, the new touch-screen interfaces are the tardy arrival of “Yes, this actually makes sense! I move this with my finger just as I move things in real life!” after the mouse we’ve endured for decades. In my opinion, that actually helps my job of explaining PC interface and operation concepts. Yet most of the other interface concepts still don’t change, and they haven’t for years. That’s the backbone of my book, and so I think the move to the cloud and to mobile changes little until the day the majority of the interface concepts (and input/output methods, for that matter) change, too.
