What is 64-bit?

October 1, 2013

Have you noticed that the technical specifications for computing technology always seem to be numbers that double after a while? This has particularly been true of numbers with the word “bit” after them. 8-bit. 16-bit. 32-bit. And now, yes, 64-bit is the latest buzz. But if you’re a normal person (not a computer person), what does this mean?

One of my colleagues (Ed) sent me a link to this short article that loosely explains what 64-bit means to you, the normal person: What the iPhone 5s ’64-bit’ processor means, in plain English. I particularly like the library and book analogy. While I see from the comments that the true technophiles object to the explanation, I’m still going to call it good enough for the normal person. By passing it along, I hope it’s helpful to you or someone you know.
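For the technically curious, here is my own back-of-the-envelope illustration (not taken from the linked article) of the core idea: each additional bit doubles the number of distinct values a processor can represent or address.

```python
# Each extra bit doubles how many distinct values a number can hold.
for bits in (8, 16, 32, 64):
    print(f"{bits:2d}-bit: {2 ** bits:,} possible values")

# One practical consequence: a 32-bit address can reach at most
# 2**32 bytes of memory (4 GiB), while a 64-bit address can reach
# 2**64 bytes, vastly more than any phone or PC ships with today.
print("32-bit address space:", 2 ** 32 // 1024 ** 3, "GiB")
```

In other words, the jump from 32-bit to 64-bit isn’t merely “twice as big”; it’s astronomically bigger, which is part of why it matters for memory and for certain kinds of math.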


What Steve Jobs saw in Siri (and why I’m glad he lived to see it)

October 10, 2011

I was truly speechless the moment I found out Steve Jobs passed away — so suddenly, only a day after Apple’s first big announcement without him at the helm. I wonder if he held on, wanting to see the transition to Tim Cook. Perhaps that was pure coincidence. Or maybe he held on for another reason: to see Siri announced to the world.

Siri was Apple’s biggest news on October 4th, but many consumers shrugged it off, saying, “It’s cool, but it’s not an iPhone 5, like we expected. And this voice-driven technology has been creeping forward for some time. It’s hardly big news.” After recovering from the shock and taking time to reflect on what Jobs really did during his lifetime — making computer-driven technology usable for the everyday person — I’m thinking that, for Jobs, Siri might have been the beginning of something much greater that Apple has been dreaming about for a long time — something significant that would eventually, once again, lead to a dramatic shift in personal computing.

Interfaces

Steve Jobs and Steve Wozniak made computing personal foremost by changing the interface. My lifespan happens to coincide with their work, so I can attest to the effect of their genius on ordinary individuals like me. I grew up on the Apple II in grade school, my family owned an Apple IIGS (limited edition, bearing Woz’s signature!), and I eventually moved on to the Macintosh interface and the subsequent Windows graphical interfaces based on it.

The Apple IIGS had an early, primitive GUI operated by mouse. The introduction of the graphical operating system was the pivotal shift in computing that added the “personal” to computer. Everything hinged on this until Apple redefined the elegant human-to-computer interface again with the iPhone and iPad, leveraging touch and gestures to make the device even more natural and personal.

Now, a great many people are claiming that touch and gestures are the way we’ll all interface with computers in the future and that the mouse as we know it is dead. (Here’s one example: The Mouse Dies. Touch and Gesture Take Center Stage) And while I don’t deny the usefulness of touch screens in simplifying the interface, I’ve said before that they aren’t a magic bullet. They actually intimidate some newcomers. (Don Norman, the great design guru, also highlights some issues in Gesture Wars.) Though the harmony between hand and screen has been streamlined, users still must understand graphical conventions. That’s why I’ve been posting for some time that core computing concepts still need to be taught to newcomers. And that’s also why I’m pretty excited about what Apple is doing with Siri, because I think Jobs saw it as the way to reinvent the human-to-computer interface once again…

Your voice is the next interface

Back in 1987, Apple released this futuristic video showcasing a voice-commanded personal assistant. (Note also that the implied date in the video is either September of 2010 or 2011, very close to the actual Siri announcement.)

Though the depiction above feels a little more like the ship’s computer from Star Trek: The Next Generation, Siri is likely not (yet) as advanced as the personal assistant in the Knowledge Navigator video. But it’s not hard to see the connection. And I’m by no means the first one who has drawn comparisons between Siri and Knowledge Navigator (and Star Trek for that matter). Just do a web search and you’ll find plenty of hits. But here’s the salient point…

If you could reliably control your computer by talking to it — like Star Trek — then the need to understand graphical UI elements and gestures is significantly reduced, and the barrier to computing (for newcomers) becomes virtually non-existent. Your voice doesn’t need to be taught. The interface is built in, and the conventions are limited only by what the computer can understand.

As the person who wrote the primer for newcomers learning to understand desktop PC interfaces, I am absolutely thrilled by the prospect of using one’s voice as the primary interface. Think of this: no need to teach someone how to relate to the computer. No need to explain icons and procedures. The teaching of the “interface” is essentially offloaded to whoever teaches the individual to speak. No longer would computer vendors be burdened with GUI usability; they could focus solely on voice command recognition. This would truly be a revolution in computer interface, and it’s only a matter of time before the technology is powerful and adaptive enough to provide this capability. Apple may simply be, as usual, taking a vision to market first.
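To make that last point concrete, here is a toy sketch of what “conventions limited only by what the computer can understand” might look like once speech has already been turned into text. Everything here is hypothetical: the commands and names are my own invention, and no real speech-recognition API is involved.

```python
# A toy "assistant" whose entire interface is its vocabulary.
# The user learns no icons or gestures; the machine simply acts on the
# phrases it can map to an action and admits it when it can't.

COMMANDS = {
    "call": lambda rest: f"Calling {rest}...",
    "remind me to": lambda rest: f"Reminder set: {rest}",
    "what time is it": lambda rest: "It is 3:42 PM",  # canned answer for the sketch
}

def handle(utterance: str) -> str:
    """Match a spoken phrase against the commands the computer 'understands'."""
    text = utterance.lower().strip()
    for phrase, action in COMMANDS.items():
        if text.startswith(phrase):
            return action(text[len(phrase):].strip())
    # The only "convention" the user must learn: stay within this vocabulary.
    return "Sorry, I don't understand that yet."

print(handle("Call Mom"))                # Calling mom...
print(handle("Remind me to buy milk"))   # Reminder set: buy milk
print(handle("Beam me up"))              # Sorry, I don't understand that yet.
```

The interesting part is the failure case: expanding what such an assistant can do is a matter of growing its vocabulary, not of teaching the user a new screenful of controls.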

The future of interfaces

While the mouse may soon be history, I suspect some artists will never give up a physical stylus or other external device for detailed, pixel-level work. And touch screens are certainly here to stay. But I believe that Steve Jobs was preparing to take us to a place where both the hand-powered interface and icon-based operation as we know them take a back seat. I’d like to know what Ted Nelson thinks of this, since he suggested that any person ought to be able to understand a computer within ten seconds. Ted, maybe the future that Jobs was planning to bring us was not a world where we understand the computer, but one where the computer understands us. If we could speak to our computer as we would to a fellow human and have it obey us reliably, then anyone who can speak could, regardless of prior “computer experience,” immediately accomplish the most common computing tasks without first acquiring the metaphor-based mental models that software operation requires today. And that would mean that, though he didn’t live to see it fulfilled, Steve Jobs would have once again orchestrated the rebirth of personal computing for ordinary people.


A Tale of Two Tablets (and a lesson learned)

September 27, 2011

An iPad and a Samsung Galaxy Tab

Having recently had the opportunity to work with both an iPad and a Samsung Galaxy Tab, I have the following impressions (which are predicated on the fact that I am a longtime PC user).

iPad

  • very easy to learn to use
  • easier screen turn-on (thanks to the physical home button)
  • slightly faster “on” from the power-off state
  • pretty simple and intuitive operation — launch an app from the home menu, use the app, press the single home button to return to the home menu
  • pretty good for simple, input-only text entry; painful for editing existing text
  • Overall, a very elegant casual consumption device.

Galaxy

  • harder to learn to use
  • slightly slower and slightly less convenient screen turn-on
  • longer initial startup from the power-off state
  • once acclimated, felt more sophisticated, akin to my desktop PC experience, especially with web browsing and office-like tasks
  • great for simple, input-only text entry; painful for editing existing text
  • Overall, seems more powerful in the traditional computing sense.

Which did I like better? Honestly, I liked both, but for different reasons. It would be difficult to pick between the two. But here’s the ironic twist to this tale…


How will you be served? A map of modern computing devices

August 17, 2011

My friend who got me into computing made a prediction almost 20 years ago. He was holding a calculator in his hand and said something to the effect of, “Our kids will walk around with something smaller than this that will be far more powerful than what’s on our desks today.” Now back then, powerful meant a 486 or early Pentium-class machine. But it wasn’t until the advent of the smartphone, riding on the cellular data network, that his prediction came true. Smartphones are more computer than phone.

And yet, we still have desktop PCs. They’re tremendously more powerful and capable than smartphones (for now) — perhaps even more powerful than we could have imagined 20 years ago. So we find ourselves in a world with a variety of computing devices where no one “uber computing device” rules. I thought I’d perform a somewhat academic exercise and map these devices on a type of “infographic,” to help pinpoint where the gaps are as well as where each type of device excels. It’s a first draft, so feel free to comment. (Click below to get the full-size version.)

A Comparison Map/Infographic of Modern Personal Computing Devices (click to view the full-size image)

(Note: I realize there certainly could be more spectra added to the map. For instance, I didn’t attempt to include Cost (both purchase price and on-going maintenance costs), User Skill Required for operation, options for Peripherals, etc. It’s not intended to be exhaustive.)

After completing the map, one thing stands out: there is no perfect device yet — no one personal computer option that is at the “best end” of all spectra nor one which falls solidly in the middle of all of them as a perfect balance. What do you think will fill the gap? How will you be served in the future?


Taking a lesson from Harry Potter

November 16, 2010

In Harry Potter and the Goblet of Fire, Harry walks to the opening of a small tent and peers inside, only to find the inside is actually the interior of an enormous house. In response, Harry smiles and says to himself, “I love magic.”

Technology users love “magic,” too, but most of us understand enough about how modern devices work to demystify the magic, making them just “cool,” not magic. But I think there’s a lesson we can learn from Harry Potter’s magic tent experience. For just as Harry wasn’t totally astonished to see what was inside the tent, neither are the technology-savvy surprised or confounded when adopting a next-generation technology bearing new interface concepts. So why are older adopters confounded? In particular, why are those who have never used modern technology more confused than ever, even though terrific new interface developments like touch-screen phones are eliminating some of the physical interface barriers of the past?

Back in Why Do “Old Folks” Need Technology Explained, Anyway, I mentioned how evolving technology usually piggybacks on previous technology in a stair-step fashion. This means that unless a user has experience with the previous technology reference point, a newer technology based on the previous one is not easier to understand but harder. For example, if a person doesn’t understand what a standard two-button computer mouse is for, then the newer mouse with a scroll wheel only makes it more daunting and less understandable. In light of this, let’s consider the new generation of smartphones with the touchable, swipable interfaces that I continually hear lauded as “intuitive!”

If you’ve followed this blog long enough, you’ve probably noted that I have indicated that touchscreen technology is both a good development and one that presents many challenges. Lest you think I won’t commit either way, let me forever clarify my position on touchable/swipable interfaces: for those users past the initial mental hurdles associated with software-driven interfaces, touch screens will be wonderful. The ability to touch, swipe, and move content on the screen via finger(s) rather than an intermediary (mouse) does offer great opportunities for implementing interfaces that are more natural. Current smartphones are a great example. In light of this, today’s question to ponder is: why are smartphones just as confusing for brand-new technology users as PCs? If their interfaces are so intuitive, why can’t everyone just pick one up and naturally understand how it works? After all, there’s no mouse, no clicking vs. double-clicking to explain, etc.

Here’s a scenario to challenge your thinking. Have you ever put a smartphone in front of a person who is not already a PC user? (Let that percolate for just a minute. Never used a PC. Not familiar with software interface concepts. Yes, such people are out there.) If you’ve done this (and have undoubtedly had to talk them through it), what was their reaction the first time you swiped the screen? I’ll bet it was something like this. First, a confused, blank stare at the device, followed by the inevitable “I don’t understand” sheepish look. And then: “Where’d everything go? What’s this stuff? Why is this on the screen now? Why is the stuff that was there before no longer there?”

It’s like magic. But why? Why do they act like what is no longer on the screen is gone forever? Why are they confused by something now appearing on the screen that wasn’t there before? I think it stems from the Software Generation Gap. For non-software users, a pre-software (solid-state) interface is what it is, and that’s all it ever is. So what’s the chief difference between this person and you? You have already mastered the previous-generation technology and understand software-driven interface concepts. You have already experienced and internalized what Harry Potter experienced with the magic-driven tent. With a software-driven device, paradoxically, the inside is larger than the outside.

You see, though a mouse and scrollbars are not the most intuitive interface conventions, you previously adopted, mastered, and mentally internalized them. Next-generation smartphone interfaces therefore make perfect sense to you, because your mental model already supports the fact that much more can exist outside the boundaries of the screen. The non-software user, on the other hand, can’t (yet) conceptualize what I’ve sketched here:

A diagram of a phone’s off-screen real estate

You already understand that the inside is larger than the outside, so simply using a different technique to move what is off-screen onto the screen doesn’t require you to change your mental model. For you, the only difference is physical — you’re now using your finger to bring what is off-screen into the screen space, rather than a clunky old mouse. Because you have already used scrollbars on a PC, you think nothing of accessing something off-screen. The freedom to interact sans mouse is what makes you think it’s a radically new, intuitive experience. But as marvelous as touch-screen technology is, it is not even half of what makes touchable, swipable interfaces “intuitive.” The intuitive part comes from what’s already in your mind. You are unconsciously leveraging the mental concepts drawn from your PC software experience. I’d argue that this is what makes the smartphone’s interface seem so intuitive and quick to learn. Already possessing a solid mental model allows you to feel that the physical interface is more “natural” than the virtual one requiring a mouse.
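If it helps to see the idea spelled out, here is a minimal sketch of the “viewport over larger content” model the diagram above is getting at. The numbers and names are purely illustrative (nothing here comes from a real phone OS): nothing off-screen is destroyed; swiping, scrolling, or mouse-wheeling merely changes which slice the screen shows.

```python
# The content is larger than the screen; the screen is just a moving window.

CONTENT = [f"App {n}" for n in range(1, 13)]   # twelve home-screen icons
SCREEN_WIDTH = 4                               # icons visible at one time

def visible(offset):
    """Return the slice of content currently 'on screen'."""
    return CONTENT[offset:offset + SCREEN_WIDTH]

offset = 0
print("Before swipe:", visible(offset))   # ['App 1', 'App 2', 'App 3', 'App 4']

# A left swipe (or a scrollbar drag, or a mouse-wheel tick) is just a
# change of offset; nothing is created or destroyed.
offset += SCREEN_WIDTH
print("After swipe: ", visible(offset))   # ['App 5', 'App 6', 'App 7', 'App 8']
```

Whether the offset changes because of a finger, a mouse, or a voice command is an implementation detail; the mental model of a window moving over larger content is the part that has to be learned.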

So where does this leave us? I guarantee that once core software concepts are understood, it doesn’t matter so much whether the mechanism to slide content through a screen/viewport is a mouse, trackpad, finger directly on-screen, hand gestures in the air, voice commands, or an eye-tracking technique. Humans can adapt remarkably well once they have a model for how they need to behave. The device itself isn’t the hurdle that prevents folks from understanding; it’s the concepts that present the gap. Until that gap can be bridged for newcomers, the digital world might as well be as magical as Harry Potter’s.


ET’s Predictions (Year 2)

August 18, 2010

In keeping with Explain Technology’s anniversary tradition, it’s time to make some predictions about the technology market as it impacts new users of technology and how we who explain technology may need to acclimate. From my perspective, the future looks pretty sweet, and so this can be pretty short…


What “dropping my land line” really means

February 28, 2010

I’ve had yet another set of friends indicate they will be “dropping their land line.” Their reason? Well, it just makes financial sense, they claim. Why pay for a dedicated home phone? They can get all the features they need, and then some, as part of their cell plan. (After all, a mobile phone is pretty much a necessity these days, so it’s the immobile phone that’s really optional, apparently.) The financials aside, mobility is indeed a huge motivating factor as well. They’re rarely at home, so my chances of catching them there are slim. With a cell phone, I can probably always reach them, right?

Now clearly, they’re in the majority these days, with cell coverage, features, and pricing making the dedicated home phone more of a novelty. But I’ve noticed a strange irony amongst all my friends who now have only cell phones: I can never get hold of them. I used to call their land line, and if they were home, they’d answer. If not, I’d leave a message, and they’d call me back. But now, I almost never talk to them. I call, and the phone just rings… or goes to voicemail immediately.

I make excuses for them. They’re driving through a school zone. They’re out for a kid-free romantic dinner. Or at a doctor’s appointment. Yeah, that’s it. Who would want to be interrupted during their appointment with the proctologist or OB/GYN?

But the truth is, after experiencing this time after time, I’m beginning to wonder if the mobile-only movement is more annoying than convenient. Why aren’t they answering this time? Is it just me? Or is it that mobile-only contact means they’re always receiving calls while they’re out and about — already busy, with the unexpectedly ringing phone competing with the pressures of space and time that come with being in public? Is it possible that having a mobile phone actually makes the call less convenient for the receiver and simply more convenient for the caller? Is it really a selfish, one-sided convenience which inevitably breeds the use (and perhaps even necessity) of “silence” features and voicemail?

I’m not going to let this post devolve into a rant against the breakdown of the social structure due to modern technological developments, but I am growing convinced that “I’m dropping my land line” means “you’ll likely never talk to me ever again without my voicemail screening you first.”