Reflections on my relationship with Flash

February 14, 2014

Someone recently asked me why I liked Flash so much and have been a loyal advocate for so long. Even though HTML5 and modern browsers have allegedly (according to web design enthusiasts) sounded the death knell for Flash, my affection for Flash lives on. But why? Well, I had to do some soul-searching to answer that, and my answer may surprise you with its simplicity, which (perhaps rightly, perhaps blindly, depending on your perspective) ignores nearly all the technical pros and cons, tell-tale marketplace warnings, and standards-based arguments in the Flash vs. HTML5 debate. But hear me out…

I loved Flash because the products I created with it… just… worked. By “worked,” I mean my creations, designed once by me, were experienced by all end users exactly as I meant for them to be “played back.” Flash worked because the creation software and the player/plugin were offered and controlled by the same company. With everything from creation to playback owned under one roof, you could expect it to work flawlessly. And nearly all of the time, it did. That’s not to say creating and deploying Flash-based projects was always easy or as straightforward as it could have been. But, from my designer’s point of view, once I validated that my creation worked, I could be confident it would work almost universally, for everyone. I could depend on it — rely on it being close enough to “fact” that I didn’t have to worry about my creation being presented as anything other than intended to the end user. There’s something that feels “right” about that — something inside that says, “Yeah, it shouldn’t have to be any harder than this.”

Contrast that with the web browsers of Macromedia Flash’s heyday: all from different vendors, all with different features, all with incompatibilities, and standardization largely lacking. As I’m fond of saying, no one won the browser wars, but the consumers definitely lost. (And in some ways, perhaps we designers did, too.) Faced with those browsers — a “playback” mechanism I could never guarantee would present a creation consistently or faithfully — it’s still not hard for me to look back and validate my affection for Flash. Imagine being a major motion picture studio, releasing a film to theaters around the world, and never knowing whether each moviegoer in each theater will have the same experience. Imagine that in some theaters’ presentations of your film, an actor mysteriously (or perhaps magically) simply doesn’t show up in a scene or two. That’s what trying to create rich web experiences without Flash felt like during the browser wars. And while HTML5 standards seem to be heading in the right direction now, it still feels that way to a certain extent. Creating a single experience shouldn’t be as hard as it still is.
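
For the programmers reading along, here is a hypothetical sketch of what that era demanded. It is my own illustration, not code from any particular project, but the branches reflect the well-known vendor split of the time: every script that merely needed to find an element on a page had to hedge against three different browser families.

    // Illustrative only: locating a page element during the browser wars,
    // before document.getElementById worked everywhere.
    function findElement(id: string): any {
      const doc = document as any;
      if (doc.getElementById) {
        return doc.getElementById(id); // W3C DOM: IE5+, Netscape 6+
      }
      if (doc.all) {
        return doc.all[id];            // Internet Explorer 4's proprietary collection
      }
      if (doc.layers) {
        return doc.layers[id];         // Netscape 4's proprietary "layers"
      }
      return null;                     // some browsers offered no way at all
    }

Multiply that by every event model, font, and CSS quirk, and the appeal of a single player that behaved identically everywhere becomes easy to understand.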

I understand all the pros and cons of standards, innovation, monopolies, software life cycles, etc. My mind knows all that. But I can’t help feeling that those of us who love designing digital experiences for others to consume continue to spend more time on the technical nuances of the presentation/playback product (out of unfortunate necessity) than on the part of the craft we enjoy. It’s that part of me that loves Flash. In a relatively easy-to-understand interface, it allowed creators to produce relatively rich experiences for the masses to consume, without the mess of sifting through the browser vendors’ junk. So Flash or no Flash, I hope the days I remember so fondly will return — for the good of all of us who love creating the experience more than wrangling the gritty technical details.


UPDATE: Three days after posting this, Lars Doucet shared some similar feelings on Flash development in his post: Flash is dead; long live OpenFL!


Computer People vs. Normal People

August 6, 2013

Have you noticed how there seem to be some people who just “get” computers and others who don’t? I call the former “computer people” while the latter are simply “normal.”

There’s nothing wrong with not being a computer person. In fact, I think the majority of people in the world are not “computer people.” But here’s a true confession: I did not realize I actually was a computer person until much later in life. I thought I was like everyone else, until people started referring to me as “one of those people who understands computers.”

After pondering this for a great many years, I’ve finally acknowledged that I’m different, but more importantly, I understand the difference between normal people and computer people: “computer people” think like machines. They understand machines. They feel right at home with computers because computers are made by (and, except for a few exceptional attempts to the contrary, which we’ll discuss shortly, for) people who enjoy controlling and operating machines. I think this is why it’s difficult for these two camps to understand each other. One side is baffled that understanding technology is so difficult for the other, and the other thinks the first was born with some innate magical ability.

I thought everyone would be able to learn to understand computers as easily as I did… until I began the journey of writing The Ultimate PC Primer. It was an attempt to make the mysterious approachable for the commoner, and it was eye-opening for me, forcing me to put myself in others’ positions to see what they don’t understand. Has anyone else thought this way about bridging the gap? Actually, yes, and I think it worked well for him and for his customers.

Steve Jobs was arguably one of the most successful designers at bridging “normal people” and modern computing capabilities, and at doing so in the public marketplace (not just in a research space). According to this article (Review: iOS 7 Gives Us Insight Into the Future of Mobility), he was a fan of skeuomorphic design. Skeuomorphism is the practice of mimicking the materials or ornamental elements of something that exists in the real world (source).

I think this is partially why iOS and Apple’s mobile products experienced such rapid adoption amongst “normal people,” even those without much understanding of prior personal computing technology. Mental associations with familiar things are both comforting and illustrative for “normal people,” something I’ve found “computer people” often don’t understand. They don’t need the associations because understanding comes easily, naturally; they understand machines just fine without any “artificial” metaphors. But for all the “normal people,” the mysterious black box is more usable when it feels like something from a past experience in real life. In fact, there are some indications that connections to physical things are being craved more and more as our existence becomes increasingly virtual. Skeuomorphic design certainly plays off these desires nicely, but it can go overboard, as I (and others) have pointed out previously. Still, cleverly and subtly connecting a real-world concept — either aural or visual — to a digital interface can be powerful and effective.

Since that iOS review article hints that skeuomorphic design is on its way out at Apple now, it will be interesting to see if the resulting design of computing devices once again starts to feel like it’s “by computer people, for computer people.”


Explaining Computer Viruses (with zombie chefs)

February 8, 2012

Ever wondered how to explain the concept of computer viruses? My PC recently acquired a virus. While I was killing it (using my anti-virus software), one of my children observed, “Oh! Computers get sick, too?” Cute? Well, adults often have the same question. So here’s an answer and explanation using analogies and a fun story involving zombies…

No, computers can’t get biological diseases. Like most personal computing concepts, the term is metaphorical, borrowed from the real-world equivalent. A real human or animal virus is an entity that intrudes — gets inside the body — and goes about doing something it shouldn’t, usually causing harm.

The same is true in computing. In The Ultimate PC Primer, I explain that computer programs (software) are really sets of instructions, like a recipe. The computer is just a mindless machine following those instructions. In fact, it knows nothing except how to follow its instructions — precisely. And ideally, that’s what you want from a computer: consistency and precision in obeying the instructions given to it. That’s what makes it useful to you. It follows instructions presumably intended to produce a helpful result. But what would happen if it were given instructions designed to do something harmful?
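
To put the recipe analogy in code terms, here is a toy sketch of my own (not from the book): a “machine” that executes whatever list of instructions it is handed, with no judgment about whether any step is helpful or harmful.

    // A toy "computer": it runs any list of instructions, mindlessly and precisely.
    type Instruction = () => void;

    function runProgram(instructions: Instruction[]): void {
      for (const step of instructions) {
        step(); // no intent, no judgment: just obedience
      }
    }

    // A helpful "recipe"...
    const recipe: Instruction[] = [
      () => console.log("Open the document"),
      () => console.log("Spell-check the text"),
      () => console.log("Save the file"),
    ];

    // ...into which a virus slips one extra instruction.
    recipe.push(() => console.log("Copy myself to every file on the disk"));

    runProgram(recipe); // the machine faithfully performs all four steps

The machine isn’t sick in any biological sense; it’s simply following instructions it should never have been given. Which brings us to the zombie chefs…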

Imagine you are entering a cooking or baking competition. You must organize and direct the efforts of five chefs who will prepare five dishes you have selected from five recipes. The completed dishes will be presented to the judges of the competition. The chefs will follow any instructions exactly. (They’re like mindless chef-zombies who know only about following recipes.) All you need to do is provide the instructions for how they are to prepare each dish. So you select five sophisticated recipes from your recipe collection or cookbook and set them out for the zombie chefs to follow. But… Read the rest of this entry »


A Tale of Two Tablets (and a lesson learned)

September 27, 2011

[Image: an iPad and a Samsung Galaxy Tab]

Having recently had the opportunity to work with both an iPad and a Samsung Galaxy Tab, I have the following impressions (which are predicated on the fact that I am a longtime PC user).

iPad

  • very easy to learn to use
  • easier screen turn-on (thanks to the physical home button)
  • slightly faster “on” from a powered-off state
  • pretty simple and intuitive operation — launch an app from the home menu, use the app, push the single home button to return to the menu
  • pretty good for simple, input-only text entry; painful for editing existing text
  • Overall, a very elegant casual consumption device.

Galaxy

  • harder to learn to use
  • slightly slower and slightly less convenient screen turn-on
  • longer initial startup from a powered-off state
  • once I acclimated, felt more sophisticated, akin to my desktop PC experience, especially with web browsing and office-like tasks
  • great for simple, input-only text entry; painful for editing existing text
  • Overall, seems more powerful in the traditional computing sense.

Which did I like better? Honestly, I liked both, but for different reasons. It would be difficult to pick between the two. But here’s the ironic twist to this tale… Read the rest of this entry »


Microsoft on File Management

September 18, 2011

A few weeks back Microsoft posted a note about upcoming Improvements in Windows Explorer (in Windows 8). I’ve previously identified file management as the second most important concept for computer literacy (in The Top 15 Most Important Understandings Needed for Solid PC Literacy). I can also say without any hesitation that the single most difficult, most time-consuming, and most frequently edited and rewritten chapter (for both writing and illustrating) in The Ultimate PC Primer was the one on storage and file management. As such, I was thrilled to have confirmation from Microsoft that they’re taking the importance of file management for all levels of users seriously. They call out Windows Explorer as

the most widely used desktop tool

More importantly, they admit that only a small group of “power users” push Explorer to its limits (and add plugins), while the majority use a handful of common features — copy, paste, rename, delete — frequently. As a result, they claim:

Our goal is to improve the usage experience for a majority of customers

and go on to say that their number one goal for the Windows Explorer rebuild is to:

Optimize Explorer for file management tasks. Return Explorer to its roots as an efficient file manager and expose some hidden gems, those file management commands already in Explorer that many customers might not even know exist.

I don’t often find occasion to publicly thank Microsoft, but in this case I’m quite glad they’re affirming the importance of arming users with a better ability to manage their files. I also applaud their broad acknowledgment that power users don’t represent all users. That said, much of their work is focused on the Ribbon. While I have yet to encounter a single user who likes the Ribbon, Microsoft seems to have done quite a bit of research on this. So if we must use the Ribbon — is it too much to ask to give users the choice? — at least they’re planning to bring the most commonly used features to the top left of it. We’ll see how this (and Windows 8 in general) is received once delivered.

In the meantime, if you know a newcomer to computing who has yet to grasp what file management is all about, check out Lesson 9 in The Ultimate PC Primer. Nearly all of the illustrations there apply to past and current versions of Windows Explorer (and the overall lesson applies to storage and file management in nearly any operating system).
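
And for the “computer people” in the audience, it’s worth noting just how small that core feature set really is. Here is a minimal sketch in TypeScript on Node.js (my own illustration, with made-up file names) of the everyday operations Microsoft calls out:

    // The common file-management operations, expressed with Node's built-in fs module.
    import * as fs from "fs";

    fs.mkdirSync("Archive", { recursive: true });        // make a folder to organize into
    fs.copyFileSync("report.txt", "Archive/report.txt"); // copy (the heart of copy/paste)
    fs.renameSync("Archive/report.txt", "Archive/report-2011.txt"); // rename
    fs.unlinkSync("Archive/report-2011.txt");            // delete

A handful of calls, a handful of commands; much of what Explorer does for everyday users is layered on top of operations this simple.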


Prepare to disown your PC: a prediction on the future of personal computing

August 24, 2011

The introduction of the iPhone changed the destiny of your desktop PC. You probably just didn’t know it at the time.

Clearly, the iPhone was more than just another digital cell phone. It was primarily a computer that happened to include voice-calling capabilities. It was a mini-PC in your pocket, as one of my close friends predicted over 15 years ago. But that’s not all. The “computer’s” operating system was different. The app store model for acquiring and installing “software” was drastically different as well. But in hindsight, though those features got a lot of press at the time, the iPhone did something else to set the stage for the iPad and the next generation of personal computers: Read the rest of this entry »


Why the iPad isn’t a desktop PC killer (yet)

August 20, 2011

A question to ponder: with the introduction of the iPad, is there still a need to own a PC? In other words, does the iPad mean death for the traditional PC for most consumers? After all, it has that slick touch screen and doesn’t require the keyboard, mouse, and most of the other bulky peripheral components that traditional PCs do. Plus, with that slim form factor, you can take it nearly anywhere. And thanks to both Wi-Fi and cellular network connectivity, it can be connected to the internet almost continuously. What more could a user want?

But is it enough to completely replace a PC? Well, in my evaluation, yes…but no. Here’s why… Read the rest of this entry »