Explaining Programming to Kids

September 14, 2013

There’s been a lot of focus on preparing the next generation to learn to code/program. (See Code.org for starters.) And many websites have sprung up sporting interactive tutorials for hands-on learning. But what about really young kids? Is it ever too early to learn the core concepts of programming?

Dan Shapiro doesn’t think so. He took a leave of absence from his job at Google to give the world another way: a board game. His project, Robot Turtles, went up on Kickstarter last week and is gaining funding. I hope he gets everything he needs, because this is a clever approach: give kids a fun board game with great graphics, and at its heart, routing the Turtles teaches players the logic of writing basic computer programs. It’s brilliant.

So if you’ve got some kids you want to introduce to programming concepts in a fun, offline format, consider supporting the Robot Turtles project. It looks like it’s going to become a reality, but as Dan says, it may never be in print again.

Robot Turtles board game, courtesy Kickstarter project by Dan Shapiro


Copyright explained with puppets (by YouTube’s lawyers)

July 10, 2013

While copyright law isn’t technology itself, it affects plenty of things dealing with technology. So I couldn’t help but re-post this clever video created by YouTube that explains copyright basics with… puppets!

Well done, YouTube!


Backspace x 10000: a true story about the value of highlighting & modifying

July 3, 2012

The following sounds like it belongs in the annals of computing lore along with other Tales of Tech Urban Legends like the infamous “cupholder” CD-ROM drive incident. But I swear I am not making this up.

Years ago a trustworthy colleague told me the true story of one coworker who seemed to take forever meeting deadlines when they involved composing and editing documents in a word processing program. Upon investigation, it was discovered that anytime the coworker found a mistake in a document, she would Backspace, Backspace, Backspace, Backspace, Backspace, Backspace, Backspace, Backspace, Backspace, Backspace over every single letter in the document until she had erased all words up to the typo. She would then begin re-typing the remainder of the document, additional errors would inevitably ensue, and she would again resort to backspacing over all her work. If only she had known the core computing concepts — highlighting and modifying — at her disposal. Just a small bit of knowledge explained the right way would have saved her much time, carpal tunnel surgery, and a lot of new Backspace keys.

I’ve never forgotten that story. It became the genesis for one of the first prop-based analogies I conceived to explain the concept of soft text when writing The Ultimate PC Primer. Here’s the introductory lesson in video form, something I whipped up to commemorate the book’s anniversary and the memory of that funny story that started it all:


UPCP 1-Year Anniversary Thoughts (and 15% discount)

May 4, 2012

Today, I’m celebrating the 1-year anniversary of the publication of The Ultimate PC Primer. And for the entire month of May, the 15% discount is back. That’s right. This isn’t a one-day special just for Star Wars Day; I’m keeping the party going all month. Use code 3SGC9EP7 to get 15% off all copies when you order here.

In case you haven’t noticed from this blog, I’m pretty passionate about the book’s contents, approach, and mission. Here are some brief thoughts on why a guy with a family and full-time job spends 6 years of his own (precious little spare) time and money producing, of all things, a printed book explaining basic computer concepts…

Nothing else uses the same approach — analogies, metaphors, stories, and illustrations from real, physical things — to explain essential computing concepts. I searched for and even purchased a good number of “intro” computer books, only to find they were mostly procedural “how-to” guides. Only one even came close to helping readers actually understand what they were supposed to be learning to use. In short, most computer books assumed readers just needed to know steps to “do” a task. I wanted to help build a foundation from which new users could begin to be self-sufficient.

I wanted to help users who truly knew nothing; help them lose the fear, and begin to relate to common computer technology. Many newcomers have at least some pre-existing knowledge or computer exposure, and there are some pretty good books out there for them. But there’s not much in print from major computer book publishers for those who are at absolute ground zero, having never touched a PC before. Some naysayers have suggested to me such a book is already obsolete. Since my day job is handling technology-based training for a Fortune 100 company, I certainly know where those skeptics are coming from. Yes, the PC landscape is changing. But even with tablets, touch and persistent connectivity, many core concepts of computing are very much the same, no matter the device. It’s the mental framework for engaging a personal computing device that most newcomers really need. So while some of the content will undoubtedly eventually become obsolete, the approach certainly won’t. If anything, I’m finding more opportunity than ever to explain computers and other digital technology using that approach.

The book’s target audience is those approaching or in retirement; essentially senior citizens. This is where the greatest gap in adoption still exists. I caught wind of this new Pew research data on Internet usage, and as you can see, there is still a huge gap between those under 65 and those above. (It’s less than 50% adoption for those 65 and older, though total adoption is 80%. 12% don’t even own a computer!) This is particularly interesting and timely considering I also just read a news article about how US Series E Savings Bonds can now only be purchased online. It’s the latest in a number of government services that can only be acquired via the Internet. You’d think that this, plus the limited mobility of older adults, would actually equate to usage higher than 50%. A lot of the people in the 60 and older demographic clearly think computing and the Internet are too difficult to engage later in life. I didn’t want them to be left behind. I wanted to provide the encouragement, break down the fear barrier, and do that through everyday things each reader could relate to. And while I’m a big supporter of local community classes offered by colleges, libraries, and SeniorNet, some newcomers don’t have the gumption to sign up for such services. It requires admitting in front of others that they’re in the segment that knows nothing, and feeling like an outsider. That’s precisely why I wrote and published a book (in print) rather than making an e-book, DVD, online video, lecture series, or curriculum. I wanted to offer each reader the chance to explore the concepts of computing in the most familiar, safe way possible, at an affordable price.

My book will never sell a million copies. I don’t care. I didn’t do it to become famous or rich. Computer literacy training isn’t my day job (though I can say what I learned by doing it has also helped me professionally in a number of ways). I did it because I know my approach works, no one else was doing it, and I had what it took to put it all together. It’s my hope it changes a lot of newcomers’ worlds for the better. And for the rest — the explainers, designers, and developers — I hope the approach and concept, the way of thinking about and presenting intangible, virtual concepts through narratives, analogies, metaphors, and direct comparisons to real-world physical things, helps you think of your users and audiences in new ways as well.

Happy anniversary!


Explaining 4 Key Programming Concepts with Household Items

April 19, 2012

I’ve often found myself providing others with simple explanations of fundamental programming concepts. While I’m by no means committed to writing an Ultimate Programming Primer, my work on The Ultimate PC Primer got me thinking: what analogies, stories, or props would I use to explain the basics of programming/scripting to total newcomers?

First, newcomers need to understand that programming or scripting is the discipline of writing instructions for a computer to follow. In order to be useful, the instructions must conform to a language the computer can understand. Simple scripting (like JavaScript for the Web) often involves taking information in textual or numerical form, structuring it, manipulating it, and producing output (like textual or numerical information, or even animation). In short, scripting is providing the recipe (as I call software in The Ultimate PC Primer) for a computer to “cook” something up.
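As a quick illustration of that recipe idea (a sketch of my own, not something from the book), here are a few lines of JavaScript that take textual input, tidy it up, manipulate it, and produce new output:

```javascript
// A tiny "recipe": instructions the computer follows step by step.
function shoutGreeting(name) {
  const trimmed = name.trim();          // take the raw text and clean it up
  const upper = trimmed.toUpperCase();  // manipulate it into a new form
  return "HELLO, " + upper + "!";       // produce new textual output
}

console.log(shoutGreeting("  world ")); // prints "HELLO, WORLD!"
```

Input, a few transformation steps, output: nearly every script, no matter how large, is built from this same shape.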

Learning how to write a “virtual recipe” can be difficult, since computer languages, as with real spoken and written languages, require correct grammar. In fact, computers are unforgiving with grammar, and usually they’re not smart enough to guess at what you mean. So, programming is challenging due to the precision required as well as the foreign nature of the structural concepts. And that’s why comparing the intangible to something physical — something from the real world — is often so handy.
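To make that unforgiving precision concrete, here’s a small JavaScript example (my addition, for illustration): a single pair of quotation marks changes the meaning of `+` from arithmetic to text-joining, and the computer never guesses which one you intended.

```javascript
// The computer obeys grammar literally; it never guesses intent.
const sum = 2 + 2;       // two numbers: the + means arithmetic
const joined = "2" + 2;  // text and a number: the + now joins text

console.log(sum);    // 4
console.log(joined); // "22"
```

Two characters of difference, two completely different results; that’s the level of precision a newcomer has to get comfortable with.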

Here’s how I explain four of the most common computer scripting/programming concepts to newcomers trying to grasp them, leveraging a few common household items. (Since I haven’t included photos of each of these, be sure to see the video at the bottom of this post to grasp the power of the visuals.) Read the rest of this entry »


What Steve Jobs saw in Siri (and why I’m glad he lived to see it)

October 10, 2011

I was truly speechless the moment I found out Steve Jobs passed away — so suddenly, only a day after Apple’s first big announcement without him at the helm. I wonder if he held on, wanting to see the transition to Tim Cook. Perhaps that was pure coincidence. Or maybe he held on for another reason: to see Siri announced to the world.

Siri was Apple’s biggest news on October 4th, but many consumers shrugged it off, saying, “It’s cool, but it’s not an iPhone 5, like we expected. And this voice-driven technology has been creeping forward for some time. It’s hardly big news.” After recovering from the shock and taking time to reflect on what Jobs really did during his lifetime — making computer-driven technology usable for the everyday person — I’m thinking that, for Jobs, Siri might have been the beginning of something much greater that Apple has been dreaming about for a long time — something significant that would eventually, once again, lead to a dramatic shift in personal computing.

Interfaces

Steve Jobs and Steve Wozniak changed computing to be personal foremost by changing the interface. My lifespan happens to coincide with their work, so I can attest to the effect of their genius on ordinary individuals like me. I grew up on the Apple II in grade school, my family owned an Apple IIGS (limited edition, bearing Woz’s signature!), and I eventually moved on to use the Macintosh interface and the subsequent Windows graphical interfaces based on it.

The Apple IIGS brought a primitive, mouse-operated GUI to the Apple II line. The introduction of the graphical operating system was the pivotal shift in computing that added the “personal” to computer. Everything hinged on this until Apple redefined the elegant human-to-computer interface again with the iPhone and iPad, leveraging touch and gestures to make the device even more natural and personal.

Now, a great many people are claiming that touch and gestures are the way we’ll all interface with computers in the future and that the mouse as we know it is dead. (Here’s one example: The Mouse Dies. Touch and Gesture Take Center Stage) And while I don’t deny the usefulness of touch screens in simplifying the interface, I’ve said before they aren’t a magic bullet. They actually intimidate some newcomers. (Don Norman, the great design guru, also highlights some issues in Gesture Wars.) Though the harmony between hand and screen has been streamlined, users still must understand graphical conventions. That’s why I’ve been posting for some time that core computing concepts still need to be taught to newcomers. And that’s also why I’m pretty excited about Apple with Siri, because I think Jobs saw it as the way to reinvent the human-to-computer interface once again…

Your voice is the next interface

Back in 1987, Apple released this futuristic video showcasing a voice-commanded personal assistant. (Note also that the implied date in the video is either September of 2010 or 2011, very close to the actual Siri announcement.)

Though the depiction above feels a little more like the ship’s computer from Star Trek: The Next Generation, Siri is likely not (yet) as advanced as the personal assistant in the Knowledge Navigator video. But it’s not hard to see the connection. And I’m by no means the first one who has drawn comparisons between Siri and Knowledge Navigator (and Star Trek for that matter). Just do a web search and you’ll find plenty of hits. But here’s the salient point…

If you could reliably control your computer by talking to it — like Star Trek — then the need for understanding graphical UI elements and gestures is significantly reduced and the barrier to computing (for newcomers) becomes virtually non-existent. Your voice doesn’t need to be taught. The interface is built-in, and the conventions are limited only to what the computer can understand.

As the person who wrote the primer for newcomers learning to understand desktop PC interfaces, the prospect of using one’s voice as the primary interface absolutely thrills me. Think of this: no need to teach someone how to relate to the computer. No need to explain icons and procedures. The teaching of the “interface” is essentially offloaded to whoever teaches the individual to speak. No longer would computer vendors be burdened with GUI usability; they could focus solely on voice command recognition. This would truly be a revolution in computer interface, and it’s only a matter of time before the technology is powerful and adaptive enough to provide this capability. Apple may simply be, as usual, taking a vision to market first.

The future of interfaces

While the mouse may be history very soon, I don’t think some artists will ever give up a physical stylus or external device to assist with detailed pixel-resolution work. And touch screens are certainly here to stay. But I believe that Steve Jobs was preparing to take us to a place where both hand-powered interfaces and icon-based operation as we know them take a backseat. I’d like to know what Ted Nelson thinks of this, since he suggested that any person ought to be able to understand a computer within ten seconds. Ted, maybe the future that Jobs was planning to bring us was not a world where we understand the computer, but one where the computer understands us. If we could speak to our computers as we would a fellow human and have them obey us reliably, then anyone who can speak would, regardless of prior “computer experience,” be able to immediately accomplish the most common computing tasks without the overhead of pre-existing mental models for software operation based on metaphors. And that would mean that, though he didn’t live to see it fulfilled, Steve Jobs would have once again orchestrated the rebirth of personal computing for ordinary people.


A Tale of Two Tablets (and a lesson learned)

September 27, 2011

an iPad and Samsung Galaxy Tab

Having recently had the opportunity to work with both an iPad and a Samsung Galaxy Tab, I have the following impressions (which are predicated upon the fact that I am a long-time PC user).

iPad

  • very easy to learn to use
  • easier screen turn-on (thanks to physical home button)
  • slightly faster “on” from power off state
  • pretty simple and intuitive operation — launch app from home menu, use app, push single home button to return to app menu
  • pretty good for simple, input-only text entry; painful for editing existing text
  • Overall, a very elegant casual consumption device.

Galaxy

  • harder to learn to use
  • slightly slower and slightly less convenient screen turn-on
  • longer initial startup from power off state
  • once acclimated, felt more sophisticated, akin to my desktop PC experience, especially with web browsing and office-like tasks
  • great for simple, input-only text entry; painful for editing existing text
  • Overall, seems more powerful in the traditional computing sense.

Which did I like better? Honestly, I liked both, but for different reasons. It would be difficult to pick between the two. But here’s the ironic twist to this tale… Read the rest of this entry »