Explain/Teach Nearly Anything with this Hands-on Game

November 20, 2013

[Image: the cloudBoard, viewed from above at an angle]

cloudBoard is a new project on Kickstarter that intrigues me: it pairs a physical board holding puzzle-like pieces with a digital game component to teach various concepts, primarily to kids. I think this learning approach is ingenious. It taps kids’ natural desire to play with (and relate to) something physical, like a classic board or puzzle game, while still driving a video game with the rich visual element kids love. And not just kids. The guys at Digital Dream Labs have play-tested their invention with kids and adults, often finding that it bridges the generation gap and engages young and old alike. Early play-testing also revealed some encouraging results with autistic individuals.

How’s it work? Players place various tiles in the cloudBoard in specific positions or sequences, and those placements directly unlock functionality and features in the video game. Repetitive play — the key to learning — is easy: simply swap or rearrange the pieces in the board, and learners can tinker with configuration after configuration.
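Digital Dream Labs hasn’t published how the board-to-game link works internally, but the concept is easy to picture in code. Here’s a minimal, purely hypothetical sketch in Python; the tile names, grid positions, and action names are my own inventions, not theirs:

```python
# Hypothetical sketch of a board-to-game mapping (not Digital Dream Labs' actual code).
# Each physical tile reports its identity and grid position; the game reads the
# layout and enables the matching behavior.

TILE_ACTIONS = {
    ("bridge", (2, 3)): "extend_bridge",  # a bridge tile in slot (2, 3) spans the gap
    ("jump",   (4, 1)): "enable_jump",    # a jump tile near the start lets the hero hop
}

def read_board(board_state):
    """Map the current physical layout to in-game actions."""
    actions = []
    for tile_id, position in board_state:
        action = TILE_ACTIONS.get((tile_id, position))
        if action:
            actions.append(action)
    return actions

# Swapping or rearranging tiles changes board_state, so the same pieces
# produce different in-game behavior: the tinker-and-retry loop.
print(read_board([("bridge", (2, 3)), ("jump", (4, 1))]))
```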

[Image: top-down view of the cloudBoard with stickers]

3 things I love most about the cloudBoard project:

It bridges the physical and virtual. I’ve long been a believer in leveraging aspects of the physical world in explanation and teaching because people connect with physical things in a unique and memorable way. cloudBoard takes this to a whole new level, grabbing the best aspects of board and puzzle-piece games from our childhood while marrying them to what’s possible with modern digital computer technologies. And unlike the Skylanders concept, where the physical figure’s position or relationship to other figures is irrelevant, the physical cloudBoard pieces mean different things when they’re in different positions on the board.

[Image: the cloudBoard with Cork The Volcano]

It’s not just a single game. Multiple games can be supported. While the Cork The Volcano game — designed to teach trial-and-error concepts, key to understanding computer programming — is the first game to be released, the Digital Dream Labs dudes have multiple other concepts in the works, including a chemistry game, a music game, and a farming game. Virtually any game can be created with the right software and new tops for the puzzle pieces. And here’s where the real extensibility of the cloudBoard concept shines…

Developers can extend it themselves. I spoke with Justin Sabo from Digital Dream Labs, and I think this is a point worth emphasizing about their project. The game APIs will be open to those who wish to program their own video game to interface with the cloudBoard hardware. And unlike some other game systems, cloudBoard is designed to run the same across many platforms (tablet, PC, etc.), extending its usefulness.
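The APIs weren’t public at the time of writing, so the following is only a guess at the shape such an interface might take; every class and method name here is illustrative rather than anything from Digital Dream Labs. The point is simply that your game code listens for tile events and doesn’t care what platform it’s running on:

```python
# Illustrative only: a guess at what a cross-platform cloudBoard SDK could look like.

class CloudBoardClient:
    """Stand-in for whatever client library Digital Dream Labs ultimately ships."""

    def __init__(self):
        self._handlers = []

    def on_tile_change(self, handler):
        """Register a callback to run whenever a tile is placed, moved, or removed."""
        self._handlers.append(handler)

    def simulate_event(self, tile_id, position):
        """Fake a hardware event so this sketch can run without a real board."""
        for handler in self._handlers:
            handler(tile_id, position)

board = CloudBoardClient()

# Your game logic, identical whether it runs on a tablet or a PC:
board.on_tile_change(lambda tile, pos: print(f"{tile} placed at {pos}"))
board.simulate_event("volcano_cork", (0, 0))
```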

[Image: cloudBoard puzzle pieces on their own]

The tops of the puzzle pieces can also be swapped with other tops, allowing the physical aspect to be ever-changeable while reusing the original pieces. (You don’t necessarily need brand new pieces for every game; just change the toppers.) Change the pictures? Fine. Add fuzzy three-dimensional toppers? Go for it. 3D-print your own? Why not? This is part of the future vision that Justin shared with me — that what they’ve created is a platform others can easily expand upon. Who knows how many educational games could be created?

Here’s what I don’t like about the cloudBoard project:

It’s not fully funded yet. So spread the word and head on over to the cloudBoard Kickstarter page to help make it a reality.


Screen Resolution Explained

March 21, 2012

What is screen resolution? Low resolution? High resolution? What’s the difference?

If you or someone you know is baffled by the concept of screen resolution, including why the new iPad (iPad 3?) screen resolution of 2048×1536 is noteworthy, here’s a video from the Explain Technology YouTube channel that explains it.
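To see why that resolution number matters, the arithmetic is straightforward: total pixels = width × height. Here is a quick sketch comparing the new iPad’s 2048×1536 display to the 1024×768 display of the earlier iPads:

```python
# Resolution arithmetic: total pixels = width x height.
old_ipad = 1024 * 768   # 786,432 pixels (iPad 1 and iPad 2)
new_ipad = 2048 * 1536  # 3,145,728 pixels (the new iPad)

print(new_ipad / old_ipad)  # 4.0 -- four times as many pixels in the same screen size,
                            # which is why individual pixels become much harder to see
```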

A while back I wrote a post on how to use pinscreens to explain screen resolution, but I decided to take my own advice and use my suggested props to explain the concept of resolution via video.


Books: when old and new technology meet

January 20, 2012

Are books old or new technology? Is printing books on paper obsolete, or does it still have an advantage in some cases? After Apple’s announcement about iBooks Textbooks yesterday, I think the answer is: both.

At the heart of it, iBooks is 1.) about the difference between electronic and printed books and 2.) about bringing the real-time digital world into the traditional publishing sphere. Kindle, Nook, and (to a lesser extent) iPad have been in the business of taking what was formerly constrained to paper and making it available quickly, sometimes more cheaply, and certainly in a more portable package: e-books. And there’s no doubt that the fusion of e-books with the rest of the digital world was going to happen eventually. Plus, there’s that unspoken rule in the tech market which says, “New is always better than anything older.” So by that reasoning, books must be no exception, right? Well, e-books have numerous advantages, but I think there is one exception.


What Steve Jobs saw in Siri (and why I’m glad he lived to see it)

October 10, 2011

I was truly speechless the moment I found out Steve Jobs passed away — so suddenly, only a day after Apple’s first big announcement without him at the helm. I wonder if he held on, wanting to see the transition to Tim Cook. Perhaps that was pure coincidence. Or maybe he held on for another reason: to see Siri announced to the world.

Siri was Apple’s biggest news on October 4th, but many consumers shrugged it off, saying, “It’s cool, but it’s not the iPhone 5 we expected. And this voice-driven technology has been creeping forward for some time. It’s hardly big news.” After recovering from the shock and taking time to reflect on what Jobs really did during his lifetime — making computer-driven technology usable for the everyday person — I’m thinking that, for Jobs, Siri might have been the beginning of something much greater that Apple has been dreaming about for a long time: something significant that would eventually, once again, lead to a dramatic shift in personal computing.

Interfaces

Steve Jobs and Steve Wozniak made computing personal, first and foremost, by changing the interface. My lifespan happens to coincide with their work, so I can attest to the effect of their genius on ordinary individuals like me. I grew up on the Apple II in grade school, my family owned an Apple IIGS (limited edition, bearing Woz’s signature!), and I eventually moved on to the Macintosh interface and the subsequent Windows graphical interfaces based on it.

The Apple IIGS gave me my first taste of a primitive, mouse-driven GUI. The introduction of the graphical operating system was the pivotal shift that put the “personal” in personal computer. Everything hinged on that interface until Apple redefined the elegant human-to-computer interface again with the iPhone and iPad, leveraging touch and gestures to make the device even more natural and personal.

Now, a great many people are claiming that touch and gestures are the way we’ll all interface with computers in the future and that the mouse as we know it is dead. (Here’s one example: The Mouse Dies. Touch and Gesture Take Center Stage.) And while I don’t deny that touch screens simplify the interface, I’ve said before that they aren’t a magic bullet; they actually intimidate some newcomers. (Don Norman, the great design guru, also highlights some issues in Gesture Wars.) Though the harmony between hand and screen has been streamlined, users still must understand graphical conventions. That’s why I’ve been posting for some time that core computing concepts still need to be taught to newcomers. And that’s also why I’m pretty excited about what Apple is doing with Siri, because I think Jobs saw it as the way to reinvent the human-to-computer interface once again…

Your voice is the next interface

Back in 1987, Apple released its futuristic Knowledge Navigator concept video, showcasing a voice-commanded personal assistant. (Note also that the implied date in the video is September of either 2010 or 2011, very close to the actual Siri announcement.)

Though the depiction in that video feels a little more like the ship’s computer from Star Trek: The Next Generation, Siri is likely not (yet) as advanced as the personal assistant in the Knowledge Navigator video. But it’s not hard to see the connection. And I’m by no means the first one to draw comparisons between Siri and Knowledge Navigator (and Star Trek, for that matter); just do a web search and you’ll find plenty of hits. But here’s the salient point…

If you could reliably control your computer by talking to it — like in Star Trek — then the need to understand graphical UI elements and gestures would be significantly reduced, and the barrier to computing for newcomers would become virtually non-existent. Your voice doesn’t need to be taught. The interface is built in, and the conventions are limited only by what the computer can understand.

Having written the primer for newcomers learning to understand desktop PC interfaces, I find the prospect of using one’s voice as the primary interface absolutely thrilling. Think of it: no need to teach someone how to relate to the computer. No need to explain icons and procedures. The teaching of the “interface” is essentially offloaded to whoever teaches the individual to speak. No longer would computer vendors be burdened with GUI usability; they could focus instead on voice-command recognition. This would truly be a revolution in computer interfaces, and it’s only a matter of time before the technology is powerful and adaptive enough to provide this capability. Apple may simply be, as usual, taking a vision to market first.
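To make that last point concrete, here’s a toy sketch of the kind of phrase-to-action matching a voice-driven assistant might do behind the scenes. It’s purely illustrative and is not how Siri actually works; the phrases and action names are my own:

```python
# Toy illustration of voice-driven intent matching (not Siri's actual approach).
# The "interface" is just the phrases the system recognizes; there is nothing
# graphical for the user to learn.

INTENTS = {
    "remind me to": "create_reminder",
    "send a message to": "compose_message",
    "what's the weather": "show_weather",
}

def handle(utterance):
    text = utterance.lower()
    for phrase, action in INTENTS.items():
        if phrase in text:
            return action
    return "ask_for_clarification"  # the burden shifts to understanding speech, not teaching icons

print(handle("Remind me to call Mom at noon"))  # create_reminder
print(handle("Open the thing with the icons"))  # ask_for_clarification
```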

The future of interfaces

While the mouse may be history very soon, I don’t think some artists will ever get away from a physical stylus or external device for detailed, pixel-level work. And touch screens are certainly here to stay. But I believe that Steve Jobs was preparing to take us to a place where both hand-powered interfaces and icon-based operation as we know it take a backseat. I’d like to know what Ted Nelson thinks of this, since he suggested that any person ought to be able to understand a computer within ten seconds. Ted, maybe the future that Jobs was planning to bring us was not a world where we understand the computer, but one where the computer understands us. If we could speak to our computers the way we speak to a fellow human and have them obey us reliably, then anyone who can speak, regardless of prior “computer experience,” could immediately accomplish the most common computing tasks, without first building the metaphor-based mental models that operating software requires today. And that would mean that, though he didn’t live to see it fulfilled, Steve Jobs had once again orchestrated the rebirth of personal computing for ordinary people.


A Tale of Two Tablets (and a lesson learned)

September 27, 2011

[Image: an iPad and a Samsung Galaxy Tab]

Having recently had the opportunity to work with both an iPad and a Samsung Galaxy Tab, I have the following impressions (which are predicated upon the fact that I am a long-time PC user).

iPad

  • very easy to learn to use
  • easier to turn the screen on (thanks to the physical home button)
  • slightly faster startup from a powered-off state
  • pretty simple and intuitive operation — launch an app from the home screen, use the app, push the single home button to return to the home screen
  • pretty good for simple, input-only text entry; painful for editing existing text
  • Overall, a very elegant casual consumption device.

Galaxy

  • harder to learn to use
  • slightly slower and slightly less convenient to turn the screen on
  • longer initial startup from a powered-off state
  • once I acclimated, it felt more sophisticated, akin to my desktop PC experience, especially for web browsing and office-like tasks
  • great for simple, input-only text entry; painful for editing existing text
  • Overall, seems more powerful in the traditional computing sense.

Which did I like better? Honestly, I liked both, but for different reasons. It would be difficult to pick between the two. But here’s the ironic twist to this tale…


Prepare to disown your PC: a prediction on the future of personal computing

August 24, 2011

The introduction of the iPhone changed the destiny of your desktop PC. You probably just didn’t know it at the time.

Clearly, the iPhone was more than just another digital cell phone. It was primarily a computer that happened to include voice-calling capabilities. It was a mini-PC in your pocket, as one of my close friends predicted over 15 years ago. But that’s not all. The “computer’s” operating system was different. The app store model for acquiring and installing “software” was drastically different as well. But in hindsight, though those were what got a lot of press at the time, there’s something else the iPhone did to set the stage for the iPad and the next generation of personal computers…


Why the iPad isn’t a desktop PC killer (yet)

August 20, 2011

A question to ponder: with the introduction of the iPad, is there still a need to own a PC? In other words, does the iPad mean death for the traditional PC for most consumers? After all, it has that slick touch screen and doesn’t require a keyboard, mouse, or most of the other bulky peripherals that traditional PCs do. Plus, with that slim form factor, you can take it nearly anywhere. And thanks to both Wi-Fi and cellular network connectivity, it can be connected to the internet almost continuously. What more could a user want?

But is it enough to completely replace a PC? Well, in my evaluation, yes…but no. Here’s why…