THIS, not the iphone/pad/whatever, is the future


Geoffrey Taucer

The way computers are now, yes, we do rely heavily on tactile sensation. While not as radical a departure as this, the invention of the mouse made interfacing with a computer very different from using a keyboard alone. The point of the analogy is that it might be awkward at first, but the learning curve is quite shallow. And as you've pointed out, using in-air gestures would be about as easy as using a trackpad or a touchscreen. But as with touchscreens, I only get the feeling that I've touched something (a "button" on a screen feels the same everywhere, whether I tap it or bring my thumb in to click); I don't know which button I pressed until I see the result of that action. Performing an action such as moving your finger toward your projection is essentially the same as touching a screen. And not all of his motions are two-dimensional; in fact, most of the gestures are in three dimensions.

So you admit that it may not be as natural and intuitive as first explained? Because that's basically what I'm trying to point out. I'm not saying it's impossible to adapt, but the video makes it sound easy as pie when it obviously requires quite a bit of learning to operate.

Clicking is auditory feedback, hence the onomatopoeia. Why can't this device do the same?

Um, as Gollgagh has pointed out, clicking is both auditory and tactile feedback.

All the motions looked like common gestures. I really don't see how pinching or making a rectangle with your fingers varies so much from person to person that the device would be confused. Perhaps individuals with low fluid intelligence (older folks) may not be able to pick this device up and use it well within the first day or so. I sure as hell couldn't type on my iPod touch for the longest time, or type on a keyboard at a decent WPM for several years.

And even if the device came with pre-defined rules for clicking, zooming, and whatnot, I am sure (since he stated it will become open source) that you could define your own rules and methods for performing certain actions when you interact with your projection.
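
To give a rough idea of what that could look like, here's a toy sketch of a gesture registry where the user supplies both the recognizer and the action. To be clear, this is just my own illustration; none of it comes from the actual SixthSense code, and every name in it is made up.

```python
# Hypothetical sketch of user-defined gesture bindings -- nothing here
# is from the real SixthSense code; every name is made up.

from typing import Callable, List, Tuple

Trace = List[Tuple[float, float]]  # tracked fingertip positions over time

class GestureRegistry:
    """Maps user-defined gesture matchers to arbitrary actions."""

    def __init__(self) -> None:
        self._bindings: List[Tuple[Callable[[Trace], bool],
                                   Callable[[], None]]] = []

    def bind(self, matcher: Callable[[Trace], bool],
             action: Callable[[], None]) -> None:
        # The user supplies both halves: how to recognize the gesture,
        # and what the device should do when it's seen.
        self._bindings.append((matcher, action))

    def dispatch(self, trace: Trace) -> None:
        # Fire the action for the first matcher that accepts the trace.
        for matcher, action in self._bindings:
            if matcher(trace):
                action()
                return

# Example: rebind a "frame with your fingers" gesture to take a photo.
def looks_like_frame(trace: Trace) -> bool:
    return len(trace) >= 4  # stand-in for a real shape test

registry = GestureRegistry()
registry.bind(looks_like_frame, lambda: print("snap!"))
registry.dispatch([(0, 0), (1, 0), (1, 1), (0, 1)])  # prints "snap!"
```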

Open source is another way to say "This is useful if you have a major interest in computer programming". And many of the gestures do not work exactly the way the video makes them look. Forming a rectangle to take a picture? How do you adjust the settings? How do you take the picture? How does the camera (which is mounted on your chest) take a picture from the same view as your eyes? The video, for me anyway, provides a lot of fluff and what-ifs instead of true blue this-is-how-it-works moments. I would like to see this device in real life to believe it works as well as it does in the video.

A final response on tactile feedback: I do think the sense of touch is very important, but it isn't necessary. Using a Wiimote is vastly different from what this device does: one is about mimicking a specific motion, the other is about interacting with an interface projected onto a 2D surface. I don't need tactile feedback telling me I threw a virtual bowling ball down a virtual alley. I can see it, and if it isn't 2 in the morning, I can hear it too. Likewise, I don't need tactile feedback when I can literally see (and sometimes hear) that I've "pushed" a virtual button.

I'm not saying it's necessary either, but it certainly helps, especially if you want to accomplish tasks quickly. Human vision is notoriously bad, but our kinesthetic and tactile senses compensate. Computer vision is also notoriously bad, but simple buttons and touch sensors compensate. Of course this device works; the video proves that. But does it work quickly, while still maintaining accuracy and efficiency? That is what I doubt. Vision is the hardest thing for computers to gather and interpret, so why are we so convinced that computers should rely on it so much?
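
To make the accuracy worry concrete, here's a toy sketch (purely my own, not anything from the actual device) of detecting a "pinch" from tracked fingertip positions. Notice how much defensive logic the noisy vision input forces on you; a physical button needs none of it:

```python
import math

# Toy pinch detector over tracked thumb/index positions (in pixels).
# Purely my own illustration, not how SixthSense actually works.
# Two thresholds (hysteresis) keep jittery vision estimates from
# rapidly toggling the "pinched" state -- defensive logic a physical
# button simply never needs.

PINCH_ON = 20.0    # fingertips closer than this => pinch starts
PINCH_OFF = 35.0   # fingertips farther than this => pinch ends

class PinchDetector:
    def __init__(self) -> None:
        self.pinching = False

    def update(self, thumb: tuple, index: tuple) -> bool:
        dist = math.hypot(thumb[0] - index[0], thumb[1] - index[1])
        if not self.pinching and dist < PINCH_ON:
            self.pinching = True
        elif self.pinching and dist > PINCH_OFF:
            self.pinching = False
        return self.pinching
```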


I saw this a while ago too, and while the tech is certainly cool, it's just a stepping stone until we see it in a practical device, same as the iPad. I'm not a fan of the iPad, but if it can ride its hype train long enough, it could very well revolutionize the handheld computer market by making it an accessible, non-niche device. Still... it's just a stepping stone.

SixthSense is cool, but personally, I'm not wearing an ugly-ass necklace like that to hold the projector... just won't. Put it in a watch, some kind of interesting earpiece, or something not so blatantly gaudy, and maybe I'd be willing to bite. I know his version was just a prototype, but still... impressive, but not at a stage where the mainstream would be willing to use it. Same with the iPad and its quality control.

That said, I like the fact that it's open source. If it gets a good community behind it (and it looks like it already does), then real-world applications (and innovations) can start popping up around the tech. Other companies will start integrating this technology into phones and the like... it'll naturally progress alongside our phones because that's what we're used to... right now, having a neat little projector with vision detection is cool, but it's extraneous at this point.

Love the phone numbers on the hand, though... and pinching files to your computer and vice versa... and the extra book information. There are a lot of really cool possibilities. :)


I didn't watch much of the video, but I've been familiar with this kind of thing for a while. I completely agree with what Cerrax is saying. Except for this:

Open source is another way to say "This is useful if you have a major interest in computer programming".

Making it open source is the best thing they can do if they actually want to get some useful applications out of it. Programmers who are passionate about this kind of stuff (or at least the potential of it) will jump on this and find ways to make it usable and intuitive.

I'm not saying it's necessary either, but it certainly helps, especially if you want to accomplish tasks quickly. Human vision is notoriously bad, but our kinesthetic and tactile senses compensate. Computer vision is also notoriously bad, but simple buttons and touch sensors compensate. Of course this device works; the video proves that. But does it work quickly, while still maintaining accuracy and efficiency? That is what I doubt. Vision is the hardest thing for computers to gather and interpret, so why are we so convinced that computers should rely on it so much?

You just listed a bunch of problems that, IMO, making it open source has the potential to solve. All it takes is a community of dedicated developers to say "we're going to make this usable by compensating for the flaws of vision and gestures," and BAM! You've got yourself a project that will only attract more attention and more (and better) developers, and grow into something great that's much more than a tech demo of cool-but-flawed interfaces. And like Strike911 says: phones.
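
For what it's worth, the kind of compensation I mean doesn't have to be exotic. Here's a hypothetical example (not from any real SixthSense project) of a smoothing pass a contributor might write to tame noisy fingertip estimates before gestures get interpreted:

```python
# Hypothetical smoothing pass a community developer might contribute:
# an exponential moving average over noisy fingertip estimates.
# The class name and alpha value are illustrative, not taken from any
# actual project.

from typing import Optional, Tuple

class FingertipSmoother:
    def __init__(self, alpha: float = 0.3) -> None:
        self.alpha = alpha      # lower alpha = heavier smoothing
        self.state: Optional[Tuple[float, float]] = None

    def update(self, raw: Tuple[float, float]) -> Tuple[float, float]:
        if self.state is None:
            self.state = raw    # seed with the first sample
        else:
            x = self.alpha * raw[0] + (1 - self.alpha) * self.state[0]
            y = self.alpha * raw[1] + (1 - self.alpha) * self.state[1]
            self.state = (x, y)
        return self.state
```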
