While systems such as the Kinect for Microsoft’s Xbox gaming console have brought gesture-based navigation to a mainstream audience, an Austin, Texas-based startup is trying to take the idea to the next level.
It has developed a new type of “agnostic” motion-based system that is not dependent on specific sensors and hardware devices. When it comes to displays, its aims are equally ambitious: the company believes its approach to navigation is applicable to an array of devices, from smartwatches and other connected wearables, to smartphones, tablets and big screen TVs.
That company, Quantum Interface (Qi), claims that its technology, which can work in a touch or touch-free format, detects the movement of the user’s fingers or hands, and is capable of anticipating what the user may do next.
First, a bit about the technology: Qi uses a vector-based system that applies the angle and speed of movement to predict what users want to do as they scroll through menus, according to Qi’s co-founder and chief technology officer Jonathan Josephson. That differs from many of the Cartesian-based systems on the market today that, like a mouse cursor, use exact coordinates on an X/Y axis to dictate the on-screen action.
Cartesian-based systems require gestures or specific poses and postures to complete an action, Josephson said, culminating in a mixture of movements that can add latency. Qi’s motion-based system, he said, is more dynamic and variable, allowing the user to perform fast-action functions such as scrolling and pinching-and-zooming. He demonstrated the technology recently on a tablet and an LG Electronics connected TV running WebOS.
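Qi has not published its algorithm, but the general idea of vector-based prediction can be illustrated with a minimal sketch. The function names, the speed threshold, and the representation of menu targets below are all hypothetical; the sketch simply shows how the angle and speed of pointer motion, rather than an exact X/Y landing coordinate, can be used to guess which on-screen target the user is heading toward.

```python
import math

def movement_vector(p0, p1, dt):
    """Compute the angle (radians) and speed (px/s) between two
    consecutive pointer samples taken dt seconds apart."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    angle = math.atan2(dy, dx)
    speed = math.hypot(dx, dy) / dt
    return angle, speed

def predict_target(angle, speed, targets, min_speed=50.0):
    """Pick the target whose direction (given as an offset from the
    current pointer position) best matches the motion vector.
    Slow motion is treated as hovering, not navigation intent.
    All names and the 50 px/s threshold are illustrative choices."""
    if speed < min_speed:
        return None
    best, best_err = None, math.pi
    for name, (tx, ty) in targets.items():
        err = abs(math.atan2(ty, tx) - angle)
        err = min(err, 2 * math.pi - err)  # wrap angle difference
        if err < best_err:
            best, best_err = name, err
    return best

# A fast rightward flick selects the target to the right,
# without the pointer ever reaching that target's coordinates.
targets = {"menu_up": (0, -100), "menu_right": (100, 0)}
angle, speed = movement_vector((0, 0), (30, -2), 0.05)
print(predict_target(angle, speed, targets))  # menu_right
```

In a Cartesian system, selection would wait until the cursor's coordinates actually land on a target; here, direction and speed alone are enough to commit early, which is one way to read Josephson's claim about reduced latency.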
“This is Minority Report,” Josephson said, referring to the 2002 film in which star Tom Cruise uses a dynamic, motion-based navigation system to manipulate a futuristic crime-prevention system. “This is about simultaneous selection and attribution control.”
Qi claims its technology is agnostic, as it can work in conjunction with a range of sensors and cameras to triangulate the user’s motions and translate them onscreen.
One aim is to help consumers more rapidly and intuitively navigate through menus, including those that run on pay TV services. Qi believes its technology is also well-suited for interfaces tied to emerging virtual reality and augmented reality systems, as well as apps for connected cars and other environments that don’t rely on keyboards and traditional remote control devices.
As for its go-to-market strategy, Qi’s primary model is business-to-business, as it intends to license its technology and to provide design services. Josephson said two “large” TV makers have approached the company as they pursue next-gen products.
In the meantime, Qi has started the ball rolling on a much smaller screen with the debut of an Android smartwatch launcher called QiLaunch Wear that helps the user navigate and manage content. Early on, Qi is making it available in a by-invitation-only beta trial.
Qi was founded in 2012 and has about a dozen employees. It’s been funded primarily through “friends and family,” but is considering raising a formal “A” round of financing, Josephson said.