Researcher talks about using hand gestures to create better virtual experiences

New technology to capture complex hand gestures could lead to more realistic virtual worlds, says an Imperial expert.

Dr Tae-Kyun Kim from the Department of Electrical and Electronic Engineering at Imperial College London is in charge of one of the world’s leading labs in human-machine interface technology. His latest breakthrough is the development of prototype 3D hand-gesture interface technology.

Colin Smith caught up with Dr Kim to find out more about the technology and how it could change the way we interact with computers and the wider world.

Dr Kim and the hand gesture recognition technology

What is a hand gesture interface?

The technology consists of a depth camera, which records hand movements and relays the information to a computer, where a program converts it into a diagram of the hand. Each hand movement is plotted as a set of 3D coordinates on the diagram. If a coordinate on the diagram moves, the computer interprets that movement as a command. This enables the user to control a computer simply by moving their hand.
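As a rough illustration of the idea described above, the sketch below maps the displacement of a single tracked 3D point to a command. The threshold, command names and coordinate convention are assumptions for illustration, not details from Dr Kim's system.

```python
THRESHOLD = 0.05  # metres; an assumed sensitivity, not a figure from the article

def interpret_motion(prev, curr, threshold=THRESHOLD):
    """Map the displacement of one tracked 3D point to a simple command.

    prev, curr: (x, y, z) positions of the same point in two frames.
    Returns a command name, or None if the point barely moved.
    """
    moves = {
        "x": curr[0] - prev[0],
        "y": curr[1] - prev[1],
        "z": curr[2] - prev[2],
    }
    # Pick the axis with the largest movement.
    axis, value = max(moves.items(), key=lambda kv: abs(kv[1]))
    if abs(value) < threshold:
        return None  # the hand is effectively still: no command
    commands = {
        ("x", True): "swipe_right", ("x", False): "swipe_left",
        ("y", True): "swipe_up",    ("y", False): "swipe_down",
        ("z", True): "push",        ("z", False): "pull",
    }
    return commands[(axis, value > 0)]
```

For example, a point moving 10 cm along the x-axis between frames would be read as a rightward swipe, while sub-threshold jitter produces no command at all.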

How is this technology currently used?

Many gamers will know Kinect. This is a highly successful camera-based body-motion recognition system that records body movements in real time to control games.

What are the drawbacks of the technology?

Current systems can only recognise a limited set of hand movements, and the information they capture is only displayed in two dimensions.

In the real world, we are constantly using our hands in complex configurations to communicate. These gestures can be rapid and varied. To enable users to have more complex interactions with this technology, hand gestures need to be captured in 3D. However, the technologies are not yet mature and need to capture that information more accurately.

It is a major area of investigation for engineers around the world. They are trying to develop the technology so that it can detect a much wider range of movements with greater accuracy, which could create a more natural and seamless way for users to interface with computers.

How is your research group addressing this challenge?

We’ve developed technology that recognises full hand movements from 21 coordinates on the hand, capturing different articulations and viewpoints of the hand in 3D. This means we can detect much more complex hand gestures.
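A 21-point layout (a wrist point plus four joints per finger) is a common convention in hand tracking, though the article does not specify Dr Kim's exact layout, so the indices below are an assumption. The sketch shows one way such a skeleton might be represented and queried, here to measure fingertip-to-wrist distances:

```python
WRIST = 0
# Assumed landmark indices for the five fingertips in a 21-point skeleton.
FINGERTIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}

def fingertip_distances(skeleton):
    """skeleton: list of 21 (x, y, z) points.

    Returns the Euclidean distance from the wrist to each fingertip,
    a simple feature a gesture recogniser might use.
    """
    assert len(skeleton) == 21, "expected a 21-point hand skeleton"
    wx, wy, wz = skeleton[WRIST]
    out = {}
    for name, i in FINGERTIPS.items():
        x, y, z = skeleton[i]
        out[name] = ((x - wx) ** 2 + (y - wy) ** 2 + (z - wz) ** 2) ** 0.5
    return out
```

Features like these, computed per frame, are the kind of input from which articulations such as a pinch or a closed fist can be distinguished.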

Are hand movements easy to capture using this technology?

Absolutely not! We humans make very rapid movements with our hands, much more so than with other parts of our body, which is very hard for the technology to capture. The complexity of the movements can also distort our modelling, because at any moment most parts of the hand are not visible to the cameras capturing the information: moving fingers and the palm can block the camera's view. We call these self-occlusions. The hand also contorts into a range of different shapes, which is another challenge we had to overcome.

How are you overcoming these challenges?

We use machine learning techniques that enable computers to learn with minimal instruction from us. The program we developed analyses the data captured from the camera and relays this information to a model of the hand, which we call a hand skeleton. It uses the information to predict hand movements. This predictive capability means that the computer can essentially keep up with rapid hand movements and continually model them in real time. This could pave the way for more sophisticated interactions with computers.
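The real system uses a learned model to anticipate where the hand will be next; as a minimal stand-in for that idea (a rule of thumb, not Dr Kim's machine-learning method), the sketch below extrapolates each skeleton point one frame ahead assuming constant velocity:

```python
def predict_next(prev_frame, curr_frame):
    """Extrapolate each (x, y, z) skeleton point one frame ahead.

    Assumes constant velocity: next = current + (current - previous).
    prev_frame, curr_frame: equal-length lists of (x, y, z) tuples.
    """
    return [
        tuple(c + (c - p) for p, c in zip(prev_pt, curr_pt))
        for prev_pt, curr_pt in zip(prev_frame, curr_frame)
    ]
```

A prediction like this gives the tracker a head start on each new camera frame: the predicted pose can seed the search for the true pose, which is how predictive modelling helps the system keep up with rapid gestures.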

The ultimate hope is that scientists and engineers could create virtual or augmented worlds that are more realistic in the way that we interact with them. Advances in the field could lead to different ways of interacting with communication technologies such as tablets and phones. It may also lead to completely different ways of controlling autonomous vehicles, via new types of driver assistance technologies, as well as make gaming even more interactive than it currently is.

What are the implications of your breakthrough?

In the future, we could create environments in virtual or augmented reality that are much more realistic for the user. Imagine popping on a virtual reality headset that detects every subtle hand gesture you make, so that you could play a virtual instrument like a violin. Or imagine being able to create a virtual world with your children where you animate virtual puppets with hand gestures as an amusing way to pass the time. These are just playful examples, but this technology could make the way we interact with objects, people and environments in virtual worlds feel as natural as it does in the real one.

Reporter

Colin Smith
Communications and Public Affairs