Computer learns sign language by watching TV

It's not only humans that can learn from watching television. Software developed in the UK has worked out the basics of sign language by absorbing TV shows that are both subtitled and signed.

While almost all shows are broadcast with subtitles, some are also accompanied by sign language, which many deaf people find easier to follow.

Shows with both text and signing are a bit like a Rosetta Stone - the carving whose adjacent classical Greek translation provided the breakthrough in deciphering Egyptian hieroglyphics.

So Patrick Buehler and Andrew Zisserman at the University of Oxford, along with Mark Everingham at the University of Leeds, set out to see whether software that can already interpret the written word could learn British Sign Language from video footage.

They first designed an algorithm to recognise the gestures made by the signer. The software tracks the signer's arms to work out the rough location of the fast-moving hands, then identifies flesh-coloured pixels in those areas to reveal the precise hand shapes.
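The arm-then-hands idea can be sketched in a few lines: given an arm-predicted region of interest, keep only the pixels whose colour falls inside a skin-tone range. This is a minimal illustration, not the paper's actual method - the colour bounds and the simple RGB box test are assumptions for demonstration.

```python
import numpy as np

def find_hand_pixels(frame, roi, lower=(90, 40, 30), upper=(255, 180, 140)):
    """Return full-frame coordinates of flesh-coloured pixels in a region.

    frame: H x W x 3 uint8 RGB image.
    roi: (top, bottom, left, right) box predicted from the arm position.
    lower/upper: illustrative RGB skin-tone bounds (assumed values,
    not the colour model used in the actual research).
    """
    top, bottom, left, right = roi
    patch = frame[top:bottom, left:right]
    # A pixel counts as flesh-coloured if every channel is in range.
    mask = np.all((patch >= lower) & (patch <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    # Translate patch coordinates back to full-frame coordinates.
    return list(zip(ys + top, xs + left))
```

The cluster of returned pixels gives the hand's position and silhouette within the search region, which is what the shape-matching stage would then consume.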

Once the team were confident the computer could identify different signs in this way, they exposed it to around 10 hours of TV footage that was both signed and subtitled. They tasked the software with learning the signs for a mixture of 210 nouns and adjectives that appeared multiple times during the footage.

The program did so by analysing the signs that accompanied each of those words wherever it appeared in the subtitles. Where it was not obvious which part of a signing sequence related to the given keyword, the system compared multiple occurrences of the word to pinpoint the correct sign.
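The comparison step rests on a simple intuition: a subtitle only tells you roughly when a keyword was signed, but the correct sign is the one that recurs across every time window in which the keyword's subtitle appears. A toy sketch of that voting idea, assuming candidate signs in each window have already been matched into discrete labels (the real system compares hand-shape descriptors rather than pre-labelled signs):

```python
from collections import Counter

def pick_sign(occurrence_windows):
    """Given, for each subtitle occurrence of a keyword, the candidate
    sign labels seen in that time window, return the label that recurs
    in the largest fraction of windows.
    """
    counts = Counter()
    for window in occurrence_windows:
        counts.update(set(window))  # each occurrence votes once per label
    sign, votes = counts.most_common(1)[0]
    return sign, votes / len(occurrence_windows)

# Three hypothetical occurrences of one keyword; only "G7" appears
# in every window, so it is chosen as the keyword's sign.
windows = [["A1", "G7", "B2"], ["G7", "C3"], ["D4", "G7"]]
```

Here `pick_sign(windows)` selects `"G7"` with full agreement, mirroring how consistency across occurrences isolates the right sign from the surrounding signing sequence.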

Source: newscientist.com
Added: 14 July 2009