Google translates hand gestures to speech with sign language AI


Google has announced a new machine learning model for tracking hands and recognising gestures, giving once soundless sign language a voice

There are thousands of spoken languages the world over, each with its own subtleties and nuances. Developing AI to interpret and translate them is difficult, and even the most sophisticated translation tools struggle to get it right: sentences become jumbled, meanings are misread, and colloquialisms are mostly lost on machines.

However, there has been a breakthrough in perceiving and translating the language of the human hand.

Google’s development of “Real-Time Hand Tracking”, which perceives hand movements and gestures, allows for direct on-device translation to speech.

The tech giant has annotated around 30,000 real-world images of hands, performing a variety of gestures and shapes, with 21 3D keypoints, or coordinates.

Using a machine learning pipeline, they have created a mixed training schema that combines rendered synthetic images with real-world images, so the model learns to predict both the keypoint coordinates and a “hand presence” classification.
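In practice, the output of the pipeline is a set of 21 (x, y, z) landmarks for each detected hand. As a rough sketch of what reading those keypoints looks like for a developer, the snippet below uses the MediaPipe Hands solution, the framework through which this hand-tracking work is packaged; the image path is a placeholder.

```python
# Minimal sketch: read the 21 hand landmarks from a single image using the
# MediaPipe Hands solution. "hand.jpg" is a placeholder path.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

with mp_hands.Hands(static_image_mode=True,
                    max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    image = cv2.imread("hand.jpg")
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            # Each detected hand yields 21 landmarks with normalised x, y
            # and a relative depth value z.
            for idx, lm in enumerate(hand_landmarks.landmark):
                print(f"keypoint {idx}: x={lm.x:.3f}, y={lm.y:.3f}, z={lm.z:.3f}")
```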


In a blog post, Google said: “The ability to perceive the shape and motion of hands can be a vital component in improving the user experience across a variety of technological domains and platforms.”

The research and development of this machine learning algorithm could create numerous possibilities, not least for sign language understanding.

Google has not developed a stand-alone app for its algorithm, but has published it as open source, allowing other developers to integrate it into their own tech, a move welcomed by campaigners.
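What a developer builds on top of those keypoints is left open by the release. As a purely hypothetical illustration, and not part of Google's code, even a crude rule-based classifier can turn the landmark coordinates into simple gesture labels that an assistive app might then voice.

```python
# Hypothetical sketch only: a tiny rule-based classifier built on top of the
# 21 keypoints the model outputs. Landmark indices follow the published hand
# topology (0 = wrist, fingertips at 4, 8, 12, 16 and 20).
from typing import List, Tuple

Landmark = Tuple[float, float, float]  # normalised (x, y, z)

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
FINGER_PIPS = [6, 10, 14, 18]   # the middle joint of each of those fingers

def count_extended_fingers(landmarks: List[Landmark]) -> int:
    """Count fingers whose tip lies above its middle joint (image y grows downwards)."""
    extended = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        if landmarks[tip][1] < landmarks[pip][1]:
            extended += 1
    return extended

def label_gesture(landmarks: List[Landmark]) -> str:
    """Map a crude finger count to a label a downstream app could speak aloud."""
    count = count_extended_fingers(landmarks)
    if count == 0:
        return "fist"
    if count == 4:
        return "open hand"
    return f"{count} finger(s) raised"
```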

There are also potential uses for virtual reality control and digital-overlay augmented reality, or even gesture control functions in driverless vehicles and smart devices. 

Assistive tech is making strides as more and more entrepreneurs and engineers enter the market ahead of the consumer curve, with wearables and IoT devices aimed at making the lives of those with assistive needs a little easier.


Luke Conrad

Technology & Marketing Enthusiast
