Meta Uses AI for Real-Time Translation of Hokkien

Meta announced that it had developed artificial intelligence (AI) software to convert Hokkien into English.

The company said this is the first time a primarily unwritten language can be translated in real time, and added that the AI tool will enable Hokkien and English speakers to converse with each other.

Formerly known as Facebook, Meta said the translation technology is part of its Universal Speech Translator (UST) project, which aims to develop new AI methods that “eventually enable real-time speech-to-speech translation across any language, even languages that are primarily spoken.”

Hokkien is a Chinese language spoken by approximately 46 million people in south-eastern China and Taiwan, and among the Chinese diaspora in Singapore, the Philippines, Malaysia, and elsewhere in Southeast Asia.

The translation software is open source, and Meta describes it as the UST project's first breakthrough in this area.

Languages such as Hokkien cannot be handled by conventional machine translation tools, which require large quantities of written text to train on; Hokkien lacks a widespread standard writing system. To address this, Meta used Mandarin, another Chinese language with abundant and easily accessible training data, as an intermediary between English and Hokkien.

Furthermore, there are very few human English-to-Hokkien translators, making it challenging to gather and annotate training data.

To get around these obstacles, Meta researchers used text written in Mandarin, a language related to Hokkien, as an intermediate step between English and Hokkien when training the AI. The team also collaborated with native Hokkien speakers to verify that the translations were accurate.

“Our team first translated English or Hokkien speech to Mandarin text, and then translated it to Hokkien or English, both with human annotators and automatically,” said Meta researcher Juan Pino. The team also trained an AI model to generate speech waveforms from the translated output and to recognize Hokkien's tones in spoken input.
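The pivot idea described above can be sketched in miniature. The snippet below is a toy illustration only: the dictionaries and the `pivot_translate` function are invented stand-ins for Meta's actual models and data, and the romanized Hokkien entries are merely illustrative.

```python
# Toy sketch of pivot ("cascaded") translation: with no direct
# English-to-Hokkien training data, route through Mandarin text.
# These tiny lookup tables stand in for the real trained models.

EN_TO_MANDARIN = {"hello": "你好", "thank you": "多谢"}
MANDARIN_TO_HOKKIEN = {"你好": "li-ho", "多谢": "to-sia"}  # romanized Hokkien

def pivot_translate(phrase: str) -> str:
    """Translate English to Hokkien using Mandarin as the pivot language."""
    mandarin = EN_TO_MANDARIN[phrase.lower()]  # step 1: English -> Mandarin text
    return MANDARIN_TO_HOKKIEN[mandarin]       # step 2: Mandarin text -> Hokkien

print(pivot_translate("Hello"))  # -> li-ho
```

In the real system each lookup step is a trained translation model rather than a table, but the composition is the same: two well-resourced translation directions are chained to cover a low-resource pair.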

The model is still in development and can translate only one complete sentence at a time, but the ultimate goal is simultaneous translation. Meta said researchers would make their model, code, and benchmark data available for others to build on the research.

While the model is a work in progress, it already allows Hokkien speakers to communicate with English speakers. Meta encourages others to build on this work, releasing tools such as SpeechMatrix to help them develop their own speech-to-speech translation methods or extend Meta's. Meta has also made available its Hokkien translation models and the related research papers.
