Machines can translate the words on your screen in the blink of an eye, but have you ever wondered how in-browser translators perform such a complicated task?
Most people don’t think twice when they click the translate button, but there’s a lot going on behind the scenes. This software relies on powerful algorithms built around the rules of languages to convert text from one language to another.
The average in-browser translator is just one example of a broader trend within the translation services industry: machine translation (MT). Businesses and law firms need professional translation services to translate large volumes of documents, and MT helps those services keep pace.
As technology allows for increasingly complex algorithms, the way MT works has evolved over time. Here are three different kinds of MT.
1. Rules-Based MT
Rules-Based MT (RBMT) is built around the vast dictionaries and grammatical rules of both the original and target languages. This data provides a framework for how it should convert text from one language to the next.
With RBMT, the software analyzes the original document to identify individual words and parse its sentence structure. Using this information, it then finds the matching terms and rules in the target language to convert each sentence into the new language.
Because parsing through each word and sentence in this way is thorough, it can be a slower, more expensive method. However, it can produce some of the most technically correct MT, as engineers can edit its rules to accommodate statistically unlikely translations.
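The rules-based idea can be sketched in a few lines of code. This is a toy illustration, not a real RBMT engine: the dictionary entries and the single reordering rule (adjectives follow nouns in Spanish) are hand-picked assumptions for the example.

```python
# Toy English-to-Spanish RBMT sketch: a hand-built lexicon plus one
# grammatical rule. Real systems encode thousands of such rules.
LEXICON = {"the": "el", "red": "rojo", "car": "coche", "is": "es", "fast": "rápido"}
ADJECTIVES = {"red", "fast"}
NOUNS = {"car"}

def translate_rbmt(sentence: str) -> str:
    words = sentence.lower().split()
    # Rule: in Spanish, the adjective follows the noun, so swap the pair.
    reordered, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and words[i] in ADJECTIVES and words[i + 1] in NOUNS:
            reordered.extend([words[i + 1], words[i]])
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Look up each word; leave it unchanged if no dictionary entry covers it.
    return " ".join(LEXICON.get(w, w) for w in reordered)

print(translate_rbmt("the red car is fast"))  # → el coche rojo es rápido
```

Because every word and rule must be written and maintained by hand, you can see why engineers can make RBMT very precise, and also why it is slow and expensive to build out.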
2. Statistical-Based MT
While RBMT relies on learning each language’s vocabulary and grammatical rules, Statistical-Based MT (SBMT) learns through example.
Engineers feed this software all the existing translations already completed in the original and target languages. It then bases its own future translations on these examples, choosing the most statistically likely words and phrases according to these data sets.
Unlike RBMT, which generates a translated copy by moving from word to word, SBMT focuses on phrases. While it’s often correct when choosing the most common phrases, it can make mistakes when unusual topics, slang, or industry-specific terms are involved.
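The statistical idea boils down to counting. Here is a minimal sketch using a made-up, tiny parallel corpus of phrase pairs; a real SBMT system learns from millions of such pairs, but the mechanism of picking the most frequently observed translation is the same in spirit.

```python
from collections import Counter

# Tiny, invented "training data": pairs of (source phrase, target phrase).
phrase_pairs = [
    ("good morning", "buenos días"),
    ("good morning", "buenos días"),
    ("good morning", "buen día"),
    ("thank you", "gracias"),
]

# Build a phrase table: source phrase → counts of each observed translation.
table: dict[str, Counter] = {}
for src, tgt in phrase_pairs:
    table.setdefault(src, Counter())[tgt] += 1

def translate_sbmt(phrase: str) -> str:
    # Choose the statistically most likely translation seen in training.
    return table[phrase].most_common(1)[0][0]

print(translate_sbmt("good morning"))  # → buenos días
```

Note how the system simply prefers "buenos días" because it appears more often in the data; a rare phrase that never appeared in training has no entry at all, which is exactly where SBMT stumbles.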
3. Neural MT
Neural MT (NMT) is similar to SBMT, in that it learns how to translate from existing translations in the original and target languages. Software engineers feed NMT software with huge volumes of data collected from both languages, including existing translations, so that it can make phrase-by-phrase translations that are the most statistically likely according to this data.
But the way it generates these translations puts it in a field of its own. NMT uses advanced algorithms built around deep learning to mimic how the human brain learns, processes, and stores languages. This allows the NMT to learn new languages faster than other MT methods, even if the data is unstructured or unlabelled.
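To give a flavor of what "learning representations" means, here is a toy sketch: instead of counting phrases, NMT maps words to learned numeric vectors (embeddings) and scores candidate translations by closeness in that vector space. The two-dimensional vectors below are hand-made stand-ins for what a real network would learn from data, so this is an illustration of the idea only, not an actual neural model.

```python
import math

# Hand-made stand-in "embeddings"; a real NMT system learns these
# high-dimensional vectors from huge volumes of parallel text.
SOURCE_VECS = {"cat": [0.9, 0.1], "dog": [0.1, 0.9]}
TARGET_VECS = {"gato": [0.85, 0.15], "perro": [0.12, 0.88]}

def cosine(u, v):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def translate_nmt(word: str) -> str:
    # Pick the target word whose vector lies closest to the source word's.
    src = SOURCE_VECS[word]
    return max(TARGET_VECS, key=lambda t: cosine(src, TARGET_VECS[t]))

print(translate_nmt("cat"))  # → gato
```

Because similarity in vector space can generalize beyond exact phrases seen in training, this kind of representation is part of why NMT copes better with unstructured data than count-based methods.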
MT is an umbrella term that describes three different ways an algorithm can convert text from one language to another. RBMT, SBMT, and NMT may approach translations differently, but they all work without any need for human oversight beyond the initial coding.
This can result in faster and cheaper translations than those done by hand. But for as much as MT has advanced in the past ten years, it still has a long way to go. For important projects, nothing beats a human linguist who can recognize unique context, slang, and industry-specific terms.