Remember the mythical Babel fish in The Hitchhiker’s Guide to the Galaxy? These fish are universal translators that sit in the ear of a carrier and quietly translate languages into the carrier’s mother tongue. Here is how Douglas Adams describes them: “The Babel fish is small, yellow, leech-like, and probably the oddest thing in the universe. It feeds on brainwave energy received not from its own carrier, but from those around it, absorbing all unconscious frequencies and then excreting telepathically a matrix formed from the conscious frequencies and nerve signals picked up from the speech centers of the brain, the practical upshot of which is that if you stick one in your ear, you can instantly understand anything said to you in any form of language: the speech you hear decodes the brain wave matrix.”

Essentially, the Babel fish, according to Douglas Adams, translates languages and feeds the result directly to the brain (the speech centers). Let’s ignore the feeding-into-the-brain part (and the mumbo-jumbo of unconscious frequencies) for now. How could we build a universal translator - a machine that takes a sentence in any language as input and outputs the equivalent sentence in any other language? To do this, we have to understand the best translator known to us: ourselves.

How are humans so good at (intuitively) translating languages? If it were just a matter of remembering sentences and linking them to sentences in other languages, Google Translate, our favorite translator, would do quite well (and much better than humans). The key to translation is understanding a sentence, as any bilingual or multilingual person would tell you. If that is the case, the true representation of a sentence is the understanding of it, i.e., the conceptual meaning of the sentence. If we follow this idea, the brain actually translates a sentence into a concept, and then re-translates the concept into another language. This idea partly relates to Noam Chomsky’s idea of universal grammar.

So, when a bilingual or multilingual person translates between languages, they intuitively convert the input language (via the speech processing regions of the brain) into a group of concepts, i.e., the understanding of a sentence. These concepts are possibly represented as the firing of a population of neurons. In other words, the states of a population of neurons represent concepts. The speech processing areas of the brain could then function as a language-to-concept converter that activates the neurons representing a concept by processing a sentence. The brain could also maintain a distinct converter for each language, i.e., one function that translates English to concepts and a separate function that converts Tamil to concepts. These functions are reversible - they can take a sentence in any given language (assuming the person has reasonably good command over it) and convert it into a concept, and vice versa (check out the explanatory image below).


The key idea of translating languages, therefore, is to have an internal concept representation that is mapped to languages. As it turns out, this idea is valid not just for language translation but also for the ability to picture things while reading, or to write down a description of an image (these are also language translation in some sense - from pictures to words and vice versa). You can find more of my ramblings on concept representations here.

Let’s get back to universal language translation. To get a machine to translate efficiently between languages, we first need to make it learn a universal language - preferably concepts represented by artificial neurons. Once this conceptual space is created, we need to create reversible functions that translate each language to concepts and vice versa. This is probably the easy part - once we figure out how to represent concepts in a machine, of course (read about concept representations here). But this would probably be a good way to build a universal translator.
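To make the encode/decode idea concrete, here is a minimal toy sketch in Python. The vocabularies and concept names below are invented for illustration (a real system would have to learn these mappings, e.g., with encoder and decoder networks); the point is only the structure: each language gets a reversible function into a shared, language-independent concept space, and translation is the composition of one language’s encoder with another language’s decoder.

```python
# Toy "shared concept space" translator.
# Each language has a word -> concept mapping; translation goes
# source words -> concepts -> target words, never source -> target directly.
# Vocabularies here are made up for illustration and assumed bijective.

LEXICONS = {
    "english": {"water": "CONCEPT_WATER", "fire": "CONCEPT_FIRE"},
    "tamil":   {"thanneer": "CONCEPT_WATER", "neruppu": "CONCEPT_FIRE"},
}

def to_concepts(sentence, language):
    """Encode: map each word of a sentence into the concept space."""
    lexicon = LEXICONS[language]
    return [lexicon[word] for word in sentence.split()]

def from_concepts(concepts, language):
    """Decode: the reverse function, mapping concepts back into words."""
    inverse = {concept: word for word, concept in LEXICONS[language].items()}
    return " ".join(inverse[concept] for concept in concepts)

def translate(sentence, source, target):
    """Compose the source encoder with the target decoder."""
    return from_concepts(to_concepts(sentence, source), target)
```

With this structure, adding a new language only requires one new encoder/decoder pair into the concept space, rather than a separate translator for every language pair - which is exactly the appeal of the concept-space idea.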

Back to the Babel fish. Douglas Adams’ idea of a universal translator seems quite similar to the one proposed above, once we think about it. The Babel fish feeds on the “unconscious frequencies” of people talking to the carrier, i.e., it could be directly reading the concepts represented in their brains instead of listening to the language. It then “excretes telepathically a matrix formed from the conscious frequencies and nerve signals”, which could be a fancy way of saying that it recreates those concepts in the carrier’s brain by stimulating the right neurons. The Babel fish, therefore, seems to circumvent the whole transformation of a concept to a language (and vice versa), and instead conveys the concept directly. I guess this would be another way to go: it would sidestep language altogether, and it would make communication far more efficient.

Note: This idea arose from a discussion with a friend about a mangastream blog post (and a featured comment on it) about why translating Japanese manga into English is not an easy task, and why it is hard to get exact translations across because the two languages are so different in nature. After reading the blog post, I realized that computer algorithms, at least the good ones currently available (like Google Translate), cannot be used to make such translations. This prompted a discussion about why humans are good at learning and translating distinctly different languages like English and Japanese, which gave rise to the idea behind this blog post.

Republished on my WordPress blog