How can deep learning be combined with theoretical linguistics?

Natural language processing is mostly done using deep learning and neural networks nowadays. In a typical NLP paper, you might see Transformers or RNNs built out of linear algebra and statistics, but very little linguistic theory. Is linguistics irrelevant to NLP now, or can the two fields still contribute to each other?

In a series of articles in the journal Language, Joe Pater discussed the history of neural networks and generative linguistics, and invited experts to give their perspectives on how the two might be combined going forward. I found their discussion very interesting, although a bit long (almost 100 pages). In this blog post, I will give a brief summary of it.

Generative Linguistics and Neural Networks at 60: Foundation, Friction, and Fusion

Research in generative syntax and neural networks began around the same time, in 1957, and both were broadly considered part of AI, but the two schools mostly stayed separate for at least a few decades. In neural network research, Rosenblatt proposed the perceptron learning algorithm and realized that hidden layers were needed to learn XOR, but didn’t know of a procedure to train multi-layer NNs (backpropagation hadn’t been invented yet). In generative grammar, Chomsky studied natural languages as formal languages and proposed controversial transformational rules. Interestingly, both schools faced challenges concerning the learnability of their systems.

Above: Frank Rosenblatt and Noam Chomsky, two pioneers of neural networks and generative grammar, respectively.

The first time these two schools were combined was in 1986, when a neural network was used to learn a probabilistic model of the English past tense. This showed that neural networks and generative grammar are not incompatible, and that the dichotomy is a false one. Another way of combining them comes from Harmonic Grammar in theoretical phonology, which extends OT with numerically weighted constraints; the procedure for learning the weights is similar to gradient descent.

Neural models have proved capable of learning a remarkable amount of syntax, despite having far fewer structural priors than Chomsky’s model of Universal Grammar. At the same time, they fail on certain complex examples, so maybe it’s time to add back some linguistic structure.

Linzen’s Response

Linguistics and DL can be combined in two ways. First, linguistics is useful for constructing minimal pairs for evaluating neural models, when such examples are hard to find in natural corpora. Second, neural models can be quickly trained on data, so they’re useful for testing learnability. By comparing human language acquisition data with various neural architectures, we can gain insights about how human language acquisition works. (But I’m not sure how such a deduction would logically work.)

Potts’s Response

Formal semantics has not had much contact with DL: formal semantics is built around higher-order logic, while deep learning is built around matrices of numbers. Socher did some work on representing tree-based semantic composition as operations on vectors.

Above: Formal semantics uses higher-order logic to build representations of meaning. Is this compatible with deep learning?

In several ways, semanticists make different assumptions from deep learning researchers. Semanticists like to distinguish meaning from use, and to consider compositional meaning separately from pragmatics and context, whereas DL cares most of all about generalization and has no reason to discard context or to separate semantics from pragmatics. Compositional semantics does not try to analyze the meaning of lexical items, leaving them as atoms; DL has word vectors, but linguists criticize that the individual dimensions of word vectors are not easily interpretable.

Rawski and Heinz’s Response

Above: Natural languages exhibit features that span various levels of the Chomsky hierarchy.

The “no free lunch” theorem in machine learning says that you can’t get better performance for free: any gains on some problems must be paid for by worse performance on others. A model performs well if it has an inductive bias well-suited to the problems it is applied to. This is true for neural networks as well, so we need to study their inductive biases: which classes of languages in the Chomsky hierarchy are NNs capable of learning? We must not confuse ignorance of bias with absence of bias.

Berent and Marcus’s Response

There are significant differences between how generative syntax and neural networks view language, and these must be resolved before the two fields can make progress on integration. The biggest difference is the “algebraic hypothesis”: the assumption that there exist abstract algebraic categories like Noun, distinct from their instances. This allows you to make powerful generalizations using rules that operate on abstract categories. Neural models, on the other hand, try to process language without structural representations, and this results in failures to generalize.

Dunbar’s Response

The central problem in connecting neural networks to generative grammar is the implementational mapping problem: how do you decide whether a neural network N implements a linguistic theory T? The physical system might not look anything like the abstract theory, eg: an implementation of addition can look like squiggles on a piece of paper. Some limited classes of NNs can be mapped to harmonic grammar, but most NNs cannot, and the success criterion is unclear right now. Future work should study this problem.

Pearl’s Response

Neural networks learn language but don’t really try to model human neural processes. This could be an advantage, as neural models might find generalizations and building blocks that a human would never have thought of, and new tools in interpretability can help us discover these building blocks contained within the model.

The biggest headache with Chinese NLP: indeterminate word segmentation

I’ve had a few opportunities to work with NLP in Chinese. English and Chinese are very different languages, yet generally the same techniques apply to both. But there is one source of frustration that comes up from time to time, and it’s perhaps not what you’d expect.

The difficulty is that Chinese doesn’t put spaces between words. Soallyourwordsarejumbledtogetherlikethis.

“Okay, that’s fine,” you say. “We’ll just have to run a tokenizer to separate the words before we do anything else. And here’s a neural network that can do this with 93% accuracy (Qi et al., 2020). That should be good enough, right?”

Well, kind of. Accuracy here isn’t very well-defined, because even native Chinese speakers don’t agree on how to segment words. When you ask two native speakers to segment a sentence into words, they only agree about 90% of the time (Wang et al., 2017). Chinese has a lot of compound words and multi-word expressions, and there’s no widely accepted definition of what counts as a word. Some examples: 吃饭,外国人,开车,受不了. It is also possible (but rare) for a sentence to have multiple segmentations that mean different things.
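If you want to see this indeterminacy for yourself, the snippet below runs the popular jieba segmenter (a third-party package, not part of the standard library) on one of the phrases above in its two modes; the two modes return different granularities, which is exactly the problem.

```python
# A quick sketch using the jieba library (pip install jieba).
# The exact output depends on jieba's dictionary and version.
import jieba

phrase = "外国人受不了"  # "foreigners can't stand (it)"
print(list(jieba.cut(phrase)))                 # default "accurate" mode
print(list(jieba.cut(phrase, cut_all=True)))   # full mode: all dictionary words found
```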

Arguably, word boundaries are ill-defined in all languages, not just Chinese. Haspelmath (2011) proposed 10 linguistic criteria to determine whether something is a word (vs an affix or an expression), but it’s hard to come up with anything consistent. Most writing systems put spaces between words, so in practice there’s no confusion. Other than Chinese, only a handful of languages (Japanese, Vietnamese, Thai, Khmer, Lao, and Burmese) have this problem.

Word segmentation ambiguity causes problems in NLP systems when different components expect different ways of segmenting a sentence. Another way the problem can appear is if the segmentation for some human-annotated data doesn’t match what a model expects.

Here is a more concrete example from one of my projects. I’m trying to get a language model to predict a tag for every word (imagine POS tagging using BERT). The language model uses SentencePiece encoding, so when a word is out-of-vocab, it gets converted into multiple subword tokens.

“expedite ratification of the proposed law”
=> [“expedi”, “-te”, “ratifica”, “-tion”, “of”, “the”, “propose”, “-d”, “law”]

In English, a standard approach is to use the first subword token of every word, and ignore the other tokens.
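To make this concrete, here is a minimal sketch of the first-subword bookkeeping, with a made-up toy tokenizer standing in for a real SentencePiece model:

```python
def first_subword_indices(words, tokenize):
    """Map each word to the index of its first subword token.
    `tokenize` is any function that splits one word into subword strings
    (a stand-in for a real SentencePiece model)."""
    indices, position = [], 0
    for word in words:
        indices.append(position)          # predict this word's tag from this token only
        position += len(tokenize(word))   # skip over the remaining subword tokens
    return indices

# Toy tokenizer: split long words into two pieces, roughly like the example above.
toy_tokenize = lambda w: [w[:7], "-" + w[7:]] if len(w) > 7 else [w]
words = "expedite ratification of the proposed law".split()
print(first_subword_indices(words, toy_tokenize))   # [0, 2, 4, 5, 6, 8]
```

Note that this bookkeeping implicitly assumes every subword token nests inside exactly one word, which is what breaks down in Chinese below.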

This doesn’t work in Chinese: because of the word segmentation ambiguity, the tokenizer might produce subword tokens that span across several of our words.

So that’s why Chinese is sometimes headache-inducing when you’re doing multilingual NLP. You can work around the problem in a few ways:

  1. Ensure that all parts of the system use a consistent word segmentation scheme. This is easy if you control all the components, but hard when working with other people’s models and data.
  2. Work on the level of characters and don’t do word segmentation at all. This is what I ended up doing, and it’s not too bad, because individual characters do carry semantic meaning. But some words are unrelated to their character meanings, like transliterations of foreign words.
  3. Do some kind of segment alignment using Levenshtein distance — see the appendix of this paper by Tenney et al. (2019). I’ve never tried this method, but a simplified span-overlap version of the idea is sketched below.
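To illustrate the third option, here is a simplified, span-overlap version of the alignment idea (not the actual Levenshtein-based recipe from Tenney et al.): map each token to its character span, then align the two tokenizations by overlapping spans. The Chinese tokens below are made up for illustration, but they show a subword token spanning two of the annotation words.

```python
def char_spans(tokens):
    """Return the (start, end) character span of each token."""
    spans, start = [], 0
    for tok in tokens:
        spans.append((start, start + len(tok)))
        start += len(tok)
    return spans

def align(tokens_a, tokens_b):
    """For each token in A, list the indices of tokens in B that overlap it."""
    spans_a, spans_b = char_spans(tokens_a), char_spans(tokens_b)
    return [[j for j, (b0, b1) in enumerate(spans_b) if a0 < b1 and b0 < a1]
            for a0, a1 in spans_a]

words    = ["我", "喜欢", "自然", "语言", "处理"]   # annotation: I / like / natural / language / processing
subwords = ["我", "喜", "欢", "自然语言", "处理"]   # hypothetical tokenizer output
print(align(words, subwords))   # [[0], [1, 2], [3], [3], [4]]  (token 3 spans two words)
```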

One final thought: the non-ASCII Chinese characters surprisingly never caused any difficulties for me. I would’ve expected to run into encoding issues occasionally, as I had in the past, but I never had any character encoding problems with Python 3.

References

  1. Haspelmath, Martin. “The indeterminacy of word segmentation and the nature of morphology and syntax.” Folia linguistica 45.1 (2011): 31-80.
  2. Qi, Peng, et al. “Stanza: A python natural language processing toolkit for many human languages.” Association for Computational Linguistics (ACL) System Demonstrations. 2020.
  3. Tenney, Ian, et al. “What do you learn from context? Probing for sentence structure in contextualized word representations.” International Conference on Learning Representations. 2019.
  4. Wang, Shichang, et al. “Word intuition agreement among Chinese speakers: a Mechanical Turk-based study.” Lingua Sinica 3.1 (2017): 13.

Representation Learning for Discovering Phonemic Tone Contours

My paper titled “Representation Learning for Discovering Phonemic Tone Contours” was recently presented at the SIGMORPHON workshop, held concurrently with ACL 2020. This is joint work with Jing Yi Xie and Frank Rudzicz.

Problem: Can an algorithm learn the shapes of phonemic tones in a tonal language, given a list of spoken words?

Answer: We train a convolutional autoencoder to learn a representation for each contour, then use the mean shift algorithm to find clusters in the latent space.
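Here is a rough PyTorch + scikit-learn sketch of that kind of pipeline, assuming the pitch contours have already been extracted and resampled to a fixed length; it only illustrates the idea and does not reproduce the paper’s actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn
from sklearn.cluster import MeanShift

class ContourAutoencoder(nn.Module):
    """1-D convolutional autoencoder for fixed-length pitch contours."""
    def __init__(self, contour_len=32, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(16 * (contour_len // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * (contour_len // 4)), nn.ReLU(),
            nn.Unflatten(1, (16, contour_len // 4)),
            nn.Upsample(scale_factor=2),
            nn.Conv1d(16, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv1d(8, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Train with an MSE reconstruction loss (training loop omitted), then cluster
# the latent codes and decode each cluster center into a prototypical contour.
model = ContourAutoencoder()
contours = torch.randn(500, 1, 32)              # stand-in for real F0 contours
with torch.no_grad():
    _, latents = model(contours)
clusters = MeanShift().fit(latents.numpy())
centers = torch.tensor(clusters.cluster_centers_, dtype=torch.float32)
with torch.no_grad():
    prototypes = model.decoder(centers)          # one prototypical contour per cluster
```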


By feeding the center of each cluster into the decoder, we produce a prototypical contour that represents the cluster. Here are the results for Mandarin and Cantonese.


We evaluate on mutual information with the ground truth tones, and the method is partially successful, but contextual effects and allophonic variation present considerable difficulties.

For the full details, read my paper here!

I didn’t break the bed, the bed broke: Exploring semantic roles with VerbNet / FrameNet

Some time ago, my bed fell apart, and I entered into a dispute with my landlord. “You broke the bed,” he insisted, “so you will have to pay for a new one.”

Being a poor grad student, I wasn’t about to let him have his way. “No, I didn’t break the bed,” I replied. “The bed broke.”


Above: My sad and broken bed. Did it break, or did I break it?

What am I implying here? It’s interesting how this argument relies on a crucial semantic difference between the two sentences:

  1. I broke the bed
  2. The bed broke

The difference is that (1) means I caused the bed to break (eg: by jumping on it), whereas (2) means the bed broke by itself (eg: through normal wear and tear).

This is intuitive to a native speaker, but it’s maybe not so obvious why. From this example, one might guess that an intransitive verb used transitively (“X VERBed Y”) always means “X caused Y to VERB”. But this is not the case: consider the following pair of sentences:

  1. I attacked the bear
  2. The bear attacked

Even though the syntax is identical to the previous example, the semantic structure is quite different. Unlike in the bed example, sentence (1) cannot possibly mean “I caused the bear to attack”. In (1), the bear is the one being attacked, while in (2), the bear is the one attacking something.

Above: Semantic roles for the verbs “break” and “attack”.

Sentences which are very similar syntactically can have different structures semantically. To address this, linguists assign semantic roles to the arguments of verbs. There are many semantic roles (and nobody agrees on a precise list of them), but two of the most fundamental ones are Agent and Patient.

  • Agent: entity that intentionally performs an action.
  • Patient: entity that changes state as a result of an action.
  • Many more.

The way that a verb’s syntactic arguments (eg: Subject and Object) line up with its semantic arguments (eg: Agent and Patient) is called the verb’s argument structure. Note that an agent is not simply the subject of a verb: for example, in “the bed broke“, the bed is syntactically a subject but is semantically a patient, not an agent.

Computational linguists have created several corpora to make this information accessible to computers. Two of these corpora are VerbNet and FrameNet. Let’s see how a computer would be able to understand “I didn’t break the bed; the bed broke” using these corpora.


Above: Excerpt from VerbNet entry for the verb “break”.

VerbNet is a database of verbs, containing syntactic patterns where the verb can be used. Each entry contains a mapping from syntactic positions to semantic roles, and restrictions on the arguments. The first entry for “break” has the transitive form: “Tony broke the window“.

Looking at the semantics, you can conclude that: (1) the agent “Tony” must have caused the breaking event, (2) something must have made contact with the window during this event, (3) the window must have its material integrity degraded as a result, and (4) the window must be a physical object. In the intransitive usage, the semantics is simpler: there is no agent that caused the event, and no instrument that made contact during the event.

The word “break” can take arguments in other ways, not just transitive and intransitive. VerbNet lists 10 different patterns for this word, such as “Tony broke the piggy bank open with a hammer“. This sentence contains a result (open), and also an instrument (a hammer). The entry for “break” also groups together a list of words like “fracture”, “rip”, “shatter”, etc, that have similar semantic patterns as “break”.
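If you want to poke at VerbNet programmatically, NLTK ships a corpus reader for it; here is a small sketch (assuming the `verbnet` corpus has been downloaded, and keeping in mind that class IDs can differ between VerbNet releases):

```python
import nltk
from nltk.corpus import verbnet

nltk.download("verbnet", quiet=True)

# Which VerbNet classes does "break" belong to? (something like 'break-45.1')
class_ids = verbnet.classids(lemma="break")
print(class_ids)

# Member verbs of the first class, which share break's argument structure
# ("fracture", "rip", "shatter", ...).
print(verbnet.lemmas(class_ids[0]))

# Pretty-print the class: its syntactic frames with their semantic predicates.
print(verbnet.pprint(verbnet.vnclass(class_ids[0])))
```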


Above: Excerpt from FrameNet entry for the verb “break”.

FrameNet is a similar database, but based on frame semantics. The idea is that in order to define a concept, you have to define it in terms of other concepts, and it’s hard to avoid a cycle in the definition graph. Instead, it’s sometimes easier to define a whole semantic frame at once, which describes a conceptual situation with many different participants. The frame then defines each participant by what role they play in the situation.

The word “break” is contained in the frame called “render nonfunctional“. In this frame, an agent affects an artifact so that it’s no longer capable of performing its function. The core (semantically obligatory) arguments are the agent and the artifact. There are a bunch of optional non-core arguments, like the manner that the event happened, the reason that the agent broke the artifact, the time and place it happened, and so on. FrameNet tries to make explicit all of the common-sense world knowledge that you need to understand the meaning of an event.

Compared to VerbNet, FrameNet is less concerned with the syntax of verbs: for instance, it does not mention that “break” can be used intransitively. Also, it has more fine-grained categories of semantic roles, and contains a description in English (rather than VerbNet’s predicate logic) of how each semantic argument participates in the frame.
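FrameNet is also accessible through NLTK (the `framenet_v17` corpus); the sketch below looks up frames evoked by “break” and the core frame elements of the Render_nonfunctional frame, with the caveat that exact frame names and element inventories depend on the FrameNet release.

```python
import nltk
from nltk.corpus import framenet as fn

nltk.download("framenet_v17", quiet=True)

# Frames whose lexical units include the lemma "break" (e.g. break.v).
for frame in fn.frames_by_lemma(r"(?i)^break\."):
    print(frame.name)

# Core (semantically obligatory) frame elements of one of those frames.
frame = fn.frame("Render_nonfunctional")
print([name for name, fe in frame.FE.items() if fe.coreType == "Core"])
```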

An open question is: how can computers use VerbNet and FrameNet to understand language? Nowadays, deep learning has come to dominate NLP research, so that VerbNet and FrameNet are often seen as relics of a past era, when people still used rule-based systems to do NLP. It turned out to be hard to use VerbNet and FrameNet to make computers do useful tasks.

But recently, the NLP community has been realizing that deep learning has limitations when it comes to common-sense reasoning, limitations that you can’t solve just by adding more layers onto BERT and feeding it more data. So maybe deep learning systems can benefit from these lexical semantic resources.

Why do polysynthetic languages all have very few speakers?

Polysynthetic languages can express in a single word what most languages would need a whole sentence to say. For example, in Inuktitut:

qangatasuukkuvimmuuriaqalaaqtunga

“I’ll have to go to the airport”

There’s no widely accepted definition of a polysynthetic language. Generally, polysynthetic languages have noun incorporation (where noun arguments are expressed as affixes of a verb) and serial verb construction (where a single word contains multiple verbs). They are considered some of the most grammatically complex languages in the world.

Polysynthetic languages are most commonly found among the indigenous languages of North America. Only a few such languages have more than 100k speakers: Nahuatl (1.5m speakers), Navajo (170k speakers), and Cree (110k speakers). Most polysynthetic languages are spoken by a very small number of people and many are in danger of becoming extinct.

Why aren’t there more polysynthetic languages — major national languages with millions of speakers? Is it mere coincidence that the most complex languages have few speakers? According to Wray and Grace (2007), it’s not just coincidence: languages spoken within a small, close-knit community with little outside contact tend to develop grammatical complexity, while languages with lots of external contact and adult learners tend to be more simplified and regular.

It’s well known that children are better language learners than adults. L1 and L2 language acquisition processes work very differently, so that children and adults have different needs when learning a language. Adult learners prefer regularity and expressions that can be decomposed into smaller parts. Anyone who has studied a foreign language has seen tables of verb conjugations like these:

Above: Verb conjugation tables in French and Korean.

For adult learners, the ideal language is predictable and has few exceptions. The number 12 is pronounced “ten-two“, not “twelve“. A doctor who treats your teeth is a “tooth-doctor“, rather than a “dentist“. Exceptions give the adult learner difficulties since they have to be individually memorized. An example of a very predictable language is the constructed language Esperanto, designed to have as few exceptions as possible and be easy to learn for native speakers of any European language.

Children learn languages differently. At the age of 12 months (the holophrastic stage), children start producing single words that can stand for complex ideas. Even when these correspond to multiple words in the adult language, the child initially treats them as a single unit:

whasat (what’s that)

gimme (give me)

Once they reach 18-24 months of age, children pick up morphology and start using multiple words at a time. Children learn whole phrases first and only later analyze them into parts on an as-needed basis, so they have no difficulty with opaque idioms and irregular forms. They don’t really benefit from regularity either: when children learn Esperanto as a native language, they introduce irregularities, even though the language is perfectly regular.

We see evidence of this process in English. Native speakers frequently make mistakes like using “could of” instead of “could’ve“, or using “your” instead of “you’re“. This is evidence that native English speakers think of them as a single unit, and don’t naturally analyze them into their sub-components: “could+have” and “you+are“.

According to the theory, languages spoken in isolated communities, where few adults try to learn them, end up with complex and irregular words. When lots of grown-ups try to learn a language, they struggle with the grammatical complexity and simplify it. Over time, these simplifications become a standard part of the language.

Among the world’s languages, various studies have found correlations between grammatical complexity and smaller population size, supporting this theory. However, the theory is not without its problems. As with any observational study, correlation doesn’t imply causation. The European conquest of the Americas decimated the native population, and consequently, speakers of indigenous languages have declined drastically in the last few centuries. Framing it this way, the answer to “why aren’t there more polysynthetic languages with millions of speakers” is simply: “they all died of smallpox or got culturally assimilated”.

If instead, Native Americans had sailed across the ocean and colonized Europe, would more of us be speaking polysynthetic languages now? Until we can go back in time and rewrite history, we’ll never know the answer for sure.

Further reading

  • Atkinson, Mark David. “Sociocultural determination of linguistic complexity.” (2016). PhD Thesis. Chapter 1 provides a good overview of how languages are affected by social structure.
  • Kelly, Barbara, et al. “The acquisition of polysynthetic languages.” Language and Linguistics Compass 8.2 (2014): 51-64.
  • Trudgill, Peter. Sociolinguistic typology: Social determinants of linguistic complexity. Oxford University Press, 2011.
  • Wray, Alison, and George W. Grace. “The consequences of talking to strangers: Evolutionary corollaries of socio-cultural influences on linguistic form.” Lingua 117.3 (2007): 543-578. This paper proposes the theory and explains it in detail.

Directionality of word class conversion

Many nouns (like google, brick, bike) can be used as verbs:

  • Let me google that for you.
  • The software update bricked my phone.
  • Bob biked to work yesterday.

Conversely, many verbs (like talk, call) can be used as nouns:

  • She gave a talk at the conference.
  • I’m on a call with my boss.

Here, we just assumed that {google, brick, bike} are primarily nouns and {talk, call} are primarily verbs — but is this justified? After all, all five of these words can be used as either a noun or a verb. Then, what’s the difference between the first group {google, brick, bike} and the second group {talk, call}?

These are examples of word class flexibility: words that can be used across multiple part-of-speech classes. In this blog post, I’ll describe some objective criteria to determine if a random word like “sleep” is primarily a noun or a verb.

Five criteria for deciding directionality

Linguists have studied the problem of deciding what is the base / dominant part-of-speech category (equivalently, deciding the directionality of conversion). Five methods are commonly listed in the literature: frequency of occurrence, attestation date, semantic range, semantic dependency, and semantic pattern (Balteiro, 2007; Bram, 2011).

  1. Frequency of occurrence: a word is noun-dominant if it occurs more often as a noun than as a verb. This is the easiest criterion to compute, since all you need is a POS-tagged corpus (see the sketch after this list). The issue is that the direction now depends on which corpus you use, and there can be big differences between genres.
  2. Attestation date: a word is noun-dominant if it was used as a noun before it was used as a verb. This works for newer words: Google (the company) existed for a while before anyone started “googling” things. But we run into problems with older words, where the direction depends on the precise dating of Middle English manuscripts. If the word goes back to Proto-Germanic or Proto-Indo-European, finding the attestation date becomes impossible. This method is also philosophically questionable: you shouldn’t need to know the history of a language to describe its current form.
  3. Semantic range: if a dictionary lists more noun meanings than verb meanings for a word, then it’s noun-dominant. This is not so reliable, because dictionaries disagree on how many senses to include and on how different two senses must be to get separate entries. Also, some meanings are rare or domain-specific (eg: “call option” in finance), and it doesn’t seem right to count them equally.
  4. Semantic dependency: if the definition of the verb meaning refers to the noun meaning, then the word is noun-dominant. For example, “to bottle” means “to put something into a bottle”. This criterion is not always easy to apply: sometimes you can define the word either way, or have neither definition refer to the other.
  5. Semantic pattern: a word is noun-dominant if it refers to an entity / object, and verb-dominant if it refers to an action. A bike is something you can touch and feel; a walk is not. Haspelmath (2012) encourages distinguishing {entity, action, property} rather than {noun, verb, adjective}. However, it’s hard to determine without subjective judgement whether the entity or the action sense is more basic, especially for abstract nouns like “test” or “work”.
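As an illustration of criterion 1, here is a quick sketch using NLTK’s POS-tagged Brown corpus; the coarse tag set and lack of lemmatization make this crude, so the counts only show the general approach rather than a definitive verdict.

```python
import nltk
from collections import Counter
from nltk.corpus import brown

for resource in ("brown", "universal_tagset"):
    nltk.download(resource, quiet=True)

targets = {"bike", "brick", "talk", "call"}
counts = {w: Counter() for w in targets}

# One pass over the tagged corpus, counting NOUN vs VERB occurrences.
for token, tag in brown.tagged_words(tagset="universal"):
    w = token.lower()
    if w in targets and tag in ("NOUN", "VERB"):
        counts[w][tag] += 1

for w in sorted(targets):
    c = counts[w]
    dominant = "noun" if c["NOUN"] >= c["VERB"] else "verb"
    print(w, dict(c), "->", dominant + "-dominant in this corpus")
```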

Comparisons using corpus methods

How do we make sense of all these competing criteria? To answer this question, Balteiro (2007) compared 231 flexible noun/verb pairs and rated them all according to the five criteria I listed above, as well as a few more that I didn’t include. Later, Bram (2011) surveyed a larger set of 2048 pairs.

The details are quite messy, because applying the criteria is not so straightforward. For example, polysemy: the word “call” has more than 40 definitions in the OED, and some of them are obsolete, so which one do you use for the attestation date? How do you deal with homonyms like “bank” that have two unrelated meanings? With hundreds of pages of painstaking analysis, the researchers came to a judgement for each word. Then, they measured the agreement between each pair of criteria:

Table of pairwise agreement between criteria (adapted from Table 5.2 of Bram’s thesis)

There is only a moderate level of agreement between the different criteria, on average about 65% — better than random, but not too convincing either. Only frequency and attestation date agree more than 80% of the time. Only a small minority of words have all of the criteria agree.

Theoretical ramifications

This puts us in a dilemma: how do we make sense of these results? What’s the direction of conversion if these criteria don’t agree? Are some of the criteria better than others, perhaps take a majority vote? Is it even possible to determine a direction at all?

Linguists have disagreed for decades over what to do with this situation. Van Lier and Rijkhoff (2013) give a survey of the various views. Some linguists maintain that every flexible word must be either noun-dominant or verb-dominant, with the other use derived by conversion. Other linguists note the disagreements between criteria and propose instead that such words are underspecified. Just like a stem cell that can morph into a skin or lung cell as needed, a word like “sleep” is neither a noun nor a verb, but a pre-categorical form that can morph into either one depending on context.

Can we really determine the dominant category of a conversion pair? It seems doubtful that this issue will ever be resolved. Presently, none of the theories make any scientific predictions that can be tested and falsified. Until then, the theories co-exist as different ways to view and analyze the same data.

The idea of a “dominant” category doesn’t exist in nature; it is merely an artificial construct to help explain the data. In mathematics, it’s nonsensical to ask whether imaginary numbers really “exist”. Nobody has seen an imaginary number, but mathematicians use them because they’re good for describing a lot of things. Likewise, it doesn’t make sense to ask whether flexible words really have a dominant category. We can only ask whether a theory that assumes the existence of a dominant category is simpler than a theory that does not.

References

  1. Balteiro, Isabel. The directionality of conversion in English: A dia-synchronic study. Vol. 59. Peter Lang, 2007.
  2. Bram, Barli. “Major total conversion in English: The question of directionality.” (2011). PhD Thesis.
  3. Haspelmath, Martin. “How to compare major word-classes across the world’s languages.” Theories of everything: In honor of Edward Keenan 17 (2012): 109-130.
  4. Van Lier, Eva, and Jan Rijkhoff. “Flexible word classes in linguistic typology and grammatical theory.” Flexible word classes: a typological study of underspecified parts-of-speech (2013): 1-30.

Explaining chain-shift tone sandhi in Min Nan Chinese

In my previous post on the Teochew dialect, I noted that Teochew has a complex system of tone sandhi. The last syllable of a word keeps its citation (base) form, while all preceding syllables undergo sandhi. For example:

gu5 (cow) -> gu1 nek5 (cow-meat = beef)

seng52 (play) -> seng35 iu3 hi1 (play a game)

The sandhi system is quite regular — for instance, if a word’s base tone is 52 (falling tone), then its sandhi tone will be 35 (rising tone), across many words:

toin52 (see) -> toin35 dze3 (see-book = read)

mang52 (mosquito) -> mang35 iu5 (mosquito-oil)

We can represent this relationship as an edge in a directed graph 52 -> 35. Similarly, words with base tone 5 have sandhi tone 1, so we have an edge 5 -> 1. In Teochew, the sandhi graph of the six non-checked tones looks like this:


Above: Teochew tone sandhi, Jieyang dialect, adapted from Xu (2007). For simplicity, we ignore checked tones (ending in -p, -t, -k), which have different sandhi patterns.
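As a toy illustration of right-dominance, here is a small Python sketch; only the two mappings from the examples above are filled in, and the full table would come from the sandhi graph in the figure.

```python
# Partial citation-tone -> sandhi-tone table (Jieyang Teochew, from the examples above).
SANDHI = {"52": "35", "5": "1"}

def apply_sandhi(syllables):
    """syllables: list of (segments, citation_tone) pairs; every syllable except
    the last takes its sandhi tone, while the last keeps its citation tone."""
    last = len(syllables) - 1
    return [(seg, tone if i == last else SANDHI.get(tone, tone))
            for i, (seg, tone) in enumerate(syllables)]

print(apply_sandhi([("gu", "5"), ("nek", "5")]))      # gu1 nek5, "beef"
print(apply_sandhi([("toin", "52"), ("dze", "3")]))   # toin35 dze3, "read"
```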

This type of pattern is not unique to Teochew, but exists in many dialects of Min Nan. Other dialects have different tones but a similar system. It’s called right-dominant chain-shift, because the rightmost syllable of a word keeps its base tone. It’s also called a “tone circle” when the graph has a cycle. Most notably, the sandhi pattern where A -> B, and B -> C, yet A !-> C is quite rare cross-linguistically, and does not occur in any Chinese dialect other than in the Min family.

Is there any explanation for this unusual tone sandhi system? In this blog post, I give an overview of some attempts at an explanation from theoretical phonology and historical linguistics.

Xiamen tone circle and Optimality Theory

The Xiamen / Amoy dialect is perhaps the most studied variety of Min Nan. Its sandhi system looks like this:

Above: Xiamen tone sandhi circle.

Barrie (2006) and Thomas (2008) attempt to explain this system with Optimality Theory (OT). In modern theoretical phonology, OT is a framework that describes how underlying phonemes are mapped to output phonemes, not with rules, but with a set of constraints. The constraints dictate what kinds of patterns are considered “bad” in the language, but some violations are worse than others, so the constraints are ranked in a hierarchy. The output is then the candidate that is “least bad” according to the ranking.

To explain the Xiamen tone circle sandhi, Thomas begins by introducing the following OT constraints:

  • *RISE: incur a penalty for every sandhi tone that has a rising contour.
  • *MERGE: incur a penalty when two citation tones are mapped to the same sandhi tone.
  • DIFFER: incur a penalty when a base tone is mapped to itself as its sandhi tone.

Without any constraints, there are 5^5 = 3125 possible sandhi systems in a 5-tone language. With these constraints, most of the hypothetical systems are eliminated — for example, the null system (where every tone is mapped to itself) incurs 5 violations of the DIFFER constraint.

These 3 constraints aren’t quite enough to fully explain the Xiamen tone system: there are still 84 hypothetical systems that are equally good as the actual one. With the aid of a Perl script, Thomas then introduces more constraints until only one system (the actual observed one) emerges as the best under the ranking.
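To give a flavour of what such a constraint-counting script does, here is a toy Python version (not Thomas’s actual Perl script). The tones are abstract labels and the choice of which one counts as “rising” is an arbitrary assumption, so the numbers will not match the paper’s 84 systems.

```python
from itertools import product

tones = ["A", "B", "C", "D", "E"]
rising = {"E"}   # hypothetical: pretend exactly one tone has a rising contour

def violations(mapping):
    """Violation counts, ordered by the ranking *RISE >> *MERGE >> DIFFER."""
    rise = sum(1 for t in tones if mapping[t] in rising)    # rising sandhi tones
    merge = len(tones) - len(set(mapping.values()))         # citation tones merged together
    differ = sum(1 for t in tones if mapping[t] == t)       # tones mapped to themselves
    return (rise, merge, differ)

candidates = [dict(zip(tones, out)) for out in product(tones, repeat=len(tones))]
print(len(candidates))                               # 3125 possible sandhi systems

best_score = min(violations(m) for m in candidates)  # tuple comparison = strict ranking
ties = [m for m in candidates if violations(m) == best_score]
print(best_score, len(ties))   # many systems still tie, so more constraints are needed
```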

Problems with the OT explanation

There are several reasons why I didn’t find this explanation very satisfying. First, it’s not falsifiable: if your constraints don’t generate the right result, you can keep adding more and more constraints, and tweak the ranking, until they produce the result you want.

Second, the constraints are very arbitrary and lack any cognitive-linguistic motivation. You can explain the *MERGE constraint as trying to preserve contrasts, which makes sense from an information theory point of view, but what about DIFFER? It’s unclear why a base tone shouldn’t simply keep its value as the sandhi tone, especially since many languages (like Cantonese) manage fine with no sandhi at all.

Even considering Teochew, which is closely related to the Xiamen dialect, we see that all three constraints are violated. I’m not aware of any OT analysis of Teochew sandhi, and it would be interesting to see one, but surely it would need a very different set of constraints from the Xiamen system.

Nevertheless, OT has been an extremely successful framework in modern phonology. In some cases, OT can describe a pattern very cleanly, where you’d need very complicated rules to describe them. In that case, the set of OT constraints would be a good explanation for the pattern.

Also, if the same constraint shows up in a lot of languages, that increases the credibility that it’s a true cross-linguistic tendency, rather than just a made-up rule to explain the data. For example, if the *RISE constraint shows up in OT grammars for many languages, then you could claim that there’s a general tendency for languages to prefer falling tones over rising tones.

Evidence from Middle Chinese

Chen (2000) gives a different perspective. Essentially, he claims that it’s impossible to make sense of the data in any particular modern-day dialect. Instead, we should compare multiple dialects together in the context of historical sound changes.

The evidence he gives is from the Zhangzhou dialect, located about 40km inland from Xiamen. The Zhangzhou dialect has a similar tone circle as Xiamen, but with different values!

Above: The Zhangzhou tone circle.

It’s not obvious how the two systems are related, until you consider the mapping to Middle Chinese tone categories:

Above: The Xiamen and Zhangzhou tone circles mapped to Middle Chinese tone categories.

The Roman numerals I, II, III denote tones of Middle Chinese, spoken around 600 AD. Middle Chinese had four tones, but none of the present-day Chinese dialects retain this system after centuries of tone splits and merges. In many dialects, a Middle Chinese tone split into two tones depending on whether the initial was voiced or voiceless. When comparing tones across dialects, it’s often useful to refer to historical tone categories like “IIIa”, which roughly means “syllables that had tone III in Middle Chinese and a voiceless initial consonant”.

It’s unlikely that both Xiamen and Zhangzhou coincidentally developed sandhi patterns that map to the same Middle Chinese tone categories. It’s far more likely that the tone circle developed in a common ancestral language, then their phonetic values diverged afterwards in the respective present-day dialects.

That still leaves open the question of: how exactly did the tone circle develop in the first place? It’s likely that we’ll never know for sure: the details are lost to time, and the processes driving historical tone change are not very well understood.

In summary, theoretical phonology and historical linguistics offer complementary insights that explain the chain-shift sandhi patterns in Min Nan languages. Optimality Theory proposes tendencies for languages to prefer certain structures over others. This partially explains the pattern; a lot of it is simply due to historical accident.

References

  1. Barrie, Michael. “Tone circles and contrast preservation.” Linguistic Inquiry 37.1 (2006): 131-141.
  2. Chen, Matthew Y. Tone sandhi: Patterns across Chinese dialects. Vol. 92. Cambridge University Press, 2000. Pages 38-49.
  3. Thomas, Guillaume. “An analysis of Xiamen tone circle.” Proceedings of the 27th West Coast Conference on Formal Linguistics. Cascadilla Proceedings Project, Somerville, MA. 2008.
  4. Xu, Hui Ling. “Aspect of Chaozhou grammar: a synchronic description of the Jieyang variety.” (2007).

Learning the Teochew (Chaozhou) Dialect

Lately I’ve been learning my girlfriend’s dialect of Chinese, called the Teochew dialect.  Teochew is spoken in the eastern part of the Guangdong province by about 15 million people, including the cities of Chaozhou, Shantou, and Jieyang. It is part of the Min Nan (闽南) branch of Chinese languages.


Above: Map of major dialect groups of Chinese, with Teochew circled. Teochew is part of the Min branch of Chinese. Source: Wikipedia.

Although the different varieties of Chinese are usually referred to as “dialects”, linguists consider them different languages, as they are not mutually intelligible. Teochew is not intelligible to either Mandarin or Cantonese speakers. Teochew and Mandarin diverged about 2000 years ago, so today they are about as similar as French is to Portuguese. Interestingly, linguists claim that Teochew is one of the most conservative Chinese dialects, preserving many archaic words and features from Old Chinese.

Above: Sample of Teochew speech from entrepreneur Li Ka-shing.

Since I like learning languages, naturally I started learning my girlfriend’s native tongue soon after we started dating. It helped that I spoke Mandarin, but Teochew is not close enough to Mandarin to simply pick up by osmosis; it still requires deliberate study. Compared to other languages I’ve learned, Teochew is challenging because very few people try to learn it as a foreign language, so there are few language-learning resources for it.

Writing System

The first hurdle is that Teochew is primarily spoken, not written, and does not have a standard writing system. This is the case with most Chinese dialects. Almost all Teochews are bilingual in Standard Chinese, which they are taught in school to read and write.

Sometimes people try to write Teochew using Chinese characters by finding the equivalent Standard Chinese cognates, but there are many dialectal words which don’t have any Mandarin equivalent. In these cases, you can invent new characters or substitute similar sounding characters, but there’s no standard way of doing this.

Still, I needed a way to write Teochew, to take notes on new vocabulary and grammar. At first, I used IPA, but as I became more familiar with the language, I devised my own romanization system that captured the sound differences.

Cognates with Mandarin

Note (Jul 2020): People in the comments have pointed out that some of these examples are incorrect. I’ll keep this section the way it is because I think the high-level point still stands, but these are not great examples.

Knowing Mandarin was very helpful for learning Teochew, since there are lots of cognates. Some cognates are obviously recognizable:

  • Teochew: kai shim, happy. Cognate to Mandarin: kai xin, 开心.
  • Teochew: ing ui, because. Cognate to Mandarin: ying wei, 因为

Some words have cognates in Mandarin, but mean something slightly different, or aren’t commonly used:

  • Teochew: ou, black. Cognate to Mandarin: wu, 乌 (dark). The usual Mandarin word is hei, 黑 (black).
  • Teochew: dze, book. Cognate to Mandarin: ce, 册 (booklet). The usual Mandarin word is shu, 书 (book).

Sometimes a word has a cognate in Mandarin, but the two sound quite different due to centuries of sound change:

  • Teochew: hak hau, school. Cognate to Mandarin: xue xiao, 学校.
  • Teochew: de, pig. Cognate to Mandarin: zhu, 猪.
  • Teochew: dung, center. Cognate to Mandarin: zhong, 中.

In the last two examples, we see a fairly common sound change, where a dental stop initial (d- and t-) in Teochew corresponds to an affricate (zh- or ch-) in Mandarin. It’s not usually enough to guess the word, but serves as a useful memory aid.

Finally, a lot of dialectal Teochew words (I’d estimate about 30%) don’t have any recognizable cognate in Mandarin. Examples:

  • da bo: man
  • no gya: child
  • ge lai: home

Grammatical Differences

Generally, I found Teochew grammar to be fairly similar to Mandarin, with only minor differences. Most grammatical constructions can transfer cognate by cognate and still make sense in the other language.

One significant difference in Teochew is its many fused negation markers. Here, the initial b- or m- is joined with a final to form a single syllable that negates something. Some examples:

  • bo: not have
  • boi: will not
  • bue: not yet
  • mm: not
  • mai: not want
  • ming: not have to

Phonology and Tone Sandhi

The sound structure of Teochew is not too different from Mandarin, and I didn’t find it difficult to pronounce. The biggest difference is that syllables may end in the stops -p, -t, -k or the nasal -m, whereas Mandarin syllables can only end with a vowel or with -n or -ng. A characteristic of a Teochew accent in Mandarin is replacing /f/ with /h/, and indeed there is no /f/ sound in Teochew.

The hardest part of learning Teochew for me were the tones. Teochew has either six or eight tones depending on how you count them, which isn’t difficult to produce in isolation. However, Teochew has a complex system of tone sandhi rules, where the tone of each syllable changes depending on the tone of the following syllable. Mandarin has tone sandhi to some extent (for example, the third tone sandhi rule where nǐ + hǎo is pronounced níhǎo rather than nǐhǎo). But Teochew takes this to a whole new level, where nearly every syllable undergoes contextual tone change.

Some examples (the numbers are Chao tone numerals, with 1 meaning lowest and 5 meaning highest tone):

  • gu5: cow
  • gu1 nek5: beef

Another example, where a falling tone changes to a rising tone:

  • seng52: to play
  • seng35 iu3 hi1: to play a game

There are tables of tone sandhi rules describing in detail how each tone gets converted to what other tone, but this process is not entirely regular and there are exceptions. As a result, I frequently get the tone wrong by mistake.

Update: In this blog post, I explore Teochew tone sandhi in more detail.

Resources for Learning Teochew

Teochew is seldom studied as a foreign language, so there aren’t many language learning resources for it. Even dictionaries are hard to find. One helpful dictionary is Wiktionary, which has the Teochew pronunciation for most Chinese characters.

Also helpful were formal linguistic grammars:

  1. Xu, Huiling. “Aspects of Chaoshan grammar: A synchronic description of the Jieyang dialect.” Monograph Series Journal of Chinese Linguistics 22 (2007).
  2. Yeo, Pamela Yu Hui. “A sketch grammar of Singapore Teochew.” (2011).

The first is a massively detailed, 300-page description of Teochew grammar, while the second is a shorter grammar sketch on a similar variety spoken in Singapore. They require some linguistics background to read. Of course, the best resource is my girlfriend, a native speaker of Teochew.

Visiting the Chaoshan Region

After practicing my Teochew for a few months with my girlfriend, we paid a visit to her hometown and relatives in the Chaoshan region, more specifically Raoping County, located on the border between Guangdong and Fujian provinces.

 

Left: Chaoshan railway station, China. Right: Me learning the Gongfu tea ceremony, an essential aspect of Teochew culture.

Teochew people are traditional and family-oriented, very much unlike the individualistic Western culture that I’m used to. In Raoping and Guangzhou, we attended large family gatherings in the afternoon, chatting and gossiping while drinking tea. Although they are Han Chinese, the Teochew consider themselves a distinct subgroup within the Chinese, with their own unique culture and language. The Teochew are especially proud of their language, which they consider to be extremely hard for outsiders to learn. Essentially, speaking Teochew is what separates “ga gi nang” (roughly translated as “our people”) from the countless other Chinese.

My Teochew is not great. Sometimes I struggle to get the tones right and make myself understood. But at a large family gathering, a relative asked me why I was learning Teochew, and I was able to reply, albeit with a Mandarin accent: “I want to learn Teochew so that I can be part of your family”.


Above: Me, Elaine, and her grandfather, on a quiet early morning excursion to visit the sea. Raoping County, Guangdong Province, China.

Thanks to my girlfriend Elaine Ye for helping me write this post. Elaine is fluent in Teochew, Mandarin, Cantonese, and English.

Paper Review: Linguistic Features to Identify Alzheimer’s Disease

Today I’m going to be sharing a paper I’ve been looking at, related to my research: “Linguistic Features Identify Alzheimer’s Disease in Narrative Speech” by Katie Fraser, Jed Meltzer, and my adviser Frank Rudzicz. The paper was published in 2016 in the Journal of Alzheimer’s Disease. It uses NLP to automatically diagnose patients with Alzheimer’s disease, given a sample of their speech.


Alzheimer’s is a disease that you might have heard of, but it doesn’t get much attention in the media, unlike cancer and stroke. It is a neurodegenerative disease that mostly affects elderly people. 5 million Americans are living with Alzheimer’s, including 1 in 9 people over the age of 65 and 1 in 3 over the age of 85.

Alzheimer’s is also the most expensive disease in America. After diagnosis, patients may continue to live for over 10 years, and during much of this time, they are unable to care for themselves and require a constant caregiver. In 2017, Medicare and Medicaid covered about 68% of the cost of caring for patients with Alzheimer’s, and this cost is expected to increase as the elderly population grows.

Despite a lot of recent advances in our understanding of the disease, there is currently no cure for Alzheimer’s. Since the disease is so prevalent and harmful, research in this direction is highly impactful.

Previous tests to diagnose Alzheimer’s

One of the early signs of Alzheimer’s is having difficulty remembering things, including words, leading to a decrease in vocabulary. A reliable way to test for this is a retrieval question like the following (Monsch et al., 1992):

In the next 60 seconds, name as many items as possible that can be found in a supermarket.

A healthy person could rattle out about 20-30 items in a minute, whereas someone with Alzheimer’s could only produce about 10. By setting the threshold at 16 items, they could classify even mild cases of Alzheimer’s with about 92% accuracy.

This test doesn’t capture all the signs of Alzheimer’s disease, though. Patients with Alzheimer’s tend to be rambly and incoherent. This can be tested with a picture description task, where the patient is given a picture and asked to describe it with as much detail as possible (Giles, Patterson, and Hodges, 1994).

Above: Boston Cookie Theft picture used for the picture description task

There is no time limit; the patient talks until they indicate they have nothing more to say, or until they don’t say anything for 15 seconds.

Patients with Alzheimer’s disease produced descriptions with varying degrees of incoherence. Here’s an example transcript, from the above paper:

Experimenter: Tell me everything you see going on in this picture

Patient: oh yes there’s some washing up going on / (laughs) yes / …… oh and the other / ….. this little one is taking down the cookie jar / and this little girl is waiting for it to come down so she’ll have it / ………. er this girl has got a good old splash / she’s left the taps on (laughs) she’s gone splash all down there / um …… she’s got splash all down there

You can clearly tell that something’s off, but it’s hard to put a finger on exactly what the problem is. Well, time to apply some machine learning!

Results of Paper

Fraser’s 2016 paper uses data from the DementiaBank corpus, consisting of 240 narrative samples from patients with Alzheimer’s, and 233 from a healthy control group. The two groups were matched to have similar age, gender, and education levels. Each participant was asked to describe the Boston Cookie Theft picture above.

Fraser’s analysis used both the original audio data and a detailed computer-readable transcript. She looked at 370 different features covering all sorts of linguistic metrics, like ratios of different parts of speech, syntactic structures, vocabulary richness, and repetition. Then, she performed a factor analysis and identified a set of 35 features that achieves about 81% accuracy in distinguishing Alzheimer’s patients from controls.

According to the analysis, a few of the most important distinguishing features are:

  • Pronoun to noun ratio. Alzheimer’s patients produce vague statements and tend to substitute pronouns like “he” for nouns like “the boy”. This also applies to adverbial constructions like “the boy is reaching up there” rather than “the boy is reaching into the cupboard”.
  • Usage of high-frequency words. Alzheimer’s patients have difficulty remembering specific words and replace them with more general, and therefore higher-frequency, words. (A rough sketch of computing this kind of feature is given below.)
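For a sense of what such features look like in code, here is a rough sketch of the pronoun-heaviness idea using NLTK’s off-the-shelf tagger; it is a simplified stand-in for the paper’s feature extraction, which used different tools and hundreds of features.

```python
import nltk
from nltk import pos_tag, word_tokenize

# Resource names may vary slightly across NLTK versions.
for resource in ("punkt", "averaged_perceptron_tagger", "universal_tagset"):
    nltk.download(resource, quiet=True)

def pronoun_share(text):
    """Fraction of pronouns among pronouns + nouns (a crude vagueness proxy)."""
    tags = [tag for _, tag in pos_tag(word_tokenize(text), tagset="universal")]
    pron, noun = tags.count("PRON"), tags.count("NOUN")
    return pron / (pron + noun) if pron + noun else 0.0

print(pronoun_share("The boy is reaching into the cupboard for the cookie jar."))  # low
print(pronoun_share("He is reaching up there to get it for her."))                 # high
```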

Future directions

Shortly after this research was published, my adviser Frank Rudzicz co-founded WinterLight Labs, a company that’s working on turning this proof-of-concept into an actual usable product. It also diagnoses various other cognitive disorders like Primary Progressive Aphasia.

A few other grad students in my research group are working on Talk2Me, which is a large longitudinal study to collect more data from patients with various neurodegenerative disorders. More data is always helpful for future research.

So this is the starting point for my research. Stay tuned for updates!

Great Solo Asian Trip Part 2: Languages of East Asia

This is the second blog post in my two-part series on my 4-month trip to Asia. Here is part one. In this second blog post, I will focus on the languages I encountered in Asia and my attempts at learning them.

I’ve always enjoyed learning languages (here is a video of me speaking a bunch of them) — and Asia is a very linguistically diverse place compared to North America, with almost every country speaking a different language. So in every country I visited, I tried to learn the language as best as I could. Realistically, it’s not possible to go from zero to fluency in the span of a vacation, but you can actually learn a decent amount in a week or two. Travelling in a foreign country is a great motivator for learning languages, and I found myself learning new words much faster than I did studying it at home.

I went to five countries on this trip, in chronological order: China, Japan, South Korea, Vietnam, and Malaysia.

China

In the first month of my trip, I went to a bunch of cities in China with my mom and sister. For the most part, there wasn’t much language learning, as I already spoke Mandarin fluently.

One of the regions we went to was Xishuangbanna, in southern Yunnan province. Xishuangbanna is a special autonomous prefecture, designated by the Chinese government for the Dai ethnic minority. The outer fringes of China are filled with various groups of non-Chinese minority groups, each with their own unique culture and language. Home to 25 officially recognized ethnic groups and countless more unrecognized ones, Yunnan is one of the most linguistically diverse places in the world.

Above: Bilingual signs in Chinese and Dai in Jinghong

In practice, recent migration of the Chinese into the region meant that even in Xishuangbanna, the Han Chinese outnumber the local Dai people, and Mandarin is spoken everywhere. In the streets of Jinghong, you can see bilingual signs written in Mandarin and the Dai language (a language related to Thai). Their language is written in the Tai Lue script, which looks pretty cool, but I never got a chance to learn it.


Next stop on my trip was Hong Kong. The local language here is Cantonese, which shares a lot of similar vocabulary and grammatical structure with my native Mandarin, since they were both descended from Middle Chinese about 1500 years ago. However, a millennium of sound changes means that today, Mandarin and Cantonese are quite different languages and are not at all mutually intelligible.

I was eager to practice my Cantonese in my two days in Hong Kong, but found that whenever I said something incorrect, they would give me a weird look and immediately switch to Mandarin or English. Indeed, learning a language is very difficult when everybody in that country is fluent in English. Oh well.

Japan

A lot of travellers complain that the locals speak no English; you don’t often hear of complaints that their English is too good! Well, Japan did not leave me disappointed. Although everyone studies English in school, most people have little practice actually using it, so Japan is ranked near the bottom in English proficiency among developed nations. Perfect!

Before coming to Japan, I already knew a decent amount of Japanese, mostly from watching lots of anime. However, there are very few Japanese people in Canada, so I didn’t have much practice actually speaking it.

I was in Japan for one and a half months, the most of any single country of this trip. In order to accelerate my Japanese learning process, I enrolled in classes at a Japanese language school and stayed with a Japanese homestay family. This way, I learned formal grammatical structures in school and got conversation practice at home. I wrote a more detailed blog post here about this part of the trip.


Phonologically, Japanese is an easy language to pronounce because it has a relatively small number of consonants and only five vowels. There are no tones, and every syllable has form CV (consonant followed by a vowel). Therefore, an English speaker will have a much easier time pronouncing Japanese correctly than the other way around.

Grammatically, Japanese has a few oddities that take some time to get used to. First, the subject of a sentence is usually omitted, so the same phrase can mean “I have an apple” or “he has an apple”. Second, every time you use a verb, you have to decide between the casual form (used between friends and family) or the polite form (used when talking to strangers). Think of verb conjugations, but instead of verb endings differing by subject, they’re conjugated based on politeness level.

The word order of Japanese is also quite different from English. Japanese is an agglutinative language, so you can form really long words by attaching various suffixes to verbs. For example:

  • iku: (I/you/he) goes
  • ikanai: (I/you/he) doesn’t go
  • ikitai: (I/you/he) wants to go
  • ikitakunai: (I/you/he) doesn’t want to go
  • ikanakatta: (I/you/he) didn’t go
  • ikitakunakatta: (I/you/he) didn’t want to go
  • etc…

None of this makes Japanese fundamentally hard, just different from a lot of other languages. This also explains why Google Translate sucks so much at Japanese. When translating Japanese to English, the subjects of sentences are implicit in Japanese but must be explicit in English; when translating English to Japanese, the politeness level is implicit in English but must be explicit in Japanese.

One more thing to beware of is the Japanese pitch accent. Although it’s nowhere close to a full tonal system like Chinese, stressed syllables have a slightly higher pitch. For example, the word “kirei” (pretty) has a pitch accent on the first syllable: “KI-rei”. Once I messed this up and put the accent on the second syllable instead: “ki-REI”, but unbeknownst to me, to native Japanese this sounds like “kirai” (to hate), which has the accent on the second syllable. So I meant to say “nihon wa kirei desu” (Japan is pretty) but it sounded more like “nihon wa kirai desu” (I hate Japan)!


That was quite an awkward moment.

When I headed west from Tokyo into the Kansai region of Kyoto and Osaka, I noticed a bit of dialectal variation. The “u” in “desu” is a lot more drawn out, and the copula “da” is replaced with “ya”, so on the streets of Kyoto I’d hear a lot of “yakedo” instead of Tokyo’s “dakedo”. I got to practice my Japanese with my Kyoto Airbnb host every night, and picked up a few words of the Kansai dialect. For example:

  • ookini: thank you (Tokyo dialect: arigatou)
  • akan: no good (Tokyo dialect: dame)
  • okan: mother (Tokyo dialect: okaasan)

The writing system of Japanese is unique and deserves a mention. It actually uses three scripts: the Hiragana syllabary for grammatical particles and inflections, the Katakana syllabary for foreign loanwords, and Kanji, logographic characters borrowed from Chinese. A Kanji character can be read in several different ways. Typically, when two or more Kanji appear together, the word is a loanword from Chinese read with a Chinese-like pronunciation (e.g., 小説 “novel” is read shousetsu), but when a single Kanji is followed by a string of Hiragana, it’s a native Japanese word that means the same thing as the character but sounds nothing like the Chinese word (e.g., 小さい “small” is read chiisai).

The logographic nature of Kanji is immensely helpful for Chinese people learning Japanese. You get the etymology of every Chinese loanword, and you get to “read” texts well above your level as you know the meaning of most words (although it gives you no information on how the word is pronounced).

My Japanese improved a lot during my six weeks in the country. By the time I got to Fukuoka, at the western end of Japan, I had no problem holding a 30-minute conversation with locals in a restaurant (provided they spoke slowly, of course). It’s been one of my most rewarding language learning experiences to date.

South Korea

From Fukuoka, I traveled across the sea for a mere three hours, on a boat going slower than a car on a freeway, and landed in a new country. Suddenly, the script on the signs was different, and the language on the street was once again strange and unfamiliar. You can’t get the same satisfaction arriving by airplane.

Above: Busan, my first stop in Korea

Of course, I was in the city of Busan, in South Korea. I was a bit nervous coming here, since it was the first time in my life that I’d been in a country where I wasn’t at least conversationally proficient in the language. Indeed, procuring a SIM card on my first day took a combination of my broken Korean, their broken English, hand gestures, and (shamefully) Google Translate.

Before coming to Korea, I knew how to read Hangul (the Korean writing system) and a couple dozen words and phrases picked up from Kpop and my university’s Korean language club. I had also tried Korean lessons on italki (a language learning website) and various textbooks, but the language never really “clicked” for me, and I still can’t hold a conversation in Korean for very long.

I suspect the reason has to do with passive knowledge: I’ve had a lot of exposure to Japanese from hundreds of hours of watching anime, but nowhere near as much exposure to Korean. Passive knowledge is important because humans learn language from data, and given enough data, we pick up on a lot of grammatical patterns without explicitly learning them.

Also, studying Kpop song lyrics is not a very effective way to learn Korean. The word distribution in song lyrics is different enough from the word distribution in conversation that studying lyrics would likely make you better at understanding other songs, but not much better at speaking Korean.


Grammatically, Japanese and Korean are very similar: they have nearly identical word order, and their grammatical particles have an almost one-to-one correspondence. Both conjugate verbs differently based on politeness, and both form complex words by gluing suffixes onto the end of verbs. The grammar of the two languages is so similar that you can almost translate Japanese into Korean just by translating each morpheme, without changing the order. Both, in turn, are very different from Chinese, the other major language of the region.

Phonologically, Korean is a lot more complex than Japanese, which is bad news for language learners. Korean has about twice as many vowels as Japanese, and a few more consonants as well. On top of that, Korean maintains a three-way distinction for many consonants: for example, the ‘b/p’ sound has a plain version (불: bul), an aspirated version (풀: pul), and a tense version (뿔: ppul). I had a lot of difficulty telling these sounds apart, and often had to try several combinations before I could find a word in the dictionary.

Unlike Chinese and Japanese, Korean does not use a logographic writing system. Hangul spells out how each word sounds, and the system is quite regular. On one hand, this means Hangul can be learned in a day; on the other hand, it’s not terribly useful to be able to sound out Korean text without knowing what anything means. I actually prefer the Japanese logographic system, since it makes the Chinese cognates a lot clearer. In fact, about 60% of Korean vocabulary consists of Chinese loanwords, but with a phonetic writing system, it’s not always easy to tell which words they are.

Vietnam

The next country on my trip was Vietnam. I learned a few phrases from a Pimsleur audio course, but apart from that, I knew very little about the Vietnamese language coming in. The places I stayed were sufficiently touristy that most people spoke enough English to get by, but not so fluently as to make learning the language pointless.

Vietnamese is a tonal language, like Mandarin and Cantonese. It has six tones, but they’re quite different from the tones of Mandarin (which has four, plus a neutral tone). At a casual glance, Vietnamese may sound similar to Chinese, but the two languages are unrelated, and what shared vocabulary there is (mostly old loanwords from Chinese) is hard to recognize by ear.

Above: Comparison between Mandarin tones (above) and Vietnamese tones (below)

Vietnamese syllables allow a wide variety of vowels and diphthongs; multiplied by the number of tones, this means there is a huge number of distinct syllables. In information-theoretic terms, it also means that a single Vietnamese syllable carries a lot of information, and indeed I was often surprised by words that were one syllable in Vietnamese but two syllables in Mandarin.
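
As a rough back-of-the-envelope check of that intuition (the syllable counts below are ballpark figures I’m assuming, not careful measurements), a language with more distinct syllables packs more bits into each one:

```python
import math

# Assumed ballpark counts of distinct syllables (including tones); real
# figures depend on the analysis, and not every combination occurs.
distinct_syllables = {
    "Mandarin": 1300,
    "Vietnamese": 6000,
}

for lang, n in distinct_syllables.items():
    # If every syllable were equally likely, each would carry log2(n) bits
    # (an upper bound; real usage is far from uniform).
    print(f"{lang}: ~{n} syllables -> up to {math.log2(n):.1f} bits per syllable")
```

Even on this crude, uniform-usage estimate, a Vietnamese syllable can carry a couple of bits more than a Mandarin one, which fits the observation that some two-syllable Mandarin words come out as a single syllable in Vietnamese.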

My Vietnamese pronunciation must have sounded very strange to the locals: often, when I said something, they would understand me, but then they’d burst out laughing. Inevitably, they’d follow up by asking whether I was overseas Vietnamese.

Vietnamese grammar is a bit like Chinese, with subject-verb-object word order and no verb conjugation. So in Vietnamese, if you string together a bunch of words in a reasonable order, there’s a good chance the sentence will be correct (the chance is close to zero in Japanese or Korean). One notable difference is that in Vietnamese the adjective comes after the noun, whereas in Chinese it comes before.

One peculiarity is that Vietnamese has no neutral, all-purpose pronouns for “I” and “you”. Instead, you have to work out your social relationship to the other party to decide which words to use. If I’m talking to an older man, I refer to him as anh (literally: older brother) and to myself as em (literally: younger sibling). These words change if I’m talking to a young woman, or a much older woman, and so on. You can imagine that this system is quite confusing for foreigners, so it’s acceptable to fall back on tôi, which unambiguously means “I”, even though native speakers rarely use it.

Written Vietnamese uses the Latin alphabet (a bit like Chinese Pinyin) and closely reflects the spoken language. Most letters are pronounced more or less the way you’d expect, but there are exceptions: for example, ‘gi’, ‘di’, and ‘ri’ all come out sounding like ‘zi’ (at least in the northern dialect).

In two weeks in Vietnam, I didn’t learn enough of the language to hold much of a conversation, but I knew enough for most of the common situations you encounter as a tourist, and could haggle over prices with fruit vendors and motorcycle taxi drivers. I also learned to tell the northern Hanoi dialect apart from the southern Saigon dialect (they’re mutually intelligible but have a few differences).

Malaysia

The final country on my trip was Malaysia, a culturally diverse place where ethnic Malays, Chinese, and Indians live side by side. The Malay language is frequently used for interethnic communication. I learned a few phrases of it, but didn’t need to use them much, because everybody I met spoke either English or Mandarin fluently.

Malaysia is a very multilingual country. Malaysian Chinese typically speak a southern Chinese dialect (one of Hokkien, Hakka, or Cantonese), plus Mandarin, Malay, and English. In Canada, it’s common to speak one or two languages; we can only dream of speaking four or five fluently, as many Malaysians do.

Rate of Language Learning

I kept a journal of new words in every language I was learning. Whenever somebody said a word I didn’t recognize, I would make a note of it, look it up later, and record it in my journal. When I wanted to say something but didn’t know the word for it, I would add that to my journal too. This way, I picked up vocabulary naturally, without having to memorize word lists.

Above: Tally of words learned in various languages

On average, I picked up 3-5 new words for every day I spent in a foreign country. At this rate, I should be able to read Harry Potter (~5000 unique words) after about 3 years.
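
For what it’s worth, here is the arithmetic behind that estimate (assuming, optimistically, that every future day looks like a travel day, and taking the midpoint of 3-5 words per day):

```python
unique_words = 5000   # rough unique-word count for Harry Potter, as above
words_per_day = 4     # midpoint of the 3-5 new words learned per day

days = unique_words / words_per_day
print(f"{days:.0f} days, or about {days / 365:.1f} years")  # 1250 days, or about 3.4 years
```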


That’s all for now. In September, I will be starting my master’s in Computational Linguistics; hopefully, studying all these random languages will be of some use.

With so much linguistic diversity, and with most people speaking little English, Asia is a great vacation spot for language nerds and aspiring polyglots!

Further discussion of this article on /r/languagelearning.