Hypothesis testing for difference in Pearson / Spearman correlations

The Pearson and Spearman correlation coefficients measure how strongly two variables are related. They’re useful as an evaluation metric in certain machine learning tasks where the model predicts some kind of score, but the actual value of the score is arbitrary and you only care that the model ranks high-scoring items above low-scoring items.

An example of this is the STS-B task in the GLUE benchmark: the task is to rate pairs of sentences on how similar they are, and it is evaluated using Pearson and Spearman correlations against the human ground truth. Now, if model A has a Spearman correlation of 0.55 and model B has 0.51, how confident are you that model A is actually better?

Recently, the NLP research community has advocated for more significance testing (Dror et al., 2018): report a p-value when comparing two models, to distinguish true improvements from fluctuations due to random chance. However, hypothesis testing is rarely done for the Pearson and Spearman metrics — it’s not mentioned in that hitchhiker’s guide, and it’s not supported by the standard ML libraries in Python and R. In this post, I describe how to do significance testing for a difference in Pearson / Spearman correlations, and give some references to the statistics literature.

Definitions and properties

The Pearson correlation coefficient is defined by:

r_{xy} = \frac{\sum_{i=1}^n (x_i - \bar{x}) (y_i - \bar{y})}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^n (y_i - \bar{y})^2}}

The Spearman rank-order correlation is defined as the Pearson correlation between the ranks of the two variables, so it measures the agreement in relative ordering between them. Both correlation coefficients range between -1 and 1.

Pearson’s correlation is simpler, has nicer statistical properties, and is the default option in most software packages. However, de Winter et al. (2016) argue that Spearman’s correlation works better with non-normal data and is more robust to outliers, so it is generally preferable to Pearson’s correlation.

Significance testing

Suppose we have the predictions of model A and model B, and we wish to compute a p-value for whether their Pearson / Spearman correlation coefficients are different. Start by computing the correlation coefficients for both models against the ground truth.

Then, apply the Fisher transformation to each correlation coefficient:

z = \frac{1}{2} \log(\frac{1+r}{1-r})

This transforms r, which lies between -1 and 1, into z, which ranges over the whole real line. It turns out that z is approximately normally distributed, with nearly constant variance that depends only on N (the number of data points) and not on r.

For the Pearson correlation, the standard deviation of the Fisher-transformed estimate z_p is approximately:

\mathrm{SD}(z_p) = \sqrt{\frac{1}{N-3}}

For the Spearman rank-order correlation, the standard deviation of the Fisher-transformed estimate z_s is approximately:

\mathrm{SD}(z_s) = \sqrt{\frac{1.060}{N-3}}

Now, we can compute the p-value because the difference of the two z values follows a normal distribution with known variance.

R implementation

The following R function computes a p-value for the two-tailed hypothesis test, given a ground truth vector and two model output vectors:

cor_significance_test <- function(truth, x1, x2, method="pearson") {
  n <- length(truth)
  cor1 <- cor(truth, x1, method=method)
  cor2 <- cor(truth, x2, method=method)
  # Fisher transformation of both correlation coefficients
  fisher1 <- 0.5*log((1+cor1)/(1-cor1))
  fisher2 <- 0.5*log((1+cor2)/(1-cor2))
  # Approximate SD of the Fisher-transformed estimate (Fieller et al., 1957)
  if (method == "pearson") {
    expected_sd <- sqrt(1/(n-3))
  } else if (method == "spearman") {
    expected_sd <- sqrt(1.060/(n-3))
  } else {
    stop("method must be 'pearson' or 'spearman'")
  }
  # Two-tailed p-value for the difference of the transformed correlations
  2*(1-pnorm(abs(fisher1-fisher2), sd=expected_sd))
}

Naturally, the one-tailed p-value is half of the two-sided one.
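
Since the standard Python libraries don’t provide this test either, here is a rough Python equivalent of the same computation — a minimal sketch using numpy and scipy that simply mirrors the R function above (the function name and arguments are my own conventions):

import numpy as np
from scipy import stats

def cor_significance_test(truth, x1, x2, method="pearson"):
    # Paired two-tailed test for a difference in correlation with the ground
    # truth, mirroring the R function above.
    n = len(truth)
    corr = stats.pearsonr if method == "pearson" else stats.spearmanr
    cor1 = corr(truth, x1)[0]
    cor2 = corr(truth, x2)[0]
    # Fisher transformation (arctanh is equivalent to 0.5 * log((1+r)/(1-r)))
    z1, z2 = np.arctanh(cor1), np.arctanh(cor2)
    # Approximate SD of the Fisher-transformed estimate
    sd = np.sqrt((1.0 if method == "pearson" else 1.060) / (n - 3))
    return 2 * (1 - stats.norm.cdf(abs(z1 - z2), scale=sd))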

For details of other similar computations involving Pearson and Spearman correlations (eg: confidence intervals, unpaired hypothesis tests), I recommend the Handbook of Parametric and Nonparametric Statistical Procedures (Sheskin, 2000).

Caveats and limitations

The approximation for the Pearson correlation is well established and very accurate. For Spearman, the constant 1.060 has no theoretical backing: it was derived empirically by Fieller et al. (1957) from simulations with variables drawn from a bivariate normal distribution. Fieller claimed that the approximation is accurate for correlations between -0.8 and 0.8, and Borkowf (2002) warns that it may be off if the distribution is far from bivariate normal.

The procedure here for Spearman correlation may not be appropriate if the correlation coefficient is very high (above 0.8) or if the data is not approximately normal. In that case, you might want to try permutation tests or bootstrapping methods — refer to Bishara and Hittner (2012) for a detailed discussion.
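
As a rough illustration of the resampling route (a sketch of my own, not code from the references), a paired bootstrap resamples the examples and recomputes the difference in Spearman correlations; if the resulting confidence interval excludes zero, the difference is significant at that level:

import numpy as np
from scipy import stats

def bootstrap_spearman_difference(truth, x1, x2, n_boot=10000, seed=0):
    # truth, x1, x2 are assumed to be 1-D numpy arrays of the same length.
    # Resample examples with replacement and recompute the difference in
    # Spearman correlations, giving an empirical distribution for the gap.
    rng = np.random.default_rng(seed)
    n = len(truth)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        diffs[b] = (stats.spearmanr(truth[idx], x1[idx])[0]
                    - stats.spearmanr(truth[idx], x2[idx])[0])
    observed = stats.spearmanr(truth, x1)[0] - stats.spearmanr(truth, x2)[0]
    # Observed difference and its 95% bootstrap confidence interval
    return observed, np.quantile(diffs, [0.025, 0.975])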

References

  1. Dror, Rotem, et al. “The hitchhiker’s guide to testing statistical significance in natural language processing.” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vol. 1. 2018.
  2. de Winter, Joost CF, Samuel D. Gosling, and Jeff Potter. “Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.” Psychological methods 21.3 (2016): 273.
  3. Sheskin, David J. “Parametric and nonparametric statistical procedures.” Chapman & Hall/CRC: Boca Raton, FL (2000).
  4. Fieller, Edgar C., Herman O. Hartley, and Egon S. Pearson. “Tests for rank correlation coefficients. I.” Biometrika 44.3/4 (1957): 470-481.
  5. Borkowf, Craig B. “Computing the nonnull asymptotic variance and the asymptotic relative efficiency of Spearman’s rank correlation.” Computational statistics & data analysis 39.3 (2002): 271-286.
  6. Bishara, Anthony J., and James B. Hittner. “Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.” Psychological methods 17.3 (2012): 399.

Why Time Management in Grad School is Difficult

Graduate students are often stressed and overworked; a recent Nature report states that grad students are six times more likely to suffer from depression than the general population. Although there are many factors contributing to this, I suspect that a lot of it has to do with poor time management.

In this post, I will describe why time management in grad school is particularly difficult, and some strategies that I’ve found helpful as a grad student.


As a grad student, I’ve found time management to be far more difficult than it was during my undergraduate years or while working in industry. Here are a few reasons why:

  1. Loose supervision: as a grad student, you have a lot of freedom over how you spend your time. There are no set hours, and you can go a week or more without talking to your adviser. This can be both a blessing and a curse: some find the freedom liberating, while others struggle to be productive. In contrast, in an industry job you’re expected to report to a daily standup and you get assigned tickets each sprint, so others essentially manage your time for you.
  2. Few deadlines: grad school is different from undergrad in that you have a handful of “big” deadlines a year (eg: conference submission dates, major project due dates), whereas in undergrad, the deadlines (eg: assignments, midterms) are smaller and more frequent.
  3. Sparse rewards: most of your experiments will fail. That’s the nature of research — if you knew it was going to work, it wouldn’t be research. It’s hard not to get discouraged when you struggle for weeks without a positive result, and easy to start procrastinating on a multitude of distractions.

Basically, poor time management leads to procrastination, stress, burnout, and generally having a bad time in grad school 😦


Some time management strategies that I’ve found to be useful:

  1. Track your time. When I first started doing this, I was surprised at how much time I spent doing random, half-productive stuff not really related to my goals. It’s up to you how to do this — I keep a bunch of Excel spreadsheets, but some people use software like Asana.
  2. Know your plan. My adviser suggested a hierarchical format with a long-term research agenda, medium-term goals (eg: submit a paper to ICML), and short-term tasks (eg: run X baseline on dataset Y). Then you know if you’re progressing towards your goals or merely doing stuff tangential to it.
  3. Focus on the process, not the reward. It’s tempting to celebrate when your paper gets accepted — but the flip side is that you’re going to be depressed if it gets rejected. Your research will have many failures: paper rejections and experiments that somehow don’t work. Instead, celebrate when you finish the first draft of your paper; reward yourself when you finish implementing an algorithm, even if it fails to beat the baseline.

Here, I plotted my productive time allocation in the last 6 months:

time_allocation.png

Most interestingly, only a quarter of my time is spent coding or running experiments, which seems to be much less than for most grad students. I read a lot of papers to try to avoid reinventing things that others have already done.

On average, I spend about 6 hours a day doing productive work (including weekends) — a quite reasonable workload of about 40-45 hours a week. Contrary to some perceptions, grad students don’t have to be stressed and overworked to be successful; allowing time for leisure and social activities is crucial in the long run.

MSc Thesis: Automatic Detection of Dementia in Mandarin Chinese

My master’s thesis is done! Read it here:

MSc Thesis (PDF)

Video

Slides

Talk Slides (PDF)

Part of this thesis is replicated in my paper “Detecting dementia in Mandarin Chinese using transfer learning from a parallel corpus” which I will be presenting at NAACL 2019. However, the thesis contains more details and background information that were omitted in the paper.

Onwards to PhD!

Books I’ve read in 2018

I read 28 books in 2018 (about one every 2 weeks). Recently, I’ve been getting into the habit of taking notes in the margins and writing down a summary of what I learned after finishing them.

This blog post is a more-or-less unedited dump of some of my notes on some of the books I read last year. They were originally notes for myself and weren’t meant to be published, so a lot of ideas aren’t very well fleshed out. Without further ado, let’s begin.


Understanding Thermodynamics by H. C. Van Ness

Understanding Thermodynamics (Dover Books on Physics)

Pretty short, 100-page book that gives an intuitive introduction to various topics in thermodynamics and statistical mechanics. It’s meant to be a supplementary text, not a main text, so some really important things were omitted, which was confusing to me since I had never studied this topic before. Some ideas I learned:

  • Energy can’t really be defined, since it’s not a physical property. You can only write it as a sum of a bunch of things and note that, within a closed system, the total always stays the same (the first law of thermodynamics).
  • A process is reversible if you can run it in reverse to get back to the initial state. No physical process is perfectly reversible, but the closer it is to reversible, the more efficient it is.
  • Heat engines convert a heat differential into work. Two types are the Otto cycle (used in cars) and the Carnot cycle. Surprisingly, heat engines cannot be perfectly efficient, even under ideal conditions; the Carnot limit puts an upper bound. A heat engine that perfectly converts heat into work violates the second law of thermodynamics.
  • Second law of thermodynamics says that entropy always increases; moreover, it increases for irreversible processes and remains the same for reversible processes. This is useful for determining when a “box of tricks” (taking in compressed air, outputting cold air at one end and hot air at the other end) is possible. The book doesn’t give much intuition about why the definition of entropy makes sense though, it literally tries random combinations of variables until one “works” (gives a constant value experimentally).
  • Second law of thermodynamics is merely an empirical observation, and can’t be proved. In fact, it can be challenged at the molecular level (eg: Maxwell’s demon) which isn’t easily refutable.
  • Statistical mechanics gives an alternate definition of entropy in terms of molecular states, and from it, you can derive various macroscopic properties like temperature and pressure. However, it only works well for ideal gases, and doesn’t quite explain or replace thermodynamics.

Indian Horse by Richard Wagamese

Indian Horse: A Novel

This book is about the life of an Ojibway Indian growing up in northern Ontario in the 60s. When he was young, he was sent to a residential school where he was badly treated and not allowed to speak his own language. He found hockey and got really good at it, but faced so much racism that he couldn’t really make it in the big leagues with white people. Later, he faced more racism in his job as a logger. Eventually, he developed an alcohol addiction out of this disillusionment, and only in the end comes to terms with his life.

Very interesting perspective on the indigenous people of Canada, a group that most of us don’t think about often. Despite numerous government subsidies, they’re still some of the poorest people in the country, with low education levels. Some people think it’s laziness, but they’ve had a history of mistreatment in residential schools and were subjected to racism until very recently, so it’s difficult for them to integrate into society. Their reserves are often a long distance from major population centers, which means very few opportunities. Furthermore, their culture doesn’t really value education. Overall, great read about a group currently marginalized in Canadian society.

The Power of Habit by Charles Duhigg

The Power of Habit: Why We Do What We Do in Life and Business

Book that discusses various aspects of how habits work. On a high level, habits have three components: cue, routine, and reward. The cue is a set of conditions, such that you automatically perform a routine in order to get a reward. After a while, you will crave the reward when given the cue, and perform the routine automatically (even if the reward is intermittent).

To change a habit, you can’t just force yourself not to do it, because you will constantly crave the reward. Instead, replace the routine with something else that gives a similar reward but is less harmful. Forcing yourself to do something against habit depletes your willpower, so it’s much better to change the habit, so you do it automatically and retain your willpower.

Large changes are often precipitated by a small “keystone” habit change that catalyzes a series of systemic changes. For example, Alcoa, an aluminum company, improved its overall efficiency when it decided to focus on safety. Sometimes a disaster is needed to bring about systemic change in an organization, like the fire in King’s Cross station or a hospital operating on the wrong side of a patient. Peer pressure is important too: for example, it’s a key component of Alcoholics Anonymous, and it helped carry the civil rights movement through.

Overall, a pretty interesting read, although there’s too much dramatic storytelling and anecdote; I would’ve preferred more scientific discussion.

Why We Sleep by Matthew Walker

Why We Sleep: Unlocking the Power of Sleep and Dreams

This book gives a comprehensive scientific overview of sleep. Although there are still many unanswered questions, there’s been a lot of research lately and this book sums it up.

Sleep is a very necessary function of life. Every living organism requires it, although in different amounts, and total lack of sleep very quickly leads to death. However, it’s still unclear exactly why sleep is so important.

There are two types of sleep: REM (rapid eye movement) and NREM sleep. REM sleep is a much lighter form of sleep where you’re closer to the awake state, and is also when you dream; NREM is a much deeper sleep. You can distinguish the type of sleep easily by measuring brain waves.

Sleep deprivation is really bad. You don’t even need total deprivation: even six hours of sleep a day for a few nights is as bad as pulling an all-nighter. When you’re sleep deprived, you’re a lot worse at learning things and controlling your emotions, and you’re also more likely to get sick and more susceptible to cancer.

Dreams aren’t that well understood, but they seem to consolidate memories, including moving them from short term to long term storage. REM sleep especially lets your brain find connections between different ideas, and you’re better at problem solving immediately after.

Insomnia is a really common problem in our society, in part because society is structured to encourage sleeping less. Sleeping pills are ineffective at best (prescription ones like Ambien and benzodiazepines are actually really harmful); the recommended treatment is behavioral: keep a regular sleep schedule, avoid caffeine, nicotine, and alcohol, don’t take naps, and avoid light in the bedroom.

My parents always told me it’s bad to stay up so late, but science doesn’t really support this. Different people have different chronotypes, which are determined by genetics (and change somewhat with age). It’s okay to sleep really late, as long as you maintain a consistent sleep schedule.

Overall I learned a lot from this book but it’s a fairly dense read, with lots of information about different topics, and it took me over a month to finish it.

Notes from the Underground by Fyodor Dostoyevsky

Notes From The Underground

I read this Dostoyevsky book because it had an interesting plot about a man who tries to rescue a prostitute. It turns out that the rescue is not really the central event of the book, but I found it quite interesting nevertheless. The novella is short (about 90 pages), unlike Dostoyevsky’s other books, which are super long. It explores a lot of philosophical and psychological ideas in an interesting setting.

The unnamed narrator is a man from the “underground” — he is some kind of civil servant, middle aged, and has health problems. He rejects the idea that man must do the rational thing, as then he is like a machine. He rejoices in doing stupid things from time to time, just because he feels like it, then he can retain some of his humanity. In the second part of the book, the narrator feels like he is not seen as equal by his peers, and goes to extreme lengths to remedy it. He forcefully invites himself to a dinner party with old friends, and is dismayed that his social status is so low that he’s just ignored. He would much rather have a fight than be ignored, and tries to provoke a fight in an autistic manner. Later he meets a prostitute Liza, whom he offers to save. However, when she actually shows up at his place, he is stuck in his own world and lectures to her about the virtues of morality, without actually helping her.

The narrator feels surreal, as if he values social acceptance to an extreme degree. After all, the narrator is materially well-off; he is at least rich enough to hire a servant. However, as long as he feels inferior to his peers, he is frustrated. Also, the more he tries to gain respect from his peers, the more his efforts backfire and lower his position in their eyes. Social recognition isn’t something you should pursue directly.

Factfulness by Hans Rosling

Factfulness: Ten Reasons We're Wrong About the World--and Why Things Are Better Than You Think

This book was written by Hans Rosling (the same guy that made The Joy of Stats documentary) just before he died in 2017. It uses stats to show that despite what the media portrays, and despite popular conception, the world is not such a bad place. Extreme poverty is on the decline, children are being vaccinated, women are going to school.

At the beginning of the book, he gives a quiz of 13 questions. Most people score terribly, worse than random chance, by consistently guessing that the world is worse than it actually is. Without looking at stats, it’s easy to be systematically misled and fall into a bunch of fallacies, like ignoring the magnitude of effects, generalizing your experience to others, or acting based on fear. Maybe because of my stats background, a lot of what he says is quite obvious to me; also, I scored 9 on the quiz, which is higher than pretty much everyone. The book confirmed some stuff that I already knew, but it still had good insights on poverty and developing nations.

A big takeaway for me is to be thankful for what we have, seeing the difference between lives on levels 1-3. Canada is a level 4 country (where people spend more than $32 a day), yet people make fun of me for making 20k/year “poverty” grad school wages. Grad students in Canada should be thankful that we have electricity and running water and can eat out at restaurants, not sad that we can’t afford luxury cars and condos.

Sky Burial by Xinran Xue

Sky Burial by Xinran (2005) Paperback

In this novel, a Chinese woman, Shu Wen from Suzhou, travels to Tibet to search for her missing husband. This is in 1958, after the Chinese Communist Party annexed Tibet. On the way there, she picks up a Tibetan woman, Zhuoma. They get into trouble in the mountains and meet a Tibetan family, and gradually Wen integrates into Tibetan culture and learns the language and customs. Time goes by quickly, and before you realize it, 30 years have passed with practically no information from the outside world. In the end, Wen does find out what happened to her husband through his diaries, but it’s a bittersweet sort of ending, as her world has changed unrecognizably and her husband is dead.

The author makes it ambiguous whether this is a work of fiction or whether it actually happened — all the facts seem believable, other than somehow not finding out about the Great Famine and the Cultural Revolution for decades. A lot of interesting Tibetan customs are explained: their nomadic lifestyle, polyandrous family structure, Buddhist religious beliefs, and their practice of sky burial, which lets vultures eat their dead. The relationship between the Chinese and the Tibetans has always been contentious, and in this book the characters form a connection of understanding between the two ethnic groups.

Tibet seems like a really interesting place that I should visit someday. However, it’s unclear how much of their traditional culture is still accessible, due to the recent Han Chinese migrations. Also, it’s currently impossible to travel freely in Tibet without a tour group if you’re not a Chinese citizen.

Getting to YES by Fisher, Ury, and Patton

Getting to Yes: Negotiating Agreement Without Giving In

This book tells you how to negotiate more effectively. A common negotiating mistake is positional negotiation, in which each side picks an arbitrary position (eg: buy the car for $5000) and goes back and forth until you’re tired and agree, or until you both walk out. Positional negotiation is highly arbitrary and often leads to no agreement, which is bad for both parties.

Some ways to negotiate in a more principled way:

  • Empathize with the other party: get to know them and their values, and treat the negotiation as both parties working against a common problem rather than you trying to “win”.
  • Focus on interests, rather than positions. During the negotiation, figure out what each party really wants; sometimes, it’s possible to give them something that’s valuable for them but you don’t really care about. Negotiation is a nonzero sum game, so try to find creative solutions that fulfill everybody’s interests, rather than fight over a one-dimensional figure.
  • When creative solutions are not possible (both sides just want money), defer to objective measures like industry standards. This gives you both an anchor to use, rather than negotiating in a vacuum.
  • Be aware of your and the other party’s BATNA: best alternative to negotiated agreement. This determines who holds more power in a negotiation, and improving it is a good way to get more leverage.

Trump: A Graphic Biography by Ted Rall

Trump: A Graphic Biography

A biography of Trump in graphic novel format. This book was written after Trump won the Republican primaries (May 2016) but before he won the presidency (Nov 2016).

First, the book describes the political and economic circumstances that led to Trump coming into power. After the 2008 financial crisis, many low-skilled Americans felt like there was little economic opportunity for them. Many politicians had come and gone, promising change, but nothing happened. For them, Trump represented a change from the political establishment. They didn’t necessarily agree with all of his policies, they just wanted something radical.

Trump was born after WW2 to a wealthy family in New York City. He studied economics and managed a real estate empire for a few decades, which made him a billionaire. Through his deals in real estate, he proved himself a cunning and ruthless negotiator, willing to behave unethically and use deception to get what he wanted.

This was a good read because most of my friend group just thinks Trump is “stupid” and that everyone who voted for him is stupid, and I never really understood why he was so popular with the other demographic. As a biography, the graphic novel format works well because it’s much shorter; most other biographies go into far more detail about a single person’s life than I care to know.

12 Rules for Life by Jordan Peterson

Jordan Peterson’s new book quickly hit #1 on the bestseller lists after being released this year. He’s famous around UofT for speaking out against social justice warriors, but I later found out that he has a lot of YouTube videos on the philosophy of how to live your life. This book summarizes a lot of those ideas in the form of 12 “rules” to live by, in order to live a good and meaningful life.

These ideas are the most interesting and novel to me:

  • Dominance hierarchy: humans (especially men) instinctively place each other on a hierarchy, where the person at the top has all the power and status, and gets all the resources. Women want to date guys near the top of the hierarchy, and men near the top get many women easily while men at the bottom can’t even find one. Therefore, it’s essential to rise to the top of the dominance hierarchy.
  • Order and chaos: order is the part of the world that we understand, that behaves according to rules; chaos is the unknown, risk, failure. To live a meaningful life is to straddle the boundary between order and chaos, and have a little bit of both.
  • When raising children, it’s the parents’ responsibility to teach them how to behave properly and follow social norms, because otherwise society will treat them harshly, and this will snowball into social isolation later in life. Also, they should be encouraged to do risky things (within reason) to explore and develop their masculinity.

Some of the other rules are more obvious. Examples include: be truthful to yourself, choose your friends wisely, improve yourself incrementally rather than comparing yourself to others, confront issues quickly as they arise. I guess depending on your personality and prior experience, you might find a different subset of these rules to be obvious.

Initially, I found JP to be obnoxious because of the lack of scientific rigour in his arguments; he just seems convincing because he’s well-spoken. The book does a slightly better job than the videos of substantiating the arguments and citing psychology research papers. JP also has a tendency to cite literature; when he goes into stuff like Bible archetypes of Christ, or Cain and Abel, I have no idea what he’s talking about anymore. The book felt a bit long. Overall it was still a good read; I learned a lot from the book and from diving deeper into the psychology papers he cited.

Analects by Confucius

The Analects of Confucius: A Philosophical Translation (Classics of Ancient China)

The Analects (论语) is a book of philosophy by Confucius that lays the groundwork for much of Chinese thinking over the following 2500 years. It’s the second book of ancient Chinese literature I’ve read, after the Art of War. It’s written in a somewhat different style — it has 20 chapters of varying lengths, but the chapters aren’t really organized by topic and the writing jumps around a lot.

Confucius tells you how to live your life not by appealing to religion, but rather by describing characteristics that he considers “good”, with examples of what is and isn’t considered good. A few recurring ideas:

  • junzi 君子 – exemplary person. The ideal, wise person that we should strive to be. A junzi strives to be excellent (德) and honorable (信), and not to be arrogant, greedy, or materialistic. He seeks knowledge, respects elders, is not afraid to speak up, and conducts himself authoritatively.

  • li 礼 – ritual propriety. The idea that there are certain “rituals” that society observes, and that if a leader respects them, then things will go smoothly. Kind of like the “meta” in games — modern examples would be the employer/employee relationship, or in what situations you shake hands with someone.

  • xiao 孝 – filial responsibility. A son must respect his parents and take care of them in old age, and mourn them for three years after their death (since for three years after birth, a child is helpless without his parents).

  • haoxue 好学 – love of learning for the sake of learning

  • ren 仁 – authoritative conduct / benevolence / humanity. Basically, a leader should conduct himself in a responsible manner, and be fair yet firm.

  • dao 道 – the way. One should forge one’s own path through life.

An obvious question is why we should listen to Confucius when there’s no appeal to a higher power (like the Bible) and no attempt to axiomatize everything. I don’t really know, but many Chinese have studied this book and lived their lives according to its principles, so by studying it, we can better understand how the Chinese think.

I feel like the Analects tells us how an ideal Chinese person is “supposed” to think, but modern Chinese people are very much the opposite: generally materialistic, competitive, and preoccupied with comparing themselves to the people around them. A friend said much of what is written here is “obvious” to any Chinese person — but then why don’t they actually follow it? I guess modern Chinese society is very unequal, and one must be competitive to rise to the top and prosper. So the cynical answer is that recent economic forces override a thousand-year-old philosophy: the Analects describes the ideal, but the ideal falls apart when push comes to shove.

The Analects is a very thought-provoking book. It’s surprising how many things Confucius said 2500 years ago are still true today. I probably missed a lot on my first pass through it — but this is a good starting point for further reading on Chinese philosophy and literature.

Pachinko by Min Jin Lee

Pachinko (National Book Award Finalist)

Pachinko is the name of a Japanese pinball game in which you watch metal balls tumble through a machine. It’s also the name of this novel, which traces a Korean family in Japan through four generations (Yangjin/Hoonie/Hansu -> Sunja/Isak -> Noa/Mozasu -> Solomon/Phoebe). Sunja is the first generation to immigrate to Japan, during the 1930s, after being tricked by a rich man who got her pregnant. Afterwards, the family makes its livelihood in Japan, but they are always considered outsiders, despite being in the country for generations.

Coming from Canada, which is so multicultural and accepting of people from other places, it’s surprising to see so much racism towards Koreans in Japan. Japan is very different: even after four generations in Japan, a Korean boy is still considered a guest and must register with the government every few years or risk deportation. The Koreans in Japan can’t work the same jobs as the Japanese, can’t legally rent property, and get bullied at school, so they end up working in pachinko parlors, which the Japanese consider “dirty”. All the Korean men (Mozasu, Noa, and Solomon) end up working in pachinko, hence the name of the book.

One thing that struck me was how many of the characters valued idealism more than rationality. Yoseb doesn’t want his wife to go out to work because he considers it improper. Sunja and Noa don’t want to accept Hansu’s help because of shame, even though they could have benefited a lot materially. All the Christians have this sort of idealist irrationality, which I guess is part of being religious — only Hansu behaves in a way that makes sense to me. The book gets a bit slow towards the end, as there are too many minor characters, but it is overall a thought-provoking read about racism in Japanese society.

Visual Intelligence by Amy Herman

Visual Intelligence: Sharpen Your Perception, Change Your Life

This book uses art to teach you to notice your surroundings more, which is very interesting. The basic premise is that there are a lot of things we miss that can be quite important. The two biggest ideas in this book for me:

  1. Train yourself to be more visually perceptive by looking at art and trying to notice every detail. This seems trivial, but we often miss things. Then do the same thing in the real world, and you’ll see things in a different way.

  2. Our experiences shape how we perceive things, so it’s important to describe things objectively rather than subjectively. Don’t make assumptions; rather, describe only the facts of what you see. From a picture you can’t infer that a person is “homeless”, only that he’s “lying on a street next to a shopping cart”.

Memoirs of a Geisha by Arthur Golden

Memoirs of a Geisha (Vintage Contemporaries)

This novel tells the story of the geisha Sayuri, from her childhood until her death. It pretends to be a real memoir, but it’s written by an American man. The facts are thoroughly researched, so we get a feel for what Kyoto was like before the war.

Essentially, society in Japan was very unequal — the women have to go through elaborate rituals and endure a lot of suffering to please the men, who simply have a lot of money. However, even without formal power, geishas like Mameha and Hatsumomo construct elaborate schemes of deceit and trickery.

The plot was exciting to read, but certain characters felt flat. Sayuri’s decades-long infatuation with the chairman doesn’t seem believable — maybe I would’ve had a crush like that as a teenager, but certainly a woman in her late 20s should know better. Hatsumomo’s degree of evilness didn’t seem convincing either.

Lastly, having read some novels by actual Japanese authors, this book feels nothing like them. Japanese literature is a lot more mellow, and the characters more reserved: certainly nobody would act in such an obviously evil manner. Japanese novels also typically have themes of loneliness and isolation and end with people committing suicide, which doesn’t happen in this novel either.

 

Deep Learning for NLP: SpaCy vs PyTorch vs AllenNLP

Deep neural networks have become really popular nowadays, producing state-of-the-art results in many areas of NLP, like sentiment analysis, text summarization, question answering, and more. In this blog post, we compare three popular deep learning frameworks for NLP (SpaCy, PyTorch, and AllenNLP): their advantages, disadvantages, and use cases.

SpaCy

Pros: easy to use, very fast, ready for production

Cons: not customizable, internals are opaque

spacy_logo.jpg

SpaCy is a mature, batteries-included framework that comes with prebuilt models for common NLP tasks like classification, named entity recognition, and part-of-speech tagging. It’s very easy to train a model with your own data: all the gritty details like tokenization and word embeddings are handled for you. SpaCy is written in Cython, which makes it faster than a pure Python implementation, so it’s ideal for production.

The design philosophy is that the user should only worry about the task at hand, and not the underlying details. If a newer and more accurate model comes along, SpaCy can update itself to use the improved model, and the user doesn’t need to change anything. This is good for getting a model up and running quickly, but leaves little room for an NLP practitioner to customize the model when the task doesn’t exactly match one of SpaCy’s prebuilt models. For example, you can’t build a classifier that takes text, numerical, and image data at the same time to produce a classification.
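
To give a sense of how little code a standard task takes, here is a minimal text-classification sketch using the SpaCy v2 API of the time (the labels and toy training data are made up; check the SpaCy docs for the exact API of your version):

import random
import spacy

# Toy example: train a text classifier with two made-up labels.
nlp = spacy.blank("en")
textcat = nlp.create_pipe("textcat")
nlp.add_pipe(textcat)
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")

train_data = [
    ("I loved this movie", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}),
    ("This was a waste of time", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}),
]

optimizer = nlp.begin_training()
for epoch in range(10):
    random.shuffle(train_data)
    losses = {}
    for text, annotations in train_data:
        nlp.update([text], [annotations], sgd=optimizer, losses=losses)

print(nlp("An enjoyable film").cats)  # predicted label probabilities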

PyTorch

Pros: very customizable, widely used in deep learning research

Cons: fewer NLP abstractions, not optimized for speed

pytorch_logo.jpeg

PyTorch is a deep learning framework by Facebook, popular among researchers for all kinds of DL models, like image classifiers, deep reinforcement learning, and GANs. It uses a clear and flexible design where the model architecture is defined in straightforward Python code (rather than TensorFlow’s computational graph design).

NLP-specific functionality, like tokenization and managing word embeddings, is available in torchtext. However, PyTorch is a general-purpose deep learning framework with relatively few NLP abstractions compared to SpaCy and AllenNLP, which are designed specifically for NLP.
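
For example, a toy text classifier in PyTorch might look like this (a sketch of my own, not taken from any particular tutorial); the architecture is just ordinary Python code:

import torch
import torch.nn as nn

class ToyTextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # hidden: (1, batch, hidden_dim)
        return self.fc(hidden.squeeze(0))      # (batch, num_classes)

model = ToyTextClassifier(vocab_size=10000, embed_dim=100, hidden_dim=256, num_classes=2)
logits = model(torch.randint(0, 10000, (8, 20)))  # a batch of 8 sequences of length 20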

AllenNLP

Pros: excellent NLP functionality, designed for quick prototyping

Cons: not yet mature, not optimized for speed

allennlp_logo.jpg

AllenNLP is built on top of PyTorch and designed for rapid prototyping of NLP models for research purposes. It supports a lot of NLP functionality out of the box, like text preprocessing and character embeddings, and it abstracts away the training loop (whereas in PyTorch you have to write the training loop yourself). AllenNLP is not yet at a stable 1.0 release, but it looks very promising.

Unlike PyTorch, AllenNLP’s design decouples what a model “does” from the architectural details of “how” it’s done. For example, a Seq2VecEncoder is any component that takes a sequence of vectors and outputs a single vector. You can use GloVe embeddings and average them, or you can use an LSTM, or you can put in a CNN. All of these are Seq2VecEncoders so you can swap them out without affecting the model logic.
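
Here is a rough sketch of what that swap looks like in code (module names as of the AllenNLP 0.x releases; treat the exact signatures as assumptions and check the docs): any of these encoders can be handed to the same model in place of another.

import torch
from allennlp.modules.seq2vec_encoders import (
    BagOfEmbeddingsEncoder, CnnEncoder, PytorchSeq2VecWrapper)

# Three interchangeable ways to collapse a sequence of 100-dim word vectors
# into a single vector; the rest of the model doesn't care which one is used.
boe_encoder = BagOfEmbeddingsEncoder(embedding_dim=100, averaged=True)
cnn_encoder = CnnEncoder(embedding_dim=100, num_filters=64)
lstm_encoder = PytorchSeq2VecWrapper(
    torch.nn.LSTM(input_size=100, hidden_size=256, batch_first=True))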

The talk “Writing code for NLP Research” presented at EMNLP 2018 gives a good overview of AllenNLP’s design philosophy and its differences from PyTorch.

Which is the best framework?

It depends on how much you care about flexibility, ease of use, and performance.

  • If your task is fairly standard, then SpaCy is the easiest to get up and running. You can train a model using a small amount of code, you don’t have to think about whether to use a CNN or RNN, and the API is clearly documented. It’s also well optimized to deploy to production.
  • AllenNLP is the best for research prototyping. It supports all the bells and whistles that you’d include in your next research paper, and encourages you to follow the best practices by design. Its functionality is a superset of PyTorch’s, so I’d recommend AllenNLP over PyTorch for all NLP applications.

There are a few runners-up that I will mention briefly:

  • NLTK / Stanford CoreNLP / Gensim are popular libraries for NLP. They’re good libraries, but they don’t do deep learning, so they can’t be directly compared here.
  • Tensorflow / Keras are also popular for research, especially for Google projects. Tensorflow is the only framework supported by Google’s TPUs, and it also has better multi-GPU support than PyTorch. However, multi-GPU setups are relatively uncommon in NLP, and its computational graph model is harder to debug than PyTorch’s, so I don’t recommend it for NLP.
  • PyText is a new framework by Facebook, also built on top of PyTorch. It defines a network using pre-built modules (similar to Keras) and supports exporting models to Caffe2 to run faster in production. However, it’s very new (only released earlier this month) and I haven’t worked with it enough to form an opinion about it yet.

That’s all, let me know if there’s any that I’ve missed!

The Ethics of (not) Tipping at Restaurants

A customer finishes a meal at a restaurant. He gives a 20-dollar bill to the waiter, and the waiter returns with some change. The customer proceeds to pocket the change in its entirety.

“Excuse me, sir,” the waiter interrupts, “but the gratuity has not been included in your bill.”

The customer nods and calmly smiles at the waiter. “Yes, I know,” he replies. He gathers his belongings and walks out, indifferent to the astonished look on the waiter’s face.

notip.png

This fictional scenario makes your blood boil just thinking about it. It evokes a feeling of unfairness: a shameless, rude customer has cheated an innocent, hardworking waiter out of his well-deserved money. Few situations provoke such a strong emotional response yet remain perfectly legal.

There are compelling reasons not to tip. On an individual level, you can save 10-15% on your meal. On a societal level, economists have criticized tipping for its discriminatory effects. Yet we still do it. Why?

In this blog post, we look at some common arguments in favor of tipping and see that they may not hold up to scrutiny. Then, we examine the morality of refusing to tip under several ethical frameworks.

Arguments in favor of tipping (and their rebuttals)

Here are four common reasons for why we should tip:

  1. Tipping gives the waiter an incentive to provide better service.
  2. Waiters are paid less than minimum wage and need the money.
  3. Refusing to tip is embarrassing: it makes you lose face in front of the waiter and your colleagues.
  4. Tipping is a strong social norm and violating it is extremely rude.

I’ve ordered these arguments from weakest to strongest. These are good reasons, but I don’t think any of them definitively settles the argument. I argue that the first two are factually inaccurate, and for the last two, it’s not obvious why the end effect is bad.

Argument 1: Tipping gives the waiter an incentive to provide better service. Since the customer tips at the end of the meal, the waiter does a better job to make him happy, so that he receives a bigger tip.

Rebuttal: The evidence for this is dubious. One study concluded that service quality has at most a modest correlation with how much people tip; many other factors affected tipping, like group size, day of week, and amount of alcohol consumed. Another study found that waitresses earned more tips from male customers if they wore red lipstick. The connection between good service and tipping is sketchy at best.

Argument 2: Waiters are paid less than minimum wage and need the money. In many parts of the USA, waiters earn a base rate of about $2 an hour and must rely on tips to survive.

Rebuttal: This is false. In Canada, all waiters earn at least minimum wage. In the USA, the base rate for waiters is less than minimum wage in some states, but restaurants are required to pay the difference if they make less than minimum wage after tips.

You may argue that restaurant waiters are poor and deserve more than minimum wage. I find this unconvincing, as there are lots of service workers (cashiers, janitors, retail clerks, fast food workers) who do strenuous labor and make minimum wage, and we don’t tip them; I don’t see why waiters are an exception. Arguably, Uber drivers are the most deserving of tips, since they make less than minimum wage after accounting for costs, but tipping is optional and not expected for Uber rides.

Argument 3: Refusing to tip is embarrassing: it makes you lose face in front of the waiter and your colleagues. You may be treated badly the next time you visit the restaurant and the waiter recognizes you. If you’re on a date and you get confronted for refusing to tip, you’re unlikely to get a second date.

Rebuttal: Indeed, the social shame and embarrassment is a good reason to tip, especially if you’re dining with others. But what if you’re eating by yourself in a restaurant in another city that you will never go to again? Most people will still tip, even though the damage to your social reputation is minimal. So it seems that social reputation isn’t the only reason for tipping.

It’s definitely embarrassing to get confronted for not tipping, but it’s not obvious that being embarrassed is bad (especially if the only observer is a waiter who you’ll never interact with again). If I give a public speech despite feeling embarrassed, then I am praised for my bravery. Why can’t the same principle apply here?

Argument 4: Tipping is a strong social norm and violating it is extremely rude. Stiffing a waiter is considered rude in our society, even if no physical or economic damage is done. Giving the middle finger is also offensive, despite no clear damage being done. In both cases, you’re being rude to an innocent stranger.

Rebuttal: Indeed, the above is true. A social norm is a convention whose violation people consider rude. The problem is the arbitrariness of social norms. Is it always bad to violate a social norm, or can the social norm itself be wrong?

Consider that only a few hundred years ago, slavery was commonplace and accepted. In medieval societies, religion was expected and atheists were condemned, and in other societies, women were considered property of their husbands. All of these are examples of social norms; all of these norms are considered barbaric today. It’s not enough to justify something by saying that “everybody else does it”.

Tipping under various ethical frameworks

Is it immoral not to tip at restaurants? We consider this question under the ethical frameworks of ethical egoism, utilitarianism, Kant’s categorical imperative, social contract theory, and cultural relativism.

trolley.png

Above: The trolley problem, often used to compare different ethical frameworks, but unlikely to occur in real life. Tipping is a more quotidian situation in which to apply ethics.

1) Ethical egoism says it is moral to act in your own self-interest. The most moral action is the one that is best for yourself.

Clearly, it is in your financial self-interest not to tip. However, the social stigma and shame create negative utility, which may or may not outweigh the money saved by not tipping. This depends on the individual. Verdict: Maybe OK.

2) Utilitarianism says the moral thing to do is maximize the well-being of the greatest number of people.

Under utilitarianism, you should tip if the money benefits the waiter more than it would benefit you. This is difficult to answer, as it depends on many things, like your relative wealth compared to the waiter’s. Again, subtract some utility for the social stigma and shame if you refuse to tip. Verdict: Maybe OK.

3) Kant’s categorical imperative says that an action is immoral if the goal of the action would be defeated if everyone started doing it. Essentially, it’s immoral to gain a selfish advantage at the expense of everyone else.

If everyone refused to tip, then the prices of food in restaurants would universally go up to compensate, which negates the intended goal of saving money in the first place. Verdict: Not OK.

4) Social contract theory says that morality is the set of rules that a society of free, rational people would agree to obey in order to benefit everyone. The rules exist to prevent tragedy-of-the-commons scenarios, where the system would collapse if everyone behaved selfishly.

There is no evidence that tipping makes a society better off. Indeed, many societies (eg: China, Japan) don’t practice tipping, and their restaurants operate just fine. Verdict: OK.

5) Cultural relativism says that morals are determined by the society that you live in (ie, social norms). There is a strong norm in our culture that tipping is obligatory in restaurants. Verdict: Not OK.

Conclusion

In this blog post, we have considered a number of arguments for tipping and examined the question under several ethical frameworks. Stiffing the waiter is a legal way of saving some money when eating out. No single argument shows it’s definitely wrong to do this, and some ethical frameworks consider it acceptable while others don’t. This is often the case in ethics when you’re faced with complicated topics.

However, refusing to tip has several negative effects: the rudeness of violating a strong social norm, the embarrassment for yourself and your colleagues, and potential social backlash. Furthermore, it violates some ethical systems. Therefore, one should consider whether saving 10-15% at restaurants by not tipping is really worth it.

I trained a neural network to describe images, then I gave it dementia

This blog post is a summary of my work from earlier this year: Dropout during inference as a model for neurological degeneration in an image captioning network.

For a long time, deep learning has had an interesting connection to neuroscience. The artificial neuron in neural networks was inspired by early models of the biological neuron. Later, convolutional neural networks were inspired by the structure of neurons in the visual cortex. Many other models also drew inspiration from how the brain functions, like visual attention, which mimics how humans look at different areas of an image when interpreting it.

The connection was always loose and superficial, however. Despite advances in neuroscience that produced better models of neurons, these never really caught on among deep learning researchers, and real neurons obviously don’t learn by gradient back-propagation and stochastic gradient descent.

In this work, we study how human neurological degeneration can have a parallel in the universe of deep neural networks. In humans, neurodegeneration can occur by several mechanisms, such as Alzheimer’s disease (which affects connections between individual neurons) or stroke (in which large sections of brain tissue die). The effect of Alzheimer’s disease is dementia, where language, motor, and other cognitive abilities gradually become impaired.

To simulate this effect, we give our neural network a sort of dementia, by interfering with connections between neurons using a method called dropout.

robot_apocalypse.jpg

Yup, this probably puts me high up on the list of humans to exact revenge on in the event of an AI apocalypse.

The Model

We started with an encoder-decoder style image captioning neural network (described in this post), which looks at an image and outputs a sentence that describes it. This is inspired by a picture description task that we give to patients suspected of having dementia: given a picture, describe it in as much detail as possible. Patients with dementia typically exhibit patterns of language different from healthy patients, which we can detect using machine learning.

To simulate neurological degeneration in the neural network, we apply dropout in inference mode: we randomly select a portion of the neurons in a layer and set their outputs to zero. Dropout is a common technique during training to regularize neural networks and prevent overfitting, but it’s usually turned off during evaluation for the best possible accuracy. To our knowledge, nobody had experimented with applying dropout at evaluation time in a language model before.
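
In PyTorch terms, this amounts to keeping the dropout layers in training mode while the rest of the network is in evaluation mode. A minimal sketch (with a toy model standing in for the actual captioning decoder):

import torch
import torch.nn as nn

# Toy stand-in for the caption decoder: one hidden layer with dropout.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(32, 8))

model.eval()  # normal inference: dropout would be disabled here
for module in model.modules():
    if isinstance(module, nn.Dropout):
        module.train()  # re-enable only the dropout layers, simulating damaged connections

with torch.no_grad():
    out = model(torch.randn(1, 16))  # outputs now vary from run to run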

We train the model using a small amount of dropout, then apply a larger amount of dropout during inference. Then, we evaluate the quality of the sentences produced by BLEU-4 and METEOR metrics, as well as sentence length and similarity of vocabulary distribution to the training corpus.

Results

When we applied dropout during inference, the accuracy of the captions (measured by BLEU-4 and METEOR) decreased as the amount of dropout increased. However, the vocabulary generated was more diverse, and the word frequency distribution was closer to that of the training set (measured by KL-divergence) when a moderate amount of dropout was applied.

metrics.png

When the dropout was too high, the model degenerated into essentially generating random words. Here are some examples of sentences that were generated, at various levels of dropout:

sample.png

Qualitatively, the effects of dropout seemed to cause two types of errors:

  • Caption starts out normally, then repeats the same word several times: “a small white kitten with red collar and yellow chihuahua chihuahua chihuahua”
  • Caption starts out normally, then becomes nonsense: “a man in a baseball bat and wearing a uniform helmet and glove preparing their handles won while too frown”

This was not that similar to speech produced by people with Alzheimer’s, but kind of resembled fluent aphasia (caused by damage to the part of the brain responsible for understanding language).

Challenges and Difficulties

Excited with our results, we submitted the paper to EMNLP 2018. Unfortunately, our paper was rejected. Despite the novelty of our approach, the reviewers pointed out that our work had some serious drawbacks:

  1. Unclear connection to neuroscience. Adding dropout at inference time has no connection to any biological model of what happens to the brain during atrophy.
  2. Only superficial resemblance to aphasic speech. A similar result could have been generated by sampling words randomly from a dictionary, without any complicated RNN models.
  3. Not really useful for anything. We couldn’t think of any situation, such as detecting aphasia, where this model would actually be useful.

We decided that there was no way around these roadblocks, so we scrapped the idea, put the paper up on arXiv and worked on something else.

For more technical details, refer to our paper: