What if the government hadn’t released any Coronavirus economic stimulus?

It is March 23, 2020. After a month of testing fiascos, COVID-19 is ravaging the United States, with 40,000 cases in the US and 360,000 worldwide. There is a growing sense of panic as cities begin to lock down. Market circuit breakers have triggered four times in quick succession, with the stock market losing 30% of its value in mere weeks. There’s no sign that the worst is over.

Above: Global Coronavirus stats on March 23, 2020, when the S&P 500 reached its lowest point during the pandemic (Source).

With businesses across the country closed and millions out of work, it’s clear that a massive financial stimulus is needed to prevent a total economic collapse. However, Congress is divided and unable to pass a relief bill. Even with urgent action needed, they squabble over minor details, refusing to compromise. The president denies that there’s any need for action. Both the Democrats and the Republicans are willing to do anything to prevent the other side from scoring a victory. The government is in gridlock.

Let the businesses fail, they say. Don’t bail them out; they took the risk when times were good, and now they reap what they sow. Let them go bankrupt, and punish the executives taking millions of dollars in bonuses. Let the free market do its job; after all, they can always start new businesses once this is all over.

April comes without any help from the government. Massive layoffs sweep across all sectors of the economy as companies see their revenues drop to a fraction of normal levels and shed employees to preserve cash. The retail and travel sectors are hit hardest, but soon all companies are affected, since people are hesitant to spend money. Unemployment skyrockets to levels even greater than during the Great Depression.

Without a job, millions of people miss their rent payments, instead saving their money for food and essential items. Restaurants and other small businesses shut down. When people and businesses cannot pay rent, their landlords cannot pay the mortgages that they owe to the bank. A few small banks go bankrupt, and Wall Street waits anxiously for a government bailout. But unlike 2008, the state is in a deadlock, and there is no bailout coming. In 2020, no bank is too big to fail.

Each bank that goes down takes another with it, until there is soon a cascading domino effect of bank failures. Everyone rushes to withdraw cash from their checking accounts before their bank collapses, which of course makes matters worse. Those too late to withdraw their cash lose their savings. This is devastating for businesses: even those that escaped the pandemic cannot escape systemic bank failure. Companies have no money in the bank to pay suppliers or make payroll, and thousands of companies go bankrupt overnight.

Across the nation, people are angry at the government’s inaction, and take to the streets in protest. Having depleted their savings, some rob and steal from grocery stores to avoid starvation. The government finally steps in and deploys the military to keep order in the cities. They arrange for emergency supplies, enough to keep everybody fed, but just barely.

The lockdown lasts a few more months, and the virus is finally brought under control. Everyone is free to go back to work, but there are no jobs to go back to. With the biggest corporations bankrupt, society has lost its complex network of dependencies and organizational knowledge. It takes only a day to lay off 100,000 employees, but rebuilding this structure from scratch will take decades.

A new president is elected, but it is too late: the damage has been done and cannot be reversed. The economy slowly recovers, but with less efficiency than before and with workers in less productive roles, and this loss of productivity means that everyone enjoys a lower standard of living. Five years later, the virus is long gone, but the economy is nowhere close to its original state. By then, China has emerged as the new dominant world power. The year 2020 goes down in history as a year of failure, when, through inaction, a temporary health crisis led to societal collapse.


In our present timeline, fortunately, none of the above actually happened. The Democrats and Republicans put aside their differences and, on March 25, swiftly passed a $2 trillion economic stimulus package. The stock market immediately rebounded.

There was a period in March when it seemed the government was in gridlock, and it wasn’t clear whether the US was politically capable of passing such a large stimulus bill. Was an economic collapse likely? Not really — no reasonable government would have allowed all of the banks to fail, so we would more likely have seen a recession than a total collapse. Banks did fail during the Great Depression, but macroeconomic theory was in its infancy back then, and there’s no way such mistakes would have been repeated today. Still, this is the closest we’ve come to an economic collapse in a long time, and it’s fun to speculate about what the consequences might have been.

Predictions for 2030

Now that it’s Jan 1, 2020, I’m going to make some predictions about what we will see in the next decade. By the year 2030:

  • Deep learning will be a standard tool and integrated into workflows of many professions, eg: code completion for programmers, note taking during meetings. Speech recognition will surpass human accuracy. Machine translation will still be inferior to human professionals.

  • Open-domain conversational dialogue (aka the Turing Test) will be on par with an average human, using a combination of deep learning and some new technique not available today. It will be regarded as more of a “trick” than strong AI; the bar for true AGI will be shifted higher.

  • Driverless cars will be in commercial use in a few limited scenarios. Most cars will have some autonomous features, but full autonomy will still not be widely deployed.

  • S&P 500 index (a measure of the US economy, currently at 3230) will double to between 6000 and 7000. Bitcoin will still exist, but its price will fall below 1000 USD (currently ~7000 USD).

  • Real estate prices in Toronto will either have a sharp fall or flatten out; overall increase in 2020-2030 period will not exceed inflation.

  • All western nations will have implemented some kind of carbon tax as political pressure from young people increases; no serious politician will suggest removing the carbon tax.

  • At age 35, about half of my Waterloo cohort will be married, but the majority will not have any kids.

  • China will overtake the USA as the world’s biggest economy, but its growth will slow down, and its per-capita GDP (PPP) will still be well below that of the USA.

Non-technical challenges of medical NLP research

Machine learning has recently made a lot of headlines in healthcare applications, like identifying tumors from images or personalizing treatments. In this post, I describe my experiences as a healthcare ML researcher: the difficulties of doing research in this field, as well as reasons for optimism.

My research group focuses on applications of NLP to healthcare. For a year or two, I was involved in a number of projects in this area (specifically, detecting dementia through speech). From my own projects and from talking to others in my research group, I noticed a few recurring difficulties in healthcare NLP research — things that rarely come up in other branches of ML. These are non-technical challenges that take up time and impede progress, and are generally considered not very interesting to solve. I’ll give some examples of what I mean.

Collecting datasets is hard. Any time you want to do anything involving patient data, you have to undergo a lengthy ethics approval process. Even with something as innocent as an anonymous online questionnaire, there is a mandatory review by an ethics board before the experiment is allowed to proceed. As a result, most datasets in healthcare ML are small: a few dozen patient samples is common, and you’re lucky to have more than a hundred samples to work with. This is tiny compared to other areas of ML where you can easily find thousands of samples.

In my master’s research project, where I studied dementia detection from speech, the largest available corpus had about 300 patients, and other corpora had fewer than 100. This constrained the types of experiments that were possible. Prior work in this area relied heavily on feature engineering, because it was commonly believed that you needed at least a few thousand examples to do deep learning; with less data than that, deep learning models would simply overfit.

Even after the data has been collected, it is difficult to share with others. This is again due to the conservative ethics processes required for sharing data. Data transfer agreements need to be reviewed and signed, and in some cases, data must remain physically on servers in a particular hospital. Researchers rarely open-source their code along with the paper, since there’s no point in doing so without giving access to the data; this makes it hard to reproduce any experimental results.

Medical data is messy. Data access issues aside, healthcare NLP has some of the messiest datasets in machine learning. Many datasets in ML are carefully constructed and annotated for the purpose of research, but this is not the case for medical data. Instead, the data comes from real patients and hospitals: clinical notes are full of shorthand abbreviations of medical terms written by doctors, which can mean different things depending on context. Unsurprisingly, many NLP techniques fail to work. Missing values and otherwise unreliable data are common, so a lot of not-so-glamorous data preprocessing is often needed.
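
To give a flavour of what this cleanup looks like in practice, here is a minimal sketch in pandas. The file, column names, and abbreviation list are all hypothetical; real clinical data is far messier than this.

```python
import pandas as pd

# Hypothetical clinical notes table (the file and columns are made up).
df = pd.read_csv("notes.csv")  # assumed columns: patient_id, age, note_text

# Expand a few shorthand abbreviations. In real notes, the same abbreviation
# can mean different things depending on the specialty, so a static map like
# this is only a crude first pass.
abbreviations = {"pt": "patient", "hx": "history", "sob": "shortness of breath"}

def expand(text):
    if pd.isna(text):
        return ""
    tokens = str(text).lower().split()
    return " ".join(abbreviations.get(tok, tok) for tok in tokens)

df["note_clean"] = df["note_text"].apply(expand)

# Missing or implausible values are common; coerce and filter rather than trust.
df["age"] = pd.to_numeric(df["age"], errors="coerce")
df = df[df["age"].isna() | df["age"].between(0, 120)]
```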


I’ve so far painted a bleak picture of medical NLP, but I don’t want to give off such a negative image of my field. In the second part of this post, I give some counter-arguments to the above points as well as some of the positive aspects of research.

On difficulties in data access. There are good reasons for caution — patient data is sensitive and real people can be harmed if the data falls into the wrong hands. Even after removing personally identifiable information, there’s still a risk of a malicious actor deanonymizing the data and extracting information that’s not intended to be made public.

The situation is improving though. The community recognizes the need to share clinical data, to strike a balance between protecting patient privacy and allowing research. There have been efforts like the relatively open MIMIC critical care database to promote more collaborative research.

On small / messy datasets. With every challenge comes an opportunity. In fact, my own master’s research was driven by a lack of data. I was trying to extend dementia detection to Chinese, but there wasn’t much data available. So I proposed a way to transfer knowledge from the much larger English dataset to Chinese, and got a conference paper and a master’s thesis out of it. If it weren’t for the lack of data, you could’ve just taken the existing algorithm and applied it to Chinese, which wouldn’t have been as interesting.

Also, deep learning in NLP has recently gotten a lot better at learning from small datasets. Other research groups have had some success on the same dementia detection task using deep learning. With new papers every week on few-shot learning, one-shot learning, transfer learning, etc, small datasets may not be too much of a limitation.

The same applies to messy data, missing values, label leakage, and so on. I’ll refer to this survey paper for the details, but the takeaway is that these shouldn’t be thought of as barriers, but as opportunities to make a research contribution.

In summary, as a healthcare NLP researcher, you have to deal with difficulties that other machine learning researchers don’t have. However, you also have the unique opportunity to use your abilities to help sick and vulnerable people. For many people, this is an important consideration — if this is something you care deeply about, then maybe medical NLP research is right for you.

Thanks to Elaine Y. and Chloe P. for their comments on drafts of this post.

NAACL 2019, my first conference talk, and general impressions

Last week, I attended my first NLP conference, NAACL, which was held in Minneapolis. My paper was selected for a short talk: 12 minutes, plus 3 minutes for questions. I presented my research on dementia detection in Mandarin Chinese, which I did during my master’s.

Here’s a video of my talk:

Visiting Minneapolis

Going to conferences is a good way for a grad student to travel for free. Some of my friends balked at the idea of going to Minneapolis rather than somewhere more “interesting”. However, I had never been there before, and in the summer, Minneapolis was quite nice.

Minneapolis is very flat and good for biking — you can rent a bike for $2 per 30 minutes. I took the light rail to Minnehaha Falls and biked along the Mississippi River to the city center. The downside is that, compared to Toronto, the food choices are quite limited: the majority of restaurants serve American food (burgers, sandwiches, pasta, etc).

Meeting people

It’s often said that most of the value of a conference happens in the hallways, not in the scheduled talks (which you can often find on YouTube for free). For me, this was a good opportunity to finally meet some of my previous collaborators in person. Previously, we had only communicated via Skype and email. I also ran into people whose names I recognized from reading their papers but had never seen in person.

Despite all the advances in video conferencing technology, nothing beats face-to-face interaction over lunch. There’s a reason why businesses spend so much money to send employees abroad to conduct their meetings.

Talks and posters

The accepted papers were split roughly 50-50 into talks and poster presentations. I preferred the poster format, because you get to have a 1-on-1 discussion with the author about their work, and ask clarifying questions.

Talks were a mixed bag — some were great, but for many it was difficult to make sense of anything. The most common problem was that speakers tended to dive into complex technical details and lost sight of the “big picture”. The better talks spent a good chunk of time covering the background and motivation, with lots of examples, before describing their own contribution.

It’s difficult to make a coherent talk in only 12 minutes. A research paper is inherently a very narrow and focused contribution, while the audience comes from all areas of NLP and has probably never seen your problem before. The organizers tried to group talks into related topics like “Speech” or “Multilingual NLP”, but even then, the subfields of NLP are so diverse that two randomly chosen papers often have very little in common.

Research trends in NLP

Academia has a notorious reputation for inventing impractically complex models to squeeze out a 0.2% improvement on a benchmark. This may be true in some areas of ML, but it certainly wasn’t the case here. There was a lot of variety in the problems people were solving. Many papers worked with new datasets, and even those using existing datasets often proposed new tasks that weren’t considered before.

A lot of papers used similar model architectures, like some sort of Bi-LSTM with attention, perhaps with a CRF on top. None of them are directly comparable to one another, because everybody is solving a different problem. I guess it shows the flexibility of Bi-LSTMs that they’re so widely applicable. For me, the papers that did something different (like applying quantum physics to NLP) really stood out.

Interestingly, many papers ran experiments with BERT, which was itself presented at this conference! Last October, the BERT authors bypassed the usual conventions and announced their results before peer review, so the NLP community had known about it for months, but only now was it officially presented at a conference.

Why Time Management in Grad School is Difficult

Graduate students are often stressed and overworked; a recent Nature report states that grad students are six times more likely to suffer from depression than the general population. Although there are many factors contributing to this, I suspect that a lot of it has to do with poor time management.

In this post, I will describe why time management in grad school is particularly difficult, and some strategies that I’ve found helpful as a grad student.


As a grad student, I’ve found time management to be far more difficult than it was during my undergraduate years or while working in industry. Here are a few reasons why:

  1. Loose supervision: as a grad student, you have a lot of freedom over how you spend your time. There are no set hours, and you can go a week or more without talking to your adviser. This can be both a blessing and a curse: some find the freedom liberating, while others struggle to be productive. In contrast, in an industry job you’re expected to report to daily standup and get assigned tickets each sprint, so others essentially manage your time for you.
  2. Few deadlines: grad school is different from undergrad in that you have a handful of “big” deadlines a year (eg: conference submission dates, major project due dates), whereas in undergrad, the deadlines (eg: assignments, midterms) are smaller and more frequent.
  3. Sparse rewards: most of your experiments will fail. That’s the nature of research — if you know it’s going to work, then it’s no longer research. It’s hard not to get discouraged when you struggle for weeks without a positive result, and easy to start procrastinating on a multitude of distractions.

Basically, poor time management leads to procrastination, stress, burnout, and generally having a bad time in grad school 😦


Some time management strategies that I’ve found to be useful:

  1. Track your time. When I first started doing this, I was surprised at how much time I spent doing random, half-productive stuff not really related to my goals. It’s up to you how to do this — I keep a bunch of Excel spreadsheets, but some people use software like Asana.
  2. Know your plan. My adviser suggested a hierarchical format with a long-term research agenda, medium-term goals (eg: submit a paper to ICML), and short-term tasks (eg: run X baseline on dataset Y). Then you know if you’re progressing towards your goals or merely doing stuff tangential to it.
  3. Focus on the process, not the reward. It’s tempting to celebrate when your paper gets accepted — but the flip side is you’re going to be depressed if it gets rejected. Your research will have many failures: paper rejections and experiments that somehow don’t work. Instead, celebrate when you finish the first draft of your paper; reward yourself when you finish implementing an algorithm, even if it fails to beat the baseline.

I plotted my productive time allocation over the last 6 months; here’s what I found.


Most interestingly, only a quarter of my time is spent coding or running experiments, which seems to be much less than for most grad students. I read a lot of papers to try to avoid reinventing things that others have already done.

On average, I spend about 6 hours a day doing productive work (including weekends) — a quite reasonable workload of about 40-45 hours a week. Contrary to some perceptions, grad students don’t have to be stressed and overworked to be successful; allowing time for leisure and social activities is crucial in the long run.

Deep Learning for NLP: SpaCy vs PyTorch vs AllenNLP

Deep neural networks have become really popular nowadays, producing state-of-the-art results in many areas of NLP, like sentiment analysis, text summarization, question answering, and more. In this blog post, we compare three popular deep learning frameworks for NLP (SpaCy, PyTorch, and AllenNLP): their advantages, disadvantages, and use cases.

SpaCy

Pros: easy to use, very fast, ready for production

Cons: not customizable, internals are opaque


SpaCy is a mature and batteries-included framework that comes with prebuilt models for common NLP tasks like classification, named entity recognition, and part-of-speech tagging. It’s very easy to train a model on your own data: all the gritty details like tokenization and word embeddings are handled for you. SpaCy is written in Cython, which makes it faster than a pure Python implementation, so it’s ideal for production.

The design philosophy is that the user should only worry about the task at hand, and not the underlying details. If a newer and more accurate model comes along, SpaCy can update itself to use the improved model, and the user doesn’t need to change anything. This is good for getting a model up and running quickly, but it leaves little room for an NLP practitioner to customize the model if the task doesn’t exactly match one of SpaCy’s prebuilt models. For example, you can’t build a classifier that takes text, numerical, and image data at the same time to produce a classification.
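
As a rough illustration (this snippet isn't from the original post), running named entity recognition with one of SpaCy's pretrained English models takes only a few lines; tokenization, embeddings, and the model itself are all hidden behind nlp().

```python
import spacy

# Assumes the small English model has been downloaded first:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Apple" ORG, "U.K." GPE, "$1 billion" MONEY
```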

PyTorch

Pros: very customizable, widely used in deep learning research

Cons: fewer NLP abstractions, not optimized for speed


PyTorch is a deep learning framework by Facebook, popular among researchers for all kinds of DL models, like image classifiers, deep reinforcement learning, and GANs. It uses a clear and flexible design where the model architecture is defined with straightforward Python code (rather than TensorFlow’s static computational graph design).

NLP-specific functionality, like tokenization and managing word embeddings, is available in torchtext. However, PyTorch is a general-purpose deep learning framework with relatively few NLP abstractions compared to SpaCy and AllenNLP, which are designed specifically for NLP.
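
For comparison, a bare-bones PyTorch text classifier might look something like the toy sketch below. The architecture really is just Python code, but the embedding, the recurrence, and the training loop (not shown) are all your responsibility.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(embedded)  # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1])            # logits: (batch, num_classes)

model = LSTMClassifier(vocab_size=10000, embed_dim=100, hidden_dim=128, num_classes=2)
logits = model(torch.randint(0, 10000, (4, 20)))  # a fake batch of 4 sequences
```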

AllenNLP

Pros: excellent NLP functionality, designed for quick prototyping

Cons: not yet mature, not optimized for speed


AllenNLP is built on top of PyTorch and designed for rapidly prototyping NLP models for research. It supports a lot of NLP functionality out of the box, like text preprocessing and character embeddings, and abstracts away the training loop (whereas in PyTorch you have to write the training loop yourself). Currently, AllenNLP is not yet at a 1.0 stable release, but it looks very promising.

Unlike PyTorch, AllenNLP’s design decouples what a model “does” from the architectural details of “how” it’s done. For example, a Seq2VecEncoder is any component that takes a sequence of vectors and outputs a single vector. You can use GloVe embeddings and average them, or you can use an LSTM, or you can put in a CNN. All of these are Seq2VecEncoders so you can swap them out without affecting the model logic.
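
To make the idea concrete, here is the Seq2VecEncoder contract sketched in plain PyTorch. This is only an illustration of the concept, not AllenNLP's actual classes or API.

```python
import torch
import torch.nn as nn

# The contract: take a sequence of vectors (batch, seq_len, dim) and
# return a single vector per example (batch, dim).

class AverageEncoder(nn.Module):
    def forward(self, vectors):        # (batch, seq_len, dim)
        return vectors.mean(dim=1)     # (batch, dim)

class LSTMEncoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, vectors):
        _, (h_n, _) = self.lstm(vectors)
        return h_n[-1]                 # final hidden state: (batch, dim)

# Because both obey the same contract, the rest of the model doesn't care
# which one it receives.
encoder = AverageEncoder()             # or LSTMEncoder(dim=100)
sentence_vector = encoder(torch.randn(4, 20, 100))  # -> shape (4, 100)
```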

The talk “Writing code for NLP Research” presented at EMNLP 2018 gives a good overview of AllenNLP’s design philosophy and its differences from PyTorch.

Which is the best framework?

It depends on how much you care about flexibility, ease of use, and performance.

  • If your task is fairly standard, then SpaCy is the easiest to get up and running. You can train a model using a small amount of code, you don’t have to think about whether to use a CNN or RNN, and the API is clearly documented. It’s also well optimized to deploy to production.
  • AllenNLP is the best for research prototyping. It supports all the bells and whistles that you’d include in your next research paper, and encourages you to follow the best practices by design. Its functionality is a superset of PyTorch’s, so I’d recommend AllenNLP over PyTorch for all NLP applications.

There are a few runners-up that I’ll mention briefly:

  • NLTK / Stanford CoreNLP / Gensim are popular libraries for NLP. They’re good libraries, but they don’t do deep learning, so they can’t be directly compared here.
  • TensorFlow / Keras are also popular for research, especially for Google projects. TensorFlow is the only framework supported by Google’s TPUs, and it also has better multi-GPU support than PyTorch. However, multi-GPU setups are relatively uncommon in NLP, and its computational graph model is harder to debug than PyTorch’s, so I don’t recommend it for NLP.
  • PyText is a new framework by Facebook, also built on top of PyTorch. It defines a network using pre-built modules (similar to Keras) and supports exporting models to Caffe2 for faster production serving. However, it’s very new (only released earlier this month), and I haven’t worked with it enough to form an opinion about it yet.

That’s all, let me know if there’s any that I’ve missed!

The Ethics of (not) Tipping at Restaurants

A customer finishes a meal at a restaurant. He gives a 20-dollar bill to the waiter, and the waiter returns with some change. The customer proceeds to pocket the change in its entirety.

“Excuse me, sir,” the waiter interrupts, “but the gratuity has not been included in your bill.”

The customer nods and calmly smiles at the waiter. “Yes, I know,” he replies. He gathers his belongings and walks out, indifferent to the astonished look on the waiter’s face.


Just thinking about this fictional scenario makes your blood boil. It evokes a feeling of unfairness, where a shameless and rude customer has cheated an innocent, hardworking waiter out of his well-deserved money. Not many situations provoke such a strong emotional response while remaining perfectly legal.

There are compelling reasons not to tip. On an individual level, you save 10-15% on your meal. On a societal level, economists have criticized tipping for its discriminatory effects. Yet we still do it. Why?

In this blog post, we look at some common arguments in favor of tipping and see that they may not hold up to scrutiny. Then, we examine the morality of refusing to tip under several ethical frameworks.

Arguments in favor of tipping (and their rebuttals)

Here are four common reasons for why we should tip:

  1. Tipping gives the waiter an incentive to provide better service.
  2. Waiters are paid less than minimum wage and need the money.
  3. Refusing to tip is embarrassing: it makes you lose face in front of the waiter and your colleagues.
  4. Tipping is a strong social norm and violating it is extremely rude.

I’ve ordered these arguments from weakest to strongest. These are good reasons, but I don’t think any of them definitively settles the argument. I argue that the first two are factually inaccurate, and for the last two, it’s not obvious why the end effect is bad.

Argument 1: Tipping gives the waiter an incentive to provide better service. Since the customer tips at the end of the meal, the waiter does a better job to make him happy, so that he receives a bigger tip.

Rebuttal: The evidence for this is dubious. One study concluded that service quality has at most a modest correlation with how much people tip; many other factors affected tipping, like group size, day of week, and amount of alcohol consumed. Another study found that waitresses earned more tips from male customers if they wore red lipstick. The connection between good service and tipping is sketchy at best.

Argument 2: Waiters are paid less than minimum wage and need the money. In many parts of the USA, waiters earn a base rate of about $2 an hour and must rely on tips to survive.

Rebuttal: This is false. In Canada, all waiters earn at least minimum wage. In the USA, the base rate for waiters is less than minimum wage in some states, but restaurants are required to make up the difference if a waiter earns less than minimum wage after tips.

You may argue that restaurant waiters are poor and deserve more than minimum wage. I find this unconvincing because there are lots of service workers (cashiers, janitors, retail clerks, fast food workers) who do strenuous labor and make minimum wage, and we don’t tip them. I don’t see why waiters are an exception. Arguably, Uber drivers are the most deserving of tips, since they make less than minimum wage after accounting for costs, but tipping is optional and not expected for Uber rides.

Argument 3: Refusing to tip is embarrassing: it makes you lose face in front of the waiter and your colleagues. You may be treated badly the next time you visit the restaurant and the waiter recognizes you. If you’re on a date and you get confronted for refusing to tip, you’re unlikely to get a second date.

Rebuttal: Indeed, the social shame and embarrassment is a good reason to tip, especially if you’re dining with others. But what if you’re eating by yourself in a restaurant in another city that you will never go to again? Most people will still tip, even though the damage to your social reputation is minimal. So it seems that social reputation isn’t the only reason for tipping.

It’s definitely embarrassing to get confronted for not tipping, but it’s not obvious that being embarrassed is bad (especially if the only observer is a waiter who you’ll never interact with again). If I give a public speech despite feeling embarrassed, then I am praised for my bravery. Why can’t the same principle apply here?

Argument 4: Tipping is a strong social norm and violating it is extremely rude. Stiffing a waiter is considered rude in our society, even if no physical or economic damage is done. Giving the middle finger is also offensive, despite no clear damage being done. In both cases, you’re being rude to an innocent stranger.

Rebuttal: Indeed, the above is true. A social norm is a convention whose violation strikes people as rude. The problem is the arbitrariness of social norms. Is it always bad to violate a social norm, or can the social norm itself be wrong?

Consider that only a few hundred years ago, slavery was commonplace and accepted. In medieval societies, religion was expected and atheists were condemned, and in other societies, women were considered property of their husbands. All of these are examples of social norms; all of these norms are considered barbaric today. It’s not enough to justify something by saying that “everybody else does it”.

Tipping under various ethical frameworks

Is it immoral not to tip at restaurants? We consider this question under the ethical frameworks of ethical egoism, utilitarianism, Kant’s categorical imperative, social contract theory, and cultural relativism.

Above: The trolley problem, often used to compare different ethical frameworks, but unlikely to occur in real life. Tipping is a more quotidian situation in which to apply ethics.

1) Ethical egoism says it is moral to act in your own self-interest. The most moral action is the one that is best for yourself.

Clearly, it is in your financial self-interest not to tip. However, the social stigma and shame create negative utility, which may or may not outweigh the money saved by not tipping. This depends on the individual. Verdict: Maybe OK.

2) Utilitarianism says the moral thing to do is maximize the well-being of the greatest number of people.

Under utilitarianism, you should tip if the money benefits the waiter more than it would benefit you. This is difficult to answer, as it depends on many things, like your relative wealth compared to the waiter’s. Again, subtract some utility for the social stigma and shame if you refuse to tip. Verdict: Maybe OK.

3) Kant’s categorical imperative says that an action is immoral if the goal of the action would be defeated if everyone started doing it. Essentially, it’s immoral to gain a selfish advantage at the expense of everyone else.

If everyone refused to tip, then the prices of food in restaurants would universally go up to compensate, which negates the intended goal of saving money in the first place. Verdict: Not OK.

4) Social contract theory holds that morality is the set of rules that a society of free, rational people would agree to obey in order to benefit everyone. This prevents tragedy-of-the-commons scenarios, where the system would collapse if everyone behaved selfishly.

There is no evidence that tipping makes a society better off. Indeed, many societies (eg: China, Japan) don’t practice tipping, and their restaurants operate just fine. Verdict: OK.

5) Cultural relativism says that morals are determined by the society that you live in (ie, social norms). There is a strong norm in our culture that tipping is obligatory in restaurants. Verdict: Not OK.

Conclusion

In this blog post, we have considered a number of arguments for tipping and examined the question under several ethical frameworks. Stiffing the waiter is a legal way of saving some money when eating out. No single argument shows that it’s definitely wrong to do this, and some ethical frameworks consider it acceptable while others don’t. This is often the case in ethics when you’re faced with complicated topics.

However, refusing to tip has several negative effects: rudeness of violating a strong social norm, feeling of embarrassment to yourself and colleagues, and potential social backlash. Furthermore, it violates some ethical systems. Therefore, one should reconsider if saving 10-15% at restaurants by not tipping is really worth it.

How to read research papers for fun and profit

One skill that I’ve learned after a year in grad school is how to effectively read research papers. Previously I had found them impenetrable, but now I find them a great source of information about cutting-edge science as it is being done, before it makes its way into textbooks. These days I read about 4-5 papers every week.

My research area is natural language processing and machine learning, but I read papers in lots of fields, not just in AI and computer science. Papers are my go-to source for a myriad of scientific inquiries, for example: does drinking alcohol cause cancer? Are women more talkative than men? Was winter in Toronto abnormally cold this year? Etc.

Why read scientific papers?

If you try to Google questions like these, you typically end up on Wikipedia or some random article on the internet. Research papers are an underutilized resource that have several advantages over other common sources of information on the internet.

Advantages over articles on the internet: no matter the topic, you will undoubtedly find articles on it on the internet. Some of these articles are excellent, but others are opinionated nonsense. Without being an expert yourself, it can be difficult to decide what information to trust. Peer-reviewed research papers are held to a much higher minimum quality standard, and for every claim they make, they have to clearly state their evidence, assumptions, how they arrived at the conclusion, and their degree of confidence in their result. You can examine the paper for yourself and decide if the assumptions are reasonable and the conclusions follow logically, rather than trusting someone else’s word for it. With some deeper digging and some critical thinking, you can avoid a lot of misinformation on the internet.

Advantages over Wikipedia: Wikipedia is a pretty reliable source of truth; in fact, it often cites scientific papers as its sources. However, Wikipedia is written to be concise, so that oftentimes, a 30-page research paper is summarized to 1-2 sentences. If you only read Wikipedia, you will miss a lot of the nuances contained in the original paper, and only develop a cursory understanding compared to going directly to the source.

Finding the right paper to read

If your professor or colleague has assigned you a specific paper to read, then you can skip this section.

A big part of the challenge of reading papers is deciding which ones to read. There are a lot of papers out there, and only a few will be relevant to you. Therefore, deciding what to read is a nontrivial skill in itself.

Research papers are the most useful when you have a specific problem or question in mind. When I first started out reading papers, I approached this the wrong way. One day, I’d suddenly decide “hmm, complexity theory is pretty interesting, let’s go on arXiv and look at some recent complexity theory papers”. Then, I’d open a few, attempt to read them, get confused, and conclude I’m not smart enough to read complexity theory papers. Why is this a bad idea? A research paper exists to answer a very specific question, so it makes no sense to pick up a random paper without the background context. What is the problem? What approaches have been tried in the past, and how have they failed? Without understanding background information like this, it’s impossible to appreciate the contribution of a specific paper.

Above: Use the forward citation and related article buttons on Google Scholar to explore relevant papers.

It’s helpful to think of each research paper as a node in a massive, interconnected graph. Rather than each paper existing as a standalone item, a paper is deeply connected to the research that came before and after it.

Google Scholar is your best friend for exploring this graph. Begin by entering a few keywords and picking a few promising hits from the first 2-3 pages. Good, this is your starting point. Here are some heuristics for traversing the paper graph:

  • To go forward in time, look at works that cited this paper. A paper being cited usually means one of two things: (1) the future paper uses some technique or result developed in the current paper for some other purpose, or (2) the future paper improves on the techniques in the current paper. Citations of the second type are more useful.
  • To go backward in time, look at the paper’s introduction and related work. This puts the paper in context of previous work. Occasionally, you find a survey paper that doesn’t contribute anything novel of its own, but summarizes a bunch of previous related work; these are really helpful when you’re beginning your research in a topic.
  • Citation count is a good indicator of a paper’s importance and merit. If the paper has under 10 citations, take its claims with a grain of salt (even more so if it’s an arXiv preprint and not a peer-reviewed paper). Over 100 citations means the paper has made a significant contribution; over 1000 citations indicates a landmark paper in the field and is probably worth reading. Citation count is not a perfect metric, especially for very recent work, but it’s a useful heuristic that’s applicable across disciplines.

The first pass: High level overview

Great, you’ve decided on a paper to read. Now how to read it effectively?

Reading a paper is not like reading a novel. When you read a novel, you start at the beginning and read linearly until you reach the end. A paper, however, is read most efficiently by hopping around the sections as appropriate, rather than reading linearly from beginning to end.

The goal of your first pass is to get a high-level overview of the paper before diving into the details. As you go through the paper, here are some good questions to ask yourself:

  • What is the problem being solved?
  • What approaches have been tried before, and what are their limitations?
  • What is this paper’s novel contribution?
  • What experiments were done, using what dataset? How successful were the results?
  • Can the method in this paper be applied to my problem?
  • If not, what assumptions are needed for this method to work?

Above: Treat each paper as a node in a massive graph of research, rather than a standalone item in a vacuum.

When I read a paper, I usually proceed in the following order:

  1. Abstract: a long paragraph that summarizes the entire paper. Read this to decide if the rest of the paper is worth reading or not.
  2. Introduction, diagrams, tables, and conclusion. Often, reading the diagrams and captions gives you a good idea of what’s going on with minimal effort.
  3. If the field is unfamiliar to you, then note down any interesting references in the introduction and related works sections to explore later. If the field is familiar, then just skim these sections.
  4. Read the main body of the paper: model, experiment, and discussion, without getting too bogged down in the details. If a section is confusing, skip it for now and come back to it on a second reading.

That’s it — you’ve finished reading a paper! Now you can either go back and read it again, focusing on the details you skimmed over the first pass, or move on to a different paper that you’ve added to your backlog.

When reading a paper, you should not expect to understand every aspect of the paper by the time you’re done. You can always refer back to the paper at a later time, as needed. Generally, you don’t need to understand all the details, unless you’re trying to replicate or extend the paper.

Help, I’m stuck!

Sometimes, despite your best efforts, you find that a paper is impenetrable. It’s not necessarily your fault — some papers are hastily written hours before a conference deadline. What do you do now?

Look for a video or blog post explaining the paper. If you’re lucky, someone may have recorded a lecture where the author presents the paper at a conference. Maybe somebody wrote a blog post summarizing the paper (Colah’s blog has great summaries of machine learning research). These are often better at explaining things than the actual paper.

If there’s a lot of background terminology that doesn’t make sense, it may be better to consult other sources like textbooks and course lectures rather than papers. This is especially true if the research is not new (>10 years old). Research papers are not always the best at explaining a concept clearly: by their nature, they document research as it’s being done. Sometimes, the paper paints an incomplete picture of something that’s better understood later. Textbook writers can look back on research after it’s already done, and thereby benefit from hindsight knowledge that didn’t exist when the paper was written.

Basic statistics is useful in many experimental fields — concepts like linear / logistic regression, p-values, hypothesis testing, and common statistical distributions. Any paper that deals with experimental data will use at least some statistics, so it’s worthwhile to be comfortable with basic stats.
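
As a tiny illustration (simulated data, not taken from any paper), here is the kind of hypothesis test you'll see reported constantly: a two-sample t-test comparing a treatment group against a control group, using scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=5.0, scale=1.0, size=50)    # simulated measurements
treatment = rng.normal(loc=5.4, scale=1.0, size=50)  # slightly higher mean

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests the difference is unlikely to be chance
```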


That’s it for my advice. The densely packed two-column pages of text may appear daunting to the uninitiated reader, but they can be conquered with a bit of practice. Whether it’s for work or for fun, you definitely don’t need a PhD to read papers.

Publishing Negative Results in Machine Learning is like Proving Dragons don’t Exist

I’ve been reading a lot of machine learning papers lately, and one thing I’ve noticed is that the vast majority of papers report positive results — “we used method X on problem Y, and beat the state-of-the-art results”. Very rarely do you see a paper that reports that something doesn’t work.

The result is publication bias — if we only publish the results of experiments that succeed, even statistically significant results could be due to random chance, rather than anything actually significant happening. Many areas of science are facing a replication crisis, where published research cannot be replicated.

There is some community discussion of encouraging more negative paper submissions, but as of now, negative results are rarely publishable. If you attempt an experiment but don’t get the results you expected, your best hope is to try a bunch of variations of the experiment until you get some positive result (perhaps on a special case of the problem), after which you pretend the failed experiments never happened. With few exceptions, any positive result is better than a negative result, like “we tried method X on problem Y, and it didn’t work”.

Why publication bias is not so bad

I just described a cynical view of academia, but actually, there’s a good reason why the community prefers positive results. Negative results are simply not very useful, and contribute very little to human knowledge.

Now why is that? When a new paper beats the state-of-the-art results on a popular benchmark, that’s definite proof that the method works. The converse is not true. If your model fails to produce good results, it could be due to a number of reasons:

  • Your dataset is too small / too noisy
  • You’re using the wrong batch size / activation function / regularization
  • You’re using the wrong loss function / wrong optimizer
  • Your model is overfitting
  • You have a bug in your code

Above: Only when everything is correct will you get positive results; many things can cause a model to fail. (Source)

So if you try method X on problem Y and it doesn’t work, you gain very little information. In particular, you haven’t proved that method X cannot work. Sure, you found that your specific setup didn’t work, but have you tried making modification Z? Negative results in machine learning are rare because you can’t possibly anticipate all possible variations of your method and convince people that all of them won’t work.

Searching for dragons

Suppose we’re scientists attending the International Conference of Flying Creatures (ICFC). Somebody mentioned it would be nice if we had dragons. Dragons are useful. You could do all sorts of cool stuff with a dragon, like ride it into battle.


“But wait!” you exclaim: “Dragons don’t exist!”

I glance at you questioningly: “How come? We haven’t found one yet, but we’ll probably find one soon.”

Your intuition tells you dragons shouldn’t exist, but you can’t articulate a convincing argument why. So you go home, and you and your team of grad students labor for a few years and publish a series of papers:

  • “We looked for dragons in China and we didn’t find any”
  • “We looked for dragons in Europe and we didn’t find any”
  • “We looked for dragons in North America and we didn’t find any”

Eventually, the community is satisfied that dragons probably don’t exist, for if they did, someone would have found one by now. But a few scientists still harbor the possibility that there may be dragons lying around in a remote jungle somewhere. We just don’t know for sure.

This remains the state of things for a few years until a colleague publishes a breakthrough result:

  • “Here’s a calculation that shows that any dragon with a wing span longer than 5 meters will collapse under its own weight”

You read the paper, and indeed, the logic is impeccable. This settles the matter once and for all: dragons don’t exist (or at least the large, flying sort of dragons).

When negative results are actually publishable

The research community dislikes negative results because they don’t prove a whole lot — you can have a lot of negative results and still not be sure that the task is impossible. In order for a negative result to be valuable, it needs to present a convincing argument why the task is impossible, and not just a list of experiments that you tried that failed.

This is difficult, but it can be done. Let me give an example from computational linguistics. Recurrent neural networks (RNNs) can, in theory, compute any function defined over a sequence. In practice, however, they have difficulty remembering long-term dependencies, and attempts to train them using gradient descent run into numerical difficulties known as the vanishing / exploding gradient problem.

Then, Bengio et al. (1994) formulated a mathematical model of an RNN as an iteratively applied function. Using ideas from dynamical systems theory, they showed that as the input sequence gets longer, the result becomes more and more sensitive to noise. The details are technical, but the gist of it is that under some reasonable assumptions, training RNNs to capture long-term dependencies using gradient descent is effectively impossible. This is a rare example of a negative result in machine learning — it’s an excellent paper and I’d recommend reading it.
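
You can see the vanishing gradient for yourself with a small experiment (my own sketch, not from the paper): give a plain RNN increasingly long sequences, attach the loss to the final time step only, and measure how much gradient reaches the very first input. With random weights, it typically decays towards zero as the sequence grows.

```python
import torch

torch.manual_seed(0)
rnn = torch.nn.RNN(input_size=8, hidden_size=32, nonlinearity="tanh")

for seq_len in [5, 20, 50, 100]:
    x = torch.randn(seq_len, 1, 8, requires_grad=True)  # (time, batch, features)
    output, _ = rnn(x)
    output[-1].sum().backward()   # loss depends only on the final time step
    grad_at_start = x.grad[0].abs().mean().item()
    print(f"seq_len={seq_len:4d}  gradient at first input = {grad_at_start:.2e}")
```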

Above: A Long Short-Term Memory (LSTM) network handles long-term dependencies by adding a memory cell (Source)

Soon after the vanishing gradient problem was understood, researchers invented the LSTM (Hochreiter and Schmidhuber, 1997). Since training plain RNNs with gradient descent was hopeless, they added a gating (‘latching’) mechanism that allows state to persist across many time steps, thus avoiding the vanishing gradient problem. Unlike plain RNNs, LSTMs can handle long-term dependencies and can be trained with gradient descent; they are among the most ubiquitous deep learning architectures in NLP today.


After reading the breakthrough dragon paper, you pace around your office, thinking. Large, flying dragons can’t exist after all, as they would collapse under their own weight — but what about smaller, non-flying dragons? Maybe we’ve been looking for the wrong type of dragons all along? Armed with new knowledge, you embark on a new search…

Above: Komodo Dragon, Indonesia

…and sure enough, you find one 🙂

Simple models in Kaggle competitions

This week I participated in the Porto Seguro Kaggle competition. Basically, you’re asked to predict a binary variable — whether or not an insurance claim will be filed — based on a bunch of numerical and categorical variables.

With over 5000 teams entering the competition, it was the largest Kaggle competition ever. I guess this is because it’s a fairly well-understood problem (binary classification) with a reasonably sized dataset, making it accessible to beginning data scientists.

Kaggle is a machine learning competition platform filled with thousands of smart data scientists, machine learning experts, and statistics PhDs, and I am not one of them. Still, I was curious to see how my relatively simple tools would fare against the sophisticated techniques on the leaderboard.


The first thing I tried was logistic regression. All you had to do was load the data into memory, invoke the glm() function in R, and output the predictions. Initially my logistic regression wasn’t working properly and I got a negative score. It took a day or so to figure out how to do logistic regression properly, which got me a score of 0.259 on the public leaderboard.

Next, I tried gradient boosted decision trees, which I had learned about in a stats class but never actually used before. In R, this is simple — I just needed to change the glm() call to gbm() and fit the model again. This improved my score to 0.265. It was near the end of the competition so I stopped here.
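
My baselines were in R, but for readers who prefer Python, a roughly equivalent two-model baseline with scikit-learn would look something like the sketch below. The file layout and column names are assumptions, not a description of the actual competition files.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

train = pd.read_csv("train.csv")   # assumed layout: an id column, a binary target, numeric features
test = pd.read_csv("test.csv")
X, y = train.drop(columns=["id", "target"]), train["target"]

for model in [LogisticRegression(max_iter=1000), GradientBoostingClassifier()]:
    model.fit(X, y)
    # The competition metric (normalized Gini) is equivalent to 2*AUC - 1,
    # so you submit predicted probabilities rather than hard labels.
    probs = model.predict_proba(test.drop(columns=["id"]))[:, 1]
    print(type(model).__name__, probs[:5])
```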

At this point, the top submission had a score of 0.291, and 0.288 was enough to get a gold medal. Yet despite being within 10% of the top submission’s score, I was still in the bottom half of the leaderboard, ranking in the 30th percentile.

The public leaderboard looked like this:

Above: Public leaderboard of the Porto Seguro Kaggle competition two days before the deadline. The green line is my submission, scoring 0.265.

This graph illustrates the nature of this competition. At first, progress is easy: pretty much anyone who submitted anything other than “predict all zeros” scored over 0.200. From there, you make steady, incremental progress until about 0.280 or so, but beyond that, further improvement is hard to come by.

The top of the leaderboard is very crowded, with over 1000 teams scoring 0.287. Many teams used ensembles of XGBoost and LightGBM models with elaborate feature engineering. In the final battle for the private leaderboard, score differences of less than 0.001 translated to hundreds of places on the leaderboard and spelled the difference between victory and defeat.

Above: To run 90% as fast as Usain Bolt, you need to run 100 meters in 10.5 seconds. To get 90% of the winning score in Kaggle, you just need to call glm().

This pattern is common in Kaggle and machine learning — often, a simple model can do quite well, within the same order of magnitude as a highly optimized solution. It’s quite remarkable that you can get a decent solution with a day or two of work, and then 5000 smart people working for 2 months can only improve on it by 10%. Perhaps this is obvious to someone who has been doing machine learning for long enough, but we should step back and consider how rare this is. The same does not apply to most activities. You cannot play piano for two days and become 90% as good as a concert pianist. Likewise, you cannot train for two days and run 90% as fast as Usain Bolt.

Simple models won’t win you Kaggle competitions, but we shouldn’t understate their effectiveness. Not only are they quick to develop, but they are also easier to interpret, and can be trained in a few seconds rather than hours. It’s comforting to see how far you can get with simple solutions — the gap between the best and the rest isn’t so big after all.

Read further discussion of this post on the Kaggle forums!