Kaggle Speech Recognition Challenge

For the past few weeks, I’ve been working on the TensorFlow Speech Recognition Challenge on Kaggle. The task is to recognize a one-second audio clip, where the clip contains one of a small number of words, like “yes”, “no”, “stop”, “go”, “left”, and “right”.

In general, speech recognition is a difficult problem, but it’s much easier when the vocabulary is limited to a handful of words. We don’t need the full pipeline of acoustic and language models that detects phonemes and strings them together into words, the way a toolkit like Kaldi does. Instead, a convolutional neural network works quite well.

First Steps

The dataset consists of about 64000 audio files which have already been split into training / validation / testing sets. You are then asked to make predictions on about 150000 audio files for which the labels are unknown.

Actually, this dataset had already been published in the academic literature, and people had published code to solve the same problem. I started with GCommandPytorch by Yossi Adi, which implements a speech recognition CNN in Pytorch.

Its first step is to convert each audio file into a spectrogram, which is an image representation of sound. This is easily done using LibRosa.

Above: Sample spectrograms of “yes” and “no”
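
As a rough sketch of this preprocessing step (the exact representation and parameters in GCommandPytorch may differ, and the file name here is made up), computing a log-mel spectrogram with LibRosa looks something like this:

import librosa
import numpy as np

# Load a one-second clip at 16 kHz, the sample rate of the competition data.
y, sr = librosa.load("yes_sample.wav", sr=16000)

# Compute a mel-scaled spectrogram and convert it to decibels,
# giving a 2D "image" that a CNN can classify.
spec = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_spec = librosa.power_to_db(spec, ref=np.max)

print(log_spec.shape)  # (n_mels, number of time frames)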

Now we’ve converted the problem to an image classification problem, which is well studied. To an untrained human observer, all the spectrograms may look the same, but neural networks can learn things that humans can’t. Convolutional neural networks work very well for classifying images, for example VGG16:

Above: A Convolutional Neural Network (LeNet). VGG16 is similar, but has even more layers.

For more details about this approach, refer to these papers:

  1. Convolutional Neural Networks for Small-footprint Keyword Spotting
  2. Honk: A PyTorch Reimplementation of Convolutional Neural Networks for Keyword Spotting

Voice Activity Detection

You might ask: if somebody already implemented this, then what’s there left to do other than run their code? Well, the test data contains “silence” samples, which contain background noise but no human speech. It also has words outside the set we care about, which we need to label as “unknown”. The Pytorch CNN produces about 95% validation accuracy by itself, but the accuracy is much lower when we add these two additional requirements.

For silence detection, I first tried the simplest thing I could think of: taking the maximum absolute value of the waveform and deciding it’s “silence” if that value is below a threshold. When combined with VGG16, this got an accuracy of 0.78 on the leaderboard. It’s a crude heuristic, because sufficiently loud background noise gets classified as speech.
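
A minimal sketch of this heuristic (the threshold value here is made up for illustration):

import numpy as np
import librosa

def is_silence(path, threshold=0.05):
    # Crude silence detector: call the clip "silence" if its peak amplitude is small.
    y, _ = librosa.load(path, sr=16000)
    return np.max(np.abs(y)) < threshold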

Next, I tried running openSMILE, which I use in my research to extract various acoustic features from audio. It implements an LSTM for voice activity detection: every 0.05 seconds, it outputs a probability that someone is talking. Combining the openSMILE output with the VGG16 prediction gave a score of 0.81.

More improvements

I tried a bunch of things to improve my score:

  1. Fiddled around with the neural network hyperparameters, which boosted my score to 0.85. Each epoch took about 10 minutes on a GPU, and the whole model took about 2 hours to train. Somehow, Adam didn’t produce good results, and SGD with momentum worked better.
  2. Took 100% of the data for training and used the public LB for validation (don’t do this in real life lol). This improved my score to 0.86.
  3. Trained an ensemble of 3 versions of the same neural network, with the same hyperparameters but different randomly initialized weights, and took a majority vote for prediction (see the sketch after this list). This improved the score to 0.87. I would’ve liked to train more, but other people in my research group needed to use the GPUs.
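
The majority vote itself is simple — something like this sketch, where predictions holds the label each of the three trained networks predicted for a clip:

from collections import Counter

def majority_vote(predictions):
    # Return the most common label; ties are broken arbitrarily.
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["yes", "yes", "up"]))  # -> "yes"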

In the end, the top scoring model had a score of 0.91, which beat my model by 4 percentage points. Although not enough to win a Kaggle medal, my model was in the top 15% of all submissions. Not bad!

My source code for the contest is available here.

What if math contests were scored using Principal Component Analysis?

In many math competitions, all problems are weighted equally, even though the problems have very different difficulties. Sometimes, the harder problems are weighted more. But how should we assign weights to each problem?

Usually, the organizers make up weights based on how difficult they believe the problems are. However, they sometimes misjudge the difficulty of problems. Wouldn’t it be better if the weightings were determined from data?


Let’s try Principal Component Analysis!

Principal Component Analysis (PCA) is a statistical procedure that finds the linear combination of the variables with the largest variance. In our case, the first principal component gives a relative weighting of the problems that maximizes the variance of the total scores. This makes sense because we want to separate the good and bad students in a math contest.

IMO 2017 Data

The International Mathematical Olympiad (IMO) is an annual math competition for top high school students around the world. It consists of six problems, divided between two days: on each day, contestants are given 4.5 hours to solve three problems.

Here are the 2017 problems, if you want to try them.

Above: Score distribution for IMO 2017

This year, 615 students wrote the IMO. Problems 1 and 4 were the easiest, with the majority of contestants receiving full scores. Problems 3 and 6 were the hardest: only 2 students solved the third problem. Problems 2 and 5 were somewhere in between.

This is a good dataset to play with, because the individual results show what each student scored for every problem.

Derivation of PCA for the 1-dimensional case

Let X be a matrix containing all the data, where each column represents one problem. There are 615 contestants and 6 problems so X has 615 rows and 6 columns.

We wish to find a weight vector \vec u \in \mathbb{R}^{6 \times 1} such that the variance of X \vec u is maximized. Of course, scaling up \vec u by a constant factor also increases the variance, so we need the constraint that | \vec u | = 1.

First, PCA requires that we center X so that the mean for each of the problems is 0, so we subtract each column by its mean. This transformation shifts the total score by a constant, and doesn’t affect the relative weights of the problems.

Now, X \vec u is a vector containing the total scores of all the contestants. Since the columns were centered, this vector has mean zero, so its variance is (up to a constant factor) the sum of squares of its elements, | X \vec u |^2.

To maximize |X \vec u |^2 subject to |\vec u| = 1, we take the singular value decomposition X = U \Sigma V^T. Then, the first column of V (corresponding to the largest singular value) gives the \vec u that maximizes | X \vec u|^2. This gives the first principal axis, and we are done.
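
Here’s a minimal numpy sketch of the whole procedure (the file name and column names are hypothetical, and the re-scaling at the end assumes the first principal axis has all non-negative components, which it does here):

import numpy as np
import pandas as pd

# Hypothetical layout: one row per contestant, columns p1..p6 with scores out of 7.
X = pd.read_csv("imo2017.csv")[["p1", "p2", "p3", "p4", "p5", "p6"]].to_numpy(dtype=float)

# Center each problem (column) so it has mean zero.
Xc = X - X.mean(axis=0)

# SVD: the first right singular vector maximizes |Xc u|^2 subject to |u| = 1.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
u = Vt[0]
u = u if u.sum() > 0 else -u  # fix the arbitrary sign of the singular vector

# Re-scale so a full solve on all six problems is worth 42 points, matching the IMO.
full_solve_value = 42 * u / u.sum()
print(np.round(full_solve_value, 2))  # points awarded for a full solve of each problem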

Experiments

Running PCA on the IMO 2017 data produced interesting results. After re-scaling the weights so that the minimum possible score is 0 and the maximum possible score is 42 (to match IMO’s scoring), PCA recommends the following weights:

  • Problem 1: 9.15 points
  • Problem 2: 9.73 points
  • Problem 3: 0.15 points
  • Problem 4: 15.34 points
  • Problem 5: 5.59 points
  • Problem 6: 2.05 points

This is the weighting that produces the highest variance. That’s right, solving the hardest problem in the history of the IMO would get you a fraction of 1 point. P4 had the highest variance of the six problems, so PCA gave it the highest weight.


The scores and rankings produced by the PCA scheme are reasonably well-correlated with the original scores. Students that did well still did well, and students that did poorly still did poorly. The top students that solved the harder problems (2, 3, 5, 6) usually also solved the easier problems (1 and 4). The students that would be the unhappiest with this scheme are a small number of people who solved P3 or P6, but failed to solve P4.

Here’s a comparison of score distributions under the original and PCA schemes. There is a lot less separation between the best of the best students and the middle of the pack. It is easy to check that PCA does indeed produce higher variance than weighting all six problems equally.


Now, let me comment on the strange results.

It’s clearly absurd to give 0.15 points to the hardest problem on the IMO and make P4, a much easier problem, worth 100 times more. But it makes sense from PCA’s perspective. About 99% of the students scored zero on P3, so its variance is very low. Given that PCA has a limited amount of weight to “spend” to increase the total variance, it would be wasteful to use much of it on P3.

The PCA score distribution has less separation between the good students and the best students. However, by giving a lot of weight to P1 and P4, it clearly separates mediocre students that solve one problem from the ones who couldn’t solve anything at all.

In summary, scoring math contests using PCA doesn’t work very well. Although it maximizes overall variance, math contests are asymmetric: we care most about differentiating between the students at the top end of the spectrum.

Source Code

If you want to play with the data, I uploaded it as a Kaggle dataset.

The code for this analysis is available here.

Further discussion of this article on /r/math.

Real World Applications of Automaton Theory

This term, I was teaching an intro course on the theory of computation, and one of the topics was finite automata — DFAs, NFAs, and the like. I began by writing down the definition of a DFA:

A deterministic finite automaton (DFA) is a 5-tuple (Q, \Sigma, \delta, q_0, F) where: Q is a finite set of states…

I could practically feel my students falling asleep in their seats. Inevitably, a student asked the one question you should never ask a theorist:

“So… how is this useful in real life?”

DFAs as a model of computation

I’ve done some theoretical research on formal language theory and DFAs, so my immediate response was why DFAs are important to theorists.


Above: A DFA requires O(1) memory, regardless of the length of the input.

You might have heard of Turing machines, which abstract the idea of a “computer”. In a similar vein, regular languages describe what is possible to do with a computer with very little memory. No matter how long the input is, a DFA only keeps track of what state it’s currently in, so it only requires a constant amount of memory.

By studying properties of regular languages, we gain a better understanding of what is and what isn’t possible with computers with very little memory.

This explains why theorists care about regular languages — but what are some real world applications?

DFAs and regular expressions

Regular expressions are a useful tool that every programmer should know. If you wanted to check if a string is a valid email address, you might write something like:

/^([a-z0-9_\.-]+)@([\da-z\.-]+)\.([a-z\.]{2,6})$/
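
In Python, for instance, the same pattern can be compiled and tested in a couple of lines:

import re

EMAIL_RE = re.compile(r"^([a-z0-9_\.-]+)@([\da-z\.-]+)\.([a-z\.]{2,6})$")

print(bool(EMAIL_RE.match("alice@example.com")))  # True
print(bool(EMAIL_RE.match("not an email")))       # False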

Behind the scenes, this regular expression gets converted into an NFA, which can be quickly evaluated to produce an answer.

You don’t need to understand the internals of this in order to use regular expressions, but it’s useful to know some theory so you understand its limitations. Some programmers may try to use regular expressions to parse HTML, but if you’ve seen the Pumping Lemma, you will understand why this is fundamentally impossible.

DFAs in compilers

In every programming language, the first step in the compiler or interpreter is the lexer. The lexer reads in a file of your favorite programming language, and produces a sequence of tokens. For example, if you have this line in C++:

cout << "Hello World" << endl;

The lexer generates something like this:

IDENTIFIER  cout
LSHIFT      <<
STRING      "Hello World"
LSHIFT      <<
IDENTIFIER  endl
SEMICOLON   ;

The lexer uses a DFA to go through the source file, one character at a time, and emit tokens. If you ever design your own programming language, this will be one of the first things you will write.


Above: Lexer description for JSON numbers, like -3.05
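
To make this concrete, here is a toy table-driven DFA for simplified JSON numbers (it ignores exponents and JSON’s leading-zero rule to keep the table small); a real lexer would combine automata for every token type and also track token boundaries:

def char_class(c):
    if c == "-":
        return "minus"
    if c.isdigit():
        return "digit"
    if c == ".":
        return "dot"
    return "other"

# TRANSITIONS[state][character class] -> next state
TRANSITIONS = {
    "start": {"minus": "sign", "digit": "int"},
    "sign":  {"digit": "int"},
    "int":   {"digit": "int", "dot": "dot"},
    "dot":   {"digit": "frac"},
    "frac":  {"digit": "frac"},
}
ACCEPTING = {"int", "frac"}

def is_number(token):
    state = "start"
    for c in token:
        state = TRANSITIONS.get(state, {}).get(char_class(c))
        if state is None:
            return False
    return state in ACCEPTING

print(is_number("-3.05"))  # True
print(is_number("3."))     # False: no digits after the decimal point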

DFAs for artificial intelligence

Another application of finite automata is programming simple agents to respond to inputs and produce actions in some way. You can write a full program, but a DFA is often enough to do the job. DFAs are also easier to reason about and easier to implement.

The AI for Pac-Man uses a four-state automaton:


Typically this type of automaton is called a Finite State Machine (FSM) rather than a DFA. The difference is that in an FSM, we perform an action depending on the state, whereas in a DFA, we care about accepting or rejecting a string — but they’re the same underlying concept.

DFAs in probability

What if we took a DFA, but instead of fixed transition rules, the transitions were probabilistic? This is called a Markov Chain!


Above: A 3-state Markov chain to model the weather

Markov chains are frequently used in probability and statistics, and have lots of applications in finance and computer science. Google’s PageRank algorithm uses a giant Markov chain to determine the relative importance of web pages!

You can calculate things like the probability of being in a state after a certain number of time steps, or the expected number of steps to reach a certain state.
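
Here’s a small numpy sketch of both calculations, with made-up transition probabilities for the weather example:

import numpy as np

# States: 0 = sunny, 1 = cloudy, 2 = rainy. P[i, j] is the probability of going
# from state i to state j in one step. These numbers are invented for illustration.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.4, 0.4],
])

# Distribution after 5 days, starting from a sunny day.
start = np.array([1.0, 0.0, 0.0])
print(start @ np.linalg.matrix_power(P, 5))

# Expected number of days until it first rains, starting from sunny or cloudy,
# by solving (I - Q) t = 1, where Q is P restricted to the non-rainy states.
Q = P[:2, :2]
print(np.linalg.solve(np.eye(2) - Q, np.ones(2)))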


In summary, DFAs are powerful and flexible tools with myriad real-world applications. Research in formal language theory is valuable, as it helps us better understand DFAs and what they can do.

On Multiple Hypothesis Testing and the Bonferroni Correction

When you’re doing a statistical analysis, it’s easy to run into the multiple comparisons problem.

Imagine you’re analyzing a dataset. You perform a bunch of statistical tests, and one day you get a p-value of 0.02. This must be significant, right? Not so fast! If you tried a lot of tests, then you’ve fallen into the multiple comparisons fallacy — the more tests you do, the higher the chance that you get a p-value < 0.05 by pure chance.

Here’s an xkcd comic that illustrates this:


They conducted 20 experiments and got a p-value < 0.05 on one of the experiments, thus concluding that green jelly beans cause acne. Later, other researchers will have trouble replicating their results — I wonder why?

What should they have done differently? Well, if they knew about the Bonferroni Correction, they would have divided the significance threshold 0.05 by the number of experiments, 20. Then, only a p-value smaller than 0.0025 would count as a truly significant correlation between jelly beans and acne.

Let’s dive in to explain why this division makes sense.

Šidák Correction

Time for some basic probability. What’s the chance that the scenario in the xkcd comic would happen? In other words, if we conduct 20 experiments, each with probability 0.05 of producing a significant p-value, then how likely will at least one of the experiments produce a significant p-value? Assume all the experiments are independent.

The probability of an experiment not being significant is 1 - 0.05, so the probability that all 20 experiments are not significant is (1-0.05)^{20}. Therefore the probability that at least 1 of the 20 experiments is significant is 1 - (1-0.05)^{20} = 0.64. Not so surprising now, is it?

We want the probability of accidentally getting a significant result by pure chance to be 0.05, not 0.64. So flip this around — we need to find an adjusted threshold p_{adj} for each individual test that gives an overall probability of 0.05:

1 - (1 - p_{adj})^{20} = 0.05

Solving for p_{adj}:

p_{adj} = 1 - 0.95^{1/20} \approx 0.00256

Okay, this seems reasonably close to 0.0025. In general, if the overall significance level is p and we are correcting for N comparisons, then

p_{adj} = 1 - (1 - p)^{1/N}

This is known as the Šidák Correction in the literature.

Bonferroni Correction

Šidák’s method works great, but eventually people started complaining that Šidák’s name had too many diacritics and looked for something simpler (also, it used to be difficult to compute nth roots back when they didn’t have computers). How can we approximate this formula?

Approximate? Use Taylor series, of course!

Assume N is constant, and define:

f(p) = 1 - (1-p)^{1/N}

We take the first two terms of the Taylor series of f(p) centered at 0:

f(p) = f(0) + f'(0)p + O(p^2)

Now f(0) = 0 and f'(p) = \frac{1}{N} (1-p)^{-(N-1)/N} so f'(0) = \frac1N. Therefore,

f(p) = p_{adj} \approx \frac{p}{N}.

That’s the derivation for the Bonferroni Correction.

Since we only took the first two terms of the Taylor series, this produces a p_{adj} that’s slightly lower (i.e., slightly stricter) than necessary.

In the real world, p is close to zero, so in practice it makes little difference whether we use the exact Šidák Correction or the Bonferroni approximation.
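
A quick numeric check of the two corrections (and of the 0.64 figure from earlier):

N, alpha = 20, 0.05

sidak = 1 - (1 - alpha) ** (1 / N)
bonferroni = alpha / N

print(round(sidak, 5))                 # 0.00256
print(round(bonferroni, 5))            # 0.0025
print(round(1 - (1 - alpha) ** N, 2))  # 0.64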

That’s it for now. Next time you do multiple comparisons, just remember to divide your p-value by N. Now you know why.

Simple models in Kaggle competitions

This week I participated in the Porto Seguro Kaggle competition. Basically, you’re asked to predict a binary variable — whether or not an insurance claim will be filed — based on a bunch of numerical and categorical variables.

With over 5000 teams entering the competition, it was the largest Kaggle competition ever. I guess this is because it’s a fairly well-understood problem (binary classification) with a reasonably sized dataset, making it accessible to beginning data scientists.

Kaggle is a machine learning competition platform filled with thousands of smart data scientists, machine learning experts, and statistics PhDs, and I am not one of them. Still, I was curious to see how my relatively simple tools would fare against the sophisticated techniques on the leaderboard.


The first thing I tried was logistic regression. All you had to do was load the data into memory, invoke the glm() function in R, and output the predictions. Initially my logistic regression wasn’t working properly and I got a negative score. It took a day or so to figure out how to do logistic regression properly, which got me a score of 0.259 on the public leaderboard.

Next, I tried gradient boosted decision trees, which I had learned about in a stats class but never actually used before. In R, this is simple — I just needed to change the glm() call to gbm() and fit the model again. This improved my score to 0.265. It was near the end of the competition so I stopped here.
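
I used R for the competition, but here’s a rough scikit-learn equivalent of the same workflow, just to show how little code is involved (the file and column names below are assumptions, and the leaderboard metric isn’t computed here):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical file and column names -- the real competition files differ in detail.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
X, y = train.drop(columns=["id", "target"]), train["target"]

# Baseline: plain logistic regression, the analogue of calling glm() in R.
logit = LogisticRegression(max_iter=1000).fit(X, y)

# One-line upgrade to gradient boosted trees, the analogue of switching to gbm().
gbm = GradientBoostingClassifier().fit(X, y)

pred = gbm.predict_proba(test.drop(columns=["id"]))[:, 1]
pd.DataFrame({"id": test["id"], "target": pred}).to_csv("submission.csv", index=False)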

At this point, the top submission had a score of 0.291, and 0.288 was enough to get a gold medal. Yet despite being within 10% of the top submission’s score, I was still in the bottom half of the leaderboard, ranking in the 30th percentile.

The public leaderboard looked like this:

Above: Public leaderboard of the Porto Seguro Kaggle competition two days before the deadline. Line in green is my submission, scoring 0.265.

This graph illustrates the nature of this competition. At first, progress is easy, and pretty much anyone who submitted anything other than “predict all zeros” got over 0.200. From there, you make steady, incremental progress until about 0.280 or so, but after that, any further improvement is hard to come by.

The top of the leaderboard is very crowded, with over 1000 teams having a score of 0.287. Many teams used ensembles of XGBoost and LightGBM models with elaborate feature engineering. In the final battle for the private leaderboard, score differences of less than 0.001 translated to hundreds of places on the leaderboard and spelled the difference between victory and defeat.

Above: To run 90% as fast as Usain Bolt, you need to run 100 meters in 10.5 seconds. To get 90% of the winning score in Kaggle, you just need to call glm().

This pattern is common in Kaggle and machine learning — often, a simple model does quite well, within the same order of magnitude as a highly optimized solution. It’s quite remarkable that you can get a decent solution with a day or two of work, and then 5000 smart people working for 2 months can only improve on it by 10%. Perhaps this is obvious to anyone who has been doing machine learning long enough, but we should step back and consider how rare this is. The same does not apply to most activities. You cannot play piano for two days and become 90% as good as a concert pianist. Likewise, you cannot train for two days and run 90% as fast as Usain Bolt.

Simple models won’t win you Kaggle competitions, but we shouldn’t understate their effectiveness. Not only are they quick to develop, but they are also easier to interpret, and can be trained in a few seconds rather than hours. It’s comforting to see how far you can get with simple solutions — the gap between the best and the rest isn’t so big after all.

Read further discussion of this post on the Kaggle forums!

My First Research Paper: State Complexity of Overlap Assembly

My first research paper is complete and has been uploaded to arXiv! It’s titled “State Complexity of Overlap Assembly”, by Janusz Brzozowski, Lila Kari, myself, and Marek Szykuła. It’s in formal language theory, an area of theoretical computer science. I worked on it as a part-time URA research project over two terms during my undergrad at Waterloo.

The contents of the paper are fairly technical. In this blog post, I will explain the background and motivation for the problem, and give a statement of the main result that we proved.

What is Overlap Assembly?

The subject of this paper is a formal language operation called overlap assembly, which models the annealing of DNA strands. When you cool a solution containing a lot of short DNA strands, they combine in a predictable manner: strands stick together to form longer strands, but only if their ends “match”.

We can view this as a formal operation on strings. If we have two strings a and b such that some suffix of a matches some prefix of b, then the overlap assembly is the string we get by combining them together:

Above: Example of overlap assembly of two strings

In some cases, there might be more than one way of combining the strings together, for example, a=CATA and b=ATAG — then both CATATAG and CATAG are possible overlaps. Therefore, overlap assembly actually produces a set of strings.
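
For individual strings, the operation is easy to write down in code (a minimal sketch; the paper’s formal definition handles the edge cases more carefully):

def overlap(a, b):
    # All ways to glue a and b along a non-empty suffix of a that equals a prefix of b.
    results = set()
    for k in range(1, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            results.add(a + b[k:])
    return results

print(overlap("CATA", "ATAG"))  # {'CATAG', 'CATATAG'}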

We can extend this idea to languages in the obvious way. The overlap assembly of two languages is the set of all the strings you get by overlapping any two strings in the respective languages. For example, if L1={ab, abab, ababab, …} and L2={ba, baba, bababa, …}, then the overlap language is {aba, ababa, …}.

It turns out that if we start with two regular languages, then the overlap assembly language will always be regular too. I won’t go into the details, but it suffices to construct an NFA that recognizes the overlap language, given two DFAs recognizing the two input languages.

Above: Example of construction of an overlap assembly NFA (Figure 2 of our paper)

What is State Complexity?

Given that regular languages are closed under overlap assembly, a natural question to ask is: how “complex” is the regular language that gets produced? One measure of complexity of regular languages is state complexity: the number of states in the smallest DFA that recognizes the language.

State complexity was first studied in 1994 by Sheng Yu et al. Some operations do not increase state complexity very much: if two regular languages have state complexities m and n, then their union has state complexity at most mn. On the other hand, the reversal operation can blow up state complexity exponentially — it’s possible for a language to have state complexity n but its reversal to have state complexity 2^n.

Here’s a table of the state complexities of a few regular language operations:


Over the years, state complexity has been studied for a wide range of other regular language operations. Overlap assembly is another such operation — the paper studies the state complexity of this operation.

Main Result

In our paper, we proved that the state complexity of overlap assembly (for two languages with state complexities m and n) is at most:

2(m-1) 3^{n-1} + 2^n

Further, we constructed a family of DFAs that achieve this bound, so the bound is tight.

That’s it for my not-too-technical summary of this paper. I glossed over a lot of the details, so check out the paper for the full story!

EulerCoin: Earn digital tokens by solving difficult mathematical Project Euler problems!

Today I’m going to show you something I built during the ETHWaterloo hackathon this weekend.

ETHWaterloo is a hackathon themed around the cryptocurrency Ethereum, which is gaining a lot of traction lately. You have to build something using Ethereum technology, and you can listen to many talks given by experts in the cryptocurrency community. It was a great way to learn more about this new blockchain technology.

Above: Ethereum founder Vitalik Buterin speaking at the opening ceremony

The hackathon opened with a series of talks. I used to be good friends with Vitalik back in my first year of university, but haven’t talked to him much since he dropped out to work on Ethereum. It was good to see him in person again at this hackathon.

After the opening ceremony, we went to a few tutorials on how to set up the Solidity environment, then brainstormed ideas for what to build. Eventually, we came up with this:

EulerCoin: our new Ethereum-based ICO

We often hear the media describe Bitcoin as “computers solving hard math problems to generate money”. Well, we made a new cryptocurrency, EulerCoin, where tokens are created by submitting correct answers to Project Euler problems!

So how does this work?

  • EulerCoin is implemented as an Ethereum smart contract that runs on the blockchain. A user submits an answer to the smart contract, which then tries to submit it to Project Euler. If it’s correct, the user is awarded some EulerCoin.
  • Correct answers are recorded on the blockchain forever, so anyone can use them to cheat on Project Euler and access the solution forums.
  • For each problem, only the first submitter gets awarded EulerCoin. The amount is larger if the problem has fewer solvers (meaning it’s a harder problem).

I’ve maintained a list of Project Euler answers since 2009, but recently it’s been hard to convince people to post their answers publicly. Something like EulerCoin would incentivize people to contribute their answers.

After working on it for a day or so, we got a prototype working:

Above: Screenshot of EulerCoin prototype

Here’s a simple web UI, where clicking “Submit” opens up Metamask to initiate an Ethereum transaction. To avoid actually spamming Project Euler for a hackathon project, we mocked up the answer-checking component, but everything else works.


This was the first time programming in Ethereum / Solidity for all of us, so it was a novel experience. After doing this hackathon, I have a much better understanding of what Ethereum does. Bitcoin is a digital currency that can be transferred and not much else. Ethereum is a whole platform where you can run your own custom, Turing-complete smart contracts, all on the blockchain.

Ethereum also has some major limitations that may hinder widespread adoption. Here are a few facts about Ethereum that I was surprised to learn:

  • All Solidity code is executed by every node in the network. This is a big scalability problem — imagine that when you buy some coffee, everyone in the world from Canada to France to Vietnam has to process this fact! To sync an Ethereum node, you have to download a blockchain containing every Ethereum transaction ever made — too much data for my laptop to handle. This also means that storing even small amounts of data on the blockchain can get incredibly expensive.
  • All data stored on the blockchain is visible to everybody. Because of this, it’s quite difficult to maintain any kind of privacy — this was a problem we ran into for one of our ideas. You can try encryption, but where do you store the keys? If you put the keys on the blockchain, then anyone can see them, and you’re back to square one. There are some ideas for using fancy cryptography like ring signatures to preserve privacy in limited situations, but these ideas aren’t quite mature yet.

I’m curious to see how Ethereum will solve these big problems in the upcoming years.


Above: Team EulerCoin (Left to Right: Andrei Danciulescu, Chuyi Liu, Shala Chen, Me)

Our submission didn’t make the final round of judging, but we had a good time. Before the closing ceremony, our team gathered for a group photo.

That’s it for now. I’m not sure if I want to invest my own money into Ethereum just yet, but certainly I will read more about cryptocurrencies. Bubble or not, the mathematical ideas and technologies are worth looking into.