Roboroast: upload your photo to get an algorithmically generated insult!

April 27, 2016

I’d like to share a side project I’ve been working on for the past few weeks. Roboroast is an app that automatically generates humorous insults for you or a friend based on how you look. It was written in collaboration with my friend Andrei Danciulescu.

The basic operation is as follows. There’s a subreddit called /r/RoastMe where random people post a picture of themselves, and other people proceed to “roast” the person with funny comments making fun of his appearance.

Our app takes your photo and uses a face recognition algorithm to find a poster in /r/RoastMe who looks like you. Then we display the comments for your closest matches.

You can try it at roboroast.tk.

Sample Results

Here are some roasts of me:

Here are some for Andrei:

High Level Overview

The project comprises roughly three parts:

Part 1 is the Reddit scraper. We use PRAW (the Python Reddit API Wrapper) to go through all the posts on the /r/RoastMe subreddit, saving the comments to MongoDB and the images to the filesystem.
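In Python, that step looks roughly like this. It’s a sketch against the current PRAW interface rather than our exact code — the credentials, database names, and file paths are placeholders:

import praw
import pymongo
import requests

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="roboroast")
db = pymongo.MongoClient()["roboroast"]

for submission in reddit.subreddit("RoastMe").top(limit=1000):
    if not submission.url.endswith((".jpg", ".png")):
        continue                     # only keep posts that link straight to an image

    # Save the top-level comments (the roasts) to MongoDB
    submission.comments.replace_more(limit=0)
    db.posts.insert_one({
        "post_id": submission.id,
        "title": submission.title,
        "roasts": [comment.body for comment in submission.comments],
    })

    # Save the image itself to the filesystem, keyed by post id
    with open("images/%s.jpg" % submission.id, "wb") as f:
        f.write(requests.get(submission.url).content)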

Part 2 is the Face++ uploader. Face++ is a cloud service with a REST API that handles our face matching. To use it, we upload all the images from part 1 into a “faceset” which we can query later.

The first two components only need to be run periodically, maybe once a month, to update the faceset with new posts from Reddit. Part 3 is the webapp, which is the user-facing component. It accepts user uploads, searches for matches using the Face++ API, and renders a list of insults to the user.
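Stripped down, the webapp is essentially one upload route. Here’s a minimal Flask sketch of the idea — the Face++ search is hidden behind a hypothetical find_similar_posters helper, and the template and collection names are made up:

from flask import Flask, request, render_template
import pymongo

app = Flask(__name__)
db = pymongo.MongoClient()["roboroast"]

def find_similar_posters(image_path):
    """Stand-in for the Face++ search: upload the photo, query the faceset,
    and return the post ids of the closest matches."""
    raise NotImplementedError   # the real version calls the Face++ REST API

@app.route("/upload", methods=["POST"])
def upload():
    photo = request.files["photo"]          # the visitor's uploaded picture
    path = "/tmp/upload.jpg"
    photo.save(path)

    matches = find_similar_posters(path)    # closest-looking /r/RoastMe posters
    roasts = [db.posts.find_one({"post_id": m}) for m in matches]
    return render_template("results.html", roasts=roasts)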

Technology Stack

As mentioned before, we used a number of third-party APIs: PRAW for scraping Reddit posts, and Face++ for face recognition.

All the backend code is written in Python. The web app uses the Flask web framework, and is wrapped with NGINX and Gunicorn to handle connections and serve static files. We use MongoDB for the database.

The frontend is built with Bootstrap. We also use the JavaScript libraries jQuery and Handlebars.js.

The whole thing is hosted on a single AWS EC2 instance.

How good is the face matching?

The face matching is actually decent. Face++ produces reasonable matches most of the time.

To see the matching results for yourself, you can append ?r=1 to the end of the URL (on the results page). This is hidden by default.

Do the insults make sense?

Although the face matching does a decent job, we found the quality of the results to be somewhat hit-or-miss.

When we envisioned the concept for this app, we assumed that most insults would make fun of the subject’s face. However, many insults refer to non-facial appearance, clothing, or objects in the background. Since we only match faces, these comments make no sense.

Other times, comments will refer to the title of the post — in other words, an insult depends on both the submission title and the picture. Again, these make no sense with only the picture.

We attempt to mitigate this with heuristics that analyze the comment, in order to exclude roasts which refer to the title or articles of clothing. This approach had limited success because natural language processing is hard.

Conclusion

When Andrei initially proposed this idea for an app, I thought the concept was pretty cool and unique. In a month or so we had a prototype, and I spent a few more weeks polishing the project for release. The quality of results you get is still highly variable, but we’re working on improving our algorithms.

In any case, this was my first time working with a lot of these technologies, and I had fun and learned a lot building it.


Teaching Myself Electronics: Zero to Arduino in 5 Weeks

April 1, 2016

I’m about to graduate with a degree in computer science, but I can’t describe how a computer works. Okay, maybe that’s an exaggeration. I can tell you all about assembly language and operating system kernels, and I have a good idea of how to build a CPU out of basic logic gates.

That’s where my knowledge ends. I have no idea how to build an AND gate, or how to coax the 120V from a wall socket into gently powering these gates without frying them.

Learning is good, and this is a pretty big knowledge gap, so I’m going to teach myself electronics. My plan is to learn by building things. There’s a lot of mathematical theory, much of it not that useful, and it’s easy to get bogged down in random details. It’s much better to just experiment, then go back and learn the theory when needed.

Week 1: Electronic Playground

The first problem was getting components. Unlike computer programming, where everything you need is on the internet, for hardware you actually need to buy things. That’s difficult when you don’t know exactly what you need. I also didn’t want a million different parts littering my bedroom haphazardly.

Eventually I settled on this all-in-one kit (cost $30).

It has a lot of components: LEDs, resistors, capacitors, even an antenna and a speaker. All the components are fixed to a board, and to connect them together, you use wires that clip onto springs protruding from the board.

The kit comes with an instruction booklet that describes all kinds of things you can wire with it. For example, here’s a “harp” — it makes different tones when you hover your hand over the photoresistor:

This schematic is a bit too advanced for me at this stage — unfortunately the booklet doesn’t attempt to explain how it works.

That’s fine, the following books do an excellent job of starting from the basics:

After playing with this for a while, I learned a lot of the basics, like how current, voltage, and resistance work, how to read common schematic symbols, and how to decode a resistor’s color bands.

Week 2: Multimeter

Electricity is invisible, and debugging circuits is difficult without being able to see what’s going on. I went ahead and got a multimeter (cost $20):

It was easy enough to measure resistance and voltage (both AC and DC). The current measurement wasn’t very sensitive, though, and I could barely register a reading.

Around this time I attended a workshop in Manhattan that taught us how to read a schematic and build it on a breadboard. We made a 555 timer circuit that blinked an LED on and off:

I don’t understand how it works yet, but breadboards are pretty neat. Much easier than sticking wires into springs on my electronics kit at home.

Week 3: Baby steps with Arduino

By now I was reaching the limits of what my electronic kit could offer, and I needed to graduate to something more serious.

So I went to the nearest electronics shop and got an Arduino Uno kit (cost $90). The Arduino is a microcontroller board that lets you prototype circuits easily on a breadboard. The Arduino Uno itself is only $25, but my kit came with an assortment of components and sensors.

Before long I had the Arduino up and running. It’s programmed in a dialect of C, so I felt at home in the programming environment.

Here’s a program that blinks the onboard LED on and off in a loop (more or less the hardware equivalent of hello world):

// the setup function runs once when you press reset or power the board
void setup() {
  // initialize digital pin 13 as an output.
  pinMode(13, OUTPUT);
}

// the loop function runs over and over again forever
void loop() {
  digitalWrite(13, HIGH);   // turn the LED on (HIGH is the voltage level)
  delay(1000);              // wait for a second
  digitalWrite(13, LOW);    // turn the LED off by making the voltage LOW
  delay(1000);              // wait for a second
}

Week 4: Transistor Switching

I didn’t really know what a transistor did, but it’s what logic gates are made of and the backbone of all computers, so it can’t hurt to learn about them, right?

I started off building logic gates from transistors, but couldn’t get them to work. It turned out that I had misunderstood how a transistor operates (it’s not the most intuitive device at first glance). Luckily, I have a friend in electrical engineering, and she patiently cleared up my misconceptions.

I knew transistors are used to build logic gates, but I didn’t know they can also amplify a current. I also learned about the many different types of transistors.

Here’s a circuit that I built (from the Arduino kit manual). It uses a transistor as a switch to control a motor:

By the way, here’s how you make an AND gate with two transistors:

While wiring things up, I accidentally burned out an LED and a BJT transistor. Apparently 5 volts without a current-limiting resistor is fatal to many components. In software, if you mess up, you get a segmentation fault in your console or something — never the smell of burnt plastic in your room.

Week 5: Arduino Controlled Desk Lamp

Here’s an idea. Wouldn’t it be nice if your lamp could turn itself on when it gets dark? Useful or not, let’s build it!

This is actually a major milestone for me. Up until now, I’ve mostly been following existing schematics, using parts carefully selected by whoever wrote them. For this project, I’m improvising everything from scratch. Oh yeah, and it’s also the first time I’m working with 120V alternating current.

First I got a $15 desk lamp, the kind that plugs into the wall AC socket. I started by using a wire stripper to expose copper wires that I could plug into the breadboard:

I’m a bit nervous working with 120V, which is obviously a lot more powerful than the Arduino’s 5V. Generally, a brief shock from 120V won’t kill or seriously injure you, but it’s distinctly unpleasant.

After stripping the cord, the lamp can be plugged into the breadboard. But the voltage is far too high for the Arduino to switch directly; instead, I need a relay, which acts as an electrically isolated switch. A transistor can act as a switch too, but a relay can handle much more current.

To detect light, I have a separate circuit with a photoresistor (its resistance changes depending on how much light falls on it). The Arduino reads the photoresistor by measuring the analog voltage across it.

Here’s the schematic of my design:

Now for the software. The program loops every 500 ms, reads the light level, and decides whether the lamp should be on or off. It’s slightly complicated by the fact that when the lamp is on, its own light affects the photoresistor reading; I compensate by using a different threshold depending on whether the lamp is already on, which isn’t too bad.

Here’s the code I came up with:

int photoPin = 5;    // photoresistor voltage divider on analog input 5
int lampPin = 3;     // digital pin driving the relay

bool isLampOn = false;

void setup(){
  pinMode(lampPin, OUTPUT);
  Serial.begin(9600);          // handy for printing light readings while debugging
}

// Decide whether the lamp should be on. Two thresholds are used because the
// lamp's own light shifts the reading; this hysteresis prevents rapid flickering.
bool shouldTurnLampOn(){
  int lightLevel = analogRead(photoPin);
  if(isLampOn){
    return lightLevel < 70;
  }
  else{
    return lightLevel < 50;
  }
}

void loop(){
  if(shouldTurnLampOn()){
    digitalWrite(lampPin, HIGH);   // energize the relay: lamp on
    isLampOn = true;
  }
  else{
    digitalWrite(lampPin, LOW);    // de-energize the relay: lamp off
    isLampOn = false;
  }
  delay(500);                      // check again in half a second
}

Here’s a video demo:

Onwards

Actually my lamp idea probably isn’t that useful. Nevertheless, I made a lot of progress in just a few weeks and I’m proud of myself for that.

I’ve barely scratched the surface of all the cool things you can do with electronics, but it’s a good start. Now I have all kinds of ideas on what to build next. I’d better get to it!


Four life lessons learned by playing Hearthstone

March 14, 2016

I’ve played Hearthstone on and off for a few years, since it first came out. As I played more and more, I began to notice parallels between my decision making processes in Hearthstone and in real life. This is a self-reflective post, and probably my first serious attempt to describe the core features of my mentality and decision making process. Although I wanted to write this for a long time, I found it difficult to put my ideas into words because they have been part of my personality for so long.

Why is Hearthstone a good representation of real life? Two reasons:

  • First, it’s a game of imperfect information and chance, so you must take risks and deal with uncertainty. Real life situations are usually like this. Games of perfect information (like chess) lack this probabilistic aspect and behave very differently.
  • Second, Hearthstone is a game about decision-making skill rather than mechanics. Every game has some element of decision making, but many games require performing some mechanical action (e.g. last-hitting) better than your opponent. Mechanical skills are confined to the specific game and are less likely to be relevant in real life.

By playing Hearthstone, I developed a general internal model for making decisions in uncertain situations. This is a broad criterion and covers many situations in day-to-day life.

Lesson 1: There is always a correct decision, and it’s your job to find it

The goal in Hearthstone is to reduce your opponent’s life to zero. How do you accomplish this? You make a plan, perhaps flooding the board with minions, perhaps unleashing a deadly combination of spells.

For our purposes, it doesn’t matter what your strategy is. At the start of the turn, you look at the cards in your hand, the state of the board, what cards your opponent played before. Call this information the game state. You ponder for a bit and come up with an action that best improves your position.

You execute your action on the board, but you still don’t know what happens next with certainty. There are many things you cannot control, which I will call RNG. RNG is short for Random Number Generator, and I will use it to mean anything you don’t have control over.

I use the term RNG for lack of a better word, but I’m not just talking about random game mechanics. RNG includes any state hidden from you, like your opponent’s hand and strategy. Think of it as a random variable with a known distribution (e.g. you play a card that destroys a random minion: which minion will it hit?) or with an unknown distribution (e.g. what is the probability that your opponent has two flamestrikes in his deck?). Even if the information is known to your opponent, it’s simpler to treat it as a random variable.

Here’s the model summarized in a diagram:

In any game state, there must be one “correct” action that gives you the highest chance of winning the game. The decision-making player aims to consider all possible actions and choose the best, “correct” one.

As a corollary, decision making should be perfectly rational. Otherwise, if my decision engine generates two different actions depending on my emotional state of mind, they cannot both be correct.

A second corollary is that actions should always be justifiable from fundamental values. It’s unacceptable to do things out of habit, or because other people are doing them — everything I do should have positive expected value toward the things I want to accomplish.

For me, one of my “meta” goals in life is to make correct decisions as much as possible. This is not to say that I behave like a robot — I still experience emotions like everyone else — but I try to eliminate emotions from my decision making process.

In Hearthstone, doing so gives you the highest chance of winning the game. It makes sense then, by extrapolation, that correct decision making gives you the best shot of getting what you want out of life.

Lesson 2: Information is valuable, treat information gathering as a subgoal

One rule of thumb in Hearthstone is “RNG first”. If you are going to play a sequence of cards, one of which has a random effect, it’s better to play the random effect first. This way you extract information out of the RNG pool of unknowns, and with this extra information you might be able to make a better play.

Another useful thing is to keep track of enemy secrets. Imagine you have this on the board:

You want to play a giant, but you’re worried that the secret is “mirror entity”, which summons a copy of the next minion you play.

Without any other information, you’re in a tough spot. But what if you played a minion last turn and the secret did not activate? Then you know the secret isn’t mirror entity, and you can confidently play the giant.

Alternatively, suppose you don’t have this information handy. One tactic is to “test out” the secret by playing a small minion and seeing whether it activates. You pay a price by making a normally inferior move, but the information you obtain is valuable for future decisions.

A similar concept occurring in real life is flirting. You’re at a party and you see a cute girl walk by. At first, you make a few playful comments, and observe her reaction and assess if she is interested in you. Flirting isn’t just an arbitrary social custom; it makes sense logically as a way of gathering information.

While information isn’t the final end-goal by itself, even a little information can greatly improve decision making, by eliminating vast swathes of possibilities that no longer need to be considered. Whether it be playing a giant, making a big purchase, or asking someone out on a date, gathering information is a useful subgoal.

Lesson 3: Focus on things you have control over, RNG evens out in the long run

Often in Hearthstone, luck is just not on your side. Have you ever seen your opponent topdeck the pyroblast and instantly win the game? Or that mad bomber that hits you three times in the face? How do you feel?

It’s natural to feel angry when this happens to you, especially if it ends up losing you the game. But eventually I realized how pointless it was to get upset at unlucky RNG. What’s the use of worrying about things you have no control over?

I see this all the time — people getting visibly upset when the bus is late, or when a teammate goes AFK in a game of League. I try to adopt the opposite mindset: worry about my own decision making, and simply accept the random events beyond my control.

Let me give you an example. Last term, during an important phone interview, my phone stopped working during the middle of the interview. Calmly I got up and notified the CECA front desk, and waited as they spent the next 20 minutes troubleshooting the problem. Most people would be stressed out at this point, but I didn’t feel stressed at all. Rather than getting upset, my mind was relaxed, because I took comfort in knowing that I did everything that could be done; whatever happens next was out of my control.

The law of large numbers says that when you repeat a random event many times, the average outcome converges to the expected value. Hearthstone is so random that a legend player will beat a rank 5 player no more than 55% of the time. Any single game is close to a coin flip, just marginally in favor of the stronger player. But over the long run, it’s a mathematical near-certainty that the better player will end up on top.
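Here’s a toy simulation of that claim — a “better” player who wins each individual game with probability 0.55 (the numbers are made up purely for illustration):

import random

random.seed(0)  # deterministic output for the example

def win_rate(num_games, p=0.55):
    # Simulate num_games independent games, each won with probability p
    wins = sum(random.random() < p for _ in range(num_games))
    return wins / num_games

for n in [10, 100, 1000, 100000]:
    print(n, win_rate(n))

# A 10-game sample can easily come out 0.3 or 0.7, but by 100000 games
# the observed win rate sits very close to 0.55.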

Lesson 4: Separate the outcome of a decision from the decision itself

In real life and in Hearthstone, you can’t directly tell if a decision was good or not. You only know the outcome, and you can decide if the outcome is good or bad. But the outcome is a function of the decision and RNG, which adds noise to the process.

In other words, the correct decision does not always produce a good outcome, and sometimes a bad decision produces a good outcome. It would be a mistake to retroactively label a decision as “correct” simply because you got lucky.

Here’s a Hearthstone example:

Your opponent is a mage, and on turn 6 you flood the board with a lot of small minions. If he has flamestrike, playing it deals 4 damage to each of your minions, instantly killing your whole board.

Turn 7 comes and it turns out he doesn’t have flamestrike, so you win the game easily. You conclude that playing all your minions was a great idea because he didn’t have flamestrike.

This logic is fallacious: it fails to separate decision from outcome. A correct action is the one that maximizes the win probability, given the information available at the time. Therefore it makes no sense to look at the outcome and retroactively judge the correctness of the initial decision.

So in this example, playing all those minions was a mistake, because there’s a high chance the mage has flamestrike. Whether he actually has it or not doesn’t matter — the mistake is equally bad either way. (A better play would be to play fewer minions, thus mitigating the risk.)
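To make “maximize the win probability given the information available” concrete, here’s a back-of-the-envelope expected-value calculation. All of the probabilities are hypothetical, just to illustrate the reasoning:

# Hypothetical numbers purely for illustration
p_flamestrike = 0.4                              # chance the mage is holding flamestrike

# Estimated chance of winning for each line of play, in each possible world
win_if_flood = {"has_it": 0.20, "no_it": 0.85}   # dump the whole hand on the board
win_if_hold  = {"has_it": 0.55, "no_it": 0.70}   # play fewer minions, play around it

def expected_win(win):
    return p_flamestrike * win["has_it"] + (1 - p_flamestrike) * win["no_it"]

print(expected_win(win_if_flood))   # about 0.59
print(expected_win(win_if_hold))    # about 0.64 -- holding back is the "correct"
                                    # play, no matter what the mage actually drew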

Now here’s a real life example. Last term, I had multiple job offers for software engineering internships and I had trouble deciding which one to accept. So I tried to negotiate: I picked one of the companies, told them about my other offers, and asked for a 20% raise in salary. My request was denied.

Does this mean that negotiating was a waste of time? Absolutely not. I know friends who successfully negotiated a higher salary by doing something similar. My particular outcome was not successful, but this doesn’t indicate my attempt was a mistake; if I found myself again with multiple offers, I would do the same thing.

Alfred Tennyson wrote the following about romance:

‘Tis better to have loved and lost

Than never to have loved at all

There are many ways to interpret this quote; here’s mine. Even if the outcome of a romantic encounter is unfavorable (to have loved and lost), it does not mean the decision to pursue the relationship was a mistake.

Why I still do stupid things

Alas, despite my best efforts, I still find myself doing stupid things — quite frequently even. Mistakes happen for a variety of reasons, but after analyzing some, I group them into three broad categories.

The first type of mistake happens when the situation is complicated, and the amount of data available exceeds my brain’s capacity to process it. In theory, I should never lose at chess — all the information is known. Of course, the number of positions explodes combinatorially and in reality I’m a mediocre chess player. Chess grandmasters group information in “chunks” and can reason about positions more efficiently — but this requires experience. In general, humans are prone to making mistakes in complicated situations.

The second category of mistake is having an incorrect model of the world. When we evaluate possible actions, we “simulate” the effects with a simplified version of the world. Problems arise when there is a discrepancy between the model in our heads and the real world.

This discrepancy can manifest itself in several related ways. We may incorrectly value subgoals: for example, a newbie Hearthstone player, knowing the objective is to reduce the enemy’s health to zero, decides to deal maximum damage to the enemy hero every turn and ignores everything else. We may overlook important factors: for example, we leave a Gadgetzan Auctioneer on the board, not realizing its potential, and are surprised next turn when the opponent draws 10 spells using its special ability. Or we may simply miss a possible play that never even occurred to us.

This type of mistake is the most common, but fortunately the most fixable of the three. As you gain more experience with the domain, your model of the world becomes a more accurate representation of the real thing. Then you learn to correctly assign values to things, and generate the full set of possibilities for a situation. For me, this gradual process of learning and self-improvement is one of the most satisfying things in life.

The third and final category of mistake is making decisions without thinking, thereby short-circuiting the entire decision making process. This can happen when you’re stressed, emotional, or just tired. An example is casually trading some minions in Hearthstone, then realizing you had lethal. If only you had thought more carefully, you would have easily found the correct play.

It’s not necessarily bad to do things without thinking too hard: it would be silly to invoke the full decision-making machinery to choose between a burrito and a sandwich for lunch. It’s important, however, to recognize when a decision is likely to have far-reaching consequences. In that case, it’s wise to defer the decision until you’ve had time to think things through.

There’s a lot more I could talk about, but this post is getting quite long so I’ll stop here. Whether you agree or disagree with my view of the world, please leave a comment!


Why is it so rainy in El Yunque – travels in Puerto Rico

March 6, 2016

This week, the entire engineering team at Yext went on a trip to Puerto Rico. Three nights at a beach resort, all expenses paid for.

What?! As an intern? No way! That was my reaction when I first heard about it. Friends at other software companies boasted about corporate housing, gourmet meals, and pantries stocked full of snacks of every kind, but Yext’s Puerto Rico offsite takes the cake.

The Resort

San Juan, the main city in Puerto Rico, is a 4 hour flight from New York. Puerto Rico is a popular destination because it’s a US territory, so you don’t need to worry about things like visas or international currencies. Also the drinking age is 18, rather than 21 for most of the US.

The resort was located an hour from San Juan, in the Fajardo region. I had never been to a Caribbean resort before, but the experience was more or less identical to my preconception of what a resort should be like. My fellow engineers and I had a good time swimming at the beach, playing beach volleyball, and drinking lots of mojitos.

Here’s me on the beach:

Since this was a company offsite, there were some serious activities too. For half the day, senior Yext engineers gave tech talks on things like domain-driven design and how to write integration tests.

After the Resort

For me, the amount of fun I have at a resort is not constant. The first day at the resort is the most amazing thing ever. Then on the second and third day, when you redo the same resort activities, the excitement wears off. I think if I spent a week at the resort, I’d be pretty bored by the end of it.

After the 3 days that were officially scheduled, some of the engineers decided to stay at the resort for the weekend. I joined a group that rented a car and drove to El Yunque — a tropical rainforest not too far away. After that, I spent another day exploring the city of San Juan by myself before getting on the plane back to New York.

El Yunque was surprisingly rainy. Even though we knew it was a rainforest, the amount of rain caught all of us off guard.

Standing on a lush green mountainside, you could see the dark clouds releasing a constant downpour of rain. Yet in the distance, the beach resort remained warm and sunny. The skies cleared up the moment we left the rainforest.

It seemed all the rain was concentrated within the boundaries of El Yunque national park, as if artificially constrained by a force field.

So why is it rainy in El Yunque?

The curious climate of El Yunque intrigued me. When I got home, I did some research on why it behaved this way.

A quick Google search gave me this precipitation map, which confirmed my suspicions:

Figure: Mean Annual Precipitation of Puerto Rico in 1963-1996

The purple region in the northeast is El Yunque. It receives 120 inches of rainfall a year, which is 3 times more than San Juan.

It might also be worth looking at a relief map of Puerto Rico:

The rainforest sits at a higher elevation than the surrounding region. So the rain falls where the mountains are. Gotcha.

This phenomenon is called orographic precipitation (orographic means relating to mountains). When warm, humid air is forced up a mountain, it cools and forms clouds, which then precipitate. The other side of the mountain experiences a rain shadow effect, as the descending air is devoid of moisture.

Also, in the Caribbean, the trade winds tend to blow from east to west. Since El Yunque sits in the island’s northeast, it intercepts the moist ocean air first — which explains why it’s a rainforest while some higher mountains elsewhere on the island are not.

Actually, in retrospect this seems like a fact we all learned in grade school. I don’t know what explanation I was expecting, something fancier?

In any case, this mix of geographical and weather conditions gives us a unique and beautiful landscape — and the only tropical rainforest in the US.


Achievement Unlocked: publish app on iOS App Store without testing it on a device

August 3, 2015

This week marks the end of a hobby project I’ve been working on for the last few months. It’s called WATisRain, and here’s a link to github. It started as an Android app, and I ported it to iOS over a period of two months. Yesterday the app was approved on the App Store; you can download it here.

Some background

I was never an Apple person. I do not own a Macbook, iPod, iPhone, or any Apple device.

My first mobile platform was Android. My first impressions as an Android developer are a story for another day.

Last year I started work on an app for navigating the tunnels between buildings on my university campus. The network of buildings, tunnels, and bridges was not very well known, even among upper years, so I figured it would make a cool Android app.

I worked on this idea on and off for a few months, then released version 1.0 to Google Play. The app quickly got about 2000 downloads and a couple dozen positive reviews. I was pretty happy.

The obvious next step was to port it to iOS. The campus population is split between Android and iOS, so an Android-only app locks out a significant fraction of the user base. Unfortunately, I hadn’t built my app with any of the cross-platform technologies, so this meant porting the entire codebase (2k lines of Java) to Objective-C. I also didn’t have a Macbook or an iPhone, both of which happen to be pretty crucial for iOS development.

A few months later, I landed an internship at Minted, in San Francisco. My company lent me a Macbook Pro, so I finally had the hardware to work on an iOS port. I still didn’t have an iPhone, but no matter — surely the simulator would be sufficient, right?

Motivated by a hard deadline (I had to return the Macbook at the end of my internship), I worked evenings and weekends to finish the iOS port. I ignored all coding conventions and translated my Java code, literally line by line, into Objective-C.

It only took a few weeks to port over all the features and get it on the app store. I called a few of my friends who had iPhones, and asked them to download my app. They confirmed the app works. Mission accomplished.

Impressions on iOS development

My overall impression on iOS development so far is mixed.

I’m impressed with the technical aspects of Apple’s products, from the iPhone devices themselves to the IDE, Xcode, that Apple provides for developers. Compared to Android development, I was faced with far fewer random IDE glitches, inconsistencies between devices, and the like. Developing on the simulator worked amazingly well — enough to get me to the app store. For comparison, it would be unimaginable to develop the same Android app entirely on the emulator.

What I really disliked is the closed and proprietary approach Apple takes for its products. First of all, you need a Mac of some sort to develop for iOS, period. I can happily develop Android on any platform I want, but I cannot run Xcode on Windows.

Next, you need to enroll in Apple’s developer program, at a cost of $119 per year. At the end of the year, if you don’t renew your membership, your app is removed from the store. Even if you just want to develop for fun, without submitting to the app store, you still need this license to push your app to your device. In contrast, Google Play charges a $25 lifetime fee for the same thing.

One last thing I have to mention is that the app review process takes 1-2 weeks. This is incredibly frustrating, since any bugfix will take a week to reach users.

In practice, all these factors combine into a high barrier to entry for a hobbyist like me. Let’s calculate: $2500 for a Macbook Pro, $500 for some sort of iPhone, $119 for the developer program — that’s already over 3000 dollars before you can even start coding.

Can you really develop without a device?

All across internet forums, people advise testing your app thoroughly across many devices before submitting to the app store. Trying to develop without a device seems to be an edge case: the instructions for a task often assume you have one, and you have to find a workaround if you don’t.

In my case, it was successful, in the sense that I produced an app that didn’t crash and got past app review. But I don’t know if I got lucky, because things could have turned out badly.

Throughout the whole process, I worried that running the app on a real device would expose bugs that weren’t reproducible in the simulator. In that case, I’d have no way to debug the problem, and the project would be dead. My app uses nothing but the most basic functionality, so I had a good chance of dodging this bullet. Still, the possibility loomed over me, threatening to kill the project just as it crossed the finish line.

A second problem is that by copying my original Android app feature by feature, the resulting iOS app looks and feels like an Android app. A friend pointed this out when I sent him the app. I hadn’t noticed it, but after looking at some other iOS apps, I have to agree with him. In hindsight this shouldn’t surprise anyone: having barely used other iOS apps, I didn’t really know how an iOS app should behave. It just never occurred to me that what felt natural to me might feel unnatural to iOS users.

Finally, this is subjective, but for me it wasn’t very fun to develop for a simulator. Without the tactile sensation of your creation running on an actual phone, the whole experience feels detached from reality. You feel like an unwelcome foreigner in a country where the customs are different, and you begin to question yourself, why am I doing this iOS port anyway?

Part of what kept me going was the sunk cost fallacy: I had paid $119 to be an iOS developer, so I’d better get at least something onto the app store and have something to show for it.

Now that the app is finished, I think I’m done with iOS development. Perhaps the app store is fertile ground for developers and startups looking to make a profit, but the cost of entry is unreasonable for someone making a few open source apps for fun.


What’s the hardest bug you’ve ever debugged?

June 19, 2015

In a recent interview, I was asked this question: “what’s the most difficult bug you’ve encountered, and how did you fix it?” I thought it was an interesting question because there are so many possible answers, and the sort of answer you give demonstrates your level of experience with developing software.

I thought for a moment, recalling all the countless bugs I had seen and fixed. Which one was the most difficult and interesting? In this article I’m going to describe my most difficult bug to date.

It was an iOS app, and I was a four-month intern at the time. “We’ve been seeing reports from our users that the app randomly displays a black screen,” my boss explained one afternoon. “No error message, no crash log, nothing. The app is simply stuck on a black screen until you kill it.”

“Fair enough. How do I reproduce it?”

He shrugged. “I don’t know. Users are reporting it happens randomly. Here’s what you gotta do: grab an iPad, download the game off the app store. Create an account and play the game until you hit the bug.”

So I did. I was reduced to one of those typewriter monkeys, banging away mindlessly at the keyboard until, by sheer coincidence, I stumbled upon the sequence of button presses that triggered the undiscovered bug.

For an afternoon I monkeyed away, but no matter what buttons I pressed, the mythical black screen would not appear. I left the office, defeated and mentally exhausted.

The next morning I checked into the office, picked up the iPad, and resumed my monkeying. But this time my fortune was different: within 15 minutes, lo and behold, the screen flashed white, followed by an unrepentant screen of black.

What did I do to trigger this? I retraced my steps, trying to repeat the miracle. It happened again. Methodically, I searched for a deterministic sequence of actions that brought our app to its knees. Go to the profile page. Hit button X. Go to page Y and back to the profile page. Hit button Z. The screen flickered for a millisecond, then black. Ten times out of ten.

With a sigh of relief, I jotted down this strange choreography and went for a walk. Returning with a fresh mind ready to tackle the next stage of the problem, I executed the sequence one more time, just to make sure. But the bug was nowhere to be seen.

I racked my brain for an explanation. The same sequence of actions now produced different results, I reasoned. Which meant something must have changed. But what?

It occurred to me that the page looked a little different now from when I had been able to reproduce the bug. In the morning, there was a little countdown timer in the corner of the screen, indicating the time until an upcoming event. The timer was not there anymore. Could it be the culprit…

To test this hypothesis, I made a build that pointed the game at the dev server and fired up a system event. The timer appeared. I executed my sequence — profile, tap, home screen, back to profile, tap — and sure enough, with a flicker, the black screen appeared. I turned off the timer and repeated the sequence — profile, tap, home, profile, tap — no black screen. I had finally discovered the heart of the matter: there was some strange interaction between the timer and other things on the page.

At this point, with 100% reproducibility, the worst was over. It took a few more hours for me to investigate the issue and come up with a fix. My patch was quickly rolled out to production, and users stopped complaining about random black screens. Then my team went out for some celebratory beer.

I will now describe exactly what happened — and why a timer caused such an insidious bug.

The timer widget was implemented using an NSTimer that fired a callback every second. To do this, the timer holds a reference to the parent view that contains it. This is not too unusual, and is generally innocent and harmless — until you combine it with the way Objective-C manages memory.

Objective-C uses reference counting rather than a tracing garbage collector. I’ll remind you what this means: the runtime maintains, for each object, a count of how many references point to it. When this reference count reaches zero, the object is dead, since there is no way to reach it from anywhere in the system, and the runtime is free to deallocate it.

This breaks down with the NSTimer, though. When two objects hold references to each other, their reference counts remain at least 1, which means they can never be deallocated. In our app, this meant that whenever the view containing the timer went off screen, it was never disposed of; it lingered in the background forever. A memory leak.
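The pitfall isn’t unique to Objective-C. Here’s a little Python illustration of my own (not the app’s code): with the cycle collector switched off, reference counting alone never frees two objects that point at each other.

import gc
import weakref

class View:
    pass

class Timer:
    def __init__(self, target):
        self.target = target        # the timer keeps a strong reference to the view

view = View()
view.timer = Timer(view)            # ...and the view keeps the timer: a cycle

alive = weakref.ref(view)           # lets us check whether the view still exists
gc.disable()                        # rely on reference counting alone

del view                            # drop the last outside reference
print(alive() is not None)          # True: the cycle keeps both objects alive

gc.enable()
gc.collect()                        # the cycle collector breaks the cycle
print(alive() is None)              # True: the view is finally gone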

A memory leak, by itself, can go unnoticed for a long time with no impact. The last part of the puzzle that brought everything crashing down had to do with the way a certain button was implemented. This button, when pressed, broadcasted a message, which would then be received by the profile view.

When the timer is active, it is possible to get the system into a state with two profile views — a real one and a zombie one kept alive by a reference cycle with the timer.

Then when the message is broadcasted, both the real and zombie views receive the message in parallel. The button logic is executed twice in rapid succession, which understandably causes the whole system to give in.

With this mechanism in mind, the fix was easy. Just invalidate the timer when the view goes out of view. Without the reference cycle, the profile view is disposed of correctly and all is well again.

I think this story demonstrates a fundamental truth about debugging: in order to debug effectively, you need a deep understanding of your technology stack. This is not always true of programming in general — quite often you can write code that works without really understanding what it’s doing. When developing a feature with an unfamiliar technology, the typical workflow is: if you don’t know how to do something, copy something similar from StackOverflow or from a different part of your code base, and make changes until it works. And that’s a fine way to do things.

But debugging requires a more structured methodology. When many things are breaking in haphazard ways, you need to narrow the problem down to its very core, to identify precisely which component is broken — in this case, a reference cycle that never got released. The core of the problem may be buried within layers upon layers of an API, even an API you believe to be bulletproof. It might require digging into assembly code, or even hardware.

To find that core requires an understanding of a mind-boggling stack of technologies that software today sits upon. That’s what it takes to become a master debugger.

So, what’s the hardest bug you’ve ever debugged?


Algorithmic Trading Hackathon

March 22, 2015

The hackathon was called Code B: UW Algorithmic Trading Competition, hosted by Bloomberg and various UW student groups. It was a 17-hour hackathon where you “create the best trading platform completely from scratch”. As far as I know, this was the first time the hackathon had been run, and in this article I’m going to write about my experience.


We were allowed teams of up to three, but my roommate Andrei and I signed up as a team of two. Like me, Andrei is a CS major. Neither of us had any experience with trading stocks, or anything finance related, for that matter. When asked to choose a team name, we named ourselves team /dev/rand (implying that we were so bad we’d be no better than a random number generator).

The hackathon was scheduled to start Friday evening, running through the night until noon the next day. The goal was to write a program to autonomously trade stocks over a 20 minute period, battling other programs to earn as much money as possible. The programs communicated by connecting to a central server on Bloomberg’s side, so we could use any programming language we wanted. It was decided that Andrei would come up with strategies, and I would implement them in Python.

Rules of the Game

The specifics of the API and mechanics of the game were not revealed until the official start of the hackathon. The 50-60 teams packed into an auditorium as the organizers started to explain the technical details.

The rules turned out to be fairly simple. The only actions allowed were to bid (attempt to purchase) on a stock for some price, or ask (attempt to sell) a stock for some price. If at any point someone’s bid is higher than someone else’s ask, the deal goes through and the stock changes hands.

Now all of this was fairly standard, but beyond this point the rules diverged from real life. To encourage people to buy stocks (and not just hoard the initial money), each share of a stock paid dividends to its owner every second. And to prevent teams from simply buying one stock and holding it for the entire game, the dividends quickly diminish the longer you own the stock.

This quirky dividends system turned out to be central to our strategy. Additionally, the differences from real stock markets meant that any previous experience with finance and stock trading was less useful — definitely a good thing for us because many of our competitors were seriously studying finance and we had no experience anyway.

And it begins!

After the rules presentation, the hackathon kicked off. It was slightly past 7pm, and very quickly you could see teams buying and selling stocks. We decided to take it slow, discussing strategies over dinner.

We started work around 8pm. I began writing code to parse the input, while Andrei worked on deciphering the rather cryptic specification document. Although the API specs were clear enough, they were (intentionally) vague about how the system behaved behind the scenes. There were many formulas with lots of variables, and we had no idea what many of them meant.

So we took an experimental approach. Tentatively we put in a bid for a few shares of Google stock — and our net worth immediately took a nosedive. But the stock rapidly generated dividends, and before long our net worth recovered to its initial value and kept going up. The success was short-lived, however: the dampening effect on the dividends soon kicked in, and our rate of return quickly diminished to near zero.

We tried again, buying a few shares of Twitter stock. The same thing happened: our value went down, quickly recovered, then gradually leveled off at 50 dollars more than we started with.

With this information, we formulated a rough strategy. We didn’t know how to predict which stocks would go up, and we didn’t have a plan for buying and selling at favorable prices. Instead, we would take advantage of a stock’s “golden period”, when it initially pays massive dividends. It was crucial to buy as quickly as possible, since the clock starts ticking as soon as you own a single share. So we would spend all our money buying as many shares as possible, at whatever price, then wait as the golden-period payout on our entire bankroll made us rich. A few minutes later, when the golden period ran out, we would slowly sell, iteratively lowering our asking price until we found a buyer.

Once we sold the last share of a stock, its dividend clock didn’t immediately reset; it slowly regenerated. So if we waited a while, say 5 minutes, and then bought the stock back, we’d get another brief golden period. Taking this one step further, we decided on a strategy that cycled through the 10 stocks: at any given point, we would hold at most 4 of them, while the other 6 were left to “recharge”.
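In spirit, the cycling loop looked something like the following Python sketch. This is a rough reconstruction, not our actual code (which is linked at the end of this post) — the exchange client and its method names are stand-ins for the hackathon’s API:

import time
from collections import deque

# Placeholder symbols -- the real game had its own set of ten stocks.
STOCKS = deque(["GOOG", "TWTR", "AAPL", "MSFT", "AMZN",
                "FB", "NFLX", "IBM", "ORCL", "INTC"])

HOLD_SLOTS = 4          # hold at most 4 stocks at a time
GOLDEN_PERIOD = 120     # seconds of juicy dividends after buying (assumed)

def run(client):
    held = []           # list of (symbol, time_bought)
    while True:
        # Dump anything whose golden period has expired and send it to the
        # back of the queue so its dividend clock can recharge.
        for symbol, bought_at in list(held):
            if time.time() - bought_at > GOLDEN_PERIOD:
                client.sell_gradually(symbol)            # stand-in method
                held.remove((symbol, bought_at))
                STOCKS.append(symbol)

        # Fill empty slots with the stock that has recharged the longest,
        # buying as many shares as we can afford at whatever the price is.
        while len(held) < HOLD_SLOTS and STOCKS:
            symbol = STOCKS.popleft()
            client.buy_with_all_available_cash(symbol)   # stand-in method
            held.append((symbol, time.time()))

        time.sleep(1)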

I proceeded to code up the algorithm, while Andrei analyzed the spec document and brainstormed ways to improve the strategy. From the equations in the spec, he came up with a formula to determine which stock generated the highest dividends. Every half hour, the scoreboard would reset; by 3am I was basically done, and our algorithm consistently came first or second by the end of each round. It worked beautifully, simultaneously juggling a bunch of different stocks, buying some while slowly selling others. We watched the scoreboard as we earned hundreds of dollars every minute, ending with a ridiculous amount of money by the time it reset.

It seemed at this point that a lot of the teams were having implementation issues, like connecting to the network and parsing input, and only a handful were making any money at all, so I was pretty happy with our results.

But at 4am, disaster struck. A new round started, and our algorithm instantly plummeted to the bottom of the leaderboard. Every time we bought or sold anything, we lost money, and none of it was coming back through dividends. What happened? It turned out that the parameters had changed: very little was now paid in dividends, and the only profits came from buying low and selling high. Our whole strategy, which centered on maximizing dividends, was rendered useless.

What’s worse, I discovered a bug in my implementation where stocks were not being cycled properly: the program would sell a certain stock, then instantly buy back the same stock, which never let the dividend clock reset — meaning no dividends. Also, by this point a lot of teams were flooding the network with requests, giving every network call a small chance of throwing an exception and crashing the whole program.

The network problem was easy to fix, but at 5am I was really tired and had difficulty tracking down the bug that was buying back the same stock. Andrei suggested a new set of strategies for the “low dividends” scenario, but by now I was too tired to implement them. Instead, we tweaked various constants to make the program play more patiently and predictably, so that even in the worst case it would make marginal gains instead of finishing dead last. After two hours of debugging, we managed to track down the cycling bug.

It was 7am and I could hardly keep my eyes open so we found a couch and napped for two hours, until the mock competition began.

Mock Competition and final tweaking

At 9am, a few hours before the final competition, there was a mock competition meant to be identical to the final one. There were three rounds: a high-dividends scenario, a low-dividends scenario, and one in the middle.

We won the high-dividends round hands down — unsurprising, since our entire strategy was designed with that set of parameters in mind. In the low-dividends round we didn’t do as well, but thanks to careful tweaking we still made a modest amount of money, coming in fifth. In the medium round, we got second place. This was enough to win the mock competition.

Now, let me tell you about our competition. Most teams’ net worth increased gradually, their scores creeping up as they accumulated dividends. We were confident we could play the dividends game, so that didn’t trouble us too much. What was really troubling was a team called “vlad” (I don’t remember their exact name, but it ended with vlad). Instead of gaining money a few dollars at a time, “vlad” remained at a constant net worth for a long time, then suddenly gained hundreds of dollars at once. Their algorithm clearly operated completely differently from ours, and we had absolutely no explanation of what was going on.

It didn’t help that the formula for net worth was complicated and we didn’t fully understand it. Our net worth clearly increased when we did well, but it fluctuated wildly, sometimes dipping by hundreds of points when we made a large transaction, only to bounce back when dividends started rolling in.

The next few hours were fairly unproductive, since we had no more ideas on how to improve our algorithm. Although Andrei had some ideas on strategies for the low dividends game, after pulling an all-nighter I was in no shape to try implementing them.

The Final Game

It was soon time for the final competition, the culmination of all our efforts. Having carefully noted down the parameters from the mock competition, we were ready to use that information to get every edge we could in the finals.

Round 1 was high dividends. We played with a highly aggressive set of parameters, dumping our bad stocks very cheaply in pursuit of the dividend regeneration. The early game was contentious, but by the 10-minute mark we had gained a solid lead over the competition, and we maintained it until the end. We won round 1, with “vlad” coming in third place.

Round 2 was low dividends. We deployed the patient strategy, which was less eager to dump anything and held onto bad stocks until we got a good price for them, since there were few dividends to fight over anyway. We came in fifth place, with “vlad” coming in fourth.

Round 3 was medium dividends. We started off shaky — at the halfway mark we were still in the middle of the pack — but we slowly gained ground, and five minutes before the end we were in third position. “vlad” was in first place, with a lead big enough that neither we nor the second-place team were going to overtake them. But we knew that with our points from the first round, we only needed second place to beat “vlad” and win the competition — and with 3 minutes left on the clock, we overtook the second-place team. We were going to win it!

Then, the whole scoreboard goes black.

It hadn’t crashed — it was the organizers’ way of building suspense, so that the final standings wouldn’t be known until the winners were announced. We waited anxiously as the final seconds ticked down and the organizers announced fourth place, third place, the UI award. We just needed second place in this round to win; if we got third, “vlad” would beat us by a hair.

And second place goes to… team /dev/rand. What? We stared in disbelief as we realized we had lost to “vlad”.

Going home

Turns out that in the last 2 minutes of competition, we got overtaken by not one, but two teams. So we actually finished round 3 in fourth place.

Our prize for second place? A PlayStation 4 (worth ~$450), a Parrot drone (worth ~$100), and, most importantly, the satisfaction of winning a finance competition without knowing the first thing about finance. Team “vlad” got two PlayStations and a drone (well, they could have taken all three PlayStations, but they were nice enough to leave us one).

Big thanks to all the organizers and volunteers for keeping everything running smoothly!

If you’re interested, our source code is in a git repo here. It’s 400 lines of hackathon-level-bad python code.

What about real algo trading?

A natural question to ask: can you get rich IRL with this algorithm? The answer is clearly no — we essentially gamed the system by greedily grabbing the golden period of dividends, a mechanic designed to encourage people to buy and sell stocks. In the real world, of course, dividends don’t work like that.

Then other than this mechanic, how else is this competition different from real world algo trading? Unfortunately, I don’t know enough about this topic to answer that question.

Philosophically, I still don’t understand how it’s possible that they basically pull money out of thin air. I mean, a stock trader doesn’t intrinsically create value for society, but they get rich doing it? I don’t know.

