How to succeed in your first tech internship

October 23, 2016

Congratulations, you’ve just landed your first software engineering internship! You’ve passed a round or two of interviews, signed an offer letter, and you’re slated to start next month. What now? You might be a bit excited and a bit apprehensive, wondering what startup life is like and whether you’re even smart enough for the work they’ll give you…

I felt all these things when I started my first internship three years ago. Now, I’ve completed four internships and I’m halfway through my fifth one; I’m sort of a veteran intern by now. In these five internships, I’ve learned a good deal about what it takes to succeed in an internship, things that are not obvious to those just starting out. Hopefully by sharing this, others can avoid some of the mistakes I made.

Your first week at [startup]

Chances are that you’ve written code for school assignments, and maybe a few side projects for fun. Work is a bit different: you’re dealing with a massive codebase that you didn’t write, one that probably no single engineer in the company understands in full. Facing a codebase of this complexity, you might feel overwhelmed, struggling even to find the right file to start in. You feel uneasy that a small change is taking you hours, afraid that your boss thinks you’re underperforming.

Relax, you’re doing fine. If you got the job, it means they have faith in your abilities to learn and to succeed. I’ve talked to hundreds of Waterloo interns, and I’ve never heard of anyone getting dismissed for underperforming. The first few weeks will be rough as you come to terms with the codebase and technology stack, but trust me, it gets much, much easier afterwards.

Asking for help

As an intern, you’re not expected to know everything, and often you will be asking for help from more experienced, full-time engineers.

Before asking for help, you should spend a minute or so searching Google, Stack Overflow, or the company wiki. Most general questions (ones not related to company-specific code) can be answered with Google, and you save everyone’s time this way.

When you do ask for help, be aware that the other person might be working on a completely different project, so they don’t share your mental context. Rather than jumping straight into the intricate details of your problem, describe at a high level what you’re trying to accomplish and what you’ve tried, and only then delve into the technical specifics.

An example of a poorly phrased question would be: “hey, how do I invalidate a FooBarWindow object if its parent is not visible?” You’re likely to get some confused stares — this might make perfect sense to you, but they’re wondering what FooBarWindow is and why you’re trying to invalidate it at all.

A better way to phrase it would be something like: “hey, I’m working on X feature, and I’m encountering a problem where the buttons stop working after you press the back button. After looking a bit, I discovered my component should have been invalidated when its parent is no longer visible, but that’s not happening…” This time, you’ve done a much better job of describing your problem.

It’s always helpful to take notes, so you never have to ask the same question twice. How do you commit your code to Git? How do you deploy the app to staging? If you don’t write it down, you’re going to forget.

At the start, you’re going to be asking five questions an hour, which is okay. Soon you’ll find yourself needing to ask fewer and fewer, and eventually you’ll only ask a handful per day.

Taking charge of your own learning

Like it or not, software engineering is a rapidly shifting field, where a new JavaScript framework comes out every six months. You have to keep learning, or your skills will become obsolete. Learning is even more important when you’re an intern, still learning the ropes. Fortunately, a tech internship is a great opportunity to learn quickly.

Not all software engineers are equal — at some point, you get to choose what you want to do: frontend, backend, or full stack? Web, iOS, or Android? Become an expert in Django or Ruby on Rails? Depending on the company, you often get considerable say on what team you’re on, and what project you work on within your team. Use this as an opportunity to get paid to learn new, interesting stuff!

A technology worth learning should satisfy two criteria: it should be something you’re interested in, and it should be widely used in the industry. That is to say, it’s more useful to know a popular web framework than an internal company-specific framework that does the same thing.

When you get to pick what project to take next, it might be tempting to pick something familiar, where you already know how to do everything. But you learn a lot more by working on something new; in my experience, employers have always been accommodating to my desire to work on a variety of different things.

You will overhear people talking passionately, with phrases like, “oh, it’s running Nginx inside Docker and fetches the data from a Cassandra cluster…” If you’ve never heard of these technologies, the sentence is nonsensical to you. It’s well worth spending 10 minutes reading about each technology that you hear mentioned, not to become an expert, but just to have a passing understanding of what each of these things does. With a few minutes of research, you’d be able to answer: “When should you use Cassandra over MySQL?”

Learning is valuable even when it’s not immediately relevant to you. Occasionally, you’ll find yourself in meetings where you don’t have a clue what’s going on, say with business managers or projects you’re not involved in. Rather than zoning out and browsing Reddit for the duration of the meeting, listen in and learn as much as you can, and take notes if you begin to fall asleep! The human brain has near-infinite capacity for learning new things; at no point will it reach “capacity” like a hard drive.

Take responsibility and deliver results

A common misconception is that programmers are paid to write code. Wrong: as a programmer, your job is to deliver results and provide value to your company. Part of this job involves writing code, but a lot of the work is communicating with managers, designers, and other engineers to figure out what code to write.

When you’re assigned a project, you own it, and you’re in charge of every task required to push it through to completion. What if something is broken in an API owned by another team? You might be tempted to hand in your code and proclaim, “my code works fine, so my job here is done; I can show you that their API is broken, so it’s their fault.” No: if your feature is broken, you need to fix it one way or another. So go ping the engineer responsible, schedule a meeting, do whatever it takes to get your project completed.

Sometimes you run into problems that seem insurmountable, so complex that you feel compelled to put down your sword, give up, and tell yourself, “this is too hard for an intern.” This is a bad idea: you should never expect a full-time engineer to come in, take over, and bail you out. Your mentors are not superhuman — it’s not like they can instantly conjure a solution; they have to work through the problem one piece at a time, just like you. There’s no reason you can’t do the same.

The product you deliver is what ultimately matters, so don’t worry about secondary measures of productivity, like how many lines of code you commit or how many story points you rack up on Jira. There’s an apocryphal tale of a programmer who, disagreeing with management’s lines-of-code metric, wrote “-2000” after making the code simpler. Likewise, you aren’t being judged if you come in 30 minutes after your manager, leave 30 minutes before, or take a mid-day stroll in the park, as long as you’re consistently delivering quality features.

Many interns suffer from “intern mentality” and consider themselves fundamentally different from full-timers in some way. This is an irrational belief — your skills are probably on par with those of a junior engineer (or will be in a few weeks). This means you should behave like any other full-time engineer (albeit minus interview and on-call duties); the only difference is that you’re leaving in a few months. Don’t be afraid to contribute your insights and ideas, and don’t consider them less valuable because you’re “just an intern.”

Other tips

What should you learn to prepare for an internship if you have spare time? Learn Git! Git is the version control system used at most companies; it’s non-trivial to pick up, and it’s used more or less the same way everywhere. Other things are less useful to pre-learn, because they’re either easy to pick up or used in so many different ways that it’s more efficient to learn them on the job.

Internships are a great way to travel places, if that interests you. I picked 5 internships in 4 different cities for this reason. Unlike school, you don’t have to think about work during weekends, which leaves you lots of time to travel to nearby destinations.

I’ve only talked about what happens during work. If your internship is in the USA, the Unofficial Waterloo USA Intern Guide was super helpful in answering all my logistical questions. Also, some of my friends have written about crafting a resume, and how to ace the coding interview.

I have a YouTube channel!

August 14, 2016

Here’s something I’ve been working on recently: a YouTube channel of my guitar covers. I’ve been playing guitar for a few years now (I started in my first year of university), and I thought it would be fun to record myself playing my favorite songs.

At the time of writing, I have 11 videos. Here are a few of them:

I’m going to upload more as I have time. Please subscribe!

A Brief Introduction to DNA Computing

July 28, 2016

DNA computing is the idea of using chemical reactions on biological molecules to perform computation, rather than silicon and electricity. We often hear about quantum computers, and there’s a lot of discussion about whether they will actually work, whether they’ll crack RSA, and so on. In the domain of alternative computers, DNA computers are often overlooked, but they’re easy to understand (none of the quantum weirdness), and they have the potential to do massively parallel computations efficiently.

I first heard about DNA computers when doing my undergrad research project this term. I won’t bore you with the details, but it has to do with the theoretical aspects of DNA self-assembly which we will see is related to DNA computing.

The study of DNA computing is relatively new: the field was started by Leonard Adleman who published a breakthrough paper in Science in 1994. In this paper, he solved the directed Hamiltonian Path problem on 7 vertices using DNA reactions. This was the first time anything like this had been done. In this article, I will summarize this paper.

Operations on DNA

DNA is a complicated molecule with many interesting properties, but for our purposes we can view it as a string over a 4-letter alphabet (A, C, G, T). Each string has a Watson-Crick complement, where A is complementary to T, and C is complementary to G.
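Since we’re treating DNA as strings, the complement is just a character-for-character substitution. In Python (my illustration, not from the paper):

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(strand):
    # Watson-Crick complement over the alphabet {A, C, G, T}
    return strand.translate(COMPLEMENT)

print(complement("ACCTG"))  # prints TGGAC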

Without delving too deep into chemistry, I’ll describe some of the operations we can do with DNA.

1. Synthesis. We can use a machine to create a bunch of single DNA strands of any string we like. The technical term for these is oligonucleotides, but they’re just short DNA pieces. One limitation is we can only make strands of 20-25 nucleotides with current lab techniques.

2. Amplify. Given a test tube with only a few strands of DNA, we can amplify them into millions of strands using a process called polymerase chain reaction (PCR).

3. Annealing. Given a test tube with a lot of single stranded DNA, cooling it will cause complementary strands to attach to each other to form double strands.

4. Sort by length. By passing an electrical field through a solution, we can cause longer DNA strands to move to one side of the solution, a technique called gel electrophoresis. If desired, we can extract only strands of a certain length.

5. Extract pattern. Given a test tube of DNA, we can extract only those that contain a given pattern as a substring. To do this, put the complement of the pattern string into the solution and cause it to anneal. Only strands that contain the pattern will anneal, and the rest can be washed away.

This list is by no means exhaustive, but gives a sample of what operations are possible.
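To make these operations concrete, here’s a toy Python model that treats a test tube as a list of strands. This is a sketch of the abstractions, not the chemistry:

def amplify(tube, copies=1_000_000):
    # Operation 2 (PCR): duplicate every strand many times.
    return tube * copies

def keep_length(tube, length):
    # Operation 4 (gel electrophoresis): extract strands of one length.
    return [s for s in tube if len(s) == length]

def extract(tube, pattern):
    # Operation 5: keep strands containing the pattern; in the lab this
    # means annealing the pattern's complement and washing away the rest.
    return [s for s in tube if pattern in s]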

Solving Directed Hamiltonian Path with DNA

The Directed Hamiltonian Path problem asks: given a directed graph and two vertices s and t, does there exist a path from s to t that goes through all the vertices?

For example, in this graph, if s=1 and t=3, then 1->4->2->3 is a directed Hamiltonian path.

This problem is related to the Travelling Salesman Problem, and is particularly interesting because it is NP-complete, so (assuming P ≠ NP) conventional computers can’t solve it efficiently. It would be really nice if DNA could solve it better than normal computers.

Here I’ll describe the procedure Leonard Adleman carried out in 1994. He solved an instance of Directed Hamiltonian Path on 7 vertices, which is obviously trivial, and yet it took him 7 days of laboratory time. Early prototypes tend to be laughably impractical.

Main Idea: we represent each vertex as a random string of 20 nucleotides, divided into two halves of 10 nucleotides each. We represent a directed uv-edge by concatenating the second half of u’s strand with the first half of v’s strand, and taking the complement of the result.
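In code, the encoding might look like this (a Python sketch; the strand lengths follow the paper, everything else is my own naming):

import random

COMPLEMENT = str.maketrans("ACGT", "TGCA")

# Each of the 7 vertices gets a random 20-nucleotide strand.
vertex_strand = {v: "".join(random.choice("ACGT") for _ in range(20))
                 for v in range(1, 8)}

def edge_strand(u, v):
    # Second half of u's strand + first half of v's strand, complemented,
    # so the edge strand anneals across the u-v junction.
    return (vertex_strand[u][10:] + vertex_strand[v][:10]).translate(COMPLEMENT)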

The idea is that now, a directed path consists of vertex strands interleaved with edge strands in a brick-wall pattern: each edge strand anneals across the junction between two consecutive vertex strands.

When we put all the vertex and edge strands into a test tube, strands encoding the answer will quickly anneal (and not just one copy, but millions). However, the test tube also contains all kinds of strands that don’t represent Hamiltonian paths at all. We have to perform a tricky sequence of chemical reactions to filter out only the DNA strands representing valid solutions.

Step 1. Keep only paths that start on s and end on t. This is done by filtering only strands that start and end with a given sequence, and this is possible with a variation of PCR using primers.

Step 2. Sort the DNA by length, and only keep the ones that visit exactly n vertices. Since each vertex is encoded by a string of length 20, in our example we would filter for strands of length 80.

Step 3. For each vertex, perform an extract operation to filter only paths that visit this vertex. After doing this n times, we are left with paths that visit every vertex. This is the most time consuming step in the whole process.

Step 4. Any strands remaining at this point correspond to Hamiltonian paths, so we just amplify them with PCR, and detect if any DNA remain in the test tube. If yes, there exists a directed Hamiltonian path from s to t in the graph.
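For intuition, here’s the same four-step filter written as an in-silico brute force over walks instead of strands (the edge set below is assumed for illustration, chosen so that 1->4->2->3 is the answer):

from itertools import product

edges = {(1, 4), (4, 2), (2, 3), (3, 1), (2, 1)}
vertices = {1, 2, 3, 4}
s, t, n = 1, 3, 4

def is_walk(seq):
    return all(pair in edges for pair in zip(seq, seq[1:]))

# The "test tube": every walk of n vertices the strands could form.
tube = [w for w in product(vertices, repeat=n) if is_walk(w)]

# Step 1: keep walks starting at s and ending at t.
tube = [w for w in tube if w[0] == s and w[-1] == t]
# Step 2: keep walks of exactly n vertices (all are, by construction).
tube = [w for w in tube if len(w) == n]
# Step 3: for each vertex, keep only walks that visit it.
for v in vertices:
    tube = [w for w in tube if v in w]
# Step 4: any survivor is a directed Hamiltonian path.
print(tube)  # -> [(1, 4, 2, 3)]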

That’s it for the algorithm. Adleman went on to describe the incredible potential of DNA computers: a computer today can do about 10^9 operations a second, but you can easily have 10^20 DNA molecules in a test tube.

DNA Computing since 1994

Shortly after Adleman’s paper, researchers applied similar ideas to other difficult problems: 3-SAT, the maximal clique problem, the shortest common superstring problem, even breaking DES. These papers were usually difficult to implement in the lab; for example, Richard Lipton proposed a procedure to solve 3-SAT in 1995, but only in 2002 did Adleman solve an instance of 3-SAT with 20 variables in the lab.

On the theoretical side, there was much progress in formalizing rules and trying to construct “universal” DNA computers. Several different models of DNA computing were proven Turing complete (actually, my research adviser Lila Kari came up with one of them). Building these computers has been difficult, because some of the enzymes required for certain operations don’t exist yet.

There has been progress on the practical side as well. Since Adleman, researchers have looked into other models of using biological molecules for computation, like solving 3-SAT with hairpin formation or solving the knight’s tour problem with RNA instead of DNA.

In 2006, a simplified DNA computer was built that could detect whether a combination of enzymes was present, and release medicine only if all of them were (indicating that the patient had a disease). In 2013, researchers built the “transcriptor”: DNA versions of logic gates. One reason these are important is that transcriptors are reusable, whereas previously all reagents had to be thrown away after each operation.

Current Limitations of DNA Computing

Clearly, the method I described is very time-consuming and labor-intensive; each operation takes hours of lab work. This is not really a fundamental problem, though: in the future we might use robots to automate these lab operations.

The biggest barrier to solving large instances is that right now, we can’t synthesize arbitrary long strands of DNA (oligonucleotides). We can synthesize strands of 20-25 nucleotides with no problem, but as this number increases, the yield quickly becomes too low to be practical. The longest we can synthesize with current technology is a strand of length about 60. (Edit: Technology has improved since the papers I was looking at were written. According to my adviser, we can do 100 to a few thousand base pairs now in 2016).

Why do we need to synthesize long oligonucleotides? To represent larger problem instances, each vertex needs a unique encoding. If the encoding is too short, there will be a high probability of random sections overlapping by accident when they’re not supposed to, thereby ruining the experiment.

One promising direction of research is DNA self-assembly, so instead of painstakingly building oligonucleotides one base at a time, we put short strands in a test tube and let them self-assemble into the structures we want. My URA project this term deals with what kind of patterns can be constructed with self-assembly.

Today, if you need to solve a Hamiltonian path problem, like finding the optimal way to play Pokemon Go, you would still use a conventional computer. But don’t forget that within 100 years, computers have turned from impractical contraptions into devices that everyone carries in their pockets. I’ll bet that DNA computers will do the same.


  1. Adleman, Leonard. “Molecular computation of solutions to combinatorial problems”. Science, volume 266, 1994.
  2. Lipton, Richard. “DNA solution of hard computational problems”. Science, volume 268, 1995.

CS488 Final Project: OpenGL Boat Game

July 24, 2016

Here’s something I’ve been working on for the past few weeks for one of my courses, CS488 – Intro to Computer Graphics. For the final project, you’re allowed to do any OpenGL or raytracing project, as long as it has 10 reasonable graphics-related objectives. Here’s a video of mine:

A screenshot:

It’s a simple game where you control a boat and go around a lake collecting coins. When you collect a coin, there’s a bomb that spawns and follows you around. You die when you hit a bomb. Also if two bombs collide then they both explode (although you can’t see that in the video).

Everything is implemented in raw OpenGL, with none of those modern game engines or physics engines. It’s around 1000-ish lines of C++ (difficult to count exactly because there’s a lot of donated code).

Edit (8/10/2016) – I received an Honorable Mention for this project!

Some thoughts about CS488

For those that haven’t heard about CS488, it’s one of the “big three” — fourth year CS courses with the heaviest workload and with large projects (the other two being Real-time and Compilers). It’s one of the hardest courses at Waterloo, but also probably the most rewarding and satisfying course I’ve taken.

There are four assignments, each walking you step by step through graphics techniques: drawing a cube with OpenGL, building a puppet with hierarchical modelling, writing a simple ray tracer. Then there’s the final project, where you can choose to make something with OpenGL or extend your ray tracer. The class split roughly 50/50: about half did OpenGL and the other half did a ray tracer. I personally feel that OpenGL gives you more room to be creative and make something unique, whereas ray tracing projects end up implementing a mix of the same algorithms.

The first two assignments weren’t too bad (I estimate about 10 hours each), but some time during assignment 3 I realized I was spending a lot of time in the lab, so I got an hours-tracking app on my phone to measure exactly how much time this course was taking. Assignments 3 and 4 each took me 15 hours. I spent 35 hours on my final project, over a period of 3 weeks. I chose relatively easy objectives that I was confident I could do well, which left time to polish the game and do a few extra objectives. I’m not sure what the average time spent on the final project is, but it’s common to spend 50-100 hours. Bottom line: you can put in a potentially unbounded amount of time chasing the gold medal, but the effort actually required to get a good grade is quite reasonable.

Now for the bad part about this course (obviously not the instructor’s fault): OpenGL is incredibly difficult to work with. Even to draw a line on the screen, you have to deal with a lot of low-level concepts: vertex array objects, vertex buffer objects, uniform attributes to pass to shaders, stuff like that. It doesn’t help that when something goes wrong in a shader (which runs on the GPU), there’s no way to pass an error message back to the CPU, so you can’t print out variables to debug it. It also doesn’t help that there are many incompatible OpenGL versions, and code you find in an online tutorial can be subtly broken for the version you’re using. On the other hand, working with OpenGL really makes you appreciate modern game engines like Unity, which take care of all the low-level stuff for you.

Roboroast: upload your photo to get an algorithmically generated insult!

April 27, 2016

I’d like to share a side project I’ve been working on for the past few weeks. Roboroast is an app that automatically generates humorous insults for you or a friend based on how you look. It was written in collaboration with my friend Andrei Danciulescu.

The basic operation is as follows. There’s a subreddit called /r/RoastMe where random people post a picture of themselves, and other people proceed to “roast” the person with funny comments making fun of their appearance.

Our app takes your photo and uses a face recognition algorithm to find a poster in /r/RoastMe who looks like you. Then we display the comments for your closest matches.

You can try it at

Sample Results

Here are some roasts of me:

Here are some for Andrei:

High Level Overview

The project comprises roughly three parts:

Part 1 is the Reddit scraper. We use the PRAW API to go through all posts on the /r/RoastMe subreddit, saving comments to MongoDB and saving images to the filesystem.
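In sketch form, the scraper loop looks something like this (written against the current PRAW API, which has changed since 2016; save_roast is a hypothetical stand-in for our MongoDB code):

import praw

reddit = praw.Reddit(client_id="...", client_secret="...",
                     user_agent="roboroast-scraper")

def save_roast(submission_id, text):
    # Hypothetical stand-in: the real code inserts into MongoDB and
    # downloads submission.url to the filesystem.
    print(submission_id, text[:80])

for submission in reddit.subreddit("RoastMe").hot(limit=100):
    submission.comments.replace_more(limit=0)  # drop "load more" stubs
    for comment in submission.comments:        # top-level roasts only
        save_roast(submission.id, comment.body)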

Part 2 is the Face++ uploader. Face++ is a cloud service with a REST API that handles our face matching. To use it, we upload all the images from part 1 into a “faceset” which we can query later.

The first two components only need to be run periodically, maybe once a month, to update the faceset with new posts from Reddit. Part 3 is the webapp, which is the user-facing component. It accepts user uploads, searches for matches using the Face++ API, and renders a list of insults to the user.

Technology Stack

As mentioned before, we use a number of third-party APIs: PRAW for scraping Reddit posts, and Face++ for face recognition.

All the backend code is written in Python. The web app uses the Flask web framework, and is wrapped with NGINX and Gunicorn to handle connections and serve static files. We use MongoDB for the database.
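The user-facing part reduces to a single upload route. Here’s a minimal Flask sketch (the route name and helper functions are illustrative, not our actual code):

from flask import Flask, request, render_template

app = Flask(__name__)

def find_closest_faces(photo):
    # Stub: the real version sends the photo to the Face++ REST API.
    return []

def roasts_for(matches):
    # Stub: the real version looks up the matches' comments in MongoDB.
    return []

@app.route("/upload", methods=["POST"])
def upload():
    photo = request.files["photo"]
    return render_template("results.html",
                           roasts=roasts_for(find_closest_faces(photo)))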

The frontend is built with Bootstrap. We also use the JavaScript libraries jQuery and Handlebars.js.

The whole thing is hosted on a single AWS EC2 instance.

How good is the face matching?

The face matching is actually decent. Face++ produces reasonable matches most of the time.

To see the matching results for yourself, you can append ?r=1 to the end of the URL (on the results page). This is hidden by default.

Do the insults make sense?

Although the face matching does a decent job, we found that the quality of the results was somewhat hit-or-miss.

When we envisioned the concept for this app, we assumed that most insults would make fun of the subject’s face. However, many insults refer to non-facial appearance, clothing, or objects in the background. Since we only do face matching, these comments make no sense.

Other times, comments refer to the title of the post — in other words, the insult depends on both the submission title and the picture. Again, these make no sense with only the picture.

We attempted to mitigate this with heuristics that analyze the comment, excluding roasts that refer to the title or to articles of clothing. This approach had limited success, because natural language processing is hard.


When Andrei initially proposed the idea for this app, I thought the concept was pretty cool and unique. In a month or so we had a prototype, and I spent a few more weeks polishing the project for release. The quality of the results you get is still highly variable, but we’re working on improving our algorithms.

In any case, it was my first time working with a lot of these technologies, and I had fun and learned a lot building it.

Teaching Myself Electronics: Zero to Arduino in 5 Weeks

April 1, 2016

I’m about to graduate with a degree in computer science, but I can’t describe how a computer works. Okay, maybe that’s an exaggeration. I can tell you all about assembly language and operating system kernels, and I have a good idea of how to build a CPU out of basic logic gates.

That’s where my knowledge ends. I have no idea how to build an AND gate, or how to coerce my 120V power supply to gently power these gates without frying them.

Learning is good, and this is a pretty big knowledge gap, so I’m going to teach myself electronics. My plan is to learn by building things. There’s a lot of mathematical theory to learn, much of it not that useful, and it’s easy to get bogged down in random details. It’s much better to just experiment, and go back to the theory when needed.

Week 1: Electronic Playground

The first problem was getting components. Unlike computer programming, where everything you need is on the internet, for hardware I’ll actually need to buy things. This is difficult when you don’t know exactly what you need. I also didn’t want a million different parts littering my bedroom haphazardly.

Eventually I settled on this all-in-one kit (cost $30).

It has a lot of components: LEDs, resistors, capacitors, even an antenna and speakers. All the components are fixed to a board, and to connect them you use wires that clip onto springs protruding from the board.

The kit comes with an instruction booklet that describes all kinds of things you can wire with it. For example, here’s a “harp” — it makes different tones when you hover your hand over the photoresistor:

This schematic is a bit too advanced for me at this stage — unfortunately the booklet doesn’t attempt to explain how it works.

That’s fine, the following books do an excellent job of starting from the basics:

After playing with this for a while, I learned a lot of basics: how current, voltage, and resistance work, how to read common schematic symbols, and how to decode a resistor’s color bands.
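As an example of that last one, the color-band arithmetic for a 4-band resistor fits in a few lines of Python (tolerance band ignored here):

# First two bands are digits, third is a power-of-ten multiplier.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistance_ohms(band1, band2, multiplier):
    return (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** DIGITS[multiplier]

print(resistance_ohms("yellow", "violet", "red"))  # -> 4700, a 4.7k resistor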

Week 2: Multimeter

Electricity is invisible, and debugging circuits is difficult without being able to see what’s going on. I went ahead and got a multimeter (cost $20):

It was easy enough to measure resistance and voltage (both AC and DC). The current measurement was not very sensitive, though, and I could barely register any reading.

Around this time I attended a workshop in Manhattan that taught how to read a schematic and build it on a breadboard. We made a 555 timer circuit that blinked an LED on and off:

I can’t understand how it works just yet, but breadboards are pretty neat. Much easier than sticking wires into springs on my electronics kit at home.

Week 3: Baby steps with Arduino

By now I was reaching the limits of what my electronic kit could offer, and I needed to graduate to something more serious.

So I went to the nearest electronics shop and got an Arduino Uno kit (cost $90). The Arduino is a microcontroller board that lets you prototype circuits easily with a breadboard. The Arduino Uno itself is only $25, but my kit came with an assortment of components and sensors.

Before long I had the Arduino up and running. It runs a dialect of C, so I felt at home in the programming environment.

Here’s a program that blinks the onboard LED on and off in a loop (more or less the “hello world” of hardware):

// the setup function runs once when you press reset or power the board
void setup() {
  // initialize digital pin 13 as an output.
  pinMode(13, OUTPUT);
}

// the loop function runs over and over again forever
void loop() {
  digitalWrite(13, HIGH);   // turn the LED on (HIGH is the voltage level)
  delay(1000);              // wait for a second
  digitalWrite(13, LOW);    // turn the LED off by making the voltage LOW
  delay(1000);              // wait for a second
}

Week 4: Transistor Switching

I didn’t really know what a transistor did, but it’s what logic gates are made of and the backbone of all computers, so it can’t hurt to learn about them, right?

I started off building logic gates from transistors, but couldn’t get them to work. It turned out that I had misunderstood how a transistor operates (it’s not the most intuitive thing at first glance). Luckily, I had a friend in electrical engineering, and she patiently cleared up my misconceptions.

I knew transistors are used for logic gates, but I didn’t know they can also amplify a current. I also learned about the many different types of transistors.

Here’s a circuit that I built (from the Arduino kit manual). It uses a transistor as a switch to control a motor:

By the way, here’s how you make an AND gate with two transistors:

While wiring things up, I accidentally burned out an LED and a BJT transistor. Apparently 5 volts without a resistor is fatal to many components. In software, if you mess up, you get a segmentation fault in your console or something — never the smell of burnt plastic in your room.

Week 5: Arduino Controlled Desk Lamp

Here’s an idea: wouldn’t it be nice if your lamp turned itself on when it gets dark? Useful or not, let’s build it!

This is actually a major milestone for me. Up until now, I’ve mostly been following existing schematics, using parts carefully selected by the people who wrote them. But for this project, I’m improvising everything from scratch. Oh yeah, it’s also the first time I’m working with 120V alternating current.

First I got a $15 desk lamp, the kind that plugs into the wall AC socket. I started by using a wire stripper to expose copper wires that I can plug into the breadboard:

I’m a bit nervous working with 120V current; obviously it’s a lot more powerful than the Arduino’s 5V. Generally, 120V won’t kill or seriously injure you, but it does deliver an unpleasant shock.

After stripping the cord, the lamp can be plugged into the breadboard. But the voltage is too high for the Arduino to handle directly; instead, I need a relay, which acts as a buffer and a switch. A transistor can act as a switch too, but relays can handle more current.

To detect light, I have a separate circuit with a photoresistor (its resistance changes depending on the amount of light shining on it). The Arduino can read the photoresistor by measuring analog voltage.

Here’s the schematic of my design:

Now for the software. It loops every 500ms, reads the light level, and decides whether the lamp should be on or off. It’s slightly complicated by the fact that when the lamp is on, it produces light, which affects the photoresistor readings; I need to compensate for that, but it’s not too bad.

Here’s the code I came up with:

int photoPin = 5;   // analog pin connected to the photoresistor
int lampPin = 3;    // digital pin driving the relay

bool isLampOn = false;

void setup(){
  pinMode(lampPin, OUTPUT);
}

bool shouldTurnLampOn(){
  int lightLevel = analogRead(photoPin);
  // The lamp's own light raises the reading, so use a higher threshold
  // while it's on to compensate (otherwise it would flicker on and off).
  if (isLampOn)
    return lightLevel < 70;
  else
    return lightLevel < 50;
}

void loop(){
  if (shouldTurnLampOn()) {
    digitalWrite(lampPin, HIGH);
    isLampOn = true;
  } else {
    digitalWrite(lampPin, LOW);
    isLampOn = false;
  }
  delay(500);  // check the light level every 500ms
}

Here’s a video demo:


Actually, my lamp idea probably isn’t that useful. Nevertheless, I made a lot of progress in just a few weeks, and I’m proud of myself for that.

I’ve barely scratched the surface of all the cool things you can do with electronics, but it’s a good start. Now I have all kinds of ideas on what to build next. I’d better get to it!

Four life lessons learned by playing Hearthstone

March 14, 2016

I’ve played Hearthstone on and off for a few years, since it first came out. As I played more and more, I began to notice parallels between my decision making processes in Hearthstone and in real life. This is a self-reflective post, and probably my first serious attempt to describe the core features of my mentality and decision making process. Although I wanted to write this for a long time, I found it difficult to put my ideas into words because they have been part of my personality for so long.

Why is Hearthstone a good representation of real life? Two reasons:

  • First, it’s a game of imperfect information and chance, so you must take risks and deal with uncertainty. Real life situations are usually like this. Games of perfect information (like chess) lack this probabilistic aspect and behave very differently.
  • Second, Hearthstone is a game about decision-making skills, rather than mechanics. Every game has some element of decision making, but many games require performing some mechanical action (e.g. last-hitting) better than your opponent. Mechanical skills are confined to the specific game and are less likely to be relevant in real life.

By playing Hearthstone, I developed a general internal model for making decisions in uncertain situations. This is a broad criterion and covers many situations in day-to-day life.

Lesson 1: There is always a correct decision, and it’s your job to find it

The goal in Hearthstone is to reduce your opponent’s life to zero. How do you accomplish this? You make a plan, perhaps flooding the board with minions, perhaps unleashing a deadly combination of spells.

For our purposes, it doesn’t matter what your strategy is. At the start of the turn, you look at the cards in your hand, the state of the board, what cards your opponent played before. Call this information the game state. You ponder for a bit and come up with an action that best improves your position.

You execute your action on the board, but you still don’t know what happens next with certainty. There are many things you cannot control, which I will call RNG. RNG is short for Random Number Generator, and I will use it to mean anything you don’t have control over.

I use the term RNG for lack of a better one, but I’m not just talking about random game mechanics. RNG includes any state hidden from you, like your opponent’s hand and strategy. Think of it as a random variable with a known distribution (e.g. you play a card that destroys a random minion — which minion will it hit?) or with an unknown distribution (e.g. what is the probability your opponent has two flamestrikes in his deck?). Even if the information is known to your opponent, it’s simpler to treat it as a random variable.
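You can’t know whether the deck actually runs two flamestrikes, but once you assume it does, “has he drawn one yet?” becomes a concrete calculation. A quick Python sketch (deck size and card counts assumed for illustration):

from math import comb

# Chance the mage holds at least one of `copies` flamestrikes after
# drawing `drawn` cards from a `deck`-card deck (hypergeometric).
def p_drawn_flamestrike(drawn, copies=2, deck=30):
    return 1 - comb(deck - copies, drawn) / comb(deck, drawn)

print(round(p_drawn_flamestrike(10), 2))  # -> 0.56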

Here’s the model summarized in a diagram:

In any game state, there must be one “correct” action that gives you the highest chance of winning the game. The decision-making player aims to consider all possible actions and choose the best, “correct” one.

As a corollary, decision making should be perfectly rational. Otherwise, if my decision engine generates two different actions depending on my emotional state of mind, they cannot both be correct.

A second corollary is that actions should always be justifiable through fundamental values. It’s unacceptable to do things out of habit, or because other people are doing them — everything I do should have a positive expected effect on the things I want to accomplish.

For me, one of my “meta” goals in life is to make correct decisions as much as possible. This is not to say that I behave like a robot — I still experience emotions like everyone else — but I try to eliminate emotions from my decision making process.

In Hearthstone, doing so gives you the highest chance of winning the game. It makes sense then, by extrapolation, that correct decision making gives you the best shot of getting what you want out of life.

Lesson 2: Information is valuable, treat information gathering as a subgoal

One rule of thumb in Hearthstone is “RNG first”. If you are going to play a sequence of cards, one of which has a random effect, it’s better to play the random effect first. This way you extract information out of the RNG pool of unknowns, and with this extra information you might be able to make a better play.

Another useful thing is to keep track of enemy secrets. Imagine you have this on the board:

You want to play a giant, but you’re worried that the secret is “mirror entity”, which summons a copy of the next minion you play.

Without any other information, you’re in a tough spot. But what if you played a minion last turn and the secret did not activate? Then you know the secret isn’t mirror entity, and you can confidently play the giant.

Alternatively, suppose you don’t have this information handy. One tactic is to “test out” the secret by playing a small minion and seeing whether the secret activates. You pay a price by making a normally inferior move, but the information you obtain is valuable for future decisions.

A similar concept occurring in real life is flirting. You’re at a party and you see a cute girl walk by. At first, you make a few playful comments, and observe her reaction and assess if she is interested in you. Flirting isn’t just an arbitrary social custom; it makes sense logically as a way of gathering information.

While information isn’t the final end-goal by itself, even a little information can greatly improve decision making, by eliminating vast swathes of possibilities that no longer need to be considered. Whether it be playing a giant, making a big purchase, or asking someone out on a date, gathering information is a useful subgoal.

Lesson 3: Focus on things you have control over, RNG evens out in the long run

Often in Hearthstone, luck is just not on your side. Have you ever seen your opponent topdeck the pyroblast and instantly win the game? Or that mad bomber that hits you three times in the face? How do you feel?

It’s natural to feel angry when this happens to you, especially if it ends up losing you the game. But eventually I realized how pointless it was to get upset at unlucky RNG. What’s the use of worrying about things you have no control over?

I see this all the time — people getting visibly upset when the bus is late, or when a teammate goes AFK in a game of League. I try to adopt the opposite mindset: worry about my own decision making, and simply accept random events beyond my control.

Let me give you an example. Last term, during an important phone interview, my phone stopped working during the middle of the interview. Calmly I got up and notified the CECA front desk, and waited as they spent the next 20 minutes troubleshooting the problem. Most people would be stressed out at this point, but I didn’t feel stressed at all. Rather than getting upset, my mind was relaxed, because I took comfort in knowing that I did everything that could be done; whatever happens next was out of my control.

The law of large numbers says that when you repeat a random event many times, the average outcome converges to the expected value. Hearthstone is so random that a legend player will beat a rank 5 player no more than 55% of the time. Any single game is close to a coin flip, just marginally in favor of the stronger player; but over the long run, it’s a mathematical guarantee that the better player will end up on top.
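A quick simulation shows how decisive a small edge becomes over many games (using the 55% figure above):

import random

# Win rate of a 55% favorite over sessions of increasing length:
# single games are near coin flips, long runs are not.
def wins(games, p=0.55):
    return sum(random.random() < p for _ in range(games))

for n in (1, 10, 100, 1000):
    print(n, wins(n) / n)

# With p = 0.55, the chance of ending 1000 games with a losing
# record is well under 1% (a binomial tail).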

Lesson 4: Separate the outcome of a decision from the decision itself

In real life and in Hearthstone, you can’t directly tell if a decision was good or not. You only know the outcome, and you can decide if the outcome is good or bad. But the outcome is a function of the decision and RNG, which adds noise to the process.

In other words, the correct decision does not always produce a good outcome, and sometimes a bad decision produces a good outcome. It would be a mistake to retroactively label a decision as “correct” simply because you got lucky.

Here’s a Hearthstone example:

Your opponent is a mage, and on turn 6 you flood the board with a lot of small minions. If he has flamestrike, playing it deals 4 damage to each of your minions, instantly killing your whole board.

Turn 7 comes and it turns out he doesn’t have flamestrike, so you win the game easily. You conclude that playing all your minions was a great idea because he didn’t have flamestrike.

This logic is fallacious: it fails to separate decision from outcome. A correct action is the one that maximizes the win probability, given the information available at the time. Therefore it makes no sense to look at the outcome and retroactively judge the correctness of the initial decision.

So in this example, playing all these minions was a mistake, because there’s a high chance the mage has flamestrike. It doesn’t matter whether he actually has flamestrike or not; the mistake is equally bad. (A better play would be to play fewer minions, thus mitigating the risk.)
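To make “maximizes the win probability” concrete, put some made-up numbers on the flamestrike decision (every probability below is my own illustration, not real game data):

# P(mage has flamestrike), and estimated win chances for each line of play.
p_flamestrike = 0.6

def expected_win(p_win_if_no_aoe, p_win_if_wiped):
    return (1 - p_flamestrike) * p_win_if_no_aoe + p_flamestrike * p_win_if_wiped

print(expected_win(0.9, 0.2))  # flood the board    -> 0.48
print(expected_win(0.7, 0.5))  # play fewer minions -> 0.58

The flood line only looks better in hindsight, when the flamestrike fails to appear, which is exactly the outcome/decision confusion described above.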

Now here’s a real life example. Last term, I had multiple job offers for software engineering internships and I had trouble deciding which one to accept. So I tried to negotiate: I picked one of the companies, told them about my other offers, and asked for a 20% raise in salary. My request was denied.

Does this mean that negotiating was a waste of time? Absolutely not. I know friends who successfully negotiated a higher salary by doing something similar. My particular outcome was not successful, but this doesn’t indicate my attempt was a mistake; if I found myself again with multiple offers, I would do the same thing.

Alfred Tennyson wrote the following about romance:

‘Tis better to have loved and lost

Than never to have loved at all

There are many ways to interpret this quote; here’s mine. Even if the outcome of a romantic encounter is unfavorable (to have loved and lost), it does not mean the decision to pursue the relationship was a mistake.

Why I still do stupid things

Alas, despite my best efforts, I still find myself doing stupid things — quite frequently even. Mistakes happen for a variety of reasons, but after analyzing some, I group them into three broad categories.

The first type of mistake happens when the situation is complicated, and the amount of data available exceeds my brain’s capacity to process it. In theory, I should never lose at chess — all the information is known. Of course, the number of positions explodes combinatorially and in reality I’m a mediocre chess player. Chess grandmasters group information in “chunks” and can reason about positions more efficiently — but this requires experience. In general, humans are prone to making mistakes in complicated situations.

The second category of mistake is having an incorrect model of the world. When we evaluate possible actions, we “simulate” the effects with a simplified version of the world. Problems arise when there is a discrepancy between the model in our heads and the real world.

This discrepancy can manifest in several related ways. We may incorrectly value subgoals: a newbie Hearthstone player, knowing the objective is to reduce the enemy’s health to zero, decides to deal maximum damage to the enemy hero every turn and ignores everything else. We may overlook important factors: you leave a Gadgetzan Auctioneer on the board, not realizing its potential, and are surprised next turn when your opponent draws 10 spells using its special ability. Or we may simply miss a possible play that never even occurred to us.

This type of mistake is the most common, but fortunately the most fixable of the three. As you gain more experience with the domain, your model of the world becomes a more accurate representation of the real thing. Then you learn to correctly assign values to things, and generate the full set of possibilities for a situation. For me, this gradual process of learning and self-improvement is one of the most satisfying things in life.

The third and final category of mistake is making decisions without thinking, thereby short-circuiting the entire decision-making process. This can happen when you’re stressed, emotional, or just tired. An example is casually trading minions in Hearthstone, then realizing you had lethal. If only you had thought more carefully, you would easily have found the correct play.

It’s not necessarily bad to do things without thinking too hard: it would be silly to invoke the full decision-making machinery to choose between a burrito and a sandwich for lunch. It’s important, however, to recognize when a decision is likely to have far-reaching consequences. In that case, it’s wise to defer the decision until you’ve had time to think things through.

There’s a lot more I could talk about, but this post is getting quite long so I’ll stop here. Whether you agree or disagree with my view of the world, please leave a comment!