Visualizing Quaternions with Unity

November 24, 2014

How do you model the position and orientation of an airplane?

Position is easy, just represent it with a point in 3D space. But how do you specify its orientation — which direction it’s pointing?

At first glance, it seems a vector will do. After all, a vector points in some direction, right? If the plane is pointing east, represent its orientation by a unit vector pointing east.

Unfortunately, we quickly run into trouble when we try to roll. If we’re facing east, and we roll 90 degrees, we’re still facing east. Clearly we’re missing something.

Euler Angles

When real pilots talk about their orientation, they talk about roll, yaw, and pitch. Pitch is pointing up or down, yaw is turning left or right, and roll is, well, roll.

Any change in orientation can be described by some combination of roll, yaw, pitch. This is the basis for Euler Angles. We use three angles to represent the airplane’s orientation.

This is all fine and dandy if we want to represent the orientation of a static object in space. But when we try to adjust our orientation, we start to run into problems.

You’re thinking, this should be simple! When we turn left or right, we just increment the yaw variable, right? Yes, it seems to work, at least initially. You can turn left and right, up and down, and roll around.

Implement it in Unity and play around a bit, however, and you begin to notice that things don’t quite behave the way you expect.

In this animation, I’m holding down the right button:

The plane does rotate to the right, but it’s not rotating relative to itself. Instead, it’s rotating around some invisible y-axis. If it were rotating relative to itself, the green arrow wouldn’t be moving.

The problem becomes more and more severe as the plane’s pitch gets higher. The worst case is when the airplane is pointing straight up: then roll and yaw become the same thing! This is called gimbal lock: we have lost a degree of freedom and can only rotate in 2 dimensions! Definitely not something desirable if we’re controlling a plane or spaceship.

It turns out that no matter what we do, we will suffer from some form of gimbal lock. As long as we use Euler Angles, there is one direction where if we turn too far, everything starts to screw up.

Practical Introduction to Quaternions

All is not lost, however. There is a way to represent orientation that represents all axes equally and does not suffer from gimbal lock. This mythical structure is called the quaternion. Unlike Euler Angles which describe your orientation relative to a fixed set of axes, quaternions do not rely on any fixed axis.

The drawback is that quaternions are unintuitive for humans to understand. There is no way to “look” at a quaternion and visualize what rotation it represents. Fortunately for us, it’s not that difficult to make use of quaternions, even if we can’t visualize them.

There is a lot of theory behind how quaternions work, but in this article, I will gloss over the theory and give a quick primer to quaternions, just the most common facts you need to use them. At the same time, I will implement the operations I describe in C#, so I can integrate them with Unity. If you don’t know C#, you can freely ignore the code.

Definition

A quaternion is an ordered tuple of 4 real numbers (w,x,y,z). We write this as

w+xi+yj+zk

The letters i,j,k are not variables. Rather, they are independent axes. If you like, you can think of the quaternions as a 4-dimensional vector space.

The defining property of quaternions is:

i^2 = j^2 = k^2 = ijk = -1

Play around with it a bit and you can derive 6 more identities:

ij = k

jk = i

ki = j

ji = -k

kj = -i

ik = -j

If you’ve worked with complex numbers, this should seem familiar. Instead of the 2 parts of a complex number (the real and imaginary parts), a quaternion has 4 parts.

The similarity doesn’t end here. Multiplying complex numbers represents a rotation in 2 dimensions. Similarly, multiplying by a quaternion represents a rotation in 3D.

One curious thing to note: we have ij=k and ji=-k. We switched around the terms and the product changed. This means that multiplying quaternions is kind of like multiplying matrices — the order matters. So multiplication is not commutative.

Here’s a framework for a quaternion in C#:

public class Quat{
	// Represents w + xi + yj + zk
	public float w, x, y, z;
	public Quat(float w, float x, float y, float z){
		this.w = w;
		this.x = x;
		this.y = y;
		this.z = z;
	}
}

Normalizing Quaternions

The norm of a quaternion is

N(\mathbf{q}) = \sqrt{w^2+x^2+y^2+z^2}

When we use quaternions to represent rotations, we typically want unit quaternions: quaternions with norm 1. This is straightforward: to normalize a quaternion, divide each component by the norm.

In C#:

public float Norm(){
  return Mathf.Sqrt (w * w + x * x + y * y + z * z);
}

public Quat Normalize(){
  float m = Norm ();
  return new Quat (w / m, x / m, y / m, z / m);
}

Multiplying Quaternions

Multiplying is simple, just a little tedious. If we have two quaternions:

(w_1 + x_1i + y_1j + z_1k) (w_2+x_2i+y_2j+z_2k)

Then their product is this ugly mess:

\begin{array}{l} w_1w_2-x_1x_2-y_1y_2-z_1z_2 \\ + (w_1x_2+x_1w_2+y_1z_2-z_1y_2)i \\ + (w_1y_2+y_1w_2-x_1z_2+z_1x_2) j \\ + (w_1z_2+z_1w_2+x_1y_2-y_1x_2) k \end{array}

In C#:

// Returns a*b
public static Quat Multiply(Quat a, Quat b){
  float w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
  float x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
  float y = a.w * b.y + a.y * b.w - a.x * b.z + a.z * b.x;
  float z = a.w * b.z + a.z * b.w + a.x * b.y - a.y * b.x;
  return new Quat (w,x,y,z).Normalize();
}

Since multiplication is not commutative, I made this function static to avoid confusing left and right multiplication. Also, I normalize the product so that floating point errors don’t accumulate.

Constructing Rotation Quaternions

Every rotation operation can be written as a rotation by some angle, \theta, around some unit vector (u_x, u_y, u_z):

The following formula gives a quaternion that represents this rotation:

\mathbf{q} = \cos \frac{\theta}{2} + (u_x i + u_y j + u_z k) \sin \frac{\theta}{2}

For our purposes, \theta is a very small number, say 0.01, and we use one of the three basis vectors to rotate around. For example, if we are rotating around (1,0,0) then our quaternion is

\cos \frac{0.01}{2} + \sin \frac{0.01}{2}i

That’s it: given any orientation quaternion, multiplying it on the left by our rotation quaternion rotates the orientation slightly around the x axis.

In C#, our code might look like this:

Quat qx = new Quat (Mathf.Cos (0.01f / 2), 0, 0, Mathf.Sin (0.01f / 2));
Quat qy = new Quat (Mathf.Cos (0.01f / 2), 0, Mathf.Sin (0.01f / 2), 0);
Quat qz = new Quat (Mathf.Cos (0.01f / 2), Mathf.Sin (0.01f / 2), 0, 0);

Putting it together

That’s all we need to do interesting things with quaternions. Let’s combine everything we have. Here’s our quaternion class thus far:

public class Quat{
	// Represents w + xi + yj + zk
	public float w, x, y, z;
	public Quat(float w, float x, float y, float z){
		this.w = w;
		this.x = x;
		this.y = y;
		this.z = z;
	}

	public float Norm(){
		return Mathf.Sqrt (w * w + x * x + y * y + z * z);
	}

	public Quat Normalize(){
		float m = Norm ();
		return new Quat (w / m, x / m, y / m, z / m);
	}

	// Returns a*b
	public static Quat Multiply(Quat a, Quat b){
		float w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
		float x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
		float y = a.w * b.y + a.y * b.w - a.x * b.z + a.z * b.x;
		float z = a.w * b.z + a.z * b.w + a.x * b.y - a.y * b.x;
		return new Quat (w,x,y,z).Normalize();
	}

	public Quaternion ToUnityQuaternion(){
		return new Quaternion (w, x, y, z);
	}
}

Now we just need to read the input, perform our calculations, and output the rotation quaternion to Unity:

public class Airplane : MonoBehaviour {
  public GameObject airplane;
  public Quat quat = new Quat (0, 0, 0, -1);
  public float speed = 0.01f;

  void FixedUpdate(){
    float inputX = Input.GetAxis("UpDown");
    float inputY = Input.GetAxis("LeftRight");
    float inputZ = Input.GetAxis("Roll");

    Quat qx = new Quat (Mathf.Cos (speed * inputX / 2), 0, 0, Mathf.Sin (speed * inputX / 2));
    Quat qy = new Quat (Mathf.Cos (speed * inputY / 2), 0, Mathf.Sin (speed * inputY / 2), 0);
    Quat qz = new Quat (Mathf.Cos (speed * inputZ / 2), Mathf.Sin (speed * inputZ / 2), 0, 0);

    quat = Quat.Multiply (qx, quat);
    quat = Quat.Multiply (qy, quat);
    quat = Quat.Multiply (qz, quat);

    airplane.transform.rotation = quat.ToUnityQuaternion ();
  }
}

In Unity, the input is not given to us as a single true/false value, but a float between -1 and 1. So holding right increases the LeftRight input gradually until it reaches 1, avoiding a sudden jump in movement.

What’s ToUnityQuaternion? Well, it turns out that Unity already has a Quaternion class that does everything here and much more, so all this could have literally been implemented in one line if we wanted.

Anyways, let’s see the result.

As you can see, holding right turns the plane relative to itself now, and the green arrow stays still. Hooray!


Beginner’s comparison of Computer Algebra Systems (Mathematica / Maxima / Maple)

August 11, 2014

I’ve never been very good at doing manual computations, and whenever I need to do a tedious computation for an assignment, I like to automate it by writing a computer program. Usually I implemented an ad-hoc solution in Haskell, either using a simple library or rolling my own implementation if the library didn’t have what I needed. But I found this approach unsatisfactory: my Haskell programs worked with integers and floating point numbers, and I couldn’t easily generalize them to work with symbolic expressions. So I decided to learn a CAS (computer algebra system), so that in the future I won’t have to hack together buggy code for common math operations.

I have no experience with symbolic computing, so it wasn’t clear to me where to begin. To start off, there are many different competing computer algebra systems, all incompatible with each other, and it’s far from clear which one is best for my needs. I began to experiment with several systems, but after a few days I still couldn’t decide which one was the winner.

I narrowed it down to 3 platforms. Here’s my setup (all running on Windows 7):

  • Mathematica 8.0
  • Maxima 5.32 with wxMaxima 13.04
  • Maple 18.00

So I came up with a trial — I had a short (but nontrivial) problem representative of the type of problem I’d be looking at, and I would try to solve it in all 3 languages, to determine which one was easiest to work with.

The Problem

This problem came up as a part of a recent linear algebra assignment.

Let the field be \mathbb{Z}_5 (so all operations are taken modulo 5). Find all 2×2 matrices P such that

P^T \left( \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array} \right) P = I

We can break this problem into several steps:

  • Enumerate all lists of length 4 with values from 0 to 4, that is, [[0,0,0,0],[0,0,0,1],…,[4,4,4,4]]. We will probably do this with a Cartesian product or a list comprehension.
  • Figure out how to convert a list into a 2×2 matrix form that the system can perform matrix operations on. For example, [1,2,3,4] might become matrix([1,2],[3,4])
  • Figure out how to do control flow, either by looping over a list (procedural) or with a map and filter (functional)
  • Finally, multiply the matrices modulo 5, check whether the product equals the identity matrix, and output the matrices that do.

This problem encompasses a lot of the challenges I have with CAS software: using built-in mathematical functions (in this case, only matrix multiplication and transpose) while at the same time expressing nontrivial control flow. There are 5^4=625 matrices to check, so performance is not a concern; I am focusing on ease of use.

For reference, here is the answer to this problem:

These are the 8 matrices that satisfy the desired property.

I have no prior experience programming in any of the 3 languages, and I will try to solve this problem in the most straightforward way possible in each of them. I realize that my solutions will probably be redundant and inefficient because of my inexperience, but it will balance out in the end because I’m equally inexperienced in all of the languages.

Mathematica

I started with Mathematica, a proprietary system by Wolfram Research and the engine behind Wolfram Alpha. Mathematica is probably the most powerful of the three, with capabilities for working with data well beyond what I’d expect from a CAS.

What I found most jarring about Mathematica is its syntax. I’ve worked with multiple procedural and functional languages before, and there are certain things that Mathematica simply does differently from everybody else. Here are a few I ran across:

  • To use a pure function (equivalent of a lambda expression), you refer to the argument as #, and the function must end with the & character
  • The preferred shorthand for Map is /@ (although you can write the longhand Map)
  • To create a cartesian product of a list with itself n times, the function is called Tuples, which I found pretty counterintuitive

Initially I wanted to convert my flat list into a nested list by pattern matching Haskell style, ie f [a,b,c,d] = [[a,b],[c,d]], but I wasn’t sure how to do that, or if the language supports pattern matching on lists. However I ran across Partition[xs,2] which does the job, so I went with that.

Despite the language oddities, the functions are very well documented, so I was able to complete the task fairly quickly. The UI is fairly streamlined and intuitive, so I’m happy with that. I still can’t wrap my head around the syntax — I would like it more if it behaved more like traditional languages — but I suppose I’ll get the hang of it after a while.

Here’s the program I came up with:

SearchSpaceLists := Tuples[Range[0, 4], 4]
SearchSpaceMatrices := 
 Map[Function[xs, Partition[xs, 2]], SearchSpaceLists]
Middle := {{2, 0}, {0, 3}}
FilteredMatrices := 
 Select[SearchSpaceMatrices, 
  Mod[Transpose[#].Middle.#, 5] == IdentityMatrix[2] &]
MatrixForm[#] & /@ FilteredMatrices

Maxima

Maxima is a lightweight, open source alternative to Mathematica; I’ve had friends recommend it as being small and easy to use.

The syntax for Maxima is more natural, with things like lists and loops and lambda functions working more or less the way I expect. However, whenever I tried to do something with a function that isn’t the most common use case, I found the documentation lacking and often ended up combing through old forum posts.

Initially I tried to generate the list with a Cartesian product like in my Mathematica version, but I couldn’t figure out how to do that; eventually I gave up and used 4 nested for loops, since those were better documented.

Another thing I had difficulty with was transforming a nested list into a matrix using the matrix command. Normally you would create a matrix with matrix([1,2],[3,4]), that is, by passing the rows in as separate arguments. The function doesn’t handle passing in matrix([[1,2],[3,4]]), so to get around that you need to invoke a macro: funmake('matrix,[[1,2],[3,4]]).

Overall I found that the lack of documentation made the system frustrating to work with. I would however use it for simpler computations that fall under the common use cases — these are usually intuitive in Maxima.

Here’s the program I came up with:

Middle:matrix([2,0],[0,3]);
Ident:identfor(Middle);
for a:0 thru 4 do
  for b:0 thru 4 do
    for c:0 thru 4 do
      for d:0 thru 4 do
        (P:funmake('matrix,[[a,b],[c,d]]),
         P2:transpose(P).Middle.P,
         if matrixmap(lambda([x],mod(x,5)),P2) = Ident then
             print(P));

Shortly after writing this I realized I didn’t actually need the funmake macro, since there’s no need to generate a nested list in the first place, I could simply do matrix([a,b],[c,d]). Oh well, the point still stands.

Maple

Maple is a proprietary system developed by Maplesoft, a company based in Waterloo. Being a Waterloo student, I’ve had some contact with Maple: professors used it for demonstrations, some classes used it for grading. Hence I felt compelled to give Maple a shot.

At first I was pleasantly surprised that matrix multiplication in a finite field was easy — the code to calculate A*B in \mathbb{Z}_5 is simply A.B mod 5. But everything went downhill after that.

The UI for Maple feels very clunky. Some problems I encountered:

  • It’s not clear how to halt a computation that’s in an infinite loop. It doesn’t seem to be possible within the UI, and the documentation suggests it’s not possible in all cases (it recommends manually terminating the process). Of course, this loses all unsaved work, so I quickly learned to save before every computation.
  • I can’t figure out how to delete a cell without googling it. It turns out you have to select your cell and a portion of the previous cell, then hit Del.
  • Copy and pasting doesn’t work as expected. When I tried to copy code written inside Maple to a text file, all the internal formatting and syntax highlighting information came with it.
  • Not a UI issue, but error reporting is poor. For example, the = operator works for integers, but when applied to matrices, it silently returns false. You have to use Equal(a,b) to compare matrices (this is kind of like Java).

In the end, I managed to complete the task but the poor UI made the whole process fairly unpleasant. I don’t really see myself using Maple in the future; if I had to, I would try the command line.

Here’s the program I came up with:

with(LinearAlgebra):
with(combinat, cartprod):
L := [seq(0..4)]:
T := cartprod([L, L, L, L]):
Middle := <2,0;0,3>:
while not T[finished] do
  pre_matrix := T[nextvalue]();
  matr := Matrix(2,2,pre_matrix);
  if Equal(Transpose(matr).Middle.matr mod 5, IdentityMatrix(2)) then
    print(matr);
  end if
end do:

Conclusion

After the brief trial, there is still no clear winner, but I have enough data to form some personal opinions:

  • Mathematica is powerful and complete, but has a quirky syntax. It has the most potential — definitely the one I would go with if I were to invest more time into learning a CAS.
  • Maxima is lightweight and fairly straightforward, but because of its lack of documentation, it might not be the best tool to do complicated things with. I would keep it for simpler calculations though.
  • Maple may or may not be powerful compared to the other two, I don’t know enough to compare it. But its UI is clearly worse and it would take a lot to compensate for that.

A retrospective on the BALL programming language

August 7, 2014

One of the courses I’m taking this term is CS241 (Foundations of Sequential Programs). This course begins with MIPS assembly, then moves on to lexing and parsing, and eventually culminates in writing a compiler for a subset of C down to MIPS assembly.

As I wrote my compiler, tediously coding one typechecking rule after another, my mind wandered. There used to be a time when things were simpler, the time when I tried to create my own programming language.

I was 14 back then, still in middle school, having just learned how to program in Java. Rather than going outside and kicking a ball like other kids my age, I, being a true nerd, stayed at home and tinkered with programming languages. The name of the language was BALL, short for “BaiSoft All-purpose List-oriented Language”. It was my first ever “major” programming project.

As you can imagine, my attempt was not quite the next GCC-killer. I knew nothing about compilers: none of the theory of using finite state automata to scan input into tokens, and so on. I used the little I did know, but in the end I was pleased with my efforts.

The BALL Language

One of the first oddities you notice is the GUI. Yes, a graphical user interface — I decided that running programs from the command line wasn’t very cool. To run a program, you would open ball.jar and paste your program into a textbox, then hit the Run button.

When you hit the Run button, your output would appear on a console window which conveniently pops up on the right:

The language itself was essentially a glorified form of assembly. A program consisted of a list of “instructions”, each of which was one line. My language supported two types of variables: string and integer. The only forms of control flow were an unconditional jump and a conditional jump.

You are allowed 200 string variables and 300 integer variables. Whenever you use a variable, you have to tell the interpreter what type it is: you write #x if x is a number and &x if x is a string.

String literals were not enclosed in double quotes; rather, they were placed directly into the code. If you wanted a space character, you wrote *s.

Some other oddities (questionable design decisions?):

  • A keyword to redefine other keywords. Done primarily to obfuscate code and confuse readers.
  • A keyword to delay the program by n milliseconds. I still remember debugging a bug where the whole UI became unresponsive when a delay was used (you aren’t allowed to sleep on the UI thread in Java). That was my first taste of multithreaded programming.
  • A keyword to emit a beep. I have no idea.

A typical program looks like this:

new number rep 0
write Input *s A *s Number. *n
input #rep
new number counter 0
 hereis repeat
 set #counter #counter + 1
 write #counter *n
 delay 30
if #counter < #rep repeat

This program asks the user for a number, then counts up to that number.

Examples of BALL

Here is the original manual for BALL, written in 2008. It contains a number of example programs, here are a few:

Prime number generator:

Double buffered animation:

Surprisingly the original website itself is still up. I wonder how long it will remain so.

Verdict

Just from running the executable, it seems that the program, although quirky, mostly works. Only when digging through the old source code do I realize what a mess the whole thing was.

Take the string syntax, for example. The first step in decoding an instruction was to tokenize it by the space character, so print "Hello World" would tokenize to [print,"Hello,World"]. Of course, this loses all the whitespace characters in the string literal. My solution? Use *s for space, so the tokenized list is [print,Hello,*s,World] and everything works out.

It’s often said that a programmer should always hate his old code, as that’s a sign that he’s improving. I still haven’t mastered programming, but I’ve definitely improved since I started back in eighth grade.


My trip into the world of Android Programming (with my first two apps)

July 19, 2013

Update: you can get these apps on Google Play now: Champions Quiz and Easy Metronome

I’m a newbie Android developer, having started about a month ago. I started by picking up a beginner’s Android book (for dummies!) and seeing how far I could get with it.

My device is a relatively crappy Samsung Galaxy Ace smartphone.

I got the SDK and environment set up and got a “Hello World” running without running into trouble. Then I worked through some example apps from the book, again without too much difficulty.

After that, with a very basic understanding of Activities, Intents, Views, and all that, I deviated from the beaten path, using Google when I needed help (which happened pretty often). I wanted to make something new (not copying someone else’s app idea) but still do things typical apps would do (better chance of Googling for help).

First App: Champions Quiz

This is an app for League of Legends players. In this game, there are more than 110 champions, each having 5 abilities — each with a unique name. This results in about 600 distinct abilities.

The goal of the quiz is to match the correct champion name, given the ability name.

Second App: Easy Metronome

This app is a fully functional, animated metronome. Drag the circle up and down to set the tempo like on a real metronome, and press a button and it goes.

The idea is, if you take a look at the metronome apps on the Google Play Store, they tend to have sliders, buttons, many needless customization options, and advertisements, making the interface feel extremely cluttered given the small screen of a phone.

Instead of dozens of options, the Easy Metronome app brings you a more friendly interface:

My feelings so far on Android Development

There’s the good and the bad.

The good — Android builds on Java, a language I’m highly familiar with, easing the learning curve for me. There are lots of tutorials for beginners on the web to get you started. At this stage, if you run into a problem, usually someone else has run into the same problem before; I didn’t have to ask any new Stack Overflow questions.

The bad — From the developer’s perspective, the Android tool chain feels buggy and unstable. Perhaps some of these resulted from me doing something stupid, some are annoyances, some are bugs that ideally the developer should never have to deal with. I’ll list a few of these problems, grouping them by where the problem manifests itself.

Problems showing up on the computer:

  • Eclipse can screw up, and when it does, it is not obvious how to fix it. One day, without me changing anything, it suddenly refused to build the critical R.java file. Fixing it took an hour of painful cleaning, rebuilding, importing, and re-importing.
  • Emulators are unusable. They take 15-20 minutes to boot up, and when they do their frame rate is 1-2 fps; they are unresponsive and frequently ignore keyboard input.
  • Ran into an Eclipse bug where Logcat sometimes shows a blank screen. Restarting Eclipse does not fix it. The solution appears to be to use the command-line tool "adb logcat" instead.

Problems showing up on the phone:

  • I could not get the Face Detection API to work, even when using identical images and code that works for other people. (although I understand face detection is hard, so I’m not too upset)
  • Ran into an Android bug where only the first line of text in an Alert Dialog is shown. The solution was confusing (involved switching to a different theme) with no explanation given.
  • Ran into an Android bug where the text color was ignored, but only on some devices and not others. I haven’t bothered to find the solution to this.

Overall, Android programming is a mildly frustrating experience, compared to what I normally work with. It would be much better without constant minor annoyances and crashes / bugs.

What next

I originally wanted to make a bunch of apps and release them for free, but I later realized that Google charges $25 per developer to be able to publish apps. Being very cheap, I didn’t release any of my apps because of this.

I could try charging a small price (like $0.99) for the metronome app — I can’t imagine anyone paying for my league quiz app. Or I might make more apps and at some point release them all for free.


How to Write your own Minesweeper AI

December 23, 2012

A while ago, I wrote a minesweeper AI. I intended to publish a writeup, but due to university and life and exams, I never got around to writing it. But having just finished my Fall term, I have some time to write a decent overview of what I did.

Short 30 second video of the AI in action here:

How to Play Minesweeper

If you’re an experienced minesweeper player, you can probably skip this section. Otherwise, I’ll just give a quick overview of some basic strategies that we can use to solve an easy minesweeper game.

We start with a 10×10 Beginner’s grid, and click on a square in the middle:

We can quickly identify some of the mines. When the number 1 has exactly one unopened square around it, we know there’s a mine there.

Let’s go ahead and mark the mines:

Now the next strategy: if a 1 has a mine around it, then we know that all the other squares around the 1 cannot be mines.

So let’s go ahead and click on the squares that we know are not mines:

Keep doing this. In this case, it turns out that these two simple strategies are enough to solve the Beginner’s grid:

Roadmap to an AI

All this seems easy enough. Here’s what we’ll need to do:

  1. Read the board. If we use a screenshot function, we can get a bitmap of all the pixels on the board. We just need to ‘read’ the numbers on the screen. Luckily for us, the numbers tend to have different colors: 1 is blue, 2 is green, 3 is red, and so on.
  2. Compute.  Run the calculations, figure out where the mines are. Enough said.
  3. Click the board. This step is easy. In Java, we can use the Robot class in the standard library to send mouse clicks to the screen (a quick sketch follows this list).
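
For step 3, a minimal sketch with the Robot class might look like this (the grid origin and square size are hypothetical values that would come from a calibration step, described below):

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;

public class Clicker {
    private final Robot robot;

    public Clicker() throws AWTException {
        robot = new Robot();
    }

    // Click the center of grid square (row, col). gridX and gridY are the
    // screen coordinates of the board's top-left corner, and squareSize is
    // the pixel width of one square; both come from calibration.
    public void clickSquare(int row, int col, int gridX, int gridY, int squareSize) {
        int x = gridX + col * squareSize + squareSize / 2;
        int y = gridY + row * squareSize + squareSize / 2;
        robot.mouseMove(x, y);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
    }
}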

Reading the Field

There’s not a whole lot to this step, so I’m going to skim over it quickly.

At the beginning of the run, while we have a completely empty grid, we invoke a calibration routine – which takes a screenshot and looks for something that looks like a Minesweeper grid. Using heuristics, it determines the location of the grid, the size of a grid square, the dimensions of the board, and things like that.

Now that we know where the squares are, if we want to read a square, we crop a small section of the screenshot and pass it to a detection routine, which looks at a few pixels and figures out what’s in the square.

A few complications came up in the detection routine:

  • The color for the number 1 is very close to the color of an unopened square: both are a dark-blue color. To tell them apart, I compared the ‘variance’ of the patch against its average color (a sketch follows after this list).
  • The color for 3 is identical to that for 7. Here, I used a simple edge-detection heuristic.
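
Something along these lines would compute the variance (a simplified sketch, not the exact heuristic in my detector; it assumes the square has already been cropped into a BufferedImage):

import java.awt.image.BufferedImage;

public class SquareReader {

    // How much the patch's pixels deviate from their average color. An unopened
    // square is nearly uniform (low variance); a square containing a "1" has
    // dark digit pixels on a lighter background (noticeably higher variance).
    public static double colorVariance(BufferedImage patch) {
        int n = patch.getWidth() * patch.getHeight();
        long sumR = 0, sumG = 0, sumB = 0;
        for (int x = 0; x < patch.getWidth(); x++)
            for (int y = 0; y < patch.getHeight(); y++) {
                int rgb = patch.getRGB(x, y);
                sumR += (rgb >> 16) & 0xff;
                sumG += (rgb >> 8) & 0xff;
                sumB += rgb & 0xff;
            }
        double avgR = (double) sumR / n, avgG = (double) sumG / n, avgB = (double) sumB / n;
        double variance = 0;
        for (int x = 0; x < patch.getWidth(); x++)
            for (int y = 0; y < patch.getHeight(); y++) {
                int rgb = patch.getRGB(x, y);
                double dr = ((rgb >> 16) & 0xff) - avgR;
                double dg = ((rgb >> 8) & 0xff) - avgG;
                double db = (rgb & 0xff) - avgB;
                variance += dr * dr + dg * dg + db * db;
            }
        return variance / n;
    }
}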

Straightforward Algorithm

The trivially straightforward algorithm is actually good enough to solve the beginner and intermediate versions of the game a good percent of the time. Occasionally, if we’re lucky, it even manages to solve an advanced grid!

When humans play minesweeper, we compete for the fastest possible time to solve a grid of minesweeper. So it doesn’t matter if we lose 20 games for every game we win: only the wins count.

This is clearly a silly metric when we’re a robot that can click as fast as we want to. Instead, we’ll challenge ourselves with a more interesting metric:

Win as many games as possible.

Consider the following scenario:

Using the straightforward method, we seem to be stuck.

Up until now, whenever we mark a square as having a mine or safe, we’ve only had to look at a single 3×3 chunk at a time. This strategy fails us here: the trick is to employ a multisquare algorithm – look at multiple different squares at once.

From the lower 2, we know that one of the two circled squares has a mine, while the other doesn’t. We just don’t know which one has the mine:

Although this doesn’t tell us anything right now, we can combine this information with the next 2: we can deduce that the two yellowed squares are empty:

Let’s click them to be sure.

And voilà. They’re empty. The rest of the puzzle can be solved easily, after we’ve made the deduction that those two squares were empty.

The Tank Solver Algorithm

It’s difficult to make the computer think deductively like we just did. But there is a way to achieve the same results, without deductive thinking.

The idea for the Tank algorithm is to enumerate all possible configurations of mines for a position, and see what’s in common between these configurations.

In the example, there are two possible configurations:

You can check for yourself that no other configuration could work here. We’ve deduced that the one square with a cross must contain a mine, and the three squares shaded white below must not contain a mine:

This works even better than human deduction!

We always try to apply the simple algorithm first, and only if that gets us stuck, then we bring in the Tank algorithm.

To implement the Tank algorithm, we first make a list of border tiles: all the tiles we aren’t sure about but have some partial information.

Now we have a list of T  border tiles. If we’re considering every possible configuration, there are 2^T of them. With backtracking, this number is cut down enough for this algorithm to be practical, but we can make one important optimization.
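
Here is a simplified sketch of that enumeration (my own illustration, not the actual solver code). It assumes board[r][c] is -1 for a covered square, and otherwise holds the number of mines still unaccounted for among its covered neighbors (already-known mines subtracted out); the names are hypothetical:

import java.util.ArrayList;
import java.util.List;

public class TankSketch {
    static int rows, cols;
    static int[][] board;                                // revealed number, or -1 if covered
    static List<int[]> border = new ArrayList<int[]>();  // coordinates of the border tiles
    static boolean[] isMine;                             // current trial assignment
    static int[] mineConfigs;                            // valid configs in which tile i is a mine
    static int totalConfigs = 0;

    static void solve(int[][] b) {
        board = b;
        rows = b.length;
        cols = b[0].length;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (board[r][c] == -1 && touchesNumber(r, c))
                    border.add(new int[]{r, c});
        isMine = new boolean[border.size()];
        mineConfigs = new int[border.size()];
        enumerate(0);
        for (int i = 0; i < border.size(); i++) {
            if (mineConfigs[i] == totalConfigs)
                System.out.println("mine at " + border.get(i)[0] + "," + border.get(i)[1]);
            else if (mineConfigs[i] == 0)
                System.out.println("safe at " + border.get(i)[0] + "," + border.get(i)[1]);
        }
    }

    // Try every mine/empty assignment of the border tiles. (The real solver
    // backtracks: it abandons a partial assignment as soon as some revealed
    // number is already violated, which is what makes 2^T manageable.)
    static void enumerate(int k) {
        if (k == border.size()) {
            if (consistent()) {
                totalConfigs++;
                for (int i = 0; i < border.size(); i++)
                    if (isMine[i]) mineConfigs[i]++;
            }
            return;
        }
        isMine[k] = true;
        enumerate(k + 1);
        isMine[k] = false;
        enumerate(k + 1);
    }

    // A full assignment is valid when every revealed number is matched exactly.
    static boolean consistent() {
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (board[r][c] >= 0 && assignedMinesAround(r, c) != board[r][c])
                    return false;
        return true;
    }

    static int assignedMinesAround(int r, int c) {
        int count = 0;
        for (int i = 0; i < border.size(); i++)
            if (isMine[i]
                    && Math.abs(border.get(i)[0] - r) <= 1
                    && Math.abs(border.get(i)[1] - c) <= 1)
                count++;
        return count;
    }

    static boolean touchesNumber(int r, int c) {
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++) {
                int nr = r + dr, nc = c + dc;
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && board[nr][nc] >= 0)
                    return true;
            }
        return false;
    }
}

A tile that holds a mine in every valid configuration can be flagged; a tile that holds a mine in none of them is safe to open.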

The optimization is segregating the border tiles into several disjoint regions:

If you look carefully, whatever happens in the green area has no effect on what happens in the pink area – we can effectively consider them separately.

How much of a speedup do we get? In this case, the green region has 10 tiles, the pink has 7. Taken together, we need to search through 2^{17} combinations. With segregation, we only have 2^{10} + 2^7: about a 100x speedup.

Practically, the optimization brought the algorithm from stopping for several seconds (sometimes minutes) to think, to giving the solution instantly.

Probability: Making the Best Guess

Are we done now? Can our AI dutifully solve any minesweeper grid we throw at it, with 100% accuracy?

Unsurprisingly, no:

One of the two squares has a mine. It could be in either, with equal probability. No matter how cleverly we program our AI, we can’t do better than a 50-50 guess. Sorry.

The Tank solver fails here, no surprise. Under exactly what circumstances does the Tank algorithm fail?

If it failed, it means that for every border tile, there exists some configuration in which that tile has a mine, and some configuration in which it is empty. Otherwise the Tank solver would have ‘solved’ this particular tile.

In other words, if it failed, we are forced to guess. But before we put in a random guess, we can do some more analysis, just to make sure that we’re making the best guess we could make.

Try this. What do we do here:

From the 3 in the middle, we know that three of them are mines, as marked. But marking mines doesn’t give us any new information about the grid: in order to gain information, we have to uncover some square. Out of the 13 possible squares to uncover, it’s not at all clear which one is the best.

The Tank solver finds 11 possible configurations. Here they are:

Each of these 11 configurations should be equally likely to be the actual position – so we can assign each square a probability that it contains a mine, by counting in how many of the 11 configurations it contains a mine:

Our best guess would be to click on any of the squares marked ‘2’: in all these cases, we stand an 82% chance of being correct!
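
With the tallies from the enumeration sketch above, turning this into code is a one-liner (again, hypothetical names that match that sketch):

// Could be added to the TankSketch class above: the fraction of valid
// configurations in which border tile i holds a mine.
static double mineProbability(int i) {
    return (double) mineConfigs[i] / totalConfigs;
}

The best guess is then simply the border tile with the smallest mineProbability.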

Two Endgame Tactics

Up until now, we haven’t utilized this guy:

The mine counter. Normally, this information isn’t of too much use for us, but in many endgame cases it saves us from guessing.

For example:

Here, we would have a 50-50 guess, where two possibilities are equally likely.

But what if the mine counter reads 1? The 2-mine configuration is eliminated, leaving just one possibility left. We can safely open the three tiles on the perimeter.

Now on to our final tactic.

So far we have assumed that we only have information on a tile if there’s a number next to it. For the most part, that’s true. If you pick a tile in some distant unexplored corner, who knows if there’s a mine there?

Exceptions can arise in the endgame:

The mine counter reads 2. Each of the two circled regions gives us a 50-50 chance – and the Tank algorithm stops here.

Of course, the middle square is safe!

To modify the algorithm to solve these cases, when there aren’t that many tiles left, do the recursion on all the remaining tiles, not just the border tiles.

The two tricks here have the shared property that they rely on the mine counter. Reading the mine counter, however, is a non-trivial task that I won’t attempt; instead, the program is coded in with the total number of mines in the grid, and keeps track of the mines left internally.

Conclusion, Results, and Source Code

At this point, I’m convinced that there isn’t much more we could do to improve the win rate. The algorithm uses every last piece of information available, and only fails when it’s provably certain that guessing is needed.

How well does it work? We’ll use the success rate for the advanced grid as a benchmark.

  • The naïve algorithm could not solve it, unless we get very lucky.
  • Tank Solver with probabilistic guessing solves it about 20% of the time.
  • Adding the two endgame tricks bumps it up to a 50% success rate.

Here’s proof:

I’m done for now; the source code for the project is available on Github if anyone is inclined to look at it / tinker with it:

https://github.com/luckytoilet/MSolver


A CMOQR Problem and why not to Trust Brute Force

March 6, 2012

Recently I was invited to compete in the CMOQR – a qualifier contest for the Canadian Math Olympiad. The contest consisted of eight problems, and contestants were allowed about a week’s time to submit written solutions via email.

After a few days, I was able to solve all of the problems except one — the second part of the seventh problem:

Seven people participate in a tournament, in which each pair of players plays one game, and one player is declared the winner and the other the loser. A triplet ABC is considered cyclic if A beats B, B beats C, and C beats A.

Can you always separate the seven players into two rooms, so that neither room contains a cyclic triplet?

(Note: the first half of the problem asked the same question for six people — and it’s not too difficult to prove that no matter what, we can put them into two rooms so that neither the first nor the second room contains a cyclic triplet.)

But what happens when we add another person? Can we still put four people in one room, and three people in the other, so that neither rooms contain a cyclic triplet?

There are two possibilities here:

  • One, it’s always possible. No matter what combinations of wins and losses have occurred, we can always separate them into two rooms in such a way. To prove this, we would need to systematically consider all possible combinations and, one by one, verify that the statement holds in each case.
  • Two, it’s not always possible. Then there is some counterexample — some combination of wins and losses so that no matter how we separate them, one of the rooms has a cyclic triplet. This is easier to prove: provided that we have the counterexample, we just have to verify that indeed, this case is a counterexample to the statement.

But there’s a problem. Which of the cases does the solution fall into? That is, should we look for a quick solution by counterexample, or look for some mathematical invariant showing that no counterexample can exist?

Brute Force?

It would be really helpful if we knew the counterexample, or knew for sure that no counterexample existed. What if we wrote a computer program to check all the cases? After all, there are only 7 people in the problem, and 7 choose 2 or 21 games played. Then since each game is either won by one player or the other, there are only 2^21 combinations overall (although that does count some duplicates). And 2^21 is slightly over two million cases to check — completely within the bounds of brute force.

So I coded up a possibility-checker. Generate all 2^21 possible arrangements, then for each one, check all possible ways to separate them into two rooms. If it turns out that no matter how we arrange them, a cyclic triplet persists, then display the counterexample. Simple.
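
A reconstruction of the idea might look something like this (a sketch for illustration, not my original program): encode each of the 2^21 tournaments as a 21-bit mask, decode it into a beats table, and try every 4/3 split, as the problem asks.

public class CmoqrSearch {
    static final int N = 7;

    public static void main(String[] args) {
        int pairs = N * (N - 1) / 2;                 // 21 games
        for (int mask = 0; mask < (1 << pairs); mask++) {
            if (!separable(decode(mask))) {
                System.out.println("counterexample: " + Integer.toBinaryString(mask));
                return;
            }
        }
        System.out.println("no counterexample found");
    }

    // Bit b of the mask decides who wins the b-th game (i, j) with i < j.
    static boolean[][] decode(int mask) {
        boolean[][] beats = new boolean[N][N];
        int b = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++, b++)
                if ((mask & (1 << b)) != 0) beats[i][j] = true;
                else beats[j][i] = true;
        return beats;
    }

    // Is there a way to put 4 players in one room and 3 in the other so that
    // neither room contains a cyclic triplet?
    static boolean separable(boolean[][] beats) {
        for (int room = 0; room < (1 << N); room++) {
            if (Integer.bitCount(room) != 4) continue;
            int other = ~room & ((1 << N) - 1);
            if (!hasCyclicTriplet(beats, room) && !hasCyclicTriplet(beats, other))
                return true;
        }
        return false;
    }

    static boolean hasCyclicTriplet(boolean[][] beats, int room) {
        int[] p = new int[Integer.bitCount(room)];
        int m = 0;
        for (int i = 0; i < N; i++)
            if (((room >> i) & 1) != 0) p[m++] = i;
        for (int a = 0; a < m; a++)
            for (int b = a + 1; b < m; b++)
                for (int c = b + 1; c < m; c++)
                    if ((beats[p[a]][p[b]] && beats[p[b]][p[c]] && beats[p[c]][p[a]])
                            || (beats[p[a]][p[c]] && beats[p[c]][p[b]] && beats[p[b]][p[a]]))
                        return true;
        return false;
    }
}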

I ran the program. It quickly cycled through every possible arrangement, three seconds later exiting without producing a counterexample.

Alright. So there’s no counterexample. I would have to find some nice mathematical invariant, showing that no matter what, there is always some way to group the players so that neither room has a cyclic triplet.

But no such invariant came. I tried several things, but in each attempt couldn’t quite show that the statement held for every case. I knew that there was no counterexample, but I couldn’t prove it. But why? There must be some tricky way to show that no counterexample existed; whatever it was, I couldn’t find it.

Brute Force poorly implemented

Reluctantly, as the deadline came and passed, I submitted my set of solutions without solving the problem. When the solutions came out a week later, the solution to this problem did not contain any tricky way to disprove the counterexample. Instead, what I found was this:

Let A_0 \ldots A_6 be seven players. Let A_i beat A_j when the difference i-j \equiv 1,2,4 \mod 7.

Huh? A counterexample, really? Let’s look at it.

Everything is symmetric — we can ‘cycle’ the players around without changing anything. Also, if we take four players, two of them are consecutive. Let them be A_0 and A_1.

At this point everything falls into place: in any subset of four players, some three of them form a cyclic triplet.
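
With the separable helper from the sketch above, verifying the published counterexample is a few lines:

public class VerifyCounterexample {
    public static void main(String[] args) {
        // A_i beats A_j exactly when (i - j) mod 7 is 1, 2 or 4.
        boolean[][] beats = new boolean[7][7];
        for (int i = 0; i < 7; i++)
            for (int j = 0; j < 7; j++) {
                int d = ((i - j) % 7 + 7) % 7;
                beats[i][j] = (d == 1 || d == 2 || d == 4);
            }
        // Should print false: no 4/3 split leaves both rooms free of cyclic triplets.
        System.out.println(CmoqrSearch.separable(beats));
    }
}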

But wait … my program had not found any counterexamples! And right here is a counterexample! The culprit was obvious (the reader may have foreseen this by now) — of course, there had to be a problem with my program.

Running my code through a debugger, I found a logic error in the routine converting binary numbers to array configurations, meaning that not all possible configurations were tried. As a result, the counterexample slipped through the hole.

After fixing the code, the program found not one, but a total of 7520 (although not necessarily distinct) counterexamples. Most of them had no elegant structure, but the solution’s configuration was among them.

For the interested, here is the fixed code.

When to Start Over?

It is true that the program could have been better written and better debugged. But how can you know whether a counterexample exists and your program simply didn’t find it, or whether no counterexample exists at all?

In hindsight, it seems that writing the brute force program made me worse off than if I hadn’t written it at all. After the program ran without finding a single counterexample, I was confident that no counterexample existed, and set out about proving that, instead of looking for counterexamples or symmetry.

When you are stuck on such a math problem — that is, after making a bit of progress you get stuck — it might be profitable to start over. More often than I would like, I prove a series of neat things, without being able to prove the desired result. Then a look at the solutions manual reveals that a very short solution — one or two steps — lay in the opposite direction.

I’ll put an end to my philosophical musings of the day. Fortunately, the cutoff for the CMOQR was low enough that even without solving every single problem, I was still able to meet the cutoff.


My attempt at a Conquest AI

December 9, 2011

Before I move on, let me introduce Conquest – a chesslike minigame in Runescape. This isn’t a detailed description of the rules, just an overview to kind of get to know how the game works, if you haven’t seen it before.

You have a 20×20 board and some pieces (which you choose yourself and get to set up wherever you want, with some restrictions). You win by killing all of your opponent’s pieces.

Each piece has a certain move radius — so it can move to any square within that radius (diagonals only counting as a distance of one). Then, once it has moved, it is allowed to attack an enemy within its range — again varying from piece to piece.

That’s the gist of the game. Oh yes, at certain times you can also use some silly special attacks in addition to moving and attacking — for instance, freeze an enemy piece for one move or temporarily boost the attack of one of your own pieces. And these special moves can be used on any pieces during your move, and in any combination.

A Conquest AI… or not

So some months ago, I was playing Conquest. I was a decent player, but after a string of losses, I thought: maybe I can write an AI to play this game for me — perhaps better than a human can? Perhaps the same way you would write a chess AI?

Well, I scribbled some back of the envelope calculations, but I quickly realized that the number of possible moves at each position was far too large.

How large? A player has maybe 5 pieces, and if a piece’s average move radius is 4, then we already have 5*9*9 possibilities (remember 4 is the radius, not the diameter), which is over 400. And that’s if we never attack or use special moves. If we take special moves into account (they can be stacked on top of each other), then assuming 5 valid targets per special move and 3 special moves, we would multiply the 400 again by 5*5*5. For one move, we would need to consider about 50,000 possibilities.

In comparison, from any position in chess there are typically 30 or 40 legal moves. So the standard minimax algorithm would probably be no good, at least not without an incredible amount of heuristics. I had to try something different for this to work.

Random has a chance

It turns out that there exists a very different strategy that worked quite well for Go-playing AIs. Instead of constructing and evaluating a huge search tree to find the best move, we simulate thousands of random games, with random moves from each side. The move that gives us the best win percentage is played. All else being equal, a position that gives you a 90% chance of winning given random play from that point on is usually better than one with only a 30% chance of winning.
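
The core of the idea is small. Here’s a sketch against a hypothetical Game interface (Conquest’s actual move generation, which is the hard part, is left abstract):

import java.util.List;
import java.util.Random;

// Hypothetical game abstraction; Conquest's real rules would live behind it.
interface Move {}

interface Game {
    List<Move> legalMoves();
    Game play(Move m);        // the position after making the move
    boolean isOver();
    int winner();             // 0 or 1, valid once isOver() is true
    int currentPlayer();      // whose turn it is, 0 or 1
}

class MonteCarloPlayer {
    private final Random rng = new Random();

    // For each legal move, finish the game many times with purely random play
    // from both sides, and keep the move with the best win percentage.
    Move chooseMove(Game position, int playoutsPerMove) {
        int me = position.currentPlayer();
        Move best = null;
        double bestRate = -1;
        for (Move m : position.legalMoves()) {
            int wins = 0;
            for (int i = 0; i < playoutsPerMove; i++)
                if (randomPlayout(position.play(m)) == me)
                    wins++;
            double rate = (double) wins / playoutsPerMove;
            if (rate > bestRate) {
                bestRate = rate;
                best = m;
            }
        }
        return best;
    }

    // Play random moves until the game ends; return the winner.
    private int randomPlayout(Game g) {
        while (!g.isOver()) {
            List<Move> moves = g.legalMoves();
            g = g.play(moves.get(rng.nextInt(moves.size())));
        }
        return g.winner();
    }
}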

Here’s a screenshot of a connect-4 AI using this technique:

The Monte Carlo approach, as that’s what it’s called, seems absurd — there’s no way playing random moves can beat a good minimax! Indeed, Monte Carlo is never used to play chess, and I could sometimes beat the above AI. But where Monte Carlo really shines is when standard minimax is impractical — my scenario.

After finding and reading a few papers describing the method in detail, I was ready to start coding. However, there is still one more catch: to make it practical to set up the AI against other players over Runescape (and where the time limit for each move is usually 30 seconds), I would have to find a way to integrate the back end — the part that does the computations — with a front end that relayed the moves back and forth to the Runescape game client.

All was not lost, however: there were several freely available hacked clients for botting purposes. Powerbot allowed users to write Java scripts that automated tedious in-game actions.

That was about as far as I got though, unfortunately. Coding the game logic proved trickier than expected. A month or so after I started, my account was banned for botting; later, Jagex (the company behind the game) made an update that rendered Powerbot (as well as every other similar hacked client) unusable. Without a game account or a practical front-end, I called it quits.

