How to Write your own Minesweeper AI

December 23, 2012

A while ago, I wrote a minesweeper AI. I intended to publish a writeup, but due to university and life and exams, I never got around to writing it. But having just finished my Fall term, I have some time to write a decent overview of what I did.

Short 30 second video of the AI in action here:

How to Play Minesweeper

If you’re an experienced minesweeper player, you can probably skip this section. Otherwise, I’ll just give a quick overview of some basic strategies that we can use to solve an easy minesweeper game.

We start with a 10×10 Beginner’s grid, and click on a square in the middle:

We can quickly identify some of the mines. When the number 1 has exactly one empty square around it, then we know there’s a mine there.

Let’s go ahead and mark the mines:

Now the next strategy: if a 1 has a mine around it, then we know that all the other squares around the 1 cannot be mines.

So let’s go ahead and click on the squares that we know are not mines:

Keep doing this. In this case, it turns out that these two simple strategies are enough to solve the Beginner’s grid:

Roadmap to an AI

All this seems easy enough. Here’s what we’ll need to do:

1. Read the board. If we use a screenshot function, we can get a bitmap of all the pixels on the board. We just need to ‘read’ the numbers on the screen. Luckily for us, the numbers tend to have different colors: 1 is blue, 2 is green, 3 is red, and so on.
2. Compute.  Run the calculations, figure out where the mines are. Enough said.
3. Click the board. This step is easy. In Java, we can use the Robot class in the standard library to send mouse clicks to the screen.
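The screen I/O in steps 1 and 3 can be sketched with the standard library alone. This is a minimal sketch; the class and method names here are my own, not the actual project's:

```java
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;

// Minimal sketch of the screen I/O layer: grab a screenshot and send a
// left click. The calibration and detection logic lives elsewhere.
public class ScreenIO {
    private final Robot robot;

    public ScreenIO() throws AWTException {
        robot = new Robot();
    }

    // Step 1: capture the whole screen as a bitmap we can read pixels from.
    public BufferedImage capture() {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        return robot.createScreenCapture(screen);
    }

    // Step 3: click at absolute screen coordinates (x, y).
    public void click(int x, int y) {
        robot.mouseMove(x, y);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
    }
}
```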

There’s not a whole lot to the reading step, so I’m going to skim over it quickly.

At the beginning of the run, while we have a completely empty grid, we invoke a calibration routine – which takes a screenshot and looks for something that looks like a Minesweeper grid. Using heuristics, it determines the location of the grid, the size of a grid square, the dimensions of the board, and things like that.

Now that we know where the squares are, if we want to read a square, we crop a small section of the screenshot and pass it to a detection routine, which looks at a few pixels and figures out what’s in the square.

A few complications came up in the detection routine:

• The color for the number 1 is very close to the color of an unopened square: both are a dark-blue color. To tell them apart, I compared the ‘variance’ of the patch against the patch’s average color.
• The color for 3 is identical to that for 7. Here, I used a simple edge-detection heuristic.
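The variance heuristic in the first case can be computed like this. This is a minimal sketch under my own representation (an array of packed RGB pixels), not the project's actual detection code:

```java
// A flat unopened square has nearly uniform color, so its variance is
// tiny; a square containing a digit varies noticeably around its mean.
public class PatchStats {
    // Mean squared distance of each pixel from the patch's average color.
    public static double colorVariance(int[] rgbPixels) {
        double mr = 0, mg = 0, mb = 0;
        for (int p : rgbPixels) {
            mr += (p >> 16) & 0xFF;
            mg += (p >> 8) & 0xFF;
            mb += p & 0xFF;
        }
        int n = rgbPixels.length;
        mr /= n; mg /= n; mb /= n;

        double var = 0;
        for (int p : rgbPixels) {
            double dr = ((p >> 16) & 0xFF) - mr;
            double dg = ((p >> 8) & 0xFF) - mg;
            double db = (p & 0xFF) - mb;
            var += dr * dr + dg * dg + db * db;
        }
        return var / n;
    }
}
```

A perfectly uniform patch gives a variance of exactly zero; any threshold separating "flat" from "has a digit" would have to be tuned by hand.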

Straightforward Algorithm

The trivially straightforward algorithm is actually good enough to solve the beginner and intermediate versions of the game a good percentage of the time. Occasionally, if we’re lucky, it even manages to solve an advanced grid!

When humans play minesweeper, we compete for the fastest possible time to solve a grid of minesweeper. So it doesn’t matter if we lose 20 games for every game we win: only the wins count.

This is clearly a silly metric when we’re a robot that can click as fast as we want to. Instead, we’ll challenge ourselves with a more interesting metric:

Win as many games as possible.

Consider the following scenario:

Using the straightforward method, we seem to be stuck.

Up until now, whenever we mark a square as having a mine or safe, we’ve only had to look at a single 3×3 chunk at a time. This strategy fails us here: the trick is to employ a multisquare algorithm – look at multiple different squares at once.

From the lower 2, we know that one of the two circled squares has a mine, while the other doesn’t. We just don’t know which one has the mine:

Although this doesn’t tell us anything right now, we can combine this information with the next 2: we can deduce that the two yellowed squares are empty:

Let’s click them to be sure.

And voilà. They’re empty. The rest of the puzzle can be solved easily, after we’ve made the deduction that those two squares were empty.

The Tank Solver Algorithm

It’s difficult to make the computer think deductively like we just did. But there is a way to achieve the same results, without deductive thinking.

The idea for the Tank algorithm is to enumerate all possible configurations of mines for a position, and see what’s in common between these configurations.

In the example, there are two possible configurations:

You can check for yourself that no other configuration could work here. We’ve deduced that the one square with a cross must contain a mine, and the three squares shaded white below must not contain a mine:

This works even better than human deduction!

We always try to apply the simple algorithm first, and only if that gets us stuck, then we bring in the Tank algorithm.

To implement the Tank algorithm, we first make a list of border tiles: all the tiles we aren’t sure about but for which we have some partial information.

Now we have a list of $T$  border tiles. If we’re considering every possible configuration, there are $2^T$ of them. With backtracking, this number is cut down enough for this algorithm to be practical, but we can make one important optimization.
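The enumeration with backtracking might be sketched as follows. The encoding is my own assumption: each number on the board becomes a constraint demanding an exact mine count among the border tiles it touches; the real solver derives these from the board itself:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Tank enumeration: constraint c requires exactly need[c]
// mines among the border tiles listed in members[c]. Tiles are assigned
// in order, and we backtrack as soon as a constraint becomes impossible.
public class TankSketch {
    static List<boolean[]> solutions = new ArrayList<>();

    static void enumerate(boolean[] mine, int tile, int[][] members, int[] need) {
        if (tile == mine.length) {
            // Full assignment: keep it only if every constraint is met exactly.
            for (int c = 0; c < need.length; c++) {
                int count = 0;
                for (int t : members[c]) if (mine[t]) count++;
                if (count != need[c]) return;
            }
            solutions.add(mine.clone());
            return;
        }
        for (boolean b : new boolean[]{false, true}) {
            mine[tile] = b;
            // Prune: a constraint fails if it already has too many mines,
            // or too few even if every undecided tile were a mine.
            boolean feasible = true;
            for (int c = 0; c < need.length && feasible; c++) {
                int placed = 0, undecided = 0;
                for (int t : members[c]) {
                    if (t <= tile) { if (mine[t]) placed++; }
                    else undecided++;
                }
                if (placed > need[c] || placed + undecided < need[c]) feasible = false;
            }
            if (feasible) enumerate(mine, tile + 1, members, need);
        }
    }
}
```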

The optimization is segregating the border tiles into several disjoint regions:

If you look carefully, whatever happens in the green area has no effect on what happens in the pink area – we can effectively consider them separately.

How much of a speedup do we get? In this case, the green region has 10 tiles, the pink has 7. Taken together, we need to search through $2^{17}$ combinations. With segregation, we only have $2^{10} + 2^7$: about a 100x speedup.

Practically, the optimization brought the algorithm from stopping for several seconds (sometimes minutes) to think, to giving the solution instantly.

Probability: Making the Best Guess

Are we done now? Can our AI dutifully solve any minesweeper grid we throw at it, with 100% accuracy?

Unsurprisingly, no:

One of the two squares has a mine. It could be in either, with equal probability. No matter how cleverly we program our AI, we can’t do better than a 50-50 guess. Sorry.

The Tank solver fails here, no surprise. Under exactly what circumstances does the Tank algorithm fail?

If it failed, it means that for every border tile, there exists some configuration in which that tile has a mine, and some configuration in which it is empty. Otherwise the Tank solver would have ‘solved’ that particular tile.

In other words, if it failed, we are forced to guess. But before we put in a random guess, we can do some more analysis, just to make sure that we’re making the best guess we could make.

Try this. What do we do here:

From the 3 in the middle, we know that three of its surrounding squares are mines, as marked. But marking mines doesn’t give us any new information about the grid: in order to gain information, we have to uncover some square. Out of the 13 possible squares to uncover, it’s not at all clear which one is the best.

The Tank solver finds 11 possible configurations. Here they are:

Each of these 11 configurations should be equally likely to be the actual position — so we can assign each square a probability that it contains a mine, by counting in how many of the 11 configurations it contains a mine:

Our best guess would be to click on any of the squares marked ‘2’: in all these cases, we stand an 82% chance of being correct!
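The counting step is straightforward once the configurations are enumerated. A minimal sketch (the names are mine):

```java
import java.util.List;

// Given the enumerated configurations (one boolean[] per configuration,
// true = mine), estimate each border tile's mine probability by counting
// how often it holds a mine.
public class MineOdds {
    public static double[] mineProbability(List<boolean[]> configs) {
        int n = configs.get(0).length;
        double[] p = new double[n];
        for (boolean[] cfg : configs)
            for (int i = 0; i < n; i++)
                if (cfg[i]) p[i] += 1.0;
        for (int i = 0; i < n; i++)
            p[i] /= configs.size();
        return p;
    }
}
```

In the position above, a tile appearing as a mine in 2 of the 11 configurations gets probability 2/11 ≈ 0.18, i.e. an 82% chance of being safe to click.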

Two Endgame Tactics

Up until now, we haven’t utilized this guy:

The mine counter. Normally, this information isn’t of too much use for us, but in many endgame cases it saves us from guessing.

For example:

Here, we would have a 50-50 guess, where two possibilities are equally likely.

But what if the mine counter reads 1? The 2-mine configuration is eliminated, leaving just one possibility left. We can safely open the three tiles on the perimeter.
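This check is easy to bolt onto the enumeration. A sketch with my own helper, not the actual solver's code:

```java
import java.util.ArrayList;
import java.util.List;

// With M mines left (tracked internally), discard any configuration
// that would require more mines than actually remain.
public class CounterFilter {
    public static List<boolean[]> filterByMinesLeft(
            List<boolean[]> configs, int minesLeft) {
        List<boolean[]> kept = new ArrayList<>();
        for (boolean[] cfg : configs) {
            int used = 0;
            for (boolean b : cfg) if (b) used++;
            if (used <= minesLeft) kept.add(cfg);
        }
        return kept;
    }
}
```

When only one configuration survives the filter, every tile it marks empty is safe to open.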

Now on to our final tactic.

So far we have assumed that we only have information on a tile if there’s a number next to it. For the most part, that’s true. If you pick a tile in some distant unexplored corner, who knows if there’s a mine there?

Exceptions can arise in the endgame:

The mine counter reads 2. Each of the two circled regions gives us a 50-50 chance – and the Tank algorithm stops here.

Of course, the middle square is safe!

To handle these cases, we modify the algorithm: when there aren’t many tiles left, do the recursion on all the remaining tiles, not just the border tiles.

The two tricks here have the shared property that they rely on the mine counter. Reading the mine counter, however, is a non-trivial task that I won’t attempt; instead, the total number of mines in the grid is hard-coded into the program, which keeps track of the mines left internally.

Conclusion, Results, and Source Code

At this point, I’m convinced that there isn’t much more we could do to improve the win rate. The algorithm uses every last piece of information available, and only fails when it’s provably certain that guessing is needed.

How well does it work? We’ll use the success rate for the advanced grid as a benchmark.

• The naïve algorithm could not solve it, unless we get very lucky.
• Tank Solver with probabilistic guessing solves it about 20% of the time.
• Adding the two endgame tricks bumps it up to a 50% success rate.

Here’s proof:

I’m done for now; the source code for the project is available on Github if anyone is inclined to look at it / tinker with it:

https://github.com/luckytoilet/MSolver

Solving the AB Game by Brute Force

November 7, 2011

The AB game, a variant of Mastermind, is played with pencil and paper and with two people. One person chooses a four digit number in which no digit is repeated, say X, and the other person tries to guess it in as few moves as possible. After the guesser guesses some number, say Y, the other person gives information on how closely Y matched with X:

• If some digit in Y coincides with a digit in X (in the same position), then the guesser scores an A.
• If some digit in Y exists in X but is in the wrong place, then the guesser scores a B.

For instance, if X is 1234 and we guess 2674, we get an A and a B, because the 4 is in the right place, and the 2 is one of the right numbers but isn’t in the right place.

This proceeds until the guesser gets the exact number.
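The feedback rule can be written as a small scoring function. Here is a sketch for the no-repeats variant (this helper is my own, separate from the solver code below):

```java
// Compute {A, B} feedback for guess y against secret x, assuming all
// digits of the secret x are distinct (the basic variant of the game).
public class ABScore {
    public static int[] score(String x, String y) {
        int a = 0, b = 0;
        for (int i = 0; i < 4; i++) {
            if (y.charAt(i) == x.charAt(i)) a++;            // right digit, right place
            else if (x.indexOf(y.charAt(i)) >= 0) b++;      // right digit, wrong place
        }
        return new int[]{a, b};
    }
}
```

On the article's example, `score("1234", "2674")` gives one A (the 4) and one B (the 2).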

When humans (or at least beginners) play the AB game, they usually do some form of uncoordinated trial and error, which gets the right answer after some large number of moves. This takes anywhere from about 8 to possibly 30 guesses. When I played this game with a friend, I didn’t have a very systematic strategy, but I wondered if a computer program could solve the game, always entering the optimal guess.

A Bruteforce Solver

My first approach happened to work fairly well. Simply, the computer keeps track of a list of all possible numbers that the answer can be. At random, the computer guesses one of the numbers, and upon receiving feedback, eliminates every number in its list that doesn’t match that feedback. Quickly it eliminates whole swaths of combinations and arrives at the answer.

A typical session might go like this:

Guess 1: 6297. Score: 5040
b
Guess 2: 8512. Score: 1440
abb
Guess 3: 2315. Score: 83
bb
Guess 4: 5842. Score: 29
bb
Guess 5: 9581. Score: 13
ab
Guess 6: 8021. Score: 1


It took only 5 guesses for the computer to narrow down the choices to the only possible answer (8021).

A variant that is usually much harder for humans to solve is to allow the number chosen to have repeats. Although significantly trickier for humans to guess by trial and error, brute force doesn’t seem to be affected too much by it:

Guess 1: 1796. Score: 10000
b
Guess 2: 5881. Score: 3048
a
Guess 3: 0131. Score: 531
abb
Guess 4: 3311. Score: 15
aa
Guess 5: 3201. Score: 9
abb
Guess 6: 2011. Score: 1


The computer’s average of five or six guesses is much better than humans can normally do (although I haven’t researched possible human algorithms)!

Optimal or Not?

At this point you may begin to wonder if this strategy is the optimal one. Unfortunately, it is not — and I only need one counterexample to demonstrate that.

Suppose that instead of four numbers, you were allowed to choose 4 letters from A to Z. You choose the combination ABCW. Now suppose that the computer ‘knows’ that the first three letters are ABC — that is, it has eliminated all other combinations except for ABCD, ABCE, …, ABCZ.

By the random guessing algorithm, the computer is forced to guess ABCJ, ABCP, etc, until it eventually hits on ABCW at random. This may take a very high number of guesses.

A smarter strategy would be to guess combinations of four unknown letters, say DEFG, then HIJK, etc. Instead of eliminating one letter, you eliminate four letters at a time. Although some guesses have no chance of being correct, the number of guesses required is fewer in the long run.

Source Code

I’ll put my somewhat unwieldy java code here for all to see:



import java.util.*;

public class ABSolver{

    // Do the candidate cc1 and the guess cc2 match the given A and B feedback?
    // If repeats are allowed, cc2 is matched against cc1.
    static boolean fits(char[] cc1, char[] cc2, int A, int B){

        int a = 0;
        int b = 0;
        for(int i=0; i<4; i++){
            if(cc1[i] == cc2[i]) a++;
            else{
                for(int j=0; j<4; j++){
                    if(i != j && cc1[j] == cc2[i] && cc1[j] != cc2[j]){
                        b++;
                        break;
                    }
                }
            }
        }

        return a==A && b==B;
    }

    public static void main(String[] args){

        Random rand = new Random();
        Scanner scan = new Scanner(System.in);

        // Combinations that haven't been eliminated yet
        List<char[]> ok = new ArrayList<char[]>(10000);

        // Generate all possible combinations
        for(int i = 0; i <= 9999; i++){

            String i_s = Integer.toString(i);
            while(i_s.length() != 4) i_s = "0" + i_s;

            // Check for repeated digits
            char a = i_s.charAt(0);
            char b = i_s.charAt(1);
            char c = i_s.charAt(2);
            char d = i_s.charAt(3);
            boolean same = a==b || a==c || a==d || b==c || b==d || c==d;

            // Comment this line out if we're allowing repeats
            if(same) continue;

            ok.add(i_s.toCharArray());
        }

        // Pick the first guess randomly
        char[] firstg = ok.get(rand.nextInt(ok.size()));
        char[] guess = null;

        int nguesses = 1;

        // Question-answer cycle until we get it right
        while(true){

            // Ask for user feedback on the previous guess
            if(nguesses > 1){
                String ans = scan.nextLine();
                int A = 0;
                int B = 0;
                for(char cc : ans.toCharArray()){
                    if(cc == 'a') A++;
                    if(cc == 'b') B++;
                }

                if(A==4) return;

                // Keep only the combinations that still fit the feedback
                List<char[]> ok_ = new ArrayList<char[]>(ok.size());
                for(char[] zhe : ok){
                    if(fits(zhe, guess, A, B))
                        ok_.add(zhe);
                }
                ok = ok_;
            }

            char[] nextguess;
            if(nguesses == 1)
                nextguess = firstg;
            else
                nextguess = ok.get(rand.nextInt(ok.size()));

            System.out.println("Guess " + nguesses + ": " + new String(nextguess)
                + ". Score: " + ok.size());

            // We win!
            if(ok.size() == 1)
                return;

            guess = nextguess;
            nguesses++;
        }

    }
}


Algorithmic Counterpoint: How I Wrote a Computer Program to Generate Music at Mathcamp 2011

August 10, 2011

Hello all; I have just returned from Mathcamp, a five week math program for high school students. I’m not going to write in detail about life at Mathcamp, other than that it was amazing in every way.

This year at Mathcamp, I chose to do a computer programming project to ‘teach’ a computer to write ‘music’. The word ‘music’ is in quotes because what it’s expected to generate is more or less random and nothing like actual music by Bach or Mozart; on the other hand, the ‘music’ generated resembles actual music closely enough that it can still be called music, not a random string of notes. Anyways, here’s a short sample of what it can generate:

What can be considered ‘music’ in some sense here is really just a set of rules backed by a random number generator. Hence what it does is really the following:

1. Generate a main melody using a bunch of rules: ie, what notes are allowed to go with each other, what rhythms are allowed to go with what notes, etc.
2. Using the main melody as a sort of a baseline, generate one or more counterpoint melodies: a counterpoint melody is constrained by the same rules that the main melody must follow, but also has to harmonize with the existing melody, thus being constrained by another set of rules.
3. Once all the notes are generated, the program simply uses some library (in our case, JFugue) to play the notes through the speakers.

The bulk of our time was spent implementing, tweaking, and debugging the rules and the various other substeps in the above process.

The source code, mostly written by myself over about two weeks, is available as a Google Code project.

Coding a Tetris AI using a Genetic Algorithm

May 27, 2011

About two years ago, when I was in grade 9, I decided to make a tetris clone in Java. One or two months later, I had a fully working and playable tetris, complete with background and sound effects and scoring and graphical effects when a line is cleared.

A year after I created it, I decided to go ahead and write an AI to play my game. Although it played much better than I could (I’m not a particularly good tetris player), it would still die after a few dozen lines when the number of columns is 10. This means it was pretty outclassed by existing AI’s that could go on for thousands of lines!

Now, another year later, I coupled my previous AI algorithm with a genetic algorithm, with some pretty neat results:

Rules of the Game

How does one make a computer program to play tetris? Or more generally, how does one play tetris in the first place? The rules of tetris seem to me to be better viewed than explained, but I’ll give a quick overview of tetris gameplay and tetris strategies.

As I’m sure most of you know, in tetris you have blocks made of four squares (tetrominoes) falling from the top of the board. The player moves and rotates the blocks and stacks them up:

Here the black outline is one of the places you can put the funny shaped block. And when a row is filled entirely with blocks (the row with the red outline below), you get a clear; that entire row is removed and the rest of the board is shifted down (often with a series of beeping noises and a small increase to your score):

If the blocks don’t get cleared and they stack to the top of the board, you lose. So ideally you want to fill as many lines as possible and avoid stacking the blocks up. Very simple.

Simple Strategies

So what are some things that we can try to avoid losing the game? Some things immediately come to mind. We lose when the blocks get stacked up so high that we can’t put in new blocks, right? So it makes sense to avoid piling up large numbers of blocks in high positions in the first place, or to penalize height:

So for each position the computer can calculate a height penalty — for each block the computer adds a number depending on how high it is. Then when the AI tries to decide where to put the next block, it ‘knows’ that blocks piled up really high is a bad thing and tries to avoid it.

Another strategy that seems pretty obvious is to try to get clears! We assign a positive score for each line we clear, in other words we reward clears. Duh.

Anyone who has played a few games of tetris would probably subconsciously know a number of intuitive strategies — packing blocks together as tightly as possible, for instance. How do we translate that into code? Well, to start, blocks that are packed tightly have few or no holes — we’ll define a hole as any empty space with a block somewhere directly above it:

Why don’t we want holes? A row is only considered a clear if the entire row is filled — if there’s even a single hole in the row, it doesn’t get removed. Not good. So it makes sense to give a negative score to positions with holes in them — to penalize holes.

It’s best to try not to have any holes at all, but sometimes having a hole or two is inevitable. What can we do after we have holes in our formation? Good question, but we should not pile more blocks on top of our holes. If we define a blockade as any block that’s directly above a hole, we should penalize blockades:

Why are blockades bad again? Well, a hole only stops being a hole if there are no more blocks above it, so stacking more blocks above holes would only make it harder to remove the hole.

These are all the obvious strategies. I also put in less obvious scores rewarding or penalizing for hugging the wall (edge of the current block touching the edge of the wall), hugging the floor (edge of the current block touching the floor) and flattening (rewarding if the edge of the current block touches an existing block). Again, these are harder to justify and are mostly for fine-tuning — it’s not even clear whether they should be positive or negative.

A Hedonistic AI

These strategies are sufficient to make a passable tetris AI. This algorithm is very simple:

1. Look at the current block and the next block and simulate ALL possible combinations (positions and rotations) of the two blocks.
2. Calculate a score for each of the positions.
3. Move the block to the position with the highest score and repeat.

To give a score for a position, we would use an equation like this:

Score = A * Sum of Heights
+ B * Number of Clears
+ C * Number of Holes
+ D * Number of Blockades

Where A, B, C, and D are weights that we decide — how important is each of the factors. I initially came up with some pretty arbitrary values:

• -0.03 for the height multiplier
• -7.5 per hole
• -3.5 per blockade
• +8.0 per clear
• +3.0 for each edge touching another block
• +2.5 for each edge touching the wall
• +5.0 for each edge touching the floor

The reason I gave such a low multiplier for the height is that the penalty applies to every block on the field, so it racks up very quickly. The numbers I chose seem pretty reasonable — and put blocks more or less where a human would put them.
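The three penalized quantities can be measured per board roughly as follows. The grid representation (`grid[row][col] == true` means a filled cell, row 0 at the top) and all names here are my own, not the actual AI's code:

```java
// Sketch of the three board measurements fed into the score equation.
public class BoardMetrics {
    // Sum, over all filled cells, of each cell's height above the floor.
    public static int sumOfHeights(boolean[][] grid) {
        int rows = grid.length, sum = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < grid[r].length; c++)
                if (grid[r][c]) sum += rows - r;
        return sum;
    }

    // A hole: an empty cell with a filled cell somewhere above it.
    public static int holes(boolean[][] grid) {
        int count = 0;
        for (int c = 0; c < grid[0].length; c++) {
            boolean seenBlock = false;
            for (int r = 0; r < grid.length; r++) {
                if (grid[r][c]) seenBlock = true;
                else if (seenBlock) count++;
            }
        }
        return count;
    }

    // A blockade: a filled cell sitting above a hole in its column.
    public static int blockades(boolean[][] grid) {
        int count = 0;
        for (int c = 0; c < grid[0].length; c++) {
            int lowestHole = -1;        // largest row index of a hole
            boolean seenBlock = false;
            for (int r = 0; r < grid.length; r++) {
                if (grid[r][c]) seenBlock = true;
                else if (seenBlock) lowestHole = r;
            }
            for (int r = 0; r < lowestHole; r++)
                if (grid[r][c]) count++;
        }
        return count;
    }
}
```

With these in hand, the position's score is just the weighted sum from the equation above.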

Playing God: Bringing in the Genetic Algorithm

The biggest problem with this method is that we chose the weights pretty much arbitrarily. They might work well or they might not, but we don’t really know whether there are better values for them.

What can we do about it? We could brute force it — but with solutions that range across a continuum, there is a better way — a genetic algorithm.

A genetic algorithm is just a searching heuristic; it derives its ideas from evolution, where nature creates complex and sophisticated organisms by making random changes to the DNA.

Charles Darwin specifies four criteria for the process of natural selection to occur:

1. Variation: Organisms in a population must be slightly different from one another.
2. Inheritance: Traits of parent organisms must be passed onto their offspring.
3. Limited space: Only some of the offspring in any generation are able to survive and pass on their genes.
4. Competition: Individuals that are more fit are more likely to pass on their genes to the next generation.

In order to turn this into an algorithm, we’ll need — let’s quote this article:

1. chromosome which expresses a possible solution to the problem as a string
2. fitness function which takes a chromosome as input and returns a higher value for better solutions
3. population which is just a set of many chromosomes
4. selection method which determines how parents are selected for breeding from the population
5. crossover operation which determines how parents combine to produce offspring
6. mutation operation which determines how random deviations manifest themselves

We begin by constructing a chromosome — a solution to the problem of making an AI to play tetris. This is pretty easy, since we can already run an AI with a set of seven weights. So the chromosome is simply an array of seven doubles.

Next, our fitness function is very easy too, since the AI already has a scoring system. Basically the program would run the tetris AI at max speed on an 8-column board until it died, after which it would use the score it earned. Why only 8 columns and not the normal 10? In later generations, AI’s are able to survive for hours in the 10-column version, but when we reduce the number of columns to 8, even the best AI’s can survive for only a few seconds to a minute (we’re still talking about hundreds or thousands of lines here).

I used Nintendo’s original scoring system for tetris — 40 points for one clear, 120 points for two simultaneous clears, 300 for three simultaneous clears, and 1200 for four simultaneous clears. I also added 1 point for each block placed, to differentiate between AI’s that couldn’t score any lines.

Third, I chose a population of sixteen chromosomes. Initially the chromosomes are filled with randomly generated numbers (floating points fitting a normal distribution). Each generation onwards, the population’s chromosomes are derived from the best candidates of the previous generation (more on this later) — but the population size stays the same.

Next, for the selection method I chose a simple tournament method. After we run each candidate from a generation and collect all of their scores, we randomly pair up the candidates. For each pair, we take the high scorer — the winner — and discard the low scorer. Then, we pair up the winners randomly again to generate new offspring for the next generation.

Lastly, I implemented the crossover as follows: for each of the seven attributes in the offspring’s chromosome, we randomly select the respective attribute from the two parents with equal probability.

Occasionally, we have a mutation – a trait in an offspring that does not come from either parent. Each time we give an offspring an attribute, we have a 10% chance of assigning a completely random value to that attribute instead.
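The crossover and mutation steps together might look like this. A sketch with my own names; the 10% mutation rate is the one described above:

```java
import java.util.Random;

// Breed one offspring from two parent chromosomes (arrays of seven
// weights): each attribute comes from a random parent, except that 10%
// of the time it is replaced by a fresh random value (a mutation).
public class Breeding {
    static final Random RNG = new Random();

    public static double[] crossover(double[] mom, double[] dad) {
        double[] child = new double[mom.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = RNG.nextBoolean() ? mom[i] : dad[i];
            if (RNG.nextDouble() < 0.10)
                child[i] = RNG.nextGaussian();   // mutation
        }
        return child;
    }
}
```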

Results of the Genetic Algorithm

In the first generation or two, most candidates performed horribly. Many candidates had completely wrong weights — rewarding height and rewarding holes! Needless to say, these programs did not survive very long. But the genetic algorithm quickly came up with some decent solutions, and pretty soon most candidates were scoring a few hundred lines (my original values gave about 20 lines on the 8-column version, by comparison).

After running the genetic algorithm for about ten generations, I picked a candidate that was scoring decently:

• -3.78 for the height multiplier
• -2.31 per hole
• -0.59 per blockade
• +1.6 per clear
• +3.97 for each edge touching another block
• +6.52 for each edge touching the wall
• +0.65 for each edge touching the floor

Whoa — that’s a huge height multiplier. Perhaps the multiplier is so big that it just overwhelms everything else in the list — remember that the height multiplier applies to every block on the field. Also, holes and blockades might not have been as bad as I thought — and what’s with the huge bonus for touching the wall?

I ran the whole thing again from scratch for ten generations — using different randomly generated starting values. What it came up with made me think at first that my program had a bug somewhere:

• -3.71 for the height multiplier
• -4.79 per hole
• +1.4 per blockade
• -1.87 per clear
• +4.8 for each edge touching another block
• +3.22 for each edge touching the wall
• +3.68 for each edge touching the floor

Yup, this one rewarded blockades and penalized clears. And it would outperform both my naive values and the first set of AI values — I used this set in the video. It seems to put blocks in really bad positions, creating holes and blockades when it is unnecessary to — but it still does better than anything else I have.

Conclusion

Was the exercise a success? I would say partially so. There is the good and the bad. The good is that it came up with a much better AI than the one I originally had. But there may have been some things that could’ve been done better:

• What I said about the height multiplier overwhelming everything else is a bit misleading. While it is true that the height multiplier itself applies to every block on the field, it doesn’t really work that way, and really only affects the current block. Reason being, the rest of the field — everything but the current block — stays constant no matter where the current block goes. It’s kind of like if you vote for everybody, you really vote for nobody as your votes have no effect on the outcome.
• The lines cleared factor also turned out to be a bit misleading. While the second AI had a negative weight for clearing a line, it still cleared lines whenever it could: again tying back to the height multiplier. Clearing a line does exactly what it says: removing an entire row of blocks — and removing that many blocks does a huge blow to the height multiplier.
• The fitness function was really kind of screwed up. By the time your AI’s can get a few thousand lines on a tiny 8-column board, the only thing that causes the AI to die is a bad sequence of hard-to-place S and Z blocks — and in any random number generator you’ll eventually get a series of unlucky blocks. So at later generations, simply simulating an 8-column tetris was fairly bad at separating the very good AI’s from the excellent AI’s.
• Selection was also a bit screwed up. After ten generations, the entire population had more or less the same values, with only some minor variations — a bit ironic since this situation was exactly the situation the tournament selection algorithm was supposed to prevent.
• Although a genetic algorithm can converge on a local optimum fairly quickly, producing a decent solution, it is very hard for it to achieve a global optimum — the best possible solution. You might have a situation where mutating any one value seriously harms the candidate, but mutating two or more values simultaneously in a certain way makes it better. This is a drawback for genetic algorithms in general.

So that’s all I have to say. I’m fairly new to genetic algorithms, so I may have botched one or more parts of the algorithm. I’d love to know what I did wrong and how I should’ve done better.

Is 2011 a special number?

January 1, 2011

Today is the first day of the year 2011. 2011 is a prime number. Not only that, but according to numerous math bloggers and twitterers on the internet, 2011 is the sum of 11 consecutive primes — that is, 2011 = 157 + 163 + 167 + 173 + 179 + 181 + 191 + 193 + 197 + 199 + 211. Furthermore (as I already said), 2011 is prime.

Naturally I wonder, how rare are these numbers — numbers that are prime but also a sum of a bunch of consecutive primes? This seems like a problem easily solved with some programming.

Let us write P(n,k) for the number of primes less than or equal to n that can be written as a sum of at least k consecutive primes. How big is P(2011,k) compared to $\pi(2011) = 305$, the number of primes less than or equal to 2011?

My method for computing P(n,k) is pretty simple. First we start with a list of prime numbers, then write the cumulative sums for the prime numbers (these images were taken with my laptop camera off a whiteboard):

Next take every pair from the bottom list, and find their differences:

Because of the way I arranged the numbers, we can see that the bottom diagonal contains all the numbers that can be written as a sum of 1 prime (obviously, the prime numbers), the next row up contains all numbers that can be written as a sum of 2 consecutive primes, and so on. If we want to compute P(n,k), we simply list enough prime numbers to complete this table, take everything above a diagonal, and filter out the duplicates.

Here’s my hopefully correct implementation:

import java.util.*;
import java.lang.*;

class Main
{
    public static void main (String[] args) throws java.lang.Exception
    {
        int LIM = 2011;
        int NCONSEC = 3;

        // Sieve of Eratosthenes: sieve[i] is true iff i is prime.
        boolean[] sieve = new boolean[LIM+1];
        Arrays.fill(sieve, true);
        sieve[0] = sieve[1] = false;
        for(int i=2; i<=LIM; i++){
            if(!sieve[i]) continue;
            for(int j=2; j*i<=LIM; j++)
                sieve[i*j] = false;
        }

        List<Integer> primes = new ArrayList<Integer>();
        for(int i=2; i<=LIM; i++)
            if(sieve[i]) primes.add(i);

        // Cumulative sums, starting with 0 so that runs beginning at
        // the first prime are included.
        List<Integer> cuml = new ArrayList<Integer>();
        int cum = 0;
        cuml.add(cum);
        for(int p : primes){
            cum += p;
            cuml.add(cum);
        }

        // cuml[i] - cuml[j] is the sum of i-j consecutive primes.
        Set<Integer> consums = new TreeSet<Integer>();
        for(int i=1; i<cuml.size(); i++){
            for(int j=0; j<=i-NCONSEC; j++)
                consums.add(cuml.get(i) - cuml.get(j));
        }

        int p = 0;
        for(int i : consums){
            if(i > LIM) break;
            if(!sieve[i]) continue;
            //System.out.println(i);
            p++;
        }

        System.out.println(p);
    }
}


It turns out that P(2011,3) = 147, and since there are 305 primes less than or equal to 2011, roughly half of all primes under 2011 can be written as a sum of at least 3 consecutive primes. Hardly a special number at all! Even if we insist on a sum of 11 consecutive primes, there are still 56 primes less than or equal to 2011 that can be written this way, about 1 in every 5 primes.

Happy New Year!

IOI 2010: Quality of living

August 18, 2010

The 2010 International Olympiad in Informatics (IOI) finished today. It is probably the most prestigious high-school-level programming competition in the world, with participants from many countries. Anyways.

An interesting problem was Task 3 of Day 1, titled Quality of Living. In a grid of $R$ rows and $C$ columns, each cell is given a unique quality rank: 1 is the best, 2 is the second best, and $R \times C$ is the worst.

Given $H$ and $W$ as odd numbers with $H \leq R$ and $W \leq C$, the task is to choose a rectangle of height $H$ and width $W$ so that the median quality rank of the rectangle is at a minimum.

For example, given $R=5$, $C=5$, $H=3$, $W=3$:

The best, or smallest median of all possible $3 \times 3$ rectangles in the configuration is the one shown, with a median of 9. Since no $3 \times 3$ rectangle has a smaller median, 9 is the answer to this problem.

For an arbitrary array and parameters $H$ and $W$, how should a program calculate the smallest median?

Let’s consider brute force. There are $(R-H+1) \times (C-W+1)$ possible positions for the rectangle, and we check each of them. Checking the median requires sorting, which is $O(n \log n)$.

In order to earn full points for this problem, the program needs to handle 3000 x 3000 grids in under 10 seconds. In the worst case that $H=1500$ and $W=1500$, this amounts to over 2 million sorts, or many billion operations, which is clearly too much.

A binary search approach

An efficient solution takes a completely different approach. We cannot possibly afford to calculate the median at every position.

Instead, we conduct a binary search on $m$, the smallest median. Let us arbitrarily ‘guess’ a value of $m$. For each element on the grid, we assign it -1 if it’s smaller than $m$, 1 if it’s greater than $m$, or 0 if it is equal to (and therefore is) $m$.

In the example above, if we choose 12 for $m$, we get:

Now for any sub-rectangle, we can determine if its median is smaller or larger than our guess by summing it up.

If its sum is negative, then its median is smaller than our guess; if it’s positive then the median is larger than our guess. If the sum is exactly 0, it means that the number of elements smaller is equal to the number of elements bigger, implying that the median of this sub-rectangle is our guess.

So if we look at all the sub-rectangles, we can determine if our guess is correct or not. Furthermore, if it’s not correct, we know whether it’s too large, or too small:

If there is any sub-rectangle having a negative sum, then our guess is too big. That particular sub-rectangle has a smaller median, so our median can’t be the smallest anymore.

On the other hand, if all sub-rectangles have a positive sum, then our guess is too small.

We know we’ve got it when none of the sub-rectangle sums are negative, and at least one sub-rectangle has a sum of exactly 0.
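To make the sign trick concrete, here is a minimal sketch on a single 3×3 block of made-up quality ranks:

```java
public class SignTrickDemo {
    public static void main(String[] args) {
        // One 3x3 block of distinct ranks; its true median is 13
        int[] block = {25, 16, 13, 8, 9, 4, 19, 3, 21};
        int guess = 12;

        // Map each element to -1/0/1 relative to the guess and sum
        int sum = 0;
        for (int v : block) sum += (v > guess) ? 1 : (v < guess) ? -1 : 0;
        System.out.println(sum); // 1
    }
}
```

The sum is positive, and indeed the true median (13) is larger than the guess (12).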

It turns out that going through the sub-rectangles can be done very efficiently:

The plan is, we start from the top of the first column and go down it, and when we reach the end we start at the top of the next column, and so on. We keep doing this until we reach the end.

As we are dealing with $W$ columns at a time, let us keep the sums of the relevant columns in an array wsum[]. For example, in the $m=12$ grid:

Now to find the sum of the first sub-rectangle, we add up -2 and 1 and 1, getting 0.

Then we can use this result to find the sum of the next sub-rectangle down. Removing -2 and adding 3, we get 5, the sum of the next sub-rectangle. Thus moving down a space costs only two additions.

We use a similar idea when moving right a column after having finished our current one. Rather than recalculate wsum[] from scratch, we can take the old wsum[], and for each row, subtract the removed element and add the new element.
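Both updates are instances of the same one-dimensional sliding-window trick; here is a minimal demo using the numbers from the example above:

```java
public class SlidingSum {
    public static void main(String[] args) {
        int[] a = {-2, 1, 1, 3, -1};
        int h = 3; // window height

        // Sum of the first window: -2 + 1 + 1
        int sum = 0;
        for (int j = 0; j < h; j++) sum += a[j];
        System.out.println(sum); // 0

        // Slide down: drop the element leaving, add the one entering
        for (int j = 0; j + h < a.length; j++) {
            sum += a[j + h] - a[j];
            System.out.println(sum); // 5, then 3
        }
    }
}
```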

This combination of ideas allows us to implement this task very efficiently, easily solving the 3000 x 3000 subtask within 10 seconds.

I have an implementation in Java:


import java.util.*;
import java.io.*;

public class Main{

    static int R = 0;
    static int C = 0;
    static int W = 0;
    static int H = 0;
    static int[][] a = null;

    public static void main(String[] args) throws Throwable {

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

        // Initialize constants
        String l1 = in.readLine();
        StringTokenizer l1tok = new StringTokenizer(l1);
        R = Integer.parseInt(l1tok.nextToken());
        C = Integer.parseInt(l1tok.nextToken());
        H = Integer.parseInt(l1tok.nextToken());
        W = Integer.parseInt(l1tok.nextToken());

        // Parse data into array. Each cell has its own line.
        a = new int[R][C];
        for(int i=0; i<R; i++)
            for(int j=0; j<C; j++)
                a[i][j] = Integer.parseInt(in.readLine().trim());

        // Separate the calculations from the parsing
        run();
    }

    // Try m as the new smallest median, and decide if m should be larger or
    // smaller. Return -1 if it should be smaller, 1 if it should be bigger.
    // If we have found the smallest median then we return 0.
    static int tryMedian(int m){

        // Does a median m exist at all in this configuration?
        boolean exists = false;

        // Individual row sums of the column we're on
        int wsum[] = new int[R];

        // Initiate wsum for the first column layer
        for(int i=0;i<R;i++) {
            for(int j=0;j<W;j++) {
                int temp=0;
                if(a[i][j]>m) temp=1;
                if(a[i][j]<m) temp=-1;
                if(j==0) wsum[i]=temp;
                else wsum[i]+=temp;
            }
        }

        // Outer loop: goes through the columns
        for(int i=0;i<=C-W;i++) {

            // Sum for the rectangle we're looking at
            int block=0;

            // First block in the column
            for(int j=0;j<H;j++)
                block+=wsum[j];

            // Go through the rest of the rows
            for(int j=0;j<=R-H;j++) {

                // Found a negative block: m is too big!
                if(block<0) return -1;

                // The median exists, so this is it (unless we find a negative)
                if(block==0) exists=true;

                // As long as we're not at the very end, adjust the sum
                if(j!=R-H) block+=wsum[j+H]-wsum[j];
            }

            // If not at the last column, adjust the wsum
            if(i!=C-W)
                for(int j=0;j<R;j++) {
                    int temp=0;

                    // Remove element at the head
                    if(a[j][i]>m) temp=1;
                    if(a[j][i]<m) temp=-1;
                    wsum[j]-=temp; temp=0;

                    // Add the new element at the end
                    if(a[j][i+W]>m) temp=1;
                    if(a[j][i+W]<m) temp=-1;
                    wsum[j]+=temp;
                }
        }

        // No negatives; return based on if we've found m as a median or not
        if(exists) return 0;
        return 1;
    }

    static void run() {

        // Min and max for binary search
        int min = 1;
        int max = R*C;

        while(true) {

            // Perform new iteration on binary search
            int mid = (max+min)/2;
            int result = tryMedian(mid);

            // Found lowest median, stop here
            if(result == 0){
                System.out.println(mid);
                return;
            }

            // Keep searching; direction is determined by the result
            if(result == 1) min = mid+1;
            else max = mid-1;
        }
    }
}


Project Euler 299: Three similar triangles

July 29, 2010

Being the last Project Euler problem before the summer break, Problem 299 is quite an interesting problem. Solving it involves both geometry and number theory.

Points $A$, $B$, $C$, $D$ are represented on a coordinate plane as $(a,0)$, $(b,0)$, $(0,c)$, and $(0,d)$ respectively, all of which have integer coordinates.

$P$ is a point on $AC$ with integer coordinates such that triangles $DCP$, $DBP$, and $PAB$ are similar.

It can be shown that in order for the triangles to be similar, $a=c$.

For $b+d < 100,000,000$, how many triples $(a,b,d)$ exist such that point $P$ exists?

Initial observations

Before we can start coding, there is some geometry work to be done.

It is given that $a=c$ is necessary. Why is this? Suppose that $a \neq c$. Then $\angle OAC \neq \angle OCA$, and consequently $\angle DCP \neq \angle PAB$. It is obvious that triangles $DCP$ and $PAB$ cannot be similar if $\angle DCP \neq \angle PAB$.

Since $\angle COA$ is a right angle and $\triangle COA$ is isosceles, it follows that $\angle OCA = \angle OAC = 45^\circ$. So $\angle DCP = \angle PAB = 135^\circ$ and also $\angle DPB = 135^\circ$.

Working backwards, we know that triangles $DCP$, $DPB$, and $PAB$ are all similar. Then $\angle CDP = \angle PDB$ and $\angle DBP = \angle PBA$; lines $DP$ and $PB$ are the angle bisectors of $\triangle DOB$, thus $P$ is the incenter of $\triangle DOB$.

So the distance from $P$ to the three sides $OB$, $OD$, and $DB$ are equal. This also means $P$ can be represented as $(i,i)$ since its $x$ and $y$ coordinates are equal.

We should note at this point that there is an additional case, where $\angle CDP = \angle DBP$ and $\angle ABP = \angle BDP$. Then $\angle DPC = \angle BDP$, so lines $AC$ and $BD$ are parallel. However, in this case $P$ is no longer the incenter.

We shall consider the two as separate cases, and refer to them as the incenter and parallel case respectively.

Incenter case

We first consider the incenter case. Note that in this case, $a$ is uniquely determined by $b$ and $d$. For any pair $(b,d)$, there is only one possible $AC$ passing through the incenter.

We need to find pairs of $(b,d)$ such that there exists a point $(i,i)$ where the distance from the point $(i,i)$ to $BD$ is $i$ (and that $i$ is integral).

Line $BD$ can be expressed by the equation

$y = -\frac{d}{b}x + d$,

or in standard form,

$dx + by - bd = 0$.

Recall the point-to-line distance formula, which gives the distance from a point $(m,n)$ to the line $Ax + By + C = 0$:

$d = \frac{|Am+Bn+C|}{\sqrt{A^2 + B^2}}$.

By substitution, we have

$i = \frac{|di+bi-bd|}{\sqrt{b^2 + d^2}}$.

Simplifying:

$i^2 (b^2 + d^2) = (di+bi-bd)^2$.

It is necessary and sufficient for $b^2+d^2$ to be a perfect square, as then $i$ will be uniquely determined and will be an integer.

Thus the incenter case reduces to finding all pairs $(b,d)$ for $b+d < 100,000,000$ where $b^2 + d^2$ is a perfect square.
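As a sanity check, take the smallest such pair $(b,d) = (3,4)$: the inradius of a right triangle with legs $p$, $q$ and hypotenuse $h$ is $(p+q-h)/2$, so here $i = (3+4-5)/2 = 1$, an integer. A minimal sketch of the check:

```java
public class IncenterCheck {
    public static void main(String[] args) {
        long b = 3, d = 4;
        long hyp = (long) Math.sqrt((double) (b * b + d * d)); // 5: b^2 + d^2 is a perfect square
        // Inradius of the right triangle with legs b and d
        System.out.println((b + d - hyp) / 2.0); // 1.0
    }
}
```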

Parallel case

Now we consider the case when $AC \parallel BD$.

Let $X$ be the circumcenter of $\triangle DPB$:

Notice that here there are actually two possible places for $P$. We can ignore this fact for now.

Let $T$ be a point (not shown) such that $DPBT$ is cyclic. Then $\angle DTB = 45^\circ$ because $\angle DPB = 135^\circ$; therefore the central angle $\angle DXB = 90^\circ$.

As a consequence of the parallelism of $AC$ and $BD$, $b$ and $d$ must be equal. Since the angle at $X$ is a right angle, it follows that the coordinates of $X$ are $(b,b)$, and that $DOBX$ is a square.

The circle around $X$ is centered at $(b,b)$ and has a radius of $b$, thus its equation is

$(x-b)^2 + (y-b)^2 = b^2$.

The line $AC$ has an equation of

$y=c-x$.

By substitution into the circle equation:

$(x-b)^2 + (c-x-b)^2 = b^2$,

Or,

$2x^2 + (-2c)x + (b^2+c^2-2bc)=0$;

Applying the quadratic formula and simplifying gives

$x = \frac{c \pm \sqrt{c^2 - 2(b-c)^2}}{2}$.

Here it is sufficient for $c^2 - 2(b-c)^2$ to be a perfect square, as then $x$ will be an integer.

We prove this by a parity argument: if $c$ is odd, then $c^2$ is odd as well, and the expression inside the radical is odd; supposing that it is a perfect square, its square root is odd, and adding it to $c$ makes an even number. A similar argument applies if $c$ is even.

We can substitute $f$ for $b-c$ giving the perfect square

$c^2 - 2f^2$.

If we let $q^2$ be the perfect square, we get

$q^2 + 2f^2 = c^2$.

Essentially the problem reduces down to finding integral solutions to the above equation, with the limit set to

$2(c+f) < 100,000,000$.

Writing the code

We are now ready to write a program to enumerate integer pairs to satisfy our equations.

We will start with the incenter case, which is somewhat more basic and easier to deal with.

Recall that we have derived this equation (replacing the variables with $x$, $y$, $z$):

$x^2 + y^2 = z^2$,

with the limit being on the sum $x+y$. Obviously, this is a Pythagorean triple. Enumerating Pythagorean triples can be done very efficiently.

If $m$ and $n$ are coprime integers with an odd sum, and with $m < n$, then the primitive Pythagorean triples can be parameterized by the formulas:

$x = n^2 - m^2$

$y = 2mn$

$z = n^2 + m^2$

Just by using this formula and little else, primitive Pythagorean triples can be enumerated very quickly.

It is not very difficult to enumerate the non-primitive triples, either. Suppose we have generated the triple $(3,4,5)$. To count the number of similar triples with $x+y < 100,000$, sum the $x$ and $y$ values of the triple, in this case 7. Then divide the limit by 7 and round down: $\lfloor \frac{100,000}{7} \rfloor = 14285$, so we have 14285 such triples.
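In code, this counting step is a single integer division; for the example above:

```java
public class TripleCount {
    public static void main(String[] args) {
        int limit = 100000;
        int x = 3, y = 4;   // primitive triple (3,4,5)
        // Every multiple k*(3,4,5) with k*(x+y) < limit is a valid triple
        System.out.println(limit / (x + y)); // 14285
    }
}
```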

A nearly identical approach can be used to enumerate pairs for the parallel case. Here we have

$x^2 + 2y^2 = z^2$,

which is nearly identical to the previous equation and can be parameterized as:

$x = n^2 - 2m^2$

$y = 2mn$

$z = n^2 + 2m^2$

This case is slightly different from the previous one, in that we no longer require $x$ to be positive, so we do not need the restriction $m < n$. Additionally, we divide by $y+z$ instead of $x+y$ as before.

This is easy to implement in Java:


public class Main{
    final static long L = 100000000;

    static long gcd(long m, long n) {
        if (m < n) { long t = m; m = n; n = t; }
        long r = m % n;
        if (r == 0) return n;
        else return gcd(n, r);
    }

    static long incenterCase(){
        long count = 0;
        for(long n = 1; n < L/2; n++)
            for(long m = 1; m < n; m++){
                if((m+n) % 2 == 0) continue;
                if(gcd(m,n)!=1) continue;
                long b = n*n - m*m;
                long d = 2*n*m;
                long sum = b+d;
                if(sum >= L) break;
                if(b == d) count += L/sum;
                else count += 2*(L/sum);
            }
        return count;
    }

    static long parallelCase(){
        long count = 0;
        for(long n = 1; n < L; n+=2)
            for(long m = 1; m < L; m++){
                if(gcd(m,n)!=1) continue;
                long g = 2*n*m;
                long a = n*n + 2*m*m;
                long b = g+a;
                if(b > L/2) break;
                count += (L-1)/(2*b);
            }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(incenterCase() + parallelCase());
    }
}


This code generates the correct answer in about 25 seconds on my machine.

An introduction to Gregg Shorthand and an attempted English to shorthand converter

July 9, 2010

The idea of strange alternative shorthand writing systems has, for a while, held in me a certain special appeal: the idea of drawing a few short alien symbols to represent entire phrases and sentences.

The Gregg shorthand system, invented over a hundred years ago (1888 to be exact), is one of several such systems. Curiously, its original purpose was not to amaze one’s friends. It was intended to enable news reporters and secretaries to transcribe English speech at a speed comparable to that at which English is spoken.

English, or rather the conventional English writing system, is inherently inefficient for such purposes: it is just not physically possible to write English much faster than about 40 words per minute without it degenerating into a collection of meaningless lines.

Shorthand systems address this issue by replacing troublesome letters such as ‘m’ (which always ends up as a scribble when I write it) with simple, clear letters, in this case a straight horizontal line. Plenty of shortening conventions are used, making it possible to write at speeds of 120-160 words per minute. By comparison, I can only type at about 80 words per minute.

As audio recording devices and video camcorders achieved widespread usage, shorthand systems quickly became obsolete and fell into relative obscurity. Just imagine: who would need shorthand when they could just film the speaker and play it back, transcribing at leisure?

Personally, the reason I learned Gregg shorthand a few months ago is less about transcribing other people’s speeches in real time (which I definitely cannot do) and more about the ability to write personal notes and diaries, and be relatively confident that nobody (or at least nobody I know) will be able to read them.

Shorthand used to be actually taught in some places. This was decades ago though. On the other hand, if everybody knew Gregg shorthand, it wouldn’t be suitable to use it for writing personal notes anymore.

Just as an example, here’s a notebook of Gregg shorthand (I don’t even know what it’s for):

Looks alien to you? Good.

Actually, shorthand is really simple. The Gregg alphabet is just this:

What’s really smart about this is that similar sounding letters are grouped together, and look similar.

But this is hardly complicated, just different.

The second, less obvious difference is that Gregg shorthand is syllabic, instead of alphabetic.

Let’s try an example:

London bridge is falling down

As shorthand is written the way it’s heard, it would transcribe to something like this:

lndn brej s flng dn

All that is left is the substitution of Gregg syllables for the latin characters:

With a little (okay, a lot) of practice, the above symbols may be written in two or three seconds.

This is pretty much it. Quite a lot easier than learning French or Spanish or Chinese.

There’s a bit more to it. Much of Gregg is the wide variety of brief forms, which are abbreviations of commonly used words to save time. Some of them are pretty obvious:

Most are a little less obvious:

correspondence = kres

A few are just downright retarded:

world = uu

Yea. That’s not even the worst. I’m sure they had a reason to do so, but someone a hundred years ago came up with more and more contrived exceptions to save a few strokes on more and more obscure phrases.

For instance, who really needs a symbol for “I am of the opinion“, or another for “I should like to have“? I wouldn’t be too surprised if they had a brief form for “I slept with your mother“. Unfortunately there is none.

(/rant). I actually like the language. Just not most of the brief forms.

In case you’re wondering, here are the symbols for “I am of the opinion” (i-m-o-p-n) and “I should like to have” (i-sh-d-l-a-v):

An attempt at a text to shorthand generator

For some unknown reason, I decided I needed an automatic translator from English plaintext to Gregg shorthand.

Gregg being such an ancient writing system, I wasn’t surprised to find that no such software exists (at least none that I know of). Even Unicode, whose extensive glyph tables extend from Latin to Chinese and Hebrew and even to ancient Egyptian hieroglyphs, does not offer support for the curves of Gregg shorthand.

Fortunately, a translator is still possible without Unicode support, though some imagination is required. Output is purely graphical, as shorthand cannot otherwise be represented textually.

In concept, an english to shorthand generator is not a very complicated piece of software. There are essentially two parts to it:

One, the English text has to be lexed into pronounceable syllables. This problem has been faced many times before, mostly by text-to-speech software; indeed, it is one of the problems faced by even the most basic TTS programs, so plenty of libraries already exist for the task. I chose the FreeTTS library for Java.

For example, here is a sample code snippet for FreeTTS:

Lexicon lexicon = CMULexicon.getInstance(true);
String[] phones = lexicon.getPhones("luckytoilet","n");
for(String phone : phones) System.out.print(phone + " ");


This generates the pronunciation for luckytoilet:

l ah1 k iy t oy1 l ax t

We can next map the FreeTTS syllables to the Gregg syllables. This is a many-to-one mapping: for instance, Gregg does not usually distinguish between long (cake) and short (cat) vowels, both of which map to “a”. Additionally, FreeTTS syllables contain information about vocal stress, which is irrelevant for our purposes.
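The mapping itself can be a plain lookup table. Here is a sketch; the handful of phones shown and the Gregg glyph names are illustrative, not the actual table the program uses:

```java
import java.util.HashMap;
import java.util.Map;

public class PhoneMap {
    // Many-to-one map from (a few sample) FreeTTS phones to Gregg glyph names
    static final Map<String, String> GREGG = new HashMap<>();
    static {
        GREGG.put("ey", "a"); // long a, as in "cake"
        GREGG.put("ae", "a"); // short a, as in "cat" -- both map to the same circle
        GREGG.put("k", "k");
        GREGG.put("t", "t");
    }

    static String toGregg(String phone) {
        // Strip trailing stress digits ("ah1" -> "ah") before the lookup
        String base = phone.replaceAll("\\d+$", "");
        return GREGG.getOrDefault(base, "?");
    }

    public static void main(String[] args) {
        System.out.println(toGregg("ey1")); // a
        System.out.println(toGregg("ae"));  // a
    }
}
```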

The second step is to draw the glyphs, from the plaintext syllables. This step I think I’ve done a rather poor job on.

Each letter is contained in a 100px by 100px square PNG file. Additionally, the program has information on where the ‘stroke’ for each letter begins and ends, so that it can position the letters properly.

For example, the k letter:

If we wanted to draw an n after the k, we can do that: place the n so that its starting position coincides with the ending position of the k. With this idea we can chain together elaborate combinations of characters.

This way letters can be drawn at the position where the previous letter ends, giving a connected, cursive look.
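Chaining glyphs by their stroke anchors is just vector arithmetic; a sketch (the anchor coordinates here are invented for illustration):

```java
public class GlyphChain {
    // Each glyph is a 100x100 image with a recorded stroke start and end point
    static class Glyph {
        final int startX, startY, endX, endY;
        Glyph(int sx, int sy, int ex, int ey) { startX = sx; startY = sy; endX = ex; endY = ey; }
    }

    public static void main(String[] args) {
        Glyph k = new Glyph(10, 40, 90, 60); // hypothetical anchors
        Glyph n = new Glyph(5, 50, 95, 50);

        // Place k at the origin; place n so its stroke start meets k's stroke end
        int kX = 0, kY = 0;
        int nX = kX + k.endX - n.startX;
        int nY = kY + k.endY - n.startY;
        System.out.println(nX + "," + nY); // 85,10
    }
}
```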

These two steps are pretty much the entire program. Additionally, there are certain brief forms in Gregg that are treated specially. Here the brief form list is stored in alphabet/2.dat; it is not really the brief forms of any one dialect of Gregg, but rather a combination of them. Also, vowels are generally omitted, so only longer vowels are displayed.

Here is what I came up with (showing an excerpt of Shakespeare’s Hamlet):

When the user types in the text box at the bottom, the shorthand equivalent is computed and drawn in the top region. It doesn’t handle punctuation or other non-alphanumeric symbols, which are simply stripped out.

The project is available on SVN, or checked out with this command:

svn checkout http://bai-projects.googlecode.com/svn/trunk/gregg gregg

Afterthoughts

Admittedly, my program is more of a proof of concept, and is far from perfect. Rather, it’s actually quite crude.

Most words are botched and simply look wrong. In actual Gregg, letters are placed differently based on context: the th, for example, may be drawn under or over depending on what characters precede it. Vowels are connected in ways that are really tricky to handle in a program. My program simply draws the letters exactly the same no matter where they appear.

For instance, here’s the word cake as rendered by my program:

Indeed, the word cake is transcribed as k-a-k, which is exactly what’s generated by the program. Compare this with the correct version (as printed in the Gregg dictionary) which (correctly) puts the a (circle) under the ks:

There are cases where the a is drawn over, under, to the left, to the right, curved downwards, curved upwards, ad infinitum. In order to generate more correct Gregg, we would have to implement very elaborate and complicated sets of rules to handle the many rules of standard Gregg shorthand.

Investigations on star polygons with a star polygon / stellar number generator

June 30, 2010

Note 9/3/2010: Redone all of the diagrams

If you are just looking for a download link to the program, here it is. To run, save as Stellar.jar (Java is required)

As school has been over for more than a week now, I’ve had time to (among other things) study geometry. I’m just spending an hour or two every day working through various exercises in Geometry Revisited.

This blog post is long overdue; it had been sitting in my drafts section with a few sentences written since the middle of April. Maybe it’s still useful now, maybe not. Anyways.

I myself am not in IB, because of my hatred of being forced to do copious and tedious amounts of homework. Many of my friends, however, take IB math. One famous assignment in IB math (not only at my school, but at other schools too) is a large project known as the math portfolio. From what I’ve heard, this consists of a dozen or so pages of writeup on some extended problem.

I first heard about the existence of such a project when a few of my friends asked me for help on it. The project was related to ‘stellar numbers’; a picture of the assignment can be found here. The problem itself seems rather trivial, thus I don’t feel it necessary to discuss it in this blog post.

My investigations here, although inspired by this assignment, are only marginally related to the assignment itself. The end product to be handed in for grades was supposed to include diagrams. However, it isn’t so easy to draw diagrams of stars (see above image) either by hand or on the computer. I saw many of my friends hastily put together their diagrams of stars using Powerpoint or MS Word, using the line and circle tools and a heck of a lot of copy-and-pasting. The result, unsurprisingly, is messy and inexact. One friend of mine wrote a computer program to output Latex code to draw the stars; the details of how this is done is something I consider more difficult and interesting than the project itself.

Keeping along with my theme of geometry this week, it is useful to figure out the geometry of star polygons before writing programs to generate them.

Some geometric constructions

Consider the five-pointed star (fig.1):

We can see that the five corners of the star, ABCDE, are the same distance from the center K of the star. This means the five points lie on a circle; furthermore the distances between the corners are the same.

A similar phenomenon occurs with the five points between the corners, FGHIJ. They, too, lie on a circle centered around K.

Let us call the radius of the larger circle r, and the radius of the smaller circle i. Any star can be built around the framework of two circles. For example, our five pointed star (fig.2):

There is, however, one more characteristic present in star polygons that we neglected in the above diagram. It is that we chose the values of r and i pretty much arbitrarily.

Notice that in (fig.1), the sides EF and GB are parallel. It can be shown geometrically that they are not merely parallel but collinear. In (fig.2), EF and GB are neither parallel nor collinear.

Although any values of i and r with $i < r$ are enough to produce a star, it is necessary to further adjust the ratio between i and r in order to produce a correct star polygon.

Let us denote n to mean the number of points of a star, and d to denote the density of a star.

What do we mean by density? In order to define density, we will take a step back and take a look at alternative ways that star polygons can be constructed.

At present we are looking at star polygons as points on two distinct circles joined together with alternating line segments. Here is a completely different way of presenting it (fig.3):

Points ABCDE are spread equally on a circle. Each point is then connected to the point two places away from it. We define the density to be this number: the density of this star is 2.

It is possible, perhaps easier, to implement a drawing routine this way. However, by doing this we would have to erase the lines FGHIJ, which seems straightforward but is troublesome in implementation. We are going to stick to the two circles paradigm.

If $d=1$, then our ‘star’ would be degenerate; it would merely be an n-sided polygon. On the other hand, we assume that the inner circle has positive radius, thus,

$d < \frac{n}{2}$

For any integer n with $n \geq 3$, the maximum value of d is given by:

$d \leq \lfloor \frac{n-1}{2} \rfloor$

Next is finding a way to express the ratio between the radii in terms of n and d.

Here we combined the previous two diagrams, and added some new lines. Since our star contains so much symmetry, what follows can be extended to all corners of the star, and also to stars with different densities and number of corners.

Let the angle $\angle EKA$ be x, which is $\frac{1}{n}$ of the circle, or $\frac{2 \pi}{n}$ radians.

Let y be the angle such that

$x:x+y = 1:d$

.. whence here $d=2$, therefore $x=y=72^\circ$.

As $\triangle EKB$ is isosceles, $\angle BEK = \angle EBK$, and $\angle BEK + \angle EBK + x + y = 180^\circ$; therefore

$\angle BEK = \frac{1}{2}\left[ 180-(x+y) \right] = 90-\frac{1}{2}(x+y)$

.. whence here $\angle BEK = 18^\circ$.

Next, $\angle EKL = \frac{1}{2}x$, in our case $36^\circ$, thus the measure of $\angle EFK$ can be found:

$\angle EFK = 180 - \left[ 90-\frac{1}{2}(x+y) \right] - \frac{1}{2}x = 90 + \frac{1}{2}y$

In our case $\angle EFK = 126^\circ$.

By the sine law in $\triangle EKF$,

$\frac{r}{\sin (90 + \frac{1}{2}y)} = \frac{i}{\sin \left[90-\frac{1}{2}(x+y) \right]}$

Or,

$i = \frac{r \sin \left[90-\frac{1}{2}(x+y) \right]}{\sin (90 + \frac{1}{2}y)}$

Simplifying, we get

$i = \frac{r \cos \left[ \frac{1}{2}(x+y) \right]}{\cos (\frac{1}{2}y)}$

We defined that $x = \frac{2 \pi}{n}$ and that $x+y = \frac{2 \pi d}{n}$, thus we write y:

$y=\frac{2 \pi (d-1)}{n}$

And that brings us to the final formula:

$i = \frac{r \cos (\frac{\pi d}{n})}{\cos \left[ \frac{\pi (d-1)}{n} \right]}$

Evaluating it for our star, we find that if $r=1$, then $i \approx 0.38197$.
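The formula translates directly into code; a quick check for our $n=5$, $d=2$ star (a sketch, not the application’s actual drawing code):

```java
public class InnerRadius {
    // i = r * cos(pi*d/n) / cos(pi*(d-1)/n)
    static double innerRadius(double r, int n, int d) {
        return r * Math.cos(Math.PI * d / n) / Math.cos(Math.PI * (d - 1) / n);
    }

    public static void main(String[] args) {
        System.out.println(innerRadius(1.0, 5, 2)); // ~0.38197
    }
}
```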

Implementation in Java

Now that we have the math done, it is a fairly easy programming exercise to implement this as a java application. Here’s what I came up with after ~2 hours of hacking in Netbeans (animated gif):

This may be useful to future people doing the same assignment (if it ever gets assigned again), or with modifications it may be useful for other purposes too.

To use it, adjust the settings, take a screenshot, and crop it; it may be necessary to edit it further in Photoshop.

Without further ado, here’s the application as a jar file:

Or the source as a Netbeans project:

stellar-src.zip (37kb)

Do whatever you want with them, and have fun!

A school-related computer game

June 8, 2010

My blog has been somewhat neglected for the past few days. I suppose one of the reasons is my work on a really big school project.

In place of a final exam, the Language Arts course at my school offers something called a Language Arts portfolio, consisting of three projects. The creative portion of the portfolio is quite free-form: you pick a novel and do pretty much anything you want with it.

For this project I chose the novel Siddhartha by Hermann Hesse:

I chose to create a Java game for this assignment. The game was a sort of role-playing platformer, and in my own words, ‘a combination of Runescape and Maple Story’.

This project took up most of my time for the last week or so. Planning began about four weeks ago, and I began coding and writing the dialogs about three weeks before. I nevertheless scrambled to complete all of the graphics and put it all together on the weekend.

Unlike most Language Arts assignments, I actually quite enjoyed making this game, and I’m proud of my work. I’m going to release the game along with its source code:

siddhartha.zip (1.7 mb)

On windows, the game can be run by double-clicking game.bat. Java 1.5 or later is required to run this game.

Here’s what the game looks like:

I was given an opportunity to ‘present’ my project to the class today. The school laptop had a problem with some Java DLLs, but a classmate had a working laptop so the presentation went fairly well. I discovered one or two bugs during the presentation that I had never noticed when testing the game.

I also tried to get my girlfriend to playtest the game, which kind of failed miserably.

Technical details of the game

Before starting to code the game, I searched for existing, similar open-source Java games. Finding none, I was forced to make my own.

If anyone else ever wants to make a similar game, my work here might be useful. If I myself were to make something like this again in the future, I wouldn’t have to write the game from scratch again. Of course by then I would have forgotten how the code works; it would benefit me to write about it here to refer back to in the future.

The following section is probably too technical and would be only useful when trying to understand my code. Anyways.

The game is written in about 1300 lines of Java code (although about 60% of it was configuration so only about 600 lines of it was game logic). The images are mostly drawn in MS-Paint, since I’m not very skilled at using Photoshop. I only used Photoshop to change all of the white pixels to transparent pixels (impossible in MS-Paint).

I chose JGame for the game engine. It’s a decent engine, although I find it a bit buggy and rather unpolished (the sample games look like they’re from the 1980s). I used the OpenGL backend, not because the game uses any OpenGL features, but because the non-OpenGL version of the library has a bug in one of its functions that I wasn’t sure how to work around.

That bug was in the setBGImage() function, which didn’t seem to work in the JRE (non-OpenGL) version of JGame. I asked a question on Stack Overflow, but never really got an answer.

The most complicated portions of the game were handling transitions between maps and handling dialogs. Map transitions are done entirely in Java code, while the dialogs are driven by an ad-hoc scripting language I created for this purpose.

To transition between maps, I came up with the idea of a portal: an object that activates at certain points, replaces the existing map (and all its objects) with a new map, and adjusts the game state accordingly. In essence, a portal is a one-way connection to another portal: each portal keeps a reference to its connecting portal, so that it can change the game state when needed.

A portal has an off state, an on state (when the player is touching it), and an active state (while the transition between maps is taking place). A portal that has a connecting portal is activated when the up key is pressed; a portal without one cannot be activated.
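
The portal idea can be sketched roughly as the state machine below. The class and method names here are my own for illustration; the actual code in siddhartha.zip is organized differently.

```java
// A minimal sketch of the portal state machine: off -> on (player touching)
// -> active (map transition in progress). Names are illustrative only.
class Portal {
    enum State { OFF, ON, ACTIVE }

    private State state = State.OFF;
    private final Portal destination;  // one-way link; null if unconnected

    Portal(Portal destination) {
        this.destination = destination;
    }

    // Called while the player overlaps this portal.
    void onPlayerTouch() {
        if (state == State.OFF) state = State.ON;
    }

    void onPlayerLeave() {
        if (state == State.ON) state = State.OFF;
    }

    // Pressing the up key activates the portal, but only if it is being
    // touched and actually leads somewhere.
    boolean tryActivate() {
        if (state == State.ON && destination != null) {
            state = State.ACTIVE;
            return true;  // the caller then swaps in the destination's map
        }
        return false;
    }

    State getState() { return state; }
}
```
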

A map in the game takes up exactly one screen, has one background image, can have multiple portals, and at most one NPC (non player character). There are 28 maps in this game (some sharing the same background).

An NPC is a character in the game that’s not the player. An NPC has a still (non-animated) sprite and holds a fixed position in exactly one map. NPCs in different maps may share the same sprite, but are otherwise completely independent. An NPC has a name, which is shown above its head on mouse-over; clicking the NPC puts the player into a conversation. Each NPC has exactly one conversation attached to it.
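
In code, the map and NPC constraints above amount to something like the data classes below. These are a hypothetical reconstruction (including the example names), not the game’s actual classes.

```java
import java.util.ArrayList;
import java.util.List;

// One screen per map: one background image, any number of portals, and
// at most one NPC. Field and class names are illustrative only.
class GameMap {
    final String backgroundImage;                    // may be shared across maps
    final List<Object> portals = new ArrayList<>();  // portal objects, as described earlier
    Npc npc;                                         // null when the map has no NPC

    GameMap(String backgroundImage) {
        this.backgroundImage = backgroundImage;
    }

    void setNpc(Npc npc) {
        if (this.npc != null)
            throw new IllegalStateException("a map holds at most one NPC");
        this.npc = npc;
    }
}

class Npc {
    final String name;         // shown above the head on mouse-over
    final String sprite;       // still image, possibly shared with other NPCs
    final int x, y;            // fixed position within its one map
    final int conversationId;  // exactly one conversation attached

    Npc(String name, String sprite, int x, int y, int conversationId) {
        this.name = name;
        this.sprite = sprite;
        this.x = x;
        this.y = y;
        this.conversationId = conversationId;
    }
}
```
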

The conversation data is defined in a file, speech.dat. The first part of the file is a sort of scripting language describing how the conversations flow, while the second part is a dump of all the actual String data for the dialogs. Each line in the String data section is assigned a number starting from 0, which the rest of the file refers to.

A conversation always starts with the player. From there, the current speaker can continue speaking (the same command), the other person can start speaking (the chgspk command), or we can present the player with a list of choices (the choose command). When the conversation ends, we direct the next line to END_DIALOG, which is defined as line 0.

The flow of a conversation is stored in arrays and maps of SpeechCode objects. A SpeechCode contains the line number of the source string, the line number of the target string, and an opcode. When drawing a message, we look up the String in a table using the source key; when the player clicks to continue, we look up the next SpeechCode using the target key and adjust the speaker according to the opcode.
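
A rough sketch of that lookup cycle is below. The opcode names and the Conversation wrapper are my guesses based on the commands described above, not the actual SpeechCode implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Each node ties a source string line to a target line, plus an opcode
// describing how the speaker changes. Opcode names are guesses.
class SpeechCode {
    enum Opcode { SAME, CHGSPK, CHOOSE }

    final int source;  // line number of the string to display
    final int target;  // line number keying the next SpeechCode
    final Opcode op;

    SpeechCode(int source, int target, Opcode op) {
        this.source = source;
        this.target = target;
        this.op = op;
    }
}

class Conversation {
    static final int END_DIALOG = 0;  // string line 0 ends the conversation

    final Map<Integer, SpeechCode> flow = new HashMap<>();  // keyed by source line
    final Map<Integer, String> strings = new HashMap<>();   // the String data dump

    // Text to draw for the current node.
    String textOf(SpeechCode code) {
        return strings.get(code.source);
    }

    // When the player clicks to continue: follow the target key.
    // Returns null when the dialog is over.
    SpeechCode advance(SpeechCode current) {
        if (current.target == END_DIALOG) return null;
        return flow.get(current.target);
    }
}
```
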

Hopefully this is enough high level documentation for the project. I don’t think I’ll be working on this game again.