Splitting utility costs between roommates is NP-Complete

April 5, 2014

Here’s an easy problem.

You live in a house with 4 people. For simplicity, I will call them Andrei, Bai, Darin, and Young. One person pays for electricity, another person pays for gas, another person pays for water, and the last person pays for internet. However, the utilities cost different amounts, and it is agreed that the total cost should be split equally.

It has come to the time to wrap up the bills. After tallying up the receipts, you find that Andrei has paid $650, Bai has paid $240, Darin has paid $190, and Young has paid $120. What transfers do you make to distribute the costs fairly?

Well that’s easy. Add up all the numbers and you find that the group paid $1200 in total. A quarter of that is $300 — that’s the amount each person should pay in the end. If you’ve already paid $240, then the difference, $60, is the amount you have to pay to compensate.

To see this even more clearly, let us define balance as the difference between what you’re supposed to pay and what you actually paid. From now on, I will use a negative balance to mean you paid more than you were supposed to and are owed money; a positive balance means you owe money to others.
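If you like, here’s the same bookkeeping as a tiny Java sketch (the names and amounts are the ones from the example above; the class name is just for illustration):

```java
public class Balances {
    public static void main(String[] args) {
        String[] names = {"Andrei", "Bai", "Darin", "Young"};
        int[] paid = {650, 240, 190, 120};

        int total = 0;
        for (int p : paid) total += p;          // $1200 in total
        int share = total / paid.length;        // $300 each

        // balance = share - paid; negative means you are owed money
        for (int i = 0; i < names.length; i++) {
            System.out.println(names[i] + ": " + (share - paid[i]));
        }
        // prints Andrei: -350, Bai: 60, Darin: 110, Young: 180
    }
}
```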

In this case, it’s obvious how to balance the bills. Since Andrei is the only person with a negative balance, everyone simply transfers the correct sum of money to Andrei, problem solved.

But in general…

Being a computer science major, this left me wondering: what if I lived with 20 people? And what if, throughout the term, we lend each other money, so that multiple people have a negative balance, and multiple people have a positive balance? How do we solve this problem then?

For simplicity, from now on I will assume the preliminary calculations have been done, and we will work solely with the balance column. I will also assume that all values are integers.

One immediate observation is that the balances always add up to 0. So given a list of integers that add up to 0, how do we find an efficient set of transfers to balance the bill?

What do we mean by efficient? Well, let’s explore several possibilities.

Roommate Problem, version 1

Given a list of balances that add up to 0, find the smallest number of transfers to balance the bill.

This seems at first glance to be the criterion we’re looking for. Writing cheques is a hassle, so we don’t want to write more than what is absolutely necessary.

But if you think about it, there’s a really cheap way to solve this problem:

Sort the list. The person with the highest balance hands everything to the person with the next highest balance, who then does the same to the next person down, and so on: n-1 transfers in total.
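Here’s that trick as a rough Java sketch, assuming the balances already sum to 0 (the printed labels are positions in the sorted list, not actual people):

```java
import java.util.Arrays;

// Cheap answer to version 1: sort the balances, then cascade all the money
// down the line. This always takes at most n-1 transfers.
public class CascadeTransfers {
    public static void cascade(int[] balances) {
        int[] b = balances.clone();
        Arrays.sort(b);                       // ascending: most negative first
        for (int i = b.length - 1; i > 0; i--) {
            // person i hands their entire accumulated balance to person i-1
            System.out.println("transfer $" + b[i] + " from #" + i + " to #" + (i - 1));
            b[i - 1] += b[i];
            b[i] = 0;
        }
        // b[0] ends up at 0, because the balances summed to 0
    }

    public static void main(String[] args) {
        cascade(new int[]{-350, 60, 110, 180});
    }
}
```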

Somehow this doesn’t feel very satisfying. If there are a lot of people, the people in the middle are going to be handling enormous amounts of money. Let’s try again.

Roommate Problem, version 2

Given a list of balances that add up to 0, minimize the total money transferred to balance the bill.

Perhaps what we really want is to minimize the money transferred? Maybe the bank charges $0.01 for each $1 you transfer?

Unfortunately, this problem can also be solved in a cheap way:

We don’t care how many transfers we make, so let’s just transfer $1 at a time! As long as we always transfer from a positive balance to a negative one, it doesn’t matter how we do it: we always end up transferring the same fixed amount of money in total, namely the sum of all the positive balances. Let’s try again.

Roommate Problem, version 3

Given a list of balances that add up to 0, find the smallest set of transfers to balance the bill, with the limitation that transfers are only allowed from a positive to a negative balance.

This captures our intuition that a person should either be transferring money or receiving money, not both.

Version 3 doesn’t fall immediately to a cheap trick like its two predecessors. Instances of this problem can get pretty tricky at times — here are some examples of optimal solutions:

I couldn’t come up with an efficient algorithm to solve this problem. The best I could come up with was a greedy algorithm:

Assume the input is [-8,-4,5,7]. On each step, look for the number with the smallest absolute value (here, -4). Without loss of generality, assume this number is negative. Then ‘zero’ it by cancelling it against the smallest number on the other side — so transfer $4 from the person with balance 5 to the person with balance -4, giving us [-8,1,7]. Repeat until all numbers are zero.
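Here’s a rough Java sketch of that greedy heuristic, just to pin the idea down (the money always flows from the positive balance to the negative one):

```java
import java.util.ArrayList;
import java.util.List;

// Greedy heuristic for version 3: repeatedly zero out the balance with the
// smallest absolute value by cancelling it against the smallest balance of
// the opposite sign. Uses at most M+N-1 transfers.
public class GreedyRoommate {
    public static void settle(List<Integer> balances) {
        List<Integer> b = new ArrayList<>(balances);
        while (true) {
            int to = -1, from = -1;
            // 'to' = nonzero balance with the smallest absolute value
            for (int i = 0; i < b.size(); i++) {
                if (b.get(i) != 0 && (to == -1 || Math.abs(b.get(i)) < Math.abs(b.get(to)))) to = i;
            }
            if (to == -1) break;              // everything is zero, we're done
            // 'from' = smallest balance of the opposite sign
            for (int i = 0; i < b.size(); i++) {
                if (b.get(i) != 0 && Integer.signum(b.get(i)) != Integer.signum(b.get(to))
                        && (from == -1 || Math.abs(b.get(i)) < Math.abs(b.get(from)))) from = i;
            }
            if (from == -1) break;            // can't happen if the balances sum to 0
            int amount = Math.abs(b.get(to)); // fully cancels the smaller of the two
            System.out.println("transfer $" + amount + " between positions " + from + " and " + to);
            b.set(from, b.get(from) + b.get(to));
            b.set(to, 0);
        }
    }

    public static void main(String[] args) {
        settle(List.of(-8, -4, 5, 7));        // 3 transfers for this example
    }
}
```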

How bad is this algorithm? Let’s say there are M negative numbers and N positive numbers. Then this algorithm requires at most M+N-1 transfers, since each step zeroes at least one number, and the last step zeroes two numbers.

The optimal solution takes at least max(M,N) transfers: each transfer touches exactly one positive and one negative balance, and every nonzero balance must be touched at least once, so we need at least M transfers and at least N. Since M+N-1 is less than 2·max(M,N), my greedy algorithm never takes more than 2 times the optimal number of transfers. Not too bad, but not great either.

Unable to progress any further, I asked around in the TopCoder forums. Surprisingly, I got an answer that hinted the problem was impossible to solve efficiently — it is NP-Complete!

NP-Complete by Reduction from SUBSET-SUM

To prove a problem can be solved efficiently, you simply describe an algorithm that solves the problem, then prove this algorithm is efficient. But how do you prove a problem cannot be solved efficiently?

There are certain problems in computer science that are known to be hard: one of them is the Subset Sum problem. Given a set of positive integers and a positive integer N, is it possible to find a subset that sums to exactly N? Return YES if this is possible, or NO otherwise.

For example, say our set is {3,5,7,8,11}. Can we make 16? The answer is YES, because 5+11=16. Can we make 17? The answer is NO – if you check all the possibilities, you discover that no subset sums to exactly 17.

We can leverage the fact that the Subset Sum problem is hard using a proof by contradiction. Assume that there exists some efficient algorithm to solve the Roommate problem. In the diagram, I symbolize it with a black box.

Assume there is also a converter routine: an easy way to convert an input for the Subset Sum problem into an input for the Roommate problem. I’ll get to the details of this converter shortly; right now, assume it exists.

Then combining the Roommate solver with the converter, we have created a Subset Sum solver! If the Roommate solver is efficient, then this Subset Sum solver is also efficient. But we know that no efficient Subset Sum solver exists. Ergo, no efficient Roommate solver exists either.

The only missing piece is to reduce an instance of the Subset Sum problem to an input to the Roommate problem.

Here’s how. For each number in your set, create a roommate with that number as a positive balance. Then create a roommate with a balance of -N (the number you’re trying to sum up to). Finally, create one last roommate with balance -(S-N), where S is the sum of the whole set, so that all the balances sum to 0.
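The converter is only a few lines of code. Here’s a sketch (the class and method names are made up for illustration):

```java
// Turn a Subset Sum instance (set, N) into a list of roommate balances.
public class SubsetSumToRoommate {
    public static int[] convert(int[] set, int n) {
        int sum = 0;
        for (int x : set) sum += x;

        int[] balances = new int[set.length + 2];
        // one roommate per element of the set, as a positive balance
        System.arraycopy(set, 0, balances, 0, set.length);
        // one roommate who is owed exactly N
        balances[set.length] = -n;
        // one final roommate soaking up the rest, so everything sums to 0
        balances[set.length + 1] = -(sum - n);
        return balances;
    }

    public static void main(String[] args) {
        int[] balances = convert(new int[]{3, 5, 7, 8, 11}, 16);
        System.out.println(java.util.Arrays.toString(balances));
        // prints [3, 5, 7, 8, 11, -16, -18]
    }
}
```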

Here’s the input for {3,5,7,8,11} and N=16:

There are 5 numbers in the set, and the Roommate solver finds a solution requiring 5 transfers.

By contrast, here’s the input for {3,5,7,8,11} and N=17:

The Roommate solver can’t do better than 6 transfers.

So to solve the Subset Sum problem, plug it into the Roommate solver and see how many transfers it outputs. If it outputs exactly 1 transfer for every element in your set, then output YES. Otherwise, if there are more transfers than elements in your set, output NO.

This proves that the Roommate problem is at least as hard as Subset Sum, so it’s NP-Complete.

Research in Existing Literature and Application to Biology

While researching for this blog post, I came upon this research paper titled “On the Minimum Common Integer Partition Problem” published in 2006 by Xin Cheng, Lan Liu, Zheng Liu, and Tao Jiang.

They investigate a problem they call Minimum Common Integer Partition (MCIP). Given two lists of integers, say [4,8] and [5,7], find the smallest common partition — in this case, [3,4,5].

Compare this to the Roommate problem with input [-4,-8,5,7], and it’s clear that the Roommate problem is identical to 2-MCIP. (The 2 just means we’re finding the smallest common partition between 2 lists; the paper also investigates finding it between more than 2 lists.)

Skimming through this paper, I found that it derives an algorithm similar to my greedy algorithm, which approximates the problem within a factor of 2. Using more complicated techniques, it manages to produce an algorithm with a 5/4 approximation ratio.

Doing a bit more searching, it turns out that a more recent paper by David Woodruff reduces the approximation ratio for 2-MCIP down to 1.228; an even better paper reduces it down to 1.125 using network flow techniques. At this point, I think I’m way too sidetracked from the original problem, so I didn’t investigate the details.

What surprised me more was that this research was motivated not by roommates sharing utilities, but by biologists studying genome sequences! Biology is not my area of expertise, so I won’t comment further on that. But I’ll leave you these slides (taken from a presentation by the above-mentioned David Woodruff):

So in short, we can’t solve the Roommate problem perfectly, but with cutting-edge algorithms, we can guarantee ourselves to be off by no more than 12.5%!


A Simple Shorthand Musical Notation

March 23, 2014

Anyone who’s played piano, or any other musical instrument, would be familiar with the “standard” musical notation. It’s clear, unambiguous, accepted worldwide, and has been basically unchanged since Bach. It looks like this:

Now there’s a reason this notation has survived this long — it’s good. It’s easy to read, and allows a musician to read and play a piece he’s never heard before.

But when you try to write music, you find that the notation is actually quite cumbersome to write. The notes are positioned on groups of 5 lines, so you’d better either have sheets of these lines printed, or be prepared to tediously draw these lines with a ruler. The timing of notes is very precise, so if you slightly exceed the allowed time for a bar, sorry, your notation is not valid anymore.

Principles of Shorthand Notation

To solve these frustrations, I created an alternate system of recording music, with the primary goal of being easy to write. It’s possible to jot down a melody in 30 seconds, with just a pencil and normal (not printed sheet) paper.

I do not claim my notation to be better than the standard notation. Rather, I achieve a different goal, sacrificing information for the ease of writing.

Standard notation is good for recording a song so that a musician can play it without having heard it before.

My notation is good for reminding a musician how to play a song he has heard before.

A common use case would be reminding yourself of the notes of a song you’re playing, or accompanying a recording of the song. In a way, its purpose is similar to that of guitar tablature.

Here’s my justification for doing this. Most people can produce rhythm intuitively — that is, after hearing a passage a few times, they can clap back the rhythm. It’s much more difficult to find the correct notes after hearing the passage — I stumble upon them by trial and error.

So if you write down the notes but leave out the rhythm, it would often be enough information to play the song.

The tradeoff should become clear if you compare the same passage written side by side (from Bach’s Minuet in G Major):

Rules of Writing Shorthand Notation

Start by writing the notes in a line, and separate bars with a vertical | line. Indicate the key signature at the beginning of the page, if needed. Feel free to liberally clump notes together or space them apart based on rhythm.

Next is the rule for jumps. When the melody goes upwards by a perfect fourth or more (like from C->F), write the jumped note on an elevated line.

Remain on the elevated line as long as the melody is still increasing or stays the same. But as soon as the melody descends, immediately drop back down to the neutral line.

Here’s an example:

As long as the melody consists of small intervals (like C->E->C), we stay on the neutral line. Only when the jump is large (C->F) do we go to the elevated line.

Typically in music, a large jump in one direction is followed by a small step backwards. This means that we spend most of our time on the neutral line. It’s very rare for a melody to have multiple jumps in the same direction.

Here’s another example (Twinkle twinkle little star):

The melody does a large jump on the third note (C->G), so the third note (G) is on the elevated line. On the seventh note, the melody descends one note from A->G, so we immediately drop back to the neutral line. It does not matter that the same G was on the elevated line before.
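If it helps, here’s my attempt to pin the jump rule down as a small Java sketch. It only handles the neutral and elevated lines (the depressed line would work symmetrically), and it assumes pitches are given as MIDI note numbers, so a perfect fourth is 5 semitones:

```java
// Assigns each note to a line: 0 = neutral, 1 = elevated.
public class JumpRule {
    public static int[] assignLines(int[] pitches) {
        int[] line = new int[pitches.length];
        int current = 0;                            // start on the neutral line
        for (int i = 0; i < pitches.length; i++) {
            if (i > 0) {
                int interval = pitches[i] - pitches[i - 1];
                if (interval >= 5) current = 1;     // jump of a fourth or more: elevate
                else if (interval < 0) current = 0; // any descent: drop straight back down
                // small upward steps and repeated notes: stay where we are
            }
            line[i] = current;
        }
        return line;
    }

    public static void main(String[] args) {
        // Twinkle twinkle little star: C C G G A A G
        int[] twinkle = {60, 60, 67, 67, 69, 69, 67};
        System.out.println(java.util.Arrays.toString(assignLines(twinkle)));
        // prints [0, 0, 1, 1, 1, 1, 0]: elevated from the jump to G until the melody descends
    }
}
```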

You do not always have to start on the neutral line. It might be useful to start on an elevated or depressed line. Here’s an example (Harry Potter):

Reasoning behind the Jump Rule

You might be wondering, why make this jump rule so complicated? Why have a jump rule at all?

Well, we need some way of indicating octaves. Otherwise, an interval like C->F would be ambiguous: are we going up a perfect fourth, or down a perfect fifth?

On the other hand, if we lowered the jump threshold so that, say, a major third (C->E) counted as a jump, then the melody would be littered with jumps up and down, which would be a nightmare to handle. Setting the threshold at the perfect fourth strikes a good balance.

The complexity of the jump rule ensures that when you’re on the elevated line, the melody really is going upwards. It would be confusing to the reader if we ever returned from the elevated line down to the neutral line while the melody was going upwards!

Another distinct alternative to the jump rule is to divide all the notes into fixed octaves: for instance, put any note between C4 (middle C) and C5 on the neutral line, everything between C5 and C6 on the elevated line, and so on. I experimented with this, but found it very awkward when the melody straddles the boundary between two octaves.

And that’s how the jump rule was created. So please experiment with this system, see if you like it!


Simple experimentation with jQuery

December 31, 2013

This term, I got hired for a co-op internship at a small software company in Kitchener.

The job posting required primarily Java programming, but the company uses a combination of Java (for the back end) and Javascript (for the front end). I did not have much experience with Javascript and web programming, so they asked me to learn jQuery and Ajax, and a bunch of other things.

After a few days of playing with jQuery, this is what I came up with:

It’s a “Trivial Collatz Simulator”. The user types in a number, and the program simulates the Collatz procedure (with animations!) until we reach 1.

The program is written using jQuery. On each iteration, it uses Ajax to query a local server (written in PHP), to do the arithmetic and return the next number in the sequence. That’s about it.


Hall’s Marriage Theorem explained intuitively

December 21, 2013

Imagine that you have 4 students looking for a job, and 4 positions available to fill. Not all students are equal — some are smarter than others. So the companies want to hire only the smartest students.

(Students are happy with any job they can get)

In this diagram, a bipartite graph, the students are at the top and the companies are at the bottom. A student and a company are connected if the company wants to hire the student. For example, Costco will hire any student, so Costco is connected to Andrei, Bill, Corki, and Danny.

Hall’s Theorem, formally

Hall’s Theorem tells us when we can have the perfect matching:

Suppose G is a bipartite graph with bipartition (A,B). There is a matching that covers A if and only if for every subset X \subseteq A, |N(X)| \geq |X|, where N(X) is the set of neighbors of X.

Huh what?

Hall’s Theorem, intuitively

If you look closely at the diagram, you’ll notice that it doesn’t quite work:

Both Blizzard and Google want to hire Corki and only Corki. But Corki can only work for one company! So the whole thing collapses; the matching fails.

Let’s rewrite Hall’s condition in the context of students and jobs:

For a set of n companies, let m be the number of students that at least one of these companies wants. If m \geq n for every set of companies, then a matching is possible. Otherwise, the matching fails.

Here, the set {Blizzard, Google} consists of 2 companies, but only one student, Corki, is wanted by either company. Since 1 < 2, the matching fails.
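If you want to check Hall’s condition mechanically on a small example like this, brute force over all subsets of companies is enough. Here’s a sketch in Java; students are bits in a bitmask, and I’ve only included the three companies whose preferences the example spells out:

```java
// Brute-force check of Hall's condition: for every subset of companies, the
// union of students they want must be at least as large as the subset.
// Exponential in the number of companies, which is fine for tiny examples.
public class HallCheck {
    public static boolean hallConditionHolds(int[] wants) {
        int n = wants.length;
        for (int subset = 1; subset < (1 << n); subset++) {
            int companies = Integer.bitCount(subset);
            int students = 0;                        // union of wanted students
            for (int i = 0; i < n; i++) {
                if ((subset & (1 << i)) != 0) students |= wants[i];
            }
            if (Integer.bitCount(students) < companies) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // students: Andrei = bit 0, Bill = bit 1, Corki = bit 2, Danny = bit 3
        int costco = 0b1111;                         // Costco will hire anyone
        int blizzard = 0b0100, google = 0b0100;      // both want only Corki
        System.out.println(hallConditionHolds(new int[]{costco, blizzard, google}));
        // prints false, because {Blizzard, Google} only covers one student
    }
}
```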

Suppose we tell this to Blizzard’s hiring manager, who decides he’ll hire Andrei instead:

Then the matching is successful and every student gets a job. Yay!

Notice that in this example, there are 4 students and 4 jobs. In general, these numbers don’t need to be equal. If we have 10 students and 4 jobs, and we want to fill every job, we can still use Hall’s Theorem. (of course, not every student will get a job)

I like this theorem because it seems so simple. The matching can fail in an obvious way. But if it doesn’t fail in this obvious way, then there’s no way it can fail in a less obvious way — it can’t fail at all.

Application: Putnam 2012 Problem B3

Let’s apply our knowledge to a harder problem. Actually, this problem becomes quite easy if we know to use Hall’s Theorem:

Suppose 2n teams play in a round-robin tournament. Over a period of 2n-1 days, every team plays every other team exactly once. There are no ties.

Show that for each day we can select a winning team, without selecting the same team twice.

Hint: we can view the teams as one half of the bipartite graph, and the days as the other half. A team is connected to a day if it won its match that day.

Solution

That’s the hint. Here’s a more detailed solution.

We want to find a matching that covers all the days. Suppose, for contradiction, that this is impossible.

From Hall’s Theorem, there has to be a set of k days in which fewer than k distinct teams won a game. (I use k here because n is already taken: there are 2n teams.)

Let’s call a team a “loser” if it lost every single game in these k days. Since fewer than k teams won anything on these days, and k \leq 2n-1 < 2n, at least one loser exists:

In a round robin a team plays a different opponent each day, so this poor loser team has lost to k different teams in these k days.

But wait! If it has lost to k teams, then these k teams are winners! Yet we just stated that there are fewer than k winners. Contradiction — QED.


My trip into the world of Android Programming (with my first two apps)

July 19, 2013

Update: you can get these apps on Google Play now: Champions Quiz and Easy Metronome

I’m a newbie Android developer, having started about a month ago. I started by picking up a beginner’s Android book (for dummies!) and seeing how far I could get with it.

My device is a relatively crappy Samsung Galaxy Ace smartphone.

I got the SDK and environment set up and got a “Hello World” running without running into trouble. Then I worked through some example apps from the book, again without too much difficulty.

After that, with a very basic understanding of Activities, Intents, Views, and all that, I deviated from the beaten path, using Google when I needed help (which happened pretty often). I wanted to make something new (not copying someone else’s app idea) but still do things typical apps would do (better chance of Googling for help).

First App: Champions Quiz

This is an app for League of Legends players. In this game, there are more than 110 champions, each having 5 abilities — each with a unique name. This results in about 600 distinct abilities.

The goal of the quiz is to match the correct champion name, given the ability name.

Second App: Easy Metronome

This app is a fully functional, animated metronome. Drag the circle up and down to set the tempo like on a real metronome, and press a button and it goes.

The idea is, if you take a look at the metronome apps on the Google Play Store, they tend to have sliders, buttons, many needless customization options, and advertisements, making the interface feel extremely cluttered given the small screen of the phone.

Instead of dozens of options, the Easy Metronome app brings you a more friendly interface:

My feelings so far on Android Development

There’s the good and the bad.

The good — Android builds on Java, a language I’m highly familiar with, dampening the learning curve for me. There are lots of tutorials for beginners on the web to get you started. At this stage, if you run into a problem, usually someone else has run into the same problem before; I didn’t have to ask any new Stackoverflow questions.

The bad — From the developer’s perspective, the Android tool chain feels buggy and unstable. Perhaps some of these resulted from me doing something stupid, some are annoyances, some are bugs that ideally the developer should never have to deal with. I’ll list a few of these problems, grouping them by where the problem manifests itself.

Problems showing up on the computer:

  • Eclipse can screw up, and when it does, it is not obvious how to fix it. One day, without me changing anything, it suddenly refused to build the critical R.java file. Fixing it took an hour of painful cleaning, rebuilding, importing, and re-importing.
  • Emulators are unusable. They take 15-20 minutes to boot up, and when they do their frame rate is 1-2 fps; they are unresponsive and frequently ignore keyboard input.
  • Ran into an Eclipse bug where Logcat sometimes shows a blank screen. Restarting Eclipse does not fix it. The solution appears to be to use the command-line tool “adb logcat” instead.

Problems showing up on the phone:

  • I could not get the Face Detection API to work, even when using identical images and code that works for other people. (although I understand face detection is hard, so I’m not too upset)
  • Ran into an Android bug where only the first line of text in an Alert Dialog is shown. The solution was confusing (involved switching to a different theme) with no explanation given.
  • Ran into an Android bug where the text color was ignored, but only on some devices and not others. I haven’t bothered to find the solution to this.

Overall, Android programming is a mildly frustrating experience, compared to what I normally work with. It would be much better without constant minor annoyances and crashes / bugs.

What next

I originally wanted to make a bunch of apps and release them for free, but I later realized that Google charges $25 per developer to be able to publish apps. Being very cheap, I didn’t release any of my apps because of this.

I could try charging a small price (like $0.99) for the metronome app — I can’t imagine anyone paying for my league quiz app. Or I might make more apps and at some point release them all for free.


Improving the (physical) Bookmark

June 5, 2013

If you’re an avid reader like me, you might have experienced this frustration with bookmarks.

You open up your book to the bookmarked page, but you aren’t sure where on the page you left off. So you go to the beginning of the page and start reading. But soon you realize that you’ve already read this paragraph, and the next…

A minor annoyance, fair enough. But I’d like to share a trick that neatly solves this problem.

Take any bookmark. (This doesn’t work as well if the bookmark has lots of contrasting colors)

Draw a line through the bookmark at somewhere around the 2/3 or 3/4 mark. Do this only on one side.

We’re done.

Now every time you stop reading, orienting and aligning the bookmark stores enough information that you can start exactly where you left off the next time you start reading. Examples:

I’m not sure whether I’m the first to come up with this or if it’s common knowledge elsewhere, but this trick has saved me a great deal of time and frustration. Hopefully you will find it useful!


How to Write your own Minesweeper AI

December 23, 2012

A while ago, I wrote a minesweeper AI. I intended to publish a writeup, but due to university and life and exams, I never got around to writing it. But having just finished my Fall term, I have some time to write a decent overview of what I did.

Short 30 second video of the AI in action here:

How to Play Minesweeper

If you’re an experienced minesweeper player, you can probably skip this section. Otherwise, I’ll just give a quick overview of some basic strategies that we can use to solve an easy minesweeper game.

We start with a 10×10 Beginner’s grid, and click on a square in the middle:

We can quickly identify some of the mines. When a 1 has exactly one unopened square around it, we know that square must be a mine.

Let’s go ahead and mark the mines:

Now the next strategy: if a 1 already has a mine marked around it, then we know that all the other squares around the 1 cannot be mines.

So let’s go ahead and click on the squares that we know are not mines:

Keep doing this. In this case, it turns out that these two simple strategies are enough to solve the Beginner’s grid:
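Both strategies come down to one comparison each, once you know which neighbours of a numbered square are flagged and which are still unopened. Here’s a rough Java sketch of them, written for an arbitrary number rather than just a 1, since the same logic applies (the Cell type and the helper signature are made up for illustration):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SimpleRules {
    record Cell(int row, int col) {}

    // 'number' is the digit shown on an opened square; the two lists are its
    // surrounding squares that are still unopened or already flagged as mines.
    static void applySimpleRules(int number,
                                 List<Cell> unopenedNeighbours,
                                 List<Cell> flaggedNeighbours,
                                 Set<Cell> toFlag,
                                 Set<Cell> toClick) {
        int minesStillNeeded = number - flaggedNeighbours.size();
        if (minesStillNeeded > 0 && minesStillNeeded == unopenedNeighbours.size()) {
            toFlag.addAll(unopenedNeighbours);   // strategy 1: they must all be mines
        } else if (minesStillNeeded == 0) {
            toClick.addAll(unopenedNeighbours);  // strategy 2: the number is satisfied, the rest are safe
        }
    }

    public static void main(String[] args) {
        Set<Cell> toFlag = new HashSet<>(), toClick = new HashSet<>();
        // a 1 with a single unopened neighbour and no flags: that neighbour is a mine
        applySimpleRules(1, List.of(new Cell(2, 3)), List.of(), toFlag, toClick);
        System.out.println(toFlag);              // [Cell[row=2, col=3]]
    }
}
```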

Roadmap to an AI

All this seems easy enough. Here’s what we’ll need to do:

  1. Read the board. If we use a screenshot function, we can get a bitmap of all the pixels on the board. We just need to ‘read’ the numbers on the screen. Luckily for us, the numbers tend to have different colors: 1 is blue, 2 is green, 3 is red, and so on.
  2. Compute. Run the calculations, figure out where the mines are. Enough said.
  3. Click the board. This step is easy. In Java, we can use the Robot class in the standard library to send mouse clicks to the screen (a small sketch of steps 1 and 3 follows the list).
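For the curious, here’s a minimal sketch of steps 1 and 3 using java.awt.Robot (the surrounding class and method names are just for illustration):

```java
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;

// Grab a screenshot to read the board from, and send a left click at a
// given screen coordinate.
public class BoardIO {
    private final Robot robot;

    public BoardIO() throws AWTException {
        robot = new Robot();
    }

    public BufferedImage screenshot() {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        return robot.createScreenCapture(screen);
    }

    public void click(int x, int y) {
        robot.mouseMove(x, y);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
    }

    public static void main(String[] args) throws AWTException {
        BoardIO io = new BoardIO();
        BufferedImage shot = io.screenshot();
        System.out.println("captured " + shot.getWidth() + "x" + shot.getHeight() + " pixels");
    }
}
```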

Reading the Field

There’s not a whole lot to this step, so I’m going to skim over it quickly.

At the beginning of the run, while we have a completely empty grid, we invoke a calibration routine – which takes a screenshot and looks for something that looks like a Minesweeper grid. Using heuristics, it determines the location of the grid, the size of a grid square, the dimensions of the board, and things like that.

Now that we know where the squares are, if we want to read a square, we crop a small section of the screenshot and pass it to a detection routine, which looks at a few pixels and figures out what’s in the square.

A few complications came up in the detection routine:

  • The color for the number 1 is very close to the color of an unopened square: both are a dark-blue color. To tell them apart, I compared the ‘variance’ of the patch (how far its pixels deviate from the patch’s average color).
  • The color for 3 is identical to that for 7. Here, I used a simple edge-detection heuristic.

Straightforward Algorithm

The trivially straightforward algorithm is actually good enough to solve the beginner and intermediate versions of the game a good percent of the time. Occasionally, if we’re lucky, it even manages to solve an advanced grid!

When humans play minesweeper, we compete for the fastest possible time to solve a grid of minesweeper. So it doesn’t matter if we lose 20 games for every game we win: only the wins count.

This is clearly a silly metric when we’re a robot that can click as fast as we want to. Instead, we’ll challenge ourselves with a more interesting metric:

Win as many games as possible.

Consider the following scenario:

Using the straightforward method, we seem to be stuck.

Up until now, whenever we mark a square as having a mine or safe, we’ve only had to look at a single 3×3 chunk at a time. This strategy fails us here: the trick is to employ a multisquare algorithm – look at multiple different squares at once.

From the lower 2, we know that one of the two circled squares has a mine, while the other doesn’t. We just don’t know which one has the mine:

Although this doesn’t tell us anything right now, we can combine this information with the next 2: we can deduce that the two yellowed squares are empty:

Let’s click them to be sure.

And voilà. They’re empty. The rest of the puzzle can be solved easily, after we’ve made the deduction that those two squares were empty.

The Tank Solver Algorithm

It’s difficult to make the computer think deductively like we just did. But there is a way to achieve the same results, without deductive thinking.

The idea for the Tank algorithm is to enumerate all possible configurations of mines for a position, and see what’s in common between these configurations.

In the example, there are two possible configurations:

You can check for yourself that no other configuration could work here. We’ve deduced that the one square with a cross must contain a mine, and the three squares shaded white below must not contain a mine:

This works even better than human deduction!

We always try to apply the simple algorithm first, and only if that gets us stuck, then we bring in the Tank algorithm.

To implement the Tank algorithm, we first make a list of border tiles: all the tiles we aren’t sure about, but about which we have some partial information.

Now we have a list of T border tiles. If we consider every possible configuration, there are 2^T of them. With backtracking, this number is cut down enough for the algorithm to be practical, but we can make one important optimization.
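Here’s a rough Java sketch of the enumeration step. I’ve boiled the board down to a list of constraints, one per revealed number, each saying “exactly this many of these border tiles are mines”; the Constraint record and the way the constraints get built are assumptions for illustration, since the real solver derives them from the grid:

```java
import java.util.ArrayList;
import java.util.List;

public class TankSolver {
    record Constraint(int mines, int[] tiles) {}

    // Returns every consistent assignment: solutions.get(k)[t] is true when
    // border tile t holds a mine in the k-th configuration.
    static List<boolean[]> enumerate(int numTiles, List<Constraint> constraints) {
        List<boolean[]> solutions = new ArrayList<>();
        backtrack(new boolean[numTiles], 0, constraints, solutions);
        return solutions;
    }

    private static void backtrack(boolean[] assignment, int index,
                                  List<Constraint> constraints, List<boolean[]> solutions) {
        if (!stillFeasible(assignment, index, constraints)) return;   // prune early
        if (index == assignment.length) {
            solutions.add(assignment.clone());
            return;
        }
        assignment[index] = false;
        backtrack(assignment, index + 1, constraints, solutions);
        assignment[index] = true;
        backtrack(assignment, index + 1, constraints, solutions);
        assignment[index] = false;
    }

    // A partial assignment (tiles 0..index-1 decided) is feasible if no constraint
    // is already violated: not too many mines placed, and still enough undecided
    // tiles left to reach the required count.
    private static boolean stillFeasible(boolean[] assignment, int index, List<Constraint> constraints) {
        for (Constraint c : constraints) {
            int placed = 0, undecided = 0;
            for (int t : c.tiles()) {
                if (t >= index) undecided++;
                else if (assignment[t]) placed++;
            }
            if (placed > c.mines() || placed + undecided < c.mines()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // tiny made-up example: exactly 1 mine among tiles {0,1}, exactly 1 among {1,2}
        List<Constraint> constraints = List.of(
                new Constraint(1, new int[]{0, 1}),
                new Constraint(1, new int[]{1, 2}));
        System.out.println(enumerate(3, constraints).size() + " configurations");  // prints "2 configurations"
    }
}
```

Once we have the list of consistent configurations, any tile that holds a mine in all of them gets flagged, and any tile that is empty in all of them gets clicked.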

The optimization is segregating the border tiles into several disjoint regions:

If you look carefully, whatever happens in the green area has no effect on what happens in the pink area – we can effectively consider them separately.

How much of a speedup do we get? In this case, the green region has 10 tiles, the pink has 7. Taken together, we need to search through 2^{17} combinations. With segregation, we only have 2^{10} + 2^7: about a 100x speedup.

Practically, the optimization brought the algorithm from stopping for several seconds (sometimes minutes) to think, to giving the solution instantly.
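The segregation itself can be a small union-find pass: two border tiles belong to the same region exactly when some revealed number constrains both of them. Here’s a sketch, written as extra methods to drop into the TankSolver class above:

```java
// Assign each border tile a region id; tiles sharing an id get enumerated together.
static int[] segregate(int numTiles, List<Constraint> constraints) {
    int[] parent = new int[numTiles];
    for (int t = 0; t < numTiles; t++) parent[t] = t;
    for (Constraint c : constraints) {
        if (c.tiles().length == 0) continue;
        for (int t : c.tiles()) union(parent, c.tiles()[0], t);  // tiles under one constraint join up
    }
    int[] region = new int[numTiles];
    for (int t = 0; t < numTiles; t++) region[t] = find(parent, t);
    return region;
}

static int find(int[] parent, int x) {
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

static void union(int[] parent, int a, int b) {
    parent[find(parent, a)] = find(parent, b);
}
```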

Probability: Making the Best Guess

Are we done now? Can our AI dutifully solve any minesweeper grid we throw at it, with 100% accuracy?

Unsurprisingly, no:

One of the two squares has a mine. It could be in either, with equal probability. No matter how cleverly we program our AI, we can’t do better than a 50-50 guess. Sorry.

The Tank solver fails here, no surprise. Under exactly what circumstances does the Tank algorithm fail?

If it failed, it means that for every border tile, there exists some configuration in which that tile has a mine, and some configuration in which it is empty. Otherwise the Tank solver would have ‘solved’ that particular tile.

In other words, if it failed, we are forced to guess. But before we put in a random guess, we can do some more analysis, just to make sure that we’re making the best guess we could make.

Try this. What do we do here:

From the 3 in the middle, we know that three of them are mines, as marked. But marking mines doesn’t give us any new information about the grid: in order to gain information, we have to uncover some square. Out of the 13 possible squares to uncover, it’s not at all clear which one is the best.

The Tank solver finds 11 possible configurations. Here they are:

Each of these 11 configurations should be equally likely to be the actual position – so we can assign each square a probability that it contains a mine, by counting in how many of the 11 configurations it contains a mine:

Our best guess would be to click on any of the squares marked ‘2’: in all these cases, we stand an 82% chance of being correct!
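In code, this is just a counting pass over the configurations the Tank solver already produced. One more method for the TankSolver sketch above:

```java
// Fraction of configurations in which each border tile holds a mine;
// the best guess is the tile with the smallest value.
static double[] mineProbabilities(int numTiles, List<boolean[]> solutions) {
    double[] prob = new double[numTiles];
    for (boolean[] config : solutions) {
        for (int t = 0; t < numTiles; t++) {
            if (config[t]) prob[t] += 1.0;
        }
    }
    for (int t = 0; t < numTiles; t++) prob[t] /= solutions.size();
    return prob;
}
```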

Two Endgame Tactics

Up until now, we haven’t utilized this guy:

The mine counter. Normally, this information isn’t of much use to us, but in many endgame cases it saves us from guessing.

For example:

Here, we would have a 50-50 guess, where two possibilities are equally likely.

But what if the mine counter reads 1? The 2-mine configuration is eliminated, leaving just one possibility. We can safely open the three tiles on the perimeter.

Now on to our final tactic.

So far we have assumed that we only have information on a tile if there’s a number next to it. For the most part, that’s true. If you pick a tile in some distant unexplored corner, who knows if there’s a mine there?

Exceptions can arise in the endgame:

The mine counter reads 2. Each of the two circled regions gives us a 50-50 chance – and the Tank algorithm stops here.

Of course, the middle square is safe!

To modify the algorithm to handle these cases: when there aren’t many tiles left, run the recursion over all the remaining tiles, not just the border tiles.

The two tricks here share the property that they rely on the mine counter. Reading the mine counter off the screen, however, is a non-trivial task that I won’t attempt; instead, the total number of mines in the grid is hard-coded into the program, which keeps track of the mines remaining internally.

Conclusion, Results, and Source Code

At this point, I’m convinced that there isn’t much more we could do to improve the win rate. The algorithm uses every last piece of information available, and only fails when it’s provably certain that guessing is needed.

How well does it work? We’ll use the success rate for the advanced grid as a benchmark.

  • The naïve algorithm could not solve it, unless we get very lucky.
  • Tank Solver with probabilistic guessing solves it about 20% of the time.
  • Adding the two endgame tricks bumps it up to a 50% success rate.

Here’s proof:

I’m done for now; the source code for the project is available on Github if anyone is inclined to look at it / tinker with it:

https://github.com/luckytoilet/MSolver

