## Beginner’s comparison of Computer Algebra Systems (Mathematica / Maxima / Maple)

August 11, 2014

I’ve never been very good at doing manual computations, and whenever I need to do a tedious computation for an assignment, I like to automate it by writing a computer program. Usually I’d implement an ad-hoc solution in Haskell, either using a simple library or rolling my own implementation if the library didn’t cover what I needed. But I found this approach unsatisfactory: my Haskell programs worked with integers and floating-point numbers, and I couldn’t easily generalize them to work with symbolic expressions. So I set out to learn a CAS (computer algebra system), so that in the future I won’t have to hack together buggy code for common math operations.

I have no experience with symbolic computing, so it wasn’t clear to me where to begin. To start off, there are many different competing computer algebra systems, all incompatible with each other, and it’s far from clear which one is best for my needs. I began to experiment with several systems, but after a few days I still couldn’t decide which one was the winner.

I narrowed it down to 3 platforms. Here’s my setup (all running on Windows 7):

• Mathematica 8.0
• Maxima 5.32 with wxMaxima 13.04
• Maple 18.00

So I came up with a trial: I picked a short (but nontrivial) problem representative of the type of problem I’d be looking at, and I would try to solve it in all 3 languages to determine which one was easiest to work with.

### The Problem

This problem came up as a part of a recent linear algebra assignment.

Let the field be $\mathbb{Z}_5$ (so all operations are taken modulo 5). Find all 2×2 matrices $P$ such that

$P^T \left( \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array} \right) P = I$

We can break this problem into several steps:

• Enumerate all lists of length 4 with values from 0 to 4, that is, [[0,0,0,0],[0,0,0,1],…,[4,4,4,4]]. We will probably do this with a cartesian product or list comprehension.
• Figure out how to convert a list into a 2×2 matrix form that the system can perform matrix operations on. For example, [1,2,3,4] might become matrix([1,2],[3,4])
• Figure out how to do control flow, either by looping over a list (procedural) or with a map and filter (functional)
• Finally, multiply the matrices modulo 5 and check if it equals the identity matrix, and output.

This problem encompasses a lot of the challenges I have with CAS software: it requires mathematical functions (in this case, only matrix multiplication and transpose), yet at the same time expresses a nontrivial control flow. There are 5^4 = 625 matrices to check, so performance is not a concern; I am focusing on ease of use.
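As a baseline outside any of the three systems under test, the same search fits in a few lines of ordinary Python (my own sketch, not part of the original comparison):

```python
from itertools import product

def mat_mul(A, B, p=5):
    """Multiply two 2x2 matrices with entries reduced mod p."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

MIDDLE = [[2, 0], [0, 3]]
IDENTITY = [[1, 0], [0, 1]]

# Enumerate all 5^4 = 625 candidate matrices and keep those with
# P^T * MIDDLE * P = I (mod 5).
solutions = []
for a, b, c, d in product(range(5), repeat=4):
    P = [[a, b], [c, d]]
    if mat_mul(mat_mul(transpose(P), MIDDLE), P) == IDENTITY:
        solutions.append(P)
```

Running this search reproduces the count of 8 matrices quoted below.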

For reference, here is the answer to this problem:

These are the 8 matrices that satisfy the desired property.

I have no prior experience in programming in any of the 3 languages, and I will try to solve this problem with the most straightforward way possible with each of the languages. I realize that my solutions will probably be redundant and inefficient because of my inexperience, but it will balance out in the end because I’m equally inexperienced in all of the languages.

### Mathematica

I started with Mathematica, a proprietary system by Wolfram Research and the engine behind Wolfram Alpha. Mathematica is probably the most powerful out of the three, with capabilities for working with data well beyond what I’d expect from a CAS.

What I found most jarring about Mathematica is its syntax. I’ve worked with multiple procedural and functional languages before, and there are certain things that Mathematica simply does differently from everybody else. Here are a few I ran across:

• To use a pure function (equivalent of a lambda expression), you refer to the argument as #, and the function must end with the & character
• The preferred shorthand for Map is /@ (although you can write the longhand Map)
• To create a cartesian product of a list with itself n times, the function is called Tuples, which I found pretty counterintuitive

Initially I wanted to convert my flat list into a nested list by pattern matching, Haskell-style, i.e. f [a,b,c,d] = [[a,b],[c,d]], but I wasn’t sure how to do that, or whether the language supports pattern matching on lists at all. However, I ran across Partition[xs, 2], which does the job, so I went with that.

Despite the language oddities, the functions are very well documented, so I was able to complete the task fairly quickly. The UI is fairly streamlined and intuitive, so I’m happy with that. I still can’t wrap my head around the syntax — I would like it more if it behaved more like traditional languages — but I suppose I’ll get the hang of it after a while.

Here’s the program I came up with:

```mathematica
SearchSpaceLists := Tuples[Range[0, 4], 4]
SearchSpaceMatrices :=
  Map[Function[xs, Partition[xs, 2]], SearchSpaceLists]
Middle := {{2, 0}, {0, 3}}
FilteredMatrices :=
  Select[SearchSpaceMatrices,
    Mod[Transpose[#].Middle.#, 5] == IdentityMatrix[2] &]
MatrixForm[#] & /@ FilteredMatrices
```


### Maxima

Maxima is a lightweight, open source alternative to Mathematica; I’ve had friends recommend it as being small and easy to use.

The syntax for Maxima is more natural, with things like lists and loops and lambda functions working more or less the way I expect. However, whenever I tried to do something with a function that isn’t the most common use case, I found the documentation lacking and often ended up combing through old forum posts.

Initially I tried to generate a list with a cartesian product like in my Mathematica version, but I couldn’t figure out how to do that; eventually I gave up and used 4 nested for loops, since those were better documented.

Another thing I had difficulty with was transforming a nested list into a matrix using the matrix command. Normally you create a matrix with matrix([1,2],[3,4]), that is, by passing each row as a separate argument. The function doesn’t handle a single nested list like matrix([[1,2],[3,4]]), so to get around that you need funmake, which builds the call from a list of arguments: funmake('matrix, [[1,2],[3,4]]).

Overall I found that the lack of documentation made the system frustrating to work with. I would however use it for simpler computations that fall under the common use cases — these are usually intuitive in Maxima.

Here’s the program I came up with:

```maxima
Middle: matrix([2,0],[0,3]);
Ident: identfor(Middle);
for a:0 thru 4 do
  for b:0 thru 4 do
    for c:0 thru 4 do
      for d:0 thru 4 do (
        P: funmake('matrix, [[a,b],[c,d]]),
        P2: transpose(P) . Middle . P,
        if matrixmap(lambda([x], mod(x,5)), P2) = Ident then
          print(P));
```


Shortly after writing this, I realized I didn’t actually need funmake: there was no need to generate a nested list in the first place, since I could simply write matrix([a,b],[c,d]). Oh well, the point still stands.

### Maple

Maple is a proprietary system developed by Maplesoft, a company based in Waterloo. Being a Waterloo student, I’ve had some contact with Maple: professors used it for demonstrations, some classes used it for grading. Hence I felt compelled to give Maple a shot.

At first I was pleasantly surprised that matrix multiplication in a finite field was easy — the code to calculate A*B in $\mathbb{Z}_5$ is simply A.B mod 5. But everything went downhill after that.

The UI for Maple feels very clunky. Some problems I encountered:

• It’s not clear how to halt a computation that’s stuck in an infinite loop. It doesn’t seem to be possible within the UI, and the documentation suggests it’s not possible in all cases (it recommends manually terminating the process). Of course, this loses all unsaved work, so I quickly learned to save before every computation.
• I can’t figure out how to delete a cell without googling it. It turns out you have to select your cell and a portion of the previous cell, then hit Del.
• Copying and pasting doesn’t work as expected. When I tried to copy code written inside Maple to a text file, all the internal formatting and syntax highlighting information came with it.
• Not a UI issue, but error reporting is poor. For example, the = operator works for integers, but when applied to matrices, it silently returns false. You have to use Equal(a, b) to compare matrices (reminiscent of Java’s equals).

In the end, I managed to complete the task but the poor UI made the whole process fairly unpleasant. I don’t really see myself using Maple in the future; if I had to, I would try the command line.

Here’s the program I came up with:

```maple
with(LinearAlgebra):
with(combinat, cartprod):
L := [seq(0..4)]:
T := cartprod([L, L, L, L]):
Middle := <2,0;0,3>:
while not T[finished] do
  pre_matrix := T[nextvalue]();
  matr := Matrix(2, 2, pre_matrix);
  if Equal(Transpose(matr).Middle.matr mod 5, IdentityMatrix(2)) then
    print(matr);
  end if;
end do:
```


### Conclusion

After the brief trial, there is still no clear winner, but I have enough data to form some personal opinions:

• Mathematica is powerful and complete, but has a quirky syntax. It has the most potential — definitely the one I would go with if I were to invest more time into learning a CAS.
• Maxima is lightweight and fairly straightforward, but because of the lack of documentation, it might not be the best tool for complicated tasks. I would keep it for simpler calculations though.
• Maple may or may not be powerful compared to the other two, I don’t know enough to compare it. But its UI is clearly worse and it would take a lot to compensate for that.

## Splitting utility costs between roommates is NP-Complete

April 5, 2014

Here’s an easy problem.

You live in a house with 4 people. For simplicity, I will call them Andrei, Bai, Darin, and Young. One person pays for electricity, another person pays for gas, another person pays for water, and the last person pays for internet. However, the utilities cost different amounts, and it is agreed that the total cost should be split equally.

It has come to the time to wrap up the bills. After tallying up the receipts, you find that Andrei has paid $650, Bai has paid $240, Darin has paid $190, and Young has paid $120. What transfers do you make to distribute the costs fairly?

Well, that’s easy. Add up all the numbers and you find that the group paid $1200 in total. A quarter of that is $300 — that’s the amount each person should pay in the end. If you’ve already paid $240, then the difference, $60, is the amount you have to pay to compensate.

To see this even more clearly, let us define balance as the difference between what you’re supposed to pay and what you actually paid. From now on, I will use a negative balance to mean you paid more than you were supposed to and are owed money; a positive balance means you owe money to others.

In this case, it’s obvious how to balance the bills. Since Andrei is the only person with a negative balance, everyone simply transfers the correct sum of money to Andrei, problem solved.

### But in general…

Being a computer science major, this left me wondering: what if I lived with 20 people? And what if, throughout the term, we lend each other money, so that multiple people have a negative balance, and multiple people have a positive balance? How do we solve this problem then?

For simplicity, from now on I will assume the preliminary calculations have been done, and we will work solely with the balance column. I will also assume that all values are integers.

One immediate observation is that the balances always add up to 0. So given a list of integers that add up to 0, how do we find an efficient set of transfers to balance the bill?

What do we mean by efficient? Well, let’s explore several possibilities.

### Roommate Problem, version 1

Given a list of balances that add up to 0, find the smallest number of transfers to balance the bill.

This seems at first glance to be the criterion we’re looking for. Writing cheques is a hassle, so we don’t want to write more than what is absolutely necessary.

But if you think about it, there’s a really cheap way to solve this problem:

Sort the list. Starting from the highest number, give all your money to the second highest number, repeat n-1 times.

Somehow this doesn’t feel very satisfying. If there are a lot of people, the people in the middle are going to be handling enormous amounts of money. Let’s try again.

### Roommate Problem, version 2

Given a list of balances that add up to 0, minimize the total money transferred to balance the bill.

Perhaps what we really want is to minimize the money transferred? Maybe the bank charges $0.01 for each $1 you transfer?

Unfortunately, this problem can also be solved in a cheap way:

We don’t care how many transfers we make, so let’s just transfer $1 at a time! As long as we always transfer from positive to negative, it doesn’t matter how we do it, we’re always going to transfer a fixed amount of money. Let’s try again.

### Roommate Problem, version 3

Given a list of balances that add up to 0, find the smallest set of transfers to balance the bill, with the limitation that transfers are only allowed from a positive to a negative balance.

This captures our intuition that a person should either be transferring money or receiving money, not both.

Version 3 doesn’t fall immediately to a cheap trick like its two predecessors. Instances of this problem can get pretty tricky at times — here are some examples of optimal solutions:

I couldn’t come up with an efficient algorithm to solve this problem. The best I could come up with was a greedy algorithm:

Assume the input is [-8,-4,5,7]. On each step, look for the number with the least absolute value (here, -4). Without loss of generality, assume this number is negative. Then ‘zero’ it by cancelling it against the smallest-magnitude number on the other side: transfer $4 from the 5 to the -4, giving us [-8,1,7]. Repeat this until all numbers are zero.
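The greedy procedure can be sketched in a few lines of Python (my own sketch; names like `settle` are mine, not from the post):

```python
def settle(balances):
    """Greedy settlement: repeatedly zero the smallest-magnitude balance
    against the smallest-magnitude balance of the opposite sign.
    Returns a list of (payer index, payee index, amount) transfers."""
    bal = list(balances)  # must sum to zero
    transfers = []
    while any(bal):
        nonzero = [k for k in range(len(bal)) if bal[k] != 0]
        i = min(nonzero, key=lambda k: abs(bal[k]))
        opposite = [k for k in nonzero if bal[k] * bal[i] < 0]
        j = min(opposite, key=lambda k: abs(bal[k]))
        amount = abs(bal[i])
        # Money flows from the positive (owing) side to the negative (owed) side.
        payer, payee = (j, i) if bal[i] < 0 else (i, j)
        transfers.append((payer, payee, amount))
        bal[j] += bal[i]
        bal[i] = 0
    return transfers
```

On the four-roommate example from the start of the post (balances [-350, 60, 110, 180]), it produces exactly the obvious answer: everyone transfers their balance to Andrei.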

How bad is this algorithm? Let’s say there are M negative numbers and N positive numbers. Then this algorithm requires at most M+N-1 transfers, since each step zeroes at least one number, and the last step zeroes two numbers.

The optimal solution takes at least max(M,N) transfers, since every person on the larger side must take part in at least one transfer. This proves that my greedy algorithm never takes more than 2 times the optimal number of transfers. Not too bad, but not great either.

Unable to progress any further, I asked around in the TopCoder forums. Surprisingly, I got an answer that hinted the problem was impossible to solve efficiently — it is NP-Complete!

### NP-Complete by Reduction from SUBSET-SUM

To prove a problem can be solved efficiently, you simply describe an algorithm that solves the problem, then prove this algorithm is efficient. But how do you prove a problem cannot be solved efficiently?

There are certain problems in computer science that are known to be hard: one of them is the Subset Sum problem. Given a set of positive integers and a positive integer N, is it possible to find a subset that sums to exactly N? Return YES if this is possible, or NO otherwise.

For example, say our set is {3,5,7,8,11}. Can we make 16? The answer is YES, because 5+11=16. Can we make 17? The answer is NO — if you check all the possibilities, you discover that no subset sums to exactly 17.

We can leverage the fact that the Subset Sum problem is hard using a proof by contradiction. Assume that there exists some efficient algorithm to solve the Roommate problem. In the diagram, I symbolize it with a black box.

Assume there is also a converter routine: an easy way to convert an input for the Subset Sum problem into an input for the Roommate problem. I’ll get to the details of this converter shortly; right now, assume it exists.

Then combining the Roommate solver with the converter, we have created a Subset Sum solver! If the Roommate solver is efficient, then this Subset Sum solver is also efficient. But we know that no efficient Subset Sum solver exists. Ergo, no efficient Roommate solver exists either.

The only missing piece is to reduce an instance of the Subset Sum problem to an input to the Roommate problem.

Here’s how. For each number in your set, create a roommate with that number as a positive balance. Then create a roommate with a balance of -N, where N is the number you’re trying to sum to. Finally, create one last roommate with exactly the balance needed to make all the numbers sum to 0.
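The converter itself is tiny; here is a sketch in Python (the function name is mine, not from the post):

```python
def subset_sum_to_roommate(numbers, target):
    """Convert a Subset Sum instance (numbers, target) into a Roommate
    instance: one positive balance per element, one balance of -target,
    and one final balance that makes everything sum to zero."""
    return list(numbers) + [-target, target - sum(numbers)]
```

For instance, `subset_sum_to_roommate([3, 5, 7, 8, 11], 16)` gives the balances [3, 5, 7, 8, 11, -16, -18]. If the Roommate solver needs exactly one transfer per element of the set, the Subset Sum answer is YES; any more means NO.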

Here’s the input for {3,5,7,8,11} and N=16:

There are 5 numbers in the set, and the Roommate solver finds a solution requiring 5 transfers.

By contrast, here’s the input for {3,5,7,8,11} and N=17:

The Roommate solver can’t do better than 6 transfers.

So to solve the Subset Sum problem, plug it into the Roommate solver and see how many transfers it outputs. If it outputs exactly 1 transfer for every element in your set, then output YES. Otherwise, if there are more transfers than elements in your set, output NO.

This proves that the Roommate problem is at least as hard as Subset Sum, so it’s NP-Complete.

### Research in Existing Literature and Application to Biology

While researching for this blog post, I came upon a research paper titled “On the Minimum Common Integer Partition Problem”, published in 2006 by Xin Chen, Lan Liu, Zheng Liu, and Tao Jiang.

They investigate a problem they call Minimum Common Integer Partition (MCIP). Given two lists of integers, say [4,8] and [5,7], find the smallest common partition — in this case, [3,4,5].

Compare this to the Roommate problem with input [-4,-8,5,7], and it’s clear that the Roommate problem is identical to 2-MCIP. (The 2 just means we’re finding the smallest common partition of 2 lists; the paper also investigates finding the smallest common partition of more than 2 lists.)

Skimming through this paper, I found that it derives an algorithm similar to my greedy algorithm, which approximates the problem within a factor of 2. Using more complicated techniques, it manages to produce an algorithm with a 5/4 approximation ratio.

Doing a bit more searching, it turns out that a more recent paper by David Woodruff reduces the approximation ratio for 2-MCIP down to 1.228; an even better paper reduces it down to 1.125 using network flow techniques. At this point, I think I’m way too sidetracked from the original problem, so I didn’t investigate the details.

What surprised me more was that this research was motivated not by roommates sharing utilities, but by biologists studying genome sequences! Biology is not my area of expertise, so I won’t comment further on that. But I’ll leave you these slides (taken from a presentation by the above-mentioned David Woodruff):

So in short, we can’t solve the Roommate problem perfectly, but with cutting-edge algorithms, we can guarantee ourselves to be off by no more than 12.5%!

## Simple experimentation with jQuery

December 31, 2013

This term, I got hired for a co-op internship at a small software company in Kitchener.

The job posting required primarily Java programming, but the company uses a combination of Java (for the back end) and Javascript (for the front end). I did not have much experience with Javascript and web programming, so they asked me to learn jQuery and Ajax, and a bunch of other things.

After a few days of playing with jQuery, this is what I came up with:

It’s a “Trivial Collatz Simulator”. The user types in a number, and the program simulates the Collatz procedure (with animations!) until we reach 1.

The program is written using jQuery. On each iteration, it uses Ajax to query a local server (written in PHP), to do the arithmetic and return the next number in the sequence. That’s about it.
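The server-side arithmetic is trivial; the original back end was a small PHP script, but the step it computes looks like this (sketched here in Python):

```python
def collatz_next(n):
    """One step of the Collatz procedure: halve evens, 3n+1 for odds."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_sequence(n):
    """The full sequence from n down to 1, as displayed by the simulator."""
    seq = [n]
    while n != 1:
        n = collatz_next(n)
        seq.append(n)
    return seq
```

So the jQuery front end just repeats: send the current number, receive `collatz_next` of it, animate, and stop at 1.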

## Hall’s Marriage Theorem explained intuitively

December 21, 2013

Imagine that you have 4 students looking for a job, and 4 positions available to fill. Not all students are equal — some are smarter than others. So the companies want to hire only the smartest students.

(Students are happy with any job they can get)

In this diagram, a bipartite graph, the students are at the top and the companies are at the bottom. A student and a company are connected if the company wants to hire the student. For example, Costco will hire any student, so Costco is connected to Andrei, Bill, Corki, and Danny.

### Hall’s Theorem, formally

Hall’s Theorem tells us when we can have the perfect matching:

Suppose $G$ is a bipartite graph with bipartition $(A,B)$. There is a matching that covers $A$ if and only if for every subset $X \subseteq A$, $|N(X)| \geq |X|$, where $N(X)$ is the set of neighbors of $X$.

Huh what?

### Hall’s Theorem, intuitively

If you look closely at the diagram, you’ll notice that it doesn’t quite work:

Both Blizzard and Google want to hire Corki and only Corki. But Corki can only work for one company! So the whole thing collapses; the matching fails.

Let’s rewrite Hall’s condition in the context of students and jobs:

For a set of $n$ companies, let $m$ be the number of students wanted by at least one of these companies. If $m \geq n$ for every set of companies, then a matching is possible. Otherwise, the matching fails.

Here, the set {Blizzard, Google} consists of 2 companies, but only one student, Corki, is wanted by either company. Since 1 < 2, the matching fails.
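This condition is easy to check mechanically. Below is a sketch in Python; the post’s figures carry the full edge list, so only the three companies named in the text are included here, and the example dictionaries are my reconstruction:

```python
from itertools import combinations

def hall_condition(wants):
    """Check Hall's condition for a dict mapping each company to the
    set of students it wants: every set of companies must together
    want at least as many students as there are companies in the set."""
    companies = list(wants)
    for r in range(1, len(companies) + 1):
        for subset in combinations(companies, r):
            wanted = set().union(*(wants[c] for c in subset))
            if len(wanted) < len(subset):
                return False
    return True

# The failing situation: Blizzard and Google both want only Corki.
before = {"Blizzard": {"Corki"},
          "Google": {"Corki"},
          "Costco": {"Andrei", "Bill", "Corki", "Danny"}}

# After Blizzard decides to hire Andrei instead.
after = dict(before, Blizzard={"Andrei"})
```

`hall_condition(before)` is False (the set {Blizzard, Google} wants only 1 student), while `hall_condition(after)` is True.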

Suppose we tell this to Blizzard’s hiring manager, who decides he’ll hire Andrei instead:

Then the matching is successful and every student gets a job. Yay!

Notice that in this example, there are 4 students and 4 jobs. In general, these numbers don’t need to be equal. If we have 10 students and 4 jobs, and we want to fill every job, we can still use Hall’s Theorem. (of course, not every student will get a job)

I like this theorem because it seems so simple. The matching can fail in an obvious way. But if it doesn’t fail in this obvious way, then there’s no way it can fail in a less obvious way — it can’t fail at all.

### Application: Putnam 2012 Problem B3

Let’s apply our knowledge to a harder problem. Actually, this problem becomes quite easy if we know to use Hall’s Theorem:

Suppose $2n$ teams play in a round-robin tournament. Over a period of $2n-1$ days, every team plays every other team exactly once. There are no ties.

Show that for each day we can select a winning team, without selecting the same team twice.

Hint: we can view the teams as one half of the bipartite graph, and the days as the other half. A team is connected to a day if it won its match that day.

### Solution

That’s the hint. Here’s a more detailed solution.

We want to find a matching that covers all the days. Suppose, for contradiction, that this is impossible.

From Hall’s Theorem, there has to be a set of $n$ days on which fewer than $n$ distinct teams won a game.

Let’s call a team a “loser” if it lost every single game in these $n$ days. Since fewer than $n$ of the $2n$ teams won a game on these days, more than $n$ teams are losers, so at least one loser exists:

So this poor loser team has lost to $n$ different teams in these $n$ days.

But wait! If it has lost to $n$ teams, then these $n$ teams are winners! Yet we just stated that there are fewer than $n$ winners. Contradiction — QED.

## Notes on the partial fraction decomposition: why it always works

June 13, 2012

If you’ve taken any intro to Calculus class, you’re probably familiar with partial fraction decomposition.

In case you’re not, the idea is that you’re given some rational function with an awful denominator that you want to integrate, like:

$\frac{4x-2}{(x-2)(x+4)}$

And you break it up into smaller, simpler fractions:

$\frac{1}{x-2} +\frac{3}{x+4}$

This is the idea. If we get into the details, it gets fairly ugly — in a typical calculus textbook, you’ll find a plethora of rules regarding what to do in all sorts of cases: what to do when there are repeated linear factors, quadratic factors, repeated quadratic factors, and so on.

Since the textbooks generously cover this for us, we’ll assume that we know what to do with a rational polynomial with some polynomial as the numerator, and some number of linear or quadratic factors in the denominator. We can do partial fraction decomposition on this. If we like, we could integrate it too. I’m talking about anything of this form:

$\frac{P(x)}{((ax+b)(cx+d) \cdots)((ex^2+fx+g)(hx^2+ix+j) \cdots)}$

Although we won’t prove this, this seems fairly believable. We’ll assume that once we get a fraction into this form, we’re done and we can let existing partial fraction methods take care of the rest.

### Can Partial Fractions Fail?

What if we have a polynomial of degree greater than two in the denominator? So let’s say:

$\frac{1}{x^3+1}$

Fortunately, here the denominator can be factored, giving us a form we can deal with:

$\frac{1}{(x+1)(x^2-x+1)}$

But we were lucky that time. After all, not all polynomials can be factored, right? What if we have this:

$\frac{1}{x^3+5}$

We can’t factor this over the integers. What can we do?

It turns out that this isn’t a huge problem. We never required the coefficients of the factors to be integers! Although the factorization is awkward, it can still be factored:

$\frac{1}{(x + 5^{1/3})(x^2-5^{1/3}x+5^{2/3})}$

Other than making the next step somewhat algebraically tedious, this decomposition is perfectly valid. The coefficients need not be integers, or even be expressed with radicals. As long as every coefficient is real, partial fraction decomposition will work fine.

### Universality of Partial Fractions

The logical next question would be: can all rational functions be written in the previous partial fraction decomposition-suitable form? Looking through my calculus textbooks, none seemed to provide a proof of this — and failing to find a proof on the internet, I’ll give the proof here.

We need to prove that any polynomial that might appear in the denominator of a rational function, say $Q(x)$, can be broken down into linear or quadratic factors with real coefficients.

In order to prove this, we’ll need the following two theorems:

• Fundamental Theorem of Algebra — any polynomial of degree n can be written as a product of n linear complex factors: $Q(x) = (x-z_1) (x-z_2) \cdots (x-z_n)$
• Complex Conjugate Root Theorem — if some complex number $a + bi$ is a root of some polynomial with real coefficients, then its conjugate $a-bi$ is also a root.

Starting with the denominator polynomial $Q(x)$, we break it down using the Fundamental Theorem of Algebra into complex factors. Of these factors, some will be real, while others will be complex.

Consider the complex factors of $Q(x)$. By the complex conjugate root theorem, for every complex factor we have, its conjugate is also a factor. Hence we can take all of the complex factors and pair them up with their conjugates. Why? If we multiply a complex root by its complex conjugate root: $(x-z)(x-\bar{z})$ — we always end up with a quadratic with real coefficients. (you can check this for yourself if you want)
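To spell out that check: writing $z = a + bi$, the product of a root and its conjugate is

$(x-z)(x-\bar{z}) = x^2 - (z + \bar{z})x + z\bar{z} = x^2 - 2ax + (a^2 + b^2)$

and both coefficients, $-2a$ and $a^2 + b^2$, are real.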

We are thus left with real linear factors and pairs of complex factors. The pairs of complex factors multiply to form quadratic polynomials with real coefficients, so we are done.

At least in theory — partial fraction decomposition always works. The problem is just that we relied on the Fundamental Theorem of Algebra to hand us the roots of our polynomial. Often, these roots aren’t simple integers or radicals — often they can’t really be expressed exactly at all. So we should say — partial fraction decomposition always works, if you’re fine with having infinitely long decimals in the decomposed product.

## Minimum quadrilateral inscribed in a square

May 6, 2012

A problem that I’ve seen lately reduces to the following problem:

We have a square, and we put a point on each side of the square. Then we connect the four points to create a quadrilateral. How can we make this quadrilateral have the smallest possible perimeter?

Intuitively, you may believe that this natural, obvious configuration should produce the least perimeter:

### Attempt with Calculus

How can we prove that this indeed gives us the smallest possible perimeter?

A first attempt might be to give variables to the side lengths, and somehow find the minimum perimeter using algebra and calculus tools. So there are four independent points — let’s parameterize them with four variables, and assume the side length of the square is 1:

Then we want to minimize this expression:

$\sqrt{a^2+(1-d)^2} + \sqrt{b^2+(1-a)^2}+ \sqrt{c^2+(1-b)^2}+ \sqrt{d^2+(1-c)^2}$

At this point, it isn’t clear how to proceed — there doesn’t seem to be any way to minimize this expression of four variables.
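Though the algebra stalls, a numeric grid search (a quick sketch of my own, not part of the original post) supports the claim that the minimum is $2\sqrt{2}$, attained when all four points sit at the midpoints:

```python
from itertools import product
from math import sqrt

def perimeter(a, b, c, d):
    """Perimeter of the quadrilateral whose vertices sit on the sides of
    a unit square, parameterized as in the expression above."""
    return (sqrt(a**2 + (1 - d)**2) + sqrt(b**2 + (1 - a)**2)
            + sqrt(c**2 + (1 - b)**2) + sqrt(d**2 + (1 - c)**2))

# Exhaustive search over a coarse grid of the four parameters.
grid = [i / 20 for i in range(21)]
best = min(perimeter(a, b, c, d) for a, b, c, d in product(grid, repeat=4))
```

The grid minimum matches `perimeter(0.5, 0.5, 0.5, 0.5)`, which equals $2\sqrt{2} \approx 2.828$.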

### Proof by Net

We’ll have to try something different. It’s hard to make sense of anything when there are four independent variables. Instead, if we expand things out a bit, things start to become more manageable:

What we did was reflect the square three times, and each time the square is reflected, the inscribed quadrilateral goes with it. By taking only the relevant parts of the quadrilateral, we get the green path.

Now we might have a solution. If we had a different green path, can we reverse the steps and get the original quadrilateral back? Basically, the following requirements have to be met:

• The path has to cross all three of the internal lines BC, BA, and DA.
• The path’s position on the bottom-most line DC must be the same as its reflected position on the top-most line DC.

With these requirements in mind, the shortest green path that satisfies these requirements is a straight line connecting a point on the bottom left to its reflected point on the top right:

Our intuition at the start was well-founded.

Now notice that this isn’t the only possible shortest path. If we move the entire green line to the left or right, we get a different path of the same length!

For instance, the degenerate ‘quadrilateral’ formed by connecting two opposite corners has the same perimeter as the one we get by connecting the midpoints. Neat, huh?

## A CMOQR Problem and why not to Trust Brute Force

March 6, 2012

Recently I was invited to compete in the CMOQR – a qualifier contest for the Canadian Math Olympiad. The contest consisted of eight problems, and contestants were allowed about a week’s time to submit written solutions via email.

After a few days, I was able to solve all of the problems except one — the second part of the seventh problem:

Seven people participate in a tournament, in which each pair of players play one game, and one player is declared the winner and the other the loser. A triplet ABC is considered cyclic if A beats B, B beats C, and C beats A.

Can you always separate the seven players into two rooms, so that neither room contains a cyclic triplet?

(Note: the first half of the problem asked the same question for six people — and it’s not too difficult to prove that no matter what, we can put them into two rooms so that neither the first nor the second room contains a cyclic triplet.)

But what happens when we add another person? Can we still put four people in one room and three in the other, so that neither room contains a cyclic triplet?

There are two possibilities here:

• One, it’s always possible. No matter what combination of wins and losses has occurred, we can always separate the players into two rooms in such a way. To prove this, we’ll need to systematically consider all possible combinations and, one by one, verify that the statement holds for each of the cases.
• Two, it’s not always possible. Then there is some counterexample — some combination of wins and losses so that no matter how we separate them, one of the rooms has a cyclic triplet. This is easier to prove: provided that we have the counterexample, we just have to verify that indeed, this case is a counterexample to the statement.

But there’s a problem: which of the two cases are we in? That is, should we look for a quick solution by counterexample, or try to prove, via some mathematical invariant, that no counterexample can exist?

### Brute Force?

It would be really helpful if we knew for sure whether a counterexample existed. What if we wrote a computer program to check all the cases? After all, there are only 7 people in the problem, hence 7 choose 2 = 21 games played. Since each game is won by one player or the other, there are only 2^21 combinations overall (although that does count some duplicates). And 2^21 is slightly over two million cases to check — completely within the bounds of brute force.

So I coded up a possibility-checker. Generate all 2^21 possible arrangements, then for each one, check all possible ways to separate them into two rooms. If it turns out that no matter how we arrange them, a cyclic triplet persists, then display the counterexample. Simple.

I ran the program. It quickly cycled through every possible arrangement, three seconds later exiting without producing a counterexample.

Alright. So there’s no counterexample. I would have to find some nice mathematical invariant, showing that no matter what, there is always some way to group the players so that neither room has a cyclic triplet.

But no such invariant came. I tried several things, but in each attempt couldn’t quite show that the statement held for every case. I knew that there was no counterexample, but I couldn’t prove it. But why? There must be some tricky way to show that no counterexample existed; whatever it was, I couldn’t find it.

### Brute Force poorly implemented

Reluctantly, as the deadline came and passed, I submitted my set of solutions without solving the problem. When the solutions came out a week later, the solution to this problem did not contain any tricky way to rule out a counterexample. Instead, what I found was this:

Let $A_0 \ldots A_6$ be seven players. Let $A_i$ beat $A_j$ when the difference $i-j \equiv 1,2,4 \mod 7$.

Huh? A counterexample, really? Let’s look at it.

Everything is symmetric — we can ‘cycle’ the players around without changing anything. Also, if we take any four of the seven players, two of them must be consecutive in the cycle. Let them be $A_0$ and $A_1$.

At this point everything falls into place: any subset of four players contains a cyclic triplet. Since one of the two rooms must hold at least four of the seven players, every split leaves a cyclic triplet in some room.
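Both claims are easy to check mechanically. Here is a small Python sketch (my own, with players numbered 0 to 6) verifying that the solution’s tournament really is a counterexample:

```python
from itertools import combinations

def beats(i, j):
    # the solution's tournament: A_i beats A_j exactly when i - j ≡ 1, 2, 4 (mod 7)
    return (i - j) % 7 in (1, 2, 4)

def has_cyclic_triplet(room):
    # a triplet {a, b, c} is cyclic in either of its two orientations
    return any((beats(a, b) and beats(b, c) and beats(c, a)) or
               (beats(a, c) and beats(c, b) and beats(b, a))
               for a, b, c in combinations(room, 3))

# any four players already contain a cyclic triplet...
assert all(has_cyclic_triplet(four) for four in combinations(range(7), 4))

# ...so every split into two rooms traps a cyclic triplet in one of them
for k in range(8):
    for room in combinations(range(7), k):
        rest = [p for p in range(7) if p not in room]
        assert has_cyclic_triplet(room) or has_cyclic_triplet(rest)
```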

But wait … my program had not found any counterexamples! And right here is a counterexample! The culprit was obvious (the reader may have foreseen this by now) — of course, there had to be a problem with my program.

Running my code through a debugger, I found a logic error in the routine converting binary numbers to array configurations, meaning that not all possible configurations were tried. As a result, the counterexample slipped through the hole.

After fixing the code, the program found not one, but a total of 7520 (although not necessarily distinct) counterexamples. Most of them had no elegant structure, but the solution’s configuration was among them.

For the interested, here is the fixed code.
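The fixed code itself isn’t included above; as a stand-in, here is a minimal Python reconstruction of the brute-force idea (my own sketch, not the author’s program), run on the six-player case from the note earlier — the seven-player search is the same loop over 2^21 masks, just a few minutes slower:

```python
from itertools import combinations

def counterexamples(n):
    """Bitmask search over all tournaments on players 0..n-1.
    A tournament is a bitmask over the C(n,2) games; bit k set means
    the lower-numbered player of game k wins.  Returns the tournaments
    for which EVERY split into two rooms leaves a cyclic triplet."""
    pairs = list(combinations(range(n), 2))
    bit = {p: k for k, p in enumerate(pairs)}
    trips = []
    for a, b, c in combinations(range(n), 3):
        games = (1 << bit[(a, b)]) | (1 << bit[(a, c)]) | (1 << bit[(b, c)])
        cyc1 = (1 << bit[(a, b)]) | (1 << bit[(b, c)])  # a->b->c->a
        cyc2 = 1 << bit[(a, c)]                          # a->c->b->a
        trips.append((games, cyc1, cyc2, (1 << a) | (1 << b) | (1 << c)))
    found = []
    for t in range(1 << len(pairs)):
        # player-bitmasks of tournament t's cyclic triplets
        cyc = [pl for games, c1, c2, pl in trips if t & games in (c1, c2)]
        # a split works iff no cyclic triplet sits entirely in one room;
        # by symmetry, player n-1 can be fixed in the second room
        if not any(all(room & pl not in (0, pl) for pl in cyc)
                   for room in range(1 << (n - 1))):
            found.append(t)
    return found

# six players (2^15 tournaments): no counterexample, matching the solutions
print(len(counterexamples(6)))   # -> 0
```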

### When to Start Over?

It is true that the program could have been better written and better debugged. But how could you tell whether a counterexample existed and your program simply didn’t find it, or whether no counterexample existed at all?

In hindsight, it seems that writing the brute force program made me worse off than if I hadn’t written it at all. After the program ran without finding a single counterexample, I was confident that no counterexample existed, and set out about proving that, instead of looking for counterexamples or symmetry.

When you are stuck on a math problem like this — you make a bit of progress, then get stuck — it might be profitable to start over. More often than I would like, I prove a series of neat things without being able to prove the desired result. Then a look at the solutions manual reveals that a very short solution — one or two steps — lay in the opposite direction.

I’ll put an end to my philosophical musings of the day. Fortunately, the cutoff for the CMOQR was low enough that I made it even without solving every single problem.

## A trivial inequality, and how to express its solution in the most cryptic way imaginable

February 19, 2012

Solutions to olympiad problems are seldom written with clarity in mind — just look at forum posts in the Art of Problem Solving. The author makes jumps and skips a bunch of steps, expecting the reader to fill in the gaps.

Usually this is not much of a problem — the missing steps become obvious when you sit down and think about what’s going on with a pencil and some paper. But sometimes, this is not the case.

### The problem

One of the worst examples I’ve seen comes in the book Inequalities, A Mathematical Olympiad Approach. By all means, this is an excellent book. Anyway, here’s one of its easier problems — and you’re expected to solve it using the triangle inequality:

Prove that for all real numbers a and b,

$||a|-|b|| \leq |a-b|$

### Attempt 1: Intuitive solution

It isn’t clear how the triangle inequality fits. If I weren’t required to use the triangle inequality, I might be tempted to do an intuitive, case-by-case argument.

Let’s visualize the absolute value of $a-b$ as the difference between the two numbers on a number line. Now we compare this distance $|a-b|$ with the distance after you take the absolute value of both of them, $||a|-|b||$. If one of the numbers is positive and the other negative, we clearly have a smaller distance if we ‘reflect’ the negative one over. Of course, if they’re both positive, or they’re both negative, then nothing happens and the distances remain equal.

There, a simple, fairly clear argument. Now let’s see what the book says.

### The book’s solution

Flip to the end of the book, and find

Consider $|a|=|a-b+b|$ and $|b|=|b-a+a|$, and apply the triangle inequality.

Huh. Perhaps if you are better versed than I am in the art of solving inequalities, you’ll understand what this solution is saying. But I, of course, had no idea.

Maybe I should try the substitution they suggest. I only see one place where I could possibly substitute for $|a|$ — and substituting gives $||a-b+b|-|b-a+a||$. Now what? I don’t think I did it right — this doesn’t make any sense.

To be fair, I cheated a little bit in the first attempt: I didn’t use the triangle inequality. Fair enough — let’s solve it with the triangle inequality then and come back to see if the solution makes any sense now.

### Attempt 2: Triangle inequality solution

A standard corollary to the triangle inequality of two variables is the following:

$|a|-|b| \leq |a-b|$

Combine this with the two variables switched around:

$|b|-|a| \leq |b-a| = |a-b|$

Combine the two inequalities and we get the desired

$||a|-|b|| \leq |a-b|$
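The result (and the equal/strict cases from the earlier number-line picture) can be sanity-checked numerically. A quick Python check, with sample values of my own choosing:

```python
samples = [(3.5, 1.2), (-3.5, 1.2), (3.5, -1.2), (-3.5, -1.2),
           (0.0, 2.0), (-2.0, 0.0), (1.0, 1.0)]

# the inequality | |a| - |b| | <= |a - b| holds for every pair
assert all(abs(abs(a) - abs(b)) <= abs(a - b) for a, b in samples)

# same signs: reflecting changes nothing, so the distances are equal
assert abs(abs(-3.5) - abs(-1.2)) == abs(-3.5 - (-1.2))
# opposite signs: the distance strictly shrinks
assert abs(abs(-3.5) - abs(1.2)) < abs(-3.5 - 1.2)
```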

Now let’s look at the solution again. Does it make sense? No: at no point here did we do any $|a-b+b|$ substitution. Clearly the authors were thinking of a different solution that happened to also use the triangle inequality. Whatever it was, I had no idea what the solution meant.

### The book’s solution, decrypted

Out of ideas but hardly willing to let the issue rest, I asked for help on an online math forum. And look — it turns out that my solution was, without a doubt, the same solution as the book’s intended one!

What the author meant was this: considering that $|a| = |a-b+b|$, we have $|a| \leq |a-b|+|b|$ from the triangle inequality. Then, moving the $|b|$ over we get $|a|-|b| \leq |a-b|$.

After that, the steps I took above are left to the reader.

Perhaps I’m a bit thick-headed, but your solution can’t possibly be very clear if a reader who has the exact same solution can’t even recognize yours as the same one. Come to think of it, if I couldn’t recognize the solution, what chance does anybody have of following it — especially someone new to inequalities?

Almost every one of the one-sentence phrasings of this solution I could think of would be clearer and less puzzling than the solution the book gives me.

## Understanding Harmonic Conjugates (sort of)

January 7, 2012

For many people (for me at least), the Harmonic Conjugate is a difficult concept to understand. I didn’t really get it the first time I saw it, at Mathcamp. Let’s take the definition of the harmonic conjugate:

Points C and D are harmonic conjugates with respect to the segment AB if this equation holds:

$\frac{AC}{BC} = \frac{AD}{BD}$

If you’re like me, you’re thinking along the lines of “But why? Why is this defined this way? Why would we spend so much time proving things about this weird concept? What’s the point, what’s the use?”

Even now, I can’t really give you an intuitive explanation of why this equality is so important. On the other hand, I can certainly come up with a few problems in which the concept of the harmonic conjugate turns out to be useful.

### Apollonius and Fleeing Ships

Apollonius’s problem was this: you are in control of a ship (point A on diagram), and you are in pursuit of another ship (point B). The other ship is fleeing in a straight line in some direction:

Your speed is (obviously) faster than the speed of the other ship: say they’re going at 30 km/h and you’re going at 50 km/h. Additionally, your ship is required to go in a straight line.

In which direction should you set off in order to intercept the fleeing ship?

### Solution with Harmonic Conjugates

The first step of the solution is to construct harmonic conjugates CD so that their ratio is the ratio of your speed to the other ship’s speed (we’ll prove later that this is actually possible; assume we can do this for now):

$\frac{AC}{BC} = \frac{AD}{BD} = \frac{5}{3}$

Next, draw a circle with diameter CD:

The ray from B along the fleeing ship’s course intersects this circle at some point. Head to that point immediately, in a straight line: the fleeing ship will be there when you arrive.

### The Proof

In order to prove that this works, we’ll need to take a step back and look at how we constructed the points C and D. The solution turns out to be evident directly from the construction of the harmonic conjugates.

Again, let’s assume our desired ratio is 5/3. Starting with the points A and B, the first step is constructing some point P so that:

$\frac{AP}{BP} = \frac{5}{3}$

This is fairly easy to do. Draw a circle of radius 5 around A and a circle of radius 3 around B — any intersection P of these two circles gives the correct ratio. (If the circles don’t intersect, just scale both radii up by the same factor and try again.)

Next, dropping the internal and external angle bisectors of the new triangle gives the harmonic conjugates C and D:

Why angle bisectors? From the angle bisector theorems (which I won’t prove here):

$\frac{AP}{BP} = \frac{AC}{BC} = \frac{5}{3}$

$\frac{AP}{BP} = \frac{AD}{BD} = \frac{5}{3}$

Combining the two proves that C and D are indeed harmonic conjugates to AB.

As a corollary, notice that because the internal and external bisectors of an angle are perpendicular, the angle CPD is always a right angle — hence, the locus of all points P forms a circle with diameter CD.

Returning to the ship problem: since every point P on the circle satisfies $\frac{AP}{BP} = \frac{5}{3}$, and the two ships’ speeds are in that same ratio, both ships arrive at such a point P at the same time.
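The whole construction can be sanity-checked with coordinates. In this Python sketch the placement of A and B (and the choice AB = 8) is my own arbitrary pick for round numbers; the assertions confirm that every point on the circle with diameter CD splits the distances in the ratio 5 : 3:

```python
import math

A, B = (0.0, 0.0), (8.0, 0.0)   # hypothetical positions of the two ships
k = 5 / 3                       # speed ratio

# harmonic conjugates of AB with ratio 5:3
C = (5.0, 0.0)    # internal division: AC = 5, BC = 3
D = (20.0, 0.0)   # external division: AD = 20, BD = 12

center = ((C[0] + D[0]) / 2, 0.0)   # (12.5, 0)
radius = (D[0] - C[0]) / 2          # 7.5

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

# every point P on the circle with diameter CD satisfies AP/BP = 5/3
for t in range(12):
    ang = 2 * math.pi * t / 12
    P = (center[0] + radius * math.cos(ang), radius * math.sin(ang))
    assert abs(dist(A, P) / dist(B, P) - k) < 1e-9
```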

## Calculating the Plane Angles of Platonic Solids

October 11, 2011

What is the angle between two planes of a tetrahedron?

The angle between any two adjacent edges of the tetrahedron is $60^\circ$, so it’s easy to (falsely) conclude that the angle between two faces must also be $60^\circ$.

But this isn’t the case: the plane angle is defined as the angle between two lines, one in each plane, both perpendicular to the shared edge. The edges of the tetrahedron that meet this shared edge make $60^\circ$ with it, not $90^\circ$, so the answer of $60^\circ$ is invalid.

We can try to compute the angle using regular Euclidean solid geometry, but things tend to get messy. A different way to approach the problem is by using vector geometry: using vector methods we can easily calculate the plane angle of the tetrahedron (as well as the icosahedron and the dodecahedron).

Assume symmetry. We represent three concurrent edges of the polyhedron as three vectors beginning at the same point: $\vec a$, $\vec b$, and $\vec c$; let $\alpha$ and $\beta$ be the angles between the vectors (by symmetry we’re assuming that the two alpha’s are equal):
(in case my poor drawing skills do not make this apparent: vectors a and c form one face of the polyhedron, and b and c form the other)

For simplicity, let’s also say the length of each of the three vectors is 1.

We want to compute the angle between the plane formed by $\vec a$ and $\vec c$, and the plane formed by $\vec b$ and $\vec c$. Hence let $\vec x$ and $\vec y$ be the components of $\vec a$ and $\vec b$ perpendicular to $\vec c$, each ending at the same point as its respective vector:

For any two vectors, the dot product is defined as $\vec a \cdot \vec b = |\vec a| |\vec b| \cos \theta$, with $\theta$ being the angle between the vectors. Given that the lengths of the vectors are all 1, we have:

$\vec a \cdot \vec c = \vec b \cdot \vec c = \cos \alpha$

$\vec a \cdot \vec b = \cos \beta$

$\vec c \cos \alpha + \vec x = \vec a$

$\vec c \cos \alpha + \vec y = \vec b$

Hence $\vec x = \vec a - \vec c \cos \alpha$ and $\vec y = \vec b - \vec c \cos \alpha$.

We want to find the angle between $\vec x$ and $\vec y$ — call this angle $\theta$. Then

$\vec x \cdot \vec y = |\vec x| |\vec y| \cos \theta$

The dot product of vectors x and y is simply:

$\begin{array}{l} (\vec a - \vec c \cos \alpha) \cdot (\vec b - \vec c \cos \alpha) \\ = (\vec a - \vec c (\vec a \cdot \vec c)) \cdot (\vec b - \vec c (\vec b \cdot \vec c)) \\ = \vec a \cdot \vec b - (\vec a \cdot \vec c) (\vec b \cdot \vec c) \\ = \cos \beta - \cos^2 \alpha \end{array}$

Additionally $|\vec x| = |\vec y| = \sin \alpha$. Hence the cosine of the angle is:

$\cos \theta = \frac{\vec x \cdot \vec y}{|\vec x| |\vec y|} = \frac{\cos \beta - \cos^2 \alpha}{\sin^2 \alpha}$

We can now use this newly derived formula to calculate plane angles! For example…

### Tetrahedron

In the tetrahedron, both $\alpha$ and $\beta$ are 60:
So $\cos \theta = \frac{\cos 60 - \cos^2 60}{\sin^2 60} = \frac{1}{3}$ and $\theta = \arccos \frac{1}{3} = 70.5^\circ$.

### Icosahedron

In the icosahedron, $\alpha = 60$ but $\beta = 108$:
The top ‘cap’ is a regular pentagon, which has a vertex angle of 108; each side of the pentagon constitutes a side of an equilateral triangle. Since $\cos 108 = \frac{1}{4} (1-\sqrt{5})$, $\cos \theta = \frac{\cos 108 - \cos^2 60}{\sin^2 60} = -\frac{\sqrt{5}}{3}$, and $\theta = \arccos (-\frac{\sqrt{5}}{3}) = 138.2^\circ$.

### Dodecahedron

Computing angles for the dodecahedron works a bit differently from the tetrahedron and icosahedron. Instead of using existing edges as vectors, we construct an equilateral triangle by connecting three vertices:

So $\alpha = 36$ (since it’s part of a regular pentagon) and $\beta = 60$. Then $\cos \theta = \frac{\cos 60 - \cos^2 36}{\sin^2 36} = -\frac{1}{\sqrt{5}}$ and $\theta = 116.6^\circ$.
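The three computations above can be collected into a short Python check of the derived formula (a sketch of my own, not from the original post):

```python
import math

def plane_angle(alpha, beta):
    """Dihedral angle in degrees, from the derived formula
    cos(theta) = (cos(beta) - cos^2(alpha)) / sin^2(alpha)."""
    a, b = math.radians(alpha), math.radians(beta)
    return math.degrees(math.acos((math.cos(b) - math.cos(a) ** 2)
                                  / math.sin(a) ** 2))

print(round(plane_angle(60, 60), 1))    # tetrahedron  -> 70.5
print(round(plane_angle(60, 108), 1))   # icosahedron  -> 138.2
print(round(plane_angle(36, 60), 1))    # dodecahedron -> 116.6
```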