Monday, April 5, 2021

A TicTacToe solution

An anonymous reader offers this solution:

class TicTacToe {
    constructor() {
        this.turns = []
        this.board = new Array(9).fill(0)
        this.wins = {
            0: { 1: { 2: true }, 3: { 6: true }, 4: { 8: true } },
            1: { 4: { 7: true } },
            2: { 5: { 8: true }, 4: { 6: true } },
            3: { 4: { 5: true } },
            6: { 7: { 8: true } },
        }
        this.cellMap = {
            0: '_',
            1: 'X',
            2: 'O',
        }
    }

    print() {
        console.log('v'.repeat(7), 'turn', this.turns.length)
        const rows = [0, 3, 6].map(i =>
            this.board.slice(i, i + 3)
                .map(c => this.cellMap[c])
                .join(' '))
        rows.forEach(r => console.log(r))
    }

    takeTurn(t) {
        if (this.winner) {
            throw new Error('game is over')
        } else if (![1, 2].includes(t.state)) {
            throw new Error('invalid state')
        } else if (t.state === this.turns.slice(-1)[0]?.state) {
            throw new Error('not your turn')
        } else if (t.x < 0 || t.x > 2 || t.y < 0 || t.y > 2) {
            throw new Error('out of bounds')
        } else if (this.turns.find(turn => turn.x === t.x && turn.y === t.y)) {
            throw new Error('cell occupied')
        }
        this.turns.push(t)
        this.board[t.x * 3 + t.y] = t.state
        const marks = this.board.reduce(
            (a, v, i) => (v === t.state ? [...a, i] : a),
            [])
        const combos = combine(marks, 3)
        for (let i = 0; i < combos.length; i++) {
            const [a, b, c] = combos[i]
            if (this.wins[a] && this.wins[a][b] && this.wins[a][b][c]) {
                this.winner = this.cellMap[t.state]
            }
        }
        this.print()
        if (this.winner) return console.log('WINNER', this.winner)
    }
}

function combine(n, k) {
    let set
    if (typeof n == 'object') {
        set = n
        n = n.length
    }
    const result = []
    const combos = []
    const recurse = start => {
        if (combos.length === k) {
            result.push(
                set
                    ? combos.map(c => set[c - 1])
                    : combos.slice())
            return
        }
        if (combos.length + (n - start + 1) < k) {
            return // not enough numbers left to complete a k-tuple
        }
        combos.push(start)
        recurse(start + 1) // combinations that include start
        combos.pop()
        recurse(start + 1) // combinations that skip start
    }
    recurse(1)
    return result
}

const t = new TicTacToe()
t.takeTurn({ state: 1, x: 0, y: 0 })
t.takeTurn({ state: 2, x: 1, y: 0 })
t.takeTurn({ state: 1, x: 2, y: 2 })
t.takeTurn({ state: 2, x: 1, y: 2 })
t.takeTurn({ state: 1, x: 0, y: 2 })
t.takeTurn({ state: 2, x: 1, y: 1 })
I've reproduced it here because Google strips the leading whitespace out of the comments section. I didn't have the original indentation, so I just asked Emacs to indent it.

It being an unsolicited submission, I'll have the gall to make an unsolicited critique.

From the standpoint of the original point of the question, the code demonstrates that the author can write coherent, runnable programs. Many of the candidates I interviewed could not get this far. That said, there is room for improvement.

My biggest concern is with the complexity of the code. Let me focus on the combine procedure for the moment. It took me some time to figure out what on earth it was trying to accomplish. It is given a list of the positions for a particular player during the game. So if the game looked like this:

 X | O | X
   | X | X
 O | O |   
the list of positions for X would be 0, 2, 4, 5. combine is also given a number k. It is to return a list of all the k-tuples that can be made from the list of positions. If k were 3 in this example, it would return (0, 2, 4), (0, 2, 5), (0, 4, 5), (2, 4, 5).

Take a moment to test your recursion skills by coding this up.

We can observe this recursion: the solution consists of the first element paired with all (k-1)-tuples of the remaining elements

(map 'list (lambda (s) (cons (car elements) s)) (combine (cdr elements) (- k 1)))
plus all the solutions that don't involve the first element:
(combine (cdr elements) k)
There are two base cases: if k is zero, there is a single solution: the zero-tuple '(). If there are no elements, we cannot choose k of them and there are no solutions. Putting this together we get this:
(defun combine (elements k)
   (cond ((zerop k) (list '()))    ; a list of one solution, the zero-tuple
         ((null elements) (list))  ; a list of no solutions
         (t (append (map 'list (lambda (s)
                                 (cons (car elements) s))
                         (combine (cdr elements) (- k 1)))
                    (combine (cdr elements) k)))))
I'll leave it to the reader to translate this back into Javascript.
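For reference, one direct translation (using array spread in place of car and cdr) might look like this:

```javascript
// Direct translation of the Lisp version: the k-tuples that include
// the first element, appended to the k-tuples that do not.
function combine(elements, k) {
    if (k === 0) return [[]]              // one solution: the zero-tuple
    if (elements.length === 0) return []  // no elements: no solutions
    const [first, ...rest] = elements
    return combine(rest, k - 1)
        .map(s => [first, ...s])          // tuples containing first
        .concat(combine(rest, k))         // tuples without first
}

console.log(combine([0, 2, 4, 5], 3))
// [[0, 2, 4], [0, 2, 5], [0, 4, 5], [2, 4, 5]]
```

No mutation, no backtracking: the recursion structure alone tracks the state.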

The author of the original code seems to have gotten lost in the mechanics of constructing solutions. The variable combos is pushed and popped as the recursive calls are made. This means we are keeping track of the recursion both through the recursive flow control and through mutation and backtracking. I find mutation and backtracking harder to understand than recursion because you have to keep track of when exactly things are being mutated.

As I mentioned in the previous post, I was less concerned with the details of how the program worked than I was with whether the candidate could take a “word problem” and turn it into something that works. This approach — generating triplets of positions and seeing if they are winning — works. However, I find overall that it is a bit too complex and the procedure combine is confusing.

Saturday, March 13, 2021

A Google Interview Question

When I was working at Google I had to interview a lot of candidates. Most often the interview was general in nature. I wasn't evaluating a candidate for a particular position, I was determining whether a candidate was all around "Google material".

Although Google is known for its challenging interview questions, I found the simple questions gave me the most information about a candidate. One of my favorite interview questions was this: “Given the state of a tic-tac-toe game (naughts and crosses), determine if someone has won the game. Code this up however you'd like.”

This isn't a very hard problem. (Pause a moment and think how you'd do it, or even take a moment to write it up.) The straightforward solution is to check the rows, the columns, and the two diagonals. I was less interested in whether the candidate would come up with this solution or something more clever than I was in seeing exactly how the candidate worked his way from an informal word problem towards a formal solution in code. I wanted to see how the candidate organized his thoughts. I wanted to see how proficient the candidate was in programming when he was allowed to use the language he was most comfortable with.
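The straightforward solution is short; a minimal sketch in Javascript (assuming a nine-element board of '_', 'X', and 'O' cells) might be:

```javascript
// The straightforward check: eight winning lines; a line wins when all
// three of its cells hold the same non-empty mark.
const LINES = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8],   // rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8],   // columns
    [0, 4, 8], [2, 4, 6],              // the two diagonals
]

function winner(board) {
    for (const [a, b, c] of LINES) {
        if (board[a] !== '_' && board[a] === board[b] && board[b] === board[c]) {
            return board[a]
        }
    }
    return null // nobody has won (yet)
}

console.log(winner(['X', 'O', 'X',
                    '_', 'X', 'X',
                    'O', 'O', 'X'])) // 'X'
```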

I expected most candidates to whip off a solution in a few minutes and then we'd move on to the next interview question. To my surprise, fully half the candidates were unable to finish this simple task. A couple seemed unable to use a whiteboard to sketch out a solution. A large number dove right in and started coding up the obvious nested loop only to get confused handling the flow control. (There is a hidden trap in this problem: if you choose the nested loop solution, you want to abort the inner loop early when you determine a solution is not possible on the current row or column you are checking, but you want to abort the outer loop early when you determine a solution is possible.) Often a candidate did not at first realize that a cell in the tic-tac-toe board has three possible states, not two. One candidate used 4x4 arrays to represent the board so he could use 1-based indexing. Not a single candidate tried to abstract over the low-level array manipulation.

Interviews are highly stressful for candidates, so I didn't judge them too harshly. If it looked like they were on the right track and they successfully noticed and repaired a bug or two in their solution I'd call it a win. If they got hopelessly lost, I'm afraid I couldn't recommend them. It was sad because many seemed very nice people and they were trying their best. Either they weren't cut out to be programmers or their education was not serving them well at all.

I do have to mention one candidate who knocked this question out of the park. He observed that there really aren't that many tic-tac-toe game positions, so you can just hash the game position and look up the winner in a table. I didn't even ask him to code this up.

Tuesday, March 2, 2021

Roll your own conditional

Sometimes you'd like to write your own conditionals. Perhaps you'd like to wrap something around one or both arms of the branch. Perhaps the predicate is tristate and you need three branches. Perhaps the predicate wants to share information with one or both arms. You want to abstract away how the predicate makes it decision from how the caller wants to handle the possible outcomes. The trick is to use continuation passing style:

sign = lambda n, if_positive, if_zero, if_negative: \
           if_positive() if n > 0 else \
           if_negative() if n < 0 else \
           if_zero()

# use it like this:
for i in range (-1, 2):
    print (i)
    print (sign (i,
            lambda: \
              "positive",
            lambda: (lambda first, second: second)(
              print ("Hooray!"),
              "zero"),
            lambda: \
              "negative"))
By making the branches be thunks, we delay evaluation. Once the conditional decides which branch to take, it forces the thunk and returns its value.

Monday, March 1, 2021

Some of Python's language constructs, like variable binding, exception handling, sequencing, and iteration, are provided only through language statements. You cannot use statements in an expression context, so it would seem we are up against a limitation. But Python has lambda expressions. Lambda expressions and function calls are all you need to implement variable binding, sequencing and recursion as ordinary Python expressions.

For example, suppose you wish to have a 'let' expression where you bind helper variables 'a' and 'b' to subexpressions. Simply use lambda and immediately call it:

    return (lambda a, b: h(a, b))(f(x), g(y))
Not pretty, but it gets the job done.

If you want to sequence two expressions, write them like this:
    return (lambda ignore: x + 7)(print (x))
The print expression runs first, its result is ignored and then x is added to 7 and returned. Again, this could use a lot of syntactic sugar, but it gets the job done. Alternatively, we could use left to right evaluation of arguments to do our sequencing:
    return (lambda first, second: second)(print(x), x + 7)
In order to implement iteration, we need to bind a name recursively. We'll use the Y operator.
Y = lambda f: (lambda d: d(d)) (lambda x: f (lambda: x (x)))

print (Y (lambda loop:  \
              lambda x: \
                  None if x == 0 else (lambda first, second: second)(
                                           print (x),
                                           loop()(x - 1)))
          (5))


Handling exceptions is easy if you use a helper function and some thunks:

def on_error (compute_value, handle_error):
    try:
        return compute_value()
    except Exception:
        return handle_error()

def safe_divide (dividend, divisor):
    return on_error (
           lambda: dividend / divisor,
           lambda: (lambda first, second: second)(
                       print ('divide by zero, answer 1'),
                       1))

print (safe_divide (12, 3))
print (safe_divide (12, 0))
4.0
divide by zero, answer 1
1

Sunday, February 28, 2021

I saw this gem at
Someone asked me if we can have a lambda function without any argument?
Yes, we can define a lambda function without any argument. But, it will be useless because there will be nothing to operate on.
Using lambda function without any argument is plain abuse of this feature.

Obviously the author never heard of a thunk. 
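For the record, a thunk is just a zero-argument lambda used to delay a computation; a sketch in Javascript syntax:

```javascript
// A thunk: a zero-argument function wrapping a computation so it can
// be passed around and forced (called) later, or not at all.
let evaluated = false
const thunk = () => {          // nothing runs yet
    evaluated = true
    return 6 * 7
}

console.log(evaluated)         // false: the computation has been delayed
console.log(thunk())           // 42: forcing the thunk runs it
```

Hardly useless: it is exactly the mechanism behind the delayed branches in the conditionals above.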

Tuesday, February 2, 2021

Descriptive statistics

Suppose I ran a benchmark. Being thorough, I ran it several hundred times, but that just produces a big pile of numbers and we need a way to summarize the results concisely.

The naive way would be to report the average run time and perhaps the variance or standard deviation. This would allow us to assert that the benchmark takes, say, 10.5 ± 1.2 seconds. A reader would feel confident that the bulk of the benchmark runs would take somewhere between 9.3 seconds and 11.7 seconds, and that a lot more runs would take near 10.5 seconds than the minimum of 9.3 seconds or the maximum of 11.7 seconds. The mean (average) and the standard deviation are the characteristic statistics of a normal (Gaussian) distribution. Rather than give the results of hundreds of runs, I summarize by saying that the distribution is Gaussian and I give the characteristic statistics of the best fitting Gaussian distribution.

But what if the distribution isn't Gaussian? By reporting the statistics of the best fitting Gaussian, at best I've reported something irrelevant; at worst I've misled the reader about the nature of the distribution.

It turns out that program run times do not typically follow a Gaussian distribution. A number of different factors influence the run time of a program and these are combined multiplicatively. The resulting distribution will be heavily skewed towards the shorter times, but have a long tail of longer run times. It will look a bit like this:

But a curious thing is going on here. If we plot the exact same curve but use a logarithmic scale for the X axis, we get this:

Our old friend the Gaussian distribution appears. Run times of computer programs typically follow a Gaussian distribution if you plot them on a logarithmic scale. This is called a log-normal distribution and it is one of the most frequently encountered distributions (after the Gaussian).

When reporting a log-normal distribution, it makes sense to report the mean and standard deviation of the Gaussian that appears after you transform to a logarithmic scale. If you transform and take the mean, you end up with the geometric mean. If you transform and take the standard deviation, you end up with something called the geometric standard deviation. This latter is just a little tricky. Instead of a margin of error that you add or subtract from the mean, you have a factor by which you multiply or divide the geometric mean. For example, I might assert a particular benchmark takes 10.5 ×/÷ 1.2, and you can interpret this as meaning that the majority of runs took between (/ 10.5 1.2) = 8.75 seconds and (* 10.5 1.2) = 12.6 seconds.
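The computation is just the ordinary mean and standard deviation taken in log space and exponentiated back; a sketch in Javascript, with made-up sample times:

```javascript
// Geometric mean and geometric standard deviation: take logs, compute
// the ordinary mean and (population) standard deviation, transform back.
function geometricStats(samples) {
    const logs = samples.map(Math.log)
    const mean = logs.reduce((a, b) => a + b, 0) / logs.length
    const variance =
        logs.reduce((a, x) => a + (x - mean) ** 2, 0) / logs.length
    return { gmean: Math.exp(mean), gsd: Math.exp(Math.sqrt(variance)) }
}

// Hypothetical benchmark run times, in seconds:
const { gmean, gsd } = geometricStats([9.1, 9.8, 10.5, 11.4, 12.3])
console.log(`${gmean.toFixed(1)} ×/÷ ${gsd.toFixed(2)}`)
```

Note that the geometric standard deviation is dimensionless and is always at least 1, which is why it is applied by multiplying and dividing rather than adding and subtracting.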

When you see a paper reporting the mean and standard deviation of some benchmark runs, you should be instantly suspicious. Benchmark runs don't usually follow a Gaussian distribution and it is likely that the author is unaware of this fact and is mindlessly applying irrelevant statistics. But if you see a paper where the geometric mean and geometric standard deviation are reported, you can have much more confidence that the author thought about what he or she was trying to say.

I find it particularly annoying when I see a benchmark harness that will run a benchmark several times and report the mean and standard deviation of the runs. The author of the harness is trying to be helpful by reporting some statistics, but the harness is likely reporting the wrong ones.

Monday, February 1, 2021

Oddity noticed while running a benchmark

I've heard it said that using a trampoline to achieve tail recursion incurs a cost of around a factor of two. I was curious whether that was accurate and where that cost comes from, so I decided to measure it with a micro-benchmark. I chose the Tak benchmark as a starting point because it doesn't do much but function calls and it is fairly well known. The basic Tak benchmark is this:

public static int Tak (int x, int y, int z) {
    return (! (y < x))
        ? z
        : Tak (Tak (x - 1, y, z),
               Tak (y - 1, z, x),
               Tak (z - 1, x, y));
}

Tak (32, 16, 8) => 9
When called with arguments 32, 16, and 8, the benchmark makes an additional 50,510,520 calls to itself, of which 1/4 (12,627,630) are self tail calls. One run of the benchmark consists of calling Tak (32, 16, 8) ten times. On my laptop this takes around 900 ms.
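For context, the trampoline technique in question can be sketched like this (a Javascript illustration of the general idea, not the benchmarked code): a tail call returns a thunk instead of calling, and a driver loop bounces until it gets a non-function value back.

```javascript
// Driver loop: keep forcing thunks until a real value comes back.
function trampoline(value) {
    while (typeof value === 'function') {
        value = value()
    }
    return value
}

// A self tail call written trampoline-style: return a thunk rather
// than recursing directly, so the stack never grows.
function countDown(n) {
    return n === 0 ? 'done' : () => countDown(n - 1)
}

console.log(trampoline(countDown(1000000))) // 'done', no stack overflow
```

The extra allocation of a thunk and the indirect call on every bounce are the obvious places for the rumored factor-of-two cost to hide.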

I ran the benchmark 250 times and recorded how long it took. Here are the times plotted against the run number:

For the first 50 runs of the benchmark the average time seems to slowly increase from around 830 ms to around 930 ms, then it goes to about 900 ms for the next 50 runs, then it settles down to around 890 ms after that. Mostly it stays in a fairly narrow band around the average time, but now and then the time will jump to over 1000 ms for a couple of runs.

I don't know why the average run time drifts like this, but the effect is visible on several different benchmarks. Since it takes the better part of a minute to perform the first 50 runs, the period over which the runtime increases is relatively long — on the order of 30-40 seconds. My guess is that as the benchmark runs, the processor heats up and begins to slow down slightly in order to control the heat.