As a child, I wanted to become a chess grandmaster. When my mother said no, I downgraded my ambition to becoming a part-time grandmaster, so that I could also have a socially acceptable career as a scientist, as a kind of front. It was probably for the better that this did not happen; nonetheless, over the next two decades, I played a lot of competitive chess, earning the FIDE Master title and making a draw against Grandmaster Maurice Ashley. In the meantime, my socially acceptable career took shape. I became a computational neuroscientist and a computational cognitive scientist. I studied neural coding, multisensory perception, visual decision-making, working memory, and motivation. Only in 2013, in my fifth year as a PI at Baylor College of Medicine in Houston, did I realize that the process by which people think ahead in games was interesting from a cognitive science point of view.
Around the same time, NSF issued a call for applications to the “Integrated NSF Support Promoting Interdisciplinary Research and Education (INSPIRE)” program. (I always wonder if someone gets paid to come up with tortured acronyms. If so, I want that job.) Think high-risk, high-reward topics. Ones that PIs get strongly discouraged from pursuing before tenure. I wasn’t tenured yet but that didn’t stop me from proposing to study human reasoning and learning in combinatorial games (deterministic, full-information, two-player games). The goal: understanding how people think ahead when the number of possible futures is combinatorially large. Its relevance: such planning problems are everywhere, from navigation to writing to career planning to economic policy.
But what game to use? As a PhD student in Physics, I had, with a group of friends, read parts of the book series Winning Ways for your Mathematical Games by Conway, Berlekamp, and Guy, about the best strategies in simple games such as Nim and Dots-and-Boxes. Fascinating material, in large part because these famous mathematicians allowed themselves to play professionally. Chapter 22 covered generalizations of tic-tac-toe, which is 3-in-a-row on a 3 x 3 board. I learned that 8-in-a-row on an infinite board was always a draw, and 4-in-a-row on a 4 x 24 board was won for the first player (i.e., under best play on both sides). Why 24?! I suspected that the board did not have to be so large to guarantee a win, and over the next few years, as a hobby, I tried to prove (using pen and paper) that 4-in-a-row is won even on a 4 x 11 board. (Did I mention that I had weird hobbies? I remember that in 2004 I made a friend play this against me at a dinner party.) Fast-forwarding to the grant proposal, I decided to use 4-in-a-row on a 4 x 11 board as an experimental paradigm, arguing that it might be complex enough to be relevant to real-world problems while, unlike chess, simple enough to be computationally tractable.
Zeyan Shu and Edgar Walker helped me collect preliminary data from subjects solving 4-in-a-row puzzles (“win in N moves”) at various levels of difficulty (N). I believed that an advantage of this game over chess was that the objective quality of a move (whether it changed the game outcome under best subsequent play) could be determined, providing a benchmark for human play. We also collected preliminary data from people playing full games, including one subject whose eye movements were recorded simultaneously. I hypothesized, following a 1966 chess study in the journal Soviet Psychology, that eye movements could reveal strategy. It was not the worst grant proposal of my career, but it was light-years away from my previous work, so I had no relevant track record. I submitted this grant on May 29, 2013.
To my great surprise, NSF took the leap and funded this work. (Soap box: it is incredibly important for funding agencies to have mechanisms to allow junior researchers to break free from the shackles of their academic heritage and from the temptations of intellectual inertia.) This decision coincided with my move to New York University, where I started in Fall 2013. Shortly after I arrived, the Center for Neural Science held a departmental retreat. I may have been the only PI presenting in a booze-fueled poster session, and I had decided to talk about the just-funded grant, but I drew the attention of a trio of newly arrived PhD students, Bas van Opheusden, Evan Russek, and Oliver Vikbladh. They were actually interested in these ideas! Bas ended up joining my lab, and he later told me that his personal experience as a chess player had contributed to his interest.
In the next five years, Bas developed the tenuous ideas from the grant proposal into a full-fledged research program, pretty much functioning as his own PI. During this period, valuable contributions were also made by Gianni Galbiati, Zahy Bnaya, and Yunqi Li. My original belief that it was important that we could prove a forced win for the first player from the starting position turned out to be wrong; no human player ever came close to finding this solution. (By now, the board size had shrunk from 4 x 11 to 4 x 9, but the forced win was still there.) While the grant proposal had only contained a rather naïve form of depth-limited exhaustive search, Bas converged on a plausible modeling framework and proceeded to test it rigorously using six experiments and extensive model comparison. Bas decided to focus on the question of expertise, which in retrospect was probably the best possible use of this paradigm and model. We faced fundamental computational challenges, most notably associated with the unbiased evaluation of log likelihoods based on simulations; this became a separate methods paper. And finally, we initiated a collaboration with the UK-based company Peak, which at the time was very interested in collaborating with researchers to understand cognitive training. Bas provided Peak with the AI opponents for users to play against, and Peak gamified the task and put it on their platform. The amount of data that we started receiving exceeded our wildest expectations, with over a million games played each month.
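For readers curious what a “naïve depth-limited exhaustive search” looks like for this kind of game, here is a minimal sketch. This is purely illustrative, not the paper’s actual model or code: the board size, function names, and the crude “call it a draw at the depth limit” evaluation are all my own simplifications.

```python
# Illustrative sketch of naive depth-limited exhaustive search for
# k-in-a-row. Cells hold 0 (empty), 1, or -1 (the two players).
from itertools import product

ROWS, COLS, K = 4, 5, 4  # small board for illustration; the study used 4 x 9

def lines(board):
    # yield every horizontal, vertical, and diagonal run of K cells
    for r, c in product(range(ROWS), range(COLS)):
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            cells = [(r + i * dr, c + i * dc) for i in range(K)]
            if all(0 <= rr < ROWS and 0 <= cc < COLS for rr, cc in cells):
                yield [board[rr][cc] for rr, cc in cells]

def winner(board):
    # 1 or -1 if that player has K in a row, else 0
    for line in lines(board):
        if abs(sum(line)) == K:
            return sum(line) // K
    return 0

def search(board, player, depth):
    # best outcome `player` can force within `depth` plies (1 / 0 / -1)
    w = winner(board)
    if w != 0:
        return w
    moves = [(r, c) for r, c in product(range(ROWS), range(COLS))
             if board[r][c] == 0]
    if depth == 0 or not moves:
        return 0  # crude heuristic at the depth limit: score it as a draw
    best = -player  # worst possible outcome for `player`
    for r, c in moves:
        board[r][c] = player
        val = search(board, -player, depth - 1)
        board[r][c] = 0
        if val * player > best * player:
            best = val
    return best
```

With three in a row already on the board, even a depth-1 search finds the winning completion; the combinatorial explosion that makes exhaustive search implausible as a model of human play only bites at larger depths.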
Bas graduated in May 2019 and started a postdoc with Tom Griffiths at Princeton University. He took on new projects, so work on this one slowed down. Around the same time, Ionatan Kuperwajs joined the lab as a PhD student. He replicated Bas’s laboratory findings in the Peak data, while also finding evidence for retrospective reasoning and starting a project with postdoc Heiko Schütt on fitting the Peak data with a neural network. Moreover, Ionatan got us to sit down and actually write the journal submission. Sometimes, a new project member helps not only because of the energy that they bring, but also because the previous members see the addition as a fresh start that grants permission to set aside previous guilt.
We submitted the main manuscript to Science in February 2021; it was rejected without review. We sent it to Nature; it was again rejected without review. We sent it to Nature Human Behaviour. We received mostly positive reviews in April 2021, with one reviewer writing “I think this work is important enough to be published in Nature.” Encouraged, we went back to the Nature editors to ask if they were willing to send the paper for review if we addressed the NHB reviews. After due deliberation, the editors agreed and we resubmitted to Nature in June 2021. A month later, the paper was rejected again, this time after reviews. None of the reviews felt damning to us, though, and after getting useful clarification from the editors, we decided to go to work and address the reviews as if the decision had been revise-and-resubmit. We submitted the revision and rebuttal as an appeal in March 2022. In April, we received an actual revise-and-resubmit decision. In the following months, we addressed those comments. We submitted our fourth version to Nature in January 2023; this version was accepted a month later. The paper was published on May 31, 2023. Although this process was long and at times discouraging, it was fair; in particular, our appeals were considered thoughtfully and editors were always willing to explain their decisions.
All in all, 10 years and 2 days elapsed between the submission of the grant proposal and the publication of the paper. One could quibble with this count in various ways. Maybe the real start of the project was when Bas joined the lab in April 2014. Maybe the refereed CogSci conference paper in 2017 is the product to count. Maybe more prudent authors would have resubmitted to NHB and gotten the paper published in 2021. But these details aside, the fact remains that the time scale of academic science is extraordinarily long compared to almost every other sector. Doing good science is painstakingly slow. Realizing this fact is important for students to be able to make informed career decisions and for funders and policy-makers to have realistic expectations. For me personally, it is a sobering reminder of how few major research projects we can undertake in our lifetimes, and it increases my urge to diversify. In scientific careers, not unlike in chess, extreme pruning of possible futures is inevitable.
(Image credit: Jordan Lei)