DEEP REINFORCEMENT LEARNING IN EXTREMAL COMBINATORICS

Date
2025-05
Publisher
University of Delaware
Abstract
In extremal combinatorics, there are many open conjectures, some of which are probably false. One can attempt to disprove a conjecture with a proof by contradiction, but the more common method is to exhibit a single counterexample. Finding one by hand, however, is inefficient, especially for complex problems involving large graphs. It therefore makes sense to train machines that automate the process and construct counterexamples for us. In particular, Adam Z. Wagner from Google DeepMind proposed a reinforcement learning approach for this task and used it to find counterexamples to several published conjectures in graph theory. In this thesis, we first review Wagner's approach and provide more details about its implementation. We then propose and explore a new variant of his approach that focuses on geometric graphs to reduce the size of the search space. Using simple conjectures, we show that this new approach can significantly reduce the training time needed to discover a counterexample.
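The search procedure summarized in the abstract can be illustrated with a minimal sketch. This is not the thesis's implementation: Wagner's approach trains a neural-network policy, whereas the sketch below replaces it with a plain per-bit Bernoulli distribution updated by the cross-entropy method, and `toy_score` is a placeholder for a reward derived from the conjectured inequality (e.g., the gap between the two sides of the conjecture, so that a positive gap signals a counterexample). Bit strings would encode the edges of a candidate graph.

```python
import random

random.seed(0)  # reproducible illustration

def cross_entropy_search(n_bits, score, iters=30, batch=200,
                         elite_frac=0.1, lr=0.3):
    """Derivative-free cross-entropy search over bit strings.

    Maintains an independent Bernoulli probability per bit, samples a
    batch of candidate constructions, and nudges the probabilities
    toward the top-scoring ("elite") samples.
    """
    p = [0.5] * n_bits  # start from the uniform distribution
    best = None
    for _ in range(iters):
        samples = [[1 if random.random() < p[i] else 0
                    for i in range(n_bits)]
                   for _ in range(batch)]
        samples.sort(key=score, reverse=True)
        elites = samples[: max(1, int(batch * elite_frac))]
        if best is None or score(elites[0]) > score(best):
            best = elites[0]
        # move each bit probability toward the elite sample mean
        for i in range(n_bits):
            mean_i = sum(e[i] for e in elites) / len(elites)
            p[i] = (1 - lr) * p[i] + lr * mean_i
        # clip away from 0 and 1 to keep exploring
        p = [min(0.95, max(0.05, q)) for q in p]
    return best

# Placeholder objective: reward 1s in even positions, penalize 1s in
# odd positions (a real run would score a conjecture on the graph
# encoded by the bit string).
def toy_score(bits):
    return sum(b if i % 2 == 0 else -b for i, b in enumerate(bits))

result = cross_entropy_search(20, toy_score)
```

In the geometric-graph variant proposed in the thesis, the idea is to shrink what the bit string must encode, which is why the search space, and hence the training time, can drop substantially.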