Information

Abstract

For the past six years, researchers in genetic programming and other program synthesis disciplines have used the General Program Synthesis Benchmark Suite to benchmark many aspects of automatic program synthesis systems. These problems have been used to make notable progress toward the goal of general program synthesis: automatically creating the types of software that human programmers code. Many of the systems that have attempted the problems in the original benchmark suite have used it to demonstrate performance improvements granted through new techniques. Over time, the suite has gradually become outdated, hindering the accurate measurement of further improvements. The field needs a new set of more difficult benchmark problems to move beyond what was previously possible.

In this paper, we describe the 25 new general program synthesis benchmark problems that make up PSB2, a new benchmark suite. These problems are curated from a variety of sources, including programming katas and college courses. We selected these problems to be more difficult than those in the original suite, and give results using PushGP showing this increase in difficulty. These new problems give plenty of room for improvement, pointing the way for the next six or more years of general program synthesis research.

Full Paper

A preprint of the paper can be found on arXiv.

Citation

Thomas Helmuth and Peter Kelly. 2021. PSB2: The Second Program Synthesis Benchmark Suite. In 2021 Genetic and Evolutionary Computation Conference (GECCO '21), July 10–14, 2021, Lille, France. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3449639.3459285

bibtex

@InProceedings{Helmuth:2021:GECCO,
  author            = "Thomas Helmuth and Peter Kelly",
  title             = "{PSB2}: The Second Program Synthesis Benchmark Suite",
  booktitle         = "2021 Genetic and Evolutionary Computation Conference",
  series            = {GECCO '21},
  year              = "2021",
  isbn13            = {978-1-4503-8350-9},
  address           = {Lille, France},
  size              = {10 pages},
  doi               = {10.1145/3449639.3459285},
  publisher         = {ACM},
  publisher_address = {New York, NY, USA},
  month             = {10-14} # jul,
  doi-url           = {https://doi.org/10.1145/3449639.3459285},
  URL               = {https://dl.acm.org/doi/10.1145/3449639.3459285},
}

Supplementary materials

Datasets

We have created a dataset for each benchmark problem; implementations of PSB2 can sample these datasets to obtain training and test cases for each problem.

The easiest way to use PSB2 is through one of the following libraries, which handle downloading and sampling the datasets. With these libraries, you do not need to download the entire dataset from Zenodo; each problem's dataset is downloaded and cached the first time it is sampled.
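To illustrate the sampling pattern these libraries provide, here is a minimal self-contained sketch in Python. The real libraries fetch each problem's dataset from Zenodo on first use and cache it locally; this sketch instead mimics that interface over an in-memory stand-in dataset for the fizz-buzz problem, one of the 25 problems in PSB2. The function name `fetch_examples` and the `"input1"`/`"output1"` labels follow the convention used by the Python PSB2 library, but treat the details here as illustrative rather than as the library's exact API.

```python
import random

# Stand-in for a downloaded PSB2 problem dataset: each case is a dict
# mapping labeled inputs to labeled outputs, as in the PSB2 data files.
FIZZ_BUZZ_CASES = [
    {"input1": n,
     "output1": ("Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0)) or str(n)}
    for n in range(1, 101)
]

def fetch_examples(cases, n_train, n_test, seed=None):
    """Sample disjoint training and test sets from a problem's cases.

    The real libraries sample from much larger datasets (1 million test
    cases per problem); this sketch just shuffles and splits in memory.
    """
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train : n_train + n_test]

# Sample 20 training and 50 test cases for the synthesis run.
train, test = fetch_examples(FIZZ_BUZZ_CASES, 20, 50, seed=0)
```

A synthesis system would then evaluate each candidate program on the `train` cases during evolution and report generalization on the held-out `test` cases.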

Reference Implementation

We provide a reference implementation in Clojure of the PushGP system used to produce our results, which includes an implementation of each problem in PSB2.

Spreadsheet of considered problems

To be transparent about our problem curation process, we have created a table of all the problems we considered, including the reason each rejected problem was rejected, initial results for problems we implemented, and a link to each problem's source.

Slides and Video of Talk