Project Overview

This page is a short overview of GenerativeGI, a project that brings together generative art (making art with code) and genetic improvement (improving code with search algorithms). The goal of this post is to give a bit of insight into the project itself and show off some of our favorite outputs.

Note - this is intentionally brief/reductive. The full paper has all the details in lovely, dry, academic tones.

Note - our journal submission is currently under review.

GenerativeGI

GenerativeGI is a Python-based technique that uses genetic improvement (GI) to string together a series of generative art techniques into something new. The idea is that a generative artist wants to spend less time fiddling with parameters, which can be a time-consuming process. In the interest of brevity, the individual techniques are described in the README file of our public repo.

Genetic Improvement

GI is an evolutionary computation-based technique for automatically improving the source code of programs. Basically, a GI technique searches for the combination of lines of source code that yields the best possible program (the software itself is encoded as the genome).

For this work, we consider the source code to be a series of generative art techniques, each of which can accept parameters that vary its output. The GI process then attempts to find the “best” way to combine techniques and parameters to create a glitch art aesthetic.
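To make that concrete, here is a rough sketch of what the encoding and a GI-style mutation might look like. The technique names, parameter ranges, and function names below are placeholders for illustration only - the actual techniques and operators live in the repo and paper.

```python
import random

# Hypothetical technique names and parameter ranges for illustration only --
# the real set of techniques is listed in the repo's README.
TECHNIQUES = {
    "flow-field":     {"particles": (100, 5000)},
    "stippling":      {"dot-size": (1, 8)},
    "pixel-sort":     {"threshold": (0, 255)},
    "circle-packing": {"max-radius": (5, 100)},
}

def random_gene():
    """One gene = one technique plus randomly chosen parameter values."""
    name, params = random.choice(list(TECHNIQUES.items()))
    return (name, {k: random.uniform(lo, hi) for k, (lo, hi) in params.items()})

def random_genome(max_len=6):
    """A genome is an ordered list of techniques applied to the canvas in turn."""
    return [random_gene() for _ in range(random.randint(1, max_len))]

def mutate(genome):
    """A simple GI-style mutation: add, drop, or re-parameterize one technique."""
    genome = list(genome)
    op = random.choice(["add", "drop", "tweak"])
    if op == "add" or not genome:
        genome.insert(random.randrange(len(genome) + 1), random_gene())
    elif op == "drop":
        genome.pop(random.randrange(len(genome)))
    else:  # re-roll the parameters of one existing technique
        i = random.randrange(len(genome))
        name, _ = genome[i]
        genome[i] = (name, {k: random.uniform(lo, hi)
                            for k, (lo, hi) in TECHNIQUES[name].items()})
    return genome
```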


Here are some of the best outputs (i.e., the ones I liked and thought were worth presenting). Each of the specific runs gets its own carousel of images, and you can click on each image to see the full-size version. Sadly they’re only 1000x1000 pixels; anything larger would have made the experiments take far too long to run.

The images shown were selected at random - there is no rhyme or reason to the picks beyond looking for different or interesting outputs.

Random

These results came from simply randomizing GenerativeGI without any guidance or evolution. This still resulted in very neat outputs, although a lot more ‘blank space’ can be seen in the full dataset.

Single-Objective (Clear)

This experiment focused on a single objective for optimization - pixel differences via root-mean-square (RMS) difference analysis (i.e., a mathematical calculation of how “different” two images are from each other). Additionally, the canvas object was cleared prior to evolutionary operations to provide a clean slate each time a new child was created. Interestingly, the final outputs were typically very similar per-replicate (as opposed to the random and Lexicase runs), as the single fitness objective guided the search towards a common set of solutions.
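For the curious, the RMS metric is a standard pixel-wise calculation; a minimal sketch with NumPy and Pillow is below. The function name and the assumption that both images are same-sized RGB are mine - this isn't necessarily how our implementation handles it.

```python
import numpy as np
from PIL import Image

def rms_difference(path_a, path_b):
    """Root-mean-square pixel difference between two same-sized RGB images.

    Higher values mean the two images look less alike.
    """
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```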

Single-Objective (No Clear)

This experiment is the same as above (RMS difference); however, the canvas object is not cleared, simulating a “pass-it-on” style of art where techniques overlap.

Lexicase (Clear)

This experiment runs the full Lexicase selection algorithm with all five fitness functions active: pairwise RMS difference, pairwise Chebyshev difference, minimizing the length of the genome, maximizing the diversity of techniques in each genome, and maximizing the amount of negative space within an image (targeting 70% negative space).
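As a rough sketch of how Lexicase selection works (not our exact implementation): to pick a parent, the objectives are shuffled, and the candidate pool is repeatedly filtered down to the individuals that do best on each objective in turn, until one candidate remains or the objectives run out.

```python
import random

def lexicase_select(population, objectives):
    """Pick one parent via Lexicase selection.

    `objectives` is a list of functions, each scoring an individual where
    higher is better (minimized objectives, such as genome length, can simply
    be negated). In our case these would be the five fitness functions above.
    """
    candidates = list(population)
    for objective in random.sample(objectives, len(objectives)):
        scores = [objective(ind) for ind in candidates]
        best = max(scores)
        candidates = [ind for ind, s in zip(candidates, scores) if s == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)
```

Because the objective ordering is reshuffled for every selection, no single fitness function dominates the search, which helps explain why the Lexicase runs stay more varied than the single-objective ones.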

Lexicase (No Clear)

Same as above; however, the canvas is not cleared, allowing techniques to build upon each other throughout evolution.