Generating Julia Sets with a Transpose Convolution Network

Meir Dragovsky
3 min read · Oct 25, 2020

In this tutorial we will walk through:

  • Creating Julia set images programmatically
  • Creating a transpose convolution model that outputs Julia set images from a single complex number
  • Training the model on inputs drawn uniformly from a range (the domain)

You can learn more about the Julia set here.

The complete source code can be found here.

Creating Julia set images programmatically

On a 2D matrix, each value z goes through an iteration loop (the quadratic map z ← z ^ 2 + c for the chosen complex parameter c) in which it tries to escape some limit lim, i.e. reach z.real ^ 2 + z.imag ^ 2 > lim, within some number of iterations.
The output matrix contains each point's escape value, which is the number of iterations it took to reach the limit lim.

The next cell contains definitions of the matrix (image) size, the escape value limit, and the maximum number of iterations.

We use the numba just-in-time (JIT) compiler to accelerate the computation, since computing the Julia set is computationally expensive.

We also normalize the result matrix, limiting its values to the range 0.0 to 1.0.
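A minimal sketch of such a generator is shown below, assuming the standard quadratic map z ← z ^ 2 + c over a fixed grid. The constant names, the viewing window, and the specific values are illustrative assumptions, not the notebook's exact configuration.

```python
import numpy as np
from numba import jit

# Illustrative values — the notebook's own cell defines the actual
# image size, escape limit and maximum iteration count.
WIDTH, HEIGHT = 128, 128
LIM = 4.0
MAX_ITER = 64

@jit(nopython=True)
def julia_image(c_real, c_imag):
    """Escape-time Julia set image for the parameter c = c_real + i*c_imag."""
    img = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    for row in range(HEIGHT):
        for col in range(WIDTH):
            # Map the pixel to a point z in the complex plane (window is illustrative).
            zr = -1.5 + 3.0 * col / WIDTH
            zi = -1.5 + 3.0 * row / HEIGHT
            n = 0
            while zr * zr + zi * zi <= LIM and n < MAX_ITER:
                zr, zi = zr * zr - zi * zi + c_real, 2.0 * zr * zi + c_imag
                n += 1
            img[row, col] = n        # escape value: iterations to exceed LIM
    return img / MAX_ITER            # normalize to the range [0.0, 1.0]
```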

Creating Transpose Convolution model

Next we define the model. The input is a single complex number represented as a vector of length 2, and the output is a 2D matrix.
The model consists of Conv2DTranspose | normalization | activation blocks that progressively upsample the feature map to its final size.
As we restrict the image values to the range 0.0 to 1.0, the last activation is a sigmoid.
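A Keras sketch of such a model might look like the following; the filter counts, kernel sizes, normalization type (batch normalization here), and number of blocks are assumptions rather than the notebook's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(image_size=128):
    """Upsample a length-2 vector (real, imag) to an image_size x image_size image."""
    inputs = tf.keras.Input(shape=(2,))
    # Project the 2-value input onto a small spatial feature map.
    x = layers.Dense(8 * 8 * 64)(inputs)
    x = layers.Reshape((8, 8, 64))(x)
    # Repeated Conv2DTranspose | normalization | activation blocks,
    # each doubling the spatial size until it reaches image_size.
    size, filters = 8, 64
    while size < image_size:
        x = layers.Conv2DTranspose(filters, kernel_size=4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        size *= 2
        filters = max(filters // 2, 16)
    # Image values are restricted to [0.0, 1.0], so the last activation is a sigmoid.
    x = layers.Conv2DTranspose(1, kernel_size=3, strides=1, padding="same",
                               activation="sigmoid")(x)
    return tf.keras.Model(inputs, x)
```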

Before training, the model output looks like this:

Training model from uniform distribution range

In a classic machine learning scenario we train a model on data (a training set) and later evaluate it on other data (a test set).

However, here we use another approach.
We train the model to approximate results for a finite number of inputs taken uniformly from two ranges representing the real and imaginary parts of a complex number (the domain).

Since the number of distinct values within a range is infinite, we train the model on a random finite set while evaluating it on other values that still fall within the ranges (the domain).

The ranges for real and imaginary parts are [-1.0, 0.0] and [0.0, 1.0] respectively.
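A training sketch under these assumptions is shown below, reusing the hypothetical julia_image and build_generator from the sketches above; the helper name, loss, optimizer, batch sizes and epoch count are illustrative choices, not the notebook's.

```python
import numpy as np

# Domain: real part in [-1.0, 0.0], imaginary part in [0.0, 1.0].
REAL_RANGE = (-1.0, 0.0)
IMAG_RANGE = (0.0, 1.0)

def sample_batch(n):
    """Draw n parameters uniformly from the domain and compute their Julia images."""
    c_real = np.random.uniform(*REAL_RANGE, size=n)
    c_imag = np.random.uniform(*IMAG_RANGE, size=n)
    inputs = np.stack([c_real, c_imag], axis=1).astype(np.float32)
    targets = np.stack([julia_image(r, i) for r, i in zip(c_real, c_imag)])
    return inputs, targets[..., np.newaxis]   # add a channel axis for the model

model = build_generator()
model.compile(optimizer="adam", loss="binary_crossentropy")

# Train on one finite random set, evaluate on a different one from the same domain.
x_train, y_train = sample_batch(2048)
x_test, y_test = sample_batch(256)
model.fit(x_train, y_train, batch_size=32, epochs=20,
          validation_data=(x_test, y_test))
```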

Now let's generate some random images from the model.

Now let's compare the real (computed) images with the images generated by our model.
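One way to do this comparison, reusing the hypothetical sample_batch and model from the sketches above (the matplotlib layout and colormap are arbitrary choices):

```python
import matplotlib.pyplot as plt

# Pick a few random parameters from the domain and compare computed vs. generated images.
inputs, targets = sample_batch(4)
generated = model.predict(inputs)

fig, axes = plt.subplots(2, len(inputs), figsize=(12, 6))
for i, (c_real, c_imag) in enumerate(inputs):
    axes[0, i].imshow(targets[i, ..., 0], cmap="inferno")
    axes[0, i].set_title(f"computed, c = {c_real:.2f} + {c_imag:.2f}i")
    axes[1, i].imshow(generated[i, ..., 0], cmap="inferno")
    axes[1, i].set_title("generated")
    axes[0, i].axis("off")
    axes[1, i].axis("off")
plt.tight_layout()
plt.show()
```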

When the input comes from outside the domain

When the input is just a bit outside the domain range, we still get reasonable results.
However, the farther we move away from the training domain, the more accuracy we lose.
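To reproduce this behaviour, one can feed the model parameters slightly outside the training ranges (again reusing the hypothetical julia_image and model from the sketches above; the specific offsets are arbitrary):

```python
# Parameters outside the training domain (real > 0.0 or imag > 1.0).
out_of_domain = np.array([[0.05, 0.5],    # slightly past the real range
                          [-0.5, 1.1],    # slightly past the imaginary range
                          [0.5, 1.5]],    # far from both ranges
                         dtype=np.float32)
predictions = model.predict(out_of_domain)

# Compare each prediction with the true image; the error grows with the
# distance from the training domain.
for (c_real, c_imag), pred in zip(out_of_domain, predictions):
    true_img = julia_image(c_real, c_imag)
    mae = np.abs(true_img - pred[..., 0]).mean()
    print(f"c = {c_real:+.2f} {c_imag:+.2f}i  mean abs error: {mae:.4f}")
```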
