Online Time Series Deep Dreaming
Gradient ascent step size.
Number of consecutive scales at which to run gradient ascent. We first scale the base series down to the lowest scale and run gradient ascent there. Next, we scale the resulting series up to the next higher scale and run gradient ascent again. We continue until we reach the highest scale, which is just the original size of the time series.
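The multi-scale schedule described above can be sketched as plain Python. Everything here is hypothetical scaffolding, not the app's actual code: `resize` stands in for whatever interpolation the tool uses, and `gradient_ascent` is a placeholder for the real optimization at one scale.

```python
# Hypothetical sketch of the multi-scale ("octave") schedule described above.

def resize(series, new_len):
    """Linearly interpolate a 1-D series to new_len samples (sketch)."""
    if new_len == 1:
        return [series[0]]
    n = len(series)
    out = []
    for i in range(new_len):
        pos = i * (n - 1) / (new_len - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(series[lo] * (1 - frac) + series[hi] * frac)
    return out

def gradient_ascent(series, steps, step_size):
    """Placeholder: the real version would follow the gradient of the
    layer-activation loss; here we just nudge every value."""
    return [x + steps * step_size for x in series]

def deep_dream(series, num_scales=3, scale_ratio=1.4, steps=20, step_size=0.01):
    """Run gradient ascent from the smallest scale up to the original size."""
    base_len = len(series)
    for i in reversed(range(num_scales)):
        # Scale i = 0 is the original size; higher i means smaller.
        cur_len = max(2, round(base_len / scale_ratio ** i))
        series = resize(series, cur_len)
        series = gradient_ascent(series, steps, step_size)
    return series
```

The last iteration uses `cur_len == base_len`, so the result has the original length of the time series.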
Size ratio between scales.
Number of gradient ascent steps per scale.
If the loss exceeds this value during a scale, the remaining steps at that scale are skipped and we proceed to the next scale.
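In code, this early-exit behavior amounts to checking the loss before each step. The helper below is a hypothetical sketch; `loss_fn` and `grad_fn` stand in for the model's actual loss and gradient.

```python
def ascent_with_max_loss(series, steps, step_size, max_loss, loss_fn, grad_fn):
    """Run up to `steps` gradient ascent steps at one scale, but stop early
    once the loss exceeds max_loss (sketch of the option described above)."""
    for _ in range(steps):
        if loss_fn(series) > max_loss:
            break  # skip the rest of this scale
        series = [x + step_size * g for x, g in zip(series, grad_fn(series))]
    return series
```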
Each line adds one layer whose L2 activation is maximized, together with a relative weight. Because the sum of the squared activations of the layer's neurons is maximized, a few high activations win over many small ones.

dense_1 0.5 maximizes the layer dense_1 with weight 0.5.
dense_1 #0:1 #3:1.5 maximizes only neurons 0 and 3 of the layer dense_1, with weights 1 and 1.5, respectively.

Note that depending on the shape of the layer, you may need more than one coordinate to identify a single neuron. For example, #5,8:1.5 assigns weight 1.5 to neuron 5,8.
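The resulting objective can be sketched as a weighted sum of squared activations. This is a minimal illustration, assuming activations have already been extracted into plain lists; the parsed form of each settings line is shown as a dict.

```python
def activation_loss(activations, spec):
    """Hypothetical loss matching the layer settings above.

    activations: dict mapping layer name -> list of neuron activations.
    spec: dict mapping layer name -> either a single weight (whole layer)
          or a dict {neuron_index: weight} (selected neurons only).
    Returns the weighted sum of squared activations to be maximized.
    """
    total = 0.0
    for layer, weight in spec.items():
        acts = activations[layer]
        if isinstance(weight, dict):
            # e.g. "dense_1 #0:1 #3:1.5" parsed as {"dense_1": {0: 1.0, 3: 1.5}}
            for idx, w in weight.items():
                total += w * acts[idx] ** 2
        else:
            # e.g. "dense_1 0.5" parsed as {"dense_1": 0.5}
            total += weight * sum(a ** 2 for a in acts)
    return total
```

Because each activation enters squared, one neuron firing at 3 contributes as much as nine neurons firing at 1, which is why a few high activations dominate.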
Made with ♥ by Felix Mujkanovic
Code is available on