While I wait to get my hands on DeepMind’s WaveNet, I’ve been experimenting a bit with the parameters of neural-style.
Here are two images that use the same style and content image; in the first, the result is initialized randomly, while in the second it is initialized from the content image:
The largest difference appears to be the lamp; the style image didn’t have any areas bright enough to fill that in on its own, but when the result starts from the content image, that brightness is preserved.
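For reference, in jcjohnson’s neural-style this choice is controlled (if I’m remembering the option correctly) by the `-init` flag: `-init random` versus `-init image`. Conceptually it’s just a different starting point for the optimization. Here’s a minimal PyTorch sketch of the idea; the helper name `init_generated_image` is mine, not from the repo, which is actually Lua/Torch:

```python
import torch

def init_generated_image(content, mode="random"):
    """Pick the starting point for the style-transfer optimization.

    mode="random": start from Gaussian noise.
    mode="image":  start from a copy of the content image, which is why
                   bright regions like the lamp survive in the result.
    """
    if mode == "image":
        x = content.clone()
    else:
        x = torch.randn_like(content)
    # This tensor is what the optimizer (L-BFGS or Adam) updates directly.
    return x.requires_grad_(True)
```

Everything else about the optimization is identical; only the starting pixels change, so whatever the style loss can’t overwrite tends to stick around.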
The next experiment was merging styles together. First are the results of the individual style images: one has the best texture, one gets the red of the barn, and one gets the red sky.
And here’s the combination of all three:
So it looks as though, with multiple style images, in addition to creating an odd fusion of the painting styles, the algorithm also pulls appropriate colors from the different style images to fill in areas that the others may not cover (see the blue sky and the red barn).
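My understanding of how multiple style images are handled (neural-style accepts a comma-separated `-style_image` list, plus optional `-style_blend_weights`, which I believe default to equal) is that the style loss simply becomes a weighted sum of Gram-matrix losses, one term per style image. A rough PyTorch sketch of that idea, with hypothetical helper names (`gram`, `multi_style_loss`) rather than the actual Lua/Torch code:

```python
import torch
import torch.nn.functional as F

def gram(feat):
    # feat: (C, H, W) feature map from one VGG style layer
    c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def multi_style_loss(gen_feats, style_feats_per_image, blend_weights):
    """Weighted sum of Gram-matrix losses over several style images.

    gen_feats:             list of generated-image feature maps, one per style layer.
    style_feats_per_image: one list of feature maps (same layers) per style image.
    blend_weights:         relative contribution of each style image.
    """
    loss = 0.0
    for w, style_feats in zip(blend_weights, style_feats_per_image):
        for g, s in zip(gen_feats, style_feats):
            loss = loss + w * F.mse_loss(gram(g), gram(s))
    return loss
```

If that’s what’s happening under the hood, it would explain the color borrowing: with roughly equal weights, the optimizer is free to satisfy whichever style term best matches a given region, so the sky can lean on one style image and the barn on another.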