Deep learning networks are famous for their ability to detect cats in images. Advances in computer vision, particularly the application of Convolutional Neural Networks (CNNs), have yielded exciting progress in image classification. CNNs classify images and identify the objects they contain, essentially translating pixel values into information about what is in the image. There are typically many layers between the pixel values and the output, and these layers can also be used to characterise the style of an image: early layers tend to identify lines or colours, whereas later layers identify more complex objects and compositions.
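To make the "early layers identify lines" idea concrete, here is a minimal, illustrative sketch: a hand-written edge-detection kernel (a Sobel-style filter) applied by plain 2-D convolution. Real CNNs learn such filters from data rather than having them hard-coded; the function and image below are invented for illustration only.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no padding) 2-D cross-correlation with a single kernel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Sum of the element-wise product over the sliding window.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-style kernel that responds strongly to vertical edges --
# the kind of pattern early CNN layers tend to learn on their own.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# A toy image: dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

response = conv2d(image, sobel_x)
print(response)  # non-zero only where the window straddles the edge
```

Flat regions of the image produce zero response; only windows that straddle the dark/bright boundary activate the filter, which is exactly the "line detector" behaviour described above.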
Combining the features generated when two images pass through a CNN allows a principal "content" image to be blended with the style of another image. Content and style are weighted, and the algorithm iterates through numerous passes to bring the generated image into alignment with both. The style of an image is derived by comparing the filters of the convolutional channels and the correlations between them to produce Gram matrices. Further details on the approach and specification can be found here.
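The Gram-matrix idea can be sketched in a few lines of numpy. This is an illustrative toy, not a full implementation: the random arrays below stand in for one convolutional layer's activations, and the function names are our own.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between the channels of one conv layer's activations.

    features: array of shape (channels, height, width).
    Returns a (channels, channels) Gram matrix, normalised by the
    number of spatial positions so image size drops out.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # channel-vs-channel correlations

def style_loss(gram_generated, gram_style):
    """Mean squared difference between two Gram matrices."""
    return float(np.mean((gram_generated - gram_style) ** 2))

# Toy activations standing in for a CNN layer's output (8 channels).
rng = np.random.default_rng(0)
generated = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))

g_gen, g_sty = gram_matrix(generated), gram_matrix(style)
print(g_gen.shape)                       # (8, 8)
print(style_loss(g_gen, g_gen))          # 0.0 -- identical style
print(style_loss(g_gen, g_sty) > 0.0)    # True -- styles differ
```

Because the Gram matrix discards where in the image each feature fired and keeps only which features fire together, it captures texture and style rather than layout; the optimisation then nudges the generated image so its Gram matrices (weighted against a separate content term) match those of the style image.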
We have been experimenting with various content and style images over the 2017 holiday period. Although the commercial value of such image generation is difficult to quantify (as with traditional art), the neural style transfer approach allows AI to generate amazing new, vivid images by combining a "content" image and a "style" image. We have posted various examples to the Algospark Neural Style Transfer Art gallery, which can be found here:
The implication is that existing large libraries of content and style images can be combined using AI to generate exciting new libraries of computer-generated art.