What we saw some years ago in movies seems to be happening in real life now. Machine learning, a subset of artificial intelligence, teaches computers to learn from experience, and it improves day by day: machines learn without being explicitly programmed. The key to artificial intelligence is in its training. Training an AI means teaching it which output it should return when we give it certain input data.
But a machine working autonomously is one thing; a machine collaborating with a human being, used as an instrument to amplify artistic processes, is another. In this case, not only learning algorithms are needed, but creative ones too, something that until now had only been possible for human beings. Believe it or not, the first artistic piece generated by an artificial intelligence, developed by a French team under the name Obvious, was auctioned for 432,500 dollars. Its creators estimated that it would sell for about 10,000 dollars, but that figure was multiplied more than 40 times.
Machines capable of composing a tune or painting a picture are the result of research ranging from the study of the human mind and its creative processes to the design of systems capable of replicating the cognitive mechanisms of the brain. In the case of Obvious, their system was based on two algorithms: one that analyzed a total of 15,000 historical portraits painted between the fourteenth and twentieth centuries and learned to produce new images, and another that was responsible for making the machine distinguish between a real painting and an artificially created one.
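The two-algorithm setup described above can be sketched as an adversarial pair: a generator that produces candidate images and a discriminator that scores how likely a sample is to be real. This is a minimal toy illustration in NumPy, not Obvious's actual system; the network shapes, the 4-pixel "images" and the random stand-ins for the portrait data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Generator": maps random noise to a fake sample (here, a 4-pixel image).
W_g = rng.normal(size=(2, 4))
def generator(z):
    return np.tanh(z @ W_g)

# "Discriminator": scores how likely a sample is to be a real painting.
w_d = rng.normal(size=4)
def discriminator(x):
    return sigmoid(x @ w_d)

# Real samples would be the 15,000 historical portraits; here, random stand-ins.
real = rng.normal(loc=1.0, size=(8, 4))
fake = generator(rng.normal(size=(8, 2)))

# Adversarial objectives: the discriminator tries to tell real from fake,
# while the generator tries to make the discriminator accept its fakes.
eps = 1e-9
d_loss = -np.mean(np.log(discriminator(real) + eps)
                  + np.log(1.0 - discriminator(fake) + eps))
g_loss = -np.mean(np.log(discriminator(fake) + eps))
print(float(d_loss), float(g_loss))
```

In a real training loop, both losses would be minimized alternately by gradient descent, pushing the generator's images ever closer to the distribution of the historical portraits.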
Experiments with style transfer are also among the many applications machine learning finds within art. But what is it exactly? Style transfer recomposes one image in the style of another. The algorithm takes three images: one for content, one for style and one target. The target image is the blank canvas on which the work is created: a completely new image that represents the content of the first image rendered in the style of the second. The technique relies on a deep neural network that creates artistic images of high perceptual quality. Created by Leon Gatys, Alexander Ecker and Matthias Bethge, the system uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.
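The separation of content and style can be sketched with the losses used in this family of methods: a content loss that compares feature maps directly, and a style loss that compares Gram matrices of those features. The NumPy arrays below are stand-in "feature maps" with illustrative shapes; a real system would extract them from a pretrained convolutional network and optimize the target image's pixels, and the weights alpha and beta are tuning assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gram(features):
    # features: (channels, height*width). The Gram matrix records which
    # channels co-activate, capturing texture ("style") while discarding
    # the spatial layout that carries the content.
    return features @ features.T / features.shape[1]

C, HW = 3, 16
content_feats = rng.normal(size=(C, HW))   # from the content image
style_feats   = rng.normal(size=(C, HW))   # from the style image
target_feats  = rng.normal(size=(C, HW))   # from the image being generated

# Content loss: keep the target's features close to the content image's.
content_loss = np.mean((target_feats - content_feats) ** 2)
# Style loss: match the target's Gram matrix to the style image's.
style_loss = np.mean((gram(target_feats) - gram(style_feats)) ** 2)

# Weighted total objective; gradient descent on the target image would
# minimize this, blending one image's content with the other's style.
alpha, beta = 1.0, 100.0
total_loss = alpha * content_loss + beta * style_loss
print(float(total_loss))
```

Raising beta relative to alpha favors the style image's textures over the content image's layout, which is how these systems trade fidelity against stylization.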
In the market there are already hundreds of applications that generate these kinds of results and make machine learning easier to apply. Magenta, for example, is based on the development of new deep learning and reinforcement learning algorithms to synthesize songs, images or sketches, focusing machine learning on creating works of art and music.
In many cases beauty or coherence matter least. Above all, this is a celebration of technique and an artistic exploration of its results, so the practice extends across different fields of the visual arts. In this context we begin to speak of "computational creativity" to refer to the study of software whose behavior and results can be considered creative.
The possibilities that open up are almost endless, and in recent times the development of computational creativity software has grown exponentially. So, are machines capable of creating and of feeling emotion? Are the works they create authentic in themselves, or are they simply a remixed blend of other works? At the very least, artists now have an additional creative tool at their disposal, one with which they can even collaborate, and the applications that simplify machine learning keep multiplying quickly. In the future, perhaps the originality of AI algorithms will match that of humans in all areas.