Google held an art auction last week in San Francisco for works created by its algorithms, The Wall Street Journal reported. The most expensive image at the auction fetched $8,000, and all proceeds from the event went to the Gray Area Foundation.
Computer-generated algorithmic art
Each image was printed on paper, and some even mimicked the styles of artists like Vincent van Gogh. The images included psychedelic seascapes, van Gogh-inspired forests, and fantastical landscapes of castles and dogs. The auctioned art was created by 11 artists, including some who are engineers. They used variations of Google’s DeepDream code, which the company open-sourced last summer. Google’s art show, which displayed works created by computers with some guidance from humans, was a hit.
Blaise Aguera y Arcas, head of Google’s machine intelligence group in Seattle, said, “I used to think art was some peculiar thing that humans do but now I think when we meet the aliens, they’ll have something just like it.”
Examples of computer-generated algorithmic art go back to the 1960s, so this is not the first time algorithms and art have collided.
Google joins math with art
Galleries and art fairs in Silicon Valley have struggled because tech entrepreneurs are not typically big art collectors. The tech industry is filled with engineers who are hardly interested in art, unless there is a mathematical angle to it. But now there is.
Google engineers applied an algorithm to the artistic process, giving art quantitative and mathematical attributes. Computers, with the help of humans, used a technology called neural networks to make 29 works of art for the show. The technology was originally designed to help identify objects in photos, but engineers, for art’s sake, completely revamped the concept by feeding images of random shapes into the computer algorithms, which then reported what objects the images looked like.
Google engineers and independent artists then used four techniques: DeepDream (running the process several times to create a unique image); Fractal DeepDream (running the same process on different-sized versions of the image, producing fractal patterns); Class Visualization (zeroing in on a single object class); and Style Transfer (creating new images in a famous aesthetic, for example, Vincent van Gogh’s Starry Night).
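The core DeepDream idea described above — a network reports what it "sees" in an image, and the pixels are then nudged so that pattern grows stronger — can be sketched in a few lines. The code below is a toy illustration, not Google's implementation: a single hand-written edge-detecting convolution stands in for one layer of a trained network (the real system used a deep network such as Inception), and gradient ascent amplifies that layer's response.

```python
# Toy sketch of the DeepDream loop: repeatedly adjust pixels by gradient
# ascent so a chosen layer's activation grows. All names and the tiny
# "network" (one edge filter) are illustrative assumptions.
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def dream_step(img, kernel, lr=0.1):
    """One gradient-ascent step on the mean squared activation."""
    act = convolve2d(img, kernel)          # how strongly the layer responds
    grad = np.zeros_like(img)
    kh, kw = kernel.shape
    # Back-propagate d(mean(act**2))/d(pixel): scatter 2*act*kernel over
    # the pixels each output position touched.
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            grad[i:i + kh, j:j + kw] += 2 * act[i, j] * kernel / act.size
    # Normalized step, as DeepDream-style loops commonly do.
    return img + lr * grad / (np.abs(grad).max() + 1e-8)

rng = np.random.default_rng(0)
img = rng.random((32, 32))                 # "random shapes" as the input
edge = np.array([[1., -1.], [1., -1.]])    # toy vertical-edge filter

before = np.mean(convolve2d(img, edge) ** 2)
for _ in range(50):
    img = dream_step(img, edge)
after = np.mean(convolve2d(img, edge) ** 2)
# After the loop, the image "contains more" of the pattern the filter
# detects, which is what produces DeepDream's hallucinated textures.
```

Running the same loop on progressively rescaled copies of the image corresponds to the Fractal DeepDream variant mentioned above, and maximizing one output class instead of a layer activation corresponds to Class Visualization.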