[Wild things from machineland. I don’t know how artist Eric Wayne got hold of all these examples of what Google’s Deep Dream has been, well, dreaming up, but it’s the best collection I’ve seen of these renditions from the multi-layered artificial neural network. I’m glad Google is keeping out of the evil business (Mr. Robot’s Evil Corp. is some other place, innit?). One of my dreams has been a device that digests the day’s moving imagery from the net and spits out a dream that one could view as a video. A condensed version of the day as experienced through internet video. Our commercial collective dream. Then I wouldn’t have to type stuff in and waste all sorts of time on the net. I could just watch a 90-minute D-cycle and that would be it for the day.]
Google developed an artificial neural network to interpret imagery. At first I was modestly intrigued, but now I’m starting to see the threat it poses. Essentially, it’s a filtering process that starts with easy stuff like detecting edges, then shapes, and moves on to identifying what is being seen. The neural networks are honed by exposing them to large volumes of images of animals, trees, or buildings, so that they get better at picking out the common features of those subjects. The aim was for a computer to look at an image and be able to say something like, “That’s a golden retriever”, or “That’s John Connor…”.
That is already interesting, but the Google team discovered that the process could be reversed so the neural networks would generate images of animals, trees, and so on, by asking them to enhance the features they were looking for. The result…
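For the curious, that “reversal” can be sketched in a few lines: instead of using a filter to detect a feature, you nudge the image so the filter’s response grows — gradient ascent on the activation. This is a toy illustration of the idea, not Google’s actual Deep Dream code; the single hand-written edge kernel and all the names here are my own illustrative assumptions (the real thing climbs activations deep inside a trained network).

```python
import numpy as np

def correlate2d(img, k):
    """Valid cross-correlation of img with a 3x3 kernel k (the 'detection' step)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

def dream_step(img, k, lr=0.01):
    """One gradient-ascent step that boosts the filter's squared response."""
    r = correlate2d(img, k)                 # what the "layer" sees
    grad = np.zeros_like(img)
    for i in range(r.shape[0]):             # gradient of sum(r**2) w.r.t. the image
        for j in range(r.shape[1]):
            grad[i:i+3, j:j+3] += 2 * r[i, j] * k
    return img + lr * grad                  # enhance whatever the filter looks for

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))         # start from noise
edge = np.array([[-1., 0., 1.],
                 [-2., 0., 2.],
                 [-1., 0., 1.]])            # Sobel-like vertical-edge detector

before = np.sum(correlate2d(img, edge) ** 2)
for _ in range(20):
    img = dream_step(img, edge)
after = np.sum(correlate2d(img, edge) ** 2)
print(after > before)                       # the image now "contains more" edges
```

Run enough steps with a filter tuned to dog faces instead of edges, and the image starts sprouting dog faces everywhere — which is exactly the hallucinatory look of the pictures in Wayne’s collection.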