A New Scene for Deep Dream

What happens when you feed an image – in particular a photograph – through a network that is essentially dreaming inside an artificial brain? Most people’s first response is “what the hell am I looking at?” Without a proper explanation, I suppose it does not seem as amazing as it truly is – so I will do my best to explain below.
[Images: trains, trains4]
We’re all familiar with the dream world – we have numerous dreams every night we lay our heads down to sleep, whether we remember them or not – but training an artificial brain to think the same way is a whole new piece in the ever expanding artificial intelligence puzzle. Only recently has this piece been explored further, when Google open-sourced an artificial neural network that analyzes images pixel by pixel, layer by layer, pulling distinctions from a trained memory – similar to how we dream. Essentially, the network layers these trained memories over the input photo, revealing what it believes to actually reside within the image – artificial, dream-like hallucinations. Far out. So my friend and I have recently been looking into this world, and for the past few days I have been feeding images into the network we have set up.
[Images: spectre4, pooltable5]
An artificial neural network is an interconnected group of nodes, very similar to the massive network of neurons inside a human brain. Recently, we have been attempting to teach these artificial networks to process, understand and interpret images in some creatively different ways – again, very much like the shifting thoughts of a human brain. By sending an image through a series of processing steps in code, the network will attempt to visualize images (the network’s memories) within the image we have given it. When we max out some of the parameters in this code to wild numbers, we begin seeing some really crazy stuff. By giving it constraints, we can alter what it “sees,” and by feeding it different databases of imagery – the database being what holds the set of trained images – the results can keep expanding from one database to the next.
For example – http://image-net.org/challenges/LSVRC/2014/browse-synsets
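To make that a bit more concrete, here is a minimal sketch – assuming Python with PyTorch/torchvision, not the exact code we are running – of loading a network pre-trained on the ImageNet database linked above and asking what it “sees” in one of our photographs. The file name “photo.jpg” is just a placeholder.

```python
# A minimal sketch (not our exact code) of loading a network pre-trained
# on the ImageNet database linked above and asking what it "sees" in a
# photograph. "photo.jpg" is just a placeholder file name.
import torch
from torchvision import models, transforms
from PIL import Image

# GoogLeNet is the same family of network Google's "dreaming" code used.
net = models.googlenet(weights="IMAGENET1K_V1")
net.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    scores = net(img).softmax(dim=1)

# The top guesses come entirely from the training database -- swap the
# database (the weights) and the network "sees" different things.
print(torch.topk(scores, k=5))
```

Swap in weights trained on a different database of imagery and the same photo produces different guesses – the “memories” come entirely from the training set.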
In recent weeks, the progress made in image classification using these artificial networks has been remarkable. The network is trained by showing it millions of training examples, such as those in the image database I mentioned earlier, and its parameters are gradually adjusted until it gives the desired classifications. The network itself is built from 10-30 stacked layers of artificial neurons. When an image is fed into the input layer, that layer passes its results to the next, and so on, until the final layer produces an output – in this case an image of bizarre hallucination, a “dream sequence.” Each layer extracts very minuscule bits and pieces in an attempt to recognize familiar objects. The first few layers could analyze down to something as small as individual pixels, gradually making their way to edges and corners. A few layers further in, the network begins attempting to recognize actual objects from images it has been trained with (leaves, cats, doors), while the final few layers assemble those into full interpretations. These final layers of neurons may begin recognizing things far more complex, such as whole buildings, forests, crowds of people, and entire detailed faces.
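For a rough feel of those stacked layers, the sketch below (same torchvision assumption) simply lists the modules of a pre-trained GoogLeNet – the family of network Deep Dream was built around – from the early convolutions that respond to edges through the later inception blocks that respond to whole objects.

```python
# A rough peek at the stacked layers described above, using torchvision's
# pre-trained GoogLeNet as a stand-in for the network we opened.
from torchvision import models

net = models.googlenet(weights="IMAGENET1K_V1")

# Early modules (conv1, conv2...) pick out pixels, edges and corners;
# the later inception blocks respond to leaves, cats, doors and faces.
for name, module in net.named_children():
    print(name, "->", module.__class__.__name__)
```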
[Images: peel, peel8, overgrowth, overgrowth3]
This gives us the ability to hand the network a command (banana, bowtie, cat, camera) and let it search its own artificial brain (or database), piecing together what it knows that specific keyword to be or look like. While these networks were initially trained to understand and differentiate between all kinds of images using this same general process, they also hold enough information in these “memories” to actually generate their own images as well. Interestingly enough, you can even feed an image of randomly generated noise through the network, and it will build up, layer by layer, on these randomly scattered pixels to create a pure visualization – again, very similar to you and me in the dream state.
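As an illustration of that noise-to-image idea, here is a hedged sketch (again PyTorch/torchvision, with an untuned step size and iteration count, and proper ImageNet normalization omitted for brevity): start from pure random pixels and repeatedly nudge them so that one class the network “remembers” – here the standard ImageNet class for “banana” – scores higher and higher.

```python
# A hedged sketch of building an image up from random noise: nudge the
# pixels so that one class the network "remembers" (ImageNet class 954,
# "banana") scores higher and higher. Step size and iteration count are
# illustrative, and ImageNet normalization is omitted for brevity.
import torch
from torchvision import models

net = models.googlenet(weights="IMAGENET1K_V1").eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # pure random pixels
target_class = 954                                     # "banana"

for _ in range(200):
    net.zero_grad()
    score = net(img)[0, target_class]
    score.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()
# "img" now holds the network's own visualization, grown out of noise.
```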
The method I have gone with here is to take images and simply feed them through the network without any specific command other than “whatever you see, just paint a picture with your artificial imagination.” Told to do this, the network creates a feedback loop. Where it sees an eyeball, it will build, pixel by pixel, to make it appear even more like an eyeball – where it sees a dog or a bird, it will do just the same, until we are left with a world of artificial hallucination brought to life within the photograph. As you will notice, this network was trained mostly on images of animals, vehicles, people and a few various objects. Many people have been wondering why so many faces of dogs seem to appear – this is simply because of the database the network was trained on. So what will happen if we implement our own database? Every photo I have ever taken, loaded into a database, training the network in its entirety – what would come of this? The only way to find out is to build it.
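Below is a bare-bones sketch of that feedback loop, under the same PyTorch/torchvision assumptions; the layer choice, step size, iteration count and file name are placeholders, not our exact settings. Whatever the chosen layer already responds to in the photo gets amplified, step by step, back into the pixels.

```python
# A bare-bones sketch of the feedback loop described above. Whatever the
# hooked layer already "sees" in the photo gets exaggerated back into the
# pixels with each pass. Settings here are placeholders, not tuned values.
import torch
from torchvision import models, transforms
from PIL import Image

net = models.googlenet(weights="IMAGENET1K_V1").eval()

activations = {}
def save_activation(module, inputs, output):
    activations["layer"] = output

# Hook a mid-level layer; deeper layers bring out dogs and eyes,
# shallower ones bring out textures and strokes.
net.inception4c.register_forward_hook(save_activation)

to_tensor = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])
img = to_tensor(Image.open("abandoned.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(30):
    net.zero_grad()
    net(img)
    loss = activations["layer"].norm()  # "whatever you see, make it more so"
    loss.backward()
    with torch.no_grad():
        img += 0.02 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()
```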
[Images: optable, optable4]
There has been heavy discussion of whether neural networks like this one could become a tool for artists – a new way to remix visual concepts – or even allow us to begin truly understanding where the creative process starts and how it actually works. As for becoming a tool for artists, we plan to expand heavily on that, as far as we can reach.
[Images: morgue, morgue3, mess, mess4]
If we can alter this algorithm, training it to recognize and know the things I myself have captured photographically over the years, the results could be abstract, strange, surreal and…just way out there – a database full of abandoned, gloomy spaces and objects thrown into a virtual dream world and overlaid onto the images fed to the artificial brain.
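Purely as a sketch of that idea – none of this exists yet – retraining could look something like the following, where “my_abandoned_photos/” is a hypothetical folder of my own images sorted into subfolders by subject, and the class count, epochs and learning rate are placeholders.

```python
# Speculative sketch of the "own database" idea: fine-tune the same kind
# of network on a folder of my photographs so its "memories" come from
# abandoned spaces instead of ImageNet. Paths and settings are placeholders.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

data = datasets.ImageFolder(
    "my_abandoned_photos/",          # hypothetical folder, one subfolder per subject
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

net = models.googlenet(weights="IMAGENET1K_V1")
net.fc = nn.Linear(net.fc.in_features, len(data.classes))  # new "memories"

opt = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

net.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(net(images), labels)
        loss.backward()
        opt.step()
```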
This “deep dream” appears psychedelic for the same reason we hallucinate, be it from LSD, DMT, the dream state or lack of sleep: the system is given the freedom to follow the impulse of any recognizable imagery and exaggerate it in a self-reinforcing loop. Some believe this may become a perfect artificial representation for even further study of how the human brain reacts to LSD, DMT, or simply being put into the dream state. The experience can be nearly impossible to put into words, but if we can visualize it through code, that’s a very good start. DMT – known as “the spirit molecule” – is a psychedelic tryptamine that resides naturally in the human body at low levels, coursing through our brains, practically aiding in the creation of our imagination and even helping to push the creative process along. In 2011, Rick Strassman, MD, deemed DMT to be a necessary component of normal brain function. When smoked, however, pure DMT will practically submerge you into a world where you begin to question what is real, what is not real, who you are, what language is, and much more. (No, I’m not saying you should smoke DMT, but there has been plenty of research done with it, which needs a little bit of explaining here.)
ivy
ivy4
hole
hole3
Terence McKenna once described the effects of smoking DMT:
“The reason it’s so confounding is because its impact is on the language-forming capacity itself. So the reason it’s so confounding is because the thing that is trying to look at the DMT is infected by it—by the process of inspection. So DMT does not provide an experience that you analyze. Nothing so tidy goes on. The syntactical machinery of description undergoes some sort of hyper-dimensional inflation instantly, and then, you know, you cannot tell yourself what it is that you understand. In other words, what DMT does can’t be downloaded into as low-dimensional a language as English.”
A huge problem in studies is how quickly a DMT trip fades away. The effects cease almost instantly, leaving you sitting there basically “forgetting” most of what had just happened, making it nearly impossible to truly explain. McKenna believed that DMT “might have a role in dreaming,” because its effects fall away the same way a dream does. Perhaps, through this artificial intelligence, we may come to understand all of this a great deal more deeply.
[Images: hall, hall4, growth2, gasmask, gasmask3]
In these photographs, where we see an absence of life, the network’s artificial brain may see people, animals, eyes and objects that aren’t even there. Through this algorithm, objects transform into odd, biomorphic shapes, eventually morphing into detailed animals, strange faces, insects, birds and more. Colors and contrast become vibrant in places that are desaturated and devoid of life. People appear in spaces, almost as though the computer sees them grouped along a downtown strip, conversing, driving cars or even playing instruments. Animals not only appear, but also morph together. Intense color schemes, sporadic lines and shapes converge, creating an image nothing short of a hallucinogenic dream world. This process gives us an incredibly unique and bizarre view of strange life seemingly tucked away within the pixels of these photos – a glimpse into how the network’s brain interprets abandoned scenery fed into its artificial dream.
[Images: greenhouse, greenhouse6, dress, dress4]
I have compiled a collection of images of abandoned spaces, which I have run through this code. There will be further updates and more news as my friend and I continue to work on this growing project. Sit tight, because it’s going to be a pretty wild and trippy ride.
