Gene Kogan | Densecap Deepdream, 2016 | 23.02.–02.06.2019 | Fotomuseum Winterthur

SITUATION #166

Densecap Deepdream, video, 3:11 min., 2016 © Gene Kogan
Gene Kogan, Densecap Deepdream, 2016, SITUATION #166, SITUATIONS/Photo Text Data, installation view at Fotomuseum Winterthur, 2019 © Philipp Ottendörfer

Dense captioning is a technique in the field of machine vision in which computers detect objects in images and describe them in language. Developed by Justin Johnson, Andrej Karpathy and Li Fei-Fei at the Stanford Computer Vision Lab, the DenseCap system employs several types of machine learning algorithms to select regions of an image and generate labels for the objects that a model trained on a dataset of roughly one hundred thousand pictures is able to recognise. In Densecap Deepdream, Gene Kogan applies this technique to the hallucinatory visuals of a recursive “deepdreaming” algorithm, a method developed by Alexander Mordvintsev, Chris Olah and Mike Tyka at Google to visualise how neural networks function and what each layer detects in a source image. By setting these two parallel neural processes in dialogue and competition with each other, Kogan stages a seemingly endless epistemological struggle that visualises some of the current processes by which “seeing machines” are taught to make sense of images. Caught between the text labels of dense captioning and the nightmarish images of a deepdreaming algorithm that zooms in endlessly, creating ever new surreal and oneiric visions, we end up in a short circuit that reveals the hurdles and dangers of machine vision and its applications.
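To give a rough sense of the “deepdreaming” half of the work, the sketch below performs the kind of activation maximisation described above: an image is repeatedly nudged so that it further excites a chosen layer of a pretrained convolutional network, amplifying whatever that layer has learned to detect. It is a minimal illustration under stated assumptions, not Kogan’s actual code; the use of PyTorch, the VGG16 model, the layer index and the step size are all choices made for the example, and Kogan’s piece additionally zooms into the frame between steps to produce the endless inward motion described above.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Pretrained convolutional feature extractor (illustrative choice of model).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()

def deepdream_step(img, layer_idx=20, lr=0.01):
    """One gradient-ascent step: amplify what the chosen layer responds to."""
    img = img.clone().detach().requires_grad_(True)
    x = img
    for i, module in enumerate(vgg):
        x = module(x)
        if i == layer_idx:
            break
    # Maximise the L2 norm of the selected layer's activations.
    loss = x.norm()
    loss.backward()
    with torch.no_grad():
        # Normalised gradient-ascent update on the image itself.
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

preprocess = T.Compose([T.Resize(448), T.ToTensor()])
img = preprocess(Image.open("input.jpg")).unsqueeze(0).to(device)
for _ in range(50):  # each step feeds on the output of the previous one
    img = deepdream_step(img)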

More by Gene Kogan: genekogan.com