Unnamed

A German version of the following text is available as a PDF.


I can see you. You are hiding in patterns and forms. In familiar shapes and contrasts. Your eyes lurk behind wheel rims and in branches of trees. Your smiles are concealed in brindled and checkered surfaces.

Machines are watching. They detect our presence and recognize our faces with sophisticated algorithms. Are they paranoid to see us everywhere? Have we failed to teach them how to recognize their master?

unnamed is an art book with 114 photos from my personal archive. A white square indicates where the face detection algorithm in Apple's iPhoto application has found a face, just as in the user interface of the software. Every spread features at least one failure of the algorithm, adding up to a Facebook of false positives.

In the book, “faces” are laid out centered on the pages. This self-imposed layout rule creates odd framings, sometimes even cutting away the main subject of the image entirely. Constrained by the boundaries of the page, the visual compositions seem strangely arbitrary and distinctly non-photographic. The images follow an unemotional gaze, delivering an artificial and alienating look.

unnamed
Max Neupert
Self-published in 2013
75 numbered copies
232 pages, 140 × 179 mm, 115 photos
Cover design by Moritz Grünke @ büro pluspunkt
Printed by Laserline, Berlin


Collection of unnamed faces

I had been using the iPhoto software to manage my photo collection. It was bundled with OS X from 2002 until its replacement in 2015. In 2009, Apple added a feature to automatically recognize faces, allowing the user to tag them with names. I was fascinated by this function and noticed the many false positives of the face recognition algorithm. Curious about the phenomenon, I collected these images over a time span of more than ten years. Before finally migrating to Linux in 2013, I decided to publish them in an artist's book: unnamed.

Facial recognition has attracted the interest of many artists over the last years. Not only because of concerns over ubiquitous surveillance and control; it also has the aura of something mystical, like teaching the machine how to recognize its maker.

We are born with a visual apparatus that strives to recognize and evaluate faces and expressions quickly. This is not an evolutionary remnant but a vital function of our cognitive system. It is essential for feeling empathy; for deciphering whether something is a deadpan joke or an unsettling defamation; for sensing whether someone is telling a lie. Facial expressions need to be processed and understood, and our brain does this constantly and irrepressibly. People with autism have difficulty deciphering faces; they suffer from “face processing impairments” (Dawson). This hinders their ability to navigate social situations and interpersonal communication.

Moritz Wehrmann has studied the fascinating effect that we impulsively mimic facial expressions. In his installation Alter Ego, strobing lights illuminate two participants facing each other on either side of a semi-reflective sheet of glass, in alternation. The effect is stunning: in split-second intervals, each participant sees their own face and the face of the person opposite. When the two faces align, the facial expressions align too, almost instantly. An uncanny inner force drives us to imitate and impose on our facial muscles what our eyes see.


Once an image is digitized, turned into a grid of values, a computer can extract semantic meaning from it. It can try to detect: is there a face or not? It can try to recognize: whose face is it? It can try to classify: what is the gender, age, or race of this face? What is the facial expression? Which mood does this expression represent?
The typical camera is in fact a computer running an operating system. That is true not only for universal computing machines like smartphones, but also for point-and-shoot cameras, DSLRs, and webcams. The first cameras with face detection algorithms were the Nikon COOLPIX 7900, 5900 and 7600, introduced in 2005.


Sony CyberShot DSC-HX5 smile detection

The Sony Cyber-shot DSC-HX5 from 2010 has a mode that automatically takes a picture when the subjects in the frame smile. The smile threshold can be set in three steps: normal, medium and strong.


Sony Xperia Android phone face detection and recognition

Screenshots from a Sony Xperia phone from 2012, where face detection is used in the camera app to set the focus (left) and face recognition is used in the album app to create a database of individuals on the images (right). These features are not present in stock Android, they are part of Sony's proprietary modifications.


Samsung NX300 face detection autofocus

My Samsung NX300 detecting the face in the image in real time to adjust the focus. The camera, from 2013, runs the Tizen Linux operating system. In self-portrait mode, it can give audio signals when the face is in the center of the image. In “Best face” mode, it takes a sequence of images and lets the user choose the best facial expression for every person in the image, essentially creating a manipulated picture by compositing moments that never happened simultaneously.


Today, face detection is a ubiquitous technology. Face recognition is used for advertising, in surveillance and on pictures uploaded to social media. The British supermarket chain Tesco installed advertising screens near the cashier queues of their gasoline stations. A camera sees the faces of people waiting to be served. It sees whose eyes are looking at the screen and recognizes the gender, age group and potentially the race, in order to deliver fitting advertising on the screen.
The “EasyPass” system at German airport immigration reads the biometric data (a digitally stored image of the face) on the passport's RFID chip and compares it to the camera image of the person travelling with the passport. The aim is to verify in an automated process whether the passport belongs to the person travelling with it.

Surveillance and its associated technologies provoke a counter-movement of circumvention techniques and sousveillance activism: for example Adam Harvey's CV Dazzle, hair and makeup styles designed to defeat face detection, or projects mapping surveillance cameras in public space.

Why am I interested in false positives, the failures of computer vision? If a machine functions with 100% reliability it becomes a black box. We tend to stop thinking about machines we can rely on, taking them for granted. Only when they don't do what they are supposed to do do they attract our attention. Black boxes are machines whose inner workings have become obfuscated and opaque. The beauty of failures is that they reveal part of the inner workings, methods and strategies of the author(s) and the science behind them. A software glitch is a window into the algorithm, unveiling it and creating its own unintended appearance. For an artist, failures allow for an aesthetic appreciation of code. Another example of experimentation with algorithmic failure can be seen in my work Slidescapes.

Dafydd Hughes has also been inspired by the failing face recognition feature in iPhoto. He took Robert Frank's masterpiece of documentary street photography, The Americans from 1958, scanned it, and saved the images in iPhoto. When images are imported into iPhoto, they are automatically analyzed for faces, and the faces are matched to existing persons in the library. Hughes created a remix of the original book through the lens of the algorithm by compiling only the portraits which iPhoto had recognized, including false positives. Printed, his work becomes an artefact in the artist's book format. He named his version Every Face in The Americans — Faces from Photographs by Robert Frank selected by iPhoto. Hughes tries to dissect and guess how iPhoto's face detection works by looking at the results and at the file structure created in directories normally hidden from the user. He wonders why it can see the white baby but does not recognize the black nurse holding it. I have known Hughes personally since 2004, but only learned about this particular work after making unnamed.


Screenshot from Dafydd Hughes' thesis

Excerpt from Dafydd Hughes' thesis, illustrating the failure of the face recognition on faces of color (here additionally not facing the camera). Instead, the complex patterns of the bushes in the background are identified as facial features.


Does this mean a computer can be racist? Of course racism is antagonism based on belief. Since machines are capable of neither emotions nor opinions, it can be concluded that they are oblivious to racism. However, the people who design hardware and software may be racist, or work in an environment that comes preloaded with biases, leading to decisions that produce discriminatory products. Though this context might not be actively racist, it can easily lead to discrimination. Minorities are aware of these issues because they are affected by them. Technology with built-in barriers for the elderly or sexism in video games are just the most obvious examples.

Wanda Zamen and Desi Cryer, coworkers at Toppers Camping Center in Waller, Texas, discovered in 2009 that a specific feature of a laptop's webcam did not work for black people. The camera was supposed to zoom in on and follow a face, making videoconferencing more convenient. However, it only worked for Zamen, who is white, not for Cryer, who is black. They decided to make a video clip of it and put it on YouTube, where it got over 3 million views.


“HP computers are racist”, uploaded to YouTube in 2009 by Wanda Zamen and Desi Cryer.


It's not only computers that have problems with other races. Humans, too, have difficulty recognizing individuals of a different race (“those Asians all look the same”). It has been observed that Caucasians have a significantly better recognition rate with Caucasian faces, and Asians with Asian faces (Furl). Another study compared black Americans with Nigerians in their ability to recognize Caucasians and found that “significant positive relationships were found between performance scores and interracial experience” (White Carroo). It seems clear that this own-race bias has nothing to do with one's own race per se, but with socialization and exposure to other races. There is another crucial factor though: people look at the wrong features to recognize individuals of other races. [The other-race bias] “is a consequence of a failure of attention being directed to those features of other race faces that are useful for identification” (Hills). The good news is that the ability to recognize other-race faces can be trained by learning which features to look for. In a comparison between Europeans and Africans, “African subjects used a greater variety of clues” (Shepherd).

In the context of this work it can only be speculated why computer face detection algorithms discriminate against people of color. I assume that the developers were either Asian or Caucasian and developed their algorithms by testing on subjects from their peer groups. It is a case of systemic discrimination through a racial bias that was not intended as harmful, but whose consequences can be just as severe. This discrimination already happened in the era of analogue film. Kodak developed their film chemistry to be optimized for Caucasian faces. Their reference card, called the “Shirley card” after the name of the first model, depicted a white woman with the word “Normal” printed beside it. People of color would appear blacked out and unrecognizable on images taken with the film. This bias was so strong that Jean-Luc Godard refused to use Kodak stock for a shoot in Mozambique because he found Kodak film to be racist. Kodak recognized the problem and developed different emulsions that represented darker skin tones better. Only in 1996 did Kodak issue multiracial Shirley cards, with an Asian woman and a woman of color alongside the Caucasian model (for an excellent history of the Shirley card see Roth).

How does computer vision detect a face? In the history of machine vision and pattern recognition there have been various approaches with the goal of detecting and recognizing faces reliably, efficiently and robustly. Reliably means that the machine neither sees faces where there are none (false positives) nor misses any faces (false negatives). Efficiently (also called cheaply) means that the algorithm runs at minimal processing cost and delivers results quickly, ideally in real time, on high-resolution source images. Robustly refers to the recognition still functioning in situations that make it harder: low or colored light, over- or underexposure, shadows cast over parts of the face or partial occlusions, additions to the face like hats, scarves or glasses, non-frontal views, etc.
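To make “reliably” concrete, here is a minimal sketch (my own illustration, not taken from any of the systems discussed) of how false positives and false negatives can be counted by comparing a detector's output boxes against hand-labeled ground truth. The function names and the 0.5 overlap threshold are illustrative assumptions.

def iou(a, b):
    # Intersection over union of two (x, y, w, h) boxes.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def evaluate(detected, ground_truth, threshold=0.5):
    # Count true positives, false positives ("faces" where there are
    # none) and false negatives (real faces the detector missed).
    matched, tp = set(), 0
    for d in detected:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(d, g) >= threshold:
                matched.add(i)
                tp += 1
                break
    return tp, len(detected) - tp, len(ground_truth) - tp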

Most computer vision algorithms first try to reduce the amount of data they need to process by removing irrelevant or redundant information, so that the necessary calculations can be done faster. Face detection is no different. Resolution is reduced, color images are converted to grayscale, and sometimes edges are detected to get an image with only black-or-white information (a bitmap).
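As a rough sketch of this preprocessing stage, assuming OpenCV and an input file named photo.jpg (both illustrative choices; the scale factor and edge thresholds are arbitrary):

import cv2

img = cv2.imread("photo.jpg")
small = cv2.resize(img, None, fx=0.25, fy=0.25)  # fewer pixels to scan
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)   # drop the color channels
edges = cv2.Canny(gray, 100, 200)                # black-and-white edge bitmap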


Cascading rules in the paper by Kanade

Takeo Kanade published (with T. Sakai and M. Nagao) one of the first methods to detect faces in 1971. In their 1972 paper Computer Analysis and Classification of Photographs of Human Faces, the same authors describe a cascading system to detect a face. If the machine finds a certain shape typical for a face in the image, it continues to look for other features of the face in this region, until all required conditions are confirmed and it can be said with confidence that there is a face. If a condition is not met, the algorithm aborts and starts over in a different area of the picture.
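The cascading idea can be sketched in a few lines of Python. The three predicate functions below are hypothetical stand-ins, not the actual tests from the 1972 paper; the point is the control flow, where a failed test aborts the evaluation early.

def has_head_outline(region):   # hypothetical placeholder test
    return region.get("outline", False)

def has_eye_regions(region):    # hypothetical placeholder test
    return region.get("eyes", False)

def has_mouth_line(region):     # hypothetical placeholder test
    return region.get("mouth", False)

CASCADE = [has_head_outline, has_eye_regions, has_mouth_line]

def is_face(region):
    # all() stops at the first failed condition, so cheap tests can
    # reject most non-face regions before expensive ones run.
    return all(test(region) for test in CASCADE)

print(is_face({"outline": True, "eyes": True, "mouth": True}))   # True
print(is_face({"outline": True, "eyes": False, "mouth": True}))  # False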


The next step in the development of face detection algorithms was the use of Principal Component Analysis (Sirovich & Kirby, 1987) and the Eigenface (Turk & Pentland, 1991). The Eigenface method starts from a collection of face images, which are averaged into one mean image; this mean is then subtracted from each individual image, so that every image contains only its difference from the mean of all pictures. A face can then be described as a vector of weights over the principal components (the eigenfaces) of these difference images. An artistic impression of what a mean image of different faces looks like can be seen in Tillman Ohm's work Thoughts are free.
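A minimal NumPy sketch of the Eigenface idea, with random data standing in for a real face collection (image size, number of components and all variable names are illustrative):

import numpy as np

faces = np.random.rand(100, 64 * 64)   # 100 flattened 64x64 face images
mean_face = faces.mean(axis=0)
centered = faces - mean_face           # each row: difference from the mean

# The rows of Vt are the eigenfaces, ordered by explained variance.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:16]                   # keep the 16 strongest components

def describe(face):
    # Project a face into eigenface space: a 16-dimensional weight vector.
    return eigenfaces @ (face - mean_face)

# Recognition then reduces to comparing weight vectors, e.g. by distance.
weights = centered @ eigenfaces.T
query = describe(faces[0])
print(np.linalg.norm(weights - query, axis=1).argmin())  # best match: 0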

In 2001 Paul Viola and Michael Jones made another important breakthrough with the Haar cascade algorithm. It can be used to find any pattern. For face detection it is trained on images of faces, from which it extracts the relevant features by looking at Haar-like features (see illustration). It then creates a cascading set of rules to determine whether a given image region is a face or not. The algorithm is implemented in the OpenCV library.
Your nose is a feature of your face: a horizontal sequence of contrasting cheek, bright nose and dark shadow. The nasal bone catches the light and casts a shadow. This pattern of dark and bright creates an edge that can be detected. Together with the brows and eye sockets above it, the nose forms a pattern; this is how the Haar cascade can recognize a face (a minimal example follows the illustration below).


Haar-like Features

Haar-like features. First row: edge features; second row: line features; third row: four-rectangle features. Named after Alfréd Haar (1885–1933).
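Since the algorithm ships with OpenCV, a face detector in the spirit of unnamed's white squares takes only a few lines. The pretrained frontal-face cascade file is part of the OpenCV distribution; the file name photo.jpg and the tuning parameters are illustrative choices.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor rescans the image at shrinking sizes so faces of any
# size are found; minNeighbors suppresses weak, isolated detections.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Draw a white rectangle around each detection, much like iPhoto.
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 255), 2)
cv2.imwrite("detected.jpg", img)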


Apple added the Faces feature to iPhoto when the technology was not yet robust, and seemingly it was not upgraded in the following years either. The algorithm only looked at the pixel data for the face recognition and was ignorant of other strong indicators: pictures taken only seconds apart are much more likely to contain the same people, while pictures taken at the same time on different continents (according to GPS data) are very unlikely to.
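As a hedged illustration of what such indicators could look like in code (none of this reflects how iPhoto actually works; all names and constants are invented), a pixel-based match score could be blended with a metadata prior:

from math import exp

def metadata_prior(dt_seconds, km_apart):
    # A toy prior for "same person in both photos" from capture
    # metadata alone: impossible at (nearly) the same instant on
    # different continents, decaying over roughly a day otherwise.
    if km_apart > 1000 and dt_seconds < 3600:
        return 0.0
    return exp(-dt_seconds / 86400.0)

def combined_score(pixel_score, dt_seconds, km_apart):
    # Blend the pixel-based similarity with the metadata prior.
    return pixel_score * (0.5 + 0.5 * metadata_prior(dt_seconds, km_apart))

print(combined_score(0.8, dt_seconds=10, km_apart=0))      # boosted
print(combined_score(0.8, dt_seconds=600, km_apart=8000))  # suppressed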

Google (with their web-based service Picasa) and Facebook soon followed, integrating face recognition functionality into their platforms. These companies have a different business model than Apple: Google and Facebook try to accumulate as much data about their users as possible, because their product is advertising and their customers are the advertisers. When Facebook acquired and incorporated the Israeli startup Face.com, it gained access to a very reliable proprietary face recognition algorithm. Facebook and Google not only have a different business model, they have a great advantage over Apple: because they have access to their users' data they can create cross-references between users. For example, Facebook can recommend a name for a face to you because someone else has already tagged this face. They can also use user input to further train and optimize their algorithms. Since iPhoto's face recognition happens locally on each user's computer, Apple is at a disadvantage here; the same fact, however, makes the “cloud” approach a concerning privacy issue.


Failed face blurring in Google Street View

The algorithm that is supposed to protect the privacy of people who happen to be on the street when the Google Street View car passes by is not free of discrimination either. In this example, a black pedestrian in Shreveport, Louisiana is not anonymized, while the white face of Charles Tutt on an advertisement for his candidacy for district judge is blurred instead.


It is fascinating to watch the algorithms fail because, strangely enough, errors almost make machines more human. Machines are able, just like humans, to see faces in random data: a kind of machine apophenia. When a machine fails to recognize people of color as humans, it exposes its own racial bias, embossed into the software by white and Asian programmers. It might just be that they have taught the machines to look for the wrong features.


References

Geraldine Dawson, Sara Jane Webb, James McPartland
Understanding the Nature of Face Processing Impairment in Autism: Insights From Behavioral and Electrophysiological Studies
2005 in Developmental Neuropsychology 27(3), 403–424

Nicholas Furl, P. Jonathon Phillips, Alice J. O’Toole
Face recognition algorithms and the other-race effect: computational mechanisms for a developmental contact hypothesis
2002 in Cognitive Science Vol. 26

Peter J. Hills, Michael B. Lewis
Reducing the own-race bias in face recognition by shifting attention
2006 in The Quarterly Journal of Experimental Psychology Vol. 59, Iss. 6

Dafydd Hughes
Every face in The Americans: Faces from photographs by Robert Frank, selected by iPhoto
2011, Master's thesis, Ryerson University, Toronto, Canada

Lorna Roth
Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity
2009 in Canadian Journal of Communication Vol 34, Nr. 1

Toshiyuki Sakai, Makoto Nagao, Takeo Kanade
Computer Analysis and Classification of Photographs of Human Faces
1972 in Proc. First USA-JAPAN Computer Conference, pp. 55-62.

J. W. Shepherd, J. B. Deregowski
Races and faces — a comparison of the responses of Africans and Europeans to faces of the same and different races
1981 in British Journal of Social Psychology Volume 20, Issue 2, pages 125–133

Ming-Hsuan Yang, David J. Kriegman, Narendra Ahuja
Detecting Faces in Images: A Survey
2002 in IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 24, No. 1

Paul Viola, Michael J. Jones
Robust Real-Time Face Detection
2004 in International Journal of Computer Vision, Volume 57, Issue 2, pp 137-154

Agatha White Carroo
Other Race Recognition: A Comparison of Black American and African Subjects
1986 in Perceptual and Motor Skills, Volume 62, pp. 135-138.


Page updated: 2016-02-28
