Human Face Recognition

Principal Component Analysis

Principal component analysis (PCA) is a standard technique used in statistical pattern recognition and signal processing for data reduction and feature extraction (Haykin, 1999). As a pattern often contains redundant information, mapping it to a feature vector removes this redundancy while preserving most of the intrinsic information content of the pattern. The extracted features play an important role in distinguishing input patterns.

A two-dimensional face image of size N × N can also be considered as a one-dimensional vector of dimension N². For example, a face image from the ORL (Olivetti Research Labs) database with size 112 × 92 can be considered as a vector of dimension 10,304, or equivalently a point in a 10,304-dimensional space (a concrete flattening is sketched after this paragraph). An ensemble of images maps to a collection of points in this huge space. Images of faces, being similar in overall configuration, are not randomly distributed in this huge image space and can thus be described by a relatively low-dimensional subspace. The main idea of principal component analysis is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call “face space”. Each of these vectors is of length N², describes an N × N image, and is a linear combination of the original face images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, we refer to them as “eigenfaces”.
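To make the vectorization concrete, here is a minimal Python/NumPy sketch (random values stand in for an actual ORL image file):

import numpy as np

# A grayscale ORL face image is a 112 x 92 array of pixel intensities;
# random values stand in here for a real image.
image = np.random.randint(0, 256, size=(112, 92), dtype=np.uint8)

# Flattening the rows gives the equivalent 1-D vector of dimension 10,304.
vector = image.astype(np.float64).ravel()
print(vector.shape)  # (10304,)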

 

Let the training set of face images be Γ1, Γ2, …, ΓM; then the average of the set is defined by

$$\Psi = \frac{1}{M}\sum_{n=1}^{M}\Gamma_n \qquad (1)$$

Each face differs from the average by the vector

$$\Phi_i = \Gamma_i - \Psi \qquad (2)$$
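As an illustration of Eqs. (1) and (2), the following NumPy sketch computes the average face and the difference vectors (the array names are illustrative, and random data stands in for loaded training images):

import numpy as np

# Columns of `gamma` are the M flattened training faces Gamma_1 ... Gamma_M.
N2, M = 10304, 40
gamma = np.random.rand(N2, M)

# Eq. (1): the average face Psi.
psi = gamma.mean(axis=1)

# Eq. (2): each column of A is the difference vector Phi_i = Gamma_i - Psi.
A = gamma - psi[:, None]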

This set of very large vectors is then subjected to principal component analysis, which seeks a set of M orthonormal vectors, Um, that best describe the distribution of the data. The kth vector, Uk, is chosen such that

$$\lambda_k = \frac{1}{M}\sum_{n=1}^{M}\left(U_k^T\Phi_n\right)^2 \qquad (3)$$

is a maximum subject to

$$U_l^T U_k = \delta_{lk} = \begin{cases} 1, & \text{if } l = k \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$

The vectors Uk and scalars λk are the eigenvectors and eigenvalues, respectively, of the covariance matrix

$$C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n\Phi_n^T = AA^T \qquad (5)$$

where the matrix A = [Φ1, Φ2, …, ΦM]. The covariance matrix C, however, is an N² × N² real symmetric matrix, and calculating its N² eigenvectors and eigenvalues is an intractable task for typical image sizes. We need a computationally feasible method to find these eigenvectors. Consider the eigenvectors vi of A^TA such that

$$A^TA\,v_i = \mu_i v_i \qquad (6)$$

Premultiplying both sides by A, we have

$$AA^TA\,v_i = \mu_i A v_i \qquad (7)$$

where we see that Avi are the eigenvectors and μi are the eigenvalues of C = AA^T.

Following this analysis, we construct the M × M matrix L = A^TA, where Lmn = Φm^TΦn, and find the M eigenvectors vi of L. These vectors determine linear combinations of the M training set face images that form the eigenfaces Ul.

$$U_l = \sum_{k=1}^{M} v_{lk}\,\Phi_k, \qquad l = 1, \ldots, M \qquad (8)$$

With this analysis, the calculations are greatly reduced, from the order of the number of pixels in the images (N²) to the order of the number of images in the training set (M). In practice, the training set of face images will be relatively small (M << N²), and the calculations become quite manageable. The associated eigenvalues allow us to rank the eigenvectors according to their usefulness in characterizing the variation among the images. The eigenface images calculated from the eigenvectors of L span a basis set that can be used to describe face images. Sirovich and Kirby (1987, 1990) evaluated a limited version of this framework on an ensemble of 115 images (M = 115) of Caucasian males digitized in a controlled manner, and found that 40 eigenfaces (M′ = 40) were sufficient for a very good description of face images. In practice, a smaller M′ can be sufficient for identification, since accurate reconstruction of the image is not a requirement; in the framework of face recognition, the operation is a pattern recognition task rather than image reconstruction. The eigenfaces span an M′-dimensional subspace of the original N²-dimensional image space, and hence the M′ significant eigenvectors of the L matrix, those with the largest associated eigenvalues, are sufficient for reliable representation of the faces in the face space characterized by the eigenfaces. Examples of ORL face images and eigenfaces produced by the eigenfaces algorithm are shown in Figure 1 and Figure 2, respectively.
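The reduced computation can be sketched in a few lines of NumPy, assuming A is the N² × M matrix [Φ1, Φ2, …, ΦM] from Eq. (2); the function name and the unit-norm scaling of the eigenfaces are illustrative choices, not part of the original formulation:

import numpy as np

def eigenfaces(A, m_prime):
    # Eq. (6): L = A^T A is only M x M, so its eigendecomposition is cheap.
    L = A.T @ A
    mu, V = np.linalg.eigh(L)          # eigenvalues in ascending order
    order = np.argsort(mu)[::-1]       # rank by decreasing eigenvalue
    mu, V = mu[order], V[:, order]
    # Eq. (8): each eigenface is a linear combination of the training images.
    U = A @ V
    U /= np.linalg.norm(U, axis=0)     # scale each eigenface to unit length
    # Keep only the M' most significant eigenfaces (near-zero components
    # at the tail are discarded here).
    return U[:, :m_prime], mu[:m_prime]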


Figure 1. Sample face images from the ORL database

A new face image Γ is transformed into its eigenface components (projected onto “face space”) by a simple operation,

$$w_k = U_k^T(\Gamma - \Psi), \qquad k = 1, \ldots, M' \qquad (9)$$

The weights form a projection vector,

                                  

$$\Omega^T = [w_1, w_2, \ldots, w_{M'}] \qquad (10)$$

 

describing the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The projection vector is then used in a standard pattern recognition algorithm to identify which of a number of predefined face classes, if any, best describes the face. The face class Ωk can be calculated by averaging the results of the eigenface representation over a small number of face images of each individual. Classification is performed by comparing the projection vectors of the training face images with the projection vector of the input face image. This comparison is based on the Euclidean distance between the face classes and the input face image, as given in Eq. (11); the idea is to find the face class k that minimizes the Euclidean distance. Figure 3 shows the testing phase of the PCA approach.

                                     

$$\varepsilon_k = \lVert \Omega - \Omega_k \rVert \qquad (11)$$

 

where Ωk is the vector describing the kth face class.
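Putting Eqs. (9)–(11) together, the testing phase can be sketched as follows (the function names are illustrative, and class_omegas is assumed to hold one averaged projection vector Ωk per row):

import numpy as np

def project(face, psi, U):
    # Eq. (9): w_k = U_k^T (Gamma - Psi) for each of the M' eigenfaces in U.
    return U.T @ (face - psi)

def classify(face, psi, U, class_omegas):
    # Eq. (11): choose the face class whose projection vector Omega_k lies
    # nearest, in Euclidean distance, to the input's projection Omega.
    omega = project(face, psi, U)
    distances = np.linalg.norm(class_omegas - omega, axis=1)
    return int(np.argmin(distances))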

Figure 2. First 16 eigenfaces with the highest eigenvalues

