Human Face Recognition
Image processing filters
Most relevant to the material presented in this chapter are illumination-normalization
methods that can be broadly described as quasi illumination-invariant image filters. These
include high-pass and locally-scaled high-pass filters, directional derivatives, Laplacian-of-Gaussian filters, region-based gamma intensity correction filters and edge-maps, to name a few. These are most commonly based on very simple image formation models, for example modelling illumination as a spatially low-frequency band of the Fourier spectrum and identity-based information as high-frequency (see Figure 2). Methods of this group can be applied in a straightforward manner to either single-image or multiple-image face recognition and are often extremely efficient. However, owing to the simplistic nature of the underlying models, they generally do not perform well in the presence of extreme illumination changes.
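As an illustration, the two filters most relevant to the evaluation that follows can be sketched in a few lines. This is a minimal sketch only: the Gaussian scale (sigma) and the stabilising constant (eps) are assumed values chosen for illustration, not parameters prescribed in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(image, sigma=2.0):
    """High-pass filter: subtract a Gaussian-smoothed estimate of the
    (assumed spatially low-frequency) illumination from the image."""
    return image - gaussian_filter(image, sigma)

def self_quotient(image, sigma=2.0, eps=1e-6):
    """Self-Quotient Image: divide by the smoothed image, treating the
    low-frequency band as multiplicative illumination; this acts as a
    locally-scaled high-pass filter. eps guards against division by zero."""
    return image / (gaussian_filter(image, sigma) + eps)
```

Both filters suppress the smooth, illumination-dominated band while retaining high-frequency, identity-bearing detail, which is exactly the simple image formation assumption described above.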
Adapting to data acquisition conditions
Four face recognition algorithms, the Generic Shape-Illumination method, the Constrained Mutual Subspace Method, the commercial system FaceIt and a Kullback-Leibler Divergence-based matching method, were evaluated on a large database using (i) raw greyscale imagery, (ii) high-pass (HP) filtered imagery and (iii) the Self-Quotient Image (QI) representation. Both the high-pass and, to an even greater extent, the Self-Quotient Image representations produced an improvement in recognition for all methods over raw greyscale, as shown in Figure 3, which is consistent with previous findings in the literature. Of importance to this work is that it was also examined in which cases these filters help, and by how much, depending on the data acquisition conditions. It was found that recognition rates using greyscale and either the HP or the QI filter were negatively correlated (with ρ ≈ −0.7), as illustrated in Figure 4. This finding was observed consistently across the results of the four algorithms, all of which employ drastically different underlying models.
Performance of the (a) Mutual Subspace Method and the (b) Constrained Mutual Subspace Method using raw greyscale imagery, high-pass (HP) filtered imagery and the Self-Quotient Image (QI), evaluated on over 1300 video sequences with extreme illumination, pose and head motion variation. Shown are the average performance and ± one standard deviation intervals.
A plot of the performance improvement with HP and QI filters against the performance of unprocessed, raw imagery across different illumination combinations used in training and test. The tests are shown in order of increasing raw data performance for easier visualization.

This is an interesting result: it means that while on average both representations increase the recognition rate, they actually worsen it in "easy" recognition conditions when no normalization is needed. The observed phenomenon is well understood in the context of the energy of intrinsic and extrinsic image differences and noise (see  for a thorough discussion). Higher than average recognition rates for raw input correspond to small changes in imaging conditions between training and test, and hence lower energy of extrinsic variation. In this case, the two filters decrease the signal-to-noise ratio, worsening performance, see Figure 5 (a). On the other hand, when the imaging conditions between training and test are very different, normalization of extrinsic variation is the dominant factor and performance is improved, see Figure 5 (b).
(a) Similar acquisition conditions between sequences
(b) Different acquisition conditions between sequences
A conceptual illustration of the distribution of intrinsic, extrinsic and noise signal energies across frequencies in the cases when training and test data acquisition conditions are (a) similar and (b) different, before (left) and after (right) band-pass filtering.

This is an important observation: it suggests that the performance of a method that uses either of the representations can be increased further by detecting the difficulty of the recognition conditions. In this chapter we propose a novel learning framework to do exactly this.