1. Introduction

Face recognition (FR) has received increasing attention in recent years due to its many potential applications, and many FR systems have been devised in the past few decades. One of the most popular techniques for FR is subspace analysis, whose two most important and representative methods are principal component analysis (PCA)1 and linear discriminant analysis (LDA).2 Many other methods, such as 2DPCA,3 2DLDA,4 ICA,5 and those of Refs. 6, 7, and 8, are extended from them. Locality preserving projections (LPP)9 is also a subspace analysis method; as stated in Ref. 9, it performs better than LDA in terms of error rate. However, all of the methods mentioned above, including LPP, are inefficient, because they identify a test face image by comparing it with all training images. To overcome this disadvantage, we propose to combine LPP with affinity propagation (AP)10 into a new FR framework, APLPP, which uses far fewer training images, free of noise, to identify a test image. APLPP is thus more computationally efficient than LPP.

The reason for combining AP with LPP is as follows. AP can detect a representative sample for each class, so we can use AP to detect a representative face image for each subject and use only these representative images, rather than all training images, for identification. However, the original AP takes the negative Euclidean distance between two images, computed directly on the gray pixel values, as their measure of similarity. As we stated in Ref. 11, such a definition of similarity is unreasonable. To solve this problem, we propose to compute the similarity on the low-dimensional features derived from LPP, not only because LPP can be used for dimensionality reduction and best detects the essential face manifold structure, but also because it reduces the unwanted variations resulting from changes in lighting, facial expression, and pose. By applying LPP to the high-dimensional face data, more effective low-dimensional features can be obtained; clearly, the similarity computed on these features should be better than that computed directly from the gray pixel values. In addition, it needs much less computation time, because the dimensionality has been greatly reduced. In view of this analysis, we propose to combine AP with LPP and make full use of their advantages.

The rest of this paper is organized as follows. AP and LPP are reviewed in Secs. 2 and 3, respectively. In Sec. 4, we summarize the APLPP method. Experimental results are given in Sec. 5, and the final section gives the conclusions.

2. Affinity Propagation

AP10 is an efficient clustering method. It first builds a similarity matrix $S$, in which the entry $s(i,k)$ between data points $x_i$ and $x_k$ is their negative Euclidean distance. Before clustering, each point $k$ also needs to be assigned a number $s(k,k)$, which characterizes the prior knowledge of how good point $k$ is as a representative example; data points with larger values of $s(k,k)$ are more likely to be chosen as representative examples. These values are referred to as preferences. In fact, all data points are equally suitable as representative examples; thus the preferences should be set to the same value, which can be varied to produce different numbers of clusters. Generally, that value is the median of the input similarities $s(i,k)$. After the construction of the similarity matrix and the setting of the preferences, two kinds of messages (responsibilities and availabilities) are passed between data points.

The responsibility $r(i,k)$, sent from data point $i$ to the candidate representative example point $k$, reflects the accumulated evidence for how proper it would be for point $k$ to serve as the representative example for point $i$. It is updated using the rule
$$r(i,k) \leftarrow s(i,k) - \max_{k' \neq k}\bigl\{a(i,k') + s(i,k')\bigr\}.$$
The availability $a(i,k)$, sent from the candidate representative example point $k$ to point $i$, reflects the accumulated evidence for how well suited point $i$ is to choose point $k$ as its representative example. It is computed by the rule
$$a(i,k) \leftarrow \min\Bigl\{0,\ r(k,k) + \sum_{i' \notin \{i,k\}} \max\bigl\{0, r(i',k)\bigr\}\Bigr\} \quad (i \neq k),$$
while the self-availability is updated as
$$a(k,k) \leftarrow \sum_{i' \neq k} \max\bigl\{0, r(i',k)\bigr\}.$$
Availabilities and responsibilities can be combined to identify representative examples at any time during affinity propagation: for point $i$, the $k$ that maximizes $a(i,k) + r(i,k)$ indicates that point $k$ serves as the representative example for point $i$.
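For concreteness, the following is a minimal NumPy sketch of these update rules, operating on a precomputed similarity matrix whose diagonal holds the preferences. The damping factor, iteration count, and the toy data at the bottom are illustrative choices, not values prescribed by Ref. 10.

```python
import numpy as np

def affinity_propagation(S, max_iter=200, damping=0.5):
    """Minimal affinity propagation on a precomputed similarity matrix S.
    The diagonal of S holds the preferences s(k,k). Returns, for every
    point i, the index of its representative example."""
    n = S.shape[0]
    R = np.zeros((n, n))            # responsibilities r(i,k)
    A = np.zeros((n, n))            # availabilities  a(i,k)
    for _ in range(max_iter):
        # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = AS.argmax(axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) <- min{0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k))}
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))            # keep r(k,k) unclipped in the column sums
        A_new = Rp.sum(axis=0)[None, :] - Rp
        self_avail = np.diag(A_new).copy()          # a(k,k) = sum of positive r(i',k)
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, self_avail)
        A = damping * A + (1 - damping) * A_new
    return (A + R).argmax(axis=1)                   # representative example of each point

# Toy usage: similarity = negative Euclidean distance; preferences = median similarity.
X = np.random.rand(30, 5)                           # one sample per row
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
S = -D
S[np.diag_indices_from(S)] = np.median(-D[~np.eye(len(X), dtype=bool)])
print(affinity_propagation(S))
```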
3. Locality Preserving Projections

LPP, or Laplacianfaces,9 is also a subspace analysis method for recognition. Here, it is formulated as follows. Suppose a set of face images $x_1, x_2, \ldots, x_N$, and let $W$ be a similarity matrix defined on these data points. The Laplacianfaces can be obtained by solving the minimization problem
$$\min_{a} \sum_{ij} \bigl(a^{T}x_i - a^{T}x_j\bigr)^{2} W_{ij}$$
with the constraint $a^{T}XDX^{T}a = 1$, where $X = [x_1, x_2, \ldots, x_N]$, $L = D - W$ is the graph Laplacian, and $D_{ii} = \sum_{j} W_{ji}$ measures the local density around $x_i$. Here $W_{ij}$ is computed using the heat kernel, $W_{ij} = \exp(-\|x_i - x_j\|^{2}/t)$ if $x_i$ and $x_j$ are neighbors and $W_{ij} = 0$ otherwise. (It should be noted that the similarity matrix used in LPP is different from that in AP.) Finally, the basis functions of the Laplacianfaces are the eigenvectors associated with the smallest eigenvalues of the generalized eigenproblem
$$XLX^{T}a = \lambda XDX^{T}a.$$
Once the eigenvectors $a_1, a_2, \ldots, a_d$ are computed, let $A = [a_1, a_2, \ldots, a_d]$ be the transformation matrix. The low-dimensional feature of an image $x$ is then obtained as $y = A^{T}x$. A minimal numerical sketch of this projection step is given below.
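The following sketch implements the LPP transformation just described. It assumes the face images have already been reduced by PCA (as is common practice) so that $XDX^{T}$ is well conditioned; the neighborhood size, heat-kernel width $t$, and the small ridge term are illustrative choices rather than values prescribed in Ref. 9.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components, n_neighbors=5, t=1.0):
    """Minimal LPP. X is (N, D), one (already PCA-reduced) face image per row.
    Returns the (D, n_components) transformation matrix A; features are y = A.T @ x."""
    N = X.shape[0]
    dist2 = cdist(X, X, 'sqeuclidean')
    # Heat-kernel weights, kept only between nearest neighbors (symmetrized).
    W = np.exp(-dist2 / t)
    order = np.argsort(dist2, axis=1)
    mask = np.zeros_like(W, dtype=bool)
    rows = np.repeat(np.arange(N), n_neighbors)
    mask[rows, order[:, 1:n_neighbors + 1].ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    D = np.diag(W.sum(axis=1))                          # local density around each point
    L = D - W                                           # graph Laplacian
    Xc = X.T                                            # columns are data points
    XLX = Xc @ L @ Xc.T
    XDX = Xc @ D @ Xc.T + 1e-6 * np.eye(Xc.shape[0])    # small ridge for numerical stability
    # Generalized eigenproblem X L X^T a = lambda X D X^T a; keep the smallest eigenvalues.
    _, eigvecs = eigh(XLX, XDX)
    return eigvecs[:, :n_components]                    # transformation matrix A
```

Training features are then obtained as `Y = X @ A`, which applies $y = A^{T}x$ to each image.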
4. Proposed Affinity Propagation Locality Preserving Projections

Suppose there are training face images belonging to $c$ different subjects. First, select $n$ images per person (hence $N = nc$ images in total) to form the training set, and apply LPP to the training set to obtain the transformation matrix $A$ and the corresponding reduced features. Second, use AP to cluster these reduced features into $c$ classes and obtain a representative reduced feature for each class. Thus there will be $c$ representative features, each assigned an identity, and each class has just one representative feature. The reason why these features should be clustered into $c$ classes can be found in Ref. 11. Finally, convert the test face image $x$ into the low-dimensional feature $y = A^{T}x$, and then identify it using a nearest-neighbor classifier (NNC) or AP; thus two methods, APLPP(NNC) and APLPP(AP), are generated. APLPP(NNC) uses an NNC, with the Euclidean distance, to find which representative face image is nearest to the test face image. APLPP(AP), on the other hand, uses AP again to cluster the $c + 1$ features (the $c$ representative features plus the test feature) into $c$ classes and finds which representative feature the test one belongs to. Two conditions must be satisfied: (a) the number of classes after clustering must equal $c$; (b) the representative features obtained in the training stage must remain representative ones after clustering. How can this be achieved? As described before, points with larger preferences are more likely to be chosen as representative examples, and the test feature is supposed to be clustered into one of the representative features; we therefore assign large preference values (e.g., 1) to the $c$ representative features and a very small preference value to the test feature, and then perform the clustering, as sketched at the end of this section. Since APLPP(NNC) and APLPP(AP) use only the $c$ representative features for recognition, their recognition time depends mainly on the number of subjects: it increases linearly with $c$, while the recognition time of LPP increases linearly with the number of training images $N$. Obviously, both of our methods are more computationally efficient than LPP.
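As an illustration, the following sketch carries out the APLPP(AP) identification step with the preference assignment just described, reusing the affinity_propagation routine from the Sec. 2 sketch. The representative features reps, the test feature y_test, and the particular small preference value are placeholders rather than values fixed by the paper.

```python
import numpy as np

def identify_with_ap(reps, y_test, small_pref=-1e6):
    """reps: (c, d) array of representative LPP features, one per subject.
    y_test: (d,) LPP feature of the test image.
    Returns the index (0..c-1) of the representative feature chosen for y_test."""
    Y = np.vstack([reps, y_test])                                # c + 1 features in total
    S = -np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)   # negative Euclidean distances
    pref = np.append(np.ones(len(reps)), small_pref)             # large for representatives, tiny for the test
    S[np.diag_indices_from(S)] = pref
    exemplars = affinity_propagation(S)        # the routine from the Sec. 2 sketch
    return exemplars[-1]                       # representative example chosen by the test feature

def identify_with_nnc(reps, y_test):
    """APLPP(NNC): simply take the nearest representative feature."""
    return np.argmin(np.linalg.norm(reps - y_test, axis=1))
```

Alternatively, scikit-learn's AffinityPropagation (with affinity='precomputed' and a per-sample preference array) could be substituted for the hand-rolled routine.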
5. Experiments

In this section, experiments are described that validate the effectiveness of APLPP and compare it with LPP, using the extended YaleB face database.12 It contains around 64 near-frontal images for each of 38 distinct subjects. All images were preprocessed11 so that the final images are of a fixed size, with 256 gray levels per pixel. Different numbers of images per individual were taken to form the training set, and the rest of the database was considered to be the testing set. The experiments were repeated 20 times, and the average accuracy was recorded. In general, the performance of LPP and APLPP varies with the number of dimensions; we show only the best results obtained by each method in Table 1, where the values in parentheses denote the dimensions at which the top recognition rate is attained. As can be seen, APLPP(AP) performs uniformly better than APLPP(NNC), followed by LPP. This means that AP is more suitable for use as a classifier than NNC in these cases. Moreover, both of our methods improve the FR performance by large margins over LPP.

Table 1. Performance comparison on the extended YaleB database. Here $n$ is the number of training face images per person.

The experiments reveal some interesting points:
6. Conclusions and Further Work

We have proposed combining LPP with AP to obtain a new FR framework. In contrast with LPP, which identifies a test image by comparing it with all training images, the proposed method uses only representative face images. We also apply the AP clustering method itself for identification. The new framework has been evaluated on the extended YaleB database, and comparative experimental results show its effectiveness. Further work will include applying the proposed method to human gait recognition and digital character recognition.

Acknowledgments

The authors would like to thank the anonymous reviewers for their critical and constructive comments and suggestions. This research has been supported by the National Natural Science Foundation of China (No. 60675023, No. 60602012) and the China 863 project (No. 2007AA01Z164).

References
1. M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 586–591 (1991).
2. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997). https://doi.org/10.1109/34.598228
3. J. Yang, D. Zhang, A. F. Frangi, and J. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Anal. Mach. Intell. 26(1), 131–137 (2004).
4. J. Yang, D. Zhang, Y. Xu, and J. Y. Yang, "Two-dimensional discriminant transform for face recognition," Pattern Recogn. 38(7), 1125–1129 (2005).
5. M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Netw. 13(6), 1450–1464 (2002). https://doi.org/10.1109/TNN.2002.804287
6. S. Noushath, G. H. Kumar, and P. Shivakumara, "(2D)2LDA: an efficient approach for face recognition," Pattern Recogn. 39(7), 1396–1400 (2006).
7. W. Zuo, D. Zhang, J. Yang, and K. Wang, "BDPCA plus LDA: a novel fast feature extraction technique for face recognition," IEEE Trans. Syst., Man, Cybern., Part B: Cybern. 36(4), 946–953 (2006).
8. I. Dagher and R. Nachar, "Face recognition using IPCA-ICA algorithm," IEEE Trans. Pattern Anal. Mach. Intell. 28(6), 996–1000 (2006).
9. X. He, S. Yan, and Y. Hu, "Face recognition using Laplacianfaces," IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 328–340 (2005). https://doi.org/10.1109/TPAMI.2005.55
10. B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science 315(5814), 972–976 (2007). https://doi.org/10.1126/science.1136800
11. C. Du, J. Yang, Q. Wu, and F. Li, "Integrating affinity propagation clustering method with linear discriminant analysis for face recognition," Opt. Eng. 46(11), 110501–110503 (2007). https://doi.org/10.1117/1.2801735
12. K. C. Lee, J. Ho, and D. J. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 684–698 (2005). https://doi.org/10.1109/TPAMI.2005.92