A Survey of Gait Recognition Using FPCA And RPCA
Volume 4


Mrs. Rupali Fulse 1, Mr. Nitin Thakre 2, Mrs. Bharati Raut 3

  1, 2, 3 Assistant Professor, Information Technology, RTMNU, India.




Abstract—Human identification by gait has generated a great deal of interest in the computer vision community because it allows inconspicuous recognition at a relatively large distance. Biometric systems are becoming increasingly important, since they provide more reliable and efficient means of identity verification, and biometric gait analysis, i.e. recognizing people from the way they walk, is one of the recent attractive topics in biometric research. Gait recognition rests on the observation that an individual's walking style is unique and can therefore be used for identification. Various gait recognition approaches are available; in this paper, two of them, "Partial Least Square with RPCA" and "HMM with FPCA", are discussed.

Keywords—Biometrics, Survey, Gait Recognition approaches.


I.  Introduction

Gait recognition is a relatively new biometric technology which aims to identify people at a distance by the way they walk. It works from the observation that an individual's walking style is unique and can be used for human identification, and it involves both visual cue extraction and classification. Gait recognition systems can be classified into three groups: motion vision based, wearable sensor based, and floor sensor based [1].

Gait recognition approaches fall into two common categories: appearance-based and model-based. Appearance-based approaches suffer from changes in appearance owing to changes of the viewing or walking direction. Model-based approaches, in contrast, extract the motion of the human body by fitting models to the input images; they are view and scale invariant and reflect the kinematic characteristics of the walking manner [2].

Gait features of the same subject differ from each other due to various degrees of intra-class variability. Recent work [3, 4, 5, 6] showed that single-view gait recognition performance drops when similarity is computed between gait appearances at different viewing angles, i.e. when the viewing angle of the probe gait is not the same as that of the gallery data. In this paper, we discuss the multi-view gait recognition problem under the difficulties caused by variations in wearing or carrying conditions.

In previous work, a rule-based classifier was proposed to classify actions; the experimental results show that such a system can recognize seven types of primitive actions with high accuracy [7].


II.  Gait Recognition Approaches

A. HMM with FPCA for feature extraction

A gait recognition algorithm based on fuzzy principal component analysis (FPCA) applied to the gait energy image (GEI) has been proposed [8]. First, the original gait sequence is pre-processed and the gait energy image is obtained. Second, eigenvalues and eigenvectors, called fuzzy components, are extracted by fuzzy principal component analysis, and the eigenvectors are projected into a lower-dimensional space. Finally, a nearest-neighbour (NN) classifier is used for feature classification. Tested on the CASIA database, the algorithm achieves a correct recognition rate (CRR) of 89.7%, against 83.1% for the plain GEI algorithm, and is reported to reach a 100% recognition rate in the best case.

Gait Energy Image (GEI) is the sum of the walking-silhouette images divided by the number of images. Given the pre-processed binary gait silhouette images Bt(x, y) at time t in a sequence, GEI is computed by

G(x, y) = (1/N) Σ Bt(x, y), t = 1, …, N ……………………… (1)

where N is the number of frames in the full gait cycle and x and y are the image coordinates. The GEI has also been combined with Principal Component Analysis (PCA), with and without the Radon Transform (RT): the Radon Transform is used to detect features within an image, and PCA is used to reduce the dimension of the images without much loss of information.
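As a concrete illustration, equation (1) is simply a per-pixel average over one gait cycle. A minimal NumPy sketch, assuming the silhouettes are already size-normalized and aligned as described above:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Equation (1): per-pixel average of one gait cycle of binary
    silhouettes B_t(x, y). Assumes the frames are already size-normalized
    and aligned, as in the pre-processing step described above."""
    frames = np.asarray(silhouettes, dtype=float)
    return frames.mean(axis=0)  # G(x, y) = (1/N) * sum_t B_t(x, y)

# Tiny illustrative "cycle": two 2x2 binary frames.
cycle = [np.array([[1, 0], [1, 1]]),
         np.array([[1, 0], [0, 1]])]
gei = gait_energy_image(cycle)  # pixel values in [0, 1]
```

Pixels that are part of the silhouette in every frame get value 1, so bright GEI regions correspond to the stable body shape while intermediate values capture the limb motion.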

A gait recognition algorithm using a Hidden Markov Model (HMM) has also been proposed. The input binary silhouette images are pre-processed by morphological operations to fill holes and remove noise regions, and the width vector of the outer contour is used as the image feature. An HMM is trained iteratively using the Viterbi and Baum-Welch algorithms and is then used for recognition. The HMM is well suited to gait recognition because of its statistical nature and its ability to reflect the temporal state-transition structure of gait; HMMs have been applied to human identification in [2-5].

An HMM is characterized by the following parameters.

(1) N, the number of states in the model. Choosing N is the classical problem of selecting the appropriate dimensionality of a model to fit a given set of observations; for the CMU MoBo database, N = 5 is suggested [5]. The HMM states are denoted S = {S1, S2, …, SN}.

(2) M, the number of distinct observation symbols per state. For gait recognition, every frame's feature vector is treated as an observation symbol, so M depends on the number of frames per cycle, the number of states in the model, and how one cycle is divided into clusters. Because the frames in a gait cycle form a consecutive transition in time, we divide each cycle into N clusters of approximately equal size M. The observation symbols for one HMM state are denoted V = {v1, v2, …, vM}.

(3) A, the transition probability matrix, A = {aij}, where aij is defined as

aij = P[qt+1 = Sj | qt = Si], 1 ≤ i, j ≤ N ……………………… (2)

and qt is the state at time t. The values of aij determine the type of the HMM. In this paper the left-to-right model is chosen, which only allows transitions from the jth state to either the jth or the (j+1)th state; in addition, the last state can turn back to the first one.

(4) B, the observation symbol probability matrix, B = {bj(k)}, where

bj(k) = P[vk at t | qt = Sj], 1 ≤ j ≤ N, 1 ≤ k ≤ M ……………………… (3)

(5) π, the initial state distribution, π = {πi}, where

πi = P[q1 = Si], 1 ≤ i ≤ N ……………………… (4)

We always put the first frame into the first cluster, so the initial probability π1 is set to 1 and all other πi are set to 0.

The complete parameter set of the HMM is denoted

λ = (A, B, π) ……………………… (5)
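The structure of λ described above can be sketched as follows. The equal 0.5/0.5 transition split per row is an illustrative assumption (the paper tunes A by trial and error), and B would then be filled in from equations (8)-(9):

```python
import numpy as np

def init_left_to_right_hmm(n_states):
    """Initialise the structure of lambda = (A, B, pi) for the cyclic
    left-to-right gait HMM described above. The equal 0.5/0.5 split of
    each row of A is an illustrative assumption (the paper tunes A by
    trial and error); B is set separately from equations (8)-(9)."""
    A = np.zeros((n_states, n_states))
    for j in range(n_states):
        A[j, j] = 0.5                   # stay in state j
        A[j, (j + 1) % n_states] = 0.5  # move to j+1; last state wraps to the first
    pi = np.zeros(n_states)
    pi[0] = 1.0  # first frame always falls in the first cluster
    return A, pi

A, pi = init_left_to_right_hmm(5)  # N = 5 as suggested for CMU MoBo [5]
```

The modular index implements the wrap-around from the last state back to the first, which makes the chain cyclic like the gait itself.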

Every gait sequence is divided into cycles, and the feature vectors of each cycle are further divided into clusters of about the same size. Each cluster centre is treated as an exemplar, defined as

en = (1/Nn) Σ ft, ft ∈ Cn ……………………… (6)

where ft is the feature vector of the tth frame, Cn is the nth cluster, and Nn is the number of frames in the nth cluster. The exemplar set is denoted E = {e1, e2, …, eN}.

The initial exemplars of slow walk (a), fast walk (b), and walking with a ball (c) of the same person illustrate the behaviour of this feature: the similarity between (a) and (b) shows the effectiveness of the feature extraction method, while larger appearance changes, as in (c), cause the similarity to decline. For each feature vector f in a cycle, its distance from an exemplar e is measured by an inner-product (IP) based distance, defined as

D(f, e) = 1 − [ (fᵀe)² / (fᵀf · eᵀe) ]^1/2 ……………………… (7)

The transition probability matrix A is initialized by trial and error. The initial observation symbol probability matrix B = {bn(ft)} is defined as

bn(ft) = 0.01 δn e^(−δn · D(ft, en)) ……………………… (8)

δn = Nn / Σ D(ft, en), ft ∈ Cn ……………………… (9)
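Equations (6)-(9) can be sketched in NumPy as follows; the guard against a zero distance sum (a degenerate cluster whose frames all coincide with the exemplar) is an added assumption, not part of the paper:

```python
import numpy as np

def exemplar(cluster):
    """Equation (6): the exemplar e_n is the mean of the cluster's
    feature vectors."""
    return np.mean(cluster, axis=0)

def ip_distance(f, e):
    """Equation (7): inner-product distance; 0 when f and e are
    parallel, approaching 1 as they become orthogonal."""
    return 1.0 - np.sqrt((f @ e) ** 2 / ((f @ f) * (e @ e)))

def observation_prob(f, cluster, e):
    """Equations (8)-(9): delta_n and the initial b_n(f_t). The guard
    against a zero distance sum is an added assumption."""
    d_sum = sum(ip_distance(ft, e) for ft in cluster)
    delta = len(cluster) / d_sum if d_sum > 0 else 1.0
    return 0.01 * delta * np.exp(-delta * ip_distance(f, e))

# Two toy frame features forming one cluster.
cluster = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
e = exemplar(cluster)                                                 # [0.5, 0.5]
d_parallel = ip_distance(np.array([2.0, 0.0]), np.array([1.0, 0.0]))  # parallel -> 0
p = observation_prob(cluster[0], cluster, e)                          # small positive value
```

Because the distance in equation (7) depends only on the angle between f and e, the observation probability is invariant to the overall scale of the width-vector features.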


B. Partial Least Square with RPCA

A robust View Transformation Model (VTM) for gait recognition has been proposed [9]. Based on the gait energy image, the method establishes a robust view transformation model via robust principal component analysis, with Partial Least Square used as the feature selection method. Compared with existing methods, it finds a shared, linearly correlated low-rank subspace, which makes the view transformation model robust to viewing angle variation and to clothing and carrying condition changes. Experiments conducted on the CASIA gait dataset show that the proposed method outperforms the other existing methods.

The Robust View Transformation Model (robust VTM) is built using robust Principal Component Analysis (robust PCA) [9]. The multi-view gait recognition system consists of two separate procedures: gait signature registration and gait recognition. In both, the gait energy image is chosen as the original gait feature descriptor, and the Partial Least Square (PLS) based feature selection method is adopted. During registration, the robust VTM is constructed while the view transformation projection and the feature selection functions are learned. The gait features of the probe viewing angle are then transformed into those of the gallery viewing angles, and gait similarity measurement is conducted to produce the recognition result.

The first stage is a Partial Least Square feature-selection based optimized gait representation. Normal human walking is a periodic action, so to preserve temporal information and reduce unnecessary computation we first detect the period. We estimate it from the changes of the silhouette bounding box using the methods illustrated in [7, 2], since the aspect ratio of the bounding box changes periodically as the person walks. The Gait Energy Image (GEI) is then constructed as the gait feature descriptor from the estimated period, and a Partial Least Square feature selection method is employed to extract the discriminative part of the descriptor. GEI captures the gait information in both the spatial and the temporal domain; the silhouettes extracted by background modelling are used to construct it. Suppose In,t(x, y) is the pixel at position (x, y) of the tth frame (t = 1, 2, …, T) of the nth gait cycle (n = 1, 2, …, N).
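One simple way to find the period of the bounding-box aspect-ratio signal is autocorrelation. This is only an illustrative sketch under that assumption; [7, 2] describe the estimators actually used:

```python
import numpy as np

def estimate_gait_period(aspect_ratios, min_period=2):
    """Estimate the gait period from the per-frame bounding-box aspect
    ratio, which oscillates as the person walks. Autocorrelation is one
    simple estimator (an assumption here; [7, 2] describe the actual
    methods used)."""
    x = np.asarray(aspect_ratios, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. len-1
    # Dominant non-trivial lag = period; search up to half the sequence.
    search = ac[min_period:len(x) // 2]
    return int(np.argmax(search)) + min_period

# Synthetic aspect-ratio signal with a known period of 8 frames.
signal = 1.5 + 0.2 * np.sin(2 * np.pi * np.arange(64) / 8)
period = estimate_gait_period(signal)  # -> 8
```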

All silhouettes are normalized along both the horizontal and the vertical direction to a fixed size. Assuming that the width and height of the GEI are W and H respectively, GEI is defined as

g(x, y) = (1/T) Σ In,t(x, y), t = 1, …, T ……………………… (1)

where T is the number of frames in the gait sequence, In,t is the silhouette image at frame t, and x and y are the image coordinates. The original GEI feature representation is a 1-D vector gmk, obtained by concatenating the values of all consecutive rows of the GEI, where m indexes the subject and k the viewing angle. The dimension of gmk is therefore W × H.

We employ Partial Least Square (PLS) regression as the feature selection algorithm to learn optimal feature representation vectors. PLS is an efficient supervised dimension reduction approach; one advantage is that the target reduced dimension is not limited by the number of classes in the training dataset. In addition, by applying PLS to the GEI, the optimized GEI is expected to be better factorized than the original spatial-domain GEI.

Given two sets of gait feature vectors from different subjects under the same viewing angle, i.e. gmk from the mth subject and gnk from the nth subject, both under the kth viewing angle, PLS computes an optimal projection by maximizing the covariance objective

max wk cov(gmk wk, gnk wk) ……………………… (2)

where wk is the learned projection matrix for the kth viewing angle and cov(·,·) denotes the covariance between the original GEI feature representation vectors of different individuals under the same viewing angle. Hence, given a new GEI feature representation vector gmk, the optimal gait feature vector omk under the kth viewing angle is obtained via

omk = gmk wk ……………………… (3)
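A single-component sketch of the covariance-maximizing projection in equation (2), using the leading eigenvector of the symmetrized cross-covariance matrix. The paper's full PLS learns several components and may use a different algorithm (e.g. NIPALS), so treat this as an illustration only:

```python
import numpy as np

def pls_direction(Gm, Gn):
    """Single projection direction w maximising cov(Gm w, Gn w), as in
    equation (2). This sketch takes the leading eigenvector of the
    symmetrised cross-covariance matrix; the paper's full PLS keeps
    several components, so this is an illustration only."""
    Xm = Gm - Gm.mean(axis=0)
    Xn = Gn - Gn.mean(axis=0)
    C = Xm.T @ Xn / (len(Xm) - 1)   # cross-covariance between the two subjects
    S = (C + C.T) / 2.0             # symmetrise so eigh applies
    vals, vecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    return vecs[:, -1]              # direction of maximum covariance

# Toy GEI vectors: two "subjects" under the same viewing angle
# (randomly generated here purely for illustration).
rng = np.random.default_rng(0)
Gm = rng.normal(size=(6, 4))         # 6 samples, 4-dimensional features
Gn = Gm + 0.1 * rng.normal(size=(6, 4))
w = pls_direction(Gm, Gn)            # unit-norm projection direction
o = Gm @ w                           # optimised features, equation (3)
```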

An optimized gait representation matrix is then assembled as the left-hand side of equation (4): each row contains the gait information of different subjects under the same viewing angle, while each column contains that of the same subject under different viewing angles. In total there are K viewing angles and M subjects for constructing the VTM:

[ o1¹ … oM¹ ; … ; o1ᴷ … oMᴷ ] = [ P1 ; … ; PK ] [ v1 … vM ] ……………………… (4)

The vector vm is a shared gait feature of the mth subject, common to all viewing angles, and Pk is a transformation matrix, independent of the subject, which projects the shared gait feature vector v to the gait feature vector under the specific viewing angle k. Given an optimized gait feature vector omj of the mth subject under the jth viewing angle, the learned VTM transforms gait feature vectors from the jth viewing angle to the ith. Motivated by the improvement brought by truncated Singular Value Decomposition (TSVD), the reduced-rank approximation of the SVD achieves better performance in gait recognition. Gait representation is complicated by the appearance variation of the same subject under different viewing angles, as well as by variability of the same subject over time, for example due to changes of carrying or wearing conditions. In many cases, however, it is reasonable to assume that the shared components of the same subject under different viewing angles are low-rank.
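The low-rank assumption can be illustrated with a plain truncated SVD; note that robust PCA, as used in [9], additionally separates sparse gross corruptions, which this sketch does not attempt:

```python
import numpy as np

def truncated_svd_reconstruct(M, rank):
    """Rank-r approximation of a gait feature matrix via truncated SVD.
    Illustrates the low-rank assumption only: robust PCA [9] additionally
    separates sparse gross corruptions, which plain TSVD does not."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Keep only the top `rank` singular triples.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# A rank-1 matrix is recovered exactly from its top singular triple.
M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
M1 = truncated_svd_reconstruct(M, 1)
max_err = float(np.abs(M1 - M).max())
```

Keeping only the top singular values discards the directions dominated by view-specific variation and noise, which is exactly why the reduced-rank approximation helps when the shared cross-view components are low-rank.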

IV.  Conclusion

In this paper, two gait recognition approaches were surveyed: fuzzy principal component analysis and robust principal component analysis for feature extraction, with nearest-neighbour classification.


References

  1. Hayder Ali, Jamal Dargham, Chekima Ali, Ervin Gobin Moung, "Gait Recognition using Principal Component Analysis", ICMV, pp. 539-543, 2011.
  2. Pushpa Rani and G. Arumugam, "An Efficient Gait Recognition System For Human Identification Using Modified ICA", IJCSIT, Vol. 2, No. 1, pp. 55-67, 2010.
  3. L. Wang, T. Tan, H. Ning, and W. Hu, "Silhouette analysis based gait recognition for human identification", IEEE Trans. on PAMI, 2003.
  4. S. Yu, D. Tan, and T. Tan, "A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition", in ICPR, 2006.
  5. K. Bashir, T. Xiang, and S. Gong, "Cross-view gait recognition using correlation strength", in BMVC, 2010.
  6. M. Goffredo, I. Bouchrika, J. N. Carter, and M. S. Nixon, "Self-calibrating view-invariant gait biometrics", IEEE Trans. SMC-Part B, 2010.
  7. Masafumi Sugimoto, Thi Thi Zin, Takashi Toriu and Shigeyoshi Nakajima, "Robust Rule-Based Method for Human Activity Recognition", IJCSNS International Journal of Computer Science and Network Security, Vol. 11, No. 4, April 2011.
  8. G. Venkata Narasimhulu, "Fuzzy Principal Component Analysis based Gait Recognition", (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 3 (3), 2012, pp. 4015-4020.
  9. Emmanuel J. Candès and Xiaodong Li, "Robust principal component analysis?", Journal of the ACM, Vol. 58, No. 3, Article 11, May 2011.
