Face Recognition Using Eigen Face


1Nikesh Sable, 2Nihal Jugel ,3Saurabh Gathade, 4Dr. S.M. Malode
1Student, Dept. of Computer Technology, K.D.K. College of Engineering, Nagpur, India
2Student, Dept. of Computer Technology, K.D.K. College of Engineering, Nagpur, India
3Student, Dept. of Computer Technology, K.D.K. College of Engineering, Nagpur, India
4Assistant Professor, Dept. of Computer Technology, K.D.K. College of Engineering, Nagpur, India
1sablenikesh90@gmail.com, 2nihal.jugel@gmail.com, 3saurabhgathade@gmail.com, 4malode2004@yahoo.com

Abstract:

Face recognition is among the most discussed topics in recent biometrics research. Many public places now have surveillance cameras for video capture, motivated largely by security, and face recognition has come to play an important role in such monitoring systems. The human face is a complex object with a high degree of variability in its appearance, which makes face recognition a hard problem in computer vision. The purpose of this paper is to provide a solution to image-based face detection and recognition with high accuracy.

Keywords: OpenCV, Haar Cascade, Face Recognition, Integral Images, Eigenface.

1. INTRODUCTION

Within just a few decades, biometric identification has gone from being a staple of advanced security systems in movies to existing all around the world, even in the palms of our hands. The technology has been deployed in many ways in our society: phones use facial recognition to grant access, and governments such as China and the United States apply it to databases like driver's licenses for a variety of reasons. There are also playful uses, such as Snapchat filters built on face detection. There is an important distinction between face detection and face recognition: with face detection, the computer can locate a face in an image, while with face recognition it can identify whose face it is. When we think about a face, we probably think of a basic set of features. A face has eyes, a nose, and a mouth, but clearly there is more to a face than just these features. Faces differ in many respects, such as the width of the nose, the distance between the eyes, and the shape and size of the mouth. We propose a face detection and recognition system using Python with the OpenCV package. The system holds three modules: detection, training, and recognition. A Haar cascade is used to detect and recognize the faces. It is a machine-learning-based approach in which a cascade function is trained from a large number of positive and negative images and is then used to detect objects in other images.

2. METHODOLOGY

The algorithm has four stages:

2.1 Haar Feature Selection

2.2 Creating Integral Images

2.3 Computing the Eigenfaces

2.4 Using Eigenfaces in Face Processing

2.1 Haar Feature Selection

A Haar cascade is a machine-learning-based detection algorithm used to identify objects in an image or video. A cascade function is trained from a large number of positive and negative images and is then used to detect objects in new images; here we apply it to face detection. Initially the algorithm needs many positive images (images containing faces) and negative images (images without faces). Features are then extracted from them to train the classifier; the Haar features shown below are used. Each feature is a single value obtained by subtracting the sum of the pixels under the white rectangle from the sum of the pixels under the black rectangle.

Fig 1.1: Haar Features
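The white-minus-black computation can be illustrated with a two-rectangle (edge) Haar feature in NumPy; the function name and layout here are illustrative, not part of OpenCV:

```python
import numpy as np

def haar_two_rect(img, r, c, h, w):
    """Two-rectangle Haar feature at (r, c) of size h×w: the sum of pixels
    under the left (white) half minus the sum under the right (black) half."""
    white = img[r:r + h, c:c + w // 2].sum()
    black = img[r:r + h, c + w // 2:c + w].sum()
    return int(white - black)

# A strong vertical edge: dark left half, bright right half.
img = np.zeros((4, 4), dtype=int)
img[:, 2:] = 10
val = haar_two_rect(img, 0, 0, 4, 4)   # large magnitude -> edge present
```

A uniform patch yields a value near zero, while an edge aligned with the rectangle boundary yields a large magnitude; the cascade's training selects which such features separate faces from non-faces.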

2.2 Integral Images

An integral image lets you compute sums over image subregions rapidly. Such sums are needed in many approaches, for example when calculating Haar features, which are used in face detection and similar algorithms. Suppose an image is x pixels wide and y pixels high. Its integral image is then x+1 pixels wide and y+1 pixels high. The first row and the first column of the integral image are all zeros; every other pixel holds the sum of all pixels above and to the left of it in the original image.

Fig 1.2: Integral Image
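A sketch of this construction in NumPy (the helper names are ours); `box_sum` shows why the table makes any rectangle sum a constant-time, four-lookup operation:

```python
import numpy as np

def integral_image(img):
    """(h+1, w+1) integral image: first row and column are zero, and each
    other entry is the sum of all pixels above and to the left of it."""
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four corner look-ups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(1, 10).reshape(3, 3)   # pixels 1..9
ii = integral_image(img)
```

This is what makes Haar features cheap to evaluate: each rectangle sum costs four look-ups regardless of the rectangle's size.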

2.3 Computing the Eigenfaces

Fig 2.3.1: Examples of CMU PIE face images.
Fig 2.3.2: The leftmost image in the first row is the average face and the others are the top two eigenfaces; the second row shows the eigenfaces with the three smallest eigenvalues.

To generate eigenfaces, the face images are first normalized to align the eyes and mouth and then resampled to the same pixel resolution. Eigenfaces are then extracted from the image data by means of principal component analysis (PCA) in the following manner:

  1. Given M face images of size h×w, each image is transformed into a vector of size D (= hw) and placed into the set {Γ_1, Γ_2, ⋯, Γ_M}. The face images should be appropriately scaled and aligned, and the backgrounds (and possibly non-face areas such as hair and neck) should be constant or removed.
  2. Each face differs from the average by the vector Φ_i = Γ_i − Ψ, where the average face is defined as Ψ = (1/M) ∑_{i=1}^{M} Γ_i.
  3. The covariance matrix C ∈ R^{D×D} is defined as C = (1/M) ∑_{i=1}^{M} Φ_i Φ_i^⊤ = (1/M) A A^⊤, where A = [Φ_1, Φ_2, ⋯, Φ_M] ∈ R^{D×M}; the constant factor 1/M does not affect the eigenvectors, so we work with AA^⊤ below.
  4. Determining the eigenvectors of C directly is an intractable task for typical image sizes, since D ≫ M. However, one can first compute the eigenvectors of the much smaller M×M matrix A^⊤A. Its eigenvector and eigenvalue matrices are V = [v_1, v_2, ⋯, v_r] and Λ = diag{λ_1, λ_2, ⋯, λ_r}, with λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_r > 0, where r is the rank of A. Note that eigenvectors corresponding to zero eigenvalues have been discarded.
  5. The eigenvalue and eigenvector matrices of C are then Λ and U = A V Λ^{−1/2}, where the columns of U = [u_1, ⋯, u_r] are the eigenfaces.

Figure 2.3.1 shows some examples from the CMU PIE dataset (Sim et al. 2003), and Figure 2.3.2 shows the average face and eigenfaces derived from the dataset.
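Steps 1–5 above can be sketched in NumPy, with small random vectors standing in for real face images (the sizes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 8, 64                                # M toy "faces" of dimension D = h*w

faces = rng.random((D, M))                  # columns are the vectors Γ_i
psi = faces.mean(axis=1, keepdims=True)     # average face Ψ
A = faces - psi                             # columns Φ_i = Γ_i − Ψ

# Step 4: eigendecompose the small M×M matrix AᵀA instead of the D×D one.
lam, V = np.linalg.eigh(A.T @ A)
order = np.argsort(lam)[::-1]               # sort eigenvalues descending
lam, V = lam[order], V[:, order]
nonzero = lam > 1e-10                       # discard zero eigenvalues
lam, V = lam[nonzero], V[:, nonzero]

# Step 5: eigenfaces U = A V Λ^{-1/2}; each column has unit length.
U = (A @ V) / np.sqrt(lam)
```

Mean subtraction makes the rank of A at most M − 1, so one zero eigenvalue is dropped and M − 1 eigenfaces remain.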

2.3.1 PCA and SVD

This section discusses the close relationship between PCA and SVD. PCA seeks k "principal axes," which define an orthonormal coordinate system that captures most of the variance in the data. Given the data matrix A, the outer-product (covariance) matrix can be written as C = AA^⊤, and the principal components u_i satisfy A A^⊤ u_i = λ_i u_i,

where u_i are the eigenvectors of AA^⊤ associated with eigenvalues λ_i. When AA^⊤ is too large to be decomposed efficiently, one can circumvent this by computing the inner-product matrix A^⊤A as in Step 4: A^⊤A v_i = λ_i v_i,

where v_i are the eigenvectors of A^⊤A associated with eigenvalues λ_i. Note that each λ_i is nonnegative, and u_i = λ_i^{−1/2} A v_i for the nonzero eigenvalues. This clarifies Step 5, which states the same relation in matrix form.

The SVD of A is the factorization A = U Δ V^⊤ = ∑ δ_i u_i v_i^⊤, where U and V are orthogonal matrices and the diagonal matrix Δ contains the singular values δ_i, which are always nonnegative. Connecting this to PCA, U and V are the eigenvector matrices of AA^⊤ and A^⊤A respectively, and δ_i² = λ_i for every i.
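The δ_i² = λ_i correspondence is easy to check numerically on a small random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 4))

# SVD of A: singular values come back sorted in descending order, nonnegative.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Top eigenvalues of AAᵀ (eigvalsh returns them ascending, so reverse).
lam = np.linalg.eigvalsh(A @ A.T)[::-1][:4]
```

In practice one therefore computes eigenfaces directly via the SVD of A, which is numerically more stable than forming AA^⊤ or A^⊤A explicitly.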

2.4 Using Eigenfaces in Face Processing

The eigenfaces span an m-dimensional subspace of the original image space, obtained by selecting the subset of eigenvectors Û = [u_1, ⋯, u_m] associated with the m largest eigenvalues. This yields the so-called face space, whose origin is the average face and whose axes are the eigenfaces (see Figure 2.4.1). To perform face detection or recognition, one computes distances within or from the face space.

Fig 2.4.1: Visualization of a 2D face space, with the axes representing two Eigenfaces.
Fig 2.4.2: Original images (Row 1) and their projections into the face space (Row 2).

2.4.1 Face detection

Because the face space (the subspace spanned by the eigenfaces) models the space of face images, face detection can be cast as detecting image patches that lie close to the face space; that is, the projection distance δ should be within some threshold θ_δ. The point-to-space distance δ is the distance between the face image and its projection onto the face space, computed as δ = ∥(I − ÛÛ^⊤)(Γ − Ψ)∥, where I is the identity matrix. As Figure 2.4.2 shows, the distance between an image (row 1) and its face-space projection (row 2) is much smaller for a face than for a non-face (tree) image.
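A sketch of the projection distance δ, with a QR-orthonormalized random basis standing in for the eigenfaces Û:

```python
import numpy as np

def face_space_distance(gamma, psi, U):
    """δ = ‖(I − ÛÛᵀ)(Γ − Ψ)‖: distance from an image to the face space."""
    phi = gamma - psi
    return np.linalg.norm(phi - U @ (U.T @ phi))

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.random((10, 3)))     # orthonormal stand-in for Û
psi = rng.random(10)

# An image built inside the face space projects onto itself, so δ ≈ 0.
in_space = psi + U @ np.array([1.0, -2.0, 0.5])
delta = face_space_distance(in_space, psi, U)
```

A detector built this way would slide over image patches and flag those with δ < θ_δ as faces.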

2.4.2 Face recognition

A new face Γ is projected into the face space by Ω = Û^⊤(Γ − Ψ), where Û is the set of significant eigenvectors. Note that the weight vector Ω is the representation of the new face in face space. One simple way to decide which face class Γ belongs to is to minimize the Euclidean distance ϵ_k = ∥Ω − Ω_k∥, where Ω_k is the weight vector describing the kth face class. The face Γ is assigned to class k if the minimum ϵ_k is smaller than some predefined threshold θ_ϵ; otherwise it is classified as unknown. Figure 2.4.1 illustrates projection and recognition by visualizing the face space as a plane.
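The nearest-class rule can be sketched as follows; the three-pixel "images," identity-basis eigenfaces, and threshold are toy values chosen for illustration:

```python
import numpy as np

def classify_face(gamma, psi, U, class_weights, theta_eps):
    """Project Γ into face space and return the index k of the nearest
    class weight vector Ω_k, or -1 ("unknown") if min ε_k exceeds θ_ε."""
    omega = U.T @ (gamma - psi)                       # Ω = Ûᵀ(Γ − Ψ)
    eps = np.linalg.norm(class_weights - omega, axis=1)
    k = int(np.argmin(eps))
    return k if eps[k] < theta_eps else -1

# Toy setup: 3-pixel images, 2 eigenfaces (the first two coordinate axes).
U = np.eye(3)[:, :2]
psi = np.zeros(3)
classes = np.array([[0.0, 0.0],    # class 0 weights Ω_0
                    [5.0, 5.0]])   # class 1 weights Ω_1
```

In a real system each Ω_k would be the average of the weight vectors of that person's training images, and θ_ε would be tuned on held-out data.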

3. CONCLUSIONS

In this paper we developed a face recognition system using OpenCV and Python to detect and identify human faces, trained on image datasets prepared before recognition. The Haar cascade algorithm is used for detection. In the future these techniques can serve a number of purposes. The system can be enhanced by increasing the number of input images to obtain higher accuracy; adding color processing and edge detection also helps. Data is the most valuable ingredient in these processes: the more images are used, the higher the accuracy. There are also methods to obtain more training images by creating new images from existing ones. Mirror images can multiply the training set, resizing and rotation help as well, and adding noise to training images improves the system's tolerance to noise.

REFERENCES:

  1. Face Recognition Data, University of Essex, UK, faces96, http://cswww.essex.ac.uk/mv/allfaces/faces96.html
  2. Face Recognition Data, University of Essex, UK, grimace, http://cswww.essex.ac.uk/mv/allfaces/grimace.html
  3. https://docs.opencv.org/3.4/d7/d8b/tutorial_py_face_detection.html
  4. https://pythonprogramming.net/haar-cascade-object-detection-python-opencv-tutorial/
  5. Open Source Computer Vision Library Reference Manual, Intel.
  6. P. Viola and M. Jones, Rapid Object Detection Using a Boosted Cascade of Simple Features, CVPR 2001, https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
  7. "AdaBoost." Wikipedia, Wikimedia Foundation, 13 Jan. 2018, en.wikipedia.org/wiki/AdaBoost.
  8. "Cascading Classifiers." Wikipedia, Wikimedia Foundation, 15 Oct. 2013, en.wikipedia.org/wiki/Cascading_classifiers.
  9. P. Belhumeur, J. Hespanha, and D. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
  10. P. S. Huang, C. J. Harris, and M. S. Nixon, Recognising Humans by Gait via Parametric Canonical Space, Artificial Intelligence in Engineering, 13(4):359–366, October 1999.
  11. R. Kuhn, J. C. Junqua, P. Nguyen, and N. Niedzielski, Rapid Speaker Adaptation in Eigenvoice Space, IEEE Transactions on Speech and Audio Processing, 8(6):695–707, 2000.
