The basics of face recognition
Author: huifan Time: 2020-12-08
Face recognition is one of the most popular problems in computer vision, and due to its popularity it has been studied intensively over the past 50 years. Work on automated facial recognition began in the 1960s, but it wasn't until Turk and Pentland implemented the "Eigenfaces" algorithm in the early 1990s that the field produced exciting and practical results.
A glorious future
Face recognition has received more and more attention recently, and we can foresee a bright future for the field.
Security has been, and will remain, the main practical application of face recognition. Here, face recognition serves two purposes at once: identification (who is this person?) and identity verification (is this person who they claim to be?). A good example is the security system at Frankfurt Airport, which uses facial recognition to automate passenger control. Another application is security analysis of video captured by city surveillance camera systems: potential criminal suspects can be identified before a crime is committed. See, for example, the facial recognition system deployed in the London Borough of Newham in 1998.
Face recognition can also be used to speed up identifying people. Imagine a system that recognizes a customer as they walk into a branch (of a bank or insurance company), so that front-desk staff can welcome the customer by name and have their file ready before they reach the counter.

Advertising companies are building billboards that adapt their content to people passing by. After analyzing a person's face, the advertisement adapts to their gender, age, and even personal style. However, this usage may not comply with privacy laws: private companies generally have no right to photograph people in public places (although, of course, this depends on the country).
Don't forget that both Google and Facebook have implemented algorithms that can identify users in the huge databases of photos maintained as part of their social networking services. Third-party services (such as Face.com) provide image-based search; for example, you can search for pictures containing your best friends.
One of the newest applications comes from Google: the face unlock feature. As the name suggests, it unlocks your phone after successfully recognizing your face.
The latest advances come with new hardware, especially 3D cameras. A 3D camera captures a three-dimensional image of your face and sidesteps the main problems of 2D face recognition (lighting, background detection), so it can produce better results. The Microsoft Kinect is a good example: when you walk in front of the camera, it recognizes you immediately.

We should keep in mind that face recognition will be used more and more in the future. This applies not only to face recognition but to the entire field of machine learning: the amount of data generated every second forces us to find ways to analyze it, and machine learning helps us extract meaningful information from that data. Face recognition is just one specific method in this field.
How to start?
Since then, many methods and algorithms have been developed, which makes it very difficult for a developer or computer scientist approaching face recognition for the first time.
I hope this article is a good introduction to the subject. It provides three kinds of information:
What algorithms and methods are used to perform facial recognition.
A full description of the "eigenfaces" algorithm.
A complete working example of face recognition using the EmguCV library and the Silverlight webcam feature. Go directly to the second part of this article for the implementation.
The process of recognizing faces in images is divided into two stages:
Face detection - locating the pixels in the image that represent a face. There are multiple algorithms for this task; one of them, "Haar cascade" face detection, is used later in the example but is not explained in this article.
Face recognition - the actual task of identifying a person by analyzing the part of the image found in the face detection stage.
Face recognition also brings problems that hardly any other domain faces, which makes it one of the most challenging problems in machine learning:
Lighting - because of the reflectivity of human skin, even a slight change in the lighting of an image can have a big impact on the results.
Pose changes - any rotation of a person's head or body affects performance.
Time delay - because people age, the database must be updated regularly.
Methods and algorithms
Appearance-based statistical methods define different ways of measuring the distance between two images; in other words, they try to quantify how similar two faces are. Several methods fall into this category. The most important are:
Principal Component Analysis (PCA) - described in this article.
Linear Discriminant Analysis (more information)
Independent Component Analysis (more information)
This article describes PCA but not the others. For a comparison of these methods, please refer to this article.
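The common idea behind these methods is to compress each image into a short vector before comparing. As a minimal sketch (using NumPy on random data as a stand-in for flattened face images), PCA can be computed with a singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 samples of 50-dimensional data (stand-ins for flattened face images).
X = rng.normal(size=(100, 50))

# PCA: center the data, then take the top right-singular vectors
# of the centered matrix as the principal components.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:10]                    # the 10 strongest directions
projected = (X - mean) @ components.T   # each sample as a 10-number code

print(projected.shape)  # (100, 10)
```

Distances are then measured between these short codes rather than between raw images.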

Gabor filters - filters commonly used in image processing that capture important visual features. They can locate salient features in the image, such as the eyes, nose, or mouth. This method can be combined with the analysis methods above to obtain better results.
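A Gabor filter is just a Gaussian envelope multiplied by a sinusoid; convolving an image with kernels at several orientations highlights edge-like structures such as eyes, nose, and mouth. A minimal NumPy construction (parameter values are illustrative):

```python
import numpy as np

def gabor_kernel(size=31, sigma=5.0, theta=0.0, lam=10.0, psi=0.0, gamma=0.5):
    """Build a Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)   # by orientation theta
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

k = gabor_kernel()
print(k.shape)  # (31, 31)
```

In practice a bank of such kernels (varying theta and lam) is convolved with the face image, and the responses serve as features.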
Neural networks simulate the behavior of the human brain to perform machine learning tasks such as classification or prediction; in our case, we need to classify images. A proper explanation of neural networks would take at least an entire article (if not more). Basically, a neural network is a set of interconnected nodes. The edges between nodes are weighted so as to amplify or attenuate the information propagating between two nodes. Information flows from a set of input nodes, through a set of hidden nodes, to a set of output nodes. The developer must devise a way to encode the input (in this case, an image) as the set of input nodes, and to decode the output from the set of output nodes (in this case, a label identifying the person).
A common approach is to use one input node per pixel of the image and one output node per person in the database, as shown in the following figure:
Neural network for face recognition
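The encoding in the figure can be sketched as a tiny forward pass in NumPy. This is an illustration only: the weights here are random, whereas a real network would be trained on labeled face images, and the layer sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 32 * 32   # one input node per pixel of a 32x32 image
n_hidden = 64        # arbitrary hidden-layer size
n_people = 5         # one output node per person in the database

# Random weights for illustration only; training would set these.
W1 = rng.normal(scale=0.01, size=(n_pixels, n_hidden))
W2 = rng.normal(scale=0.01, size=(n_hidden, n_people))

def classify(image):
    x = image.reshape(-1) / 255.0   # encode: pixels -> input nodes
    h = np.tanh(x @ W1)             # hidden layer
    scores = h @ W2                 # one score per person
    return int(np.argmax(scores))   # decode: strongest output node -> label

face = rng.integers(0, 256, size=(32, 32))
label = classify(face)
print(0 <= label < n_people)  # True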
Eigenface algorithm and PCA
The eigenface algorithm follows this pattern, as do the other statistical methods (LDA, ICA):
Calculate the distance between the captured image and each image in the database.
Select the database example closest to the processed image (the one with the shortest distance to the captured image).
If the distance is not too large, label the image as that specific person.
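The three steps above can be sketched as a nearest-neighbor search with a rejection threshold. The tiny 4-pixel "images" and the threshold value are made up for illustration:

```python
import numpy as np

def recognize(captured, database, labels, threshold):
    """Find the nearest database image; accept it only if close enough."""
    # Step 1: Euclidean distance to every database image.
    dists = np.linalg.norm(database - captured.reshape(1, -1), axis=1)
    # Step 2: pick the closest example.
    best = int(np.argmin(dists))
    # Step 3: accept only if the distance is not too large.
    if dists[best] <= threshold:
        return labels[best]
    return None  # too far from everyone: unknown face

# Toy 4-pixel "images" for illustration.
db = np.array([[10., 10., 10., 10.],
               [200., 200., 200., 200.]])
names = ["alice", "bob"]
print(recognize(np.array([12., 9., 11., 10.]), db, names, threshold=20.0))   # alice
print(recognize(np.array([100., 100., 100., 100.]), db, names, threshold=20.0))  # None
```

The threshold is what lets the system answer "I don't know this person" instead of always forcing a match.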
What is the distance between two images?
The key question is: how do we express the distance between two images? One possibility is to compare the images pixel by pixel. But we can immediately see that this will not work well. Every pixel contributes equally to the comparison, yet not every pixel carries valuable information: background and hair pixels, for example, can make the distance larger or smaller for reasons that have nothing to do with identity. Moreover, for a direct comparison we would need the faces perfectly aligned in all pictures and the head rotation always the same.
To overcome this problem, the PCA algorithm creates a set of principal components called eigenfaces. Eigenfaces are images that represent the main differences between all the images in the database.
The recognizer first finds an average face by computing the mean of each pixel across all images in the database.
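The mean face and the eigenfaces can be sketched together in NumPy. Random vectors stand in for flattened face images here; real code would load aligned grayscale face crops:

```python
import numpy as np

rng = np.random.default_rng(0)
# 20 training "face images" of 8x8 pixels, flattened to 64-vectors
# (random stand-ins for real aligned face crops).
faces = rng.normal(size=(20, 64))

# Step 1: the average face is the per-pixel mean over all training images.
mean_face = faces.mean(axis=0)

# Step 2: eigenfaces are the principal components of the centered images.
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:8]            # keep the top 8 eigenfaces

# Step 3: describe any image by its weights on the eigenfaces; distances
# are then computed between these short weight vectors, not raw pixels.
weights = centered @ eigenfaces.T
print(weights.shape)  # (20, 8)
```

A captured image is recognized by subtracting the mean face, projecting onto the eigenfaces, and running the nearest-neighbor comparison on the resulting weight vector.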
You might think this is too complicated for an introduction to the topic. You might ask yourself: is there an easier way to start with facial recognition?
Unfortunately, no. The eigenface algorithm is the foundation of face recognition research. The other analysis methods (such as linear discriminant analysis and independent component analysis) build on the definitions introduced by the eigenface algorithm. Gabor filters are used to find important features in a face, and the eigenface algorithm can then be used to compare those features.
Neural networks are a complex subject, but in practice they rarely perform better than the eigenface algorithm. Sometimes an image is first described as a linear combination of eigenfaces, and the resulting description vector is then passed to a neural network. In other words, the eigenface algorithm really does form the basis of face recognition.
There are several open-source libraries that implement one or more of these methods, but as developers, if we don't understand how the algorithms work, we won't be able to use them effectively.
If you want to read more interesting articles about artificial intelligence, its applications, and how it is changing our world, stay tuned to hfteco.com.
Website: www.hfteco.com
Website: www.china-attendance.com
Email: info@hfcctv.com