Face recognition technology refers to the use of computers to analyze and compare facial images. It is an active field of computer research that encompasses techniques such as face detection and tracking, automatic image zoom adjustment, infrared detection at night, and automatic exposure control.
Face recognition belongs to the family of biometric recognition technologies, which distinguish individual organisms (usually people) by their biological characteristics.
In March 2014, a team led by Tang Xiaoou, director of the Department of Information Engineering at the Chinese University of Hong Kong and deputy dean of the Shenzhen Institutes of Advanced Technology of the Chinese Academy of Sciences, released an original face recognition algorithm with an accuracy of 98.52%, surpassing the recognition capability of the human eye (97.53%). On August 17, 2019, the Beijing Internet Court released the White Paper on Judicial Application of Internet Technology, which described ten typical technology applications, including face recognition.
Face recognition technology is based on a person's facial features. Given an input image or video stream, the system first determines whether it contains any faces; if so, it further locates each face's position and size and the positions of the main facial organs. From this information, the identity features contained in each face are extracted and compared against known faces to determine each face's identity.
In a broad sense, face recognition includes the whole chain of technologies needed to build a face recognition system: face image acquisition, face localization, preprocessing, identity confirmation, and identity search. In a narrow sense, it refers specifically to the technology or system that identifies or looks up a person by their face.
The traits studied in biometrics include the face, fingerprints, palm prints, the iris, the retina, the voice, body shape, and personal habits (such as the strength and rhythm of keyboard typing, or a signature). The corresponding recognition technologies are face recognition, fingerprint recognition, palm print recognition, iris recognition, retinal recognition, voice recognition (voice can be used both for identity recognition and for speech-content recognition; only the former is a biometric technology), body-shape recognition, keystroke recognition, and signature recognition.
Technical Principles
Face recognition technology consists of three parts:
(1) Face detection
Face detection refers to determining whether a face is present in a dynamic scene or against a complex background, and if so, separating it out. Common approaches include:
① Reference template method
First design one or several standard face templates, then compute the degree of match between a test sample and the standard template, and use a threshold to decide whether a face is present;
② Face Rule Method
Because human faces have characteristic structural distributions, the face rule method extracts these features to generate rules for deciding whether a test sample contains a face;
③ Sample learning method
This method applies artificial neural networks from pattern recognition: a classifier is trained on a set of face image samples and a set of non-face image samples;
④ Skin color model method
This method exploits the fact that facial skin tones cluster in a relatively compact region of color space;
⑤ Feature sub-face method
This method treats the set of all face images as a face subspace and decides whether a face is present based on the distance between a test sample and its projection onto that subspace.
It is worth noting that practical detection systems often combine several of the above five methods.
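As a concrete illustration of method ④, the sketch below classifies pixels as skin or non-skin using a rectangular cluster in the Cb/Cr plane of the YCbCr color space. The conversion follows the ITU-R BT.601 formulas; the threshold values are classic illustrative choices, not tuned for any particular camera or population.

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 conversion from 8-bit RGB to YCbCr
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    # Classic rectangular skin cluster in the Cb/Cr plane
    # (illustrative thresholds; real systems tune these)
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

def skin_mask(image):
    # image: list of rows of (r, g, b) tuples -> boolean mask
    return [[is_skin(*px) for px in row] for row in image]
```

A real detector would then look for large connected skin regions with face-like aspect ratios, rather than trusting individual pixels.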
(2) Face tracking
Face tracking refers to dynamically tracking detected faces. Typically a model-based method is used, or a combination of motion-based and model-based methods. Tracking with a skin color model is also a simple and effective approach.
(3) Face comparison
Face comparison confirms the identity of a detected face image, or searches for it in a face image library. In practice, the sampled face is compared in turn with the stored faces, and the best match is selected. The way a face image is described therefore determines the specific method and performance of recognition. The two main descriptions are the feature vector and the face pattern template:
① Feature vector method
In this method, the size, position, and distance of facial features such as the iris, the wings of the nose, and the corners of the mouth are determined first; their geometric feature quantities are then computed, and these quantities form a feature vector describing the face image.
② Face pattern template method
In this method, several standard face image templates or facial-organ templates are stored in a library; at matching time, all pixels of the sampled face image are matched against the templates using a normalized correlation measure. There are also methods that combine features with templates, such as pattern-recognition autocorrelation networks.
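The normalized correlation measure mentioned above can be sketched as follows: both patch and template are reduced to zero mean, and their correlation is normalized by the product of their magnitudes, so the score lies in [-1, 1] and is insensitive to uniform brightness and contrast changes. The flat-list representation is a simplification for illustration.

```python
from math import sqrt

def ncc(patch, template):
    # Normalized correlation between two equal-size grayscale
    # patches, given as flat lists of pixel values (0-255)
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sqrt(sum((p - mp) ** 2 for p in patch))
    dt = sqrt(sum((t - mt) ** 2 for t in template))
    if dp == 0 or dt == 0:
        return 0.0  # a constant patch carries no pattern to correlate
    return num / (dp * dt)

def best_template(patch, library):
    # Return the library key whose template correlates best with the patch
    return max(library, key=lambda k: ncc(patch, library[k]))
```

Because the score is normalized, a patch that is a brightened or contrast-stretched copy of a template still correlates perfectly with it.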
The core of face recognition technology is "local feature analysis" and "graphical/neural recognition algorithms". Identification parameters are formed from the facial organs and characteristic regions, for example the multi-dimensional data describing their geometric relationships, and are compared, judged, and confirmed against all the original parameters in the database. The judgment typically takes less than one second.
There are three general steps:
(1) Build a face archive. A camera collects face images of the enrolled personnel, or photographs of them are taken, to form face image files; these files are stored as generated faceprint codes.
(2) Capture the current face. The camera captures the face image of the person currently entering or leaving (or a photo is input), and a faceprint code is generated from the current face image.
(3) Compare the current faceprint code with the archive. The faceprint code of the current face image is compared against the codes in the archive. This "faceprint coding" works from the essential features of a human face. The coding is robust to changes in lighting, skin tone, facial hair, hairstyle, glasses, expression, and pose, and is reliable enough to pick out an individual from among millions of people. The whole recognition process can run automatically, continuously, and in real time on ordinary image-processing equipment.
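The three steps above can be sketched as an enroll-then-match flow. The `encode` method here is a hypothetical stand-in for a real faceprint encoder (it merely L2-normalizes a raw feature vector), and the cosine-similarity threshold of 0.9 is an illustrative choice, not a recommended operating point.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two non-zero vectors
    num = sum(x * y for x, y in zip(a, b))
    return num / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

class FaceprintArchive:
    # Hypothetical archive: encode() stands in for a real faceprint
    # encoder; here it just L2-normalizes the raw feature vector.
    def __init__(self):
        self.records = {}

    def encode(self, face_vector):
        norm = sqrt(sum(x * x for x in face_vector)) or 1.0
        return [x / norm for x in face_vector]

    def enroll(self, person_id, face_vector):
        # Step (1): store the faceprint code in the archive
        self.records[person_id] = self.encode(face_vector)

    def match(self, face_vector, threshold=0.9):
        # Steps (2)-(3): encode the current face, compare with the archive
        code = self.encode(face_vector)
        best = max(self.records,
                   key=lambda k: cosine(code, self.records[k]),
                   default=None)
        if best is not None and cosine(code, self.records[best]) >= threshold:
            return best
        return None  # no archived faceprint is similar enough
```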
The face recognition system mainly includes four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
Face image acquisition and detection
Face image acquisition: Different kinds of face images can be collected through a camera lens, including still images, video frames, and images with different poses and expressions. When a user is within the capture device's field of view, the device automatically searches for and captures the user's face image.
Face detection: In practice, face detection mainly serves as preprocessing for recognition, accurately marking the position and size of each face in the image. Face images contain rich pattern features, such as histogram features, color features, template features, structural features, and Haar features. Face detection selects the useful information among these and uses it to detect faces.
The mainstream face detection method applies the AdaBoost learning algorithm to such features. AdaBoost is a classification method that combines several weak classifiers into a new, strong classifier.
In face detection, the AdaBoost algorithm selects the rectangular features (weak classifiers) that best represent a face, combines the weak classifiers into a strong classifier by weighted voting, and then connects several trained strong classifiers in series into a cascade, which greatly improves detection speed.
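The weighted-voting idea can be illustrated with a toy AdaBoost over one-dimensional threshold stumps, standing in for the rectangular Haar features of a real detector. Each round picks the weak classifier with the lowest weighted error, assigns it a voting weight alpha, and re-weights the samples so that later rounds focus on the mistakes. This is a minimal sketch of the boosting step only, not of the cascade.

```python
import math

def stump(threshold, polarity):
    # Weak classifier: predicts +1 (face) if polarity * (x - threshold) > 0
    return lambda x: 1 if polarity * (x - threshold) > 0 else -1

def adaboost(samples, labels, stumps, rounds=3):
    # samples: 1-D feature values; labels: +1 face / -1 non-face
    n = len(samples)
    w = [1.0 / n] * n                       # start with uniform sample weights
    strong = []                             # list of (alpha, weak classifier)
    for _ in range(rounds):
        # pick the weak classifier with the lowest weighted error
        best, best_err = None, float("inf")
        for h in stumps:
            err = sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        best_err = min(max(best_err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        strong.append((alpha, best))
        # re-weight: boost the samples this round got wrong
        w = [wi * math.exp(-alpha * y * best(x))
             for wi, x, y in zip(w, samples, labels)]
        s = sum(w)
        w = [wi / s for wi in w]
    # the strong classifier is a weighted vote of the chosen weak ones
    return lambda x: 1 if sum(a * h(x) for a, h in strong) > 0 else -1
```

A cascade would chain several such strong classifiers, letting early stages reject obvious non-faces cheaply.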
Face image preprocessing
Face image preprocessing: Preprocessing operates on the results of face detection, processing the image so that it can serve feature extraction. Because the raw image acquired by the system is limited by acquisition conditions and corrupted by random interference, it usually cannot be used directly; it must first be preprocessed with operations such as gray-level correction and noise filtering. For face images, preprocessing mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
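Of the operations listed, histogram equalization is simple enough to sketch in full: each gray level is remapped through the normalized cumulative distribution of pixel intensities, spreading the levels over the full range and reducing the effect of uneven lighting.

```python
def equalize_histogram(image, levels=256):
    # image: list of rows of 8-bit gray values (0..levels-1).
    # Classic histogram equalization: remap each level through the
    # normalized cumulative distribution of pixel intensities.
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    def remap(p):
        if n == cdf_min:          # constant image: nothing to equalize
            return p
        return round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
    return [[remap(p) for p in row] for row in image]
```

After equalization the darkest occupied level maps to 0 and the brightest to 255, regardless of the original exposure.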
Face image feature extraction
Face image feature extraction: The features usable in face recognition systems are generally divided into visual features, pixel statistical features, face image transform-coefficient features, and face image algebraic features. Face feature extraction is carried out on certain features of the face. Also known as face representation, it is the process of modeling the features of a face. Methods fall into two broad categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.
Knowledge-based representation methods derive feature data helpful for face classification from the shapes of the facial organs and the distances between them; the feature components usually include Euclidean distances, curvatures, and angles between feature points. A face is composed of the eyes, nose, mouth, chin, and other parts; geometric descriptions of these parts and of the structural relationships among them can serve as important recognition features, called geometric features. Knowledge-based face representation mainly comprises geometric-feature methods and template-matching methods.
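The Euclidean distances and angles mentioned above can be computed directly from landmark coordinates. The four-landmark descriptor below is a hypothetical minimal example; real geometric-feature methods use many more landmarks and normalize for scale and rotation.

```python
from math import dist, acos, degrees

def angle_at(b, a, c):
    # Interior angle (in degrees) at vertex a, formed by points b and c
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - a[0], c[1] - a[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return degrees(acos(dot / (dist(a, b) * dist(a, c))))

def geometric_features(left_eye, right_eye, nose_tip, mouth):
    # Hypothetical minimal descriptor from four landmark points:
    # inter-ocular distance, each eye-to-mouth distance, and the
    # angle subtended by the eyes at the nose tip.
    return [
        dist(left_eye, right_eye),
        dist(left_eye, mouth),
        dist(right_eye, mouth),
        angle_at(left_eye, nose_tip, right_eye),
    ]
```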
Face image matching and recognition
Face image matching and recognition: The extracted feature data of the face image is searched and matched against the feature templates stored in the database, and a match is reported when the similarity exceeds a set threshold. Recognition compares the features of the face to be identified with the stored feature templates and judges identity by similarity. The process splits into two categories: one-to-one comparison, which verifies a claimed identity, and one-to-many matching, which identifies a face by searching a whole gallery.
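The one-to-one versus one-to-many distinction can be made concrete as follows. The similarity function and the threshold of 0.5 are illustrative stand-ins for a real matcher's score and operating point.

```python
def similarity(a, b):
    # Toy similarity score: 1 / (1 + squared Euclidean distance),
    # so identical vectors score 1.0 and distant ones approach 0
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, enrolled_template, threshold=0.5):
    # 1:1 comparison: accept or reject a single claimed identity
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, gallery, threshold=0.5):
    # 1:N search: best match over the whole gallery, or None
    best_id = max(gallery, key=lambda k: similarity(probe, gallery[k]))
    return best_id if similarity(probe, gallery[best_id]) >= threshold else None
```

Verification answers "is this person who they claim to be?", while identification answers "who, if anyone, is this person?" — the same scoring function serves both.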