CN108256459B - Security check door face recognition and face automatic library building algorithm based on multi-camera fusion - Google Patents


Info

Publication number
CN108256459B
CN108256459B (application CN201810021107.2A)
Authority
CN
China
Prior art keywords
face
faces
algorithm
matching
security inspection
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201810021107.2A
Other languages
Chinese (zh)
Other versions
CN108256459A (en
Inventor
张恩伟
Current Assignee
Hunan Shengxun Technology Co ltd
Original Assignee
Beijing Bravevideo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Bravevideo Technology Co ltd
Priority claimed from CN201810021107.2A
Publication of CN108256459A
Application granted
Publication of CN108256459B

Classifications

    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G06N 3/045: Neural networks; combinations of networks
    • G06V 40/166: Face detection; localisation; normalisation using acquisition arrangements
    • G06V 40/168: Face feature extraction; face representation

Abstract

The invention provides a multi-camera-fusion face recognition and automatic face-library-building algorithm for security check doors, aimed at solving the problem of rapid security inspection. The algorithm does not require the inspected person to actively cooperate by presenting the face; walking along the normal route is enough, so it belongs to the class of non-cooperative face recognition algorithms. Several cameras are installed on the security check door facing the direction of entry. They simultaneously capture video and perform face detection; detected faces are tracked and then screened by a face quality evaluation module; the screened faces are grouped to the same person by a corresponding upper-body matching algorithm; large-angle faces are then discarded via pose estimation; the remaining faces are aligned, calibrated, and fed into a deep convolutional neural network for feature extraction. Based on the extracted face features, face matching and automatic library building are performed by a multi-camera face comparison algorithm. The multi-camera-fusion security door face recognition algorithm significantly increases inspection speed and reduces safety hazards such as crowding at checkpoints.

Description

Security check door face recognition and face automatic library building algorithm based on multi-camera fusion
Technical Field
The invention belongs to the fields of security and safety inspection, and relates to pattern recognition, graphic image processing, machine learning, and related fields.
Background
Although the world appears peaceful, it is not safe: terrorism and extremism quietly infiltrate daily life, so security precaution remains a very important issue. Security inspection can detect dangerous goods in time to a certain extent, protecting the lives and property of the public, and inspection equipment is deployed in airports, subways, major venues, and other public places. The security door is a device for screening the human body. Domestic body screening mainly uses walk-through metal detection doors, supplemented by close scanning with portable handheld metal detectors and manual pat-down checks by inspectors to find knives and other suspicious objects. A metal detection door reacts only to metal and is ineffective for other contraband; even when metal is detected, the localization is coarse, so only a person carrying large metal objects can be flagged. The accompanying handheld detector requires inspectors to make contact with the inspected person, which easily provokes resentment or even physical conflict, and manual checks typically take 6-8 seconds per person, a speed that easily causes congestion and backlogs in high-traffic venues such as subways.
Some manufacturers have introduced face recognition into security inspection systems, usually by swiping an identity card and then having the inspected person cooperate in a face capture so that the ID-card photo can be compared with the live face, a so-called person-ID verification system. Such a system requires active cooperation: the user must face the acquisition device (camera) to provide a frontal image and must carry an identity card, which subway passengers, especially commuters in a hurry, may not have at hand. Even when the card is carried, it must be taken out of a bag before the user actively cooperates with face capture; this process takes too long, so the inspection speed cannot keep up with the extremely high passenger flow of a subway.
Disclosure of Invention
The invention provides a multi-camera-fusion face recognition and automatic face-library-building algorithm for security check doors, aimed at solving the problem of rapid security inspection. The algorithm does not require the inspected person to actively cooperate by presenting the face; walking along the normal route is enough, so it belongs to the class of non-cooperative face recognition algorithms. Mainstream face recognition algorithms on the market place requirements on face pose: the more frontal the pose, the higher the recognition accuracy. The invention therefore uses several cameras at different positions and angles to capture faces of different heights and walking postures and to guarantee a high frontal-face capture rate, installing N cameras (N ≥ 3) on the security check door. The N cameras simultaneously perform face detection and face tracking on the captured video; within a limited time window (derived from the average time a person takes to walk through the door), the M faces (M ≤ N) with the highest face-quality scores are automatically screened from the face queue produced by each camera's tracking, at most 1 face per camera path. The corresponding upper-body images are cropped using the face coordinates and matched on edges and color; if the match succeeds, the faces are regarded as captured from the same person. A face pose estimation algorithm then computes the horizontal rotation (yaw), pitch, and tilt angles of each face, and the K faces (K ≤ M) with smaller angles (closer to frontal) are screened out.
After the K faces are aligned and calibrated via feature points, they are fed into a deep convolutional neural network for face feature extraction; each face yields one 1024-dimensional feature vector, for K feature vectors in total. The K feature vectors are each compared against the face feature vectors in the face library. If any of the K faces has a matching value in the library greater than or equal to a first threshold, the person with the single highest matching value is selected as the final recognition output, the best-matching captured face is added to that person's entry, and the face database is updated. If the matching values of all K faces are below the first threshold, a second threshold is applied: matches at or above it are treated as provisional. Each of the K faces selects from the library the person it matches most often (each person corresponds to several enrolled faces); the match counts are L_1, ..., L_K and the corresponding average matching values are S_1, ..., S_K. A composite matching Score is computed from these, the candidates are ranked by Score, the person with the highest Score is taken as the final face-matching output, and the corresponding captured faces are added to that person's entry to update the face database. If the matching values of all K faces are below the second threshold, a new person entry is created, the K faces are stored as that person's faces, and the person's data are added to the face database.
According to the invention, the multi-camera-fusion face recognition and automatic library-building algorithms let the security door capture frontal faces to the greatest possible extent without requiring inspected persons to cooperate in face capture, greatly increasing the speed of passing through the door. At the same time, a face database of persons passing the door is built automatically from the recognition results, providing an effective face library for subsequent personnel management.
The invention provides a security inspection door face recognition and face automatic library building algorithm based on multi-camera fusion, which comprises the following steps:
The face library is initialized: it may be empty, or manually collected faces may serve as the initial base library; database indexes of face pictures, face features, and personnel information are established and cross-linked. Each security door pre-allocates disk and database capacity for a store on the order of tens of millions of faces.
N cameras are installed on the two sides and the top of the security door with lenses facing the direction of entry, so that the frontal face can be captured as much as possible as a person passes through. Fig. 1 shows a security door equipped with 5 cameras: two on each side of the door and one on top. The N cameras run the face detection algorithm simultaneously. The invention does not adopt deep-learning-based face detection: the security door scene has heavy passenger flow and demands high detection speed, and deep-learning detection cannot keep up with N cameras detecting simultaneously. Moreover, in the multi-camera security door scene the frontal-face occurrence rate is high and the background is relatively uniform, so this is not fully unconstrained face detection; an improved Adaboost algorithm based on Haar-like features is therefore adopted, guaranteeing detection speed together with a high frontal-face detection rate and a low false-detection rate. Because traditional Haar-like features are all local neighborhood features (for example, features formed by a single eye and its surroundings), the invention proposes Haar-like features with a 3x3 structure to supplement them with features over a larger spatial range; these better express combined features of the eyes, nose, and mouth. During training, negative samples use scene pictures shot in the security door environment, combined with objects that may appear in the scene such as bags and clothes with various textures.
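The 3x3-grid Haar-like feature described above can be sketched with an integral image, which evaluates any rectangle sum in four lookups. This is an illustrative Python reconstruction, not the patent's actual feature set: the cell weights and cell sizes here are assumptions.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Pixel sum of the h x w rectangle whose top-left corner is (r, c)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_3x3(ii, r, c, cell_h, cell_w, weights):
    """Haar-like feature on a 3x3 grid of equal cells.

    `weights` is a 3x3 grid of coefficients (e.g. +1/-1). Classic Haar-like
    features use 2 or 3 neighbouring rectangles; this wider grid can cover
    eye/nose/mouth combinations at once (a sketch of the idea only).
    """
    total = 0
    for i in range(3):
        for j in range(3):
            total += weights[i][j] * rect_sum(
                ii, r + i * cell_h, c + j * cell_w, cell_h, cell_w)
    return total
```

With a center-surround weight grid, the feature responds to a bright cell surrounded by darker cells (or vice versa), which is the kind of larger-range combined structure the 3x3 features are meant to express.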
After the N cameras detect faces, the faces appearing in each camera are tracked with a Kalman-filter tracking algorithm based on neighborhood search. Distant faces are filtered out by a face-size constraint; a face passing through a security door is normally unoccluded, and people generally pass one by one. In this scene, the assumption that both the process noise and the observation noise of face motion are white Gaussian noise essentially holds, so the Kalman-filter tracker is effective here.
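A minimal Python sketch (numpy only) of such a tracker: a constant-velocity Kalman filter per face plus a nearest-neighbour gate implementing the neighbourhood search. The noise covariances and gating radius are assumptions; the patent does not give them.

```python
import numpy as np

class FaceTrack:
    """Constant-velocity Kalman filter for one face centre."""

    def __init__(self, x, y):
        self.s = np.array([x, y, 0.0, 0.0])     # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0               # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0       # constant-velocity motion model
        self.H = np.eye(2, 4)                   # we observe x, y only
        self.Q = np.eye(4) * 0.01               # process noise (assumed)
        self.R = np.eye(2) * 1.0                # observation noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, zx, zy):
        y = np.array([zx, zy]) - self.H @ self.s          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(pred, detections, radius=40.0):
    """Neighbourhood search: nearest detection within `radius`, else None."""
    if not detections:
        return None
    d = min(detections, key=lambda p: np.hypot(p[0] - pred[0], p[1] - pred[1]))
    return d if np.hypot(d[0] - pred[0], d[1] - pred[1]) <= radius else None
```

When `associate` returns None for several consecutive frames, the track would be aged out of the face queue, matching the disappearance handling described in the text.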
The face tracking results are stored in a face queue; since the security door usually requires people to pass one by one, each camera usually holds only one valid face queue. Within a limited time window (derived from the average time a person takes to walk through the door), the M faces (M ≤ N) with the highest face-quality scores are screened from the face queues of the N cameras, at most 1 face per camera path. Face quality is evaluated by a composite index of illuminance and sharpness. Illuminance uses the Y component of YUV space, combining indexes such as the global average, maximum, and minimum brightness of the face. Sharpness is evaluated from high-frequency content: a discrete cosine transform (DCT) is applied first, and sharpness is then estimated from the fraction of high-frequency coefficients.
The corresponding upper-body images of the M faces obtained above are cropped using the face coordinates; Local Binary Pattern (LBP) features and a color histogram are computed on each upper-body image. The LBP features and color histogram express characteristics such as clothing and hair; pairwise matching on them screens out the faces among the M that belong to the same person. The screened faces are passed to a face pose estimation algorithm: first, facial feature points are extracted, including the positions of the eyes, nose, mouth corners, and chin; these feature points are also passed on to the subsequent face alignment calibration. The yaw, pitch, and tilt angles of the face are estimated via the projection from a three-dimensional face rotation model to the two-dimensional face (a matrix of the three angles), and the K faces (K ≤ M) whose angles are below a threshold (close to frontal) are screened out.
After the K faces are aligned and calibrated based on the feature points, they are fed into a deep convolutional neural network to extract 1024-dimensional features. As shown in fig. 4, the network consists of 9 convolutional layers, 4 pooling layers, 1 merging layer, and 1 fully-connected layer. The convolutional layers use 3x3 kernels, the pooling layers use 2x2 windows, and the merging layer fuses features from different convolutional layers. Each convolutional layer is followed by a ReLU (Rectified Linear Unit), and each layer normalizes its features. The final objective is a weighted sum of a Softmax loss and a center loss. The network is trained on a labeled database combining a publicly available celebrity face database and faces collected at security doors, finally producing the convolutional network parameters used for face feature extraction.
The K feature vectors are each compared against the face feature vectors in the face library. If any of the K faces has a matching value greater than or equal to the first threshold, the person with the single highest matching value is selected as the final recognition output; the best-matching captured face is added to that person's entry and the face database is updated.
If the matching values of all K faces are below the first threshold, a second threshold is applied, and matches at or above it are treated as provisional. Each of the K faces selects from the library the person it matches most often (each person corresponds to several enrolled faces); the match counts are L_1, ..., L_K and the corresponding average matching values are S_1, ..., S_K. The composite matching score is computed as
[Formula image in the original: the composite Score in terms of L_1, ..., L_K and S_1, ..., S_K; not recoverable from the text.]
The candidates are ranked by Score, the person with the highest Score is selected as the final face-matching output, the corresponding face is added to that person's entry, and the face database is updated.
If the matching values of the K faces in the face database are all below the second threshold, a new person entry is created, the K faces are stored as that person's faces, and the person's data are added to the face database.
The security check door face recognition and automatic face-library-building algorithm based on multi-camera fusion captures faces from multiple angles simultaneously, guaranteeing a high frontal-face capture rate without active cooperation from the inspected person. This raises the inspection speed of the door and the recognition rate of its non-cooperative face recognition. It can also automatically build the face library of inspected persons, providing face-library support for subsequent personnel management.
Drawings
FIG. 1 is a schematic view of the installation of multiple cameras in the security door of the present invention.
FIG. 2 is a flow chart of the security inspection door face recognition and face automatic library building algorithm based on multi-camera fusion.
FIG. 3 is a schematic representation of the added Haar-like features of the present invention.
Fig. 4 is a schematic diagram illustrating layers of a deep convolutional neural network for face feature extraction according to the present invention.
Detailed Description
The invention is further explained below with reference to the figures and specific examples. Note that the examples described below are intended to aid understanding of the invention and are only a part of it; they therefore do not limit its scope of protection.
As shown in fig. 2, the invention carries out, with multiple cameras simultaneously, the series of steps of face detection, face tracking, face quality evaluation, face pose estimation, face alignment calibration, face feature extraction, face feature comparison, and automatic face library building.
In step 201, an empty face library is created, or manually collected faces are used as the initial base library, and database indexes of face-related information are established. Disk storage space and memory space are allocated for each security door face recognition system.
In step 202, N cameras are installed on the two sides and the top of the security door with lenses facing the direction of entry, so that the frontal face can be captured as much as possible as a person passes through. Large-aperture lenses are used to shorten the exposure time and capture faces in motion. The N cameras simultaneously acquire video and perform face detection using the improved Adaboost algorithm based on Haar-like features, i.e. with features over a larger spatial range: Haar-like features with a 3x3 structure are added, which better express combined features of the eyes, nose, and mouth, as shown in fig. 3. During training, negative samples are taken from scenes such as subway, airport, and major-venue security doors.
In step 203, after faces are detected in step 202, the faces appearing in each camera are tracked with the Kalman-filter tracking algorithm based on neighborhood search. With the target center as the search starting point, the detection from step 202 closest to the current face is searched within a window of a certain range using position and velocity prediction; the found face coordinates serve as the observation, and the Kalman filter updates variables such as position and velocity. If no face is found, the face is considered to have disappeared and is updated and removed from the face queue frame by frame. If a new face appears, a new face tracking queue is created.
Step 204 screens, from the face queues produced by the face tracking of step 203, the M faces (M ≤ N) with the highest face-quality scores within a limited time window (derived from the average time a person takes to walk through the security door), at most 1 face per camera path. Face quality is evaluated by a composite index of illuminance and sharpness. With the global average brightness I_AVG, the maximum brightness I_MAX, and the minimum brightness I_MIN of the face, the overall illuminance index is:
[Formula image in the original: the illuminance index in terms of I_AVG, I_MAX, I_MIN; not recoverable from the text.]
For brightness, in order to account for unevenly lit ("yin-yang") faces, the invention estimates the mean brightness of the left and right face halves by the following formula:
[Formula image in the original: the symmetry index from the left-half and right-half mean brightness; not recoverable from the text.]
Face sharpness is measured by counting high-frequency coefficients after a discrete cosine transform (DCT): the image is divided into 8x8 macroblocks, each macroblock is DCT-transformed into an 8x8 frequency-domain matrix C, and for each position c_ij (excluding the DC component; 1 ≤ i, j ≤ 8) a threshold T_ij is set. If the coefficient c_ij after the DCT exceeds T_ij, a high-frequency counter is incremented; the ratio of the number of coefficients exceeding their thresholds to the total number of frequency-domain coefficients is taken as the face sharpness index QUALITY_sharpness. The composite face-quality index is therefore:

QUALITY_face = (QUALITY_brightness + QUALITY_uniformity + QUALITY_sharpness) / 3 × 100%
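The sharpness part of this quality index can be reproduced directly from the description; the brightness and uniformity formulas appear only as images in the original, so the versions below are simple stand-ins. A Python sketch (numpy only), where a single global DCT threshold replaces the patent's per-position T_ij:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, so a block DCT is D @ B @ D.T."""
    d = np.array([[np.cos((2 * j + 1) * i * np.pi / (2 * n)) for j in range(n)]
                  for i in range(n)])
    d[0] *= np.sqrt(1.0 / n)
    d[1:] *= np.sqrt(2.0 / n)
    return d

D = dct_matrix()

def sharpness_index(gray, t=8.0):
    """Fraction of non-DC 8x8 DCT coefficients above a threshold.

    A single global threshold `t` is an assumption; the patent uses
    per-position thresholds T_ij that it does not list.
    """
    h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    high, total = 0, 0
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            coeff = D @ gray[r:r+8, c:c+8].astype(float) @ D.T
            coeff[0, 0] = 0.0                       # drop the DC component
            high += int(np.count_nonzero(np.abs(coeff) > t))
            total += 63
    return high / total if total else 0.0

def quality_face(gray):
    """QUALITY_face = mean of brightness, uniformity and sharpness indices.

    The brightness/uniformity formulas below are illustrative stand-ins.
    """
    y = gray.astype(float)
    brightness = 1.0 - abs(y.mean() - 128.0) / 128.0            # best near mid-grey
    left, right = y[:, : y.shape[1] // 2], y[:, y.shape[1] // 2 :]
    uniformity = 1.0 - abs(left.mean() - right.mean()) / 255.0  # "yin-yang face"
    return (brightness + uniformity + sharpness_index(y)) / 3.0 * 100.0
```

A flat grey patch scores zero sharpness, while a high-contrast pattern pushes many block coefficients over the threshold, matching the intent of the DCT high-frequency count.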
Step 205 and step 206 crop the corresponding upper-body image from each of the M faces obtained in step 204 using the face coordinates, compute Local Binary Pattern (LBP) features on the upper-body image, and compute its color histogram. The LBP features and color histogram express characteristics such as the person's clothing and hair; pairwise matching on them finally screens out the faces among the M that belong to the same person. The screened faces are passed to a face pose estimation algorithm: first, facial feature points (eye, nose, mouth-corner, and chin positions) are extracted with a convolutional neural network of 5 convolutional layers, 3 pooling layers, and 1 fully-connected layer; these feature points are also passed on to the subsequent face alignment calibration. The yaw, pitch, and tilt angles of each face are then estimated via the projection from a three-dimensional face rotation model to the two-dimensional face (a matrix of the three angles), and the K faces (K ≤ M) whose angles are below a threshold (close to frontal) are screened out.
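The upper-body pairing step can be sketched in Python with a basic 3x3 LBP histogram plus a coarse colour histogram, compared by histogram intersection. The fusion weights and decision threshold here are assumptions, not patent values:

```python
import numpy as np

def lbp_histogram(gray):
    """Normalised 256-bin histogram of basic 3x3 LBP codes."""
    g = np.asarray(gray).astype(int)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        code |= (nb >= center).astype(int) << bit   # threshold each neighbour
    h = np.bincount(code.ravel(), minlength=256).astype(float)
    return h / h.sum()

def color_histogram(rgb, bins=8):
    """Joint coarse RGB histogram, normalised to sum to 1."""
    q = (np.asarray(rgb, dtype=int) // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def histogram_match(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())

def same_person(upper1_rgb, upper2_rgb, thresh=0.7):
    """Pairwise upper-body match by LBP texture + colour (weights assumed)."""
    g1, g2 = upper1_rgb.mean(axis=2), upper2_rgb.mean(axis=2)
    score = 0.5 * histogram_match(lbp_histogram(g1), lbp_histogram(g2)) \
          + 0.5 * histogram_match(color_histogram(upper1_rgb),
                                  color_histogram(upper2_rgb))
    return score >= thresh
```

In the pipeline this predicate would be evaluated over all pairs of the M upper-body crops, and only faces whose crops mutually match would be kept as the same person.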
Step 207 aligns and calibrates the K faces from step 206 using the feature-point coordinates provided by the feature-point detection module of step 206. During calibration, the eye, nose, mouth-corner, and chin-bottom coordinates serve as reference points whose relative positions are kept fixed; the face image is cropped and scaled to a fixed resolution, 128x112 in this invention.
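The alignment step can be illustrated with a two-point similarity transform that maps the detected eye centres onto canonical positions inside the 128x112 crop. The reference coordinates below are assumed, since the patent fixes only the output resolution:

```python
import numpy as np

# Canonical eye positions inside the 128x112 crop (illustrative values).
LEFT_EYE_REF = np.array([38.0, 44.0])
RIGHT_EYE_REF = np.array([90.0, 44.0])

def eye_alignment_matrix(left_eye, right_eye):
    """2x3 similarity transform (rotation + uniform scale + translation)
    mapping the detected eye centres onto the canonical positions."""
    src_v = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst_v = RIGHT_EYE_REF - LEFT_EYE_REF
    ang = np.arctan2(dst_v[1], dst_v[0]) - np.arctan2(src_v[1], src_v[0])
    s = np.linalg.norm(dst_v) / np.linalg.norm(src_v)
    R = s * np.array([[np.cos(ang), -np.sin(ang)],
                      [np.sin(ang),  np.cos(ang)]])
    t = LEFT_EYE_REF - R @ np.asarray(left_eye, float)
    return np.hstack([R, t[:, None]])       # same 2x3 layout as an affine warp

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (n, 2) array of points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

In a full implementation the same 2x3 matrix would warp the whole image to the 128x112 crop; here it is applied only to landmark points to keep the sketch self-contained.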
Step 208 feeds the aligned and calibrated face images into a deep convolutional neural network to extract 1024-dimensional features. The network consists of 9 convolutional layers, 4 pooling layers, 1 merging layer, and 1 fully-connected layer. The convolutional layers use 3x3 kernels, the pooling layers use 2x2 windows, and the merging layer fuses the features of the 11th and 12th layers and passes the result to the next layer. Each convolutional layer is followed by a ReLU (Rectified Linear Unit), and each layer normalizes its features. The final objective is a weighted sum of a Softmax loss and a center loss, with a smaller weight on the center loss. The network is trained in an alternating iterative fashion on a labeled database combining a publicly available celebrity face database and faces collected at security doors, finally producing the convolutional network parameters used for face feature extraction. The training set consists of 41,000 identities totaling about 500,000 faces.
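The text gives the layer counts but not the exact ordering, padding, or strides. Under the common assumption of 3x3 "same" convolutions and 2x2 stride-2 pooling, the spatial sizes through the stack can be traced; a 128x112 input ends at 8x7 before the fully-connected layer:

```python
def feature_map_sizes(h=128, w=112,
                      layers=("C", "C", "P", "C", "C", "P", "C", "C", "P",
                              "C", "C", "P", "C")):
    """Trace spatial sizes through 3x3 'same' convolutions (C) and
    2x2 stride-2 poolings (P).

    The 9-conv/4-pool ordering here is an assumed reading of the patent's
    fig. 4, which the text does not spell out.
    """
    sizes = [(h, w)]
    for op in layers:
        if op == "P":                 # pooling halves each spatial dimension
            h, w = h // 2, w // 2
        sizes.append((h, w))          # 'same' convolution keeps the size
    return sizes
```

The trace makes the design plausible: after four poolings the 128x112 crop shrinks to an 8x7 map, a reasonable input size for a fully-connected layer producing 1024-dimensional features.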
Step 209 and step 210 compare the K feature vectors against the face feature vectors in the face library; if any of the K faces has a matching value greater than or equal to the first threshold, the person with the single highest matching value is selected as the final recognition output, the best-matching captured face is added to that person's entry, and the face database is updated.
Step 211 and step 212: if the matching values of the K faces in the face library are all below the first threshold, a second threshold is applied, and matches at or above it are treated as provisional. Each of the K faces selects from the library the person it matches most often (each person corresponds to several enrolled faces); the match counts are L_1, ..., L_K and the corresponding average matching values are S_1, ..., S_K. The composite matching score is computed as:
[Formula image in the original: the composite Score in terms of L_1, ..., L_K and S_1, ..., S_K; not recoverable from the text.]
The candidates are ranked by Score, the person with the highest Score is selected as the final face-matching output, the corresponding face is added to that person's entry, and the face database is updated.
Step 213: if the matching values of the K faces in the face library are all below the second threshold, a new person entry is created, the K faces are stored as that person's faces, and the person's data are added to the face database.
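The three-branch matching logic of steps 209-213 can be sketched as follows in Python (numpy only). The two thresholds, the use of cosine similarity, and the composite score Score_i = S_i * L_i / sum(L) are all assumptions; the patent's Score formula appears only as an image:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_and_enroll(face_vecs, library, t1=0.75, t2=0.55):
    """Two-threshold matching over the K captured feature vectors.

    `library` maps person-id -> list of enrolled feature vectors. Returns
    the matched (or newly created) person-id and updates the library in
    place. Threshold values and the composite score are assumptions.
    """
    # Branch 1: any single match >= t1 wins outright.
    best_pid, best_val, best_vec = None, -1.0, None
    for v in face_vecs:
        for pid, vecs in library.items():
            for u in vecs:
                m = cosine(v, u)
                if m > best_val:
                    best_pid, best_val, best_vec = pid, m, v
    if best_val >= t1:
        library[best_pid].append(best_vec)          # enroll the new capture
        return best_pid

    # Branch 2: provisional matches >= t2, ranked by a composite score.
    cand = {}                                       # pid -> (count L, sum of best matches)
    for v in face_vecs:
        for pid, vecs in library.items():
            ms = [cosine(v, u) for u in vecs]
            if ms and max(ms) >= t2:
                L, s = cand.get(pid, (0, 0.0))
                cand[pid] = (L + 1, s + max(ms))
    if cand:
        tot = sum(L for L, _ in cand.values())
        # Assumed score: average match S times the share of match counts.
        pid = max(cand, key=lambda p: (cand[p][1] / cand[p][0]) * cand[p][0] / tot)
        library[pid].extend(face_vecs)
        return pid

    # Branch 3: nobody close enough, so create a new person entry.
    pid = f"person_{len(library)}"                  # simplistic id scheme
    library[pid] = list(face_vecs)
    return pid
```

A strong match enrolls the capture under an existing person, weak but consistent matches are resolved by the composite score, and an unmatched face starts a new library entry, mirroring steps 209-213.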
The security check door face recognition and automatic face-library-building algorithm of the invention, based on multi-camera fusion, is a non-cooperative face recognition and library-building algorithm: it significantly increases inspection speed, reduces safety hazards such as crowding at checkpoints, and, by automatically building the face library, provides complete data for subsequent personnel management.

Claims (4)

1. A security check door face recognition and automatic face-library-building algorithm based on multi-camera fusion, characterized in that: N cameras (N ≥ 3) are installed on a security check door facing the direction of entry and simultaneously perform face detection and face tracking on the captured video; within a limited time window, derived from the average time a person takes to walk through the door, the M faces (M ≤ N) with the highest face-quality scores are automatically screened from the face queue produced by each camera's tracking, at most 1 face per camera path; face quality is evaluated by a composite index of illuminance and sharpness, where the face illuminance index is computed from the global average brightness I_AVG, the maximum brightness I_MAX, and the minimum brightness I_MIN by the formula
(formula image FDA0003159793570000011)
and is supplemented by the illumination symmetry index
(formula image FDA0003159793570000012)
The face sharpness index is computed from the proportion of high-frequency coefficients after the Discrete Cosine Transform (DCT): the image is divided into 8x8 macroblocks, DCT is applied to each macroblock to produce an 8x8 frequency-domain matrix C, and for each position c_ij of the frequency-domain matrix other than the direct-current component, with 1 ≤ i, j ≤ 8, a threshold T_ij is set; if the coefficient c_ij of the frequency-domain matrix after DCT is greater than T_ij, a high-frequency counter is incremented, and the ratio of the total number of high-frequency components exceeding their thresholds to the total number of frequency-domain coefficients is taken as the face sharpness index QUALITY_sharpness. The composite face quality index is QUALITY_face = (QUALITY_brightness + QUALITY_uniformity + QUALITY_sharpness)/3 × 100%. The corresponding upper-body image is cropped according to each face, and upper-body images are matched by edge and color; if the matching succeeds, the faces are considered to come from the same person. A face pose estimation algorithm computes the horizontal rotation, pitch and tilt angles of each face from the two-dimensional projection of a three-dimensional face rotation model and the matrix of the three angles, and the K faces with the smallest angles, i.e. closest to frontal, are screened out, where K is less than or equal to M. The face pose estimation algorithm first extracts the pedestrian's facial feature points, including the eyes, nose, mouth corners and chin position, using a convolutional neural network of 5 convolutional layers, 3 pooling layers and 1 fully-connected layer. After the K faces are aligned and calibrated by the feature points,
the K faces are input into a deep convolutional neural network for face feature extraction, each face yielding a 1024-dimensional feature vector, so that K feature vectors are extracted for the K faces. The K feature vectors are compared one by one against the face feature vectors in the face library; if any of the K faces has a matching value against the face library greater than or equal to a first threshold, the top-ranked person with the highest matching value is selected as the final recognition output, the top-matching snapshot face is added to that person's face library, and the face database is updated. If the matching values of the K faces against the face library are all smaller than the first threshold, a second threshold is set, and matches greater than or equal to the second threshold are treated as tentative matches; each of the K faces in turn selects from the face library the person with the largest number of matched faces, each person corresponding to several enrolled faces. With the numbers of matched faces L_1, ..., L_K and the corresponding average matching values S_1, ..., S_K, a composite matching Score is computed as
(formula image FDA0003159793570000021)
Ranking by Score, the person with the highest Score is selected as the final face-matching output, the corresponding snapshot faces are added to that person's face library, and the face database is updated. If the matching values of the K faces against the face library are all smaller than the second threshold, a new person entry is created, the K faces are used as the enrolled faces of the new person in the face library, and the person's record is added to the face database. Based on the extracted face features, face matching and automatic library building are carried out through a multi-camera face fusion comparison algorithm, realizing a non-cooperative face recognition algorithm.
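A minimal sketch of the DCT-based sharpness index in claim 1: split the image into 8x8 blocks, apply a 2-D DCT to each block, and report the fraction of AC coefficients whose magnitude exceeds a threshold. Using one scalar threshold `T` for all positions (the claim allows a per-position T_ij), taking magnitudes, and excluding the DC term from the denominator are assumptions of this sketch; `numpy` stands in for whatever image library is actually used.

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """2-D DCT-II of an 8x8 block, built from the orthonormal DCT basis."""
    n = 8
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    basis = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    basis[0, :] *= 1 / np.sqrt(2)
    basis *= np.sqrt(2 / n)
    return basis @ block @ basis.T

def sharpness_index(gray: np.ndarray, T: float = 10.0) -> float:
    """Fraction of AC coefficients with |c_ij| > T over all 8x8 blocks."""
    h, w = gray.shape
    high, total = 0, 0
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            c = dct2(gray[y:y + 8, x:x + 8].astype(float))
            ac = np.abs(c).ravel()[1:]          # drop the DC component c_00
            high += int((ac > T).sum())
            total += ac.size
    return high / total if total else 0.0
```

A flat (blurred) block contributes no high-frequency energy and scores 0, while a high-contrast checkerboard pattern scores well above 0, which is the behavior the quality screen relies on.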
2. The security check gate face recognition and automatic face library building algorithm according to claim 1, characterized in that face detection uses an Adaboost algorithm based on improved Haar-like features, namely Haar-like features augmented with a 3x3 structure, which better express combined facial features such as the eyes, nose and mouth; during training, negative samples are selected from scenes such as subway, airport and major-venue security check gates, yielding a face detection algorithm specialized for security check gate scenes.
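The Haar-like features of claim 2 are rectangle sums evaluated in constant time from an integral image. The sketch below shows the mechanism with a plain two-rectangle feature; the patent's added 3x3-structure features are not specified beyond the claim, so they are not reproduced here.

```python
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """ii[y, x] = sum of gray[:y, :x]; padded so ii[0, :] = ii[:, 0] = 0."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Upper half minus lower half: responds to horizontal intensity edges
    such as the dark eye band above the brighter cheek region."""
    half = h // 2
    return rect_sum(ii, y, x, half, w) - rect_sum(ii, y + half, x, half, w)
```

In Adaboost training, many thousands of such features are evaluated over positive and negative windows, and weak threshold classifiers over single features are boosted into the cascade.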
3. The security check gate face recognition and automatic face library building algorithm according to claim 1, characterized in that the faces appearing in each camera are tracked, the tracking algorithm being a Kalman-filter tracker based on neighborhood search: with the target center as the starting point of the neighborhood search, candidate faces closest to the current face are searched within a window of a certain range using position and velocity prediction, and the track is updated by the Kalman filter.
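The neighborhood-search Kalman tracker of claim 3 can be sketched with a constant-velocity state [x, y, vx, vy]: predict the face center forward, pick the nearest detection inside a search window, and correct the state with it. The noise matrices and the gating radius below are standard textbook choices, not values from the patent.

```python
import numpy as np

class FaceTrack:
    """Constant-velocity Kalman filter over the face center (x, y)."""

    def __init__(self, x, y, dt=1.0):
        self.s = np.array([x, y, 0.0, 0.0])          # state [x, y, vx, vy]
        self.P = np.eye(4) * 100.0                   # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                        # we observe (x, y) only
        self.Q = np.eye(4) * 1e-2                    # process noise
        self.R = np.eye(2) * 1.0                     # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.s                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(track, detections, radius=40.0):
    """Neighborhood search: nearest detection to the predicted center."""
    center = track.predict()
    best, best_d = None, radius
    for d in detections:
        dist = float(np.hypot(d[0] - center[0], d[1] - center[1]))
        if dist < best_d:
            best, best_d = d, dist
    if best is not None:
        track.update(best)
    return best
```

Detections farther than the gating radius are ignored, so a face appearing elsewhere in the frame starts a new track rather than corrupting this one.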
4. The security check gate face recognition and automatic face library building algorithm according to claim 1, characterized in that face alignment and calibration use the position coordinates of the eyes, nose, mouth corners and chin bottom as fiducial points; keeping the relative positions of these fiducial points fixed, the face image is cropped and scaled to a fixed resolution of 128x112. After alignment and calibration, the face feature vector is extracted by a deep convolutional neural network consisting of 9 convolutional layers, 4 pooling layers, 1 merging layer and 1 fully-connected layer; the convolutional layers use 3x3 kernels, the pooling layers use 2x2 windows, and the merging layer fuses the features of the 11th and 12th layers and outputs them to the next layer. Feature normalization is applied at each layer of the convolutional neural network, and the network is trained on a face library annotated from security check gate scenes.
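The alignment step of claim 4 maps the fiducial points onto fixed template coordinates inside a 128x112 crop. A least-squares similarity transform (scale + rotation + translation) is the usual way to do this; the five template positions below are illustrative values, not taken from the patent.

```python
import numpy as np

# Illustrative template positions (x, y) in a 128x112 crop for
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
TEMPLATE = np.array([[38.0, 46.0], [74.0, 46.0], [56.0, 66.0],
                     [41.0, 87.0], [71.0, 87.0]])

def similarity_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares similarity transform (2x3 matrix) mapping src -> dst."""
    # Solve for [a, b, tx, ty] in x' = a*x - b*y + tx, y' = b*x + a*y + ty.
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1.0
    rhs = dst.ravel()
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

def align_points(landmarks: np.ndarray) -> np.ndarray:
    """Map detected landmarks into the 128x112 template frame."""
    M = similarity_transform(landmarks, TEMPLATE)
    pts = np.hstack([landmarks, np.ones((landmarks.shape[0], 1))])
    return pts @ M.T
```

In a full pipeline the same 2x3 matrix would be applied to the whole image (a warp-affine) before cropping to 128x112, so the fiducial points land at the same template positions for every face.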
CN201810021107.2A 2018-01-10 2018-01-10 Security check door face recognition and face automatic library building algorithm based on multi-camera fusion Active CN108256459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810021107.2A CN108256459B (en) 2018-01-10 2018-01-10 Security check door face recognition and face automatic library building algorithm based on multi-camera fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810021107.2A CN108256459B (en) 2018-01-10 2018-01-10 Security check door face recognition and face automatic library building algorithm based on multi-camera fusion

Publications (2)

Publication Number Publication Date
CN108256459A CN108256459A (en) 2018-07-06
CN108256459B true CN108256459B (en) 2021-08-24

Family

ID=62726200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810021107.2A Active CN108256459B (en) 2018-01-10 2018-01-10 Security check door face recognition and face automatic library building algorithm based on multi-camera fusion

Country Status (1)

Country Link
CN (1) CN108256459B (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977647B (en) * 2017-12-20 2020-09-04 上海依图网络科技有限公司 Face recognition algorithm evaluation method suitable for public security actual combat
CN109034013B (en) * 2018-07-10 2023-06-13 腾讯科技(深圳)有限公司 Face image recognition method, device and storage medium
CN109190532A (en) * 2018-08-21 2019-01-11 北京深瞐科技有限公司 It is a kind of based on cloud side fusion face identification method, apparatus and system
CN109271923A (en) * 2018-09-14 2019-01-25 曜科智能科技(上海)有限公司 Human face posture detection method, system, electric terminal and storage medium
CN110908289A (en) * 2018-09-17 2020-03-24 珠海格力电器股份有限公司 Smart home control method and device
CN108960216A (en) * 2018-09-21 2018-12-07 浙江中正智能科技有限公司 A kind of detection of dynamic human face and recognition methods
CN109448848A (en) * 2018-09-26 2019-03-08 长沙师范学院 A kind of infantile psychology state evaluating method based on fuzzy evaluation
CN109508648A (en) * 2018-10-22 2019-03-22 成都臻识科技发展有限公司 A kind of face snap method and apparatus
CN109376686A (en) * 2018-11-14 2019-02-22 睿云联(厦门)网络通讯技术有限公司 A kind of various dimensions human face data acquisition scheme, acquisition system and acquisition method
CN109472247B (en) * 2018-11-16 2021-11-30 西安电子科技大学 Face recognition method based on deep learning non-fit type
CN109685106A (en) * 2018-11-19 2019-04-26 深圳博为教育科技有限公司 A kind of image-recognizing method, face Work attendance method, device and system
CN109635755A (en) * 2018-12-17 2019-04-16 苏州市科远软件技术开发有限公司 Face extraction method, apparatus and storage medium
CN109859085A (en) * 2018-12-25 2019-06-07 深圳市天彦通信股份有限公司 Safe early warning method and Related product
CN111382592B (en) * 2018-12-27 2023-09-29 杭州海康威视数字技术股份有限公司 Living body detection method and apparatus
CN109711370B (en) * 2018-12-29 2021-03-26 北京博睿视科技有限责任公司 Data fusion method based on WIFI detection and face clustering
CN109615750B (en) * 2018-12-29 2021-12-28 深圳市多度科技有限公司 Face recognition control method and device for access control machine, access control equipment and storage medium
CN109919091A (en) * 2019-03-06 2019-06-21 广州佳都数据服务有限公司 Face safety inspection method, device and electronic equipment based on dynamic white list
CN110096965A (en) * 2019-04-09 2019-08-06 华东师范大学 A kind of face identification method based on head pose
CN112001206B (en) * 2019-05-27 2023-09-22 北京君正集成电路股份有限公司 Method for combining face recognition libraries through traversal comparison
CN110321821B (en) * 2019-06-24 2022-10-25 深圳爱莫科技有限公司 Human face alignment initialization method and device based on three-dimensional projection and storage medium
CN110427864B (en) * 2019-07-29 2023-04-21 腾讯科技(深圳)有限公司 Image processing method and device and electronic equipment
CN110544333B (en) * 2019-08-13 2021-07-16 成都电科慧安科技有限公司 Access control system and control method thereof
CN110825765B (en) * 2019-10-23 2022-10-04 中国建设银行股份有限公司 Face recognition method and device
CN111144366A (en) * 2019-12-31 2020-05-12 中国电子科技集团公司信息科学研究院 Strange face clustering method based on joint face quality assessment
CN111310580A (en) * 2020-01-19 2020-06-19 四川联众竞达科技有限公司 Face recognition method under non-matching state
CN111652048A (en) * 2020-04-17 2020-09-11 北京品恩科技股份有限公司 A deep learning based 1:N face comparison method
CN111539351B (en) * 2020-04-27 2023-11-03 广东电网有限责任公司广州供电局 Multi-task cascading face frame selection comparison method
CN111814613A (en) * 2020-06-24 2020-10-23 浙江大华技术股份有限公司 Face recognition method, face recognition equipment and computer readable storage medium
CN111866471B (en) * 2020-07-31 2022-05-03 泽达易盛(天津)科技股份有限公司 Visual intelligent public security prevention and control terminal
CN112016508B (en) * 2020-09-07 2023-08-29 杭州海康威视数字技术股份有限公司 Face recognition method, device, system, computing device and storage medium
CN112132048A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Community patrol analysis method and system based on computer vision
CN112307234A (en) * 2020-11-03 2021-02-02 厦门兆慧网络科技有限公司 Face bottom library synthesis method, system, device and storage medium
CN113177530A (en) * 2021-05-27 2021-07-27 广州广电运通智能科技有限公司 Personnel screening method, equipment, medium and product
CN113269124B (en) * 2021-06-09 2023-05-09 重庆中科云从科技有限公司 Object recognition method, system, equipment and computer readable medium
CN113255608B (en) * 2021-07-01 2021-11-19 杭州智爱时刻科技有限公司 Multi-camera face recognition positioning method based on CNN classification
CN113779290A (en) * 2021-09-01 2021-12-10 杭州视洞科技有限公司 Camera face recognition aggregation optimization method
CN113688792B (en) * 2021-09-22 2023-12-08 哈尔滨工程大学 Face recognition method
CN116912808B (en) * 2023-09-14 2023-12-01 四川公路桥梁建设集团有限公司 Bridge girder erection machine control method, electronic equipment and computer readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN104978550A (en) * 2014-04-08 2015-10-14 上海骏聿数码科技有限公司 Face recognition method and system based on large-scale face database
CN105654033A (en) * 2015-12-21 2016-06-08 小米科技有限责任公司 Face image verification method and device
CN105740758A (en) * 2015-12-31 2016-07-06 上海极链网络科技有限公司 Internet video face recognition method based on deep learning
CN106156688A (en) * 2015-03-10 2016-11-23 上海骏聿数码科技有限公司 A kind of dynamic human face recognition methods and system
CN106203260A (en) * 2016-06-27 2016-12-07 南京邮电大学 Pedestrian's recognition and tracking method based on multiple-camera monitoring network
CN106295482A (en) * 2015-06-11 2017-01-04 中国移动(深圳)有限公司 The update method of a kind of face database and device
CN106845357A (en) * 2016-12-26 2017-06-13 银江股份有限公司 A kind of video human face detection and recognition methods based on multichannel network


Also Published As

Publication number Publication date
CN108256459A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
CN108256459B (en) Security check door face recognition and face automatic library building algorithm based on multi-camera fusion
CN109819208B (en) Intensive population security monitoring management method based on artificial intelligence dynamic monitoring
CN105518709B (en) The method, system and computer program product of face for identification
US9104914B1 (en) Object detection with false positive filtering
EP3092619B1 (en) Information processing apparatus and information processing method
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN103106393B (en) A kind of embedded human face identification intelligent identity authorization system based on robot platform
US20200012923A1 (en) Computer device for training a deep neural network
Kawai et al. Person re-identification using view-dependent score-level fusion of gait and color features
Abaza et al. Fast learning ear detection for real-time surveillance
US20140016836A1 (en) Face recognition system and method
US20200394384A1 (en) Real-time Aerial Suspicious Analysis (ASANA) System and Method for Identification of Suspicious individuals in public areas
CN111241932A (en) Automobile exhibition room passenger flow detection and analysis system, method and storage medium
Borges Pedestrian detection based on blob motion statistics
Mahajan et al. Detection of concealed weapons using image processing techniques: A review
Rothkrantz Person identification by smart cameras
CN109919068B (en) Real-time monitoring method for adapting to crowd flow in dense scene based on video analysis
Parameswaran et al. Design and validation of a system for people queue statistics estimation
Xu et al. Smart video surveillance system
Islam et al. Correlating belongings with passengers in a simulated airport security checkpoint
Chen et al. Head-shoulder detection using joint HOG features for people counting and video surveillance in library
Vajhala et al. Weapon detection in surveillance camera images
Lee et al. Recognizing human-vehicle interactions from aerial video without training
Xu et al. A rapid method for passing people counting in monocular video sequences
Pham et al. A robust model for person re-identification in multimodal person localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231125

Address after: Room 609-1, 6th Floor, Import and Export Exhibition and Trading Center, Huanghua Comprehensive Bonded Zone, Huanghua Town, Lingkong Block, Changsha Area, Changsha Free Trade Zone, Hunan Province, 410137

Patentee after: Hunan Shengxun Technology Co.,Ltd.

Address before: Room 403, 4th Floor, Building 6, No. 13 North Ertiao, Zhongguancun, Haidian District, Beijing, 100190

Patentee before: BEIJING BRAVEVIDEO TECHNOLOGY CO.,LTD.