CN112001280A - Real-time online optimization face recognition system and method - Google Patents


Info

Publication number
CN112001280A
Authority
CN
China
Prior art keywords
face
data
detection
features
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010812277.XA
Other languages
Chinese (zh)
Inventor
李百成
张翊
黎嘉朗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Whale Cloud Technology Co Ltd
Original Assignee
Whale Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Whale Cloud Technology Co Ltd
Priority to CN202010812277.XA
Publication of CN112001280A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A real-time face recognition method and system that can be optimized online are disclosed. The method comprises the following steps: acquiring image data to be identified and parsing it to obtain input data; performing face detection on the input data with a face detection network, where the application type of the input data is judged before each detection and a corresponding inference branch is selected based on that type, yielding the face region data in the input data; auditing the face quality of the face region data at least according to face sharpness, face brightness, face angle, and face visibility; and performing face feature extraction and HNSW-based face matching on the face region data that passes the quality audit, thereby realizing face recognition. The method optimizes the face detection inference route based on scene type and can expand the face library online to improve recognition accuracy.

Description

Real-time online optimization face recognition system and method
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a real-time, online-optimizable face recognition system and method.
Background
Face recognition identifies a person based on facial features and, thanks to advantages such as being contactless and intuitive, has been widely applied in many fields of daily life. However, depending on the application scene, the quality of the face images acquired by a device can vary greatly, which reduces face recognition efficiency and accuracy.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, it is an object of the present invention to provide a real-time, on-line optimized face recognition system and method.
The embodiment of the invention discloses a real-time face recognition method that can be optimized online, comprising the following steps: acquiring image data to be identified and parsing it to obtain input data; performing face detection on the input data with a face detection network, where the application type of the input data is judged before each detection and a corresponding inference branch is selected based on that type, yielding the face region data in the input data; auditing the face quality of the face region data at least according to face sharpness, face brightness, face angle, and face visibility; and performing face feature extraction and HNSW-based face matching on the face region data that passes the quality audit, thereby realizing face recognition.
In a possible embodiment, the method further comprises: adding face features that meet preset warehousing conditions to the face feature base library; after a face feature is obtained and confirmed for warehousing, it is added to the HNSW multi-layer graph data structure according to the probability rule for expanded face features.
In one possible embodiment, the preset warehousing condition includes: let the similarity be similarity, the similarity threshold be t_sim, and the extended similarity threshold be t_ext with t_ext > t_sim; let the number of records already expanded for the current feature be cnt_ext. When similarity > t_ext and cnt_ext < 5, the face feature is warehoused and marked as pending.
In one possible embodiment, the probability rule for expanding the face features is: given an M-layer graph ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M, the insertion probabilities satisfy:
p_1 < p_2 < ... < p_M
p_1 + p_2 + ... + p_M = 1
in one possible embodiment, the face detection includes: judging the application type of the input data; detecting faces of different sizes on feature maps of different scales, with the scales divided into three levels (small, medium, and large), each level corresponding to a different inference branch of the face detection network; and selectively executing the corresponding inference branches based on the application type, running the small and medium branches in scenes where faces are far away and the small and large branches in scenes where faces are close, to obtain first face region data.
In one possible embodiment, the method further comprises: screening the first face region data based on confidence to obtain second face region data.
In one possible embodiment, the face region data is divided into a plurality of blocks and gray values are counted per block; the darkness and lightness of the face region are calculated from the gray values, and the face brightness is judged from these two indexes.
In one possible embodiment, face key points are extracted by a face key point extraction algorithm, the key point information comprising the horizontal coordinate, vertical coordinate, and visibility of each key point; the horizontal and vertical face angles are obtained from the key point information. Alternatively, the number of key points whose visibility meets a preset value is counted, and the face quality is audited by this count.
In one possible embodiment, at least 21 face key points are extracted.
In one possible embodiment, a data structure for HNSW graph search is constructed based on a face feature library; and searching the face features with the maximum similarity by adopting an HNSW vector matching algorithm.
A real-time, on-line optimizable face recognition system comprising: an input module for acquiring image data to be identified and parsing it to obtain input data; a face detection module for performing face detection on the input data with a face detection network, where the application type of the input data is judged before each detection and a corresponding inference branch is selected based on that type, yielding the face region data in the input data; a face quality auditing module for auditing the face quality of the face region data at least according to face sharpness, face brightness, face angle, and face visibility; and a face recognition module for extracting face features from the face region data that passes the quality audit and performing HNSW-based face matching, thereby realizing face recognition.
In a possible embodiment, the system further comprises a face library online expansion module, which adds face features that meet preset warehousing conditions to the face feature base library and, after a face feature is obtained and confirmed for warehousing, adds it to the HNSW multi-layer graph data structure according to the probability rule for expanded face features.
In one possible embodiment, the preset warehousing condition includes: let the similarity be similarity, the similarity threshold be t_sim, and the extended similarity threshold be t_ext with t_ext > t_sim; let the number of records already expanded for the current feature be cnt_ext. When similarity > t_ext and cnt_ext < 5, the face feature is warehoused and marked as pending.
In one possible embodiment, the probability rule for expanding the face features is: given an M-layer graph ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M, the insertion probabilities satisfy:
p_1 < p_2 < ... < p_M
p_1 + p_2 + ... + p_M = 1
in one possible embodiment, the face detection module is further configured to: judging the application type of the input data; detecting human faces with different sizes under characteristic images with different scales, dividing the scales into three levels of small, meduim and large, and corresponding the levels to different inference branches under the human face detection based on an anchor box mode; and based on the application type, selectively executing corresponding branches of the face detection network, operating small and meduim branches in a scene with a longer face distance, and operating small and large branches in a scene with a shorter face distance to obtain first face region data.
In one possible embodiment, the face detection module is further configured to screen the first face region data based on confidence to obtain second face region data.
In one possible embodiment, the face quality auditing module is further configured to: divide the face region data into a plurality of blocks and count gray values per block; calculate the darkness and lightness of the face region from the gray values, and judge the face brightness from these two indexes.
In one possible embodiment, the face quality auditing module is further configured to: extract face key points by a face key point extraction algorithm, the key point information comprising the horizontal coordinate, vertical coordinate, and visibility of each key point; obtain the horizontal and vertical face angles from the key point information; or count the number of key points whose visibility meets a preset value and audit the face quality by this count.
In one possible embodiment, the face recognition module is further configured to: constructing a data structure for HNSW graph search based on the face feature library; and searching the face features with the maximum similarity by adopting an HNSW vector matching algorithm.
A computer storage medium storing a computer program which, when executed, implements the aforementioned method.
Compared with the prior art, the invention has the following beneficial effects:
the scheme adopts a dedicated face key point detection network that, besides key point positions, also outputs key point visibility; the face angle is computed from 21 key points, so the output angle is more accurate. The method optimizes the face detection inference route based on scene type and can expand the face library online to improve recognition accuracy.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system configuration according to an embodiment of the present invention;
FIG. 3 is a schematic view of a face detection structure according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an inference branch selection structure of a face detection model according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of face brightness calculation according to an embodiment of the present invention.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.
As shown in FIG. 1, the embodiment of the invention discloses a real-time, online-optimizable face recognition method which, through steps such as acquiring image data to be recognized, detecting faces, auditing face quality, and recognizing face features with deep learning, optimizes the face detection inference route based on scene type and can expand the face library online to improve recognition accuracy. The method suits multiple application scenarios such as access control, surveillance, and VIP identification. It comprises the following steps:
s100, obtaining image data to be identified, and analyzing the image data to obtain input data. In particular implementations, image data such as a sequence of pictures or a video stream input, offline video, etc. may be received from a front-end application.
S102, carrying out face detection on the input data based on a face detection network, wherein the application type of the input data is judged before each detection and a corresponding inference branch is selected based on the application type, so as to obtain face region data in the input data.
Before face detection, the application type of the input data is determined. The determination method may include, but is not limited to: 1) the front-end application directly passes an application identifier; 2) the type is judged from the resolution of the picture or video; 3) the system runs online for a period, counts the occurrence frequency and quality of faces of different sizes, and statistically determines the detection branches suited to the application's input, as sketched below.
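As a sketch of means (2) above, judging the type from resolution, an illustrative heuristic might look like the following; the thresholds and the two scene labels are assumptions for illustration, not values from this disclosure:

```python
def infer_application_type(width: int, height: int) -> str:
    """Guess the application scene from input resolution (illustrative only).

    A high-resolution stream is assumed here to come from long-range
    surveillance (faces far away) and a low-resolution one from close-range
    access control (faces near the camera); a real deployment would calibrate
    these assumed thresholds per camera.
    """
    if width * height >= 1920 * 1080:
        return "surveillance"    # far faces: small + medium branches
    return "access_control"      # near faces: small + large branches
```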
Faces of different sizes are detected on feature maps of different scales; the scales are divided into three levels (small, medium, and large) that correspond, via anchor boxes, to different inference branches of the face detection network. Based on the application type, the corresponding inference branches are selectively executed: the small and medium branches run in scenes where faces are far away, and the small and large branches in scenes where faces are close, yielding first face region data. This reduces computation and provides a coarse screen based on face size.
The first face region data is then screened by confidence to obtain second face region data.
S103, auditing the face quality of the face region data at least according to face sharpness, face brightness, face angle, and face visibility. The face region data here may be the second face region data.
To audit face brightness, the face region data can be divided into several blocks and gray values counted per block; the darkness and lightness of the face region are calculated from the gray values, and brightness is judged from these two indexes.
The face region picture is converted to a gray image and divided into m × n regions of equal size. For the region x_ij in row i and column j (of size h × w), the gray values are counted according to the following rule:
cnt_d(x_ij) = number of pixels in x_ij with gray value < 50
cnt_l(x_ij) = number of pixels in x_ij with gray value > 200
The two brightness statistics of region x_ij are then:
stat_d(x_ij) = cnt_d(x_ij) / (h · w)
stat_l(x_ij) = cnt_l(x_ij) / (h · w)
Collecting the per-region statistics into matrices summary_d = [stat_d(x_ij)]_{m×n} and summary_l = [stat_l(x_ij)]_{m×n} and weighting them with a weight matrix W gives the two face brightness indexes score_bright_dark and score_bright_light:
score_bright_dark = sum(element_wise(W, summary_d))
score_bright_light = sum(element_wise(W, summary_l))
To audit face visibility, face key points can be extracted by a face key point extraction algorithm, the key point information comprising the horizontal coordinate, vertical coordinate, and visibility of each key point; the horizontal and vertical face angles are obtained from the key point information. Alternatively, the number of key points whose visibility meets a preset value is counted, and the face quality is audited by this count.
S104, performing face feature extraction and HNSW-based face matching on the face region data that passes the face quality audit, so as to realize face recognition.
For face feature extraction, faces can be aligned based on the face key points and fed into a feature extraction network; the resulting features are normalized.
For face feature matching, a data structure for HNSW graph search is built from the face feature library, and the HNSW vector matching algorithm searches for the face features with the maximum similarity.
In one embodiment, the method of the present invention further comprises:
and S105, adding the human face features meeting the preset warehousing conditions to a human face feature base, and after obtaining the human face features and confirming warehousing, adding the human face features to a multi-layer graph data structure of HNSW according to the probability rule of the expanded human face features.
Wherein the preset warehousing condition includes: let the similarity be similarity, the similarity threshold be t_sim, and the extended similarity threshold be t_ext with t_ext > t_sim; let the number of records already expanded for the current feature be cnt_ext. When similarity > t_ext and cnt_ext < 5, the face feature is warehoused and marked as pending.
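A minimal sketch of this warehousing gate; the threshold t_ext and the cap of 5 are the values named above, while the function name itself is illustrative:

```python
def should_warehouse(similarity: float, cnt_ext: int, t_ext: float) -> bool:
    """Online-expansion gate from the text above: warehouse the query feature
    (marked as pending) when it exceeds the extended similarity threshold and
    the matched identity has fewer than 5 expanded records."""
    return similarity > t_ext and cnt_ext < 5
```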
Wherein the probability rule for expanding the face features is as follows: given an M-layer graph ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M, the insertion probabilities satisfy:
p_1 < p_2 < ... < p_M
p_1 + p_2 + ... + p_M = 1
the method also comprises data output, namely, the detected and recognized face data is returned to the front-end application in a request or message queue mode.
Corresponding to the foregoing method, as shown in fig. 2, an embodiment of the present invention further discloses a real-time online optimized face recognition system 10, which includes an input module 101, a face detection module 102, a face quality auditing module 103, a face recognition module 104, a face library online expansion module 105, and a data output module 106.
The input module 101 is configured to receive a picture sequence or video stream input, an offline video, and the like from a front-end application, and parse the picture sequence or video stream input into a data structure that can be processed by a subsequent process.
The data output module 106 is configured to integrate the face detection and face recognition results of the face recognition module, and return the integrated results to the front-end application or the message queue.
The face detection module 102 receives the data from the data input module 101, determines the application type of the input data, adopts a face detection network based on an anchor box, and selects a specific inference branch based on the application type to perform face detection, so as to obtain a face region in the picture.
The face quality auditing module 103 measures whether the face quality is qualified in four aspects: face region sharpness, brightness, face angle, and face occlusion, and discards unqualified faces.
The face recognition module 104 extracts features from quality-qualified faces and, given the obtained feature vectors and the face library features, searches for the face most similar to the query features with an HNSW-based vector matching algorithm. The face features are expressed as high-dimensional feature vectors; before the system runs, the multi-layer graph data structure used by HNSW is constructed from the face library, or loaded from a serialized file.
The face library online expansion module 105 automatically decides whether to warehouse the query face based on the recognition result, the similarity information, and the face quality evaluation from the face recognition module 104.
The system corresponds to the aforementioned method embodiment, and is not described herein again.
The following describes the specific method steps using a video stream as the image data.
Video stream data from multiple surveillance channels is input to the input module 101, which parses the streams into frames; the frame image, frame time, and unique stream ID are passed to the face detection module at a rate of one recognition every 5 frames, and the remaining frames skip recognition.
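A minimal sketch of this frame-sampling policy, assuming OpenCV as the decoding library (the disclosure does not name one) and the every-5-frames rate stated above:

```python
import cv2  # assumed decoding library; not specified in the disclosure

def frames_to_recognize(stream_url: str, every_n: int = 5):
    """Yield (frame_index, frame_time_ms, frame) for every 5th frame of a
    video stream; the frames in between are skipped, as described above."""
    cap = cv2.VideoCapture(stream_url)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            yield idx, cap.get(cv2.CAP_PROP_POS_MSEC), frame
        idx += 1
    cap.release()
```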
The face detection module 102 performs face detection with an anchor-box-based face detection model; the implementation may adopt the RetinaFace face detection algorithm with six anchor boxes: small (16×16, 32×32), medium (64×64, 128×128), and large (256×256, 512×512), where the anchor box size represents the minimum detectable size. The RetinaFace detection structure is shown in FIG. 3.
Detection results are screened by the output confidence of the face detector, and the surviving results are passed to the face quality auditing module 103: detections with confidence below 0.9 are filtered out, i.e., faces with score_det ≥ 0.9 form the final face detection result.
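A one-function sketch of this confidence screen; the (bbox, score) detection structure is an assumption for illustration:

```python
def filter_detections(detections, t_conf: float = 0.9):
    """Keep only detections with score_det >= 0.9, per the embodiment.
    Each detection is assumed to be a (bbox, score) pair."""
    return [(bbox, score) for (bbox, score) in detections if score >= t_conf]
```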
For inference branch selection in face detection, the implementation can let the system gather statistics and decide automatically, in two stages: an initial operation stage and a branch-selection stage.
In the initial operation stage, the distribution of face sizes must be counted: for a period after system startup, the small, medium, and large detection branches all run, and the number of faces detected by each branch is counted separately, typically during peak pedestrian traffic.
Once the statistics reach a sufficient count, inference branch selection of the face detection model is enabled: when small faces dominate the statistics, the small and medium branches are started; when large faces dominate, the medium and large branches are started. Branch selection is illustrated in FIG. 4.
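The warm-up statistics and branch switch described in the last two paragraphs might be sketched as follows; the warm-up face count and the dominance comparison are assumptions, since the disclosure only says "a certain number" and "more faces":

```python
from collections import Counter

class BranchSelector:
    """Warm-up statistics for face-detection inference-branch selection.

    All three branches run during warm-up while per-branch face counts are
    accumulated; once enough faces have been seen, only the two branches
    matching the observed size distribution stay enabled.
    """

    def __init__(self, warmup_faces: int = 10000):  # assumed threshold
        self.counts = Counter()
        self.warmup_faces = warmup_faces
        self.active = {"small", "medium", "large"}  # run all during warm-up

    def record(self, branch: str, n_faces: int) -> None:
        self.counts[branch] += n_faces

    def update_selection(self) -> None:
        if sum(self.counts.values()) < self.warmup_faces:
            return  # still collecting statistics
        if self.counts["small"] >= self.counts["large"]:
            self.active = {"small", "medium"}  # mostly small (distant) faces
        else:
            self.active = {"medium", "large"}  # mostly large (close) faces
```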
The detected faces are sent to the face quality auditing module 103. The audit comprises four checks: sharpness, brightness, face angle, and face occlusion; only when all four conditions are met does a face picture enter the recognition stage.
Face sharpness is quantified with an edge detection operator: the Laplacian operator can extract face edge information, and the variance of the response serves as the sharpness metric score_sharp. When score_sharp > 360, the face is considered to meet the sharpness requirement.
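A sketch of this sharpness check with OpenCV (an assumed library choice), using the variance of the Laplacian and the threshold of 360 stated above:

```python
import cv2
import numpy as np

def sharpness_ok(face_bgr: np.ndarray, t_sharp: float = 360.0) -> bool:
    """score_sharp is the variance of the Laplacian response over the gray
    face image; the face passes when score_sharp > 360."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    score_sharp = cv2.Laplacian(gray, cv2.CV_64F).var()
    return score_sharp > t_sharp
```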
Face brightness uses two indexes, score_bright_dark and score_bright_light, measuring the darkness and lightness of the face respectively. Before the calculation, the face image is converted to a gray image and divided into 3 × 3 regions of equal size; the region x_ij in row i and column j has size h × w, and each region is processed as follows:
cnt_d(x_ij) = number of pixels in x_ij with gray value < 50
cnt_l(x_ij) = number of pixels in x_ij with gray value > 200
The two brightness statistics of region x_ij are:
stat_d(x_ij) = cnt_d(x_ij) / (h · w)
stat_l(x_ij) = cnt_l(x_ij) / (h · w)
That is, the proportion of gray values below 50 is the region's darkness, and the proportion above 200 is its lightness. After the darkness and lightness of every region are obtained, they form the matrices:
summary_d = [stat_d(x_ij)]_{3×3}
summary_l = [stat_l(x_ij)]_{3×3}
score_bright_dark and score_bright_light are then obtained by weighted summation with a 3 × 3 two-dimensional Gaussian kernel GaussKernel_3×3:
score_bright_dark = sum(element_wise(GaussKernel_3×3, summary_d))
score_bright_light = sum(element_wise(GaussKernel_3×3, summary_l))
When score_bright_dark < 0.4 and score_bright_light < 0.5, the face is considered to meet the brightness requirement; the brightness calculation flow is shown in FIG. 5.
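Putting the whole brightness audit together, a sketch might look like this; the exact Gaussian kernel values are an assumption, since the text only specifies a 3 × 3 two-dimensional Gaussian kernel:

```python
import cv2
import numpy as np

# Normalized 3x3 Gaussian weights (sums to 1); the exact sigma is assumed.
GAUSS_3x3 = cv2.getGaussianKernel(3, 0) @ cv2.getGaussianKernel(3, 0).T

def brightness_ok(face_bgr: np.ndarray) -> bool:
    """Split the gray face into 3x3 blocks, use the ratio of pixels < 50 as a
    block's darkness and the ratio > 200 as its lightness, weight both maps
    with the Gaussian kernel, and apply the thresholds 0.4 / 0.5 from above."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    dark = np.zeros((3, 3))
    light = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            block = gray[i * h // 3:(i + 1) * h // 3,
                         j * w // 3:(j + 1) * w // 3]
            dark[i, j] = np.mean(block < 50)     # stat_d(x_ij)
            light[i, j] = np.mean(block > 200)   # stat_l(x_ij)
    score_bright_dark = float(np.sum(GAUSS_3x3 * dark))
    score_bright_light = float(np.sum(GAUSS_3x3 * light))
    return score_bright_dark < 0.4 and score_bright_light < 0.5
```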
Judging face angle and face occlusion requires face key point information. A face key point extraction algorithm extracts 21 key points, each described by (x, y, score_vis): the key point's horizontal coordinate, vertical coordinate, and visibility, where score_vis ∈ [0, 1] and 1 means fully visible. The 21 key points cover the left eye, right eye, nose, mouth corners, and other parts.
A topological graph of the key points can be built from the 21 key points and their adjacency, and the horizontal and vertical face angles angle_h and angle_v estimated from the position information. When |angle_h| < 30 and |angle_v| < 45, the face is considered to meet the angle requirement.
Face occlusion is judged from key point visibility: the key points with score_vis < 0.3 are counted as cnt_unvis, and when cnt_unvis < 9 the face is considered to have no obvious occlusion.
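A combined sketch of the angle and occlusion checks; the (x, y, score_vis) keypoint tuples and the thresholds follow the text above, while the angle estimation itself is assumed to happen upstream:

```python
def pose_and_occlusion_ok(angle_h: float, angle_v: float, keypoints) -> bool:
    """Pose passes when |angle_h| < 30 and |angle_v| < 45; occlusion passes
    when fewer than 9 of the 21 keypoints have visibility score_vis < 0.3.
    `keypoints` is a list of (x, y, score_vis) tuples."""
    if abs(angle_h) >= 30 or abs(angle_v) >= 45:
        return False
    cnt_unvis = sum(1 for _, _, score_vis in keypoints if score_vis < 0.3)
    return cnt_unvis < 9
```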
Faces meeting the above four quality conditions proceed to the recognition stage; for faces that fail the quality audit, the output module 106 outputs the face position and the reason the audit failed.
Faces passing the quality audit are sent to the face recognition module 104, where they are aligned based on the key point information. Face recognition is divided into two steps: face feature extraction and face feature matching.
Face feature extraction is based on deep learning, with SE-ResNet50 as the feature extraction network. In the training stage, the network is trained to finely classify faces by ID; in the feature extraction stage, the output of the layer before the classification layer is taken as the face feature, with dimensionality 256.
Face feature matching uses the HNSW vector matching algorithm to search for the face record with the maximum similarity; the similarity calculation is based on cosine similarity, with the matching score computed as the cosine distance:
similar_{a,b} = 1 - (a · b) / (‖a‖ ‖b‖)
where a and b are face feature vectors; the closer similar_{a,b} is to 0, the more similar the face features.
Before the HNSW vector matching algorithm is applied, the multi-layer query graph used by HNSW must be constructed from the face feature library. The implementation can adopt a three-layer graph structure, ordered from shallow to deep as layer_1, layer_2, layer_3.
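For illustration, construction and query of such a structure could be sketched with the off-the-shelf hnswlib package (an assumed library choice; the disclosure builds its own three-layer graph, and the M / ef parameters below are assumptions). hnswlib's cosine space returns a distance of 1 - cosine similarity, matching the "closer to 0, more similar" convention above:

```python
import hnswlib
import numpy as np

dim = 256  # feature dimensionality from the embodiment

# Stand-in for the real face feature library: normalized 256-d vectors.
library = np.random.rand(10000, dim).astype(np.float32)
library /= np.linalg.norm(library, axis=1, keepdims=True)

# Build the multi-layer HNSW graph over the library.
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=20000, ef_construction=200, M=16)
index.add_items(library, np.arange(len(library)))
index.set_ef(64)  # search-time breadth

# Query: distances close to 0 mean highly similar features.
labels, distances = index.knn_query(library[0:1], k=1)
```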
Because the face library includes extended face features added during operation, and these differ from normally enrolled face features, the rules for building the three-layer graph structure differ for them.
When a query feature hits the face library, the library can be expanded online if the following conditions are met: 1) similar_{a,b} < 0.1; 2) cnt_ext < 5, where cnt_ext is the number of records already expanded for the current feature. When both conditions hold, the query feature is written into the face library and marked as pending, and the HNSW multi-layer graph structure is not yet updated. The identity is then confirmed by a front-end user; once confirmed as correct, the feature is marked as determined and inserted into the multi-layer graph structure according to the following probability rule:
There are 3 layers of graphs in total, ordered from shallow to deep as layer_1, layer_2, layer_3, with corresponding insertion probabilities p_1, p_2, p_3 satisfying:
p_1 < p_2 < p_3
p_1 + p_2 + p_3 = 1
Let p_1 = 0.1, p_2 = 0.3, p_3 = 0.6; features are inserted into the multi-layer graph structure according to these three probabilities.
The data output module 106 returns the detected and recognized face data to the front-end application in the form of a request or a message queue.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments are merely illustrative, and for example, a division of a unit is merely a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (20)

1. A real-time online optimization-based face recognition method is characterized by comprising the following steps:
acquiring image data to be identified, and analyzing the image data to obtain input data;
carrying out face detection on the input data based on a face detection network, wherein the application type of the input data is judged before each detection, and a corresponding inference branch is selected based on the application type to carry out face detection so as to obtain face region data in the input data;
checking the face quality of the face region data at least according to face sharpness, face brightness, face angle and face visibility;
and performing face feature extraction and HNSW-based face matching on the face region data which passes face quality audit so as to realize face recognition.
2. The method of claim 1, further comprising: adding face features that meet preset warehousing conditions to the face feature base library, and, after a face feature is obtained and confirmed for warehousing, adding it to the HNSW multi-layer graph data structure according to the probability rule for expanded face features.
3. The method of claim 2, wherein the preset warehousing condition comprises: let the similarity be similarity, the similarity threshold be t_sim, and the extended similarity threshold be t_ext with t_ext > t_sim; let the number of records already expanded for the current feature be cnt_ext. When similarity > t_ext and cnt_ext < 5, the face feature is warehoused and marked as pending.
4. A method as claimed in claim 2 or 3, wherein the probability rule for expanding the face features is: given an M-layer graph ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M, the insertion probabilities satisfy:
p_1 < p_2 < ... < p_M
p_1 + p_2 + ... + p_M = 1
5. the method of claim 1, wherein the face detection comprises:
judging the application type of the input data;
detecting faces of different sizes on feature maps of different scales, with the scales divided into three levels (small, medium, and large), each level corresponding to a different inference branch of the face detection network; and selectively executing the corresponding inference branches based on the application type, running the small and medium branches in scenes where faces are far away and the small and large branches in scenes where faces are close, to obtain first face region data.
6. The method of claim 5, further comprising: and screening the first face region data based on the confidence coefficient to obtain second face region data.
7. The method of claim 1, wherein the face region data is divided into a plurality of blocks and gray values are counted per block; the darkness and lightness of the face region are calculated from the gray values, and the face brightness is judged from these two indexes.
8. The method of claim 1, wherein face key points are extracted by a face key point extraction algorithm, the key point information comprising the horizontal coordinate, vertical coordinate, and visibility of each key point; the horizontal and vertical face angles are obtained from the key point information; or the number of key points whose visibility meets a preset value is counted, and the face quality is audited by this count.
9. The method of claim 8, wherein at least 21 face key points are extracted.
10. The method of claim 1, wherein a data structure for HNSW graph search is constructed based on a face feature library; and searching the face features with the maximum similarity by adopting an HNSW vector matching algorithm.
11. A real-time, online-optimizable face recognition system, comprising:
the input module is used for acquiring image data to be identified and analyzing the image data to obtain input data;
the face detection module is used for carrying out face detection on the input data based on a face detection network, wherein the application type of the input data is judged before each detection, and a corresponding inference branch is selected based on the application type to carry out face detection so as to obtain face region data in the input data;
the face quality auditing module is used for auditing the face quality of the face region data at least according to face sharpness, face brightness, face angle and face visibility;
and the face recognition module is used for extracting the face features of the face region data which passes the face quality audit and carrying out face matching based on HNSW so as to realize face recognition.
12. The system of claim 11, further comprising a face library online expansion module, configured to add face features that meet preset warehousing conditions to the face feature base library and, after a face feature is obtained and confirmed for warehousing, add it to the HNSW multi-layer graph data structure according to the probability rule for expanded face features.
13. The system of claim 12, wherein the preset warehousing condition comprises: let the similarity be similarity, the similarity threshold be t_sim, and the extended similarity threshold be t_ext with t_ext > t_sim; let the number of records already expanded for the current feature be cnt_ext. When similarity > t_ext and cnt_ext < 5, the face feature is warehoused and marked as pending.
14. The system of claim 12 or 13, wherein the probability rule for expanding the face features is: given an M-layer graph ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M, the insertion probabilities satisfy:
p_1 < p_2 < ... < p_M
p_1 + p_2 + ... + p_M = 1
15. the system of claim 11, wherein the face detection module is further to: judging the application type of the input data;
detect faces of different sizes on feature maps of different scales, with the scales divided into three levels (small, medium, and large) that correspond, via anchor boxes, to different inference branches of the face detection network;
and selectively execute the corresponding branches based on the application type, running the small and medium branches in scenes where faces are far away and the small and large branches in scenes where faces are close, to obtain first face region data.
16. The system of claim 15, wherein the face detection module is further configured to: and screening the first face region data based on the confidence coefficient to obtain second face region data.
17. The system of claim 11, wherein the face quality auditing module is further configured to: divide the face region data into a plurality of blocks and count gray values per block; calculate the darkness and lightness of the face region from the gray values, and judge the face brightness from these two indexes.
18. The system of claim 11, wherein the face quality auditing module is further configured to: extract face key points by a face key point extraction algorithm, the key point information comprising the horizontal coordinate, vertical coordinate, and visibility of each key point; obtain the horizontal and vertical face angles from the key point information; or count the number of key points whose visibility meets a preset value and audit the face quality by this count.
19. The system of claim 11, wherein the face recognition module is further configured to: constructing a data structure for HNSW graph search based on the face feature library; and searching the face features with the maximum similarity by adopting an HNSW vector matching algorithm.
20. A computer storage medium storing a computer program which, when executed, implements the method of any one of claims 1 to 10.
CN202010812277.XA 2020-08-13 2020-08-13 Real-time online optimization face recognition system and method Pending CN112001280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010812277.XA CN112001280A (en) 2020-08-13 2020-08-13 Real-time online optimization face recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010812277.XA CN112001280A (en) 2020-08-13 2020-08-13 Real-time online optimization face recognition system and method

Publications (1)

Publication Number Publication Date
CN112001280A (en) 2020-11-27

Family

ID=73463116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010812277.XA Pending CN112001280A (en) 2020-08-13 2020-08-13 Real-time online optimization face recognition system and method

Country Status (1)

Country Link
CN (1) CN112001280A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605993A (en) * 2013-12-04 2014-02-26 康江科技(北京)有限责任公司 Image-to-video face identification method based on distinguish analysis oriented to scenes
CN108090406A (en) * 2016-11-23 2018-05-29 浙江宇视科技有限公司 Face identification method and system
CN107491767A (en) * 2017-08-31 2017-12-19 广州云从信息科技有限公司 End to end without constraint face critical point detection method
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium
CN110751043A (en) * 2019-09-19 2020-02-04 平安科技(深圳)有限公司 Face recognition method and device based on face visibility and storage medium
CN110826519A (en) * 2019-11-14 2020-02-21 深圳市华付信息技术有限公司 Face occlusion detection method and device, computer equipment and storage medium
CN110866500A (en) * 2019-11-19 2020-03-06 上海眼控科技股份有限公司 Face detection alignment system, method, device, platform, mobile terminal and storage medium
CN111241345A (en) * 2020-02-18 2020-06-05 腾讯科技(深圳)有限公司 Video retrieval method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN XIA ET AL.: "Face Recognition and Application of Film and Television Actors Based on Dlib", IEEE *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743308A (en) * 2021-09-06 2021-12-03 汇纳科技股份有限公司 Face recognition method, device, storage medium and system based on feature quality
CN113743308B (en) * 2021-09-06 2023-12-12 汇纳科技股份有限公司 Face recognition method, device, storage medium and system based on feature quality
WO2023040480A1 (en) * 2021-09-15 2023-03-23 上海商汤智能科技有限公司 Image detection method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
Föckler et al. Phoneguide: museum guidance supported by on-device object recognition on mobile phones
CN111539370A (en) Image pedestrian re-identification method and system based on multi-attention joint learning
CN114972418A (en) Maneuvering multi-target tracking method based on combination of nuclear adaptive filtering and YOLOX detection
CN111126122B (en) Face recognition algorithm evaluation method and device
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
CN110827432B (en) Class attendance checking method and system based on face recognition
KR20170015639A (en) Personal Identification System And Method By Face Recognition In Digital Image
CN110765841A (en) Group pedestrian re-identification system and terminal based on mixed attention mechanism
CN111814690A (en) Target re-identification method and device and computer readable storage medium
CN112001280A (en) Real-time online optimization face recognition system and method
CN113378675A (en) Face recognition method for simultaneous detection and feature extraction
CN111814846B (en) Training method and recognition method of attribute recognition model and related equipment
CN114783037B (en) Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium
CN111091057A (en) Information processing method and device and computer readable storage medium
CN113537107A (en) Face recognition and tracking method, device and equipment based on deep learning
CN114581990A (en) Intelligent running test method and device
CN116824641B (en) Gesture classification method, device, equipment and computer storage medium
CN117036410A (en) Multi-lens tracking method, system and device
CN113657169B (en) Gait recognition method, device and system and computer readable storage medium
CN110968719A (en) Face clustering method and device
CN115527168A (en) Pedestrian re-identification method, storage medium, database editing method, and storage medium
CN113255549A (en) Intelligent recognition method and system for pennisseum hunting behavior state
CN113627383A (en) Pedestrian loitering re-identification method for panoramic intelligent security
CN111382628B (en) Method and device for judging peer
CN114220078A (en) Target re-identification method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination