CN109522853A - Face detection and search method for surveillance video - Google Patents

Face detection and search method for surveillance video

Info

Publication number
CN109522853A
CN109522853A
Authority
CN
China
Prior art keywords
face
surveillance video
facial image
eyebrow
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811400352.0A
Other languages
Chinese (zh)
Other versions
CN109522853B (en)
Inventor
Xie Jianbin (谢剑斌)
Li Peiqin (李沛秦)
Yan Wei (闫玮)
Zhang Shuhua (张术华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
Hunan Zhongzhi Junying Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Zhongzhi Junying Technology Co., Ltd.
Priority to CN201811400352.0A
Publication of CN109522853A
Application granted
Publication of CN109522853B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 — Detection; Localisation; Normalisation
    • G06V40/165 — Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a face detection and search method for surveillance video. A face detector is trained first. A surveillance video frame to be processed for face recognition and search is input and detected with the face detector, yielding the face region I_f in the frame; facial feature localization is performed within I_f to obtain the localization result for the surveillance-video face. A target face image is then determined and its facial features are localized in the same way, giving the target-face localization result. From the two localization results obtained in the preceding steps, the full-face and local-part similarities between the two faces are computed. Finally, the probability-fusion similarity between the surveillance-video face image and the target face image is computed to obtain the search matching result. The invention makes search results more accurate.

Description

Face detection and search method for surveillance video
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face detection and search method for surveillance video.
Background technique
With the large-scale deployment of video surveillance systems and continued research on the underlying algorithms, combining face-recognition-based search with video surveillance can effectively raise the intelligence level of video surveillance. It can improve working efficiency in tasks such as online manhunts and combating human trafficking, reduce manual workload, and raise the probability of finding a target person. A prominent limiting factor at present is that surveillance video mostly captures large scenes, so the face regions in the frames tend to be small; faces may therefore go undetected, degrading recognition and search performance.
Several face detection and search methods already exist. The invention patent application with publication number CN103824051A, published May 28, 2014, discloses a face search method based on local-region matching: faces are aligned to a standard grid and divided into several organ regions, and multi-level features are extracted from each region for classification and recognition. The application with publication number CN106886739A, published June 23, 2017, discloses a video surveillance method based on face recognition; it extracts hierarchical face histograms as features, which are robust to image noise. The application with publication number CN104899576A, published September 9, 2015, discloses a face-recognition feature-extraction method based on the Gabor transform and HOG: face images are first extracted and normalized, the normalized faces are filtered with Gabor filters of 40 different orientations and scales to obtain multi-orientation, multi-scale Gabor features, and HOG processing is then applied to these Gabor features to obtain their gradient histograms, enhancing the Gabor filtering results. The application with publication number CN104700089A, published June 10, 2015, discloses a face recognition method based on Gabor and SB2DLPP, comprising four steps: preprocessing, feature extraction, feature dimensionality reduction, and classification. (1) All face images in a known face database are preprocessed, including size normalization and histogram equalization; (2) features are extracted from the preprocessed face images using Gabor wavelets; (3) class information is introduced, and the supervised bidirectional two-dimensional locality preserving projections (SB2DLPP) algorithm is applied to the extracted high-dimensional image features for dimensionality reduction, yielding feature matrices mapped to a low-dimensional subspace; (4) classification is performed with a nearest-neighbor classifier. The application with publication number CN103679151A, published March 26, 2014, discloses a face clustering method fusing LBP and Gabor features. The application with publication number CN104820844A, published August 5, 2015, discloses a face recognition method that divides the features obtained from the face image to be recognized into positive and negative samples, applies Adaboost feature selection to these samples to obtain salient features and a feature subspace, trains an ECC encoding matrix on the feature subspace using circularly-symmetrically partitioned SVMs, and matches the salient features using those SVMs and the ECC encoding matrix.
The above methods concentrate mainly on face-feature representation. They do not solve the missed-detection problem caused by small face regions in surveillance video, and they extract features mainly from large, detail-rich faces; for the smaller, less detailed faces typical of actual surveillance video, these feature-extraction methods cannot achieve the performance they obtain on high-quality face images.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a face detection and search method for surveillance video.
The purpose of the present invention is achieved through the following technical scheme:
First, for the small face targets in surveillance video, the present invention trains a face detector based on the YOLO V3 method. Then, within each detected face region, the face image is divided into local facial parts, and a corresponding convolutional neural network is trained for each part to extract its features. Finally, the similarity between the target face and the template face is computed by probability fusion, achieving reliable search.
Referring to Fig. 1, the face detection and search method for surveillance video comprises the following steps:
S1: train the face detector;
S2: input the surveillance video frame to be processed for face recognition and search, detect it with the face detector obtained in S1 to get the face region I_f in the frame, and perform facial feature localization within I_f to obtain the facial feature localization result of the surveillance-video face;
S3: determine the target face image, perform facial feature localization on it, and obtain the target face's facial feature localization result;
S4: based on the facial feature localization result of the surveillance-video face image obtained in S2 and that of the target face image obtained in S3, compute the full-face and local-part similarities between the two;
S5: compute the probability-fusion similarity between the surveillance-video face image and the target face image to obtain the search matching result.
In S1 of the present invention, the face detector is trained as follows:
S1.1: create the training dataset.
The training dataset contains multiple surveillance video images with different face images (usually no fewer than 10,000), and across these images the face regions cover sizes from small (e.g., 20 × 20 pixels) to large (for a surveillance video resolution of 1280 × 1024, a maximum face of 256 × 256 pixels; larger faces fall outside the usual range of surveillance video).
S1.2: annotate the face regions in all surveillance video images in the training dataset and generate a corresponding annotation file for each image.
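As an illustration of S1.2, YOLO-style training consumes one annotation file per image, with one line per box giving the class index and the box center and size normalized by the image dimensions. A minimal conversion sketch follows; the function name and file layout are illustrative, not part of the patent:

```python
# Sketch: convert a pixel-space face box to a darknet/YOLO label line.
# The "class cx cy w h" line format, normalized to [0, 1], is the usual
# darknet convention for YOLO V3 training data.

def to_yolo_line(box, img_w, img_h, class_id=0):
    """box = (x_min, y_min, x_max, y_max) in pixels; class 0 = 'face'."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2.0 / img_w      # normalized box center x
    cy = (y_min + y_max) / 2.0 / img_h      # normalized box center y
    w = (x_max - x_min) / img_w             # normalized box width
    h = (y_max - y_min) / img_h             # normalized box height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a 20x20-pixel face in a 1280x1024 surveillance frame.
print(to_yolo_line((600, 400, 620, 420), 1280, 1024))
```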
S1.3: randomly divide the surveillance video images in the training dataset into a training set and a test set according to a chosen ratio. The training set is used to build the model; the test set is used to evaluate the model's generalization ability.
S1.4: train and test with YOLO V3, ultimately generating the trained face detector.
Set up the YOLO V3 training configuration file; download the YOLO V3 pre-training file; call YOLO V3's train command with the above training set, test set, configuration file, and pre-training file; when training completes, the trained face detector is generated automatically.
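A sketch of how S1.4 might be launched with the darknet implementation of YOLO V3 (the "detector train" sub-command and the darknet53.conv.74 pre-training weights are standard darknet usage; the .data and .cfg file names are placeholders):

```python
# Sketch: launching YOLO V3 face-detector training via the darknet CLI.
import subprocess

subprocess.run(
    [
        "./darknet", "detector", "train",
        "faces.data",          # points to the train/test lists and class names
        "yolov3-faces.cfg",    # YOLO V3 config adapted to 1 class (face)
        "darknet53.conv.74",   # pre-training file (ImageNet backbone weights)
    ],
    check=True,                # raise if training exits with an error
)
```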
The innovation of this method lies here: YOLO (You Only Look Once) is an object-detection method based on a single neural network, and its V3 version achieves a higher detection rate on small targets than other convolutional-neural-network-based object detectors, while also running faster. Step S1 exploits this advantage of YOLO V3: training on small-size face samples yields a face detector that effectively detects the smaller faces in surveillance video.
In S2 of the present invention, facial feature localization proceeds as follows:
The face detector obtained in S1 is applied to the surveillance video frame to be processed for face recognition and search, yielding the face region I_f with width = height = D_I; facial feature localization is then performed on it in the following steps:
S2.1: detect the facial keypoints in the face region I_f using the Dlib facial keypoint detection method. Referring to Fig. 2, the keypoints are distributed over the full-face contour and the left eyebrow, right eyebrow, left eye, right eye, nose, and lips; the positions and serial numbers of the keypoints detected by Dlib are shown in Fig. 2.
S2.2: take the nose tip among the keypoints detected in S2.1 as the geometric center and 1.5 × D_I as the side length, crop a square face region from the frame, and scale it to 128 × 128 pixels, obtaining the size-normalized face region I_fs.
S2.3: run the Dlib keypoint detection on I_fs again to detect the keypoints distributed over the full-face contour, left eyebrow, right eyebrow, left eye, right eye, nose, and lips, obtaining the full-face keypoint distribution image; from it, crop six local images out of I_fs: left eyebrow, right eyebrow, left eye, right eye, nose, and lips.
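A sketch of S2.1–S2.3 using the dlib 68-landmark predictor and OpenCV (the landmark index ranges follow the standard 68-point convention, under which the patent's serial number 31 is 0-based index 30; the crop arithmetic and helper names are illustrative):

```python
# Sketch of the facial feature localization pipeline under the standard
# dlib 68-landmark convention.
import cv2
import dlib

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

PARTS = {  # 0-based landmark index ranges per facial part (68-pt convention)
    "right_eyebrow": range(17, 22), "left_eyebrow": range(22, 27),
    "nose": range(27, 36), "right_eye": range(36, 42),
    "left_eye": range(42, 48), "lips": range(48, 68),
}

def normalize_face(frame, face_rect):
    """S2.1-S2.2: crop a 1.5*D_I square centered on the nose tip, scale to 128x128."""
    shape = predictor(frame, face_rect)
    d_i = face_rect.width()                       # detector box side D_I
    cx, cy = shape.part(30).x, shape.part(30).y   # nose tip (serial no. 31)
    half = int(1.5 * d_i) // 2
    crop = frame[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    return cv2.resize(crop, (128, 128))           # I_fs

def crop_parts(i_fs):
    """S2.3: re-detect keypoints on I_fs and cut the six local part images."""
    shape = predictor(i_fs, dlib.rectangle(0, 0, 127, 127))
    out = {}
    for name, idx in PARTS.items():
        xs = [shape.part(i).x for i in idx]
        ys = [shape.part(i).y for i in idx]
        out[name] = i_fs[min(ys):max(ys) + 1, min(xs):max(xs) + 1]
    return out
```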
In step S3 of the present invention, facial feature localization of the target face image is identical to the facial feature localization in S2: apply the methods of S2.1 and S2.2 to the target face image to obtain its size-normalized face region, then apply the method of S2.3 to obtain the target face's full-face keypoint distribution image and its six local images: left eyebrow, right eyebrow, left eye, right eye, nose, and lips.
In step S4 of the present invention, the similarity between the surveillance-video face image and the target face image is computed in the same way for the left eyebrow, right eyebrow, left eye, right eye, nose, mouth, and full face. Taking the left eyebrow as an example, the detailed procedure is:
S4.1: collect a large number of left-eyebrow samples in advance as the left-eyebrow training set, and classify each sample in the set by its distinctive type, dividing the left eyebrows into typical categories such as thin eyebrows, thick eyebrows, unibrows, and arched eyebrows; let the number of typical left-eyebrow categories be k_1. Train the convolutional neural network for the left eyebrow on the categorized training set;
S4.2: input the left-eyebrow image of the surveillance-video face image obtained in S2.3 into the left-eyebrow convolutional neural network trained in S4.1, and output its feature vector F_0;
S4.3: among the typical-category template feature vectors from S4.1, find the category S_i0 whose template has the greatest Euclidean-distance similarity to F_0; the maximum Euclidean-distance similarity P_i0 serves as the probability of being classified into that category (i = 1 when computing the left eyebrow):
(1) Select one image from the left-eyebrow samples of each typical category in the S4.1 training set and input it into the left-eyebrow convolutional neural network, obtaining the corresponding typical-category feature vectors F_m, m = 1, 2, 3, ..., k_1, where k_1 is the number of typical left-eyebrow categories.
(2) Compute the Euclidean-distance similarity between F_0 and each F_m, m = 1, 2, 3, ..., k_1, a similarity that increases as the Euclidean distance between the two vectors decreases. Here n is the length of the feature vectors F_0 and F_m (both are computed by the same convolutional neural network, so their lengths are identical), F_0(j) denotes the j-th element of F_0, and F_m(j) denotes the j-th element of F_m.
(3) Take the largest of these Euclidean-distance similarities as P_i0; the category serial number corresponding to this maximum similarity is recorded as S_i0. Since the object computed here is the left eyebrow, i = 1.
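Since the similarity formula itself appears only as an image in the source, the sketch below assumes the common form sim = 1 / (1 + d), where d is the Euclidean distance; this assumed form is maximal for the nearest category template, as S4.3 requires:

```python
# Sketch of S4.2-S4.3: classify a part by nearest category template.
# The 1/(1+d) similarity is an assumption standing in for the formula
# rendered as an image in the source.
import numpy as np

def classify_part(f0, templates):
    """f0: CNN feature vector of the part from the surveillance face.
    templates: list of k feature vectors F_m, one per typical category.
    Returns (S, P): 1-based category serial number and its similarity."""
    sims = [1.0 / (1.0 + np.linalg.norm(f0 - fm)) for fm in templates]
    s = int(np.argmax(sims))
    return s + 1, sims[s]          # (S_i0, P_i0) for this part

# Toy usage with k_1 = 4 left-eyebrow categories and length-128 vectors:
rng = np.random.default_rng(0)
templates = [rng.normal(size=128) for _ in range(4)]
S, P = classify_part(templates[2] + 0.05 * rng.normal(size=128), templates)
print(S, round(P, 3))              # -> 3 and a similarity close to 1
```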
S4.4: using the same procedure as S4.1–S4.3, compute for the other facial parts of the surveillance-video face image their CNN classification results S_i0 and corresponding probabilities P_i0, i = 2, 3, ..., 6, denoting the right eyebrow, left eye, right eye, nose, and mouth respectively;
S4.5: using the same procedure as S4.1–S4.4, compute for the facial parts of the target face image obtained in S3 their CNN classification results S_i1 and corresponding probabilities P_i1, i = 1, 2, 3, ..., 6, denoting the left eyebrow, right eyebrow, left eye, right eye, nose, and mouth respectively.
When training the right-eyebrow convolutional neural network, the right eyebrows in the samples are divided into typical categories such as thin eyebrows, thick eyebrows, unibrows, and arched eyebrows. When training the left-eye and right-eye networks, the eyes in the samples are divided into typical eye categories. When training the nose network, the noses in the samples are divided into typical categories such as straight noses, flat noses, aquiline noses, and upturned noses. When training the mouth network, the mouths in the samples are divided into typical categories such as thick-lipped, thin-lipped, drooping mouth corners, and upturned mouth corners. The typical categories of each local part follow the generally accepted classification of that part.
Further, the method includes step S4.6: for the full-face image, the target face images are divided into k_7 classes according to the person they belong to; using the same procedure as S4.2–S4.3, compute the full-face CNN classification result S_i7 of the surveillance-video face image and its corresponding probability P_i7.
In S5 of the present invention, the probability-fusion similarity p between the surveillance-video face image and the target face image is computed by fusing the classification results S_i0, S_i1 and classification probabilities P_i0, P_i1 of the seven parts obtained above.
The probability-fusion similarity p is compared against a preset threshold; if p exceeds the threshold, the face image in the surveillance video is judged similar to the target face image, and the search matching result is returned.
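The fusion formula for p is likewise rendered as an image in the source; the sketch below shows one plausible reading consistent with the surrounding text, in which a part contributes the product of its two classification probabilities only when both images fall into the same typical category. This exact form is an assumption, not the patent's stated formula:

```python
# Sketch of S5 under an assumed agreement-weighted fusion rule.
def probability_fusion(parts):
    """parts: list of 7 tuples (S_i0, P_i0, S_i1, P_i1) for the six
    local parts plus the full face. Returns the fused similarity p."""
    scores = [p0 * p1 if s0 == s1 else 0.0 for s0, p0, s1, p1 in parts]
    return sum(scores) / len(scores)

def is_match(parts, threshold=0.5):   # threshold value is illustrative
    """Return the search matching decision for one candidate face."""
    return probability_fusion(parts) > threshold
```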
A convolutional neural network (Convolutional Neural Network, hereinafter CNN) is an intelligent feature-extraction method: a known training image set is input into the network, and its processing and parameters are generated and optimized through training, so better quality and efficiency can be obtained. The method used in step S4 of the present invention divides the face into six parts (left eyebrow, right eyebrow, left eye, right eye, nose, mouth) plus the full face, seven kinds of images in total, as shown in Fig. 3. A corresponding CNN is trained for each of the seven kinds of images for feature extraction and classification, as shown in Fig. 4. Each facial part is classified into a category by its convolutional neural network, which outputs the associated probability; the probabilities are then combined to output the probability that the two faces are the same person. The resulting technical effects are:
(1) Face features are described and extracted from part-level distinctiveness rather than global features alone; the distinctive features of local facial parts are more salient and stable, and less affected by expression and pose;
(2) Local and global features are fused, making the features more comprehensive and better able to capture the characteristics of the small faces in surveillance video.
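For concreteness, a minimal per-part CNN in the spirit of Fig. 4 is sketched below. The patent does not specify the architecture, so the layer sizes, the 128-dimensional feature length, and the use of penultimate activations as the feature vector F are illustrative assumptions:

```python
# Minimal per-part CNN sketch (PyTorch). One such network is trained per
# facial part (k_i typical categories); its penultimate activations serve
# as the feature vector F compared in S4.3.
import torch
import torch.nn as nn

class PartCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128),          # 128-d feature vector F
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        f = self.features(x)                     # feature vector F
        return f, self.classifier(f)             # features + class logits

# e.g. the left-eyebrow network with k_1 = 4 typical categories:
model = PartCNN(num_classes=4)
f, logits = model(torch.randn(1, 3, 32, 64))    # part crops vary in size
```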
Regarding the fusion of multiple CNN results: the conventional method averages the outputs of the networks as the final result, but such a simple average cannot effectively reflect face characteristics when the distinctiveness of different facial parts varies. In S5 of the present invention, the CNN outputs are fused on a probability basis, reflecting the distinctive characteristics of the face. Through this processing, multi-CNN face-similarity computation based on probability fusion is realized, with technical effects in the following respects:
(1) The distinctiveness of local parts is fused with the global features of the whole face, improving classification correctness.
(2) Probability fusion of multiple CNN classification results effectively reflects the distinctiveness strength of different part features, making search results more accurate.
Brief description of the drawings
Fig. 1 is the flow chart of the invention.
Fig. 2 shows the facial keypoint positions and serial numbers.
Fig. 3 shows the facial part segmentation.
Fig. 4 is a schematic of the multiple convolutional neural networks fusing local and global features.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly, completely, and in further detail below with reference to the accompanying drawings; the embodiments of the present invention are not limited to these examples.
Referring to Fig. 1, the face detection and search method for surveillance video comprises the following steps:
S1: train the face detector.
S1.1: create the training dataset.
The training dataset contains multiple surveillance video images with different face images (usually no fewer than 10,000), and across these images the face regions cover sizes from small (e.g., 20 × 20 pixels) to large (for a surveillance video resolution of 1280 × 1024, a maximum face of 256 × 256 pixels; larger faces fall outside the usual range of surveillance video).
S1.2: annotate the face regions in all surveillance video images in the training dataset and generate a corresponding annotation file for each image.
S1.3: randomly divide the surveillance video images in the training dataset into a training set and a test set according to a chosen ratio. The training set is used to build the model; the test set is used to evaluate the model's generalization ability.
S1.4: train and test with YOLO V3, ultimately generating the trained face detector.
Set up the YOLO V3 training configuration file; download the YOLO V3 pre-training file; call YOLO V3's train command with the above training set, test set, configuration file, and pre-training file; when training completes, the trained face detector is generated automatically.
S2: input the surveillance video frame to be processed for face recognition and search, detect it with the face detector obtained in S1 to get the face region I_f in the frame, and perform facial feature localization within I_f to obtain the facial feature localization result of the surveillance-video face.
The face detector obtained in S1 is applied to the surveillance video frame, yielding the face region I_f with region width = height = D_I; the following facial feature localization is performed:
S2.1: detect the facial keypoints in the face region I_f using the Dlib facial keypoint detection method. Referring to Fig. 2, the keypoints are distributed over the full-face contour and the left eyebrow, right eyebrow, left eye, right eye, nose, and lips; the positions and serial numbers of the keypoints detected by Dlib are shown in Fig. 2.
S2.2: take the nose tip among the facial keypoints (serial number 31 in Fig. 2 is the nose tip) as the geometric center and 1.5 × D_I as the side length, crop a square face region from the frame, and scale it to 128 × 128 pixels, obtaining the size-normalized face region I_fs.
S2.3: run the Dlib keypoint detection on I_fs again to detect the keypoints distributed over the full-face contour, left eyebrow, right eyebrow, left eye, right eye, nose, and lips, obtaining the full-face keypoint distribution image; from it, crop six local images out of I_fs: left eyebrow, right eyebrow, left eye, right eye, nose, and lips.
S3: determine the target face image, perform facial feature localization on it, and obtain the target face's facial feature localization result.
Facial feature localization of the target face image in S3 is identical to that in S2: apply the methods of S2.1 and S2.2 to the target face image to obtain its size-normalized face region, then apply the method of S2.3 to obtain the target face's full-face keypoint distribution image and its six local images: left eyebrow, right eyebrow, left eye, right eye, nose, and lips.
S4: based on the facial feature localization result of the surveillance-video face image obtained in S2 and that of the target face image obtained in S3, compute the full-face and local-part similarities between the two.
S4.1: collect a large number of left-eyebrow samples in advance as the left-eyebrow training set, and classify each sample in the set by its distinctive type, dividing the left eyebrows into typical categories such as thin eyebrows, thick eyebrows, unibrows, and arched eyebrows; let the number of typical left-eyebrow categories be k_1. Train the convolutional neural network for the left eyebrow on the categorized training set.
S4.2: input the left-eyebrow image of the surveillance-video face image obtained in S2.3 into the left-eyebrow convolutional neural network trained in S4.1, and output its feature vector F_0.
S4.3: among the typical-category template feature vectors from S4.1, find the category S_i0 whose template has the greatest Euclidean-distance similarity to F_0; the maximum Euclidean-distance similarity P_i0 serves as the probability of being classified into that category (i = 1 when computing the left eyebrow):
(1) Select one image from the left-eyebrow samples of each typical category in the S4.1 training set and input it into the left-eyebrow convolutional neural network, obtaining the corresponding typical-category feature vectors F_m, m = 1, 2, 3, ..., k_1, where k_1 is the number of typical left-eyebrow categories;
(2) Compute the Euclidean-distance similarity between F_0 and each F_m, m = 1, 2, 3, ..., k_1, where n is the length of the feature vectors F_0 and F_m, F_0(j) denotes the j-th element of F_0, and F_m(j) denotes the j-th element of F_m;
(3) Take the largest of the Euclidean-distance similarities computed in (2) as P_i0; the category serial number corresponding to this maximum similarity is recorded as S_i0. Since the object computed here is the left eyebrow, i = 1.
S4.4: using the same procedure as S4.1–S4.3, compute for the other facial parts of the surveillance-video face image their CNN classification results S_i0 and corresponding probabilities P_i0, i = 2, 3, ..., 6, denoting the right eyebrow, left eye, right eye, nose, and mouth respectively.
S4.5: using the same procedure as S4.1–S4.4, compute for the facial parts of the target face image obtained in S3 their CNN classification results S_i1 and corresponding probabilities P_i1, i = 1, 2, 3, ..., 6, denoting the left eyebrow, right eyebrow, left eye, right eye, nose, and mouth respectively;
S4.6: for the full-face image, the target face images are divided into k_7 classes according to the person they belong to; using the same procedure as S4.2–S4.3, compute the full-face CNN classification result S_i7 of the surveillance-video face image and its corresponding probability P_i7.
S5: compute the probability-fusion similarity between the surveillance-video face image and the target face image to obtain the search matching result.
The probability-fusion similarity p between the surveillance-video face image and the target face image is computed by fusing the classification results S_i0, S_i1 and classification probabilities P_i0, P_i1 of the seven parts obtained above.
The probability-fusion similarity p is compared against a preset threshold; if p exceeds the threshold, the face image in the surveillance video is judged similar to the target face image, and the search matching result is returned.
In conclusion, although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Any person of ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the present invention; therefore the protection scope of the present invention shall be as defined by the claims.

Claims (9)

1. A face detection and search method for surveillance video, characterized by comprising the following steps:
S1: train a face detector;
S2: input the surveillance video frame to be processed for face recognition and search, detect it with the face detector obtained in S1 to get the face region I_f in the frame, and perform facial feature localization within I_f to obtain the facial feature localization result of the surveillance-video face;
S3: determine the target face image, perform facial feature localization on it, and obtain the target face's facial feature localization result;
S4: based on the facial feature localization result of the surveillance-video face image obtained in S2 and that of the target face image obtained in S3, compute the full-face and local-part similarities between the two;
S5: compute the probability-fusion similarity between the surveillance-video face image and the target face image to obtain the search matching result.
2. The face detection and search method for surveillance video according to claim 1, characterized in that the face detector in S1 is trained as follows:
S1.1: create the training dataset;
S1.2: annotate the face regions in all surveillance video images in the training dataset and generate a corresponding annotation file for each image;
S1.3: randomly divide the surveillance video images in the training dataset into a training set and a test set according to a chosen ratio, the training set being used to build the model and the test set to evaluate the model's generalization ability;
S1.4: train and test with YOLO V3, ultimately generating the trained face detector.
3. The face detection and search method for surveillance video according to claim 2, characterized in that in S1.1 the training dataset contains no fewer than 10,000 surveillance video images with different face images, and the face region sizes in these images range from 20 × 20 pixels to 256 × 256 pixels.
4. The face detection and search method for surveillance video according to claim 3, characterized in that in S2 the face detector obtained in S1 detects the surveillance video frame to be processed for face recognition and search, yielding the face region I_f with width = height = D_I, and the following facial feature localization is performed on it:
S2.1: detect the facial keypoints in the face region I_f using the Dlib facial keypoint detection method, the keypoints being distributed over the full-face contour and the left eyebrow, right eyebrow, left eye, right eye, nose, and lips;
S2.2: take the nose tip among the keypoints detected in S2.1 as the geometric center and 1.5 × D_I as the side length, crop a square face region from the frame, and scale it to 128 × 128 pixels, obtaining the size-normalized face region I_fs;
S2.3: run the Dlib keypoint detection on I_fs again to detect the keypoints distributed over the full-face contour, left eyebrow, right eyebrow, left eye, right eye, nose, and lips, obtaining the full-face keypoint distribution image; from it, crop six local images out of I_fs: left eyebrow, right eyebrow, left eye, right eye, nose, and lips.
5. The face detection and search method for surveillance video according to claim 4, characterized in that facial feature localization of the target face image in S3 is identical to that in S2: apply the methods of S2.1 and S2.2 to the target face image to obtain its size-normalized face region, then apply the method of S2.3 to obtain the target face's full-face keypoint distribution image and its six local images: left eyebrow, right eyebrow, left eye, right eye, nose, and lips.
6. The face detection and search method for surveillance video according to claim 5, characterized in that S4 is implemented as follows:
S4.1: collect a large number of left-eyebrow samples in advance as the left-eyebrow training set, classify each sample by its distinctive type into typical categories such as thin eyebrows, thick eyebrows, unibrows, and arched eyebrows, let the number of typical left-eyebrow categories be k_1, and train the convolutional neural network for the left eyebrow on the categorized training set;
S4.2: input the left-eyebrow image of the surveillance-video face image obtained in S2.3 into the left-eyebrow convolutional neural network trained in S4.1, and output its feature vector F_0;
S4.3: among the typical-category template feature vectors from S4.1, find the category S_i0 whose template has the greatest Euclidean-distance similarity to F_0, the maximum Euclidean-distance similarity P_i0 serving as the probability of being classified into that category;
S4.4: using the same procedure as S4.1–S4.3, compute for the other facial parts of the surveillance-video face image their CNN classification results S_i0 and corresponding probabilities P_i0, i = 2, 3, ..., 6, denoting the right eyebrow, left eye, right eye, nose, and mouth respectively;
S4.5: using the same procedure as S4.1–S4.4, compute for the facial parts of the target face image obtained in S3 their CNN classification results S_i1 and corresponding probabilities P_i1, i = 1, 2, 3, ..., 6, denoting the left eyebrow, right eyebrow, left eye, right eye, nose, and mouth respectively.
7. The face detection and search method for surveillance video according to claim 6, characterized in that S4 further includes step S4.6: for the full-face image, the target face images are divided into k_7 classes according to the person they belong to; using the same procedure as S4.2–S4.3, compute the full-face CNN classification result S_i7 of the surveillance-video face image and its corresponding probability P_i7.
8. The face detection and search method for surveillance video according to claim 6, characterized in that S4.3 is implemented as follows:
(1) select one image from the left-eyebrow samples of each typical category in the S4.1 training set and input it into the left-eyebrow convolutional neural network, obtaining the corresponding typical-category feature vectors F_m, m = 1, 2, 3, ..., k_1, where k_1 is the number of typical left-eyebrow categories;
(2) compute the Euclidean-distance similarity between F_0 and each F_m, where n is the length of the feature vectors F_0 and F_m, F_0(j) denotes the j-th element of F_0, and F_m(j) denotes the j-th element of F_m;
(3) take the largest of the Euclidean-distance similarities computed in (2) as P_i0; the category serial number corresponding to this maximum similarity is recorded as S_i0, and since the object computed here is the left eyebrow, i = 1.
9. The face detection and search method for surveillance video according to claim 6, characterized in that in S5 the probability-fusion similarity p between the surveillance-video face image and the target face image is computed by fusing the classification results S_i0, S_i1 and classification probabilities P_i0, P_i1 of the seven parts obtained above; the probability-fusion similarity p is compared against a preset threshold, and if p exceeds the threshold, the face image in the surveillance video is judged similar to the target face image and the search matching result is returned.
CN201811400352.0A 2018-11-22 2018-11-22 Face detection and search method for surveillance video Active CN109522853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811400352.0A CN109522853B (en) 2018-11-22 2018-11-22 Face detection and search method for surveillance video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811400352.0A CN109522853B (en) 2018-11-22 2018-11-22 Face detection and search method for surveillance video

Publications (2)

Publication Number Publication Date
CN109522853A true CN109522853A (en) 2019-03-26
CN109522853B CN109522853B (en) 2019-11-19

Family

ID=65778619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811400352.0A Active CN109522853B (en) Face detection and search method for surveillance video

Country Status (1)

Country Link
CN (1) CN109522853B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201111085Y (en) * 2007-06-11 2008-09-03 湖北东润科技有限公司 Human face automatic recognition system
CN106156688A (en) * 2015-03-10 2016-11-23 上海骏聿数码科技有限公司 A kind of dynamic human face recognition methods and system
CN105893946A (en) * 2016-03-29 2016-08-24 中国科学院上海高等研究院 Front face image detection method
CN106372606A (en) * 2016-08-31 2017-02-01 北京旷视科技有限公司 Target object information generation method and unit identification method and unit and system
CN107832721A (en) * 2017-11-16 2018-03-23 百度在线网络技术(北京)有限公司 Method and apparatus for output information
CN108416324A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 Method and apparatus for detecting live body

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490057A (en) * 2019-07-08 2019-11-22 特斯联(北京)科技有限公司 A kind of self-adaptive identification method and system based on face big data artificial intelligence cluster
CN110490057B (en) * 2019-07-08 2020-10-27 光控特斯联(上海)信息科技有限公司 Self-adaptive identification method and system based on human face big data artificial intelligence clustering
CN110321857A (en) * 2019-07-08 2019-10-11 苏州万店掌网络科技有限公司 Accurate objective group analysis method based on edge calculations technology
CN111177469A (en) * 2019-12-20 2020-05-19 国久大数据有限公司 Face retrieval method and face retrieval device
CN111325133B (en) * 2020-02-17 2023-09-29 深圳龙安电力科技有限公司 Image processing system based on artificial intelligent recognition
CN111325132A (en) * 2020-02-17 2020-06-23 深圳龙安电力科技有限公司 Intelligent monitoring system
CN111325133A (en) * 2020-02-17 2020-06-23 深圳龙安电力科技有限公司 Image processing system based on artificial intelligence recognition
CN111881906A (en) * 2020-06-18 2020-11-03 广州万维创新科技有限公司 LOGO identification method based on attention mechanism image retrieval
CN111914811A (en) * 2020-08-20 2020-11-10 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium
CN114359030A (en) * 2020-09-29 2022-04-15 合肥君正科技有限公司 Method for synthesizing human face backlight picture
CN114359030B (en) * 2020-09-29 2024-05-03 合肥君正科技有限公司 Synthesis method of face backlight picture
WO2022205259A1 (en) * 2021-04-01 2022-10-06 京东方科技集团股份有限公司 Face attribute detection method and apparatus, storage medium, and electronic device
CN113468954A (en) * 2021-05-20 2021-10-01 西安电子科技大学 Face counterfeiting detection method based on local area features under multiple channels
CN113468954B (en) * 2021-05-20 2023-04-18 西安电子科技大学 Face counterfeiting detection method based on local area features under multiple channels
CN116071804A (en) * 2023-01-18 2023-05-05 北京六律科技有限责任公司 Face recognition method and device and electronic equipment
CN116561372A (en) * 2023-07-03 2023-08-08 北京瑞莱智慧科技有限公司 Personnel gear gathering method and device based on multiple algorithm engines and readable storage medium
CN116561372B (en) * 2023-07-03 2023-09-29 北京瑞莱智慧科技有限公司 Personnel gear gathering method and device based on multiple algorithm engines and readable storage medium

Also Published As

Publication number Publication date
CN109522853B (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN109522853B (en) Face detection and search method for surveillance video
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
Wan et al. Bootstrapping face detection with hard negative examples
CN109558810B (en) Target person identification method based on part segmentation and fusion
CN104881637B (en) Multimodal information system and its fusion method based on heat transfer agent and target tracking
CN106919921B (en) Gait recognition method and system combining subspace learning and tensor neural network
Mady et al. Face recognition and detection using Random forest and combination of LBP and HOG features
Wang et al. Improving human action recognition by non-action classification
Rouhi et al. A review on feature extraction techniques in face recognition
CN105956552A (en) Face black list monitoring method
CN108830222A (en) A kind of micro- expression recognition method based on informedness and representative Active Learning
Udawant et al. Cotton leaf disease detection using instance segmentation
CN110188718A (en) It is a kind of based on key frame and joint sparse indicate without constraint face identification method
Duffner et al. A neural scheme for robust detection of transparent logos in TV programs
CN106407878B (en) Method for detecting human face and device based on multi-categorizer
CN102156879B (en) Human target matching method based on weighted terrestrial motion distance
CN103366163A (en) Human face detection system and method based on incremental learning
CN110135362A (en) A kind of fast face recognition method based under infrared camera
CN109784261A (en) Pedestrian's segmentation and recognition methods based on machine vision
CN103577805A (en) Gender identification method based on continuous gait images
Zhang et al. Transferring training instances for convenient cross-view object classification in surveillance
Paul et al. Automatic adaptive facial feature extraction using CDF analysis
Wang et al. Thermal infrared object tracking based on adaptive feature fusion
Liu et al. Detection of Late Blight in Potato Leaves Based on Multi-Feature and SVM Classifier

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 2707, 27F, building h, Tianjian one square mile, Xinhe street, Kaifu District, Changsha City, Hunan Province

Applicant after: HUNAN ZHONGZHI JUNYING TECHNOLOGY Co.,Ltd.

Address before: 410000 Room 0709, 7th Floor, Building 3, Huachuang International Plaza, 109 Furong Middle Road, Wujialing Street, Changsha City, Hunan Province

Applicant before: HUNAN ZHONGZHI JUNYING TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Shuhua

Inventor before: Xie Jianbin

Inventor before: Li Peiqin

Inventor before: Yan Wei

Inventor before: Zhang Shuhua

CB03 Change of inventor or designer information
TR01 Transfer of patent right

Effective date of registration: 20231212

Address after: 410006 Room 101, Building 3, Country Garden Smart Park, Xuehua Village, Bachelor Street, Yuelu District, Changsha City, Hunan Province

Patentee after: HUNAN ZHONGKE YOUXIN TECHNOLOGY CO.,LTD.

Patentee after: National University of Defense Technology

Address before: Room 2707, 27th floor, building h, Tianjian square mile, Xinhe street, Kaifu District, Changsha, Hunan 410000

Patentee before: HUNAN ZHONGZHI JUNYING TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240204

Address after: 410073 Hunan province Changsha Kaifu District, Deya Road No. 109

Patentee after: National University of Defense Technology

Country or region after: China

Address before: 410006 Room 101, Building 3, Country Garden Smart Park, Xuehua Village, Bachelor Street, Yuelu District, Changsha City, Hunan Province

Patentee before: HUNAN ZHONGKE YOUXIN TECHNOLOGY CO.,LTD.

Country or region before: China

Patentee before: National University of Defense Technology

TR01 Transfer of patent right