CN101980242A - Human face discrimination method and system and public safety system - Google Patents


Info

Publication number
CN101980242A
CN101980242A (application CN201010501198.3A)
Authority
CN
China
Prior art keywords
camouflage
facial image
face
image
people
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201010501198.3A
Other languages
Chinese (zh)
Other versions
CN101980242B (en)
Inventor
徐勇
杨治银
李岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Zhengbang Information Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201010501198.3A
Publication of CN101980242A
Application granted
Publication of CN101980242B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention relates to a human face discrimination method and a human face discrimination system. The system comprises a disguise classifier, a disguise model and a human face identification classifier. The human face discrimination method comprises the following steps: acquiring a human face image, namely acquiring the human face image from a video or from images; performing human face disguise discrimination, namely performing disguise discrimination on the human face image by using the disguise classifier and the disguise model; and performing human face discrimination, namely performing human face discrimination on the undisguised human face images in the disguise discrimination result by using the human face identification classifier. In the human face discrimination method, the human face discrimination system and the public safety system of the invention, human face disguise discrimination is performed before human face discrimination, so that disguised human face images are responded to in a timely manner.

Description

Face discrimination method, system and public safety system
Technical field
The present invention relates to a face discrimination method, system and public safety system, and in particular to a face discrimination method, system and public safety system in which disguise discrimination is performed before face discrimination.
Background art
With the progress of society, more and more social activities are developing toward convenience and simplicity. The bank self-service cash machine is one manifestation of this development, sparing people the inconvenience of carrying cash. However, safety problems in the self-service banking environment are becoming increasingly prominent. For example, bank card fraud and the theft of property are currently the main sources of bank losses, and bank cards are stolen more and more frequently. Offenders mostly take a stolen bank card to the self-service room to withdraw money after covering their faces (the covering takes many forms; for example, a suspect committing a crime at a self-service bank covers the face with a cap, scarf and sunglasses). In addition, offenders of other categories (such as wanted criminals and thieves) also withdraw money through self-service banks, which provides a mechanism for discovering a suspect's whereabouts: offenders can be identified and located by face recognition. Through the monitoring camera in the self-service room of the bank, a suspect, or a person who covers his or her face while withdrawing money, can be discovered in time and an alarm can be raised to alert the relevant personnel. Furthermore, for a suspect of a crime, the suspect face database can be searched and compared according to the description provided by an eyewitness, such as whether the suspect wears glasses or the suspect's gender, or according to an image or composite image provided by the eyewitness, so as to list the more likely suspects for the eyewitness and to facilitate the confirmation and arrest of the suspect. Face recognition methods and systems of the prior art usually cannot discriminate disguise and cannot react effectively to disguised face images.
Summary of the invention
The technical problem solved by the present invention is to construct a face discrimination method, system and public safety system that overcome the problem of the prior art that disguise cannot be discriminated and that disguised face images cannot be reacted to effectively.
The technical solution of the present invention is to provide a face discrimination method which uses a disguise classifier, a disguise model and a face recognition classifier, the face discrimination method comprising the following steps:
Obtaining a face image: obtaining a face image from a video or from an image;
Performing face disguise discrimination: performing disguise discrimination on the face image according to the disguise classifier and the disguise model;
Performing face discrimination: performing face discrimination, according to the face recognition classifier, on the face images in the disguise discrimination result that are judged not to be disguised.
A further technical solution of the present invention is: in the step of performing face disguise discrimination, a preprocessing operation on the face image is further included before the face image comparison is carried out, the preprocessing operation being a normalization of the face image.
A further technical solution of the present invention is: in the step of performing face disguise discrimination, the face disguise discrimination comprises: discrimination of cap-wearing disguise, discrimination of sunglasses-wearing disguise, and discrimination of mask-wearing or scarf-wearing disguise.
The technical solution of the present invention is also to construct a face discrimination system comprising an image input unit for inputting a face image, a disguise discrimination unit for performing face disguise discrimination according to the face image, and a face recognition unit for performing face recognition. The disguise discrimination unit comprises a disguise classifier, a disguise model and a disguise discrimination module; the face recognition unit comprises a face image database storing face images and a face recognition classifier. The disguise discrimination module performs disguise discrimination, according to the disguise classifier and the disguise model, on the face image input by the image input unit; when the disguise discrimination module judges that the face image is not disguised, the face recognition unit performs face recognition, according to the face recognition classifier, between the face image input by the image input unit and the face images in the face image database.
A further technical solution of the present invention is: the disguise classifier comprises a cap classifier and a sunglasses classifier; the disguise discrimination module judges, according to the cap classifier, whether the face image is a cap-wearing disguise, and judges, according to the sunglasses classifier, whether the face image is a sunglasses-wearing disguise.
A further technical solution of the present invention is: the disguise model comprises a skin color model, and the disguise discrimination module judges, according to the skin color model, whether the face image is a mask-wearing or scarf-wearing disguise.
A further technical solution of the present invention is: the face discrimination system further comprises a face image retrieval unit which retrieves the face images in the face image database according to input conditions.
The technical solution of the present invention is also to construct a public safety system comprising the face discrimination system, wherein the face image database is a suspect image database storing suspect face images; when the disguise discrimination module judges that the face image is not disguised, the face recognition unit performs face recognition between the face image and the face images in the suspect image database.
A further technical solution of the present invention is: the public safety system further comprises an alarm unit; when the disguise discrimination module judges that the face image is disguised, the alarm unit raises an alarm; when the face recognition unit identifies the face image as a face image in the suspect image database, the alarm unit raises an alarm.
A further technical solution of the present invention is: the public safety system further comprises a suspect image retrieval unit which retrieves the face images in the suspect image database according to input conditions.
The technical effect of the present invention is: in the face discrimination method, system and public safety system of the present invention, face disguise discrimination is performed before face recognition, so that disguised face images are responded to in a timely manner.
Description of drawings
Fig. 1 is a flowchart of the present invention.
Fig. 2 is a structural diagram of the face discrimination system of the present invention.
Fig. 3 is a structural diagram of the public safety system of the present invention.
Embodiment
The technical solution of the present invention is further described below in conjunction with specific embodiments.
As shown in Fig. 1, a specific embodiment of the present invention is as follows. The present invention provides a face discrimination method which uses a disguise classifier, a disguise model and a face recognition classifier, the face discrimination method comprising the following steps:
Step 100: obtaining a face image, i.e. obtaining a face image from a video or from an image. In the present invention a face image is obtained first; it may be cropped from a video, or it may be taken from the face-containing part of an image. An image in the present invention may be a directly received electronic image file, an electronic image file produced by scanning, or the face-containing part of a cropped image file. In the specific embodiment of the present invention, the face is framed by a rectangle according to the size of the face; that is, the face image is obtained as a rectangular image in which the rectangle frames the face as completely as possible.
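As a minimal sketch of this acquisition step, a frame may be grabbed from the video and the face cropped with a rectangle as follows (the cascade file shipped with OpenCV and the video file name are illustrative assumptions, not part of the invention):

```python
import cv2

# Illustrative sources: OpenCV's stock frontal-face cascade and a hypothetical camera recording.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("atm_camera.avi")

ok, frame = cap.read()                                   # grab one frame from the video
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # each detection is a rectangle (x, y, w, h) framing a face
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_img = frame[y:y + h, x:x + w]               # rectangular face image passed to Step 200
```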
Step 200: performing face disguise discrimination, i.e. performing disguise discrimination on the face image according to the disguise classifier and the disguise model. In the present invention, the face disguise discrimination comprises: discrimination of cap-wearing disguise, discrimination of sunglasses-wearing disguise, and discrimination of mask-wearing or scarf-wearing disguise. Before disguise discrimination is performed, the disguise classifier and the disguise model need to be generated. Specifically, the disguise classifier in the present invention comprises a cap classifier and a sunglasses classifier: whether the face image is a cap-wearing disguise is judged according to the cap classifier, and whether the face image is a sunglasses-wearing disguise is judged according to the sunglasses classifier. The disguise model comprises a skin color model, and whether the face image is a mask-wearing or scarf-wearing disguise is judged according to the skin color model. The generation process of the disguise classifiers is described in detail below:
(Explanatory diagrams of the cap classifier and the sunglasses classifier are attached at the end.)
Generation process of the cap classifier: first, two groups of images are obtained, one group of images of caps of different shapes worn in different ways and one group of images that are not caps, each group generally containing more than 100 images. These images are normalized, for example unified to a size of 40x30 pixels. The cap images are then taken as positive samples and the non-cap images as negative samples; these positive and negative samples form the training set of the cap cascade classifier. The training of the classifier uses the AdaBoost algorithm based on Haar features in OpenCV (OpenCV is the open-source computer vision library supported by Intel; it consists of a series of C functions and a small number of C++ classes and implements many general-purpose algorithms of image processing and computer vision) and is carried out with the haartraining program in OpenCV. With the training set of the cap cascade classifier as input, training yields the cap classifier; the final output is a cap cascade classifier based on Haar features.
After the cap cascade classifier is obtained, a cap Hu-invariant training set is obtained (the Hu invariants are a set of moment features; because the moments of inertia about the major and minor axes and a number of very useful moment invariants can be obtained directly from the moments, moment invariants are statistical properties of an image that remain unchanged under translation, scaling and rotation, and they have been widely used in the field of image recognition; Hu first proposed moment invariants for region shape recognition). The cap cascade classifier formed in the classifier training is used to detect caps and to crop the cap region images. The resulting image collection contains two classes of images: one class consists of caps and the other class of non-caps (here called "false caps"). They are then divided into two groups, the cap group being the positive samples and the "false cap" group the negative samples, thus forming the cap image set used to obtain the cap Hu invariants. (The image set used to obtain the cap Hu-invariant training set and the image library used above to train the cap cascade classifier are therefore two different image sets; both image sets are prepared before the system runs.) The 7 Hu invariants of all the images are calculated to form the cap Hu-invariant training set, which contains the two classes of data, cap and non-cap.
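A sketch of how such a Hu-invariant training set could be assembled is given below; the directory layout and the use of OpenCV's cv2.moments/cv2.HuMoments are illustrative assumptions, the patent only requiring that the 7 Hu invariants be computed for every cap and "false cap" region image:

```python
import glob
import cv2
import numpy as np

def hu_vector(path):
    """Compute the 7 Hu invariants of one (grayscale) region image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.HuMoments(cv2.moments(img)).flatten()     # 7-dimensional feature vector

# Hypothetical folders holding regions cropped by the cap cascade classifier,
# separated into real caps (positives) and "false caps" (negatives).
X, y = [], []
for path in glob.glob("cap_regions/cap/*.png"):
    X.append(hu_vector(path)); y.append(1)
for path in glob.glob("cap_regions/false_cap/*.png"):
    X.append(hu_vector(path)); y.append(0)

X = np.array(X)          # cap Hu-invariant training set
y = np.array(y)          # 1 = cap, 0 = non-cap ("false cap")
```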
The cap classifier is thus obtained by the above method.
After the cap classifier is obtained, it is used to discriminate cap-wearing disguise in an image. That is, the cap cascade classifier is first used for a preliminary detection; for a detected "cap image", the 7-invariant training set in the cap classifier and the K-nearest-neighbor method (K is a positive integer, taken as 9 in the present invention) are then used to judge whether the currently detected image is a real cap: if the feature vector of the currently detected cap is closer to the cap data in the cap Hu-invariant training set, it is judged to be a cap, otherwise it is not a cap. In the specific implementation, the points formed by the 5th and 7th of the 7 Hu moment invariants of real cap images and of misjudged images show good separability; these two features are therefore used as the input of the K-nearest-neighbor method, which reduces misjudgment and also reduces the computational complexity.
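A sketch of this two-stage check, with scikit-learn's k-nearest-neighbor classifier standing in for the K-nearest-neighbor method of the text (K = 9, features restricted to the 5th and 7th Hu invariants); X and y are assumed to be the cap Hu-invariant training set built above:

```python
import cv2
from sklearn.neighbors import KNeighborsClassifier

# Second stage: 9-nearest-neighbor classifier on the 5th and 7th Hu invariants (indices 4 and 6).
knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X[:, [4, 6]], y)

def is_real_cap(region_gray):
    """region_gray: a grayscale region returned by the cap cascade classifier (first stage)."""
    hu = cv2.HuMoments(cv2.moments(region_gray)).flatten()
    return int(knn.predict([[hu[4], hu[6]]])[0]) == 1    # 1 = real cap, 0 = "false cap"
```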
The acquisition process of the sunglasses classifier is the same as the above process:
Generation process of the sunglasses classifier: first, two groups of images are obtained, one group of images of sunglasses of different shapes worn in different ways and one group of images that are not sunglasses, each group generally containing more than 100 images. These images are normalized, for example unified to a size of 40x30 pixels. The sunglasses images are then taken as positive samples and the non-sunglasses images as negative samples; these positive and negative samples form the training set of the sunglasses cascade classifier. The training of the classifier uses the AdaBoost algorithm based on Haar features in OpenCV and is carried out with the haartraining program in OpenCV. With the training set of the sunglasses cascade classifier as input, training yields the sunglasses classifier; the final output is a sunglasses cascade classifier based on Haar features.
After the sunglasses cascade classifier is obtained, a sunglasses Hu-invariant training set is obtained. The sunglasses cascade classifier formed in the classifier training is used to detect sunglasses and to crop the sunglasses region images. The resulting image collection contains two classes of images: one class consists of sunglasses and the other class of non-sunglasses (here called "false sunglasses"). They are then divided into two groups, the sunglasses group being the positive samples and the "false sunglasses" group the negative samples, thus forming the sunglasses image set used to obtain the sunglasses Hu invariants. (The image set used to obtain the sunglasses Hu-invariant training set and the image library used above to train the sunglasses cascade classifier are two different image sets.) The 7 Hu invariants of all the images are calculated to form the sunglasses Hu-invariant training set, which contains the two classes of data, sunglasses and non-sunglasses.
The sunglasses classifier is thus obtained by the above method.
After the sunglasses classifier is obtained, it is used to discriminate sunglasses-wearing disguise in an image. That is, the sunglasses cascade classifier is first used for a preliminary detection; for a detected "sunglasses image", the 7 Hu invariants of the "sunglasses image" are calculated and input into a neural network, and whether the currently detected image is a real pair of sunglasses is judged according to the neural network output (0 or 1, where 0 indicates not sunglasses and 1 indicates sunglasses). The neural network training process is as follows: a three-layer BP neural network is used, with 7 input nodes (corresponding to the 7 Hu invariants), 10 hidden-layer nodes, and 1 output node (corresponding to whether the image is a pair of sunglasses); the data set for training the neural network is the sunglasses Hu-invariant training set cropped from video. After training, the neural network model is produced.
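A sketch of the sunglasses verification network; scikit-learn's MLPClassifier is used here as a stand-in for the three-layer BP network described (7 input nodes, 10 hidden nodes, 1 output node), and Xs, ys are assumed to be the sunglasses Hu-invariant training set and its labels:

```python
import cv2
from sklearn.neural_network import MLPClassifier

# 7 inputs (Hu invariants) -> 10 hidden nodes -> 1 output (sunglasses or not).
net = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1, max_iter=5000)
net.fit(Xs, ys)                                          # ys: 1 = sunglasses, 0 = not sunglasses

def is_real_sunglasses(region_gray):
    """Second-stage check on a region returned by the sunglasses cascade classifier."""
    hu = cv2.HuMoments(cv2.moments(region_gray)).flatten()
    return int(net.predict([hu])[0])                     # 0 = not sunglasses, 1 = sunglasses
```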
Two of the steps involved in the disguise classifiers are briefly described below:
One, the AdaBoost algorithm based on Haar features. The AdaBoost algorithm based on Haar features detects specified targets such as faces, sunglasses, eyes or cars. The process is as follows. Haar features fall into three classes: edge features, linear features, and center features combined with diagonal features, which form the feature templates. A feature template contains white and black rectangles, and the feature value of the template is defined as the sum of the pixel values in the white rectangles minus the sum of the pixel values in the black rectangles. Once the feature form has been determined, the number of Haar-like features depends on the size of the training sample image matrix: a feature template is placed arbitrarily within a sub-window, each placement constitutes one feature, and finding the features of all sub-windows is the basis for training the weak classifiers. AdaBoost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then to combine these weak classifiers into a stronger final classifier (strong classifier). The Haar features of the samples are used for the classifier training, yielding a cascaded strong classifier. The training samples are divided into positive samples and negative samples, where the positive samples are samples of the target to be detected (for example faces or cars) and the negative samples are arbitrary other images; all sample images are normalized to the same size (for example 20x20). After the classifier has been trained, it can be applied to the detection of regions of interest (of the same size as the training samples) in an input image: the classifier outputs 1 when a target region (a car or a face) is detected and outputs 0 otherwise. To detect a whole image, the search window is moved across the image so that every position is checked for possible targets. In order to search for target objects of different sizes, the classifier is designed so that its size can be changed, which is more efficient than resizing the image to be detected; therefore, to detect target objects of unknown size in an image, the scanning procedure usually scans the image several times with search windows of different scales.
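To make the Haar feature value concrete (white-rectangle pixel sum minus black-rectangle pixel sum), the following sketch evaluates a simple two-rectangle edge feature with an integral image; the particular layout (upper half white, lower half black) is only an illustrative choice:

```python
import cv2

def rect_sum(ii, x, y, w, h):
    """Sum of the pixels in the rectangle (x, y, w, h) from the integral image ii."""
    return float(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def haar_edge_feature(gray, x, y, w, h):
    """Two-rectangle edge feature: pixel sum of the white (upper) half minus the black (lower) half."""
    ii = cv2.integral(gray)                  # integral image of shape (H + 1, W + 1)
    white = rect_sum(ii, x, y, w, h // 2)
    black = rect_sum(ii, x, y + h // 2, w, h // 2)
    return white - black
```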
Two, the calculation of the Hu invariants.
The 7 Hu invariants are computed as follows. For a digital image f(x, y), its (p+q)-th order geometric moment m_pq is expressed as:
m_pq = Σ_x Σ_y x^p y^q f(x, y)
and its (p+q)-th order central moment is:
μ_pq = Σ_x Σ_y (x - x_0)^p (y - y_0)^q f(x, y)
where (x_0, y_0) = (m_10/m_00, m_01/m_00) is the image centroid. The central moment μ_pq is translation invariant; it further needs to be normalized to obtain the scale-normalized moment:
η_pq = μ_pq / μ_00^r,  r = (p + q + 2) / 2,  p + q ≥ 2
The 7 invariants produced by Hu are:
φ1 = η20 + η02
φ2 = (η20 - η02)^2 + 4η11^2
φ3 = (η30 - 3η12)^2 + (3η21 - η03)^2
φ4 = (η30 + η12)^2 + (η21 + η03)^2
φ5 = (η30 - 3η12)(η30 + η12)[(η30 + η12)^2 - 3(η21 + η03)^2] + (3η21 - η03)(η21 + η03)[3(η30 + η12)^2 - (η21 + η03)^2]
φ6 = (η20 - η02)[(η30 + η12)^2 - (η21 + η03)^2] + 4η11(η30 + η12)(η21 + η03)
φ7 = (3η21 - η03)(η30 + η12)[(η30 + η12)^2 - 3(η21 + η03)^2] - (η30 - 3η12)(η21 + η03)[3(η30 + η12)^2 - (η21 + η03)^2]
Here m_pq is the (p+q)-th order moment of the image, with p and q arbitrary nonnegative integers; μ_pq is the (p+q)-th order central moment of the image; η_pq is the normalized (p+q)-th order central moment; and φ_i (i = 1, 2, ..., 7) are the seven Hu invariants, which are scalars.
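The formulas above can be checked directly against OpenCV, whose cv2.moments already returns the normalized central moments η_pq (as nu20, nu11, ...); the following sketch computes φ1–φ7 from them and should agree with cv2.HuMoments:

```python
import cv2
import numpy as np

def hu_invariants(gray):
    m = cv2.moments(gray)
    n20, n02, n11 = m["nu20"], m["nu02"], m["nu11"]
    n30, n12, n21, n03 = m["nu30"], m["nu12"], m["nu21"], m["nu03"]
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    phi6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03))
    phi7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```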
In the present invention, the disguise model comprises a skin color model, and whether the face image is a mask-wearing or scarf-wearing disguise is judged according to the skin color model. The detailed process is as follows. The function of the skin color model is, for a given image, to find and mark the face region that is not occluded, so that the face image can be recognized in the later stage; the face region is marked by calculating a skin similarity. The process comprises the following three steps:
One, image brightness adjustment
For a given image, differences in camera quality and in illumination can produce large differences in appearance. To reduce the influence of external conditions on the facial skin color, we first perform a correcting preprocessing on the image. The correction formula of the processing method used here is:
Sc = Sc * scalar
scalar = Savg / Scavg
where Sc is an RGB value of the original image, Savg is the RGB value of the standard image, and Scavg is the mean value of the corresponding RGB component of the current image (the correction is applied per channel). The RGB values of the standard image are obtained by taking 20 images under normal illumination conditions and then calculating the mean values of the R, G and B channels over all pixels of all the pictures; by this calculation we obtain Savg_B = 174.415, Savg_G = 180.664 and Savg_R = 180.448.
By comparing the images before and after correction, we find that before correction the video images suffer from overexposure; after the correcting algorithm is applied, the image contrast is obviously enhanced.
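A sketch of this per-channel correction under the stated standard-image means; the BGR channel order is an OpenCV convention and the clipping to [0, 255] is an implementation assumption:

```python
import numpy as np

SAVG_BGR = np.array([174.415, 180.664, 180.448])          # standard-image means (B, G, R) from the text

def correct_brightness(img_bgr):
    """Sc = Sc * scalar with scalar = Savg / Scavg, computed separately for each channel."""
    scavg = img_bgr.reshape(-1, 3).mean(axis=0)            # per-channel mean of the current image
    scalar = SAVG_BGR / scavg
    corrected = img_bgr.astype(np.float64) * scalar        # broadcast over the channel axis
    return np.clip(corrected, 0, 255).astype(np.uint8)
```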
Two, face region detection
For the detection of the face region we adopt a method of calculating the face region similarity.
First, we convert the image to the YCbCr color format. Compared with the RGB color space, YCbCr separates the brightness of a color image well.
The formulas for converting the RGB color space to the YCbCr space are as follows:
Cb = 128 - 37.797*R/255 - 74.203*G/255 + 112*B/255
Cr = 128 + 112*R/255 - 93.786*G/255 - 18.214*B/255
After removing the Y component (the luminance component), the three-dimensional space is reduced to a two-dimensional plane. On this two-dimensional plane the skin color region is rather concentrated, so we model this distribution with a Gaussian distribution.
We adopt the method for training to obtain such center of distribution, obtain the similarity of a colour of skin then according to the distance of pixel that will investigate from this center, then obtain the similar distribution plan of a former figure, again according to certain rule to this distribution plan binaryzation, finally determine the zone of the colour of skin.In the time of training, that need determine is average M and variance C.Provide by following formula:
M=E(x),C=E((x-M)(x-M) T)x=[r,b] T
Wherein, x is the Cr of the color of all pixels in the image, the vector that two values of Cb are formed.Adopt formula when calculating similarity:
P(r,b)=exp[-0.5(x-m) TC -1(x-m)]
P (r, b) be also referred to as Cr in the YCbCr space, the pixel that two values of Cb are r, b is the probability of the colour of skin, after the calculating similarity, if P (r, b) greater than given threshold value, then this is the colour of skin, and the corresponding pixel points gray-scale value is made as 1, otherwise is 0, in view of the above image is carried out binaryzation, threshold value is determined according to experimental result repeatedly.Be 0.62 in the present invention.
Three, extraction of the face region
After the image has been correctly binarized, the whole face should in theory lie in a single connected region; although other, smaller connected regions may also exist in the image, the area of the whole face region should be the largest. Based on this, we first calculate the ratio of the area of each connected region to the area of the entire image and look for a connected component whose area ratio lies within a certain limiting interval; such a connected region can be regarded as the face region to be found. The limiting interval is determined from repeated test results; in the present invention the area-ratio limiting interval is [0.25, 0.68], i.e. a connected region is considered to be a face when its area ratio is greater than 0.25 and less than 0.68.
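A sketch of the face-region extraction with the stated area-ratio interval [0.25, 0.68], using OpenCV's connected-component labelling:

```python
import cv2
import numpy as np

def extract_face_region(mask):
    """Return a binary map of the connected region whose area ratio lies in (0.25, 0.68), or None."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    total = mask.shape[0] * mask.shape[1]
    for label in range(1, num):                           # label 0 is the background
        ratio = stats[label, cv2.CC_STAT_AREA] / total
        if 0.25 < ratio < 0.68:
            return (labels == label).astype(np.uint8)
    return None
```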
Step 300: face discrimination, i.e. performing face discrimination, according to the face recognition classifier, on the face images in the disguise discrimination result that are judged not to be disguised.
The detailed process is as follows. A frame of image is grabbed from the video. In the present invention, the trained face classifier provided in OpenCV is used to detect the face. In the specific implementation, the face image is normalized to a grayscale image of the size of the training samples (70 x 100 pixels). Let y ∈ R^n denote this face image, which is the test sample, and let A ∈ R^(n x m) be the matrix formed by all the training samples, that is, all the image samples in the loaded suspect face database, each column corresponding to one training sample. Suppose the test sample y can be expressed as the following linear combination of all the training samples:
y = Σ_{k=1}^{m} α_k a_k        (**)
where m denotes the number of training samples, a_k denotes the k-th training sample, and α_k is the coefficient corresponding to the k-th training sample in the linear combination; α = (α_1, α_2, ..., α_m)^T is the coefficient vector and A = (a_1, a_2, ..., a_m).
The coefficient vector is obtained by the following formula:
α̂ = (A^T A)^(-1) A^T y
The contribution of each class of samples to the description of the test sample is then calculated. From formula (**), every training sample contributes to the description of the test sample, the contribution of the k-th training sample being α_k a_k (k = 1, 2, ..., m). Since the class of each training sample is known, the contribution of each class of samples to the description of the test sample can be obtained by summing the contributions of all the training samples of that class. For example, suppose a_s, ..., a_t are the training samples belonging to the d-th class; then the contribution of the d-th class to the description of the test sample is g_d = α_s a_s + ... + α_t a_t.
The error of every class is calculated as e_d = ||y - g_d||^2 (d = 1, 2, ..., L), where L is the number of classes in the database.
The class with the smallest error is found and the identification is made. A smaller error indicates that the test sample (the face image to be identified) is closer to that class; when the error is smaller than a certain threshold, the portrait to be identified is considered to be the same as the portrait in the database. Otherwise identification is refused and the next processing cycle is entered. In the present invention the threshold is taken as 0.02 (a decimal between 0 and 1).
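A sketch of Step 300 in the notation of formula (**); least squares is used for the coefficient vector (equivalent to α̂ = (AᵀA)⁻¹Aᵀy when AᵀA is invertible), and the class labels, the flattening of the 70x100 image into y, and its normalization are assumptions of the sketch:

```python
import numpy as np

def identify(y, A, labels, threshold=0.02):
    """
    y:       n-vector, the normalized 70x100 test face image flattened to a column
    A:       n x m matrix, one column per training sample from the suspect face database
    labels:  length-m array, class index (1..L) of each training sample
    """
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)        # coefficient vector, min ||y - A*alpha||
    errors = {}
    for d in np.unique(labels):
        g_d = A[:, labels == d] @ alpha[labels == d]     # contribution of class d to describing y
        errors[d] = float(np.sum((y - g_d) ** 2))        # e_d = ||y - g_d||^2
    best = min(errors, key=errors.get)
    return best if errors[best] < threshold else None    # None = identification refused
```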
A preferred embodiment of the present invention is: in the step of performing face disguise discrimination, a preprocessing operation on the face image is further included before the face image comparison is carried out, the preprocessing operation being a normalization of the face image.
As shown in Fig. 2, the specific implementation of the present invention is as follows. The technical solution of the present invention is to construct a face discrimination system comprising an image input unit 1 for inputting a face image, a disguise discrimination unit 2 for performing face disguise discrimination according to the face image, and a face recognition unit 3 for performing face recognition. The disguise discrimination unit comprises a disguise classifier 22, a disguise model 23 and a disguise discrimination module 21; the face recognition unit 3 comprises a face image database 32 storing face images and a face recognition classifier 31. The disguise discrimination module 21 performs disguise discrimination, according to the disguise classifier 22 and the disguise model 23, on the face image input by the image input unit 1; when the disguise discrimination module 21 judges that the face image is not disguised, the face recognition unit 3 performs face recognition, according to the face recognition classifier 31, between the face image input by the image input unit 1 and the face images in the face image database 32.
The specific working process of the present invention is as follows. First, a face image is obtained through the image input unit 1; it may be cropped from a video, or it may be taken from the face-containing part of an image. An image in the present invention may be a directly received electronic image file, an electronic image file produced by scanning, or the face-containing part of a cropped image file. In the specific embodiment of the present invention, the face is framed by a rectangle according to the size of the face; that is, the face image is obtained as a rectangular image in which the rectangle frames the face as completely as possible.
Next, disguise discrimination is performed on the face image according to the disguise classifier and the disguise model. In the present invention, the face disguise discrimination comprises: discrimination of cap-wearing disguise, discrimination of sunglasses-wearing disguise, and discrimination of mask-wearing or scarf-wearing disguise. Before disguise discrimination is performed, the disguise classifier 22 and the disguise model 23 need to be generated. Specifically, the disguise classifier 22 in the present invention comprises a cap classifier 221 and a sunglasses classifier 222; the disguise discrimination module 21 judges, according to the cap classifier, whether the face image is a cap-wearing disguise, and judges, according to the sunglasses classifier 222, whether the face image is a sunglasses-wearing disguise. The disguise model 23 comprises a skin color model 231, and the disguise discrimination module 21 judges, according to the skin color model 231, whether the face image is a mask-wearing or scarf-wearing disguise.
Finally, for a face image that the disguise discrimination module 21 judges not to be disguised, the face recognition unit 3 performs face recognition, according to the face recognition classifier 31, between the face image input by the image input unit 1 and the face images in the face image database 32.
The face discrimination system of the present invention reacts in a timely manner to the disguising of face images and has a better recognition effect.
As shown in Fig. 2, a preferred embodiment of the present invention is: the face discrimination system further comprises a face image retrieval unit 4 which retrieves the face images in the face image database 32 according to input conditions. The face image retrieval unit 4 in the present invention uses the face recognition unit 3 to compare and retrieve the images in the face image database 32 according to a face image input by the user. The face image retrieval unit 4 of the present invention can also retrieve face images according to input conditions describing the person, such as gender and age range.
As shown in Fig. 3, the technical solution of the present invention is also to construct a public safety system comprising the face discrimination system, wherein the face image database is a suspect image database 33 storing suspect face images; when the disguise discrimination module 21 judges that the face image is not disguised, the face recognition unit 3 performs face recognition between the face image and the face images in the suspect image database 33.
Its specific working process is the same as the working process of the face discrimination system described above, except that the common face image database is here defined as the suspect image database 33.
As shown in Fig. 3, a preferred embodiment of the present invention is: the public safety system further comprises an alarm unit 5; when the disguise discrimination module 21 judges that the face image is disguised, the alarm unit 5 raises an alarm; when the face recognition unit 3 identifies the face image as a face image in the suspect image database 33, the alarm unit 5 raises an alarm.
A preferred embodiment of the present invention is: the public safety system further comprises a suspect image retrieval unit which retrieves the face images in the suspect image database 33 according to input conditions. The suspect image retrieval unit in the present invention uses the face recognition unit 3 to compare and retrieve the images in the suspect image database 33 according to a face image input by the user. The suspect image retrieval unit of the present invention can further retrieve face images according to input conditions describing the person, such as gender and age range. The working process of the suspect image retrieval unit described here is the same as that of the face image retrieval unit 4, except that the database it retrieves is the suspect image database 33.
The technical effect of the present invention is: in the face discrimination method, system and public safety system of the present invention, face disguise discrimination is performed before face recognition, so that disguised face images are responded to in a timely manner.
The above content is a further detailed description of the present invention made in conjunction with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, a number of simple deductions or substitutions can also be made without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A face discrimination method using a disguise classifier, a disguise model and a face recognition classifier, the face discrimination method comprising the steps of:
obtaining a face image: obtaining a face image from a video or from an image;
performing face disguise discrimination: performing disguise discrimination on the face image according to the disguise classifier and the disguise model;
performing face discrimination: performing face discrimination, according to the face recognition classifier, on the face images in the disguise discrimination result that are judged not to be disguised.
2. The face discrimination method according to claim 1, characterized in that, in the step of performing face disguise discrimination, a preprocessing operation on the face image is further included before the face image comparison is carried out, the preprocessing operation being a normalization of the face image.
3. The face discrimination method according to claim 1, characterized in that, in the step of performing face disguise discrimination, the face disguise discrimination comprises: discrimination of cap-wearing disguise, discrimination of sunglasses-wearing disguise, and discrimination of mask-wearing or scarf-wearing disguise.
4. A face discrimination system, characterized in that it comprises an image input unit for inputting a face image, a disguise discrimination unit for performing face disguise discrimination according to the face image, and a face recognition unit for performing face recognition; the disguise discrimination unit comprises a disguise classifier, a disguise model and a disguise discrimination module; the face recognition unit comprises a face image database storing face images and a face recognition classifier; the disguise discrimination module performs disguise discrimination, according to the disguise classifier and the disguise model, on the face image input by the image input unit; when the disguise discrimination module judges that the face image is not disguised, the face recognition unit performs face recognition, according to the face recognition classifier, between the face image input by the image input unit and the face images in the face image database.
5. The face discrimination system according to claim 4, characterized in that the disguise classifier comprises a cap classifier and a sunglasses classifier; the disguise discrimination module judges, according to the cap classifier, whether the face image is a cap-wearing disguise, and judges, according to the sunglasses classifier, whether the face image is a sunglasses-wearing disguise.
6. The face discrimination system according to claim 4, characterized in that the disguise model comprises a skin color model, and the disguise discrimination module judges, according to the skin color model, whether the face image is a mask-wearing or scarf-wearing disguise.
7. The face discrimination system according to claim 4, characterized in that the face discrimination system further comprises a face image retrieval unit which retrieves the face images in the face image database according to input conditions.
8. A public safety system applying the face discrimination system according to any one of claims 4 to 7, characterized in that the public safety system comprises the face discrimination system, wherein the face image database is a suspect image database storing suspect face images; when the disguise discrimination module judges that the face image is not disguised, the face recognition unit performs face recognition between the face image and the face images in the suspect image database.
9. The public safety system according to claim 8, characterized in that the public safety system further comprises an alarm unit; when the disguise discrimination module judges that the face image is disguised, the alarm unit raises an alarm; when the face recognition unit identifies the face image as a face image in the suspect image database, the alarm unit raises an alarm.
10. The public safety system according to claim 8, characterized in that the public safety system further comprises a suspect image retrieval unit which retrieves the face images in the suspect image database according to input conditions.
CN201010501198.3A 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system Expired - Fee Related CN101980242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010501198.3A CN101980242B (en) 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010501198.3A CN101980242B (en) 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system

Publications (2)

Publication Number Publication Date
CN101980242A true CN101980242A (en) 2011-02-23
CN101980242B CN101980242B (en) 2014-04-09

Family

ID=43600744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010501198.3A Expired - Fee Related CN101980242B (en) 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system

Country Status (1)

Country Link
CN (1) CN101980242B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779264A (en) * 2012-07-10 2012-11-14 北京恒信彩虹科技有限公司 Method and device for realizing barcode recognition
CN104604219A (en) * 2013-07-12 2015-05-06 欧姆龙株式会社 Image processing device, image processing method, and image processing program
CN106778452A (en) * 2015-11-24 2017-05-31 沈阳新松机器人自动化股份有限公司 Service robot is based on human testing and the tracking of binocular vision
CN107491746A (en) * 2017-08-02 2017-12-19 安徽慧视金瞳科技有限公司 A kind of face prescreening method based on the analysis of big gradient pixel
CN107633266A (en) * 2017-09-07 2018-01-26 西安交通大学 A kind of electric locomotive OCS and pantograph arc method for measuring
CN108197250A (en) * 2017-12-29 2018-06-22 深圳云天励飞技术有限公司 Picture retrieval method, electronic equipment and storage medium
CN108228792A (en) * 2017-12-29 2018-06-29 深圳云天励飞技术有限公司 Picture retrieval method, electronic equipment and storage medium
JP2018142137A (en) * 2017-02-27 2018-09-13 日本電気株式会社 Information processing device, information processing method and program
CN108597168A (en) * 2018-04-24 2018-09-28 广东美的制冷设备有限公司 Security alarm method, apparatus based on optical filter and household appliance
CN109101923A (en) * 2018-08-14 2018-12-28 罗普特(厦门)科技集团有限公司 A kind of personnel wear the detection method and device of mask situation
CN109190498A (en) * 2018-08-09 2019-01-11 安徽四创电子股份有限公司 A method of the case intelligence string based on recognition of face is simultaneously
CN109522960A (en) * 2018-11-21 2019-03-26 泰康保险集团股份有限公司 Image evaluation method, device, electronic equipment and computer-readable medium
CN110135279A (en) * 2019-04-23 2019-08-16 深圳神目信息技术有限公司 A kind of method for early warning based on recognition of face, device, equipment and computer-readable medium
CN111811657A (en) * 2020-07-07 2020-10-23 杭州海康威视数字技术股份有限公司 Method and device for correcting human face temperature measurement and storage medium
CN110341554B (en) * 2019-06-24 2021-05-25 福建中科星泰数据科技有限公司 Controllable environment adjusting system
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112597867B (en) * 2020-12-17 2024-04-26 佛山科学技术学院 Face recognition method and system for wearing mask, computer equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778500B (en) * 2016-11-11 2019-09-17 北京小米移动软件有限公司 A kind of method and apparatus obtaining personage face phase information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1215618A2 (en) * 2000-12-14 2002-06-19 Eastman Kodak Company Image processing method for detecting human figures in a digital image
CN1710925A (en) * 2004-06-18 2005-12-21 乐金电子(中国)研究开发中心有限公司 Device and method for identifying identity by pick up head set on handset
CN101369310A (en) * 2008-09-27 2009-02-18 北京航空航天大学 Robust human face expression recognition method
CN101440676A (en) * 2008-12-22 2009-05-27 北京中星微电子有限公司 Intelligent anti-theft door lock based on cam and warning processing method thereof
CN101751557A (en) * 2009-12-18 2010-06-23 上海星尘电子科技有限公司 Intelligent biological identification device and identification method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1215618A2 (en) * 2000-12-14 2002-06-19 Eastman Kodak Company Image processing method for detecting human figures in a digital image
CN1710925A (en) * 2004-06-18 2005-12-21 乐金电子(中国)研究开发中心有限公司 Device and method for identifying identity by pick up head set on handset
CN101369310A (en) * 2008-09-27 2009-02-18 北京航空航天大学 Robust human face expression recognition method
CN101440676A (en) * 2008-12-22 2009-05-27 北京中星微电子有限公司 Intelligent anti-theft door lock based on cam and warning processing method thereof
CN101751557A (en) * 2009-12-18 2010-06-23 上海星尘电子科技有限公司 Intelligent biological identification device and identification method thereof

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779264A (en) * 2012-07-10 2012-11-14 北京恒信彩虹科技有限公司 Method and device for realizing barcode recognition
CN104604219B (en) * 2013-07-12 2019-04-12 欧姆龙株式会社 Image processing apparatus and image processing method
CN104604219A (en) * 2013-07-12 2015-05-06 欧姆龙株式会社 Image processing device, image processing method, and image processing program
CN106778452A (en) * 2015-11-24 2017-05-31 沈阳新松机器人自动化股份有限公司 Service robot is based on human testing and the tracking of binocular vision
JP7120590B2 (en) 2017-02-27 2022-08-17 日本電気株式会社 Information processing device, information processing method, and program
JP2018142137A (en) * 2017-02-27 2018-09-13 日本電気株式会社 Information processing device, information processing method and program
CN107491746A (en) * 2017-08-02 2017-12-19 安徽慧视金瞳科技有限公司 A kind of face prescreening method based on the analysis of big gradient pixel
CN107491746B (en) * 2017-08-02 2020-07-17 安徽慧视金瞳科技有限公司 Face pre-screening method based on large gradient pixel analysis
CN107633266A (en) * 2017-09-07 2018-01-26 西安交通大学 A kind of electric locomotive OCS and pantograph arc method for measuring
CN107633266B (en) * 2017-09-07 2020-07-28 西安交通大学 Electric locomotive contact net pantograph electric arc detection method
CN108197250A (en) * 2017-12-29 2018-06-22 深圳云天励飞技术有限公司 Picture retrieval method, electronic equipment and storage medium
CN108228792A (en) * 2017-12-29 2018-06-29 深圳云天励飞技术有限公司 Picture retrieval method, electronic equipment and storage medium
CN108597168A (en) * 2018-04-24 2018-09-28 广东美的制冷设备有限公司 Security alarm method, apparatus based on optical filter and household appliance
CN109190498A (en) * 2018-08-09 2019-01-11 安徽四创电子股份有限公司 A method of the case intelligence string based on recognition of face is simultaneously
CN109101923A (en) * 2018-08-14 2018-12-28 罗普特(厦门)科技集团有限公司 A kind of personnel wear the detection method and device of mask situation
CN109522960A (en) * 2018-11-21 2019-03-26 泰康保险集团股份有限公司 Image evaluation method, device, electronic equipment and computer-readable medium
CN110135279A (en) * 2019-04-23 2019-08-16 深圳神目信息技术有限公司 A kind of method for early warning based on recognition of face, device, equipment and computer-readable medium
CN110341554B (en) * 2019-06-24 2021-05-25 福建中科星泰数据科技有限公司 Controllable environment adjusting system
CN111811657A (en) * 2020-07-07 2020-10-23 杭州海康威视数字技术股份有限公司 Method and device for correcting human face temperature measurement and storage medium
CN112597867B (en) * 2020-12-17 2024-04-26 佛山科学技术学院 Face recognition method and system for wearing mask, computer equipment and storage medium
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN101980242B (en) 2014-04-09

Similar Documents

Publication Publication Date Title
CN101980242B (en) Human face discrimination method and system and public safety system
US8351662B2 (en) System and method for face verification using video sequence
Ma et al. Robust precise eye location under probabilistic framework
US8345921B1 (en) Object detection with false positive filtering
CN101142584B (en) Method for facial features detection
CN102902959B (en) Face recognition method and system for storing identification photo based on second-generation identity card
US8750573B2 (en) Hand gesture detection
US20120027252A1 (en) Hand gesture detection
EP2091021A1 (en) Face authentication device
CN104504408A (en) Human face identification comparing method and system for realizing the method
CN106339657B (en) Crop straw burning monitoring method based on monitor video, device
CN109145742A (en) A kind of pedestrian recognition method and system
Felix et al. Entry and exit monitoring using license plate recognition
CN103530648A (en) Face recognition method based on multi-frame images
Dehshibi et al. Persian vehicle license plate recognition using multiclass Adaboost
CN103077378A (en) Non-contact human face identifying algorithm based on expanded eight-domain local texture features and attendance system
CN107315990A (en) A kind of pedestrian detection algorithm based on XCS LBP features and cascade AKSVM
CN106874825A (en) The training method of Face datection, detection method and device
Amaro et al. Evaluation of machine learning techniques for face detection and recognition
CN107784263A (en) Based on the method for improving the Plane Rotation Face datection for accelerating robust features
Guofeng et al. Traffic sign recognition based on SVM and convolutional neural network
Laroca et al. A first look at dataset bias in license plate recognition
CN106886771A (en) The main information extracting method of image and face identification method based on modularization PCA
Hassan et al. Facial image detection based on the Viola-Jones algorithm for gender recognition
Vardhini et al. Facial Recognition using OpenCV and Python on Raspberry Pi

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: HAINAN 01 INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: XU YONG

Effective date: 20150818

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150818

Address after: 570100, 5th floor, provincial Xinhua Bookstore headquarters, 27 Nansha Road, Haikou City, Hainan Province

Patentee after: Hainan 01 Mdt InfoTech Ltd.

Address before: 518000, Building C 1-6, Innovation Research Institute, Hi-tech Zone, Nanshan District, Shenzhen, Guangdong

Patentee before: Xu Yong

TR01 Transfer of patent right

Effective date of registration: 20210409

Address after: 570000 301, 3rd floor, DESHENGSHA commercial city, 23 Changdi Road, Longhua District, Haikou City, Hainan Province

Patentee after: Hainan Zhengbang Information Technology Co.,Ltd.

Address before: 570100 5th floor, provincial Xinhua Bookstore headquarters, 27 Nansha Road, Haikou City, Hainan Province

Patentee before: Hainan 01 Mdt InfoTech Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140409

Termination date: 20210930

CF01 Termination of patent right due to non-payment of annual fee