CN101980242B - Human face discrimination method and system and public safety system - Google Patents


Info

Publication number
CN101980242B
CN101980242B (application CN201010501198.3A)
Authority
CN
China
Prior art keywords
face
camouflage
image
facial image
people
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201010501198.3A
Other languages
Chinese (zh)
Other versions
CN101980242A (en)
Inventor
徐勇
杨治银
李岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Zhengbang Information Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201010501198.3A priority Critical patent/CN101980242B/en
Publication of CN101980242A publication Critical patent/CN101980242A/en
Application granted granted Critical
Publication of CN101980242B publication Critical patent/CN101980242B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a human face discrimination method and a human face discrimination system. The system comprises a disguise classifier, a disguise model, and a human face identification classifier. The human face discrimination method comprises the following steps: acquiring a human face image, namely acquiring the human face image from a video or from still images; performing human face disguise discrimination, namely performing disguise discrimination on the human face image by using the disguise classifier and the disguise model; and performing human face discrimination, namely performing human face discrimination, by using the human face identification classifier, on those face images judged undisguised in the disguise discrimination result. In the human face discrimination method, the human face discrimination system, and the public safety system of the invention, human face disguise discrimination is performed before human face discrimination, so that disguised face images can be responded to promptly.

Description

Face discrimination method, system, and public safety system
Technical field
The present invention relates to a face discrimination method, system, and public safety system, and in particular to a face discrimination method, system, and public safety system that performs disguise discrimination before face identification.
Background art
With social progress, more and more daily activities are moving toward convenience and simplicity. The bank ATM is one such development, sparing people the inconvenience of carrying cash. However, safety problems in self-service banking environments are increasingly apparent. For example, bank-card fraud and theft are currently a major source of bank losses, and problems such as stolen bank cards are ever more frequent. An offender will typically take a stolen bank card to a self-service banking room to withdraw money, often blocking his face in various ways, for example with a cap, scarf, or sunglasses. Moreover, criminal suspects of other categories (e.g., wanted criminals or thieves) also withdraw money at self-service banks, which provides a mechanism for discovering a suspect's whereabouts: face recognition can identify and locate such suspects. Through the monitoring camera in the self-service banking room, a suspect, or a person whose face is blocked, can be discovered in time and an alarm given to alert the relevant personnel. In addition, based on a description provided by an eyewitness (e.g., whether glasses are worn, sex, and similar conditions), or based on an image or composite image the eyewitness provides, a suspect face database can be searched and compared to list the more likely suspects for confirmation by the eyewitness, which facilitates the apprehension of criminal suspects. Prior-art face recognition methods and systems usually cannot discriminate disguise and cannot react effectively to disguised face images.
Summary of the invention
The technical problem solved by the present invention is to build a face discrimination method, system, and public safety system that overcome the prior-art inability to discriminate disguise and to react effectively to disguised face images.
The technical scheme of the present invention provides a face discrimination method using a disguise classifier, a disguise model, and a face recognition classifier, the face discrimination method comprising the following steps:
Acquiring a face image: a face image is acquired from a video or from still images;
Performing face disguise discrimination: disguise discrimination is performed on the face image according to the disguise classifier and the disguise model;
Performing face discrimination: for face images judged not disguised in the disguise discrimination result, face discrimination is performed according to the face recognition classifier.
In a further technical scheme of the present invention, the step of face disguise discrimination further comprises, before face image comparison, a preprocessing operation on the face image, the preprocessing operation being normalization of the face image.
In a further technical scheme of the present invention, the face disguise discrimination comprises: discrimination of cap-wearing disguise, discrimination of sunglasses-wearing disguise, and discrimination of mask- or scarf-wearing disguise.
The technical scheme of the present invention also builds a face discrimination system comprising an image input unit for inputting a face image, a disguise discrimination unit for performing face disguise discrimination on the face image, and a face identification unit for performing face identification. The disguise discrimination unit comprises a disguise classifier, a disguise model, and a disguise discrimination module; the face identification unit comprises a face database storing face images and a face recognition classifier. The disguise discrimination module performs disguise discrimination on the face image input by the image input unit according to the disguise classifier and the disguise model; when the disguise discrimination module judges that the face image is not disguised, the face identification unit performs face identification by comparing the face image input by the image input unit with the face images in the face database according to the face recognition classifier.
In a further technical scheme of the present invention, the disguise classifier comprises a cap classifier and a sunglasses classifier; the disguise discrimination module discriminates according to the cap classifier whether the face image shows cap-wearing disguise, and discriminates according to the sunglasses classifier whether the face image shows sunglasses-wearing disguise.
In a further technical scheme of the present invention, the disguise model comprises a skin color model, and the disguise discrimination module discriminates according to the skin color model whether the face image shows mask- or scarf-wearing disguise.
In a further technical scheme of the present invention, the face discrimination system further comprises a face image retrieval unit that retrieves face images from the face database according to input conditions.
The technical scheme of the present invention also builds a public safety system comprising the face discrimination system, where the face database is a suspect image database storing suspect face images; when the disguise discrimination module judges that the face image is not disguised, the face identification unit performs face identification by comparing the face image with the face images in the suspect image database.
In a further technical scheme of the present invention, the public safety system further comprises an alarm unit; when the disguise discrimination module judges that the face image is disguised, the alarm unit raises an alarm; when the face identification unit identifies the face image as a face image in the suspect image database, the alarm unit likewise raises an alarm.
In a further technical scheme of the present invention, the public safety system further comprises a suspect image retrieval unit that retrieves face images from the suspect image database according to input conditions.
The technical effect of the present invention: in the face discrimination method, system, and public safety system of the invention, face disguise discrimination is performed before face identification, so that disguised face images can be reacted to in time.
Brief description of the drawings
Fig. 1 is a flowchart of the present invention.
Fig. 2 is a structural schematic of the face discrimination system of the present invention.
Fig. 3 is a structural schematic of the public safety system of the present invention.
Embodiments
The technical solution of the present invention is further illustrated below in conjunction with specific embodiments.
As shown in Fig. 1, a specific embodiment of the present invention provides a face discrimination method using a disguise classifier, a disguise model, and a face recognition classifier, the face discrimination method comprising the following steps:
Step 100: acquire a face image, i.e., acquire a face image from a video or from still images. In the present invention, a face image is first acquired; it may be cut out of a video frame, or it may come from the face portion of an image. The image in the present invention may be a directly received electronic image file, an electronic image file produced by scanning, or an electronic image file cut from a larger image containing a face. In this specific embodiment, the face is bounded with a rectangular frame sized to the face; that is, the face image is acquired as a rectangular image in which the face fills the frame as much as possible.
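The bounding-rectangle crop described above can be sketched as follows. This is a minimal illustration, not code from the patent: the `crop_face` helper, its margin parameter, and the array shapes are assumptions for demonstration.

```python
import numpy as np

def crop_face(frame, x, y, w, h, margin=0.0):
    """Cut a face region out of a frame, given a detector's rectangle.

    (x, y) is the top-left corner, (w, h) the rectangle size; an optional
    margin enlarges the box so the face fills the crop as much as possible.
    """
    mh, mw = int(h * margin), int(w * margin)
    y0, y1 = max(0, y - mh), min(frame.shape[0], y + h + mh)
    x0, x1 = max(0, x - mw), min(frame.shape[1], x + w + mw)
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:200, 300:380] = 255          # pretend this bright block is the face
face = crop_face(frame, 300, 100, 80, 100)
```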
Step 200: perform face disguise discrimination, i.e., perform disguise discrimination on the face image according to the disguise classifier and the disguise model. In the present invention, face disguise discrimination comprises: discrimination of cap-wearing disguise, discrimination of sunglasses-wearing disguise, and discrimination of mask- or scarf-wearing disguise. Before disguise discrimination, the disguise classifier and the disguise model must be generated. Specifically, the disguise classifier in the present invention comprises a cap classifier and a sunglasses classifier: the cap classifier discriminates whether the face image shows cap-wearing disguise, and the sunglasses classifier discriminates whether it shows sunglasses-wearing disguise. The disguise model comprises a skin color model, which discriminates whether the face image shows mask- or scarf-wearing disguise. The generation process of the disguise classifier is illustrated below.
Generation of the cap classifier: first obtain two groups of images, one of caps of various shapes in various placements and one of non-cap images, generally more than 100 of each, and normalize them, e.g., to a uniform size of 40x30 pixels. The cap images serve as positive samples and the non-cap images as negative samples; together these positive and negative samples form the training set of the cap cascade classifier. Using the Adaboost algorithm based on Haar features provided in OpenCV (OpenCV is an open-source computer vision library supported by Intel; it consists of a series of C functions and a small number of C++ classes and implements many general-purpose algorithms in image processing and computer vision), classifier training is carried out with OpenCV's haartraining program, taking the training set of the cap cascade classifier as input; the final output is a cap cascade classifier based on Haar features.
After the cap cascade classifier is obtained, a cap Hu-invariant training set is built. (The Hu invariants are a set of moment features: the moments of inertia about the major and minor axes and several very useful moment invariants can be obtained directly from image moments. Moment invariants are statistical properties of an image, invariant under translation, scaling, and rotation, and are widely used in image recognition; Hu first proposed moment invariants for region shape recognition.) The cap cascade classifier formed in training is used to detect caps and to cut out the cap region images. The image collection obtained in this way contains two classes of images: caps, and non-caps (referred to here as "false caps"). They are divided into two groups, the cap group as positive samples and the "false cap" group as negative samples, forming the cap image set used for obtaining the cap Hu invariants. (The image set used here for the cap Hu-invariant training set and the image library used above for training the cap cascade classifier are two different image sets; both are prepared before the system runs.) The 7 Hu invariants of all images in the set are computed, forming the cap Hu-invariant training set, which contains two classes of data: cap and non-cap.
The cap classifier is thus obtained by the above method.
After the cap classifier is obtained, it is used to discriminate cap-wearing disguise in images. First, the cap cascade classifier performs preliminary detection. For a detected "cap image", the 7-invariant training set in the cap classifier and the K-nearest-neighbor method (K is a positive integer; K = 9 in the present invention) are used to judge whether the currently detected image is a genuine cap: if the feature vector of the currently detected cap is close to the cap data in the cap Hu-invariant training set, it is judged to be a cap; otherwise it is not. In practice, the 5th and 7th dimensions of the 7 Hu moment invariants separate true cap images well from images mistakenly detected as caps, so only these two features are used as the input of the K-nearest-neighbor method, reducing misjudgment and also reducing computational complexity.
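The K-nearest-neighbor check (K = 9, on the two selected Hu-invariant dimensions) can be sketched as below. The training data here are synthetic clusters standing in for the patent's cap / "false cap" Hu-invariant training set; the function name and clusters are illustrative assumptions.

```python
import numpy as np

def knn_is_cap(feature, train_feats, train_labels, k=9):
    """K-nearest-neighbor vote (K = 9 in the patent) on 2-D feature
    vectors built from the 5th and 7th Hu invariants.

    train_labels: 1 for genuine cap, 0 for "false cap".
    Returns True when the majority of the k nearest training points are caps.
    """
    d = np.linalg.norm(train_feats - feature, axis=1)   # Euclidean distances
    nearest = train_labels[np.argsort(d)[:k]]           # labels of k closest
    return nearest.sum() * 2 > k                        # majority vote

# Toy training set: caps cluster near (1, 1), false caps near (-1, -1).
rng = np.random.default_rng(0)
caps = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
fakes = rng.normal([-1.0, -1.0], 0.1, size=(50, 2))
feats = np.vstack([caps, fakes])
labels = np.array([1] * 50 + [0] * 50)

result = knn_is_cap(np.array([0.9, 1.1]), feats, labels)   # near the cap cluster
```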
The sunglasses classifier is obtained by a process analogous to the above:
Generation of the sunglasses classifier: first obtain two groups of images, one of sunglasses of various shapes in various placements and one of non-sunglasses images, generally more than 100 of each, and normalize them, e.g., to a uniform size of 40x30 pixels. The sunglasses images serve as positive samples and the non-sunglasses images as negative samples; together these form the training set of the sunglasses cascade classifier. Using the Adaboost algorithm based on Haar features provided in OpenCV, classifier training is carried out with OpenCV's haartraining program, taking the training set of the sunglasses cascade classifier as input; the final output is a sunglasses cascade classifier based on Haar features.
After the sunglasses cascade classifier is obtained, a sunglasses Hu-invariant training set is built (see the description of Hu invariants above). The sunglasses cascade classifier formed in training is used to detect sunglasses and to cut out the sunglasses region images. The image collection obtained in this way contains two classes of images: sunglasses, and non-sunglasses (referred to here as "false sunglasses"). They are divided into two groups, the sunglasses group as positive samples and the "false sunglasses" group as negative samples, forming the sunglasses image set used for obtaining the sunglasses Hu invariants. (The image set used here for the sunglasses Hu-invariant training set and the image library used above for training the sunglasses cascade classifier are two different image sets.) The 7 Hu invariants of all images in the set are computed, forming the sunglasses Hu-invariant training set, which contains two classes of data: sunglasses and non-sunglasses.
The sunglasses classifier is thus obtained by the above method.
After the sunglasses classifier is obtained, it is used to discriminate sunglasses-wearing disguise in images. First, the sunglasses cascade classifier performs preliminary detection. For a detected "sunglasses image", its 7 Hu invariants are computed and fed into a neural network; according to the network output (0 or 1, where 0 means not sunglasses and 1 means sunglasses), the currently detected image is judged to be genuine sunglasses or not, using the 7-invariant training set in the sunglasses classifier and a multilayer neural network method. The network training process is as follows: a three-layer BP neural network is used, with 7 input-layer nodes (corresponding to the 7 Hu invariants), 10 hidden-layer nodes, and 1 output-layer node (whether the image is sunglasses). The data set for training the network is the sunglasses Hu-invariant training set cut from video. After training, the neural network model is produced.
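A minimal 7-10-1 back-propagation network of the kind described can be sketched in plain numpy. The toy training data, random seed, learning rate, and iteration count are assumptions for illustration; the patent's actual network is trained on the sunglasses Hu-invariant set, not on this synthetic data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.5, (7, 10))   # input (7 Hu invariants) -> 10 hidden nodes
W2 = rng.normal(0, 0.5, (10, 1))   # hidden -> 1 output node (sunglasses or not)

# Toy training set: "sunglasses" vectors centered at +0.5, negatives at -0.5.
X = np.vstack([rng.normal(0.5, 0.2, (40, 7)), rng.normal(-0.5, 0.2, (40, 7))])
T = np.vstack([np.ones((40, 1)), np.zeros((40, 1))])

lr = 0.5
for _ in range(3000):                     # plain batch back-propagation
    H = sigmoid(X @ W1)                   # hidden activations
    Y = sigmoid(H @ W2)                   # network output in (0, 1)
    dY = Y - T                            # output delta (cross-entropy loss)
    dH = (dY @ W2.T) * H * (1 - H)        # hidden delta
    W2 -= lr * H.T @ dY / len(X)
    W1 -= lr * X.T @ dH / len(X)

def predict(x):
    """Return 1 for sunglasses, 0 otherwise, thresholding the output at 0.5."""
    out = sigmoid(sigmoid(x @ W1) @ W2)[0]
    return int(out > 0.5)
```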
Two steps involved in the disguise classifiers are briefly described below:
One, the Adaboost algorithm based on Haar features. The Adaboost algorithm based on Haar features detects specified targets such as faces, sunglasses, eyes, and cars. Specifically, Haar features fall into three classes, edge features, linear features, and center features, combined with diagonal features into feature templates. A feature template contains white and black rectangles, and the feature value of a template is defined as the sum of the pixel values in the white rectangles minus the sum of the pixel values in the black rectangles. Once the feature forms are determined, the number of Haar-like features depends on the size of the training sample image matrix: a feature template placed at an arbitrary position and scale within a sub-window is called a feature, and enumerating the features of all sub-windows is the basis of weak-classifier training. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier). Training on the Haar features of the samples yields a cascade of strong classifiers. Training samples are divided into positive samples, the targets to be detected (such as faces or cars), and negative samples, arbitrary other images; all sample images are normalized to the same size (for example, 20x20). Once the classifier is trained, it can be applied to detect regions of interest (of the same size as the training samples) in an input image: the classifier outputs 1 when a target region (a car or a face) is detected and 0 otherwise. To scan a whole image, the search window is moved across the image and each position is checked for possible targets. To find targets of different sizes, the classifier is designed to change size, which is more efficient than resizing the image to be checked; so to detect target objects of unknown size, the scanning procedure usually scans the image several times with search windows of different scales.
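The rectangle-sum arithmetic underlying the Haar features above is usually done with an integral image. The following is a sketch, under assumed window coordinates and a synthetic 20x20 test image, of a single two-rectangle edge feature (white half minus black half); it illustrates the feature value definition, not OpenCV's internal implementation.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: cumulative sums along both axes."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from a zero-padded integral image in O(1)."""
    p = np.pad(ii, ((1, 0), (1, 0)))
    return p[y1, x1] - p[y0, x1] - p[y1, x0] + p[y0, x0]

def haar_edge_feature(img, y, x, h, w):
    """Two-rectangle edge feature: white (left) half minus black (right) half."""
    ii = integral_image(img.astype(np.int64))
    white = box_sum(ii, y, x, y + h, x + w // 2)
    black = box_sum(ii, y, x + w // 2, y + h, x + w)
    return white - black

# A vertical edge: bright left half, dark right half.
img = np.zeros((20, 20), dtype=np.uint8)
img[:, :10] = 200
val = haar_edge_feature(img, 0, 0, 20, 20)   # large positive response
```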
Two, computation of the Hu invariants.
The 7 Hu invariants are computed as follows. For a digital image f(x, y), the (p+q)-th order geometric moment m_pq and central moment μ_pq are

$$m_{pq} = \sum_x \sum_y x^p y^q f(x,y)$$

$$\mu_{pq} = \sum_x \sum_y (x-x_0)^p (y-y_0)^q f(x,y)$$

where (x_0, y_0) is the image centroid. The central moment μ_pq is translation invariant; it must further be standardized to obtain the scale-normalized moment:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\,r}}, \qquad r = \frac{p+q+2}{2}, \qquad p+q \ge 2$$

The 7 invariants constructed by Hu are:

$$\phi_1 = \eta_{20} + \eta_{02}$$
$$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$$
$$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$$
$$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$$
$$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]$$
$$\phi_6 = (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$$
$$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]$$

Here m_pq is the (p+q)-th order moment of the image, with p, q arbitrary nonnegative integers; μ_pq is the (p+q)-th order central moment of the image; η_pq is the standardized (p+q)-th order central moment; and φ_i (i = 1, 2, …, 7) are the seven Hu invariants, which are scalars.
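The moment definitions above translate directly into numpy. The following sketch computes m_pq, μ_pq, η_pq, and all seven φ_i for a small synthetic image; the test shape and the exact-translation check are illustrative additions (translation invariance of the central moments is exact for integer pixel shifts).

```python
import numpy as np

def hu_invariants(f):
    """Compute the 7 Hu invariants of a 2-D image f(x, y) per the formulas above."""
    f = f.astype(float)
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]        # pixel coordinate grids

    def m(p, q):                                     # geometric moment m_pq
        return np.sum(x**p * y**q * f)

    x0, y0 = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)    # centroid

    def mu(p, q):                                    # central moment mu_pq
        return np.sum((x - x0)**p * (y - y0)**q * f)

    def eta(p, q):                                   # scale-normalized moment
        return mu(p, q) / mu(0, 0) ** ((p + q + 2) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02)**2 + 4 * n11**2
    phi3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    phi4 = (n30 + n12)**2 + (n21 + n03)**2
    phi5 = ((n30 - 3*n12)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
            + (3*n21 - n03)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2))
    phi6 = ((n20 - n02)*((n30 + n12)**2 - (n21 + n03)**2)
            + 4*n11*(n30 + n12)*(n21 + n03))
    phi7 = ((3*n21 - n03)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
            - (n30 - 3*n12)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])

img = np.zeros((40, 40))
img[5:15, 5:25] = 1.0                     # a 10x20 bright rectangle
shifted = np.zeros((40, 40))
shifted[20:30, 10:30] = 1.0               # the same shape, translated
```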
In the present invention, the disguise model comprises a skin color model, which discriminates whether the face image shows mask- or scarf-wearing disguise. The detailed process is as follows: the function of the skin color model is, given an image, to find and mark the face region that is not blocked, for later face image recognition. Similarity is then computed to mark the face region. Three steps are involved:
One, image brightness adjustment
Given images differ considerably because of camera quality differences and illumination differences. To reduce the influence of external conditions on facial skin color, we first apply a corrective preprocessing step. The correction formulas are as follows:
Sc' = Sc * scalar
scalar = Savg / Scavg
where Sc is the RGB value of the original image, Savg is the mean RGB value of a standard image, and Scavg is the mean value of the corresponding RGB component of the present image. The standard RGB values are obtained by taking 20 images under normal illumination conditions and computing, over all pixels of all pictures, the mean of each of the R, G, and B channels. By this calculation we obtain SavgB = 174.415, SavgG = 180.664, SavgR = 180.448.
Comparison of images before and after correction shows that the video images were over-exposed before correction; after the correction algorithm is applied, image contrast is clearly enhanced.
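The per-channel correction Sc' = Sc * (Savg / Scavg) can be sketched as below, using the channel means stated in the text. The BGR channel ordering and the uniform dummy frame are assumptions for illustration.

```python
import numpy as np

# Standard-image channel means from the patent (assumed BGR order, as in OpenCV):
SAVG = np.array([174.415, 180.664, 180.448])   # SavgB, SavgG, SavgR

def correct_brightness(img):
    """Scale each color channel so its mean matches the standard image:
    Sc' = Sc * (Savg / Scavg), computed per channel."""
    img = img.astype(float)
    scavg = img.reshape(-1, 3).mean(axis=0)    # current per-channel means
    return img * (SAVG / scavg)

# An over-exposed dummy frame: uniformly bright in all channels.
frame = np.full((10, 10, 3), 250.0)
fixed = correct_brightness(frame)
```

By construction, the corrected frame's per-channel means land exactly on the standard means.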
Two, face region detection
For face region detection, we adopt a method of computing face-region similarity.
First, the image is converted to the YCbCr color format. Compared with the RGB color space, YCbCr separates the luminance in a color image well.
The formulas converting the RGB color space to the YCbCr space are as follows:
Cb = 128 - 37.797*R/255 - 74.203*G/255 + 112*B/255
Cr = 128 + 112*R/255 - 93.786*G/255 - 18.214*B/255
Removing the Y (luminance) component reduces the three-dimensional space to a two-dimensional plane; on this plane the region of skin color is relatively concentrated, so we model the distribution with a Gaussian.
We obtain the center of the distribution by training; then, from the distance of each pixel under investigation to that center, a skin-color similarity is obtained, giving a similarity map of the original image. The map is then binarized according to a rule, finally determining the skin region. During training, the quantities to determine are the mean M and covariance C, by the formulas:
M = E(x), C = E((x - M)(x - M)^T), x = [Cr, Cb]^T
where x is the vector formed by the Cr and Cb values of each pixel's color in the image. The similarity is then computed with the formula:
P(r, b) = exp[-0.5 (x - M)^T C^{-1} (x - M)]
P(r, b) is the probability that a pixel whose Cr and Cb values are r and b is skin. After the similarity is computed, if P(r, b) is greater than a given threshold, the point is skin and the corresponding pixel's gray value is set to 1; otherwise it is set to 0. The image is binarized accordingly. The threshold is determined from repeated experimental results; in the present invention it is 0.62.
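The Gaussian skin similarity can be sketched as follows, with the 0.62 threshold from the text. The synthetic [Cr, Cb] skin samples and the cluster center are assumptions; the patent trains M and C on real skin pixels.

```python
import numpy as np

def train_skin_model(cbcr_samples):
    """Mean M and covariance C of [Cr, Cb] vectors taken from skin pixels."""
    M = cbcr_samples.mean(axis=0)
    C = np.cov(cbcr_samples, rowvar=False)
    return M, C

def skin_probability(x, M, C):
    """P(r, b) = exp(-0.5 (x - M)^T C^{-1} (x - M))."""
    d = x - M
    return float(np.exp(-0.5 * d @ np.linalg.inv(C) @ d))

# Toy skin samples clustered around Cr = 150, Cb = 110.
rng = np.random.default_rng(2)
samples = rng.normal([150.0, 110.0], [5.0, 5.0], size=(500, 2))
M, C = train_skin_model(samples)

THRESH = 0.62                          # binarization threshold from the patent
is_skin = skin_probability(np.array([150.0, 110.0]), M, C) > THRESH
```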
Three, face region extraction
After the image has been correctly binarized, the whole face in theory lies within a single connected region; although other smaller connected regions may also exist in the image, the area of the whole face region should be the largest. On this basis, we first compute the ratio of each connected region's area to the area of the whole image and look for the connected component whose area ratio falls within a limiting interval; that connected region can be taken to be the face region being sought. The limiting interval is determined from repeated test results; in the present invention the area-ratio interval is [0.25, 0.68], i.e., a connected region whose area ratio is greater than 0.25 and less than 0.68 is considered to be the face.
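The area-ratio filter over connected components can be sketched with a plain BFS flood fill. The mask size, the face blob, and the noise pixel are illustrative; a production system would more likely use a library labeling routine.

```python
import numpy as np
from collections import deque

def find_face_region(mask, lo=0.25, hi=0.68):
    """Label 4-connected components of a binary mask and return the pixel list
    of the component whose area / image-area ratio lies in (lo, hi), or None."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    total = h * w
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                               # BFS flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if lo < len(comp) / total < hi:
                    return comp
    return None

mask = np.zeros((20, 20), dtype=bool)
mask[2:16, 3:17] = True                 # face blob: 14*14 = 196 px, ratio 0.49
mask[18, 18] = True                     # small noise component, ratio 0.0025
face = find_face_region(mask)
```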
Step 300: perform face discrimination, i.e., for face images judged not disguised in the disguise discrimination result, perform face discrimination according to the face recognition classifier.
The detailed process is as follows. A frame is grabbed from the video. In the present invention, faces are detected with the face classifier provided in OpenCV. In practice, the face image is normalized to a gray-level image of the training sample size (70 × 100 pixels). Let y ∈ R^n denote this face image, the test sample, and let A ∈ R^{n×m} be the matrix formed by all training samples, i.e., all image samples in the loaded suspect face database, one training sample per column. Suppose the test sample y can be expressed as a linear combination of all training samples as follows:
$$y = \sum_{k=1}^{m} \alpha_k a_k \qquad (**)$$
where m is the number of training samples, a_k is the k-th training sample, and α_k is the coefficient of the k-th training sample in the linear combination. Here α = (α_1, α_2, …, α_m)^T is the coefficient vector and A = (a_1, a_2, …, a_m).
The coefficient vector is obtained through the following formula:
$$\hat{\alpha} = (A^T A)^{-1} A^T y$$
Next, the contribution of each class of samples to describing the test sample is computed. From formula (**), every training sample contributes to the description of the test sample, the contribution of the k-th training sample being α_k a_k (k = 1, 2, …, m). Since the class of each training sample is known, summing the contributions of all training samples within each class gives that class's contribution to describing the test sample. For example, suppose a_s, …, a_t are the training samples belonging to class d; then the contribution class d makes in describing the test sample is g_d = α_s a_s + … + α_t a_t.
The error of every class is then computed: e_d = ||y - g_d||^2 (d = 1, 2, …, L), where L is the number of classes in the database.
Finally, the class with the least error is found, completing the identification. A smaller error indicates that the test sample (the face image to be identified) is closer to that class; when the error is less than a certain threshold, the portrait to be identified is considered the same as the portrait in the database. Otherwise identification is refused and the next frame is processed. The threshold in the present invention is 0.02 (a decimal between 0 and 1).
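The linear-representation classification above can be sketched as follows. The toy 3-class, 50-dimensional "face" database is an assumption; `np.linalg.lstsq` is used in place of the explicit inverse, since it equals (A^T A)^{-1} A^T y when A^T A is invertible and stays well behaved when it is not.

```python
import numpy as np

def classify(A, labels, y, L):
    """Represent test sample y as a linear combination of the training
    columns of A, sum each class's contribution g_d, and return the class
    with the least reconstruction error e_d = ||y - g_d||^2."""
    alpha = np.linalg.lstsq(A, y, rcond=None)[0]   # coefficient vector
    errors = []
    for d in range(L):
        idx = np.flatnonzero(labels == d)
        g_d = A[:, idx] @ alpha[idx]               # class-d contribution
        errors.append(np.sum((y - g_d) ** 2))
    return int(np.argmin(errors)), errors

# Toy database: 3 classes, 4 samples each, 50-dimensional "face" vectors.
rng = np.random.default_rng(3)
centers = rng.normal(0, 1, (3, 50))
A = np.column_stack([centers[d] + rng.normal(0, 0.05, 50)
                     for d in range(3) for _ in range(4)])
labels = np.repeat(np.arange(3), 4)

# A noisy probe drawn near the class-1 center should be assigned class 1.
pred, errs = classify(A, labels, centers[1] + rng.normal(0, 0.05, 50), L=3)
```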
In a preferred embodiment of the present invention, the step of face disguise discrimination further comprises, before face image comparison, a preprocessing operation on the face image, the preprocessing operation being normalization of the face image.
As shown in Fig. 2, a specific embodiment of the present invention builds a face discrimination system comprising an image input unit 1 for inputting a face image, a disguise discrimination unit 2 for performing face disguise discrimination on the face image, and a face identification unit 3 for performing face identification. The disguise discrimination unit 2 comprises a disguise classifier 22, a disguise model 23, and a disguise discrimination module 21; the face identification unit 3 comprises a face database 32 storing face images and a face recognition classifier 31. The disguise discrimination module 21 performs disguise discrimination on the face image input by the image input unit 1 according to the disguise classifier 22 and the disguise model 23; when the disguise discrimination module 21 judges that the face image is not disguised, the face identification unit 3 performs face identification by comparing the face image input by the image input unit 1 with the face images in the face database 32 according to the face recognition classifier 31.
The specific implementation process of the present invention is as follows. First, the image input unit 1 obtains a face image. The face image may be cut from a video, or may come from an image containing a face; an image in the present invention may be a directly received image file, an image file produced by scanning, or a cropped image file containing a face. In the specific embodiment of the present invention, the face is bounded by a rectangular frame according to the size of the face; that is, the face image is obtained as a rectangular image in which the face fills the frame as fully as possible.
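The rectangle-bounding step can be sketched as follows (an illustrative assumption, not the patent's implementation): given a face bounding box from some detector, the crop is enlarged by a margin and clipped to the image so the face fills the frame as fully as possible. The `margin` parameter is hypothetical:

```python
import numpy as np

def crop_face(image, box, margin=0.2):
    """Crop the face region given a bounding box (x, y, w, h), enlarging
    the rectangle by `margin` on each side and clipping to the image so
    the face fills the resulting rectangle as fully as possible."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, image.shape[1])   # image is (rows, cols)
    y1 = min(y + h + dy, image.shape[0])
    return image[y0:y1, x0:x1]
```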
Secondly, the face image is subjected to disguise discrimination according to the disguise classifier and the disguise model. In the present invention, face disguise discrimination comprises: discrimination of disguise by wearing a cap, discrimination of disguise by wearing sunglasses, and discrimination of disguise by wearing a mask or scarf. Before disguise discrimination is performed, the disguise classifier 22 and the disguise model 23 need to be generated. Specifically, the disguise classifier 22 in the present invention comprises a cap classifier 221 and a sunglasses classifier 222: the disguise discrimination module 21 judges according to the cap classifier 221 whether the face image is disguised with a cap, and judges according to the sunglasses classifier 222 whether the face image is disguised with sunglasses. The disguise model 23 comprises a complexion model 231, and the disguise discrimination module 21 judges according to the complexion model 231 whether the face image is disguised with a mask or scarf.
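The complexion model evaluates, for each pixel, the Gaussian skin likelihood P(r, b) = exp[-0.5 (x - m)^T C^{-1} (x - m)] in the (Cr, Cb) plane given in the claims, and binarizes the result. The sketch below is illustrative: the mean vector and covariance matrix would in practice be estimated from labelled skin-pixel samples, and the 0.5 threshold is an assumption (the patent only speaks of "a given threshold"):

```python
import numpy as np

def skin_probability(crcb, mean, cov):
    """Per-pixel Gaussian skin-colour likelihood in the (Cr, Cb) plane:
    P = exp[-0.5 (x - m)^T C^{-1} (x - m)].
    crcb: array of shape (H, W, 2); mean: (2,); cov: (2, 2)."""
    diff = crcb - mean                    # broadcast to (H, W, 2)
    inv_cov = np.linalg.inv(cov)
    # Quadratic form (x - m)^T C^{-1} (x - m) evaluated for every pixel.
    md2 = np.einsum('...i,ij,...j->...', diff, inv_cov, diff)
    return np.exp(-0.5 * md2)

def skin_mask(crcb, mean, cov, threshold=0.5):
    """Binarize: pixels whose likelihood exceeds the threshold are skin (1)."""
    return (skin_probability(crcb, mean, cov) > threshold).astype(np.uint8)
```

After binarization, the connected component whose area ratio lies in the expected interval would be taken as the unoccluded face region, per the claims.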
Finally, for a face image that the disguise discrimination module 21 judges not to be disguised, the face identification unit 3 performs face identification, according to the face recognition classifier 31, by comparing the face image input by the image input unit 1 with the face images in the face database 32.
The face discrimination system of the invention responds in time to disguised face images, and its recognition performance is better.
As shown in Figure 2, in a preferred embodiment of the present invention, the face discrimination system further comprises a face image retrieval unit 4, which retrieves face images from the face database 32 according to input conditions. The face image retrieval unit 4 of the present invention performs comparison retrieval, through the face identification unit 3, of the images in the face database 32 against a face image input by the user. The face image retrieval unit 4 of the present invention can also retrieve face images according to conditions such as the sex and age range of the person.
As shown in Figure 3, the technical scheme of the present invention is to build a public safety system comprising the face discrimination system, in which the face database is a suspect image database 33 storing face images of suspects. When the disguise discrimination module 21 judges that a face image is not disguised, the face identification unit 3 performs face identification by comparing the face image with the face images in the suspect image database 33.
Its specific working process is the same as that of the face discrimination system described above, except that the general face database is replaced by the suspect image database 33.
As shown in Figure 3, in a preferred embodiment of the present invention, the public safety system further comprises an alarm unit 5: when the disguise discrimination module 21 judges that a face image is disguised, the alarm unit 5 raises an alarm; and when the face identification unit 3 identifies the face image as a face image in the suspect image database 33, the alarm unit 5 raises an alarm.
In a preferred embodiment of the present invention, the public safety system further comprises a suspect image retrieval unit, which retrieves face images from the suspect image database 33 according to input conditions. The suspect image retrieval unit of the present invention performs comparison retrieval, through the face identification unit 3, of the images in the suspect image database 33 against a face image input by the user. The suspect image retrieval unit of the present invention can also retrieve face images according to conditions such as the sex and age range of the person. The working process of the suspect image retrieval unit described here is the same as that of the face image retrieval unit 4, except that the database it retrieves from is the suspect image database 33.
The technical effect of the present invention is as follows: the face discrimination method, system and public safety system of the invention perform face disguise discrimination before face identification, so that disguised face images are responded to in a timely manner.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the technical field of the present invention, a number of simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A face discrimination method, using a disguise classifier, a disguise model and a face recognition classifier, the face discrimination method comprising the steps of:
obtaining a face image: obtaining the face image from a video or from images;
performing face disguise discrimination: performing disguise discrimination on the face image according to the disguise classifier and the disguise model, wherein the classifier is trained on the Haar features of samples to obtain a cascaded strong classifier; the training samples are divided into positive samples and negative samples, the positive samples being samples of the target to be detected and the negative samples being arbitrary other images, and all sample images are normalized; a Hu invariant training set is obtained according to the cascaded strong classifier; the disguise model further comprises a complexion model, according to which it is discriminated whether the face image is disguised with a mask or scarf; the unoccluded face region found by the complexion model is marked by computing a similarity for the face region, the similarity being computed with the formula:
P(r, b) = exp[-0.5 (x - m)^T C^{-1} (x - m)]
wherein x is the vector formed by the two values Cr and Cb of the colour of a pixel in the image, m is the mean vector of the skin-colour training samples, C is the covariance matrix of the skin-colour training samples, and P(r, b) is the probability that a pixel whose Cr and Cb values in the YCbCr space are r and b is skin; after the similarity is computed, if P(r, b) is greater than a given threshold the pixel is skin and the grey value of the corresponding pixel is set to 1, otherwise to 0, whereby the image is binarized; after binarization the whole face region lies in one connected component; the ratio of the area of each connected component to the area of the whole image is then computed, and the connected component whose area ratio lies within a certain interval is taken to be the face region sought;
performing face identification: for a face image that the disguise discrimination result shows not to be disguised, performing face identification according to the face recognition classifier; the face recognition is a linear-combination-based face recognition method with a rejection function, as follows: the face image is normalized to the size of the training samples as a grey-level image; let y ∈ R^n denote the face image, i.e. the test sample, and let A ∈ R^{n×m} be the matrix formed by all training samples, i.e. all image samples in the loaded suspect face database, each column corresponding to one training sample; the test sample y is expressed as a linear combination of all training samples:
y = Σ_{k=1}^{m} α_k a_k
wherein m is the number of training samples, a_k is the k-th training sample, and α_k is the coefficient corresponding to the k-th training sample in the linear combination; α = (α_1, α_2, ..., α_m)^T is the coefficient vector and A = (a_1, a_2, ..., a_m);
computing the contribution of each class of samples to describing the test sample, the contribution made by the d-th class in describing the test sample being g_d = α_s a_s + ... + α_t a_t, where s, ..., t are the indices of the training samples of the d-th class; computing the error of each class, e_d = ||y - g_d||^2 (d = 1, 2, ..., L), where L is the number of classes in the database; finding the class with the smallest error and identifying the test sample as belonging to it; a smaller error indicates that the test sample is closer to that class, and when the error is smaller than a certain threshold the portrait to be identified is judged to be the same as the portrait in the database; otherwise recognition is refused and the next round of processing is entered.
2. The face discrimination method according to claim 1, characterized in that the step of face disguise discrimination further comprises, before the face image comparison is carried out, a preprocessing operation on the face image, the preprocessing operation being a normalization of the face image.
3. The face discrimination method according to claim 1, characterized in that, in the face disguise discrimination step, the face disguise discrimination comprises: discrimination of disguise by wearing a cap, discrimination of disguise by wearing sunglasses, and discrimination of disguise by wearing a mask or scarf.
4. A face discrimination system, characterized by comprising an image input unit for inputting a face image, a disguise judgement unit for performing disguise discrimination on the face image, and a face identification unit for performing face identification, wherein the disguise judgement unit comprises a disguise classifier, a disguise model and a disguise discrimination module, and the face identification unit comprises a face database storing face images and a face recognition classifier; the disguise discrimination module performs disguise discrimination on the face image input by the image input unit according to the disguise classifier and the disguise model, and when the disguise discrimination module judges that the face image is not disguised, the face identification unit performs face identification, according to the face recognition classifier, by comparing the face image input by the image input unit with the face images in the face database;
wherein the disguise classifier is trained on the Haar features of samples to obtain a cascaded strong classifier; the training samples are divided into positive samples and negative samples, the positive samples being samples of the target to be detected and the negative samples being arbitrary other images, and all sample images are normalized; a Hu invariant training set is obtained according to the cascaded strong classifier; the disguise model further comprises a complexion model, according to which it is discriminated whether the face image is disguised with a mask or scarf; the unoccluded face region found by the complexion model is marked by computing a similarity for the face region, the similarity being computed with the formula:
P(r, b) = exp[-0.5 (x - m)^T C^{-1} (x - m)]
wherein P(r, b) is the probability that a pixel whose Cr and Cb values in the YCbCr space are r and b is skin; after the similarity is computed, if P(r, b) is greater than a given threshold the pixel is skin and the grey value of the corresponding pixel is set to 1, otherwise to 0, whereby the image is binarized; after binarization the whole face region lies in one connected component; the ratio of the area of each connected component to the area of the whole image is then computed, and the connected component whose area ratio lies within a certain interval is taken to be the face region sought;
wherein the face identification unit uses a linear-combination-based face recognition method with a rejection function, as follows: the face image is normalized to the size of the training samples as a grey-level image; let y ∈ R^n denote the face image, i.e. the test sample, and let A ∈ R^{n×m} be the matrix formed by all training samples, i.e. all image samples in the loaded suspect face database, each column corresponding to one training sample; the test sample y is expressed as a linear combination of all training samples:
y = Σ_{k=1}^{m} α_k a_k
wherein m is the number of training samples, a_k is the k-th training sample, and α_k is the coefficient corresponding to the k-th training sample in the linear combination; α = (α_1, α_2, ..., α_m)^T is the coefficient vector and A = (a_1, a_2, ..., a_m);
the contribution of each class of samples to describing the test sample is computed, the contribution made by the d-th class in describing the test sample being g_d = α_s a_s + ... + α_t a_t, where s, ..., t are the indices of the training samples of the d-th class; the error of each class is computed as e_d = ||y - g_d||^2 (d = 1, 2, ..., L), where L is the number of classes in the database; the class with the smallest error is found and the test sample is identified as belonging to it; a smaller error indicates that the test sample is closer to that class, and when the error is smaller than a certain threshold the portrait to be identified is judged to be the same as the portrait in the database; otherwise recognition is refused and the next round of processing is entered.
5. The face discrimination system according to claim 4, characterized in that the disguise classifier comprises a cap classifier and a sunglasses classifier, the disguise discrimination module judging according to the cap classifier whether the face image is disguised with a cap, and judging according to the sunglasses classifier whether the face image is disguised with sunglasses.
6. The face discrimination system according to claim 4, characterized in that the disguise model comprises a complexion model, the disguise discrimination module judging according to the complexion model whether the face image is disguised with a mask or scarf.
7. The face discrimination system according to claim 4, characterized in that the face discrimination system further comprises a face image retrieval unit, which retrieves face images from the face database according to input conditions.
8. A public safety system applying the face discrimination system according to any one of claims 4 to 7, characterized in that the public safety system comprises the face discrimination system, the face database being a suspect image database storing face images of suspects; when the disguise discrimination module judges that a face image is not disguised, the face identification unit performs face identification by comparing the face image with the face images in the suspect image database.
9. The public safety system according to claim 8, characterized in that the public safety system further comprises an alarm unit: when the disguise discrimination module judges that a face image is disguised, the alarm unit raises an alarm; and when the face identification unit identifies the face image as a face image in the suspect image database, the alarm unit raises an alarm.
10. The public safety system according to claim 8, characterized in that the public safety system further comprises a suspect image retrieval unit, which retrieves face images from the suspect image database according to input conditions.
CN201010501198.3A 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system Expired - Fee Related CN101980242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010501198.3A CN101980242B (en) 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010501198.3A CN101980242B (en) 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system

Publications (2)

Publication Number Publication Date
CN101980242A CN101980242A (en) 2011-02-23
CN101980242B true CN101980242B (en) 2014-04-09

Family

ID=43600744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010501198.3A Expired - Fee Related CN101980242B (en) 2010-09-30 2010-09-30 Human face discrimination method and system and public safety system

Country Status (1)

Country Link
CN (1) CN101980242B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778500A (en) * 2016-11-11 2017-05-31 北京小米移动软件有限公司 A method and apparatus for obtaining facial information of a person

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779264B (en) * 2012-07-10 2015-05-13 北京恒信彩虹科技有限公司 Method and device for realizing barcode recognition
JP6405606B2 (en) * 2013-07-12 2018-10-17 オムロン株式会社 Image processing apparatus, image processing method, and image processing program
CN106778452A (en) * 2015-11-24 2017-05-31 沈阳新松机器人自动化股份有限公司 Service robot human detection and tracking based on binocular vision
JP7120590B2 (en) * 2017-02-27 2022-08-17 日本電気株式会社 Information processing device, information processing method, and program
CN107491746B (en) * 2017-08-02 2020-07-17 安徽慧视金瞳科技有限公司 Face pre-screening method based on large gradient pixel analysis
CN107633266B (en) * 2017-09-07 2020-07-28 西安交通大学 Electric locomotive contact net pantograph electric arc detection method
CN108228792B (en) * 2017-12-29 2020-06-16 深圳云天励飞技术有限公司 Picture retrieval method, electronic device and storage medium
CN108197250B (en) * 2017-12-29 2019-10-25 深圳云天励飞技术有限公司 Picture retrieval method, electronic equipment and storage medium
CN108597168A (en) * 2018-04-24 2018-09-28 广东美的制冷设备有限公司 Security alarm method, apparatus based on optical filter and household appliance
CN109190498A (en) * 2018-08-09 2019-01-11 安徽四创电子股份有限公司 A method of intelligent case linking based on face recognition
CN109101923B (en) * 2018-08-14 2020-11-27 罗普特(厦门)科技集团有限公司 Method and device for detecting mask wearing condition of person
CN109522960A (en) * 2018-11-21 2019-03-26 泰康保险集团股份有限公司 Image evaluation method, device, electronic equipment and computer-readable medium
CN110135279B (en) * 2019-04-23 2021-06-08 深圳神目信息技术有限公司 Early warning method, device and equipment based on face recognition and computer readable medium
CN110341554B (en) * 2019-06-24 2021-05-25 福建中科星泰数据科技有限公司 Controllable environment adjusting system
CN111811657B (en) * 2020-07-07 2022-05-27 杭州海康威视数字技术股份有限公司 Method and device for correcting human face temperature measurement and storage medium
CN112597867B (en) * 2020-12-17 2024-04-26 佛山科学技术学院 Face recognition method and system for wearing mask, computer equipment and storage medium
CN113569676B (en) * 2021-07-16 2024-06-11 北京市商汤科技开发有限公司 Image processing method, device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1215618A2 (en) * 2000-12-14 2002-06-19 Eastman Kodak Company Image processing method for detecting human figures in a digital image
CN1710925A (en) * 2004-06-18 2005-12-21 乐金电子(中国)研究开发中心有限公司 Device and method for identifying identity by pick up head set on handset
CN101369310A (en) * 2008-09-27 2009-02-18 北京航空航天大学 Robust human face expression recognition method
CN101440676A (en) * 2008-12-22 2009-05-27 北京中星微电子有限公司 Intelligent anti-theft door lock based on cam and warning processing method thereof
CN101751557A (en) * 2009-12-18 2010-06-23 上海星尘电子科技有限公司 Intelligent biological identification device and identification method thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778500A (en) * 2016-11-11 2017-05-31 北京小米移动软件有限公司 A method and apparatus for obtaining facial information of a person
CN106778500B (en) * 2016-11-11 2019-09-17 北京小米移动软件有限公司 A method and apparatus for obtaining facial information of a person

Also Published As

Publication number Publication date
CN101980242A (en) 2011-02-23

Similar Documents

Publication Publication Date Title
CN101980242B (en) Human face discrimination method and system and public safety system
Long et al. Detecting Iris Liveness with Batch Normalized Convolutional Neural Network.
US8351662B2 (en) System and method for face verification using video sequence
CN102663413B (en) Multi-gesture and cross-age oriented face image authentication method
Sun et al. Gender classification based on boosting local binary pattern
CN102902959B (en) Face recognition method and system for storing identification photo based on second-generation identity card
US20120027263A1 (en) Hand gesture detection
CN103020992B (en) A kind of video image conspicuousness detection method based on motion color-associations
CN104504408A (en) Human face identification comparing method and system for realizing the method
CN109145742A (en) A kind of pedestrian recognition method and system
WO2008072622A1 (en) Face authentication device
CN105574509B (en) A kind of face identification system replay attack detection method and application based on illumination
CN111914761A (en) Thermal infrared face recognition method and system
CN103077378A (en) Non-contact human face identifying algorithm based on expanded eight-domain local texture features and attendance system
Dehshibi et al. Persian vehicle license plate recognition using multiclass Adaboost
CN106022223A (en) High-dimensional local-binary-pattern face identification algorithm and system
US20110182497A1 (en) Cascade structure for classifying objects in an image
Amaro et al. Evaluation of machine learning techniques for face detection and recognition
Laroca et al. A first look at dataset bias in license plate recognition
CN106886771A (en) The main information extracting method of image and face identification method based on modularization PCA
CN118230354A (en) Sign language recognition method based on improvement YOLOv under complex scene
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
CN108831158A (en) It disobeys and stops monitoring method, device and electric terminal
Hajare et al. Face Anti-Spoofing Techniques and Challenges: A short survey
Hassan et al. Facial image detection based on the Viola-Jones algorithm for gender recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: HAINAN 01 INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: XU YONG

Effective date: 20150818

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150818

Address after: 570100, Hainan Province, 27 Nansha Road, Haikou Province, Xinhua Bookstore headquarters, 5 floor

Patentee after: Hainan 01 Mdt InfoTech Ltd.

Address before: 518000 building C, Innovation Research Institute, Nanshan District hi tech Zone, Guangdong, Shenzhen 1-6

Patentee before: Xu Yong

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210409

Address after: 570000 301, 3rd floor, DESHENGSHA commercial city, 23 Changdi Road, Longhua District, Haikou City, Hainan Province

Patentee after: Hainan Zhengbang Information Technology Co.,Ltd.

Address before: 570100 5th floor, provincial Xinhua Bookstore headquarters, 27 Nansha Road, Haikou City, Hainan Province

Patentee before: Hainan 01 Mdt InfoTech Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140409

Termination date: 20210930

CF01 Termination of patent right due to non-payment of annual fee