CN109919041A - Face recognition method based on an intelligent robot - Google Patents

Face recognition method based on an intelligent robot

Info

Publication number
CN109919041A
CN109919041A (application CN201910118367.6A)
Authority
CN
China
Prior art keywords
face
image
robot
lbp
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910118367.6A
Other languages
Chinese (zh)
Inventor
王建荣
唐子越
高洁
刘志强
徐天一
喻梅
于瑞国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910118367.6A priority Critical patent/CN109919041A/en
Publication of CN109919041A publication Critical patent/CN109919041A/en
Pending legal-status Critical Current


Abstract

The invention discloses a face recognition method based on an intelligent robot. The method is applied to a Furo-D robot and includes: face detection: a picture of the face is taken with the camera on the robot, and the face is located and its position marked; face feature extraction: features are extracted from the face region; face matching: the relevant features are combined into a feature vector, and the face database is searched for the face image with the most similar feature vector, which is taken as the recognition result. The invention realizes a complete face recognition method for the Furo-D robot and achieves a high recognition rate when the background is relatively simple.

Description

Face recognition method based on an intelligent robot
Technical field
The present invention relates to image processing and deep learning, in particular to face detection and face recognition, and more particularly to a face recognition method based on an intelligent robot.
Background technique
Face recognition comprises two major classes:
(1) verification/authentication (1:1 matching)
(2) identification (1:N matching)
Recently, biometric technology has become a promising tool for access control. For example, granting a person entry to a building is typically based on attributes such as a PIN (personal identification number), an RFID (radio frequency identification) card, or a key. These attributes have weaknesses: a PIN may be cracked, and an RFID card or key may be stolen. In contrast, biometric systems verify identity by checking the individual's own biometric characteristics.
Over the past 15 years, researchers have published a large number of papers focusing on both the theoretical and practical aspects of face recognition. The published papers report general studies as well as techniques that address specific problems (illumination, occlusion, and pose). One interesting face recognition scheme uses non-tensor-product 2-D wavelets and then a two-dimensional linear discriminant technique to enhance the discrimination of facial features; finally, an SVM (support vector machine) is used for classification. Compared with traditional tensor-product wavelets, the new non-separable wavelets detect facial features more reliably. A structured dictionary-learning method learns an occlusion dictionary from face data; the classification technique based on structured sparse representation (SSRC) developed in that research successfully handles face occlusion and illumination variation. An autonomous pose-adjustment method improves the matching degree between the face to be detected and each face in the face database.
Face detection is the technique of locating the position of a face in an image or video stream. It is the key link in automatic face detection and recognition systems and the basis for developing technologies such as face recognition and face tracking, so the accuracy of face detection is crucial; it has become a hot issue in image processing and pattern recognition research. Abroad, face detection has been studied extensively, and well-known face databases include MIT (created by the MIT Media Lab) and CMU (created by Carnegie Mellon University). In China, Tsinghua University, Beijing University of Technology, Microsoft Research Asia, the Institute of Computing Technology of the Chinese Academy of Sciences, and the Institute of Automation of the Chinese Academy of Sciences have also devoted themselves to face detection research. As face detection research deepens, the number of related papers published worldwide has also increased dramatically. In July 1997, IEEE PAMI (Transactions on Pattern Analysis and Machine Intelligence) published a special issue on face recognition. Major conferences such as IEEE FG (International Conference on Automatic Face and Gesture Recognition), ICIP (International Conference on Image Processing), and CVPR (Conference on Computer Vision and Pattern Recognition) publish many excellent face detection papers every year.
Today, a potentially large number of applications require a completely reliable face recognition system; the technology must therefore become more mature before it can be widely deployed in routine practice. The general trend among researchers is to focus on eliminating the influence of factors such as low resolution or pose variation on face recognition. This benefits the accuracy of face recognition, because in most cases the application scenario of the system is known. Research breakthroughs in this area have allowed applications of automatic face recognition, such as e-passport gates and access control, to develop. However, further research is needed to automate face recognition in several other applications. For example, a security system based on CCTV (closed-circuit television monitoring) could identify criminals; the problems in this kind of application are occlusion and low resolution. 3D-based methods have already shown results on problems such as pose variation.
Although a large number of face recognition algorithms exist, some difficulties remain:
(1) Accurate feature localization is essential for good recognition performance. When the face is rotated beyond a certain angle, the resulting changes in facial features are difficult for many face recognition algorithms to handle. Pose, aging, and uneven illumination are the three main problems troubling current face recognition algorithms.
(2) If the face is occluded, the recognition rate drops rapidly. Likewise, structural components such as beards and glasses also significantly affect the recognition rate.
(3) A key factor in face recognition is low resolution; low-resolution face images easily occur in wide-angle and long-distance shots. In addition, closed eyes also affect the recognition accuracy of most face recognition systems, because the system normalizes and rescales the image before recognition.
(4) The handling of details usually determines the performance of the system. For example, the input image is normalized for face rotation, scale, occlusion, and affine deformation to produce a canonical face.
(5) The face recognition algorithm should be chosen according to the application. For example, feature-based methods may not be applicable to low-resolution face images of 15 × 15 pixels or smaller, and when developing a system it must be decided where to use PCA (principal component analysis)/ICA (independent component analysis) and where to use LDA (linear discriminant analysis).
(6) Face recognition is a special and difficult case of object recognition. The difficulty of face recognition is that, from a frontal view, face images look very similar, and the differences between them are crucial for analysis. Studies of the latest face recognition techniques on standard databases such as FERET, FRVT, and FAT (face databases and benchmarks) identify pose, illumination, and age as the main problems of face recognition algorithms.
(7) When most existing face recognition algorithms are put into use under uncontrolled conditions, their recognition accuracy drops rapidly. So far, no face recognition algorithm can efficiently handle all of the above problems.
(8) Many face image databases, such as those in the references, have been collected for testing face recognition algorithms. Each database is intended to test a specific aspect, such as pose, illumination, expression, or occlusion. Previous research shows that face recognition under controlled conditions is mature; however, face recognition becomes challenging when carried out in outdoor environments.
Summary of the invention
The present invention provides a face recognition method based on an intelligent robot. The purpose of the present invention is to implement complete face recognition and apply it to the Furo-D robot, so that the Furo-D robot possesses the face recognition function; ultimately, when a user stands in front of the robot, the robot can actively greet the user and say the user's name. The method is described below:
A face recognition method based on an intelligent robot, the method being applied to a Furo-D robot, the method comprising:
face detection: a picture of the face is taken with the camera on the robot, and the face is located and its position marked;
face feature extraction: features are extracted from the face region;
face matching: the relevant features are combined into a feature vector, and the face database is searched for the face image with the most similar feature vector, which is taken as the recognition result.
The beneficial effect of the technical scheme provided by the present invention is that it realizes a complete face recognition method for the Furo-D robot, which achieves a high recognition rate when the background is relatively simple.
Detailed description of the invention
Fig. 1 shows the rank curves before applying the SSR (single-scale Retinex image enhancement) algorithm;
Fig. 2 shows the rank curves after applying the SSR algorithm;
Fig. 3 is the data flow diagram of the requirements analysis;
Fig. 4 is the data flow diagram of the face recognition module;
Fig. 5 is the HSV (hue, saturation, value) hexagonal-cone color model;
Fig. 6 illustrates the process of face-like pixel detection;
Fig. 7 illustrates the principle of face-like pixel detection,
wherein (a) shows self-replication and (b) shows diffusion;
Fig. 8 shows an LBP (local binary pattern) code in a 3 × 3 neighborhood;
Fig. 9 shows the different texture primitives detected by the LBP operator;
Fig. 10 shows the prototype neighborhood sets for three different values of P and R;
Fig. 11 shows histograms,
wherein (a) is the image histogram before SSR processing and (b) is the image histogram after SSR processing.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
Embodiment 1
To achieve the above objective, the embodiment of the present invention mainly applies the LBP (local binary pattern) method to achieve good feature extraction. In addition, illumination normalization of the face is realized with the SSR algorithm, which makes the face more standard and further improves the recognition performance, so that face recognition based on the intelligent robot can give users a better experience.
The face recognition method based on an intelligent robot is implemented in three stages: face detection, face feature extraction, and face recognition.
Face detection: a picture of the face is taken with the camera on the robot, and the face is located and its position marked;
face feature extraction: features are extracted from the face region;
face recognition: the relevant features are combined into a feature vector, and the face database is searched for the face image with the most similar feature vector, which is taken as the recognition result.
In summary, the embodiment of the present invention realizes a face recognition method for the Furo-D robot that achieves a high recognition rate when the background is relatively simple.
Embodiment 2
The scheme of Embodiment 1 is further described below with reference to specific examples:
The face recognition method requires the user to click the face recognition function module to run face recognition. The default camera of the robot is first invoked through WebCam (a method for calling the device's default camera); clicking the "take photo" button photographs the face, and the captured image is stored under a local path. The image is passed to the face recognition module for face recognition; after recognition, the result is saved locally as a txt file and the speech module is called: the robot reads the content of the txt file aloud, i.e. it speaks the recognition result.
The face recognition method identifies a person's identity from a digital image. To recognize a face with the face recognition method, the digital image goes through three main stages, namely face detection, feature extraction, and face matching.
1. Face detection separates the face region from the background image, and the position of the face is marked with a box. In the feature extraction stage, the most useful and distinctive features of the face image are extracted and stored in the face database, which returns a unique identifier for the face image. Once these features are obtained, the captured image can be compared in the recognition stage with the images in the face database. The face image in the database whose feature vector is at the smallest distance from that of the captured image is taken as the face recognition result. Finally, the intelligent robot speaks the name of the recognized person.
2. The data flow diagram of face recognition is shown in Fig. 4:
(1) Skin-color-based face detection
This method uses a face detection algorithm based on skin color, which detects and locates face regions with an evolutionary computation technique. The main difference in skin color between people of different races, ages, and sexes lies mainly in brightness; once this factor is removed, skin colors cluster within a certain region of a suitable color space. By detecting and segmenting the skin color, the face region can then be detected; this is the core idea of face detection based on skin-color features.
A. Color models
A) Normalized RGB color model
A good skin-color model is important: it should support face detection for people of different races, ages, and sexes. Some studies show that (1) human skin colors cluster in a small region of the RGB color space, and (2) the skin colors of different people differ more in brightness than in chrominance. The normalized RGB model can therefore only characterize faces whose color variation is small.
In general, the color of each pixel is composed of the three color values R, G, and B, the brightness is I = R + G + B, and each component varies in the range [0, 1, ..., 255]. Since the color information is very sensitive to the brightness of the pixel, each color component can be normalized by the brightness I as shown in formula (1):
r = R/I, g = G/I, b = B/I (1)
where r + g + b = 1, so the normalized color can be represented by r and g alone.
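As a small illustration of formula (1), the brightness normalization of a single pixel can be sketched in a few lines of Python; the function name and structure are illustrative, not part of the patent:

def normalized_rg(pixel_rgb):
    """Normalize an (R, G, B) pixel by its brightness I = R + G + B (formula (1))."""
    R, G, B = (float(c) for c in pixel_rgb)
    I = R + G + B
    if I == 0:                 # avoid division by zero for pure black pixels
        return 0.0, 0.0
    return R / I, G / I        # b = 1 - r - g, so only r and g need to be kept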
Studies show that the skin-color distribution in the normalized RGB color space can be represented by a 2-D Gaussian distribution G(m, σ²), as shown in formulas (2) and (3), where m_r and m_g are the Gaussian means of the r and g color distributions, and σ² is the simplified covariance matrix of each distribution.
B) HSV color model
In addition to the normalized RGB model described above, the HSV (hue, saturation, value) model is closer to human color perception. The HSV model can be visualized as the hexagonal cone shown in Fig. 5. Hue (H) measures the spectral composition of a color and is expressed as an angle from 0° to 360°. Saturation (S) is the purity of the color and ranges from 0 to 1. The brightness of the color is defined by value (V), which also ranges from 0 to 1. Formulas (4) to (8) convert from the RGB model to the HSV color model, in particular:
H = H1 if B ≤ G (5)
H = 360° − H1 if B > G (6)
C) Color model selection
The face detection in this method uses the RGB model and the HSV model simultaneously, with the parameters of the two models chosen as shown in formulas (9) and (10):
0.36 ≤ r ≤ 0.465, 0.28 ≤ g ≤ 0.363 (9)
0 ≤ H ≤ 50, 0.20 ≤ S ≤ 0.68, 0.35 ≤ V ≤ 1.0 (10)
Formulas (9) and (10) are used to detect skin-like pixels.
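The skin-like pixel test of formulas (9) and (10) can be sketched as follows. The HSV conversion uses Python's standard colorsys module (its hue output lies in [0, 1] and is rescaled to degrees here); apart from the threshold constants taken from formulas (9) and (10), everything is an illustrative assumption:

import colorsys

def is_skin_like(rgb):
    """Return True if an (R, G, B) pixel (0-255) passes both the
    normalized-RGB test of formula (9) and the HSV test of formula (10)."""
    R, G, B = (float(c) for c in rgb)
    I = R + G + B
    if I == 0:
        return False
    r, g = R / I, G / I                                           # formula (1)
    h, s, v = colorsys.rgb_to_hsv(R / 255.0, G / 255.0, B / 255.0)
    H = h * 360.0                                                 # hue in degrees
    rule_rgb = 0.36 <= r <= 0.465 and 0.28 <= g <= 0.363          # formula (9)
    rule_hsv = 0 <= H <= 50 and 0.20 <= s <= 0.68 and 0.35 <= v <= 1.0   # formula (10)
    return rule_rgb and rule_hsv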
B. Face region detection and segmentation
A) Detecting skin-like pixels with evolutionary agents
In artificial intelligence, an agent is defined as an intelligent entity capable of autonomous and cooperative action. In this method, an agent is defined as a searcher for skin-like pixels. More specifically, it can perceive its position, evaluate the color value of a pixel, mark skin-like points, and self-replicate and diffuse.
The face-like regions in a color image are detected with evolutionary agents as follows (a minimal code sketch is given after the steps):
(1) An initial set of agents A = {agent_i}, i = 1, 2, 3, ..., N, is uniformly distributed over the image. As shown in Fig. 7, the white pixels represent agents. To detect all possible faces, an agent is placed in every 20 × 15 block of the image, so N is the total number of pixels of the image divided by 300.
(2) For each agent in A, the HSV value of the agent's point is computed with formulas (4) to (8), and formulas (9) and (10) are used to decide whether the point is a skin-like pixel.
(3) If the point belongs to a face-like region and has not yet been visited by another agent, the agent marks the point. It then reproduces four child agents at its four neighboring points, as shown in Fig. 7(a), and these child agents are added to the set A with the same family index as the parent agent. After self-replication the parent agent is removed from the image.
(4) If the point does not belong to a face-like region, or it has already been visited by another agent, the agent diffuses randomly to one of its eight neighbors, as shown in Fig. 7(b), and its age increases by 1. If the age of an agent exceeds its lifespan, the agent is removed from the image environment. For face detection, owing to the connectivity of face images, the lifespan of an agent is set to 1.
(5) If the set A is empty, the evolutionary computation stops; otherwise, the procedure repeats from step (2). The regions marked by the agents are the face-like regions.
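The following is a minimal, simplified sketch of steps (1)-(5). It keeps the core behavior (mark skin-like points, self-replicate to the 4-neighborhood, diffuse randomly in the 8-neighborhood, lifespan 1) but is only an illustration under stated assumptions, not the patented implementation; is_skin_like is the pixel test sketched earlier, and the image is assumed to be an H × W × 3 array:

import random
import numpy as np

def detect_skin_regions(img, is_skin_like, lifespan=1):
    """Evolutionary-agent skin detection: returns a boolean mask of marked (skin-like)
    pixels and a same-shape array with the family index of the marking agent (-1 = none)."""
    h, w = img.shape[:2]
    marked = np.zeros((h, w), dtype=bool)
    family = np.full((h, w), -1, dtype=int)
    agents = []                                      # each agent: (y, x, family_index, age)
    fam = 0
    for yy in range(7, h, 15):                       # step (1): one agent per 20 x 15 block
        for xx in range(10, w, 20):
            agents.append((yy, xx, fam, 0))
            fam += 1
    while agents:                                    # step (5): stop when the set is empty
        y, x, f, age = agents.pop()
        if is_skin_like(img[y, x]) and not marked[y, x]:        # steps (2)-(3)
            marked[y, x] = True
            family[y, x] = f
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # self-replication, Fig. 7(a)
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    agents.append((ny, nx, f, 0))
        else:                                        # step (4): random diffusion, Fig. 7(b)
            if age + 1 > lifespan:
                continue                             # lifespan exceeded: the agent is removed
            dy, dx = random.choice([(-1, -1), (-1, 0), (-1, 1), (0, -1),
                                    (0, 1), (1, -1), (1, 0), (1, 1)])
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                agents.append((ny, nx, f, age + 1))
    return marked, family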
Skin-like pixel detection with agents is shown in Fig. 6, where (a) is an example image with agents uniformly distributed over the color image, (b) shows the agents detecting skin-like pixels, and (c) is the final skin detection result.
B) Segmenting face-like regions
During detection each agent family detects one region, and one face-like region may be detected by multiple agent families; the face-like region in Fig. 6(b), for example, is detected by several agent families. Therefore, to determine how many face regions there are in the image, regions need to be merged. The basic idea of region merging in the proposed method is: if two regions are connected at more than a certain number of points (5 points was chosen after many experiments), the two regions are merged into one larger region; otherwise, the two regions are treated as different face regions.
More specifically, for each initial agent i, i = 0, 1, ..., N − 1, if its initial position belongs to a face-like region it is called a valid agent. This yields the subset of valid agents, and the family index of each valid agent is stored; the invalid agents are deleted from the image. During detection, if more than 5 points have been visited both by an agent i in one family and by an agent j in another family, then W_ij = 1, which means that the regions detected by the two agent families belong to one larger region; otherwise W_ij = 0. This gives the agent relation matrix (ARM) [W_ij]. By analyzing the ARM it can be determined how many of the regions detected by different agent families belong to the same face-like region. These regions are then merged into one face candidate region, and the number of face candidate regions is thereby determined.
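A minimal sketch of the region merging driven by the agent relation matrix (ARM) [W_ij]: families whose regions share more than 5 visited points are united with a simple union-find. The shared_counts input and the function names are illustrative assumptions; only the merging rule comes from the description above:

def merge_families(shared_counts, n_families, threshold=5):
    """shared_counts[(i, j)] = number of points visited by both family i and family j.
    Returns a list mapping each family index to the index of its merged face candidate."""
    parent = list(range(n_families))

    def find(i):                        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for (i, j), count in shared_counts.items():
        if count > threshold:           # W_ij = 1: the two regions belong to one larger region
            parent[find(i)] = find(j)

    roots = {find(i) for i in range(n_families)}
    label = {r: k for k, r in enumerate(sorted(roots))}
    return [label[find(i)] for i in range(n_families)]   # face candidate index per family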
(2) LBP feature extraction
This method uses the LBP method for face recognition. Since the images are taken at close range and image preprocessing can improve the image resolution, face recognition can be realized with a feature extraction algorithm. Ojala et al. introduced LBP in 1996; it is described as an ordered set of binary comparisons of pixel intensities between a center pixel and its surrounding pixels. It is used to extract distinctive and useful features from the preprocessed image and is one of the most effective and up-to-date methods for face recognition. LBP can describe the texture and shape of a digital image. Each pixel of the image is labeled with an LBP code, which is obtained by converting a binary code into a decimal code. As shown in Fig. 8, in a 3 × 3 neighborhood the center pixel is used as the threshold: the values of the 8 neighboring pixels are compared with the threshold and labeled 0 if smaller and 1 if larger; the binary string is converted to decimal, and the final LBP code generated for the center pixel is 124. The image is first divided into several small blocks from which features are extracted; the LBP histogram of each block is then computed from the obtained features; finally, all LBP histograms of the image are combined into one concatenated vector. Images can then be compared by measuring the similarity (distance) between their histograms. Several studies show that face recognition with the LBP method gives very good results under different facial expressions, different lighting conditions, image rotation, and aging. The speed and recognition performance of LBP systems are also outstanding.
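The block-wise LBP description of the preceding paragraph can be sketched as follows. This is a plain NumPy illustration of the basic 3 × 3 operator and of concatenating block histograms; the 7 × 7 block grid and the ≥ comparison convention are assumptions, not values fixed by the patent:

import numpy as np

def lbp_3x3(gray):
    """Basic LBP: label each interior pixel of a grayscale image with its 8-bit code."""
    g = gray.astype(np.int32)
    H, W = g.shape
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # neighbors visited in a fixed order; neighbor k contributes the bit 2**k
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        codes += (neigh >= center).astype(np.int32) << k
    return codes

def lbp_feature_vector(gray, blocks=(7, 7)):
    """Divide the LBP code image into blocks, histogram each block, and concatenate."""
    codes = lbp_3x3(gray)
    h, w = codes.shape
    by, bx = h // blocks[0], w // blocks[1]
    hists = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            block = codes[i * by:(i + 1) * by, j * bx:(j + 1) * bx]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist / max(block.size, 1))   # normalize each block histogram
    return np.concatenate(hists)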
Although LBP was originally adopted for face recognition and face verification, it has since been applied in many more applications worldwide. Several extensions and improvements have been made over the past few years, and extensive research is still being carried out to improve the robustness of the method.
C. Uniform LBP
An important special case of LBP is uniform LBP. As the number of sampling points in the neighborhood set increases, the number of binary pattern types increases sharply. In practical applications, the operator should not only be as simple as possible, but also fast to compute and small in storage. To solve this problem, Ojala proposed "uniform patterns" to reduce the dimensionality of the pattern categories of the LBP operator. A uniform LBP descriptor contains at most two bitwise transitions from 0 to 1 or vice versa. Since the binary pattern string is circular, it is impossible for an LBP descriptor to have exactly one transition; this means that a uniform pattern has either no transitions or exactly two. 11111111 and 10001111 are examples of uniform binary patterns with zero transitions and two transitions, respectively. If P is the number of sampling points in the neighborhood set, then according to the literature the number of patterns with two bitwise transitions is P(P − 1); compared with the 2^P possible combinations of non-uniform patterns, this large reduction in the number of patterns is beneficial. Another reason for using uniform LBP is that it detects only the most important features in the preprocessed image, such as the corners, spots, edges, and line ends shown in Fig. 9.
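A uniform pattern can be recognized by counting the bitwise transitions in the circular code, as in the following small sketch (illustrative, not taken from the patent):

def is_uniform(code, P=8):
    """True if the P-bit circular LBP code has at most two 0/1 transitions."""
    bits = [(code >> k) & 1 for k in range(P)]
    transitions = sum(bits[k] != bits[(k + 1) % P] for k in range(P))
    return transitions <= 2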
D. LBP operators
As described, the original LBP operator is defined within a 3 × 3 pixel neighborhood: the center pixel is used as the threshold and compared with the gray values of its 8 adjacent pixels, the radius being only one pixel; this operator is written LBP^{u2}_{8,1} (as shown in Fig. 10). Later, to improve the feature vector, researchers proposed different operators. With these extended operators, neighborhood pixels or sampling points of different sizes can be chosen. According to the literature, LBP^{u2}_{16,2} (16 neighborhood pixels at a radius of 2 pixels) and LBP^{u2}_{8,2} (8 neighborhood pixels at a radius of 2 pixels) obtain relatively good results on most databases. The project tests these two operators and the original operator.
E. Mathematical model
To use neighborhoods of different sizes, the LBP operator is extended by drawing a circle of radius R around the center pixel. P sampling points are taken on the edge of the circle and compared with the value of the center pixel. Fig. 10 shows three different combinations of P and R.
If the coordinates of the center pixel are (x_c, y_c), the coordinates (x_p, y_p) of the P neighborhood pixels can be calculated with formulas (11) and (12), where P is the number of sampling points in the neighborhood set and p indexes the individual sampling points:
x_p = x_c + R cos(2πp/P) (11)
y_p = y_c + R sin(2πp/P) (12)
The LBP code is then generated with formula (13): for the pixel (x_c, y_c), the binomial weight 2^p is assigned to each sign comparison s(g_p − g_c), where g_c and g_p are the gray values of the center pixel and of sampling point p, and s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise; the sum of these weighted terms gives the LBP code:
LBP_{P,R}(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p (13)
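Formulas (11)-(13) can be sketched as follows. For simplicity the neighbor coordinates are rounded to the nearest pixel (bilinear interpolation, often used in practice, is omitted), so this is an illustrative approximation of the circular operator rather than an exact implementation:

import math

def circular_lbp_code(gray, xc, yc, P=16, R=2.0):
    """LBP_{P,R} code of the pixel (xc, yc): formulas (11)-(13) with rounded sampling points.
    gray is a 2-D grayscale array indexed as [row, col] = [y, x]."""
    gc = float(gray[yc, xc])
    code = 0
    for p in range(P):
        xp = xc + R * math.cos(2 * math.pi * p / P)    # formula (11)
        yp = yc + R * math.sin(2 * math.pi * p / P)    # formula (12)
        gp = float(gray[int(round(yp)), int(round(xp))])
        if gp >= gc:                                   # s(g_p - g_c)
            code += 1 << p                             # binomial weight 2^p, formula (13)
    return code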
(3) Illumination normalization
The main problems for a face recognition system are handling illumination variation and pose variation. These variations cause serious performance problems for face recognition systems. Most illumination-induced differences between images are larger than the differences between individuals, which is clearly a serious problem. The project tests the performance of the LBP method and improves it by normalizing these variations.
F. SSR
Referring to Fig. 11, illumination reduction is performed with the SSR algorithm to eliminate illumination variation. SSR passes the digital image through a low-pass filter; in this experiment a Gaussian filter is used as the low-pass filter. According to the literature, a digital image actually consists of two frequency components, illumination (the low-frequency component) and reflectance (the high-frequency component), and the illumination factor needs to be removed:
f(x, y) = I(x, y) R(x, y) (14)
where (x, y) are the coordinates in the digital image, I is the illumination factor in the digital image, and R represents the reflectance. The mathematical model of the extended SSR is shown in formula (15):
R = Log(f(x, y)) − Log(f(x, y)) * T(x, y) (15)
where f(x, y) represents the real digital image and T(x, y) is the Gaussian low-pass filter that passes the low-frequency component of the real image; it can be expressed mathematically by formula (16), where c represents the standard deviation, which is set to 30 in this method.
In formula (15), the real image is convolved with the low-pass filter so that only the low-frequency component, i.e. the illumination factor, is retained. This convolution result is then subtracted from the real image in the logarithmic domain, and the output is an image with a reduced illumination factor.
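A minimal sketch of the SSR illumination reduction of formulas (14)-(16), assuming a grayscale image and using SciPy's Gaussian filter as the low-pass filter T with standard deviation c = 30. The convolution is applied before the logarithm, which is one common reading of formula (15), and the rescaling to 0-255 at the end is an added convenience, not part of the patent:

import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(gray, c=30.0):
    """SSR: subtract the low-pass (illumination) component from the image in the log domain."""
    f = gray.astype(np.float64) + 1.0          # +1 avoids log(0)
    low = gaussian_filter(f, sigma=c)          # f * T(x, y): Gaussian low-pass, formula (16)
    r = np.log(f) - np.log(low)                # formula (15): reflectance estimate
    # stretch the result back to 0-255 for display or further processing
    r = (r - r.min()) / (r.max() - r.min() + 1e-12) * 255.0
    return r.astype(np.uint8)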
Embodiment 3
To test the performance of the method, the LBP method is implemented in MATLAB on the different types of images obtained from the Yale dataset. The sampling points and the radius of the LBP operator are varied to observe the influence of these parameters on performance. Compared with the results obtained without the SSR algorithm, a higher recognition rate is observed after applying the extended SSR algorithm.
The preprocessed images and the available training data are used to test the algorithm, and the LBP method generates a feature vector for each image. A distance matrix is computed using the chi-square statistic (χ²); it contains the similarity (distance) measure between each pair of images. Then, using the distance matrix and the lists of face database (gallery) images and probe images, the system computes the rank curve. The rank curve is the cumulative match score between the probe images and the gallery images, and the recognition rate is plotted as a function of the rank in the score list. For example, rank 1 refers to the gallery image at the smallest distance from the probe image, rank 2 is the second smallest distance, and rank 3 is the third smallest distance.
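The chi-square distance and the rank computation described above can be sketched like this (illustrative only; the inputs are assumed to be the concatenated LBP histograms produced earlier):

import numpy as np

def chi_square_distance(hist1, hist2, eps=1e-10):
    """Chi-square statistic between two LBP histograms (smaller = more similar)."""
    h1 = np.asarray(hist1, dtype=np.float64)
    h2 = np.asarray(hist2, dtype=np.float64)
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def rank_of_correct_match(probe_hist, gallery_hists, correct_index):
    """Rank of the correct gallery image for one probe: 1 = closest, 2 = second closest, ..."""
    dists = [chi_square_distance(probe_hist, g) for g in gallery_hists]
    order = np.argsort(dists)                      # gallery indices sorted by distance
    return int(np.where(order == correct_index)[0][0]) + 1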
The experiment is carried out on the Yale dataset, which is characterized by strong variations in illumination and expression. This task is therefore more complex than on other standard datasets, and the results are correspondingly worse.
The rank curves before applying the SSR algorithm are shown in Fig. 1. Fig. 1 shows that LBP^{u2}_{8,1} has an accuracy of 25% at rank 1 and reaches its maximum accuracy of 93% at rank 6. LBP^{u2}_{8,2} achieves 100% accuracy at rank 13, with 47% accuracy at rank 1. LBP^{u2}_{16,2} achieves 100% accuracy at rank 9, with 47% accuracy at rank 1. The LBP^{u2}_{16,2} operator therefore gives the best result among these operators.
The rank curves after applying the SSR algorithm are shown in Fig. 2. After SSR is applied on the Yale dataset, an improvement in the results can be observed. As Fig. 2 shows, the LBP^{u2}_{8,1} result does not improve after SSR; in fact, after the first few ranks the situation becomes worse. Some improvement can be observed for the LBP^{u2}_{8,2} operator, which gives better accuracy especially at low ranks. LBP^{u2}_{16,2} has a high recognition rate, especially at rank 7.
This experiment concerns the design and implementation of an efficient face recognition system that performs well in an artificially controlled environment; it has also been extended so that the system remains effective in less controlled environments. In this work the face is normalized by applying the SSR algorithm, which removes the illumination factor from the real image, so that the feature extraction algorithm can easily match the face with the most similar face image in the database. By extracting face images from the standard Yale dataset, rank curves are drawn for the different uniform LBP operators. The rank curves show that the LBP^{u2}_{8,1} operator reaches its maximum accuracy before the other operators but never reaches 100% accuracy. Compared with LBP^{u2}_{8,2}, the LBP^{u2}_{16,2} operator achieves 100% accuracy at a smaller rank. After the illumination factor is normalized, the LBP^{u2}_{16,2} operator obtains further improved results. It can therefore be concluded that applying the SSR algorithm and using LBP^{u2}_{16,2} as the LBP operator improves the performance of the recognition system.
These results could be improved further by using weighted LBP, in which a weight is assigned to each region of the face: the more important the features of a region are, the higher the weight assigned to that region, so that the corresponding part of the image is matched more accurately.
Those skilled in the art will understand that the drawings are only schematic diagrams of a preferred embodiment, and that the serial numbers of the embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
The above are only preferred embodiments of the present invention and are not intended to limit the invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (3)

1. A face recognition method based on an intelligent robot, characterized in that the method is applied to a Furo-D robot and comprises:
face detection: taking a picture of the face with the camera on the robot, and locating the face and marking its position;
face feature extraction: extracting features from the face region;
face matching: combining the relevant features into a feature vector, and searching the face database for the face image with the most similar feature vector, which is taken as the recognition result.
2. The face recognition method based on an intelligent robot according to claim 1, characterized in that the face detection detects the face region by detecting and segmenting the skin color.
3. The face recognition method based on an intelligent robot according to claim 1, characterized in that the face detection uses the RGB model and the HSV model simultaneously, with the parameters of the two models chosen as:
0.36 ≤ r ≤ 0.465, 0.28 ≤ g ≤ 0.363
0 ≤ H ≤ 50, 0.20 ≤ S ≤ 0.68, 0.35 ≤ V ≤ 1.0
wherein r and g are the normalized red and green color components, H is the hue, S is the saturation, and V is the value (brightness) of the color.
CN201910118367.6A 2019-02-16 2019-02-16 Face recognition method based on an intelligent robot Pending CN109919041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910118367.6A CN109919041A (en) 2019-02-16 2019-02-16 Face recognition method based on an intelligent robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910118367.6A CN109919041A (en) 2019-02-16 2019-02-16 Face recognition method based on an intelligent robot

Publications (1)

Publication Number Publication Date
CN109919041A true CN109919041A (en) 2019-06-21

Family

ID=66961595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910118367.6A Pending CN109919041A (en) 2019-02-16 2019-02-16 Face recognition method based on an intelligent robot

Country Status (1)

Country Link
CN (1) CN109919041A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309423A (en) * 2020-02-13 2020-06-19 北京百度网讯科技有限公司 Configuration method, device, equipment and medium of terminal interface image
CN111666925A (en) * 2020-07-02 2020-09-15 北京爱笔科技有限公司 Training method and device for face recognition model
CN113870454A (en) * 2021-09-29 2021-12-31 平安银行股份有限公司 Attendance checking method and device based on face recognition, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
CN103500339A (en) * 2013-09-11 2014-01-08 北京工业大学 Illumination face identification method integrating single-scale Retinex algorithm and normalization structure descriptor
CN103632132A (en) * 2012-12-11 2014-03-12 广西工学院 Face detection and recognition method based on skin color segmentation and template matching
CN103679157A (en) * 2013-12-31 2014-03-26 电子科技大学 Human face image illumination processing method based on retina model
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking
CN105182983A (en) * 2015-10-22 2015-12-23 深圳创想未来机器人有限公司 Face real-time tracking method and face real-time tracking system based on mobile robot
CN106326816A (en) * 2015-06-30 2017-01-11 芋头科技(杭州)有限公司 Face recognition system and face recognition method
CN106339702A (en) * 2016-11-03 2017-01-18 北京星宇联合投资管理有限公司 Multi-feature fusion based face identification method
CN106934377A (en) * 2017-03-14 2017-07-07 深圳大图科创技术开发有限公司 A kind of improved face detection system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
CN103632132A (en) * 2012-12-11 2014-03-12 广西工学院 Face detection and recognition method based on skin color segmentation and template matching
CN103500339A (en) * 2013-09-11 2014-01-08 北京工业大学 Illumination face identification method integrating single-scale Retinex algorithm and normalization structure descriptor
CN103679157A (en) * 2013-12-31 2014-03-26 电子科技大学 Human face image illumination processing method based on retina model
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking
CN106326816A (en) * 2015-06-30 2017-01-11 芋头科技(杭州)有限公司 Face recognition system and face recognition method
CN105182983A (en) * 2015-10-22 2015-12-23 深圳创想未来机器人有限公司 Face real-time tracking method and face real-time tracking system based on mobile robot
CN106339702A (en) * 2016-11-03 2017-01-18 北京星宇联合投资管理有限公司 Multi-feature fusion based face identification method
CN106934377A (en) * 2017-03-14 2017-07-07 深圳大图科创技术开发有限公司 A kind of improved face detection system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
夏军: "Research on face detection and recognition algorithms for robots under complex conditions", China Master's Theses Full-text Database, Information Science and Technology *
戴健: "Research on face recognition algorithms for ID photos", China Master's Theses Full-text Database, Information Science and Technology *
杜娟娟: "Research on distributed autonomous agent optimization algorithms", China Master's Theses Full-text Database, Information Science and Technology *
段红燕 et al.: "Face recognition combining improved single-scale Retinex and LBP", Computer Engineering and Applications *
黄世震 et al.: "Real-time face detection under complex illumination based on embedded Linux", Microcomputer & Its Applications *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309423A (en) * 2020-02-13 2020-06-19 北京百度网讯科技有限公司 Configuration method, device, equipment and medium of terminal interface image
CN111309423B (en) * 2020-02-13 2023-11-21 北京百度网讯科技有限公司 Terminal interface image configuration method, device, equipment and medium
CN111666925A (en) * 2020-07-02 2020-09-15 北京爱笔科技有限公司 Training method and device for face recognition model
CN111666925B (en) * 2020-07-02 2023-10-17 北京爱笔科技有限公司 Training method and device for face recognition model
CN113870454A (en) * 2021-09-29 2021-12-31 平安银行股份有限公司 Attendance checking method and device based on face recognition, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20190621