CN1794265A - Method and device for distinguishing face expression based on video frequency - Google Patents

Method and device for distinguishing face expression based on video frequency

Info

Publication number
CN1794265A
CN1794265A (application CN 200510135670A)
Authority
CN
China
Prior art keywords
face
people
human
video
unit
Prior art date
Legal status
Granted
Application number
CN 200510135670
Other languages
Chinese (zh)
Other versions
CN100397410C (en)
Inventor
谢东海 (Xie Donghai)
黄英 (Huang Ying)
王浩 (Wang Hao)
Current Assignee
GUANGDONG ZHONGXING ELECTRONICS Co Ltd
Original Assignee
Vimicro Corp
Priority date
Filing date
Publication date
Application filed by Vimicro Corp filed Critical Vimicro Corp
Priority to CNB2005101356705A priority Critical patent/CN100397410C/en
Publication of CN1794265A publication Critical patent/CN1794265A/en
Application granted granted Critical
Publication of CN100397410C publication Critical patent/CN100397410C/en

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a video-based facial expression recognition method and device. An ASM contour-extraction algorithm is applied during feature-vector extraction: the face image is extracted according to the eye positions, a normalized feature face is generated down to the chin, and the most effective features in the feature face are selected for expression recognition. The method can eliminate the influence of illumination so that the gray-level means and variances of the left and right halves of the face are nearly identical.

Description

Human facial expression recognition method and device based on video
Technical field
The present invention relates to a recognition method, and in particular to a method and device for recognizing human facial expressions based on video.
Background technology
With the deepening of human-computer interaction research and its great application prospects, facial expression recognition has become a research focus in pattern recognition and artificial intelligence. Real-time recognition of facial expressions, however, is a very difficult problem: much of the theory is still immature, and mature commercial results are almost nonexistent. The difficulty lies in the fact that the same expression made by different people varies considerably, while the differences between distinct expressions are subtle. Illumination and head pose also affect recognition accuracy. Expression recognition methods are generally statistical: feature vectors are extracted from face images, a classifier is trained, and recognition is then performed.
Feature extraction is the key to recognition. Features currently used for expression recognition fall into two kinds: local features and global (holistic) features. Local-feature-based recognition exploits differences in the position, size, and relative arrangement of the facial features of each person (eyebrows, eyes, nose, mouth, face contour, and so on) to achieve facial expression recognition. Global-feature-based recognition starts from the whole face image and extracts features that reflect the entire face. Local features involve a smaller amount of data, but because they represent the whole image with a limited set of features they can discard useful information, and accurate, automatic extraction of facial features is itself a very difficult problem.
In the prior art, it has been proposed to recognize facial expressions using the Fisher criterion function — that is, to recognize the global features of the face — and to recognize faces using the back-propagation algorithm. The basic steps of such a method are: a. preprocess the received image; b. extract local features of the face; c. extract global features; d. fuse the local and global features; e. finally recognize the facial expression of the received face. This method, however, analyzes and judges only the relatively clear features of the face; although it can roughly reflect the expression a person displays, it is affected by external factors such as illumination and still cannot extract expressions accurately, automatically, and quickly.
Summary of the invention
The technical problem to be solved by the present invention is that the prior art cannot extract facial expressions accurately and automatically. The present invention proposes a facial expression recognition method for video, intended to overcome the defects of the prior art. The method is based on the global features of the face: a standard face is generated from the automatically extracted chin contour, and the AdaBoost algorithm is then used to select the most effective features, yielding a robust recognition result.
The method of the present invention is an algorithm, proposed for the video data of a common USB camera, that can detect and track faces automatically in real time and recognize the common expressions of a frontal face — especially the four most common expressions — while keeping the recognized expression free from the influence of factors such as illumination.
The object of the present invention is achieved as follows:
A facial expression recognition method based on video comprises the following steps:
collecting facial expression image data of a face from the video data input by a USB camera, and preprocessing the image data;
extracting the position of the face in the preprocessed image in real time;
locating the eyes of the detected face according to a human-eye classifier;
extracting the image region containing the face according to the determined eye positions and the face classifier, and normalizing it;
locating the facial organs;
determining the position of the chin from the organ locations, determining the face region in the image, generating a feature face, and using the feature face as a classification sample;
calculating the Gabor features of the feature-face image;
selecting among the calculated Gabor features;
constructing a support vector machine classifier from the selected features;
obtaining the facial expression recognition result from the constructed classifier.
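The claimed steps form a fixed pipeline from frame to expression label. As a minimal sketch (the stage names and callables below are illustrative placeholders, not the patent's API), the steps can be chained so that a failed stage — for example, no face found in the frame — aborts recognition for that frame:

```python
def recognize_expression(frame, stages):
    """Run a preprocessed video frame through an ordered list of
    (name, fn) stages: face detection, eye location, normalization,
    organ location, feature-face generation, Gabor features,
    feature selection, and SVM classification.

    Each fn maps the running value onward; returning None (e.g. when
    no face is detected) aborts recognition for this frame.
    """
    value = frame
    for name, fn in stages:
        value = fn(value)
        if value is None:
            return None
    return value


# Illustrative usage with dummy stages standing in for the real ones.
stages = [
    ("detect_face", lambda f: f),        # would return a face region
    ("classify", lambda f: "smile"),     # would return an expression label
]
label = recognize_expression("frame", stages)
```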
When collecting the data input from the USB camera, the following face-tracking steps are included:
before a tracking target is obtained, searching every frame to detect whether a face image exists;
if one or more faces are detected in a frame, tracking the detected faces in the following two frames, detecting and verifying the tracked faces in those two frames, and judging the detection results;
only after a face has been detected at the same position in three consecutive frames does the algorithm conclude that a face image exists at that position, at which point the real-time face detection algorithm extracts the position of the face in the image;
if multiple face images are detected in the scene, the largest face is selected and tracked, and tracking of this face continues in subsequent frames; if the similarity between the tracking results of consecutive frames is too low, or no upright frontal face is detected in a tracked region for a long time, tracking stops.
The normalization is realized by a resampling algorithm: the resampling algorithm is a scaling, rotation, and translation transform that aligns the detected eye positions with the eye positions of the classifier's standard face.
The facial organs are located by a target extraction method, namely the active shape model algorithm.
The concrete steps of the active shape model algorithm are:
extracting facial contour information from the video data to build a sample set;
normalizing and aligning the samples in the sample set, then performing a principal component analysis transform;
using the gray-level information of each control point in the transformed contour information as the basis for point search;
using the mean contour computed by principal component analysis as the initial value of the contour search, and searching iteratively to obtain the final result.
The steps of the iterative search are:
obtaining initial offsets from the gray-level information, aligning the new contour obtained by the gray-level search to the mean contour, and calculating the alignment parameters;
calculating the change in shape from the aligned data and the statistics computed by principal component analysis;
mapping the changed shape back to the original position through the inverse of the alignment parameters as the result of one search;
repeating the above search steps, iterating until convergence to obtain the final result.
Generating the feature face comprises comparing the extracted facial contour with the face in the face classifier and adjusting for tilt.
Between generating the feature face and calculating its Gabor features there is a further step of processing the generated feature face: the left and right halves of the feature face are gray-level normalized so that the gray means and variances of the two halves are identical.
A gray-level transition band is provided between the left and right halves of the feature face.
The support vector machine classifier is a multi-class classifier, organized one-versus-one, one-versus-rest, or as a decision tree.
The present invention also proposes a video-based facial expression recognition device, comprising a video data acquisition unit, an image processing unit, a face information database, and a facial expression recognition unit;
the video data acquisition unit collects face images from the video and sends them to the image processing unit;
the image processing unit retrieves face information from the face information database, compares it with the collected face images, performs the calculations on the face data, and sends the calculated data to the facial expression recognition unit;
the facial expression recognition unit recognizes the collected face images according to the identification information stored in the face information database.
The device may also comprise a display unit that displays the recognized facial expression.
The image processing unit comprises a comparison unit, a feature generation unit, a calculation unit, and a classifier unit;
the comparison unit compares the face image information with the image information in the face database, detects the face and eyes, extracts the face image according to the eye positions, and sends the face information to the feature generation unit;
the feature generation unit locates the facial organs, generates a feature face according to the chin, and sends the feature face to the calculation unit as a sample;
the calculation unit calculates the Gabor features of the feature-face image, selects features using the AdaBoost algorithm, and sends the selected features to the classifier unit;
the classifier unit constructs a support vector machine classifier from the selected features and sends the classifier information to the facial expression recognition unit.
The video data acquisition unit may further comprise a video data tracking unit, which tracks and detects face data in the video data and judges whether to start acquisition.
The above technical scheme of the present invention makes it possible to extract facial expressions from video accurately and automatically. The method adopts the AdaBoost and ASM algorithms and can eliminate the influence of illumination; the face image is specially processed so that the gray means and variances of the left and right halves of the face are essentially identical. For the video data of a common USB camera, the method develops an algorithm that can detect and track faces automatically in real time and recognize four common frontal-face expressions, achieving good technical and commercial results.
Description of drawings
Fig. 1 is a flowchart of the video-based facial expression recognition method of the present invention.
Fig. 2 is a schematic diagram of expression acquisition in an embodiment of the video-based facial expression recognition method of the present invention.
Fig. 3 is a schematic diagram of the shape normalization of a face image.
Fig. 4 is a schematic diagram of ASM detection.
Fig. 5a shows the feature face of the collected facial contour.
Fig. 5b shows the standard feature face.
Fig. 6 is a schematic diagram of feature-face generation.
Fig. 7 shows the Gabor features of an image at different scales and orientations during the Gabor feature calculation of the feature-face image.
Fig. 8 is a schematic diagram of the one-versus-one classifiers of the present invention.
Fig. 9 shows the recognition results of the method of the present invention.
Figure 10 is a structural block diagram of the device of the present invention.
Embodiment
The present invention provides a video-based facial expression recognition method for the video data of a common USB camera. The method can detect and track faces automatically in real time and recognize the common expressions of a frontal face.
Referring to Fig. 1, the flowchart of the recognition method of the present invention, the concrete steps are as follows:
First, facial expression images are acquired: facial expression image data of a face are collected from the video data input by a USB camera, and the image data are preprocessed.
In an embodiment of the present invention, the acquisition process also includes a face tracking step. Its purpose is to detect in real time the multiple faces in the captured scene, to keep tracking one of them (for example the largest), and to verify continuously during tracking whether the face still exists. This tracking step can detect faces rotated in depth from -20 to 20 degrees and rotated in plane from -20 to 20 degrees, faces of different skin colors, faces under different illumination conditions, faces wearing glasses, and so on. The tracking algorithm is not affected by head pose; profile and rotated faces can be tracked equally well.
The tracking step is realized as follows:
Before a tracking target is obtained, every frame is searched to detect whether a face exists. If one or more faces are detected in a frame, they are tracked in the following two frames, and the tracked faces are detected and verified in those two frames to judge whether the earlier detections are real faces. Only after a face has been detected in three consecutive frames at a given position does the algorithm conclude that the face exists there and continue to judge and recognize the face image. If there are multiple faces in the scene, one of them is selected for tracking, and tracking of that face continues in subsequent frames. If the similarity between the tracking results of one frame and the previous frame is too low, tracking stops; if no upright frontal face is detected in a tracked region for a long time, the target is judged not worth tracking and tracking of it stops. After the previous target is dropped, face detection is performed again on subsequent images until a new face is found, the new face is tracked, and the face tracking steps repeat.
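The confirm-then-track logic described here amounts to a small state machine: three consecutive detections at a position promote a candidate to a tracked face, and tracking is dropped on low frame-to-frame similarity or a long run without a frontal face. A minimal sketch follows; the numeric thresholds are illustrative assumptions, since the patent gives no values:

```python
class FaceTracker:
    """Sketch of the confirm-then-track logic. A face must be detected in
    three consecutive frames before tracking starts; tracking stops when
    frame-to-frame similarity drops below a cutoff or no frontal face is
    seen for too many frames. Thresholds are illustrative."""

    def __init__(self, sim_threshold=0.5, max_missed=30):
        self.sim_threshold = sim_threshold
        self.max_missed = max_missed
        self.confirm_count = 0   # consecutive detections before tracking
        self.tracking = False
        self.missed = 0          # frames without a frontal face

    def update(self, detected, similarity=1.0, frontal=True):
        """detected: was a face found at the candidate position this frame?
        Returns True while a face is being tracked."""
        if not self.tracking:
            self.confirm_count = self.confirm_count + 1 if detected else 0
            if self.confirm_count >= 3:      # three consecutive hits: real face
                self.tracking = True
                self.missed = 0
            return self.tracking
        if similarity < self.sim_threshold:  # tracker drifted between frames
            self._reset()
            return False
        self.missed = 0 if frontal else self.missed + 1
        if self.missed > self.max_missed:    # no frontal face for too long
            self._reset()
        return self.tracking

    def _reset(self):
        self.tracking = False
        self.confirm_count = 0
        self.missed = 0
```

After a reset, per-frame detection resumes until a new face is confirmed, mirroring the restart behavior described above.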
Referring to Fig. 1, after the facial expression images are collected, the face detection step is performed. Face detection in this embodiment actually applies a real-time, video-based face detection algorithm to extract the position of the face in the preprocessed image. As shown in Fig. 2, the present algorithm can recognize different expressions, for example neutral, smiling, angry, and surprised, and the recognition algorithm is statistical. Before recognition with the method of the present invention, a large number of samples must first be collected: expression videos of subjects can be recorded with a USB camera, and images containing facial expressions are isolated from the video files as the initial statistical samples, forming an initial sample set for use in recognition.
In the method of the present invention, the purpose of face detection is to determine the position of the face in the collected image; once the face position is determined, eye detection can be carried out. Referring to the eye detection step in Fig. 1, the eyes in the detected face are located according to a human-eye classifier: after the image region of the face is detected, the eye positions are determined by the eye classifier. Eye classifiers are generally built by statistical methods — a classifier is first trained on eye samples, and detection is then performed with it.
Referring to Figs. 1 and 3, the image containing only the face is extracted according to the eye positions: the image region containing the face is extracted according to the determined eye positions and the eye classifier, and normalized. The normalization process is shown in Fig. 3: the image captured from video in Fig. 3a is aligned against the standard face template in Fig. 3b, finally yielding the normalized result in Fig. 3c. The reason is that in video the size of the face region changes with the distance of the real face from the USB camera, which is very disadvantageous for the organ localization algorithm. After the eye positions are detected, an image must therefore be resampled from the original video data such that the eye positions are fixed, the line between the eyes is horizontal, and the resampled image covers the whole face region.
The resampling algorithm is a simple scaling, rotation, and translation transform that maps the detected eyes onto the eye positions of the standard face image. The size of the standard image may be 120*148. The concrete formulas are:
x = λ(x′cosθ + y′sinθ) + x₀
y = λ(−x′sinθ + y′cosθ) + y₀
Letting λcosθ = a and λsinθ = b, the formulas can be written as:
x = ax′ + by′ + x₀
y = −bx′ + ay′ + y₀
There are only four unknowns in these formulas, and each point yields two equations, so two points suffice to solve for all of them. The transform can therefore be determined from the positions of the two eyes.
The face image obtained by the resampling algorithm has the same size as the pretrained standard image, and after rotation and translation the detected eyes (the × points in Fig. 3) coincide with the eye positions in the standard image.
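The two-point solution can be written out directly: subtracting the equations for the two eye points eliminates x₀ and y₀ and yields a and b, after which the offsets follow. A small sketch (function names are illustrative assumptions):

```python
def eye_alignment_params(src_left, src_right, dst_left, dst_right):
    """Solve a, b, x0, y0 of the similarity transform
        x = a*x' + b*y' + x0,   y = -b*x' + a*y' + y0
    from one point pair per eye: src_* are detected eye centers (x', y'),
    dst_* are the fixed eye positions in the standard face image (x, y)."""
    (x1p, y1p), (x2p, y2p) = src_left, src_right
    (x1, y1), (x2, y2) = dst_left, dst_right
    dxp, dyp = x1p - x2p, y1p - y2p      # difference of source points
    dx, dy = x1 - x2, y1 - y2            # difference of target points
    d = dxp * dxp + dyp * dyp
    a = (dx * dxp + dy * dyp) / d
    b = (dx * dyp - dy * dxp) / d
    x0 = x1 - a * x1p - b * y1p
    y0 = y1 + b * x1p - a * y1p
    return a, b, x0, y0


def apply_transform(a, b, x0, y0, xp, yp):
    """Map a source point (x', y') into the standard image."""
    return a * xp + b * yp + x0, -b * xp + a * yp + y0
```

For example, eyes at (10, 10) and (20, 10) mapped to (20, 20) and (40, 20) give a pure 2x scaling (a = 2, b = 0, zero offsets).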
After the face image is extracted, referring again to Fig. 1, the facial organs are located. Organ localization is realized by a target extraction algorithm; in an embodiment of the present invention the ASM (Active Shape Model) algorithm can be used. The purpose of this step is to extract the face region accurately and to remove irrelevant background information from the image. The method of the present invention needs the position of the general contour of the face; ASM introduces the statistics of existing face contours as a constraint to control the variation of the contour shape during the contour search. With ASM the face contour can be extracted quickly and accurately and the facial organs located.
The concrete steps of the ASM algorithm are:
first, extracting facial contour information from the video data to build a sample set;
then, normalizing and aligning the samples in the sample set, and performing a principal component analysis (PCA) transform;
using the gray-level information of each control point in the PCA-transformed contour information as the basis for point search;
and then searching iteratively with the mean contour computed by principal component analysis as the initial value of the contour search, to obtain the final result.
When the ASM algorithm is executed, the concrete steps of the iterative search are:
obtaining initial offsets from the gray-level information, aligning the new contour obtained by the gray-level search to the mean contour, and calculating the alignment parameters;
calculating the change in shape from the aligned data and the statistics computed by principal component analysis;
mapping the changed shape back to the original position through the inverse of the alignment parameters as the result of one search;
repeating the search steps, iterating until convergence to obtain the final result.
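The core of the iteration — gray-profile landmark suggestions constrained by the PCA shape statistics — can be sketched compactly. This is an illustration, not the patent's exact algorithm: the similarity alignment (scale, rotation, translation) is folded into the `profile_step` callable for brevity, and all names are assumptions:

```python
import numpy as np

def asm_search(init_shape, mean_shape, eigvecs, profile_step,
               max_iter=50, tol=1e-3):
    """Sketch of the ASM iteration: each round, a gray-profile search
    suggests new landmark positions; the deviation from the mean shape
    is projected onto the PCA basis (eigvecs, one eigenvector per
    column), which constrains the contour to statistically plausible
    shapes; the loop stops at convergence.

    Shapes are flat vectors (x1, y1, x2, y2, ...); profile_step(shape)
    stands in for the gray-level point search (alignment folded in)."""
    shape = init_shape.copy()
    for _ in range(max_iter):
        suggested = profile_step(shape)             # gray-profile update
        residual = suggested - mean_shape           # deviation from mean
        params = eigvecs.T @ residual               # PCA shape parameters
        new_shape = mean_shape + eigvecs @ params   # constrained shape
        if np.linalg.norm(new_shape - shape) < tol: # converged
            return new_shape
        shape = new_shape
    return shape
```

With a restricted eigenvector basis, implausible landmark suggestions are projected back toward the training-set shape statistics, which is what keeps the chin contour well-formed.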
In the method of the present invention, to improve the speed and accuracy of the search, an image pyramid can also be introduced for hierarchical search. Moreover, because the PCA statistics are introduced to control the variation of the face contour, the ASM algorithm can find the face contour fairly accurately, and quickly: the iterative search converges within one second. In the arrangement of the algorithm of the present invention, the detected eye positions can be used to determine the initial position of the contour, and to improve organ localization accuracy the images stored in the database are kept the same size as the actually detected images. In practice, the AAM (Active Appearance Model) algorithm can also be used to search for the face contour; since that algorithm is commonly used in the prior art, it is not described further in this embodiment.
From the foregoing description, and as can be seen in Fig. 4, the algorithm of the present invention recovers the position of the chin in the face well and preserves the global shape of the contour.
Referring again to Fig. 1, after the facial organs are located, the position of the chin is determined from the organ locations, the face region in the image is determined, and the feature face is generated and used as a classification sample. When the feature face is generated, the classification sample should contain the main region of the face and exclude the useless information that would affect recognition. In facial expression recognition, when only frontal expressions are considered, the principal interfering factors are background and illumination. The method of the present invention extracts the chin position with the ASM algorithm, so the face region in the image can be extracted separately as the feature-face image used for expression recognition. The size of the feature face is fixed; a size of 64*64 generally satisfies both recognition rate and speed — if the feature face is too small the recognition rate drops, and if it is too large the efficiency of the algorithm suffers.
Referring to Figs. 5 and 6: Fig. 5a is the collected feature face and Fig. 5b the standard feature face. Fig. 5a carries several parallel straight lines from top to bottom, one of which marks the position of the chin. Fig. 5b shows the standard feature face of the face classifier, obtained by training before recognition; it is marked with the same number of parallel lines, and the line corresponding to that of Fig. 5a likewise marks the chin position. Comparing Figs. 5a and 5b, the size of the facial contour actually extracted from the video input differs from that of the standard feature face, and the contour may also be tilted. The method of the present invention samples along the computed tilt angle: through the correspondence between the lines in Fig. 5a and the same number of lines in Fig. 5b, the actual face region can be resampled into an image fully consistent in size with the standard feature face, so that a face detected in video is converted, after such sampling, into a face image consistent with the standard feature face in both size and angle. This is a standardization of the collected face image: the face detected in video is made consistent, through a geometric transform, with the standard feature face we have set. The purpose of standardization is to facilitate sample generation and feature extraction and to improve recognition precision.
Referring to Fig. 6, the left image is the face image extracted from the video data and the right image is the feature face obtained after resampling, preferably of size 64*64. Recognition in the present invention is based on gray-level information, so illumination affects the final recognition result. To remove the influence of illumination, the generated feature face is processed: gray-level normalization is applied separately to the left and right halves of the feature face so that the gray means and variances of the two halves are identical. Because this would otherwise leave a gray-level jump in the middle, a gray transition band is established between the left and right halves of the face so that the gray level passes smoothly from the left part of the face to the right.
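The per-half normalization with a smooth seam can be sketched as follows. The target mean/standard deviation (128/32) and the band width are illustrative assumptions; the patent specifies only that the two halves end up with identical statistics:

```python
import numpy as np

def normalize_halves(face, band=8):
    """Normalize the left and right halves of a feature face to a common
    gray mean and variance, blending across a central vertical band so
    the intensity transitions smoothly instead of jumping at the seam.
    face: 2-D gray-level array; band: half-width of the transition zone."""
    face = face.astype(np.float64)
    _, w = face.shape
    mid = w // 2

    def norm_with(region):
        # Renormalize the whole image using one half's statistics.
        std = region.std() or 1.0
        return (face - region.mean()) / std * 32.0 + 128.0

    left_img = norm_with(face[:, :mid])   # image under left-half statistics
    right_img = norm_with(face[:, mid:])  # image under right-half statistics
    # Column weights: 0 on the far left, 1 on the far right, linear ramp
    # across the central band of width 2*band.
    weight = np.clip((np.arange(w) - (mid - band)) / (2.0 * band), 0.0, 1.0)
    return (1 - weight) * left_img + weight * right_img
```

Outside the band, each half carries exactly its own normalized statistics; inside it, the two normalizations are cross-faded, which is one way to realize the transition zone the text describes.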
Referring again to Fig. 1, after the feature face is generated, its Gabor features are calculated. As shown in Fig. 7, Gabor features at 5 scales and 6 orientations can be computed for each pixel of the feature-face image, giving a 30-dimensional vector per pixel; concentrating the Gabor features of all pixels of a 64*64 image yields a feature vector of 122880 dimensions. In actual computation, to speed up the calculation, the present invention computes the Gabor features with the fast Fourier transform (FFT).
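Filtering in the frequency domain — one multiply per scale/orientation pair after a single FFT of the image — is the standard way to make this fast. A sketch under assumed filter parameters (center frequencies and bandwidths are illustrative; the patent does not give them):

```python
import numpy as np

def gabor_features(img, scales=5, orientations=6):
    """Gabor magnitude responses at several scales and orientations,
    computed via FFT: the image spectrum is multiplied by a Gaussian
    bandpass kernel centered at each (scale, orientation) frequency.
    A 64x64 input gives 64*64*5*6 = 122880 feature values."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]      # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]      # horizontal frequencies
    img_f = np.fft.fft2(img)             # one FFT reused for all filters
    feats = []
    for s in range(scales):
        f0 = 0.25 / (2 ** s)             # center frequency halves per scale
        sigma = f0 / 2.0                 # bandwidth tied to frequency
        for o in range(orientations):
            theta = np.pi * o / orientations
            # Rotate the frequency plane to the filter orientation.
            u = fx * np.cos(theta) + fy * np.sin(theta)
            v = -fx * np.sin(theta) + fy * np.cos(theta)
            kernel = np.exp(-((u - f0) ** 2 + v ** 2) / (2 * sigma ** 2))
            response = np.fft.ifft2(img_f * kernel)
            feats.append(np.abs(response).ravel())   # magnitude response
    return np.concatenate(feats)
```

The magnitude (rather than the complex response) is kept, which is the usual choice for expression features since it varies smoothly with small spatial shifts.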
Referring to Fig. 1, after the Gabor features of the feature-face image are calculated, they must be selected. In the method of the present invention, the Gabor feature vector computed from the feature face has as many as 122880 dimensions, which greatly burdens training and computation and makes the algorithm inefficient. The present invention therefore selects features with the AdaBoost algorithm, which extracts the most effective subset of the original vector as the classification sample. The basic principle of AdaBoost is to combine weak classifiers continually into a strong classifier with high classification ability. In applying AdaBoost, we can pick out the series of features with the best classification ability and obtain the final classifier from the weights produced by training. The AdaBoost algorithm itself works by changing the data distribution: according to whether each sample in each training round is classified correctly, together with the overall accuracy of the previous round, it determines the weight of each sample. The classifiers obtained from each round of training are finally merged into the final decision classifier.
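How boosting doubles as feature selection can be shown with a toy discrete AdaBoost whose weak learners are one-feature threshold stumps: each round reweights the samples and records which single dimension its best stump used. This is an illustration of the principle, not the patent's exact procedure, and all parameters are assumptions:

```python
import numpy as np

def adaboost_select(X, y, rounds=10):
    """Toy discrete AdaBoost with one-feature threshold stumps.
    Each round picks the dimension whose best stump has the lowest
    weighted error, reweights samples toward the misclassified ones,
    and records the stump's vote weight (alpha).
    X: (n, d) features; y: labels in {-1, +1}.
    Returns (chosen dimensions, alphas)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # uniform initial sample weights
    chosen, alphas = [], []
    for _ in range(rounds):
        best = None
        for j in range(d):
            thresh = X[:, j].mean()      # crude stump threshold
            for polarity in (1, -1):
                pred = polarity * np.where(X[:, j] > thresh, 1, -1)
                err = w[pred != y].sum() # weighted error of this stump
                if best is None or err < best[0]:
                    best = (err, j, thresh, polarity)
        err, j, thresh, polarity = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = polarity * np.where(X[:, j] > thresh, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified samples
        w /= w.sum()
        chosen.append(j)
        alphas.append(alpha)
    return chosen, alphas
```

The `chosen` list is the selected feature subset; in the method described here those dimensions (e.g. 2000 of the 122880) then feed the SVM rather than the boosted ensemble itself.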
With reference to figure 1 and content shown in Figure 8, carrying out after feature selects, by latent structure support vector machine (SVM) sorter of selecting; For example, method of the present invention adopts the AdaBoost algorithm to pick out 2000 dimensional features as training sample, certainly also can select features such as 3000 dimensions, 4000 dimensions in actual applications as training sample, be example with 2000 dimensions in the present embodiment, and formation svm classifier device, in the present embodiment, will distinguish four classes expression basically, be the sorter of multiclass therefore.In fact simple relatively two class sorters of multicategory classification device.In an embodiment of the present invention, owing to will discern four kinds of expressions at least, every kind of expression can be regarded as a class, so be a multicategory classification device.And SVM can construct linear classifier and non-linear sorter.In the method for the invention, two kinds of sorters can be realized, but the speed that adopts linear classifier to discern can be fast.So under the situation that does not influence discrimination, adopting linear classifier is a better embodiment of the present invention.The design of the multicategory classification device described in the present invention can have multiple choices: one to one, and one-to-many, decision tree etc.Be sorter of design between per two classes one to one, have four classifications such as the present invention, so just have 6 kinds of combinations, the present invention just can make up and obtain 6 sorters.If one-to-many, we can design a sorter between each class and other classes for that, and four classifications just can obtain four sorters.Complicated all right design decision tree.
The present embodiment is described with the one-versus-one design, in which the role of each classifier is to separate exactly two classes. In expression recognition, a classifier is designed by the above method for every pair of the four expressions (6 combinations in all), yielding 6 one-versus-one classifiers. With these one-versus-one classifiers, the four expressions can be distinguished.
The principle is shown in Figure 8, where 6 lines represent the 6 classifiers: line 11 separates the neutral and smiling expressions; line 12 separates the angry and smiling expressions; line 13 separates the surprised and smiling expressions; line 21 separates the neutral and angry expressions; line 22 separates the neutral and surprised expressions; line 23 separates the surprised and angry expressions.
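The one-versus-one scheme with 6 pairwise classifiers and majority voting can be sketched as follows. This is a hedged illustration: the four class names mirror the expressions named in the text, and an ordinary least-squares linear fit stands in for the linear SVM solver of the original method; the pairwise training and voting logic is the point being shown.

```python
import numpy as np
from itertools import combinations

CLASSES = ["neutral", "smile", "angry", "surprised"]

def train_pairwise(X, y):
    """One linear decision function per class pair: 6 models for 4 classes."""
    models = {}
    for a, b in combinations(range(len(CLASSES)), 2):
        mask = (y == a) | (y == b)
        Xa = np.hstack([X[mask], np.ones((int(mask.sum()), 1))])  # bias column
        t = np.where(y[mask] == a, 1.0, -1.0)
        w, *_ = np.linalg.lstsq(Xa, t, rcond=None)  # stand-in for SVM training
        models[(a, b)] = w
    return models

def predict(models, x):
    """Each of the 6 pairwise classifiers casts one vote; most votes wins."""
    votes = np.zeros(len(CLASSES), dtype=int)
    xb = np.append(x, 1.0)
    for (a, b), w in models.items():
        votes[a if float(xb @ w) > 0 else b] += 1
    return CLASSES[int(votes.argmax())]
```

With four classes, the true class can win all three of its pairwise contests, so it always out-votes the others when the pairwise classifiers are accurate.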
Finally, referring to Figure 1, once the SVM classifier is obtained, the present invention can perform real-time facial expression recognition. In the implementation, face detection is first performed on each frame of the video; the face is then tracked and the eye positions are extracted. If tracking succeeds, expression recognition is performed on the face in the current image and the recognition result is given in real time. Referring to the content shown in Figure 9, the left side shows the video data input from the USB camera, while the small window on the right shows the result of facial expression recognition.
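The per-frame control flow just described (detect, then track, and classify only while a face is held) might be sketched as below. The detector, tracker and classifier are injected as placeholder callables, since the patent's own detection, tracking and SVM algorithms are not reproduced here.

```python
def run_pipeline(frames, detect_face, track_face, classify_expression):
    """Per-frame loop: full detection until a target exists, cheaper local
    tracking afterwards; expression recognition only when a face is held."""
    results, target = [], None
    for frame in frames:
        if target is None:
            target = detect_face(frame)            # search the whole frame
        else:
            target = track_face(frame, target)     # may return None if lost
        results.append(
            classify_expression(frame, target) if target is not None else None
        )
    return results
```

When tracking is lost (the tracker returns `None`), the loop falls back to full-frame detection on the next frame, matching the detect/track split described above.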
The method of the present invention can be applied in a video-based facial expression recognition apparatus. As shown in Figure 10, the apparatus comprises a video data acquisition unit 1, an image processing unit 2, a face information database 3 and a facial expression recognition unit 4. The video data acquisition unit 1 acquires face images from the video and sends them to the image processing unit 2. The image processing unit 2 retrieves face information from the face information database 3, compares it with the acquired images by means of a comparison unit 121 within the image processing unit 2, computes the face data with an AdaBoost computation unit 123, and sends the result to the facial expression recognition unit 4. The facial expression recognition unit 4 recognizes the acquired face images according to the identification information stored in the face information database 3. The apparatus further comprises a display unit 5, which displays the recognized facial expression.
The image processing unit 2 comprises the comparison unit 121, a feature generation unit 122, the computation unit 123 and a classifier unit 124. The comparison unit 121 compares the face image information with the image information in the face database 3, detects the face and eyes, extracts the face image according to the eye positions, and sends the face image information to the feature generation unit 122. The feature generation unit 122 locates the facial features, generates the characteristic face according to the position of the chin, and sends the characteristic face to the computation unit 123 as a sample. The computation unit 123 computes the Gabor features of the characteristic face image, selects features with the AdaBoost algorithm, and sends the selected features to the classifier unit 124. The classifier unit 124 constructs a support vector machine classifier from the selected features and sends the classifier information to the facial expression recognition unit 4. The video data acquisition unit 1 further comprises a video data tracking unit 111, which performs tracking detection on the face data of the video, judges whether to acquire, and carries out the face tracking step of the method of the present invention.
The method of the present invention makes it possible to extract the facial expression of a face automatically and accurately from video. The method adopts the AdaBoost and ASM algorithms and can eliminate the influence of illumination: the face image is specially processed so that the gray means and variances of the left and right halves of the face are basically consistent. The method of the present invention further develops an algorithm that can detect and track faces in real time in the video data of a common USB camera and can recognize the common expressions of a frontal face, achieving a good commercial result.
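The left/right gray normalization mentioned above (equalizing the mean and variance of the two face halves to damp one-sided illumination) can be sketched as follows; this is a minimal illustration of the idea, not the patent's exact processing.

```python
import numpy as np

def normalize_halves(face, eps=1e-8):
    """Bring the left and right halves of a gray face image to a shared
    mean and standard deviation (those of the whole image)."""
    out = face.astype(float).copy()
    mid = out.shape[1] // 2
    tgt_mean, tgt_std = out.mean(), out.std()
    for half in (out[:, :mid], out[:, mid:]):   # slices are views: edits land in out
        half -= half.mean()
        half /= half.std() + eps                # eps guards a flat half
        half *= tgt_std
        half += tgt_mean
    return out
```

After this step a face lit strongly from one side has statistically matched halves, which keeps the subsequent Gabor features comparable across the face.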
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (14)

1. A video-based facial expression recognition method, characterized by comprising the following steps:
acquiring facial expression image data of a human face from video data input by a USB camera, and pre-processing the image data;
extracting in real time the position of the human face in the pre-processed image;
locating the human eyes in the detected face according to a human eye classifier;
extracting the image region containing the human face according to the determined eye positions and the information of a human face classifier, and performing normalization;
locating the facial features;
determining the position of the chin from the located facial features, determining the face region in the image, generating a characteristic face, and taking it as a classification sample;
computing the Gabor features of the characteristic face image based on the classification sample;
selecting among the computed Gabor features;
constructing a support vector machine classifier from the selected features;
obtaining the facial expression recognition result according to the constructed classifier.
2. The video-based facial expression recognition method according to claim 1, characterized in that, when acquiring the data input from the USB camera, the method comprises the following face tracking steps:
before a tracking target is obtained, searching every frame to detect whether a human face exists;
if one or more human faces are detected in a frame, tracking the detected face in the following two frames, detecting and verifying the tracked face in those two frames, and judging the detection results;
after a face has been detected at the same position in three frames, the algorithm considers that a face exists at that position, and the real-time face detection algorithm is then carried out to extract the position of the face in the image;
if multiple faces are detected in the scene, picking one of them and starting to track it, and continuing to track that face in subsequent frames; if the similarity between the tracking results of adjacent frames is too low, or no upright frontal face is detected in a tracked target region for a long time, stopping the tracking.
3. The video-based facial expression recognition method according to claim 1, characterized in that the normalization is realized by a resampling algorithm: the resampling algorithm applies scaling, rotation and translation transformations so that the detected eye positions coincide with the eye positions used by the human eye classifier.
4. The video-based facial expression recognition method according to claim 1, characterized in that the facial features are located by a target extraction method, and the target extraction method is the active shape model algorithm.
5. The video-based facial expression recognition method according to claim 4, characterized in that the concrete steps of the active shape model algorithm are:
extracting the contour information of the human face from the video data and establishing a sample set;
normalizing and registering the samples in the sample set, and then performing a principal component analysis transform;
taking the gray-level information of each control point in the contour information after the principal component analysis transform as the basis for point search;
taking the mean contour computed by the principal component analysis as the initial value of the contour search, and performing an iterative search to obtain the facial contour.
6. The video-based facial expression recognition method according to claim 5, characterized in that the steps of the iterative search are:
obtaining an initial displacement from the gray-level information, aligning the new contour obtained by the gray-level search to the mean contour, and computing the alignment parameters;
computing the change of shape according to the aligned data and the statistics computed by the principal component analysis;
applying the inverse of the shape transform, with the changed alignment parameters, to obtain the new contour position as the result of one search;
repeating the above search steps, and iterating until convergence to obtain the facial contour.
7. The video-based facial expression recognition method according to claim 1, characterized in that the generation of the characteristic face comprises comparing the extracted facial contour with the faces in the human face classifier and adjusting the inclination.
8. The video-based facial expression recognition method according to claim 1, characterized in that, between generating the characteristic face and computing the Gabor features of the characteristic face image, there is a further step of processing the generated characteristic face: performing gray-level normalization on the left and right halves of the characteristic face so that the gray means and variances of the left and right halves are identical.
9. The video-based facial expression recognition method according to claim 8, characterized in that a gray-level filter is provided between the left and right halves of the characteristic face.
10. The video-based facial expression recognition method according to claim 1, characterized in that the established support vector machine classifier is a multi-class classifier, of the one-versus-one, one-versus-rest or decision-tree type.
11. A video-based facial expression recognition apparatus, characterized in that:
it comprises a video data acquisition unit, an image processing unit, a face information database and a facial expression recognition unit;
the video data acquisition unit acquires the face images of the video and sends them to the image processing unit;
the image processing unit retrieves face information from the face information database, compares it with the acquired face images, computes the face data, and sends the computed data to the facial expression recognition unit;
the facial expression recognition unit recognizes the acquired face images according to the identification information stored in the face information database.
12. The video-based facial expression recognition apparatus according to claim 11, characterized in that it further comprises a display unit which displays the recognized facial expression.
13. The video-based facial expression recognition apparatus according to claim 11, characterized in that the image processing unit comprises a comparison unit, a feature generation unit, a computation unit and a classifier unit;
the comparison unit compares the face image information with the image information in the face database, detects the face and eyes, extracts the face image according to the eye positions, and sends the face image information to the feature generation unit;
the feature generation unit locates the facial features, generates the characteristic face according to the position of the chin, and sends the characteristic face to the computation unit as a sample;
the computation unit computes the Gabor features of the characteristic face image, selects features with the AdaBoost algorithm, and sends the selected features to the classifier unit;
the classifier unit constructs a support vector machine classifier from the selected features and sends the classifier information to the facial expression recognition unit.
14. The video-based facial expression recognition apparatus according to claim 11, characterized in that the video data acquisition unit further comprises a video data tracking unit, which performs tracking detection on the face data of the video data and judges whether to acquire the input data.
CNB2005101356705A 2005-12-31 2005-12-31 Method and device for distinguishing face expression based on video frequency Active CN100397410C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005101356705A CN100397410C (en) 2005-12-31 2005-12-31 Method and device for distinguishing face expression based on video frequency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005101356705A CN100397410C (en) 2005-12-31 2005-12-31 Method and device for distinguishing face expression based on video frequency

Publications (2)

Publication Number Publication Date
CN1794265A true CN1794265A (en) 2006-06-28
CN100397410C CN100397410C (en) 2008-06-25

Family

ID=36805690

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005101356705A Active CN100397410C (en) 2005-12-31 2005-12-31 Method and device for distinguishing face expression based on video frequency

Country Status (1)

Country Link
CN (1) CN100397410C (en)

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008083535A1 (en) * 2007-01-11 2008-07-17 Shanghai Isvision Technologies Co. Ltd. Method for encrypting/decrypting electronic document based on human face identification
CN100426318C (en) * 2006-09-28 2008-10-15 北京中星微电子有限公司 AAM-based object location method
CN100426317C (en) * 2006-09-27 2008-10-15 北京中星微电子有限公司 Multiple attitude human face detection and track system and method
CN100444190C (en) * 2006-10-30 2008-12-17 邹采荣 Human face characteristic positioning method based on weighting active shape building module
CN100447808C (en) * 2007-01-12 2008-12-31 郑文明 Method for classification human facial expression and semantics judgement quantization method
CN100556078C (en) * 2006-11-21 2009-10-28 索尼株式会社 Camera head, image processing apparatus and image processing method
CN101689303A (en) * 2007-06-18 2010-03-31 佳能株式会社 Facial expression recognition apparatus and method, and image capturing apparatus
CN101175187B (en) * 2006-10-31 2010-04-21 索尼株式会社 Image storage device, imaging device, image storage method
CN101226590B (en) * 2008-01-31 2010-06-02 湖南创合世纪智能技术有限公司 Method for recognizing human face
CN101206715B (en) * 2006-12-18 2010-10-06 索尼株式会社 Face recognition apparatus, face recognition method, Gabor filter application apparatus, and computer program
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
CN101285677B (en) * 2007-04-12 2011-03-23 东京毅力科创株式会社 Optical metrology using a support vector machine with simulated diffraction signal inputs
CN102004906A (en) * 2010-11-18 2011-04-06 无锡中星微电子有限公司 Face identification system and method
CN102058983A (en) * 2010-11-10 2011-05-18 无锡中星微电子有限公司 Intelligent toy based on video analysis
CN101216881B (en) * 2007-12-28 2011-07-06 北京中星微电子有限公司 A method and device for automatic image acquisition
WO2011079458A1 (en) * 2009-12-31 2011-07-07 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN101719223B (en) * 2009-12-29 2011-09-14 西北工业大学 Identification method for stranger facial expression in static image
CN102214299A (en) * 2011-06-21 2011-10-12 电子科技大学 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm
US8085996B2 (en) 2007-06-11 2011-12-27 Sony Corporation Image processing apparatus, image display apparatus, imaging apparatus, method for image processing therefor, and program
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN101777116B (en) * 2009-12-23 2012-07-25 中国科学院自动化研究所 Method for analyzing facial expressions on basis of motion tracking
US8233678B2 (en) 2007-08-14 2012-07-31 Sony Corporation Imaging apparatus, imaging method and computer program for detecting a facial expression from a normalized face image
CN101887513B (en) * 2009-05-12 2012-11-07 联咏科技股份有限公司 Expression detecting device and method
CN101337128B (en) * 2008-08-20 2012-11-28 北京中星微电子有限公司 Game control method and system based on face
US8411911B2 (en) 2008-11-28 2013-04-02 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and storage medium for storing program
WO2013149556A1 (en) * 2012-04-06 2013-10-10 腾讯科技(深圳)有限公司 Method and device for automatically playing expression on virtual image
CN103400105A (en) * 2013-06-26 2013-11-20 东南大学 Method identifying non-front-side facial expression based on attitude normalization
WO2014032496A1 (en) * 2012-08-28 2014-03-06 腾讯科技(深圳)有限公司 Method, device and storage medium for locating feature points on human face
CN104575495A (en) * 2013-10-21 2015-04-29 中国科学院声学研究所 Language identification method and system adopting total variable quantity factors
CN104573617A (en) * 2013-10-28 2015-04-29 季春宏 Video shooting control method
CN104767980A (en) * 2015-04-30 2015-07-08 深圳市东方拓宇科技有限公司 Real-time emotion demonstrating method, system and device and intelligent terminal
CN104951743A (en) * 2015-03-04 2015-09-30 苏州大学 Active-shape-model-algorithm-based method for analyzing face expression
CN105187721A (en) * 2015-08-31 2015-12-23 广州市幸福网络技术有限公司 An identification camera and method for rapidly extracting portrait features
CN105404878A (en) * 2015-12-11 2016-03-16 广东欧珀移动通信有限公司 Photo classification method and apparatus
CN105678702A (en) * 2015-12-25 2016-06-15 北京理工大学 Face image sequence generation method and device based on feature tracking
CN105917305A (en) * 2013-08-02 2016-08-31 埃莫蒂安特公司 Filter and shutter based on image emotion content
CN106127829A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 The processing method of a kind of augmented reality, device and terminal
CN106127828A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 The processing method of a kind of augmented reality, device and mobile terminal
CN106157363A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 A kind of photographic method based on augmented reality, device and mobile terminal
CN106157262A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 The processing method of a kind of augmented reality, device and mobile terminal
CN106687989A (en) * 2014-10-23 2017-05-17 英特尔公司 Method and system of facial expression recognition using linear relationships within landmark subsets
CN107451560A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 User's expression recognition method, device and terminal
CN107592507A (en) * 2017-09-29 2018-01-16 深圳市置辰海信科技有限公司 The method of automatic trace trap high-resolution front face photo
CN107729882A (en) * 2017-11-19 2018-02-23 济源维恩科技开发有限公司 Emotion identification decision method based on image recognition
CN108268838A (en) * 2018-01-02 2018-07-10 中国科学院福建物质结构研究所 Facial expression recognizing method and facial expression recognition system
CN108416291A (en) * 2018-03-06 2018-08-17 广州逗号智能零售有限公司 Face datection recognition methods, device and system
CN108446672A (en) * 2018-04-20 2018-08-24 武汉大学 A kind of face alignment method based on the estimation of facial contours from thick to thin
CN108583569A (en) * 2018-03-26 2018-09-28 刘福珍 A kind of collision warning device based on double moving average algorithm
CN108875519A (en) * 2017-12-19 2018-11-23 北京旷视科技有限公司 Method for checking object, device and system and storage medium
CN109727303A (en) * 2018-12-29 2019-05-07 广州华多网络科技有限公司 Video display method, system, computer equipment, storage medium and terminal
CN105095827B (en) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition device and method
CN110728252A (en) * 2019-10-22 2020-01-24 山西省信息产业技术研究院有限公司 Face detection method applied to regional personnel motion trail monitoring
WO2021248814A1 (en) * 2020-06-13 2021-12-16 德派(嘉兴)医疗器械有限公司 Robust visual supervision method and apparatus for home learning state of child


Also Published As

Publication number Publication date
CN100397410C (en) 2008-06-25

Similar Documents

Publication Publication Date Title
CN1794265A (en) Method and device for distinguishing face expression based on video frequency
US10990191B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
CN105740780B (en) Method and device for detecting living human face
US9087241B2 (en) Intelligent part identification for use with scene characterization or motion capture
Wahl et al. Surflet-pair-relation histograms: a statistical 3D-shape representation for rapid classification
JP5629803B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN105740779B (en) Method and device for detecting living human face
JP4743823B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN109460734B (en) Video behavior identification method and system based on hierarchical dynamic depth projection difference image representation
CN1794264A (en) Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN106570491A (en) Robot intelligent interaction method and intelligent robot
CN1977286A (en) Object recognition method and apparatus therefor
CN108573231B (en) Human body behavior identification method of depth motion map generated based on motion history point cloud
CN1506903A (en) Automatic fingerprint distinguishing system and method based on template learning
CN1950844A (en) Object posture estimation/correlation system, object posture estimation/correlation method, and program for the same
Tsalakanidou et al. Integration of 2D and 3D images for enhanced face authentication
CN113850865A (en) Human body posture positioning method and system based on binocular vision and storage medium
García-Martín et al. Robust real time moving people detection in surveillance scenarios
Chen et al. Silhouette-based object phenotype recognition using 3D shape priors
Czyzewski et al. Chessboard and chess piece recognition with the support of neural networks
US20070183686A1 (en) Method and apparatus for estimating object part location in digital image data using feature value analysis
CN110969101A (en) Face detection and tracking method based on HOG and feature descriptor
Proença et al. SHREC’15 Track: Retrieval of Objects captured with kinect one camera
CN114399731B (en) Target positioning method under supervision of single coarse point
CN1226017C (en) Rotary human face detection method based on radiation form

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160516

Address after: 519031 Guangdong city of Zhuhai province Hengqin Baohua Road No. 6, room 105 -478

Patentee after: GUANGDONG ZHONGXING ELECTRONICS CO., LTD.

Address before: 100083, Haidian District, Xueyuan Road, Beijing No. 35, Nanjing Ning building, 15 Floor

Patentee before: Beijing Vimicro Corporation