CN101561710A - Man-machine interaction method based on estimation of human face posture - Google Patents

Man-machine interaction method based on estimation of human face posture

Info

Publication number
CN101561710A
Authority
CN
China
Prior art keywords
face
point
people
image
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009101038842A
Other languages
Chinese (zh)
Other versions
CN101561710B (en)
Inventor
毛玉星
张占龙
何为
成华安
傅饶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN2009101038842A priority Critical patent/CN101561710B/en
Publication of CN101561710A publication Critical patent/CN101561710A/en
Application granted granted Critical
Publication of CN101561710B publication Critical patent/CN101561710B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a man-machine interaction method based on human face pose estimation, comprising the following steps: a face image sequence is captured by a camera, and after pre-processing five feature points (the two eye corners, the two mouth corners, and the nose tip) are extracted automatically; taking a frontal image as reference, three deflection angles of the face in any image are estimated from the positions and correspondences of the five feature points; the pose information defines the mouse pointer position and operation mode to generate man-machine interaction information; and a new visual mouse device is formed by connecting to a computer through a USB interface. As a supplement to traditional man-machine interaction methods, the device suits some special interaction groups (for example, disabled persons) and interaction environments (for example, multimedia games), and has clear application value.

Description

Man-machine interaction method based on human face pose estimation
Technical field
The present invention relates to a man-machine interaction method, and in particular to a method of constructing a visual mouse device based on facial feature point location and pose detection. Through image analysis and target detection, five feature points (the two eye corners, the two mouth corners, and the nose tip) are extracted; with the corresponding feature point positions of a frontal image as reference, the three deflection angles of the face in each real-time image are estimated and man-machine interaction information is generated; the device is connected to a computer through a USB interface, forming a visual mouse for man-machine interaction.
Background technology
Human-computer interaction techniques (Human-Computer Interaction Techniques) realize dialogue between people and computers in an effective manner through computer input and output devices. The machine provides information and prompts to the person through output or display devices, and the person inputs information and answers questions to the machine through input devices. Human-computer interaction is one of the important topics in computer user interface design, and is closely related to cognitive science, ergonomics, psychology, and other fields.
In the field of human-computer interaction, conventional devices such as the keyboard, mouse, and light pen are widely used at present. In some special cases, however, these devices have limitations, for example the operation of complex multimedia game interfaces or computer use by disabled persons, and there is a pressing need for input devices that do not rely on limb movement as a supplement to existing interaction modes.
To adapt to different interaction environments and different user groups, scholars at home and abroad have extensively studied interaction methods based on vision (images), hearing (speech), and touch (pressure, temperature), and some methods have entered preliminary use. Among visual interaction methods, current research focuses include gaze tracking and sign language recognition. Gaze tracking senses the user's fixation point through a camera and pupil localization, thereby driving mouse positioning and related operations, yet it has two significant drawbacks: first, pupil movement is discontinuous and cannot match a continuously moving viewpoint well, which reduces tracking accuracy; second, for physiological and psychological reasons, eye movement has a degree of randomness, and in some situations the gaze position does not reflect the user's subjective intent.
To overcome the defects of gaze-tracking interaction, the present invention adopts face pose estimation: a video camera captures face images; face detection and key point location are realized by image and video processing techniques; the face pose is estimated from the distribution of feature point positions over the multi-frame image sequence; and man-machine interaction is realized through the pose information, forming a new interaction method. The resulting interaction device suits some special user groups (such as disabled persons) and interaction environments (such as multimedia games), has clear application value, and expands the scope of the human-computer interaction field.
Summary of the invention
Since existing human-computer interaction devices cannot fully satisfy all interaction environments and user groups, the purpose of this invention is to provide a man-machine interaction method based on face pose estimation, which uses automatic facial feature point location and face pose estimation to generate man-machine interaction information, connected to a computer through a USB interface to form a visual mouse device.
The present invention comprises the following steps:
A) The face digital image sequence obtained through the optical lens and CMOS image sensor assembly is acquired through the high-speed digital video channel of a DSP.
B) The face digital image is pre-processed.
1. Image denoising. Noise pixels differ markedly from their neighboring pixels in gray value, statistics, or distribution, so image filtering can suppress noise and make useful information easier to detect and identify. Nonlinear diffusion is a good denoising method with excellent edge preservation, but it requires many iterations and has high computational complexity. The present invention constructs the edge map from the variances of band-shaped regions in four directions around the target point, applies it in the nonlinear diffusion denoising algorithm to reduce the number of iterations, and uses an integral image for fast computation.
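The fast computation mentioned here relies on an integral image (summed-area table), which turns any rectangular region sum, and hence the band-region variances, into constant-time lookups. A minimal numpy sketch (function names are ours, not the patent's):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column,
    so any windowed sum needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

With a second table of squared values, a band region's variance follows from E[x^2] - E[x]^2 at the same O(1) cost, which is what makes the reduced-iteration diffusion practical.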
2. Face detection and region division. Many faces are annotated manually, transformed to the same size, and averaged to obtain a face template; template matching gives the approximate position of the face, and the face is then divided into four regions (left eye, right eye, nose, and mouth) using prior knowledge of face structure.
3. Edge detection. Since all five feature points lie on corners of the facial organs, contour or corner information must be extracted to improve localization accuracy. Traditional edge detectors include the Canny operator, the Sobel operator, morphological methods, and the wavelet transform; the SUSAN corner detector also reflects edge and corner information well, but its effect depends on a threshold. The present invention obtains the edge image with a directional filtering method.
C) Locate the five feature points of the face (two eye corners, two mouth corners, and the nose tip).
1. Locate the eye corners. First, within the eye region determined above, gray projections of the denoised image are taken in both the horizontal and vertical directions; the valley positions give the two-dimensional coordinates of the eyeball center, from which a rectangular region is delimited as the precise eye position. Then, within this region, the edge image is binarized and connected regions are detected, discarding regions with very few points. Finally, the leftmost (rightmost) point of the chosen eye connected region is extracted as the eye corner.
2. Locate the mouth corners. Mouth corner location is also corner detection, similar to eye corner location, but the gray variation of the mouth lacks the high-contrast pupil of the eyes, so its contour is blurred. A Gaussian mixture probability model is first trained on lip color samples; the model computes the lip probability of each point in the mouth region, and after normalization this probability serves as new gray information, to which the same method as for the eye corners is applied to extract the mouth corner positions.
3. Locate the nose tip. The nose tip is not on a contour, so its location relies on its geometric relation to the eye corners and mouth corners. Since the nose tip is near the nostrils and above the line joining them, the gray projection method first searches the nose region for the two nostril positions, then searches above the nostrils for the highlight as the nose tip. When the face deflection angle is large, nose tip localization is error-prone, but in the present invention the nose tip position only affects the sign of the deflection angle, i.e. the rotation direction, not the computed angle magnitude, and is therefore insensitive to this error.
D) Estimate the three deflection angles of the face
The three deflection angles are the basis for generating the interaction information. Provided the deflection angles are not too large, the face size is relatively fixed and centered in the picture, and the illumination is good, the five feature points can be obtained accurately, and the three deflection angles are computed with the following steps.
1. Coordinate transform. First compute the midpoint of the two eye corner points and take it as the image origin; transform all other coordinate points relative to this origin to obtain new coordinate values. Then compute and record the midpoint of the two mouth corner points, yielding five feature points in total: the two eye corners, the eye corner midpoint, the nose tip, and the mouth corner midpoint. These five feature points are the basis for estimating the three 3-D deflection angles of the face.
2. Determine a frontal image: when the line joining the two eye corners is horizontal and the plane determined by this line and the mouth corner midpoint is perpendicular to the camera lens normal, the image is frontal, and its three deflection angles are defined to be 0. Locate the five feature points of the frontal image, complete the coordinate transform by the method of D)1., and compute and record the five feature point coordinates as reference information for subsequent pose estimation.
3. For the face image acquired at any time, its pose must be estimated to generate interaction information. First locate the five feature points, then obtain their coordinates by the same method as D)2. With reference to the frontal image's feature points, these coordinates are checked against geometric constraints; clearly unqualified localization results are discarded and generate no interaction information.
4. Using the coordinates of the five corresponding points in the two images determined by steps D)2. and D)3., apply the pinhole camera model, make some reasonable simplifying assumptions that do not affect the result according to the characteristics of the algorithm, and finally derive via epipolar geometry the pose of the face in the arbitrary image, obtaining the three deflection angles.
E) Generate man-machine interaction information
The mouse position and operation mode are defined by the three deflection angles obtained above. Computer mouse interaction includes moving the pointer anywhere in the two-dimensional plane and single and double clicks of the left and right buttons. The present invention locates the mouse pointer with two of the angle values and defines mouse operations by the frame-to-frame jump of the third angle, thereby generating the interaction information.
F) Connect to the computer and realize communication
A USB interface is developed in the device, and a driver is written following the communication protocol of a standard USB mouse to send the interaction information obtained above to the computer. The computer needs no special program support, which reduces its burden and does not interfere with the complex software the operator runs on the computer.
As a supplement to traditional man-machine interaction methods, the present invention suits some special user groups (such as disabled persons) and interaction environments (such as multimedia games), and has clear application value.
Description of drawings
Fig. 1 is the flowchart of man-machine interaction information processing in the present invention
Fig. 2 is the face template used for coarse face location
Fig. 3 shows the frontal face organ template and the positions of the five feature points in this method
Fig. 4 is the schematic diagram defining the three face deflection angles in the present invention
Embodiment
The present invention is further illustrated below with a non-limiting example.
Referring to Fig. 1, Fig. 2, Fig. 3, Fig. 4.
Image acquisition, timing generation, and control are implemented with a CPLD device; the image pre-processing, feature point location, and pose estimation algorithms run on TI's DaVinci processor (TMS320DM6446 in this embodiment); and the USB interface is realized with a Cypress control chip. All algorithms and hardware modules are designed according to the information processing flow, and mouse operations are simulated to realize man-machine interaction.
The main modules are introduced as follows:
(1) Face image pre-processing
I. Image denoising. Owing to the optical system and electronic devices, the image inevitably contains noise, and denoising is needed to improve the localization accuracy of the feature points. The nonlinear diffusion denoising principle is used, with the gray variances of band-shaped regions in four directions of the target point's neighborhood serving as the edge map; the diffusion amount of each pixel is decided by its differences from the eight neighborhood pixels and the corresponding direction weight coefficients, which strengthens adaptability, reduces the number of iterations, and speeds up computation. The iteration formula is:
x'_{i,j} = x_{i,j} + \lambda \left( \sum_{p=-1}^{1} \sum_{q=-1}^{1} g(\sigma_{p,q}) \nabla_{p,q} x_{i,j} \right)
To avoid creating new extreme points during the iterations, λ is set to 0.125; σ_{p,q} is the variance of the band-shaped region in the direction selected by (p, q), and ∇_{p,q} x_{i,j} is the gradient computed by the difference method. The diffusion function g(σ) is defined as:
g(\sigma) = \frac{1}{1 + \sigma^2 / K^2}
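One diffusion iteration can be sketched in numpy under the stated settings (λ = 0.125, g(σ) = 1/(1 + σ²/K²)). As a simplification, this sketch weights each term by the neighbor difference itself rather than the patent's four-direction band variances, uses only the four axial neighbors, and wraps at the border:

```python
import numpy as np

def diffuse_step(x, K=10.0, lam=0.125):
    """One nonlinear-diffusion iteration (Perona-Malik-style sketch).
    Each neighbor difference d is weighted by g(d) = 1/(1 + d^2/K^2),
    so large jumps (edges) diffuse little and small jumps (noise) smooth
    out; lam = 0.125 keeps the update from creating new extrema."""
    x = x.astype(np.float64)
    out = x.copy()
    # N, S, W, E neighbor differences (circular border for brevity)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        d = np.roll(x, shift, axis=axis) - x
        out += lam * d / (1.0 + (d / K) ** 2)
    return out
```

A flat region is a fixed point of the update, while an isolated noise spike is pulled toward its neighbors on every iteration.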
II. Face location and region division. Since the present invention targets a special man-machine interaction environment, a series of objective conditions can be guaranteed: good illumination, a relatively uniform face size centered in the picture, and little background interference, which eases the face location burden. 100 faces are annotated manually, size-normalized, and averaged to obtain the face template, and template matching extracts the face position. Let a_{i,j} be the gray of the pixel under test, t_{i,j} the template pixel gray, and E_a, E_t their respective means; the matching coefficient is then defined as:
m = \frac{\sum_{i,j} (a_{i,j} - E_a)(t_{i,j} - E_t)}{\sqrt{\sum_{i,j} (a_{i,j} - E_a)^2 \sum_{i,j} (t_{i,j} - E_t)^2}}
Regions with m > 0.55 are candidate face regions. If several qualified regions appear within the detection range, the contiguous face regions are averaged to obtain the face position. Then, using prior knowledge of face structure, the face is divided into four regions: left eye, right eye, nose, and mouth.
III. Edge detection. Facial organs such as the eyes, nose, and mouth carry significant contour information, which edge detection extracts as the foundation for feature point detection. The present invention uses a directional filtering method: first compute each point's gradient magnitude and direction by the difference method, then vector-sum the gradients of the target point and its eight neighbors, and take the modulus of the sum as the point's new gray value, yielding the edge image.
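A minimal numpy sketch of this directional filtering: central-difference gradients, then a 3x3 vector sum whose modulus becomes the new gray value, so coherent edge gradients reinforce while randomly oriented noise gradients cancel. Borders wrap for brevity:

```python
import numpy as np

def edge_map(img):
    """Directional-filtering edge sketch: per-pixel difference gradient,
    then vector-sum each 3x3 neighborhood's gradients and take the
    modulus of the sum as the point's new gray value."""
    x = img.astype(np.float64)
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:, 1:-1] = (x[:, 2:] - x[:, :-2]) / 2.0   # central differences
    gy[1:-1, :] = (x[2:, :] - x[:-2, :]) / 2.0
    sx = np.zeros_like(x)
    sy = np.zeros_like(x)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            sx += np.roll(np.roll(gx, dr, 0), dc, 1)
            sy += np.roll(np.roll(gy, dr, 0), dc, 1)
    return np.hypot(sx, sy)
```

On a vertical step edge the response concentrates along the step and is zero in the flat regions away from it.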
(2) Locate the five feature points of the face (two eye corners, two mouth corners, and the nose tip).
I. Locate the eye corners. First, in the approximate eye region determined by (1)I and (1)II on the pre-processed image, take horizontal and vertical gray projections; smooth the projection waveforms, take the valley positions as the vertical and horizontal coordinates of the eyeball center, and delimit a rectangular region around it as the eye position. Second, binarize the edge image of (1)III within this region with a threshold chosen by the maximum between-class variance method (Otsu's method). Third, detect connected regions in the binary image, eliminate regions with fewer than 20 connected points as interference, and verify the shape of the candidate regions. Finally, extract the leftmost (rightmost) point of the chosen eye connected region as the eye corner, taking the lowest one if several exist.
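The first stage (projection valleys as the eyeball center) can be sketched as follows; the 3-tap mean smoothing kernel is an assumed choice, not a value given by the patent:

```python
import numpy as np

def projection_valley(region):
    """Eyeball center as the valley of the smoothed horizontal and
    vertical gray projections: the dark pupil pulls both projection
    waveforms down at its row and column."""
    k = np.ones(3) / 3.0
    def smooth(sig):
        # edge-replicating pad so borders are not artificially darkened
        return np.convolve(np.pad(sig, 1, mode="edge"), k, mode="valid")
    h = smooth(region.mean(axis=1))   # horizontal projection: per row
    v = smooth(region.mean(axis=0))   # vertical projection: per column
    return int(np.argmin(h)), int(np.argmin(v))
```

The returned (row, col) then anchors the rectangular eye window in which the binarization and connected-region steps run.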
II. Locate the mouth corners. The method is similar to eye corner location, except that the gray variation of the mouth is less pronounced than that of the eyes, so a skin color transform precedes edge detection: a large number of lip samples are collected manually; the Cr and Cb chrominance values of all sample points are computed and treated as two-dimensional coordinates; and a Gaussian mixture probability model composed of two Gaussian models (GMM) is trained on all the sample points to obtain the model parameters. The color information of every point in the mouth region determined by step (1)II is substituted into the model to compute its lip probability; the probabilities are scaled to the interval 0-255 as a new gray map, and edge detection proceeds by the method of (1)III. The subsequent mouth corner extraction steps are identical to those for the eye corners.
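Evaluating the trained two-component model at a pixel's (Cr, Cb) can be sketched as below. The weights, means, and covariances used in any example are placeholders, not the patent's trained parameters:

```python
import numpy as np

def gmm2_pdf(x, weights, means, covs):
    """Density of a 2-component 2-D Gaussian mixture over (Cr, Cb);
    the patent trains such a model on hand-collected lip pixels and
    rescales the per-pixel densities to 0-255 as a new gray map."""
    x = np.asarray(x, dtype=np.float64)
    p = 0.0
    for w, mu, cov in zip(weights, means, covs):
        d = x - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        p += w * norm * np.exp(-0.5 * d @ inv @ d)
    return p
```

Pixels whose chrominance falls near a trained lip cluster score high, so the rescaled probability map shows the lips bright against skin, restoring the contrast the raw gray image lacks.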
III. Locate the nose tip. First locate the two nostrils: let h be the distance between the two located eye corners; according to the positional relation of the eye corners, mouth corners, and nostrils, take horizontal and vertical gray projections within the rectangular region of width h extending from 1.2h to 1.6h below the eye corner line. The valley of the horizontal projection gives the nostril ordinate, and the two valleys of the vertical projection give the abscissas of the two nostrils. Then search for the highlight within 0.3h above the nostrils as the nose tip position.
(3) Estimate the three deflection angles of the face
With the deflection angles not too large (within 20°) and good illumination, the preceding steps locate the five feature points A-E accurately, and the three deflection angles are computed with the following steps.
I. Coordinate pre-processing. Average the coordinates of points A and B to obtain the midpoint F. Take F as the coordinate origin, with the horizontal axis u pointing right and the vertical axis v pointing up, and transform the two-dimensional coordinates of all subsequent image points relative to this frame. Average the two mouth corner points D and E to obtain the midpoint G; together with the two eye corners and the nose tip, this gives the five feature points A, B, C, F, G used to compute the three deflection angles.
II. First determine a frontal image, setting its three deflection angles α, β, γ to 0; locate the five feature points A-E and compute and record the coordinates of A, B, C, F, G by the method of (3)I. This serves as the reference image, marked below with subscript 1, so the points are described as A(u_A1, v_A1), B(u_B1, v_B1), C(u_C1, v_C1), F(u_F1, v_F1), G(u_G1, v_G1).
III. For the face image acquired at any time, first locate the five feature points, then obtain the coordinates of the five points by the same method as (3)II, marked with subscript 2: A(u_A2, v_A2), B(u_B2, v_B2), C(u_C2, v_C2), F(u_F2, v_F2), G(u_G2, v_G2).
IV. Using the coordinates of the five corresponding points of the two images from steps (3)II and (3)III, apply the pinhole camera model and epipolar geometry to derive mathematically the pose of the face in image 2, obtaining the three deflection angles α, β, γ. Pinhole camera model:
s \hat{m} = K [R \mid T] \hat{M}
where s is a scale factor, \hat{m} = [u, v, 1]^T are the homogeneous coordinates of the image point, K is the camera's intrinsic parameter matrix, definable as diag(f, f, 1), T = [t_x, t_y, t_z]^T is the translation matrix, \hat{M} = [x, y, z, 1]^T are the homogeneous coordinates of the 3-D face point in space, and R is the rotation matrix built from α, β, and γ. The task of this step is to estimate the three angle values in R from the five feature points of the two known images, by means of some reasonable assumptions, without knowing \hat{M}.
Since only angles are involved, the arithmetic in the estimation algorithm appears entirely as differences and quotients, so the pinhole camera model parameters can be simplified as follows: s = 1, K = diag(1, 1, 1), T = [0, 0, 0]^T. Let:
M = \frac{v_{B2} - v_{A2}}{u_{B2} - u_{A2}}, \quad O = \frac{v_{G2} - v_{F2}}{u_{G2} - u_{F2}}, \quad P = \frac{u_{B1} - u_{A1}}{u_{B2} - u_{A2}}
c = \cos\gamma, \quad d = \sin\gamma, \quad h = \sin\beta
It can be derived that:
\gamma = \arctan(M)
\beta = \arccos\!\left(\frac{1}{Pc}\right)
\alpha = \arctan\!\left(\frac{c + Od}{Ohc - hd}\right)
Because β has a sign ambiguity in this computation that the four points A, B, F, G alone cannot resolve, point C is introduced, and the sign is determined from the position of C relative to the line through F and G.
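The closed-form expressions above can be collected into one function. The reference and current points are assumed already transformed to the F-centered (u, v) coordinates of step I; point C, which only fixes β's sign, is omitted in this sketch:

```python
import math

def pose_angles(ref, cur):
    """Closed-form (alpha, beta, gamma) in radians from points A, B
    (eye corners), F (eye-corner midpoint), and G (mouth-corner
    midpoint), given as (u, v) tuples; ref is the frontal reference
    (subscript 1), cur the current frame (subscript 2)."""
    M = (cur["B"][1] - cur["A"][1]) / (cur["B"][0] - cur["A"][0])
    O = (cur["G"][1] - cur["F"][1]) / (cur["G"][0] - cur["F"][0])
    P = (ref["B"][0] - ref["A"][0]) / (cur["B"][0] - cur["A"][0])
    gamma = math.atan(M)
    c, d = math.cos(gamma), math.sin(gamma)
    beta = math.acos(min(1.0, 1.0 / (P * c)))   # clamp guards rounding
    h = math.sin(beta)
    # alpha is undefined for an exactly frontal pose (h == 0)
    alpha = math.atan2(c + O * d, h * (O * c - d)) if h > 1e-9 else 0.0
    return alpha, beta, gamma
```

As a sanity check, a pure in-plane rotation of the frontal face by an angle g should yield gamma = g and beta = 0, since the eye-corner distance is unchanged.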
(4) Generate man-machine interaction information
The mouse position and operation mode are defined by the three angles α, β, γ. The position α = β = 0 is defined as the screen center; changes in α move the mouse up and down, changes in β move it left and right, and when an angle reaches 20° or more the mouse is pinned to the screen edge. Jumps of the γ angle define the mouse operation: a positive γ whose change between two consecutive frames lies between 3° and 8° is a left click, and beyond 8° a left double-click; a negative γ whose change lies between -3° and -8° is a right click, and beyond -8° a right double-click.
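A sketch of this mapping, with an assumed 1920x1080 screen (the patent does not fix a resolution) and the click rule simplified to use only the frame-to-frame change in γ:

```python
def pointer_pos(alpha_deg, beta_deg, width=1920, height=1080):
    """Map alpha/beta (degrees) to a pointer position: (0, 0) is the
    screen center, and 20 degrees or more pins the pointer to the edge
    (alpha drives the vertical axis, beta the horizontal)."""
    def span(angle, size):
        t = max(-20.0, min(20.0, angle)) / 20.0   # clamp to [-1, 1]
        return int(round((t + 1.0) / 2.0 * (size - 1)))
    return span(beta_deg, width), span(alpha_deg, height)

def click_event(gamma_prev, gamma_cur):
    """Map the jump in gamma between two consecutive frames (degrees)
    to a button action per the thresholds above; None means no action."""
    d = gamma_cur - gamma_prev
    if d > 8:
        return "left-double-click"
    if d >= 3:
        return "left-click"
    if d < -8:
        return "right-double-click"
    if d <= -3:
        return "right-click"
    return None
```

Changes smaller than 3° in magnitude produce no click, which leaves room for the ordinary head motion that drives the pointer.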
(5) Connect via USB to the PC to form the mouse device
A Cypress chip implements the USB interface on the device, which communicates with the PC in standard USB mouse mode. The mouse position and operation mode generated by the method above are sent to the PC in place of a conventional mouse's operation information, forming a visual mouse device based on face pose estimation.

Claims (2)

1. A man-machine interaction method based on face pose estimation, comprising the following steps:
A) acquiring the face digital image sequence, obtained through the optical lens and CMOS image sensor assembly, via the high-speed digital video channel of a DSP, the DSP completing the video processing algorithms;
B) pre-processing the face digital image:
first, image denoising: noise is suppressed by the nonlinear diffusion method, in which the edge map is computed from the variances of band-shaped regions in four directions of the target point's neighborhood, the region summations in the variance computation being realized with an integral image to reduce the amount of calculation; second, face location and region division: the approximate position of the face is obtained by template matching, and the face is then divided into four regions (left eye, right eye, nose, and mouth) using prior knowledge of face structure; finally, edge detection: the gradient magnitude and direction of each image point are examined, and according to the consistency of the gradient directions of the point and its eight neighbors, the gradients of all nine points are vector-summed, the modulus of the sum serving as the point's new gray value to yield the edge image;
C) locating five feature points of the face: the inner corners of the two eyes, the two mouth corners, and the nose tip:
1. locating the eye corners: first, in the preliminary eye region, the eyeball center is determined by the gray projection method, and a rectangular region marks the precise eye position; then, within this region, a threshold is determined by the maximum between-class variance method, the edge image is binarized, and the connected point count is constrained; finally, the leftmost (rightmost) point of the eye connected region is extracted as the eye corner;
2. locating the mouth corners: a mixture model containing two Gaussian functions is first trained on lip color samples, the lip probability of each point in the mouth region is computed, and after normalization it serves as the point's new gray information, to which the same method as for the eye corners is applied to extract the mouth corner positions;
3. locating the nose tip: the two nostril positions are first searched by the gray projection method in the nose region, then the highlight above the nostrils is searched as the nose tip position;
D) estimating the three deflection angles of the face:
1. coordinate transform: first, the midpoint of the two eye corners is computed and taken as the image origin, and all feature points are coordinate-transformed; second, the midpoint of the two mouth corner points is computed and recorded, yielding five new feature points: the two eye corners, the eye corner midpoint, the nose tip, and the mouth corner midpoint;
2. obtaining the five feature point coordinates of a frontal image and of a real-time image with deflection angles respectively, completing the coordinate transform, and establishing the correspondence;
3. using the coordinates of the five corresponding points of the two images, applying the pinhole camera model, making some reasonable simplifying assumptions that do not affect the result according to the characteristics of the algorithm, and finally deriving via epipolar geometry the pose of the face in the arbitrary image to obtain the three deflection angles;
E) generating man-machine interaction information:
defining the mouse position and operation mode by the three deflection angles, locating the mouse pointer with two of the angle values, defining mouse operations by the frame-to-frame variation of the third angle, and generating the man-machine interaction information;
F) connecting to the computer and realizing communication:
developing a USB interface and writing a driver according to the communication protocol of a standard USB mouse, sending the interaction information obtained above to the computer, and forming a new visual mouse device.
2. The man-machine interaction method based on face pose estimation according to claim 1, characterized in that: the five feature points are located in a frontal reference image and in a real-time image with deflection angles respectively, the two mouth corner points being located in order to obtain their midpoint for participation in the pose estimation; the man-machine interaction information is defined by the face deflection angles and communicated to the computer in USB mouse mode; and the visual mouse device is suitable for multimedia applications and for computer use by disabled persons.
CN2009101038842A 2009-05-19 2009-05-19 Man-machine interaction method based on estimation of human face posture Expired - Fee Related CN101561710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101038842A CN101561710B (en) 2009-05-19 2009-05-19 Man-machine interaction method based on estimation of human face posture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101038842A CN101561710B (en) 2009-05-19 2009-05-19 Man-machine interaction method based on estimation of human face posture

Publications (2)

Publication Number Publication Date
CN101561710A true CN101561710A (en) 2009-10-21
CN101561710B CN101561710B (en) 2011-02-09

Family

ID=41220526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101038842A Expired - Fee Related CN101561710B (en) 2009-05-19 2009-05-19 Man-machine interaction method based on estimation of human face posture

Country Status (1)

Country Link
CN (1) CN101561710B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408651B (en) * 2018-09-21 2020-09-29 神思电子技术股份有限公司 Face retrieval method based on face gesture recognition

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156537A (en) * 2010-02-11 2011-08-17 三星电子株式会社 Equipment and method for detecting head posture
CN102156537B (en) * 2010-02-11 2016-01-13 三星电子株式会社 A kind of head pose checkout equipment and method
CN102262706A (en) * 2010-05-24 2011-11-30 原相科技股份有限公司 Method for calculating ocular distance
CN104331152A (en) * 2010-05-24 2015-02-04 原相科技股份有限公司 Three-dimensional image interaction system
CN102262706B (en) * 2010-05-24 2014-11-05 原相科技股份有限公司 Method for calculating ocular distance
CN104331152B (en) * 2010-05-24 2017-06-23 原相科技股份有限公司 3-dimensional image interaction systems
CN102375533A (en) * 2010-08-19 2012-03-14 阳程科技股份有限公司 Cursor control method
CN102231093A (en) * 2011-06-14 2011-11-02 伍斌 Screen locating control method and device
CN102231093B (en) * 2011-06-14 2013-07-31 伍斌 Screen locating control method and device
CN102968180A (en) * 2011-12-02 2013-03-13 微软公司 User interface control based on head direction
CN102968180B (en) * 2011-12-02 2016-03-30 微软技术许可有限责任公司 Based on the user interface control of cephalad direction
US9530045B2 (en) 2012-03-26 2016-12-27 Tencent Technology (Shenzhen) Company Limited Method, system and non-transitory computer storage medium for face detection
WO2013143390A1 (en) * 2012-03-26 2013-10-03 腾讯科技(深圳)有限公司 Face calibration method and system, and computer storage medium
CN103149939B (en) * 2013-02-26 2015-10-21 北京航空航天大学 A kind of unmanned plane dynamic target tracking of view-based access control model and localization method
CN103149939A (en) * 2013-02-26 2013-06-12 北京航空航天大学 Dynamic target tracking and positioning method of unmanned plane based on vision
CN103279767A (en) * 2013-05-10 2013-09-04 杭州电子科技大学 Human-machine interaction information generation method based on multi-feature-point combination
CN103211605B (en) * 2013-05-14 2015-02-18 重庆大学 Psychological testing system and method
CN103211605A (en) * 2013-05-14 2013-07-24 重庆大学 Psychological testing system and method
CN103336577A (en) * 2013-07-04 2013-10-02 宁波大学 Mouse control method based on facial expression recognition
CN103336577B (en) * 2013-07-04 2016-05-18 宁波大学 A kind of mouse control method based on human face expression identification
CN103472915B (en) * 2013-08-30 2017-09-05 深圳Tcl新技术有限公司 reading control method based on pupil tracking, reading control device and display device
CN103605466A (en) * 2013-10-29 2014-02-26 四川长虹电器股份有限公司 Facial recognition control terminal based method
CN103593654A (en) * 2013-11-13 2014-02-19 智慧城市系统服务(中国)有限公司 Method and device for face location
WO2015070764A1 (en) * 2013-11-13 2015-05-21 智慧城市系统服务(中国)有限公司 Face positioning method and device
CN103593654B (en) * 2013-11-13 2015-11-04 智慧城市系统服务(中国)有限公司 A kind of method and apparatus of Face detection
AU2014350727B2 (en) * 2013-11-13 2017-06-29 Shenzhen Smart Security & Surveillance Service Robot Co., Ltd. Face positioning method and device
CN103942525A (en) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face optimal selection method based on video sequence
CN104780308A (en) * 2014-01-09 2015-07-15 联想(北京)有限公司 Information processing method and electronic device
CN103793693A (en) * 2014-02-08 2014-05-14 厦门美图网科技有限公司 Method for detecting face turning and facial form optimizing method with method for detecting face turning
CN106462738B (en) * 2014-05-20 2020-10-09 依视路国际公司 Method for constructing a model of a person's face, method and apparatus for analyzing a pose using such a model
CN106462738A (en) * 2014-05-20 2017-02-22 埃西勒国际通用光学公司 Method for constructing a model of the face of a person, method and device for posture analysis using such a model
US10380411B2 (en) 2014-05-20 2019-08-13 Essilor International Method for constructing a model of the face of a person, method and device for posture analysis using such a model
CN104123000A (en) * 2014-07-09 2014-10-29 昆明理工大学 Non-intrusive mouse pointer control method and system based on facial feature detection
CN104573657A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Blind driving detection method based on head lowing characteristics
CN104573657B (en) * 2015-01-09 2018-04-27 安徽清新互联信息科技有限公司 It is a kind of that detection method is driven based on the blind of feature of bowing
CN107123139A (en) * 2016-02-25 2017-09-01 夏立 2D to 3D facial reconstruction methods based on opengl
CN105701786A (en) * 2016-03-21 2016-06-22 联想(北京)有限公司 Image processing method and electronic equipment
CN105701786B (en) * 2016-03-21 2019-09-24 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN106774936A (en) * 2017-01-10 2017-05-31 上海木爷机器人技术有限公司 Man-machine interaction method and system
CN106774936B (en) * 2017-01-10 2020-01-07 上海木木机器人技术有限公司 Man-machine interaction method and system
CN106874861A (en) * 2017-01-22 2017-06-20 北京飞搜科技有限公司 A kind of face antidote and system
WO2018196370A1 (en) * 2017-04-25 2018-11-01 华南理工大学 Pattern recognition-based visual projection interaction system and interaction method
CN106991417A (en) * 2017-04-25 2017-07-28 华南理工大学 A kind of visual projection's interactive system and exchange method based on pattern-recognition
CN107122054A (en) * 2017-04-27 2017-09-01 青岛海信医疗设备股份有限公司 A kind of detection method and device of face deflection angle and luffing angle
CN110741377A (en) * 2017-06-30 2020-01-31 Oppo广东移动通信有限公司 Face image processing method and device, storage medium and electronic equipment
CN107424218A (en) * 2017-07-25 2017-12-01 成都通甲优博科技有限责任公司 A kind of sequence chart bearing calibration tried on based on 3D and device
CN107424218B (en) * 2017-07-25 2020-11-06 成都通甲优博科技有限责任公司 3D try-on-based sequence diagram correction method and device
CN108573218A (en) * 2018-03-21 2018-09-25 漳州立达信光电子科技有限公司 Human face data acquisition method and terminal device
CN111033508B (en) * 2018-04-25 2020-11-20 北京嘀嘀无限科技发展有限公司 System and method for recognizing body movement
US10997722B2 (en) 2018-04-25 2021-05-04 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for identifying a body motion
CN111033508A (en) * 2018-04-25 2020-04-17 北京嘀嘀无限科技发展有限公司 System and method for recognizing body movement
CN108905192A (en) * 2018-06-01 2018-11-30 北京市商汤科技开发有限公司 Information processing method and device, storage medium
CN109345305A (en) * 2018-09-28 2019-02-15 广州凯风科技有限公司 A kind of elevator electrical screen advertisement improvement analysis method based on face recognition technology
CN109671108A (en) * 2018-12-18 2019-04-23 重庆理工大学 A kind of single width multi-angle of view facial image Attitude estimation method arbitrarily rotated in plane
CN110647790A (en) * 2019-04-26 2020-01-03 北京七鑫易维信息技术有限公司 Method and device for determining gazing information
CN110097021A (en) * 2019-05-10 2019-08-06 电子科技大学 Face pose estimation based on MTCNN
CN110097021B (en) * 2019-05-10 2022-09-06 电子科技大学 MTCNN-based face pose estimation method
CN110610171A (en) * 2019-09-24 2019-12-24 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112069993A (en) * 2020-09-04 2020-12-11 西安西图之光智能科技有限公司 Dense face detection method and system based on facial features mask constraint and storage medium
CN112069993B (en) * 2020-09-04 2024-02-13 西安西图之光智能科技有限公司 Dense face detection method and system based on five-sense organ mask constraint and storage medium
CN112488032A (en) * 2020-12-11 2021-03-12 重庆邮电大学 Human eye positioning method based on nose-eye structure constraint
CN112488032B (en) * 2020-12-11 2022-05-20 重庆邮电大学 Human eye positioning method based on nose and eye structure constraint

Also Published As

Publication number Publication date
CN101561710B (en) 2011-02-09

Similar Documents

Publication Publication Date Title
CN101561710B (en) Man-machine interaction method based on estimation of human face posture
Jiang et al. Gesture recognition based on skeletonization algorithm and CNN with ASL database
Cheng et al. An image-to-class dynamic time warping approach for both 3D static and trajectory hand gesture recognition
Crowley et al. Vision for man machine interaction
CN105739702B (en) Multi-pose finger tip tracking for natural human-computer interaction
Geetha et al. A vision based dynamic gesture recognition of indian sign language on kinect based depth images
CN109472198A (en) A kind of video smiling face's recognition methods of attitude robust
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN108983980A (en) A kind of mobile robot basic exercise gestural control method
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
CN103995595A (en) Game somatosensory control method based on hand gestures
CN107392151A (en) Face image various dimensions emotion judgement system and method based on neutral net
WO2018076484A1 (en) Method for tracking pinched fingertips based on video
CN104038799A (en) Three-dimensional television-oriented gesture manipulation method
CN105069745A (en) face-changing system based on common image sensor and enhanced augmented reality technology and method
CN106503619B (en) Gesture recognition method based on BP neural network
Juan Gesture recognition and information recommendation based on machine learning and virtual reality in distance education
Wu et al. Appearance-based gaze block estimation via CNN classification
Li et al. A novel hand gesture recognition based on high-level features
CN105975906A (en) PCA static gesture recognition method based on area characteristic
CN104898971A (en) Mouse pointer control method and system based on gaze tracking technology
Sokhib et al. A combined method of skin-and depth-based hand gesture recognition.
CN108268125A (en) A kind of motion gesture detection and tracking based on computer vision
Lan et al. Data fusion-based real-time hand gesture recognition with Kinect V2
CN106940792A (en) The human face expression sequence truncation method of distinguished point based motion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110209

Termination date: 20130519