CN105487665B - Intelligent mobile service robot control method based on head pose recognition - Google Patents


Info

Publication number
CN105487665B
CN105487665B (application CN201510872912.2A)
Authority
CN
China
Prior art keywords
head
mobile service
service robot
head pose
characteristic point
Prior art date
Legal status
Active
Application number
CN201510872912.2A
Other languages
Chinese (zh)
Other versions
CN105487665A (en)
Inventor
徐国政
吕呈
朱博
高翔
陈盛
王强
Current Assignee
Nanjing Post and Telecommunication University
Original Assignee
Nanjing Post and Telecommunication University
Priority date
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201510872912.2A
Publication of CN105487665A
Application granted
Publication of CN105487665B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Abstract

The present invention provides an intelligent mobile service robot control method based on head pose recognition. The method detects facial feature points with a constrained local model (CLM) algorithm and estimates the current head pose from the geometric relationships among those points, so that a patient can steer the intelligent mobile service robot with head movements alone. The implementation comprises the following steps: train on head pose samples to obtain the facial feature point shape template and a local template for each feature point; acquire head images with a Kinect and locate the face; detect the facial feature points with the constrained local model algorithm; estimate the current head pose from the detected feature points; and issue different control commands to the mobile service robot according to the different head pose parameters. With this method, physically disabled users can reliably control an intelligent mobile service robot with their head, making travel easier for them.

Description

Intelligent mobile service robot control method based on head pose recognition
Technical field
The present invention relates to the field of machine-vision-based control, and in particular to an intelligent mobile service robot control method based on head pose recognition. It is applied in the interactive system of an intelligent mobile service robot: real-time head pose estimation based on a constrained local model (CLM) is performed within the interaction system, and the mobile service robot is controlled on that basis.
Background technology
With the development of society, population aging has become a challenge faced by developed and many developing countries alike. Projections indicate that over the twenty years from 2015 to 2035 the proportion of elderly people in China will double, reaching 20%. Many of the elderly have limited mobility due to illness, and the number of people with physical disabilities caused by disasters and accidents is also rising steadily, especially people with quadriplegia or arm disabilities, a considerable share of whom face severe restrictions on travel. To make travel easier and improve their quality of life, convenient mobility aids have in recent years become a focus of society and research institutions, and intelligent service robots in particular have become a central research object. The intelligent wheelchair, a representative intelligent mobile service robot, offers functions such as joystick control, limb control, navigation, obstacle avoidance, and rehabilitation. Joystick control performs well, but for people whose hands are disabled or who cannot move their hands because of quadriplegia, joystick control and limb-based methods such as gesture control are severely limited. Head pose control has therefore become a research hotspot as a novel mode of human-machine interaction.
Existing head pose recognition methods include:
1. Wearable motion sensor methods, which mount accelerometer and gyroscope sensors on the subject's head and infer the head pose from the sensors' motion data. Such methods are accurate but require wearable devices, so the user experience is poor.
2. LED calibration methods, which have the subject wear headgear carrying a number of LEDs, capture images of the headgear with a camera, and infer the current pose from the positions of the LEDs. These methods are likewise accurate but still require wearable devices.
3. Machine vision methods, which acquire head images with a camera and infer the head pose algorithmically. They are the current mainstream: they require no contact with the subject and offer a better user experience. Existing approaches mainly estimate the head pose from images with pattern recognition algorithms, such as traditional template matching, the now-popular random forest classifiers, and facial feature point geometry methods. Traditional CLM algorithms use only two-dimensional images and are therefore disturbed by varying illumination conditions.
Summary of the invention
The present invention aims to provide an intelligent mobile service robot control method based on head pose recognition. It estimates the head pose with a CLM method that uses both a two-dimensional image and a depth image: facial feature points are detected with the constrained local model algorithm, the head pose is estimated from the geometric relationships of the detected feature points, and the intelligent mobile service robot is then controlled accordingly.
The above purpose of the present invention is achieved by the technical features of the independent claims; the dependent claims develop the technical features of the independent claims in alternative or advantageous ways.
To achieve the above purpose, the present invention proposes an intelligent mobile service robot control method based on head pose recognition, which obtains the head pose with a constrained local model algorithm and controls the mobile service robot accordingly. The implementation comprises the following steps:
S1. Build the shape template of the facial feature points and the local feature template of each feature point from a head pose sample database.
In this step, the color and depth images in the sample database are aligned and reduced in dimensionality with PCA; feature extraction and learning then yield the shape template of the feature points and the local feature template of each feature point, and the corresponding SVMs are built for use in the subsequent matching step.
S2. Acquire the two-dimensional image and the depth image containing the face with an RGBD camera, and align them.
In this step a Kinect acquires color and depth images containing the face; both can be captured simultaneously at a resolution of 640*480 and a rate of 30 frames per second. Because the Kinect's color camera and depth camera are a certain distance apart, the two captured images must first be registered with a correction function.
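For illustration, this acquisition and registration step can be sketched with OpenCV's OpenNI2 capture backend. This is a minimal sketch assuming an OpenCV build with OpenNI2 support, not the patent's own code:

```python
import cv2

# Step S2: grab registered color and depth frames from a Kinect through
# OpenCV's OpenNI2 backend (requires an OpenCV build with OpenNI2 support).
cap = cv2.VideoCapture(cv2.CAP_OPENNI2)
cap.set(cv2.CAP_PROP_OPENNI_REGISTRATION, 1)   # align depth to the color image

while cap.grab():
    has_d, depth = cap.retrieve(None, cv2.CAP_OPENNI_DEPTH_MAP)  # 16-bit, mm
    has_c, color = cap.retrieve(None, cv2.CAP_OPENNI_BGR_IMAGE)  # 8-bit BGR
    if not (has_d and has_c):
        continue
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)  # CLM runs on the gray channel
    # ... hand (gray, depth) to the face detector and the CLM fitter ...
```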
S3. Detect the face location in the image with a Viola-Jones detector.
This method quickly locates the face in the captured RGB color image; the detected face is enclosed in a rectangle, preparing for the subsequent detection of the facial feature points.
S4. Detect the feature points on the detected face with the CLM algorithm.
In this step the positions of the facial feature points are searched within the detected face region: an initial estimate of the feature points is made first, then each estimated point is refined to a better position, iterating until all feature points reach their optimal positions, which completes the feature point detection.
S5. Estimate the head pose from the shape of the detected facial feature points.
In this step the current head pose is estimated from the shape of the feature points detected in the previous step. Using the head's pitch, roll, and rotation angles, the estimated pose is classified into five categories: neutral pose, turn head left, turn head right, raise head, and lower head.
S6. Control the mobile service robot according to the head pose recognition result.
In this step the PC sends different control commands to the mobile service robot according to the current head pose. The motion states corresponding to the five poses are: stop, turn left, turn right, move forward, and move backward. The computer sends the command corresponding to the recognized head pose to the DSP controller over a serial port, and the DSP forwards the control command to the motor drive to control the motion of the intelligent wheelchair.
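Pulling the six steps together, the runtime can be pictured as the loop below. This is an illustrative sketch only: grab_frames, detect_face, fit_clm, estimate_pose, the serial port name, and the one-byte command protocol are placeholder assumptions rather than interfaces defined by the patent.

```python
import serial  # pyserial

# Hypothetical one-byte protocol between the PC and the DSP motor controller.
COMMANDS = {"stop": b"S", "left": b"L", "right": b"R",
            "forward": b"F", "backward": b"B"}

dsp = serial.Serial("/dev/ttyUSB0", 9600)    # port and baud rate are assumptions

while True:
    gray, depth = grab_frames()              # S2: aligned gray + depth images
    face = detect_face(gray)                 # S3: Viola-Jones bounding box
    if face is None:
        dsp.write(COMMANDS["stop"])          # no face in view: fail safe, stop
        continue
    points = fit_clm(gray, depth, face)      # S4: CLM feature point fitting
    motion = estimate_pose(points, depth)    # S5: one of the five pose labels
    dsp.write(COMMANDS[motion])              # S6: forward the command to the DSP
```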
It should be appreciated that the foregoing concepts, and all additional concepts described in greater detail below, can be viewed as part of the subject matter of this disclosure as long as such concepts are not mutually inconsistent. In addition, all combinations of the claimed subject matter are considered part of the subject matter of this disclosure.
The foregoing and other aspects, embodiments, and features taught by the present invention can be more fully appreciated from the following description in conjunction with the accompanying drawings. Additional aspects, such as the features and/or advantageous effects of the illustrative embodiments, will be apparent from the description below or learned through practice of specific implementations in accordance with the teachings of the present invention.
Description of the drawings
The accompanying drawings are not intended to be drawn to scale. In the drawings, identical or nearly identical components appearing in the various figures may be denoted by the same numeral. For clarity, not every component is labeled in every figure. Embodiments of various aspects of the invention will now be described by example with reference to the drawings, in which:
Fig. 1 is the structure diagram of the intelligent mobile service robot and a schematic view of the mounting position of the Kinect camera.
Fig. 2 is the architecture diagram of the intelligent mobile service robot control system.
Fig. 3 is a schematic diagram of the head pose angles.
Fig. 4 is the flow chart of the intelligent mobile service robot control method based on head pose recognition.
Fig. 5 is a schematic diagram of the calibration of the facial feature points.
Fig. 6 is a schematic diagram of face location detection.
Fig. 7 is a schematic diagram of head pose estimation from the facial feature points.
Fig. 8 shows the facial feature point detection results and the head pose results.
Specific embodiments
To better understand the technical content of the present invention, specific embodiments are described below in conjunction with the accompanying drawings.
Various aspects of the present invention are described in this disclosure with reference to the accompanying drawings, which show a number of illustrative embodiments. The embodiments of this disclosure are not intended to cover all aspects of the invention. It should be appreciated that the various designs and embodiments presented above and described in more detail below can be implemented in any of many ways, because the designs and embodiments disclosed herein are not limited to any particular embodiment; some of the disclosed aspects may be used alone or in any appropriate combination with other disclosed aspects.
Fig. 1 is the structure diagram of the intelligent mobile service robot. The intelligent mobile service robot 100 has a main body and components arranged on it, including: headrest 101, Kinect camera 102, PC controller 103, joystick 104, motor 105, battery 106, front wheels 107, rear wheels 108, and anti-tip wheels 109. As shown, the Kinect camera 102 is mounted about 50 cm in front of the user's head, ensuring that it faces the head directly and captures the entire head in the acquired image; at a distance of about 50 cm it acquires good color and depth images.
It should be appreciated that in Fig. 1 the aforementioned headrest 101, PC controller 103, joystick 104, motor 105, battery 106, front wheels 107, rear wheels 108, and anti-tip wheels 109 are common fixtures of an intelligent mobile service robot, and their specific configuration, function, and/or effect are not described in detail here. The installation positions of these components shown in Fig. 1 and/or their combinations are merely exemplary; where needed or necessary, the make, combination, and/or installation position of these components can be arbitrary.
Fig. 2 illustrates the architecture of the intelligent mobile service robot control system, which comprises an image acquisition module, an image processing module, a robot control module, and the intelligent mobile service robot itself. The image acquisition module is the Kinect camera, the image processing module runs on the PC controller, and the robot control module uses a DSP; the PC controller drives the robot's motion according to the head pose information recognized after image processing.
Fig. 3 shows a schematic diagram of the head pose angles, which are divided into the pitch angle, the rotation angle, and the roll angle.
Fig. 4 is the flow chart of the intelligent mobile service robot control method based on head pose recognition according to certain embodiments of the invention. The steps are as follows:
S1. Build the shape template of the facial feature points and the local feature templates of the individual feature points;
S2. Acquire the two-dimensional image and the depth image of the head and align them;
S3. Detect the face location in the acquired image;
S4. Detect the feature points on the face with the CLM algorithm;
S5. Estimate the head pose from the geometric relationships of the detected facial feature points;
S6. Control the mobile service robot according to the head pose recognition result.
An exemplary realization of the aforementioned intelligent mobile service robot control method based on head pose recognition is described in more detail below with reference to the drawings.
In step S1, the shape template of the facial feature points and the local feature templates of the individual feature points are built as follows:
S11. Collect n head pose training samples. Each sample must contain a reasonably clear face image and comprises a gray image and a depth image, both at a resolution of 640*480 and already aligned.
S12. Calibrate the feature points on every sample. As shown in Fig. 5, m feature points are marked on each sample, each feature point having a fixed serial number. The coordinates of the m feature points are concatenated into a vector that serves as the shape vector of the sample:
$$S_i = \{x_{i1}, y_{i1}, x_{i2}, y_{i2}, \ldots, x_{im}, y_{im}\}$$
where $(x_{ij}, y_{ij})$ is the j-th feature point of the i-th sample.
All n face samples are calibrated in this way, yielding n shape vectors. Representative points should be chosen for calibration wherever possible, such as at least one of the eye corners, mouth corners, nose tip, and chin shown in the figure.
S13. Normalize and register the n shape vectors with Procrustes analysis, eliminating the influence of translation, scaling, and rotation;
S14. Extract the principal components of the shape vectors with PCA;
S15. For each feature point, train a local classifier for that point on the corresponding image patches.
In some examples, the normalization and registration of the n constructed shape vectors with Procrustes analysis in the aforementioned step S13 comprises the following steps:
S131. First eliminate the influence of translation: subtract the mean position of all feature points of the face from each feature point, so that every sample is centered at the origin:
$$x_{ij} \leftarrow x_{ij} - \frac{1}{m}\sum_{k=1}^{m} x_{ik}, \qquad y_{ij} \leftarrow y_{ij} - \frac{1}{m}\sum_{k=1}^{m} y_{ik}$$
S132. With all sample shapes centered at the origin, the sample set is $S = (S_1, S_2, S_3, \ldots, S_n)$.
S133. Take $S_1$ as the reference shape and align all other shapes to $S_1$ with suitable Euclidean similarity transformations, obtaining a new set of shapes after alignment:
$$S' = (S_1', S_2', S_3', \ldots, S_n')$$
S134. Compute the mean shape of the new shape set:
$$\bar{S} = \frac{1}{n}\sum_{i=1}^{n} S_i'$$
S135. Align the new mean shape $\bar{S}$ with $S_1$ to obtain $\bar{S}'$.
S136. Select suitable transformations for $(S_2', S_3', \ldots, S_n')$ and align them with $\bar{S}'$.
S137. Compute the new mean shape again. If the change between the new mean shape and the previous mean shape is smaller than a given threshold, the iteration is considered to have converged; otherwise go to S134.
In this way, by aligning the face shape vectors, the different shape vectors are unified in the same coordinate system and become comparable in position, size, and rotation, so that the next stage of analysis can proceed.
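For concreteness, the alignment loop of S131-S137 might be sketched in numpy as follows, assuming the calibrated points arrive as an (n, m, 2) array; the SVD-based similarity fit is an implementation choice, not something the patent prescribes:

```python
import numpy as np

def align(shape, ref):
    """Similarity-align a centered (m, 2) shape to a centered reference shape."""
    u, sig, vt = np.linalg.svd(shape.T @ ref)   # 2x2 cross-covariance
    rot = u @ vt                                # optimal rotation (ignores reflections)
    scale = sig.sum() / (shape ** 2).sum()      # optimal isotropic scale
    return scale * shape @ rot

def procrustes(shapes, tol=1e-6, max_iter=100):
    """Iterative Procrustes alignment of an (n, m, 2) array of shapes (S131-S137)."""
    shapes = shapes - shapes.mean(axis=1, keepdims=True)  # S131: remove translation
    mean = shapes[0]                                      # S133: S1 as first reference
    for _ in range(max_iter):
        aligned = np.stack([align(s, mean) for s in shapes])
        new_mean = aligned.mean(axis=0)                   # S134: mean of aligned set
        new_mean = align(new_mean, shapes[0])             # S135: anchor the mean to S1
        if np.linalg.norm(new_mean - mean) < tol:         # S137: mean has stabilized
            break
        mean = new_mean
    return aligned, mean
```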
In some examples, the aforementioned step S14 extracts the principal components of the shape vectors with PCA as follows:
S141. Compute the mean of the shape vector set S after Procrustes analysis:
$$\bar{S} = \frac{1}{n}\sum_{i=1}^{n} S_i$$
S142. Compute the covariance matrix of the shape vector set:
$$C = \frac{1}{n}\sum_{i=1}^{n} (S_i - \bar{S})(S_i - \bar{S})^T$$
S143. Perform an eigenvalue decomposition of the covariance matrix to obtain the eigenvectors $p_i$ and their corresponding eigenvalues, and express the difference vector $dS_i$ between each shape vector and the mean shape vector as a linear combination of the principal components:
$$dS_i = \lambda_{i1} p_1 + \lambda_{i2} p_2 + \cdots + \lambda_{i,2m} p_{2m}$$
S144. Let $P = (p_1, p_2, \ldots, p_{2m})$ and $\lambda_i = (\lambda_{i1}, \lambda_{i2}, \ldots, \lambda_{i,2m})^T$, which gives:
$$dS_i = P\lambda_i$$
S145. Select the eigenvectors $P_s$ corresponding to the first t eigenvalues as the principal axes, which gives:
$$S \approx \bar{S} + P_s b_s$$
where $b_s$ is the weight vector; changing its components produces new shapes, and restricting each component to a certain interval keeps the new shape close to the sample shapes. Under normal circumstances the components of $b_s$ are bounded by
$$-3\sqrt{\lambda_i} \le b_{s,i} \le 3\sqrt{\lambda_i}$$
In this way, any shape in the sample set can be approximately represented by a linear weighted combination of the mean shape and the t eigenvectors.
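The model construction of S141-S145 can be sketched compactly in numpy; retaining 95% of the variance to choose t, and clamping each weight to three times the square root of its eigenvalue, are conventional ASM choices assumed here:

```python
import numpy as np

def build_shape_model(S, variance_kept=0.95):
    """PCA shape model (S141-S145) from an (n, 2m) array of aligned shape vectors."""
    mean = S.mean(axis=0)                        # S141: mean shape vector
    C = np.cov(S, rowvar=False)                  # S142: covariance matrix
    vals, vecs = np.linalg.eigh(C)               # S143: eigen-decomposition
    order = np.argsort(vals)[::-1]               # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    t = np.searchsorted(np.cumsum(vals) / vals.sum(), variance_kept) + 1
    return mean, vecs[:, :t], vals[:t]           # S145: first t principal axes

def synthesize(mean, Ps, lam, bs):
    """New shape from weights bs, clamped to the conventional +/-3*sqrt(lambda)."""
    bs = np.clip(bs, -3.0 * np.sqrt(lam), 3.0 * np.sqrt(lam))
    return mean + Ps @ bs
```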
In some examples, the local classifier of a single feature point in the aforementioned step S15 is constructed as follows:
S151. Extract a p*p region around each feature point and write the gray values of this region, in order, as a column vector that serves as the gray local feature of the point; for the depth image, write the depth values of the region as a column vector in the same way to form the depth local feature.
S152. There are n samples in total, each with m feature points, so each feature point has 2n local features: n gray local features and n depth local features, which serve as its correct (positive) features. Then w local features are randomly sampled at other positions of the gray and depth images to serve as error (negative) features, so each feature point has n+w gray local features and n+w depth local features, containing both correct and incorrect examples. Taking the gray features first, the training sample set of a feature point is
$$\{x^{(1)}, x^{(2)}, \ldots, x^{(n+w)}\}$$
where each training sample $x^{(i)}$ is the corresponding p*p patch flattened into a local feature vector.
S153. The output of the SVM is $y^{(i)} \in \{-1, 1\}$, where 1 marks a correct sample and -1 an incorrect one. The output of the SVM is expressed through the inner products between the input data and the support vectors:
$$y(x) = \sum_{j=1}^{N_s} \alpha_j \left( x_j \cdot x \right) + b$$
where $x_j$ denotes a support vector (a subset of the training samples), $\alpha_j$ the weight of the support vector, $N_s$ the number of support vectors, and $b$ the bias.
S154. The output of the SVM can equivalently be expressed as a linear function of the input vector:
$$y^{(i)} = w^T x^{(i)} + \theta$$
where $w$ holds the weights of the input elements and $\theta$ is the bias; both can be found from the input data set and the output function.
S155. Repeat the above steps for the gray local feature sets of all m feature points, obtaining m linear SVMs.
S156. Train the depth local feature sets of the m feature points in the same way, obtaining another m linear SVMs.
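Training one such local classifier can be sketched with scikit-learn's LinearSVC, assuming the positive and negative patches have already been cropped; the regularization constant is an assumed hyperparameter:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_point_classifier(patches_pos, patches_neg):
    """Linear SVM for one feature point (S151-S154) from cropped p*p patches."""
    X = np.vstack([p.ravel() for p in patches_pos + patches_neg]).astype(float)
    y = np.hstack([np.ones(len(patches_pos)), -np.ones(len(patches_neg))])
    svm = LinearSVC(C=1.0).fit(X, y)             # C=1.0 is an assumed hyperparameter
    # Keep only the linear form y = w.x + theta, so the detection-time response
    # map can be computed with a single correlation over the search window.
    return svm.coef_.ravel(), svm.intercept_[0]

# S155/S156: one gray SVM and one depth SVM per feature point -> 2m models, e.g.
# gray_models = [train_point_classifier(pos, neg) for pos, neg in gray_sets]
# depth_models = [train_point_classifier(pos, neg) for pos, neg in depth_sets]
```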
In step S2, the two-dimensional image and the depth image are acquired by the Kinect camera and sent to the computer for processing. The Kinect acquires color and depth images simultaneously; because there is a certain distance between the RGB camera and the depth camera, the two captured images do not correspond exactly and must be registered. The registration is performed with functions from the open-source OPENNI library, and the RGB image is converted to a gray image.
In step S3, the computer processes the acquired image with a Viola-Jones detector and quickly obtains the position of the face; the detected face is assumed to be enclosed in a rectangle, as shown in Fig. 6.
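A sketch of this step with OpenCV's bundled Viola-Jones cascade; the stock frontal-face model and the detection parameters stand in for whatever detector configuration the authors actually used:

```python
import cv2

# Step S3 with OpenCV's stock Viola-Jones cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Return the largest detected face rectangle (x, y, w, h), or None."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(80, 80))
    if len(faces) == 0:
        return None
    # The wheelchair user fills most of the frame, so keep the largest box.
    return max(faces, key=lambda r: r[2] * r[3])
```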
In step S4, the feature points are detected on the detected face with the CLM algorithm. In some examples the detection proceeds as follows:
S41. Make an initial estimate of the facial feature points by placing the mean face shape at the face position detected in step S3.
S42. Extract an image block of size (t+p/2)*(t+p/2) around the current position of each feature point: a gray block from the gray image and a depth block from the depth image, where t means that the best feature point position is searched within a t*t range around the current position and p is the size of the local patches used in training.
S43. Taking the gray block first, pass the extracted block through the feature point's linear SVM to obtain a response map R(x, y) that expresses, for every point, the probability that the surrounding image patch matches the template.
S44. Fit a quadratic function r(x, y) to R(x, y).
S45. Find the global maximum of the quadratic function r(x, y); the p*p region around that point is the most similar to the template. Find the best matching point in the depth block in the same way, and take the midpoint of the gray-image and depth-image best matches as the next optimal position of this feature point.
S46. Find the next optimal position of every feature point, completing one iteration.
S47. Repeat the iterative process S42-S46 until the displacement of the feature points between two successive iterations falls below a threshold or a set number of iterations is reached; at that point all feature points are considered to have found their best positions.
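One S42-S45 update for a single feature point might look as follows. This is a condensed sketch: the window arithmetic, the quadratic basis for the fit, and the use of the linear SVM weights as a correlation filter are our reading of the steps above, not code from the patent:

```python
import numpy as np
from scipy.signal import correlate2d

def local_search(img, cx, cy, w, theta, p, t):
    """One S42-S45 update of a feature point at (cx, cy) on one image channel."""
    half = (t + p) // 2                          # assumes the window fits in the image
    window = img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    kernel = w.reshape(p, p)                     # SVM weights reused as a filter
    R = correlate2d(window, kernel, mode="valid") + theta   # S43: response map R(x, y)
    # S44: least-squares fit of r(x, y) = a0 + a1*x + a2*y + a3*x^2 + a4*y^2
    ys, xs = np.mgrid[0:R.shape[0], 0:R.shape[1]]
    A = np.column_stack([np.ones(R.size), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, ys.ravel() ** 2])
    a, *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)
    # S45: maximum of the quadratic (assumes a concave fit, a3 < 0 and a4 < 0)
    bx, by = -a[1] / (2 * a[3]), -a[2] / (2 * a[4])
    return cx - half + p // 2 + bx, cy - half + p // 2 + by

# The gray and depth channels each yield a best point; their midpoint becomes the
# feature's next position (S45), and the sweep over all points repeats (S46-S47).
```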
In step S5, the head pose is estimated from the detected facial feature points. As shown in Fig. 3, the head pose is divided into the roll angle, the rotation angle, and the pitch angle. The estimation of each angle is described in turn below:
S51. As shown in Fig. 7(a), the head presents a certain roll angle. Let the two eye center coordinates be $(x_1, y_1)$ and $(x_2, y_2)$; the head roll angle can then be estimated from the angle between the eye line and the horizontal:
$$\alpha = \arctan\left(\frac{y_2 - y_1}{x_2 - x_1}\right)$$
S52. As shown in Fig. 7(b), the head presents a certain rotation angle. Let the two outer eye corner coordinates be $(x_1, y_1)$ and $(x_2, y_2)$ and the nose tip coordinate be $(x_3, y_3)$; the angle difference $\beta_1$ between the nose tip and the two outer eye corners is computed from these points.
The angle difference $\beta_2$ between the nose tip and the two inner eye corners and the angle difference $\beta_3$ between the nose tip and the two mouth corners are computed in the same way, and the average of the three angles is taken as the estimate of the head rotation angle $\beta$:
$$\beta = \frac{\beta_1 + \beta_2 + \beta_3}{3}$$
S53. As shown in Fig. 7(c), the head presents a certain pitch angle. The pitch angle is difficult to estimate from the feature point coordinates alone, so the depth data are needed. In the neutral pose the eyes and the mouth lie in the same vertical plane, i.e., they are the same distance from the camera. When the head is raised, let the eye-to-camera distance be $d_1$ and the mouth-to-camera distance be $d_2$; with the average head radius r, the midpoint of the eye line $(x_1, y_1)$, and the lip center $(x_2, y_2)$, the pitch angle is then estimated from the depth difference between the eyes and the mouth.
The detected facial feature points and the head pose results are shown in Fig. 8.
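The three angle estimates can be sketched as below. The yaw asymmetry measure and the arcsin pitch formula are reconstructions (the patent describes the quantities involved without printing these exact formulas), and the default head radius is an assumption:

```python
import numpy as np

def head_angles(eye_l, eye_r, nose, d1, d2, r=90.0):
    """Roll/yaw/pitch in degrees from 2D landmarks plus eye/mouth depths in mm."""
    eye_l, eye_r, nose = (np.asarray(q, float) for q in (eye_l, eye_r, nose))

    # S51: roll = angle between the eye line and the horizontal
    roll = np.degrees(np.arctan2(eye_r[1] - eye_l[1], eye_r[0] - eye_l[0]))

    # S52 (outer-eye pair only; the patent averages three such pairs): yaw from
    # the asymmetry of the angles the nose tip forms with the two eye corners
    up = np.array([0.0, -1.0])                  # image "up" direction
    def angle_to(q):
        d = q - nose
        return np.arccos(d @ up / np.linalg.norm(d))
    yaw = np.degrees(angle_to(eye_l) - angle_to(eye_r))

    # S53: pitch from the eye/mouth depth difference and an assumed head radius
    pitch = np.degrees(np.arcsin(np.clip((d2 - d1) / r, -1.0, 1.0)))
    return roll, yaw, pitch

# Example: eyes level, nose centered, mouth 20 mm farther away than the eyes
print(head_angles((200, 200), (280, 200), (240, 240), d1=800.0, d2=820.0))
```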
In step S6, the motion of the mobile service robot is controlled by the estimated head pose angles as follows (a sketch of the resulting mapping is given after this list):
S61. If the look-up angle exceeds a given threshold $\chi_y$, the forward command is sent;
if the look-down angle exceeds a given threshold $\chi_d$, the backward command is sent.
S62. If the head turns left by more than a given threshold $\beta_l$, the turn-left command is sent;
if the head turns right by more than a given threshold $\beta_r$, the turn-right command is sent.
S63. If all angles are below their corresponding thresholds, the pose is regarded as neutral and the stop command is sent.
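The S61-S63 mapping reduces to a small decision function that plugs into the runtime loop sketched earlier; the threshold values are assumptions, since the patent leaves them as tunable parameters:

```python
# Illustrative thresholds in degrees; the patent treats these as tunable.
PITCH_UP, PITCH_DOWN, YAW_LEFT, YAW_RIGHT = 15.0, -15.0, 20.0, -20.0

def pose_to_motion(roll, yaw, pitch):
    """Map the estimated head angles to one of the five motions (S61-S63)."""
    if pitch > PITCH_UP:
        return "forward"       # S61: head raised  -> move forward
    if pitch < PITCH_DOWN:
        return "backward"      # S61: head lowered -> move backward
    if yaw > YAW_LEFT:
        return "left"          # S62: head turned left  -> turn left
    if yaw < YAW_RIGHT:
        return "right"         # S62: head turned right -> turn right
    return "stop"              # S63: every angle below threshold -> stop
```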
With the control method of the above embodiments, as shown in Fig. 1 and Fig. 2, in an indoor environment with moderate illumination, the user sits upright in the intelligent wheelchair with the Kinect camera placed 40-50 cm in front of the head. With the wheelchair's head pose control enabled: the user raises the head beyond a certain angle and the wheelchair moves forward; lowers the head beyond a certain angle and the wheelchair moves backward; turns the head left beyond a certain angle and the wheelchair turns left; turns the head right beyond a certain angle and the wheelchair turns right; keeps the neutral pose, i.e., faces straight ahead, and the wheelchair stops.
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. A person of ordinary skill in the art to which the invention belongs can make various changes and modifications without departing from the spirit and scope of the invention. The scope of protection of the invention is therefore defined by the appended claims.

Claims (3)

1. An intelligent mobile service robot control method based on head pose recognition, characterized in that the method obtains the head pose with a constrained local model algorithm and controls the mobile service robot accordingly, the implementation comprising the following steps:
Step 1: building the shape template of the facial feature points and the local feature template of each feature point from a head pose sample database;
Step 2: acquiring the two-dimensional image and the depth image containing the face with an RGBD camera and aligning them;
Step 3: detecting the face location in the image;
Step 4: detecting the feature points on the face with the CLM algorithm;
Step 5: estimating the head pose from the shape of the detected facial feature points;
Step 6: controlling the mobile service robot according to the head pose recognition result;
in said step 1, the shape template of the facial feature points and the local feature template of each feature point are built separately, specifically comprising the following steps:
(1) constructing the sample set of feature point shapes, performing Procrustes analysis to eliminate the influence of translation, rotation, and scaling, and performing PCA dimensionality reduction;
(2) for each feature point, training a local classifier for that point on the corresponding image patches;
in said step 4, the positions of all facial feature points are detected by estimating the initial positions of the feature points and iterating them to their optimal positions with the CLM algorithm;
in said step 5, the roll angle is estimated from the angle between the eye line and the horizontal, the rotation angle is estimated from the angle differences between the nose and the two eyes, and the pitch angle is estimated from the respective distances of the lips and the eyes from the camera.
2. The intelligent mobile service robot control method based on head pose recognition according to claim 1, characterized in that, in the aforementioned step 3, the face location is quickly detected in the image with a Viola-Jones detector.
3. The intelligent mobile service robot control method based on head pose recognition according to claim 1, characterized in that, in the aforementioned step 6, forward and backward motion is controlled by the pitch angle of the head, left and right turning is controlled by the rotation angle of the head, and the control commands are sent to the lower-level machine controller over a serial port.
CN201510872912.2A 2015-12-02 2015-12-02 Intelligent mobile service robot control method based on head pose recognition Active CN105487665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510872912.2A CN105487665B (en) 2015-12-02 2015-12-02 Intelligent mobile service robot control method based on head pose recognition


Publications (2)

Publication Number Publication Date
CN105487665A CN105487665A (en) 2016-04-13
CN105487665B (en) 2018-09-07

Family

ID=55674689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510872912.2A Active CN105487665B (en) 2015-12-02 2015-12-02 Intelligent mobile service robot control method based on head pose recognition

Country Status (1)

Country Link
CN (1) CN105487665B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968846A (en) * 2010-07-27 2011-02-09 上海摩比源软件技术有限公司 Face tracking method
CN103870843A (en) * 2014-03-21 2014-06-18 杭州电子科技大学 Head posture estimation method based on multi-feature-point set active shape model (ASM)
CN104463100A (en) * 2014-11-07 2015-03-25 重庆邮电大学 Intelligent wheelchair man-machine interaction system and method based on facial expression recognition mode


Also Published As

Publication number Publication date
CN105487665A (en) 2016-04-13


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160413

Assignee: Zhangjiagang Institute of Zhangjiagang

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: X2019980001251

Denomination of invention: Method for controlling intelligent mobile service robot based on head posture recognition

Granted publication date: 20180907

License type: Common License

Record date: 20191224