CN105487665A - Method for controlling intelligent mobile service robot based on head posture recognition - Google Patents

Method for controlling intelligent mobile service robot based on head posture recognition

Info

Publication number
CN105487665A
CN105487665A (application CN201510872912.2A)
Authority
CN
China
Prior art keywords
mobile service
service robot
head
intelligent mobile
head pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510872912.2A
Other languages
Chinese (zh)
Other versions
CN105487665B (en)
Inventor
徐国政
吕呈
朱博
高翔
陈盛
王强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201510872912.2A
Publication of CN105487665A
Application granted
Publication of CN105487665B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Abstract

The invention provides a method for controlling an intelligent mobile service robot based on head pose recognition. Facial feature points are detected with a constrained local model (CLM) algorithm, and the current head pose is estimated from the geometric relationships of those feature points, so that a patient can control the motion of the intelligent mobile service robot with head movements alone. The method specifically comprises the following steps: training on head pose samples to obtain a facial feature point shape template and a local template for each feature point; acquiring a head image with Kinect and locating the face; detecting the facial feature points with the CLM algorithm; estimating the current head pose from the detected feature points; and issuing different control commands to the mobile service robot according to the head pose parameters. The method enables people with physical disabilities to control an intelligent mobile service robot reliably using only their head, making it easier for them to get around.

Description

Method for controlling an intelligent mobile service robot based on head pose recognition
Technical field
The present invention relates to the field of machine-vision-based control, and in particular to a method for controlling an intelligent mobile service robot based on head pose recognition. The method is applied in the interaction system of an intelligent mobile service robot; it realizes real-time head pose estimation based on the constrained local model (CLM) and, on that basis, realizes control of the intelligent mobile service robot.
Background art
With the development of society, population aging has become a challenge faced by developed countries and parts of the developing world alike. Projections indicate that over the twenty years from 2015 to 2035, the proportion of elderly people in China's population will double, reaching 20%. Many of the elderly have limited mobility due to illness, and the number of people disabled by disasters and accidents is also growing, especially people with high paraplegia or arm deformities; for a considerable portion of them, travel is severely restricted. To improve their quality of life by making travel easier, mobility aids have in recent years become a focus of public and research attention, and intelligent service robots in particular have become a hot research topic. The intelligent wheelchair, a representative intelligent mobile service robot, offers functions such as joystick control, limb control, navigation, obstacle avoidance, and rehabilitation. Joystick control performs well, but for people whose hands are deformed or who cannot move their hands owing to high paraplegia, joystick control and limb-based methods such as gesture control are severely limited. Head-motion control has therefore become a research focus as a novel mode of human-machine interaction.
Existing head pose recognition methods include:
1. Wearable motion sensor methods. An accelerometer and a gyroscope are mounted on the subject's head, and the head pose is inferred from the received sensor motion data. These methods are accurate but require wearable devices, giving a poor user experience.
2. LED calibration methods. The subject wears a headgear fitted with several LEDs; a camera captures images of the headgear, and the current head position is inferred from the LED positions. This method is likewise accurate, but a wearable device is still required.
3. Machine vision methods. A camera captures head images and an algorithm infers the head pose from them. This is the current mainstream approach; it involves zero contact with the subject and gives a better user experience. Existing techniques mainly use pattern recognition algorithms to estimate the head pose from images, such as traditional template matching, the currently popular random forest classifiers, and methods based on the geometric relationships of facial feature points. The traditional CLM algorithm relies solely on two-dimensional images and is therefore susceptible to interference from varying illumination conditions.
Summary of the invention
The object of the present invention is to provide a method for controlling an intelligent mobile service robot based on head pose recognition. The head pose is estimated by a CLM method that uses both two-dimensional images and depth images: facial feature points are detected by the constrained local model (CLM) algorithm, the head pose is then estimated from the geometric relationships of the detected facial feature points, and control of the intelligent mobile service robot is implemented accordingly.
The above object of the invention is achieved by the technical features of the independent claim; the dependent claims develop the features of the independent claim in alternative or advantageous ways.
To achieve the above object, the present invention proposes a method for controlling an intelligent mobile service robot based on head pose recognition: based on the constrained local model algorithm, the head pose is obtained and used to control the mobile service robot. The specific implementation comprises the following steps:
S1. Build the shape template of the facial feature points and the local feature template of each feature point from a head pose sample library.
The purpose of this step is to align the color and depth images in the sample library and apply PCA dimensionality reduction, then perform feature extraction and learning to obtain the shape template of the feature points and the local feature template of each feature point, and build the corresponding SVMs, which are applied in the matching steps below.
S2. Acquire a two-dimensional image containing the face and a depth image via an RGB-D camera, and align them.
In this step Kinect is used to capture a color image and a depth image containing the face; it can acquire color and depth images simultaneously, both at 640×480 resolution and at 30 frames per second. Because Kinect's color camera and depth camera are a certain distance apart, the two captured images must first be aligned using a correction function.
S3. Detect the face location in the image using the Viola-Jones detector.
This method quickly locates the position of the face in the captured RGB color image; the detected face is enclosed by a rectangular bounding box, in preparation for the subsequent detection of facial feature points.
S4. Detect feature points on the detected face using the CLM algorithm.
In this step the positions of the facial feature points are searched for within the detected face region. First an initial estimate of the facial feature points is made; the initially estimated feature points are then iteratively moved to better positions, until all feature points reach their optimal positions, completing the facial feature point estimation.
S5. Estimate the head pose from the shape of the detected facial feature points.
In this step the current head pose is estimated from the shape of the facial feature points detected in the previous step. Three angles are estimated: the pitch angle, the roll angle, and the rotation (yaw) angle. The head pose is classified into five categories: neutral, turn left, turn right, head up, and head down.
S6. Control the mobile service robot according to the head pose recognition result.
In this step the PC sends different control commands to the mobile service robot according to the current head pose. The five head poses correspond respectively to the motion states: stop, turn left, turn right, advance, and retreat. The computer sends the command corresponding to the recognized head pose to the DSP controller over a serial port, and the DSP in turn sends control commands to the motor drives, thereby controlling the motion of the intelligent wheelchair.
It should be appreciated that all combinations of the foregoing concepts and of the additional concepts described in more detail below may be regarded as part of the subject matter of the present disclosure, provided such concepts are not mutually inconsistent. In addition, all combinations of the claimed subject matter are regarded as part of the subject matter of the present disclosure.
The foregoing and other aspects, embodiments, and features of the present teaching can be more fully understood from the following description taken in conjunction with the accompanying drawings. Features and/or beneficial effects of other additional aspects, such as the illustrative embodiments, will be apparent from the description below, or may be learned through practice of embodiments according to the present teaching.
Brief description of the drawings
The accompanying drawings are not intended to be drawn to scale. In the drawings, identical or nearly identical components illustrated in the various figures may be represented by the same reference numeral. For clarity, not every component is labeled in every figure. Embodiments of various aspects of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
Fig. 1 is a structural diagram of the intelligent mobile service robot and a schematic diagram of the Kinect camera mounting position.
Fig. 2 is an architecture diagram of the intelligent mobile service robot control system.
Fig. 3 is a schematic diagram of the head pose angles.
Fig. 4 is a flowchart of the method for controlling an intelligent mobile service robot based on head pose recognition.
Fig. 5 is a schematic diagram of the calibrated facial feature points.
Fig. 6 is a schematic diagram of face location detection.
Fig. 7 is a schematic diagram of head pose estimation from facial feature points.
Fig. 8 shows facial feature point detection results and head pose results.
Detailed description
For a better understanding of the technical content of the present invention, specific embodiments are described below in conjunction with the accompanying drawings.
Aspects of the present invention are described in this disclosure with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. The embodiments of the present disclosure are not intended to encompass all aspects of the present invention. It should be understood that the various concepts and embodiments presented above, and those described in more detail below, can be implemented in any of numerous ways, because the concepts and embodiments disclosed herein are not limited to any particular implementation. In addition, some aspects disclosed herein may be used alone, or in any appropriate combination with other aspects disclosed herein.
Fig. 1 is a structural diagram of the intelligent mobile service robot. The intelligent mobile service robot 100 has a main body and components arranged on it, including: headrest 101, Kinect camera 102, PC controller 103, joystick 104, motor 105, battery 106, front wheel 107, rear wheel 108, and anti-tip wheel 109. As shown in the figure, the Kinect camera 102 is mounted approximately 50 cm directly in front of the user's head, ensuring that the whole head falls within the captured picture; at a distance of about 50 cm both the color image and the depth image can be captured well.
It should be understood that the headrest 101, PC controller 103, joystick 104, motor 105, battery 106, front wheel 107, rear wheel 108, and anti-tip wheel 109 in Fig. 1 are conventional components of an intelligent mobile service robot, and their specific construction, function, and/or effects are not repeated here. The mounting positions and/or combinations of the components shown in Fig. 1 are merely exemplary; where needed or necessary, the construction, combination, and/or mounting positions of these components may be arranged in any manner.
Fig. 2 illustrates, by way of example, the architecture of the intelligent mobile service robot control system. The control system comprises an image acquisition module, an image processing module, a control module, and the intelligent mobile service robot itself. The image acquisition module uses the Kinect camera, the image processing module runs on the PC controller, and the control module uses a DSP; the head pose information obtained by the PC controller through image processing and recognition is used to control the motion of the intelligent mobile service robot.
Fig. 3 is a schematic diagram of the head pose angles, divided into the pitch angle, the rotation (yaw) angle, and the roll angle.
Fig. 4 is a flowchart of the method for controlling an intelligent mobile service robot based on head pose recognition according to certain embodiments of the present invention. The specific steps are as follows:
S1. Build the shape template of the facial feature points and the local feature template of each feature point;
S2. Acquire a two-dimensional image and a depth image of the head and align them;
S3. Detect the face location in the captured image;
S4. Detect feature points on the face using the CLM algorithm;
S5. Estimate the head pose from the geometric relationships of the detected facial feature points;
S6. Control the mobile service robot according to the head pose recognition result.
The exemplary implementation of the foregoing method for controlling an intelligent mobile service robot based on head pose recognition is described in more detail below with reference to the accompanying drawings.
In step S1, the specific steps for building the shape template of the facial feature points and the local feature template of each feature point are as follows:
S11. Collect n head pose training samples. Each sample must contain a reasonably clear face image and consists of two images, a grayscale image and a depth image, both at 640×480 resolution and aligned with each other.
S12. Calibrate the feature points on each sample. As shown in Fig. 5, m feature points are calibrated on each sample, each feature point carrying a fixed index. The coordinates of the m feature points are concatenated into a single vector, giving the shape vector of the sample:

S_i = \{x_{i1}, y_{i1}, x_{i2}, y_{i2}, \ldots, x_{im}, y_{im}\}

where (x_{ij}, y_{ij}) denotes the j-th feature point of the i-th sample.
Calibrating all n face samples yields n shape vectors in total. The calibrated feature points should be chosen to be as representative as possible, for example the eye corners, the mouth corners, the nose tip, and at least one point on the chin, as in the figure (an illustrative code sketch of this construction follows step S15 below).
S13. Normalize and align the n constructed shape vectors using Procrustes analysis, eliminating the effects of translation, scaling, and rotation;
S14. Extract the principal components of the shape vectors by PCA;
S15. For each feature point, train a local classifier for that feature point using the corresponding image patches.
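By way of illustration of the shape vector construction in S12 (not part of the original disclosure; the function name is hypothetical), the following Python sketch assembles the interleaved 2m-dimensional vector from calibrated landmark coordinates:

```python
import numpy as np

def landmarks_to_shape_vector(points):
    """Interleave m (x, y) landmark coordinates into one 2m-dimensional
    shape vector S_i = (x_i1, y_i1, x_i2, y_i2, ..., x_im, y_im)."""
    pts = np.asarray(points, dtype=np.float64)  # shape (m, 2)
    return pts.reshape(-1)                      # shape (2m,)

# Example with m = 4 calibrated landmarks (eye corners, nose tip, chin):
sample = [(120.0, 88.0), (180.0, 90.0), (150.0, 130.0), (150.0, 170.0)]
S_i = landmarks_to_shape_vector(sample)
print(S_i)  # [120.  88. 180.  90. 150. 130. 150. 170.]
```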
In some examples, the normalization and alignment of the n constructed shape vectors by Procrustes analysis in the aforementioned step S13 comprises the following steps:
S131. First eliminate the effect of translation: subtract from each feature point position the mean position of all feature points in that face, so that the sample set is centered at the origin:

(\bar{x}_i, \bar{y}_i) = \left( \frac{1}{m} \sum_{j=1}^{m} x_{ij}, \; \frac{1}{m} \sum_{j=1}^{m} y_{ij} \right)

(x'_{ij}, y'_{ij}) = (x_{ij} - \bar{x}_i, \; y_{ij} - \bar{y}_i)

S132. After the shapes of all samples have been centered at the origin, the sample set is S = (S_1, S_2, S_3, \ldots, S_n).
S133. Select S_1 as the reference shape and apply a suitable Euclidean similarity transformation to each of the other shapes to align it to S_1, obtaining a new set of shapes after alignment:

S' = (S'_1, S'_2, S'_3, \ldots, S'_n)

S134. Compute the mean shape of the new shape set:

\bar{S}' = \frac{1}{n} \sum_{i=1}^{n} S'_i

S135. Align the new mean shape to S_1.
S136. Apply suitable transformations to (S'_2, S'_3, \ldots, S'_n) to align them to the mean shape obtained in S135.
S137. Compute the mean shape again. If the change between the new mean shape and the previous mean shape is smaller than a given threshold, the iteration is considered converged; otherwise return to S134.
In this way the face shape vectors are aligned and the different shape vectors are unified under the same coordinate system, making them comparable in position, scale, and rotation, so that the next stage of analysis can proceed.
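In some instances, steps S131-S137 may be implemented as in the following numpy sketch (illustrative only; the patent provides no code, and the function names and the SVD-based similarity alignment are assumptions):

```python
import numpy as np

def align_to(ref, shape):
    """Best similarity (rotation + scale) alignment of `shape` onto `ref`.
    Both are (m, 2) arrays of points already centered at the origin."""
    u, _, vt = np.linalg.svd(ref.T @ shape)     # orthogonal Procrustes
    rotated = shape @ (u @ vt).T                # apply the optimal rotation
    scale = np.sum(ref * rotated) / np.sum(rotated * rotated)
    return scale * rotated

def procrustes_align(shapes, tol=1e-6, max_iter=100):
    """Generalized Procrustes analysis over (m, 2) shapes (S131-S137)."""
    shapes = [s - s.mean(axis=0) for s in shapes]        # S131: center at origin
    aligned = [align_to(shapes[0], s) for s in shapes]   # S133: align all to S_1
    for _ in range(max_iter):
        mean = np.mean(aligned, axis=0)                  # S134: mean shape
        mean = align_to(shapes[0], mean)                 # S135: re-anchor to S_1
        aligned = [align_to(mean, s) for s in aligned]   # S136: align to the mean
        new_mean = np.mean(aligned, axis=0)              # S137: convergence test
        if np.linalg.norm(new_mean - mean) < tol:
            break
    return aligned, np.mean(aligned, axis=0)
```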
In some examples, the extraction of the principal components of the shape vectors by PCA in the aforementioned step S14 proceeds as follows:
S141. Compute the mean of the shape vector set S after Procrustes analysis:

\bar{S} = \frac{1}{n} \sum_{i=1}^{n} S_i

S142. Compute the covariance matrix of the shape vector set:

S_d = \frac{1}{n} \sum_{i=1}^{n} (S_i - \bar{S})(S_i - \bar{S})^T

S143. Perform an eigendecomposition of the covariance matrix to obtain the eigenvectors p_i and their corresponding eigenvalues, and express the difference vector dS_i between each shape vector and the mean shape vector as a linear combination of the principal components:

dS_i = S_i - \bar{S}

dS_i = \lambda_{i1} p_1 + \lambda_{i2} p_2 + \cdots + \lambda_{i,2m} p_{2m}

S144. Let P = (p_1, p_2, \ldots, p_{2m}) and \lambda_i = (\lambda_{i1}, \lambda_{i2}, \ldots, \lambda_{i,2m})^T; then:

dS_i = P \lambda_i

S_i = \bar{S} + P \lambda_i

S145. Select the eigenvectors corresponding to the first t eigenvalues as the principal axes, forming P_s, to obtain:

S_i = \bar{S} + P_s b_s

where b_s is a weight vector; varying its components produces new shapes, and restricting each component to a certain interval ensures that the new shapes remain close to the sample shapes. Typically the components of b_s are constrained by:

-3\sqrt{\lambda_k} \le b_k \le 3\sqrt{\lambda_k}

In this way, any shape in the sample set can be approximately represented by a linear weighted combination of the mean shape and the t eigenvectors.
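A minimal numpy sketch of S141-S145 follows (illustrative only; the helper names and the rule of keeping enough components to explain 95% of the variance are assumptions, since the patent says only "the first t eigenvalues"):

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """PCA shape model of S141-S145. `shapes` is an (n, 2m) array of
    Procrustes-aligned shape vectors; returns (mean, P_s, eigenvalues)."""
    mean = shapes.mean(axis=0)                          # S141: mean shape
    diffs = shapes - mean
    cov = diffs.T @ diffs / shapes.shape[0]             # S142: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)              # S143: eigendecomposition
    order = np.argsort(eigvals)[::-1]                   # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    cum = np.cumsum(eigvals) / np.sum(eigvals)
    t = int(np.searchsorted(cum, var_kept)) + 1         # S145: keep first t axes
    return mean, eigvecs[:, :t], eigvals[:t]

def synthesize(mean, P_s, eigvals, b_s):
    """S_i = mean + P_s b_s, with each b_k clamped to +/- 3 sqrt(lambda_k)."""
    limit = 3.0 * np.sqrt(eigvals)
    return mean + P_s @ np.clip(b_s, -limit, limit)
```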
In some examples, the local classifier of a single feature point in the aforementioned step S15 is constructed as follows:
S151. Around each feature point, extract a region of size p×p; write the gray values of this region, in order, as a column vector, which serves as the local feature of that feature point. For the depth image, likewise write the depth values of the region as a column vector to serve as the local feature.
S152. There are n samples in total, each with m feature points, so every feature point has 2n local features: n grayscale local features and n depth local features, which are taken as correct (positive) features. In addition, w local features are extracted at randomly selected other positions of the grayscale image and the depth image respectively, to serve as incorrect (negative) features. Every feature point thus has n+w grayscale local features and n+w depth local features, comprising both correct and incorrect examples. Taking the grayscale features first: for each feature point the training sample set is

\{x^{(1)}, x^{(2)}, \ldots, x^{(n+w)}\}

where each training sample is a local feature vector:

x^{(i)} = (x_1^{(i)}, x_2^{(i)}, \ldots, x_{p \times p}^{(i)})^T, \quad i = 1, 2, \ldots, n+w

S153. The output of the SVM is y^{(i)} ∈ {-1, 1}, where 1 denotes a correct sample and -1 an incorrect one. The SVM output is expressed through inner products between the input data and the support vectors:

y^{(i)} = \sum_{j=1}^{N_S} \alpha_j \langle x_j, x \rangle + b

where x_j denotes a subset of the training samples (the support vectors), \alpha_j the support vector weights, N_S the number of support vectors, and b the bias.
S154. The output of a linear SVM can equivalently be expressed as a linear combination of the input vector:

y^{(i)} = w^T x^{(i)} + \theta

where w^T = (w_1, w_2, \ldots, w_{p \times p}) are the weights of the input elements and \theta is the bias; w^T and \theta are obtained from the input data set and the output function.
S155. Repeat the above steps on the grayscale local feature sets of the m feature points to obtain m linear SVMs.
S156. Train on the depth local feature sets of the m feature points in the same way to obtain a further m linear SVMs.
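A sketch of S151-S154 using scikit-learn's LinearSVC as the linear SVM (the choice of library, the helper names, and the patch extraction convention are assumptions; the patent does not name an implementation):

```python
import numpy as np
from sklearn.svm import LinearSVC

def extract_patch(image, cx, cy, p):
    """S151: flatten the p*p region around (cx, cy) into a local feature
    vector (assumes odd p so the region is centered)."""
    h = p // 2
    return image[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(np.float64).ravel()

def train_landmark_svm(pos_patches, neg_patches):
    """S152-S154: train one linear SVM per landmark on n correct (+1) and
    w incorrect (-1) patch vectors; return (w, theta) of y = w^T x + theta."""
    X = np.vstack([pos_patches, neg_patches])
    y = np.hstack([np.ones(len(pos_patches)), -np.ones(len(neg_patches))])
    clf = LinearSVC(C=1.0).fit(X, y)
    return clf.coef_.ravel(), float(clf.intercept_[0])
```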
In step S2, the two-dimensional image and the depth image are captured by the Kinect camera and sent to the computer for processing. Kinect can capture a color image and a depth image simultaneously, but because there is a certain distance between the RGB camera and the depth camera, the two captured pictures do not correspond exactly and must be registered. Registration is performed using functions from the open-source OPENNI library, and the RGB image is converted to a grayscale image.
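As one possible realization of the capture and registration (a sketch, assuming OpenCV built with OpenNI2 support; the patent states only that the open-source OPENNI library is used):

```python
import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI2)            # Kinect via OpenNI2 backend
cap.set(cv2.CAP_PROP_OPENNI_REGISTRATION, 1)       # register depth to color (S2)

while cap.grab():
    ok_d, depth = cap.retrieve(None, cv2.CAP_OPENNI_DEPTH_MAP)  # uint16 depth, mm
    ok_c, bgr = cap.retrieve(None, cv2.CAP_OPENNI_BGR_IMAGE)    # 640x480 color
    if not (ok_d and ok_c):
        continue
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)   # grayscale image for matching
    # ... hand (gray, depth) to face detection (S3) and CLM fitting (S4) ...
    break
```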
In step S3, the Viola-Jones detector is used in the computer to process the captured image and quickly obtain the position of the face; here it is assumed that the detected face is enclosed by a rectangular bounding box, as shown in Fig. 6.
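Viola-Jones detection is available in OpenCV as a Haar cascade; a minimal sketch (the cascade file and the largest-box heuristic are assumptions):

```python
import cv2

# Standard OpenCV Haar cascade for frontal faces (Viola-Jones).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Return the largest detected face as an (x, y, w, h) rectangle, or None."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda r: r[2] * r[3])  # keep the biggest box
```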
In step S4, the CLM algorithm is used to detect the feature points on the detected face. In some examples, this detection proceeds as follows:
S41. Make an initial estimate of the facial feature points by placing the mean face shape onto the face location detected in step S3.
S42. Around the current location of each feature point, extract an image patch of size (t+p/2)×(t+p/2): a grayscale patch from the grayscale image and a depth patch from the depth image. Here t means that the best feature point position is searched within a t×t range around the current location, and p is the size of the local training patches.
S43. Taking the grayscale patch first: pass the extracted grayscale patch through the linear SVM corresponding to this feature point, obtaining a response map R(x, y) that represents, for every point, the probability that the surrounding image patch matches the template.
S44. Fit a quadratic function r(x, y) to R(x, y).
S45. Find the global maximum of the quadratic function r(x, y); the p×p region around this point is the most similar to the template. Find the best matching point in the depth patch in the same way, and take the midpoint of the grayscale optimum and the depth optimum as the next position of this feature point.
S46. Find the next position of every feature point, completing one iteration.
S47. Repeat the iterative process S42-S46 until the displacement of every feature point between two iterations is below a threshold, or a certain number of iterations is reached; at that point all feature points are considered to have found their best positions.
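The core of one fitting iteration, S43-S45, can be sketched as follows (illustrative only; a naive dense scan produces the response map, and the quadratic fit falls back to the raw maximum when the fitted surface is not concave):

```python
import numpy as np

def response_map(patch, w, theta):
    """S43: slide the landmark's linear SVM (w, theta) over the search
    patch; R[y, x] is the score of the p*p window whose corner is there."""
    p = int(np.sqrt(w.size))                     # local template is p*p
    H, W = patch.shape
    R = np.empty((H - p + 1, W - p + 1))
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            R[y, x] = w @ patch[y:y + p, x:x + p].ravel() + theta
    return R

def quadratic_peak(R):
    """S44-S45: fit r(x, y) = a + bx + cy + dx^2 + ey^2 + fxy to R by
    least squares and return the location of its maximum."""
    ys, xs = np.mgrid[0:R.shape[0], 0:R.shape[1]]
    A = np.column_stack([np.ones(R.size), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, ys.ravel() ** 2, (xs * ys).ravel()])
    a, b, c, d, e, f = np.linalg.lstsq(A, R.ravel(), rcond=None)[0]
    hess = np.array([[2 * d, f], [f, 2 * e]])
    if np.all(np.linalg.eigvals(hess) < 0):      # concave: interior maximum
        x0, y0 = np.linalg.solve(hess, [-b, -c])
        return float(x0), float(y0)
    y0, x0 = np.unravel_index(np.argmax(R), R.shape)  # fallback: raw argmax
    return float(x0), float(y0)
```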
In step S5, the head pose is estimated from the detected facial feature points. As shown in Fig. 3, the head pose is described by the roll angle, the rotation (yaw) angle, and the pitch angle. The estimation of each of the three angles is introduced below:
S51. As shown in Fig. 7(a), the head exhibits a certain roll angle. Let the center points of the two eyes be (x_1, y_1) and (x_2, y_2); the head roll angle can then be estimated from the angle between the line through the eyes and the horizontal:

\alpha = \arctan \left| \frac{y_2 - y_1}{x_2 - x_1} \right|

S52. As shown in Fig. 7(b), the head exhibits a certain rotation angle. Let the outer eye corners be (x_1, y_1) and (x_2, y_2) and the nose tip be (x_3, y_3); the angle difference between the nose tip and the two outer eye corners is then:

\beta_1 = \left| \arctan \left| \frac{y_1 - y_3}{x_1 - x_3} \right| - \arctan \left| \frac{y_2 - y_3}{x_2 - x_3} \right| \right|

Analogously, compute the angle difference \beta_2 between the nose tip and the two inner eye corners, and the angle difference \beta_3 between the nose tip and the two mouth corners, and take the mean of the three angles as the estimate of the head rotation angle \beta:

\beta = \frac{1}{3} (\beta_1 + \beta_2 + \beta_3)

S53. As shown in Fig. 7(c), the head exhibits a certain pitch angle. The pitch angle is difficult to estimate from feature point coordinates alone, so the depth data must be used. In the neutral pose the eyes and the mouth lie in the same vertical plane, i.e. they are at the same distance from the camera. During an upward head motion, let the eye-camera distance be d_1 and the mouth-camera distance be d_2; let the mean head radius be r, the midpoint between the two eyes be (x_1, y_1), and the lip center be (x_2, y_2). The estimation formula for the pitch angle is then:

\chi = \left| \arctan \frac{y_1}{r - |d_1 - d_2|} - \arctan \frac{y_2}{r} \right|

The detected facial feature points and the head pose results are shown in Fig. 8.
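The three estimators of S51-S53 translate directly into code. The sketch below (helper names are illustrative; it assumes the y coordinates in S53 are measured relative to the head's rotation center, and that d_1, d_2, and r use the same units) returns angles in degrees:

```python
import numpy as np

def roll_angle(eye1, eye2):
    """S51: roll from the line through the two eye centers."""
    (x1, y1), (x2, y2) = eye1, eye2
    return np.degrees(np.arctan(abs((y2 - y1) / (x2 - x1))))

def rotation_angle(pairs, nose):
    """S52: mean angular difference between the nose tip and each symmetric
    pair (outer eye corners, inner eye corners, mouth corners)."""
    x3, y3 = nose
    betas = []
    for (x1, y1), (x2, y2) in pairs:
        b1 = np.arctan(abs((y1 - y3) / (x1 - x3)))
        b2 = np.arctan(abs((y2 - y3) / (x2 - x3)))
        betas.append(abs(b1 - b2))
    return np.degrees(np.mean(betas))

def pitch_angle(y_eyes, y_lips, d1, d2, r):
    """S53: pitch from the eye/mouth depth difference and head radius r."""
    chi = abs(np.arctan(y_eyes / (r - abs(d1 - d2))) - np.arctan(y_lips / r))
    return np.degrees(chi)
```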
In step S6, the motion of the mobile service robot is controlled by the estimated head pose angles. The specific steps are as follows:
S61. If the head-up angle exceeds the set threshold \chi_y, send the advance command; if the head-down angle exceeds the set threshold \chi_d, send the retreat command.
S62. If the head's leftward rotation angle exceeds the set threshold \beta_l, send the turn-left command; if the head's rightward rotation angle exceeds the set threshold \beta_r, send the turn-right command.
S63. If all angles are below their corresponding thresholds, the pose is regarded as neutral, and the stop command is sent.
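A minimal sketch of the S61-S63 decision logic and the PC-to-DSP serial link follows (the threshold values, single-byte command encoding, port name, and baud rate are all assumptions; the patent specifies none of them):

```python
import serial  # pyserial

CHI_UP, CHI_DOWN = 15.0, 15.0        # pitch thresholds (degrees), assumed
BETA_LEFT, BETA_RIGHT = 20.0, 20.0   # rotation thresholds (degrees), assumed

COMMANDS = {"advance": b"F", "retreat": b"B",
            "turn_left": b"L", "turn_right": b"R", "stop": b"S"}

def pose_to_command(up, down, left, right):
    """S61-S63: map head pose angles to a motion command."""
    if up > CHI_UP:
        return "advance"
    if down > CHI_DOWN:
        return "retreat"
    if left > BETA_LEFT:
        return "turn_left"
    if right > BETA_RIGHT:
        return "turn_right"
    return "stop"                    # S63: all angles below threshold

dsp = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)     # PC -> DSP link
dsp.write(COMMANDS[pose_to_command(18.0, 0.0, 3.0, 2.0)])  # b"F": advance
```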
Combining the control method described in the above embodiments with Fig. 1 and Fig. 2: in a room with moderate illumination, the user sits upright on the intelligent wheelchair with the Kinect camera placed 40-50 cm in front of the head, and the wheelchair's head pose control function is switched on. When the user raises the head by a certain angle, the wheelchair moves forward; when the user lowers the head by a certain angle, the wheelchair moves backward; when the user turns the head left by a certain angle, the wheelchair turns left; when the user turns the head right by a certain angle, the wheelchair turns right; and when the user keeps a neutral head pose, i.e. faces straight ahead, the wheelchair stops.
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Those of ordinary skill in the art may make various modifications and variations without departing from the spirit and scope of the present invention. The scope of protection of the invention is therefore defined by the appended claims.

Claims (6)

1. A method for controlling an intelligent mobile service robot based on head pose recognition, characterized in that the method obtains the head pose based on the constrained local model algorithm and uses it to control the mobile service robot, the specific implementation comprising the following steps:
Step 1: building the shape template of the facial feature points and the local feature template of each feature point from a head pose sample library;
Step 2: acquiring a two-dimensional image containing the face and a depth image via an RGB-D camera and aligning them;
Step 3: detecting the face location in the image;
Step 4: detecting the feature points on the face using the CLM algorithm;
Step 5: estimating the head pose from the shape of the detected facial feature points;
Step 6: controlling the mobile service robot according to the head pose recognition result.
2. The method for controlling an intelligent mobile service robot based on head pose recognition according to claim 1, characterized in that in the aforementioned step 1 the shape template of the facial feature points and the local feature template of each feature point are built respectively, specifically comprising the following steps:
(1) constructing the sample set of feature point shapes, performing Procrustes analysis to eliminate the effects of translation, rotation, and scaling, and applying PCA dimensionality reduction;
(2) for each feature point, training a local classifier for that feature point using the corresponding image patches.
3. The method for controlling an intelligent mobile service robot based on head pose recognition according to claim 1, characterized in that in the aforementioned step 3 a Viola-Jones detector is used to quickly detect the face location in the image.
4. The method for controlling an intelligent mobile service robot based on head pose recognition according to claim 1, characterized in that in the aforementioned step 4 the positions of all feature points on the face are detected by estimating initial feature point positions and iterating the feature points to their optimal positions with the CLM algorithm.
5. The method for controlling an intelligent mobile service robot based on head pose recognition according to claim 1, characterized in that in the aforementioned step 5 the roll angle is estimated from the angle between the line through the two eyes and the horizontal, the rotation angle is estimated from the angle differences between the nose tip and the two eyes, and the pitch angle is estimated from the respective distances of the lips and the eyes from the camera.
6. The method for controlling an intelligent mobile service robot based on head pose recognition according to claim 1, characterized in that in the aforementioned step 6 advancing and retreating are controlled by the pitch angle of the head, turning left and right is controlled by the rotation angle of the head, and the control commands are sent to the lower-level controller via a serial port.
CN201510872912.2A 2015-12-02 2015-12-02 A kind of intelligent Mobile Service robot control method based on head pose identification Active CN105487665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510872912.2A CN105487665B (en) 2015-12-02 2015-12-02 A kind of intelligent Mobile Service robot control method based on head pose identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510872912.2A CN105487665B (en) 2015-12-02 2015-12-02 A kind of intelligent Mobile Service robot control method based on head pose identification

Publications (2)

Publication Number Publication Date
CN105487665A true CN105487665A (en) 2016-04-13
CN105487665B CN105487665B (en) 2018-09-07

Family

ID=55674689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510872912.2A Active CN105487665B (en) 2015-12-02 2015-12-02 A kind of intelligent Mobile Service robot control method based on head pose identification

Country Status (1)

Country Link
CN (1) CN105487665B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105773633A (en) * 2016-04-14 2016-07-20 中南大学 Mobile robot man-machine control system based on face location and flexibility parameters
CN105912120A (en) * 2016-04-14 2016-08-31 中南大学 Face recognition based man-machine interaction control method of mobile robot
CN106974780A (en) * 2017-03-13 2017-07-25 邝子佳 Method for controlling intelligent wheelchair based on difference navigation attitude
CN107349570A (en) * 2017-06-02 2017-11-17 南京邮电大学 Rehabilitation training of upper limbs and appraisal procedure based on Kinect
CN107358154A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 A kind of head movement detection method and device and vivo identification method and system
CN107493531A (en) * 2017-08-04 2017-12-19 歌尔科技有限公司 A kind of head pose detection method, device and earphone
CN107621880A (en) * 2017-09-29 2018-01-23 南京邮电大学 A kind of robot wheel chair interaction control method based on improvement head orientation estimation method
CN107754307A (en) * 2017-12-05 2018-03-06 野草莓影业(北京)有限公司 Control method, control device and the swiveling seat of swiveling seat
CN108427918A (en) * 2018-02-12 2018-08-21 杭州电子科技大学 Face method for secret protection based on image processing techniques
CN108711175A (en) * 2018-05-16 2018-10-26 浙江大学 A kind of head pose estimation optimization method that inter-frame information is oriented to
CN109086727A (en) * 2018-08-10 2018-12-25 北京奇艺世纪科技有限公司 A kind of method, apparatus and electronic equipment of the movement angle of determining human body head
CN109993073A (en) * 2019-03-14 2019-07-09 北京工业大学 A kind of complicated dynamic gesture identification method based on Leap Motion
CN110909596A (en) * 2019-10-14 2020-03-24 广州视源电子科技股份有限公司 Side face recognition method, device, equipment and storage medium
CN111346356A (en) * 2020-03-09 2020-06-30 牡丹江医学院 Sports teaching apparatus
CN111480164A (en) * 2018-01-09 2020-07-31 华为技术有限公司 Head pose and distraction estimation
WO2020228217A1 (en) * 2019-05-13 2020-11-19 河北工业大学 Human body posture visual recognition method for transfer carrying nursing robot, and storage medium and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968846A (en) * 2010-07-27 2011-02-09 上海摩比源软件技术有限公司 Face tracking method
CN103870843A (en) * 2014-03-21 2014-06-18 杭州电子科技大学 Head posture estimation method based on multi-feature-point set active shape model (ASM)
CN104463100A (en) * 2014-11-07 2015-03-25 重庆邮电大学 Intelligent wheelchair man-machine interaction system and method based on facial expression recognition mode

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968846A (en) * 2010-07-27 2011-02-09 上海摩比源软件技术有限公司 Face tracking method
CN103870843A (en) * 2014-03-21 2014-06-18 杭州电子科技大学 Head posture estimation method based on multi-feature-point set active shape model (ASM)
CN104463100A (en) * 2014-11-07 2015-03-25 重庆邮电大学 Intelligent wheelchair man-machine interaction system and method based on facial expression recognition mode

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912120B (en) * 2016-04-14 2018-12-21 中南大学 Mobile robot man-machine interaction control method based on recognition of face
CN105912120A (en) * 2016-04-14 2016-08-31 中南大学 Face recognition based man-machine interaction control method of mobile robot
CN105773633A (en) * 2016-04-14 2016-07-20 中南大学 Mobile robot man-machine control system based on face location and flexibility parameters
CN106974780A (en) * 2017-03-13 2017-07-25 邝子佳 Method for controlling intelligent wheelchair based on difference navigation attitude
CN107349570A (en) * 2017-06-02 2017-11-17 南京邮电大学 Rehabilitation training of upper limbs and appraisal procedure based on Kinect
CN107358154A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 A kind of head movement detection method and device and vivo identification method and system
CN107493531A (en) * 2017-08-04 2017-12-19 歌尔科技有限公司 A kind of head pose detection method, device and earphone
CN107621880A (en) * 2017-09-29 2018-01-23 南京邮电大学 A kind of robot wheel chair interaction control method based on improvement head orientation estimation method
CN107754307A (en) * 2017-12-05 2018-03-06 野草莓影业(北京)有限公司 Control method, control device and the swiveling seat of swiveling seat
CN111480164B (en) * 2018-01-09 2024-03-19 华为技术有限公司 Head pose and distraction estimation
CN111480164A (en) * 2018-01-09 2020-07-31 华为技术有限公司 Head pose and distraction estimation
CN108427918A (en) * 2018-02-12 2018-08-21 杭州电子科技大学 Face method for secret protection based on image processing techniques
CN108427918B (en) * 2018-02-12 2021-11-30 杭州电子科技大学 Face privacy protection method based on image processing technology
CN108711175B (en) * 2018-05-16 2021-10-01 浙江大学 Head attitude estimation optimization method based on interframe information guidance
CN108711175A (en) * 2018-05-16 2018-10-26 浙江大学 A kind of head pose estimation optimization method that inter-frame information is oriented to
CN109086727A (en) * 2018-08-10 2018-12-25 北京奇艺世纪科技有限公司 A kind of method, apparatus and electronic equipment of the movement angle of determining human body head
CN109086727B (en) * 2018-08-10 2021-04-30 北京奇艺世纪科技有限公司 Method and device for determining motion angle of human head and electronic equipment
CN109993073A (en) * 2019-03-14 2019-07-09 北京工业大学 A kind of complicated dynamic gesture identification method based on Leap Motion
WO2020228217A1 (en) * 2019-05-13 2020-11-19 河北工业大学 Human body posture visual recognition method for transfer carrying nursing robot, and storage medium and electronic device
CN110909596A (en) * 2019-10-14 2020-03-24 广州视源电子科技股份有限公司 Side face recognition method, device, equipment and storage medium
CN111346356A (en) * 2020-03-09 2020-06-30 牡丹江医学院 Sports teaching apparatus

Also Published As

Publication number Publication date
CN105487665B (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN105487665A (en) Method for controlling intelligent mobile service robot based on head posture recognition
CN105787471B (en) It is a kind of applied to help the elderly help the disabled Information Mobile Service robot control gesture identification method
CN101777116B (en) Method for analyzing facial expressions on basis of motion tracking
Gu et al. Human gesture recognition through a kinect sensor
CN105005999B (en) It is a kind of based on obstacle detection method of the computer stereo vision towards apparatus for guiding blind
CN101889928B (en) Head gesture recognition technology-based wheelchair control method
CN102074034B (en) Multi-model human motion tracking method
Xu et al. Real-time dynamic gesture recognition system based on depth perception for robot navigation
Vasquez et al. Deep detection of people and their mobility aids for a hospital robot
CN105159452B (en) A kind of control method and system based on human face modeling
CN104598878A (en) Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information
CN101201695A (en) Mouse system for extracting and tracing based on ocular movement characteristic
CN101526997A (en) Embedded infrared face image identifying method and identifying device
CN105787442B (en) A kind of wearable auxiliary system and its application method of the view-based access control model interaction towards disturbance people
Sáez et al. Aerial obstacle detection with 3-D mobile devices
JP2018514036A (en) Machine vision with dimensional data reduction
CN104463100A (en) Intelligent wheelchair man-machine interaction system and method based on facial expression recognition mode
CN104091155A (en) Rapid iris positioning method with illumination robustness
CN102184016B (en) Noncontact type mouse control method based on video sequence recognition
CN113158833B (en) Unmanned vehicle control command method based on human body posture
CN107621880A (en) A kind of robot wheel chair interaction control method based on improvement head orientation estimation method
CN111368762A (en) Robot gesture recognition method based on improved K-means clustering algorithm
CN107292272A (en) A kind of method and system of the recognition of face in the video of real-time Transmission
CN103093237A (en) Face detecting method based on structural model
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160413

Assignee: Zhangjiagang Institute of Zhangjiagang

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: X2019980001251

Denomination of invention: Method for controlling intelligent mobile service robot based on head posture recognition

Granted publication date: 20180907

License type: Common License

Record date: 20191224