Summary of the Invention
It is an object of the invention to provide a robot head gesture control method and system that let a robot act according to the gestures of a tested person, truly simulating the interaction between an actual doctor and patient so as to provide an exercising platform for doctors.
To achieve the above object, the invention provides the following scheme:
A robot head gesture control method, the method comprising:
recognizing the gesture shape of the tested person's hand to obtain a gesture shape recognition result, the gesture shape recognition result comprising a first gesture shape, a second gesture shape and a third gesture shape;
when the gesture shape recognition result is the first gesture shape, setting a tracking flag in the robot and triggering the robot to enter a tracking standby state, ready to start tracking the hand motion of the tested person;
when the gesture shape recognition result is the second gesture shape and the tracking flag has been set, the robot tracking the hand motion of the tested person to perform a head rotation motion;
when the gesture shape recognition result is the third gesture shape, clearing the tracking flag, stopping the head rotation motion of the robot and fixing the robot head at its stop position.
Optionally, recognizing the gesture shape of the tested person's hand to obtain the gesture shape recognition result specifically comprises:
obtaining a color image and a depth image of the tested person's hand;
obtaining a gesture foreground image from the color image and the depth image;
identifying the gesture shape of the tested person from the gesture foreground image to obtain the gesture shape recognition result.
Optionally, obtaining the gesture foreground image from the color image and the depth image specifically comprises:
processing the depth image with a threshold segmentation algorithm and extracting the image region whose gray values lie in a set range as a foreground region;
obtaining the color image of the foreground region from the corresponding position of the foreground region in the color image;
building a histogram from skin color features;
converting the color image of the foreground region into the corresponding color space;
back-projecting the histogram in that color space to obtain a probability map;
denoising the probability map with a morphological erosion-dilation algorithm and a threshold segmentation algorithm to obtain the gesture foreground image.
Optionally, identifying the gesture shape of the tested person from the gesture foreground image to obtain the gesture shape recognition result specifically comprises:
computing the feature vector of the gesture foreground image;
classifying the feature vector with a support vector machine to obtain a gesture classification result;
identifying the gesture shape of the tested person's hand from the gesture classification result to obtain the gesture shape recognition result.
Optionally, when the gesture shape recognition result is the second gesture shape and the tracking flag has been set, the robot tracking the hand motion of the tested person to perform the head rotation motion specifically comprises:
determining the rotation direction of the robot head from the probability map;
computing the horizontal rotation speed and the vertical rotation speed of the robot head from the probability map;
controlling the robot head, according to the rotation direction, the horizontal rotation speed and the vertical rotation speed, to rotate horizontally in the rotation direction at the horizontal rotation speed and to rotate vertically in the rotation direction at the vertical rotation speed.
The invention also discloses a robot head gesture control system, the system comprising:
a gesture shape recognition result acquisition module for recognizing the gesture shape of the tested person's hand and obtaining the gesture shape recognition result, the gesture shape recognition result comprising a first gesture shape, a second gesture shape and a third gesture shape;
a first gesture shape control module for, when the gesture shape recognition result is the first gesture shape, setting the tracking flag in the robot and triggering the robot to enter the tracking standby state, ready to start tracking the hand motion of the tested person;
a second gesture shape control module for, when the gesture shape recognition result is the second gesture shape and the tracking flag has been set, controlling the robot to track the hand motion of the tested person and perform the head rotation motion;
a third gesture shape control module for, when the gesture shape recognition result is the third gesture shape, clearing the tracking flag, stopping the head rotation motion of the robot and fixing the robot head at its stop position.
Optionally, the gesture shape recognition result acquisition module specifically comprises:
an image acquisition submodule for obtaining the color image and depth image of the tested person's hand;
a gesture foreground image acquisition submodule for obtaining the gesture foreground image from the color image and the depth image;
a gesture shape recognition result acquisition submodule for identifying the gesture shape of the tested person from the gesture foreground image and obtaining the gesture shape recognition result.
Optionally, the gesture foreground image acquisition submodule specifically comprises:
a foreground region extraction unit for processing the depth image with a threshold segmentation algorithm and extracting the image region whose gray values lie in a set range as the foreground region;
a foreground color image acquisition unit for obtaining the color image of the foreground region from the corresponding position of the foreground region in the color image;
a histogram building unit for building a histogram from skin color features;
an image conversion unit for converting the color image of the foreground region into the corresponding color space;
a probability map acquisition unit for back-projecting the histogram in that color space to obtain the probability map;
a gesture foreground image acquisition unit for denoising the probability map with a morphological erosion-dilation algorithm and a threshold segmentation algorithm to obtain the gesture foreground image.
Optionally, the gesture shape recognition result acquisition submodule specifically comprises:
a feature vector computation unit for computing the feature vector of the gesture foreground image;
a gesture classification result acquisition unit for classifying the feature vector with a support vector machine to obtain the gesture classification result;
a gesture shape recognition result acquisition unit for identifying the gesture shape of the tested person's hand from the gesture classification result and obtaining the gesture shape recognition result.
Optionally, the second gesture shape control module specifically comprises:
a rotation direction acquisition submodule for determining the rotation direction of the robot head from the probability map;
a rotation speed computation submodule for computing the horizontal rotation speed and the vertical rotation speed of the robot head from the probability map;
a rotary motion control submodule for controlling the robot head, according to the rotation direction, the horizontal rotation speed and the vertical rotation speed, to rotate horizontally in the rotation direction at the horizontal rotation speed and to rotate vertically in the rotation direction at the vertical rotation speed.
According to the specific embodiments provided by the present invention, the invention discloses the following technical effects:
The invention provides a robot head gesture control method and system. The method first recognizes the gesture shape of the tested person's hand to obtain a gesture shape recognition result, which comprises a first gesture shape, a second gesture shape and a third gesture shape. When the recognition result is the first gesture shape, the tracking flag in the robot is set, triggering the robot to enter a tracking standby state, ready to start tracking the hand motion of the tested person. When the recognition result is the second gesture shape and the tracking flag has been set, the robot tracks the hand motion of the tested person and performs the head rotation motion. When the recognition result is the third gesture shape, the tracking flag is cleared, the head rotation motion of the robot stops and the robot head is fixed at its stop position. Through the different gestures of the tested person, the method and system make the robot act accordingly, truly simulate the interaction between an actual doctor and patient, and provide doctors with an effective exercising platform for traditional Chinese medicine rotation manipulations.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative work, fall within the protection scope of the invention.
It is an object of the invention to provide a robot head gesture control method and system.
To make the above objects, features and advantages of the invention more comprehensible, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of the robot head gesture control method according to an embodiment of the present invention.
Referring to Fig. 1, the robot head gesture control method includes:
Step 101: recognize the gesture shape of the tested person's hand and obtain the gesture shape recognition result. The gesture shape recognition result includes a first gesture shape, a second gesture shape and a third gesture shape.
In step 101, recognizing the gesture shape of the tested person's hand to obtain the gesture shape recognition result specifically includes:
Step 1011: obtain the color image and depth image of the tested person's hand.
The color image and the depth image are captured by an image sensor fixed on the robot head. In this embodiment the image sensor is a Microsoft Kinect.
Step 1012: obtain the gesture foreground image from the color image and the depth image.
Step 1012 specifically includes:
Step (1): process the depth image with a threshold segmentation algorithm and extract the image region whose gray values lie in a set range as the foreground region.
In the depth image, the brightness of a pixel represents the distance between the corresponding object and the camera. Suppose a cupboard stands 5 meters from the camera and a person with a raised hand stands 3 meters from the camera, the hand itself being 2.5 meters away. The resulting depth image then shows a dark cupboard-shaped patch and a brighter human-shaped patch, and on that human patch an even brighter hand-shaped patch (because the hand is closer to the camera than the rest of the body). Objects at different distances can therefore be separated by setting thresholds on brightness (gray value). In this embodiment the depth image is processed with a threshold segmentation algorithm and the image region whose gray values lie in a set range is extracted as the foreground region precisely in order to segment the hand region out of the image background. The specific method is: traverse the depth image, keep the brightness of the pixels whose gray values lie in the set range, and set the pixels outside the range to 0, thereby splitting the foreground region out of the whole depth image.
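The traversal described above can be sketched in a few lines (a minimal sketch, assuming the depth image is a single-channel gray array and the hand's gray range [lo, hi] is known in advance; the function and parameter names are illustrative):

```python
import numpy as np

def segment_foreground(depth, lo, hi):
    """Keep pixels whose gray value lies in [lo, hi]; zero the rest."""
    mask = (depth >= lo) & (depth <= hi)
    return np.where(mask, depth, 0).astype(depth.dtype)
```

In practice the same effect is obtained with vectorized masking rather than an explicit per-pixel loop.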
Step (2): obtain the color image of the foreground region from the corresponding position of the foreground region in the color image.
The foreground region is the image region of the tested person's hand. After the region whose gray values lie in the set range has been extracted as the foreground region, the color image of the foreground region is obtained by reading the same region of the color image at the position the foreground region occupies in the depth image. The image is then further segmented by skin color, removing objects that are not skin-colored (such as clothing close to the hand), which yields the gesture foreground image.
Step (3): build a histogram from skin color features.
The skin color features are the characteristics of human skin color and can be obtained from many sources. In the method of this embodiment, pictures of the tested person's hand are selected in advance and the skin color of the hand is analyzed statistically to obtain the skin color features; these are then compared with similar features in the application scene and adjusted by calculation to distinguish them from one another, yielding the specific skin color features.
The histogram in this embodiment is a Cr-Cb two-dimensional histogram. A 50 × 50 two-dimensional histogram is first created, the number of pixels falling into each bin of the two-dimensional histogram is counted, and the two-dimensional histogram of the skin color features is built. Likewise, the two-dimensional histogram of the current scene in the color image of the foreground region is counted. Comparing the scene histogram with the skin color histogram, the more significant features of the skin color histogram are kept and the features easily confused with the background are deleted, giving the final histogram. The histogram is normalized so that its values fall within the range 0-255.
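Building and normalizing the 50 × 50 Cr-Cb histogram can be sketched as follows (the function name and bin layout are illustrative assumptions; the scene-histogram comparison step is omitted for brevity):

```python
import numpy as np

def skin_histogram(cr, cb, bins=50):
    """Accumulate a bins x bins Cr-Cb histogram from skin pixels and
    normalize it to the 0-255 range used later for back projection."""
    hist, _, _ = np.histogram2d(cr.ravel(), cb.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    if hist.max() > 0:
        hist = hist * (255.0 / hist.max())
    return hist
```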
Step (4): convert the color image of the foreground region into the corresponding color space.
The skin color features take different forms in different color spaces; the color space used in this embodiment is the YCrCb color space.
Step (5): back-project the histogram in that color space to obtain the probability map.
In the histogram built above, for any bin the abscissa is a Cr value and the ordinate a Cb value, and the bin's value represents the number of pixels with that (Cr, Cb) pair (after normalization it may be regarded as a frequency). The whole color image is then traversed again: for every pixel, the frequency corresponding to its Cr and Cb values is looked up in the histogram and used as the brightness of that pixel, which yields the probability map. In the probability map, the brightness of a pixel represents the probability that the pixel belongs to the skin of the tested person's hand: the brighter the pixel, the higher the probability.
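The back projection step can be sketched as follows (an illustrative sketch that indexes a pre-built 50 × 50 histogram by each pixel's Cr and Cb values; the names are assumptions):

```python
import numpy as np

def back_project(cr, cb, hist, bins=50):
    """For each pixel, look up the frequency of its (Cr, Cb) pair in the
    skin histogram; the result is a brightness-coded probability map."""
    idx_cr = np.clip((cr.astype(np.int64) * bins) // 256, 0, bins - 1)
    idx_cb = np.clip((cb.astype(np.int64) * bins) // 256, 0, bins - 1)
    return hist[idx_cr, idx_cb]
```

Libraries such as OpenCV provide an equivalent operation (`cv2.calcBackProject`), which would normally be used instead of hand-written indexing.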
Step (6): denoise the probability map with a morphological erosion-dilation algorithm and a threshold segmentation algorithm to obtain the gesture foreground image.
The probability map is processed with the morphological erosion-dilation algorithm and the threshold segmentation algorithm to remove the influence of noise, giving the gesture foreground image, which is a black-and-white gray-scale image.
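The denoising step can be sketched as a threshold followed by a 3 × 3 morphological opening (erosion then dilation); the kernel size and threshold here are illustrative choices, not values fixed by the method:

```python
import numpy as np

def erode(img):
    """3x3 binary erosion: a pixel survives only if its whole
    neighbourhood is set (speckle noise is shrunk away)."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img):
    """3x3 binary dilation: grows the surviving hand region back."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def denoise(prob, thresh=128):
    """Threshold the probability map, then open it (erode + dilate)."""
    binary = (prob >= thresh).astype(np.uint8)
    return dilate(erode(binary))
```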
Step 1013: identify the gesture shape of the tested person from the gesture foreground image and obtain the gesture shape recognition result.
Step 1013 specifically includes:
Step ①: compute the feature vector of the gesture foreground image.
The geometric invariant moment (Hu moment) features of the gesture foreground image are computed, the number of fingertips in the gesture foreground image is counted, and the perimeter-to-area ratio of the gesture foreground image is computed.
The Hu moment features, the fingertip number and the perimeter-to-area ratio are spliced into one row vector as the feature vector of the current gesture foreground image. For example, if the computed Hu features are [0.8, 0.1, 0.01, 0, 0, 0, 0], the fingertip number is 3 and the perimeter-to-area ratio is 0.02, the spliced feature vector is [0.8, 0.1, 0.01, 0, 0, 0, 0, 3, 0.02].
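The splicing step amounts to a simple concatenation (the function name is illustrative):

```python
import numpy as np

def splice_features(hu_moments, fingertip_count, perimeter_area_ratio):
    """Concatenate the Hu-moment vector, the fingertip count and the
    perimeter/area ratio into one row feature vector."""
    return np.concatenate([np.asarray(hu_moments, dtype=float),
                           [float(fingertip_count)],
                           [float(perimeter_area_ratio)]])
```

With the example values above, the call reproduces the nine-element vector given in the text.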
Step ②: classify the feature vector with a support vector machine to obtain the gesture classification result.
The feature vector is classified with a trained classifier: for example, a classifier is trained with the support vector machine algorithm and then used to classify the feature vector, giving the gesture classification result.
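At test time a trained linear classifier reduces to comparing hyperplane scores. The following toy sketch shows that decision rule only; the weights are placeholders, and a real system would train the classifier first (e.g. with a support vector machine implementation such as scikit-learn's SVC):

```python
import numpy as np

def classify(feature, weights, biases):
    """One-vs-rest linear decision: pick the class whose hyperplane
    gives the largest score w.x + b (what a linear SVM does at test time)."""
    scores = weights @ feature + biases
    return int(np.argmax(scores))
```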
Step ③: identify the gesture shape of the tested person's hand from the gesture classification result and obtain the gesture shape recognition result.
Fig. 2 is a schematic diagram of the gesture shape recognition results in the embodiment of the present invention. The gesture shape recognition result of the invention covers three gesture shapes: the first gesture shape, the second gesture shape and the third gesture shape. The first gesture shape indicates that the robot is triggered to prepare to start tracking; the second gesture shape indicates that the robot starts tracking the gesture of the tested person to perform the head rotation motion; the third gesture shape indicates that tracking stops. Referring to Fig. 2, in this embodiment the gesture shape shown in Fig. 2(a) is used as the first gesture shape, the gesture shape shown in Fig. 2(b) as the second gesture shape, and the gesture shape shown in Fig. 2(c) as the third gesture shape. In practical applications, different gesture shapes can be assigned to the first, second and third gesture shapes as required.
Step 102: when the gesture shape recognition result is the first gesture shape, set the tracking flag in the robot and trigger the robot to enter the tracking standby state, ready to start tracking the hand motion of the tested person.
When the gesture shape recognition result is the first gesture shape shown in Fig. 2(a), the tracking flag set in the robot is raised, triggering the robot to enter the tracking standby state, ready to start tracking the hand motion of the tested person. The tracking flag is a protective setting for the robot's motion: before rotating, the robot always checks whether the tracking flag is set, and if it is not, the robot does not execute movement instructions, i.e. it will not rotate by tracking the hand motion of the tested person.
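The protective role of the tracking flag can be sketched as a small state machine (the gesture constants are illustrative placeholders):

```python
FIRST, SECOND, THIRD = 1, 2, 3  # recognized gesture shapes

class TrackingFlag:
    """Safety latch: the head tracks only while the flag is set,
    and only the first gesture can set it."""
    def __init__(self):
        self.flag = False

    def update(self, gesture):
        if gesture == FIRST:
            self.flag = True   # enter tracking standby
        elif gesture == THIRD:
            self.flag = False  # stop and hold position
        # the second gesture drives motion only if already armed
        return gesture == SECOND and self.flag
```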
Step 103: when the gesture shape recognition result is the second gesture shape and the tracking flag has been set, the robot tracks the hand motion of the tested person to perform the head rotation motion.
When the gesture shape recognition result is the second gesture shape shown in Fig. 2(b) and the tracking flag has been set, the robot starts tracking the hand motion of the tested person and performs the head rotation motion. The coordinates of the current gesture shape in the color image are computed from the probability map, and the movement speed of each joint of the robot is then computed from those image coordinates.
The robot is a training robot for traditional Chinese medicine rotation manipulations; it simulates a cervical spondylosis patient to provide an exercising platform for doctors. The head and neck of the training robot have two joints: the first joint can rotate horizontally and the second joint can rotate vertically, and a variable-stiffness structure simulates the human cervical spine.
Step 103 specifically includes:
Step 1031: determine the rotation direction of the robot head from the probability map.
First, let the coordinates of any point in the probability map be (x, y) and the gray value of the point (x, y) be p(x, y). The (p+q)-order geometric moment of the probability map is:
Mpq = Σ Σ x^p · y^q · p(x, y)  (1)
Then:
M00 = Σ p(x, y)  (2)
M10 = Σ x·p(x, y)  (3)
M01 = Σ y·p(x, y)  (4)
The center of gravity Pc(xc, yc) of the second gesture shape in the probability map is:
xc = M10 / M00  (5)
yc = M01 / M00  (6)
where xc is the x coordinate of the center of gravity and yc its y coordinate.
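The moments of equations (2)-(4) and the center of gravity can be computed directly from the probability map, as in this minimal sketch:

```python
import numpy as np

def gravity_center(prob):
    """Centroid of the probability map: x_c = M10/M00, y_c = M01/M00."""
    y, x = np.mgrid[0:prob.shape[0], 0:prob.shape[1]]
    m00 = prob.sum()
    m10 = (x * prob).sum()
    m01 = (y * prob).sum()
    return m10 / m00, m01 / m00
```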
The image plane is defined as the state space of the image. The image plane is defined according to the resolution of the image sensor, and the image and the image plane have the same resolution as the image sensor; for example, when the image sensor resolution is 1440 × 900, the resolutions of the image plane and the image are also 1440 × 900. The current state is then:
X = (xc, yc)^T  (7)
A steady-state region Ωs is defined within the state space:
Ωs = {(u, v) | β·uw ≤ u ≤ (1−β)·uw, β·vh ≤ v ≤ (1−β)·vh}  (8)
where uw is the width of the state space, vh is its height, and β is a proportionality coefficient, a positive number smaller than one half.
The coordinates of the current second gesture shape in the state space are computed from the probability map. The rotation direction of the robot head is determined from the relative position of those coordinates with respect to the boundary of the steady-state region.
Fig. 3 is a schematic diagram of the coordinate systems of the state space and the steady-state region in the present invention. As shown in Fig. 3, u0, u1, v0 and v1 denote the left, right, upper and lower boundaries of the steady-state region Ωs respectively. When the horizontal coordinate of the second gesture shape in the state space lies to the left of the left boundary of the steady-state region, i.e. when the value of the horizontal coordinate is smaller than u0, the robot head is determined to rotate clockwise; when the horizontal coordinate is larger than u1, the robot head is determined to rotate counterclockwise. Alternatively, when the value of the horizontal coordinate is smaller than u0 the robot head may be determined to rotate counterclockwise and, when the horizontal coordinate is larger than u1, clockwise. When the vertical coordinate of the gesture in the state space is smaller than v0, the robot head is determined to rotate in the bowing (head-down) direction; when the vertical coordinate is larger than v1, the robot head is determined to rotate in the head-up direction. Alternatively, when the vertical coordinate is smaller than v0 the robot head may be determined to rotate in the head-up direction and, when the vertical coordinate is larger than v1, in the bowing direction.
Step 1032: compute the horizontal rotation speed and the vertical rotation speed of the robot head from the probability map.
The position error is computed from the coordinates of the current second gesture shape in the state space and the position of the steady-state boundary; that is, the position error is the difference between the current state of the second gesture shape and the nearest boundary of the steady-state region.
From the state space and the steady-state region, the current position error e is computed as:
e = R·X − c  (9)
where R is the transformation matrix of the steady-state boundary, composed of coefficients a, b, c and d that select the nearest boundary; c is a column vector representing the boundary of the steady-state region, built from u0, u1, v0 and v1, which denote the left boundary, the right boundary, the upper boundary and the lower boundary of Ωs respectively; and X = (xc, yc)^T is the current state.
From the position error e, the input u_t that controls the rotation speed of the robot head is computed:
u_t = k·e  (14)
where k is the scaling coefficient used to scale the position error, k = diag(ηu, ηv), with ηu and ηv two constant proportionality coefficients.
To make the rotary motion of the robot smoother, the sign function of the position error is taken, i.e.:
vx = ηu·sgn(eu), vy = ηv·sgn(ev)  (15)
where eu is the component of the position error in the horizontal image direction, ev is its component in the vertical image direction, vx is the speed in the horizontal image direction and vy is the speed in the vertical image direction.
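The sign-function control law above can be sketched in a few lines (the gain values are placeholders):

```python
import numpy as np

def control_input(e_u, e_v, eta_u=0.5, eta_v=0.5):
    """Bang-bang style input: the magnitude is fixed by eta_u, eta_v,
    only the direction follows the sign of the position error."""
    return eta_u * np.sign(e_u), eta_v * np.sign(e_v)
```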
The running speeds ωx and ωy of the robot end are computed using the image Jacobian:
(ωx, ωy)^T = Js⁻¹ · (vx, vy)^T  (16)
where ωx is the rotation speed of the robot head about the transverse image axis, i.e. the vertical rotation speed of the robot head; ωy is the rotation speed of the robot head about the longitudinal image axis, i.e. the horizontal rotation speed of the robot head; (up, vp) is the principal point of the image coordinate system of the image sensor; λ is the focal length of the image sensor converted into pixels; u is the column coordinate of the second gesture shape on the probability map and v is its row coordinate; and Js is the image Jacobian,
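Turning image-plane speeds into head rotation speeds means inverting the 2 × 2 image Jacobian. The sketch below assumes the standard rotational interaction-matrix form from visual servoing for Js; that form and all names here are assumptions, not the patent's exact formula:

```python
import numpy as np

def head_speeds(u, v, up, vp, lam, vx, vy):
    """Invert the 2x2 image Jacobian Js to turn image-plane speeds
    (vx, vy) into head rotation speeds (w_x, w_y)."""
    du, dv = u - up, v - vp
    # rotational part of the point-feature interaction matrix (assumed form)
    Js = np.array([[du * dv / lam, -(lam + du * du / lam)],
                   [lam + dv * dv / lam, -du * dv / lam]])
    wx, wy = np.linalg.solve(Js, np.array([vx, vy]))
    return wx, wy
```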
The robot is a training robot for traditional Chinese medicine rotation manipulations; it simulates a cervical spondylosis patient to provide an exercising platform for doctors. The robot is a two-joint robot: its head and neck have two joints, of which the first joint can rotate horizontally and the second joint can rotate vertically, and a variable-stiffness structure simulates the human cervical spine.
Step 1033: control the robot head, according to the rotation direction, the vertical rotation speed ωx and the horizontal rotation speed ωy, to rotate horizontally in the rotation direction at the horizontal rotation speed and to rotate vertically in the rotation direction at the vertical rotation speed. When the rotation direction indicates clockwise rotation, the first joint is controlled to rotate clockwise at the horizontal rotation speed; when the rotation direction indicates counterclockwise rotation, the first joint is controlled to rotate counterclockwise at the horizontal rotation speed; this controls the left-right rotation of the robot head. Likewise, when the rotation direction indicates rotation in the bowing direction, the second joint is controlled to rotate toward the bowing direction at the vertical rotation speed; when the rotation direction indicates rotation in the head-up direction, the second joint is controlled to rotate toward the head-up direction at the vertical rotation speed; this controls the up-down rotation of the robot head.
Step 104: when the gesture shape recognition result is the third gesture shape, clear the tracking flag, stop the head rotation motion of the robot and fix the robot head at its stop position.
When the robot head has rotated to the position the exercise requires, the operator changes the gesture to the third gesture shape shown in Fig. 2(c). The gesture shape recognition result is then the third gesture shape, the tracking flag is cleared, the head rotation motion of the robot stops and the robot head is fixed at its stop position. That is, after tracking the second gesture shape to the required position, the robot head stays fixed at the position the exercise requires.
Fig. 4 is a schematic diagram of controlling the robot head motion with the robot head gesture control method of the present invention. As shown in Fig. 4, the tested person's hand 401 is placed in front of the image sensor 402 and, according to the required rotation position, makes the three gesture shapes shown in Fig. 2. The robot head has the first joint 403 and the second joint 404.
When the tested person's hand makes the first gesture shape shown in Fig. 2(a), the gesture shape recognition result is the first gesture shape; the tracking flag set in the robot is raised, triggering the robot to enter the tracking standby state, ready to start tracking the hand motion of the tested person.
Next, when the tested person's hand makes the second gesture shape shown in Fig. 2(b), the gesture shape recognition result is the second gesture shape; since the tracking flag has been set, the robot starts tracking the hand motion of the tested person and performs the head rotation motion. According to the computed rotation direction, vertical rotation speed ωx and horizontal rotation speed ωy, the first joint 403 of the robot head is controlled to rotate horizontally in the rotation direction at the horizontal rotation speed: when the rotation direction indicates clockwise rotation, the first joint 403 rotates clockwise at the horizontal rotation speed, and when the rotation direction indicates counterclockwise rotation, the first joint 403 rotates counterclockwise at the horizontal rotation speed; this controls the left-right rotation of the robot head. At the same time, the second joint 404 of the robot head is controlled to rotate vertically in the rotation direction at the vertical rotation speed: when the rotation direction indicates rotation in the bowing direction, the second joint 404 rotates toward the bowing direction at the vertical rotation speed, and when the rotation direction indicates rotation in the head-up direction, the second joint 404 rotates toward the head-up direction at the vertical rotation speed; this controls the up-down rotation of the robot head.
When the robot head has rotated to the position the exercise requires, the tested person (the operator) changes the gesture to the third gesture shape shown in Fig. 2(c). The gesture shape recognition result is then the third gesture shape, the tracking flag is cleared, the head rotation motion of the robot stops and the robot head is fixed at its stop position. Having finally made the robot track the movement of the tested person's second gesture shape to the angle and position the exercise needs, the method can truly simulate the interaction between an actual doctor and patient and provide doctors with an exercising platform for treatment skills.
Fig. 5 is a structural diagram of the robot head gestural control system according to an embodiment of the present invention.
As shown in Fig. 5, the robot head gestural control system includes:
Gesture shape recognition result acquisition module 501, configured to recognize the gesture shape of the tested person's hand and obtain the gesture shape recognition result; the gesture shape recognition result includes a first gesture shape, a second gesture shape, and a third gesture shape;
First gesture shape control module 502, configured to set the tracking flag provided in the robot when the gesture shape recognition result is the first gesture shape, thereby triggering the robot to enter the tracking-ready state and prepare to start tracking the hand motion of the tested person;
Second gesture shape control module 503, configured to control the robot to track the hand motion of the tested person and perform the head rotation motion when the gesture shape recognition result is the second gesture shape and the tracking flag has been set;
Third gesture shape control module 504, configured to reset the tracking flag when the gesture shape recognition result is the third gesture shape, thereby stopping the head rotation motion of the robot and fixing the robot head at the stop position.
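The cooperation of the three control modules above amounts to a small state machine: module 502 sets the tracking flag, module 503 tracks only while the flag is set, and module 504 resets the flag and stops the head. The following Python sketch illustrates this flow; the class name, gesture labels, and state names are assumptions for illustration.

```python
# Sketch of the three-gesture control flow: the first gesture sets the
# tracking flag (tracking-ready), the second gesture drives tracking only
# while the flag is set, and the third gesture resets the flag and fixes
# the head at the stop position. Names are illustrative, not normative.

class HeadGestureController:
    def __init__(self):
        self.tracking_flag = False
        self.state = "idle"

    def on_gesture(self, shape):
        if shape == "first":            # enter tracking-ready state
            self.tracking_flag = True
            self.state = "ready"
        elif shape == "second":         # track only if the flag is set
            if self.tracking_flag:
                self.state = "tracking"
        elif shape == "third":          # reset flag, fix head in place
            self.tracking_flag = False
            self.state = "stopped"
        return self.state
```

Note that a second gesture received before the flag is set is ignored, matching the condition "the second gesture shape and the tracking flag has been set".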
Wherein, the gesture shape recognition result acquisition module 501 specifically includes:
an image acquisition submodule, configured to obtain a color image and a depth image of the tested person's hand;
a gesture foreground image acquisition submodule, configured to obtain a gesture foreground image according to the color image and the depth image;
a gesture shape recognition result acquisition submodule, configured to identify the gesture shape of the tested person according to the gesture foreground image and obtain the gesture shape recognition result.
Wherein, the gesture foreground image acquisition submodule specifically includes:
a foreground region extraction unit, configured to process the depth image with a threshold segmentation algorithm and extract, as the foreground region, the image region whose gray values lie within a set range;
a foreground color image acquisition unit, configured to obtain the color image of the foreground region according to the corresponding position of the foreground region in the color image;
a histogram establishment unit, configured to establish a histogram according to skin color features;
an image conversion unit, configured to transform the color image of the foreground region into the corresponding color space;
a probability map acquisition unit, configured to perform back projection in the color space according to the histogram to obtain a probability map;
a gesture foreground image acquisition unit, configured to denoise the probability map with a morphological erosion and dilation algorithm and a threshold segmentation algorithm to obtain the gesture foreground image.
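The unit chain above (depth thresholding, skin-color histogram, back projection, denoising) can be illustrated with a toy, pure-Python sketch. The image sizes, depth window, hue values, and histogram entries below are all invented for illustration; a real implementation would operate on full camera frames in a color space such as HSV and would use true morphological erosion and dilation rather than the simple probability threshold used here as a stand-in.

```python
# Toy sketch of the foreground pipeline described above:
# 1) threshold the depth image to a foreground mask,
# 2) keep the corresponding color (here: hue) pixels,
# 3) back-project a skin-color histogram to a probability map,
# 4) threshold the probability map as a simplified stand-in for the
#    morphological erosion/dilation denoising step.

def depth_threshold(depth, lo, hi):
    # foreground = pixels whose depth value lies within the set range
    return [[1 if lo <= d <= hi else 0 for d in row] for row in depth]

def back_project(hue, mask, skin_hist):
    # probability of 'skin' for each foreground pixel, 0 elsewhere
    return [[skin_hist.get(h, 0.0) if m else 0.0
             for h, m in zip(hrow, mrow)]
            for hrow, mrow in zip(hue, mask)]

def prob_threshold(prob, t):
    return [[1 if p >= t else 0 for p in row] for row in prob]

# toy 2x3 depth (mm) and hue images; values are illustrative
depth = [[300, 800, 900],
         [850, 820, 2000]]
hue   = [[10, 12, 90],
         [11, 13, 12]]
skin_hist = {10: 0.9, 11: 0.8, 12: 0.85, 13: 0.7}  # assumed skin hues

mask = depth_threshold(depth, 700, 1000)    # hand-depth window
prob = back_project(hue, mask, skin_hist)   # probability map
gesture_fg = prob_threshold(prob, 0.75)     # denoised gesture foreground
```

The same structure maps directly onto library primitives (e.g. OpenCV's histogram back projection and morphological operators) when working with real images.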
Wherein, the gesture shape recognition result acquisition submodule specifically includes:
a feature vector calculation unit, configured to calculate a feature vector of the gesture foreground image;
a gesture classification result acquisition unit, configured to classify the feature vector with a support vector machine (SVM) to obtain a gesture classification result;
a gesture shape recognition result acquisition unit, configured to identify the gesture shape of the tested person's hand according to the gesture classification result and obtain the gesture shape recognition result.
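The classification step can be sketched as follows. The toy features (foreground area and per-row fill ratios) and the per-class weights are invented for illustration; a real system would compute the feature vector specified by the embodiment and use SVM parameters obtained by training on labeled gesture images.

```python
# Sketch of the classification step: a feature vector computed from the
# gesture foreground image is scored by per-class linear SVM decision
# functions (w.x + b) in a one-vs-rest scheme. Weights are illustrative.

def feature_vector(fg):
    # toy features: foreground area plus per-row fill ratios
    area = sum(sum(row) for row in fg)
    width = len(fg[0])
    return [area] + [sum(row) / width for row in fg]

def svm_classify(x, models):
    # one-vs-rest: pick the class with the highest decision value
    def score(w, b):
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=lambda c: score(*models[c]))

models = {  # (weights, bias) per gesture class; assumed, untrained values
    "first":  ([1.0, 0.5, 0.5], -3.0),
    "second": ([0.2, 2.0, 0.0], -1.0),
    "third":  ([0.0, 0.0, 2.0], -0.5),
}
```

The winning class label is then mapped back to the first, second, or third gesture shape to form the gesture shape recognition result.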
Wherein, the second gesture shape control module 503 specifically includes:
a rotation direction acquisition submodule, configured to determine the rotation direction of the robot head according to the probability map;
a rotation speed calculation submodule, configured to calculate the horizontal rotation speed and the vertical rotation speed of the robot head according to the probability map;
a rotation motion control submodule, configured to control, according to the rotation direction, the horizontal rotation speed, and the vertical rotation speed, the robot head to rotate horizontally according to the rotation direction and the horizontal rotation speed, and to rotate vertically according to the rotation direction and the vertical rotation speed.
With the robot head gestural control system of the present invention, the rotation and stopping of the robot head can be controlled according to the gesture shape of the tested person's hand, so that the robot head moves to the angle and position required for the exercise; the interactive operation between actual doctors and patients can thus be truly simulated, providing an exercising platform for treatment skills for the doctor.
Specific examples are used herein to set forth the principle and embodiments of the present invention; the description of the above embodiments is only intended to help in understanding the method of the present invention and its core concept. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific embodiments and the scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.