CN107688779A - Robot gesture interaction method and apparatus based on RGBD camera depth images - Google Patents

Robot gesture interaction method and apparatus based on RGBD camera depth images

Info

Publication number
CN107688779A
CN107688779A (application CN201710714575.3A)
Authority
CN
China
Prior art keywords
hand
gesture
robot
mapping
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710714575.3A
Other languages
Chinese (zh)
Inventor
丁希仑
齐静
韩锦飞
刘永超
杨光
张昀灿
白世杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201710714575.3A
Publication of CN107688779A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/117 Biometrics derived from hands

Abstract

The invention discloses a robot gesture interaction method and apparatus based on RGBD camera depth images, belonging to the fields of human-computer interaction and robotics. The apparatus comprises a predefinition module, a data acquisition module, a hand-region segmentation submodule, a gesture recognition submodule, a robot control module and a feedback module. The method is as follows: first, on a legged mobile manipulation robot platform, the predefinition module defines the types of user gestures and their mapping relations; the data acquisition module collects user gesture data with an RGBD camera; the static gesture recognition module performs hand-region segmentation and gesture recognition on the depth image; according to the gesture type and the recognition result, the corresponding mapping relation is selected, and the robot control module controls the robot to complete the specific action. Because the method is hardly affected by illumination, it improves the adaptability of gesture interaction to the environment; it is simple to operate, meets the robot's real-time interaction requirements, and is robust to illumination and complex backgrounds.

Description

Robot gesture interaction method and apparatus based on RGBD camera depth images
Technical field
The invention belongs to the fields of human-computer interaction and robotics, and in particular relates to a robot gesture interaction method and apparatus based on RGBD camera depth images.
Background technology
Legged mobile manipulation robots can replace humans in dangerous, highly toxic environments and can be sent into space to perform planetary exploration tasks; they have broad application prospects in reconnaissance and early warning, emergency rescue, and counter-terrorism. These tasks are usually complex, and because the intelligence of current robots is limited, they cannot handle everything fully autonomously and usually need human assistance. Good human-robot interaction not only improves operating efficiency but also makes full use of human wisdom to guide the robot through more complex tasks. Therefore, good human-robot interaction is essential.
Gestures are an important way for people to interact with robots and have the advantages of being intuitive and natural, but gesture interaction between humans and robots faces the following problems: legged mobile manipulation robots are mostly based on embedded systems with limited computing power, so real-time interaction is hard to achieve; and the robots mostly work in dynamic environments, where illumination and background changes have a large influence.
Natural human-computer interaction is simple, intuitive and easy to operate; it makes interaction between people and robots more engaging and people's lives more convenient, and is increasingly favored by the public. Improving human-computer interaction is therefore a general trend in the computer and robotics industries.
The content of the invention
To solve the problems that the hardware computing power available for gesture recognition on legged mobile robots is limited while gesture recognition must run in real time, and that gesture recognition is strongly affected by illumination changes, the present invention proposes a robot gesture interaction method and apparatus based on RGBD camera depth images, intended for legged mobile manipulation robot platforms.
The robot gesture interaction apparatus based on RGBD camera depth images comprises a predefinition module, a data acquisition module, a hand-region segmentation submodule, a gesture recognition submodule, a robot control module and a feedback module. The predefinition module comprises a gesture-type submodule and a mapping submodule; the hand-region segmentation submodule and the gesture recognition submodule together form a static gesture recognition module.
According to the type of robot and the practical application requirements, the predefinition module predefines the types of user gestures and the mapping relations between gestures and the robot;
Specifically: the gesture-type submodule predefines the user's gesture types;
the mapping submodule defines the mappings between gesture type/recognition result and robot motion type/operation type. Three modes are included: a motion control mode, an operation control mode and a "motion + operation" mode.
The motion control mode defines two mappings: the mapping between gesture type and robot motion type, and the mapping between recognition result and robot motion type;
the operation control mode defines two mappings: the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type;
the "motion + operation" mode defines four mappings: the mapping between gesture type and robot motion type, the mapping between recognition result and robot motion type, the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type.
The user makes a gesture in front of the camera, and the data acquisition module collects the user's skeleton points, depth images and other data through the RGBD camera and transmits them over USB to the hand-region segmentation submodule. The hand-region segmentation submodule segments the user's hand region from the acquired skeleton points and depth images and supplies it to the gesture recognition submodule; the gesture recognition submodule performs recognition and sends the recognition result to the robot control module via a ROS message. According to the robot's own motion types and operation types, and combining the gesture type with the recognition result, the robot control module selects the corresponding mapping relation from those defined in the mapping submodule and controls the robot to complete the specific action.
The feedback module lets the user formulate a corresponding control strategy according to the task requirements, environmental changes and the specific action actually completed by the robot; the strategy is returned to the predefinition module, where it is converted into specific gesture types.
The robot gesture interaction method based on RGBD camera depth images comprises the following steps:
Step 1: for the legged mobile manipulation robot platform, the predefinition module defines the types of user gestures and the mapping relations between gestures and the robot according to the practical application scenario.
The mapping includes three modes: a motion control mode, an operation control mode and a "motion + operation" mode.
The motion control mode defines two mappings: the mapping between gesture type and robot motion type, and the mapping between recognition result and robot motion type;
the operation control mode defines two mappings: the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type;
the "motion + operation" mode defines four mappings: the mapping between gesture type and robot motion type, the mapping between recognition result and robot motion type, the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type.
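As an illustration only (not part of the patent text), the three mapping modes can be represented as simple lookup tables. The gesture names and robot commands below are hypothetical placeholders; the actual entries are whatever the predefinition module defines for the platform.

```python
from enum import Enum

class MappingMode(Enum):
    MOTION = "motion control"
    OPERATION = "operation control"
    MOTION_AND_OPERATION = "motion + operation"

# Hypothetical example mappings; keys may be gesture types (strings) or
# recognition results (finger counts), values are robot motion/operation types.
MOTION_MAP = {
    "fist": "stop",        # gesture type -> robot motion type
    2: "walk_forward",     # recognition result -> robot motion type
}
OPERATION_MAP = {
    "open_palm": "gripper_open",   # gesture type -> robot operation type
    5: "gripper_close",            # recognition result -> robot operation type
}

def lookup(mode, key):
    """Select the mapping table according to the active mode."""
    if mode is MappingMode.MOTION:
        return MOTION_MAP.get(key)
    if mode is MappingMode.OPERATION:
        return OPERATION_MAP.get(key)
    # the "motion + operation" mode consults both tables
    return MOTION_MAP.get(key) or OPERATION_MAP.get(key)
```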
Step 2: the user makes a specific gesture according to the robot type, the actual demand, and the predefined gestures and mappings.
Step 3: the data acquisition module collects the skeleton points and depth images of the user's gesture with the RGBD camera.
Step 4: the static gesture recognition module performs hand-region segmentation and gesture recognition on the collected skeleton points and depth images.
The specific steps are as follows:
Step 401: use the SDK of the RGBD camera to extract the depth value of the hand center point as a reference value, set the depth-value range of the hand, and extract the objects within this range.
Specifically: first extract the depth value DepthValue of the hand center point with the SDK and take DepthValue as the reference value; choose a range a on each side; then set the value of the pixels whose depth lies within [DepthValue-a, DepthValue+a] to 0 and the value of the pixels in the remaining depth range to 255, thereby extracting the objects whose depth lies within [DepthValue-a, DepthValue+a].
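A minimal numpy sketch of this depth thresholding, assuming a depth image already aligned with the skeleton data; the names depth_img and hand_center are placeholders, not identifiers from the camera SDK.

```python
import numpy as np

def segment_by_depth(depth_img, hand_center, a=2):
    """Keep only pixels whose depth lies within [DepthValue-a, DepthValue+a].

    depth_img   : 2-D array of depth values
    hand_center : (row, col) of the hand center point reported by the SDK
    a           : half-width of the depth range around the reference value
    """
    depth_value = depth_img[hand_center]                      # reference value DepthValue
    in_range = np.abs(depth_img.astype(np.int32) - int(depth_value)) <= a
    # hand pixels become 0 (black), everything else 255 (white), as in the patent
    mask = np.where(in_range, 0, 255).astype(np.uint8)
    return mask, depth_value
```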
Step 402: taking the hand center point as the reference, extract the hand region of interest (ROI) and segment the hand region within the depth-value range.
Specifically:
First, taking the hand center point as the reference, define a rectangular frame around the hand center point as the hand region of interest;
the width β of the rectangular frame is adjusted according to the user's distance from the camera, specifically β = d × w, where d is the depth of the hand center point and w is the width of the rectangular frame when the hand center point is 1 meter from the camera.
When a point of the hand falls within the depth-value range [DepthValue-a, DepthValue+a] and the corresponding pixel lies inside the rectangular frame, it is retained as part of the segmented hand region.
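A sketch of the adaptive ROI combined with the depth mask, under the stated proportionality β = d × w; the box dimensions, the assumption that the depth is expressed in meters, and the parameter names are illustrative only.

```python
import numpy as np

def hand_roi_mask(mask, hand_center, depth_value, w=120, aspect=1.0):
    """Restrict the depth mask to an adaptive rectangular ROI around the hand center.

    w      : ROI width in pixels when the hand center is 1 m from the camera
    aspect : height/width ratio of the ROI box (assumed square by default)
    """
    d = float(depth_value)             # depth of the hand center, assumed in meters here
    beta = d * w                       # ROI width scales with distance (similarity principle)
    half_w, half_h = int(beta / 2), int(beta * aspect / 2)

    r, c = hand_center
    roi = np.full_like(mask, 255)      # everything outside the ROI is background (white)
    r0, r1 = max(r - half_h, 0), min(r + half_h, mask.shape[0])
    c0, c1 = max(c - half_w, 0), min(c + half_w, mask.shape[1])
    roi[r0:r1, c0:c1] = mask[r0:r1, c0:c1]   # keep depth-segmented pixels only inside the box
    return roi
```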
Step 403: perform noise reduction and morphological processing on the segmented hand region to obtain a hand binary image.
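A possible OpenCV sketch of this step; the embodiment below uses median filtering for noise reduction and a morphological opening, which is what is assumed here, and the hand-is-black polarity follows the patent's convention.

```python
import cv2

def clean_hand_mask(roi_mask, ksize=5):
    """Denoise the segmented hand region and smooth its edges (hand = 0, background = 255)."""
    denoised = cv2.medianBlur(roi_mask, ksize)                    # median filtering for noise reduction
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(denoised, cv2.MORPH_OPEN, kernel)   # opening operation, as in the embodiment
    return opened                                                 # hand binary image
```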
Step 404: recognize the hand binary image and take the number of fingers as the gesture recognition result.
The specific steps are:
Step 4041: extract the palm center position and compute a more accurate palm center as a new reference point;
the hand center point extracted by the SDK is moved toward the middle of the hand, along the vector formed by the farthest point and the center point, i.e. shifted along the X and Y directions respectively;
the displacement is determined as follows:
with the gesture close to vertical, the hand center point is translated according to the palm-center shift equations:
HandX = HandX + (X_max - HandX)/b
HandY = HandY + (Y_max - HandY)/c
where (X_max, Y_max) are the coordinates of the farthest hand point along X and Y; (HandX, HandY) are the coordinates of the hand center point along X and Y; b, c ∈ Q (the rationals) and are determined from the actual hand geometry.
After the palm-center translation, the new reference point is closer to the center of the hand.
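A direct transcription of the shift into code; the embodiment later uses b = 10 and c = 6, which are taken as defaults here, although they depend on the actual hand geometry.

```python
def shift_palm_center(hand_x, hand_y, x_max, y_max, b=10, c=6):
    """Move the SDK hand center toward the true palm center.

    (x_max, y_max) is the farthest hand point from the current center;
    the center is shifted along that vector by 1/b of the gap in X and 1/c in Y.
    """
    new_x = hand_x + (x_max - hand_x) / b
    new_y = hand_y + (y_max - hand_y) / c
    return new_x, new_y
```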
Step 4042: find the point in the hand ROI farthest from the new reference point and compute the distance distance_max between them;
Step 4043: divide the distance distance_max into Num_Circle equal parts;
Num_Circle is an empirical value chosen by the user according to the actual needs and the captured images;
0 < Num_Circle ≤ 20, Num_Circle ∈ N;
Step 4044: draw Num_Circle circles whose radii are integer multiples of distance_max/Num_Circle, record the number of intersection points between each circle and the hand contour in the hand ROI, and form the set Count;
Count = {count[1], count[2], ..., count[i], ..., count[n]};
Step 4045: judge, for each circle in the set Count, whether an intersection with the hand contour in the hand ROI is valid; if it is valid, count it, otherwise do not;
when counting the intersections of the i-th circle with the hand contour in the hand ROI: when the circle passes from the white region into the black region, the color change is valid and count[i] is incremented by 1 only if the previous D detection points are all in the white region and the following D detection points are all in the black region; when the circle passes from the black region into the white region, the color change is valid and count[i] is incremented by 1 only if the previous D points are all in the black region and the following D points are all in the white region.
The number of detection points on a circle is a direct proportion (linear) function of the circle's radius.
Step 4046: find the largest count count[i] in the set Count after validation; the number of fingers is count[i]/2 - 1.
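A compact sketch of steps 4042-4046, sampling each circle at a number of points proportional to its radius and validating each transition with D points on either side; the defaults (num_circle, sampling density) are illustrative, and D = 2 follows the values used in the embodiment below.

```python
import math
import numpy as np

def count_fingers(binary, palm, num_circle=10, pts_per_radius=0.5, D=2):
    """Estimate the finger count from a hand binary image (hand = 0, background = 255).

    palm           : (row, col) of the shifted palm center
    num_circle     : number of concentric circles (empirical, <= 20)
    pts_per_radius : sampling density; detection points grow linearly with the radius
    D              : points required on each side of a transition for it to be valid
    """
    ys, xs = np.where(binary == 0)                        # hand pixels
    if len(xs) == 0:
        return 0
    dists = np.hypot(ys - palm[0], xs - palm[1])
    distance_max = dists.max()                            # farthest hand point from the palm center
    step = distance_max / num_circle

    counts = []
    for i in range(1, num_circle + 1):
        radius = i * step
        n_pts = max(int(pts_per_radius * radius), 8 * D)  # detection points proportional to the radius
        samples = []
        for k in range(n_pts):
            ang = 2 * math.pi * k / n_pts
            r = int(round(palm[0] + radius * math.sin(ang)))
            c = int(round(palm[1] + radius * math.cos(ang)))
            if 0 <= r < binary.shape[0] and 0 <= c < binary.shape[1]:
                samples.append(binary[r, c])
            else:
                samples.append(255)                       # outside the image counts as background
        count_i = 0
        for j in range(D, n_pts - D):
            before, after = samples[j - D:j], samples[j:j + D]
            if all(p == 255 for p in before) and all(p == 0 for p in after):
                count_i += 1                              # valid white -> black change (entering a finger)
            elif all(p == 0 for p in before) and all(p == 255 for p in after):
                count_i += 1                              # valid black -> white change (leaving a finger)
        counts.append(count_i)

    # the wrist also crosses the larger circles, hence the "- 1"
    return max(max(counts) // 2 - 1, 0)
```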
Step 5: according to the predefined gesture mapping, map the gesture recognition result to the specific action of the robot and send it to the robot control module via a ROS message.
Step 6: the robot control module controls the robot to complete the specific action.
Step 7: the user formulates a corresponding control strategy according to the specific action actually completed by the robot, the environmental changes and the task requirements.
Step 8: according to the corresponding control strategy, the feedback module chooses either to continue the current gesture mapping or to carry out a new gesture mapping; the above steps are repeated, and the control strategy is converted into specific gesture types.
The advantages of the invention are:
1) The robot gesture interaction method based on RGBD camera depth images lets people interact with robots naturally, improving the human-robot interaction experience and the operating efficiency.
2) The robot gesture interaction method based on RGBD camera depth images is robust to illumination and complex backgrounds, and the recognition method is simple enough to meet the robot's real-time interaction requirements.
3) The robot gesture interaction apparatus based on RGBD camera depth images is built on ROS (Robot Operating System) and is highly portable.
Brief description of the drawings
Fig. 1 is a flow chart of the robot gesture interaction method based on RGBD camera depth images of the present invention;
Fig. 2 is a schematic diagram of the similarity-principle calculation used by the present invention to define the adaptive rectangular frame with the hand center point as the reference;
Fig. 3 is a schematic diagram of the hand-region segmentation result and gesture recognition performed by the static gesture recognition module of the present invention;
Fig. 4 is a structural diagram of the robot gesture interaction apparatus based on RGBD camera depth images of the present invention;
Fig. 5 is a schematic diagram of the 6 predefined specific gestures of the present invention;
Fig. 6 is a comparison of the original depth image of the present invention with the extracted threshold-based hand-region segmentation;
Fig. 7 is a comparison of the original depth image of the present invention, with the ROI frame added, and the extracted hand-region segmentation;
Fig. 8 is a schematic diagram of error sources in the segmented gesture contour.
Embodiment
The present invention is described in further detail below with reference to the drawings and embodiments.
Starting from natural, general human-computer interaction and aimed at legged mobile manipulation robot platforms, the present invention performs hand segmentation and gesture recognition on depth images collected by an ASUS Xtion PRO LIVE motion-sensing camera and controls the robot's motion/operation by recognizing static gestures. The static gesture recognition is based on RGBD camera depth images and is hardly affected by illumination, which helps improve the adaptability and robustness of gesture interaction to the environment; the method is simple to operate, meets the robot's real-time interaction requirement, and is robust to illumination and complex backgrounds.
The present invention is oriented toward public-safety applications and supports the Windows and Linux platforms; this embodiment is developed on the Linux platform with ROS (Robot Operating System).
As shown in Fig. 4, the robot gesture interaction apparatus based on RGBD camera depth images comprises a predefinition module, a data acquisition module, a hand-region segmentation submodule, a gesture recognition submodule, a robot control module and a feedback module. The predefinition module comprises a gesture-type submodule and a mapping submodule; the hand-region segmentation submodule and the gesture recognition submodule form a static gesture recognition module.
According to the type of robot and the practical application requirements, the predefinition module predefines the user's gesture types, the gesture recognition results, and the mapping relations between gestures and the robot;
Specifically: the gesture-type submodule predefines the user's gesture types and gesture recognition results;
the mapping submodule defines the mappings between gesture type/recognition result and robot motion type/operation type, including three modes: a motion control mode, an operation control mode and a "motion + operation" mode.
The motion control mode defines two mappings: the mapping between gesture type and robot motion type, and the mapping between recognition result and robot motion type;
the operation control mode defines two mappings: the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type;
the "motion + operation" mode defines four mappings: the mapping between gesture type and robot motion type, the mapping between recognition result and robot motion type, the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type.
The data acquisition module collects skeleton points, RGB images, depth images and other data through the RGBD camera and provides data for the static gesture recognition submodule based on depth images.
The robot control module receives, via ROS messages, the gesture recognition results sent by the static gesture recognition module based on depth images and, according to the mapping relations between gesture recognition results and motion/operation types defined by the predefinition module, controls the robot to complete the specific action.
The static gesture recognition submodule based on depth images comprises the hand-region segmentation submodule and the gesture recognition submodule. The hand-region segmentation submodule segments the hand region from the depth image obtained by the data acquisition module, and the gesture recognition submodule identifies the gesture type from the hand region produced by the hand-region segmentation submodule.
The user makes a gesture in front of the camera, and the data acquisition module collects the user's skeleton points, depth images and other data through the RGBD camera and transmits them over USB to the hand-region segmentation submodule. The hand-region segmentation submodule segments the user's hand region from the acquired skeleton points and depth images and supplies it to the gesture recognition submodule; the gesture recognition submodule performs recognition and sends the recognition result to the robot control module via a ROS message. According to the robot's own motion types and operation types, and combining the gesture type with the recognition result, the robot control module selects the corresponding mapping relation from those defined in the mapping submodule and controls the robot to complete the specific action.
The feedback module lets the user formulate a corresponding control strategy according to the task requirements, environmental changes and the specific action actually completed by the robot; the strategy is returned to the predefinition module, where it is converted into specific gesture types.
As shown in Fig. 1, the specific steps of the robot gesture interaction method based on RGBD camera depth images are as follows:
Step 1: for the legged mobile manipulation robot platform, the predefinition module defines the types of user gestures and the mapping relations between gestures and the robot according to the practical application scenario.
The mapping includes three modes: a motion control mode, an operation control mode and a "motion + operation" mode.
The motion control mode defines two mappings: the mapping between gesture type and robot motion type, and the mapping between recognition result and robot motion type;
the operation control mode defines two mappings: the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type;
the "motion + operation" mode defines four mappings: the mapping between gesture type and robot motion type, the mapping between recognition result and robot motion type, the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type.
Step 2: the user makes a specific gesture according to the robot type, the actual demand, and the predefined gestures and mappings.
Step 3: the data acquisition module collects the skeleton points and depth images of the user's gesture with the RGBD camera.
The data are collected with an ASUS Xtion PRO LIVE motion-sensing camera or with other suitable RGBD cameras such as Kinect.
Step 4: the static gesture recognition module performs hand-region segmentation and gesture recognition on the collected skeleton points and depth images.
The specific steps are as follows:
Step 401: use the SDK of the RGBD camera to extract the depth value of the hand center point as a reference value, set the depth-value range of the hand, and extract the objects within this range.
Specifically: first extract the depth value DepthValue of the hand center point with the ASUS Xtion PRO LIVE SDK and take DepthValue as the reference value; choose a range a on each side; then set the value of the pixels whose depth lies within [DepthValue-a, DepthValue+a] to 0 (black) and the value of the pixels outside this depth range to 255 (white), thereby extracting the objects whose depth lies within [DepthValue-a, DepthValue+a].
Step 402: taking the hand center point as the reference, extract the hand region of interest (ROI) and segment the hand region within the depth-value range.
Specifically:
First, taking the hand center point as the reference, define a rectangular frame around the hand center point as the hand region of interest.
A point is regarded as belonging to the hand only when it satisfies the condition of step 401 and lies within this rectangular frame. Considering that the distance between the user's hand and the camera varies, the size of the rectangular frame must also be adjusted according to the distance between the hand and the camera. The present invention treats the size of the rectangular frame as a linear function of the distance, which can more simply be taken as a direct proportion, as shown in Fig. 2: assuming that the width of the rectangular frame occupied by the hand is w when the hand is 1 meter from the camera (i.e. the depth of the hand center point is 1 meter), then for a depth d the width β follows from the similarity principle as β = d × w.
When a point of the hand falls within the depth-value range [DepthValue-a, DepthValue+a] and the corresponding pixel lies inside the rectangular frame, it is retained as part of the segmented hand region.
Fig. 2 shows the adaptive rectangular frame of the hand region; its width can be adjusted according to the distance between the user and the camera. In practice, however, the closer the hand is to the camera, the larger the area it occupies on the screen and the wider the required frame. Therefore, when computing the size of the rectangular frame, the present invention uses 255 (the maximum depth value) minus the distance value as the new distance; this yields the adaptive hand-region frame, i.e. the hand region of interest (ROI). On this basis, a point is judged to belong to the hand only if it satisfies step 401 and lies within the adaptive region-of-interest frame, which gives the hand-region segmentation result.
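A small adjustment of the earlier ROI sketch reflecting this paragraph; here the raw depth is assumed to be an 8-bit value in [0, 255], and the exact scale factor applied to the inverted distance is illustrative only.

```python
def adaptive_roi_width(depth_value_8bit, w=120):
    """ROI width using the inverted 8-bit depth, so a nearer hand gets a wider frame."""
    effective_distance = 255 - int(depth_value_8bit)   # "new distance" per the embodiment
    return effective_distance * w / 255.0              # normalization by 255 is an assumption
```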
Step 403: perform noise reduction and morphological processing on the segmented hand region to obtain a hand binary image.
Step 404: recognize the hand binary image and take the number of fingers as the gesture recognition result.
After the hand region has been segmented, a hand binary image is obtained. With the palm center as the center, the segmented hand binary image is scanned along circles of different radii; each detected color-change point can be taken to indicate the appearance of a finger, as shown in Fig. 3.
The specific steps are:
Step 4041: extract the palm center position with the data acquisition card and, taking this position as the reference, compute a more accurate palm center as the new reference point;
because the hand center point extracted by the ASUS Xtion PRO LIVE SDK always lies at the root of the middle finger, and the farthest point of the extracted hand region from this reference point always lies at the wrist, the hand center point is moved toward the middle of the hand as far as possible, along the vector formed by the farthest point and the center point, by a certain distance in the X and Y directions respectively;
the displacement is determined as follows:
assuming the gesture is close to vertical (as long as the deviation angle is not large), the hand center point is translated according to the palm-center shift equations, i.e. formula (1):
HandX = HandX + (X_max - HandX)/b
HandY = HandY + (Y_max - HandY)/c    (1)
where (X_max, Y_max) are the coordinates of the farthest hand point along X and Y; (HandX, HandY) are the coordinates of the hand center point along X and Y; b, c ∈ Q (the rationals) and are determined from the actual hand geometry.
After the palm-center translation, the new reference point is closer to the center of the hand, laying the foundation for the subsequent gesture recognition.
Step 4042: find the point in the hand ROI farthest from the new reference point and compute the distance distance_max between them;
Step 4043: divide the distance distance_max into Num_Circle equal parts;
Num_Circle is an empirical value chosen by the user according to the actual needs and the captured images;
0 < Num_Circle ≤ 20, Num_Circle ∈ N;
Step 4044: draw Num_Circle circles whose radii are integer multiples of distance_max/Num_Circle, record the number of intersection points between each circle and the hand contour in the hand ROI, and form the set Count;
Count = {count[1], count[2], ..., count[i], ..., count[n]};
Step 4045: judge, for each circle in the set Count, whether an intersection with the hand contour in the hand ROI is valid; if it is valid, count it, otherwise do not;
because each time the circle passes through a finger, it undergoes two pixel-value changes: the pixel color changes from white to black and then from black back to white.
To reduce or avoid detection errors, the present invention adds a condition for deciding whether to count: when counting the intersections of the i-th circle with the hand contour in the hand ROI, and the circle passes from the white region into the black region, the change is considered a valid color change (entering a finger from the background) and count[i] is incremented by 1 only if the previous D detection points are all in the white region and the following D detection points are all in the black region; when the circle passes from the black region into the white region, the change is considered a valid color change (leaving the finger for the white background) and count[i] is incremented by 1 only if the previous D points are all in the black region and the following D points are all in the white region.
Note that taking more sampled points is not always better, because pixel coordinates are integers and therefore discrete. If too many points are taken, two consecutive detections may hit the same pixel, which leads to over-counting, and the number of fingers counted increases accordingly; if too few points are taken, i.e. the angular increment between detections becomes too large, a small gap between two fingers may be skipped entirely, causing missed detections. For this reason the present invention ties the number of detection points to the radius of the circle: the number of detection points is a direct proportion (linear) function of the radius, so that the angular increment per detection is the same on circles of different radii.
Step 4046: find the largest count count[i] in the set Count after validation; the number of fingers is count[i]/2 - 1.
The points along each circle are examined one by one, and whenever a pixel-value change is detected, the counter count[i] of the i-th circle is incremented by 1. After all circles have been examined, the maximum value max stored in the circle counters is taken; since the wrist is also crossed during detection, two extra change points are detected, so the current number of fingers is:
max/2 - 1    (2)
Step 5: according to the predefined gesture mapping, map the gesture recognition result to the specific action of the robot and send it to the robot control module via a ROS message.
Step 6: the robot control module controls the robot to complete the specific action;
the robot control module controls the robot to complete the specific motion/operation task according to the received gesture recognition result and the predefined mapping relations between gesture types and motions/operations.
Step 7: the user formulates a corresponding control strategy according to the specific action actually completed by the robot, the environmental changes and the task requirements.
Step 8: according to the corresponding control strategy, the feedback module chooses either to continue the current gesture mapping or to carry out a new gesture mapping; the above steps are repeated, and the control strategy is converted into specific gesture types.
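For illustration only, a minimal ROS (rospy) sketch of the publishing side of step 5. The node name, the topic name and the recognize_current_frame() hook are hypothetical and not taken from the patent; the hook stands in for steps 401-404 run on the latest depth frame.

```python
import rospy
from std_msgs.msg import Int32

def recognize_current_frame():
    """Hypothetical hook: would run steps 401-404 on the newest depth frame and return a finger count."""
    return None

def publish_recognition_result():
    rospy.init_node("static_gesture_recognition")
    pub = rospy.Publisher("gesture_recognition_result", Int32, queue_size=10)
    rate = rospy.Rate(10)                      # illustrative loop rate for real-time interaction
    while not rospy.is_shutdown():
        fingers = recognize_current_frame()
        if fingers is not None:
            pub.publish(Int32(data=fingers))   # recognition result sent as a ROS message
        rate.sleep()

if __name__ == "__main__":
    publish_recognition_result()
```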
Embodiment 1:
The following is an implementation process in which the static gesture recognition based on RGBD camera depth images of the present invention is used for gesture interaction with a robot, divided into the following steps:
First, predefine the gesture types and the mappings between gesture recognition results and robot motion/operation types.
This example targets a six-legged mobile manipulation robot whose end effectors are mounted at the ends of the robot's legs: when the robot moves, the end effectors serve as legs for walking, and when the robot stops, the end effectors can perform operation tasks. As shown in Fig. 5, 6 static number gestures are predefined (one-handed 0-5 number gestures); the mapping between static gesture recognition results and robot operation types is predefined as shown in Table 1.
Table 1
Finger number identified    Robot manipulation action
0    Close the robot end effector
1    Open the robot end effector
2    Rotate the robot end effector clockwise
3    Rotate the robot end effector counterclockwise
4    Lift the robot end effector
5    Lower the robot end effector
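For illustration only, a minimal rospy sketch of the control side of Table 1: it subscribes to the published finger count and looks up the corresponding end-effector action. The topic name, the Int32 message choice and the execute_action stub are hypothetical (matching the publisher sketch above), not identifiers from the patent.

```python
import rospy
from std_msgs.msg import Int32

# Table 1: finger count -> end-effector action (action names are placeholders)
ACTION_MAP = {
    0: "close_end_effector",
    1: "open_end_effector",
    2: "rotate_end_effector_cw",
    3: "rotate_end_effector_ccw",
    4: "lift_end_effector",
    5: "lower_end_effector",
}

def execute_action(action):
    rospy.loginfo("executing: %s", action)   # stub; a real controller would command the hardware

def on_gesture(msg):
    action = ACTION_MAP.get(msg.data)
    if action is not None:
        execute_action(action)

if __name__ == "__main__":
    rospy.init_node("gesture_robot_control")
    rospy.Subscriber("gesture_recognition_result", Int32, on_gesture)
    rospy.spin()
```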
Then the static gesture recognition module performs hand-region segmentation and gesture recognition on the collected skeleton points and depth image of the user:
Step 1): extract the depth value DepthValue of the hand center point with the ASUS Xtion PRO LIVE SDK, take DepthValue as the reference value, and take a distance of 2 units on each side; then set the value of the pixels whose depth lies within [DepthValue-2, DepthValue+2] to 0 (black) and the value of the pixels outside this depth range to 255 (white), thereby extracting the objects whose depth lies within [DepthValue-2, DepthValue+2], as shown in Fig. 6.
Since other objects within the camera's field of view may lie at the same depth as the hand, setting the depth range directly is very likely to mistake other objects of equal depth for the hand. As Fig. 6 shows, objects of equal depth in the border region are taken as the hand and extracted; at the same time, because the arm is raised vertically, the hand and the arm are at almost the same depth, so the whole arm is mistaken for the hand and extracted as well.
Step 2): to solve the problem of step 1), a rectangular frame is defined around the hand center point, and a point is regarded as belonging to the hand only if it satisfies the condition of step 1) and lies within this rectangular frame.
Fig. 7 shows the hand-region segmentation result. It can be seen that the current segmentation effectively eliminates the influence of objects at equal depth and also removes the arm segment. Because the ROI is used, and it is very unlikely that another object inside the ROI has the same depth as the hand, there is no noise inside the hand-region frame; outside the hand ROI, the pixel values are uniformly set to 255 (white), so noise cannot appear outside the frame either. The only unsatisfactory aspect is that the hand edge is not smooth enough, caused by the camera's limited edge-detection accuracy at the hand boundary. This small defect can be minimized by image noise reduction and binary-image morphological operations; this example uses median filtering for noise reduction and an opening operation for morphology. At this point the hand region has been segmented successfully, laying a solid foundation for the subsequent gesture recognition.
Step 3): after the hand region has been segmented, a hand binary image is obtained. With the palm center as the center, the segmented hand binary image is scanned along circles of different radii; each detected color-change point can be taken to indicate a finger, specifically as follows:
The present invention uses the palm center position extracted by the data acquisition card and performs the following operations with this position as the reference:
Extract the palm center position to lay the foundation for the subsequent gesture recognition. Because the hand center point extracted by the ASUS Xtion PRO LIVE SDK always lies at the root of the middle finger, and the farthest point of the extracted hand region from this reference point always lies at the wrist, the present invention first moves the palm point toward the middle of the hand as far as possible, shifting the hand center point along the vector formed by the farthest point and the center point by a certain distance in the X and Y directions respectively. The displacement is determined as follows: first assume that the gesture is always close to vertical; as long as the deviation angle is not large, the hand center point can be translated according to the palm-center shift equations, i.e. formula (1):
HandX = HandX + (X_max - HandX)/b
HandY = HandY + (Y_max - HandY)/c    (1)
where b = 10 and c = 6. The shift ratios in the two directions differ because the extracted "hand center point" is not really at the palm center but at the root of the middle finger: when the hand is close to vertical, the deviation of the extracted center from the "real palm center" is small in the X direction and larger in the Y direction. Therefore the present invention moves only a small proportion in the X direction, one tenth of the original distance, and a larger proportion in the Y direction, one sixth of the original distance. These two ratios were obtained in this example from the geometry of the hand, and the user can adjust them slightly according to the actual situation.
Because the Xtion PRO LIVE data acquisition card detects finger edges inaccurately, the finger edges of the segmented hand region contain many rough points. Detecting fingers directly in this way may produce false detections, because tiny noise at a finger edge may be detected as a finger. As Fig. 8 shows, the segmented gesture contour still has some rough places. Taking the noise marked by the curve as an example, if a circle happens to pass through this protrusion, then according to the algorithm above the counter count[i] will increase by two counts, because one count is recorded when the circle enters the black protrusion from the white region and another when it leaves the black protrusion for the white region. count[i] is then counted twice too many, which is equivalent to counting one extra finger. Although the noise reduction and morphological operations described above improve the quality of the segmented hand region, it cannot be ruled out that a few protrusions remain and affect the gesture recognition result.
To reduce or avoid such detection errors, the invention adds a condition for deciding whether to count: when the circle enters the black region from the white region, only if the first two points are all in the white region and the latter two points are all in the black region is it considered a valid color change, i.e. entering a finger, and count[i] is incremented; when the circle enters the white region from the black region, only if the first two points are all in the black region and the latter two points are all in the white region is it considered a valid color change, i.e. leaving the finger's black region for the white background, and count[i] is incremented.
Through extensive experiments the present invention found the optimal single angular increment for detection, i.e. the optimal number of detection points on each circle, which greatly reduces the false-detection rate caused by noise and keeps all count[i] within a reasonable range, i.e. count[i] <= 12, giving fairly satisfactory results. In this example each gesture was tested with 10 pictures, and the gesture recognition results are shown in Table 2.
Table 2
From Table 2 the overall recognition rate is 92%, which generally meets the requirement. However, this method has a low recognition rate when the finger count is zero, reaching only 60%, because when a fist is made the finger joints may protrude, causing detection errors in which the protruding parts are mistaken for fingers. One improvement is to make the judging condition stricter: a point is considered valid only when its distance from the palm center exceeds a certain value. This distance can be a direct proportion (linear) function of the maximum distance; in this example it can be taken as one third of the maximum distance.
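A sketch of this stricter validity check, which could be applied to each candidate color-change point in the finger-counting sketch above; the one-third factor is the value suggested for this example.

```python
def is_valid_transition_point(dist_to_palm, distance_max, factor=1/3):
    """Accept a color-change point only if it is far enough from the palm center."""
    return dist_to_palm > factor * distance_max
```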
Finally, the gesture recognition result is sent to the robot control system. According to the received gesture recognition result, as shown in Table 1, and the predefined mapping relations between gesture types and robot operation actions, the robot control system controls the robot to complete the specific motion/operation task.

Claims (6)

1. A robot gesture interaction apparatus based on RGBD camera depth images, characterized by comprising a predefinition module, a data acquisition module, a hand-region segmentation submodule, a gesture recognition submodule, a robot control module and a feedback module; the predefinition module comprises a gesture-type submodule and a mapping submodule; the hand-region segmentation submodule and the gesture recognition submodule form a static gesture recognition module;
the user makes a gesture in front of the camera; the data acquisition module collects the user's skeleton points, depth images and other data through the RGBD camera and transmits them over USB to the hand-region segmentation submodule; the hand-region segmentation submodule segments the user's hand region from the acquired skeleton points and depth images and supplies it to the gesture recognition submodule; the gesture recognition submodule performs recognition and sends the recognition result to the robot control module via a ROS message; according to the robot's own motion types and operation types, and combining the gesture type with the recognition result, the robot control module selects the corresponding mapping relation from those defined in the mapping submodule and controls the robot to complete the specific action.
2. The robot gesture interaction apparatus based on RGBD camera depth images of claim 1, characterized in that the predefinition module predefines the types of user gestures and the mapping relations between gestures and the robot according to the type of robot and the practical application requirements;
the gesture-type submodule predefines the user's gesture types;
the mapping submodule defines the mappings between gesture type/recognition result and robot motion type/operation type, including three modes: a motion control mode, an operation control mode and a "motion + operation" mode;
the motion control mode defines two mappings: the mapping between gesture type and robot motion type, and the mapping between recognition result and robot motion type;
the operation control mode defines two mappings: the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type;
the "motion + operation" mode defines four mappings: the mapping between gesture type and robot motion type, the mapping between recognition result and robot motion type, the mapping between gesture type and robot operation type, and the mapping between recognition result and robot operation type.
3. The robot gesture interaction apparatus based on RGBD camera depth images of claim 1, characterized in that the feedback module lets the user formulate a corresponding control strategy according to the task requirements, environmental changes and the specific action actually completed by the robot; the strategy is returned to the predefinition module and converted into specific gesture types.
4. A robot gesture interaction method applying the robot gesture interaction apparatus based on RGBD camera depth images of claim 1, characterized by comprising the following steps:
Step 1: for the legged mobile manipulation robot platform, the predefinition module defines the types of user gestures and the mapping relations between gestures and the robot according to the practical application scenario;
Step 2: the user makes a specific gesture according to the robot type, the actual demand, and the predefined gestures and mappings;
Step 3: the data acquisition module collects the skeleton points and depth images of the user's gesture with the RGBD camera;
Step 4: the static gesture recognition module performs hand-region segmentation and gesture recognition on the collected skeleton points and depth images;
the specific steps are as follows:
Step 401: use the SDK of the RGBD camera to extract the depth value of the hand center point as a reference value, set the depth-value range of the hand, and extract the objects within this range;
specifically: first extract the depth value DepthValue of the hand center point with the SDK and take DepthValue as the reference value; choose a range a on each side; then set the value of the pixels whose depth lies within [DepthValue-a, DepthValue+a] to 0 and the value of the pixels in the remaining depth range to 255, thereby extracting the objects whose depth lies within [DepthValue-a, DepthValue+a];
Step 402: taking the hand center point as the reference, extract the hand region of interest (ROI) and segment the hand region within the depth-value range;
first, taking the hand center point as the reference, define a rectangular frame around the hand center point as the hand region of interest;
the width β of the rectangular frame is adjusted according to the user's distance from the camera, specifically β = d × w, where d is the depth of the hand center point and w is the width of the rectangular frame when the hand center point is 1 meter from the camera;
when a point of the hand falls within the depth-value range [DepthValue-a, DepthValue+a] and the corresponding pixel lies inside the rectangular frame, it is retained as part of the segmented hand region;
Step 403: perform noise reduction and morphological processing on the segmented hand region to obtain a hand binary image;
Step 404: recognize the hand binary image and take the number of fingers as the gesture recognition result;
Step 5: according to the predefined gesture mapping, map the gesture recognition result to the specific action of the robot and send it to the robot control module via a ROS message;
Step 6: the robot control module controls the robot to complete the specific action;
Step 7: the user formulates a corresponding control strategy according to the specific action actually completed by the robot, the environmental changes and the task requirements;
Step 8: according to the corresponding control strategy, the feedback module chooses either to continue the current gesture mapping or to carry out a new gesture mapping; the above steps are repeated, and the control strategy is converted into specific gesture types.
5. The robot gesture interaction method based on RGBD camera depth images of claim 4, characterized in that the specific steps of recognizing the hand binary image in step 404 are:
Step 4041: extract the palm center position and compute a more accurate palm center as a new reference point;
the hand center point extracted by the SDK is moved toward the middle of the hand, along the vector formed by the farthest point and the center point, i.e. shifted along the X and Y directions respectively;
the displacement is determined as follows:
with the gesture close to vertical, the hand center point is translated according to the palm-center shift equations:
HandX = HandX + (X_max - HandX)/b
HandY = HandY + (Y_max - HandY)/c
where (X_max, Y_max) are the coordinates of the farthest hand point along X and Y; (HandX, HandY) are the coordinates of the hand center point along X and Y; b, c ∈ Q (the rationals) and are determined from the actual hand geometry;
after the palm-center translation, the new reference point is closer to the center of the hand;
Step 4042: find the point in the hand ROI farthest from the new reference point and compute the distance distance_max between them;
Step 4043: divide the distance distance_max into Num_Circle equal parts;
Num_Circle is an empirical value chosen by the user according to the actual needs and the captured images;
0 < Num_Circle ≤ 20, Num_Circle ∈ N;
Step 4044: draw Num_Circle circles whose radii are integer multiples of distance_max/Num_Circle, record the number of intersection points between each circle and the hand contour in the hand ROI, and form the set Count;
Count = {count[1], count[2], ..., count[i], ..., count[n]};
Step 4045: judge, for each circle in the set Count, whether an intersection with the hand contour in the hand ROI is valid; if it is valid, count it, otherwise do not;
when counting the intersections of the i-th circle with the hand contour in the hand ROI: when the circle passes from the white region into the black region, the color change is valid and count[i] is incremented by 1 only if the previous D detection points are all in the white region and the following D detection points are all in the black region; when the circle passes from the black region into the white region, the color change is valid and count[i] is incremented by 1 only if the previous D points are all in the black region and the following D points are all in the white region;
Step 4046: find the largest count count[i] in the set Count after validation; the number of fingers is count[i]/2 - 1.
6. The robot gesture interaction method based on RGBD camera depth images of claim 5, characterized in that, in step 4045, the number of detection points is a direct proportion (linear) function of the circle's radius.
CN201710714575.3A 2017-08-18 2017-08-18 A kind of robot gesture interaction method and apparatus based on RGBD camera depth images Pending CN107688779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710714575.3A CN107688779A (en) 2017-08-18 2017-08-18 A kind of robot gesture interaction method and apparatus based on RGBD camera depth images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710714575.3A CN107688779A (en) 2017-08-18 2017-08-18 A kind of robot gesture interaction method and apparatus based on RGBD camera depth images

Publications (1)

Publication Number Publication Date
CN107688779A true CN107688779A (en) 2018-02-13

Family

ID=61153476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710714575.3A Pending CN107688779A (en) 2017-08-18 2017-08-18 A kind of robot gesture interaction method and apparatus based on RGBD camera depth images

Country Status (1)

Country Link
CN (1) CN107688779A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468145A (en) * 2015-11-18 2016-04-06 北京航空航天大学 Robot man-machine interaction method and device based on gesture and voice recognition
CN105867630A (en) * 2016-04-21 2016-08-17 深圳前海勇艺达机器人有限公司 Robot gesture recognition method and device and robot system
CN106005086A (en) * 2016-06-02 2016-10-12 北京航空航天大学 Leg-wheel composite robot based on Xtion equipment and gesture control method thereof
CN106326860A (en) * 2016-08-23 2017-01-11 武汉闪图科技有限公司 Gesture recognition method based on vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ASANTERABI MALIMA ET AL.: "A FAST ALGORITHM FOR VISION-BASED HAND GESTURE RECOGNITION FOR ROBOT CONTROL", IEEE 14TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS *
MYUNG-HO JU ET AL.: "Emotional Interaction with a Robot Using Facial Expressions, Face Pose and Hand Gestures", INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS *
QI JING ET AL. (齐静等): "Research progress on robot visual gesture interaction technology" (机器人视觉手势交互技术研究进展), ROBOT (《机器人》) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108748139A (en) * 2018-04-18 2018-11-06 四川文理学院 Robot control method based on human body temperature type and device
CN110598510A (en) * 2018-06-13 2019-12-20 周秦娜 Vehicle-mounted gesture interaction technology
CN110598510B (en) * 2018-06-13 2023-07-04 深圳市点云智能科技有限公司 Vehicle-mounted gesture interaction technology
US11960275B2 (en) 2018-11-21 2024-04-16 Fujifilm Business Innovation Corp. Autonomous moving apparatus and non-transitory computer readable medium
CN111290377B (en) * 2018-11-21 2023-10-10 富士胶片商业创新有限公司 Autonomous mobile apparatus and computer readable medium
CN111290377A (en) * 2018-11-21 2020-06-16 富士施乐株式会社 Autonomous moving apparatus and computer readable medium
CN109623848A (en) * 2019-02-26 2019-04-16 江苏艾萨克机器人股份有限公司 A kind of hotel service robot
CN110083243A (en) * 2019-04-29 2019-08-02 深圳前海微众银行股份有限公司 Exchange method, device, robot and readable storage medium storing program for executing based on camera
CN110276292A (en) * 2019-06-19 2019-09-24 上海商汤智能科技有限公司 Intelligent vehicle motion control method and device, equipment and storage medium
CN110276292B (en) * 2019-06-19 2021-09-10 上海商汤智能科技有限公司 Intelligent vehicle motion control method and device, equipment and storage medium
CN110427100A (en) * 2019-07-03 2019-11-08 武汉子序科技股份有限公司 A kind of movement posture capture system based on depth camera
CN111300402A (en) * 2019-11-26 2020-06-19 爱菲力斯(深圳)科技有限公司 Robot control method based on gesture recognition
CN111126279B (en) * 2019-12-24 2024-04-16 深圳市优必选科技股份有限公司 Gesture interaction method and gesture interaction device
CN111126279A (en) * 2019-12-24 2020-05-08 深圳市优必选科技股份有限公司 Gesture interaction method and gesture interaction device
CN113139402A (en) * 2020-01-17 2021-07-20 海信集团有限公司 A kind of refrigerator
CN111354029A (en) * 2020-02-26 2020-06-30 深圳市瑞立视多媒体科技有限公司 Gesture depth determination method, device, equipment and storage medium
CN111694428A (en) * 2020-05-25 2020-09-22 电子科技大学 Gesture and track remote control robot system based on Kinect
CN111694428B (en) * 2020-05-25 2021-09-24 电子科技大学 Gesture and track remote control robot system based on Kinect
CN112882577B (en) * 2021-03-26 2023-04-18 歌尔科技有限公司 Gesture control method, device and system
CN112882577A (en) * 2021-03-26 2021-06-01 歌尔光学科技有限公司 Gesture control method, device and system

Similar Documents

Publication Publication Date Title
CN107688779A (en) Robot gesture interaction method and apparatus based on RGBD camera depth images
CN102402680B (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
Jain et al. Real-time upper-body human pose estimation using a depth camera
CN106598227B (en) Gesture identification method based on Leap Motion and Kinect
US9330307B2 (en) Learning based estimation of hand and finger pose
CN103530599A (en) Method and system for distinguishing real face and picture face
CN110569817B (en) System and method for realizing gesture recognition based on vision
CN103984928A (en) Finger gesture recognition method based on field depth image
CN103376890A (en) Gesture remote control system based on vision
Krejov et al. Multi-touchless: Real-time fingertip detection and tracking using geodesic maxima
CN106030610A (en) Real-time 3D gesture recognition and tracking system for mobile devices
Hongyong et al. Finger tracking and gesture recognition with kinect
Shahrabadi et al. Detection of indoor and outdoor stairs
CN110032932A (en) A kind of human posture recognition method based on video processing and decision tree given threshold
Wachs et al. A real-time hand gesture system based on evolutionary search
CN103426000B (en) A kind of static gesture Fingertip Detection
Holte et al. View invariant gesture recognition using the CSEM SwissRanger SR-2 camera
Jean et al. Body tracking in human walk from monocular video sequences
Obukhov et al. Organization of three-dimensional gesture control based on machine vision and learning technologies
Raza et al. An integrative approach to robust hand detection using CPM-YOLOv3 and RGBD camera in real time
Rong et al. RGB-D hand pose estimation using fourier descriptor
Wang Hand gesture recognition based on fingertip detection
Sridhar et al. Multiple camera, multiple person tracking with pointing gesture recognition in immersive environments
Leone et al. Topological and volumetric posture recognition with active vision sensor in AAL contexts
CN104680134A (en) Method for quickly detecting human body

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180213