CN104992171A - Method and system for gesture recognition and man-machine interaction based on 2D video sequence - Google Patents

Method and system for gesture recognition and man-machine interaction based on 2D video sequence

Info

Publication number
CN104992171A
CN104992171A (application CN201510469130.4A)
Authority
CN
China
Prior art keywords
staff
man
gesture
target
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510469130.4A
Other languages
Chinese (zh)
Inventor
黄飞
侯立民
谢建
黄克
田泽康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yst Technology Co Ltd
Original Assignee
Yst Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yst Technology Co Ltd filed Critical Yst Technology Co Ltd
Priority to CN201510469130.4A priority Critical patent/CN104992171A/en
Publication of CN104992171A publication Critical patent/CN104992171A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for gesture recognition and man-machine interaction based on a 2D video sequence. The method comprises the steps of: S1, acquiring a monocular 2D video frame sequence image and extracting the moving foreground in the image; S2, detecting a human hand in the moving foreground and constructing a joint feature model of the human hand; S3, predicting the location area where the human hand target will appear, searching for and locating the human hand target in that area using the joint feature model, and obtaining the location of the human hand in the current frame; S4, judging the current operating mode according to the location of the human hand in the current frame; and S5, tracking the human hand, recognizing its posture and gesture in the current operating mode, and converting the posture and gesture into corresponding instructions so as to realize man-machine interaction. The disclosed man-machine interaction method can not only select the target hand against a complicated background, but also track the human hand with high accuracy and high stability.

Description

Gesture recognition and man-machine interaction method and system based on a 2D video sequence
Technical field
The present invention relates to a gesture recognition and man-machine interaction method and system based on a 2D video sequence, and belongs to the field of human-computer interaction technology.
Background art
Somatosensory control based on gesture recognition has become an important means of man-machine interaction. A common camera captures images of the user's motion; a pattern recognition algorithm detects and locates the hand features in the images; the posture and movement trajectory of the human hand are then recognized, converted into operation signals, and fed back to an intelligent terminal to trigger the corresponding operation commands, such as switching TV programs, adjusting the volume, zooming a picture or a web page in and out, or controlling simple motion-sensing games such as fruit-cutting, ball and driving games. This gesture recognition technology needs only the camera already fitted to the intelligent terminal, plus the corresponding recognition software installed on the terminal, so it holds great advantages in both hardware cost and mode of operation and can be used to control consumer electronic devices such as TVs, PCs, tablet computers and smartphones.
According to the evolution of gesture recognition research and applications, existing techniques can be roughly divided into the following categories:
(1) Based on data gloves or accessories: the user wears special gloves or markers that are recognized by the camera. The gloves are specially designed with distinctive features, which reduces the complexity of the detection and recognition algorithms; however, this wearable mode of operation clearly cannot meet the needs of natural man-machine interaction, so the method has never been widely adopted;
(2) Based on a 3D depth camera: the representative technology is Microsoft's KINECT product, which uses a three-dimensional scanning device to obtain a dynamic 3D model of the operator. Because it operates in 3D space, it avoids the many difficult problems of color interference, image segmentation and the like that exist in 2D space. However, 3D scanning devices are relatively bulky, their hardware cost is high, and the required computing power is high, so they are difficult to integrate into popular intelligent terminals such as TVs and mobile phones;
(3) Based on 2D image recognition with a common camera: because this technology is implemented with an ordinary camera, it also has the greatest potential for large-scale application. The applicant's earlier application No. 201310481745.X discloses "a target person gesture interaction method based on a monocular video sequence", which recognizes static hand postures or single-hand gestures and can therefore run on embedded platforms with low computing power for man-machine interaction. That application, however, still has the following shortcomings: a) owing to the lack of depth information, extracting the human hand in a complex environment is difficult; b) a common 2D camera is very sensitive to light, so tracking a non-rigid, low-texture target such as the human hand with high precision in a complex environment is a great challenge; c) differences in noise and distance, together with each person's individual habits, also affect the recognition of the various hand postures and gestures; d) recognition of two-hand postures and gestures cannot be realized; e) for two-hand recognition, handling problems such as the crossing of the two hands is also difficult. Improvements by the inventors were therefore still needed.
Summary of the invention
The object of the present invention is to provide a gesture recognition and man-machine interaction method and system based on a 2D video sequence, which effectively solves the problems of the prior art, especially that a common 2D camera is very sensitive to light and that tracking a non-rigid, low-texture target such as the human hand with high precision is a great challenge.
To solve the above technical problems, the present invention adopts the following technical solution: a gesture recognition and man-machine interaction method based on a 2D video sequence, comprising the following steps:
S1, obtaining a monocular 2D video frame sequence image, and extracting the moving foreground in the image (stationary objects can thereby be rejected and the regions where the hand may appear preliminarily selected, reducing the computation needed to locate the hand);
S2, detecting the human hand in the moving foreground, and building a joint feature model of the hand;
S3, predicting the location area where the hand target will appear, searching for and locating the hand target in that area using the joint feature model, and obtaining the position of the hand in the current frame;
S4, judging the current operating mode according to the position of the hand in the current frame;
S5, tracking the hand and recognizing its posture and gesture in the current operating mode; converting the posture and gesture into corresponding instructions to realize man-machine interaction.
The joint feature model of step S2 can be updated by direct fusion, by a multi-sample library, by online learning, or the like. Updating by direct fusion is preferred, with a weight function that is a linear function of the model-matching similarity, so that rapid short-term changes of the target model are reflected in time and the fast movement of the hand can be matched in real time.
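As an illustration, the following is a minimal sketch of the direct-fusion update, assuming the model is a feature histogram and taking the update weight as a linear function of the matching similarity (the slope alpha is an assumption, not fixed by the patent):

```python
# Direct-fusion model update: blend the stored histogram toward the newly
# observed one, weighted linearly by the current matching similarity.
import numpy as np

def update_model(model_hist, observed_hist, similarity, alpha=0.5):
    w = alpha * similarity                 # linear weight in the similarity
    updated = (1.0 - w) * model_hist + w * observed_hist
    return updated / updated.sum()         # keep it a normalized histogram
```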
In step S1 of the present invention, the moving foreground in the image is extracted by a GMM (Gaussian mixture model) motion detection algorithm, which makes the foreground extraction more efficient and more stable; at the same time, the model update strategy of the motion detection block adopts locally adaptive adjustment of the update rate.
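For illustration, this extraction maps naturally onto OpenCV's MOG2 background subtractor, which implements a per-pixel GMM. The sketch below uses a single global learning rate, whereas the invention adjusts the update rate locally, and all parameter values are placeholders:

```python
# Minimal sketch of step S1: GMM-based moving-foreground extraction (OpenCV MOG2).
# The invention's locally adaptive update rate is not modeled here; a single
# global learningRate (-1 = automatic) stands in for it.
import cv2

bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                              detectShadows=False)
cap = cv2.VideoCapture(0)                      # monocular 2D camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_model.apply(frame, learningRate=-1)
    # Remove small noise blobs so only plausible hand regions remain
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    cv2.imshow("moving foreground", fg_mask)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break
cap.release()
```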
In step S2 of the present invention, the human hand is detected in the moving foreground by the joint feature of Haar and LBP, with Adaboost as the classifier. Detection with this joint feature improves the detection rate while keeping the computation fast enough for real-time requirements, making it suitable for porting to embedded systems.
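For illustration only: OpenCV's Adaboost cascades are trained on either Haar or LBP features (one feature type per cascade), so the joint Haar+LBP detection is approximated below by intersecting the outputs of two cascades. The cascade file names are hypothetical placeholders, not shipped models:

```python
# Sketch of step S2's detection stage: run a Haar cascade and an LBP cascade
# on the moving foreground and keep detections that both agree on.
import cv2

haar_hand = cv2.CascadeClassifier("haar_hand.xml")   # hypothetical model file
lbp_hand = cv2.CascadeClassifier("lbp_hand.xml")     # hypothetical model file

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def detect_hands(gray, fg_mask):
    """Detect hands only inside the moving foreground."""
    masked = cv2.bitwise_and(gray, gray, mask=fg_mask)
    haar_hits = haar_hand.detectMultiScale(masked, scaleFactor=1.1, minNeighbors=4)
    lbp_hits = lbp_hand.detectMultiScale(masked, scaleFactor=1.1, minNeighbors=4)
    return [h for h in haar_hits if any(iou(h, l) > 0.5 for l in lbp_hits)]
```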
In step S2 of the present invention, the joint feature model is formed by fusing any two or more of the color, shape, texture, structure and gradient feature models. Compared with a single color model, this makes the recognition and tracking of the hand more reliable, more stable and more precise, and overcomes the interference caused by complex environments and by the deformation produced by the hand's rapid movement.
Preferably, in step S2, the joint feature model is formed by fusing the color and texture feature models through a kernel function, which increases the weight of the center and reduces the weight of the edge of the target region, reducing background interference and improving the stability and accuracy of tracking.
In addition, the present invention accelerates the algorithm in hardware through NEON, OpenMP and multithreading optimization, so that tracking and detection with the joint feature model are not only more reliable, more stable and more precise, but also much faster, running very smoothly on mobile platforms.
The kernel function may be a Gaussian kernel, a polynomial kernel, a radial basis kernel, or the like; a Gaussian kernel is especially preferred for fusing the color and texture features, as it further reduces background interference and improves the stability and accuracy of tracking.
Specifically, fusing the color and texture features with a Gaussian kernel comprises the following steps (a code sketch of this fusion follows the formula below):
(1) extracting the kernel-weighted color histogram feature model and LBP texture histogram feature model;
(2) searching for the hand target with the color histogram feature model and the LBP texture histogram feature model separately, obtaining two search results;
(3) fusing the two search results linearly, with the similarities as weights (the more similar result receives the larger weight), to obtain the joint feature model (i.e. the joint feature histogram of color histogram + LBP texture histogram). Specifically, the fused result is:
$$\mathrm{Rect}_{result} = \frac{colorSim}{colorSim + lbpSim}\,\mathrm{Rect}_{color} + \frac{lbpSim}{colorSim + lbpSim}\,\mathrm{Rect}_{lbp}$$
where Rect_result is the final fused result; colorSim is the similarity (0–1) of the color tracking; lbpSim is the similarity (0–1) of the texture tracking; Rect_color is the result of the color tracking; and Rect_lbp is the result of the texture tracking.
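The fusion formula transcribes directly into code. The sketch below also shows one plausible Gaussian kernel weighting for the histograms of step (1); the sigma value and the normalized-radius form of the kernel are assumptions, since the patent does not fix them:

```python
# Sketch of the similarity-weighted fusion of the two single-feature results.
import numpy as np

def gaussian_kernel_weights(h, w, sigma=0.5):
    """Gaussian weights emphasizing the patch center over its edges (step (1))."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) / (h / 2.0)) ** 2 + ((xs - cx) / (w / 2.0)) ** 2
    return np.exp(-r2 / (2.0 * sigma ** 2))

def fuse_results(rect_color, color_sim, rect_lbp, lbp_sim):
    """rect_* are (x, y, w, h); *_sim are similarities in [0, 1] (step (3))."""
    total = color_sim + lbp_sim
    if total == 0:                 # neither feature matched at all
        return None
    return (color_sim / total) * np.asarray(rect_color, float) + \
           (lbp_sim / total) * np.asarray(rect_lbp, float)

# Example: color tracking is more confident, so its rectangle dominates.
print(fuse_results((100, 80, 40, 40), 0.9, (110, 90, 40, 40), 0.3))
```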
In step S3, a mean-shift search is used to find the target hand; compared with exhaustive per-pixel search or particle swarm search, it is faster overall and better suited to the real-time application of the present invention.
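A minimal mean-shift step is sketched below, assuming the color histogram of the hand from step S2 (here over the HSV hue channel only); extending the back-projection to the joint color+LBP histogram follows the same pattern:

```python
# Sketch of one mean-shift search iteration over a back-projection image.
import cv2

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

def meanshift_step(frame_bgr, roi_hist, track_window):
    """roi_hist: hue histogram of the hand; track_window: (x, y, w, h)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, tuple(track_window), term_crit)
    return track_window
```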
In the aforesaid gesture recognition and man-machine interaction method based on a 2D video sequence, step S3 specifically comprises: predicting, by trajectory analysis of the target hand, the position of the hand in the current frame, and determining the region where the target may exist centered on that position; traversing all rectangle frames of the target hand's size in this region, extracting the joint feature of each rectangle frame, matching it against the samples in the sample library to obtain the position of the hand in the current frame, and updating the model sample library with the hand features and trajectory.
The trajectory analysis of the target hand in step S3, i.e. predicting the position of the hand in the current frame and determining the region where the target may exist centered on that position, specifically comprises: assuming the target moves uniformly over a short time, computing the average velocity and direction of its motion from the motion information of the previous 3 frames; predicting from the computed average velocity and direction the position where the target may appear in the next frame; and determining the search region centered on that predicted position, with its size set according to the target's current average speed, then accurately tracking and locating the actual position of the hand within this search region.
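A sketch of this constant-velocity prediction follows, assuming only the hand centers of the last three frames are available; the speed-dependent widening of the search window is one plausible reading of "determined according to the current average movement velocity":

```python
# Predict the next hand position from the last 3 frames and derive a search region.
import numpy as np

def predict_search_region(last_positions, base_half_size):
    """last_positions: the last 3 hand centers [(x, y), (x, y), (x, y)]."""
    p = np.asarray(last_positions, float)
    v = (p[1:] - p[:-1]).mean(axis=0)           # mean per-frame velocity
    center = p[-1] + v                          # predicted position in the next frame
    half = base_half_size + np.linalg.norm(v)   # faster hand => wider region
    (x0, y0), (x1, y1) = center - half, center + half
    return int(x0), int(y0), int(x1), int(y1)

print(predict_search_region([(100, 100), (110, 102), (121, 104)], base_half_size=40))
```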
In addition, matching against the samples in the sample library in step S3 means matching the extracted joint feature model (i.e. the joint feature histogram of LBP + color) against the histograms in the sample library.
Preferably, when traversing all rectangle frames of the target's size in this region, if a rectangle frame contains moving-foreground pixels, the joint feature in that frame is extracted and matched against the samples in the sample library, the position of the hand in the current frame is obtained, and the model sample library is updated; otherwise, the next rectangle frame is examined.
Step S4 of the present invention specifically comprises: detecting other parts of the human body and, according to the position of the hand in the current frame, distinguishing the left and right hands and predicting the location area where the other hand will appear; searching for and locating the other hand in this area using the joint feature model; and determining from the search and positioning results whether the current operation is in two-hand mode or single-hand mode.
Specifically, a face can first be detected with a face detection method, and the positional relationship between the current hand and the face judged: if the current hand is on the left side of the face it is the left hand, and if on the right side of the face it is the right hand. Once the current hand is determined to be the left or the right hand, the location area where the other hand will appear can be predicted, which is convenient and fast.
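A sketch of this left/right rule using OpenCV's stock frontal-face Haar cascade; the rule follows the patent's convention (a hand left of the face in the image is the left hand), and the largest detected face is assumed to belong to the operator:

```python
# Classify a tracked hand as left or right by its position relative to the face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_hand(gray, hand_rect):
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    if len(faces) == 0:
        return "unknown"
    fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])  # operator = largest face
    face_cx = fx + fw / 2.0
    hand_cx = hand_rect[0] + hand_rect[2] / 2.0
    return "left" if hand_cx < face_cx else "right"         # patent's convention
```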
Further, the present invention recognizes and handles the crossing of the two hands by the following method (a voting sketch follows this list):
(1) tracking each hand separately and judging from the resulting movement trajectory whether the current hand is the left or the right hand;
(2) building a joint feature model for each hand and comparing it with the currently collected joint feature model to judge whether the current hand is the left or the right hand;
(3) judging whether the current hand is the left or the right hand from the positional relationship between the current hand and the face;
(4) obtaining the final judgment from the results of steps (1), (2) and (3).
Recognizing and handling the crossing of the two hands in this way makes the system's operation more stable and reliable while saving a large amount of computation time.
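The patent does not specify how the three cues are combined; a simple majority vote is one plausible reading, sketched below with hypothetical cue labels:

```python
# Combine the three left/right cues (trajectory, appearance model, face side).
from collections import Counter

def resolve_crossing(cue_trajectory, cue_appearance, cue_face_side):
    votes = Counter([cue_trajectory, cue_appearance, cue_face_side])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else "ambiguous"

print(resolve_crossing("left", "left", "right"))   # -> "left"
```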
Step S5 comprises: according to the current operating mode, computing the amount of hand movement in the image and mapping it nonlinearly to the amount of mouse or keyboard movement at the current display resolution (that is, the computed movement coordinates of the hand are mapped nonlinearly onto the coordinate position of the display screen), realizing control of the mouse or keyboard.
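The patent specifies only that the mapping is nonlinear and that (as noted later) the mapping coefficient adapts to the apparent hand size, i.e. the operating distance. The sketch below is one plausible form: a power-law curve under which small motions move the cursor proportionally less than large ones, giving fine control near rest; the gamma exponent, reference hand width and gain are illustrative assumptions:

```python
# One plausible nonlinear hand-to-cursor mapping for step S5.
import math

def map_motion(dx_img, dy_img, hand_width_px,
               screen_w=1920, img_w=640, gamma=1.5, ref_hand_px=80.0):
    # Larger apparent hand (closer user) => smaller gain, and vice versa
    gain = (screen_w / img_w) * (ref_hand_px / max(hand_width_px, 1))
    def f(d):
        return math.copysign(abs(d) ** gamma, d) * gain
    return f(dx_img), f(dy_img)

print(map_motion(2, 0, hand_width_px=80))    # small motion: fine cursor step
print(map_motion(20, 0, hand_width_px=80))   # large motion: much larger step
```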
In the above method, step S5 specifically comprises: tracking the hands, obtaining the position and trajectory of each hand, and performing posture and gesture recognition. For posture recognition, the posture of the current hand is detected within the minimum enclosing rectangle of the palm, centered on the hand position, and matched against the postures in the sample library; if it matches, the command corresponding to that posture is output, realizing man-machine interaction. For gesture recognition, the position of each hand is buffered over multiple frames and resampled; the resampled trajectory is matched against the standard gesture models, and if the similarity exceeds a set threshold, the command corresponding to that gesture is output, realizing man-machine interaction.
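The trajectory matching step can be illustrated with a simplified, $1-recognizer-style matcher: resample the buffered track to a fixed number of points, normalize for translation and scale, and score by mean point-to-point distance. The patent does not fix the exact resampling count or similarity metric, so these are assumptions:

```python
# Sketch of gesture-trajectory resampling and template matching for step S5.
import numpy as np

def resample(points, n=32):
    """Resample a polyline to n points, equally spaced by arc length."""
    p = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    ts = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(ts, t, p[:, 0]), np.interp(ts, t, p[:, 1])])

def normalize(p):
    """Remove translation and scale so only the shape of the track matters."""
    p = p - p.mean(axis=0)
    return p / (np.abs(p).max() or 1.0)

def similarity(track, template, n=32):
    a, b = normalize(resample(track, n)), normalize(resample(template, n))
    return 1.0 / (1.0 + np.linalg.norm(a - b, axis=1).mean())

# e.g. if similarity(buffered_track, swipe_template) > threshold: emit the command
```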
A gesture recognition and man-machine interaction system based on a 2D video sequence comprises:
a moving-foreground extraction module: for obtaining a monocular 2D video frame sequence image and extracting the moving foreground in the image;
a hand selection module: for detecting the human hand in the moving foreground;
a target hand modeling module: for building the joint feature model of the hand;
a target hand tracking module: for predicting the location area where the hand target will appear, searching for and locating the hand target in that area using the joint feature model, and obtaining the position of the hand in the current frame;
an operating mode identification module: for judging the current operating mode according to the position of the hand in the current frame;
a posture and gesture recognition module: for tracking the hand and recognizing its posture and gesture in the current operating mode;
a human-computer interaction module: for converting the posture and gesture into corresponding instructions to realize man-machine interaction.
Preferably, the target hand tracking module further comprises:
a location area prediction module: for predicting, by trajectory analysis of the target hand, the position of the hand in the current frame, and determining the region where the target may exist centered on that position;
a hand position determination module: for traversing all rectangle frames of the target hand's size in this region, extracting the joint feature of each rectangle frame, and matching it against the samples in the sample library to obtain the position of the hand in the current frame;
a model sample library update module: for updating the model sample library.
In the present invention, the hand position determination module further comprises:
a moving-foreground pixel detection module: for detecting whether a rectangle frame contains moving-foreground pixels.
In the above system, the operating mode identification module further comprises:
a left/right-hand discrimination module: for detecting other parts of the human body and, according to the position of the hand in the current frame, distinguishing the left and right hands;
a single/two-hand identification module: for predicting the location area where the other hand will appear, searching for and locating the other hand in this area using the joint feature model, and determining from the search and positioning results whether the current operation is in two-hand mode or single-hand mode.
In the aforesaid gesture recognition and man-machine interaction system based on a 2D video sequence, the human-computer interaction module further comprises:
a mouse or keyboard mapping module: for computing, according to the current operating mode, the amount of hand movement in the image and mapping it nonlinearly to the amount of mouse or keyboard movement at the current display resolution, realizing control of the mouse or keyboard.
In the present invention, the posture and gesture recognition module further comprises:
a posture matching module: for, during posture recognition, detecting the posture of the current hand within the minimum enclosing rectangle of the palm, centered on the hand position, and matching it against the postures in the sample library;
a gesture matching module: for, during gesture recognition, buffering the position of each hand over multiple frames, resampling it, and matching the resampled trajectory against the standard gesture models.
Compared with the prior art, the present invention extracts the moving foreground in the image, detects the human hand in the moving foreground, builds a joint feature model of the hand, searches for and locates the hand target with the joint feature model to obtain the position of the hand in the current frame, then recognizes the posture and gesture of the hand and converts them into corresponding instructions to realize man-machine interaction. It not only selects the target hand against a complex background, but also tracks the hand with high precision and high stability. In addition, the present invention can detect and track one or both hands, accurately distinguish the left and right hands, and thus use more gestures and postures for man-machine interaction. By fusing the color, shape, texture, structure and gradient features with a kernel function, the weight of the center is increased and the weight of the edge reduced, further lessening background interference and further improving the stability and accuracy of tracking. The hand-simulated mouse mode of the present invention can also easily be extended to a remote-control mode, a touch-screen mode and the like. The present invention updates the background model with a GMM update scheme and applies locally adaptive update rates in the model update strategy of the motion detection block, so a fast-moving target such as the hand can be extracted more accurately.
In addition, the present invention adopts the idea of combining joint-feature tracking with detection, so the hand is segmented more accurately and the interference of the various postures with hand and gesture recognition is reduced, making the recognition and tracking of the hand more reliable, more stable and more precise. At the same time, parallel optimization of the algorithm logic and hardware acceleration (NEON, OpenMP, multithreading optimization) keep the computation within real-time requirements, so the system also runs very smoothly on mobile platforms. Moreover, the combination of motion detection with locally adaptive update rates, joint-feature tracking (the joint feature model fusing the color and texture models), joint-feature detection (the Haar + LBP joint feature of claim 2) and two-hand recognition ensures that the system runs stably, reliably and fast on embedded systems. The joint-feature detector is inherently robust to a certain amount of illumination change and noise, and the joint feature model used by the tracker likewise resists illumination change and deformation; together with timely learning updates of the tracking model and with motion and trajectory analysis, this enhances the robustness of tracking a non-rigid, low-texture target and improves tracking accuracy. Finally, for operation at different distances, the present invention adaptively adjusts the mouse mapping coefficient according to the size of the hand, reducing the influence of distance on operation.
Brief description of the drawings
Fig. 1 is a flowchart of the method of one embodiment of the present invention;
Fig. 2 is a flowchart of hand detection;
Fig. 3 is a flowchart of the construction of the joint feature model of the hand;
Fig. 4 is a flowchart of determining the position of the hand in the image;
Fig. 5 is a flowchart of the start-up of the gesture system;
Fig. 6 is a flowchart of single/two-hand mode identification;
Fig. 7 is a flowchart of man-machine interaction using hand postures and gestures;
Fig. 8 is a flowchart of controlling mouse movement with the hand.
The present invention is further illustrated below in conjunction with the drawings and specific embodiments.
Embodiment
Embodiment of the invention: a gesture recognition and man-machine interaction method based on a 2D video sequence, as shown in Fig. 1, comprising the following steps:
S1, obtaining a monocular 2D video frame sequence image, and extracting the moving foreground in the image by a GMM motion detection algorithm, so that stationary objects are rejected and the regions where the hand may appear are preliminarily selected, reducing the computation needed to locate the hand;
S2, detecting the human hand in the moving foreground by the joint feature of Haar and LBP (with Adaboost as the classifier; if a hand is detected, the motion model is updated everywhere except the hand region and the position of the hand is output; if no hand is detected, the whole motion model is updated), and building the joint feature model of the hand (as shown in Fig. 3). The joint feature model is formed by fusing any two or more of the color, shape, texture, structure and gradient feature models; in particular, a joint feature model formed by fusing the color and texture feature models through a kernel function increases the weight of the center and reduces the weight of the edge, reducing background interference and improving the stability and accuracy of tracking. The kernel function may be a Gaussian kernel, a polynomial kernel, a radial basis kernel, or the like; a Gaussian kernel is especially preferred for fusing the color and texture features, further reducing background interference and improving the stability and accuracy of tracking. Specifically, fusing the color and texture features with a Gaussian kernel comprises the following steps:
(1) extracting the kernel-weighted color histogram feature model and LBP texture histogram feature model;
(2) searching for the hand target with the color histogram feature model and the LBP texture histogram feature model separately, obtaining two search results;
(3) fusing the two search results linearly, with the similarities as weights (the more similar result receives the larger weight), to obtain the joint feature model (i.e. the joint feature histogram of color histogram + LBP texture histogram). Specifically, the fused result is:
$$\mathrm{Rect}_{result} = \frac{colorSim}{colorSim + lbpSim}\,\mathrm{Rect}_{color} + \frac{lbpSim}{colorSim + lbpSim}\,\mathrm{Rect}_{lbp}$$
where Rect_result is the final fused result; colorSim is the similarity (0–1) of the color tracking; lbpSim is the similarity (0–1) of the texture tracking; Rect_color is the result of the color tracking; and Rect_lbp is the result of the texture tracking;
The joint feature model can be updated by direct fusion, by a multi-sample library, by online learning, or the like; updating by direct fusion is preferred, with a weight function that is a linear function of the model-matching similarity, so that rapid short-term changes of the target model are reflected in time and the fast movement of the hand can be matched in real time;
S3, predicting the location area where the hand target will appear, and searching for and locating the hand target in that area with the joint feature model (a mean-shift search may be used, or per-pixel or particle swarm search; the mean-shift search is faster and better suited to the real-time application of the present invention), obtaining the position of the hand in the current frame. This specifically comprises the following steps (as shown in Figs. 2 and 4): predicting, by trajectory analysis of the target hand, the position of the hand in the current frame, and determining the region where the target may exist centered on that position; traversing all rectangle frames of the target hand's size in this region, extracting the joint feature of each rectangle frame, matching it against the samples in the sample library to obtain the position of the hand in the current frame, and updating the model sample library with the hand features and trajectory. When traversing the rectangle frames, if a rectangle frame contains moving-foreground pixels, the joint feature in that frame is extracted and matched against the samples in the sample library, the position of the hand in the current frame is obtained, and the model sample library is updated; otherwise, the next rectangle frame is examined. The hand positions obtained by step S3 over multiple frames are buffered and resampled, and the resampled trajectory is matched against the standard gesture models; if the similarity is large enough, the gesture system starts, otherwise it does not (as shown in Fig. 5), reducing the probability of a false start. The trajectory analysis of the target hand, i.e. predicting the position of the hand in the current frame and determining the region where the target may exist centered on that position, specifically comprises: assuming the target moves uniformly over a short time, computing the average velocity and direction of its motion from the motion information of the previous 3 frames; predicting from the computed average velocity and direction the position where the target may appear in the next frame; and determining the search region centered on that predicted position, with its size set according to the target's current average speed, then accurately tracking and locating the actual position of the hand within this search region;
S4, judging the current operating mode according to the position of the hand in the current frame; this specifically comprises (as shown in Fig. 6): detecting other parts of the human body and, according to the position of the hand in the current frame, distinguishing the left and right hands and predicting the location area where the other hand will appear; searching for and locating the other hand in this area with the joint feature model (for example, a face can first be detected with a face detection method, and the positional relationship between the current hand and the face judged: if the current hand is on the left side of the face it is the left hand, and if on the right side of the face it is the right hand; once the current hand is determined to be the left or the right hand, the location area where the other hand will appear can be predicted, which is convenient and fast); and determining from the search and positioning results whether the current operation is in two-hand mode or single-hand mode;
S5, tracking the hands and recognizing their postures and gestures in the current operating mode (as shown in Fig. 7); converting the postures and gestures into corresponding instructions to realize man-machine interaction. This specifically comprises: tracking the hands, obtaining the position and trajectory of each hand, and performing posture and gesture recognition. For posture recognition, the posture of the current hand is detected within the minimum enclosing rectangle of the palm, centered on the hand position, and matched against the postures in the sample library; if it matches, the command corresponding to that posture is output (for example, a clenched fist represents a "down" command, an open palm an "up" command, and a thumbs-up a "like" command; the user can also define the postures they need), realizing man-machine interaction. For gesture recognition, the position of each hand is buffered over multiple frames and resampled; the resampled trajectory is matched against the standard gesture models, and if the similarity exceeds a set threshold, the command corresponding to that gesture is output (for example, one hand swept obliquely is a "back" command, one hand waved up or down is an "up"/"down" command, both hands moving outward is a "zoom in" command, and both hands moving inward is a "zoom out" command; the user can also define their own gestures), realizing man-machine interaction. The crossing of the two hands is recognized and handled by the following method:
(1) tracking each hand separately and judging from the resulting movement trajectory whether the current hand is the left or the right hand;
(2) building a joint feature model for each hand and comparing it with the currently collected joint feature model to judge whether the current hand is the left or the right hand;
(3) judging whether the current hand is the left or the right hand from the positional relationship between the current hand and the face;
(4) obtaining the final judgment from the results of steps (1), (2) and (3).
Specifically, according to the current operating mode, the amount of hand movement in the image is computed and mapped nonlinearly to the amount of mouse or keyboard movement at the current display resolution (that is, the computed movement coordinates of the hand are mapped nonlinearly onto the coordinate position of the display screen), realizing control of the mouse or keyboard.
The present invention accelerates the algorithm in hardware through NEON, OpenMP and multithreading optimization, so that tracking and detection with the joint feature model are not only more reliable, more stable and more precise, but also much faster, running very smoothly on mobile platforms.
A gesture recognition and man-machine interaction system based on a 2D video sequence comprises:
a moving-foreground extraction module: for obtaining a monocular 2D video frame sequence image and extracting the moving foreground in the image;
a hand selection module: for detecting the human hand in the moving foreground;
a target hand modeling module: for building the joint feature model of the hand;
a target hand tracking module: for predicting the location area where the hand target will appear, searching for and locating the hand target in that area using the joint feature model, and obtaining the position of the hand in the current frame;
an operating mode identification module: for judging the current operating mode according to the position of the hand in the current frame;
a posture and gesture recognition module: for tracking the hand and recognizing its posture and gesture in the current operating mode;
a human-computer interaction module: for converting the posture and gesture into corresponding instructions to realize man-machine interaction.
The target hand tracking module further comprises:
a location area prediction module: for predicting, by trajectory analysis of the target hand, the position of the hand in the current frame, and determining the region where the target may exist centered on that position;
a hand position determination module: for traversing all rectangle frames of the target hand's size in this region, extracting the joint feature of each rectangle frame, and matching it against the samples in the sample library to obtain the position of the hand in the current frame;
a model sample library update module: for updating the model sample library.
The hand position determination module further comprises:
a moving-foreground pixel detection module: for detecting whether a rectangle frame contains moving-foreground pixels.
The operating mode identification module further comprises:
a left/right-hand discrimination module: for detecting other parts of the human body and, according to the position of the hand in the current frame, distinguishing the left and right hands;
a single/two-hand identification module: for predicting the location area where the other hand will appear, searching for and locating the other hand in this area using the joint feature model, and determining from the search and positioning results whether the current operation is in two-hand mode or single-hand mode.
The human-computer interaction module further comprises:
a mouse or keyboard mapping module: for computing, according to the current operating mode, the amount of hand movement in the image and mapping it nonlinearly to the amount of mouse or keyboard movement at the current display resolution, realizing control of the mouse or keyboard.
The posture and gesture recognition module further comprises:
a posture matching module: for, during posture recognition, detecting the posture of the current hand within the minimum enclosing rectangle of the palm, centered on the hand position, and matching it against the postures in the sample library;
a gesture matching module: for, during gesture recognition, buffering the position of each hand over multiple frames, resampling it, and matching the resampled trajectory against the standard gesture models.

Claims (10)

1. A gesture recognition and man-machine interaction method based on a 2D video sequence, characterized in that it comprises the following steps:
S1, obtaining a monocular 2D video frame sequence image, and extracting the moving foreground in the image;
S2, detecting the human hand in the moving foreground, and building a joint feature model of the hand;
S3, predicting the location area where the hand target will appear, searching for and locating the hand target in that area using the joint feature model, and obtaining the position of the hand in the current frame;
S4, judging the current operating mode according to the position of the hand in the current frame;
S5, tracking the hand and recognizing its posture and gesture in the current operating mode; converting the posture and gesture into corresponding instructions to realize man-machine interaction.
2. The gesture recognition and man-machine interaction method based on a 2D video sequence according to claim 1, characterized in that, in step S2, the human hand is detected in the moving foreground by the joint feature of Haar and LBP.
3. The gesture recognition and man-machine interaction method based on a 2D video sequence according to claim 1, characterized in that the joint feature model is formed by fusing any two or more of the color, shape, texture, structure and gradient feature models; preferably, the joint feature model is formed by fusing the color and texture feature models through a kernel function.
4. The gesture recognition and man-machine interaction method based on a 2D video sequence according to any one of claims 1 to 3, characterized in that step S3 specifically comprises the following steps: predicting, by trajectory analysis of the target hand, the position of the hand in the current frame, and determining the region where the target may exist centered on that position; traversing all rectangle frames of the target hand's size in this region, extracting the joint feature of each rectangle frame, matching it against the samples in the sample library to obtain the position of the hand in the current frame, and updating the model sample library.
5. The gesture recognition and man-machine interaction method based on a 2D video sequence according to any one of claims 1 to 3, characterized in that step S4 specifically comprises: detecting other parts of the human body and, according to the position of the hand in the current frame, distinguishing the left and right hands and predicting the location area where the other hand will appear; searching for and locating the other hand in this area using the joint feature model; and determining from the search and positioning results whether the current operation is in two-hand mode or single-hand mode; and step S5 comprises: according to the current operating mode, computing the amount of hand movement in the image and mapping it nonlinearly to the amount of mouse or keyboard movement at the current display resolution, realizing control of the mouse or keyboard.
6. The gesture recognition and man-machine interaction method based on a 2D video sequence according to claim 5, characterized in that step S5 specifically comprises: tracking the hands, obtaining the position and trajectory of each hand, and performing posture and gesture recognition; wherein, for posture recognition, the posture of the current hand is detected within the minimum enclosing rectangle of the palm, centered on the hand position, and matched against the postures in the sample library; if it matches, the command corresponding to that posture is output, realizing man-machine interaction; and for gesture recognition, the position of each hand is buffered over multiple frames and resampled; the resampled trajectory is matched against the standard gesture models, and if the similarity exceeds a set threshold, the command corresponding to that gesture is output, realizing man-machine interaction.
7. A gesture recognition and man-machine interaction system based on a 2D video sequence, characterized in that it comprises:
a moving-foreground extraction module: for obtaining a monocular 2D video frame sequence image and extracting the moving foreground in the image;
a hand selection module: for detecting the human hand in the moving foreground;
a target hand modeling module: for building the joint feature model of the hand;
a target hand tracking module: for predicting the location area where the hand target will appear, searching for and locating the hand target in that area using the joint feature model, and obtaining the position of the hand in the current frame;
an operating mode identification module: for judging the current operating mode according to the position of the hand in the current frame;
a posture and gesture recognition module: for tracking the hand and recognizing its posture and gesture in the current operating mode;
a human-computer interaction module: for converting the posture and gesture into corresponding instructions to realize man-machine interaction.
8. The gesture recognition and man-machine interaction system based on a 2D video sequence according to claim 7, characterized in that the target hand tracking module further comprises:
a location area prediction module: for predicting, by trajectory analysis of the target hand, the position of the hand in the current frame, and determining the region where the target may exist centered on that position;
a hand position determination module: for traversing all rectangle frames of the target hand's size in this region, extracting the joint feature of each rectangle frame, and matching it against the samples in the sample library to obtain the position of the hand in the current frame;
a model sample library update module: for updating the model sample library.
9. The gesture recognition and man-machine interaction system based on a 2D video sequence according to claim 7 or 8, characterized in that the operating mode identification module further comprises:
a left/right-hand discrimination module: for detecting other parts of the human body and, according to the position of the hand in the current frame, distinguishing the left and right hands;
a single/two-hand identification module: for predicting the location area where the other hand will appear, searching for and locating the other hand in this area using the joint feature model, and determining from the search and positioning results whether the current operation is in two-hand mode or single-hand mode;
and the human-computer interaction module further comprises:
a mouse or keyboard mapping module: for computing, according to the current operating mode, the amount of hand movement in the image and mapping it nonlinearly to the amount of mouse or keyboard movement at the current display resolution, realizing control of the mouse or keyboard.
10. The gesture recognition and man-machine interaction system based on a 2D video sequence according to claim 9, characterized in that the posture and gesture recognition module further comprises:
a posture matching module: for, during posture recognition, detecting the posture of the current hand within the minimum enclosing rectangle of the palm, centered on the hand position, and matching it against the postures in the sample library;
a gesture matching module: for, during gesture recognition, buffering the position of each hand over multiple frames, resampling it, and matching the resampled trajectory against the standard gesture models.
CN201510469130.4A 2015-08-04 2015-08-04 Method and system for gesture recognition and man-machine interaction based on 2D video sequence Pending CN104992171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510469130.4A CN104992171A (en) 2015-08-04 2015-08-04 Method and system for gesture recognition and man-machine interaction based on 2D video sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510469130.4A CN104992171A (en) 2015-08-04 2015-08-04 Method and system for gesture recognition and man-machine interaction based on 2D video sequence

Publications (1)

Publication Number Publication Date
CN104992171A true CN104992171A (en) 2015-10-21

Family

ID=54303984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510469130.4A Pending CN104992171A (en) 2015-08-04 2015-08-04 Method and system for gesture recognition and man-machine interaction based on 2D video sequence

Country Status (1)

Country Link
CN (1) CN104992171A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354551A (en) * 2015-11-03 2016-02-24 北京英梅吉科技有限公司 Gesture recognition method based on monocular camera
CN106022211A (en) * 2016-05-04 2016-10-12 北京航空航天大学 Method using gestures to control multimedia device
CN106558070A (en) * 2016-11-11 2017-04-05 华南智能机器人创新研究院 A kind of method and system of the visual tracking under the robot based on Delta
CN106780566A (en) * 2016-11-11 2017-05-31 华南智能机器人创新研究院 A kind of method and system of the target following under the robot based on Delta
CN107479715A (en) * 2017-09-29 2017-12-15 广州云友网络科技有限公司 The method and apparatus that virtual reality interaction is realized using gesture control
CN107885317A (en) * 2016-09-29 2018-04-06 阿里巴巴集团控股有限公司 A kind of exchange method and device based on gesture
CN107885324A (en) * 2017-09-28 2018-04-06 江南大学 A kind of man-machine interaction method based on convolutional neural networks
CN108052927A (en) * 2017-12-29 2018-05-18 北京奇虎科技有限公司 Gesture processing method and processing device based on video data, computing device
CN108096833A (en) * 2017-12-20 2018-06-01 北京奇虎科技有限公司 Somatic sensation television game control method and device based on cascade neural network, computing device
CN108108709A (en) * 2017-12-29 2018-06-01 纳恩博(北京)科技有限公司 A kind of recognition methods and device, computer storage media
CN108205646A (en) * 2016-12-19 2018-06-26 北京数码视讯科技股份有限公司 A kind of hand gestures detection method and device
CN108229391A (en) * 2018-01-02 2018-06-29 京东方科技集团股份有限公司 Gesture identifying device and its server, gesture recognition system, gesture identification method
CN108491767A (en) * 2018-03-06 2018-09-04 北京因时机器人科技有限公司 Autonomous roll response method, system and manipulator based on Online Video perception
CN109101860A (en) * 2017-06-21 2018-12-28 富泰华工业(深圳)有限公司 Electronic equipment and its gesture identification method
CN110020634A (en) * 2019-04-15 2019-07-16 刘政操 A kind of business administration data display board
CN110070478A (en) * 2018-08-24 2019-07-30 北京微播视界科技有限公司 Deformation pattern generation method and device
CN110298314A (en) * 2019-06-28 2019-10-01 海尔优家智能科技(北京)有限公司 The recognition methods of gesture area and device
US10496879B2 (en) 2017-08-25 2019-12-03 Qualcomm Incorporated Multiple-detection gesture recognition
CN110807391A (en) * 2019-10-25 2020-02-18 中国人民解放军国防科技大学 Human body posture instruction identification method for human-unmanned aerial vehicle interaction based on vision
CN111007806A (en) * 2018-10-08 2020-04-14 珠海格力电器股份有限公司 Smart home control method and device
CN111696140A (en) * 2020-05-09 2020-09-22 青岛小鸟看看科技有限公司 Monocular-based three-dimensional gesture tracking method
WO2021008589A1 (en) * 2019-07-18 2021-01-21 华为技术有限公司 Application running mehod and electronic device
CN113515190A (en) * 2021-05-06 2021-10-19 广东魅视科技股份有限公司 Mouse function implementation method based on human body gestures
CN113642413A (en) * 2021-07-16 2021-11-12 新线科技有限公司 Control method, apparatus, device and medium
CN113706606A (en) * 2021-08-12 2021-11-26 新线科技有限公司 Method and device for determining position coordinates of spaced gestures
CN114167978A (en) * 2021-11-11 2022-03-11 广州大学 Human-computer interaction system carried on construction robot
CN114253395A (en) * 2021-11-11 2022-03-29 易视腾科技股份有限公司 Gesture recognition system for television control and recognition method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN102298443A (en) * 2011-06-24 2011-12-28 华南理工大学 Smart home voice control system combined with video channel and control method thereof
CN102324019A (en) * 2011-08-12 2012-01-18 浙江大学 Method and system for automatically extracting gesture candidate region in video sequence
CN102854983A (en) * 2012-09-10 2013-01-02 中国电子科技集团公司第二十八研究所 Man-machine interaction method based on gesture recognition
CN103530613A (en) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Target person hand gesture interaction method based on monocular video sequence
CN104049760A (en) * 2014-06-24 2014-09-17 深圳先进技术研究院 Obtaining method and system of man-machine interaction instruction
CN104134061A (en) * 2014-08-15 2014-11-05 上海理工大学 Number gesture recognition method for support vector machine based on feature fusion
CN104361313A (en) * 2014-10-16 2015-02-18 辽宁石油化工大学 Gesture recognition method based on multi-kernel learning heterogeneous feature fusion
US20150117708A1 (en) * 2012-06-25 2015-04-30 Softkinetic Software Three Dimensional Close Interactions

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN102298443A (en) * 2011-06-24 2011-12-28 华南理工大学 Smart home voice control system combined with video channel and control method thereof
CN102324019A (en) * 2011-08-12 2012-01-18 浙江大学 Method and system for automatically extracting gesture candidate region in video sequence
US20150117708A1 (en) * 2012-06-25 2015-04-30 Softkinetic Software Three Dimensional Close Interactions
CN102854983A (en) * 2012-09-10 2013-01-02 中国电子科技集团公司第二十八研究所 Man-machine interaction method based on gesture recognition
CN103530613A (en) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Target person hand gesture interaction method based on monocular video sequence
CN104049760A (en) * 2014-06-24 2014-09-17 深圳先进技术研究院 Obtaining method and system of man-machine interaction instruction
CN104134061A (en) * 2014-08-15 2014-11-05 上海理工大学 Number gesture recognition method for support vector machine based on feature fusion
CN104361313A (en) * 2014-10-16 2015-02-18 辽宁石油化工大学 Gesture recognition method based on multi-kernel learning heterogeneous feature fusion

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354551B (en) * 2015-11-03 2019-07-16 北京英梅吉科技有限公司 Gesture identification method based on monocular cam
CN105354551A (en) * 2015-11-03 2016-02-24 北京英梅吉科技有限公司 Gesture recognition method based on monocular camera
CN106022211B (en) * 2016-05-04 2019-06-28 北京航空航天大学 A method of utilizing gesture control multimedia equipment
CN106022211A (en) * 2016-05-04 2016-10-12 北京航空航天大学 Method using gestures to control multimedia device
CN107885317A (en) * 2016-09-29 2018-04-06 阿里巴巴集团控股有限公司 A kind of exchange method and device based on gesture
CN106558070A (en) * 2016-11-11 2017-04-05 华南智能机器人创新研究院 A kind of method and system of the visual tracking under the robot based on Delta
CN106780566A (en) * 2016-11-11 2017-05-31 华南智能机器人创新研究院 A kind of method and system of the target following under the robot based on Delta
CN106780566B (en) * 2016-11-11 2019-06-21 华南智能机器人创新研究院 A kind of method and system of target following under the robot based on Delta
CN106558070B (en) * 2016-11-11 2019-02-26 华南智能机器人创新研究院 A kind of method and system of vision tracking under the robot based on Delta
CN108205646A (en) * 2016-12-19 2018-06-26 北京数码视讯科技股份有限公司 A kind of hand gestures detection method and device
CN109101860B (en) * 2017-06-21 2022-05-13 富泰华工业(深圳)有限公司 Electronic equipment and gesture recognition method thereof
CN109101860A (en) * 2017-06-21 2018-12-28 富泰华工业(深圳)有限公司 Electronic equipment and its gesture identification method
US10496879B2 (en) 2017-08-25 2019-12-03 Qualcomm Incorporated Multiple-detection gesture recognition
CN107885324B (en) * 2017-09-28 2020-07-28 江南大学 Human-computer interaction method based on convolutional neural network
CN107885324A (en) * 2017-09-28 2018-04-06 江南大学 A kind of man-machine interaction method based on convolutional neural networks
CN107479715A (en) * 2017-09-29 2017-12-15 广州云友网络科技有限公司 The method and apparatus that virtual reality interaction is realized using gesture control
CN108096833B (en) * 2017-12-20 2021-10-01 北京奇虎科技有限公司 Motion sensing game control method and device based on cascade neural network and computing equipment
CN108096833A (en) * 2017-12-20 2018-06-01 北京奇虎科技有限公司 Somatic sensation television game control method and device based on cascade neural network, computing device
CN108108709B (en) * 2017-12-29 2020-10-16 纳恩博(北京)科技有限公司 Identification method and device and computer storage medium
CN108052927A (en) * 2017-12-29 2018-05-18 北京奇虎科技有限公司 Gesture processing method and processing device based on video data, computing device
CN108108709A (en) * 2017-12-29 2018-06-01 纳恩博(北京)科技有限公司 A kind of recognition methods and device, computer storage media
CN108052927B (en) * 2017-12-29 2021-06-01 北京奇虎科技有限公司 Gesture processing method and device based on video data and computing equipment
CN108229391A (en) * 2018-01-02 2018-06-29 京东方科技集团股份有限公司 Gesture identifying device and its server, gesture recognition system, gesture identification method
US10725553B2 (en) 2018-01-02 2020-07-28 Boe Technology Group Co., Ltd. Gesture recognition device, gesture recognition method, and gesture recognition system
CN108491767B (en) * 2018-03-06 2022-08-09 北京因时机器人科技有限公司 Autonomous rolling response method and system based on online video perception and manipulator
CN108491767A (en) * 2018-03-06 2018-09-04 北京因时机器人科技有限公司 Autonomous roll response method, system and manipulator based on Online Video perception
CN110070478A (en) * 2018-08-24 2019-07-30 北京微播视界科技有限公司 Deformation pattern generation method and device
CN111007806A (en) * 2018-10-08 2020-04-14 珠海格力电器股份有限公司 Smart home control method and device
CN110020634A (en) * 2019-04-15 2019-07-16 刘政操 A kind of business administration data display board
CN110298314A (en) * 2019-06-28 2019-10-01 海尔优家智能科技(北京)有限公司 The recognition methods of gesture area and device
WO2021008589A1 (en) * 2019-07-18 2021-01-21 华为技术有限公司 Application running mehod and electronic device
US11986726B2 (en) 2019-07-18 2024-05-21 Honor Device Co., Ltd. Application running method and electronic device
CN110807391A (en) * 2019-10-25 2020-02-18 中国人民解放军国防科技大学 Human body posture instruction identification method for human-unmanned aerial vehicle interaction based on vision
CN111696140A (en) * 2020-05-09 2020-09-22 青岛小鸟看看科技有限公司 Monocular-based three-dimensional gesture tracking method
CN111696140B (en) * 2020-05-09 2024-02-13 青岛小鸟看看科技有限公司 Monocular-based three-dimensional gesture tracking method
CN113515190A (en) * 2021-05-06 2021-10-19 广东魅视科技股份有限公司 Mouse function implementation method based on human body gestures
CN113642413A (en) * 2021-07-16 2021-11-12 新线科技有限公司 Control method, apparatus, device and medium
CN113706606A (en) * 2021-08-12 2021-11-26 新线科技有限公司 Method and device for determining position coordinates of spaced gestures
CN113706606B (en) * 2021-08-12 2024-04-30 新线科技有限公司 Method and device for determining position coordinates of spaced hand gestures
CN114253395A (en) * 2021-11-11 2022-03-29 易视腾科技股份有限公司 Gesture recognition system for television control and recognition method thereof
CN114167978A (en) * 2021-11-11 2022-03-11 广州大学 Human-computer interaction system carried on construction robot
CN114253395B (en) * 2021-11-11 2023-07-18 易视腾科技股份有限公司 Gesture recognition system and method for television control

Similar Documents

Publication Publication Date Title
CN104992171A (en) Method and system for gesture recognition and man-machine interaction based on 2D video sequence
Mukherjee et al. Fingertip detection and tracking for recognition of air-writing in videos
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
Zhou et al. A novel finger and hand pose estimation technique for real-time hand gesture recognition
US7308112B2 (en) Sign based human-machine interaction
US8970696B2 (en) Hand and indicating-point positioning method and hand gesture determining method used in human-computer interaction system
US9811721B2 (en) Three-dimensional hand tracking using depth sequences
Wu et al. Robust fingertip detection in a complex environment
US20200226786A1 (en) Detecting pose using floating keypoint(s)
Patruno et al. People re-identification using skeleton standard posture and color descriptors from RGB-D data
CN107357427A (en) A kind of gesture identification control method for virtual reality device
CN102831439A (en) Gesture tracking method and gesture tracking system
CN102855461B (en) In image, detect the method and apparatus of finger
CN107450714A (en) Man-machine interaction support test system based on augmented reality and image recognition
CN102426480A (en) Man-machine interactive system and real-time gesture tracking processing method for same
CN103839040A (en) Gesture identification method and device based on depth images
CN111444764A (en) Gesture recognition method based on depth residual error network
CN103605986A (en) Human motion recognition method based on local features
CN103105924B (en) Man-machine interaction method and device
CN110135237B (en) Gesture recognition method
CN107256083A (en) Many finger method for real time tracking based on KINECT
Hu et al. Depth sensor based human detection for indoor surveillance
Paul et al. Hand segmentation from complex background for gesture recognition
Liu et al. Towards interpretable and robust hand detection via pixel-wise prediction
CN108614988A (en) A kind of motion gesture automatic recognition system under complex background

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 701, Building D (Whale Building), Wuxi Software Park, No. 111 Linghu Road, Wuxi National Hi-Tech Industrial Development Zone, Wuxi City, Jiangsu Province, 214135

Applicant after: YST Technology Co., Ltd.

Address before: Room 701, Building D (Whale Building), Wuxi Software Park, No. 111 Linghu Road, Wuxi National Hi-Tech Industrial Development Zone, Wuxi City, Jiangsu Province, 214135

Applicant before: YST TECHNOLOGY CO., LTD.

COR Change of bibliographic data
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151021