CN103839040A - Gesture identification method and device based on depth images - Google Patents

Gesture identification method and device based on depth images

Info

Publication number
CN103839040A
Authority
CN
China
Prior art keywords
gesture
hand
motion
region
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210490622.8A
Other languages
Chinese (zh)
Other versions
CN103839040B (en)
Inventor
梁玲燕
赵颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201210490622.8A priority Critical patent/CN103839040B/en
Publication of CN103839040A publication Critical patent/CN103839040A/en
Application granted granted Critical
Publication of CN103839040B publication Critical patent/CN103839040B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a gesture identification method and device based on depth images, and a method and device for starting a man-machine interaction system. The gesture identification method includes: detecting a three-dimensional movement track of a hand candidate area based on a depth image sequence containing a hand area, and identifying a predetermined gesture according to the three-dimensional movement track of the hand candidate area. The method and device make full use of the motion information in consecutive images. Because no skin-color model is used, and gesture identification is carried out by means of the time-space domain motion information and continuous depth-value change information of consecutive images, the method is robust and little affected by lighting conditions; and because gesture identification is carried out on the basis of movement tracks, the method and device can be used over a relatively long distance range.

Description

Gesture identification method and device based on depth images
Technical field
The present invention relates generally to gesture recognition, and more specifically to a gesture recognition method and device based on depth images.
Background technology
Various gesture recognition techniques based on image processing have been proposed.
U.S. Patent US7340077B2 proposes a gesture recognition system based on a depth sensor. The method performs gesture recognition by combining postures within a certain time range, mainly identifying the shape, position and orientation of the relevant body part, and finally controls the associated electrical equipment according to the recognized gesture. Because the method mainly relies on still-image information, it discards a large amount of motion information contained in continuous video images. In addition, recognition is mainly based on combined postures, so the user must perform multiple posture patterns to express one gesture, which is inconvenient to operate.
U.S. Patent Application Publication US20120069168A1 proposes a gesture recognition system for television control. Gestures (palm open and palm closed) serve the "select" and "confirm" operations of the TV. First, the posture of the hand ("open" or "closed") is detected by computing the distance between the palm center and the palm bottom; the gesture is then recognized from the state transition between the "open" and "closed" hand states. In this system, in order to judge the open/closed state of the hand reliably, the user must stay within an effective distance of the TV, so the method is not well suited to long-distance operation. Moreover, the system uses a skin-color model to detect the hand, and the detection result is easily affected by changes in ambient lighting.
The article "Hand Gesture Recognition for Human-Machine Interaction", Journal of WSCG, 2004, proposes a real-time vision system based on gesture recognition. First, a skin-color model is used to segment the moving hand region; then hand posture recognition is carried out based on the Hausdorff distance. This method is likewise susceptible to illumination changes.
In addition, some articles use 2D movement tracks for gesture recognition; they usually extract features from still images or use a skin-color model for foreground segmentation.
Summary of the invention
According to an embodiment of the invention, a gesture recognition method based on depth images is provided, comprising: detecting the three-dimensional motion track of a hand candidate region based on a depth image sequence containing a hand region; and identifying a predetermined gesture according to the three-dimensional motion track of the hand candidate region.
According to another embodiment of the invention, a method of starting a man-machine interaction system is provided, comprising: obtaining a depth image sequence containing a hand region; detecting the three-dimensional motion track of a hand candidate region based on the depth image sequence; identifying a hand-raising gesture according to the three-dimensional motion track of the hand candidate region; and, if the hand-raising gesture is recognized, starting the man-machine interaction system and entering the man-machine interaction state.
According to another embodiment of the invention, a method of recognizing a predetermined action of a human body part based on depth images is provided, comprising: detecting the three-dimensional motion track of a candidate region of the body part based on a depth image sequence containing that body part; and identifying the predetermined action according to the three-dimensional motion track of the candidate region.
According to another embodiment of the invention, a gesture recognition device based on depth images is provided, comprising: a three-dimensional motion track detection component for detecting the three-dimensional motion track of a hand candidate region based on a depth image sequence containing a hand region; and a gesture recognition component for identifying a predetermined gesture according to the three-dimensional motion track of the hand candidate region.
Because the movement track in the depth domain is included in the recognition process, the gesture recognition method and device according to embodiments of the invention make full use of the motion information in consecutive images. Because no skin-color model is used, and recognition instead relies on the spatio-temporal motion information and continuous depth-value change information of consecutive images, the method is robust and little affected by illumination conditions. And because recognition is based on movement tracks, it can be used over a relatively long distance range. The gesture recognition method according to embodiments of the invention is fast and highly robust.
The technique for starting a man-machine interaction system according to embodiments of the invention provides a convenient, reliable system start control mode that informs the user whether the system has entered the control state, preventing the user's unconscious movements from being misrecognized as operating gestures, and thus provides a more user-friendly mode of man-machine interaction.
Brief description of the drawings
Fig. 1 schematically shows a scene in which gesture recognition is used for man-machine interaction according to an embodiment of the invention.
Fig. 2 shows an overall flowchart of the gesture recognition method according to a first embodiment of the invention.
Fig. 3 shows a flowchart of an exemplary method of detecting the three-dimensional motion track of a user's hand candidate region without recognizing the user's hand, according to an embodiment of the invention.
Fig. 4 shows an overall flowchart of a method of recognizing the hand-raising gesture based on a three-dimensional motion track according to an embodiment of the invention.
Fig. 5 is a decomposition diagram of a three-dimensional motion track in different dimensions according to an embodiment of the invention.
Figs. 6(a) to 6(c) schematically show the form of a three-dimensional motion track and its motion feature extraction.
Fig. 7 shows a flowchart of a method of recognizing a gesture from input motion track features based on a sliding window of variable size, according to an embodiment of the invention.
Fig. 8 shows a flowchart of the gesture recognition method according to a second embodiment of the invention.
Fig. 9 shows a flowchart of a method of verifying a predetermined gesture by an anthropometric model according to an embodiment of the invention.
Figs. 10(a1) to (a3), (b1) to (b3) and (c) show, according to an embodiment of the invention, head center line location and shoulder horizontal location using a histogram analysis method and, as an example, the mutual position relation of the hand and the head at the end of a hand-raising gesture.
Fig. 11 shows a flowchart of a method of starting a man-machine interaction system according to an embodiment of the invention.
Fig. 12 shows a functional block diagram of a gesture recognition device based on depth images according to an embodiment of the invention.
Fig. 13 is an overall hardware block diagram of a gesture recognition system according to an embodiment of the invention.
Embodiment
To enable those skilled in the art to better understand the present invention, the invention is described in further detail below with reference to the drawings and specific embodiments.
The description proceeds in the following order:
1. Application scenario example
2. First embodiment of the gesture recognition method
2.1 Overall flow of the gesture recognition method
2.2 Obtaining the three-dimensional motion track
2.3 Recognizing the hand-raising gesture based on the three-dimensional motion track
2.4 Motion feature extraction of the three-dimensional motion track
2.5 Recognizing a predetermined gesture based on the motion features of the three-dimensional motion track
2.6 Gesture recognition using a sliding window of variable size
3. Second embodiment of the gesture recognition method
3.1 Overall flow of the gesture recognition method of the second embodiment
3.2 Verifying the predetermined gesture according to the anthropometric model
4. Method of starting a man-machine interaction system
5. Gesture recognition device based on depth images
6. System hardware configuration
7. Summary
1. Application scenario example
Fig. 1 schematically shows a scene in which gesture recognition is used for man-machine interaction according to an embodiment of the invention. As shown in Fig. 1, a user stands in front of a man-machine interaction device such as a computer. A stereo camera such as a binocular camera captures, for example, a sequence of left and right images of the person, or directly obtains a depth image sequence, and sends it to gesture recognition equipment such as a personal computer. The personal computer analyzes the depth image sequence, performs gesture recognition, and responds based on the recognition result: for example, if it recognizes a hand-raising gesture intended for starting, it concludes that this is a valid start signal and issues the start signal; conversely, if what it recognizes is not a hand-raising gesture for starting, it concludes that the signal is invalid and issues no start signal. Of course, this is only a schematic example; the equipment for recognizing gestures is not limited to a computer, and may for example be a game machine, projector, television set, etc.
As known to those skilled in the art, a depth image is an image in which the value of each pixel is a depth. Compared with a gray-scale image, a depth image contains depth (distance) information of objects, and is therefore particularly suitable for applications that need stereo information.
In addition, as is well known, there is a simple conversion relation between the depth value and the disparity value of a pixel, so the meaning of "depth image" in the present invention is broad and includes disparity images.
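As an illustration of that conversion, a minimal sketch based on the standard pinhole stereo relation Z = f·b/d (the focal length and baseline values are illustrative assumptions, not parameters given in the patent):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=570.0, baseline_m=0.075):
    """Convert a disparity map (pixels) to a depth map (meters) via Z = f * b / d.

    focal_px and baseline_m are illustrative values, not taken from the patent.
    """
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > 0          # zero disparity means no stereo match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```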
2. First embodiment of the gesture recognition method
2.1 Overall flow of the gesture recognition method
Fig. 2 shows an overall flowchart of the gesture recognition method 100 according to the first embodiment of the invention.
As shown in Fig. 2, in step S110, the three-dimensional motion track of a hand candidate region is detected based on a depth image sequence containing a hand region.
The depth image sequence may be transmitted from a camera capable of obtaining depth images, such as a binocular camera, may be computed locally in real time from gray-scale images, or may be obtained from outside over a network, and so on.
The three-dimensional motion track here differs from the two-dimensional track of a traditional two-dimensional image in that it carries depth (distance) information: each track point on the track has both planar (x, y) coordinate information and a Z coordinate representing depth (distance).
Methods of detecting the three-dimensional motion track of a hand candidate region can be roughly divided into methods based on hand recognition and methods based on hand filtering.
In a method based on hand recognition, starting for example from an initial depth image, the position of the hand region is first determined by recognizing it according to the features of the hand region, e.g. by matching processing; motion tracking techniques are then used to follow the hand in subsequent depth images, thereby obtaining the three-dimensional motion track of the hand. Here, if a corresponding gray-scale image is also available, skin-color-model-based hand recognition techniques can also be combined for recognizing the hand.
In a method based on hand filtering, hand region recognition is not carried out first; instead, the moving block regions in the depth image are detected first, and the hand motion block region is then selected (or filtered) from the moving block regions, for example based on characteristic features of the hand. This approach can proceed without recognizing the hand, so it supports faster gesture recognition and is particularly suitable for real-time man-machine interaction processing. The detection of the three-dimensional motion track of a hand candidate region based on hand filtering is described below with reference to Fig. 3.
However, the above ways of detecting the three-dimensional motion track of a hand candidate region are only examples; the invention is not limited thereto, and any technique that can obtain the three-dimensional motion track of an object based on a depth image sequence can be applied to the present invention.
In step S120, a predetermined gesture is identified according to the three-dimensional motion track of the hand candidate region.
Generally speaking, different gestures correspond to different three-dimensional motion tracks, so a predetermined gesture can be recognized by analyzing the three-dimensional motion track of the hand candidate region obtained above.
For example, for the hand-raising gesture, taking the person as the reference, the three-dimensional motion track is a parabola protruding forward that starts below and ends above. Decomposed into two motion tracks, one in the depth domain and one in the two-dimensional plane domain perpendicular to the depth domain: in the depth domain, seen from the camera, the distance goes from far to near and back to far, a parabola; in the two-dimensional plane domain, the motion is a straight line from bottom to top.
As another example, for the hand-lowering gesture, the opposite of the hand-raising gesture, taking the person as the reference, the three-dimensional motion track is a parabola protruding forward that starts above and ends below. Decomposed in the same way: in the depth domain, seen from the camera, the distance again goes from far to near and back to far, a parabola; in the two-dimensional plane domain, the motion is a straight line from top to bottom. Incidentally, the difference between the hand-raising and hand-lowering gestures can also be judged from the direction of motion indicated by the chronological sequence of track points, the hand position at the ending moment, and so on.
As a further example, for a circle-drawing gesture the three-dimensional motion track is approximately circular, and for a waving gesture the motion track is a back-and-forth pendulum-like track.
As yet another example, for a hand motion that pushes forward from a naturally hanging hand, the three-dimensional track can be regarded as a partial hand-raise combined with a forward push, where the forward push is a straight-line motion in the depth domain and approximately stationary in the plane domain.
Below, with reference to Figs. 4, 5 and 6, taking the hand-raising gesture as an example, the process of recognizing the hand-raising gesture by analyzing the motion track in the depth domain and the motion track in the two-dimensional plane domain is described.
The method of recognizing a predetermined gesture based on depth images according to an embodiment of the invention has been described above. However, the invention is not limited to gesture recognition; it can also be applied to action recognition for other human body parts, for example foot actions such as recognizing a forward kick. Obviously, the invention is also not limited to humans; it can be applied to other moving objects, for example animals, robots, etc.
2.2 Obtaining the three-dimensional motion track
Fig. 3 shows a flowchart of an exemplary method 110 of detecting the three-dimensional motion track of a user's hand candidate region without recognizing the user's hand, according to an embodiment of the invention. This exemplary method 110 can be applied to step S110 shown in Fig. 2.
As shown in Fig. 3, in step S111, the moving block regions in the depth image are obtained.
As an example, the moving block regions can be detected by the conventional frame-difference method: the current frame is subtracted from the previous frame, and the difference in each region is compared with a predetermined motion difference threshold; if the difference of the current region is greater than the threshold, the region is detected as a moving block. The regions can be obtained, for example, by connected-component detection.
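A minimal sketch of this frame-difference detection with OpenCV (the threshold and minimum-area values are illustrative assumptions, not values given in the patent):

```python
import cv2
import numpy as np

def detect_moving_blocks(depth_prev, depth_curr, diff_thresh=30.0, min_area=200):
    """Frame difference on consecutive depth frames, then connected components."""
    diff = cv2.absdiff(depth_curr, depth_prev).astype(np.float32)
    motion_mask = (diff > diff_thresh).astype(np.uint8) * 255
    # Each connected component of the motion mask is one candidate moving block.
    n, _labels, stats, centroids = cv2.connectedComponentsWithStats(motion_mask)
    blocks = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blocks.append({"bbox": tuple(stats[i, :4]),
                           "area": int(stats[i, cv2.CC_STAT_AREA]),
                           "centroid": tuple(centroids[i])})
    return blocks
```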
As an alternative example, the moving block regions can be obtained by background subtraction: the current image is subtracted from a background image, where the background image can be formed, for example, by accumulating and averaging several preceding frames.
In step S112, a hand candidate region is chosen from the moving block regions according to at least one of the position, area and shape of each moving block region.
For example, the hand candidate region can be chosen from the moving block regions according to the position, area and shape of each region together with prior knowledge of the general position, area and shape of a human hand. Specifically, if a moving block region really is the moving region of a hand, its area should be close to the size of an average human hand; if the area is too large or too small, the region can be rejected. Similarly, the shape of a human hand keeps a certain aspect ratio, so moving block regions can also be filtered according to whether they satisfy this proportion. As another example, the distance between a hand and the body generally lies within a certain range; a moving block region outside that range can likewise be rejected from the hand candidate motion regions. If a moving block region simultaneously satisfies the position, area and shape conditions, it is taken as a hand region. Of course, at most two moving block regions should be selected as hand candidate motion regions; if more than two moving block regions satisfy the filter conditions on the position, area and shape of a human hand, their confidence (or likelihood) of being a hand region can be evaluated, and at most two moving block regions selected as hand candidate motion regions. Hereinafter, for ease of description, the selection of one hand candidate moving block region from all moving blocks is taken as an example.
The above selects the hand candidate motion region from the moving block regions based on prior knowledge of the position, area and shape of the human hand. But this is only an example; the moving block region most likely to be a hand can also be selected according to various other factors using various techniques. For example, when a gray-scale image is available, the hand moving block region can be selected in combination with a skin-color model.
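Continuing the sketch above, a filter over the detected blocks (the area, aspect-ratio and candidate-count bounds are illustrative assumptions; the patent gives no concrete numbers):

```python
def select_hand_candidates(blocks, min_area=200, max_area=5000,
                           min_aspect=0.4, max_aspect=2.5, max_candidates=2):
    """Keep moving blocks whose area and aspect ratio are plausible for a hand.

    Returns at most max_candidates blocks, largest area first, as a crude
    stand-in for the confidence ranking described in the text.
    """
    plausible = []
    for b in blocks:
        _x, _y, w, h = b["bbox"]
        if not (min_area <= b["area"] <= max_area):
            continue                  # too small or too large for a hand
        if not (min_aspect <= w / float(h) <= max_aspect):
            continue                  # proportions unlike a hand
        plausible.append(b)
    plausible.sort(key=lambda b: b["area"], reverse=True)
    return plausible[:max_candidates]
```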
In step S113, the position information of the hand candidate region is calculated and recorded. As an example, the position of the centroid of the hand candidate region can be calculated and used as its position information; the centroid position can be obtained, for example, by averaging the positions of all points in the hand candidate region.
In step S114, the position information sequence corresponding to the depth image sequence is obtained.
Specifically, for example, each time the system acquires the centroid of the current hand moving block, it can put the centroid into a "motion point position information sequence" memory in chronological order; this sequence then forms the three-dimensional (3D) motion track. In addition, when the length of the motion point sequence exceeds the set motion sequence length, the track points can be updated by deleting the oldest motion point and inserting the new one.
In one example, the corresponding time information can be stored in the motion point position information sequence in association with the position information.
In one example, if the motion point sequence has not yet reached the set sequence length, the system does not start the subsequent 3D track analysis.
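A minimal sketch of such a fixed-length track buffer (the default length of 15 is an illustrative assumption; the patent leaves the sequence length configurable):

```python
from collections import deque

class TrackBuffer:
    """Chronological buffer of (t, x, y, z) centroid points forming the 3D track."""

    def __init__(self, max_len=15):
        # With maxlen set, appending a new point silently drops the oldest one,
        # which matches the update rule described above.
        self.points = deque(maxlen=max_len)

    def push(self, t, x, y, z):
        self.points.append((t, x, y, z))

    def ready(self):
        # Track analysis starts only once the buffer holds a full sequence.
        return len(self.points) == self.points.maxlen
```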
2.3 Recognizing the hand-raising gesture based on the three-dimensional motion track
Fig. 4 shows an overall flowchart of a method 120 of recognizing the hand-raising gesture based on a three-dimensional motion track according to an embodiment of the invention. The method 120 can be applied to step S120 shown in Fig. 2.
As shown in Fig. 4, in step S121, motion feature extraction is performed on the obtained three-dimensional motion track. An exemplary detailed description is given below with reference to Figs. 5 and 6.
In step S122, gesture recognition is carried out based on the motion features of the obtained three-dimensional motion track.
For example, the motion features of the obtained three-dimensional motion track can be compared with the motion model corresponding to the predetermined gesture, to identify whether the track represents the predetermined gesture. An exemplary detailed description is given below with reference to Fig. 7.
However, the method of recognizing a predetermined gesture based on a three-dimensional motion track shown in Fig. 4 is only an example. In some cases, motion feature extraction may be omitted, and gesture recognition may be carried out purely by numerical analysis of whether the three-dimensional motion track conforms to a concrete mathematical form, for example a parabola.
2.4 Motion feature extraction of the three-dimensional motion track
The three-dimensional motion track can be characterized by the individual motion features of each track point, corresponding to each depth image frame, together with the features of the overall three-dimensional motion track.
Fig. 5 is a decomposition diagram of a three-dimensional motion track in different dimensions according to an embodiment of the invention.
As shown in Fig. 5, exemplarily, the individual motion features 121 of each track point can comprise a two-dimensional spatio-temporal domain feature 1211 and a depth domain motion feature 1222 of the track point, both associated with the time point of that track point. The two-dimensional spatio-temporal domain feature can comprise the position, speed and angle of the track point; the depth domain motion feature of the track point can comprise its depth value.
The form of the three-dimensional motion track and its motion feature extraction are schematically described below with reference to Figs. 6(a) to 6(c).
Fig. 6(a) shows the world coordinate system, where the Z axis is the direction of the depth sensor, in other words the depth direction. Fig. 6(b) is the 2D spatio-temporal motion track observed in the X-Y plane, corresponding to the part indicated by reference 601 in Fig. 6(a). Fig. 6(c) is the 1D depth-value motion track observed on the Z axis, corresponding to the part indicated by reference 602 in Fig. 6(a). The depth value z in the figure represents the horizontal distance between the depth sensor and the moving block.
The feature extraction of the 2D motion track in the X-Y plane is described first.
The 2D hand-raising motion track observed from the front view (X-Y plane) approximates the straight-line motion in Fig. 6(b). Over the time range [t_s, t_e], the hand centroid P moves from the starting point P_s to the ending point P_e; t_m is the middle moment within [t_s, t_e]. Because different users have different hand-raising habits, the starting point of the motion track may differ and so may the ending point; several possible motion tracks are shown in Fig. 6(b).
According to an embodiment of the invention, the straight-line motion features in the X-Y plane comprise: the position, motion speed and motion direction of each track point, and the motion range of the overall track.
The position of each track point P_i in the X-Y plane can be represented by its 2D coordinates (x_i, y_i).
The motion speed can be calculated by formula (1) below:

speed_i = dis(P_i, P_{i-1}) / (t_i - t_{i-1}),  i = 1, 2, ..., n,  t_0 = t_s, t_n = t_e    (1)

where dis(P_i, P_{i-1}) is the spatial distance, for example the Euclidean distance, between the current moving block centroid P_i and the previous frame's centroid P_{i-1}; t_i denotes the moment of the current frame and t_{i-1} the moment of the previous frame. The number of consecutive track points to analyze (equal to the window size, in other words the number of frames to analyze) is assumed to be n, an integer greater than or equal to 2 that can be chosen as needed, for example 15, 20, etc.
The motion direction can be represented by the motion angle θ, whose tangent tan θ can be expressed by formula (2):

tan θ_i = (y_i - y_{i-1}) / (x_i - x_{i-1}),  i = 1, 2, ..., n,  t_0 = t_s, t_n = t_e    (2)

In formula (2), i denotes the index of the current frame; x_i and y_i denote respectively the x and y coordinates of the hand region centroid P_i in the current frame; similarly, x_{i-1} and y_{i-1} denote the x and y coordinates of the hand region centroid P_{i-1} in the previous frame.
The motion range between the starting point and the ending point of the three-dimensional motion track on the X-Y plane is expressed by the respective motion ranges Range_X and Range_Y of the hand motion region in the X direction and the Y direction, which can be represented by formula (3):

Range_X = |x_end - x_start|,  Range_Y = |y_end - y_start|    (3)

where Range_Y denotes the height difference between the motion track ending point and starting point, and Range_X denotes the horizontal difference between them.
Combining the above features, the motion features for the X-Y plane are {[2Dfeature_i], 2Dfeature_total}, where the features of each track point comprise time, position, speed and angle, i.e. 2Dfeature_i = [t_i, (x_i, y_i), speed_i, θ_i], and the overall feature of the 2D planar track is the pair of motion ranges in the X and Y directions, i.e. 2Dfeature_total = [Range_X, Range_Y].
However, the above motion feature composition is only an example, designed mainly with recognition of the hand-raising gesture in mind. Depending on the gesture to be recognized and the desired precision, different motion feature compositions can be designed; for example, the motion features of each track point can additionally include acceleration, and the overall features of the track in the 2D plane can additionally include the maximum speed, maximum acceleration, lowest point, highest point, leftmost point, rightmost point, and so on.
For the depth domain, similarly to the plane domain above, the motion features can be divided into the depth domain motion features of each track point and those of the whole track. The depth domain motion features of each track point can include time, position, speed, acceleration, etc.; the depth domain motion features of the whole track can be the motion range, maximum depth value, minimum depth value, maximum speed, minimum speed, maximum acceleration and minimum acceleration. Likewise, different depth domain motion features can be designed for different gestures to be recognized. According to one example, for the hand-raising gesture the following depth domain motion features are considered: {[Zfeature_i], Zfeature_total}, where the features of each track point comprise time, position and speed, i.e. Zfeature_i = [t_i, z_i, speed_i], and the overall feature of the depth domain track is the motion range in the Z direction, i.e. Zfeature_total = [Range_Z] = [Z_max - Z_min].
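A sketch of extracting these per-point and overall features from a buffered track of (t, x, y, z) points, following formulas (1)-(3) (the function and field names are illustrative; the angle is computed with atan2 rather than the raw tangent of formula (2)):

```python
import math

def extract_features(points):
    """points: chronological list of (t, x, y, z). Implements formulas (1)-(3)."""
    per_point = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(points, points[1:]):
        dt = t1 - t0
        speed = math.dist((x0, y0, z0), (x1, y1, z1)) / dt       # formula (1)
        speed_y = (y1 - y0) / dt                                  # Y-direction speed
        theta = math.atan2(y1 - y0, x1 - x0)                      # angle of formula (2)
        per_point.append({"t": t1, "pos": (x1, y1), "z": z1,
                          "speed": speed, "speed_y": speed_y, "theta": theta})
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    zs = [p[3] for p in points]
    total = {"range_x": abs(xs[-1] - xs[0]),                      # formula (3)
             "range_y": abs(ys[-1] - ys[0]),
             "range_z": max(zs) - min(zs)}                        # Range_Z
    return per_point, total
```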
In addition, it should be noted that the above motion features, whether the per-point features or the overall features on the 2D X-Y plane and in the depth domain, can all be extracted together before the subsequent gesture recognition is carried out. Alternatively, feature extraction and gesture recognition can proceed in parallel in real time; the benefit of doing them in parallel is that, as soon as the motion features of the three-dimensional motion track are found not to match the track and/or motion characteristics of the predetermined gesture, the motion feature analysis and gesture recognition of the current cycle can be stopped and the next cycle entered.
2.5 Recognizing a predetermined gesture based on the motion features of the three-dimensional motion track
After obtaining the motion features of the three-dimensional motion track as described above, whether the user has made the predetermined gesture can be identified from those motion features, according to the predetermined gesture to be recognized.
For example, to identify whether the user has made the hand-raising gesture, it can be judged whether the relevant motion features of the three-dimensional motion track satisfy the features corresponding to the hand-raising gesture.
This analysis can be carried out separately in the 2D X-Y plane domain and in the depth domain.
For example, for the hand-raising gesture, the following should hold in the 2D X-Y plane domain:
(1) The speed of each track point in the Y direction is always greater than 0, i.e. the hand always moves upward, as shown in formula (4):

speed_{y,i} > 0,  i = 1, 2, ..., n    (4)
(2) Because the motion track on the X-Y plane approximates a straight line, the motion angle should remain approximately constant over the whole motion process [t_s, t_e], i.e. the angles of the track points should be approximately equal, as shown in formula (5):

θ_1 ≈ θ_2 ≈ ... ≈ θ_n    (5)
(3) The motion ranges in the x direction and the y direction should satisfy predetermined range thresholds, as shown in formula (6):

h_thresmin < Range_Y < h_thresmax,  L_thresmin < |Range_X| < L_thresmax    (6)

where h_thresmin and h_thresmax are the predetermined lower and upper thresholds in the height direction, and L_thresmin and L_thresmax bound the horizontal difference between the starting point and the ending point; optionally, L_thresmin can be set to 0. The values of h_thresmin, h_thresmax, L_thresmin and L_thresmax are related to the average length of the human forearm, and can be obtained by collecting the hand-raising habits of most users and analyzing the data.
(4) The maximum and minimum values in the y direction should be the y coordinate of the ending point and the y coordinate of the starting point respectively, as shown in formula (7):

y_max = y_end,  y_min = y_start    (7)
In addition, for the hand-raising gesture, the following should hold in the depth domain:
(5) The depth value of the motion centroid first decreases and then increases, and the motion of the centroid on the Z axis over the time range [t_s, t_e] approximates a parabola, as shown in Fig. 6(c). The motion track equation is approximately as shown in formula (8), where d denotes the coordinate value on the Z axis, i.e. the depth value:

d = a·t² + b·t + c,  where t_s < t < t_e,  d_min ≤ d ≤ d_max,  and D_thresmin < (d_max - d_min) < D_thresmax    (8)

In formula (8), t_s is the moment corresponding to the track starting point, t_e the moment corresponding to the track ending point, d_min the minimum depth value within this time range and d_max the maximum. Within [t_s, t_e] the depth value d should fall into the range [d_min, d_max], and the depth range (d_max - d_min) should fall within the threshold range [D_thresmin, D_thresmax] of human arm length, which can be obtained from statistics over many people's arm lengths; in one embodiment it is set to [200mm, 500mm].
Whether the motion features of the track in the depth domain conform to formula (8) can be determined by numerical analysis methods such as least squares, which amounts to fitting a quadratic function through the known data points (t_i, d_i).
The above formula (8) is the general form. In one embodiment, the parabola is considered approximately symmetric about the track point at the middle moment t_m, in which case formula (8) becomes d = a·(t - t_m)² + b·(t - t_m) + d_min, where t_m is the middle moment, at which the depth value of the hand region centroid reaches its minimum d = d_min.
In addition, for a predetermined gesture, the motion track on the 2D X-Y plane and the motion track in the 1D depth domain are usually not completely independent, but should have a certain correlation. For example, for the hand-raising gesture, the following should hold:
(6) At the middle moment t_m, the centroid depth value of the hand region reaches its minimum on the depth domain motion track; meanwhile, on the 2D X-Y plane motion track, the height value y_m of the hand region centroid should be approximately equal to the mean of the starting height hstart (i.e. the y coordinate y_s of the starting point) and the ending height hend (i.e. the y coordinate y_e of the ending point), as shown in formula (9):

d_m = d_min,  y_m = (y_s + y_e) / 2    (9)
Thus, according to an embodiment, to identify whether the three-dimensional motion track of a hand region represents the hand-raising gesture, the 2D X-Y plane motion track (motion features) and the 1D depth domain motion track (motion features) related to this three-dimensional motion track, and the relation between the two, are analyzed to check whether they satisfy the above conditions (1) to (6). If they do, it is judged that the user has made the hand-raising gesture; otherwise, it is judged that the user has not.
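A sketch of this combined check, building on the `extract_features` sketch above (coordinates assumed in millimeters; the tolerance values and the np.polyfit-based parabola test are assumptions about how conditions (1)-(6) might be coded, not the patent's exact procedure):

```python
import numpy as np

def is_raise_hand(per_point, total, points,
                  h_bounds=(150.0, 600.0), l_bounds=(0.0, 300.0),
                  d_bounds=(200.0, 500.0), angle_tol=0.35):
    """Check conditions (1)-(6) of section 2.5 on one window of track points."""
    ys = [p[2] for p in points]
    # (1) hand always moves upward, formula (4)
    if any(f["speed_y"] <= 0 for f in per_point):
        return False
    # (2) motion angle approximately constant, formula (5)
    thetas = [f["theta"] for f in per_point]
    if max(thetas) - min(thetas) > angle_tol:
        return False
    # (3) motion ranges within forearm-related thresholds, formula (6)
    if not (h_bounds[0] < total["range_y"] < h_bounds[1]):
        return False
    if not (l_bounds[0] <= total["range_x"] < l_bounds[1]):
        return False
    # (4) y extremes are the endpoints, formula (7)
    if ys[-1] != max(ys) or ys[0] != min(ys):
        return False
    # (5) depth trace is a parabola with range within arm length, formula (8)
    if not (d_bounds[0] < total["range_z"] < d_bounds[1]):
        return False
    t = np.array([p[0] for p in points], dtype=float)
    d = np.array([p[3] for p in points], dtype=float)
    coeffs = np.polyfit(t, d, 2)               # least-squares quadratic fit
    residual = np.abs(np.polyval(coeffs, t) - d).mean()
    if coeffs[0] <= 0 or residual > 0.1 * total["range_z"]:
        return False                            # depth must dip and come back
    # (6) depth minimum at mid-height, formula (9)
    i_min = int(np.argmin(d))
    return abs(ys[i_min] - (ys[0] + ys[-1]) / 2) <= 0.25 * total["range_y"]
```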
The above operations carried out for the hand-raising gesture can be applied, with simple modification, to recognition of the hand-lowering gesture, the opposite of the hand-raising gesture.
In addition, the above gesture recognition can similarly be applied to the recognition of leg actions, for example kicking out and drawing back a leg.
2.6 Gesture recognition using a sliding window of variable size
To detect whether a certain continuous motion pattern exists in continuous video, according to an embodiment, a sliding window can be adopted: all video frames within the window are analyzed for gesture recognition, and if no gesture is recognized in a window, the window is slid by a predetermined step, such as one frame, and all video frames within the next window are analyzed.
Because different users have different hand-raising habits, the duration of the motion and other aspects differ, and even the same user's hand-raising gestures at different times are not identical. For example, one user may complete the hand-raising gesture within, say, 15 frames, while another may need, say, 20 frames.
Considering the above, in one embodiment a sliding window of variable size is adopted to decide which and how many depth images the three-dimensional motion track determination and analysis is based on; the window size indicates how many consecutive depth image frames are taken as input for gesture recognition. If the three-dimensional motion track of the depth image sequence within a sliding window of a given size cannot simultaneously match the motion model in the depth domain and the two-dimensional motion track model in the plane perpendicular to the depth domain that correspond to the predetermined gesture, the window size is increased so that more depth image frames are taken as input, and matching of the three-dimensional motion track corresponding to the increased window size against the motion track model of the predetermined gesture continues.
For example, in one example, the series of variable sliding window sizes is as shown in formula (10):

Sliding window size = [15, 20, 25, 30, 35, 40, 45, 50, 55]    (10)

Figuratively, this works as follows. For a window with a given starting end, the initial size is set to 15 frames. If the motion track extracted from the video within this 15-frame window does not yield, say, the hand-raising gesture, the window is expanded to 20 frames by extending its tail end backward by 5 frames, and track extraction and hand-raising recognition continue with the video of these 20 frames as the analysis target. If the hand-raising gesture is recognized at some window size, the window variation stops and the window no longer slides, unless recognition of another gesture is to proceed. If the above process is repeated until the window size reaches 55 frames and the hand-raising gesture is still not recognized, the starting end of the window is moved backward by a predetermined step, for example 2 frames, and a new round of recognition using the above series of variable window sizes begins.
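A sketch of this variable-window matching loop (the window series follows formula (10); `match_gesture` stands for the track-model matching of section 2.5 and is an assumed callable, not something the patent defines):

```python
WINDOW_SIZES = [15, 20, 25, 30, 35, 40, 45, 50, 55]   # formula (10)

def scan_for_gesture(track, match_gesture, step=2):
    """Slide a variable-size window over a track of (t, x, y, z) points.

    match_gesture(window_points) -> bool performs the model matching of
    section 2.5. Returns (start_index, window_size) of the first match, or None.
    """
    start = 0
    while start + WINDOW_SIZES[0] <= len(track):
        for size in WINDOW_SIZES:                  # grow the window in place
            if start + size > len(track):
                break                              # not enough frames yet
            if match_gesture(track[start:start + size]):
                return start, size                 # gesture found; stop sliding
        start += step                              # reset size, shift starting end
    return None
```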
Fig. 7 shows a flowchart of a method 122 of recognizing a gesture from input motion track features based on a sliding window of variable size, according to an embodiment of the invention. The method 122 can be applied to step S122 shown in Fig. 4.
In step S1221, the 3D motion track features are input; the motion features come, for example, from the result of step S121 shown in Fig. 4.
In step S1222, the 3D motion track within the current window range is matched against the motion model of the predetermined gesture, for example the hand-raising gesture.
In step S1223, it is judged whether this motion track matches the motion model of the predetermined gesture, for example whether it satisfies the straight-line motion in the 2D plane and the parabolic motion in the 1D depth domain, such as conditions (1) to (6) in section 2.5 above. If it is judged in step S1223 that the motion track matches the motion track model, the 3D motion track recognition process ends; otherwise the process advances to step S1224.
In step S1224, it is judged whether the sliding window size has reached its maximum value.
If it is determined in step S1224 that the sliding window size has not reached the maximum, the process advances to step S1225; otherwise it advances to step S1226.
In step S1225, the size of the sliding window is changed, and the process returns to step S1221 to input the motion features of the newly added frames and continue motion track template matching.
In step S1226, it is judged whether the motion has ended.
If it is determined in step S1226 that the motion has not yet ended, the process advances to step S1227; otherwise the 3D motion track recognition process ends.
In step S1227, the sliding window is reset, i.e. its size is set back to the minimum value, and the process advances to step S1228.
In step S1228, the track is moved to the next motion centroid, and the process returns to step S1221 to input the motion features of the newly added frames and continue motion track matching.
It should be noted that the series of sliding window sizes shown in formula (10) above is only an example; a suitable series can be set according to the frame rate of the image capture device, the performance of the computer, and so on.
In the above embodiment, detecting the predetermined gesture with the variable-size sliding window adapts better to the differences in motion habits between people, so gesture detection is more accurate.
3. Second embodiment of the gesture recognition method
3.1 Overall flow of the gesture recognition method of the second embodiment
Fig. 8 shows a flowchart of the gesture recognition method 200 according to the second embodiment of the invention. The gesture recognition method 200 of the second embodiment differs from the method 100 of the first embodiment in having an additional anthropometric verification step S230; the three-dimensional motion track detection step S210 and the step S220 of recognizing the predetermined gesture based on the three-dimensional motion track are the same as steps S110 and S120 of the first embodiment, so their repeated description is omitted here.
In step S230, if the predetermined gesture is identified from the three-dimensional motion track of the hand candidate region, it is determined whether the position relation between the hand candidate region and other parts of the human body conforms to the anthropometric model for the case in which the predetermined gesture is made, so as to verify the predetermined gesture.
The anthropometric model serves to further verify the identified predetermined gesture, such as the hand-raising gesture. Anthropometry is the discipline that describes the physical characteristics of the human body through measurement and observation; it has been combined with computer vision fields such as image recognition.
According to anthropometric knowledge, under normal circumstances, at the end of the hand-raising gesture the hand generally reaches a certain height region of the human body close to the height region of the head, while at the same time a certain distance remains between the hand center point and the head center point.
A concrete exemplary embodiment of verifying a predetermined gesture according to the anthropometric model is given below in conjunction with Fig. 9.
3.2 Verifying the predetermined gesture according to the anthropometric model
Fig. 9 shows a flowchart of a method 230 of verifying a predetermined gesture by the anthropometric model according to an embodiment of the invention. The method can be applied to step S230 shown in Fig. 8.
As shown in Fig. 9, in step S231, foreground segmentation is carried out on the depth image to obtain the human body region. For example, foreground segmentation is carried out with the aforementioned connected-component analysis method; associated regions are then merged and, guided by prior knowledge of the body region, the correct foreground image is obtained, which in this example is the human body region. The obtained foreground image is shown in Fig. 10(a1).
In step S232, the head region is detected from the human body region and its position is calculated.
According to an embodiment, a histogram analysis method is used for head location. However, the histogram analysis method is only an example; any method capable of locating the head can be used in the present invention, such as a method using an "Ω"-shaped head-shoulder detection model.
Fig. 10 shows, according to an embodiment of the invention, head center line location and shoulder horizontal location using the histogram analysis method and, as an example, the mutual position relation of the hand and the head at the end of the hand-raising gesture.
How head center line location and shoulder horizontal location are carried out according to an embodiment of the invention, and how the gesture, for example hand-raising, is verified from the mutual position relation that the hand should satisfy with the head at the end of the gesture, is explained below in conjunction with Fig. 10.
According to an embodiment, the head location of step S232 can be performed by the following operations (a code sketch follows the list):
1) A histogram statistic in the vertical direction is made on the foreground image; in other words, for each column of the foreground image, the number of non-zero pixel values from top to bottom is counted, so as to find the head median vertical line. As shown in Fig. 10(a2), the histogram value of the head region is largest in the vertical statistic histogram, so the head median vertical line of Fig. 10(a3) can be obtained by searching for the local maximum statistic value.
2) Based on the head median vertical line, a histogram statistic in the horizontal direction is made on the body region on the raising side (the right, in this example); in other words, the number of non-zero pixel values in each row of the image is counted, so as to locate the neck region position in the horizontal direction (indicated by H2) by finding the minimum statistic value location point. In Fig. 10(b1), the left part is the horizontal histogram and the right part is the person's foreground image; the horizontal histogram is the result of the horizontal statistic over the foreground on the raising side, and such a statistic is unaffected by hand motion on the other side.
3) Optionally, in one embodiment, in order to locate the neck region position in the horizontal direction more accurately, the horizontal statistic histogram can be transformed; the transformation formula is formula (11), and the experimental result is shown in Fig. 10(b2). In the transformed horizontal statistic histogram the neck region is more distinct than in Fig. 10(b1), which facilitates neck location; based on Fig. 10(b2), the neck region can be located by finding the maximum statistic value location point.
hist_mean_i = (hist_{i-c} + hist_{i+c}) / 2,  i = c, c+1, ..., n-c;  hist_new_i = hist_mean_i - hist_i    (11)

In formula (11), hist_i is the horizontal statistic histogram and i is the histogram index value; hist_{i-c} and hist_{i+c} are values of the initial horizontal statistic histogram, see Fig. 10(b1). hist_new is the new histogram obtained by the transformation of formula (11), c is a constant step value, and n is the maximum index value.
4) The head region is obtained from the neck position in the horizontal direction (indicated by H2) and the position of the upper boundary of the head in the horizontal direction (indicated by H1). After the head region is located, the head width and the position of the head centroid can be calculated.
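A sketch of this histogram-based location on a binary foreground mask (NumPy; the step value c = 5 and the use of the half-image to the right of the head line as the raising side are illustrative assumptions):

```python
import numpy as np

def locate_head(foreground_mask, c=5):
    """foreground_mask: 2D binary array (1 = person). Returns (head_col, neck_row)."""
    # 1) vertical histogram: non-zero count per column; head line = maximum column
    v_hist = foreground_mask.sum(axis=0)
    head_col = int(np.argmax(v_hist))
    # 2) horizontal histogram over the raising-side part of the body
    side = foreground_mask[:, head_col:]
    h_hist = side.sum(axis=1).astype(np.float32)
    # 3) transform of formula (11) to sharpen the neck minimum into a maximum
    n = len(h_hist)
    hist_new = np.zeros(n, dtype=np.float32)
    for i in range(c, n - c):
        hist_mean = (h_hist[i - c] + h_hist[i + c]) / 2.0
        hist_new[i] = hist_mean - h_hist[i]
    neck_row = int(np.argmax(hist_new))   # neck = maximum of transformed histogram
    return head_col, neck_row
```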
In step S233, whether the gesture is the predetermined gesture is verified based on whether the vertical distance between the head position and the hand position falls into a first preset range, and whether the spatial distance between the head position and the hand position falls into a second preset range.
Usually, for a predetermined gesture, according to anthropometry, the vertical distance between the head position and the hand position should fall into a preset range. Taking the hand-raising gesture as an example, from the position relation between head and hand it can be judged whether the position height of the moving hand meets the final height of the hand-raising gesture. This is elaborated below in conjunction with Fig. 10(c).
In Fig. 10(c), P1 is the head centroid and P2 the hand centroid. The head width/height is h, and the shoulder width is 2h. Under normal circumstances, the final height of the hand-raising gesture lies between the heights H1 and H3.
According to an exemplary embodiment, formula (12) can be used to verify whether the final height of the hand meets the requirement, by checking whether the distance along the y coordinate axis between the head centroid P1 and the hand centroid P2 is smaller than a certain height range:

abs(p1.y - p2.y) < a·h    (12)

In formula (12), p1.y denotes the y coordinate of the head centroid P1, p2.y denotes the y coordinate of the hand centroid P2, and a is a constant coefficient whose value can be obtained by collecting users' hand-raising habits and performing statistical data analysis.
In addition, for a predetermined gesture, according to anthropometry, the spatial distance between the head position and the hand position should fall into a preset range.
According to an exemplary embodiment, the Euclidean distance between the hand centroid P2 and the head centroid P1 in the real space coordinate system can be used to verify the validity of the hand-raising gesture, as shown in formula (13):

D_{p1-p2} = sqrt((x_p1 - x_p2)² + (y_p1 - y_p2)² + (z_p1 - z_p2)²),  c_1 < D_{p1-p2} < c_2    (13)

In formula (13), D_{p1-p2} is the Euclidean distance between the head centroid P1 and the hand centroid P2, and c_1 and c_2 are predetermined thresholds that can be obtained by collecting user data and repeated experimental calculation. The Euclidean distance D_{p1-p2} is used to judge whether the location point of the hand lies within the range of the human body region. The Euclidean distance is only an example, however; other distance metrics can be adopted to measure the spatial distance between hand and head.
In addition, optionally, in one embodiment, the dwell time of the hand at the end of the hand-raising gesture is used as one verification condition for the validity of the gesture. Usually, at the end of a hand-raising gesture the hand should stay for at least a predetermined time; for example, whether the hand remains static within a given time range can be judged according to formula (14):

D_p2 = dis(P2_last, P2_current),  D_p2 < c_3,  Count_stayingtime > T_thres    (14)

In formula (14), D_p2 denotes the spatial distance between the position P2_last of the hand centroid at the previous moment and its position P2_current at the current moment, and c_3 is a predetermined threshold with a small value: if the spatial distance D_p2 between the hand centroid's positions at the previous and current moments is smaller than this threshold, the hand is determined to be static. The value of c_3 can be set differently according to system performance needs, for example 1cm or 5cm; with 1cm, the system requires the operator's hand to stay essentially still, while with 5cm the user's hand may move slightly during the anthropometric analysis, which prevents point-jitter problems. To strengthen the robustness of the system, the count Count_stayingtime of the dwell time can be required to exceed a predetermined time threshold T_thres, which can be set appropriately for the application, for example to 1 second.
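A sketch combining the checks of formulas (12)-(14) (coordinates assumed in meters; the coefficient a and the thresholds c1, c2, c3 and t_thres are illustrative values in the spirit of the examples above, not values fixed by the patent):

```python
import math
import time

def verify_raise_hand(head_p, hand_p, head_h, prev_hand_p, state,
                      a=2.0, c1=0.2, c2=0.9, c3=0.05, t_thres=1.0):
    """head_p, hand_p, prev_hand_p: (x, y, z) in meters; head_h: head height h;
    state: dict carried between frames for the dwell timer. True once (12)-(14) hold."""
    # formula (12): the hand must end up near head height
    if abs(head_p[1] - hand_p[1]) >= a * head_h:
        return False
    # formula (13): hand-head spatial distance within the body range
    if not (c1 < math.dist(head_p, hand_p) < c2):
        return False
    # formula (14): the hand must stay essentially static for t_thres seconds
    now = time.monotonic()
    if prev_hand_p is None or math.dist(prev_hand_p, hand_p) >= c3:
        state["since"] = now          # movement resets the dwell timer
        return False
    state.setdefault("since", now)
    return now - state["since"] >= t_thres
```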
With the above anthropometric-model-based gesture verification procedure, if the hand-raising gesture is determined to conform to the anthropometric model, it is determined to be a valid hand-raising gesture.
4. Method of starting a man-machine interaction system
The above gesture recognition method and device based on depth images can have many specific applications; for example, they can be applied in a method of starting a man-machine interaction system.
At present, many systems recognize certain special gestures in a specific environment, but seldom give the user corresponding feedback on whether the system has entered the gesture control state. As a result, some of the user's habitual movements during operation are misrecognized as operating gestures, which brings great inconvenience to the control of the man-machine interaction system and reduces its friendliness.
If a start control signal is added to the man-machine interaction system to inform the user whether the system has entered the control state and to prevent the effect of the user's unconscious movements, the interaction will be friendlier. A system start control signal is therefore very important in man-machine interaction, and a convenient, reliable start control mode will greatly improve the user experience.
A simple, natural and stable start gesture is very important in a man-machine interaction system. The hand-raising gesture is a very convenient and effective user operating gesture; it can be used in a man-machine interaction system as the system start control, improving the user experience of the system.
A method of starting a man-machine interaction system through recognition of the hand-raising gesture according to an embodiment of the invention is described below in conjunction with Fig. 11.
Figure 11 shows a kind of according to an embodiment of the invention process flow diagram of method 300 of opening man-machine interactive system.
As shown in figure 11, in step S310, obtain the range image sequence that comprises hand region, for example, obtain by taking such as binocular camera, pass through from outside that wired connection or wireless connections transmission obtain etc.
In step S320, based on the range image sequence that comprises hand region, detect the 3 D motion trace of hand candidate region.
In step S330, according to the 3 D motion trace of hand candidate region, identify the gesture of raising one's hand.
The specific implementation of above-mentioned steps S320 and S330 can be with reference to above about the step S110 of Fig. 2 and the realization of S120 and figure, and only, concrete identification is the gesture of raising one's hand herein.
In step S340, if recognize the gesture of raising one's hand, open man-machine interactive system, enter in man-machine interaction state.
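A minimal sketch of the flow of method 300 (steps S310 to S340) is given below for illustration; detect_trajectory and is_raising_hand are hypothetical stand-ins for the processing of steps S320 and S330 described above, not functions defined by the patent.

```python
from typing import Callable, Iterable, List, Tuple

Point3D = Tuple[float, float, float]

def detect_trajectory(depth_frames: Iterable) -> List[Point3D]:
    """Stand-in for step S320: per depth frame, locate the moving block
    region, pick the hand candidate region and record its centroid."""
    raise NotImplementedError("application-specific detection")

def is_raising_hand(trajectory: List[Point3D]) -> bool:
    """Stand-in for step S330: match the depth-domain track against the
    parabolic model and the 2D track against the linear model."""
    raise NotImplementedError("application-specific matching")

def try_open_system(depth_frames: Iterable,
                    open_system: Callable[[], None]) -> bool:
    """Steps S310-S340: open the man-machine interaction system only
    when the raising-hand gesture is recognized."""
    trajectory = detect_trajectory(depth_frames)  # S320
    if is_raising_hand(trajectory):               # S330
        open_system()                             # S340: enter interaction state
        return True
    return False
```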
Similarly, in this method of opening the man-machine interaction system, the anthropometry model can also be applied to verify the recognized gesture; for details, refer to the descriptions given above in conjunction with Figs. 8, 9 and 10, which are not repeated here.
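For illustration, the head-hand anthropometry check referred to here (and spelled out in claim 7 below) might look like the following sketch; the two preset ranges are invented placeholder values, since the text does not fix them, and the y axis is assumed to be vertical.

```python
import math

def verify_with_anthropometry(head_pos, hand_pos,
                              vertical_range=(0.0, 0.5),
                              spatial_range=(0.1, 0.9)):
    """Verify a recognized raising-hand gesture: the vertical distance
    between head and hand must fall in a first preset range and their
    spatial distance in a second preset range. Positions are (x, y, z)
    in metres; the range values here are illustrative assumptions."""
    vertical = abs(hand_pos[1] - head_pos[1])  # assumed vertical axis: y
    spatial = math.dist(head_pos, hand_pos)    # 3D Euclidean distance
    return (vertical_range[0] <= vertical <= vertical_range[1]
            and spatial_range[0] <= spatial <= spatial_range[1])
```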
After the system has been started with the raising-hand gesture through the above process, the user can use other gestures to perform system control operations.
In the above scenario, recognizing the raising-hand gesture quickly, stably, robustly and in real time is very important. Embodiments of the present invention provide a fast and robust gesture recognition method based on 3D hand trajectory analysis and, optionally, also on anthropometry model analysis.
This method does not use a skin color model; instead, it performs gesture recognition using the spatio-temporal motion information of consecutive images and the continuous depth value change information. The method is therefore robust and little affected by lighting conditions. In addition, because gesture recognition is based on the motion trajectory, the method can be used over a relatively long distance range. The method is fast and highly robust, and will effectively improve the user experience of a man-machine interaction system.
5. Gesture recognition device based on depth images
A gesture recognition device based on depth images according to an embodiment of the present invention is described below with reference to Figure 12.
Figure 12 shows a functional configuration block diagram of a depth-image-based gesture recognition device 400 according to an embodiment of the present invention.
As shown in Figure 12, the gesture recognition device 400 may comprise: a three-dimensional motion trajectory detection component 410 for detecting the three-dimensional motion trajectory of a hand candidate region based on a depth image sequence containing the hand region; and a gesture recognition component 420 for recognizing a predetermined gesture according to the three-dimensional motion trajectory of the hand candidate region.
For the specific functions and operations of the three-dimensional motion trajectory detection component 410 and the gesture recognition component 420, refer to the relevant descriptions of Figs. 1 to 3 above; repeated descriptions are omitted here.
6. System hardware configuration
The present invention can also be implemented as a hardware system. Figure 13 is an overall hardware block diagram of a gesture recognition system 1000 according to an embodiment of the present invention. As shown in Figure 13, the gesture recognition system 1000 may comprise: an input device 1100 for inputting relevant images or information from outside, such as the left and right images captured by a camera, camera parameters, depth maps or initial disparity maps, which may include, for example, a keyboard, a mouse, and a remote input device connected through a communication network; a processing device 1200 for implementing the above gesture recognition method based on depth maps according to embodiments of the present invention, or implemented as the above gesture recognition device, or for implementing the above method of opening a man-machine interaction system, which may include, for example, the central processing unit of a computer or another chip with processing capability, and which may be connected to a network (not shown) such as the Internet to transmit processed images and other data remotely as the processing requires; an output device 1300 for outputting the results of the above gesture recognition process or of the process of opening the man-machine interaction system to the outside, which may include, for example, a display, a printer, and a remote output device connected through a communication network; and a storage device 1400 for storing, in a volatile or non-volatile manner, the data involved in the above gesture recognition process or in the opening of the man-machine interaction system, such as depth maps, foreground maps, background maps, features in the depth domain, motion trajectory features, motion centroid positions with their corresponding time points, three-dimensional motion trajectories and 2D planar motion trajectories, which may include, for example, various volatile or non-volatile memories such as a random access memory (RAM), a read-only memory (ROM), a hard disk, or a semiconductor memory.
7. Summary
According to an embodiment of the invention, a gesture recognition method based on depth images is provided, which may comprise: detecting the three-dimensional motion trajectory of a hand candidate region based on a depth image sequence containing the hand region; and recognizing a predetermined gesture according to the three-dimensional motion trajectory of the hand candidate region.
According to another embodiment of the present invention, a method of opening a man-machine interaction system is provided, comprising: obtaining a depth image sequence containing a hand region; detecting the three-dimensional motion trajectory of a hand candidate region based on the depth image sequence containing the hand region; recognizing a raising-hand gesture according to the three-dimensional motion trajectory of the hand candidate region; and, if the raising-hand gesture is recognized, opening the man-machine interaction system and entering the man-machine interaction state.
According to another embodiment of the present invention, a method of recognizing a predetermined action of a human body part based on depth images is provided, which may comprise: detecting the three-dimensional motion trajectory of a candidate region of the human body part based on a depth image sequence containing the region of that body part; and recognizing the predetermined action of the human body part according to the three-dimensional motion trajectory of the candidate region.
According to another embodiment of the present invention, a gesture recognition device based on depth images is provided, which may comprise: a three-dimensional motion trajectory detection component for detecting the three-dimensional motion trajectory of a hand candidate region based on a depth image sequence containing the hand region; and a gesture recognition component for recognizing a predetermined gesture according to the three-dimensional motion trajectory of the hand candidate region.
According to another embodiment of the present invention, a device for opening a man-machine interaction system is provided, comprising: a depth image sequence obtaining component for obtaining a depth image sequence containing a hand region; a three-dimensional motion trajectory detection component for detecting the three-dimensional motion trajectory of a hand candidate region based on the depth image sequence containing the hand region; a raising-hand gesture recognition component for recognizing a raising-hand gesture according to the three-dimensional motion trajectory of the hand candidate region; and a man-machine interaction system opening component for opening the man-machine interaction system and entering the man-machine interaction state if the raising-hand gesture is recognized.
According to another embodiment of the present invention, a device for recognizing a predetermined action of a human body part based on depth images is provided, which may comprise: a three-dimensional motion trajectory detection component for detecting the three-dimensional motion trajectory of a candidate region of the human body part based on a depth image sequence containing the region of that body part; and a human body part action recognition component for recognizing the predetermined action of the human body part according to the three-dimensional motion trajectory of the candidate region.
With the gesture recognition method and device based on depth images according to embodiments of the present invention, the motion trajectory in the depth domain is included in the gesture recognition process, so the motion information in the continuous images is fully exploited. Because no skin color model is used, and recognition instead relies on the spatio-temporal motion information of consecutive images and the continuous depth value change information, the method is robust and little affected by lighting conditions. Because recognition is based on the motion trajectory, it can be used over a relatively long distance range. The gesture recognition method according to embodiments of the present invention is fast and highly robust.
Thus, the technique according to embodiments of the present invention for opening a man-machine interaction system by recognizing a raising-hand gesture based on depth images provides a convenient and reliable system start control method, prompts the user whether the system control state has been entered, and prevents the user's unconscious movements from being erroneously recognized as operating gestures, thereby providing a more user-friendly man-machine interaction mode.
The foregoing description is only illustrative; many modifications and/or replacements can be made.
In the drawings and description above, the raising-hand gesture is used as the example of the gesture to be recognized, but the present invention is not limited thereto. The technique of recognizing gestures based on the three-dimensional motion trajectory of a hand candidate region can be applied to other gestures, for example a hand-lowering gesture, a gesture of moving the hand from the lowered position to the front, a waving gesture, and so on. Further, the present invention is not limited to the recognition of hand actions; it can be applied to the recognition of actions of other human body parts, such as the feet, legs, hips or head. Still further, the method of recognizing actions based on three-dimensional motion trajectories according to the present invention is not limited to recognizing human actions, and can also be applied to recognizing the actions of movable objects such as animals, robots and mechanical hands.
In addition, opening a man-machine interaction system has been described above as an example application of gesture recognition, but the present invention is not limited thereto; gesture recognition based on depth images can also be used for man-machine interaction in projector control, game machines, and so on.
In addition, in the above analysis the three-dimensional motion trajectory of the hand is decomposed into a motion trajectory in the depth domain and a 2D motion trajectory perpendicular to the depth direction, but this is only an example. The motion trajectory may instead be analyzed directly in 3D space without decomposition; alternatively, it may be decomposed further, for example into the motion trajectory in the depth domain, the motion trajectory on the x axis of the 2D plane, and the motion trajectory on the y axis.
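As an illustrative sketch of this decomposition (assuming an N x 3 array of centroid positions; the tolerance value is an assumption), the depth-domain and 2D components can be separated, and the depth component can be tested against a parabolic model:

```python
import numpy as np

def decompose_trajectory(points):
    """Split a 3D trajectory into the 2D track perpendicular to the
    depth direction and the track in the depth domain."""
    pts = np.asarray(points, dtype=float)  # shape (N, 3): x, y, z
    return pts[:, :2], pts[:, 2]           # (x, y) track, z track

def matches_parabola(depth_track, tol=0.05):
    """Fit z(t) with a second-order polynomial and accept the parabolic
    motion model when the largest residual stays below a tolerance."""
    t = np.arange(len(depth_track))
    coeffs = np.polyfit(t, depth_track, 2)
    residuals = np.abs(np.polyval(coeffs, t) - depth_track)
    return residuals.max() < tol
```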
In addition, the depth map in the above description should be understood as a generalized concept covering images that contain distance information, including what is commonly called a disparity map, because, as is apparent to those skilled in the art, disparity and depth can be simply converted into each other.
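The conversion mentioned here is the standard stereo relation Z = f * b / d; a one-line sketch follows, with parameter names of our choosing:

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Standard stereo relation Z = f * b / d: depth in metres from a
    disparity in pixels, a focal length in pixels and a camera baseline
    in metres. The disparity must be non-zero."""
    return focal_length_px * baseline_m / disparity_px
```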
In addition, in the above description the position of the hand centroid point is used to characterize the position of the hand region in gesture recognition, but this is only an example; other representative points, such as joint points, may be adopted as required. Moreover, only a single centroid point is analyzed here, which is also only an example; it can be inferred that in some cases, for complicated gestures, both the centroid point of the hand and joint points such as the wrist and elbow may be analyzed.
In addition, the gesture recognition technique described above is based only on depth images, but this depth-image-based gesture recognition can be combined with techniques based on grayscale images, for example hand detection based on a skin color model.
The basic principle of the present invention has been described above in conjunction with specific embodiments. It should be noted, however, that those of ordinary skill in the art will understand that all or any of the steps or components of the method and device of the present invention can be implemented in hardware, firmware, software or a combination thereof, in any computing device (including processors, storage media, etc.) or network of computing devices; this can be achieved by those of ordinary skill in the art using their basic programming skills after reading the description of the present invention.
Therefore, the object of the present invention can also be achieved by running a program or a set of programs on any computing device. The computing device may be a well-known general-purpose device. The object of the present invention can thus also be achieved merely by providing a program product containing program code that implements the method or device. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. Obviously, the storage medium may be any known storage medium or any storage medium developed in the future.
It should also be pointed out that in the device and method of the present invention, the components or steps can obviously be decomposed and/or recombined; such decompositions and/or recombinations should be regarded as equivalents of the present invention. Moreover, the steps of the above series of processes can naturally be executed in the chronological order described, but they need not necessarily be executed in that order; some steps can be executed in parallel or independently of one another.
The above embodiments do not limit the scope of the present invention. Those skilled in the art should understand that, depending on design requirements and other factors, various modifications, combinations, sub-combinations and substitutions may occur. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A gesture recognition method based on depth images, comprising:
detecting the three-dimensional motion trajectory of a hand candidate region based on a depth image sequence containing the hand region; and
recognizing a predetermined gesture according to the three-dimensional motion trajectory of the hand candidate region.
2. The gesture recognition method according to claim 1, further comprising:
if a predetermined gesture is recognized according to the three-dimensional motion trajectory of the hand candidate region, determining whether the positional relationship between the hand candidate region and other parts of the human body satisfies the anthropometry model for making the predetermined gesture, so as to verify the predetermined gesture.
3. The gesture recognition method according to claim 1, wherein, if the three-dimensional motion trajectory of the hand candidate region satisfies a parabolic motion model in the depth direction and the two-dimensional motion trajectory in the plane perpendicular to the depth direction satisfies a linear motion model, an upward raising-hand gesture or a downward hand-lowering gesture is recognized.
4. The gesture recognition method according to claim 1, wherein detecting the three-dimensional motion trajectory of the hand candidate region based on the depth image sequence comprises:
obtaining the moving block regions in the depth image;
selecting the hand candidate region from the moving block regions; calculating and recording the position information of the hand candidate region; and
obtaining the position information sequence corresponding to the depth image sequence.
5. The gesture recognition method according to claim 4, wherein the three-dimensional motion trajectory comprises the own motion features of each trajectory point corresponding to each frame of depth image,
the own motion features of each trajectory point comprising: two-dimensional spatio-temporal domain features including the position, speed and angle of the trajectory point, and depth-domain motion features including the depth value of the trajectory point, the two-dimensional spatio-temporal domain features and the depth-domain motion features of each trajectory point both being associated with the time point of that trajectory point.
6. The gesture recognition method according to claim 5, wherein a sliding window of variable window size is used to decide on which and on how many depth images the determination and analysis of the three-dimensional motion trajectory is based, the size of the window indicating how many consecutive frames of depth images are used as the input of gesture recognition;
and wherein, if the three-dimensional motion trajectory based on the depth image sequence within a sliding window of a predetermined size cannot simultaneously match the depth-domain motion model corresponding to the predetermined gesture and the two-dimensional motion trajectory model in the plane perpendicular to the depth domain, the size of the sliding window is increased so that more depth image frames are used as the input of gesture recognition, and the matching of the three-dimensional motion trajectory corresponding to the increased sliding window size against the motion trajectory models corresponding to the predetermined gesture is continued.
7. The gesture recognition method according to claim 2, wherein determining whether the positional relationship between the hand candidate region and other parts of the human body satisfies the anthropometry model for making the predetermined gesture comprises:
performing foreground segmentation on the depth image to obtain the human body region;
detecting the head region from the human body region and calculating the position of the head region; and
verifying whether the gesture is the predetermined gesture based on whether the vertical distance between the head position and the hand position falls within a first preset range and whether the spatial distance between the head position and the hand position falls within a second preset range.
8. A method of opening a man-machine interaction system, comprising:
obtaining a depth image sequence containing a hand region;
detecting the three-dimensional motion trajectory of a hand candidate region based on the depth image sequence containing the hand region;
recognizing a raising-hand gesture according to the three-dimensional motion trajectory of the hand candidate region; and
if the raising-hand gesture is recognized, opening the man-machine interaction system and entering the man-machine interaction state.
9. A gesture recognition device based on depth images, comprising:
a three-dimensional motion trajectory detection component for detecting the three-dimensional motion trajectory of a hand candidate region based on a depth image sequence containing the hand region; and
a gesture recognition component for recognizing a predetermined gesture according to the three-dimensional motion trajectory of the hand candidate region.
10. A method of recognizing a predetermined action of a human body part based on depth images, comprising:
detecting the three-dimensional motion trajectory of a candidate region of the human body part based on a depth image sequence containing the region of that body part; and
recognizing the predetermined action of the human body part according to the three-dimensional motion trajectory of the candidate region.
CN201210490622.8A 2012-11-27 2012-11-27 Gesture identification method and device based on depth image Expired - Fee Related CN103839040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210490622.8A CN103839040B (en) 2012-11-27 2012-11-27 Gesture identification method and device based on depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210490622.8A CN103839040B (en) 2012-11-27 2012-11-27 Gesture identification method and device based on depth image

Publications (2)

Publication Number Publication Date
CN103839040A true CN103839040A (en) 2014-06-04
CN103839040B CN103839040B (en) 2017-08-25

Family

ID=50802519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210490622.8A Expired - Fee Related CN103839040B (en) 2012-11-27 2012-11-27 Gesture identification method and device based on depth image

Country Status (1)

Country Link
CN (1) CN103839040B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040190776A1 (en) * 2003-03-31 2004-09-30 Honda Motor Co., Ltd. Gesture recognition apparatus, gesture recognition method, and gesture recognition program
CN102253713A (en) * 2011-06-23 2011-11-23 康佳集团股份有限公司 Display system orienting to three-dimensional images
CN102509074A (en) * 2011-10-18 2012-06-20 Tcl集团股份有限公司 Target identification method and device
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information

Cited By (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279526A (en) * 2014-06-13 2016-01-27 佳能株式会社 Trajectory segmentation method and device
CN105279526B (en) * 2014-06-13 2019-11-29 佳能株式会社 Divide the method and apparatus of track
CN104200491A (en) * 2014-08-15 2014-12-10 浙江省新华医院 Motion posture correcting system for human body
CN105528060A (en) * 2014-09-30 2016-04-27 联想(北京)有限公司 Terminal device and control method
CN105528060B (en) * 2014-09-30 2018-11-09 联想(北京)有限公司 terminal device and control method
CN105718037A (en) * 2014-12-05 2016-06-29 乐视致新电子科技(天津)有限公司 Method and device for identifying states of target object
CN104714650A (en) * 2015-04-02 2015-06-17 三星电子(中国)研发中心 Information input method and information input device
CN104714650B (en) * 2015-04-02 2017-11-24 三星电子(中国)研发中心 A kind of data inputting method and device
CN107835969B (en) * 2015-06-25 2021-05-25 三星电子株式会社 Electronic device including touch sensing module and method of operating the same
CN107835969A (en) * 2015-06-25 2018-03-23 三星电子株式会社 Method, electronic equipment, the method and touch-sensing module to setting touch-sensing module in the electronic device to be operated being controlled to the touch-sensing module of electronic equipment
CN106918347A (en) * 2015-09-26 2017-07-04 大众汽车有限公司 The interactive 3D navigation system of the 3D aerial views with destination county
CN106886741A (en) * 2015-12-16 2017-06-23 芋头科技(杭州)有限公司 A kind of gesture identification method of base finger identification
CN106909871A (en) * 2015-12-22 2017-06-30 江苏达科智能科技有限公司 Gesture instruction recognition methods
CN107765845A (en) * 2016-08-19 2018-03-06 奥的斯电梯公司 System and method for using the sensor network across building to carry out the far distance controlled based on gesture
CN106325509A (en) * 2016-08-19 2017-01-11 北京暴风魔镜科技有限公司 Three-dimensional gesture recognition method and system
CN109863535B (en) * 2016-10-11 2022-11-25 富士通株式会社 Motion recognition device, storage medium, and motion recognition method
CN109863535A (en) * 2016-10-11 2019-06-07 富士通株式会社 Move identification device, movement recognizer and motion recognition method
CN106570482B (en) * 2016-11-03 2019-12-03 深圳先进技术研究院 Human motion recognition method and device
CN106570482A (en) * 2016-11-03 2017-04-19 深圳先进技术研究院 Method and device for identifying body motion
CN108073851B (en) * 2016-11-08 2021-12-28 株式会社理光 Grabbing gesture recognition method and device and electronic equipment
CN108073851A (en) * 2016-11-08 2018-05-25 株式会社理光 A kind of method, apparatus and electronic equipment for capturing gesture identification
CN106846403B (en) * 2017-01-04 2020-03-27 北京未动科技有限公司 Method and device for positioning hand in three-dimensional space and intelligent equipment
CN106886750A (en) * 2017-01-04 2017-06-23 沈阳工业大学 Extracting tool movement locus recognition methods based on Kinect
CN106846403A (en) * 2017-01-04 2017-06-13 北京未动科技有限公司 The method of hand positioning, device and smart machine in a kind of three dimensions
CN107239728B (en) * 2017-01-04 2021-02-02 赛灵思电子科技(北京)有限公司 Unmanned aerial vehicle interaction device and method based on deep learning attitude estimation
CN107239728A (en) * 2017-01-04 2017-10-10 北京深鉴智能科技有限公司 Unmanned plane interactive device and method based on deep learning Attitude estimation
CN108268181A (en) * 2017-01-04 2018-07-10 奥克斯空调股份有限公司 A kind of control method and device of non-contact gesture identification
CN106933352A (en) * 2017-02-14 2017-07-07 深圳奥比中光科技有限公司 Three-dimensional human body measurement method and its equipment and its computer-readable recording medium
CN107065599A (en) * 2017-06-12 2017-08-18 山东师范大学 Wheeled robot dynamic simulation system and method based on body feeling interaction
CN107065599B (en) * 2017-06-12 2021-05-07 山东师范大学 Motion simulation system and method of wheeled robot based on somatosensory interaction
CN107590794A (en) * 2017-09-11 2018-01-16 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN108563995A (en) * 2018-03-15 2018-09-21 西安理工大学 Human computer cooperation system gesture identification control method based on deep learning
CN108815845A (en) * 2018-05-15 2018-11-16 百度在线网络技术(北京)有限公司 The information processing method and device of human-computer interaction, computer equipment and readable medium
CN108815845B (en) * 2018-05-15 2019-11-26 百度在线网络技术(北京)有限公司 The information processing method and device of human-computer interaction, computer equipment and readable medium
CN108830891A (en) * 2018-06-05 2018-11-16 成都精工华耀科技有限公司 A kind of rail splice fastener loosening detection method
CN108830891B (en) * 2018-06-05 2022-01-18 成都精工华耀科技有限公司 Method for detecting looseness of steel rail fishplate fastener
CN109033972A (en) * 2018-06-27 2018-12-18 上海数迹智能科技有限公司 A kind of object detection method, device, equipment and storage medium
CN109407842A (en) * 2018-10-22 2019-03-01 Oppo广东移动通信有限公司 Interface operation method, device, electronic equipment and computer readable storage medium
CN111107278B (en) * 2018-10-26 2022-03-01 北京微播视界科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111107278A (en) * 2018-10-26 2020-05-05 北京微播视界科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111563401A (en) * 2019-02-14 2020-08-21 上海汽车集团股份有限公司 Vehicle-mounted gesture recognition method and system, storage medium and electronic equipment
CN110347260B (en) * 2019-07-11 2023-04-14 歌尔科技有限公司 Augmented reality device, control method thereof and computer-readable storage medium
CN110347260A (en) * 2019-07-11 2019-10-18 歌尔科技有限公司 A kind of augmented reality device and its control method, computer readable storage medium
CN112307876A (en) * 2019-07-25 2021-02-02 和硕联合科技股份有限公司 Joint point detection method and device
CN112307876B (en) * 2019-07-25 2024-01-26 和硕联合科技股份有限公司 Method and device for detecting node
CN110834331A (en) * 2019-11-11 2020-02-25 路邦科技授权有限公司 Bionic robot action control method based on visual control
CN113139402A (en) * 2020-01-17 2021-07-20 海信集团有限公司 A kind of refrigerator
CN111401166A (en) * 2020-03-06 2020-07-10 中国科学技术大学 Robust gesture recognition method based on electromyographic information decoding
CN111651038A (en) * 2020-05-14 2020-09-11 香港光云科技有限公司 Gesture recognition control method based on ToF and control system thereof
CN112089595A (en) * 2020-05-22 2020-12-18 未来穿戴技术有限公司 Login method of neck massager, neck massager and storage medium
CN111913574B (en) * 2020-07-15 2024-04-30 抖音视界有限公司 Method, apparatus, electronic device, and computer-readable medium for controlling device
CN111913574A (en) * 2020-07-15 2020-11-10 北京字节跳动网络技术有限公司 Method, apparatus, electronic device and computer readable medium for controlling device
CN112099634A (en) * 2020-09-17 2020-12-18 珠海格力电器股份有限公司 Interactive operation method and device based on head action, storage medium and terminal
CN112306235B (en) * 2020-09-25 2023-12-29 北京字节跳动网络技术有限公司 Gesture operation method, device, equipment and storage medium
CN112306235A (en) * 2020-09-25 2021-02-02 北京字节跳动网络技术有限公司 Gesture operation method, device, equipment and storage medium
CN112835449A (en) * 2021-02-03 2021-05-25 青岛航特教研科技有限公司 Virtual reality and somatosensory device interaction-based safety somatosensory education system
CN113282164A (en) * 2021-03-01 2021-08-20 联想(北京)有限公司 Processing method and device
CN112686231B (en) * 2021-03-15 2021-06-01 南昌虚拟现实研究院股份有限公司 Dynamic gesture recognition method and device, readable storage medium and computer equipment
CN112686231A (en) * 2021-03-15 2021-04-20 南昌虚拟现实研究院股份有限公司 Dynamic gesture recognition method and device, readable storage medium and computer equipment
CN113158912B (en) * 2021-04-25 2023-12-26 北京华捷艾米科技有限公司 Gesture recognition method and device, storage medium and electronic equipment
CN113158912A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Gesture recognition method and device, storage medium and electronic equipment
CN114296543A (en) * 2021-11-15 2022-04-08 北京理工大学 Fingertip force detection and gesture recognition intelligent interaction system and intelligent ring
CN114415830A (en) * 2021-12-31 2022-04-29 科大讯飞股份有限公司 Air input method and device, computer readable storage medium
CN114639172A (en) * 2022-05-18 2022-06-17 合肥的卢深视科技有限公司 High-altitude parabolic early warning method and system, electronic equipment and storage medium
CN116627260A (en) * 2023-07-24 2023-08-22 成都赛力斯科技有限公司 Method and device for idle operation, computer equipment and storage medium

Also Published As

Publication number Publication date
CN103839040B (en) 2017-08-25

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170825

Termination date: 20191127