CN104951735A - Kinect detecting method - Google Patents
- Publication number: CN104951735A (application CN201410117955.5A)
- Authority: CN (China)
- Prior art keywords: finger, image block, palm, point, fingertip
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Analysis (AREA)
Abstract
The object of the present invention is to provide a Kinect-style motion-sensing detection method for identifying at least one hand feature of a human body. Missing fingertips are estimated from the palm-center and fingertip positions of the hand in a previous image block together with the displacements of the fingertips currently visible, so that the positions of all fingertips can be recovered despite deficiencies in the current image block. A patting gesture of the hand is recognized when, across several image blocks, all fingertips except the thumb's move downward, the palm center remains nearly stationary, and the spacing between the two edge lines of each non-thumb finger increases.
Description
Technical field
The present invention relates to a motion-sensing detection method, and in particular to a motion-sensing detection method for identifying features of a human hand.
Background art
In recent years, gesture recognition based on image motion, and systems built on it, have developed rapidly, especially gaming systems. Analyzing a user's motions by computer to execute commands has become the most promising interactive approach, and making the interface between user and computer friendlier is an increasingly important problem. Traditional solutions often require a sensor to be mounted on the user's finger; although this improves the accuracy of hand-motion detection, it also burdens the user. A better alternative is to treat the user's hand directly as a command-issuing instrument, analyzing the hand's movements by optical infrared sensing to input commands and control the operating system of a computer or peripheral device. However, such traditional optical image analysis methods are too complex and insufficiently stable.
Therefore, how to let users interact with a computer interface through free hand gestures is a problem demanding a prompt solution.
Summary of the invention
In view of the above problems, the present invention provides a motion-sensing detection method that estimates missing fingertip apexes from the palm center point and fingertip apex positions of the human hand in a previous image block together with the displacements of the current fingertip apexes, so as to find the positions of all fingertip apexes and thereby make up for deficiencies in the current image block.
An object of the present invention is to provide a motion-sensing detection method for identifying at least one hand feature of a human body, characterized by comprising the following steps: confirming the positions of a palm center point of a human hand and the fingertip apexes of five fingers in a previous image block; confirming, in a current image block, a group of fingertip apexes whose motion directions are most consistent; estimating the position of at least one missing fingertip apex from the displacements between the fingertip apexes confirmed in the current image block and those in the previous image block; correcting the position of the palm center point found through the at least one estimated fingertip apex; and recognizing a patting gesture of the hand when the positions of all fingertip apexes and the palm center point change across multiple image blocks such that all fingertip apexes except the thumb's move downward, the palm center point remains nearly stationary, and the spacing between the finger edge lines of each non-thumb finger increases.
According to one implementation aspect of the present invention, the step of confirming the palm center point of the human hand and the fingertip apexes of the five fingers in a previous image block further comprises the following steps: picking out all candidate fingertip edge lines in each of multiple image blocks; calculating at least one finger edge direction from each candidate fingertip edge line; comparing each finger edge direction with a finger direction estimate obtained from the image block, saving the finger edge direction closest to the finger direction estimate as a best finger direction, and saving the finger length estimate corresponding to that closest finger edge direction as a best finger length; judging at least one candidate finger edge line in the image blocks according to the best finger direction and the best finger length; and calculating the point on the candidate fingertip edge line whose distance to the line connecting the two endpoints of that edge line is maximal, and taking that point as a fingertip apex.
According to one implementation aspect of the present invention, the step of confirming the palm center point of the human hand and the fingertip apexes of the five fingers in a previous image block further comprises the following steps: obtaining the line that passes through the fingertip apex and is perpendicular to the line connecting the two endpoints of the candidate fingertip edge line, and taking it as a finger center line; and obtaining the intersection of the finger center lines corresponding to the thumb and the middle finger, and taking it as a palm center point.
According to one implementation aspect of the present invention, the method further comprises the following step: according to the distance between the palm center point and the end point of the finger center line corresponding to the thumb, recognizing as a palm the circle formed with the palm center point as the center and that distance as the radius.
According to one implementation aspect of the present invention, the step of picking out all candidate fingertip edge lines in each of multiple image blocks further comprises the following steps: picking out, in each image block, all curves bending in one direction; and picking out, among those curves, at least one curve whose arc subtends a central angle of approximately π/2, as a candidate fingertip edge line.
According to one implementation aspect of the present invention, the step of picking out all curves bending in one direction in each image block further comprises the following steps: calculating the gradient magnitude and gradient direction of each pixel in the image block; extracting multiple image edges in the image block with the Canny edge detection operator; and judging each of the image edges one by one, so that when the gradient directions of any three consecutive points in an image edge vary monotonically (gradually increasing or gradually decreasing), that image edge is identified as a curve bending in one direction.
According to one implementation aspect of the present invention, the step of calculating at least one finger edge direction from each candidate fingertip edge line further comprises the following steps: from the two endpoints of the candidate fingertip edge line, substituting the finger length estimate and the finger direction estimate obtained from the image block into a predetermined equation, so as to find the corresponding pixels in the image block; and calculating the at least one finger edge direction from the mean of the gradient directions of those pixels; wherein the predetermined equation is:

tan(D_finger) = (y − y_n)/(x − x_n);

where x ∈ {x_n, …, x_n + L_max × cos(D_finger)}, y ∈ {y_n, …, y_n + L_max × sin(D_finger)}, and n ∈ {1, 2}; D_finger represents a predetermined tangential direction at an endpoint of the candidate fingertip edge line; and L_max represents a predetermined maximum finger length satisfying the relationship:

L_max = W_estimate × max(R_L/W),

where W_estimate represents the distance between the two endpoints of the candidate fingertip edge line, max(R_L/W) is a predetermined value representing the maximum ratio of finger length to finger width, and the coordinates of the two endpoints of the fingertip edge line are (x_n, y_n) (n ∈ {1, 2}).
Brief description of the drawings
Fig. 1 shows a schematic diagram of the device architecture of a demonstration example for identifying at least one external appearance feature of a human body or an animal.
Fig. 2 shows a timing schematic diagram used by a motion-sensing detection device.
Fig. 3 shows a schematic diagram of identifying at least one hand feature of a human body in an image according to the motion-sensing detection method of an embodiment of the present invention.
Fig. 4 shows a schematic diagram of a fingertip edge line.
Fig. 5 shows a schematic diagram of identifying at least one hand feature of a human body in an image according to the motion-sensing detection method of another embodiment of the present invention.
Fig. 6 shows an optional flow chart of the motion-sensing detection method according to an embodiment of the present invention.
Fig. 7 shows another optional flow chart of the motion-sensing detection method according to an embodiment of the present invention.
Fig. 8 shows another optional flow chart of the motion-sensing detection method according to an embodiment of the present invention.
Fig. 9 shows another optional flow chart of the motion-sensing detection method according to an embodiment of the present invention.
Figure 10 shows another optional flow chart of the motion-sensing detection method according to an embodiment of the present invention.
Figure 11 shows a flow chart of the motion-sensing detection method according to an embodiment of the present invention.
Embodiment
To further illustrate the embodiments, the present invention provides drawings. These drawings are part of the disclosure of the present invention; they mainly illustrate the embodiments and can be read together with the related description of the specification to explain the operating principles of the embodiments. With reference to these contents, those of ordinary skill in the art will understand other possible embodiments and the advantages of the present invention. Elements in the figures are not drawn to scale, and similar reference symbols are commonly used to denote similar elements.
The present invention provides a motion-sensing detection method that estimates missing fingertip apexes from the palm center point and fingertip apex positions of the human hand in a previous image block together with the displacements of the current fingertip apexes, so as to find the positions of all fingertip apexes and thereby make up for deficiencies in the current image block.
In practical applications, the deficiencies of the current image block may arise because some fingers are occluded while the hand is moving, or because motion speed degrades image quality so that positions cannot be identified with certainty.
As to the hardware architecture, a demonstration example is provided here, but the present invention is not limited thereto. Please refer to Fig. 1, which shows a schematic diagram of the device architecture of a demonstration example applying the motion-sensing detection method of an embodiment of the present invention to identify at least one external appearance feature of a human body or an animal. As shown in the figure, the motion-sensing detection device 1 comprises a lens module 11, an image sensing unit 12 disposed beside the lens module 11, a light source supply unit 13, an image processing unit 14, and a motion-sensing identification module 15.
The lens module 11 comprises multiple lenses; through the optical characteristics of these lenses, the path of the light entering from a light inlet is changed, thereby forming an image on the image sensing unit 12.
In the present embodiment, the image sensing unit 12 is exemplified as a CMOS sensor and the light source supply unit 13 as an infrared emission light source, but the present invention is not limited thereto. In other embodiments, the image sensing unit can be a sensor of other types, such as a CCD sensor, and the light source supply unit can be an indoor lamp, an RGB lamp comprising red, green, and blue light sources, or a light source supply unit of other types.
The image processing unit 14 controls the operation of the light source supply unit 13 and the image sensing unit 12, so that the light source supply unit 13 provides a light source and the image sensing unit 12 senses an image. In detail, the light source supply unit 13 is controlled by the image processing unit 14 to blink in a pattern of being lit for n frames and then extinguished for one frame, while the image sensing unit 12 is controlled by the image processing unit 14 to sense the image periodically frame by frame; after each frame is sensed while the light source supply unit 13 is lit, the image processing unit 14 operates to process that frame.
Please also refer to the timing schematic diagram of Fig. 2, which provides a demonstration example of the working timing of the image sensing unit 12, the light source supply unit 13, and the image processing unit 14 to explain the present embodiment. As can be seen from Fig. 2, the light source supply unit 13 in the demonstration example blinks in a pattern of being lit for four frames and then extinguished for one frame, and for each frame the image sensing unit 12 senses the image and sends it to the image processing unit 14. The image processing unit 14 therefore receives one frame sensed while the light source supply unit 13 is not lit, followed by four frames sensed while it is lit continuously. After receiving each of the four frames sensed while the light source supply unit 13 is lit, the image processing unit 14 can use the frame sensed by the image sensing unit 12 while the light source supply unit 13 was extinguished to weaken the background interference in the frames sensed while the light source was lit, so as to raise their contrast, and then output those frames to the motion-sensing identification module 15. After receiving the output of the image processing unit 14, the motion-sensing identification module 15 can execute the motion-sensing detection method to identify at least one external appearance feature of the human body or animal.
By having the image processing unit 14 control the light source supply unit 13 and the image sensing unit 12 to operate cooperatively, images are captured both when the light source supply unit 13 is extinguished and when it is lit; the background-light interference in the lit images is weakened using the extinguished image, strengthening the contrast of the lit images and making the foreground object more salient. In other embodiments, the image processing unit 14 can further control the brightness, spectrum, and color temperature of the light source supply unit 13 to optimize the contrast of each characteristic and provide high-quality images for object identification.
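The background-weakening step above can be sketched as a simple frame subtraction. The exact arithmetic is an assumption (the patent does not spell it out); the function names are illustrative:

```python
import numpy as np

def enhance_lit_frames(off_frame, lit_frames):
    """Subtract the frame captured with the light source off from each frame
    captured with it on, so ambient background light cancels and the lit
    foreground stands out. Inputs are 8-bit grayscale arrays."""
    off = off_frame.astype(np.int16)
    return [np.clip(f.astype(np.int16) - off, 0, 255).astype(np.uint8)
            for f in lit_frames]
```

Casting to a signed type before subtracting avoids the unsigned-wraparound that would otherwise turn dark pixels into bright artifacts.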
In other embodiments, if the image sensing unit operates at a frame rate of 90 fps, it is suggested that the light source supply unit blink in a pattern of four frames lit and one frame extinguished, giving a report rate of 72; if the image sensing unit operates at 60 fps, it is suggested that the light source supply unit blink in a pattern of nine frames lit and one frame extinguished, giving a report rate of 54. On the other hand, when the selected light source is an RGB lamp, environments such as indoors, outdoors, cloudy weather, and spotlighting can be accommodated: the images captured while the light source supply unit is extinguished are analyzed for the weaker of the infrared, red, green, and blue spectra, and the lamp of the weaker spectrum is used to provide contrast-enhancing illumination.
The motion-sensing identification module 15 can extract the contrast of an object image from characteristics such as brightness, color, and spectrum, and analyze the edge lines of the contrast to filter out identifiable objects. For the motion-sensing detection method executed by the motion-sensing identification module 15, please refer to the following description.
Regarding the motion-sensing detection method of one embodiment of the present invention, please refer to Figure 11. This method is used to identify at least one hand feature of a human body. The first step of this method is to confirm the positions of a palm center point of the human hand and the fingertip apexes of the five fingers in a previous image block (step S1110). To obtain the fingertip apex positions, it is preferable to first pick out the fingertip edge lines and then determine the finger center lines.
Regarding the method for identifying the fingertip edge lines and then determining the finger center lines, a demonstration example is also provided here. Please refer to Fig. 3 and Fig. 6 together: Fig. 3 shows a schematic diagram of identifying at least one hand feature of a human body in an image according to the motion-sensing detection method of an embodiment of the present invention, and Fig. 6 shows an optional flow chart of the method according to an embodiment of the present invention. As shown in the figures, each frame of the video is first, optionally, converted into multiple image blocks through a series of image processing steps, which may include video conversion, such as converting an RGB image into a 256-level grayscale image, and Gaussian-convolution blurring to remove noise such as burrs and bright spots. Then, the candidate fingertip edge lines in each of the multiple image blocks are picked out (step S100). Referring additionally to Fig. 7, which shows in more detail an optional flow chart of the method according to an embodiment of the present invention: to pick out the candidate fingertip edge lines, the curves bending in one direction are first picked out in each image block (step S110); then, among these curves, at least one curve whose arc subtends a central angle close to a predetermined value is picked out as a candidate fingertip edge line (step S120).
Referring additionally to Fig. 8, which shows in more detail another optional flow chart of the method according to an embodiment of the present invention, the curves bending in one direction in each image block can be picked out according to the flow of Fig. 8. In step S111, the gradient magnitude and gradient direction of each pixel in the image block are computed. For example, the image block can be convolved with Sobel operators in the horizontal and vertical directions, and the two directional gradients can then be used to compute the gradient magnitude and gradient direction of each pixel in the Euclidean plane. Then, in step S112, multiple image edges in the image block are extracted with the Canny edge detection operator; these image edges are preferably edge lines with relatively significant contrast. Then, in step S113, combining the gradient-direction relations between consecutive pixels within each image edge, each of the image edges is judged one by one: when the gradient directions of any three consecutive points in an image edge vary monotonically (gradually increasing or gradually decreasing), that image edge is identified as a curve bending in one direction.
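The monotonicity test of step S113 can be sketched as follows. This is a minimal sketch: the three-point window follows the text, while the function name and input format (a sequence of gradient directions sampled along one Canny edge) are assumptions:

```python
import numpy as np

def is_one_direction_bend(grad_dirs, window=3):
    """Step S113 sketch: an edge is a curve bending in one direction when the
    gradient directions of every `window` consecutive points along it vary
    monotonically (all increasing or all decreasing)."""
    g = np.asarray(grad_dirs, dtype=float)
    if g.size < window:
        return False
    d = np.diff(g)
    # every run of (window - 1) consecutive differences must share one sign
    for i in range(len(d) - (window - 2)):
        seg = d[i:i + window - 1]
        if not (np.all(seg > 0) or np.all(seg < 0)):
            return False
    return True
```

An edge whose gradient direction oscillates (the hand's silhouette between fingers, for instance) fails the test and is discarded before fingertip screening.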
The motion-sensing detection method is designed from observations of human anatomy, so it can identify features that fit the actual characteristics of the human body, with good identification accuracy. For example, to identify hand features, observe the geometry of an extended finger: a finger can be approximated as a cylinder, whose projection onto the two-dimensional image plane can be regarded as two parallel lines capped by a near-semicircular fingertip curve. Furthermore, according to general human physiology, the middle finger of a human hand is the longest, so the maximum ratio of finger length to finger width (denoted max(R_L/W)) can be set as a predetermined value. Observation of human fingertip contours suggests setting the aforementioned predetermined central-angle value to π/2. Accordingly, with Sx (x ∈ {1, 2, 3, …}) denoting the fingertip effective regions, the curves whose arcs subtend a central angle of approximately π/2 are singled out as candidate fingertip edge lines.
After the candidate fingertip edge lines in each image block are picked out in step S100, at least one finger edge direction is calculated from each candidate fingertip edge line (step S200). In detail, as shown in Figure 10, step S210 is performed first: from the two endpoints of the candidate fingertip edge line, a finger length estimate and a finger direction estimate obtained from the image block are substituted into a predetermined equation, so as to find the corresponding pixels in the image block. This predetermined equation is designed from the external appearance of a finger. The distance between the two endpoints of the candidate fingertip edge line can be denoted W_estimate; thus the maximum finger length L_max corresponding to the candidate fingertip edge line can be expressed as W_estimate × max(R_L/W), and the line-segment equation representing a finger edge is:

tan(D_finger) = (y − y_n)/(x − x_n)    equation (1);

and the line segment length L is:

L = [(x_n − x) × (x_n − x) + (y_n − y) × (y_n − y)]^(1/2)    equation (2);

where x ∈ {x_n, …, x_n + L_max × cos(D_finger)}, y ∈ {y_n, …, y_n + L_max × sin(D_finger)}, and n ∈ {1, 2}; D_finger represents a predetermined tangential direction at an endpoint of the candidate fingertip edge line, and the coordinates of the two endpoints of the fingertip edge line are (x_n, y_n) (n ∈ {1, 2}). Accordingly, the finger length estimate L_estimate and a finger direction estimate D_finger_estimate obtained from the image block are substituted into equations (1) and (2); assuming L_estimate lies in (0, L_max] and D_finger_estimate lies in [D_finger − θ, D_finger + θ], the pixels Px corresponding to the line segment are found in the image block by matching. Then, as described in step S220, the at least one finger edge direction is calculated from the mean value G_mean of the gradient directions G_x of those pixels Px. Because the gradient direction is perpendicular to the finger edge direction, once G_mean is obtained, the finger edge direction D_estimate_edge can be obtained as G_mean ± π/2.
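Steps S210 and S220 can be sketched as follows. The unit sampling step and the return of both ±π/2 candidates are illustrative assumptions consistent with equations (1) and (2):

```python
import math

def finger_edge_pixels(x0, y0, d_finger, l_max, step=1.0):
    """Step S210 sketch: sample pixel coordinates along a ray of direction
    d_finger from an endpoint (x0, y0) of a candidate fingertip edge line,
    out to the maximum finger length l_max (equations (1) and (2))."""
    pts, l = [], 0.0
    while l <= l_max:
        pts.append((round(x0 + l * math.cos(d_finger)),
                    round(y0 + l * math.sin(d_finger))))
        l += step
    return pts

def finger_edge_direction(grad_dirs):
    """Step S220 sketch: the mean gradient direction G_mean of the sampled
    pixels is perpendicular to the finger edge, so the edge direction is
    G_mean plus or minus pi/2; both candidates are returned."""
    g_mean = sum(grad_dirs) / len(grad_dirs)
    return g_mean + math.pi / 2, g_mean - math.pi / 2
```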
Then, after the finger edge direction D_estimate_edge is obtained, step S300 compares each finger edge direction D_estimate_edge with the finger direction estimate D_finger_estimate obtained from the image block. The finger edge direction D_estimate_edge closest to D_finger_estimate is saved as a best finger direction D_finger_best, and the finger length estimate L_estimate corresponding to that closest finger edge direction is saved as a best finger length L_finger_best.
The above steps are carried out for each different finger, i.e. the thumb, index finger, middle finger, ring finger, and little finger, so each finger has its own best finger direction and best finger length. After the best finger direction D_finger_best and the best finger length L_finger_best are obtained, the candidate finger edge lines in the image block can be judged accordingly (step S400). The criterion can be that pixels Px which form a finger edge, whose angle lies within a range close to the best finger direction D_finger_best, and whose length lies within a range close to the best finger length L_finger_best of a fingertip edge line, are deemed to constitute a candidate finger edge line.
After the candidate finger edge lines in the image block are judged, the fingertip apex can be determined. For this, please refer additionally to Fig. 4, which shows a schematic diagram of a fingertip edge line. It can be seen that the fingertip apex P3 on the candidate fingertip edge line has the maximum distance to the line connecting the two endpoints. Therefore, the first step in determining the finger center line is to calculate the point on the candidate fingertip edge line whose distance to the line connecting the two endpoints of that edge line is maximal, and to take that point as the fingertip apex P3 (step S500). Expressed mathematically, let (x_n, y_n) (n ∈ {1, 2, 3, …, m}) denote the points on the candidate fingertip edge line. By the vector cross product, the distance measure SΔ_P1P2Pn from a point on the candidate fingertip edge line to the line connecting the two endpoints is:

SΔ_P1P2Pn = [(x_2 − x_1) × (y_n − y_1) − (y_2 − y_1) × (x_n − x_1)]/2;

and the (x_n, y_n) at which SΔ_P1P2Pn attains its maximum is calculated; this (x_n, y_n) is the coordinate of the fingertip apex P3.
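The apex search of step S500 can be sketched directly from the cross-product formula above; the point list is assumed ordered so that its first and last entries are the endpoints P1 and P2:

```python
def fingertip_apex(points):
    """Step S500 sketch: the point of a candidate fingertip edge line farthest
    from the chord joining its endpoints. |S| = |(x2-x1)(yn-y1) -
    (y2-y1)(xn-x1)| / 2 is the area of triangle P1-P2-Pn; since the base P1P2
    is fixed, the largest area marks the largest point-to-chord distance."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    def area(p):
        xn, yn = p
        return abs((x2 - x1) * (yn - y1) - (y2 - y1) * (xn - x1)) / 2.0
    return max(points, key=area)
```

Taking the absolute value makes the test independent of which side of the chord the curve bows toward.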
Referring additionally to Fig. 9, which shows another optional flow chart of the motion-sensing detection method according to an embodiment of the present invention, the determination of a palm center point of the hand is taken as an example. Here, the palm center point can be derived by determining the finger center lines. Following step S500, step S600 obtains the line that passes through the fingertip apex P3 and is perpendicular to the line connecting the two endpoints of the candidate fingertip edge line, and takes it as a finger center line. For example, the length of the finger center line can be determined from the length of the corresponding finger edge line. Then, referring additionally to Fig. 5, which shows a schematic diagram of identifying at least one hand feature of a human body in an image according to the motion-sensing detection method of another embodiment of the present invention: after the finger center lines are obtained, step S700 finds the intersection of the finger center lines corresponding to the thumb and the middle finger, and takes it as a palm center point. In detail, first, the finger center line of the middle finger is extended from its end point M3, forming a straight line S2M3, denoted L_S2M3; then, a perpendicular to L_S2M3 is drawn through the end point M1 of the finger center line of the thumb, meeting it at point C; this point C is the palm center point.
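The construction of step S700 amounts to dropping a perpendicular from M1 onto the line L_S2M3 and taking its foot as C. A plane-geometry sketch, with the point names of Fig. 5:

```python
def palm_center(s2, m3, m1):
    """Step S700 sketch: the palm center C is the foot of the perpendicular
    dropped from the thumb center-line endpoint M1 onto the extended
    middle-finger center line through S2 and M3."""
    (ax, ay), (bx, by), (px, py) = s2, m3, m1
    dx, dy = bx - ax, by - ay
    # scalar projection of (M1 - S2) onto the line direction
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)
```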
In another embodiment, when the hand feature to be identified is the palm, a circle can be drawn with the found palm center point C as the center and the distance between C and the end point M1 of the thumb's finger center line as the radius. This circle can represent an inscribed circle of the palm; in this way, both the position of the palm center and the extent of the palm can be determined.
After the palm center point and the fingertip apexes of the five fingers of the hand in a previous image block are confirmed, a group of fingertip apexes whose motion directions are most consistent is then confirmed in a current image block (step S1120). For example, by combining the information of the hand recognized previously with the currently detected finger-like objects, the group of fingertips with the most consistent motion directions can be found.
Then, in step S1130, the position of the at least one missing fingertip apex is estimated from the displacements between the fingertip apexes confirmed in the current image block and those in the previous image block. In detail, for each fingertip apex corresponding to the same finger in the previous and current image blocks, parameters such as the displacement, direction, and speed of that finger can be calculated; combined with the close kinematic relationship among the fingers of a hand, the current fingertip apex position of each missing finger can then be computed by interpolation or other mathematical means. Thus, even when the fingertip apex positions of all fingers are hard to pick out from the image, this method can still find them all, and the changes of every fingertip apex position can then be tracked continuously.
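One simple instance of step S1130 can be sketched as follows. This is an assumption, not the patent's exact formula: the mean displacement of the fingertips visible in both frames is applied to the previous position of each fingertip missing from the current frame:

```python
def estimate_missing_tips(prev_tips, curr_tips):
    """Step S1130 sketch: prev_tips / curr_tips map finger names to (x, y)
    fingertip positions; fingers missing in the current frame are absent
    from curr_tips and are estimated from the shared mean displacement."""
    common = [f for f in prev_tips if f in curr_tips]
    dx = sum(curr_tips[f][0] - prev_tips[f][0] for f in common) / len(common)
    dy = sum(curr_tips[f][1] - prev_tips[f][1] for f in common) / len(common)
    est = dict(curr_tips)
    for f in prev_tips:
        if f not in curr_tips:
            est[f] = (prev_tips[f][0] + dx, prev_tips[f][1] + dy)
    return est
```

A fuller implementation would weight the displacement per finger using direction and speed, as the text suggests.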
Then, in step S1140, the position of the palm center point found through the at least one estimated fingertip apex is corrected. Here, a Kalman filter based on the equations of motion is used as an example for the correction, but the present invention is not limited thereto.
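The Kalman correction can be sketched in one dimension. This is only an illustrative constant-position filter under assumed noise parameters; the patent names a motion-equation Kalman filter without giving its state model:

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Minimal 1-D Kalman filter sketch: q is the process-noise variance,
    r the measurement-noise variance; returns the smoothed estimates."""
    x, p, out = measurements[0], 1.0, []
    for z in measurements:
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update toward the measurement z
        p *= (1 - k)                # uncertainty shrinks after the update
        out.append(x)
    return out
```

Applied per coordinate of the palm center point, this damps the jitter introduced when some fingertips are estimated rather than observed.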
In other embodiments, not only can the current positions and changes of the fingertip apexes and the palm center point be tracked, but hand gestures can also be recognized from these data. These recognition mechanisms are all designed from observed features of human hand gestures, so they fit the actual characteristics of the human body and achieve good recognition accuracy. The first example is recognizing a patting gesture of the hand. The recognition mechanism is: when the positions of all fingertip apexes (including those picked out directly and those missing but estimated) and the palm center point change across multiple image blocks such that all fingertip apexes except the thumb's move downward, the palm center point remains nearly stationary, and the spacing between the finger edge lines of each non-thumb finger increases, a patting gesture of the hand is recognized. The second example is recognizing a closing gesture of the hand. The recognition mechanism is: when the positions of all fingertip apexes and the palm center point change across multiple image blocks such that all fingertip apexes except the thumb's move downward, the angles between the finger center lines of the remaining four fingers keep decreasing, the thumb's finger center line moves ever closer to the palm center point, and all fingertip apexes eventually disappear, a closing gesture of the hand is recognized. The third example is recognizing a rotation gesture of the hand. The recognition mechanism is: when the angles of all finger center lines change in the same direction and at the same rate while the palm center point remains nearly stationary, a rotation gesture of the hand is recognized. The fourth example is recognizing a waving gesture of the hand. The recognition mechanism is: when the angles of all finger center lines simultaneously alternate between clockwise and counterclockwise changes while the palm center point remains nearly stationary, a waving gesture of the hand is recognized.
It can be learned from the above that the motion-sensing detection method of the present invention can estimate missing fingertip vertices even while the hand is in motion, thereby improving the accuracy of tracking hand features, and can further recognize different hand gestures, providing a more complete motion-sensing recognition function.
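As a loose illustration of the missing-fingertip estimation described above (the data layout and function names here are assumptions of this sketch, not taken from the patent), a missing vertex can be recovered by shifting its previous position by the average displacement of the fingertips that were still detected:

```python
# Sketch: estimate missing fingertip vertices from the average
# displacement of the fingertips that were detected in both blocks.

def estimate_missing_tips(prev_tips, curr_tips):
    """prev_tips: dict finger -> (x, y) from the previous image block.
    curr_tips: dict finger -> (x, y) detected in the current block
               (missing fingers are simply absent from this dict)."""
    matched = [f for f in prev_tips if f in curr_tips]
    if not matched:
        return dict(curr_tips)
    # Average displacement of the consistently moving fingertips.
    dx = sum(curr_tips[f][0] - prev_tips[f][0] for f in matched) / len(matched)
    dy = sum(curr_tips[f][1] - prev_tips[f][1] for f in matched) / len(matched)
    estimated = dict(curr_tips)
    for f in prev_tips:
        if f not in curr_tips:
            # Shift the previous position by the group displacement.
            estimated[f] = (prev_tips[f][0] + dx, prev_tips[f][1] + dy)
    return estimated
```

A production implementation would presumably weight the displacements by motion consistency, as the claims describe, rather than taking a plain average.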
The above describes various embodiments of the present invention, whose features may be implemented singly or in different combinations. The disclosed embodiments therefore illustrate the principles of the present invention and should not be taken to limit the invention to the embodiments disclosed. Furthermore, the foregoing description and the accompanying drawings are for illustration only and do not limit the invention; other changes or combinations of elements are possible without departing from the spirit and scope of the present invention.
[Main element symbol description]

1 motion-sensing detection device
11 lens module
12 image sensing unit
13 light source supply unit
14 image processing unit
15 motion-sensing recognition module
S100, S110, S111, S112, S113, S120, S200, S210, S220, S300, S400, S500, S600, S700, S1110, S1120, S1130, S1140 steps
Claims (7)
1. A motion-sensing detection method for tracking at least one hand feature of a human body, characterized by comprising the following steps:
confirming the positions of a palm-center point and the fingertip vertices of the five fingers of a hand in a previous image block;
confirming, in a current image block, a group of fingertip vertices whose motion directions are the most consistent;
estimating the position of at least one missing fingertip vertex from the displacements between the fingertip vertices confirmed in the current image block and the corresponding fingertip vertices in the previous image block;
revising the position of the palm-center point found from the at least one estimated fingertip vertex; and
recognizing a patting gesture of the hand when the positions of all the fingertip vertices and the palm-center point change across a plurality of image blocks such that all fingertip vertices except that of the thumb move downward, the palm-center point remains nearly unchanged, and the spacing between the finger edge lines of each same finger other than the thumb increases.
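As a rough illustration only, and not part of the claims, the patting-gesture condition of claim 1 might be coded as follows. The `palm_tol` threshold and the data layout are assumptions of this sketch:

```python
# Sketch of the claim-1 patting rule: all non-thumb fingertips move
# downward, the palm center stays nearly still, and the spacing between
# each non-thumb finger's two edge lines widens. Image y grows downward.

def is_pat_gesture(prev, curr, palm_tol=3.0):
    """prev/curr: dicts with 'tips' {finger: (x, y)}, 'palm' (x, y),
    and 'edge_gap' {finger: float} (spacing between a finger's two
    finger edge lines)."""
    tips_down = all(curr['tips'][f][1] > prev['tips'][f][1]
                    for f in curr['tips'] if f != 'thumb')
    palm_still = (abs(curr['palm'][0] - prev['palm'][0]) <= palm_tol and
                  abs(curr['palm'][1] - prev['palm'][1]) <= palm_tol)
    gaps_grow = all(curr['edge_gap'][f] > prev['edge_gap'][f]
                    for f in curr['edge_gap'] if f != 'thumb')
    return tips_down and palm_still and gaps_grow
```

The claim evaluates the change across a plurality of image blocks; this sketch compares just two for brevity.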
2. The motion-sensing detection method as claimed in claim 1, characterized in that the step of confirming the positions of a palm-center point and the fingertip vertices of the five fingers of a hand in a previous image block further comprises the following steps:
picking out all candidate fingertip edge lines in each image block of a plurality of image blocks;
calculating at least one finger edge direction from each candidate fingertip edge line;
comparing each finger edge direction with a finger direction estimate obtained from the image block, saving the at least one finger edge direction closest to the finger direction estimate as a best finger direction, and saving the finger length estimate corresponding to that closest finger edge direction as a best finger length;
judging at least one candidate finger edge line in the image blocks according to the best finger direction and the best finger length; and
calculating the point on the candidate fingertip edge line having the maximum distance to the line connecting the two end points of the candidate fingertip edge line, and taking this point as a fingertip vertex.
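The apex step at the end of claim 2 reduces to maximizing point-to-chord distance. A minimal sketch, with illustrative names not taken from the patent:

```python
# Sketch: the fingertip vertex is the point on the candidate fingertip
# edge line farthest from the chord joining its two end points.

def fingertip_apex(edge_points):
    """edge_points: ordered (x, y) samples of a candidate fingertip edge line."""
    (x1, y1), (x2, y2) = edge_points[0], edge_points[-1]
    # Chord as a line ax + by + c = 0; distance is |ax + by + c| / sqrt(a^2 + b^2).
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = (a * a + b * b) ** 0.5 or 1.0
    return max(edge_points, key=lambda p: abs(a * p[0] + b * p[1] + c) / norm)
```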
3. The motion-sensing detection method as claimed in claim 2, characterized in that the step of confirming the positions of a palm-center point and the fingertip vertices of the five fingers of a hand in a previous image block further comprises the following steps:
obtaining a line passing through the fingertip vertex and perpendicular to the line connecting the two end points of the candidate fingertip edge line, and taking it as a finger center line; and
obtaining the intersection point of the finger center lines corresponding to the thumb and the middle finger, and taking it as a palm-center point.
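Claim 3 takes the palm-center point as the intersection of the thumb and middle-finger center lines. A minimal line-intersection sketch, assuming (for this illustration only) that each center line is given as a point plus a direction vector:

```python
# Sketch: intersect the thumb and middle-finger center lines to obtain
# the palm-center point. Lines are p + t*d in parametric form.

def line_intersection(p1, d1, p2, d2):
    """Intersect lines p1 + t*d1 and p2 + s*d2; returns (x, y), or None if parallel."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        return None  # parallel center lines: no unique palm-center point
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```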
4. The motion-sensing detection method as claimed in claim 3, characterized by further comprising the following step:
according to the distance between the palm-center point and the far end point of the finger center line corresponding to the thumb, recognizing as a palm the circle formed with the palm-center point as the center and that distance as the radius.
5. The motion-sensing detection method as claimed in claim 2, characterized in that the step of picking out all candidate fingertip edge lines in each image block of a plurality of image blocks further comprises the following steps:
picking out, in each of the image blocks, all curves that bend in one direction; and
picking out, among the curves that bend in one direction, at least one curve whose arc subtends a central angle of approximately π/2, as the candidate fingertip edge line.
6. The motion-sensing detection method as claimed in claim 5, characterized in that the step of picking out, in each of the image blocks, all curves that bend in one direction further comprises the following steps:
calculating the gradient value and gradient direction of each picture point in the image block;
extracting a plurality of image edges in the image block with a Canny edge-detection operator; and
judging each of the image edges one by one, and identifying an image edge as a curve bending in one direction when the gradient directions of any three consecutive points on the image edge are monotonically increasing or monotonically decreasing.
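The monotonic-gradient test of claim 6 can be sketched as below. This is a stricter illustrative variant that requires monotonic gradient directions along the whole edge; the directions themselves are assumed to come from an earlier Canny/Sobel pass (e.g. with OpenCV), which is omitted here:

```python
# Sketch: an edge whose per-point gradient directions (radians) rise or
# fall monotonically is treated as a curve bending in one direction.

def bends_one_direction(directions):
    """directions: gradient direction at each consecutive edge point."""
    if len(directions) < 3:
        return False  # claim 6 reasons over triples of consecutive points
    increasing = all(directions[i] < directions[i + 1]
                     for i in range(len(directions) - 1))
    decreasing = all(directions[i] > directions[i + 1]
                     for i in range(len(directions) - 1))
    return increasing or decreasing
```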
7. The motion-sensing detection method as claimed in claim 2, characterized in that the step of calculating at least one finger edge direction from each candidate fingertip edge line further comprises the following steps:
from the two end points of the candidate fingertip edge line, substituting the finger length estimate and the finger direction estimate obtained from the image block into a predetermined equation, to find a plurality of corresponding pixels in the image block; and
calculating the at least one finger edge direction from the mean value of the gradient directions of those pixels;
wherein the predetermined equation is:

tan(D_finger) = (y − y_n) / (x − x_n);

wherein x ∈ {x_n, …, x_n + L_max × cos(D_finger)}, y ∈ {y_n, …, y_n + L_max × sin(D_finger)}, and n ∈ {1, 2}; D_finger represents a predetermined tangential direction at an end point of the candidate fingertip edge line, and L_max represents a predetermined maximum finger length satisfying the following relationship:

L_max = W_estimate × max(R_L/W),

wherein W_estimate represents the distance between the two end points of the candidate fingertip edge line, max(R_L/W) is a predetermined value representing the maximum ratio of finger length to finger width, and the coordinates of the two end points of the fingertip edge line are (x_n, y_n) (n ∈ {1, 2}).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410117955.5A CN104951735A (en) | 2014-03-26 | 2014-03-26 | Kinect detecting method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104951735A true CN104951735A (en) | 2015-09-30 |
Family
ID=54166381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410117955.5A Pending CN104951735A (en) | 2014-03-26 | 2014-03-26 | Kinect detecting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104951735A (en) |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20150930