CN103714322A - Real-time gesture recognition method and device - Google Patents
Real-time gesture recognition method and device

- Publication number: CN103714322A (application CN201310731359.1A)
- Authority: CN (China)
- Prior art keywords: coordinate, finger, gesture, edge, point
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: User Interface Of Digital Computer (AREA)

Abstract
The invention relates to the field of somatosensory (motion-sensing) control, and in particular to a real-time gesture recognition method and device. The method is real-time and accurate: images are input and processed by the algorithm within a single frame, so that rich data such as the edges, skeletons, tips and nodes of two-dimensional or three-dimensional image targets can be obtained, and this data can be sorted in real time into ordered arrays. Based on the skeleton features, the method can further achieve accurate real-time tracking and matching of targets, as well as arteriovenous trajectory analysis in medicine. The real-time gesture recognition device comprises a gesture recognition control device and a two-degree-of-freedom mechanical arm.
Description
Technical field
The present invention relates to the field of somatosensory control, and in particular to a real-time gesture recognition method and device.
Background technology
Gesture recognition is an important technology in the field of somatosensory control. Through a gesture recognition controller, a user can interact with a machine in the most natural way: composing music with hand movements, playing an instrument in the air, performing remote medical operations, working remotely in hazardous environments, building three-dimensional models, and so on.
A traditional gesture recognition process first extracts the gesture target from a visible-light image by methods such as frame differencing or a skin color model; then performs edge detection on the target, followed by sorting and splicing of the edge sequence; finally, a curvature-angle algorithm finds the coordinates of maximum curvature as candidate fingertips, since fingertips and the gaps between fingers are the extreme points of finger curvature. To exclude the web between fingers, the finger-gap coordinates must additionally be rejected using a vector rotation-angle feature, finally yielding the fingertip coordinates.
Traditional gesture recognition algorithms have several shortcomings. First, in the target extraction stage, visible-light images are sensitive to illumination, which degrades extraction quality and hurts generality; frame differencing additionally requires training a background template, which increases algorithm complexity, complicates template replacement and frame-data storage, and increases memory usage during embedded porting, all unfavorable for embedded platforms. Second, splicing the edge-detection sequence consumes considerable time and resources, so the recognition result cannot be completed within one frame; there is always a delay of several frames and real-time performance is poor. Third, the maximum-curvature point jumps between frames and lacks coordinate continuity, producing visible jitter in the fingertip location, which is unfavorable for subsequent precise operation. Fourth, gesture recognition only identifies an approximate fingertip region and does not obtain skeleton data for the target.
Summary of the invention
The technical problem to be solved by this invention is, in view of the problems of the prior art, to provide a real-time and accurate gesture recognition method and device. The algorithm processes each image while it is being output in real time within a single frame, so that rich data such as the edges, finger skeletons, finger tips and end nodes of two-dimensional or three-dimensional image targets can be obtained, and this data can be sorted in real time into ordered arrays. Based on the skeleton features, accurate real-time tracking and matching of targets, and arteriovenous trajectory analysis in medicine, can also be achieved.
The technical solution used in the present invention is as follows:
A real-time gesture recognition method comprises:

Step 1: the left and right cameras of the gesture recognition device synchronously scan the display frame row by row (row k) in real time to obtain infrared image data, and the processor processes the infrared image data from each camera separately.

Step 2: the processor converts the infrared image to a gray-level image, applies adaptive thresholding to separate the gesture region from the background, and binarizes the image adaptively to obtain a binary image of the gesture target.

Step 3: a prediction-classification algorithm based on the target boundary predicts the coordinate positions of the relevant features of each finger in the gesture region, and classifies the features of the different fingers separately, obtaining for each finger in the gesture region a start-edge array a_ei, an end-edge array b_ei, a palm-edge array, and a finger skeleton array c_ei; e=1 denotes left-camera data and e=2 denotes right-camera data.

Step 4: from each finger's start-edge array a_ei, end-edge array b_ei and skeleton array c_ei, obtain the fingertip circle-center coordinates (x_ei, y_ei).

Step 5: from steps 3 and 4, obtain the knuckle ends of the different fingers according to the finger-end judgment criterion.

Step 6: for each camera, fit the palm circle center (x_e0, y_e0) from the corresponding knuckle ends and palm edge; then match the data that the left and right cameras collected of the same gesture posture across the two disparity maps, obtaining matched gesture recognition coordinate points.

Step 7: perform three-dimensional modeling on the matched coordinate points to obtain 3-D data of the gesture; smooth the motion trajectory of the 3-D data; and output the smoothed 3-D data to the application.
Preferably, the prediction-classification algorithm based on the target boundary in step 3 comprises:

Step 31: when a suspected gesture target is detected (i.e., a white pixel in the binary image), record the starting point a_ei of the gesture posture in this row, with coordinates (xa_ei, ya_ei).

Step 32: when the width of the suspected target in row k, i.e., the number of consecutive white pixels, exceeds a threshold p, consider that a gesture target rather than a noise point has been detected, and record the end point b_ei of this row's gesture target, with coordinates (xb_ei, yb_ei); here p=10.

Step 33: from the previous two steps, obtain the start edge (xa_e(i-1), ya_e(i-1)) and end edge (xb_e(i-1), yb_e(i-1)) of a finger in row k-1, where ya_e(i-1) = yb_e(i-1). From these two points compute the finger's midpoint (mx_e(i-1), my_e(i-1)) in row k-1, which joins the finger skeleton coordinates, with my_e(i-1) = ya_e(i-1); here k >= 1 and i > 1.

Step 34: since the finger image is continuous, the midpoints of successive scan lines are also continuous. From the row-(k-1) start edge (xa_e(i-1), ya_e(i-1)), end edge (xb_e(i-1), yb_e(i-1)) and skeleton point (mx_e(i-1), my_e(i-1)), predict the next row's (row k's) start edge (xa_ei, ya_ei), end edge (xb_ei, yb_ei) and skeleton point (mx_ei, my_ei), with my_ei = ya_ei.

Step 35: by continuity, row k's start and end edges are approximately equal to row k-1's. If row k's skeleton point lies between the predicted start edge and end edge of a finger, then this midpoint, start edge and end edge belong to that finger's skeleton and edges; otherwise they do not belong to that finger. The start edges (xa_ei, ya_ei) over i form the start-edge contour, the end edges (xb_ei, yb_ei) form the end-edge contour, and the midpoints (mx_ei, my_ei) form the skeleton.

Step 36: by real-time line scanning, distinguish the skeleton coordinates (mx_ei, my_ei), start edges (xa_ei, ya_ei) and end edges (xb_ei, yb_ei) of the different fingers in different gesture postures, obtain these arrays and the palm-edge array for each finger of each gesture target, and store them by class.
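The row-scan prediction and classification of steps 31 to 36 can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's implementation: the names `find_runs`, `classify_rows` and the constant `MIN_RUN` (standing in for the patent's threshold p=10) are invented for the example, and a run is attached to a finger when the previous row's midpoint falls inside it, following the continuity prediction of step 35.

```python
MIN_RUN = 3  # stand-in for the patent's noise threshold p (p=10 in the text)

def find_runs(row):
    """Return (start_x, end_x) pairs of white-pixel runs of length >= MIN_RUN."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            if x - start >= MIN_RUN:
                runs.append((start, x - 1))
            start = None
    if start is not None and len(row) - start >= MIN_RUN:
        runs.append((start, len(row) - 1))
    return runs

def classify_rows(image):
    """Group runs into fingers: a run joins the finger whose previous-row
    skeleton midpoint falls inside it (the continuity prediction)."""
    fingers = []  # each finger: list of (y, start, end, mid)
    for y, row in enumerate(image):
        for start, end in find_runs(row):
            mid = (start + end) // 2
            for f in fingers:
                ly, ps, pe, pm = f[-1]
                if ly == y - 1 and start <= pm <= end:
                    f.append((y, start, end, mid))
                    break
            else:
                fingers.append([(y, start, end, mid)])
    return fingers
```

Scanning a tiny binary image with two vertical white bars yields two finger tracks, each with per-row edges and a skeleton midpoint, in a single pass over the rows.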
Preferably, step 4 obtains the fingertip circle-center coordinates by a fitting method:

Step 41: for the finger skeleton coordinates (mx_ei, my_ei) in the obtained gesture posture, when a target of P consecutive white pixels first appears in a row, the left boundary point a_0 at (xa_e0, ya_e0), the right boundary point b_0 at (xb_e0, yb_e0), and the next row's left boundary point c_0 at (xa_e1, ya_e1) form a triangle; the fingertip circle center (x_e0, y_e0) is fitted as the circumscribed circle of a_0, b_0, c_0.

Step 42: ya_e0 = yb_e0 and ya_e1 = ya_e0 + c, where the correction parameter c ranges from 9 to 11.

Step 43: fingertip circle-center coordinates (x_e0, y_e0):
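The three-point fit of step 41 is the circumscribed circle of a triangle, for which a closed-form center exists. The patent's own step-43 formula is not reproduced in this text; the sketch below uses the standard circumcenter formula, and the function name and coordinate layout are illustrative.

```python
def circumcenter(ax, ay, bx, by, cx, cy):
    """Center of the circle through three non-collinear points
    (a standard closed form; raises ZeroDivisionError if collinear)."""
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy
```

For the unit circle points (0, 1), (1, 0) and (-1, 0), the fitted center comes out at the origin, as expected.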
Preferably, step 4 obtains the fingertip circle-center coordinates by a centroid algorithm:

Step 41: for the finger skeleton coordinates (mx_ei, my_ei) in the obtained gesture posture, when a target of P consecutive white pixels first appears in a row, the left boundary point a_0 at (xa_e0, ya_e0), the right boundary point b_0 at (xb_e0, yb_e0), and the next row's left boundary point c_0 at (xa_e1, ya_e1) form a triangle; the fingertip circle center (x_e0, y_e0) is fitted as the circumscribed circle of a_0, b_0, c_0.

Step 42: fingertip circle-center coordinates (x_i, y_i):
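The centroid variant leaves its step-42 formula unstated in this text; a plain centroid of the collected boundary points is one reasonable reading. A hypothetical sketch under that assumption (the function name is invented for the example):

```python
def centroid(points):
    """Arithmetic mean of a list of (x, y) points, one plausible reading
    of the patent's centroid algorithm for the fingertip center."""
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)
```

For the triangle (0, 0), (2, 0), (1, 3) this gives (1.0, 1.0), the triangle's centroid rather than its circumcenter; the two variants therefore generally yield slightly different fingertip centers.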
Preferably, the finger-end judgment criterion of step 5 is: when two or more predicted finger-skeleton midpoints, namely h_e(h+1) at (mx_e(h+1), my_e(h+1)), i_e(u+1) at (mx_e(u+1), my_e(u+1)) and j_e(v+1) at (mx_e(v+1), my_e(v+1)), first appear within the same white scanning area, the corresponding finger-skeleton midpoints of the previous row are the finger knuckle ends: h_eh at (mx_eh, my_eh), i_eu at (mx_eu, my_eu) and j_ev at (mx_ev, my_ev), where h+1 <= i, u+1 <= i, v+1 <= i.
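The criterion above, that knuckle ends are the last per-finger skeleton midpoints before two or more skeletons first share one white run, can be sketched as follows. The data layout (`skeletons` keyed by a finger id, `runs_by_row` keyed by row index) is an assumption for illustration, not the patent's data structure:

```python
def knuckle_ends(skeletons, runs_by_row):
    """skeletons: {finger_id: {y: mid_x}}; runs_by_row: {y: [(start, end)]}.
    Return the first row where two or more finger midpoints fall in the
    same white run, plus each such finger's previous-row midpoint
    (the knuckle end of the criterion)."""
    for y in sorted(runs_by_row):
        for s, e in runs_by_row[y]:
            inside = [f for f, sk in skeletons.items()
                      if y in sk and s <= sk[y] <= e]
            if len(inside) >= 2:
                ends = {f: (skeletons[f][y - 1], y - 1)
                        for f in inside if y - 1 in skeletons[f]}
                return y, ends
    return None, {}
```

With two fingers whose skeletons merge into one wide run at row 2, the function reports row 2 as the merge row and each finger's row-1 midpoint as its knuckle end.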
Preferably, step 6 comprises:

Step 61: obtain the palm-center coordinates (x_0, y_0) from two knuckle ends and an arbitrary palm-edge point, by the same method as the fingertip circle fitting; establish a coordinate system with the palm center as origin; compute for each finger the vector from the fingertip circle center (x_ei, y_ei) to the corresponding knuckle end (mx_ei, my_ei), and then from the knuckle end (mx_ei, my_ei) to the palm center (x_0, y_0).

Step 62: compute Q = min(abs(W_1s - W_2s)); when Q is smallest, the same finger in the left and right images is successfully matched; otherwise continue searching for matches, until the fingertips, edges, skeletons, knuckle ends and palm centers of every finger in the left and right images are matched, giving a one-to-one coordinate correspondence between each finger of the two images.

Step 63: from the fingertip circle centers, finger skeleton coordinates, palm-center coordinates (x_0, y_0), palm edges and knuckle ends, obtain the gesture recognition coordinate points.
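The text does not define W_1s and W_2s precisely; one plausible reading is an angular feature of each fingertip about its palm center, matched greedily by smallest |W_1s - W_2s|. A hedged sketch under that assumption (all names are invented for the example):

```python
import math

def match_fingers(left_tips, right_tips, left_palm, right_palm):
    """Greedy matching: pair each left fingertip with the unused right
    fingertip whose angle about its palm center is closest, i.e.
    minimize q = |W_left - W_right| (the patent's Q criterion, read as
    an angular feature)."""
    def angle(tip, palm):
        return math.atan2(tip[1] - palm[1], tip[0] - palm[0])
    pairs, used = [], set()
    for i, lt in enumerate(left_tips):
        best, best_q = None, None
        for j, rt in enumerate(right_tips):
            if j in used:
                continue
            q = abs(angle(lt, left_palm) - angle(rt, right_palm))
            if best_q is None or q < best_q:
                best, best_q = j, q
        pairs.append((i, best))
        used.add(best)
    return pairs
```

Because the feature is relative to each image's own palm center, the match is insensitive to the horizontal offset between the left and right views.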
Preferably, the gesture recognition device comprises a gesture recognition control device and a two-degree-of-freedom (2-DOF) mechanical arm. The gesture recognition control device comprises a left camera, a right camera, at least two infrared LEDs, a processor, infrared filters corresponding to the cameras, an outer frame and the 2-DOF mechanical arm. The processor controls the left camera, right camera, infrared LEDs and 2-DOF mechanical arm; the processor is placed inside the outer frame; the bases of the left and right cameras are mounted on the front of the outer frame; the infrared LEDs are distributed on the front of the outer frame; each infrared filter is attached to the upper surface of its camera; one end of the 2-DOF mechanical arm is connected to a side face of the outer frame and the other end is grounded through a base; and the processor controls the rotating electric motors of the 2-DOF mechanical arm.

Preferably, there are three infrared LEDs, evenly interleaved with the left and right cameras on the front of the outer frame.

A real-time gesture recognition device comprises a gesture recognition control device and a 2-DOF mechanical arm, configured as described above.

Preferably, the real-time gesture recognition device comprises three infrared LEDs, evenly interleaved with the left and right cameras on the front of the outer frame.
In summary, thanks to the above technical scheme, the beneficial effects of the invention are:

1. The scheme is efficient and accurate, highly portable to embedded platforms, and consumes few resources with a simple hardware configuration when ported. The algorithm yields rich image-target data such as gesture recognition, two-dimensional image skeleton extraction and three-dimensional image skeleton extraction, and can also achieve accurate real-time multi-target tracking and matching based on skeleton contour features. It can be widely applied to image processing and pattern-recognition sensor applications on PCs.

2. This patent achieves fast and accurate gesture recognition: gesture recognition, two-dimensional skeleton extraction, three-dimensional skeleton extraction and other rich data can be obtained within one image frame, and the method can be ported to embedded platforms such as FPGA.

3. The invention avoids the shortcomings of traditional gesture recognition: it recognizes gestures accurately in real time within one frame; it is highly general and can be ported to many embedded platforms such as FPGA, ARM and DSP with low resource cost on the target platform; the data obtained include coordinate arrays of the gesture target's edges, skeleton, tips and nodes; and because the algorithm is efficient in real time, the spare time margin allows pipelining with other pattern-recognition algorithms. In short, the algorithm is real-time, accurate and easily portable to hardware.

4. To weaken the influence of visible light on target extraction, a dedicated infrared optical platform is built and target/background separation is performed under infrared illumination, achieving a good extraction effect.

5. The gesture prediction-classification algorithm completes the target scan within one frame, predicts each finger's position well, and correctly separates and classifies the features of the different fingers.

6. Using the fitting algorithm or centroid algorithm, the data of the different fingers acquired in real time are processed into ordered edge and skeleton arrays for each finger, plus a palm-edge array.

7. From each finger's ordered edges, the fingertip midpoint is obtained by circle fitting. This locates the target with good tracking accuracy regardless of its angle, distance or deformation.

8. From the finger separation flags and skeleton array data, the finger-end (i.e., node) data of the target are obtained, and the palm center is then fitted from the palm-edge array and the finger nodes.

9. The ordered skeleton arrays of each finger and the palm, combined with the relationships of key points (fingertip midpoints, knuckle ends, palm center), are used to jointly match the same gesture target across the two disparity maps.
Brief description of the drawings

Examples of the present invention will be described with reference to the accompanying drawings, in which:

Fig. 1 is an architecture diagram of the real-time gesture recognition device;
Fig. 2 is a schematic diagram of the gesture recognition device;
Fig. 3 is a structural drawing of the gesture recognition device;
Fig. 4 is a binary image of the extracted gesture target;
Fig. 5 shows the finger skeleton and edge contours;
Fig. 6 shows the separation of data in the gesture target region;
Fig. 7 shows the fingertip midpoint circle fitting;
Fig. 8 is multi-finger knuckle-end location and palm-center fitting map a;
Fig. 9 is multi-finger knuckle-end location and palm-center fitting map b;
Fig. 10 is the binocular-disparity three-dimensional coordinate system;
Fig. 11 shows the common parallax region of the binocular cameras;
Fig. 12 is a flowchart of the gesture recognition algorithm;
Fig. 13 is the three-dimensional modeling diagram;
Fig. 14 is the gesture matching diagram.
Embodiments

All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.

Any feature disclosed in this specification (including any accessory claims, abstract and drawings) may, unless specifically stated otherwise, be replaced by an alternative feature serving an equivalent or similar purpose. That is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.

Notes on the present invention:

1. Knuckle end: the junction of a finger with the palm edge.

2. The 2-DOF mechanical arm (two degrees of freedom: front-back and left-right) can rotate 180 degrees in each direction, under processor control, to enlarge the field of view for gesture recognition control. The rotation angle is controlled by the processor's pulse-width modulation function: for energizing pulses of a given width and count, the motor rotates by a corresponding specific angle, and each motor has its own minimum rotation parameter, e.g., 1 degree of rotation per energizing pulse.

3. If the fingertip were obtained directly from the gesture skeleton coordinates (mx_ei, my_ei), the position of the fingertip middle could deviate and lose positioning accuracy; therefore the three topmost points are used here to fit the fingertip circle.

4. The fingertip circle center, finger skeleton and knuckle end all lie in one region running from the fingertip along the finger bone to the part joining the palm edge (the knuckle end). Their parameter representation is consistent: all three are skeleton parts of the finger, the fingertip circle center being the starting point of the finger skeleton and the knuckle end being the midpoint of the skeleton's last row.

5. The series of midpoints forms the finger skeleton.

6. As shown in Fig. 1, the real-time gesture recognition device comprises a gesture recognition control device and a 2-DOF mechanical arm. As shown in Fig. 2, the gesture recognition control device comprises a left camera, a right camera, at least two infrared LEDs, a processor, infrared filters corresponding to the cameras, an outer frame and the 2-DOF mechanical arm, arranged as described above. The camera placed on the left of the front of the outer frame is the left camera and the one on the right is the right camera. The line connecting the two camera centers is the x axis; the direction perpendicular to it in the front face of the outer frame is the y axis; and the line perpendicular to the front face is the z axis. In computation, the z axis captures the depth coordinate of the gesture posture, as in Fig. 13. T is the distance between the centers of the left and right cameras. Both cameras are CMOS cameras.
7. Workflow, as shown in Fig. 12:

1) the left and right cameras of the assembled real-time gesture recognition device acquire infrared images;

2) the processor processes the infrared image data from the left and right cameras separately;

3) the feature values obtained from the left-camera data and from the right-camera data are matched against each other;

4) three-dimensional data such as fingertip coordinates and gesture estimates are output.

The core of the algorithm is the prediction-classification algorithm based on the target boundary, which can accurately lock the fingertip middle coordinates within one image-frame cycle and obtain the edges, skeleton and knuckle end of each finger. Its performance can be described as detecting the gesture's coordinate data while the image is being output and displayed; it ports easily to FPGA hardware platforms with good compatibility. Combined with the fingertip fitting algorithm, the positioning accuracy of the fingertip midpoint is further improved, and the fitting adapts well to changes in finger distance and deformation.

Finally, to match the one-to-one finger target data of the left and right disparity maps, a feature-vector matching method is adopted, with a very high matching accuracy. The gestures of the two images can be matched quickly, and the three-dimensional spatial coordinates of the gesture are then built by the three-dimensional modeling formula.
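The three-dimensional modeling formula itself is not reproduced in this text. For parallel stereo cameras with baseline T (Fig. 13), the standard pinhole triangulation is z = f*T/d with disparity d = xl - xr. A sketch under the usual assumptions (rectified images, principal point at the origin, focal length f in pixels); these assumptions are mine, not stated in the patent:

```python
def triangulate(xl, yl, xr, f, T):
    """Parallel-axis stereo triangulation: disparity d = xl - xr,
    depth z = f*T/d, with (x, y) back-projected from the left image.
    f (pixels) and T (baseline, meters) are assumed calibrated."""
    d = xl - xr
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    z = f * T / d
    x = xl * z / f
    y = yl * z / f
    return x, y, z
```

For example, with f = 500 px and T = 0.06 m, a matched point at xl = 100, xr = 90 has disparity 10 px and lies 3 m from the cameras.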
Embodiment 1: concrete steps, as shown in Fig. 3:

Step 1: acquire infrared images with the left and right cameras of the real-time gesture recognition device.

Step 2: the processor converts the infrared image to a gray-level image, applies adaptive thresholding to separate the gesture region from the background, and binarizes the image adaptively to obtain a binary image of the gesture target (Fig. 4): the white area is the target (gesture) region, the black-and-white interleaved area is the separated background region, and the separation is updated regularly over time.

Step 3: the prediction-classification algorithm based on the target boundary predicts the coordinate positions of the relevant features (edges, skeleton, etc.) of each finger in the gesture region and classifies the features of the different fingers separately (Figs. 5 and 6), obtaining for each finger a start-edge array a_ei, an end-edge array b_ei, a palm-edge array and a skeleton array c_ei; e=1 denotes left-camera data and e=2 denotes right-camera data.

Step 4: from each finger's start-edge array a_ei, end-edge array b_ei and skeleton array c_ei, obtain the fingertip circle-center coordinates (x_ei, y_ei).

Step 5: from steps 3 and 4, obtain the knuckle ends of the different fingers according to the finger-end judgment criterion.

Step 6: for each camera, fit the palm circle center (x_e0, y_e0) from the corresponding knuckle ends (Fig. 8) and the palm edge (Figs. 8 and 9); match the data the two cameras collected of the same gesture posture across the two disparity maps to obtain matched gesture recognition coordinate points; after registration, use the dual-camera parallax model (Figs. 10 and 11) to three-dimensionally model the matched coordinate points (fingertip middles, knuckle ends, palm center, etc.) and obtain the 3-D data of the gesture.

Step 7: perform three-dimensional modeling on the matched coordinate points to obtain the gesture's 3-D data; smooth the motion trajectory of the 3-D data (by two-dimensional low-pass filtering); and output the smoothed 3-D data to the application.
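The low-pass smoothing of step 7 is not specified further in this text; a causal moving average over the trajectory is one simple stand-in (the window size and function name are choices made for the example):

```python
def smooth_trajectory(points, window=3):
    """Causal moving-average low-pass filter over a track of 3-D points;
    a stand-in for the patent's unspecified low-pass smoothing."""
    out = []
    for i in range(len(points)):
        lo = max(0, i - window + 1)
        seg = points[lo:i + 1]
        # average each coordinate over the trailing window
        out.append(tuple(sum(c) / len(seg) for c in zip(*seg)))
    return out
```

A causal filter is the natural choice here because the trajectory arrives frame by frame; each output uses only past samples, so smoothing adds no lookahead latency.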
Embodiment 2: in step 2, the gesture region is separated from the background and the adaptive binarization yields the binary image of the gesture target as follows (the left- and right-camera data are binarized by the same procedure, so the process below applies to either camera), as shown in Fig. 4:

Step 21: let the input image be f(ex, ey) and the binarized image be p(ex, ey), with threshold T; compute the normalized histogram of the input image's gray levels, denoted h(ei).

Step 23: compute the histogram's zeroth-order cumulative moment w(k) and first-order cumulative moment mu(k).

Step 25: find the maximum of the between-class variance, and take the corresponding k as the optimal threshold T.

Step 26: binarize the input image: when f(x, y) >= T, the pixel belongs to the gesture region and the corresponding binary pixel p(x, y) is white; when f(x, y) < T, the pixel belongs to the background and p(x, y) is black.
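Steps 21 to 26 describe Otsu's method: choose the threshold k that maximizes the between-class variance computed from the zeroth-order cumulative moment w(k) and first-order cumulative moment mu(k) of the normalized histogram. A self-contained sketch over a flat list of 8-bit gray values (the variance formula is the standard Otsu form; the exact formulas of the patent's steps 22 and 24 are not reproduced in this text):

```python
def otsu_threshold(gray):
    """Otsu's optimal threshold from the normalized gray-level histogram,
    maximizing the between-class variance (mu_T*w - mu)^2 / (w*(1-w))."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    n = len(gray)
    p = [h / n for h in hist]                      # normalized histogram h(i)
    mu_total = sum(i * p[i] for i in range(256))   # total mean
    best_k, best_var, w, mu = 0, -1.0, 0.0, 0.0
    for k in range(256):
        w += p[k]        # zeroth-order cumulative moment w(k)
        mu += k * p[k]   # first-order cumulative moment mu(k)
        if 0 < w < 1:
            var = (mu_total * w - mu) ** 2 / (w * (1 - w))
            if var > best_var:
                best_var, best_k = var, k
    return best_k
```

On a bimodal image with pixel values clustered at 10 and 200, the chosen threshold separates the two clusters, so thresholding with `f(x, y) >= T` keeps the bright (gesture) pixels white.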
Embodiment 3: the prediction-classification algorithm of step 3 in detail:

Step 31: when a suspected gesture target is detected (i.e., a white pixel in the binary image), record the starting point a_ei of the gesture posture in this row, with coordinates (xa_ei, ya_ei).

Step 32: when the width of the suspected target in row k, i.e., the number of consecutive white pixels, exceeds a threshold p, consider that a gesture target rather than a noise point has been detected, and record the end point b_ei of this row's gesture target, with coordinates (xb_ei, yb_ei); here p=10.

Step 33: from the previous two steps, obtain the start edge (xa_e(i-1), ya_e(i-1)) and end edge (xb_e(i-1), yb_e(i-1)) of a finger in row k-1 (points a and b of Fig. 5; a and b are also the target's edges in row i), where ya_e(i-1) = yb_e(i-1). From these two points compute the finger's midpoint (mx_e(i-1), my_e(i-1)) in row k-1, which joins the finger skeleton coordinates, with my_e(i-1) = ya_e(i-1); here k >= 1 and i > 1.

Step 34: since the finger image is continuous, the midpoints of successive scan lines are also continuous. From the row-(k-1) start edge (xa_e(i-1), ya_e(i-1)), end edge (xb_e(i-1), yb_e(i-1)) and skeleton point (mx_e(i-1), my_e(i-1)), predict row k's start edge (xa_ei, ya_ei), end edge (xb_ei, yb_ei) and skeleton point (mx_ei, my_ei), with my_ei = ya_ei; the predicted data serve as the classification criterion.

For example (Fig. 5): suppose in row k-1 the start edge of one finger of a gesture posture is a, its end edge is b, and its midpoint is c. The target's edges in row k are then predicted to be near a and b. Scanning row k yields the midpoints of each target, such as d and d2. Since a < d < b, d belongs to the midpoint skeleton of the middle finger, and d's associated edges belong to the middle finger's edges; d2 is not within [a, b], so d2 does not belong to the middle finger but to the index finger.

Step 35: by continuity, row k's start and end edges are approximately equal to row k-1's. If row k's skeleton point lies between the predicted start edge and end edge of a finger, then this midpoint, start edge and end edge belong to that finger's skeleton and edges; otherwise they do not belong to that finger. The start edges (xa_ei, ya_ei) over i form the start-edge contour, the end edges (xb_ei, yb_ei) form the end-edge contour, and the midpoints (mx_ei, my_ei) form the skeleton.

Step 36: by real-time line scanning, distinguish the skeleton coordinates (mx_ei, my_ei), start edges (xa_ei, ya_ei) and end edges (xb_ei, yb_ei) of the different fingers in different gesture postures, obtain these arrays and the palm-edge array for each finger of each gesture target, and store them by class.
Embodiment 4: in step 4 the fingertip circle-center coordinate is obtained by a fitting method; concrete steps:
Step 41: for the finger skeleton coordinate (mx_ei, my_ei) obtained in the gesture posture, when a target of P continuous white pixels first appears in a row, the left boundary point a_0 with coordinate (xa_e0, ya_e0), the right boundary point b_0 with coordinate (xb_e0, yb_e0) and the next row's left boundary point c_0 with coordinate (xa_e1, ya_e1) form the circumscribed circle of a triangle; the fingertip circle-center coordinate (x_e0, y_e0) is fitted through a_0, b_0 and c_0 (Fig. 7);
Step 42: ya_e0=yb_e0, ya_e1=ya_e0+c, where the correction parameter c ranges from 9 to 11;
Step 43: fingertip circle-center coordinate (x_e0, y_e0):
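The fitting of Step 41 amounts to finding the circumcenter of the triangle a_0, b_0, c_0. The patent does not reproduce its exact expression, so the sketch below uses the standard circumcenter formula for three non-collinear points as a hedged stand-in:

```python
# Hedged sketch of Step 41's fingertip fitting: the center of the
# circumscribed circle of the triangle formed by the left boundary point a0,
# right boundary point b0 and next-row left boundary point c0. This is the
# standard circumcenter formula, assumed here in place of the patent's
# unreproduced expression.

def circumcenter(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy
```

For the right triangle (0,0), (2,0), (0,2) the circumcenter is (1, 1), the midpoint of the hypotenuse.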
Embodiment 5: in step 4 the fingertip circle-center coordinate is obtained by a centroid algorithm; concrete steps:
Step 41: for the finger skeleton coordinate (mx_ei, my_ei) obtained in the gesture posture, when a target of P continuous white pixels first appears in a row, the left boundary point a_0 with coordinate (xa_e0, ya_e0), the right boundary point b_0 with coordinate (xb_e0, yb_e0) and the next row's left boundary point c_0 with coordinate (xa_e1, ya_e1) form the circumscribed circle of a triangle; the fingertip circle-center coordinate (x_e0, y_e0) is fitted through a_0, b_0 and c_0 (Fig. 7);
Step 42: fingertip circle-center coordinate (x_i, y_i) (Fig. 7).
Embodiment 6: working principle of step 5: when it is first judged that two or more predicted finger-skeleton midpoints, h_e(h+1) with coordinate (mx_e(h+1), my_e(h+1)), i_e(u+1) with coordinate (mx_e(u+1), my_e(u+1)) and j_e(v+1) with coordinate (mx_e(v+1), my_e(v+1)), appear in the same white scanning area, then the corresponding finger-skeleton midpoints of the previous row are the finger knuckle ends h_eh with coordinate (mx_eh, my_eh), i_eu with coordinate (mx_eu, my_eu) and j_ev with coordinate (mx_ev, my_ev), where h+1<=i, u+1<=i and v+1<=i.
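The Embodiment 6 criterion can be sketched as follows. This is an illustrative reconstruction under assumed data structures: when two or more skeleton midpoints predicted from the previous row land in the same white run of the current row, the fingers have merged (e.g. into the palm region), and the previous row's midpoints are taken as the knuckle ends.

```python
# Illustrative sketch of the Embodiment 6 knuckle-end criterion: if two or
# more predicted finger-skeleton midpoints fall inside the same white run of
# the current row, the previous row's midpoints are recorded as knuckle ends.

def knuckle_ends(prev_midpoints, current_runs):
    """prev_midpoints: {finger_id: x}; current_runs: [(start, end), ...].
    Returns the midpoints (knuckle ends) of fingers that merge this row."""
    ends = {}
    for start, stop in current_runs:
        inside = {f: x for f, x in prev_midpoints.items() if start <= x <= stop}
        if len(inside) >= 2:          # two or more skeletons share one run
            ends.update(inside)       # previous-row midpoints = knuckle ends
    return ends
```

Here fingers 0 and 1 merge into one run while finger 2 remains separate, so only the first two midpoints are reported as knuckle ends.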
Embodiment 7: the concrete steps of step 6 comprise:
Step 61: a rectangular coordinate system is established, as in Figure 14, with the palm-center coordinate (x_0, y_0) (or the average of the fingertip circle centers or knuckle ends) as origin.
Step 62: the trigonometric relation (e.g. the cosine value) between each fingertip circle-center coordinate (x_ei, y_ei) obtained in the two disparity maps and the palm-center coordinate (x_0, y_0) is calculated. The trigonometric values obtained in the two images are compared: the sinq value is computed for the left and right cameras respectively, and the fingertip pair with the minimum sinq error is the matching fingertip. This continues until the matching of each finger's fingertip, start edge, terminating edge, skeleton, knuckle end and palm center in the left and right images is finally achieved, and the one-to-one coordinate correspondence of each finger in the two images is found;
Step 63: the gesture recognition coordinate points are obtained from the fingertip circle centers, finger skeleton coordinates, palm-center coordinate (x_0, y_0), palm edges and knuckle ends.
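The Step 62 matching can be sketched as below. The names, the use of the sine of the palm-to-fingertip angle, and the greedy minimum-error pairing are all illustrative assumptions; the patent only specifies comparing trigonometric values of the fingertip-to-palm-center relation and choosing the pair with minimum sinq error.

```python
import math

# Hedged sketch of the Step 62 idea: each fingertip is described by the sine
# of the angle of the vector from the palm center to the fingertip; fingertips
# in the left and right views are paired by the minimal difference in that
# sine value.

def angle_sin(palm, tip):
    dx, dy = tip[0] - palm[0], tip[1] - palm[1]
    return dy / math.hypot(dx, dy)

def match_fingertips(palm_l, tips_l, palm_r, tips_r):
    """Pair each left-view fingertip with the right-view fingertip whose
    sine value differs least. Returns a list of (left_index, right_index)."""
    pairs, used = [], set()
    for i, tl in enumerate(tips_l):
        best = min((j for j in range(len(tips_r)) if j not in used),
                   key=lambda j: abs(angle_sin(palm_l, tl)
                                     - angle_sin(palm_r, tips_r[j])))
        used.add(best)
        pairs.append((i, best))
    return pairs
```

In the example below, the right view lists the same two fingertips in swapped order, and the angle comparison recovers the correct correspondence.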
Embodiment 8: in step 6, the matching relation is obtained by applying a convex-hull algorithm to each fingertip middle part (or knuckle end) of the two images respectively; the concrete steps comprise:
Step 61: the principle of the convex-hull algorithm is that the camera identifies the leftmost or rightmost fingertip as the starting point of the convex hull; when, traversing the fingertips clockwise or counterclockwise, the sine or cosine values obtained are uniformly positive or uniformly negative, a one-by-one matching relation between the left and right gestures is obtained. Let one fingertip circle center be (x_ei, y_ei) and another be (x_ej, y_ej); each fingertip is thereby sorted clockwise or counterclockwise. From the disparity maps obtained by the two cameras, the sinq values are computed respectively; when the sinq values are uniformly positive or uniformly negative, the traversal is uniformly clockwise or counterclockwise and the difference is minimal. Each finger's fingertip, start edge, terminating edge, skeleton, knuckle end and palm center in the left and right images are matched, finally achieving the one-to-one coordinate correspondence of the fingertips, edges, skeletons, knuckle ends and palm centers of each finger in the two images;
Step 63: the gesture recognition coordinate points are obtained from the fingertip circle centers, finger skeleton coordinates, palm-center coordinate (x_0, y_0), palm edges and knuckle ends.
Embodiment 9:
Step 61: the palm-center coordinate (x_0, y_0) is obtained by the fingertip circle-center fitting method from two finger knuckle ends and any palm-edge point, and a coordinate system is established with the palm-center coordinate as origin; the vectors from each fingertip circle-center coordinate (x_ei, y_ei) to the corresponding finger knuckle end (mx_ei, my_ei) (Figure 14), and then from the knuckle end (mx_ei, my_ei) to the palm-center coordinate (x_0, y_0), are calculated respectively (Figure 14):
Step 62: calculate Q=min(abs(W_1s-W_2s)); when Q is minimal, the same finger in the left and right images is successfully matched, and otherwise the search for a match continues, until the matching of each finger's fingertip, edge, skeleton, knuckle end and palm center in the left and right images is finally achieved and the one-to-one coordinate correspondence of each finger in the two images is found;
Step 63: the gesture recognition coordinate points are obtained from the fingertip circle centers, finger skeleton coordinates, palm-center coordinate (x_0, y_0), palm edges and knuckle ends.
Embodiment 10: three-dimensional modeling is carried out on the matched gesture recognition coordinate points so as to obtain the 3-dimensional data information of the gesture; the detailed process comprises:
Step 71: as in Figure 13, three-dimensional modeling algorithm analysis after single-fingertip matching: z, x, y are world three-dimensional coordinates (true three-dimensional coordinates); the center of the left camera is the origin of this three-dimensional coordinate system; T is the center distance between the left and right cameras, and f is the camera focal length. The origin of each camera image is in its upper-left corner. The x- and y-axis coordinates of a matched gesture recognition coordinate point (a fingertip circle center, finger skeleton coordinate, palm-center coordinate (x_0, y_0), palm edge or knuckle-end coordinate) in the left camera are x_left and y_left respectively, and its x- and y-axis coordinates in the right camera are x_right and y_right. From this information, the world coordinates x, y, z (real space coordinates with respect to the camera) are computed as:
Z=T*f/abs(x_left-x_right), where abs() takes the absolute value;
X=z*(x_left-xs)/f, where xs is half of the camera display pixel abscissa;
Y=z*(y_left-ys)/f, where ys is half of the camera display pixel ordinate.
If the camera resolution is 640*320, then xs is 320 and ys is 160.
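The Step 71 formulas can be collected into a small sketch. The function name and the example baseline and focal-length values are illustrative; the formulas themselves are exactly those given above.

```python
# Sketch of the Step 71 triangulation formulas as given in the text:
#   Z = T*f / |x_left - x_right|
#   X = z*(x_left - xs)/f,  Y = z*(y_left - ys)/f
# with (xs, ys) half the camera resolution, e.g. (320, 160) for 640x320.

def triangulate(x_left, y_left, x_right, T, f, res=(640, 320)):
    xs, ys = res[0] / 2, res[1] / 2
    z = T * f / abs(x_left - x_right)   # depth from disparity
    x = z * (x_left - xs) / f           # world x relative to image center
    y = z * (y_left - ys) / f           # world y relative to image center
    return x, y, z
```

For example, with an assumed baseline T=6, focal length f=500 and a disparity of 10 pixels, the depth is z = 6*500/10 = 300 in the same units as T.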
The present invention is not limited to the aforesaid embodiments. The present invention extends to any new feature or any new combination disclosed in this specification, and to any new method or process step, or any new combination thereof, disclosed herein.
Claims (10)
1. A real-time gesture recognition method, characterized by comprising:
Step 1: the left camera and right camera of a gesture recognition device synchronously perform real-time full-screen scanning of row k of the image display frame to acquire infrared image data, and a processor processes the infrared image data respectively; Step 2: the processor converts the infrared image into a gray-level image, processes it by adaptive thresholding to separate the gesture region from the background region, and obtains the binary image of the gesture target by the adaptive image-binarization method;
Step 3: a prediction and classification algorithm based on target boundaries predicts the coordinate positions of the relevant features of each finger in the gesture region and separates and classifies the features of the different fingers, obtaining the start-edge array a_ei, finger terminating-edge array b_ei, palm-edge array and finger skeleton array c_ei of the different fingers in the gesture region; e=1 denotes left-camera data and e=2 denotes right-camera data;
Step 4: from the finger start edge a_ei, finger terminating edge b_ei and finger skeleton array c_ei of each finger, the fingertip circle-center coordinate (x_ei, y_ei) is obtained;
Step 5: from steps 3 and 4, the knuckle ends of the different fingers are obtained according to the fingertip judgment criterion;
Step 6: the left camera and right camera respectively fit the palm circle center (x_e0, y_e0) from the corresponding knuckle ends of the different fingers and the palm edges; a two-disparity-map matching calculation is carried out on the data of the same gesture posture collected by the left and right cameras, obtaining the matched gesture recognition coordinate points;
Step 7: three-dimensional modeling is carried out on the matched gesture recognition coordinate points so as to obtain the 3-dimensional data information of the gesture; the movement trajectory of the three-dimensional data information is smoothed; the smoothed three-dimensional data information is output to the application.
2. A real-time gesture recognition method according to claim 1, characterized in that the concrete steps of the target-boundary-based prediction and classification algorithm in step 3 are:
Step 31: when a suspected gesture target, i.e. a white pixel of the binary image, is detected, the starting point a_ei of the gesture posture in this row is recorded, with coordinate value (xa_ei, ya_ei);
Step 32: when the width of this suspected target in row k, i.e. the number of continuous white pixels, is greater than the threshold p, a gesture target rather than a noise point is considered detected, and the terminating point b_ei (xb_ei, yb_ei) of the gesture target in this row is recorded, where p=10;
Step 33: the start edge (xa_e(i-1), ya_e(i-1)) and terminating edge (xb_e(i-1), yb_e(i-1)) of a certain finger gesture in row k-1 are obtained by the preceding two steps, where ya_e(i-1)=yb_e(i-1); from the two points (xa_e(i-1), ya_e(i-1)) and (xb_e(i-1), yb_e(i-1)), the midpoint coordinate of this finger gesture in row k-1, i.e. the finger skeleton coordinate (mx_e(i-1), my_e(i-1)), is obtained, where my_e(i-1)=ya_e(i-1), k>=1 and i>1;
Step 34: because the finger gesture image is continuous, the line-scanned midpoint position of each finger is also continuous; from the start edge (xa_e(i-1), ya_e(i-1)), terminating edge (xb_e(i-1), yb_e(i-1)) and finger skeleton coordinate (mx_e(i-1), my_e(i-1)) of row k-1, the start edge (xa_ei, ya_ei), terminating edge (xb_ei, yb_ei) and finger skeleton coordinate (mx_ei, my_ei) of the next row, row k, are predicted, where my_ei=ya_ei;
Step 35: by continuity, the start edge and terminating edge of row k are approximately equal to those of row k-1; if the finger skeleton coordinate of row k lies between the predicted finger gesture start edge and terminating edge, then this midpoint, start edge and terminating edge all belong to the midpoints, start-edge points and terminating-edge points of this finger target, and otherwise they do not belong to this finger target; the start edge (xa_ei, ya_ei) is the edge formed by the i start-edge points, the terminating edge (xb_ei, yb_ei) is the edge formed by the i terminating-edge points, and the finger skeleton coordinates (mx_ei, my_ei) are the skeleton formed by the i midpoint coordinates;
Step 36: by real-time line scanning, the finger skeleton coordinates (mx_ei, my_ei), start edges (xa_ei, ya_ei) and terminating edges (xb_ei, yb_ei) of different fingers in different gesture postures are distinguished; the finger skeleton coordinates, start edges, terminating edges and palm-edge array of each finger in each gesture target are obtained and stored by class.
3. A real-time gesture recognition method according to claim 2, characterized in that in step 4 the fingertip circle-center coordinate is obtained by a fitting method, with concrete steps:
Step 41: for the finger skeleton coordinate (mx_ei, my_ei) obtained in the gesture posture, when a target of P continuous white pixels first appears in a row, the left boundary point a_0 with coordinate (xa_e0, ya_e0), the right boundary point b_0 with coordinate (xb_e0, yb_e0) and the next row's left boundary point c_0 with coordinate (xa_e1, ya_e1) form the circumscribed circle of a triangle; the fingertip circle-center coordinate (x_e0, y_e0) is fitted through a_0, b_0 and c_0;
Step 42: ya_e0=yb_e0, ya_e1=ya_e0+c, where the correction parameter c ranges from 9 to 11;
Step 43: fingertip circle-center coordinate (x_e0, y_e0):
4. A real-time gesture recognition method according to claim 2, characterized in that in step 4 the fingertip circle-center coordinate is obtained by a centroid algorithm, with concrete steps:
Step 41: for the finger skeleton coordinate (mx_ei, my_ei) obtained in the gesture posture, when a target of P continuous white pixels first appears in a row, the left boundary point a_0 with coordinate (xa_e0, ya_e0), the right boundary point b_0 with coordinate (xb_e0, yb_e0) and the next row's left boundary point c_0 with coordinate (xa_e1, ya_e1) form the circumscribed circle of a triangle; the fingertip circle-center coordinate (x_e0, y_e0) is fitted through a_0, b_0 and c_0;
Step 42: fingertip circle-center coordinate (x_i, y_i).
5. A real-time gesture recognition method according to claim 3 or 4, characterized in that the concrete steps of the fingertip judgment criterion of step 5 are: when it is first judged that two or more predicted finger-skeleton midpoints, h_e(h+1) with coordinate (mx_e(h+1), my_e(h+1)), i_e(u+1) with coordinate (mx_e(u+1), my_e(u+1)) and j_e(v+1) with coordinate (mx_e(v+1), my_e(v+1)), appear in the same white scanning area, then the corresponding finger-skeleton midpoints of the previous row are the finger knuckle ends h_eh with coordinate (mx_eh, my_eh), i_eu with coordinate (mx_eu, my_eu) and j_ev with coordinate (mx_ev, my_ev), where h+1<=i, u+1<=i and v+1<=i.
6. A real-time gesture recognition method according to claim 5, characterized in that the concrete steps of step 6 comprise:
Step 61: the palm-center coordinate (x_0, y_0) is obtained by the fingertip circle-center fitting method from two finger knuckle ends and any palm-edge point, and a coordinate system is established with the palm-center coordinate as origin; the vectors from each fingertip circle-center coordinate (x_ei, y_ei) to the corresponding finger knuckle end (mx_ei, my_ei), and then from the knuckle end (mx_ei, my_ei) to the palm-center coordinate (x_0, y_0), are calculated respectively:
Step 62: calculate Q=min(abs(W_1s-W_2s)); when Q is minimal, the same finger in the left and right images is successfully matched, and otherwise the search for a match continues, until the matching of each finger's fingertip, edge, skeleton, knuckle end and palm center in the left and right images is finally achieved and the one-to-one coordinate correspondence of each finger in the two images is found;
Step 63: the gesture recognition coordinate points are obtained from the fingertip circle centers, finger skeleton coordinates, palm-center coordinate (x_0, y_0), palm edges and knuckle ends.
7. A real-time gesture recognition method according to claim 6, characterized in that the gesture recognition device comprises a gesture recognition control device and a two-degree-of-freedom mechanical rod; the gesture recognition control device comprises a left camera, a right camera, at least two infrared LEDs, a processor, infrared filters corresponding to the cameras, and an outer frame; the processor is used to control the operation of the left camera, right camera, infrared LEDs and two-degree-of-freedom mechanical rod; the processor is placed inside the outer frame; the bases of the left and right cameras are distributed on the front of the outer frame; the infrared LEDs are evenly distributed on the front of the outer frame; the infrared filters are attached to the upper surfaces of the cameras; one end of the two-degree-of-freedom mechanical rod is connected to a side end face of the outer frame and the other end is fixed to the ground via a base; the processor controls the rotating electric motors of the two-degree-of-freedom mechanical rod.
8. A real-time gesture recognition method according to claim 7, characterized in that three infrared LEDs are comprised, the infrared LEDs being evenly distributed, spaced with the left and right cameras, on the front of the outer frame.
9. A real-time gesture recognition device based on the method of any one of claims 1 to 7, characterized by comprising a gesture recognition control device and a two-degree-of-freedom mechanical rod; the gesture recognition control device comprises a left camera, a right camera, at least two infrared LEDs, a processor, infrared filters corresponding to the cameras, and an outer frame; the processor is used to control the operation of the left camera, right camera, infrared LEDs and two-degree-of-freedom mechanical rod; the processor is placed inside the outer frame; the bases of the left and right cameras are distributed on the front of the outer frame; the infrared LEDs are evenly distributed on the front of the outer frame; the infrared filters are attached to the upper surfaces of the cameras; one end of the two-degree-of-freedom mechanical rod is connected to a side end face of the outer frame and the other end is fixed to the ground via a base; the processor controls the rotating electric motors of the two-degree-of-freedom mechanical rod.
10. A real-time gesture recognition device according to claim 9, characterized by comprising three infrared LEDs, the infrared LEDs being evenly distributed, spaced with the left and right cameras, on the front of the outer frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310731359.1A CN103714322A (en) | 2013-12-26 | 2013-12-26 | Real-time gesture recognition method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310731359.1A CN103714322A (en) | 2013-12-26 | 2013-12-26 | Real-time gesture recognition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103714322A true CN103714322A (en) | 2014-04-09 |
Family
ID=50407282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310731359.1A Pending CN103714322A (en) | 2013-12-26 | 2013-12-26 | Real-time gesture recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103714322A (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104267802A (en) * | 2014-08-29 | 2015-01-07 | 福州瑞芯微电子有限公司 | Human-computer interactive virtual touch device, system and method |
CN104517100A (en) * | 2014-12-15 | 2015-04-15 | 中国科学院深圳先进技术研究院 | Gesture pre-judging method and system |
CN104536571A (en) * | 2014-12-26 | 2015-04-22 | 深圳市冠旭电子有限公司 | Earphone operating control method and device |
CN104952289A (en) * | 2015-06-16 | 2015-09-30 | 浙江师范大学 | Novel intelligentized somatosensory teaching aid and use method thereof |
CN105022472A (en) * | 2014-04-30 | 2015-11-04 | 中国海洋大学 | Knock-on interactive method and device serving patients with hemiplegia |
CN105491425A (en) * | 2014-09-16 | 2016-04-13 | 洪永川 | Methods for gesture recognition and television remote control |
CN106156398A (en) * | 2015-05-12 | 2016-11-23 | 西门子保健有限责任公司 | For the operating equipment of area of computer aided simulation and method |
CN106326860A (en) * | 2016-08-23 | 2017-01-11 | 武汉闪图科技有限公司 | Gesture recognition method based on vision |
CN106547359A (en) * | 2016-11-30 | 2017-03-29 | 珠海迈科智能科技股份有限公司 | High voltage power supply with gesture identification function and its recognition methodss |
CN107272899A (en) * | 2017-06-21 | 2017-10-20 | 北京奇艺世纪科技有限公司 | A kind of VR exchange methods, device and electronic equipment based on dynamic gesture |
CN108363482A (en) * | 2018-01-11 | 2018-08-03 | 江苏四点灵机器人有限公司 | A method of the three-dimension gesture based on binocular structure light controls smart television |
CN108829253A (en) * | 2018-06-19 | 2018-11-16 | 北京科技大学 | A kind of analog music commander's playback method and device |
CN109272519A (en) * | 2018-09-03 | 2019-01-25 | 先临三维科技股份有限公司 | Determination method, apparatus, storage medium and the processor of nail outline |
CN109344793A (en) * | 2018-10-19 | 2019-02-15 | 北京百度网讯科技有限公司 | Aerial hand-written method, apparatus, equipment and computer readable storage medium for identification |
CN111046796A (en) * | 2019-12-12 | 2020-04-21 | 哈尔滨拓博科技有限公司 | Low-cost space gesture control method and system based on double-camera depth information |
CN111160308A (en) * | 2019-12-30 | 2020-05-15 | 深圳泺息科技有限公司 | Gesture motion recognition method, device, equipment and readable storage medium |
CN111428693A (en) * | 2020-04-27 | 2020-07-17 | 洛阳师范学院 | Finger vein three-dimensional image extraction device and method based on rotating shaft handle |
CN111949119A (en) * | 2019-05-15 | 2020-11-17 | 和硕联合科技股份有限公司 | Data quick browsing method for electronic device |
CN112379779A (en) * | 2020-11-30 | 2021-02-19 | 华南理工大学 | Dynamic gesture recognition virtual interaction system based on transfer learning |
CN112488059A (en) * | 2020-12-18 | 2021-03-12 | 哈尔滨拓博科技有限公司 | Spatial gesture control method based on deep learning model cascade |
CN112686084A (en) * | 2019-10-18 | 2021-04-20 | 宏达国际电子股份有限公司 | Image annotation system |
CN115100747A (en) * | 2022-08-26 | 2022-09-23 | 山东宝德龙健身器材有限公司 | Treadmill intelligent auxiliary system based on visual detection |
CN115562501A (en) * | 2022-12-05 | 2023-01-03 | 南京达斯琪数字科技有限公司 | Man-machine interaction method for rotary scanning display |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7453489B2 (en) * | 2003-03-24 | 2008-11-18 | Sharp Kabushiki Kaisha | Image processing apparatus, image pickup system, image display system, image pickup display system, image processing program, and computer-readable recording medium in which image processing program is recorded |
CN101995943A (en) * | 2009-08-26 | 2011-03-30 | 介面光电股份有限公司 | Three-dimensional image interactive system |
CN102306065A (en) * | 2011-07-20 | 2012-01-04 | 无锡蜂巢创意科技有限公司 | Realizing method of interactive light sensitive touch miniature projection system |
CN102955563A (en) * | 2011-08-25 | 2013-03-06 | 鸿富锦精密工业(深圳)有限公司 | Robot control system and method |
CN103257709A (en) * | 2013-05-06 | 2013-08-21 | 上海大学 | Three-dimensional space two-point touch and control system based on optical reflection and three-dimensional space two-point touch and control method based on optical reflection |
- 2013-12-26: CN CN201310731359.1A patent application CN103714322A, status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7453489B2 (en) * | 2003-03-24 | 2008-11-18 | Sharp Kabushiki Kaisha | Image processing apparatus, image pickup system, image display system, image pickup display system, image processing program, and computer-readable recording medium in which image processing program is recorded |
CN101995943A (en) * | 2009-08-26 | 2011-03-30 | 介面光电股份有限公司 | Three-dimensional image interactive system |
CN102306065A (en) * | 2011-07-20 | 2012-01-04 | 无锡蜂巢创意科技有限公司 | Realizing method of interactive light sensitive touch miniature projection system |
CN102955563A (en) * | 2011-08-25 | 2013-03-06 | 鸿富锦精密工业(深圳)有限公司 | Robot control system and method |
CN103257709A (en) * | 2013-05-06 | 2013-08-21 | 上海大学 | Three-dimensional space two-point touch and control system based on optical reflection and three-dimensional space two-point touch and control method based on optical reflection |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105022472B (en) * | 2014-04-30 | 2018-07-06 | 中国海洋大学 | Serve the percussive exchange method and device of hemiplegia patient |
CN105022472A (en) * | 2014-04-30 | 2015-11-04 | 中国海洋大学 | Knock-on interactive method and device serving patients with hemiplegia |
CN104267802A (en) * | 2014-08-29 | 2015-01-07 | 福州瑞芯微电子有限公司 | Human-computer interactive virtual touch device, system and method |
CN105491425A (en) * | 2014-09-16 | 2016-04-13 | 洪永川 | Methods for gesture recognition and television remote control |
CN104517100B (en) * | 2014-12-15 | 2017-09-29 | 中国科学院深圳先进技术研究院 | Gesture pre-judging method and system |
CN104517100A (en) * | 2014-12-15 | 2015-04-15 | 中国科学院深圳先进技术研究院 | Gesture pre-judging method and system |
CN104536571B (en) * | 2014-12-26 | 2018-02-23 | 深圳市冠旭电子股份有限公司 | The method of controlling operation thereof and device of earphone |
CN104536571A (en) * | 2014-12-26 | 2015-04-22 | 深圳市冠旭电子有限公司 | Earphone operating control method and device |
CN106156398A (en) * | 2015-05-12 | 2016-11-23 | 西门子保健有限责任公司 | For the operating equipment of area of computer aided simulation and method |
CN104952289A (en) * | 2015-06-16 | 2015-09-30 | 浙江师范大学 | Novel intelligentized somatosensory teaching aid and use method thereof |
CN106326860A (en) * | 2016-08-23 | 2017-01-11 | 武汉闪图科技有限公司 | Gesture recognition method based on vision |
CN106547359A (en) * | 2016-11-30 | 2017-03-29 | 珠海迈科智能科技股份有限公司 | High voltage power supply with gesture identification function and its recognition methodss |
CN107272899A (en) * | 2017-06-21 | 2017-10-20 | 北京奇艺世纪科技有限公司 | A kind of VR exchange methods, device and electronic equipment based on dynamic gesture |
CN107272899B (en) * | 2017-06-21 | 2020-10-30 | 北京奇艺世纪科技有限公司 | VR (virtual reality) interaction method and device based on dynamic gestures and electronic equipment |
CN108363482A (en) * | 2018-01-11 | 2018-08-03 | 江苏四点灵机器人有限公司 | A method of the three-dimension gesture based on binocular structure light controls smart television |
CN108829253A (en) * | 2018-06-19 | 2018-11-16 | 北京科技大学 | A kind of analog music commander's playback method and device |
CN108829253B (en) * | 2018-06-19 | 2021-06-01 | 北京科技大学 | Simulated music command playing method and device |
CN109272519A (en) * | 2018-09-03 | 2019-01-25 | 先临三维科技股份有限公司 | Determination method, apparatus, storage medium and the processor of nail outline |
CN109272519B (en) * | 2018-09-03 | 2021-06-01 | 先临三维科技股份有限公司 | Nail contour determination method and device, storage medium and processor |
CN109344793A (en) * | 2018-10-19 | 2019-02-15 | 北京百度网讯科技有限公司 | Aerial hand-written method, apparatus, equipment and computer readable storage medium for identification |
CN111949119A (en) * | 2019-05-15 | 2020-11-17 | 和硕联合科技股份有限公司 | Data quick browsing method for electronic device |
CN112686084A (en) * | 2019-10-18 | 2021-04-20 | 宏达国际电子股份有限公司 | Image annotation system |
CN111046796A (en) * | 2019-12-12 | 2020-04-21 | 哈尔滨拓博科技有限公司 | Low-cost space gesture control method and system based on double-camera depth information |
CN111160308B (en) * | 2019-12-30 | 2023-09-12 | 深圳新秦科技有限公司 | Gesture recognition method, device, equipment and readable storage medium |
CN111160308A (en) * | 2019-12-30 | 2020-05-15 | 深圳泺息科技有限公司 | Gesture motion recognition method, device, equipment and readable storage medium |
CN111428693A (en) * | 2020-04-27 | 2020-07-17 | 洛阳师范学院 | Finger vein three-dimensional image extraction device and method based on rotating shaft handle |
CN111428693B (en) * | 2020-04-27 | 2023-11-14 | 洛阳师范学院 | Finger vein three-dimensional image extraction device and method based on rotating shaft handle |
CN112379779A (en) * | 2020-11-30 | 2021-02-19 | 华南理工大学 | Dynamic gesture recognition virtual interaction system based on transfer learning |
CN112488059A (en) * | 2020-12-18 | 2021-03-12 | 哈尔滨拓博科技有限公司 | Spatial gesture control method based on deep learning model cascade |
CN112488059B (en) * | 2020-12-18 | 2022-10-04 | 哈尔滨拓博科技有限公司 | Spatial gesture control method based on deep learning model cascade |
CN115100747B (en) * | 2022-08-26 | 2022-11-08 | 山东宝德龙健身器材有限公司 | Treadmill intelligent auxiliary system based on visual detection |
CN115100747A (en) * | 2022-08-26 | 2022-09-23 | 山东宝德龙健身器材有限公司 | Treadmill intelligent auxiliary system based on visual detection |
CN115562501A (en) * | 2022-12-05 | 2023-01-03 | 南京达斯琪数字科技有限公司 | Man-machine interaction method for rotary scanning display |
CN115562501B (en) * | 2022-12-05 | 2023-03-03 | 南京达斯琪数字科技有限公司 | Man-machine interaction method for rotary scanning display |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103714322A (en) | Real-time gesture recognition method and device | |
CN106055091B (en) | A kind of hand gestures estimation method based on depth information and correcting mode | |
CN109544636B (en) | Rapid monocular vision odometer navigation positioning method integrating feature point method and direct method | |
CN107423729B (en) | Remote brain-like three-dimensional gait recognition system oriented to complex visual scene and implementation method | |
CN102982557B (en) | Method for processing space hand signal gesture command based on depth camera | |
Yilmaz et al. | A differential geometric approach to representing the human actions | |
CN103218605B (en) | A kind of fast human-eye positioning method based on integral projection and rim detection | |
Medioni et al. | Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models | |
CN102800126A (en) | Method for recovering real-time three-dimensional body posture based on multimodal fusion | |
CN105739702A (en) | Multi-posture fingertip tracking method for natural man-machine interaction | |
CN103514437B (en) | A kind of three-dimension gesture identifying device and three-dimensional gesture recognition method | |
Tulyakov et al. | Robust real-time extreme head pose estimation | |
Ren et al. | Change their perception: RGB-D for 3-D modeling and recognition | |
Zhang et al. | A practical robotic grasping method by using 6-D pose estimation with protective correction | |
CN111596767B (en) | Gesture capturing method and device based on virtual reality | |
CN104794449A (en) | Gait energy image acquisition method based on human body HOG (histogram of oriented gradient) features and identity identification method | |
CN108389260A (en) | A kind of three-dimensional rebuilding method based on Kinect sensor | |
CN111476077A (en) | Multi-view gait recognition method based on deep learning | |
Li et al. | FC-SLAM: Federated learning enhanced distributed visual-LiDAR SLAM in cloud robotic system | |
CN109670401A (en) | A kind of action identification method based on skeleton motion figure | |
CN113807287B (en) | 3D structured light face recognition method | |
CN113724329A (en) | Object attitude estimation method, system and medium fusing plane and stereo information | |
CN103533332A (en) | Image processing method for converting 2D video into 3D video | |
Ghidoni et al. | A multi-viewpoint feature-based re-identification system driven by skeleton keypoints | |
CN103810480B (en) | Method for detecting gesture based on RGB-D image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20140409 |
WD01 | Invention patent application deemed withdrawn after publication |