CN102591533A - Multipoint touch screen system realizing method and device based on computer vision technology - Google Patents

Multipoint touch screen system realizing method and device based on computer vision technology

Info

Publication number
CN102591533A
CN102591533A CN2012100514701A CN201210051470A
Authority
CN
China
Prior art keywords
finger
point
video camera
display screen
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100514701A
Other languages
Chinese (zh)
Other versions
CN102591533B (en)
Inventor
张红梅
陈俊彦
叶进
张向利
张全君
吴阿沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201210051470.1A priority Critical patent/CN102591533B/en
Publication of CN102591533A publication Critical patent/CN102591533A/en
Application granted granted Critical
Publication of CN102591533B publication Critical patent/CN102591533B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a multi-touch screen system implementation method and device based on computer vision technology. The device consists of three groups of cameras, an image processing device, and an ordinary display screen, with the outputs of the three camera groups connected to the image processing device. A group of horizontal cameras pointing at the screen bezel is placed at each of the left and right ends of the display screen; when a finger enters their viewing range, whether a touch has occurred is judged from the distance between the fingertip and the display screen. A group of vertical cameras pointing at the display screen, with the entire screen within their viewing range, is placed in front of the display screen; when a touch occurs, the fingertip positions are obtained from the images they capture and the fingers are tracked at the same time. The image processing device processes the images captured by the cameras to obtain the positions of the fingertip touch points and the finger action information, and converts the action information into corresponding commands to complete the touch operation.

Description

Multi-touch screen system implementation method and device based on computer vision technology
Technical field
The present invention relates to the field of touch screen technology, and specifically to a multi-touch screen system implementation method and device based on computer vision technology.
Background technology
As one of the newest computer input devices, the touch screen is currently the simplest, most convenient, and most natural mode of human-computer interaction. It has given multimedia a brand-new look and is a highly attractive new interactive multimedia device. In the current market, touch screens fall mainly into four categories: resistive, capacitive, surface acoustic wave, and infrared. For cost reasons, resistive and capacitive touch screens can only be applied to small and medium sizes. Infrared touch screens are cheap and respond quickly, but their resolution is low and they are vulnerable to interference from strong ambient light. Surface acoustic wave touch screens solve the problems of infrared touch screens but are affected by dust, water droplets, grease, and the like.
With the development of image display technology and the ever higher demands placed on the convenience and comfort of information exchange, market demand for touch screens that are large, inexpensive, and naturally interactive keeps growing. Touch screens based on computer vision technology are cheap, offer high resolution, can use any flat surface as the touch plane, resist interference from the external environment, and have a wide range of applications; they are therefore the direction of future touch screen development. However, the related-technology touch screens that have appeared on the market all place high demands on the background interface, supporting only simple black-and-white interfaces or requiring a matching stylus or touch gloves; they cannot realize natural and friendly human-computer interaction, and their usage scenarios are very limited. At the same time, existing related-technology touch screens can recognize only one contact point at a time, so the action information they can express is relatively monotonous. With the widespread application of touch technology, multi-touch has gradually attracted attention. Multi-touch can receive input information from multiple points on the screen simultaneously, greatly enriching the action information and making touch operation more convenient. But most current touch screen technologies cannot recognize multiple contact points, which has limited the development of multi-touch technology.
The Chinese invention patent with publication number CN1912816A discloses "a virtual touch screen system based on cameras". That system first points two or more cameras at the display screen from different angles and determines the mapping relationship between the imaging planes of the two cameras and the display screen; it then obtains a rough hand region through image differencing and builds a skin color model to obtain a grayscale image and a rough fingertip location; it next uses importance-sampling particle filtering to track a single finger and obtain the finger contour curve, deriving the fingertip position from the control points at the fingertip; finally, it determines the fingertip click action from the corresponding positions on the display screen of the fingertip point in two image frames. However, because that system calculates the fingertip position and tracks the fingertip point with a particle filter algorithm in order to realize touch on an ordinary display screen, it can only track a single finger and extract its fingertip, and cannot realize multi-touch.
Summary of the invention
The technical problem to be solved by the present invention is to provide a multi-touch screen system implementation method and device based on computer vision technology, which can give any display screen multi-touch capability while offering low cost, stable performance, and applicability to large screens.
To solve the above problem, the present invention is realized through the following scheme:
A multi-touch screen system implementation method based on computer vision technology comprises the following steps:
(1) Place a group of horizontal cameras pointing at the screen bezel at each of the left and right ends of the display screen to obtain action images of fingertip touches; place in front of the display screen a group of vertical cameras that point at the display screen with the entire screen within their viewing range to obtain position images of the fingertip touch points; the outputs of the 2 horizontal cameras and the 1 vertical camera are all connected to the input of the image processing device;
(2) The image processing device performs system calibration on the vertical camera placed in front of the display screen in advance, obtaining the mapping relationship between imaging plane coordinates and display screen plane coordinates;
(3) The image processing device sends a control signal to start the 2 horizontal cameras; at system initialization, the horizontal cameras at the left and right ends of the screen record the position of the screen bezel in the image; after a finger enters the viewing range of the horizontal cameras, the background subtraction method and the skin color segmentation algorithm are used to obtain the finger contour, and whether a touch has occurred is judged from the distance between the fingertip and the screen bezel;
(4) When a touch occurs, the image processing device sends a control signal to start the vertical camera, uses the background subtraction method and the skin color segmentation algorithm to obtain the finger contour, and obtains the fingertip pixels in real time and accurately through a vector-based real-time fingertip location algorithm;
(5) The image processing device converts the fingertip pixels into the corresponding display screen coordinates according to the mapping relationship between imaging plane coordinates and display screen plane coordinates;
(6) The image processing device performs click detection according to the fingertip coordinates and the fingertip dwell time; if the click condition is met, it converts the action into the corresponding instruction to complete the touch operation; if not, the touch operation is not completed, a Kalman filter is used to track the fingertip points, and the fingertip positions in the next frame are predicted;
(7) In the next frame, a region of interest (ROI) centered on the predicted fingertip point is marked and the skin color segmentation and fingertip location of step (4) are performed on that region, looping continuously until the touch operation ends.
In the above scheme, the concrete steps of the vector-based real-time fingertip location algorithm in step (4) are as follows:
(a) Determine the value of the neighborhood radius r: r must be greater than the finger's width in the image and less than the finger's length in the image, i.e.

(finger.width × image.width) / screen.width < r < (finger.length × image.length) / screen.length

where finger.width and finger.length are the width and length of the finger, image.width and image.length are the pixel width and length of the image, and screen.width and screen.length are the width and length of the touch screen;
(b) Compute the flexibility (curvature measure) of each point on the closed hand-region contour curve S, obtaining the flexibility set C = {C_i, i = 1, 2, …, n}, where the flexibility C_i of a point P_i on S is

C_i = sgn( \vec{P_iP_{i-r}} × \vec{P_iP_{i+r}} ) · | \vec{P_{i-r}P_{i+r}} |

where P_{i-r} and P_{i+r} are the two endpoints of the r-neighborhood set Ω_i of P_i, \vec{P_iP_{i-r}} × \vec{P_iP_{i+r}} is the cross product of the vectors \vec{P_iP_{i-r}} and \vec{P_iP_{i+r}}, and | \vec{P_{i-r}P_{i+r}} | is the modulus of the vector \vec{P_{i-r}P_{i+r}};
(c) In set C, take the element whose value |C_i| is minimal and whose sign is positive; the point P_i corresponding to this element is a fingertip point;
(d) Let the finger width-difference threshold be a constant d; extract from C all positive elements outside the neighborhood of P_i that satisfy |C_k| < |C_i| + d, obtaining a new set Cr;
(e) From the result of step (d), if the point set corresponding to Cr contains connected sets, keep in each connected set only the point with the minimal |C_i| value and remove the remaining points; the points that finally remain are the fingertip points of all the fingers.
In the above scheme, the neighborhood radius r in step (a) usually takes an intermediate value between the finger's width and its length in the image.
In the above scheme, the concrete steps in step (6) of tracking the fingertip points with a Kalman filter and predicting the fingertip positions in the next frame are as follows: use

\hat{X}_k^- = A X_{k-1} + B U_k

to predict the fingertip position; in the next frame, mark an ROI centered on the predicted fingertip point and perform the skin color segmentation and fingertip location of step (4); then substitute the accurate fingertip coordinates obtained as the system measurement vector Z_k into

X_k = \hat{X}_k^- + K_k ( Z_k − H \hat{X}_k^- )

to correct the filter; after correction, predict the fingertip position again, looping continuously until the touch operation ends.
In the above two formulas, \hat{X}_k^- is the predicted state vector, A is the state-transition matrix, X_{k-1} is the system state vector at time k−1, B is the control matrix, U_k is the control vector, X_k is the system state vector at time k, K_k is the Kalman gain matrix, Z_k is the system measurement vector at time k, and H is the observation matrix.
In the above scheme, the concrete steps of the skin color segmentation algorithm described in steps (3) and (4) are as follows:
(a) Before a finger enters the imaging region, the horizontal and vertical cameras first store several pictures and convert them to grayscale, then compute their average grayscale image as the background, i.e.

G_bkimg(x, y) = avg( Σ_{i=1..n} G_i(x, y) )    ③

where G_i is the i-th stored picture, n is the number of stored pictures, and G_bkimg is the resulting background grayscale image;
(b) Establish a suitable skin color model on the YCbCr color space and perform skin color segmentation on the finger region, namely:
(b.1) Convert the RGB images of the horizontal and vertical cameras into YCbCr images, where the conversion from RGB space to YCbCr space is:

Y = 0.299 × R + 0.587 × G + 0.114 × B
Cr = 0.713 × (R − Y) + δ
Cb = 0.564 × (B − Y) + δ    ④

where Y, Cb, Cr are the luminance, blue-chroma, and red-chroma component values of the YCbCr space, R, G, B are the red, green, and blue component values of the RGB space, and δ is the chroma offset constant (typically 128 for 8-bit images);
(b.2) Take the difference between the Y component obtained by formula ④ and the background grayscale image obtained by formula ③ to get the background difference value D(x, y), i.e.

D(x, y) = abs( Y_ftimg(x, y) − G_bkimg(x, y) )    ⑤

where Y_ftimg is the Y component of the foreground image and D(x, y) is the background difference value;
(b.3) From formulas ④ and ⑤, the YCbCr skin color detection model is established as:

result = 1 if Cr/Cb ∈ [Min, Max] and D > T; 0 otherwise    ⑥

where Cb and Cr are the blue-chroma and red-chroma component values of the YCbCr space, D is the background difference value, the constants Min and Max are the minimum and maximum thresholds of the Cr/Cb ratio, and the constant T is the background difference threshold.
This skin color detection model judges skin regions by setting the minimum and maximum thresholds of the Cr/Cb ratio and the background difference threshold: when result is 1, the pixel belongs to a skin region; when result is 0, it is a non-skin region.
In the above scheme, the method further comprises automatically updating the background according to formula ⑦:

I_acc(x, y) = (1 − α) × I_acc(x, y) + α × I_ftimg(x, y)    ⑦

where I_acc is the updated background image, I_ftimg is the foreground image, and α is the update coefficient.
A multi-touch screen system device based on computer vision technology is mainly composed of a display screen, 2 horizontal cameras, 1 vertical camera, and an image processing device. The 2 horizontal cameras are placed at the left and right ends of the display screen respectively and both point at the screen bezel; the vertical camera is placed in front of the display screen and points at it, with the entire screen within its viewing range; the outputs of the 2 horizontal cameras and the 1 vertical camera are all connected to the input of the image processing device.
The image processing device comprises a system calibration module, a touch judgment module, a touch control-point extraction module, a coordinate conversion module, a click detection module, and a Kalman tracking module. The outputs of the 2 horizontal cameras are connected to the input of the touch judgment module; the output of the vertical camera and the output of the touch judgment module are connected to 2 inputs of the touch control-point extraction module respectively; the output of the touch control-point extraction module and the output of the system calibration module are connected to 2 inputs of the coordinate conversion module respectively; the output of the coordinate conversion module is connected to the input of the click detection module; the output of the click detection module splits into two paths: one is connected via the Kalman tracking module to another input of the touch control-point extraction module, and the other serves as the output of the multi-touch screen system device based on computer vision technology.
In the above scheme, the touch judgment module comprises a horizontal-camera background subtraction unit, a horizontal-camera skin color segmentation unit, and a touch judgment unit. The input of the horizontal-camera background subtraction unit is connected to the outputs of the 2 horizontal cameras; the output of the horizontal-camera background subtraction unit is connected via the horizontal-camera skin color segmentation unit to the input of the touch judgment unit; and the output of the touch judgment unit is connected to the touch control-point extraction module.
In the above scheme, the touch control-point extraction module comprises a vertical-camera background subtraction unit, a vertical-camera skin color segmentation unit, and a fingertip location unit. The input of the vertical-camera background subtraction unit is connected to the output of the vertical camera; the output of the vertical-camera background subtraction unit is connected via the vertical-camera skin color segmentation unit to the input of the fingertip location unit; and the output of the fingertip location unit is connected to the coordinate conversion module.
Compared with the prior art, the present invention has the following features:
1. Computer vision technology is used to overcome the shortcomings of conventional touch screens. Because the camera positions can be adjusted freely, the screen size is essentially unrestricted: as long as the display screen is within the cameras' field of view, fingertips can be located accurately, so the method can be applied to large display screens;
2. The image processing device can be a PC or an embedded device; the system only needs three groups of cameras added to existing equipment to give an ordinary display screen touch capability, so the design cost is low;
3. The system can also be used with other screens, such as projection screens or projection walls; the usage scenarios are easy to extend and are not limited by the background interface or the environment;
4. Through algorithmic improvements the system can recognize multiple contact points, making it suitable for multi-touch system environments.
Description of drawings
Fig. 1 is a connection diagram of a multi-touch screen system device based on computer vision technology;
Fig. 2 is a schematic diagram of the internal module structure of a multi-touch screen system device based on computer vision technology;
Fig. 3 is a flow chart of a multi-touch screen system implementation method based on computer vision technology;
Fig. 4 is the contour model for fingertip location.
Embodiments
The present invention is further described below with reference to the drawings:
Referring to Fig. 1, a multi-touch screen system device based on computer vision technology according to the present invention is composed of 3 groups of cameras, 1 image processing device, and 1 ordinary display screen. The outputs of the 3 groups of cameras in the figure are all connected to the image processing device. A group of horizontal cameras pointing at the screen bezel is placed at each of the left and right ends of the display screen; when a finger enters the cameras' viewing range, whether a touch has occurred can be judged from the distance between the fingertip and the display screen. A group of vertical cameras pointing at the display screen, with the entire screen within their viewing range, is placed in front of the display screen; when a touch occurs, the fingertip positions can be obtained from the images captured by these cameras while the fingers are tracked. The image processing device processes the images collected by the cameras, obtains the positions of the fingertip touch points and the finger action information, and converts the information into corresponding instructions to complete the touch operation.
Fig. 2 is a schematic diagram of the internal module structure of a multi-touch screen system device based on computer vision technology. In the figure, the image processing device comprises a system calibration module, a touch judgment module, a touch control-point extraction module, a coordinate conversion module, a click detection module, and a Kalman tracking module. The outputs of the 2 horizontal cameras are connected to the input of the touch judgment module. The output of the vertical camera and the output of the touch judgment module are connected to 2 inputs of the touch control-point extraction module respectively. The output of the touch control-point extraction module and the output of the system calibration module are connected to 2 inputs of the coordinate conversion module respectively. The output of the coordinate conversion module is connected to the input of the click detection module. The output of the click detection module splits into two paths: one is connected via the Kalman tracking module to another input of the touch control-point extraction module, and the other serves as the output of the multi-touch screen system device based on computer vision technology.
The system calibration module applies the system calibration algorithm to the vertical camera to obtain the mapping relationship between the camera's imaging plane coordinates and the display screen coordinates.
The touch judgment module comprises a horizontal-camera background subtraction unit, a horizontal-camera skin color segmentation unit, and a touch judgment unit. The input of the horizontal-camera background subtraction unit is connected to the outputs of the 2 horizontal cameras; the output of the horizontal-camera background subtraction unit is connected via the horizontal-camera skin color segmentation unit to the input of the touch judgment unit; and the output of the touch judgment unit is connected to the touch control-point extraction module. The touch judgment module processes the pictures obtained by the 2 horizontal cameras through the internal algorithms of these units and judges whether a touch has occurred from the distance between the fingertip and the screen bezel, as in the sketch below.
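As an illustration only (not the patent's code), a touch test of this kind could look like the following sketch, where the bezel-line representation and the pixel threshold are assumptions:

import numpy as np

def touch_detected(tip_xy, bezel_line, threshold_px=3.0):
    """Side-camera touch test: declare a touch when the fingertip pixel lies
    within threshold_px of the screen-bezel line a*x + b*y + c = 0 recorded
    at system initialization. threshold_px is an assumed tuning constant."""
    a, b, c = bezel_line
    dist = abs(a * tip_xy[0] + b * tip_xy[1] + c) / np.hypot(a, b)
    return dist <= threshold_px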
The touch control-point extraction module comprises a vertical-camera background subtraction unit, a vertical-camera skin color segmentation unit, and a fingertip location unit. The input of the vertical-camera background subtraction unit is connected to the output of the vertical camera; the output of the vertical-camera background subtraction unit is connected via the vertical-camera skin color segmentation unit to the input of the fingertip location unit; and the output of the fingertip location unit is connected to the coordinate conversion module. The touch control-point extraction module processes the pictures obtained by the vertical camera through the internal algorithms of these units and obtains the position coordinates of the fingertips as the touch control points.
The coordinate conversion module converts the fingertip position coordinates into the corresponding display screen coordinates according to the mapping relationship between imaging plane coordinates and display screen plane coordinates.
The click detection module performs click detection according to the fingertip coordinates and the fingertip dwell time; if the click condition is met, it converts the action into the corresponding instruction to complete the touch operation.
The Kalman tracking module uses a Kalman filter to track the fingertip points and predict the fingertip positions in the next frame.
A flow chart of the multi-touch screen system implementation method based on computer vision technology realized with the above device is shown in Fig. 3 and comprises the following steps:
(1) Perform system calibration on the vertical camera placed in front of the display screen to obtain the mapping relationship between the camera's imaging plane coordinates and the display screen plane coordinates.
(2) Start the 2 horizontal cameras; after a finger enters their viewing range, use the background subtraction method and the skin color segmentation algorithm to obtain the finger contour, and judge whether a touch has occurred from the distance between the fingertip and the screen bezel.
(3) When a touch occurs, start the vertical camera, use the improved background subtraction method and skin color segmentation method to obtain the finger contour, and obtain the fingertip pixels in real time and accurately through the fingertip location algorithm. Convert the fingertip pixels into the corresponding display screen coordinates according to the mapping relationship between imaging plane coordinates and display screen plane coordinates.
(4) Perform click detection according to the fingertip coordinates and the fingertip dwell time; if the click condition is met, convert the action into the corresponding instruction to complete the touch operation. If not, the touch operation is not completed; use a Kalman filter to track the fingertip points and predict the fingertip positions in the next frame.
(5) In the next frame, mark an ROI centered on the predicted fingertip point and perform the skin color segmentation and fingertip location of step (3), looping continuously until the touch operation ends.
In this system, the image processing algorithms consist of system calibration, finger contour segmentation, fingertip location, finger tracking, and related parts; these algorithms are described below.
In computer vision, the geometric relationship between the spatial position of a point and its corresponding point in the image collected by the camera is determined by the camera's imaging model, and the calculation of the imaging model parameters is called system calibration. Through system calibration, the mapping relationship between the camera's imaging plane and the display screen plane can be obtained.
Suppose the display screen lies in the plane Z = 0 of the world coordinate system, with the upper-left corner of the screen as the origin. A pixel (u, v) in the image plane and the corresponding display screen coordinate point (x_w, y_w) then have the following mapping relationship:

s · [u, v, 1]^T = A [r_1, r_2, t] [x_w, y_w, 1]^T    (1)

where s is a scale factor and matrix A is the camera's intrinsic parameter matrix, which can be computed from checkerboard calibration board images captured at 3 different positions; [r_1, r_2, t] is the simplified matrix of the camera's extrinsic parameter matrix. The present invention displays a checkerboard image on the display screen and uses a corner detection algorithm to extract the checkerboard's inner corner points as corresponding points. In this way, equations relating multiple groups of image pixels and display screen coordinate points can be established according to formula (1) to solve for the matrix [r_1, r_2, t]. The inverse of the matrix A[r_1, r_2, t] is the transformation matrix from the camera's imaging-plane pixels to display screen plane coordinate points.
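As an illustration only (not the patent's code), the image-to-screen mapping could be estimated with OpenCV from a checkerboard displayed on the screen; the function and parameter names below (calibrate_screen_mapping, screen_corners) are assumptions:

import cv2
import numpy as np

def calibrate_screen_mapping(gray_frame, pattern_size, screen_corners):
    """Estimate the 3x3 transform from image-plane pixels to screen coordinates.

    gray_frame     -- 8-bit grayscale view of the on-screen checkerboard
    pattern_size   -- inner-corner grid of the board, e.g. (9, 6)
    screen_corners -- Nx2 float array of the same corners' screen coordinates,
                      in the same order as the detected image corners
    """
    found, corners = cv2.findChessboardCorners(gray_frame, pattern_size)
    if not found:
        raise RuntimeError("checkerboard not detected")
    corners = cv2.cornerSubPix(
        gray_frame, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    # The homography from image corners to screen corners plays the role of
    # the inverse of A[r1 r2 t] in formula (1).
    H, _ = cv2.findHomography(corners.reshape(-1, 2), screen_corners)
    return H

def to_screen(H, u, v):
    # Map a detected fingertip pixel (u, v) to screen coordinates.
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]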
Effective segmentation of the touching fingers and fingertip location are the keys of the present invention. The present invention proposes a method that first uses background subtraction to roughly extract the foreground finger region and then uses skin color segmentation to further segment the finger region. This method removes colors in the background that are similar to skin color and strengthens the validity of finger segmentation.
Before a finger enters the imaging region, several pictures are first stored and converted to grayscale, and their average grayscale image is then computed as the background, as shown in formula (2):

G_bkimg(x, y) = avg( Σ_{i=1..n} G_i(x, y) )    (2)

where G_i is the i-th stored picture, n is the number of stored pictures, and G_bkimg is the resulting background grayscale image.
Based on the skin color segmentation results in actual detection, and in order to reduce the amount of computation, a suitable skin color model is established on the YCbCr color space to perform skin color segmentation of the finger region. The conversion from RGB space to YCbCr space is shown in formula (3):

Y = 0.299 × R + 0.587 × G + 0.114 × B
Cr = 0.713 × (R − Y) + δ
Cb = 0.564 × (B − Y) + δ    (3)

where Y, Cb, Cr are the luminance, blue-chroma, and red-chroma component values of the YCbCr space, R, G, B are the red, green, and blue component values of the RGB space, and δ is the chroma offset constant.
Taking the difference between the Y component obtained by formula (3) and the background grayscale image obtained by formula (2) gives the background difference value D(x, y), as shown in formula (4):

D(x, y) = abs( Y_ftimg(x, y) − G_bkimg(x, y) )    (4)

where Y_ftimg is the Y component of the foreground image and D(x, y) is the background difference value.
From formulas (3) and (4), the YCbCr skin color detection model is established as:

result = 1 if Cr/Cb ∈ [Min, Max] and D > T; 0 otherwise

where Cb and Cr are the blue-chroma and red-chroma component values of the YCbCr space, D is the background difference value, the constants Min and Max are the minimum and maximum thresholds of the Cr/Cb ratio, and the constant T is the background difference threshold.
This skin color detection model judges skin regions by setting the minimum and maximum thresholds of the Cr/Cb ratio and the background difference threshold. When result is 1, the pixel belongs to a skin region; when result is 0, it is a non-skin region.
If the background image cannot be updated automatically as environmental factors change, long-term use of a fixed background will seriously affect the segmentation of the foreground target. The background therefore also needs to be updated automatically, as shown in formula (5):

I_acc(x, y) = (1 − α) × I_acc(x, y) + α × I_ftimg(x, y)    (5)

where I_acc is the updated background image, I_ftimg is the foreground image, and α is the update coefficient.
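As an illustration only (not the patent's code), the background modeling and skin segmentation above could be sketched with OpenCV as follows; the threshold values cr_cb_min, cr_cb_max, diff_t, and alpha stand in for Min, Max, T, and α and are placeholders, not values from the patent:

import cv2
import numpy as np

def build_background(gray_frames):
    # Average the n stored grayscale frames, as in formula (2).
    return np.mean(np.stack(gray_frames).astype(np.float32), axis=0)

def segment_skin(frame_bgr, bg_gray, cr_cb_min=1.1, cr_cb_max=1.9, diff_t=25.0):
    # Skin model of formulas (3)-(4): keep pixels whose Cr/Cb ratio lies in
    # [Min, Max] and whose background difference exceeds T.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)          # OpenCV channel order is Y, Cr, Cb
    d = np.abs(y - bg_gray)               # background difference D(x, y)
    ratio = cr / np.maximum(cb, 1e-6)
    mask = (ratio >= cr_cb_min) & (ratio <= cr_cb_max) & (d > diff_t)
    return mask.astype(np.uint8) * 255

def update_background(bg, frame_gray, alpha=0.05):
    # Running background update of formula (5).
    return (1.0 - alpha) * bg + alpha * frame_gray.astype(np.float32)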
The fingertip is the control point of the touch screen. The present invention proposes a vector-based real-time fingertip location algorithm, which quickly and accurately determines fingertip positions from the feature that, at an inflection point of the contour curve, the modulus of the vector joining the two corresponding adjacent boundary points is shortest.
The closed edge curve S of the hand-region contour can be expressed as an ordered point set:

S = { P_i = (x_i, y_i), i = 1, 2, …, n }

where |x_i − x_{i+1}| ≤ 1 and |y_i − y_{i+1}| ≤ 1, with at least one of the two values equal to 1; x_i and y_i are the abscissa and ordinate of point P_i.
For a point P_i on the curve set S, take a constant r as the neighborhood radius and take a set centered on P_i:

Ω_i = { P_k, k = i − r, …, i, …, i + r }

This set is called the r-neighborhood of P_i, and Ω_i ⊆ S. If a point P_k lies in the r-neighborhood of P_i, then P_k is said to be r-adjacent to P_i. If there is a set V (V ⊆ S) such that, for any point taken from V, another point can be found in V that is r-adjacent to it, then V is called a connected set.
By the two-dimensional Euclidean metric, the modulus of the vector \vec{P_{i-r}P_{i+r}} is:

| \vec{P_{i-r}P_{i+r}} | = sqrt( (x_{i-r} − x_{i+r})² + (y_{i-r} − y_{i+r})² )

From the characteristics of the curve, the more sharply the direction of the curve changes, the more bent the curve is and the smaller the value of | \vec{P_{i-r}P_{i+r}} | becomes, as shown in Fig. 4; the value corresponding to an inflection point is minimal. Meanwhile, the convexity of the curve at point P_i can be judged from the cross product of the vectors \vec{P_iP_{i-r}} and \vec{P_iP_{i+r}}: it is easy to verify that when the cross product is greater than 0 the curve is convex at P_i, and otherwise it is concave.
Therefore, the flexibility of the curve at point P_i can be defined as:

C_i = sgn( \vec{P_iP_{i-r}} × \vec{P_iP_{i+r}} ) · | \vec{P_{i-r}P_{i+r}} |    (8)

where \vec{P_iP_{i-r}} × \vec{P_iP_{i+r}} is the cross product of the vectors \vec{P_iP_{i-r}} and \vec{P_iP_{i+r}}, and | \vec{P_{i-r}P_{i+r}} | is the modulus of the vector \vec{P_{i-r}P_{i+r}}.
According to the shape characteristics of the hand contour, a fingertip is the convex inflection point of the finger curve where the arc changes most, so fingertip points can be judged from the flexibility of each point computed by the curvature formula. Since the widths of the fingers differ little, the single-fingertip detection method can first be used to find the fingertip of the narrowest finger, and the other fingertips can then be judged from the flexibility of this fingertip. The fingertip detection steps are as follows:
(3.1) Set the value of the neighborhood radius r. The neighborhood radius r affects the curvature computation. If r is too small, the change of the vector modulus | \vec{P_{i-r}P_{i+r}} | is not obvious and inflection points are hard to judge; if r is too large, it exceeds the length of the finger and false inflection points easily appear. According to the shape characteristics of the hand contour, r must be greater than the finger's width in the image and less than the finger's length in the image, as shown in (9):

(finger.width × image.width) / screen.width < r < (finger.length × image.length) / screen.length    (9)

where finger.width and finger.length are the width and length of the finger, image.width and image.length are the pixel width and length of the image, and screen.width and screen.length are the width and length of the touch screen. Usually r takes an intermediate value between the finger's width and its length in the image.
(3.2) Compute the flexibility of each point on the closed hand-region contour curve S according to formula (8), obtaining the flexibility set C = {C_i, i = 1, 2, …, n}.
(3.3) In set C, take the element whose |C_i| value is minimal and whose sign is positive; the point P_i corresponding to this element is a fingertip point.
(3.4) Let the finger width-difference threshold be a constant d; extract from C all positive elements outside the neighborhood of P_i that satisfy |C_k| < |C_i| + d, obtaining a new set Cr.
(3.5) From the result of step (3.4), if the point set corresponding to Cr contains connected sets, keep in each connected set only the point with the minimal |C_i| value and remove the remaining points. The points that remain at the end are the fingertip points of all the fingers.
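As an illustration only (not the patent's code), steps (3.1) to (3.5) could be sketched on an OpenCV contour as follows; the convexity sign assumes a counter-clockwise contour, and the simple circular index-gap suppression at the end stands in for the connected-set filtering of step (3.5):

import numpy as np

def fingertip_points(contour, r, d):
    """contour: Nx2 array of closed-contour points (e.g. a squeezed result of
    cv2.findContours); r: neighborhood radius in contour points; d: the
    width-difference threshold of step (3.4). Returns fingertip indices."""
    pts = contour.astype(np.float32)
    n = len(pts)
    idx = np.arange(n)
    a = pts[(idx - r) % n] - pts                       # vector P_i -> P_{i-r}
    b = pts[(idx + r) % n] - pts                       # vector P_i -> P_{i+r}
    cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]      # sign gives convexity
    chord = np.linalg.norm(pts[(idx + r) % n] - pts[(idx - r) % n], axis=1)
    c = np.sign(cross) * chord                         # flexibility C_i, (8)
    convex = c > 0
    if not convex.any():
        return []
    best = idx[convex][np.argmin(c[convex])]           # step (3.3)
    cand = idx[convex & (np.abs(c) < np.abs(c[best]) + d)]   # step (3.4)
    tips = []                                          # step (3.5)
    for i in cand[np.argsort(c[cand])]:
        if all(min(abs(i - t), n - abs(i - t)) > r for t in tips):
            tips.append(int(i))
    return tips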
The positions of the fingertips in each frame constitute the trajectory of the target's motion. The present invention introduces a Kalman filter to predict, from the past trajectory of a fingertip, the position where it may appear in the next frame, thereby realizing finger tracking.
The Kalman filter algorithm comprises two models, a signal model and an observation model; the signal model is shown in formula (10) and the observation model in formula (11):

X_k = A X_{k-1} + B U_k + w_k    (10)
Z_k = H X_k + v_k    (11)

where X_k and X_{k-1} are the system state vectors at times k and k−1, Z_k is the system measurement vector at time k, A is the state-transition matrix, B is the control matrix, H is the observation matrix, U_k is the control vector, and w_k and v_k are the normally distributed motion and measurement noise vectors, which are mutually uncorrelated, that is:

p(w) ~ N(0, Q),  p(v) ~ N(0, R)    (12)

where Q is the covariance matrix of the motion noise and R is the covariance matrix of the measurement noise.
Let the system state vector X_k be expressed as [x_k, v_xk, y_k, v_yk]^T, where x_k and y_k are the coordinate components of the fingertip pixel on the image x-axis and y-axis, and v_xk and v_yk are the velocities of the fingertip pixel along the image x-axis and y-axis. The system measurement vector Z_k is expressed as [x_k, y_k]^T, where x_k and y_k are the coordinate components of the current frame's fingertip pixel on the image x-axis and y-axis.
On the x-axis, the equations of motion of the fingertip coordinate are shown in formulas (13) and (14):

x_k = x_{k-1} + v_{x,k-1} · τ + a_k · τ² / 2    (13)
v_{x,k} = v_{x,k-1} + a_k · τ    (14)

where τ is the time variable and a_k is the acceleration at time k. Because the correlation time between two adjacent frames in the tracking process is short, the motion of the fingertip coordinate along both the x-axis and the y-axis can be approximated as rectilinear motion perturbed by a random acceleration, the acceleration a_k being a random quantity. Similar equations likewise hold on the y-axis.
According to formulas (13) and (14), the control vector U_k can be defined as the acceleration a_k, and the state-transition matrix A and the control matrix B are

A = [[1, τ, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, τ],
     [0, 0, 0, 1]],    B = [τ²/2, τ, τ²/2, τ]^T

where the constant τ is the time interval between two frames.
Kalman filtering is divided into two phases, prediction and correction. In the prediction phase, the filter estimates the state of the probabilistic model from the current state, as shown in formulas (15) and (16):

\hat{X}_k^- = A X_{k-1} + B U_k    (15)
P_k^- = A P_{k-1} A^T + Q    (16)

where P_k^- is the a priori error covariance matrix, P_{k-1} is the a posteriori error covariance matrix of the previous moment, Q is the covariance matrix of the motion noise, and \hat{X}_k^- is the predicted state vector.
In the correction phase, the filter adjusts the probabilistic model state on the basis of the measured model state, as shown in formulas (17) to (19):

K_k = P_k^- H^T ( H P_k^- H^T + R )^{-1}    (17)
X_k = \hat{X}_k^- + K_k ( Z_k − H \hat{X}_k^- )    (18)
P_k = ( I − K_k H ) P_k^-    (19)

where K_k is the Kalman gain matrix and P_k is the a posteriori error covariance matrix of the current moment.
In the fingertip tracking process, formula (15) is used to predict the fingertip position; in the next frame, an ROI centered on the predicted fingertip point is marked and skin color segmentation and fingertip location are performed within it; the accurate fingertip coordinates obtained are then taken as the system measurement vector Z_k and substituted into formula (18) to correct the filter. After correction, the fingertip position is predicted again, looping continuously until the touch operation ends. Adding Kalman filter tracking of the fingertips enhances the real-time performance and robustness of the system. The ROI mentioned above is the "region of interest": with the fingertip point predicted by the Kalman filter as its center, a region of interest is marked in the image, and subsequent image processing is applied only to the data of that region of the frame; data outside the region can be ignored, which reduces processing time and strengthens real-time performance and robustness.
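As an illustration only (not the patent's code), the predict-correct loop of formulas (15) to (19) could be sketched as follows; the random-acceleration term B·U_k is folded into the process-noise covariance Q here, and the noise magnitudes q and r_var are placeholders:

import numpy as np

class FingertipKalman:
    """Constant-velocity Kalman tracker for one fingertip; the state vector is
    [x, vx, y, vy] as in the text and the measurement is the pixel (x, y)."""
    def __init__(self, tau, q=1e-2, r_var=1.0):
        self.A = np.array([[1.0, tau, 0.0, 0.0],
                           [0.0, 1.0, 0.0, 0.0],
                           [0.0, 0.0, 1.0, tau],
                           [0.0, 0.0, 0.0, 1.0]])   # state-transition matrix
        self.H = np.array([[1.0, 0.0, 0.0, 0.0],
                           [0.0, 0.0, 1.0, 0.0]])   # observation matrix
        self.Q = q * np.eye(4)       # motion-noise covariance (placeholder)
        self.R = r_var * np.eye(2)   # measurement-noise covariance (placeholder)
        self.P = np.eye(4)           # error covariance
        self.x = np.zeros(4)         # state estimate

    def predict(self):
        # Formulas (15)-(16): prior state and prior error covariance.
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[0], self.x[2]  # predicted pixel, used as the ROI center

    def correct(self, zx, zy):
        # Formulas (17)-(19): Kalman gain, posterior state, posterior covariance.
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[0], self.x[2]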

Claims (9)

1. A multi-touch screen system implementation method based on computer vision technology, characterized by comprising the following steps:
(1) Place a group of horizontal cameras pointing at the screen bezel at each of the left and right ends of the display screen to obtain action images of fingertip touches; place in front of the display screen a group of vertical cameras that point at the display screen with the entire screen within their viewing range to obtain position images of the fingertip touch points; the outputs of the 2 horizontal cameras and the 1 vertical camera are all connected to the input of the image processing device;
(2) The image processing device performs system calibration on the vertical camera placed in front of the display screen in advance, obtaining the mapping relationship between imaging plane coordinates and display screen plane coordinates;
(3) The image processing device sends a control signal to start the 2 horizontal cameras; at system initialization, the horizontal cameras at the left and right ends of the screen record the position of the screen bezel in the image; after a finger enters the viewing range of the horizontal cameras, the background subtraction method and the skin color segmentation algorithm are used to obtain the finger contour, and whether a touch has occurred is judged from the distance between the fingertip and the screen bezel;
(4) When a touch occurs, the image processing device sends a control signal to start the vertical camera, uses the background subtraction method and the skin color segmentation algorithm to obtain the finger contour, and obtains the fingertip pixels in real time and accurately through a vector-based real-time fingertip location algorithm;
(5) The image processing device converts the fingertip pixels into the corresponding display screen coordinates according to the mapping relationship between imaging plane coordinates and display screen plane coordinates;
(6) The image processing device performs click detection according to the fingertip coordinates and the fingertip dwell time; if the click condition is met, it converts the action into the corresponding instruction to complete the touch operation; if not, the touch operation is not completed, a Kalman filter is used to track the fingertip points, and the fingertip positions in the next frame are predicted;
(7) In the next frame, a region of interest centered on the predicted fingertip point is marked and the skin color segmentation and fingertip location of step (4) are performed, looping continuously until the touch operation ends.
2. The multi-touch screen system implementation method based on computer vision technology according to claim 1, characterized in that the concrete steps of the vector-based real-time fingertip location algorithm in step (4) are as follows:
(a) Determine the value of the neighborhood radius r: r must be greater than the finger's width in the image and less than the finger's length in the image, i.e.

(finger.width × image.width) / screen.width < r < (finger.length × image.length) / screen.length

where finger.width and finger.length are the width and length of the finger, image.width and image.length are the pixel width and length of the image, and screen.width and screen.length are the width and length of the touch screen;
(b) Compute the flexibility of each point on the closed hand-region contour curve S, obtaining the flexibility set C = {C_i, i = 1, 2, …, n}, where the flexibility C_i of a point P_i on S is

C_i = sgn( \vec{P_iP_{i-r}} × \vec{P_iP_{i+r}} ) · | \vec{P_{i-r}P_{i+r}} |

where P_{i-r} and P_{i+r} are the two endpoints of the r-neighborhood set Ω_i of P_i, \vec{P_iP_{i-r}} × \vec{P_iP_{i+r}} is the cross product of the vectors \vec{P_iP_{i-r}} and \vec{P_iP_{i+r}}, and | \vec{P_{i-r}P_{i+r}} | is the modulus of the vector \vec{P_{i-r}P_{i+r}};
(c) In set C, take the element whose value |C_i| is minimal and whose sign is positive; the point P_i corresponding to this element is a fingertip point;
(d) Let the finger width-difference threshold be a constant d; extract from C all positive elements outside the neighborhood of P_i that satisfy |C_k| < |C_i| + d, obtaining a new set Cr;
(e) From the result of step (d), if the point set corresponding to Cr contains connected sets, keep in each connected set only the point with the minimal |C_i| value and remove the remaining points; the points that finally remain are the fingertip points of all the fingers.
3. The multi-touch screen system implementation method based on computer vision technology according to claim 2, characterized in that in step (a) the neighborhood radius r takes an intermediate value between the finger's width and its length in the image.
4. The multi-touch screen system implementation method based on computer vision technology according to claim 1, characterized in that the concrete steps in step (6) of tracking the fingertip points with a Kalman filter and predicting the fingertip positions in the next frame are as follows: use

\hat{X}_k^- = A X_{k-1} + B U_k

to predict the fingertip position; in the next frame, mark an ROI centered on the predicted fingertip point and perform skin color segmentation and fingertip location; then substitute the accurate fingertip coordinates obtained as the system measurement vector Z_k into

X_k = \hat{X}_k^- + K_k ( Z_k − H \hat{X}_k^- )

to correct the filter; after correction, predict the fingertip position again, looping continuously until the touch operation ends;
In the above two formulas, \hat{X}_k^- is the predicted state vector, A is the state-transition matrix, X_{k-1} is the system state vector at time k−1, B is the control matrix, U_k is the control vector, X_k is the system state vector at time k, K_k is the Kalman gain matrix, Z_k is the system measurement vector at time k, and H is the observation matrix.
5. The multi-touch screen system implementation method based on computer vision technology according to claim 1, characterized in that the concrete steps of obtaining the finger contour with the background subtraction method and the skin color segmentation algorithm described in steps (3) and (4) are as follows:
(a) Before a finger enters the imaging region, the horizontal and vertical cameras first store several pictures and convert them to grayscale, then compute their average grayscale image as the background, i.e.

G_bkimg(x, y) = avg( Σ_{i=1..n} G_i(x, y) )    ③

where G_i is the i-th stored picture, n is the number of stored pictures, and G_bkimg is the resulting background grayscale image;
(b) Establish a suitable skin color model on the YCbCr color space and perform skin color segmentation on the finger region, namely:
(b.1) Convert the RGB images of the horizontal and vertical cameras into YCbCr images, where the conversion from RGB space to YCbCr space is:

Y = 0.299 × R + 0.587 × G + 0.114 × B
Cr = 0.713 × (R − Y) + δ
Cb = 0.564 × (B − Y) + δ    ④

where Y, Cb, Cr are the luminance, blue-chroma, and red-chroma component values of the YCbCr space, R, G, B are the red, green, and blue component values of the RGB space, and δ is the chroma offset constant;
(b.2) Take the difference between the Y component obtained by formula ④ and the background grayscale image obtained by formula ③ to get the background difference value D(x, y), i.e.

D(x, y) = abs( Y_ftimg(x, y) − G_bkimg(x, y) )    ⑤

where Y_ftimg is the Y component of the foreground image and D(x, y) is the background difference value;
(b.3) From formulas ④ and ⑤, the YCbCr skin color detection model is established as:

result = 1 if Cr/Cb ∈ [Min, Max] and D > T; 0 otherwise    ⑥

where Cb and Cr are the blue-chroma and red-chroma component values of the YCbCr space, D is the background difference value, the constants Min and Max are the minimum and maximum thresholds of the Cr/Cb ratio, and the constant T is the background difference threshold;
This skin color detection model judges skin regions by setting the minimum and maximum thresholds of the Cr/Cb ratio and the background difference threshold: when result is 1, the pixel belongs to a skin region; when result is 0, it is a non-skin region.
6. The multi-touch screen system implementation method based on computer vision technology according to claim 5, characterized by further comprising automatically updating the background according to formula ⑦:

I_acc(x, y) = (1 − α) × I_acc(x, y) + α × I_ftimg(x, y)    ⑦

where I_acc is the updated background image, I_ftimg is the foreground image, and α is the update coefficient.
7. A multi-touch screen system device based on computer vision technology, characterized in that it is mainly composed of a display screen, 2 horizontal cameras, 1 vertical camera, and an image processing device; the 2 horizontal cameras are placed at the left and right ends of the display screen respectively and both point at the screen bezel; the vertical camera is placed in front of the display screen and points at it, with the entire screen within its viewing range; the outputs of the 2 horizontal cameras and the 1 vertical camera are all connected to the input of the image processing device;
The image processing device comprises a system calibration module, a touch judgment module, a touch control-point extraction module, a coordinate conversion module, a click detection module, and a Kalman tracking module; the outputs of the 2 horizontal cameras are connected to the input of the touch judgment module; the output of the vertical camera and the output of the touch judgment module are connected to 2 inputs of the touch control-point extraction module respectively; the output of the touch control-point extraction module and the output of the system calibration module are connected to 2 inputs of the coordinate conversion module respectively; the output of the coordinate conversion module is connected to the input of the click detection module; the output of the click detection module splits into two paths: one is connected via the Kalman tracking module to another input of the touch control-point extraction module, and the other serves as the output of the multi-touch screen system device based on computer vision technology.
8. The multi-touch screen system device based on computer vision technology according to claim 7, characterized in that the touch judgment module comprises a horizontal-camera background subtraction unit, a horizontal-camera skin color segmentation unit, and a touch judgment unit; the input of the horizontal-camera background subtraction unit is connected to the outputs of the 2 horizontal cameras; the output of the horizontal-camera background subtraction unit is connected via the horizontal-camera skin color segmentation unit to the input of the touch judgment unit; and the output of the touch judgment unit is connected to the touch control-point extraction module.
9. The multi-touch screen system device based on computer vision technology according to claim 7, characterized in that the touch control-point extraction module comprises a vertical-camera background subtraction unit, a vertical-camera skin color segmentation unit, and a fingertip location unit; the input of the vertical-camera background subtraction unit is connected to the output of the vertical camera; the output of the vertical-camera background subtraction unit is connected via the vertical-camera skin color segmentation unit to the input of the fingertip location unit; and the output of the fingertip location unit is connected to the coordinate conversion module.
CN201210051470.1A 2012-03-01 2012-03-01 Multipoint touch screen system realizing method and device based on computer vision technology Expired - Fee Related CN102591533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210051470.1A CN102591533B (en) 2012-03-01 2012-03-01 Multipoint touch screen system realizing method and device based on computer vision technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210051470.1A CN102591533B (en) 2012-03-01 2012-03-01 Multipoint touch screen system realizing method and device based on computer vision technology

Publications (2)

Publication Number Publication Date
CN102591533A true CN102591533A (en) 2012-07-18
CN102591533B CN102591533B (en) 2014-12-24

Family

ID=46480309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210051470.1A Expired - Fee Related CN102591533B (en) 2012-03-01 2012-03-01 Multipoint touch screen system realizing method and device based on computer vision technology

Country Status (1)

Country Link
CN (1) CN102591533B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576990A (en) * 2012-07-20 2014-02-12 中国航天科工集团第三研究院第八三五八研究所 Optical touch method based on single Gaussian model
CN104423719A (en) * 2013-08-27 2015-03-18 鸿富锦精密工业(深圳)有限公司 Electronic device and display content update method thereof
CN106446911A (en) * 2016-09-13 2017-02-22 李志刚 Hand recognition method based on image edge line curvature and distance features
CN107402654A (en) * 2016-05-18 2017-11-28 原相科技股份有限公司 Touch detection method and touch sensing system
CN108089753A * 2017-12-28 2018-05-29 安徽慧视金瞳科技有限公司 A localization method for predicting fingertip position using Faster-RCNN
CN109255324A (en) * 2018-09-05 2019-01-22 北京航空航天大学青岛研究院 Gesture processing method, interaction control method and equipment
CN110147162A * 2019-04-17 2019-08-20 江苏大学 An enhanced assembly teaching system based on fingertip features and its control method
CN110968190A (en) * 2018-09-28 2020-04-07 苹果公司 IMU for touch detection
CN111860142A (en) * 2020-06-10 2020-10-30 南京翱翔信息物理融合创新研究院有限公司 Projection enhancement oriented gesture interaction method based on machine vision
CN113312947A (en) * 2020-02-27 2021-08-27 北京沃东天骏信息技术有限公司 Method and device for determining behavior object
CN113961094A (en) * 2021-10-20 2022-01-21 深圳市嘉中电子有限公司 Touch screen programming control management system
CN115953433A (en) * 2023-02-06 2023-04-11 宿迁学院 Hybrid image target tracking method
US11934652B2 (en) 2020-10-14 2024-03-19 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
CN117827076A (en) * 2024-03-04 2024-04-05 上海海栎创科技股份有限公司 Multi-finger gathering touch area segmentation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6788809B1 (en) * 2000-06-30 2004-09-07 Intel Corporation System and method for gesture recognition in three dimensions using stereo imaging and color vision
CN1912816A * 2005-08-08 2007-02-14 北京理工大学 Virtual touch screen system based on cameras
CN101393497A (en) * 2008-10-30 2009-03-25 上海交通大学 Multi-point touch method based on binocular stereo vision

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6788809B1 (en) * 2000-06-30 2004-09-07 Intel Corporation System and method for gesture recognition in three dimensions using stereo imaging and color vision
CN1912816A * 2005-08-08 2007-02-14 北京理工大学 Virtual touch screen system based on cameras
CN101393497A (en) * 2008-10-30 2009-03-25 上海交通大学 Multi-point touch method based on binocular stereo vision

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576990A (en) * 2012-07-20 2014-02-12 中国航天科工集团第三研究院第八三五八研究所 Optical touch method based on single Gaussian model
CN104423719A (en) * 2013-08-27 2015-03-18 鸿富锦精密工业(深圳)有限公司 Electronic device and display content update method thereof
TWI506535B (en) * 2013-08-27 2015-11-01 Hon Hai Prec Ind Co Ltd Electronic device and content updating method
CN107402654B (en) * 2016-05-18 2021-07-16 原相科技股份有限公司 Touch detection method and touch detection system
CN107402654A (en) * 2016-05-18 2017-11-28 原相科技股份有限公司 Touch detection method and touch sensing system
CN106446911B * 2016-09-13 2018-09-18 李志刚 A human hand recognition method based on image edge curvature and distance features
CN106446911A (en) * 2016-09-13 2017-02-22 李志刚 Hand recognition method based on image edge line curvature and distance features
CN108089753A (en) * 2017-12-28 2018-05-29 安徽慧视金瞳科技有限公司 A kind of localization method predicted using Faster-RCNN fingertip location
CN108089753B (en) * 2017-12-28 2021-03-09 安徽慧视金瞳科技有限公司 Positioning method for predicting fingertip position by using fast-RCNN
CN109255324A (en) * 2018-09-05 2019-01-22 北京航空航天大学青岛研究院 Gesture processing method, interaction control method and equipment
CN110968190B (en) * 2018-09-28 2021-11-05 苹果公司 IMU for touch detection
US11803233B2 (en) 2018-09-28 2023-10-31 Apple Inc. IMU for touch detection
CN110968190A (en) * 2018-09-28 2020-04-07 苹果公司 IMU for touch detection
US11360550B2 (en) 2018-09-28 2022-06-14 Apple Inc. IMU for touch detection
CN110147162B (en) * 2019-04-17 2022-11-18 江苏大学 Fingertip characteristic-based enhanced assembly teaching system and control method thereof
CN110147162A (en) * 2019-04-17 2019-08-20 江苏大学 A kind of reinforced assembly teaching system and its control method based on fingertip characteristic
CN113312947A (en) * 2020-02-27 2021-08-27 北京沃东天骏信息技术有限公司 Method and device for determining behavior object
WO2021248686A1 (en) * 2020-06-10 2021-12-16 南京翱翔信息物理融合创新研究院有限公司 Projection enhancement-oriented gesture interaction method based on machine vision
CN111860142A (en) * 2020-06-10 2020-10-30 南京翱翔信息物理融合创新研究院有限公司 Projection enhancement oriented gesture interaction method based on machine vision
US11934652B2 (en) 2020-10-14 2024-03-19 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
CN113961094A (en) * 2021-10-20 2022-01-21 深圳市嘉中电子有限公司 Touch screen programming control management system
CN115953433A (en) * 2023-02-06 2023-04-11 宿迁学院 Hybrid image target tracking method
CN115953433B (en) * 2023-02-06 2023-09-19 宿迁学院 Hybrid image target tracking method
CN117827076A (en) * 2024-03-04 2024-04-05 上海海栎创科技股份有限公司 Multi-finger gathering touch area segmentation method

Also Published As

Publication number Publication date
CN102591533B (en) 2014-12-24

Similar Documents

Publication Publication Date Title
CN102591533B (en) Multipoint touch screen system realizing method and device based on computer vision technology
CN110472496B (en) Traffic video intelligent analysis method based on target detection and tracking
CN103809880B (en) Man-machine interaction system and method
CN102508574B (en) Projection-screen-based multi-touch detection method and multi-touch system
Kim et al. Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs
CN103389799B A method for tracking the motion trajectory of hand fingertips
CN107357427A (en) A kind of gesture identification control method for virtual reality device
CN104851094A (en) Improved method of RGB-D-based SLAM algorithm
CN106384355B (en) A kind of automatic calibration method in projection interactive system
CN101593022A Fast human-computer interaction method based on fingertip tracking
CN108182695B (en) Target tracking model training method and device, electronic equipment and storage medium
CN102609945B (en) Automatic registration method of visible light and thermal infrared image sequences
CN102868811B (en) Mobile phone screen control method based on real-time video processing
CN103383731A (en) Projection interactive method and system based on fingertip positioning and computing device
CN110390685B (en) Feature point tracking method based on event camera
TWI682326B (en) Tracking system and method thereof
CN103677274A (en) Interactive projection method and system based on active vision
CN109839827B (en) Gesture recognition intelligent household control system based on full-space position information
CN103577792A (en) Device and method for estimating body posture
CN103176606B (en) Based on plane interaction system and the method for binocular vision identification
CN113608663A (en) Fingertip tracking method based on deep learning and K-curvature method
CN106101640A (en) Adaptive video sensor fusion method and device
CN112488059B (en) Spatial gesture control method based on deep learning model cascade
Su et al. Virtual keyboard: A human-computer interaction device based on laser and image processing
CN111399634B (en) Method and device for recognizing gesture-guided object

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141224

Termination date: 20210301