CN202736000U - Multipoint touch screen system device based on computer visual technique - Google Patents

Multipoint touch screen system device based on computer visual technique

Info

Publication number
CN202736000U
CN202736000U, CN201220073566U
Authority
CN
China
Prior art keywords
display screen
touch
finger
point
video camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201220073566
Other languages
Chinese (zh)
Inventor
张红梅
陈俊彦
叶进
张向利
张全君
吴阿沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN 201220073566 priority Critical patent/CN202736000U/en
Application granted granted Critical
Publication of CN202736000U publication Critical patent/CN202736000U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The utility model discloses a multi-point touch screen system device based on computer vision technology. The device consists of three groups of cameras, an image processing apparatus, and an ordinary display screen, with the outputs of the three groups of cameras connected to the image processing apparatus. A horizontal camera pointing along the bezel of the display screen is placed at each of the left and right ends of the display screen; when fingers enter the cameras' imaging range, whether a touch has occurred is judged from the distance between a fingertip and the display screen. A vertical camera is arranged in front of the display screen, pointing at the display screen with the whole display screen within its imaging range, so that when a touch occurs the fingertip positions can be obtained from the images it captures and the fingers can be tracked. The image processing apparatus processes the images collected by the cameras, obtains the fingertip touch point positions and finger action information, and converts them into corresponding instructions to complete the touch operation.

Description

Multi-point touch screen system and device based on computer vision technology
Technical field
The utility model relates to the field of touch screen technology, and specifically to a multi-point touch screen system and device based on computer vision technology.
Background technology
As one of the newest computer input devices, the touch screen is currently the simplest, most convenient, and most natural mode of human-computer interaction. It has given multimedia a brand-new look and is a highly attractive interactive medium. Touch screens currently on the market fall into four major types: resistive, capacitive, surface-acoustic-wave, and infrared. Resistive and capacitive touch screens are limited to small and medium sizes for cost reasons. Infrared touch screens are cheap and respond quickly, but their resolution is low and they are vulnerable to interference from strong external light. Surface-acoustic-wave touch screens solve the problems of infrared touch screens but are affected by dust, water droplets, grease, and the like.
With the development of image display technology and rising requirements for convenience and comfort in information exchange, market demand for large-size, inexpensive, and naturally interactive touch screens is growing stronger. Touch screens based on computer vision technology are cheap, offer high resolution, can use any plane as the touch surface, resist interference from the external environment, and suit a wide range of applications; they are therefore the direction of future touch screen development. However, the vision-based touch screens that have appeared on the market place high demands on the background interface: they support only simple black-and-white interfaces, or require a matching stylus or touch gloves, so they cannot achieve natural and friendly human-computer interaction, and their usage scenarios are very limited. Moreover, existing vision-based touch screens can recognize only one contact point at a time, so the actions they can express are monotonous. With the wide application of touch technology, multi-touch has gradually attracted attention. Multi-touch can receive input from multiple points on the screen simultaneously, greatly enriching the available actions and making touch operation more convenient. Most current touch screen technology, however, cannot recognize multiple contact points, which has limited the development of multi-touch technology.
The Chinese utility model patent with publication number CN1912816A discloses "a virtual touch screen system based on cameras". In that system, two or more cameras point at the display screen from different viewing angles, and the mapping between the imaging planes of the two cameras and the display screen is determined. A rough hand region is then obtained by image differencing and a skin-color model is built, yielding a grayscale image and a rough fingertip location. An importance-sampling particle filter then tracks a single finger to obtain the finger contour curve, and the fingertip position is obtained from the control point at the fingertip. Finally, a fingertip click action is determined from the corresponding positions on the display screen of the fingertip point in two image frames. Because that system computes the fingertip position and tracks the fingertip point with a particle filter algorithm to implement touch on an ordinary display screen, it can track and extract only a single finger and fingertip, and cannot implement multi-touch.
The utility model content
The technical problem to be solved by the utility model is to provide a multi-point touch screen system and device based on computer vision technology, which can give any display screen multi-touch capability while offering low cost, stable performance, and applicability to large-size screens.
To solve the above problem, the utility model is realized by the following scheme:
The multi-point touch screen system and device based on computer vision technology mainly consists of a display screen, 2 horizontal cameras, 1 vertical camera, and an image processing apparatus. The 2 horizontal cameras are placed at the left and right ends of the display screen, respectively, both pointing along the display screen bezel. The vertical camera is placed in front of the display screen, points at the display screen, and keeps the whole display screen within its viewfinder range. The outputs of the 2 horizontal cameras and the 1 vertical camera are all connected to the input of the image processing apparatus. The image processing apparatus has the following functions: it applies a system calibration algorithm to the vertical camera to obtain the mapping between camera imaging-plane coordinates and display-screen coordinates; it processes the pictures from the 2 horizontal cameras with internal algorithms and judges from the distance between a fingertip and the display screen bezel whether a touch has occurred; it processes the pictures from the vertical camera with internal algorithms to obtain the fingertip position coordinates as touch control points; it converts the fingertip position coordinates to the corresponding display screen coordinates according to the mapping between imaging-plane coordinates and display-screen plane coordinates; it performs click detection from the fingertip coordinates and the fingertip residence time and, if the click condition is met, converts the touch into the corresponding instruction to complete the touch operation; and it tracks the fingertip points with a Kalman filter to predict the fingertip positions in the next frame image.
Compared with the prior art, the utility model has the following features:
1. Computer vision technology overcomes the deficiencies of conventional touch screens. Because the camera positions can be adjusted freely, screen size is essentially unrestricted: as long as the display screen lies within the cameras' field of view, the fingertips can be located accurately, so the device can be applied to large-size display screens;
2. The image processing apparatus can be a PC or an embedded device; the system only needs three groups of cameras added to existing equipment to give an ordinary display screen touch capability, so the design cost is low;
3. The system can also be used with other screens such as projection screens or projection walls; its usage scenarios are easy to extend, and it is not limited by the background interface or the environment;
4. Through algorithmic improvements the system can recognize multiple contact points, making it suitable for multi-touch environments.
Description of drawings
Fig. 1 is a wiring diagram of a multi-point touch screen system and device based on computer vision technology;
Fig. 2 is a schematic diagram of the internal module structure of a multi-point touch screen system and device based on computer vision technology;
Fig. 3 is a flow chart of an implementation method of a multi-point touch screen system based on computer vision technology;
Fig. 4 is the skeleton model used for fingertip location.
Embodiment
If the click condition is met, the touch is converted into the corresponding instruction and the touch operation is completed.
The described Kalman tracking module uses a Kalman filter to track the fingertip points and predict the fingertip positions in the next frame image.
An implementation method of the multi-point touch screen system based on computer vision technology, realized with the above device, comprises the following steps, as shown in Fig. 3:
(1) Perform system calibration on the vertical camera placed in front of the display screen to obtain the mapping between camera imaging-plane coordinates and display-screen plane coordinates.
(2) Start the 2 horizontal cameras. After a finger enters the cameras' viewfinder range, use the background subtraction method and a skin-color segmentation algorithm to obtain the finger contour, and judge from the distance between the fingertip and the display screen bezel whether a touch has occurred.
(3) When a touch occurs, start the vertical camera, use an improved background subtraction method and skin-color segmentation to obtain the finger contour, and obtain the fingertip pixels accurately and in real time with the fingertip location algorithm. Convert the fingertip pixels to the corresponding display screen coordinates according to the mapping between imaging-plane coordinates and display-screen plane coordinates.
(4) Perform click detection from the fingertip coordinates and the fingertip residence time; if the click condition is met, convert the touch into the corresponding instruction and complete the touch operation. If it is not met, the touch operation is not finished; track the fingertip points with a Kalman filter and predict the fingertip positions in the next frame image.
(5) In the next frame image, mark an ROI region centered on each predicted fingertip point and perform the skin-color segmentation and fingertip location of step (3); loop continuously until the touch operation ends.
The image processing algorithms in the system comprise system calibration, finger contour segmentation, fingertip location, finger tracking, and so on; the algorithms involved are described below.
In computer vision, the geometric relation between a point in space and its corresponding point in the image collected by a camera is determined by the camera's imaging model; computing the parameters of the imaging model is called system calibration. Through system calibration, the mapping between the camera's imaging plane and the display screen plane can be obtained.
Suppose the display screen lies in the plane $Z = 0$ of the world coordinate system, with the upper-left corner of the display screen as the origin. A pixel $(u, v)$ in the image plane corresponds to a display screen coordinate point $(x_w, y_w)$; the two points have the following mapping relation:
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 0 \\ 1 \end{bmatrix} = A \begin{bmatrix} r_1 & r_2 & t \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} \qquad (1)$$
Here matrix $A$ is the camera's intrinsic matrix, which can be computed from chessboard calibration board images captured at 3 different positions. $[r_1, r_2, t]$ is the simplification of the camera's extrinsic matrix $\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}$. The utility model displays a chessboard image on the screen and uses a corner detection algorithm to extract the chessboard's inner corner points as corresponding points. Equations relating multiple sets of image pixels and display screen coordinate points can then be set up from formula (1) to obtain the matrix $[r_1, r_2, t]$. The inverse of the matrix $A[r_1, r_2, t]$ is the transition matrix from camera imaging-plane pixels to display-screen plane coordinate points.
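As a concrete illustration, the mapping of formula (1) can be estimated with OpenCV from the inner corners of the displayed chessboard. The following is a minimal sketch, not the patent's implementation: the grid size, cell size, and file name are illustrative assumptions, and with the screen in the $Z = 0$ plane the mapping reduces to a 3x3 homography.

```python
import cv2
import numpy as np

PATTERN = (9, 6)   # inner-corner grid of the displayed chessboard (illustrative)
CELL = 80          # screen pixels per chessboard cell (illustrative)

# Screen coordinates (x_w, y_w) of the same inner corners, known because the
# chessboard image is rendered by the system itself at a fixed cell size.
screen_pts = np.array([[(j + 1) * CELL, (i + 1) * CELL]
                       for i in range(PATTERN[1])
                       for j in range(PATTERN[0])], dtype=np.float32)

frame = cv2.imread("vertical_cam_view.png")   # hypothetical captured view
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, PATTERN)
assert found, "chessboard not visible to the vertical camera"
corners = cv2.cornerSubPix(
    gray, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.01))

# With the screen in the Z=0 plane, formula (1) collapses to the 3x3 matrix
# A[r1 r2 t]; estimating the homography from camera corners to screen points
# directly yields its inverse, the camera-to-screen transition matrix.
H, _ = cv2.findHomography(corners.reshape(-1, 2), screen_pts)

def cam_to_screen(u, v):
    """Map an imaging-plane pixel to display-screen plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```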
Below, the utility model is further elaborated with reference to the drawings:
With reference to Fig. 1, the multi-point touch screen system and device based on computer vision technology of the utility model is composed of 3 groups of cameras, 1 image processing apparatus, and 1 ordinary display screen. The outputs of the 3 groups of cameras in the figure are all connected to the image processing apparatus. A group of horizontal cameras pointing along the display screen bezel is placed at each of the left and right ends of the display screen; when a finger enters the cameras' viewfinder range, whether a touch has occurred can be judged from the distance between the fingertip and the display screen. A group of vertical cameras pointing at the display screen, with the whole display screen within their viewfinder range, is placed in front of the display screen; when a touch occurs, the fingertip positions can be obtained from the images captured by these cameras while the fingers are tracked. The image processing apparatus processes the images collected by the cameras, obtains the fingertip touch point positions and finger action information, and converts them into corresponding instructions to complete the touch operation.
Fig. 2 is a schematic diagram of the internal module structure of the multi-point touch screen system and device based on computer vision technology. In the figure, the image processing apparatus comprises a system calibration module, a touch judgment module, a touch control point extraction module, a coordinate conversion module, a click detection module, and a Kalman tracking module. The outputs of the 2 horizontal cameras are connected to the input of the touch judgment module. The output of the vertical camera and the output of the touch judgment module are connected to the 2 inputs of the touch control point extraction module, respectively. The output of the touch control point extraction module and the output of the system calibration module are connected to the 2 inputs of the coordinate conversion module, respectively. The output of the coordinate conversion module is connected to the input of the click detection module. The output of the click detection module is divided into two paths: one path is connected to another input of the touch control point extraction module via the Kalman tracking module, and the other path serves as the output of the multi-point touch screen system and device based on computer vision technology.
The described system calibration module applies a system calibration algorithm to the vertical camera to obtain the mapping between camera imaging-plane coordinates and display-screen coordinates.
The described touch judgment module comprises a horizontal-camera background subtraction unit, a horizontal-camera skin-color segmentation unit, and a touch judgment unit. The input of the horizontal-camera background subtraction unit is connected to the outputs of the 2 horizontal cameras; the output of the horizontal-camera background subtraction unit is connected to the input of the touch judgment unit via the horizontal-camera skin-color segmentation unit; and the output of the touch judgment unit is connected to the touch control point extraction module. The touch judgment module processes the pictures from the 2 horizontal cameras with the internal algorithms of these units and judges from the distance between the fingertip and the display screen bezel whether a touch has occurred.
The described touch control point extraction module comprises a vertical-camera background subtraction unit, a vertical-camera skin-color segmentation unit, and a fingertip location unit. The input of the vertical-camera background subtraction unit is connected to the output of the vertical camera; the output of the vertical-camera background subtraction unit is connected to the input of the fingertip location unit via the vertical-camera skin-color segmentation unit; and the output of the fingertip location unit is connected to the coordinate conversion module. The touch control point extraction module processes the pictures from the vertical camera with the internal algorithms of these units and obtains the fingertip position coordinates as touch control points.
The described coordinate conversion module converts the fingertip position coordinates to the corresponding display screen coordinates according to the mapping between imaging-plane coordinates and display-screen plane coordinates.
The described click detection module performs click detection according to the fingertip coordinates and the fingertip residence time; if the click condition is met, the touch is converted into the corresponding instruction to complete the touch operation.
Effective segmentation of the touching fingers and fingertip location are the keys of the utility model. The utility model proposes first using a background subtraction method to roughly extract the foreground finger region, and then using a skin-color segmentation method to further segment the finger region. This method removes background interference from colors similar to skin color and strengthens the validity of finger segmentation.
Before a finger enters the imaging region, several pictures are first stored and converted to grayscale, and the grayscale image of their average is then computed as the background, as shown in formula (2).
$$G_{bkimg}(x, y) = \mathrm{avg}\left( \sum_{i=1}^{n} G_i(x, y) \right) \qquad (2)$$
where $G_i$ is the $i$-th stored picture, $n$ is the number of stored pictures, and $G_{bkimg}$ is the resulting background grayscale image.
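As a minimal sketch of formula (2), the background image can be built by averaging $n$ grayscale frames captured before any finger enters the view; the camera index and the value of $n$ are illustrative:

```python
import cv2
import numpy as np

def build_background(cap, n=30):
    """Formula (2): average n grayscale frames, captured before any finger
    enters the imaging region, into the background image G_bkimg."""
    frames = []
    for _ in range(n):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
    return sum(frames) / len(frames)

cap = cv2.VideoCapture(0)   # the vertical (or a horizontal) camera; index illustrative
G_bkimg = build_background(cap)
```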
Based on skin-color segmentation performance in real detection, and to reduce the amount of computation, a suitable skin-color model is built on the YCbCr color space to perform skin-color segmentation of the finger region. The conversion formula from RGB space to YCbCr space is shown in formula (3):
$$\begin{cases} Y = 0.299 \times R + 0.587 \times G + 0.114 \times B \\ Cr = 0.713 \times (R - Y) + \delta \\ Cb = 0.564 \times (B - Y) + \delta \end{cases} \qquad (3)$$
where $Y$, $Cb$, and $Cr$ are the luma, blue-chroma, and red-chroma component values of the YCbCr space; $R$, $G$, and $B$ are the red, green, and blue component values of RGB space; and $\delta$ is a chroma offset constant.
After taking the difference between the Y component obtained from formula (3) and the background grayscale image obtained from formula (2), the background difference value $D(x, y)$ of the image is obtained, as shown in formula (4):
$$D(x, y) = \left| Y_{ftimg}(x, y) - G_{bkimg}(x, y) \right| \qquad (4)$$
where $Y_{ftimg}$ is the Y component of the foreground image and $D(x, y)$ is the background difference value. The YCbCr color-space skin detection model built from formulas (3) and (4) is:
$$result = \begin{cases} 1, & Cr / Cb \in [Min, Max] \;\text{and}\; D > T \\ 0, & \text{otherwise} \end{cases} \qquad (5)$$
where $Cb$ and $Cr$ are the blue-chroma and red-chroma component values of the YCbCr space; $D$ is the background difference value; the constants $Min$ and $Max$ are the minimum and maximum thresholds of the Cr-to-Cb ratio; and the constant $T$ is the background difference threshold.
This skin detection model judges skin-color regions through the set minimum and maximum thresholds of the Cr-to-Cb ratio and the background difference threshold. When $result$ is 1, the pixel belongs to a skin-color region; when $result$ is 0, it does not.
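Formulas (3) to (5) can be sketched as a single mask function; the thresholds $Min$, $Max$, and $T$ are tuning constants the patent does not specify, so the values below are placeholders:

```python
import cv2
import numpy as np

MIN_R, MAX_R = 1.0, 2.0   # [Min, Max] bounds on Cr/Cb (placeholder values)
T = 25.0                  # background difference threshold (placeholder value)

def skin_mask(frame_bgr, G_bkimg):
    """Formula (5): result = 1 where Cr/Cb lies in [Min, Max] AND D > T."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    Y, Cr, Cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    D = np.abs(Y - G_bkimg)                  # formula (4): background difference
    ratio = Cr / np.maximum(Cb, 1e-6)        # guard against division by zero
    result = (ratio >= MIN_R) & (ratio <= MAX_R) & (D > T)
    return result.astype(np.uint8) * 255     # binary mask of skin-color pixels
```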
If the background image is not updated automatically as environmental factors change, using a fixed background for a long time will seriously affect the segmentation of foreground targets. The background therefore also needs to be updated automatically, using the following formula:
$$I_{acc}(x, y) = (1 - \alpha) \times I_{acc}(x, y) + \alpha \times I_{ftimg}(x, y)$$
where $I_{acc}$ is the updated background image, $I_{ftimg}$ is the foreground image, and $\alpha$ is the update coefficient.
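As a sketch, this running average maps directly onto OpenCV's accumulateWeighted, which implements the same in-place recurrence; the value of $\alpha$ below is illustrative:

```python
import cv2

ALPHA = 0.05   # update coefficient alpha (illustrative value)

def update_background(I_acc, frame_bgr):
    """Running-average update: I_acc <- (1 - alpha) * I_acc + alpha * I_ftimg.
    I_acc must be a float32 image of the same size as the frames."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype("float32")
    cv2.accumulateWeighted(gray, I_acc, ALPHA)
    return I_acc
```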
The fingertips are the control points of the touch screen. The utility model proposes a real-time vector-based fingertip location algorithm, which quickly and accurately judges fingertip positions from the property that the modulus of the vector joining the two boundary points adjacent to an inflection point of the contour curve is shortest.
After contour extraction, the hand region can be expressed as an ordered point set along the closed edge curve $S$:
$$S = \{ P_i : (x_i, y_i),\; i = 1, 2, \ldots, n \} \qquad (6)$$
where $|x_{i+1} - x_i| \le 1$, $|y_{i+1} - y_i| \le 1$, and at least one of $|x_{i+1} - x_i|$ and $|y_{i+1} - y_i|$ equals 1; $x_i$ and $y_i$ are the abscissa and ordinate of point $P_i$.
For a point $P_i$ on the curve set $S$, take a constant $r$ as the neighborhood radius and form the point set $\Omega_i = \{P_{i-r}, P_{i-r+1}, \ldots, P_i, \ldots, P_{i+r-1}, P_{i+r}\}$ centered on $P_i$; this set is called the r-neighborhood of $P_i$, and $\Omega_i \subseteq S$. If a point $P_k$ lies in the r-neighborhood of $P_i$, then $P_k$ and $P_i$ are said to be r-adjacent. If there is a set $V \subseteq S$ such that for any point taken from $V$ another point in $V$ can be found that is r-adjacent to it, then $V$ is called a connected set.
According to the two-dimensional Euclidean metric formula, the modulus of the vector $\overrightarrow{P_{i-r} P_{i+r}}$ is:
$$\left| \overrightarrow{P_{i-r} P_{i+r}} \right| = \sqrt{ (x_{i+r} - x_{i-r})^2 + (y_{i+r} - y_{i-r})^2 } \qquad (7)$$
From the properties of curves, the more sharply the curve direction changes, the more bent the curve and the smaller the value of $\left| \overrightarrow{P_{i-r} P_{i+r}} \right|$, as shown in Fig. 4; an inflection point corresponds to the minimum value of $\left| \overrightarrow{P_{i-r} P_{i+r}} \right|$. Meanwhile, the cross product of the vectors $\overrightarrow{P_i P_{i-r}}$ and $\overrightarrow{P_i P_{i+r}}$ can be used to judge the concavity or convexity of the curve at point $P_i$: it is easy to verify that when the cross product of $\overrightarrow{P_i P_{i-r}}$ and $\overrightarrow{P_i P_{i+r}}$ is greater than 0, the curve is convex at $P_i$; otherwise it is concave.
Therefore, the flexibility of the curve at point $P_i$ can be defined as:
$$C_i = \mathrm{sgn}\!\left( \overrightarrow{P_i P_{i-r}} \times \overrightarrow{P_i P_{i+r}} \right) \cdot \left| \overrightarrow{P_{i-r} P_{i+r}} \right| \qquad (8)$$
where $\overrightarrow{P_i P_{i-r}} \times \overrightarrow{P_i P_{i+r}}$ is the cross product of the vectors $\overrightarrow{P_i P_{i-r}}$ and $\overrightarrow{P_i P_{i+r}}$, and $\left| \overrightarrow{P_{i-r} P_{i+r}} \right|$ is the modulus of the vector $\overrightarrow{P_{i-r} P_{i+r}}$.
From the shape features of the hand contour, a fingertip is the convex inflection point with the greatest change in arc along the finger curve, so fingertip points can be judged from the flexibility of each point computed with the flexibility formula. Because the widths of the fingers differ little, a single-fingertip detection method can first be used to find the fingertip of the narrowest finger, and the other fingertips can then be judged from the flexibility at that fingertip. The fingertip detection steps are as follows:
(3.1) Set the value of the neighborhood radius $r$. The neighborhood radius $r$ affects the computation of flexibility. If $r$ is too small, the change in the modulus of the vector $\overrightarrow{P_{i-r} P_{i+r}}$ is not obvious and inflection points are hard to judge. If $r$ is too large, it exceeds the length of a finger and wrong inflection points appear easily. According to the shape features of the hand contour, the value of the neighborhood radius $r$ must be greater than the width of a finger in the image and less than the length of a finger in the image, as shown in formula (9):
$$\frac{finger.width \times image.width}{screen.width} < r < \frac{finger.length \times image.length}{screen.length} \qquad (9)$$
where $finger.width$ and $finger.length$ are the width and length of a finger, $image.width$ and $image.length$ are the width and length of the image in pixels, and $screen.width$ and $screen.length$ are the width and length of the touch screen. The value of $r$ is usually taken midway between the finger's width and length in the image.
(3.2) Compute the flexibility of each point on the closed hand-region contour curve $S$ according to formula (8), obtaining the flexibility set $C = \{C_i,\; i = 1, 2, \ldots, n\}$.
(3.3) In the set $C$, take the element that is positive and has the smallest $|C_i|$ value; the point $P_i$ corresponding to this element is a fingertip point.
(3.4) Let the width-difference threshold of the fingers be a constant $d$. In the set $C$, take all elements outside the neighborhood of point $P_i$ that satisfy $|C_k| < |C_i| + d$ and are positive, obtaining a new set $Cr$.
(3.5) From the result of step (3.4), if the point set corresponding to $Cr$ contains connected sets, keep in each connected set the point with the smallest $|C_i|$ value and remove the remaining points. The points finally remaining are the fingertip points of all fingers.
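Steps (3.1) to (3.5) can be sketched as follows on an ordered contour array (for example the squeezed output of cv2.findContours). This assumes the flexibility measure of formula (8); the constants $r$ and $d$ are illustrative and in practice come from formula (9) and the finger width:

```python
import numpy as np

def flexibility(contour, r):
    """C_i of formula (8): chord length |P_{i-r} P_{i+r}| signed by the cross
    product of P_i P_{i-r} and P_i P_{i+r} (positive sign = convex point)."""
    n = len(contour)
    C = np.empty(n)
    for i in range(n):
        p = contour[i]
        a = contour[(i - r) % n]             # P_{i-r} on the closed curve
        b = contour[(i + r) % n]             # P_{i+r} on the closed curve
        cross = ((a[0] - p[0]) * (b[1] - p[1])
                 - (a[1] - p[1]) * (b[0] - p[0]))
        chord = float(np.hypot(b[0] - a[0], b[1] - a[1]))
        C[i] = chord if cross > 0 else -chord
    return C

def fingertips(contour, r=15, d=4.0):
    """Steps (3.2)-(3.5): the smallest positive |C_i| gives the first tip;
    other convex points within d of it are kept, then each connected run of
    candidates is reduced to its sharpest point."""
    C = flexibility(contour, r)
    pos = np.where(C > 0)[0]                       # convex points only
    best = pos[np.argmin(np.abs(C[pos]))]          # step (3.3)
    cand = [i for i in pos if abs(C[i]) < abs(C[best]) + d]   # step (3.4)
    tips, run = [], [cand[0]]
    for i in cand[1:]:                             # step (3.5): split by r-adjacency
        if i - run[-1] <= r:
            run.append(i)
        else:
            tips.append(min(run, key=lambda j: abs(C[j])))
            run = [i]
    tips.append(min(run, key=lambda j: abs(C[j])))
    return [tuple(contour[i]) for i in tips]
```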
The fingertip positions in successive frames form the trajectory of the target's motion. The utility model introduces a Kalman filter to predict, from the past trajectory of each fingertip, the position where the fingertip may appear in the next frame image, realizing finger tracking.
The Kalman filter algorithm comprises two models, a signal model and an observation model; the signal model is shown in formula (10) and the observation model in formula (11).
$$X_k = A X_{k-1} + B U_k + w_k \qquad (10)$$
$$Z_k = H X_k + v_k \qquad (11)$$
where $X_k$ and $X_{k-1}$ are the system state vectors at times $k$ and $k-1$, $Z_k$ is the system measurement vector at time $k$, $A$ is the state transition matrix, $B$ is the control matrix, $H$ is the observation matrix, $U_k$ is the control vector, and $w_k$ and $v_k$ are normally distributed motion and measurement noise vectors, respectively, mutually uncorrelated, that is:
$$p(w) \sim N(0, Q), \qquad p(v) \sim N(0, R) \qquad (12)$$
where $Q$ is the covariance matrix of the motion noise and $R$ is the covariance matrix of the measurement noise.
Let the system state vector $X_k$ be expressed as $[x_k, v_{xk}, y_k, v_{yk}]^T$, where $x_k$ and $y_k$ are the coordinate components of the fingertip pixel on the image x and y axes, and $v_{xk}$ and $v_{yk}$ are the velocities of the fingertip pixel along the image x and y axes. The measurement vector $Z_k$ is expressed as $[x_k, y_k]^T$, where $x_k$ and $y_k$ are the coordinate components of the current-frame fingertip pixel on the image x and y axes.
On the x axis, the equations of motion of the fingertip coordinate are shown in formulas (13) and (14):
$$x_k = x_{k-1} + v_{x,k-1} t + \tfrac{1}{2} a_k t^2 \qquad (13)$$
$$v_{x,k} = v_{x,k-1} + a_k t \qquad (14)$$
where $t$ is the time variable and $a_k$ is the acceleration at time $k$. Because the correlation time between two adjacent frames during tracking is short, the motion of the fingertip coordinate along both the x and y axes can be approximated as rectilinear motion disturbed by a random acceleration, with the acceleration $a_k$ a random quantity.
Similar equations hold on the y axis.
According to formulas (13) and (14), the control vector $U_k$ can be defined as the acceleration $a_k$, and the state transition matrix $A$ and control matrix $B$ are
$$A = \begin{bmatrix} 1 & \tau & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \tau \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} \tau^2 / 2 \\ \tau \\ \tau^2 / 2 \\ \tau \end{bmatrix}$$
where the constant $\tau$ is the time interval between two frames.
Kalman filtering is divided into two phases, prediction and correction. In the prediction phase, the filter estimates the next model state from the current state, as shown in formulas (15) and (16).
$$\hat{X}_k = A X_{k-1} + B U_k \qquad (15)$$
$$\hat{P}_k = A P_{k-1} A^T + Q \qquad (16)$$
where $\hat{P}_k$ is the a-priori error covariance matrix, $P_{k-1}$ is the a-posteriori error covariance matrix of the previous moment, $Q$ is the covariance matrix of the motion noise, and $\hat{X}_k$ is the predicted state vector.
In the correction phase, the filter adjusts the model state on the basis of the measurement of the model state, as shown in formulas (17) to (19).
$$K_k = \hat{P}_k H^T \left( H \hat{P}_k H^T + R \right)^{-1} \qquad (17)$$
$$X_k = \hat{X}_k + K_k \left( Z_k - H \hat{X}_k \right) \qquad (18)$$
$$P_k = \left( I - K_k H \right) \hat{P}_k \qquad (19)$$
where $K_k$ is the Kalman gain matrix and $P_k$ is the a-posteriori error covariance matrix of the current moment.
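Formulas (10) to (19) translate directly into a few lines of NumPy. A minimal sketch follows; the frame interval and the covariances $Q$ and $R$ are tuning values the patent does not specify, so the numbers here are placeholders:

```python
import numpy as np

TAU = 1.0 / 30.0   # frame interval tau (assuming 30 frames per second)
A = np.array([[1, TAU, 0, 0],
              [0, 1,   0, 0],
              [0, 0,   1, TAU],
              [0, 0,   0, 1]], dtype=float)          # state transition matrix
B = np.array([[TAU ** 2 / 2], [TAU],
              [TAU ** 2 / 2], [TAU]], dtype=float)   # control matrix
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)            # observe pixel (x, y) only
Q = np.eye(4) * 1e-3    # motion-noise covariance (placeholder tuning value)
R = np.eye(2) * 1.0     # measurement-noise covariance (placeholder tuning value)

def predict(X, P, a=0.0):
    """Formulas (15)-(16): a-priori state and error covariance."""
    X_hat = A @ X + B * a
    P_hat = A @ P @ A.T + Q
    return X_hat, P_hat

def correct(X_hat, P_hat, z):
    """Formulas (17)-(19): Kalman gain, corrected state and covariance."""
    K = P_hat @ H.T @ np.linalg.inv(H @ P_hat @ H.T + R)
    X = X_hat + K @ (z - H @ X_hat)
    P = (np.eye(4) - K @ H) @ P_hat
    return X, P

# One tracking step: predict where the fingertip will be, then correct the
# state with the fingertip pixel actually measured in the next frame.
X = np.array([[100.0], [0.0], [200.0], [0.0]])   # [x, vx, y, vy]^T
P = np.eye(4)
X_hat, P_hat = predict(X, P)
X, P = correct(X_hat, P_hat, np.array([[102.0], [201.0]]))
```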
In fingertip tracking, formula (15) is used to predict the fingertip position; in the next frame image, an ROI region is marked centered on the predicted fingertip point and skin-color segmentation and fingertip location are performed there, and the accurate fingertip coordinates obtained are then used as the system measurement vector $Z_k$ and substituted into formula (18) for filter correction. Fingertip position prediction is performed again after correction, looping continuously until the touch operation ends. Adding a Kalman filter to track the fingertips strengthens the real-time performance and robustness of the system. The above ROI region is the "region of interest": centered on the fingertip point predicted by the Kalman filter, a region of interest is marked on the image, and subsequent image processing is applied only to the data in that region of the frame; data outside the region need not be handled, which reduces processing time and strengthens real-time performance and robustness.
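The ROI step can be sketched as a simple window extraction around the predicted point; the window half-size is an illustrative value, and coordinates found inside the patch are mapped back by adding the returned offset:

```python
import numpy as np

def roi_around(pred_xy, frame, half=40):
    """Mark a region of interest centered on the Kalman-predicted fingertip;
    later segmentation and fingertip location run only inside this window."""
    h, w = frame.shape[:2]
    x, y = int(round(pred_xy[0])), int(round(pred_xy[1]))
    x0, y0 = max(x - half, 0), max(y - half, 0)
    x1, y1 = min(x + half, w), min(y + half, h)
    return frame[y0:y1, x0:x1], (x0, y0)   # patch plus offset to map back
```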

Claims (1)

1. A multi-point touch screen system and device based on computer vision technology, characterized in that: it mainly consists of a display screen, 2 horizontal cameras, 1 vertical camera, and an image processing apparatus; the 2 horizontal cameras are placed at the left and right ends of the display screen, respectively, both pointing along the display screen bezel; the vertical camera is placed in front of the display screen, points at the display screen, and keeps the whole display screen within its viewfinder range; the outputs of the 2 horizontal cameras and the 1 vertical camera are all connected to the input of the image processing apparatus.
CN 201220073566 2012-03-01 2012-03-01 Multipoint touch screen system device based on computer visual technique Expired - Fee Related CN202736000U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201220073566 CN202736000U (en) 2012-03-01 2012-03-01 Multipoint touch screen system device based on computer visual technique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201220073566 CN202736000U (en) 2012-03-01 2012-03-01 Multipoint touch screen system device based on computer visual technique

Publications (1)

Publication Number Publication Date
CN202736000U true CN202736000U (en) 2013-02-13

Family

ID=47661680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201220073566 Expired - Fee Related CN202736000U (en) 2012-03-01 2012-03-01 Multipoint touch screen system device based on computer visual technique

Country Status (1)

Country Link
CN (1) CN202736000U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628251A (en) * 2021-10-11 2021-11-09 北京中科金马科技股份有限公司 Smart hotel terminal monitoring method
CN113628251B (en) * 2021-10-11 2022-02-01 北京中科金马科技股份有限公司 Smart hotel terminal monitoring method

Similar Documents

Publication Publication Date Title
CN102591533B (en) Multipoint touch screen system realizing method and device based on computer vision technology
CN101593022B (en) Method for quick-speed human-computer interaction based on finger tip tracking
CN103226387B (en) Video fingertip localization method based on Kinect
Kim et al. Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs
Ma et al. Kinect Sensor‐Based Long‐Distance Hand Gesture Recognition and Fingertip Detection with Depth Information
US8787663B2 (en) Tracking body parts by combined color image and depth processing
CN103389799B (en) A kind of opponent&#39;s fingertip motions track carries out the method for following the tracks of
CN103295016B (en) Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN102508574B (en) Projection-screen-based multi-touch detection method and multi-touch system
CN103218605B (en) A kind of fast human-eye positioning method based on integral projection and rim detection
CN107357427A (en) A kind of gesture identification control method for virtual reality device
CN105760826A (en) Face tracking method and device and intelligent terminal.
CN103677274B (en) A kind of interaction method and system based on active vision
CN110688965A (en) IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN106384355B (en) A kind of automatic calibration method in projection interactive system
CN102426480A (en) Man-machine interactive system and real-time gesture tracking processing method for same
CN102609945B (en) Automatic registration method of visible light and thermal infrared image sequences
CN103997624A (en) Overlapped domain dual-camera target tracking system and method
CN110390685B (en) Feature point tracking method based on event camera
CN102289948A (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN103105924B (en) Man-machine interaction method and device
CN102868811B (en) Mobile phone screen control method based on real-time video processing
CN106203261A (en) Unmanned vehicle field water based on SVM and SURF detection and tracking
CN113608663B (en) Fingertip tracking method based on deep learning and K-curvature method
CN104167006A (en) Gesture tracking method of any hand shape

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130213

Termination date: 20140301