CN102184056A - Method and device for identifying multiple touch points - Google Patents

Method and device for identifying multiple touch points

Info

Publication number
CN102184056A
CN102184056A (application CN201110104729A; granted as CN102184056B)
Authority
CN
China
Prior art keywords
touch
camera
object image
ratio
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110104729
Other languages
Chinese (zh)
Other versions
CN102184056B (en)
Inventor
郑金发 (Zheng Jinfa)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vtron Group Co Ltd
Original Assignee
Vtron Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Technologies Ltd filed Critical Vtron Technologies Ltd
Priority to CN 201110104729 priority Critical patent/CN102184056B/en
Publication of CN102184056A publication Critical patent/CN102184056A/en
Application granted granted Critical
Publication of CN102184056B publication Critical patent/CN102184056B/en
Legal status: Expired - Fee Related



Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for identifying multiple touch points. The method comprises the following steps: capturing one frame of image with each of two cameras, and obtaining the shape, position and transverse size information of the touch object images in the two images; judging from the shape information whether the touch objects are of the same type, and if so, calculating the angles and intersection coordinates of the lines from the touch objects to the two cameras from the position information, otherwise identifying pen touch and finger touch and calculating the touch point coordinates of the pen and of the finger respectively; selecting a camera, calculating the distance from each intersection to the camera, and calculating the ratios between the distances; calculating a reference value; and comparing the ratios between the distances with the reference value to determine the real touch points. With the method and the device, touch objects of different types can be accurately identified, and when multiple touch points of the same type are located, the increased cost and design difficulty caused by installing an auxiliary camera are effectively avoided.

Description

Method and device for identifying multiple touch points
Technical field
The present invention relates to the field of computer vision, and in particular to a camera-based method and device for identifying multiple touch points.
Background art
In recent years, large flat-panel displays have been widely used in the industry for large-scale human-machine interaction. Touch technologies for flat-panel displays include infrared LED scanning positioning, ultrasonic positioning, and positioning by means of two cameras installed at the corners of the screen that locate touches by the intersection of optical paths. Among these, the approach of installing two cameras at the corners of the screen has gradually become widely used because of its high positioning accuracy and its ability to capture and calculate touch point coordinates quickly.
As shown in Fig. 1, taking a flat-panel display as an example, camera A and camera B are installed at two corners of the display. During recognition and positioning, only a few rows of the image captured by each camera are generally used; these rows must be close to the display surface and contain the image information projected onto the camera by the touch object. The angle of the line from the touch point to each camera can be calculated from the image position of the touch point, and the two lines from the two cameras to the touch point uniquely determine the position of the touch point.
The above describes the case of a single touch point on the touch screen. When two touch objects are present, however, as shown in Fig. 1, each of the two objects produces a line to each of the two cameras. These four mutually non-parallel lines produce four intersections (R1, R1', R2, R2'), that is, two additional "false touch points" (R1', R2') appear. How, then, should the touch screen system determine which two intersections are the positions of the real touch objects? In the prior art, an auxiliary camera is generally added in the middle of the touch screen, and the false touch points are rejected by checking whether each of the four intersections corresponds to a line seen by the auxiliary camera. This approach, however, not only increases the hardware cost but also increases the design difficulty.
In addition, in the field of touch screen applications, a typical touch screen does not distinguish between a finger touch and a pen touch. When a teacher or presenter explains a PPT or other documents and pictures, he or she usually points at the touch screen with a pen or a finger, and the intended operations include drawing lines, scrolling the page up or down, zooming, and so on. Since the touch screen usually provides only the coordinates of the touch point, how is the computer to judge the touch intention, identify the operation the operator wants, and execute it correctly? This poses a difficulty for computer software. The existing solution in the industry is to place commonly used function buttons (such as brush, drag, blackboard eraser and page-turning buttons) on the software interface, and the corresponding function is activated by clicking these buttons. With this approach, however, the operator often has to click these buttons frequently to switch between different modes, which is very inconvenient and degrades the user experience.
Summary of the invention
To solve the above problems, the present invention provides a method and a device for identifying multiple touch points, which can correctly identify touch objects of different types and can find the real touch points without adding an auxiliary camera when multiple touch objects of the same type are present.
A method for identifying multiple touch points comprises the following steps:
Step S1: each of two cameras captures one frame of image, and the shape information of the touch object images in the two images is obtained respectively;
Step S2: it is judged from the shape information of the touch object images whether the touch objects are of the same type; if they are the same, the method proceeds to step S4, and if they are different, it proceeds to step S3;
Step S3: pen touch and finger touch are identified according to a predefined pen tip shape and a predefined finger shape, and the touch point coordinates of the pen and of the finger on the touch screen are calculated respectively;
Step S4: the position information of the touch object images in the two images is calculated, the angle of the line from each touch object to each camera is calculated from the position information, and the coordinates of the intersections of all the lines from the touch objects to the two cameras are calculated from those angles;
Step S5: one camera is selected, the distance from each intersection of the lines to that camera is calculated, and the ratios between the distances are calculated;
Step S6: the transverse size information of the touch object images in the two images is calculated, and the ratio between the transverse sizes of the touch object images captured by the selected camera is calculated from that information and used as a reference value;
Step S7: the ratios between the distances are compared with the reference value, and the real touch points are determined.
A device for identifying multiple touch points comprises:
a shape acquisition module, configured to obtain the shape information of the touch object images in two images after each of two cameras captures one frame of image;
a judging module connected to the shape acquisition module, configured to judge from the shape information of the touch object images whether the touch objects are of the same type;
an identification and calculation module connected to the judging module, configured to, when the touch objects are of different types, identify pen touch and finger touch according to a predefined pen tip shape and a predefined finger shape, and to calculate the touch point coordinates of the pen and of the finger on the touch screen respectively;
a coordinate calculation module connected to the judging module, configured to, when the touch objects are of the same type, calculate the position information of the touch object images in the two images, calculate from the position information the angle of the line from each touch object to each camera, and calculate from those angles the coordinates of the intersections of all the lines from the touch objects to the two cameras;
a distance ratio calculation module connected to the coordinate calculation module, configured to, after one camera is selected, calculate the distance from each intersection of the lines to that camera and to calculate the ratios between the distances;
a reference value calculation module connected to the shape acquisition module, configured to calculate the transverse size information of the touch object images in the two images, and to calculate from that information the ratio between the transverse sizes of the touch object images captured by the selected camera as a reference value;
a comparison module connected to the distance ratio calculation module and to the reference value calculation module respectively, configured to compare the ratios between the distances with the reference value and to determine the real touch points.
It can be seen from the above scheme that, when the touch objects are of different types, the method and device for identifying multiple touch points of the present invention can identify the different types of touch objects correctly and quickly, which brings operational convenience to the user and provides a good user experience. When the touch objects are of the same type, the fact that the size of the image a touch object forms in a camera is inversely proportional to its distance from the camera is exploited, so the real touch points can be found quickly and accurately without adding an auxiliary camera, and the increased cost and design difficulty caused by adding an auxiliary camera are effectively avoided; the invention therefore has good application prospects.
Description of drawings
Fig. 1 is a schematic diagram of the lines from two touch objects on a flat-panel display to the cameras;
Fig. 2 is a flowchart of the method for identifying multiple touch points of the present invention;
Fig. 3 is a schematic diagram of the images formed in the two cameras by two touch objects of different types;
Fig. 4 is a schematic diagram of the images formed in the two cameras by two touch objects of the same type;
Fig. 5 is a structural diagram of the device for identifying multiple touch points of the present invention.
Embodiment
The present invention provides a method and a device for identifying multiple touch points, which can solve the problem that the prior art cannot correctly identify touch objects of different types, as well as the problem of increased hardware cost and design difficulty caused by the auxiliary camera needed for correct positioning when multiple touch objects of the same type are present. Embodiments of the invention are described in detail below with reference to the accompanying drawings, taking the case of two touch points on a flat-panel display as an example.
A method for identifying multiple touch points, as shown in Fig. 2, comprises the following steps.
Step S1: each of the two cameras captures one frame of image, and the shape information of the touch object images in the two images is obtained respectively.
In this embodiment, the two cameras are installed at the lower left corner and the lower right corner of the flat-panel display respectively, and both use 90° optical lenses.
Preferably, after the two cameras each capture one frame of image and before the position information and transverse size information of the touch object images in the two images are calculated, the method may further comprise step S101: judging whether a touch object image exists in the two images. When there is no touch object on the display, the images captured by the two cameras are completely black; if an image is not completely black, a touch object is present.
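Purely as an illustration, step S101 might be realised as in the following minimal sketch, assuming the captured rows near the screen surface are available as a NumPy array of grey values; the function name and the brightness threshold of 10 are assumptions, not values taken from the patent:

```python
import numpy as np

def touch_object_present(rows: np.ndarray, threshold: int = 10) -> bool:
    """Return True if the strip of image rows near the screen surface is not
    completely black, i.e. at least one touch object image (bright spot) is
    present.  `threshold` is an assumed, tunable brightness limit."""
    return bool(np.any(rows > threshold))
```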
Step S2: it is judged from the shape information of the touch object images whether the touch objects are of the same type; if they are the same, the method proceeds to step S4; if they are different, it proceeds to step S3. As shown in Fig. 3, camera A captures two bright spots, L1 (cylindrical) and L2 (triangular), and camera B captures two bright spots, L3 (triangular) and L4 (cylindrical), which indicates that the touch objects are of different types. If, on the other hand, the images captured by the two cameras are as shown in Fig. 4, i.e. the two spots captured by camera A are both cylindrical (differing only in size) and the two spots captured by camera B are also both cylindrical, the touch objects can be regarded as being of the same type.
Step S3: if the touch objects are of different types, the touch screen distinguishes a finger operation from a pen operation by means of the touch object images captured by each camera. Pen touch and finger touch can then be identified according to the predefined pen tip shape and finger shape, and the touch point coordinates of the pen and of the finger on the touch screen are calculated respectively. The touch screen is usually equipped with a pen suited to camera detection, and the pen shape is set in advance; in a preferred embodiment, the predefined pen tip shape and finger shape are triangular and cylindrical respectively. That is, the cylindrical spots L1 and L4 in Fig. 3 indicate finger touches, while the triangular spots L2 and L3 indicate a pen touch (a whiteboard pen, a marker pen, or the like).
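One possible way to realise this triangular-versus-cylindrical distinction is sketched below; the row-wise width profile, the 0.6 narrowing ratio and the function name are illustrative assumptions rather than anything specified in the patent:

```python
import numpy as np

def classify_spot(spot_rows: np.ndarray, threshold: int = 10) -> str:
    """Classify one bright spot as 'pen' (triangular) or 'finger' (cylindrical).

    `spot_rows` holds the image rows of a single spot, ordered from the row
    closest to the screen surface (the tip) to the row farthest from it.  A pen
    tip narrows sharply towards the surface, so its width near the tip is much
    smaller than near the top; a finger keeps a roughly constant width.  The
    0.6 narrowing ratio is an assumed tuning constant.
    """
    widths = [int((row > threshold).sum()) for row in spot_rows]
    tip_width, top_width = widths[0], widths[-1]
    if top_width == 0:
        return "unknown"
    return "pen" if tip_width < 0.6 * top_width else "finger"
```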
When one of the touching objects is identified as a pen touch, the touch screen provides a pen-touch flag together with the touch point coordinates; after the system detects the pen-touch flag, the touch is defined as a brush operation; and
when one of the touching objects is identified as a finger touch, the touch is defined as one of the other corresponding operations by judging the track of the calculated touch point coordinates, the other corresponding operations including dragging, page turning, rotation, blackboard eraser, and the like.
In fact, when two touch objects of different types (a finger and a pen) are present on the touch screen at the same time, no false touch points arise: once the objects have been correctly identified and classified, only the correct intersections remain, so the real touch points can be found without having to reject false touch points.
Step S4: if the touch objects are of the same type, that is, two fingers or two pens, the false touch points must be rejected. Taking two fingers on the touch screen as an example (the case of two pens is similar and is not described further), the position information of the touch object images in the two images is first calculated, the angle of the line from each touch object to each camera is calculated from the position information, and the coordinates of the intersections of all the lines from the touch objects to the two cameras are calculated from those angles.
As shown in Fig. 4, calculating the position information of the touch object images in the two images specifically means calculating the centre axis coordinates Lx1, Lx2, Lx3 and Lx4 of the spots L1, L2, L3 and L4 respectively; these coordinates represent the position information of the touch objects. With reference to Fig. 1, the angles of the lines from the touch objects to the two cameras are calculated from this position information as follows: the angle between the line for spot L1 and the bottom edge of the touch screen is a1 = 90° − Lx1 ÷ 2048 × 90°; the angle between the line for spot L2 and the bottom edge is a2 = 90° − Lx2 ÷ 2048 × 90°; the angle between the line for spot L3 and the bottom edge is b1 = Lx3 ÷ 2048 × 90°; and the angle between the line for spot L4 and the bottom edge is b2 = Lx4 ÷ 2048 × 90°.
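For illustration only, the angle formulas above can be written as the following minimal Python sketch; the constant names, function names and the example pixel values are assumptions, while the linear mapping from pixel column to angle is exactly the formula given in the text:

```python
CAMERA_PIXELS = 2048   # horizontal camera resolution assumed in this embodiment
LENS_FOV = 90.0        # field of view of the 90-degree lens, in degrees

def angle_from_camera_a(lx: float) -> float:
    """Angle between the sight line and the screen bottom edge for a spot whose
    centre axis lies at pixel column `lx` in camera A: a = 90° − lx ÷ 2048 × 90°."""
    return LENS_FOV - lx / CAMERA_PIXELS * LENS_FOV

def angle_from_camera_b(lx: float) -> float:
    """Same for camera B: b = lx ÷ 2048 × 90°."""
    return lx / CAMERA_PIXELS * LENS_FOV

# Example with made-up spot centres Lx1, Lx2 (camera A) and Lx3, Lx4 (camera B):
a1, a2 = angle_from_camera_a(300.0), angle_from_camera_a(700.0)
b1, b2 = angle_from_camera_b(1500.0), angle_from_camera_b(900.0)
```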
The angles of the lines from the touch objects on the display to the two cameras have now been calculated. With these angles, the coordinates of all possible intersections on the display (i.e. the intersections of the four lines from the two touch objects to the two cameras) can be calculated. Assuming that the display resolution is 1024 × 768, the coordinates of the possible touch points (intersections) shown in Fig. 1, R1(X1, Y1), R1'(X1', Y1'), R2(X2, Y2) and R2'(X2', Y2'), can be obtained by trigonometric functions.
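As a worked illustration of this trigonometric step, the sketch below intersects a ray from camera A (placed at the origin, the lower-left corner) with a ray from camera B (at the lower-right corner) on a screen of width 1024; the corner placement, coordinate convention and function names are assumptions consistent with Fig. 1, not text taken from the patent. Applying it to the two camera-A angles and the two camera-B angles yields the four candidates R1, R1', R2 and R2':

```python
import math
from itertools import product

SCREEN_W = 1024.0   # horizontal display resolution assumed in the embodiment

def intersect(angle_a_deg: float, angle_b_deg: float) -> tuple[float, float]:
    """Intersection of the ray from camera A (origin, lower-left corner) with the
    ray from camera B (at (SCREEN_W, 0), lower-right corner), both angles being
    measured from the bottom edge of the screen."""
    ta = math.tan(math.radians(angle_a_deg))
    tb = math.tan(math.radians(angle_b_deg))
    x = SCREEN_W * tb / (ta + tb)   # from x * tan(a) = (SCREEN_W - x) * tan(b)
    y = x * ta
    return x, y

def candidate_points(a_angles, b_angles):
    """All four candidate touch points R1, R1', R2, R2': each camera-A ray is
    intersected with each camera-B ray."""
    return [intersect(a, b) for a, b in product(a_angles, b_angles)]
```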
Step S5: one camera is selected, the distance from each intersection of the lines to that camera is calculated from the intersection coordinates obtained in step S4, and the ratios between the distances are calculated.
To avoid the case in which, when there are multiple touch points, one touch point occludes another in the image captured by a camera (their lines coincide), in a preferred embodiment the camera that produces the larger number of lines is selected for the subsequent calculation, and only that camera needs to be processed. As shown in Fig. 3, in this embodiment both cameras capture the images of the two touch objects and each produces two lines, so either camera may be chosen arbitrarily; for example, the image of camera A is used for the calculation.
Since the image of camera A has been selected, the distances from the four intersections to camera A, denoted R1A, R1'A, R2A and R2'A respectively, are calculated from the four intersection coordinates obtained in step S4, and the pairwise ratios between these distances are then calculated. Of course, since R1 and R1' lie on the same line to camera A, the ratio between them need not be calculated; likewise, the ratio between R2 and R2' need not be calculated.
To further reduce the amount of calculation, in a preferred embodiment it is not necessary to calculate the ratio between every pair of distances; it is sufficient to calculate the ratio of the distance from the farther touch object to camera A to the distance from the nearer touch object to camera A. Suppose it is judged from Fig. 3 that R1A and R1'A are the distances of the possible touch points nearer to camera A, and R2A and R2'A those of the possible touch points farther from camera A; the ratio combinations R2'A ÷ R1'A, R2'A ÷ R1A, R2A ÷ R1A and R2A ÷ R1'A are then calculated respectively.
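A minimal sketch of this preferred distance-ratio calculation is given below, assuming camera A sits at the origin and that the "nearer" and "farther" candidates are separated by sorting on distance; the names and the grouping criterion are illustrative assumptions:

```python
import math

CAMERA_A = (0.0, 0.0)   # assumed position of camera A: the lower-left corner

def dist_to_a(point):
    """Euclidean distance from a candidate intersection to camera A."""
    return math.hypot(point[0] - CAMERA_A[0], point[1] - CAMERA_A[1])

def far_to_near_ratios(points):
    """Far-to-near distance ratios for the four candidate intersections.

    The two candidates nearest camera A play the role of R1/R1', the two
    farthest that of R2/R2'; dividing each far distance by each near distance
    gives the four ratio combinations of the embodiment.  Each entry is
    (ratio, far_point, near_point) so the matching step can recover the
    corresponding intersections.
    """
    ordered = sorted(points, key=dist_to_a)
    near, far = ordered[:2], ordered[2:]
    return [(dist_to_a(f) / dist_to_a(n), f, n) for f in far for n in near]
```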
Step S6: the transverse size information of the touch object images in the two images is calculated, and the ratio between the transverse sizes of the touch object images captured by the selected camera is calculated from that information and used as a reference value.
Suppose the cameras capture images at a horizontal resolution of 2048, so the coordinates of the spots lie in the range 0–2047 and the transverse size of a spot is the number of pixels it spans; the transverse size values Lw1, Lw2, Lw3 and Lw4 of the spots can thus be calculated. Continuing with camera A as the example, the transverse sizes of the images of L1 and L2 that it captures are Lw1 and Lw2 respectively, and the ratio of the two is calculated. As is well known, the size of the image a touch object forms in a camera is inversely proportional to the distance from the touch object to the camera (the larger the image, the smaller the distance), so this ratio can also be regarded as the ratio of the distances from the respective touch objects to the camera. Because of this inverse relationship, and since what was calculated in step S5 is the ratio of the far distance to the near distance, the ratio of the transverse size of the touch object image nearer to camera A to the transverse size of the touch object image farther from camera A is calculated here; as can be seen from Fig. 3, r = Lw1 ÷ Lw2 is calculated.
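The reference value of step S6 then reduces to a single division, as in the sketch below; the function name is illustrative, and the caller is assumed to pass the transverse width of the spot belonging to the nearer object first (Lw1 before Lw2 in the example of the text):

```python
def reference_ratio(width_near: float, width_far: float) -> float:
    """Reference value r = Lw1 / Lw2.

    Because the image width of a touch object is inversely proportional to its
    distance from the camera, the width of the nearer object divided by the
    width of the farther object approximates the ratio of the far distance to
    the near distance, which is the same quantity the candidate ratios of
    step S5 measure.
    """
    return width_near / width_far
```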
In fact, once a camera has been selected, step S5 and step S6 may be performed in either order or simultaneously.
Step S7: the ratios between the distances are compared with the reference value; the ratio of two distances that is closest to the reference value is determined, the two intersections corresponding to those two distances are taken as the real touch points, and the other false touch points are rejected.
For example, after the ratios between the distances calculated in step S5 are compared with the ratio r calculated in step S6, the ratio R2A ÷ R1A is found to be the closest to r, which shows that R2A and R1A are the distances from the real touch points to camera A; therefore R2 and R1 are chosen as the real, valid touch points, the other two false touch points are discarded, and the process returns to step S1 to begin the next round of identification.
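Combining the two previous sketches, the comparison of step S7 can be written as a small selection routine; the helper names follow the earlier sketches and are, again, purely illustrative:

```python
def real_touch_points(points, width_near, width_far):
    """Choose the two real touch points among the four candidates.

    The candidate far/near distance ratio closest to the width-based reference
    value identifies the pair of intersections whose distances to camera A are
    consistent with the observed image sizes; that pair is returned and the
    other two candidates are rejected as false touch points.
    """
    r = reference_ratio(width_near, width_far)
    best = min(far_to_near_ratios(points), key=lambda entry: abs(entry[0] - r))
    _, far_point, near_point = best
    return far_point, near_point
```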
In a preferred embodiment, after step S4 and before step S5, the method may further comprise step S401: judging whether more than one touch object is present on the display. Whether one or more touch objects are present can be known from the calculation results of step S4: by geometry, if only one possible touch point is obtained, only one touch object is present on the display; if four possible touch points are obtained, there are two touch objects; and so on. Step S5 and the subsequent operations are carried out only when more than one touch object is judged to be present on the touch screen.
In fact, the method for identifying multiple touch points of the present invention can effectively handle not only the cases of one and two touch points, but also cases of more than two touch points (for example, three touch points); the algorithm is somewhat more complex, but the principle is the same and is not described further here.
Corresponding to the above method for identifying multiple touch points, the present invention also provides a device for identifying multiple touch points, which, as shown in Fig. 5, comprises:
a shape acquisition module, configured to obtain the shape information of the touch object images in two images after each of two cameras captures one frame of image;
a judging module connected to the shape acquisition module, configured to judge from the shape information of the touch object images whether the touch objects are of the same type;
an identification and calculation module connected to the judging module, configured to, when the touch objects are of different types, identify pen touch and finger touch according to a predefined pen tip shape and a predefined finger shape, and to calculate the touch point coordinates of the pen and of the finger on the touch screen respectively;
a coordinate calculation module connected to the judging module, configured to, when the touch objects are of the same type, calculate the position information of the touch object images in the two images, calculate from the position information the angle of the line from each touch object to each camera, and calculate from those angles the coordinates of the intersections of all the lines from the touch objects to the two cameras;
a distance ratio calculation module connected to the coordinate calculation module, configured to, after one camera is selected, calculate the distance from each intersection of the lines to that camera and to calculate the ratios between the distances;
a reference value calculation module connected to the shape acquisition module, configured to calculate the transverse size information of the touch object images in the two images, and to calculate from that information the ratio between the transverse sizes of the touch object images captured by the selected camera as a reference value;
a comparison module connected to the distance ratio calculation module and to the reference value calculation module respectively, configured to compare the ratios between the distances with the reference value and to determine the real touch points.
Preferably, the identification and calculation module may comprise a pen identification and calculation module and a finger identification and calculation module:
the pen identification and calculation module is configured to identify a pen touch according to the predefined pen tip shape and to calculate the touch point coordinates of the pen on the touch screen; the touch screen provides a pen-touch flag together with the touch point coordinates, and after the system detects the pen-touch flag, the operation is defined as a brush operation;
the finger identification and calculation module is configured to identify a finger touch according to the predefined finger shape and to calculate the touch point coordinates of the finger on the touch screen; the system defines the touch as one of the other corresponding operations by judging the track of the touch point coordinates, the other corresponding operations including dragging, page turning, rotation, blackboard eraser, and the like.
In addition, the device for identifying multiple touch points of the present invention may further comprise a first judging submodule connected between the coordinate calculation module and the distance ratio calculation module, the first judging submodule being configured to judge, from the coordinates of the intersections of all the lines, whether more than one touch object is present on the display.
Further, the shape acquisition module may comprise a second judging submodule, the second judging submodule being configured to judge, after the two cameras each capture one frame of image and before the shape information of the touch object images in the two images is obtained, whether a touch object image exists in the two images.
To reduce the computational load of the algorithm in the device of the present invention, the distance ratio calculation module may comprise a distance comparison and calculation submodule, configured to calculate the ratio of the distance from the touch object farther from the selected camera to that camera to the distance from the nearer touch object to that camera; and, to correspond to the inverse proportionality described in the present invention, the reference value calculation module may comprise a reference value calculation submodule, configured to calculate the ratio between the transverse size of the touch object image nearer to the selected camera and the transverse size of the touch object image farther from the selected camera.
The other technical features of the device for identifying multiple touch points of the present invention are the same as described for the above method and are not described further here.
The method and device for identifying multiple touch points of the present invention can identify touch objects of different types correctly and quickly, so that the product achieves the same positioning effect in different application scenarios, brings operational convenience to the user, and is well suited to certain specific occasions. In addition, when multiple touch objects of the same type are present, the fact that the size of the image a touch object forms in a camera is inversely proportional to its distance from the camera is exploited: the ratio between the transverse image sizes of the touch objects is obtained and used as a reference value to help determine the real touch points, so the real touch points can be found correctly without adding an auxiliary camera. Since the present invention is implemented purely in software on the basis of the prior art, it not only effectively avoids the increased cost and design difficulty caused by adding an auxiliary camera, but also rejects false touch points quickly and accurately, and therefore has good application prospects.
The above-described embodiments of the present invention do not limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the claims of the present invention.

Claims (10)

1. A method for identifying multiple touch points, comprising the following steps:
Step S1: each of two cameras captures one frame of image, and the shape information of the touch object images in the two images is obtained respectively;
Step S2: it is judged from the shape information of the touch object images whether the touch objects are of the same type; if they are the same, the method proceeds to step S4, and if they are different, it proceeds to step S3;
Step S3: pen touch and finger touch are identified according to a predefined pen tip shape and a predefined finger shape, and the touch point coordinates of the pen and of the finger on the touch screen are calculated respectively;
Step S4: the position information of the touch object images in the two images is calculated, the angle of the line from each touch object to each camera is calculated from the position information, and the coordinates of the intersections of all the lines from the touch objects to the two cameras are calculated from those angles;
Step S5: one camera is selected, the distance from each intersection of the lines to that camera is calculated, and the ratios between the distances are calculated;
Step S6: the transverse size information of the touch object images in the two images is calculated, and the ratio between the transverse sizes of the touch object images captured by the selected camera is calculated from that information and used as a reference value;
Step S7: the ratios between the distances are compared with the reference value, and the real touch points are determined.
2. The method for identifying multiple touch points according to claim 1, characterized in that the predefined pen tip shape and the predefined finger shape are triangular and cylindrical respectively.
3. The method for identifying multiple touch points according to claim 1 or 2, characterized in that, in step S3, when a pen touch is identified, the touch screen provides a pen-touch flag together with the touch point coordinates, and after the system detects the pen-touch flag, the touch is defined as a brush operation; and
when a finger touch is identified, the touch is defined as one of the other corresponding operations by judging the track of the touch point coordinates, the other corresponding operations comprising: dragging, page turning, rotation and blackboard eraser.
4. The method for identifying multiple touch points according to claim 1, characterized in that, in step S1, after the two cameras each capture one frame of image and before the shape information of the touch object images in the two images is obtained, the method further comprises step S101: judging whether a touch object image exists in the two images;
and/or
after step S4 and before step S5, the method further comprises step S401: judging, from the coordinates of the intersections of all the lines, whether more than one touch object is present on the display.
5. The method for identifying multiple touch points according to claim 3, characterized in that selecting a camera specifically comprises: selecting the camera that produces the largest number of lines.
6. The method for identifying multiple touch points according to claim 1, characterized in that calculating the ratios between the distances in step S5 specifically comprises: calculating the ratio of the distance from the touch object farther from the selected camera to that camera to the distance from the nearer touch object to that camera;
and
calculating the ratio between the transverse sizes of the touch object images captured by the selected camera specifically comprises: calculating the ratio between the transverse size of the touch object image nearer to the selected camera and the transverse size of the touch object image farther from the selected camera.
7. A device for identifying multiple touch points, characterized by comprising:
a shape acquisition module, configured to obtain the shape information of the touch object images in two images after each of two cameras captures one frame of image;
a judging module connected to the shape acquisition module, configured to judge from the shape information of the touch object images whether the touch objects are of the same type;
an identification and calculation module connected to the judging module, configured to, when the touch objects are of different types, identify pen touch and finger touch according to a predefined pen tip shape and a predefined finger shape, and to calculate the touch point coordinates of the pen and of the finger on the touch screen respectively;
a coordinate calculation module connected to the judging module, configured to, when the touch objects are of the same type, calculate the position information of the touch object images in the two images, calculate from the position information the angle of the line from each touch object to each camera, and calculate from those angles the coordinates of the intersections of all the lines from the touch objects to the two cameras;
a distance ratio calculation module connected to the coordinate calculation module, configured to, after one camera is selected, calculate the distance from each intersection of the lines to that camera and to calculate the ratios between the distances;
a reference value calculation module connected to the shape acquisition module, configured to calculate the transverse size information of the touch object images in the two images, and to calculate from that information the ratio between the transverse sizes of the touch object images captured by the selected camera as a reference value;
a comparison module connected to the distance ratio calculation module and to the reference value calculation module respectively, configured to compare the ratios between the distances with the reference value and to determine the real touch points.
8. The device for identifying multiple touch points according to claim 7, characterized in that the identification and calculation module comprises a pen identification and calculation module and a finger identification and calculation module;
the pen identification and calculation module is configured to identify a pen touch according to the predefined pen tip shape and to calculate the touch point coordinates of the pen on the touch screen; the touch screen provides a pen-touch flag together with the touch point coordinates, and after the system detects the pen-touch flag, the operation is defined as a brush operation;
the finger identification and calculation module is configured to identify a finger touch according to the predefined finger shape and to calculate the touch point coordinates of the finger on the touch screen; the system defines the touch as one of the other corresponding operations by judging the track of the touch point coordinates, the other corresponding operations comprising: dragging, page turning, rotation and blackboard eraser.
9. The device for identifying multiple touch points according to claim 7, characterized in that:
the device further comprises a first judging submodule connected between the coordinate calculation module and the distance ratio calculation module, the first judging submodule being configured to judge, from the coordinates of the intersections of all the lines, whether more than one touch object is present on the display;
and/or
the shape acquisition module comprises a second judging submodule, the second judging submodule being configured to judge, after the two cameras each capture one frame of image and before the shape information of the touch object images in the two images is obtained, whether a touch object image exists in the two images.
10. The device for identifying multiple touch points according to claim 7, 8 or 9, characterized in that the distance ratio calculation module comprises a distance comparison and calculation submodule, configured to calculate the ratio of the distance from the touch object farther from the selected camera to that camera to the distance from the nearer touch object to that camera;
and
the reference value calculation module comprises a reference value calculation submodule, configured to calculate the ratio between the transverse size of the touch object image nearer to the selected camera and the transverse size of the touch object image farther from the selected camera.
CN 201110104729 2011-04-26 2011-04-26 Method and device for identifying multiple touch points Expired - Fee Related CN102184056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110104729 CN102184056B (en) 2011-04-26 2011-04-26 Method and device for identifying multiple touch points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110104729 CN102184056B (en) 2011-04-26 2011-04-26 Method and device for identifying multiple touch points

Publications (2)

Publication Number Publication Date
CN102184056A true CN102184056A (en) 2011-09-14
CN102184056B CN102184056B (en) 2013-02-13

Family

ID=44570238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110104729 Expired - Fee Related CN102184056B (en) 2011-04-26 2011-04-26 Method and device for identifying multiple touch points

Country Status (1)

Country Link
CN (1) CN102184056B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360417A (en) * 2011-09-26 2012-02-22 广东威创视讯科技股份有限公司 Recognition method of touch screen by use of writing pen
CN102722296A (en) * 2012-07-04 2012-10-10 广东威创视讯科技股份有限公司 Passive eraser identification method of interactive touch screen
CN103914165A (en) * 2013-01-05 2014-07-09 联想(北京)有限公司 Multi-touch screen-based identifying method and device and electronic equipment
WO2016058387A1 (en) * 2014-10-16 2016-04-21 华为技术有限公司 Method, device and system for processing touch interaction
CN106687907A (en) * 2014-07-02 2017-05-17 3M创新有限公司 Touch systems and methods including rejection of unintentional touch signals
CN108427932A (en) * 2015-10-19 2018-08-21 广东欧珀移动通信有限公司 The recognition methods of fingerprint image and device
CN108804016A (en) * 2018-06-29 2018-11-13 江苏特思达电子科技股份有限公司 Object identification method based on touch screen, device and electronic equipment
CN113934312A (en) * 2020-06-29 2022-01-14 深圳市创易联合科技有限公司 Touch object identification method based on infrared touch screen and terminal equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6504532B1 (en) * 1999-07-15 2003-01-07 Ricoh Company, Ltd. Coordinates detection apparatus
CN101639747A (en) * 2009-08-31 2010-02-03 广东威创视讯科技股份有限公司 Spatial three-dimensional positioning method
CN101794184A (en) * 2010-04-07 2010-08-04 广东威创视讯科技股份有限公司 Coordinate detection device and locating method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6504532B1 (en) * 1999-07-15 2003-01-07 Ricoh Company, Ltd. Coordinates detection apparatus
CN101639747A (en) * 2009-08-31 2010-02-03 广东威创视讯科技股份有限公司 Spatial three-dimensional positioning method
CN101794184A (en) * 2010-04-07 2010-08-04 广东威创视讯科技股份有限公司 Coordinate detection device and locating method thereof

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360417B (en) * 2011-09-26 2013-03-20 广东威创视讯科技股份有限公司 Recognition method of touch screen by use of writing pen
CN102360417A (en) * 2011-09-26 2012-02-22 广东威创视讯科技股份有限公司 Recognition method of touch screen by use of writing pen
CN102722296A (en) * 2012-07-04 2012-10-10 广东威创视讯科技股份有限公司 Passive eraser identification method of interactive touch screen
CN102722296B (en) * 2012-07-04 2015-04-29 广东威创视讯科技股份有限公司 Passive eraser identification method of interactive touch screen
CN103914165A (en) * 2013-01-05 2014-07-09 联想(北京)有限公司 Multi-touch screen-based identifying method and device and electronic equipment
CN106687907A (en) * 2014-07-02 2017-05-17 3M创新有限公司 Touch systems and methods including rejection of unintentional touch signals
US10372325B2 (en) 2014-10-16 2019-08-06 Huawei Technologies Co., Ltd. Electromyographic based touch interaction processing method, device, and system
WO2016058387A1 (en) * 2014-10-16 2016-04-21 华为技术有限公司 Method, device and system for processing touch interaction
CN105573536A (en) * 2014-10-16 2016-05-11 华为技术有限公司 Touch interaction processing method, device and system
CN105573536B (en) * 2014-10-16 2018-09-07 华为技术有限公司 Processing method, the device and system of touch-control interaction
CN108427932A (en) * 2015-10-19 2018-08-21 广东欧珀移动通信有限公司 The recognition methods of fingerprint image and device
CN108427932B (en) * 2015-10-19 2021-07-02 Oppo广东移动通信有限公司 Fingerprint image identification method and device
CN108804016A (en) * 2018-06-29 2018-11-13 江苏特思达电子科技股份有限公司 Object identification method based on touch screen, device and electronic equipment
CN108804016B (en) * 2018-06-29 2020-12-29 江苏特思达电子科技股份有限公司 Object identification method and device based on touch screen and electronic equipment
CN113934312A (en) * 2020-06-29 2022-01-14 深圳市创易联合科技有限公司 Touch object identification method based on infrared touch screen and terminal equipment
CN113934312B (en) * 2020-06-29 2023-10-20 深圳市创易联合科技有限公司 Touch object identification method based on infrared touch screen and terminal equipment

Also Published As

Publication number Publication date
CN102184056B (en) 2013-02-13

Similar Documents

Publication Publication Date Title
CN102184056B (en) Method and device for identifying multiple touch points
CN102163108B (en) Method and device for identifying multiple touch points
US9069386B2 (en) Gesture recognition device, method, program, and computer-readable medium upon which program is stored
CN103365410B (en) Gesture sensing device and electronic system with gesture input function
CN102779001B (en) Light pattern used for touch detection or gesture detection
CN102799318A (en) Human-machine interaction method and system based on binocular stereoscopic vision
CN101520700A (en) Camera-based three-dimensional positioning touch device and positioning method thereof
Katz et al. A multi-touch surface using multiple cameras
CN104991684A (en) Touch control device and working method therefor
CN100478860C (en) Electronic plane display positioning system and positioning method
EP2672363A2 (en) Display device and method using a plurality of display panels
Liang et al. Turn any display into a touch screen using infrared optical technique
CN104679352B (en) Optical touch device and touch point detection method
CN101751194B (en) Touch control panel with function of multi-point touch control and multi-point touch control detecting method
CN102184054B (en) Multi-touch-point recognizing method and device
TW201317858A (en) Optical touch panel system, optical sensing module, and operation method thereof
CN104375697A (en) Mobile device
CN101819493B (en) Interactive display screen and method thereof
KR20090037535A (en) Method for processing input of touch screen
CN102184055B (en) Multi-touch-point recognition method and device
CN103543884A (en) Optical touch system and touch object distinguishing method thereof
CN204270260U (en) A kind of mobile device
CN102929434B (en) Optical projection system and its image treatment method
KR102529821B1 (en) Method for detecting writing pressure and processing writing pressure in convergence touch screen panel and interactive flat panel display therefor
Ukita et al. Wearable Virtual Tablet: Fingertip Drawing Interface Using an Active-Infrared Camera.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 510670 Guangdong city of Guangzhou province Kezhu Guangzhou high tech Industrial Development Zone, Road No. 233

Patentee after: Vtron Group Co., Ltd.

Address before: 510663 Guangzhou province high tech Industrial Development Zone, Guangdong, Cai road, No. 6, No.

Patentee before: Vtron Technologies Ltd. (Guangdong Weichuangshixun Science and Technology Co., Ltd.)

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130213

Termination date: 20200426