WO2011116683A1 - Touch positioning method, and associated touch device and system - Google Patents

Touch positioning method, and associated touch device and system

Info

Publication number
WO2011116683A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch object
position information
imaging device
information group
actual
Application number
PCT/CN2011/072041
Other languages
English (en)
Chinese (zh)
Inventor
吴振宇
叶新林
刘建军
刘新斌
Original Assignee
北京汇冠新技术股份有限公司
Application filed by 北京汇冠新技术股份有限公司
Publication of WO2011116683A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F 3/0428 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface which may be virtual

Definitions

  • The present invention relates to the field of optoelectronic technologies, and in particular, to a touch positioning method and apparatus, and a touch system.

Background art
  • Touch screens that are commonly used today include infrared touch screens and touch screens with cameras.
  • An infrared touch screen uses a large number of one-to-one paired infrared transmitting tubes and infrared receiving tubes to determine the position information of the touch object; the principle is relatively simple. However, because the infrared touch screen uses so many infrared components, installation and debugging are complicated, and the production cost is relatively high.
  • In addition, the infrared transmitting tubes and infrared receiving tubes deteriorate easily, so the reliability of an infrared touch screen is not high.
  • By contrast, the touch screen with a camera is widely used because of its simple structure, low cost, easy production, and high reliability.
  • FIG. 1 is a schematic structural diagram of a prior-art touch screen with a camera, including a frame 12; infrared imaging devices 19 and 10 installed at two adjacent corners of the frame 12; two infrared light sources 112 and 113 respectively installed at the positions of the adjacent infrared imaging devices 19 and 10; retroreflective strips 14 mounted along the edges of the frame 12; and a processing unit 16 connected to the infrared imaging devices 19 and 10.
  • the inside of the frame 12 is the touch detection area 17.
  • the illumination ranges of the infrared light sources 112 and 113 cover the entire touch detection area, and the fields of view of the infrared imaging devices 19 and 10 cover the entire touch detection area.
  • From the position information of the imaging points, the angle between the touch object P and the line connecting the two imaging devices can be obtained for each device. Given that the distance between the infrared imaging devices 19 and 10 is L, and taking the position of the infrared imaging device 19 as the coordinate origin, the coordinates of the touch object P can then be determined by triangulation.
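  • For illustration only (not part of the original patent text), the triangulation step just described can be sketched as follows, assuming the two imaging devices sit at (0, 0) and (L, 0) and each reports the angle between its sight ray to the touch object and the connecting line:

```python
import math

def triangulate(alpha, beta, L):
    """Intersect the two sight rays.

    Camera A at the origin sees the touch object at angle alpha above the
    baseline; camera B at (L, 0) sees it at angle beta. Standard two-ray
    intersection (assumed geometry consistent with FIG. 1, not a formula
    quoted from the patent):
        y = x * tan(alpha)        # ray from camera A
        y = (L - x) * tan(beta)   # ray from camera B
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    x = L * tb / (ta + tb)
    return x, x * ta

# Example: cameras 1.0 m apart, object seen at 45 degrees from both corners.
print(triangulate(math.radians(45), math.radians(45), 1.0))  # -> (0.5, 0.5)
```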
  • FIG. 2 is a schematic diagram of the working principle of determining the positions of two touch objects with the camera-based touch screen shown in FIG. 1.
  • In FIG. 2, P2 and P3 are actual touch objects. After passing through the infrared imaging devices 10 and 19, the actual touch objects P2 and P3 produce four images: the images formed through the infrared imaging device 10 lie on the straight lines P2M1 and P3M2 respectively, and the images formed through the infrared imaging device 19 lie on the straight lines P2N1 and P3N2 respectively. When the positions of the actual touch objects are determined by the above method, the following two position information groups can be obtained:
  • based on the position information of the two images on the straight lines P2M1 and P3N2, and of the two images on the straight lines P3M2 and P2N1, the processing unit obtains (P1(x1, y1), P4(x4, y4)), the position information group that includes the position information of the virtual touch objects P1 and P4; based on the position information of the two images on the straight lines P2M1 and P2N1, and of the two images on the straight lines P3M2 and P3N2, it obtains (P2(x2, y2), P3(x3, y3)), the position information group that includes the position information of the actual touch objects P2 and P3. However, only (P2(x2, y2), P3(x3, y3)) contains the actual touch objects P2 and P3; the points P1(x1, y1) and P4(x4, y4) are "ghost points", so the touch screen cannot accurately locate the touch objects. Ghost points may also appear when there are three or more touch objects.
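  • To make the ambiguity concrete, the following sketch (an illustration under assumed geometry, not patent text) enumerates every way of pairing the sight rays seen by the two cameras; with n touches seen by each camera, n! candidate position information groups arise, only one of which consists entirely of actual touch points:

```python
import itertools
import math

def ray_intersection(p0, a0, p1, a1):
    """Intersect two rays given their origins and angles (radians)."""
    d0 = (math.cos(a0), math.sin(a0))
    d1 = (math.cos(a1), math.sin(a1))
    # Solve p0 + t0*d0 = p1 + t1*d1 for t0 (2x2 linear system, Cramer's rule).
    det = d0[0] * (-d1[1]) - (-d1[0]) * d0[1]
    rhs = (p1[0] - p0[0], p1[1] - p0[1])
    t0 = (rhs[0] * (-d1[1]) - (-d1[0]) * rhs[1]) / det
    return (p0[0] + t0 * d0[0], p0[1] + t0 * d0[1])

def candidate_groups(cam0, angles0, cam1, angles1):
    """All position information groups obtained by pairing camera-0 rays
    with every permutation of camera-1 rays."""
    return [[ray_intersection(cam0, a0, cam1, a1)
             for a0, a1 in zip(angles0, perm)]
            for perm in itertools.permutations(angles1)]

# Two touches at (0.3, 0.5) and (0.7, 0.4); cameras at (0, 0) and (1, 0).
# One of the two resulting groups is real, the other contains "ghost points".
groups = candidate_groups(
    (0.0, 0.0), [math.atan2(0.5, 0.3), math.atan2(0.4, 0.7)],
    (1.0, 0.0), [math.atan2(0.5, -0.7), math.atan2(0.4, -0.3)])
for g in groups:
    print([(round(x, 2), round(y, 2)) for x, y in g])
```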
  • the present invention provides a touch positioning method and apparatus, and a touch system for removing a "ghost point" that occurs during the process of positioning two or more touch objects, and accurately positioning the touch object.
  • The present invention provides a touch positioning method applied to a touch system that includes at least one imaging device group and a touch detection area, the imaging device group including a first imaging device group, the first imaging device group including at least two imaging devices, and each location in the touch detection area being located within the fields of view of two differently positioned imaging devices in the first imaging device group. The method comprises: acquiring a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group, where each first touch object position information group includes position information of an actual touch object and/or of a virtual touch object; and removing, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The present invention also provides a touch positioning method applied to a touch system that includes at least one multi-lens imaging apparatus and a touch detection area, the multi-lens imaging apparatus including a first multi-lens imaging apparatus, the first multi-lens imaging apparatus including at least two lenses and an optical sensor, and each position in the touch detection area being located within the fields of view of two differently positioned lenses of the first multi-lens imaging apparatus. The method comprises: acquiring a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging apparatus, where each first touch object position information group includes position information of an actual touch object and/or of a virtual touch object; and removing, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The invention also provides a touch positioning device, comprising:
  • at least one imaging device group, including a first imaging device group, the first imaging device group including at least two imaging devices, each position in the touch detection region of the touch system being located within the fields of view of two differently positioned imaging devices in the first imaging device group, the imaging devices being used to collect image data of the touch detection area;
  • a first touch object position information group acquiring module, configured to acquire a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object; and a first actual touch object position information group obtaining module, configured to remove, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The invention also provides another touch positioning device, comprising:
  • at least one multi-lens imaging device, including a first multi-lens imaging device, the first multi-lens imaging device including at least two lenses and an optical sensor, each position in the touch detection area of the touch system being located within the fields of view of two differently positioned lenses of the first multi-lens imaging device, the lenses being used to acquire image data of the touch detection area and to image the image data on the optical sensor;
  • a fourth touch object position information group acquiring module, configured to acquire a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging device, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object; and a fourth actual touch object position information group obtaining module, configured to remove, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The invention also provides a touch system, comprising:
  • at least one imaging device group installed around the touch detection area of the touch system, the imaging device group including a first imaging device group, the first imaging device group including at least two imaging devices, each location in the touch detection area being located within the fields of view of two differently positioned imaging devices in the first imaging device group, the imaging devices being configured to acquire image data of the touch detection area;
  • at least one illumination source, each mounted at a position adjacent to the at least one imaging device group; retroreflective strips mounted around the touch detection area or on the touch object, for reflecting the light emitted by the at least one illumination source onto the retroreflective strips back to the at least one imaging device group; and a processing unit configured to acquire a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object, and to remove, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The invention also provides another touch system, comprising:
  • at least one imaging device group mounted around the touch detection area of the touch system, the imaging device group including a first imaging device group, the first imaging device group including at least two imaging devices, each location in the touch detection region being located within the fields of view of two differently positioned imaging devices in the first imaging device group, the imaging devices being configured to acquire image data of the touch detection region;
  • at least one illumination source mounted around the touch detection area, for emitting light toward the at least one imaging device group;
  • a processing unit configured to acquire a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object, and to remove, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The invention also provides a further touch system, comprising:
  • at least one multi-lens imaging device, including a first multi-lens imaging device, the first multi-lens imaging device including at least two lenses and an optical sensor, each position within the touch detection area of the touch system being located within the fields of view of two differently positioned lenses of the first multi-lens imaging device, the lenses being used to acquire image data of the touch detection area and to image the image data on the optical sensor;
  • at least one illumination source, each mounted at a position adjacent to the at least one multi-lens imaging device; retroreflective strips mounted around the touch detection area or on the touch object, for reflecting the light emitted by the at least one illumination source onto the retroreflective strips back to the at least one multi-lens imaging device; and a processing unit configured to acquire a plurality of first touch object position information groups according to the image data acquired by two lenses in the first multi-lens imaging device, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object, and to remove, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • The invention also provides yet another touch system, comprising:
  • at least one multi-lens imaging device, including a first multi-lens imaging device, the first multi-lens imaging device including at least two lenses and an optical sensor, each position within the touch detection area of the touch system being located within the fields of view of two differently positioned lenses of the first multi-lens imaging device, the lenses being used to acquire image data of the touch detection area and to image the image data on the optical sensor;
  • at least one illumination source mounted around the touch detection area, for emitting light toward the at least one multi-lens imaging apparatus;
  • a processing unit configured to acquire a plurality of first touch object position information groups according to the image data acquired by two lenses in the first multi-lens imaging device, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object, and to remove, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain a first actual touch object position information group that includes the position information of the actual touch object.
  • By acquiring a plurality of first touch object position information groups and then removing from them the groups that include position information of a virtual touch object located outside the touch detection area to obtain the first actual touch object position information group, the present invention removes the "ghost points" that appear in the process of locating two or more touch objects and accurately locates the touch objects.
  • FIG. 1 is a schematic structural view of a touch screen with a camera in the prior art
  • FIG. 2 is a schematic diagram showing the working principle of determining the positions of two touch objects with the camera-based touch screen shown in FIG. 1;
  • FIG. 3 is a schematic flow chart of a first embodiment of a touch positioning method according to the present invention.
  • FIG. 4 is a schematic diagram showing the working principle of an example of the first embodiment of the touch positioning method of the present invention.
  • FIG. 5 is a schematic diagram showing the working principle of another example of the first embodiment of the touch positioning method of the present invention.
  • FIG. 6 is a schematic diagram of the positioning error analysis of two closely spaced imaging devices in the second embodiment of the touch positioning method of the present invention.
  • FIG. 7 is a schematic flow chart of a second embodiment of a touch positioning method according to the present invention.
  • FIG. 8 is a schematic diagram of the positioning error analysis of two widely spaced imaging devices in the second embodiment of the touch positioning method of the present invention.
  • FIG. 9 is a schematic diagram showing the working principle of an example of processor matching location information in the second embodiment of the touch positioning method of the present invention.
  • FIG. 10 is a schematic diagram showing the working principle of another example of processor matching location information in the second embodiment of the touch positioning method of the present invention.
  • FIG. 11 is a schematic diagram showing the principle of calculating a touch object size in a second embodiment of the touch positioning method of the present invention.
  • FIG. 12 is a schematic flowchart of a third embodiment of a touch positioning method according to the present invention.
  • FIG. 13 is a schematic structural view of a first embodiment of the touch positioning device of the present invention.
  • FIG. 14 is a schematic structural view of a second embodiment of a touch positioning device according to the present invention.
  • FIG. 15 is a schematic structural view of a third embodiment of the touch positioning device of the present invention.
  • FIG. 16 is a schematic structural view of a fourth embodiment of the touch positioning device of the present invention.
  • FIG. 17 is a schematic structural view of a fifth embodiment of the touch positioning device of the present invention.
  • FIG. 18 is a schematic structural view of a sixth embodiment of the touch positioning device of the present invention.
  • FIG. 19 is a schematic structural view of a first embodiment of a touch system according to the present invention.
  • FIG. 20 is a schematic structural view of a second embodiment of a touch system according to the present invention.
  • FIG. 21 is a schematic structural view of a third embodiment of a touch system according to the present invention.
  • FIG. 22 is a schematic structural view of a fourth embodiment of a touch system according to the present invention.
  • FIG. 23 is a schematic structural view of a fifth embodiment of a touch system according to the present invention.
  • FIG. 24 is a schematic structural view of a sixth embodiment of a touch system of the present invention.

Detailed description
  • the "imaging device” refers to a “single lens imaging device", and the “single lens imaging device” includes a lens and an optical sensor. Further, the imaging device may be an image capturing device such as a camera or a camera.
  • The inventors found that, when the image data collected by two imaging devices are used to locate touch objects, if the distance between any two touch objects in the direction of the line connecting the optical centers of the two imaging devices is less than the distance between the optical centers, then all the "ghost points" lie inside the touch detection area and cannot be removed; if the distance between any two touch objects in that direction is greater than or equal to the distance between the optical centers, then some "ghost points" fall outside the touch detection area. The invention therefore uses the "ghost points" appearing outside the touch detection area to remove all "ghost points".
  • The first embodiment of the touch positioning method is applied to a touch system including at least one imaging device group and a touch detection area, the imaging device group including a first imaging device group, the first imaging device group including at least two imaging devices, and each position in the touch detection area being located within the fields of view of two differently positioned imaging devices in the first imaging device group; the imaging devices are used to acquire image data of the touch detection area.
  • Optionally, the field of view of each imaging device in the first imaging device group covers the entire touch detection area from a different direction; alternatively, the first imaging device group includes three imaging devices, wherein the field of view of one imaging device covers the entire touch detection area, the fields of view of the other two imaging devices each cover part of the touch detection area, and the sum of their fields of view covers the entire touch detection area, in which case the other two imaging devices are together equivalent to one imaging device.
  • Step 31 The processing unit acquires a plurality of first touch object location information groups.
  • The processing unit acquires a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object.
  • the processing unit obtains the first touch object location information group according to the image data collected by the imaging device. For details, refer to FIG. 1, and details are not described herein again.
  • Step 32 The processing unit obtains a first actual touch object location information group.
  • The processing unit removes, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain the first actual touch object position information group, where the first actual touch object position information group includes position information of the actual touch object.
  • Assume the numbers of touch objects detected by the two imaging devices are m and n respectively, where m and n are natural numbers greater than or equal to 2; the actual number of touch objects is max(m, n), and the number of first touch object position information groups obtained is max(m, n)!. The first touch object position information groups including position information of virtual touch objects located outside the touch detection area are removed, and the first actual touch object position information group is obtained.
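  • A minimal sketch of this filtering step (illustrative only; the rectangular detection area with its lower-left corner at the origin is an assumption, not part of the patent):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def inside(p: Point, w: float, h: float) -> bool:
    """True if p lies inside an assumed w x h touch detection area."""
    return 0.0 <= p[0] <= w and 0.0 <= p[1] <= h

def remove_ghost_groups(groups: List[List[Point]], w: float, h: float) -> List[List[Point]]:
    """Keep only the position information groups whose every point lies
    inside the touch detection area; a group containing an out-of-area
    'ghost point' is discarded."""
    return [g for g in groups if all(inside(p, w, h) for p in g)]

# The second group contains a point outside a 1 x 1 area and is discarded.
g1 = [(0.3, 0.5), (0.7, 0.4)]
g2 = [(0.45, 0.75), (0.52, 1.18)]
print(remove_ghost_groups([g1, g2], 1.0, 1.0))  # -> [g1]
```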
  • FIG. 4 is a schematic diagram of the working principle of an example of the first embodiment of the touch positioning method of the present invention, wherein P1 and P2 are two actual touch objects and O1 and O2 are two imaging devices. After passing through the imaging devices O1 and O2, the actual touch objects P1 and P2 produce four images: the images formed through the imaging device O1 lie on the straight lines P1S1 and P2S2 respectively, and the images formed through the imaging device O2 lie on the straight lines P1T1 and P2T2 respectively.
  • From the position information of the two images on the straight lines P1S1 and P1T1 and of the two images on the straight lines P2S2 and P2T2, the processing unit obtains (P1(x1, y1), P2(x2, y2)), the first touch object position information group that includes the position information of the actual touch objects P1 and P2; from the position information of the two images on the straight lines P1S1 and P2T2 and of the two images on the straight lines P2S2 and P1T1, it obtains (P3(x3, y3), P4(x4, y4)), the first touch object position information group that includes the position information of the virtual touch objects P3 and P4. Since the virtual touch object P3 is located outside the touch detection area, the group (P3(x3, y3), P4(x4, y4)) is removed, and (P1(x1, y1), P2(x2, y2)), the first actual touch object position information group including the position information of the actual touch objects P1 and P2, is obtained.
  • FIG. 5 is a schematic diagram of the working principle of another example of the first embodiment of the touch positioning method of the present invention, in which P1, P2 and P3 are three actual touch objects.
  • The actual touch objects P1, P2 and P3 produce six images after passing through the imaging devices O1 and O2: the images formed through the imaging device O1 lie on the straight lines P1S1, P2S2 and P3S3 respectively, and the images formed through the imaging device O2 lie on the straight lines P1T1, P2T2 and P3T3 respectively.
  • The processing unit can obtain the following six position information groups. From the position information of the two images on the straight lines P1S1 and P1T1, the two images on the straight lines P2S2 and P2T2, and the two images on the straight lines P3S3 and P3T3, it obtains (P1(x1, y1), P2(x2, y2), P3(x3, y3)), the group that includes the position information of the actual touch objects P1, P2 and P3. From the two images on the straight lines P1S1 and P1T1, the two images on the straight lines P2S2 and P3T3, and the two images on the straight lines P3S3 and P2T2, it obtains (P1(x1, y1), P4(x4, y4), P5(x5, y5)), which includes the position information of the actual touch object P1 and of the virtual touch objects P4 and P5. From the two images on the straight lines P1S1 and P2T2, the two images on the straight lines P2S2 and P1T1, and the two images on the straight lines P3S3 and P3T3, it obtains (P6(x6, y6), P7(x7, y7), P3(x3, y3)), which includes the position information of the virtual touch objects P6 and P7 and of the actual touch object P3. From the two images on the straight lines P1S1 and P2T2, the two images on the straight lines P2S2 and P3T3, and the two images on the straight lines P3S3 and P1T1, it obtains (P6(x6, y6), P4(x4, y4), P8(x8, y8)), which includes the position information of the virtual touch objects P6, P4 and P8. From the two images on the straight lines P1S1 and P3T3, the two images on the straight lines P2S2 and P1T1, and the two images on the straight lines P3S3 and P2T2, it obtains (P9(x9, y9), P7(x7, y7), P5(x5, y5)), which includes the position information of the virtual touch objects P9, P7 and P5. From the two images on the straight lines P1S1 and P3T3, the two images on the straight lines P2S2 and P2T2, and the two images on the straight lines P3S3 and P1T1, it obtains (P9(x9, y9), P2(x2, y2), P8(x8, y8)), which includes the position information of the virtual touch objects P9 and P8 and of the actual touch object P2. In this example, each group containing virtual touch objects includes at least one virtual touch object located outside the touch detection area, so those five groups are removed and (P1(x1, y1), P2(x2, y2), P3(x3, y3)) is obtained as the first actual touch object position information group.
  • In this embodiment, the processing unit acquires a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group, and then removes from the plurality of first touch object position information groups the groups that include position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects.
  • It should be noted that, when each position in the touch detection area is located within the fields of view of two differently positioned imaging devices in the first imaging device group, the distance between any two actual touch objects in the touch detection area, measured in the direction of the line connecting the optical centers of those two imaging devices, is no less than the distance between the optical centers of the two imaging devices; the distance between the optical centers of the two imaging devices is greater than the width of one pixel recognizable by the imaging devices; and no two actual touch objects are collinear with the optical center of either of the two imaging devices.
  • In the second embodiment of the touch positioning method, the imaging device group may further include a second imaging device group, the second imaging device group including at least two imaging devices, each location in the touch detection region being located within the fields of view of two differently positioned imaging devices in the second imaging device group. The distance between any two actual touch objects, measured in the direction of the line connecting the optical centers of those two imaging devices, is no less than the distance between the optical centers of the two imaging devices; the distance between the optical centers is greater than the width of one pixel recognizable by the imaging devices; and no two actual touch objects are collinear with the optical center of either of the two imaging devices.
  • FIG. 6 is a schematic diagram of the positioning error analysis of two closely spaced imaging devices in the second embodiment of the touch positioning method of the present invention, wherein O1, O2 and O are three imaging devices: the imaging device O1 is far from the imaging device O, the imaging device O2 is close to the imaging device O, and P is the actual touch object. The ideal imaging point of the actual touch object P through the imaging device O lies on the straight line PM1, and its actual imaging point through the imaging device O lies on the straight line PM2.
  • The actual imaging point of the actual touch object P through the imaging device O1 lies on the straight line PQ1, and the actual imaging point of the actual touch object P through the imaging device O2 lies on the straight line PQ2.
  • The error, relative to the actual touch object P, of the touch object position determined from the position information of the two actual imaging points on the straight lines PM2 and PQ2 is larger than the error of the position determined from the two actual imaging points on the straight lines PM2 and PQ1. Therefore, the closer the two imaging devices are to each other, the larger the error in positioning the touch object.
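  • The effect can be reproduced numerically. The following sketch (an illustration with assumed geometry and an assumed angular quantization error standing in for one pixel, not patent text) perturbs one camera's viewing angle and shows the position error growing as the baseline between the two cameras shrinks:

```python
import math

def triangulate(alpha, beta, L):
    # Cameras at (0, 0) and (L, 0); angles measured from the connecting line.
    ta, tb = math.tan(alpha), math.tan(beta)
    x = L * tb / (ta + tb)
    return x, x * ta

P = (0.5, 0.5)             # true touch point (metres, assumed)
err = math.radians(0.1)    # angular error standing in for one pixel

for L in (1.0, 0.6, 0.2):  # shrinking distance between the two cameras
    a = math.atan2(P[1], P[0])         # true angle at camera A
    b = math.atan2(P[1], L - P[0])     # true angle at camera B
    x, y = triangulate(a + err, b, L)  # perturb camera A only
    print(f"baseline {L:.1f} m -> error {math.hypot(x - P[0], y - P[1]) * 1000:.1f} mm")
```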
  • FIG. 7 is a schematic flowchart of the second embodiment of the touch positioning method of the present invention. The difference from the flow shown in FIG. 3 is that this embodiment can further include the following steps:
  • Step 61 The processing unit acquires a plurality of second touch object location information groups.
  • The processing unit acquires a plurality of second touch object position information groups according to the image data collected by the imaging devices in the second imaging device group, where each second touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object.
  • The processing unit obtains the second touch object position information group according to the image data collected by the two imaging devices. For details, refer to FIG. 1, and details are not described herein again.
  • It should be noted that the distance between any two actual touch objects, measured in the direction of the line connecting the optical centers of any two imaging devices that collect the image data, is greater than the distance between the optical centers of those imaging devices; the distance between the optical centers of any two imaging devices that collect the image data is greater than the width of one pixel recognizable by those imaging devices; and no two actual touch objects are collinear with the optical center of any imaging device that collects the image data.
  • Step 62 The processing unit obtains a second actual touch object location information group.
  • The processing unit removes, from the plurality of second touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain the second actual touch object position information group, where the second actual touch object position information group includes position information of the actual touch object.
  • Steps 61 and 62 have no strict timing relationship with steps 31 and 32. After steps 32 and 62, the following steps may also be included:
  • Step 63 The processing unit acquires a plurality of third touch object location information groups.
  • The processing unit acquires a plurality of third touch object position information groups according to the image data collected by a first imaging device among the imaging devices in the first imaging device group and the image data collected by a second imaging device among the imaging devices in the second imaging device group, where each third touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object.
  • the processing unit obtains the third touch object location information group according to the image data collected by the two imaging devices. For details, refer to FIG. 1, and details are not described herein again.
  • Step 64 The processing unit obtains a third actual touch object location information group.
  • The processing unit matches the plurality of third touch object position information groups with the first actual touch object position information group and/or the second actual touch object position information group to obtain a third actual touch object position information group, where the third actual touch object position information group includes position information of the actual touch object.
  • In this embodiment, the processing unit first determines the position information of the actual touch objects using a group of closely spaced imaging devices, and then obtains the position information of the actual touch objects together with that of the "ghost points" using two widely spaced imaging devices. Because a "ghost point" is far from the position information determined by the closely spaced imaging devices, while an actual touch object is close to it, each third touch object position information group is matched with the first actual touch object position information group and the second actual touch object position information group, so that the position of the touch object can be located more accurately.
  • Optionally, the first imaging device may be the imaging device that detects the most touch objects among the at least two imaging devices in the first imaging device group, and the second imaging device may be the imaging device that detects the most touch objects among the at least two imaging devices in the second imaging device group.
  • FIG. 8 is a schematic diagram of the positioning error analysis of two widely spaced imaging devices in the second embodiment of the touch positioning method of the present invention, wherein O1 and O2 are two widely spaced imaging devices and P is a touch object.
  • When the touch object P is far from the line connecting the imaging devices O1 and O2, the ideal imaging point of P through the imaging device O1 lies on the straight line PQ1 and its actual imaging point lies on the straight line O1Q2, the two imaging points differing by one pixel, while the actual imaging point of P through the imaging device O2 lies on the straight line PQ3. When the touch object P is close to the line connecting the imaging devices O1 and O2, the ideal imaging point of P through the imaging device O1 lies on the straight line PQ4 and its actual imaging point lies on the straight line O1Q5, the two imaging points again differing by one pixel, while the actual imaging point of P through the imaging device O2 lies on the straight line PQ6.
  • The position information P1 of the touch object determined from the two actual imaging points on the straight lines O1Q2 and PQ3 has a smaller error relative to the touch object P than the position information P2 determined from the two actual imaging points on the straight lines O1Q5 and PQ6. Therefore, the closer the touch object is to the line connecting the imaging devices, the larger the positioning error.
  • Optionally, in order to accurately position the touch object using two widely spaced imaging devices, the first imaging device and the second imaging device are the two imaging devices that are farthest apart in the touch detection area of the touch system.
  • The processing unit matches the plurality of third touch object position information groups with the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group. Specifically, the processing unit acquires, for each third touch object position information group, the sum of the squares of the differences between its position information and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group, and takes the third touch object position information group with the smallest sum as the third actual touch object position information group.
  • FIG. 9 is a schematic diagram of the working principle of an example of the processor matching position information in the second embodiment of the touch positioning method of the present invention. The first imaging device group includes two imaging devices O1 and O2, the second imaging device group includes two imaging devices O3 and O4, and P1 and P2 are two actual touch objects. The images of the touch objects P1 and P2 through the imaging device O1 lie on the straight lines P1Q1 and P2Q2, and the images of P1 and P2 through the imaging device O2 lie on the straight lines P1Q3 and P2Q4. Based on the position information of the two images on the straight lines P1Q1 and P1Q3, and of the two images on the straight lines P2Q2 and P2Q4, the processing unit obtains the first actual touch object position information group ((x11, y11), (x12, y12)). The images of the actual touch objects P1 and P2 through the imaging device O3 lie on the straight lines P1S1 and P2S2, and their images through the imaging device O4 lie on the straight lines P1S3 and P2S4; based on the position information of the two images on the straight lines P1S1 and P1S3, and of the two images on the straight lines P2S2 and P2S4, the processing unit obtains the second actual touch object position information group ((x21, y21), (x22, y22)).
  • Based on the position information of the two images on the straight lines P1Q1 and P1S1 and of the two images on the straight lines P2Q2 and P2S2, and based on the position information of the two images on the straight lines P1Q1 and P2S2 and of the two images on the straight lines P2Q2 and P1S1, the processing unit obtains two third touch object position information groups ((x31, y31), (x32, y32)) and ((x41, y41), (x42, y42)). Because the image data of the image on the straight line P1Q1 on which the position information (x31, y31) is based is the same as the image data on which the position information (x11, y11) in the first actual touch object position information group is based, and the image data of the image on the straight line P1S1 on which (x31, y31) is based is the same as the image data on which (x21, y21) in the second actual touch object position information group is based, the position information (x31, y31) corresponds to (x11, y11) and (x21, y21). In the same way, the position information (x32, y32) corresponds to (x12, y12) in the first actual touch object position information group and (x22, y22) in the second actual touch object position information group, the position information (x41, y41) corresponds to (x11, y11) and (x22, y22), and the position information (x42, y42) corresponds to (x12, y12) and (x21, y21).
  • The sum of the squares of the differences between the position information in the third touch object position information group ((x31, y31), (x32, y32)) and the corresponding position information in the first and second actual touch object position information groups is: (x31 - x11)² + (y31 - y11)² + (x31 - x21)² + (y31 - y21)² + (x32 - x12)² + (y32 - y12)² + (x32 - x22)² + (y32 - y22)².
  • The corresponding sum for the third touch object position information group ((x41, y41), (x42, y42)) is: (x41 - x11)² + (y41 - y11)² + (x41 - x22)² + (y41 - y22)² + (x42 - x12)² + (y42 - y12)² + (x42 - x21)² + (y42 - y21)².
  • The third touch object position information group with the smaller sum of squared differences is taken as the third actual touch object position information group.
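  • A compact sketch of the matching rule (illustrative; it assumes the actual groups have already been ordered so that their points correspond index-wise with each candidate group, which the patent's example establishes by tracing shared image data): each candidate third group is scored by the sum of squared differences against the corresponding points of the first and second actual groups, and the lowest-scoring candidate is kept. The sum-of-absolute-differences variant described below would only change the inner accumulation.

```python
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def score(candidate: Sequence[Point], refs: Sequence[Sequence[Point]]) -> float:
    """Sum of squared differences between each point of a candidate group
    and its corresponding points in the reference (actual) groups."""
    s = 0.0
    for p, ref_pts in zip(candidate, zip(*refs)):
        for r in ref_pts:
            s += (p[0] - r[0]) ** 2 + (p[1] - r[1]) ** 2
    return s

def best_group(candidates: List[Sequence[Point]],
               refs: Sequence[Sequence[Point]]) -> Sequence[Point]:
    """Pick the candidate group with the smallest summed squared difference."""
    return min(candidates, key=lambda g: score(g, refs))
```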
  • FIG. 10 is a schematic diagram of another example of the processor matching position information in the second embodiment of the touch positioning method of the present invention, in which P1, P2 and P3 are three actual touch objects; the actual touch objects P1 and P3 are collinear with the imaging device O1, and the actual touch objects P2 and P3 are collinear with the imaging device O2.
  • The images formed by the actual touch objects through the imaging device O1 lie on the straight lines P3Q1 and P2Q2 (the image of P1 also lies on P3Q1), and the images formed through the imaging device O2 lie on the straight lines P1Q3 and P3Q4 (the image of P2 also lies on P3Q4). From the two imaging points on the straight lines P3Q1 and P1Q3, and the two imaging points on the straight lines P2Q2 and P3Q4, the processing unit obtains the first actual touch object position information group ((x11, y11), (x12, y12)).
  • The images formed by the actual touch objects P1, P2 and P3 through the imaging device O3 lie on the straight lines P1S2, P2S1 and P3S3 respectively, and their images through the imaging device O4 lie on the straight lines P1S5, P2S4 and P3S6 respectively. From the position information of the two images on the straight lines P1S2 and P1S5, the two images on the straight lines P2S1 and P2S4, and the two images on the straight lines P3S3 and P3S6, the processing unit obtains the second actual touch object position information group ((x21, y21), (x22, y22), (x23, y23)).
  • From the position information of the two images on the straight lines P3Q1 and P2S1, the two images on the straight lines P2Q2 and P1S2, and the two images on the straight lines P2Q2 and P3S3, the processing unit obtains a third touch object position information group ((x31, y31), (x32, y32), (x33, y33)); from the two images on the straight lines P3Q1 and P1S2, the two images on the straight lines P2Q2 and P2S1, and the two images on the straight lines P2Q2 and P3S3, it obtains a third touch object position information group ((x41, y41), (x42, y42), (x43, y43)); from the two images on the straight lines P3Q1 and P3S3, the two images on the straight lines P2Q2 and P2S1, and the two images on the straight lines P2Q2 and P1S2, it obtains a third touch object position information group ((x51, y51), (x52, y52), (x53, y53)); the remaining third touch object position information groups are obtained in the same way.
  • When matching the plurality of third touch object position information groups with the first actual touch object position information group and the second actual touch object position information group, the processing unit may, in addition to the two examples shown in FIG. 9 and FIG. 10, acquire, for each third touch object position information group, the sum of the absolute values of the differences between its position information and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group, and take the third touch object position information group with the smallest sum of absolute differences as the third actual touch object position information group.
  • After step 31, the following step may further be included:
  • Step 65 The processing unit acquires size information of the actual touch object.
  • the processing unit acquires the size information of the actual touch object according to the image data collected by the imaging device in the first imaging device group.
  • Touch objects of different sizes form dark areas of different widths in the image data; this feature can therefore be used to estimate the size of the touch object.
  • FIG. 11 is a schematic diagram of the principle of calculating the touch object size in the second embodiment of the touch positioning method of the present invention. P is the touch object and O1 is one of the two imaging devices; the image formed by the touch object P through the imaging device O1 lies between the straight lines O1P' and O1P'', and the center of P'P'' is P0. From the data collected by the imaging device O1, the angle between O1P' and O1P'' can be calculated; the corresponding angle at the other of the two imaging devices can be calculated in the same way, and from these measurements the approximate radius r of the touch object P is obtained.
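  • The patent's radius formula itself is not reproduced above; the following sketch therefore uses standard tangent-line geometry as an assumption: for a roughly circular object whose center is at distance d from the lens and whose image spans an angular width theta, sin(theta / 2) = r / d, and the two per-camera estimates are averaged:

```python
import math

def radius_from_camera(d: float, theta: float) -> float:
    """One camera's radius estimate: object center at distance d, image
    spanning angular width theta (assumed tangent-line geometry consistent
    with FIG. 11, not a formula quoted from the patent)."""
    return d * math.sin(theta / 2.0)

def touch_object_radius(d1: float, theta1: float,
                        d2: float, theta2: float) -> float:
    """Average the two per-camera estimates of the approximate radius r."""
    return 0.5 * (radius_from_camera(d1, theta1) + radius_from_camera(d2, theta2))

# Example: a stylus of ~5 mm radius, 300 mm from each camera,
# subtends roughly 1.9 degrees at each lens.
print(touch_object_radius(300.0, math.radians(1.91), 300.0, math.radians(1.91)))
```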
  • In this embodiment, the processing unit first obtains the first actual touch object position information group and the second actual touch object position information group according to the image data collected by the two groups of closely spaced imaging devices, then obtains the third touch object position information groups according to two widely spaced imaging devices, and finally matches the third touch object position information groups with the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and accurately locating the touch objects.
  • Further, the processing unit may determine the size of the touch object according to the image data collected by two imaging devices.
  • It should be noted that each position in the touch detection area is located within the fields of view of two differently positioned imaging devices in the first imaging device group; the distance between any two actual touch objects in the touch detection area, measured in the direction of the line connecting the optical centers of those two imaging devices, is no less than the distance between the optical centers; the distance between the optical centers is greater than the width of one pixel recognizable by the imaging devices; and no two actual touch objects are collinear with the optical center of either imaging device.
  • In the third embodiment of the touch positioning method, the touch system may further include at least one imaging device, the at least one imaging device including a third imaging device, and each position in the touch detection region is located within the field of view of the third imaging device.
  • FIG. 12 is a schematic flowchart of the third embodiment of the touch positioning method of the present invention. The difference from the flow shown in FIG. 3 is that, after step 32, the following steps may further be included:
  • Step 71 The processing unit acquires a plurality of second touch object location information groups.
  • The processing unit acquires a plurality of second touch object position information groups according to the image data collected by an imaging device in the first imaging device group and by the third imaging device, where each second touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object;
  • Step 72 The processing unit acquires a second actual touch object location information group.
  • The processing unit matches the plurality of second touch object position information groups with the first actual touch object position information group to obtain a second actual touch object position information group, where the second actual touch object position information group includes the position information of the actual touch object.
  • When matching the plurality of second touch object position information groups with the first actual touch object position information group, the processing unit acquires, for each second touch object position information group, the sum of the squares of the differences between its position information and the corresponding position information in the first actual touch object position information group, and takes the group with the smallest sum as the second actual touch object position information group; alternatively, the processing unit may acquire, for each second touch object position information group, the sum of the absolute values of the differences between its position information and the corresponding position information in the first actual touch object position information group, and take the group with the smallest sum of absolute differences as the second actual touch object position information group.
  • the processing unit may further acquire size information of the actual touch object. Specifically, the processing unit obtains the size information of the actual touch object according to the image data collected by the imaging device in the first imaging device group. Referring to FIG. 11, details are not described herein again.
  • In this embodiment, the processing unit first obtains the first actual touch object position information group according to the image data collected by a group of closely spaced imaging devices, then obtains the second touch object position information groups according to two widely spaced imaging devices, and matches the second touch object position information groups with the first actual touch object position information group to obtain the second actual touch object position information group, thereby removing the "ghost points" appearing in the process of locating two or more touch objects and accurately locating the touch objects.
  • Further, the processing unit may determine the size of the touch object based on the image data acquired by two imaging devices.
  • The fourth embodiment of the touch positioning method is applied to a touch system including at least one multi-lens imaging apparatus and a touch detection area, the multi-lens imaging apparatus including a first multi-lens imaging apparatus, the first multi-lens imaging apparatus including at least two lenses and an optical sensor, and each location within the touch detection area being located within the fields of view of two differently positioned lenses of the first multi-lens imaging apparatus.
  • Optionally, the field of view of each lens in the first multi-lens imaging device covers the entire touch detection area from a different direction; alternatively, the first multi-lens imaging device includes three lenses, wherein the field of view of one lens covers the entire touch detection area, the fields of view of the other two lenses each cover part of the touch detection area, and the sum of their fields of view covers the entire touch detection area, in which case the other two lenses are together equivalent to one lens.
  • The lenses acquire image data of the touch detection area and image the data on the optical sensor; specifically, different lenses form images in different areas of the optical sensor.
  • In step 31, the processing unit acquires a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging device, where each first touch object position information group includes position information of the actual touch object and/or position information of the virtual touch object.
  • In step 32, the processing unit removes, from the plurality of first touch object position information groups, the groups that include position information of a virtual touch object located outside the touch detection area, to obtain the first actual touch object position information group, where the first actual touch object position information group includes position information of the actual touch object.
  • Assume the numbers of touch objects detected by the two lenses in the first multi-lens imaging device are m and n, where m and n are natural numbers greater than or equal to 2; the actual number of touch objects is max(m, n), and the number of first touch object position information groups obtained is max(m, n)!.
  • The first touch object position information groups including position information of virtual touch objects located outside the touch detection area are removed, and the first actual touch object position information group is obtained.
  • For the specific working principle of this embodiment, reference may be made to FIG. 4 and FIG. 5; the imaging devices in FIG. 4 and FIG. 5 correspond to the lenses in this embodiment, and details are not described herein again.
In this embodiment, the processing unit acquires a plurality of first touch object position information groups from the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device, and then removes from them every first touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and accurately positions each touch object.
When each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two lenses, is not less than the distance between the optical centers of the two lenses; the distance between the optical centers of the two lenses is greater than the width of a pixel recognizable by the two lenses; and any two actual touch objects are not in a straight line with the optical center of either of the two lenses.
The at least one multi-lens imaging device may further include a second multi-lens imaging device. The second multi-lens imaging device includes at least two lenses and an optical sensor, and each position in the touch detection area is located within the fields of view of two different lenses in the second multi-lens imaging device. The distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two lenses, is not less than the distance between the optical centers of the two lenses; the distance between the optical centers of the two lenses is greater than the width of a pixel recognizable by the two lenses; and any two actual touch objects are not in a straight line with the optical center of either of the two lenses.
The field of view of each lens in the second multi-lens imaging device covers the entire touch detection area from a different direction. Optionally, the second multi-lens imaging device includes three lenses: the field of view of one lens covers the entire touch detection area, while the fields of view of the other two lenses each cover part of the touch detection area and together cover the entire touch detection area; in that case the other two lenses are equivalent to a single lens.
When the lenses in a multi-lens imaging device are used to position a touch object, the smaller the spacing between the lenses, the larger the positioning error.
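This dependence can be illustrated numerically. Under an assumed small-angle triangulation model (an illustration, not the patent's analysis), a target at distance Z located from two lenses with spacing (baseline) B is perturbed by roughly Z^2 * dtheta / B when each bearing carries an angular error dtheta, so shrinking the baseline proportionally inflates the error. The numbers below are arbitrary.

```python
# Rough illustration (assumed small-angle model): triangulation error
# grows as Z**2 * dtheta / B, so a smaller lens spacing (baseline B)
# yields a larger positioning error.

def depth_error(z, baseline, dtheta):
    """Approximate depth uncertainty for angular error dtheta (radians)."""
    return z * z * dtheta / baseline

if __name__ == "__main__":
    z, dtheta = 500.0, 1e-3            # target 500 mm away, 1 mrad error
    for baseline in (10.0, 50.0, 250.0):   # lens spacings in mm (assumed)
        print(f"B={baseline:6.1f} mm -> error ~ "
              f"{depth_error(z, baseline, dtheta):6.1f} mm")
```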
The imaging devices in FIG. 6 are equivalent to the lenses in this embodiment and are not described here again.
Step 61: The processing unit acquires a plurality of second touch object position information groups from the image data collected by the lenses in the second multi-lens imaging device; each second touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
Step 62: The processing unit removes, from the plurality of second touch object position information groups, every second touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the second actual touch object position information group; the second actual touch object position information group includes position information of the actual touch objects.
Step 63: The processing unit acquires a plurality of third touch object position information groups from the image data acquired by the first lens in the first multi-lens imaging device and the image data collected by the second lens in the second multi-lens imaging device; each third touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
Step 64: The processing unit matches the plurality of third touch object position information groups against the first actual touch object position information group and/or the second actual touch object position information group to obtain the third actual touch object position information group; the third actual touch object position information group includes position information of the actual touch objects.
The processing unit first determines the position information of the actual touch objects using a pair of closely spaced lenses, and then obtains the position information of both the actual touch objects and the "ghost points" using a pair of widely spaced lenses. Because a "ghost point" lies far from the position information determined by the closely spaced lenses, while an actual touch object lies close to it, matching each third touch object position information group against the first actual touch object position information group and the second actual touch object position information group positions the touch objects more accurately.
In order to detect as many touch objects as possible, the first lens may be the lens among the at least two lenses of the first multi-lens imaging device that detects the most touch objects, and the second lens may be the lens among the at least two lenses of the second multi-lens imaging device that detects the most touch objects.
The imaging devices in FIG. 8 are equivalent to the lenses in this embodiment and are not described here again.
Since the position of a touch object is located more accurately with two widely spaced lenses, the first lens and the second lens may also be chosen as the two lenses of the touch system that are farthest apart.
In step 64, the processing unit preferably matches the plurality of third touch object position information groups against both the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group. Specifically, for each third touch object position information group the processing unit acquires the sum of the squares of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group, and takes the third touch object position information group that minimizes this sum of squares as the third actual touch object position information group. Alternatively, the processing unit may acquire the sum of the absolute values of these differences and take the third touch object position information group that minimizes the sum of absolute values as the third actual touch object position information group.
After step 31, the following step may be further included:
Step 66: The processing unit acquires size information of the actual touch objects from the image data collected by the lenses in the first multi-lens imaging device.
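Step 66 does not spell out the computation. One plausible sketch, offered as an assumption rather than the patent's prescribed method, estimates a touch object's width from its angular extent in one image together with its triangulated distance from the lens; all names and numbers are illustrative.

```python
import math

# Hypothetical size estimate (not prescribed by the patent): a touch
# object spanning sensor offsets [p0, p1] subtends an angle that,
# combined with its triangulated distance, gives an approximate width.

FOCAL_LENGTH_PX = 400.0  # assumed focal length in pixel units

def touch_width(p0, p1, distance):
    """Approximate object width from its pixel extent and distance."""
    ang = abs(math.atan(p1 / FOCAL_LENGTH_PX)
              - math.atan(p0 / FOCAL_LENGTH_PX))
    return 2.0 * distance * math.tan(ang / 2.0)

if __name__ == "__main__":
    # shadow from pixel offset 10 to 22, object 400 mm from the lens
    print(f"width ~ {touch_width(10.0, 22.0, 400.0):.1f} mm")
```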
In this embodiment, the processing unit first obtains the first actual touch object position information group and the second actual touch object position information group from the image data acquired by the two closely spaced pairs of lenses, then obtains the third touch object position information groups from the two widely spaced lenses, and matches the third touch object position information groups against the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and precisely locates each touch object.
The processing unit may further determine the size of the touch object from the image data collected by the two closely spaced lenses.
Touch positioning method sixth embodiment
In this embodiment, when each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two lenses, is not less than the distance between the optical centers of the two lenses; the distance between the optical centers of the two lenses is greater than the width of a pixel recognizable by the two lenses; and any two actual touch objects are not in a straight line with the optical center of either of the two lenses.
The touch system may further include at least one single-lens imaging device, including a first single-lens imaging device; each position in the touch detection area is located within the field of view of the first single-lens imaging device.
Step 71: The processing unit acquires a plurality of second touch object position information groups from the image data collected by a lens in the first multi-lens imaging device and by the first single-lens imaging device; each second touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
Step 72: The processing unit matches the plurality of second touch object position information groups against the first actual touch object position information group to obtain the second actual touch object position information group; the second actual touch object position information group includes position information of the actual touch objects.
When matching the plurality of second touch object position information groups against the first actual touch object position information group, the processing unit acquires, for each second touch object position information group, the sum of the squares of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group, and takes the group that minimizes this sum as the second actual touch object position information group. Alternatively, the processing unit may acquire the sum of the absolute values of these differences and take the second touch object position information group that minimizes the sum of absolute values as the second actual touch object position information group.
The processing unit may further acquire the size information of the actual touch objects from the image data collected by the lenses in the first multi-lens imaging device.
The imaging device in FIG. 11 is equivalent to the lens in this embodiment and is not described here again.
In this embodiment, the processing unit first obtains the first actual touch object position information group from the image data acquired by the two closely spaced lenses, then obtains the second touch object position information groups from the image data collected by a lens of the first multi-lens imaging device and by the first single-lens imaging device, and matches the second touch object position information groups against the first actual touch object position information group to obtain the second actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and accurately positions each touch object.
The processing unit may further determine the size of the touch object from the image data collected by the two closely spaced lenses.
FIG. 13 is a schematic structural diagram of the first embodiment of the touch positioning device of the present invention, which may include at least one imaging device group 121, a first touch object position information group acquiring module 122, and a first actual touch object position information group acquiring module 123.
The first touch object position information group acquiring module 122 is connected to the at least one imaging device group 121, and the first actual touch object position information group acquiring module 123 is connected to the first touch object position information group acquiring module 122.
The at least one imaging device group 121 includes at least a first imaging device group, which may include at least two imaging devices; each position in the touch detection area of the touch system is located within the fields of view of two different imaging devices in the first imaging device group, and the imaging devices are used to acquire image data of the touch detection area.
The field of view of each imaging device in the first imaging device group covers the entire touch detection area from a different direction. Optionally, the first imaging device group includes three imaging devices: the field of view of one imaging device covers the entire touch detection area, while the fields of view of the other two imaging devices each cover part of the touch detection area and together cover the entire touch detection area; in that case the other two imaging devices are equivalent to a single imaging device.
The first touch object position information group acquiring module 122 is configured to acquire a plurality of first touch object position information groups from the image data collected by the imaging devices in the first imaging device group; each first touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects. For details of how the module 122 obtains the first touch object position information groups from the image data collected by two imaging devices, refer to FIG. 1; details are not repeated here.
The first actual touch object position information group acquiring module 123 is configured to remove, from the plurality of first touch object position information groups, every first touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group; the first actual touch object position information group includes position information of the actual touch objects.
When the numbers of touch objects detected by the two imaging devices are m and n respectively, where m and n are natural numbers greater than or equal to 2, the actual number of touch objects is max(m, n), and the number of first touch object position information groups obtained by the first touch object position information group acquiring module 122 is max(m, n)!. The first actual touch object position information group acquiring module 123 removes the first touch object position information groups that include position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group.
In this embodiment, the first touch object position information group acquiring module 122 acquires a plurality of first touch object position information groups from the image data collected by the imaging devices in the first imaging device group of the at least one imaging device group 121, and the first actual touch object position information group acquiring module 123 removes from them every first touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and accurately positions each touch object.
When each position in the touch detection area is located within the fields of view of two different imaging devices in the first imaging device group, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two imaging devices, is not less than the distance between the optical centers of the two imaging devices; the distance between the optical centers of the two imaging devices is greater than the width of a pixel recognizable by the two imaging devices; and any two actual touch objects are not in a straight line with the optical center of either of the two imaging devices.
FIG. 14 is a schematic structural diagram of the second embodiment of the touch positioning device according to the present invention. In order to position touch objects more accurately, it differs from the structure shown in FIG. 13 in that the at least one imaging device group 121 may specifically be at least two imaging device groups 131. In addition to the first imaging device group, the at least two imaging device groups 131 may include a second imaging device group, which may include at least two imaging devices; each position in the touch detection area is located within the fields of view of two different imaging devices in the second imaging device group.
The distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two imaging devices, is not less than the distance between the optical centers of the two imaging devices; the distance between the optical centers of the two imaging devices is greater than the width of a pixel recognizable by the two imaging devices; and any two actual touch objects are not in a straight line with the optical center of either of the two imaging devices.
The field of view of each imaging device in the second imaging device group covers the entire touch detection area from a different direction. Optionally, the second imaging device group includes three imaging devices: the field of view of one imaging device covers the entire touch detection area, while the fields of view of the other two imaging devices each cover part of the touch detection area and together cover the entire touch detection area; in that case the other two imaging devices are equivalent to a single imaging device.
This embodiment may further include a second touch object position information group acquiring module 132, a second actual touch object position information group acquiring module 133, a third touch object position information group acquiring module 134, and a third actual touch object position information group acquiring module 135. The second touch object position information group acquiring module 132 is connected to the at least two imaging device groups 131, and the second actual touch object position information group acquiring module 133 is connected to the second touch object position information group acquiring module 132. The third touch object position information group acquiring module 134 is connected to the at least two imaging device groups 131, and the third actual touch object position information group acquiring module 135 is connected to the first actual touch object position information group acquiring module 123, the second actual touch object position information group acquiring module 133, and the third touch object position information group acquiring module 134, respectively.
The second touch object position information group acquiring module 132 is configured to acquire a plurality of second touch object position information groups from the image data collected by the imaging devices in the second imaging device group of the at least two imaging device groups 131; each second touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects. For details of how the module 132 obtains the second touch object position information groups from the image data collected by two imaging devices, refer to FIG. 1; details are not repeated here.
The second actual touch object position information group acquiring module 133 is configured to remove, from the plurality of second touch object position information groups, every second touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the second actual touch object position information group; the second actual touch object position information group includes position information of the actual touch objects.
The third touch object position information group acquiring module 134 is configured to acquire a plurality of third touch object position information groups from the image data collected by the first imaging device in the first imaging device group and the image data collected by the second imaging device in the second imaging device group; each third touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects. For details of how the module 134 obtains the third touch object position information groups from the image data collected by two imaging devices, refer to FIG. 1; details are not repeated here.
The third actual touch object position information group acquiring module 135 is configured to match the plurality of third touch object position information groups against the first actual touch object position information group and/or the second actual touch object position information group to obtain the third actual touch object position information group; the third actual touch object position information group includes position information of the actual touch objects.
The first actual touch object position information group acquiring module 123 and the second actual touch object position information group acquiring module 133 each determine the position information of the actual touch objects using a group of closely spaced imaging devices; the third touch object position information group acquiring module 134 then obtains the position information of both the actual touch objects and the "ghost points" using a pair of widely spaced imaging devices. Because a "ghost point" lies far from the position information determined by the closely spaced imaging devices, while an actual touch object lies close to it, the third actual touch object position information group acquiring module 135 uses this property to match each third touch object position information group against the first actual touch object position information group and the second actual touch object position information group, so that the touch objects can be positioned more accurately.
In order to detect as many touch objects as possible, the first imaging device may be the imaging device among the at least two imaging devices of the first imaging device group that detects the most touch objects, and the second imaging device may be the imaging device among the at least two imaging devices of the second imaging device group that detects the most touch objects.
To position touch objects more accurately, the two imaging devices that are farthest apart can also be selected; in this embodiment, the first imaging device and the second imaging device are the two imaging devices of the touch system that are farthest apart.
Preferably, the third actual touch object position information group acquiring module 135 matches the plurality of third touch object position information groups against both the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group.
The third actual touch object position information group acquiring module 135 may include a third distance acquiring unit 1351 and a third actual touch object position information group acquiring unit 1352.
The third distance acquiring unit 1351 is connected to the third touch object position information group acquiring module 134, the second actual touch object position information group acquiring module 133, and the first actual touch object position information group acquiring module 123, respectively; the third actual touch object position information group acquiring unit 1352 is connected to the third distance acquiring unit 1351.
The third distance acquiring unit 1351 is configured to acquire, for each third touch object position information group, the sum of the squares of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group. The third actual touch object position information group acquiring unit 1352 takes the third touch object position information group that minimizes this sum of squares as the third actual touch object position information group.
One of the image data on which the position information in the third touch object position information group is based is the same as one of the image data on which the corresponding position information in the first actual touch object position information group is based, and as one of the image data on which the corresponding position information in the second actual touch object position information group is based.
For the working principles of the third distance acquiring unit 1351 and the third actual touch object position information group acquiring unit 1352, refer to the second embodiment of the touch positioning method of the present invention; they are not described here again.
Alternatively, the third distance acquiring unit 1351 may acquire, for each third touch object position information group, the sum of the absolute values of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group; the third actual touch object position information group acquiring unit 1352 may then take the third touch object position information group that minimizes this sum of absolute values as the third actual touch object position information group.
This embodiment may further include a first actual touch object size information acquiring module 136, connected to the at least two imaging device groups 131, for acquiring the size information of the actual touch objects from the image data collected by the imaging devices in the first imaging device group. For details, refer to FIG. 11; details are not repeated here.
In this embodiment, the first actual touch object position information group acquiring module 123 and the second actual touch object position information group acquiring module 133 obtain the first actual touch object position information group and the second actual touch object position information group from the image data collected by the two groups of closely spaced imaging devices; the third touch object position information group acquiring module 134 then obtains the third touch object position information groups from the two widely spaced imaging devices, and the third actual touch object position information group acquiring module 135 matches the third touch object position information groups against the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and precisely locates each touch object. In addition, the first actual touch object size information acquiring module 136 can determine the size of the touch object.
When each position in the touch detection area is located within the fields of view of two different imaging devices in the first imaging device group, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two imaging devices, is not less than the distance between the optical centers of the two imaging devices; the distance between the optical centers of the two imaging devices is greater than the width of a pixel recognizable by the two imaging devices; and any two actual touch objects are not in a straight line with the optical center of either of the two imaging devices.
FIG. 15 is a schematic structural diagram of the third embodiment of the touch positioning device of the present invention. It differs from the structure shown in FIG. 13 in that this embodiment further includes at least one imaging device 141, a seventh touch object position information group acquiring module 142, and a seventh actual touch object position information group acquiring module 143. The at least one imaging device 141 may include a third imaging device, and each position in the touch detection area is located within the field of view of the third imaging device. The seventh touch object position information group acquiring module 142 is connected to the at least one imaging device 141, and the seventh actual touch object position information group acquiring module 143 is connected to the first actual touch object position information group acquiring module 123 and the seventh touch object position information group acquiring module 142, respectively.
The seventh touch object position information group acquiring module 142 is configured to acquire a plurality of second touch object position information groups from the image data collected by an imaging device in the first imaging device group and by the third imaging device; each second touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
The seventh actual touch object position information group acquiring module 143 is configured to match the plurality of second touch object position information groups against the first actual touch object position information group to obtain the second actual touch object position information group; the second actual touch object position information group includes position information of the actual touch objects.
The seventh actual touch object position information group acquiring module 143 may include a seventh distance acquiring unit 1431 and a seventh actual touch object position information group acquiring unit 1432. The seventh distance acquiring unit 1431 is connected to the first actual touch object position information group acquiring module 123 and the seventh touch object position information group acquiring module 142, respectively, and the seventh actual touch object position information group acquiring unit 1432 is connected to the seventh distance acquiring unit 1431.
The seventh distance acquiring unit 1431 is configured to acquire, for each second touch object position information group, the sum of the squares of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group.
The seventh actual touch object position information group acquiring unit 1432 is configured to take the second touch object position information group that minimizes the sum of squares of the differences as the second actual touch object position information group; one of the image data on which the position information in the second touch object position information group is based is the same as one of the image data on which the corresponding position information in the first actual touch object position information group is based.
Alternatively, the seventh distance acquiring unit 1431 may acquire, for each second touch object position information group, the sum of the absolute values of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group; the seventh actual touch object position information group acquiring unit 1432 then takes the second touch object position information group that minimizes this sum of absolute values as the second actual touch object position information group.
This embodiment may further include a first actual touch object size information acquiring module 136, connected to the at least one imaging device group 121, for obtaining the size information of the actual touch objects from the image data collected by the imaging devices in the first imaging device group of the at least one imaging device group 121.
In this embodiment, the first actual touch object position information group acquiring module 123 obtains the first actual touch object position information group from the image data collected by the two closely spaced imaging devices; the seventh touch object position information group acquiring module 142 then acquires a plurality of second touch object position information groups from the image data collected by the two widely spaced imaging devices, and the seventh actual touch object position information group acquiring module 143 matches the plurality of second touch object position information groups against the first actual touch object position information group to obtain the second actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and precisely locates each touch object. In addition, the first actual touch object size information acquiring module 136 can determine the size of the touch object from the two closely spaced imaging devices.
FIG. 16 is a schematic structural diagram of the fourth embodiment of the touch positioning device according to the present invention, which may include at least one multi-lens imaging device 151, a fourth touch object position information group acquiring module 152, and a fourth actual touch object position information group acquiring module 153. The fourth touch object position information group acquiring module 152 is connected to the at least one multi-lens imaging device 151, and the fourth actual touch object position information group acquiring module 153 is connected to the fourth touch object position information group acquiring module 152.
The at least one multi-lens imaging device 151 may include a first multi-lens imaging device, which may include at least two lenses and an optical sensor; each position in the touch detection area of the touch system is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device. Each lens captures image data of the touch detection area and images it onto the optical sensor; specifically, different lenses are imaged onto different regions of the optical sensor.
The field of view of each lens in the first multi-lens imaging device covers the entire touch detection area from a different direction. Optionally, the first multi-lens imaging device includes three lenses: the field of view of one lens covers the entire touch detection area, while the fields of view of the other two lenses each cover part of the touch detection area and together cover the entire touch detection area; in that case the other two lenses are equivalent to a single lens.
The fourth touch object position information group acquiring module 152 is configured to acquire a plurality of first touch object position information groups from the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 151; each first touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
The fourth actual touch object position information group acquiring module 153 is configured to remove, from the plurality of first touch object position information groups, every first touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group; the first actual touch object position information group includes position information of the actual touch objects.
When the numbers of touch objects detected by the two lenses are m and n respectively, where m and n are natural numbers greater than or equal to 2, the actual number of touch objects is max(m, n), and the number of first touch object position information groups obtained by the fourth touch object position information group acquiring module 152 is max(m, n)!. The fourth actual touch object position information group acquiring module 153 removes the first touch object position information groups that include position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group.
In this embodiment, the fourth touch object position information group acquiring module 152 acquires a plurality of first touch object position information groups from the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 151; the fourth actual touch object position information group acquiring module 153 then removes from them every first touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the first actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and accurately positions each touch object.
The difference from the previous embodiment is that, in this embodiment, when each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two lenses, is not less than the distance between the optical centers of the two lenses; the distance between the optical centers of the two lenses is greater than the width of a pixel recognizable by the two lenses; and any two actual touch objects are not in a straight line with the optical center of either of the two lenses.
FIG. 17 is a schematic structural diagram of the fifth embodiment of the touch positioning device of the present invention. In order to position touch objects more accurately, it differs from the structure shown in FIG. 16 in that the at least one multi-lens imaging device 151 may specifically be at least two multi-lens imaging devices 161.
In addition to the first multi-lens imaging device, the at least two multi-lens imaging devices 161 may include a second multi-lens imaging device, which may include at least two lenses and an optical sensor; each position in the touch detection area is located within the fields of view of two different lenses in the second multi-lens imaging device. The distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two lenses, is not less than the distance between the optical centers of the two lenses; the distance between the optical centers of the two lenses is greater than the width of a pixel recognizable by the two lenses; and any two actual touch objects are not in a straight line with the optical center of either of the two lenses.
Each lens captures image data and images it onto the optical sensor.
This embodiment may further include a fifth touch object position information group acquiring module 162, a fifth actual touch object position information group acquiring module 163, a sixth touch object position information group acquiring module 164, and a sixth actual touch object position information group acquiring module 165. The fifth touch object position information group acquiring module 162 is connected to the at least two multi-lens imaging devices 161, and the fifth actual touch object position information group acquiring module 163 is connected to the fifth touch object position information group acquiring module 162. The sixth touch object position information group acquiring module 164 is connected to the at least two multi-lens imaging devices 161, and the sixth actual touch object position information group acquiring module 165 is connected to the fourth actual touch object position information group acquiring module 153, the fifth actual touch object position information group acquiring module 163, and the sixth touch object position information group acquiring module 164, respectively.
The fifth touch object position information group acquiring module 162 is configured to acquire a plurality of second touch object position information groups from the image data collected by the lenses in the second multi-lens imaging device of the at least two multi-lens imaging devices 161; each second touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
The fifth actual touch object position information group acquiring module 163 is configured to remove, from the plurality of second touch object position information groups, every second touch object position information group that includes position information of a virtual touch object located outside the touch detection area, obtaining the second actual touch object position information group; the second actual touch object position information group includes position information of the actual touch objects.
The sixth touch object position information group acquiring module 164 is configured to acquire a plurality of third touch object position information groups from the image data collected by the first lens in the first multi-lens imaging device and by the second lens in the second multi-lens imaging device; each third touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
The sixth actual touch object position information group acquiring module 165 is configured to match the third touch object position information groups against the first actual touch object position information group and/or the second actual touch object position information group to obtain the third actual touch object position information group; the third actual touch object position information group includes position information of the actual touch objects.
The fourth actual touch object position information group acquiring module 153 and the fifth actual touch object position information group acquiring module 163 each determine the position information of the actual touch objects using a pair of closely spaced lenses; the sixth touch object position information group acquiring module 164 then obtains the position information of both the actual touch objects and the "ghost points" using a pair of widely spaced lenses. Because a "ghost point" lies far from the position information determined by the closely spaced lenses, while an actual touch object lies close to it, the sixth actual touch object position information group acquiring module 165 uses this property to match each third touch object position information group against the first actual touch object position information group and the second actual touch object position information group, so that the touch objects can be positioned more accurately.
In order to detect as many touch objects as possible, the first lens may be the lens among the at least two lenses of the first multi-lens imaging device that detects the most touch objects, and the second lens may be the lens among the at least two lenses of the second multi-lens imaging device that detects the most touch objects.
Preferably, the sixth actual touch object position information group acquiring module 165 matches the plurality of third touch object position information groups against both the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group.
The sixth actual touch object position information group acquiring module 165 may include a sixth distance acquiring unit 1651 and a sixth actual touch object position information group acquiring unit 1652. The sixth distance acquiring unit 1651 is connected to the fourth actual touch object position information group acquiring module 153, the fifth actual touch object position information group acquiring module 163, and the sixth touch object position information group acquiring module 164, respectively.
The sixth distance acquiring unit 1651 is configured to acquire, for each third touch object position information group, the sum of the squares of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group. Based on the calculation result of the sixth distance acquiring unit 1651, the sixth actual touch object position information group acquiring unit 1652 takes the third touch object position information group that minimizes this sum of squares as the third actual touch object position information group. One of the image data on which the position information in the third touch object position information group is based is the same as one of the image data on which the corresponding position information in the first actual touch object position information group is based.
Alternatively, the sixth distance acquiring unit 1651 may acquire, for each third touch object position information group, the sum of the absolute values of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group and in the second actual touch object position information group; based on this calculation result, the sixth actual touch object position information group acquiring unit 1652 may take the third touch object position information group that minimizes the sum of absolute values as the third actual touch object position information group.
This embodiment may further include a second actual touch object size information acquiring module 166, configured to obtain size information of the actual touch objects from the image data collected by the lenses in the first multi-lens imaging device. The working principle of the second actual touch object size information acquiring module 166 is shown in FIG. 11; the imaging device in FIG. 11 is equivalent to the lens in this embodiment and is not described here again.
In this embodiment, the fourth actual touch object position information group acquiring module 153 and the fifth actual touch object position information group acquiring module 163 obtain the first actual touch object position information group and the second actual touch object position information group from the image data collected by the two closely spaced pairs of lenses; the sixth touch object position information group acquiring module 164 then obtains the third touch object position information groups from the two widely spaced lenses, and the sixth actual touch object position information group acquiring module 165 matches the third touch object position information groups against the first actual touch object position information group and the second actual touch object position information group to obtain the third actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and precisely locates each touch object. In addition, the second actual touch object size information acquiring module 166 can determine the size of the touch object.
Touch positioning device sixth embodiment
The difference from the fourth embodiment of the touch positioning device is that, in this embodiment, when each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of those two lenses, is not less than the distance between the optical centers of the two lenses; the distance between the optical centers of the two lenses is greater than the width of a pixel recognizable by the two lenses; and any two actual touch objects are not in a straight line with the optical center of either of the two lenses.
FIG. 18 is a schematic structural diagram of the sixth embodiment of the touch positioning device of the present invention. This embodiment may further include at least one single-lens imaging device 171 and an eighth touch object position information group acquiring module 172; the eighth touch object position information group acquiring module 172 is connected to the at least one single-lens imaging device 171 and the at least one multi-lens imaging device 151, respectively. The at least one single-lens imaging device 171 may include a first single-lens imaging device, and each position within the touch detection area is located within the field of view of the first single-lens imaging device.
The eighth touch object position information group acquiring module 172 is configured to acquire a plurality of second touch object position information groups from the image data collected by a lens in the first multi-lens imaging device of the at least one multi-lens imaging device 151 and by the first single-lens imaging device in the at least one single-lens imaging device 171; each second touch object position information group includes position information of actual touch objects and/or position information of virtual touch objects.
The eighth actual touch object position information group acquiring module 172 is configured to match the plurality of second touch object position information groups against the first actual touch object position information group to obtain the second actual touch object position information group; the second actual touch object position information group includes position information of the actual touch objects.
The eighth actual touch object position information group acquiring module 172 may include an eighth distance acquiring unit 1721 and an eighth actual touch object position information group acquiring unit 1722. The eighth distance acquiring unit 1721 is configured to acquire, for each second touch object position information group, the sum of the squares of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group. The eighth actual touch object position information group acquiring unit 1722 is configured to take the second touch object position information group that minimizes this sum of squares as the second actual touch object position information group; one of the image data on which the position information in the second touch object position information group is based is the same as one of the image data on which the corresponding position information in the first actual touch object position information group is based.
Alternatively, the eighth distance acquiring unit 1721 may acquire, for each second touch object position information group, the sum of the absolute values of the differences between the position information in that group and the corresponding position information in the first actual touch object position information group; the eighth actual touch object position information group acquiring unit 1722 may then take the second touch object position information group that minimizes this sum of absolute values as the second actual touch object position information group.
This embodiment may further include a second actual touch object size information acquiring module 166, configured to obtain size information of the actual touch objects from the image data collected by the lenses in the first multi-lens imaging device. The working principle of the second actual touch object size information acquiring module 166 is shown in FIG. 11; the imaging device in FIG. 11 is equivalent to the lens in this embodiment and is not described here again.
In this embodiment, the fourth actual touch object position information group acquiring module 153 obtains the first actual touch object position information group from the image data collected by the two closely spaced lenses; the eighth touch object position information group acquiring module 172 then acquires a plurality of second touch object position information groups from the image data collected by the widely spaced lens and the single-lens imaging device, and matches the plurality of second touch object position information groups against the first actual touch object position information group to obtain the second actual touch object position information group. This removes the "ghost points" that appear when locating two or more touch objects and accurately positions each touch object. In addition, the second actual touch object size information acquiring module 166 can determine the size of the touch object.
  • FIG. 19 is a schematic structural diagram of a first embodiment of a touch system according to the present invention, which may include a frame 12, at least one illumination source 1411, 1412 141n, a retroreflective strip 14, at least one device-like device group 1431, 1432 143m, and Processing unit 16.
  • The inside of the frame 12 is a touch detection area 17; the retroreflective strip 14 is mounted around the touch detection area 17, and the at least one illumination source 1411, 1412, ..., 141n is mounted adjacent to the at least one imaging device group 1431, 1432, ..., 143m, respectively.
  • The at least one imaging device group may include a first imaging device group; the first imaging device group may include at least two imaging devices, and each position in the touch detection region is located within the fields of view of two differently positioned imaging devices in the first imaging device group.
  • The processing unit 16 is connected to the at least one imaging device group 1431, 1432, ..., 143m.
  • m and n are natural numbers greater than or equal to 1.
  • The retroreflective strip 14 reflects the light emitted to it by the at least one illumination source 1411, 1412, ..., 141n toward the at least one imaging device group 1431, 1432, ..., 143m. At least two imaging devices in the at least one imaging device group 1431, 1432, ..., 143m acquire image data of the touch detection area and send the image data to the processing unit 16. The processing unit 16 acquires a plurality of first touch object position information groups according to the image data acquired by the imaging devices in the first imaging device group of the at least one imaging device group 1431, 1432, ..., 143m; each first touch object position information group includes position information of an actual touch object and/or position information of a virtual touch object. How a first touch object position information group is acquired according to the image data collected by two imaging devices has been described above and is not repeated here.
  • The processing unit 16 removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group,
  • the first actual touch object position information group includes position information of the actual touch object.
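  • A minimal sketch of this filtering step, assuming the touch detection area is modelled as an axis-aligned rectangle and each candidate group is a list of (x, y) points (both assumptions are illustrative):

```python
def inside(point, area):
    """area = (xmin, ymin, xmax, ymax) bounding the touch detection region."""
    x, y = point
    xmin, ymin, xmax, ymax = area
    return xmin <= x <= xmax and ymin <= y <= ymax


def remove_ghost_groups(candidate_groups, area):
    """Discard every first touch object position information group that
    contains a point outside the touch detection area; such out-of-area
    points can only belong to virtual ("ghost") touch objects."""
    return [group for group in candidate_groups
            if all(inside(p, area) for p in group)]
```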
  • The distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of the two differently positioned imaging devices, is not less than the distance between the optical centers of those two imaging devices; the distance between the optical centers of the two differently positioned imaging devices is greater than the width of a pixel that the two imaging devices can resolve; and no actual touch object lies on a straight line with the optical centers of the two differently positioned imaging devices.
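  • The placement constraints above can be checked numerically. The sketch below assumes 2-D coordinates for the touch objects and the optical centers; the names are illustrative, not from the patent.

```python
import math

def constraints_ok(p1, p2, c1, c2, pixel_width):
    """Check the placement constraints stated above (illustrative sketch).
    p1, p2: two actual touch objects; c1, c2: optical centers of the two
    differently positioned imaging devices; pixel_width: smallest width
    the two devices can resolve.  All points are (x, y) tuples."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    baseline = math.hypot(dx, dy)          # distance between optical centers
    if baseline == 0.0:
        return False                       # coincident optical centers
    ux, uy = dx / baseline, dy / baseline  # unit vector along that line

    # Separation of the touch objects along the optical-center direction.
    separation = abs((p2[0] - p1[0]) * ux + (p2[1] - p1[1]) * uy)

    def off_line(p):
        # Perpendicular distance of p from the line through c1 and c2.
        return abs((p[0] - c1[0]) * dy - (p[1] - c1[1]) * dx) / baseline > 1e-9

    return (separation >= baseline        # objects spaced at least the baseline apart
            and baseline > pixel_width    # optical centers resolvably far apart
            and off_line(p1) and off_line(p2))
```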
  • The processing unit 16 may include the touch positioning device of any one of the foregoing embodiments, which is not further described herein. Alternatively, this embodiment may not include the frame 12.
  • In this embodiment, the processing unit 16 acquires a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group of the at least one imaging device group 1431, 1432, ..., 143m, and then removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and accurately positioning the touch objects.
  • A schematic structural view of a second embodiment of the touch system of the present invention may include a frame 12, at least one illumination source 1411, 1412, ..., 141n, a retroreflective strip 14, a touch object P, at least one imaging device group 1431, 1432, ..., 143m, and a processing unit 16. The inside of the frame 12 is a touch detection area 17; the retroreflective strip 14 is mounted on the touch object P, and the at least one illumination source 1411, 1412, ..., 141n is mounted adjacent to the at least one imaging device group 1431, 1432, ..., 143m, respectively.
  • The at least one imaging device group 1431, 1432, ..., 143m may include a first imaging device group; the first imaging device group may include at least two imaging devices, and each position in the touch detection region 17 is located within the fields of view of two differently positioned imaging devices in the first imaging device group.
  • m and n are natural numbers greater than or equal to 1.
  • The retroreflective strip 14 reflects the light emitted to it by the at least one illumination source 1411, 1412, ..., 141n toward the at least one imaging device group 1431, 1432, ..., 143m. At least two imaging devices in the at least one imaging device group 1431, 1432, ..., 143m collect image data of the touch detection area and send the image data to the processing unit 16. The processing unit 16 acquires a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group; each first touch object position information group includes position information of an actual touch object and/or position information of a virtual touch object. How the processing unit 16 acquires a first touch object position information group according to the image data collected by the imaging devices has been described above and is not repeated here.
  • The processing unit 16 removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group,
  • the first actual touch object position information group includes position information of the actual touch object.
  • the shape of the cross section of the touch object P may be a circle, a square, a triangle or any other shape.
  • The distance between any two actual touch objects, measured along the direction of the line connecting the optical centers of the two differently positioned imaging devices, is not less than the distance between the optical centers of those two imaging devices; the distance between the optical centers is greater than the width of a pixel that the two differently positioned imaging devices can resolve; and no actual touch object lies on a straight line with the optical centers of the two differently positioned imaging devices.
  • In this embodiment, the processing unit 16 acquires a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group of the at least one imaging device group 1431, 1432, ..., 143m, and then removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and more precisely positioning the touch objects.
  • The processing unit 16 may include the touch positioning device of any one of the foregoing embodiments, which is not further described herein. Alternatively, this embodiment may not include the frame 12.
  • A schematic structural diagram of a third embodiment of the touch system of the present invention may include a frame 12, at least one illumination source 1411, 1412, ..., 141n, at least one imaging device group 1431, 1432, ..., 143m, and a processing unit 16.
  • The inside of the frame 12 is a touch detection area 17. The at least one illumination source 1411, 1412, ..., 141n is mounted around the touch detection area 17.
  • The processing unit 16 is coupled to the at least one imaging device group 1431, 1432, ..., 143m.
  • The at least one imaging device group 1431, 1432, ..., 143m may include a first imaging device group; the first imaging device group may include at least two imaging devices, and each position in the touch detection region 17 is located within the fields of view of two differently positioned imaging devices in the first imaging device group.
  • m and n are natural numbers greater than or equal to 1.
  • The at least one illumination source 1411, 1412, ..., 141n emits light toward the at least one imaging device group 1431, 1432, ..., 143m; at least two imaging devices in the at least one imaging device group 1431, 1432, ..., 143m acquire image data of the touch detection area and send the image data to the processing unit 16. The processing unit 16 acquires a plurality of first touch object position information groups according to the image data acquired by the imaging devices in the first imaging device group of the at least one imaging device group 1431, 1432, ..., 143m; each first touch object position information group includes position information of an actual touch object and/or position information of a virtual touch object. How the processing unit 16 obtains a first touch object position information group according to the image data collected by two imaging devices has been described above and is not repeated here; a sketch of one possible triangulation step follows.
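  • As a sketch of one possible triangulation step, under the usual assumption that each imaging device reports the viewing angle of a touch object from a known optical center (the patent's own construction is given in the figure referenced earlier in the description):

```python
import math

def triangulate(c1, theta1, c2, theta2):
    """Intersect the sight rays from optical centers c1 and c2 (x, y tuples)
    at angles theta1, theta2 (radians, in the screen coordinate frame).
    Returns the candidate touch point, or None for (nearly) parallel rays."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2-D cross product of directions
    if abs(denom) < 1e-12:
        return None
    # Solve c1 + t*d1 == c2 + s*d2 for the parameter t along the first ray.
    t = ((c2[0] - c1[0]) * d2[1] - (c2[1] - c1[1]) * d2[0]) / denom
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])
```

  • With two touch objects, each imaging device reports two sight-ray angles, so the ray pairs intersect in up to four candidate points, of which only two correspond to real touches; candidates falling outside the touch detection area expose their group as containing virtual touch objects, which is what the removal step exploits.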
  • The processing unit 16 removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group,
  • the first actual touch object position information group includes position information of the actual touch object.
  • When each position in the touch detection area is located within the fields of view of two differently positioned imaging devices in the first imaging device group, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of the two differently positioned imaging devices, is not less than the distance between the optical centers of those two imaging devices; the distance between the optical centers is greater than the width of a pixel that the two imaging devices can resolve; and no actual touch object lies on a straight line with the optical centers of the two differently positioned imaging devices.
  • In this embodiment, the processing unit 16 acquires a plurality of first touch object position information groups according to the image data collected by the imaging devices in the first imaging device group of the at least one imaging device group 1431, 1432, ..., 143m, and then removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and accurately positioning the touch objects.
  • The processing unit 16 may include the touch positioning device of any one of the foregoing embodiments, which is not further described herein. Alternatively, this embodiment may not include the frame 12.
  • A schematic structural view of a fourth embodiment of the touch system of the present invention may include a frame 12, at least one illumination source 1411, 1412, ..., 141n, a retroreflective strip 14, at least one multi-lens imaging device 1931, 1932, ..., 193m, and a processing unit 16. The inside of the frame 12 is a touch detection area 17; the retroreflective strip 14 is mounted around the touch detection area 17, and the at least one illumination source 1411, 1412, ..., 141n is mounted adjacent to the at least one multi-lens imaging device 1931, 1932, ..., 193m, respectively. The at least one multi-lens imaging device 1931, 1932, ..., 193m may include a first multi-lens imaging device; the first multi-lens imaging device may include at least two lenses and an optical sensor, and each position in the touch detection area 17 is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device. The processing unit 16 is coupled to the optical sensors of the at least one multi-lens imaging device 1931, 1932, ..., 193m.
  • m and n are natural numbers greater than or equal to 1.
  • The retroreflective strip 14 reflects the light emitted to it by the at least one illumination source toward the at least one multi-lens imaging device 1931, 1932, ..., 193m.
  • The lenses capture image data of the touch detection area 17 and image it onto the optical sensor; different lenses image onto different areas of the optical sensor (see the sketch below).
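  • Because the lenses share one optical sensor, the per-lens image data must first be separated out. A minimal sketch, assuming each lens is assigned a fixed rectangular region of the sensor (the region table is an illustrative assumption):

```python
def split_sensor_frame(frame, regions):
    """frame: 2-D array (list of rows) of sensor samples.
    regions: {lens_id: (row0, row1, col0, col1)} giving the sensor area
    each lens images onto.  Returns one sub-image per lens."""
    return {lens_id: [row[col0:col1] for row in frame[row0:row1]]
            for lens_id, (row0, row1, col0, col1) in regions.items()}
```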
  • The processing unit 16 is configured to acquire a plurality of first touch object position information groups according to the image data acquired by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 1931, 1932, ..., 193m, where each first touch object position information group includes position information of an actual touch object and/or position information of a virtual touch object, and to remove, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group; the first actual touch object position information group includes the position information of the actual touch objects.
  • When each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of the two differently positioned lenses, is not less than the distance between the optical centers of those two lenses; the distance between the optical centers is greater than the width of a pixel that the two lenses can resolve; and no actual touch object lies on a straight line with the optical centers of the two differently positioned lenses.
  • The processing unit 16 may include the touch positioning device of any one of the foregoing embodiments, which is not further described herein. Alternatively, this embodiment may not include the frame 12.
  • In this embodiment, the processing unit 16 acquires a plurality of first touch object position information groups according to the image data acquired by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 1931, 1932, ..., 193m, and removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and accurately positioning the touch objects.
  • A schematic structural view of a fifth embodiment of the touch system of the present invention may include a frame 12, at least one illumination source 1411, 1412, ..., 141n, a touch object P, at least one multi-lens imaging device 1931, 1932, ..., 193m, and a processing unit 16. The inside of the frame 12 is a touch detection area 17; the retroreflective strip 14 is mounted on the touch object P, and the at least one illumination source 1411, 1412, ..., 141n is mounted adjacent to the at least one multi-lens imaging device 1931, 1932, ..., 193m, respectively.
  • The retroreflective strip 14 reflects the light emitted to it by the at least one illumination source 1411, 1412, ..., 141n toward the at least one multi-lens imaging device 1931, 1932, ..., 193m.
  • The lenses capture image data of the touch detection area 17 and image it onto the corresponding optical sensors; different lenses image onto different areas of the optical sensor.
  • The processing unit 16 is configured to acquire a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 1931, 1932, ..., 193m, where each first touch object position information group includes position information of an actual touch object and/or position information of a virtual touch object, and to remove, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group; the first actual touch object position information group includes the position information of the actual touch objects.
  • the shape of the cross section of the touch object P may be a circle, a square, a triangle or any other shape.
  • When each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of the two differently positioned lenses, is not less than the distance between the optical centers of those two lenses; the distance between the optical centers is greater than the width of a pixel that the two lenses can resolve; and no actual touch object lies on a straight line with the optical centers of the two differently positioned lenses.
  • The processing unit 16 acquires a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 1931, 1932, ..., 193m, and removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and precisely locating the touch objects.
  • The processing unit 16 may include the touch positioning device of any one of the foregoing embodiments, which is not further described herein. Alternatively, this embodiment may not include the frame 12.
  • FIG. 24 is a schematic structural diagram of a sixth embodiment of the touch system of the present invention, which may include a frame 12, at least one illumination source 1411, 1412, ..., 141n, at least one multi-lens imaging device 1931, 1932, ..., 193m, and a processing unit 16.
  • The inside of the frame 12 is a touch detection area 17.
  • The at least one multi-lens imaging device 1931, 1932, ..., 193m may include a first multi-lens imaging device; the first multi-lens imaging device may include at least two lenses and an optical sensor, each position in the touch detection area 17 is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, and the processing unit 16 is connected to the optical sensors of the at least one multi-lens imaging device 1931, 1932, ..., 193m.
  • m and n are natural numbers greater than or equal to 1.
  • The at least one illumination source 1411, 1412, ..., 141n emits light toward the at least one multi-lens imaging device 1931, 1932, ..., 193m; the lenses capture images of the touch detection area 17 and image them onto the optical sensor, with different lenses imaging onto different areas of the optical sensor.
  • The processing unit 16 is configured to acquire a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 1931, 1932, ..., 193m, where each first touch object position information group includes position information of an actual touch object and/or position information of a virtual touch object, and to remove, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group; the first actual touch object position information group includes the position information of the actual touch objects.
  • When each position in the touch detection area is located within the fields of view of two differently positioned lenses in the first multi-lens imaging device, the distance between any two actual touch objects in the touch detection area, measured along the direction of the line connecting the optical centers of the two differently positioned lenses, is not less than the distance between the optical centers of those two lenses; the distance between the optical centers is greater than the width of a pixel that the two lenses can resolve; and no actual touch object lies on a straight line with the optical centers of the two differently positioned lenses.
  • The processing unit 16 acquires a plurality of first touch object position information groups according to the image data collected by the lenses in the first multi-lens imaging device of the at least one multi-lens imaging device 1931, 1932, ..., 193m, and removes, from the plurality of first touch object position information groups, the first touch object position information groups that include position information of virtual touch objects located outside the touch detection area, to obtain the first actual touch object position information group, thereby removing the "ghost points" that appear in the process of locating two or more touch objects and precisely locating the touch objects.
  • The processing unit 16 may include the touch positioning device of any one of the foregoing embodiments, which is not further described herein. Alternatively, this embodiment may not include the frame 12.

Abstract

This invention relates to a touch positioning method, and to an associated touch device and touch system. The touch positioning method comprises the following steps: acquiring, according to the image data captured by the imaging devices of the first imaging device group, a plurality of first touch object position information groups, which include position information of actual touch objects and/or of virtual touch objects; and obtaining the first actual touch object position information groups by removing, from said plurality of first touch object position information groups, the position information of virtual touch objects located outside the touch detection area; the first actual touch object position information groups comprise the position information of the actual touch objects. The invention enables precise positioning of touch objects by removing the "ghost points" that appear during the process of positioning two or more touch objects.
PCT/CN2011/072041 2010-03-23 2011-03-22 Procédé de positionnement de contacts, dispositif et système de contacts associés WO2011116683A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010131614.5A CN102200860B (zh) 2010-03-23 2010-03-23 触摸定位方法和装置、触摸系统
CN201010131614.5 2010-03-23

Publications (1)

Publication Number Publication Date
WO2011116683A1 true WO2011116683A1 (fr) 2011-09-29

Family

ID=44661578

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/072041 WO2011116683A1 (fr) 2010-03-23 2011-03-22 Procédé de positionnement de contacts, dispositif et système de contacts associés

Country Status (2)

Country Link
CN (1) CN102200860B (fr)
WO (1) WO2011116683A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7015894B2 (en) * 2001-09-28 2006-03-21 Ricoh Company, Ltd. Information input and output system, method, storage medium, and carrier wave
US20060232830A1 (en) * 2005-04-15 2006-10-19 Canon Kabushiki Kaisha Coordinate input apparatus, control method therefore, and program
CN101320307A (zh) * 2007-06-04 2008-12-10 北京汇冠新技术有限公司 一种识别红外触摸屏上多个触摸点的方法
CN101403951A (zh) * 2008-08-11 2009-04-08 广东威创视讯科技股份有限公司 交互式电子显示系统的多点定位装置及方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4094794B2 (ja) * 1999-09-10 2008-06-04 株式会社リコー 座標検出装置、情報記憶媒体および座標検出方法

Also Published As

Publication number Publication date
CN102200860B (zh) 2014-02-05
CN102200860A (zh) 2011-09-28

Similar Documents

Publication Publication Date Title
CN100501657C (zh) 一种触摸屏装置及触摸屏装置的定位方法
CN102799318B (zh) 一种基于双目立体视觉的人机交互方法及系统
CN1784649A (zh) 自动校准的触摸系统及方法
KR102198352B1 (ko) Tiled Display에서 3D 영상을 보정하는 방법 및 장치
US10310675B2 (en) User interface apparatus and control method
US8614694B2 (en) Touch screen system based on image recognition
JP2007129709A (ja) イメージングデバイスをキャリブレートするための方法、イメージングデバイスの配列を含むイメージングシステムをキャリブレートするための方法およびイメージングシステム
CN101520700A (zh) 一种基于摄像头的三维定位触摸装置及其定位方法
US10254893B2 (en) Operating apparatus, control method therefor, and storage medium storing program
CN102722254A (zh) 一种定位交互方法及系统
WO2012031513A1 (fr) Procédé de détermination de la position de l'effleurement, écran tactile, système tactile, et affichage
CN101627356A (zh) 交互式输入系统和方法
CN101813993A (zh) 一种曲面显示系统以及手势识别和定位方法
TWI394072B (zh) 平面顯示器位置檢出裝置及其方法
KR20090116544A (ko) 적외선 카메라 방식의 공간 터치 감지 장치, 방법 및스크린 장치
WO2011147301A1 (fr) Procédé et appareil d'étalonnage d'écran tactile, écran tactile, système tactile et dispositif d'affichage
JP6011885B2 (ja) 符号読取装置および符号読取方法
CN103076925B (zh) 光学触控系统、光学感测模块及其运作方法
CN101149653B (zh) 判读影像位置的装置
WO2011116683A1 (fr) Procédé de positionnement de contacts, dispositif et système de contacts associés
JP5445064B2 (ja) 画像処理装置および画像処理プログラム
CN103425355A (zh) 一种全向摄像头构造的便携光学触摸屏及其定位校准方法
KR20130052567A (ko) 터치 로케이팅 방법과 시스템 및 디스플레이
CN102033641B (zh) 一种触摸系统及多点定位方法
US9535535B2 (en) Touch point sensing method and optical touch system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11758793

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11758793

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC
