CN102323866B - Camera shooting type touch control method and device - Google Patents


Info

Publication number
CN102323866B
Authority
CN
China
Prior art keywords
touch control
touch
coordinate
control area
camera head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201110257227
Other languages
Chinese (zh)
Other versions
CN102323866A (en)
Inventor
曾昭兴
施杨
刘志文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changshu Intellectual Property Operation Center Co ltd
Guangdong Gaohang Intellectual Property Operation Co ltd
Original Assignee
Vtron Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Technologies Ltd
Priority to CN 201110257227
Publication of CN102323866A
Application granted
Publication of CN102323866B

Landscapes

  • Studio Devices (AREA)

Abstract

The invention provides a camera-based touch control method and device. In a boundary-determination mode, images of a boundary locator in the touch control area are obtained, yielding the boundary location information of the touch control area captured by each camera; the effective range of the touch control area is then calculated from this boundary location information. In touch-control mode, images of the touch object are obtained and preliminary coordinates of the touch object are calculated; these preliminary coordinates are screened against the effective range of the touch control area so that preliminary coordinates outside the effective range are discarded, and the coordinates of the touch object are determined from the screened preliminary coordinates. The computation required to confirm the touch-object coordinates, and the resulting positioning delay, are thereby reduced, and the speed of camera-based touch positioning is increased.

Description

Camera-based touch control method and device
Technical field
The present invention relates to the field of touch control technology, and in particular to a camera-based touch control method and a camera-based touch control device.
Background art
As a novel computer input technology, touch control makes human-computer interaction more intuitive. Because of the great convenience it brings to users, its applications have spread beyond portable personal information products to fields such as information appliances, public information systems, electronic games and office automation equipment.
Common touch control technologies include capacitive, resistive, infrared and camera-based touch technologies. Among them, camera-based touch technology uses simple equipment and is easy to install, and it has become an increasingly important branch of touch control technology.
Figure 1 shows the principle of a prior-art camera-based touch technology: at least three cameras are arranged at different positions along the edge of a display device; a coordinate system is established, and the coordinate information of each camera and the touch location information that each camera captures of the touch objects are obtained. When there are multiple touch objects, first preliminary coordinates of the touch objects are calculated from the touch location information captured by the first camera and the second camera, and second preliminary coordinates are calculated from the touch location information captured by the first camera and the third camera; the first preliminary coordinates and the second preliminary coordinates are each screened against the effective range of the touch control area; the coordinate points that are identical in the screened first and second preliminary coordinates are taken as the actual coordinates of the touch objects, which are thereby located.
However, although this camera-based touch technology can locate each touch object accurately, it calculates the preliminary coordinates of the touch points twice, so the amount of computation is large; this may introduce a delay in locating the touch objects and make the user experience uncomfortable.
Summary of the invention
An object of the present invention is to provide a camera-based touch control method that reduces the amount of computation and increases positioning speed, so as to reduce the positioning delay of touch objects caused by excessive computation.
A camera-based touch control method comprises the following steps:
obtaining the images of the touch control area captured respectively by at least three cameras arranged at the edge of the touch control area of a display device;
starting a boundary-determination mode;
in the boundary-determination mode, obtaining the boundary location information of the touch control area captured by each camera, wherein the boundary location information comprises the position of the image of a boundary locator within the image of the touch control area captured by each camera;
calculating the coordinates of the boundary locator from the boundary location information captured by each camera, and determining the effective range of the touch control area from the coordinates of the boundary locator;
switching to a touch-control mode;
in the touch-control mode, obtaining the touch location information of the touch control area captured by each camera, wherein the touch location information comprises the position of the touch-object image within the image of the touch control area captured by each camera;
calculating preliminary coordinates of the touch object from the touch location information captured by each camera;
screening the preliminary coordinates of the touch object against the effective range of the touch control area;
determining the touch-object coordinates from the screened preliminary coordinates.
Compared with the prior art, the camera-based touch control method of the present invention first obtains, in the boundary-determination mode, the images of the boundary locator in the touch control area and the boundary location information of the touch control area captured by each camera, and then calculates the effective range of the touch control area from the boundary location information. Then, in the touch-control mode, it obtains images of the touch object, calculates the preliminary coordinates of the touch object, and screens those preliminary coordinates against the effective range of the touch control area, so that preliminary coordinates outside the effective range are discarded; the touch-object coordinates are then determined from the screened preliminary coordinates. This reduces the computation needed to confirm the touch-object coordinates as well as the positioning delay, and increases the speed of camera-based touch positioning.
Another object of the present invention is to provide a camera-based touch control device that reduces the amount of computation and increases positioning speed, so as to reduce the positioning delay of touch objects caused by excessive computation.
A camera-based touch control device comprises:
an image acquisition module, configured to obtain the images of the touch control area captured respectively by at least three cameras arranged at the edge of the touch control area of a display device;
a mode management module, configured to start a boundary-determination mode or a touch-control mode, or to switch from the boundary-determination mode to the touch-control mode;
a boundary location module, configured to obtain, in the boundary-determination mode, the boundary location information of the touch control area captured by each camera, calculate the coordinates of the boundary locator from the boundary location information captured by each camera, and determine the effective range of the touch control area from the coordinates of the boundary locator;
wherein the boundary location information comprises the position of the image of the boundary locator within the image of the touch control area captured by each camera;
a preliminary location module, configured to obtain, in the touch-control mode, the touch location information of the touch control area captured by each camera and to calculate the preliminary coordinates of the touch object from the touch location information captured by each camera;
wherein the touch location information comprises the position of the touch-object image within the image of the touch control area captured by each camera;
a screening module, configured to screen the preliminary coordinates of the touch object against the effective range of the touch control area determined by the boundary location module;
a location module, configured to determine the touch-object coordinates from the screened preliminary coordinates.
Compared with the prior art, in the camera-based touch control device of the present invention, the mode management module first starts the boundary-determination mode; in this mode the boundary location module obtains the images of the boundary locator in the touch control area and the boundary location information of the touch control area captured by each camera, and then calculates the effective range of the touch control area from the boundary location information. The mode management module then switches to the touch-control mode; in this mode the preliminary location module obtains images of the touch object and calculates its preliminary coordinates, the screening module screens those preliminary coordinates against the effective range of the touch control area so that preliminary coordinates outside the effective range are discarded, and the location module determines the touch-object coordinates from the screened preliminary coordinates. This reduces the computation needed to confirm the touch-object coordinates as well as the positioning delay, and increases the speed of camera-based touch positioning.
Brief description of the drawings
Fig. 1 is a schematic diagram of the principle of a prior-art camera-based touch technology;
Fig. 2 is a flowchart of the camera-based touch control method of the present invention;
Fig. 3 is a schematic diagram of three cameras arranged along one side of the display device in the present invention;
Fig. 4 is a schematic diagram of boundary location performed on the touch control area of the display device of Fig. 3 in the present invention;
Fig. 5 is a schematic diagram of the division of the subregion images of each camera in the present invention;
Fig. 6 is a schematic diagram of the division of the shooting subregions of each camera in the present invention;
Fig. 7 is a schematic diagram of the subregion images captured by each camera when a touch object appears in the touch control area of the display device in the present invention;
Fig. 8 is a structural schematic diagram of the camera-based touch control device of the present invention;
Fig. 9 is a structural schematic diagram of a preferred embodiment of the camera-based touch control device of the present invention.
Detailed description of the embodiments
Please refer to Fig. 2, which is a flowchart of the camera-based touch control method of the present invention.
The camera-based touch control method comprises the following steps:
S201: obtaining the images of the touch control area captured respectively by at least three cameras arranged at the edge of the touch control area of a display device.
Here the touch control area is the region within which the user's touch operations are detected and located, preferably the region directly above the front of the display device. Touch operations within the touch control area are detected and processed as touch-control information.
To ensure that each camera can capture an image of the complete touch control area of the display device, the field of view of each camera must cover the entire touch control area. The optical axis of each camera is preferably parallel to the plane of the display device, so that the touch control area is photographed from the side; when a touch object appears in the touch control area, the image of the touch object in the touch control area is likewise captured from the side.
The positions of the three cameras can be set anywhere along the edge of the touch control area as required. Those skilled in the art only need to obtain the position coordinates of the three cameras and their positions relative to the display device to compute the positional relationship between each camera's captured image and the touch control area of the display device, and thereby locate a touch object appearing in the touch control area.
As a preferred embodiment, the three cameras are arranged at the two ends and at the midpoint of one side of the display device; the viewing angle of the two cameras at the two ends of that side is 90 degrees, and the viewing angle of the camera at the midpoint of that side is 180 degrees. In this way the entire touch control area can be covered with only three cameras, and the situation in which a touch object lies on the line between two cameras is avoided.
For convenience of description, the detection and location of touch objects is explained below using two-dimensional touch location as an example.
Referring to Fig. 3, a coordinate system is established in the plane of the display device 11, taking an arbitrary point in that plane as the origin. The touch control area of the display device 11 is a region of the same size as the front display area of the display device 11. Taking the first camera A as the origin, the coordinates of the first camera A are (0, 0); letting L be the length of the side 111 of the display device 11, the coordinates of the second camera B are (L, 0) and the coordinates of the third camera C are (L/2, 0); the coordinates of the touch object O are the unknown (x, y). The images captured of the touch control area of the display device 11 are obtained from the first camera A, the second camera B and the third camera C respectively.
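To make the preliminary-coordinate calculation concrete, the following short Python sketch (illustrative only, not part of the patent text) shows how a touch point could be triangulated from two cameras on the same edge, assuming each camera's touch location information has already been converted into a viewing angle measured from the display edge; the function name and numeric values are made up for the example.

import math

def triangulate(cam_a, cam_b, alpha_deg, beta_deg):
    """Intersect the two sight lines of cameras mounted on the same display edge.

    cam_a, cam_b : (x, 0) camera positions on the edge, e.g. A = (0, 0), B = (L, 0).
    alpha_deg    : angle at which cam_a sees the touch object, measured from the edge.
    beta_deg     : angle at which cam_b sees the touch object, measured from the edge
                   back towards cam_a.
    Returns the preliminary coordinate (x, y) of the touch object.
    """
    ax, _ = cam_a
    bx, _ = cam_b
    tan_a = math.tan(math.radians(alpha_deg))
    tan_b = math.tan(math.radians(beta_deg))
    # Sight line from A: y = (x - ax) * tan_a;  sight line from B: y = (bx - x) * tan_b
    x = (bx * tan_b + ax * tan_a) / (tan_a + tan_b)
    y = (x - ax) * tan_a
    return x, y

# Example with edge length L = 100: both cameras see the object at 45 degrees,
# so it must sit above the middle of the edge, 50 units away from it.
print(triangulate((0, 0), (100, 0), 45.0, 45.0))   # -> (50.0, 50.0)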
S202: starting the boundary-determination mode.
The boundary-determination mode is the mode in which the effective range of the touch control area is located; in this mode, a touch object appearing in the touch control area is treated by the processor as a boundary locator by default.
As a preferred embodiment, when the boundary-determination mode is started, a prompt is issued reminding the user to touch the boundary of the touch control area so that boundary location can be performed.
S203: in the boundary-determination mode, obtaining the boundary location information of the touch control area captured by each camera, wherein the boundary location information comprises the position of the image of the boundary locator within the image of the touch control area captured by each camera.
S204: calculating the coordinates of the boundary locator from the boundary location information captured by each camera.
S205: determining the effective range of the touch control area from the coordinates of the boundary locator.
In steps S203 to S205, when the boundary location information is obtained in the boundary-determination mode, different boundary location schemes can be adopted according to the user's settings.
One boundary location scheme is as follows:
in the boundary-determination mode, the positions of the images of a plurality of boundary locators that appear in succession within the image of the touch control area are obtained as the boundary location information;
when the effective range of the touch control area is determined, the coordinates of the plurality of boundary locators are calculated from the boundary location information, and the polygonal region whose vertices are the coordinate points of the plurality of boundary locators is taken as the effective range of the touch control area.
In this scheme the effective range of the touch control area is determined entirely by the user's touch clicks in the boundary-determination mode, so the user can set it flexibly according to his or her own needs, which is very convenient.
Another boundary location scheme is as follows:
in the boundary-determination mode, the positions of the images of several boundary locators that appear in succession within the image of the touch control area are obtained as the boundary location information;
when the effective range of the touch control area is determined, the coordinates of the boundary locators are calculated from the boundary location information, and the polygonal region whose vertices are the coordinate points of the boundary locators together with predefined coordinate points is taken as the effective range of the touch control area.
In this scheme the effective range of the touch control area is determined from the user's touch clicks in the boundary-determination mode together with preset coordinate points, so the user only needs to click one or a few points to determine the effective range easily; this is very convenient and also speeds up boundary location.
For example, as shown in Fig. 4, to perform boundary location on the touch control area of the display device 11 of Fig. 3, the user only needs to click the point H on the edge of the display device 11 that lies diagonally opposite the first camera A; the position of the image of the point H within the image of the touch control area is obtained as the boundary location information. From the boundary location information captured by each camera, the coordinates of the point H are calculated to be (L, h). Then, from the coordinates (L, h) of the point H and the coordinates (0, 0) of the first camera A, the effective range of the touch control area is determined to be (0 ≤ x ≤ L, 0 ≤ y ≤ h).
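A minimal sketch of this rectangular case, assuming the effective range is simply the axis-aligned rectangle spanned by the first camera A at (0, 0) and the clicked point H at (L, h); the helper names and the numbers L = 100, h = 60 are illustrative.

def effective_range(corner_a, corner_h):
    """Rectangular effective range spanned by two diagonally opposite points,
    here the first camera A and the user-clicked boundary point H."""
    (x0, y0), (x1, y1) = corner_a, corner_h
    return (min(x0, x1), max(x0, x1), min(y0, y1), max(y0, y1))

def screen(preliminary_coords, rng):
    """Discard preliminary coordinates that fall outside the effective range."""
    x_min, x_max, y_min, y_max = rng
    return [(x, y) for (x, y) in preliminary_coords
            if x_min <= x <= x_max and y_min <= y <= y_max]

rng = effective_range((0, 0), (100, 60))             # L = 100, h = 60 (made-up values)
print(screen([(30, 20), (120, 10), (50, 80)], rng))  # -> [(30, 20)]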
S206: switching to the touch-control mode.
After the effective range of the touch control area has been determined in the boundary-determination mode, the device switches to the touch-control mode automatically or in response to a user instruction.
The touch-control mode is the mode in which touch operations within the effective range of the touch control area are detected and located. In the touch-control mode, a touch object appearing in the touch control area is treated by the processor as an ordinary touch object by default.
As a preferred embodiment, when switching to the touch-control mode, a prompt is issued reminding the user that boundary confirmation is complete and that touch operations can begin.
S207: in the touch-control mode, obtaining the touch location information of the touch control area captured by each camera, wherein the touch location information comprises the position of the touch-object image within the image of the touch control area captured by each camera.
S208: calculating the preliminary coordinates of the touch object from the touch location information captured by each camera.
S209: screening the preliminary coordinates of the touch object against the effective range of the touch control area.
S210: determining the touch-object coordinates from the screened preliminary coordinates.
In steps S207 to S210, after the touch location information of the touch control area has been obtained in the touch-control mode, various schemes can be used to calculate the touch-object coordinates.
One scheme for calculating the touch-object coordinates is as follows:
assume that the cameras comprise a first camera, a second camera and a third camera;
in the touch-control mode, first preliminary coordinates of the touch object are calculated from the touch location information captured by the first camera and the second camera;
second preliminary coordinates of the touch object are calculated from the touch location information captured by the first camera and the third camera;
the first preliminary coordinates and the second preliminary coordinates are each screened against the effective range of the touch control area;
the coordinate points that are identical in the screened first preliminary coordinates and the screened second preliminary coordinates are taken as the touch-object coordinates.
This scheme calculates the preliminary coordinates of the touch object twice and compares the two sets of preliminary coordinates to determine the touch-object coordinates. The result is more accurate, and the scheme is easy to implement.
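A compact sketch of this double-calculation scheme, assuming the two sets of preliminary coordinates have already been computed (for example with the triangulation sketch above); the tolerance used to decide that two coordinate points are "identical" is an assumption, as are all names and numbers.

def in_range(p, rng):
    x_min, x_max, y_min, y_max = rng
    return x_min <= p[0] <= x_max and y_min <= p[1] <= y_max

def locate_two_pass(coords_ab, coords_ac, rng, tol=1.0):
    """Scheme 1: preliminary coordinates are computed twice (cameras A+B and
    cameras A+C), both sets are screened against the effective range, and a
    point is accepted only if it appears (within tol) in both screened sets."""
    first = [p for p in coords_ab if in_range(p, rng)]
    second = [p for p in coords_ac if in_range(p, rng)]
    return [p for p in first
            if any(abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
                   for q in second)]

rng = (0, 100, 0, 60)
print(locate_two_pass([(30, 20), (70, 90)], [(30.4, 19.8)], rng))  # -> [(30, 20)]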
Another scheme for calculating the touch-object coordinates is as follows:
assume that the cameras comprise a first camera, a second camera and a third camera;
in the touch-control mode, first preliminary coordinates of the touch object are calculated from the touch location information captured by the first camera and the second camera;
the first preliminary coordinates are screened against the effective range of the touch control area;
from the screened first preliminary coordinates and the coordinates of the third camera, the theoretical value of the touch location information that the third camera would capture for those first preliminary coordinates is calculated;
the theoretical value of the touch location information is compared with the touch location information actually captured by the third camera; if they match, the corresponding first preliminary coordinates are determined to be the touch-object coordinates.
This scheme does not calculate the preliminary coordinates of the touch object twice. Instead, it uses the preliminary coordinates from a single calculation, together with the coordinates of the other camera, to compute the theoretical value of that camera's touch location information, compares this theoretical value with the touch location information actually captured by that camera, and thereby screens the first preliminary coordinates to obtain the touch-object coordinates. Because the preliminary coordinates need not be calculated twice, the amount of computation is smaller, which helps increase positioning speed.
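The single-calculation scheme can be sketched in the same illustrative style, assuming the third camera's actual touch location information is available as a list of viewing angles measured from the display edge; the angle comparison tolerance and all names and values are assumptions.

import math

def viewing_angle(cam, point):
    """Angle (degrees, measured from the display edge) at which a camera on
    the edge would see the given point."""
    return math.degrees(math.atan2(point[1], point[0] - cam[0]))

def locate_single_pass(candidates, cam_c, angles_c, rng, tol_deg=1.0):
    """Scheme 2: only one set of preliminary coordinates (cameras A+B) is
    computed.  For each candidate inside the effective range, the angle the
    third camera C should observe is derived and compared with C's actual
    measurements; a match confirms the candidate as a touch-object coordinate."""
    x_min, x_max, y_min, y_max = rng
    confirmed = []
    for p in candidates:
        if not (x_min <= p[0] <= x_max and y_min <= p[1] <= y_max):
            continue  # outside the effective range, discard without further work
        theory = viewing_angle(cam_c, p)
        if any(abs(theory - actual) <= tol_deg for actual in angles_c):
            confirmed.append(p)
    return confirmed

print(locate_single_pass([(50.0, 50.0)], (50.0, 0.0), [90.0], (0, 100, 0, 60)))  # -> [(50.0, 50.0)]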
Compared with the prior art, the camera-based touch control method of the present invention first obtains, in the boundary-determination mode, the images of the boundary locator in the touch control area and the boundary location information of the touch control area captured by each camera, and then calculates the effective range of the touch control area from the boundary location information. Then, in the touch-control mode, it obtains images of the touch object, calculates the preliminary coordinates of the touch object, and screens those preliminary coordinates against the effective range of the touch control area, so that preliminary coordinates outside the effective range are discarded; the touch-object coordinates are then determined from the screened preliminary coordinates. This reduces the computation needed to confirm the touch-object coordinates as well as the positioning delay, and increases the speed of camera-based touch positioning.
As a preferred embodiment, after the touch location information captured by each camera has been obtained and before the preliminary coordinates of the touch object are calculated, the following steps are further performed:
S212: dividing the image of the touch control area captured by each camera, in the direction parallel to the display device, into at least two subregion images, and, according to the way the subregion images are divided and the effective coordinate range of the touch control area, dividing the touch control area, with respect to each camera, into at least two shooting subregions;
S213: when a touch-object image appears in at least three of the subregion images, judging whether the combination of those subregion images is valid according to whether the shooting subregions corresponding to those subregion images have a common overlapping region.
If it is judged in step S213 that the combination of the subregion images is valid, the touch object has been photographed by all the cameras and its coordinates lie within the range of the touch control area, i.e. it is a valid touch object, and step S214 is executed. If the combination of the subregion images is judged invalid, the touch object has not been photographed by all the cameras, or its coordinates lie outside the range of the touch control area, i.e. it is an invalid touch object, and step S215 is executed.
S214: classifying the touch object as a valid touch object.
S215: classifying the touch object as an invalid touch object.
Then, when the coordinates of the touch objects are calculated in step S208, invalid touch objects can simply be ignored and coordinate location is calculated only for valid touch objects, which saves the computation time otherwise spent on the coordinates of invalid touch objects and increases positioning speed.
When the subregion images are divided in step S212, the number of subregion images and the size of each subregion image can be set according to the user's own needs. The general principle is: the more subregion images there are, the stronger the ability to pre-screen and exclude touch objects, but the computation required for this pre-screening also grows with the number of subregion images. The number of subregion images should therefore be chosen moderately, so that sufficient pre-screening ability is ensured without adding too much extra computation. The subregion images are preferably divided into equal parts, which reduces computation.
As a preferred embodiment, in this step the image of the touch control area captured by each of the two cameras at the two ends of one side of the display device is divided, in the direction parallel to the display device, into three subregion images of equal length, and the image of the touch control area captured by the camera at the midpoint of that side is divided, in the direction parallel to the display device, into six subregion images of equal length. Testing shows that this division of the subregion images pre-screens and excludes touch objects efficiently while maintaining a high computation speed, and it is therefore a preferable division.
As another preferred embodiment of the present invention, in this step the image of the touch control area is divided, in the direction parallel to the display device, into three subregion images of equal length; the touch control area is then correspondingly divided, for each camera, into three shooting subregions of equal apex angle with the camera's position as the vertex.
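A small sketch of the bookkeeping this division implies, assuming each subregion image is an equal-length slice of the camera image and that, as this embodiment describes, each slice corresponds to a shooting subregion of equal apex angle within the camera's field of view; the field-of-view value and names are illustrative.

def subregion_index(pixel_x, image_width, n=3):
    """Index (0-based) of the equal-length subregion image, divided parallel to
    the display, in which a touch-object image at column pixel_x falls."""
    return min(int(pixel_x * n / image_width), n - 1)

def shooting_subregions(fov_deg, n=3):
    """The corresponding shooting subregions: n sectors of equal apex angle at
    the camera position, covering its field of view."""
    step = fov_deg / n
    return [(i * step, (i + 1) * step) for i in range(n)]

print(subregion_index(400, 640))   # -> 1, i.e. the middle subregion image
print(shooting_subregions(90.0))   # -> [(0.0, 30.0), (30.0, 60.0), (60.0, 90.0)]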
As shown in Fig. 5, the image of the touch control area captured by the first camera A is divided into three subregion images A1, A2 and A3 of equal length, and the touch control area of the display device is correspondingly divided, with respect to the first camera A, into three shooting subregions a1, a2 and a3 of equal apex angle;
the image of the touch control area captured by the second camera B is divided into three subregion images B1, B2 and B3 of equal length, and the touch control area of the display device is correspondingly divided, with respect to the second camera B, into three shooting subregions b1, b2 and b3 of equal apex angle;
the image of the touch control area captured by the third camera C is divided into three subregion images C1, C2 and C3 of equal length, and the touch control area of the display device is correspondingly divided, with respect to the third camera C, into three shooting subregions c1, c2 and c3 of equal apex angle.
Since a touch object appearing in the touch control area is necessarily photographed by all three cameras simultaneously, each touch-object image necessarily appears in one subregion image of each of the three cameras; that is, each touch-object image necessarily appears in some combination (Ax, Bx, Cx) of subregion images. However, not every touch object appearing in a combination (Ax, Bx, Cx) of subregion images is a valid touch object: because the subregion images are captured from the side, parallel to the plane of the display device, the captured range may extend beyond the touch control area of the display device, so invalid touch objects may also be photographed.
The shooting subregions corresponding to the divided subregion images, by contrast, lie within the effective range of the touch control area; that is, any point within the effective range of the touch control area appears in some combination (ax, bx, cx) of shooting subregions. For example, in Fig. 6, region 1 is formed by the overlap of the three shooting subregions a1, b1 and c1; region 2 by a2, b1 and c1; region 3 by a3, b1 and c1; region 4 by a3, b2 and c1; region 5 by a3, b2 and c2; region 6 by a2, b1 and c2; region 7 by a1, b1 and c2; region 8 by a1, b2 and c2; region 9 by a2, b2 and c2; region 10 by a2, b3 and c2; region 11 by a2, b3 and c3; region 12 by a1, b3 and c3; region 13 by a1, b2 and c3; and region 14 by a1, b1 and c3.
It can be seen that each region is formed by the overlap of shooting subregions belonging to different cameras.
In step S213, when there is a touch object in the touch control area, every camera photographs a touch-object image, and that image appears in one of the subregion images captured by each camera. Fig. 7 is a schematic diagram of the subregion images captured by each camera when a touch object appears in region 13 (a1, b2, c3) of the touch control area. At this point the combination of subregion images is judged according to the correspondence between the subregion images and the shooting subregions.
The rule of judgment is: if the shooting subregions corresponding to the subregion images in which the touch-object image appears have a common overlapping region, the combination of those subregion images is valid; otherwise it is invalid.
Two methods for judging whether a combination of subregion images is valid are given here:
The first judgment method works when a touch object appears: according to the current division of the subregion images in which the touch-object image appears and the effective coordinate range of the touch control area, the coordinate ranges of the corresponding shooting subregions are calculated; from these coordinate ranges it is then judged whether the shooting subregions have a common overlapping region. If they do, the combination of the corresponding subregion images is judged valid; if they do not, it is judged invalid. In this method, whether the combination of subregion images is valid is calculated only when a touch object appears, and because the shooting subregions corresponding to the touch object are already determined at that point, only a small amount of computation is needed.
The other judgment method calculates in advance, according to the division of the subregion images and the effective coordinate range of the touch control area, the coordinate ranges of all shooting subregions; then, taking one shooting subregion from each of the at least three cameras to form each combination, it is judged whether the shooting subregions of the combination have a common overlapping region. If they do, the combination of the corresponding subregion images is classified as a valid combination; if they do not, it is classified as an invalid combination.
Then, when a touch-object image appears in the subregion images captured by the cameras, it is only necessary to check directly whether those subregion images form a valid combination, so the validity of the combination can be judged quickly.
In this method the valid combinations of subregion images are computed before any touch object appears; when a touch object appears, it is checked directly whether the subregion images in which the touch-object image appears correspond to a valid combination, without any further coordinate calculation, which significantly reduces the computation time in the positioning process.
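The second judgment method can be sketched as a precomputed lookup, under the simplifying assumption that each shooting subregion is described by the set of coarse grid cells of the touch control area it covers (clipped to the effective range); the data structures and the toy two-camera example are illustrative only.

from itertools import product

def precompute_valid_combinations(covered_cells):
    """Offline step.  covered_cells[cam][i] is the set of grid cells of the
    touch control area covered by shooting subregion i of camera cam.  A
    combination of one subregion per camera is valid when the covered cells
    share at least one common cell, i.e. a common overlapping region exists."""
    index_ranges = [range(len(c)) for c in covered_cells]
    valid = set()
    for combo in product(*index_ranges):
        common = set.intersection(*(covered_cells[cam][i] for cam, i in enumerate(combo)))
        if common:
            valid.add(combo)
    return valid

def combination_is_valid(valid, combo):
    """Online step: a touch image seen in subregion combo[cam] of each camera is
    kept or discarded with a single set lookup, before any coordinate calculation."""
    return combo in valid

# Toy example with two cameras and two subregions each (cells are just labels).
covered = [
    [{"left"}, {"right"}],       # camera A: subregion 0 covers "left", 1 covers "right"
    [{"left", "right"}, set()],  # camera B: subregion 0 covers both, 1 covers nothing valid
]
valid = precompute_valid_combinations(covered)
print(combination_is_valid(valid, (0, 0)))  # -> True  (both cover "left")
print(combination_is_valid(valid, (1, 1)))  # -> False (no common cell)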
Therefore, when the coordinates of the touch objects are calculated in step S208, invalid touch objects can simply be ignored and coordinate location is calculated only for valid touch objects, which saves the computation time otherwise spent on the coordinates of invalid touch objects and increases positioning speed.
In the camera-based touch control method of this preferred embodiment, the image of the touch control area captured by each camera is divided into a plurality of subregion images, and the touch control area is correspondingly divided, for each camera, into a plurality of shooting subregions. When a touch-object image is photographed, the subregion image in which it appears is determined, so the shooting subregion of each camera in which the touch object lies can be determined. Whether the combination of the subregion images is valid is then judged according to whether the shooting subregions in which the touch object appears have a common overlapping region. If the combination is valid, the touch object may have been photographed by all the cameras and its coordinates lie within the range of the touch control area, so it is classified as a valid touch object. If a touch object is photographed in the corresponding subregion images but the combination of those subregion images is invalid, the touch object has not been photographed by all the cameras, or its coordinates lie outside the range of the touch control area, so it is classified as an invalid touch object.
Therefore, with this camera-based touch control method, a preliminary judgment of whether a touch object is valid can be made, without calculating its coordinates, from the subregion images in which the touch object appears, and some invalid touch objects are excluded; when the coordinates of the touch objects are subsequently calculated, the coordinates of objects classified as invalid need not be calculated. This significantly reduces the computation and the delay of touch positioning and increases the speed of camera-based touch positioning.
Please refer to Fig. 8, which is a structural schematic diagram of the camera-based touch control device of the present invention.
The camera-based touch control device comprises:
an image acquisition module 81, configured to obtain the images of the touch control area captured respectively by at least three cameras arranged at the edge of the touch control area of a display device;
a mode management module 82, configured to start the boundary-determination mode or the touch-control mode, or to switch from the boundary-determination mode to the touch-control mode;
a boundary location module 83, configured to obtain, in the boundary-determination mode, the boundary location information of the touch control area captured by each camera, calculate the coordinates of the boundary locator from the boundary location information captured by each camera, and determine the effective range of the touch control area from the coordinates of the boundary locator;
wherein the boundary location information comprises the position of the image of the boundary locator within the image of the touch control area captured by each camera;
a preliminary location module 84, configured to obtain, in the touch-control mode, the touch location information of the touch control area captured by each camera and to calculate the preliminary coordinates of the touch object from the touch location information captured by each camera;
wherein the touch location information comprises the position of the touch-object image within the image of the touch control area captured by each camera;
a screening module 85, configured to screen the preliminary coordinates of the touch object against the effective range of the touch control area determined by the boundary location module;
a location module 86, configured to determine the touch-object coordinates from the screened preliminary coordinates.
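Purely as an illustration of how these modules could be wired together (not the patent's implementation), the following sketch stubs the image acquisition and boundary location steps and plugs in placeholder functions for the preliminary location and screening modules; every name and value here is an assumption.

class ModeManager:
    """Mode management module: starts in the boundary-determination mode and
    can switch to the touch-control mode."""
    def __init__(self):
        self.mode = "boundary"
    def switch_to_touch(self):
        self.mode = "touch"

class CameraTouchDevice:
    """Per-frame-set control flow: boundary location fixes the effective range,
    then preliminary location, screening and final location run in touch-control
    mode.  locate_fn and screen_fn stand in for the preliminary location and
    screening modules (e.g. the triangulation and screening sketches above)."""
    def __init__(self, locate_fn, screen_fn):
        self.modes = ModeManager()
        self.effective_range = None
        self.locate_fn = locate_fn
        self.screen_fn = screen_fn

    def on_frames(self, frames):
        if self.modes.mode == "boundary":
            # Boundary location module (stubbed): derive the effective range
            # from the boundary locator seen in the frames, then switch modes.
            self.effective_range = (0, 100, 0, 60)
            self.modes.switch_to_touch()
            return []
        preliminary = self.locate_fn(frames)                      # preliminary location module
        return self.screen_fn(preliminary, self.effective_range)  # screening + location modules

dev = CameraTouchDevice(
    locate_fn=lambda frames: [(30, 20), (120, 10)],
    screen_fn=lambda pts, r: [p for p in pts if r[0] <= p[0] <= r[1] and r[2] <= p[1] <= r[3]],
)
dev.on_frames(None)         # first call: boundary-determination mode, range is fixed
print(dev.on_frames(None))  # -> [(30, 20)]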
Here the touch control area is the region within which the user's touch operations are detected and located, preferably the region directly above the front of the display device. Touch operations within the touch control area are detected and processed as touch-control information.
The image acquisition module 81 can obtain the images of the touch control area through its connection with each camera.
To ensure that each camera can capture an image of the complete touch control area of the display device, the field of view of each camera must cover the entire touch control area. The optical axis of each camera is preferably parallel to the plane of the display device, so that the touch control area is photographed from the side; when a touch object appears in the touch control area, the image of the touch object in the touch control area is likewise captured from the side.
The positions of the three cameras can be set anywhere along the edge of the touch control area as required. Those skilled in the art only need to obtain the position coordinates of the three cameras and their positions relative to the display device to compute the positional relationship between each camera's captured image and the touch control area of the display device, and thereby locate a touch object appearing in the touch control area.
As a preferred embodiment, the three cameras are arranged at the two ends and at the midpoint of one side of the display device; the viewing angle of the two cameras at the two ends of that side is 90 degrees, and the viewing angle of the camera at the midpoint of that side is 180 degrees. In this way the entire touch control area can be covered with only three cameras, and the situation in which a touch object lies on the line between two cameras is avoided.
For convenience of description, the detection and location of touch objects is explained below using two-dimensional touch location as an example.
Referring to Fig. 3, a coordinate system is established in the plane of the display device 11, taking an arbitrary point in that plane as the origin. The touch control area of the display device 11 is a region of the same size as the front display area of the display device 11. Taking the first camera A as the origin, the coordinates of the first camera A are (0, 0); letting L be the length of the side 111 of the display device 11, the coordinates of the second camera B are (L, 0) and the coordinates of the third camera C are (L/2, 0); the coordinates of the touch object O are the unknown (x, y). The images captured of the touch control area of the display device 11 are obtained from the first camera A, the second camera B and the third camera C respectively.
The mode management module 82 first starts the boundary-determination mode.
The boundary-determination mode is the mode in which the effective range of the touch control area is located; in this mode, a touch object appearing in the touch control area is treated by the processor as a boundary locator by default.
As a preferred embodiment, when starting the boundary-determination mode, the mode management module 82 issues a prompt reminding the user to touch the boundary of the touch control area so that boundary location can be performed.
As a preferred embodiment, in the boundary-determination mode the boundary location module 83 obtains the positions of the images of a plurality of boundary locators that appear in succession within the image of the touch control area as the boundary location information; it then calculates the coordinates of the boundary locators from the boundary location information and takes the polygonal region whose vertices are the coordinate points of the boundary locators as the effective range of the touch control area.
In this scheme the effective range of the touch control area is determined entirely by the user's touch clicks in the boundary-determination mode, so the user can set it flexibly according to his or her own needs, which is very convenient.
As another preferred embodiment, in the boundary-determination mode the boundary location module 83 obtains the positions of the images of several boundary locators that appear in succession within the image of the touch control area as the boundary location information; it then calculates the coordinates of the boundary locators from the boundary location information and takes the polygonal region whose vertices are the coordinate points of the boundary locators together with predefined coordinate points as the effective range of the touch control area.
In this scheme the effective range of the touch control area is determined from the user's touch clicks in the boundary-determination mode together with preset coordinate points, so the user only needs to click one or a few points to determine the effective range easily; this is very convenient and also speeds up boundary location.
For example, as shown in Fig. 4, to perform boundary location on the touch control area of the display device 11 of Fig. 3, the user only needs to click the point H on the edge of the display device 11 that lies diagonally opposite the first camera A; the boundary location module 83 obtains the position of the image of the point H within the image of the touch control area as the boundary location information. From the boundary location information captured by each camera, the coordinates of the point H are calculated to be (L, h). Then, from the coordinates (L, h) of the point H and the coordinates (0, 0) of the first camera A, the effective range of the touch control area is determined to be (0 ≤ x ≤ L, 0 ≤ y ≤ h).
After the effective range of the touch control area has been determined in the boundary-determination mode, the mode management module 82 switches to the touch-control mode automatically or in response to a user instruction.
The touch-control mode is the mode in which touch operations within the effective range of the touch control area are detected and located. In the touch-control mode, a touch object appearing in the touch control area is treated by the processor as an ordinary touch object by default.
As a preferred embodiment, when switching to the touch-control mode, the mode management module 82 issues a prompt indicating that boundary confirmation is complete and that touch operations can begin.
After the touch location information of the touch control area has been obtained in the touch-control mode, various schemes can be used to calculate the touch-object coordinates.
One scheme for calculating the touch-object coordinates is as follows:
assume that the cameras comprise a first camera, a second camera and a third camera;
in the touch-control mode, the preliminary location module 84 calculates first preliminary coordinates of the touch object from the touch location information captured by the first camera and the second camera, and calculates second preliminary coordinates of the touch object from the touch location information captured by the first camera and the third camera;
the screening module 85 screens the first preliminary coordinates and the second preliminary coordinates against the effective range of the touch control area;
the location module 86 takes the coordinate points that are identical in the screened first preliminary coordinates and the screened second preliminary coordinates as the touch-object coordinates.
This scheme calculates the preliminary coordinates of the touch object twice and compares the two sets of preliminary coordinates to determine the touch-object coordinates. The result is more accurate, and the scheme is easy to implement.
Another scheme for calculating the touch-object coordinates is as follows:
assume that the cameras comprise a first camera, a second camera and a third camera;
in the touch-control mode, the preliminary location module 84 calculates first preliminary coordinates of the touch object from the touch location information captured by the first camera and the second camera;
the screening module 85 screens the first preliminary coordinates against the effective range of the touch control area;
the location module 86 calculates, from the screened first preliminary coordinates and the coordinates of the third camera, the theoretical value of the touch location information that the third camera would capture for those first preliminary coordinates, and compares this theoretical value with the touch location information actually captured by the third camera; if they match, the corresponding first preliminary coordinates are determined to be the touch-object coordinates.
This scheme does not calculate the preliminary coordinates of the touch object twice. Instead, it uses the preliminary coordinates from a single calculation, together with the coordinates of the other camera, to compute the theoretical value of that camera's touch location information, compares this theoretical value with the touch location information actually captured by that camera, and thereby screens the first preliminary coordinates to obtain the touch-object coordinates. Because the preliminary coordinates need not be calculated twice, the amount of computation is smaller, which helps increase positioning speed.
Compared with the prior art, in the camera-based touch control device of the present invention, the mode management module first starts the boundary-determination mode; in this mode the boundary location module obtains the images of the boundary locator in the touch control area and the boundary location information of the touch control area captured by each camera, and then calculates the effective range of the touch control area from the boundary location information. The mode management module then switches to the touch-control mode; in this mode the preliminary location module obtains images of the touch object and calculates its preliminary coordinates, the screening module screens those preliminary coordinates against the effective range of the touch control area so that preliminary coordinates outside the effective range are discarded, and the location module determines the touch-object coordinates from the screened preliminary coordinates. This reduces the computation needed to confirm the touch-object coordinates as well as the positioning delay, and increases the speed of camera-based touch positioning.
Please refer to Fig. 9, which is a structural schematic diagram of a preferred embodiment of the camera-based touch control device of the present invention.
As a preferred embodiment of the camera-based touch control device of the present invention, the device further comprises:
a region division module 87, configured to divide the image of the touch control area captured by each camera, in the direction parallel to the display device, into at least two subregion images, and, according to the way the subregion images are divided and the effective coordinate range of the touch control area, to divide the touch control area, with respect to each camera, into at least two shooting subregions;
a judgment module 88, configured to judge, when a touch-object image appears in at least three of the subregion images, whether the combination of those subregion images is valid according to whether the shooting subregions corresponding to those subregion images have a common overlapping region;
a classification module 89, configured to classify the touch object as a valid touch object when the judgment result of the judgment module 88 is positive, and as an invalid touch object when the judgment result of the judgment module 88 is negative.
If described judge module 88 judges that the combination of each described subregion image is effective, illustrate that then described touch objects is all photographed by each camera head, and touch article coordinate is effective touch objects within the scope of described touch control area.If described judge module 88 judges that the combination of each described subregion image is invalid, illustrate that then described touch objects is not all to be photographed by each camera head, perhaps touch article coordinate is invalid touch objects outside the scope of described touch control area.Therefore, described Primary Location module 84 is calculated the coordinate time of touch objects, can directly ignore invalid touch objects, only effective touch objects is carried out coordinate setting and calculates, and saves the computing time to invalid touch article coordinate, accelerates locating speed.
When the region division module 87 divides the sub-region images, the number of sub-region images and the size of each sub-region image can be set according to the user's own needs. The general principle is: the more sub-region images there are, the stronger the ability to pre-screen and exclude touch objects, but the computation required for the pre-screening also grows with the number of sub-region images. The number of sub-region images should therefore be chosen moderately, so that sufficient pre-screening ability is obtained without adding too much extra computation. Dividing each image into sub-regions of equal size is preferred, as it reduces the amount of computation.
As one preferred implementation, the region division module 87 may divide the images of the touch control area captured by the two cameras arranged at the two ends of one side of the display device into three sub-region images of equal length along the direction parallel to the display device, and divide the image of the touch control area captured by the camera arranged at the midpoint of that side into six sub-region images of equal length along the same direction. Tests show that this way of dividing the sub-region images excludes touch objects efficiently in the pre-screening step while maintaining a high computation speed, and is therefore a favourable division.
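As an illustration of the equal-length division (not taken from the patent), a camera image column can be mapped to its sub-region image index as follows; the 1024-pixel image width and the sample pixel values are assumptions for the example.

```python
def subregion_index(x_pixel: int, image_width: int, num_subregions: int) -> int:
    """Map a pixel column of the camera image to its equal-length sub-region image index."""
    if not 0 <= x_pixel < image_width:
        raise ValueError("pixel column lies outside the camera image")
    return min(x_pixel * num_subregions // image_width, num_subregions - 1)

# End cameras use three sub-region images, the midpoint camera uses six.
print(subregion_index(500, 1024, 3))   # 1: middle sub-region image of an end camera
print(subregion_index(1000, 1024, 6))  # 5: last sub-region image of the midpoint camera
```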
As another preferred implementation of the present invention, the region division module 87 may also divide the image of the touch control area captured by each camera into three sub-region images of equal length along the direction parallel to the display device; correspondingly, for each camera the touch control area is divided into three shooting sub-regions of equal apex angle, with the position of that camera as the vertex.
As shown in Fig. 4, the image of the touch control area captured by the first camera A is divided into three sub-region images A1, A2, A3 of equal length, and correspondingly the touch control area of the display device is divided, with respect to the first camera A, into three shooting sub-regions a1, a2, a3 of equal apex angle;
the image of the touch control area captured by the second camera B is divided into three sub-region images B1, B2, B3 of equal length, and correspondingly the touch control area of the display device is divided, with respect to the second camera B, into three shooting sub-regions b1, b2, b3 of equal apex angle;
the image of the touch control area captured by the third camera C is divided into three sub-region images C1, C2, C3 of equal length, and correspondingly the touch control area of the display device is divided, with respect to the third camera C, into three shooting sub-regions c1, c2, c3 of equal apex angle.
When a touch object appears in the touch control area it is inevitably captured by all three cameras at the same time, so each touch object image necessarily appears in one sub-region image of each camera; that is, after the region division module 87 has divided the images of the touch control area, each touch object image necessarily appears in some combination (Ax, Bx, Cx) of sub-region images. However, not every touch object appearing in a combination (Ax, Bx, Cx) of sub-region images is a valid touch object: because the sub-region images are captured from the side, parallel to the plane of the display device, the captured range may extend beyond the touch control area of the display device, so invalid touch objects may also be captured.
Each shooting sub-region into which the region division module 87 divides the touch control area, corresponding to the sub-region images, lies entirely within the effective range of the touch control area; that is, any point within the effective range of the touch control area appears in some combination (ax, bx, cx) of shooting sub-regions, and each part of the touch control area is formed by the overlap of shooting sub-regions belonging to different cameras.
When there is a touch object in the touch control area, every camera captures a touch object image, and that touch object image appears in one of the sub-region images captured by each camera. The judgment module 88 then judges the combination of sub-region images according to the correspondence between the sub-region images and the shooting sub-regions.
The judgment rule of the judgment module 88 is: if the shooting sub-regions corresponding to the sub-region images in which the touch object image appears share a common overlapping region, the combination of those sub-region images is valid; otherwise it is invalid.
In one preferred implementation, when a touch object appears the judgment module 88 may immediately calculate, from the way the sub-region images containing the touch object image are divided and from the effective coordinate range of the touch control area, the coordinate range of each corresponding shooting sub-region, and then judge from these coordinate ranges whether the shooting sub-regions share a common overlapping region. If they do, the combination of the corresponding sub-region images is judged valid; if they do not, the combination is judged invalid. In this approach the validity of the combination of sub-region images is only computed when a touch object appears, and since the shooting sub-regions corresponding to the touch object are already determined at that point, only a small amount of computation is needed.
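A minimal sketch of this on-the-fly check (illustrative only, not the patent's implementation): each shooting sub-region is modelled as an angular sector seen from its camera, and a common overlapping region is detected approximately by sampling points of a rectangular effective range. The camera positions, sector angles and sampling density are assumptions made for the example.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]
Sector = Tuple[float, float]  # (min_angle, max_angle) in degrees, seen from the camera

def bearing_deg(camera: Point, p: Point) -> float:
    """Direction of point p as seen from the camera, in degrees."""
    return math.degrees(math.atan2(p[1] - camera[1], p[0] - camera[0]))

def in_shooting_subregion(camera: Point, sector: Sector, p: Point) -> bool:
    """Is p inside the angular shooting sub-region of this camera?"""
    lo, hi = sector
    return lo <= bearing_deg(camera, p) <= hi

def combination_is_valid(cameras: List[Point], sectors: List[Sector],
                         effective_range: Tuple[Point, Point], samples: int = 50) -> bool:
    """Do the shooting sub-regions of all cameras share a common overlapping region
    inside the rectangular effective range? (approximate grid-sampling test)"""
    (x0, y0), (x1, y1) = effective_range
    for i in range(samples + 1):
        for j in range(samples + 1):
            p = (x0 + (x1 - x0) * i / samples, y0 + (y1 - y0) * j / samples)
            if all(in_shooting_subregion(c, s, p) for c, s in zip(cameras, sectors)):
                return True  # some point of the touch area lies in every sub-region
    return False

# Illustrative layout: cameras A and B at the two corners (90-degree views), C at the midpoint
# (180-degree view) of the same edge, each view split into three equal apex angles.
cameras = [(0.0, 0.0), (160.0, 0.0), (80.0, 0.0)]
sectors = [(30.0, 60.0), (150.0, 180.0), (120.0, 180.0)]  # the sub-regions a2, b3, c3
print(combination_is_valid(cameras, sectors, ((0.0, 0.0), (160.0, 90.0))))  # True
```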
In another preferred implementation, the judgment module 88 comprises:
a presetting module, configured to calculate in advance, from the way the sub-region images are divided and the effective coordinate range of the touch control area, the coordinate ranges of all shooting sub-regions; to take one shooting sub-region from each of the at least three cameras' shooting sub-regions to form each possible combination and judge whether the shooting sub-regions in it share a common overlapping region; and, if they do, to classify the corresponding combination of sub-region images as a valid combination, or, if they do not, to classify it as an invalid combination;
a real-time judgment module, configured to judge directly, when a touch object image appears in the sub-region images captured by the cameras, whether those sub-region images constitute one of the valid combinations. Whether a combination of sub-region images is valid can thus be judged quickly.
In this approach the presetting module pre-computes the valid combinations of shooting sub-regions (equivalently, of sub-region images) before any touch object appears. When a touch object is present, the real-time judgment module simply checks whether the sub-region images in which the touch object image appears correspond to a valid combination; if so, the corresponding shooting sub-regions are known to share a common overlapping region without any further coordinate calculation, which significantly reduces the computation time of the positioning process.
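Continuing the sketch above (this fragment reuses its combination_is_valid helper and cameras list, so it is illustrative rather than self-standing), the presetting step can enumerate all sub-region combinations once and store the valid ones, so that the real-time judgment reduces to a set lookup. The sector angles are, again, assumptions for the example.

```python
from itertools import product

def precompute_valid_combinations(cameras, camera_sectors, effective_range):
    """Presetting step: take one shooting sub-region per camera and record which index
    combinations share a common overlapping region inside the effective range."""
    valid = set()
    for combo in product(*(range(len(s)) for s in camera_sectors)):
        sectors = [camera_sectors[k][idx] for k, idx in enumerate(combo)]
        if combination_is_valid(cameras, sectors, effective_range):
            valid.add(combo)
    return valid

# Three equal-apex-angle shooting sub-regions per camera, as in Fig. 4.
camera_sectors = [
    [(0.0, 30.0), (30.0, 60.0), (60.0, 90.0)],        # a1, a2, a3
    [(90.0, 120.0), (120.0, 150.0), (150.0, 180.0)],  # b1, b2, b3
    [(0.0, 60.0), (60.0, 120.0), (120.0, 180.0)],     # c1, c2, c3
]
valid_combos = precompute_valid_combinations(cameras, camera_sectors, ((0.0, 0.0), (160.0, 90.0)))

# Real-time judgment: a touch object image seen in (A2, B3, C3) maps to indices (1, 2, 2).
print((1, 2, 2) in valid_combos)  # True, with no coordinate calculation at touch time
```

Here the geometric work is done once before any touch input, so at touch time the judgment is only a lookup, which matches the stated aim of avoiding extra coordinate computation during positioning.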
If the judgment module 88 judges that the combination of the sub-region images is valid, the touch object has been captured by every camera and its coordinates lie within the touch control area, so it is a valid touch object and the classification module 89 classifies it as such. If the judgment module 88 judges that the combination is invalid, the touch object has not been captured by every camera, or its coordinates lie outside the touch control area, so it is an invalid touch object and the classification module 89 classifies it as such.
The preliminary positioning module 84 can therefore ignore invalid touch objects directly when calculating the coordinates of touch objects and perform the calculation only for valid touch objects, saving the computation that would otherwise be spent on the coordinates of invalid touch objects and speeding up positioning.
With the camera-based touch control device of this preferred implementation, a preliminary judgment of whether a touch object is valid can be made from the sub-region images in which the touch object appears, without calculating its coordinates, so that some invalid touch objects are excluded in advance. When the coordinates of the touch objects are then calculated, the coordinates of objects classified as invalid need not be computed, which significantly reduces the computation and delay of touch positioning and improves the speed of camera-based touch positioning.
In the present invention, coordinates being "identical" or "overlapping" is defined with allowance for the errors of the cameras' imaging process and a suitable error in the calculations, rather than as identical or overlapping in the absolute sense. Descriptions in the present invention of coordinates being identical or overlapping, and similar descriptions, should be understood as holding within a certain permitted error range.
The present invention is not limited to the above embodiments. For example, the camera-based touch control method and device of the present invention can perform multi-point positioning of more than three points. For multi-point positioning of more than three points (for example, four-point positioning), the captured image information of the cameras is combined in pairs and solved, and the results are compared to obtain the actual coordinates of the touch objects.
In addition, from an understanding of the technical solution of the present invention, those skilled in the art will recognise that solving for each pairwise combination of the captured image information of all the cameras, or increasing the amount of captured image information by increasing the number of cameras, can make the positioning of the camera-based touch control method and device of the present invention more accurate, and can also make multi-point positioning of more than three points easier to achieve.
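As an illustration of the pairwise solve-and-compare idea (the patent does not give a formula, so the ray-intersection triangulation, the camera layout and the angle values below are assumptions), candidate points from each camera pair can be cross-checked so that only points reproduced by every pair, i.e. the real touch points, survive:

```python
import math
from itertools import combinations
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def intersect_rays(c1: Point, angle1: float, c2: Point, angle2: float) -> Optional[Point]:
    """Intersection of two rays given by camera positions and bearing angles in degrees."""
    d1 = (math.cos(math.radians(angle1)), math.sin(math.radians(angle1)))
    d2 = (math.cos(math.radians(angle2)), math.sin(math.radians(angle2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel viewing directions
    t = ((c2[0] - c1[0]) * d2[1] - (c2[1] - c1[1]) * d2[0]) / denom
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])

def pair_candidates(c1: Point, angles1, c2: Point, angles2) -> List[Point]:
    """All candidate points from one camera pair: the real touch points plus 'ghost' intersections."""
    pts = []
    for a1 in angles1:
        for a2 in angles2:
            p = intersect_rays(c1, a1, c2, a2)
            if p is not None:
                pts.append(p)
    return pts

def multi_point_positions(cams: List[Point], angles: List[List[float]], tol: float = 1.0) -> List[Point]:
    """Keep only the candidates that every camera pair reproduces (within tolerance)."""
    per_pair = [pair_candidates(cams[i], angles[i], cams[j], angles[j])
                for i, j in combinations(range(len(cams)), 2)]
    return [p for p in per_pair[0]
            if all(any(math.dist(p, q) < tol for q in other) for other in per_pair[1:])]

# Illustrative example: three cameras on one edge observe two touch points at (40, 45) and (120, 30).
cams = [(0.0, 0.0), (160.0, 0.0), (80.0, 0.0)]
angles = [[48.37, 14.04], [159.44, 143.13], [131.63, 36.87]]  # bearings of the two points per camera
print(multi_point_positions(cams, angles))  # approximately [(40, 45), (120, 30)]; ghost points are discarded
```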
The above-described embodiments of the present invention do not limit the scope of protection of the present invention. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the claims of the present invention.

Claims (10)

1. A camera-based touch control method, characterized in that it comprises the following steps:
obtaining the images of a touch control area respectively captured by at least three cameras arranged at the edge of the touch control area of a display device;
starting a boundary determination mode;
in the boundary determination mode, obtaining boundary location information of the touch control area captured by each camera, wherein the boundary location information comprises the position of the image of a boundary locator within the image of the touch control area captured by each camera;
calculating the coordinates of the boundary locator from the boundary location information captured by each camera, and determining the effective range of the touch control area from the coordinates of the boundary locator;
switching to a touch control mode;
in the touch control mode, obtaining touch location information of the touch control area captured by each camera, wherein the touch location information comprises the position of a touch object image within the image of the touch control area captured by each camera;
calculating preliminary coordinates of the touch object from the touch location information captured by each camera;
screening the preliminary coordinates of the touch object according to the effective range of the touch control area;
determining the coordinates of the touch object from the preliminary coordinates remaining after screening.
2. The camera-based touch control method according to claim 1, characterized in that the step of obtaining the boundary location information in the boundary determination mode comprises:
obtaining, as the boundary location information, the positions of the images of a plurality of boundary locators appearing in succession within the images of the touch control area;
and in that, when determining the effective range of the touch control area, the coordinates of the plurality of boundary locators are calculated respectively from the boundary location information, and the polygonal region whose vertices are the coordinate points of the plurality of boundary locators is taken as the effective range of the touch control area.
3. The camera-based touch control method according to claim 1, characterized in that the step of obtaining the boundary location information in the boundary determination mode comprises:
obtaining, as the boundary location information, the positions of the images of a plurality of boundary locators appearing in succession within the images of the touch control area;
and in that, when determining the effective range of the touch control area, the coordinates of the plurality of boundary locators are calculated respectively from the boundary location information, and the polygonal region whose vertices are the coordinate points of the plurality of boundary locators together with predefined coordinate points is taken as the effective range of the touch control area.
4. The camera-based touch control method according to any one of claims 1 to 3, characterized in that the cameras comprise a first camera, a second camera and a third camera;
in the touch control mode, first preliminary coordinates of the touch object are calculated from the touch location information captured by the first camera and the second camera;
second preliminary coordinates of the touch object are calculated from the touch location information captured by the first camera and the third camera;
the first preliminary coordinates and the second preliminary coordinates are respectively screened according to the effective range of the touch control area;
and coordinate points that are identical in the first preliminary coordinates and the second preliminary coordinates after screening are determined to be the coordinates of the touch object.
5. The camera-based touch control method according to any one of claims 1 to 3, characterized in that, when the boundary determination mode is started, a prompt signal indicating that the boundary determination mode has started is issued, and, when switching to the touch control mode, a prompt signal indicating the switch to the touch control mode is issued.
6. A camera-based touch control device, comprising:
an image acquisition module, configured to obtain the images of a touch control area respectively captured by at least three cameras arranged at the edge of the touch control area of a display device;
a mode management module, configured to start a boundary determination mode or a touch control mode, or to switch from the boundary determination mode to the touch control mode;
a boundary positioning module, configured to obtain, in the boundary determination mode, boundary location information of the touch control area captured by each camera, to calculate the coordinates of a boundary locator from the boundary location information captured by each camera, and to determine the effective range of the touch control area from the coordinates of the boundary locator;
wherein the boundary location information comprises the position of the image of the boundary locator within the image of the touch control area captured by each camera;
a preliminary positioning module, configured to obtain, in the touch control mode, touch location information of the touch control area captured by each camera, and to calculate preliminary coordinates of a touch object from the touch location information captured by each camera;
wherein the touch location information comprises the position of the touch object image within the image of the touch control area captured by each camera;
a screening module, configured to screen the preliminary coordinates of the touch object according to the effective range of the touch control area determined by the boundary positioning module;
and a positioning module, configured to determine the coordinates of the touch object from the preliminary coordinates remaining after screening.
7. The camera-based touch control device according to claim 6, characterized in that the boundary positioning module obtains, as the boundary location information, the positions of the images of a plurality of boundary locators appearing in succession within the images of the touch control area, calculates the coordinates of the plurality of boundary locators respectively from the boundary location information, and takes the polygonal region whose vertices are the coordinate points of the plurality of boundary locators as the effective range of the touch control area.
8. The camera-based touch control device according to claim 6, characterized in that the boundary positioning module obtains, as the boundary location information, the positions of the images of a plurality of boundary locators appearing in succession within the images of the touch control area, calculates the coordinates of the plurality of boundary locators respectively from the boundary location information, and takes the polygonal region whose vertices are the coordinate points of the plurality of boundary locators together with predefined coordinate points as the effective range of the touch control area.
9. The camera-based touch control device according to any one of claims 6 to 8, characterized in that the cameras comprise a first camera, a second camera and a third camera;
the preliminary positioning module, in the touch control mode, calculates first preliminary coordinates of the touch object from the touch location information captured by the first camera and the second camera, and calculates second preliminary coordinates of the touch object from the touch location information captured by the first camera and the third camera;
the screening module respectively screens the first preliminary coordinates and the second preliminary coordinates according to the effective range of the touch control area;
and the positioning module determines coordinate points that are identical in the first preliminary coordinates and the second preliminary coordinates after screening to be the coordinates of the touch object.
10. The camera-based touch control device according to any one of claims 6 to 8, characterized in that the mode management module issues, when starting the boundary determination mode, a prompt signal indicating that the boundary determination mode has started, and issues, when switching to the touch control mode, a prompt signal indicating the switch to the touch control mode.
CN 201110257227 2011-09-01 2011-09-01 Camera shooting type touch control method and device Active CN102323866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110257227 CN102323866B (en) 2011-09-01 2011-09-01 Camera shooting type touch control method and device

Publications (2)

Publication Number Publication Date
CN102323866A CN102323866A (en) 2012-01-18
CN102323866B (en) 2013-08-21

Family

ID=45451612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110257227 Active CN102323866B (en) 2011-09-01 2011-09-01 Camera shooting type touch control method and device

Country Status (1)

Country Link
CN (1) CN102323866B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324361B (en) * 2013-06-28 2016-05-25 广州视睿电子科技有限公司 Method and system for positioning touch point

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8115593B2 (en) * 2005-03-30 2012-02-14 Memsic Transducer Systems Co., Ltd. Adaptive network and method
JP2007072501A (en) * 2005-09-02 2007-03-22 Canon Inc Coordinate-input device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1739529A1 (en) * 2000-07-05 2007-01-03 Smart Technologies Inc. Camera-based touch system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP 2007-72501 A (laid-open) 2007.03.22

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 233 Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangzhou, Guangdong 510670

Patentee after: VTRON GROUP Co.,Ltd.

Address before: No. 6 Cai Road, Guangzhou High-tech Industrial Development Zone, Guangdong 510663

Patentee before: VTRON TECHNOLOGIES Ltd.

CP03 Change of name, title or address
TR01 Transfer of patent right

Effective date of registration: 20201130

Address after: Unit 2414-2416, main building, no.371, Wushan Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGDONG GAOHANG INTELLECTUAL PROPERTY OPERATION Co.,Ltd.

Address before: No. 233 Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangzhou, Guangdong 510670

Patentee before: VTRON GROUP Co.,Ltd.

Effective date of registration: 20201130

Address after: 215500 No.13, Caotang Road, Changshu, Suzhou, Jiangsu Province

Patentee after: Changshu intellectual property operation center Co.,Ltd.

Address before: Unit 2414-2416, main building, no.371, Wushan Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: GUANGDONG GAOHANG INTELLECTUAL PROPERTY OPERATION Co.,Ltd.

TR01 Transfer of patent right
CP02 Change in the address of a patent holder

Address after: 215500 5th floor, building 4, 68 Lianfeng Road, Changfu street, Changshu City, Suzhou City, Jiangsu Province

Patentee after: Changshu intellectual property operation center Co.,Ltd.

Address before: No.13 caodang Road, Changshu City, Suzhou City, Jiangsu Province

Patentee before: Changshu intellectual property operation center Co.,Ltd.

CP02 Change in the address of a patent holder