CN112181211A - Touch positioning method and device and terminal equipment - Google Patents

Touch positioning method and device and terminal equipment

Info

Publication number
CN112181211A
Authority
CN
China
Prior art keywords
touch
target
point
area
depth camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910594180.3A
Other languages
Chinese (zh)
Inventor
吴振华
王瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
TCL Research America Inc
Original Assignee
TCL Research America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Research America Inc filed Critical TCL Research America Inc
Priority to CN201910594180.3A priority Critical patent/CN112181211A/en
Publication of CN112181211A publication Critical patent/CN112181211A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 - Control or interface arrangements specially adapted for digitisers

Abstract

The invention is applicable to the technical field of touch control and provides a touch positioning method, a touch positioning device, and terminal equipment. The touch positioning method includes: acquiring, through a depth camera, three-dimensional information of a touch point detected on a target touch area, wherein the depth camera is located outside the target touch area and its shooting direction is the direction in which the coordinate origin of the target touch area points to the center point of the target touch area; calculating, based on the three-dimensional information of the touch point, an included angle between the vector of the touch point and the vector of the horizontal coordinate axis of the target touch area, wherein the vector of the touch point is the vector pointing from the coordinate origin of the target touch area to the touch point; and obtaining the coordinates of the touch point on the target touch area based on the three-dimensional information of the touch point and the included angle. The invention can reduce the cost of touch positioning.

Description

Touch positioning method and device and terminal equipment
Technical Field
The invention belongs to the technical field of touch control, and particularly relates to a touch control positioning method, a touch control positioning device and terminal equipment.
Background
With the development of science and technology, touch technology is applied more and more widely. For example, intelligent terminals such as mobile phones and tablet computers cannot do without the support of touch technology, and users can trigger corresponding functions by performing touch operations on the terminal screen.
In the existing touch technology, a capacitive or resistive touch screen, or an infrared positioning system, is generally integrated on the intelligent terminal so that the user's touch gesture on the screen can be identified and located and the corresponding operation triggered. However, both of these approaches require components to be pre-integrated on the terminal device and suffer from high hardware cost and low precision.
Disclosure of Invention
In view of this, embodiments of the present invention provide a touch positioning method, a touch positioning device, and a terminal device, so as to solve the problem in the prior art of how to reduce the hardware cost of touch positioning while improving the accuracy and convenience of touch positioning.
A first aspect of an embodiment of the present invention provides a touch positioning method, including:
acquiring three-dimensional information of a touch point detected on a target touch area through a depth camera, wherein the depth camera is positioned outside the target touch area, and the shooting direction of the depth camera is the direction in which the coordinate origin of the target touch area points to the central point of the target touch area;
based on the three-dimensional information of the touch points, calculating an included angle between a vector of the touch points and a vector of a horizontal coordinate axis of the target touch area, wherein the vector of the touch points is a vector pointing to the touch points from a coordinate origin of the target touch area;
and obtaining the coordinates of the touch points on the target touch area based on the three-dimensional information of the touch points and the included angle.
A second aspect of the embodiments of the present invention provides a touch positioning system, where the system includes a touch positioning apparatus and a target device, the touch positioning apparatus and the target device are connected in a wired or wireless manner, where:
the touch positioning device is used for executing the steps of the touch positioning method;
and the target equipment is used for acquiring the touch point coordinates sent by the touch positioning device and executing target operation according to the touch point coordinates.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the touch positioning method when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the touch positioning method.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: in the embodiment of the invention, the coordinates of the touch points on the target touch area are calculated through the three-dimensional information of the touch points detected on the target touch area by the depth camera, and components do not need to be integrated on the target touch area in advance, so that the hardware cost of touch positioning can be reduced, and the convenience of touch positioning is improved; meanwhile, the coordinates of the touch points on the target touch area are calculated through the three-dimensional information acquired from the depth camera, so that the calculated coordinate precision is higher, and the touch positioning precision can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flow chart illustrating an implementation of a first touch positioning method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating an included angle between a vector of a touch point and a vector of a horizontal coordinate axis of a target touch area according to an embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating an implementation of a second touch positioning method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a first planar coordinate system provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a second planar coordinate system provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a third planar coordinate system provided by the embodiment of the present invention;
fig. 7 is a schematic flow chart illustrating an implementation of a third touch positioning method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a lens holder mounting position of a depth camera according to an embodiment of the present invention;
fig. 9 is a side view of a lens holder according to an embodiment of the present invention;
fig. 10 is a schematic flow chart illustrating an implementation of a fourth touch positioning method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a system of a touch positioning system according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
The first embodiment is as follows:
fig. 1 shows a schematic flow chart of a first touch positioning method provided in an embodiment of the present application, which is detailed as follows:
in S101, three-dimensional information of a touch point detected on a target touch area is acquired by a depth camera, where the depth camera is located outside the target touch area, and a shooting direction of the depth camera is a direction in which a coordinate origin of the target touch area points to a center point of the target touch area.
The target touch area is an area for receiving a touch operation (a click operation or a swipe operation of a user), the target touch area may be a surface area on a screen of a designated terminal device (e.g., a computer, a television, a mobile phone, etc.), an LED electronic display screen, a billboard, an outer surface of a glass cabinet, a projection curtain, a projection wall, a wood board, or any other material object, and the shape of the target touch area may be a quadrangle, a circle, or other irregular shapes, which is not limited herein.
A two-dimensional plane coordinate system needs to be established in advance on the target touch area so as to represent the position information on the target touch area. The depth camera is located outside the target touch area, and the shooting direction of the depth camera is the direction in which the coordinate origin of the target touch area points to the central point of the target touch area, so that the shooting direction of the depth camera is parallel to the target touch area and the shooting range comprises the target touch area.
When a user performs a touch operation, the depth camera acquires the three-dimensional information of the touch point falling on the target touch area. Specifically, the three-dimensional information of the touch point is the three-dimensional coordinate P(xp, yp, zp) of the touch point P, or specifically includes the three-dimensional coordinate P(xp, yp, zp) of the touch point P and the depth value d of the touch point P. Specifically, the three-dimensional coordinate P(xp, yp, zp) can be obtained from point cloud data generated by the depth camera, where the point cloud data may be generated directly by the depth camera, converted from a depth map shot by the depth camera, or calculated from the depth information detected by the depth camera together with other physical parameters.
Alternatively, the depth camera may be a Time of Flight (TOF) depth camera, a structured-light depth camera, a binocular depth camera, or the like. Preferably, the depth camera is a TOF depth camera. A structured-light depth camera and a binocular depth camera both need to obtain image information and then analyze it frame by frame to extract the depth information and obtain point cloud data, which requires processing a large number of images algorithmically. A TOF depth camera, by contrast, directly obtains the depth information of the measured object from the flight time of light in space multiplied by the speed of light, and thus obtains point cloud data with lower computational energy consumption and a higher three-dimensional information acquisition speed than structured-light and binocular depth cameras.
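As a rough illustration of the conversion from a depth map to three-dimensional coordinates mentioned above, the following sketch back-projects a single depth-map pixel into a camera-frame 3D point using a pinhole model; the intrinsic parameters (fx, fy, cx, cy), numeric values, and function name are illustrative assumptions and not part of this embodiment.

```python
import numpy as np

def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project depth-map pixel (u, v) with depth value `depth`
    (in meters) into camera-frame 3D coordinates (pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return np.array([x, y, z])

# Example usage with assumed intrinsics of a hypothetical depth camera.
point = depth_pixel_to_point(u=320, v=240, depth=0.85,
                             fx=580.0, fy=580.0, cx=319.5, cy=239.5)
print(point)  # camera-frame coordinates in meters
```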
Optionally, before the obtaining, by the depth camera, three-dimensional information of the touch point detected on the target touch area, the method further includes:
sensing that the plane of the target touch area has touch operation through a depth camera;
and if the corresponding touch point is out of the target touch area, ending the process.
Since the shooting range of the depth camera may be larger than the target touch area, that is, the shooting range of the depth camera further includes an area located on the plane where the target touch area is located but outside the target touch area, when the depth camera senses a touch point, it is determined whether the touch point falls within the target touch area. If the touch point is in the target touch area, the next execution steps are continued. And if the touch point is out of the target touch area, directly ending the process, and not executing the next coordinate conversion step. Specifically, the three-dimensional coordinates of the touch point and the three-dimensional coordinates of the boundary point corresponding to the target touch area may be obtained, and it is determined whether the three-dimensional coordinates of the touch point fall within the range of the three-dimensional coordinates of the boundary point corresponding to the target touch area, and if not, the process is directly ended.
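A minimal sketch of this check is given below, assuming the target touch area is a rectangle whose corner coordinates were captured beforehand in the depth camera's coordinate system; the tolerance value and helper name are illustrative assumptions.

```python
import numpy as np

def point_in_rect(p, origin, corner_x, corner_y, tol=0.005):
    """Check whether 3D point p lies within the rectangle spanned by the
    edge vectors (corner_x - origin) and (corner_y - origin), allowing a
    small out-of-plane tolerance `tol` (meters)."""
    ex = corner_x - origin                # edge along the horizontal axis
    ey = corner_y - origin                # edge along the vertical axis
    d = p - origin
    u = np.dot(d, ex) / np.dot(ex, ex)    # normalized position along ex
    v = np.dot(d, ey) / np.dot(ey, ey)    # normalized position along ey
    normal = np.cross(ex, ey)
    dist = abs(np.dot(d, normal)) / np.linalg.norm(normal)  # out-of-plane distance
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 and dist <= tol

# Touch points for which this check fails are simply discarded, ending the process.
```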
In S102, based on the three-dimensional information of the touch point, an included angle between a vector of the touch point and a vector of a horizontal coordinate axis of the target touch area is calculated, where the vector of the touch point is a vector pointing to the touch point from an origin of coordinates of the target touch area.
As shown in fig. 2, based on the three-dimensional information of the touch point P, specifically based on the three-dimensional coordinate P(xp, yp, zp) of the touch point P and the coordinate origin O(xo, yo, zo) of the target touch area, the vector OP of the touch point is determined, where the vector of the touch point is the vector pointing from the coordinate origin of the target touch area to the touch point.
The vector of the horizontal coordinate axis of the target touch area is specifically a vector whose starting point is the coordinate origin of the target touch area and whose end point is any determined point falling on the positive half axis of the horizontal coordinate axis of the target touch area. As shown in fig. 2, the vector Ok of the horizontal coordinate axis of the target touch area is determined according to a point k on the positive half axis of the horizontal coordinate axis X of the plane coordinate system XOY of the target touch area.
According to the determined vector OP of the touch point and the vector Ok of the horizontal coordinate axis of the target touch area, the included angle between the two vectors is determined by the following formula:
cos∠POk = (OP · Ok) / (|OP| × |Ok|)
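The angle computation above can be sketched as follows; the coordinate values are made-up examples and not taken from the embodiment.

```python
import numpy as np

def included_angle(p, origin, k):
    """Angle ∠POk between vector OP (origin -> touch point) and
    vector Ok (origin -> point k on the positive X half-axis)."""
    op = p - origin
    ok = k - origin
    cos_angle = np.dot(op, ok) / (np.linalg.norm(op) * np.linalg.norm(ok))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))  # radians

# Made-up coordinates in the depth camera's frame.
P = np.array([0.30, 0.00, 0.40])
O = np.array([0.00, 0.00, 0.00])
K = np.array([0.50, 0.00, 0.00])
print(np.degrees(included_angle(P, O, K)))  # about 53.1 degrees
```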
in step S103, based on the three-dimensional information of the touch point and the included angle, a coordinate of the touch point on the target touch area is obtained.
The distance L between the touch point and the coordinate origin of the target touch area is obtained based on the three-dimensional information of the touch point, and the coordinates of the touch point on the target touch area are obtained according to the distance L and the included angle ∠POk.
Specifically, the step S103 includes:
obtaining the distance between the touch point and the coordinate origin of the target touch area based on the three-dimensional information of the touch point and the three-dimensional information of the coordinate origin of the target touch area;
and obtaining the coordinates of the touch points on the target touch area according to the distance and the included angle.
According to the three-dimensional coordinate P(xp, yp, zp) of the touch point and the three-dimensional coordinate O(xo, yo, zo) of the coordinate origin of the target touch area, the distance between the touch point and the coordinate origin of the target touch area is obtained:
L = sqrt((xp - xo)² + (yp - yo)² + (zp - zo)²)
According to the distance L and the included angle ∠POk, the coordinates (Xp, Yp) of the touch point on the target touch area are obtained, where:
Xp = L × cos∠POk
Yp = L × sin∠POk.
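Putting S102 and S103 together, the sketch below converts a touch point's 3D coordinates into its plane coordinates on the target touch area; the coordinate values are illustrative assumptions, and the function name is hypothetical.

```python
import numpy as np

def touch_point_plane_coords(p, origin, k):
    """Convert 3D touch point p into (Xp, Yp) on the target touch area,
    given the area's coordinate origin and a point k on its positive X axis."""
    op = p - origin
    ok = k - origin
    L = np.linalg.norm(op)                       # distance to the origin
    cos_angle = np.dot(op, ok) / (L * np.linalg.norm(ok))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return L * np.cos(angle), L * np.sin(angle)  # (Xp, Yp)

P = np.array([0.30, 0.00, 0.40])   # touch point, made-up values
O = np.array([0.00, 0.00, 0.00])   # coordinate origin of the touch area
K = np.array([0.50, 0.00, 0.00])   # point on the positive X half-axis
print(touch_point_plane_coords(P, O, K))  # roughly (0.30, 0.40)
```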
in the embodiment of the invention, the coordinates of the touch points on the target touch area are calculated through the three-dimensional information of the touch points detected on the target touch area by the depth camera, and components do not need to be integrated on the target touch area in advance, so that the hardware cost of touch positioning can be reduced, and the convenience of touch positioning is improved; meanwhile, the coordinates of the touch points on the target touch area are calculated through the three-dimensional information acquired from the depth camera, so that the calculated coordinate precision is higher, and the touch positioning precision can be improved.
Example two:
fig. 3 shows a flowchart of a second touch positioning method provided in the embodiment of the present application, which is detailed as follows:
in S301, a depth camera is disposed on a plane to be detected.
The plane to be detected is a plane on a designated target object, for example, a plane on the outer surface of a target object made of any material, such as a screen of a terminal device (e.g., a computer, a television, a mobile phone, etc.), an LED electronic display screen, a billboard, a glass cabinet, a projection curtain, a projection wall, or a wood board. And arranging a depth camera on the plane to be detected so that the depth camera can shoot the touch points on the plane to be detected.
In S302, a shooting area of the depth camera is set, and a vertical distance from each point in the shooting area of the depth camera to the plane to be detected is less than a preset distance and is greater than or equal to 0.
A shooting area of the depth camera is set such that the vertical distance from each point in the shooting area to the plane to be detected is less than a preset distance and greater than or equal to 0. The preset distance can be several millimeters, for example 5 mm, so that the depth camera only detects contact information that is close to the plane to be detected (the vertical distance is greater than 0 and less than the preset distance) or falls exactly on the plane to be detected (the vertical distance is equal to 0), which makes the detection of touch points by the depth camera more accurate.
Specifically, the vertical distance from each point in the shooting area to the plane to be detected can be constrained by setting the shooting-sensitive area of the depth camera through software or by limiting the lens field of view through hardware. Alternatively, the depth camera may adopt a specific imaging chip, for example an epc901 TOF chip whose captured image has a pixel specification of 1000 × 1, so that without any software or hardware setting the depth camera automatically shoots only the image within a vertical distance of one pixel from the plane to be detected; that is, the vertical distance between the points shot by the depth camera and the plane to be detected is limited automatically.
In S303, the target touch area is determined on the plane to be detected based on a shooting area of the depth camera, where the shooting area at least includes the target touch area.
The defined operation in the shooting area is acquired through the depth camera, and the target touch area is determined on the plane to be detected, namely the target touch area is an effective touch area defined through the operation detected in the shooting area, so that the shooting area comprises the target touch area, and the shooting area can also comprise areas outside the target touch area.
The user can be prompted to perform a defining operation on the plane to be detected in the shooting area in a prompting mode of sending voice or character display information so as to define the boundary of the target touch area, wherein the defining operation is a touch operation executed on the plane to be detected according to the prompt and can comprise clicking, sliding and the like on a target object. Optionally, a target touch area type selection instruction may be received before issuing the prompt. If the type of the target touch area selected by the user is quadrilateral, prompting the user to click on four target vertexes on the plane to be detected, and accordingly demarcating the quadrilateral target touch area according to contact information on the four target vertexes; and if the type of the target touch area selected by the user is circular, prompting the user to click the center of the target circle and the point on the target circle on the plane to be detected in sequence, and accordingly demarcating the circular target touch area according to the contact information of the center of the target circle and the point on the target circle. And if the user does not select, the type of the target touch area is a quadrangle by default.
After the target touch area is determined, the method further comprises the following steps: and establishing a plane coordinate system of the target touch area. For convenience of coordinate calculation, when the target touch area is a quadrilateral, a vertex of the target touch area can be used as a coordinate origin O, and a straight line where a boundary line of the quadrilateral target touch area is located is used as one coordinate axis to establish a plane coordinate system, as shown in fig. 4; when the target touch area is rectangular, establishing a plane coordinate system by taking a vertex of the target touch area as a coordinate origin O and taking a straight line where two right-angle sides of the rectangular target touch area are located as two coordinate axes, respectively, as shown in fig. 5; when the target touch area is circular, an intersection point of two perpendicularly intersecting tangent lines of the circular target touch area may be used as the coordinate origin O, and a straight line where the two tangent lines are located may be used as two coordinate axes to establish a plane coordinate system, as shown in fig. 6.
In S304, three-dimensional information of a touch point detected on a target touch area is acquired by a depth camera, where the depth camera is located outside the target touch area, and a shooting direction of the depth camera is a direction in which a coordinate origin of the target touch area points to a center point of the target touch area.
In this embodiment, S304 is the same as S101 in the first embodiment, and please refer to the related description of S101 in the first embodiment, which is not repeated herein.
In S305, based on the three-dimensional information of the touch point, an included angle between a vector of the touch point and a vector of a horizontal coordinate axis of the target touch area is calculated, where the vector of the touch point is a vector pointing to the touch point from an origin of coordinates of the target touch area.
In this embodiment, S305 is the same as S102 in the first embodiment, and please refer to the related description of S102 in the first embodiment, which is not repeated herein.
In S306, based on the three-dimensional information of the touch point and the included angle, a coordinate of the touch point on the target touch area is obtained.
In this embodiment, S306 is the same as S103 in the first embodiment, and please refer to the related description of S103 in the first embodiment, which is not repeated herein.
In the embodiment of the invention, the coordinates of the touch points on the target touch area are calculated through the three-dimensional information of the touch points detected on the target touch area by the depth camera, and components do not need to be integrated on the target touch area in advance, so that the hardware cost of touch positioning can be reduced, and the convenience of touch positioning is improved; meanwhile, the coordinate of the touch point on the target touch area is calculated through the three-dimensional information acquired from the depth camera, so that the calculated coordinate precision is higher, and the touch positioning precision can be improved; in addition, by setting a shooting area and determining a target touch area, the detection of the touch point and the coordinate acquisition can be more accurate.
Example three:
fig. 7 shows a flowchart of a third touch positioning method provided in the embodiment of the present application, which is detailed as follows:
in S701, a depth camera is disposed on a plane to be detected.
S701 in this embodiment is the same as S301 in the second embodiment, and please refer to the related description of S301 in the second embodiment, which is not repeated herein.
In S702, a shooting area of the depth camera is set, and a vertical distance from each point in the shooting area of the depth camera to the plane to be detected is less than a preset distance and is greater than or equal to 0.
In this embodiment, S702 is the same as S302 in the second embodiment, and please refer to the related description of S302 in the second embodiment, which is not repeated herein.
Optionally, the setting a shooting area of the depth camera includes:
and setting a shooting area by installing a lens clamp on the depth camera, wherein the gap of the lens clamp is smaller than a second preset value.
As shown in fig. 8, a lens holder 82 having a gap smaller than a second preset value is mounted on the depth camera 81 to restrict a photographing region to a region having a vertical distance from a plane to be detected smaller than the second preset value. The second preset value can be several millimeters, for example, 5mm, so that the depth camera only detects the contact information on the plane to be detected, and the detection of the touch point by the depth camera is more accurate. The second preset value may be equal to the preset distance described in the second embodiment.
Specifically, the lens holder 82 is a hollow structure. As shown in fig. 9, the end AB is the end of the lens holder close to the lens of the depth camera, and a gap EFGH is formed at the end of the lens holder far from the depth camera. The width of the gap is several millimeters, for example 5 mm, and the approximate plane constrained by the gap is parallel to the plane to be detected; specifically, the plane where GE lies is parallel to the plane to be detected. Let the plane to be detected be the plane y = y0 and the width of the gap be m. The gap then constrains the shooting area to the region whose vertical distance to the plane to be detected is smaller than m: the x and z values of the three-dimensional coordinates in the shooting area may be arbitrary, while the y value satisfies y0 ≤ y ≤ y0 + m, so the shooting area is the spatial region enclosed between the two planes y = y0 and y = y0 + m (including the two planes). After the lens holder 82 is mounted, the depth image captured by the depth camera is shown as the abcd area in fig. 9, where the efgh area is the depth image area corresponding to the gap EFGH after it restricts the shooting area; in the depth image, only the efgh area has depth information, while the abge area and the hfcd area correspond to the space outside the restricted shooting area and therefore have no depth information.
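In software, the same constraint can be sketched as a simple filter over the point cloud that keeps only points whose y coordinate lies in [y0, y0 + m]; the variable names and numeric values below are assumptions for illustration.

```python
import numpy as np

def filter_shooting_area(points, y0, m=0.005):
    """Keep only point-cloud points whose y coordinate satisfies
    y0 <= y <= y0 + m, i.e. the slab hugging the plane to be detected.
    `points` is an (N, 3) array of (x, y, z) coordinates in meters."""
    y = points[:, 1]
    mask = (y >= y0) & (y <= y0 + m)
    return points[mask]

# Example: keep only points within 5 mm of the plane y = 0.
cloud = np.array([[0.1, 0.002, 0.4],   # inside the slab, kept
                  [0.2, 0.020, 0.5]])  # outside the slab, dropped
print(filter_shooting_area(cloud, y0=0.0))
```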
In S703, a first target point is determined on the plane to be detected according to the position of the depth camera, and a horizontal distance between the depth camera and the first target point is smaller than a first preset value.
As shown in fig. 8, according to the position of the depth camera, a first target point o whose horizontal distance from the depth camera is smaller than a first preset value is determined on the plane to be detected. That is, the first target point o is located near the depth camera, so that the distance between the lens of the depth camera and the origin of coordinates o of the target touch area is smaller than a first preset value, and the shooting area of the depth camera is close to the target touch area. For example, the first preset value may be 1 mm. The first preset value can be obtained by receiving a first preset value setting instruction of a user. The shooting direction of the depth camera is the direction in which the coordinate origin o points to the center of the quadrilateral target touch area.
In S704, a second target point, a third target point, and a fourth target point are determined on the plane to be detected according to the position of the depth camera, and the first target point, the second target point, the third target point, and the fourth target point are connected to form a rectangular area, the rectangular area is used as the target touch area, and the first target point is used as the origin of coordinates of the target touch area.
As shown in fig. 8, a plane where the screen is located is used as a plane to be detected, and a second target point i, a third target point j, and a fourth target point k are sequentially determined on the plane to be detected in the shooting area according to the position of the depth camera. After the four target points are determined, the four target points are used as four vertexes of a rectangle, the four vertexes are connected to form a rectangular area, and the rectangular area is used as a target touch area. As shown in fig. 8, a rectangular region oijk formed by four target points o, i, j, and k is a target touch region.
A plane rectangular coordinate system is established on the target touch area with the first target point as the coordinate origin of the target touch area. For example, as shown in fig. 8, the plane rectangular coordinate system of the target touch area is established by taking the point o as the coordinate origin, taking the direction from o to k as the positive X-axis direction and the straight line where ok lies as the X axis of the target touch area, and taking the direction from o to i as the positive Y-axis direction and the straight line where oi lies as the Y axis of the target touch area.
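A short sketch of building that plane coordinate frame from the calibration points follows; the orthogonality check is an added sanity test rather than a step required by the embodiment, and the point coordinates are made-up values.

```python
import numpy as np

def build_plane_frame(o, i, k):
    """Build the touch area's plane frame: origin o, unit X axis toward k,
    unit Y axis toward i (all in the depth camera's coordinate system)."""
    x_axis = (k - o) / np.linalg.norm(k - o)
    y_axis = (i - o) / np.linalg.norm(i - o)
    # For a rectangular area the two axes should be (nearly) perpendicular.
    if abs(np.dot(x_axis, y_axis)) > 0.05:
        raise ValueError("calibration points do not form a rectangle")
    return o, x_axis, y_axis

# Made-up calibration points o, i, k captured by the depth camera.
o = np.array([0.0, 0.0, 0.0])
i = np.array([0.0, 0.0, 0.6])   # along the Y axis
k = np.array([0.8, 0.0, 0.0])   # along the X axis
origin, x_axis, y_axis = build_plane_frame(o, i, k)
```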
In S705, three-dimensional coordinates of the first target point, the second target point, the third target point, and the fourth target point are obtained.
After the target touch area is determined, the three-dimensional coordinate of the first target point o is acquired; the three-dimensional coordinate of the first target point o is the three-dimensional coordinate O(xo, yo, zo) of the coordinate origin of the target touch area, which is used to calculate the distance between the touch point and the coordinate origin of the target touch area and to determine the vector OP of the touch point P. At the same time, the vector Ok of the horizontal coordinate axis of the target touch area is determined according to the three-dimensional coordinate O(xo, yo, zo) of the first target point o and the three-dimensional coordinate k(xk, yk, zk) of the fourth target point k.
The three-dimensional coordinates of the second target point and the third target point can also be acquired, so that the coordinate conversion operation is subsequently executed only when a detected touch point falls within the spatial range formed by the three-dimensional coordinates of the four target points.
Alternatively, the three-dimensional coordinates of the target point may be acquired while the first target point is determined directly in step S703 and the second, third and fourth target points are determined in step S704, thereby omitting step S705.
In S706, three-dimensional information of a touch point detected on a target touch area is acquired by a depth camera, where the depth camera is located outside the target touch area, and a shooting direction of the depth camera is a direction in which a coordinate origin of the target touch area points to a center point of the target touch area.
S706 in this embodiment is the same as S101 in the first embodiment, and please refer to the related description of S101 in the first embodiment, which is not repeated herein.
In S707, based on the three-dimensional information of the touch point, an included angle between a vector of the touch point and a vector of a horizontal coordinate axis of the target touch area is calculated, where the vector of the touch point is a vector pointing to the touch point from an origin of coordinates of the target touch area.
S707 in this embodiment is the same as S102 in the first embodiment, and please refer to the related description of S102 in the first embodiment, which is not repeated herein.
In S708, based on the three-dimensional information of the touch point and the included angle, a coordinate of the touch point on the target touch area is obtained.
In this embodiment, S708 is the same as S103 in the first embodiment, and please refer to the related description of S103 in the first embodiment, which is not repeated herein.
In the embodiment of the invention, the rectangular area is specifically used as the target touch area, the target touch area is determined by determining the four target points, and components do not need to be integrated on the target touch area in advance, so that the hardware cost for touch positioning on the target touch area can be reduced, and the convenience of touch positioning is improved; meanwhile, the coordinates of the touch points on the target touch area are calculated through the three-dimensional information acquired from the depth camera, so that the calculated coordinate precision is higher, and the touch positioning precision can be improved.
Example four:
fig. 10 shows a flowchart of a fourth touch positioning method provided in the embodiment of the present application, which is detailed as follows:
in S1001, three-dimensional information of a touch point detected on a target touch area is acquired by a depth camera, the depth camera is located outside the target touch area, and a shooting direction of the depth camera is a direction in which a coordinate origin of the target touch area points to a center point of the target touch area.
S1001 in this embodiment is the same as S101 in the first embodiment, and please refer to the related description of S101 in the first embodiment, which is not repeated herein.
In S1002, based on the three-dimensional information of the touch point, an included angle between a vector of the touch point and a vector of a horizontal coordinate axis of the target touch area is calculated, where the vector of the touch point is a vector pointing to the touch point from an origin of coordinates of the target touch area.
In this embodiment, S1002 is the same as S102 in the first embodiment, and please refer to the related description of S102 in the first embodiment, which is not repeated herein.
In S1003, based on the three-dimensional information of the touch point and the included angle, a coordinate of the touch point on the target touch area is obtained.
S1003 in this embodiment is the same as S103 in the first embodiment, and please refer to the related description of S103 in the first embodiment, which is not repeated herein.
In S1004, the coordinates of the touch point on the target touch area are sent to a target device, and the target device is instructed to execute a target operation.
The coordinates (Xp, Yp) of the touch point P on the target touch area are sent to the target device in a wired or wireless manner, and the target device is instructed to execute the target operation. For example, after receiving the touch point coordinates, the target device converts them into its own logical coordinates and triggers the corresponding target operation, such as clicking to open a file, drawing, or zooming a picture in and out. Optionally, after the touch point coordinates are sent to the target device, the target device converts the coordinates into its logical coordinates and displays them on its display screen as a specific icon, thereby instructing the target device to perform the target operation (i.e., equivalent to a mouse function). The target device may be a terminal device such as a computer, a mobile phone, a television, or a server, which is not limited herein.
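One possible sketch of the conversion performed on the target device side is given below, mapping touch-area coordinates (in meters) to the device's logical pixel coordinates; the touch area size, screen resolution, and function name are assumed values for illustration only.

```python
def to_logical_coords(xp, yp, area_w=0.80, area_h=0.60,
                      screen_w=1920, screen_h=1080):
    """Map plane coordinates (xp, yp) on the target touch area (meters)
    to the target device's logical pixel coordinates, assuming the touch
    area is mapped onto the full screen."""
    px = int(round(xp / area_w * (screen_w - 1)))
    py = int(round(yp / area_h * (screen_h - 1)))
    # Clamp to the screen in case the touch lands slightly outside the area.
    px = min(max(px, 0), screen_w - 1)
    py = min(max(py, 0), screen_h - 1)
    return px, py

print(to_logical_coords(0.40, 0.30))  # center of the assumed touch area
```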
In the embodiment of the invention, the three-dimensional information of the contact detected on the target touch area by the depth camera is used for calculating the contact coordinate of the contact on the target touch area and sending the contact coordinate to the target equipment for processing, and because components do not need to be integrated on the target touch area in advance, the hardware cost of touch positioning can be reduced, and the convenience of touch positioning is improved; meanwhile, the touch point coordinates are calculated through the three-dimensional information acquired from the depth camera, so that the calculated touch point coordinates are higher in precision, and the touch positioning precision can be improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example five:
fig. 11 shows a schematic structural diagram of a touch positioning system provided in an embodiment of the present application, and for convenience of description, only parts related to the embodiment of the present application are shown:
the touch positioning system 11 includes a touch positioning device 111 and a target device 112, and the touch positioning device 111 and the target device 112 are connected in a wired or wireless manner. For example, the touch positioning device 111 may be connected to the target device 112 through a data line, or may be connected in a wireless manner such as bluetooth, WiFi, Zigbee, and the like for data transmission.
The touch positioning device 111 includes: a touch point three-dimensional information obtaining unit 1111, an included angle calculating unit 1112, and a touch point plane coordinate determining unit 1113. Wherein:
the touch point three-dimensional information acquiring unit 1111 is configured to acquire three-dimensional information of a touch point detected on a target touch area through a depth camera, where the depth camera is located outside the target touch area, and a shooting direction of the depth camera is a direction in which a coordinate origin of the target touch area points to a center point of the target touch area.
The target touch area is an area for receiving a touch operation (a click operation or a swipe operation of a user), the target touch area may be a surface area of an object made of any material, such as a screen of a designated terminal device (e.g., a computer, a television, a mobile phone, etc.), an LED electronic display screen, a billboard, an outer surface of a glass cabinet, a projection curtain, a projection wall, or a wood board, and the shape of the target touch area may be a quadrangle, a circle, or other irregular shapes, which is not limited herein.
A two-dimensional plane coordinate system needs to be established in advance on the target touch area so as to represent the position information on the target touch area. The depth camera is located outside the target touch area, and the shooting direction of the depth camera is the direction in which the coordinate origin of the target touch area points to the central point of the target touch area, so that the shooting direction of the depth camera is parallel to the target touch area and the shooting range comprises the target touch area.
When a user performs a touch operation on the target touch area, the depth camera detects the touch point P falling on the target touch area and acquires the three-dimensional information of the touch point. Specifically, the three-dimensional information of the touch point is the three-dimensional coordinate P(xp, yp, zp) of the touch point P, or specifically includes the three-dimensional coordinate P(xp, yp, zp) of the touch point P and the depth value d of the touch point P. Specifically, the three-dimensional coordinate P(xp, yp, zp) can be obtained from point cloud data generated by the depth camera, where the point cloud data may be generated directly by the depth camera, converted from a depth map shot by the depth camera, or calculated from the depth information detected by the depth camera together with other physical parameters.
An included angle calculating unit 1112, configured to calculate an included angle between a vector of the touch point and a vector of a horizontal coordinate axis of the target touch area based on the three-dimensional information of the touch point, where the vector of the touch point is a vector pointing to the touch point from an origin of coordinates of the target touch area.
As shown in fig. 2, based on the three-dimensional information of the touch point P, specifically based on the three-dimensional coordinate P(xp, yp, zp) of the touch point P and the coordinate origin O(xo, yo, zo) of the target touch area, the vector OP of the touch point is determined, where the vector of the touch point is the vector pointing from the coordinate origin of the target touch area to the touch point.
The vector of the horizontal coordinate axis of the target touch area is specifically a vector whose starting point is the coordinate origin of the target touch area and whose end point is any determined point falling on the positive half axis of the horizontal coordinate axis of the target touch area. As shown in fig. 2, the vector Ok of the horizontal coordinate axis of the target touch area is determined according to a point k on the positive half axis of the horizontal coordinate axis X of the plane coordinate system XOY of the target touch area.
According to the determined vector OP of the touch point and the vector Ok of the horizontal coordinate axis of the target touch area, the included angle between the two vectors is determined by the following formula:
cos∠POk = (OP · Ok) / (|OP| × |Ok|)
and a touch point plane coordinate determining unit 1113, configured to obtain coordinates of the touch point on the target touch area based on the three-dimensional information of the touch point and the included angle.
The distance L between the touch point and the coordinate origin of the target touch area is obtained based on the three-dimensional information of the touch point, and the coordinates of the touch point on the target touch area are obtained according to the distance L and the included angle ∠POk.
Specifically, the touch point plane coordinate determination unit 1113 includes a distance determination module and a coordinate calculation module:
the distance determining module is used for obtaining the distance between the touch point and the coordinate origin of the target touch area based on the three-dimensional information of the touch point and the three-dimensional information of the coordinate origin of the target touch area;
and the coordinate calculation module is used for obtaining the coordinate of the touch point on the target touch area according to the distance and the included angle.
Optionally, the touch positioning device 111 further includes a setting unit, a shooting area setting unit, and a target touch area determining unit:
the setting unit is used for setting a depth camera on a plane to be detected;
the shooting area setting unit is used for setting a shooting area of the depth camera, and the vertical distance from each point in the shooting area of the depth camera to the plane to be detected is smaller than a preset distance and is larger than or equal to 0;
and the target touch area determining unit is used for determining the target touch area on the plane to be detected based on the shooting area of the depth camera, wherein the shooting area at least comprises the target touch area.
Optionally, the target touch area determining unit includes a first target point determining module, a target touch area determining module, and a target point coordinate determining module:
the first target point determining module is used for determining a first target point on the plane to be detected according to the position of the depth camera, and the horizontal distance between the depth camera and the first target point is smaller than a first preset value;
the target touch area determining module is used for determining a second target point, a third target point and a fourth target point on the plane to be detected according to the position of the depth camera, connecting the first target point, the second target point, the third target point and the fourth target point to form a rectangular area, using the rectangular area as the target touch area, and using the first target point as the origin of coordinates of the target touch area;
and the target point coordinate determination module is used for acquiring three-dimensional coordinates of the first target point, the second target point, the third target point and the fourth target point.
Optionally, the shooting area setting unit is specifically configured to set a shooting area by installing a lens clamp on the depth camera, where a gap of the lens clamp is smaller than a second preset value.
Optionally, the touch positioning device 111 further includes a sensing unit and a determining unit:
the sensing unit is used for sensing that the plane of the target touch area has touch operation through the depth camera;
and the judging unit is used for ending if the corresponding touch point falls outside the target touch area.
Optionally, the touch positioning device 111 further includes:
and the sending unit is used for sending the coordinates of the touch points on the target touch area to target equipment and indicating the target equipment to execute target operation.
The target device 112 includes:
a receiving unit 1121, configured to obtain coordinates of a touch point sent by the touch positioning apparatus, and execute a target operation according to the coordinates of the touch point.
Specifically, the receiving unit includes a receiving module and a target operation executing module:
and the receiving module is used for acquiring the touch point coordinates sent by the touch positioning device and converting the touch point coordinates into the logical coordinates of the target equipment.
And the target operation execution module is used for executing the target operation according to the converted logical coordinates of the target equipment.
The target device may be a terminal device such as a computer, a mobile phone, a television, or a server, which is not limited herein. The receiving module acquires, through wired or wireless data transmission, the touch point coordinates sent by the touch positioning device and converts them into the logical coordinates of the target device. The touch point coordinates are the coordinates of the touch point in the plane coordinate system of the target touch area; the target device has its own logical coordinates, and the touch point coordinates can be converted into them according to a pre-stored mapping relation between touch point coordinates and the logical coordinates of the target device.
And according to the converted logical coordinates, performing target operations such as file opening, drawing, picture zooming and the like on the target equipment. Optionally, the logical coordinates are displayed on the display screen of the target device in a specific icon manner, instructing the target device to perform the target operation (i.e., equivalent to a mouse function).
In the embodiment of the invention, the touch point coordinates of the touch point on the target touch area are calculated through the three-dimensional information of the touch point detected on the target touch area by the depth camera, and then the target equipment acquires the touch point coordinates for processing, and since components do not need to be integrated on the target touch area in advance, the hardware cost of touch positioning can be reduced, and the convenience of touch positioning is improved; meanwhile, the touch point coordinates are calculated through the three-dimensional information acquired from the depth camera, so that the calculated touch point coordinates are higher in precision, and the touch positioning precision can be improved.
Example six:
fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 12, the terminal device 12 of this embodiment includes: a processor 120, a memory 121, and a computer program 122, such as a touch location program, stored in the memory 121 and executable on the processor 120. The processor 120 executes the computer program 122 to implement the steps in the above-mentioned embodiments of the touch positioning method, such as the steps S101 to S103 shown in fig. 1. Alternatively, the processor 120, when executing the computer program 122, implements the functions of the modules/units in the above device embodiments, such as the functions of the units 1111 to 1113 shown in fig. 11.
Illustratively, the computer program 122 may be partitioned into one or more modules/units that are stored in the memory 121 and executed by the processor 120 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 122 in the terminal device 12. For example, the computer program 122 may be divided into a touch point three-dimensional information obtaining unit, an included angle calculating unit, and a touch point plane coordinate determining unit, where the specific functions of each unit are as follows:
the device comprises a touch point three-dimensional information acquisition unit, a depth camera and a control unit, wherein the touch point three-dimensional information acquisition unit is used for acquiring three-dimensional information of a touch point detected on a target touch area through the depth camera, the depth camera is positioned outside the target touch area, and the shooting direction of the depth camera is the direction in which the coordinate origin of the target touch area points to the central point of the target touch area.
And the included angle calculation unit is used for calculating an included angle between a vector of the touch point and a vector of a horizontal coordinate axis of the target touch area based on the three-dimensional information of the touch point, wherein the vector of the touch point is a vector pointing to the touch point from a coordinate origin of the target touch area.
The touch point plane coordinate determination unit is configured to obtain the coordinates of the touch point on the target touch area based on the three-dimensional information of the touch point and the included angle.
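By way of illustration only, the following Python sketch shows how the included angle calculation unit and the touch point plane coordinate determination unit could cooperate, assuming the depth camera reports three-dimensional points in a frame whose origin coincides with the coordinate origin of the target touch area and whose x-axis lies along the horizontal coordinate axis of the area; the function names and this frame assumption are illustrative and do not form part of this disclosure.

import math

# Illustrative sketch of the included angle calculation unit: the angle between
# the vector from the coordinate origin to the touch point and the horizontal
# coordinate axis of the target touch area (here assumed to be the x-axis).
def included_angle(touch_point_3d, horizontal_axis=(1.0, 0.0, 0.0)):
    px, py, pz = touch_point_3d
    hx, hy, hz = horizontal_axis
    dot = px * hx + py * hy + pz * hz
    norm_p = math.sqrt(px * px + py * py + pz * pz)
    norm_h = math.sqrt(hx * hx + hy * hy + hz * hz)
    return math.acos(dot / (norm_p * norm_h))

# Illustrative sketch of the touch point plane coordinate determination unit:
# the distance from the touch point to the coordinate origin, together with the
# included angle, yields the coordinates on the target touch area, provided the
# touch point lies (approximately) in the plane of the area.
def plane_coordinates(touch_point_3d):
    distance = math.sqrt(sum(c * c for c in touch_point_3d))
    angle = included_angle(touch_point_3d)
    return distance * math.cos(angle), distance * math.sin(angle)

# Example: a touch point 0.40 m along the horizontal axis and 0.30 m along the
# vertical axis of the touch area (zero height above the plane).
print(plane_coordinates((0.40, 0.30, 0.0)))  # -> approximately (0.4, 0.3)

In practice, a calibration step would relate the camera coordinate frame to the touch-area frame; the sketch simply assumes the two coincide.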
The terminal device 12 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 120 and the memory 121. Those skilled in the art will appreciate that fig. 12 is merely an example of the terminal device 12 and does not constitute a limitation of it; the terminal device 12 may include more or fewer components than shown, some components may be combined, or different components may be used, and the terminal device may, for example, also include input/output devices, network access devices, buses, and the like.
The processor 120 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 121 may be an internal storage unit of the terminal device 12, such as a hard disk or memory of the terminal device 12. The memory 121 may also be an external storage device of the terminal device 12, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 12. Further, the memory 121 may include both an internal storage unit and an external storage device of the terminal device 12. The memory 121 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A touch positioning method is characterized by comprising the following steps:
acquiring three-dimensional information of a touch point detected on a target touch area through a depth camera, wherein the depth camera is positioned outside the target touch area, and the shooting direction of the depth camera is the direction in which the coordinate origin of the target touch area points to the central point of the target touch area;
based on the three-dimensional information of the touch points, calculating an included angle between a vector of the touch points and a vector of a horizontal coordinate axis of the target touch area, wherein the vector of the touch points is a vector pointing to the touch points from a coordinate origin of the target touch area;
and obtaining the coordinates of the touch points on the target touch area based on the three-dimensional information of the touch points and the included angle.
2. The touch positioning method of claim 1, wherein before the acquiring, through the depth camera, of the three-dimensional information of the touch point detected on the target touch area, the method further comprises:
arranging a depth camera on a plane to be detected;
setting a shooting area of the depth camera, wherein the vertical distance from each point in the shooting area of the depth camera to the plane to be detected is smaller than a preset distance and is greater than or equal to 0;
determining the target touch area on the plane to be detected based on a shooting area of the depth camera, wherein the shooting area at least comprises the target touch area.
3. The touch positioning method of claim 2, wherein the determining the target touch area on the plane to be detected based on the shooting area of the depth camera comprises:
determining a first target point on the plane to be detected according to the position of the depth camera, wherein the horizontal distance between the depth camera and the first target point is smaller than a first preset value;
determining a second target point, a third target point and a fourth target point on the plane to be detected according to the position of the depth camera, and connecting the first target point, the second target point, the third target point and the fourth target point to form a rectangular area, wherein the rectangular area is used as the target touch area, and the first target point is used as the origin of coordinates of the target touch area;
and acquiring three-dimensional coordinates of the first target point, the second target point, the third target point and the fourth target point.
4. The touch positioning method of claim 3, wherein the setting of the shooting area of the depth camera comprises:
setting the shooting area by installing a lens clamp on the depth camera, wherein the gap of the lens clamp is smaller than a second preset value.
5. The touch positioning method of claim 1, wherein before acquiring the three-dimensional information of the touch point detected on the target touch area through the depth camera, the method further comprises:
sensing, through the depth camera, that a touch operation occurs on the plane where the target touch area is located;
and if the corresponding touch point is outside the target touch area, ending the process.
6. The touch positioning method of claim 1, wherein the obtaining coordinates of the touch point on the target touch area based on the three-dimensional information of the touch point and the included angle comprises:
obtaining the distance between the touch point and the coordinate origin of the target touch area based on the three-dimensional information of the touch point and the three-dimensional information of the coordinate origin of the target touch area;
and obtaining the coordinates of the touch points on the target touch area according to the distance and the included angle.
7. The touch positioning method according to any one of claims 1 to 6, wherein after obtaining the coordinates of the touch point on the target touch area based on the three-dimensional information of the touch point and the included angle, the method further comprises:
and sending the coordinates of the touch point on the target touch area to target equipment, and indicating the target equipment to execute target operation.
8. A touch positioning system, characterized in that the system comprises a touch positioning device and a target device, the touch positioning device being connected with the target device in a wired or wireless manner, wherein:
the touch positioning device is used for executing the method of any one of claims 1 to 7;
and the target device is used for acquiring the touch point coordinates sent by the touch positioning device and executing a target operation according to the touch point coordinates.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201910594180.3A 2019-07-03 2019-07-03 Touch positioning method and device and terminal equipment Pending CN112181211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910594180.3A CN112181211A (en) 2019-07-03 2019-07-03 Touch positioning method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN112181211A (en) 2021-01-05

Family

ID=73914930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910594180.3A Pending CN112181211A (en) 2019-07-03 2019-07-03 Touch positioning method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN112181211A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103207709A (en) * 2013-04-07 2013-07-17 布法罗机器人科技(苏州)有限公司 Multi-touch system and method
CN105373266A (en) * 2015-11-05 2016-03-02 上海影火智能科技有限公司 Novel binocular vision based interaction method and electronic whiteboard system
CN106095199A (en) * 2016-05-23 2016-11-09 广州华欣电子科技有限公司 A kind of touch-control localization method based on projection screen and system
CN106125994A (en) * 2016-06-17 2016-11-16 深圳迪乐普数码科技有限公司 Coordinate matching method and use control method and the terminal of this coordinate matching method

Similar Documents

Publication Publication Date Title
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
US11290651B2 (en) Image display system, information processing apparatus, image display method, image display program, image processing apparatus, image processing method, and image processing program
KR101900873B1 (en) Method, device and system for acquiring antenna engineering parameters
CN108668086B (en) Automatic focusing method and device, storage medium and terminal
CN108965835B (en) Image processing method, image processing device and terminal equipment
CN104081307A (en) Image processing apparatus, image processing method, and program
CN108134903B (en) Shooting method and related product
CN112017133B (en) Image display method and device and electronic equipment
CN108693997B (en) Touch control method and device of intelligent interaction panel and intelligent interaction panel
CN110738185B (en) Form object identification method, form object identification device and storage medium
CN105426067A (en) Desktop icon replacement method and apparatus
CN106569716B (en) Single-hand control method and control system
CN115097975A (en) Method, apparatus, device and storage medium for controlling view angle conversion
CN111381224B (en) Laser data calibration method and device and mobile terminal
CN110858814B (en) Control method and device for intelligent household equipment
CN112262364A (en) Electronic device and system for generating objects
WO2021004413A1 (en) Handheld input device and blanking control method and apparatus for indication icon of handheld input device
CN109444905B (en) Dynamic object detection method and device based on laser and terminal equipment
CN115031635A (en) Measuring method and device, electronic device and storage medium
CN112181211A (en) Touch positioning method and device and terminal equipment
KR20130085094A (en) User interface device and user interface providing thereof
CN112308768B (en) Data processing method, device, electronic equipment and storage medium
CN109308113A (en) Non-contact inputting devices and method, the display that can be carried out contactless input
CN112308767A (en) Data display method and device, storage medium and electronic equipment
KR20220053394A (en) Electronic device and method for controlling display of a plurality of objects on wearable display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210105