CN110471577B - 360-degree omnibearing virtual touch control method, system, platform and storage medium - Google Patents

360-degree omnibearing virtual touch control method, system, platform and storage medium

Info

Publication number
CN110471577B
CN110471577B CN201910759816.5A
Authority
CN
China
Prior art keywords
camera
matrix
virtual touch
degree
acquiring
Prior art date
Legal status
Active
Application number
CN201910759816.5A
Other languages
Chinese (zh)
Other versions
CN110471577A (en)
Inventor
赖习章
曾庆彬
邓奕明
Current Assignee
Zhongshan Yelang Intelligent Technology Co ltd
Original Assignee
Zhongshan Yelang Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongshan Yelang Intelligent Technology Co ltd filed Critical Zhongshan Yelang Intelligent Technology Co ltd
Publication of CN110471577A publication Critical patent/CN110471577A/en
Application granted granted Critical
Publication of CN110471577B publication Critical patent/CN110471577B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03542Light pens for emitting or receiving light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of computers, in particular to a 360-degree omnibearing virtual touch control method, system, platform and storage medium. A laser projects parallel light, and the illuminated-region data are acquired for background modeling; fingertip position image data illuminated by the parallel laser are acquired in combination with the illuminated-region data; the distance between the fingertip and the center of the camera device is obtained through the cameras combined with triangulation ranging; and control instruction information is sent according to the obtained distance to realize real-time touch control. The method supports simultaneous operation by multiple users and effectively improves sharing efficiency and enjoyment; the up-down and left-right directions of each user are defined relative to that user, so every user feels the touch panel is directly in front of them.

Description

360-degree omnibearing virtual touch control method, system, platform and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a 360-degree omnibearing virtual touch control method, a system, a platform and a storage medium.
Background
At present, in round-table conferences, people discuss a particular point in a PowerPoint presentation, annotate it, and so on. This collaborative working mode is common in companies, yet no good scheme currently exists to solve the problem of cooperative operation;
the existing solutions that realize touch control through an infrared line laser, such as virtual keyboards and virtual mice, do not support simultaneous operation by multiple users.
Disclosure of Invention
Aiming at the technical problem that simultaneous operation by multiple users is not supported, the invention provides a 360-degree omnibearing virtual touch control method, system, platform and storage medium, which support simultaneous operation by multiple users and effectively improve sharing efficiency and enjoyment; the up-down and left-right directions of each user are defined relative to that user, so every user feels the touch panel is directly in front of them.
The invention is realized by the following technical scheme:
a360-degree omnibearing virtual touch control method specifically comprises the following steps:
according to the laser projection parallel light, acquiring illuminated partial data for background modeling;
acquiring parallel laser illuminated finger tip position image data in combination with the illuminated portion data;
obtaining the distance between the finger tip and the center of the camera device through the camera and combining with the triangular ranging;
and sending control instruction information according to the obtained distance to realize real-time touch control.
Further, the camera device provides 360-degree coverage either with monocular cameras or with binocular cameras;
the camera device specifically operates in a multi-person mode or a single-person mode.
Further, before the step of acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser, the method further comprises the following step:
acquiring the intrinsic matrix and extrinsic matrix of the camera.
Further, the step of acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data further comprises the following step:
performing contour recognition and matching processing on the fingertip position image data within the field of view of the camera.
Further, the step of obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging further comprises the following steps:
acquiring normal and tangential position data of the finger touch point;
and obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
In order to achieve the above objective, the present invention further provides a 360-degree omnibearing virtual touch control system, which specifically comprises:
the first acquisition unit, used for acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser;
the second acquisition unit, used for acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data;
the third acquisition unit, used for obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and the control unit, used for sending control instruction information according to the obtained distance to realize real-time touch control.
Further, the system further comprises:
the first acquisition module, used for acquiring the intrinsic matrix and extrinsic matrix of the camera;
the second acquisition unit further includes:
the second acquisition module, used for performing contour recognition and matching processing on fingertip position image data within the field of view of the camera;
the third acquisition unit further includes:
the third acquisition module, used for acquiring normal and tangential position data of the finger touch point;
and the fourth acquisition module, used for obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
In order to achieve the above object, the present invention further provides a 360-degree omnibearing virtual touch control platform, comprising:
a processor, a memory, and a 360-degree omnibearing virtual touch control platform control program;
wherein the processor executes the 360-degree omnibearing virtual touch control platform control program, the program is stored in the memory, and when executed it implements the steps of the 360-degree omnibearing virtual touch control method.
In order to achieve the above objective, the present invention further provides a computer-readable storage medium storing a 360-degree omnidirectional virtual touch platform control program, which implements the steps of the 360-degree omnidirectional virtual touch method.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a 360-degree omnibearing virtual touch control method:
according to the parallel light projected by the laser, acquiring the illuminated-region data for background modeling;
acquiring fingertip position image data illuminated by the parallel laser, in combination with the illuminated-region data;
obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and sending control instruction information according to the obtained distance to realize real-time touch control.
The corresponding system units and modules:
the first acquisition unit, used for acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser;
the second acquisition unit, used for acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data;
the third acquisition unit, used for obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and the control unit, used for sending control instruction information according to the obtained distance to realize real-time touch control.
Correspondingly, the system further comprises:
the first acquisition module, used for acquiring the intrinsic matrix and extrinsic matrix of the camera;
the second acquisition unit further includes:
the second acquisition module, used for performing contour recognition and matching processing on fingertip position image data within the field of view of the camera;
the third acquisition unit further includes:
the third acquisition module, used for acquiring normal and tangential position data of the finger touch point;
and the fourth acquisition module, used for obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
And, correspondingly, a platform and a storage medium;
the method supports simultaneous operation by multiple users and effectively improves sharing efficiency and enjoyment; the up-down and left-right directions of each user are defined relative to that user, so every user feels the touch panel is directly in front of them.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a 360-degree omnibearing virtual touch control method according to the present invention;
FIG. 2 is a schematic diagram of three cameras and laser positions of a 360-degree omnidirectional virtual touch method according to the present invention;
FIG. 3 is a schematic view illustrating a direction definition of a 360-degree omni-directional virtual touch method according to the present invention;
FIG. 4 is a diagram showing a binocular camera placement of a 360-degree omnidirectional virtual touch method according to the present invention;
FIG. 5 is a schematic view illustrating a direction definition of a 360-degree omni-directional virtual touch method according to the present invention;
FIG. 6 is a schematic diagram illustrating the division of the main optical axis and the boundary of the region in a 360-degree omni-directional virtual touch method according to the present invention;
FIG. 7 is a diagram illustrating a 360-degree omni-directional virtual touch method camera distribution according to the present invention;
FIG. 8 is a schematic view of the two-camera relation in a 360-degree omnidirectional virtual touch method according to the present invention;
FIG. 9 is a schematic diagram of a 360-degree omnidirectional virtual touch system architecture according to the present invention;
FIG. 10 is a schematic diagram of a module frame of a 360 degree omnidirectional virtual touch system according to the present invention;
FIG. 11 is a schematic diagram of a 360-degree omnidirectional virtual touch platform architecture according to the present invention;
FIG. 12 is a schematic diagram of a computer-readable storage medium architecture according to an embodiment of the invention;
the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
For a better understanding of the present invention, its objects, technical solutions and advantages, further description of the present invention will be made with reference to the drawings and detailed description, and further advantages and effects will be readily apparent to those skilled in the art from the present disclosure.
The invention may be practiced or carried out in other embodiments and details within the scope and range of equivalents of the various features and advantages of the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear … …) are included in the embodiments of the present invention, the directional indications are merely used to explain the relative positional relationship, movement conditions, etc. between the components in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indications are correspondingly changed.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present invention, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. Secondly, the technical solutions of the embodiments may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can realize the technical solutions, and when the technical solutions are contradictory or cannot be realized, the technical solutions are considered to be absent and are not within the scope of protection claimed in the present invention.
Preferably, the 360-degree omnibearing virtual touch control method is applied to one or more terminals or servers. The terminal is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, etc.
The terminal can be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal can perform man-machine interaction with a client through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The invention discloses a 360-degree omnibearing virtual touch control method, a system, a platform and a storage medium.
Fig. 1 is a flowchart of a 360-degree omnibearing virtual touch control method according to an embodiment of the present invention.
In this embodiment, the 360-degree omnibearing virtual touch control method can be applied to a terminal or a fixed terminal with a display function; the terminal includes, but is not limited to, a personal computer, a smart phone, a tablet computer, a desktop computer, an all-in-one machine with a camera, and the like.
The 360-degree omnibearing virtual touch control method can also be applied to a hardware environment formed by a terminal and a server connected with the terminal through a network. Networks include, but are not limited to: a wide area network, a metropolitan area network, or a local area network. The 360-degree omnibearing virtual touch control method of the embodiment of the invention can be executed by a server, a terminal or both.
For example, for a terminal that needs 360-degree omnibearing virtual touch control, the 360-degree omnibearing virtual touch control function provided by the method of the invention can be directly integrated on the terminal, or a client for realizing the method of the invention is installed. For another example, the method provided by the invention can also run on devices such as a server in the form of a software development kit (Software Development Kit, SDK), an interface of 360-degree omnibearing virtual touch control functions is provided in the form of the SDK, and the terminal or other devices can realize the 360-degree omnibearing virtual touch control functions through the provided interface.
As shown in fig. 1, the present invention provides a 360-degree omnibearing virtual touch control method, which specifically includes the following steps. According to different requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
According to the parallel light projected by the laser, acquiring the illuminated-region data for background modeling;
acquiring fingertip position image data illuminated by the parallel laser, in combination with the illuminated-region data;
obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and sending control instruction information according to the obtained distance to realize real-time touch control.
Specifically, the camera device provides 360-degree coverage either with monocular cameras or with binocular cameras; that is, binocular vision requires a number and placement of cameras giving 360-degree coverage, and monocular vision likewise requires a number and placement of cameras giving 360-degree coverage.
The camera device specifically operates in a multi-person mode or a single-person mode.
Preferably, before the step of acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser, the method further comprises the following step:
acquiring the intrinsic matrix and extrinsic matrix of the camera.
The step of acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data further comprises the following step:
performing contour recognition and matching processing on the fingertip position image data within the field of view of the camera.
The step of obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging further comprises the following steps:
acquiring normal and tangential position data of the finger touch point; specifically, in the direction definition, the vertical direction is the normal and the horizontal direction is the tangential;
and obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
That is, in an embodiment of the present invention,
the laser emits parallel light, and background modeling is performed on the illuminated region;
when the parallel laser lights up a fingertip, the distance between the fingertip and the center of the device is obtained through triangulation ranging by the cameras;
when the device is set to multi-person mode, each device can support multiple users simultaneously: when several light spots are found, each light spot has its own direction coordinate axes, the movement offset of each is calculated separately, and mouse messages are sent to the system at the same time, as sketched below.
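As a concrete illustration (not the patent's own code), the sketch below implements this spot-detection and per-spot offset logic with OpenCV; the MOG2 background model, the blob-size threshold and the nearest-center matching are assumed stand-ins for the patent's background modeling and spot bookkeeping.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def spot_centers(frame):
    """Centers of laser spots lit by fingertips, found via background modeling."""
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 5:  # ignore tiny noise blobs
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers

def per_spot_offsets(prev, curr):
    """Each spot keeps its own axes: match each current center to the nearest
    previous one and report an independent movement offset per user/spot."""
    if not prev:
        return []
    offsets = []
    for c in curr:
        p = min(prev, key=lambda q: np.hypot(c[0] - q[0], c[1] - q[1]))
        offsets.append((c[0] - p[0], c[1] - p[1]))  # would become a mouse message
    return offsets
```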
The device consists of three cameras for 360-degree omnibearing recognition, and is equipped with at least 3 infrared line laser transmitters with opening angles greater than 120 degrees, or one 360-degree omnibearing infrared line laser transmitter.
As shown in fig. 2, the placement of the three cameras and three infrared light sources is illustrated.
As shown in fig. 3, directions are determined for users at different orientations using concentric circles centered on the center of the device: the tangential direction is defined as the left-right direction, and the normal direction as the up-down direction.
The specific technical principle is as follows:
1) The laser emits parallel light to model the background of the illuminated region; once modeling is complete, this region can be used.
2) When the parallel laser lights up a fingertip, the distance between the fingertip and the center of the device is obtained through triangulation ranging by the cameras, and the mouse is controlled using the directions defined in fig. 3 (a sketch of this mapping follows below).
3) When the device is set to multi-person mode, each device can support multiple users simultaneously: when several light spots are found, each light spot has its own direction coordinate axes, the movement offset of each is calculated separately, and mouse messages are sent to the system at the same time.
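To make step 2) concrete, the following minimal sketch maps a touch-point displacement into a given user's left-right/up-down axes using the concentric-circle convention of fig. 3; the planar 2-D simplification and the function name are assumptions, not the patent's code.

```python
import numpy as np

def user_relative_motion(p_prev, p_curr, center=np.zeros(2)):
    """Map a touch-point displacement into per-user axes: the normal direction
    (radially towards/away from the device center) is up-down, the tangential
    direction (along the concentric circle) is left-right."""
    r = p_prev - center
    normal = r / np.linalg.norm(r)                # radial unit vector
    tangent = np.array([-normal[1], normal[0]])   # 90-degree in-plane rotation
    d = p_curr - p_prev
    return float(d @ tangent), float(d @ normal)  # (left-right, up-down) offsets
```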
In the embodiment of the invention, the proposal solves the problem of 360-degree full coverage with monocular/binocular vision.
Specifically, as shown in fig. 4, the binocular scheme (taking 6 cameras as an example): the number of cameras required is defined by the division of user regions; with N user regions, the number of required cameras is 2N, and the horizontal viewing angle of each camera must exceed 360/N degrees. Thus, in the 6-camera case, the camera horizontal viewing angle must be greater than 120 degrees.
As shown in fig. 2, the monocular scheme (taking 3 cameras as an example) follows a strategy similar to the binocular approach, except that no overlap is required: assuming a camera horizontal viewing angle of a degrees, the number of cameras required is ⌈360/a⌉, i.e. rounded up, and the number of divided areas is likewise C = ⌈360/a⌉. Thus, in the 3-camera solution, the viewing angle of each camera must be greater than 120 degrees.
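The coverage arithmetic of the two schemes can be summarized in a short sketch (function names are illustrative):

```python
import math

def monocular_camera_count(view_angle_deg: float) -> int:
    """C = ceil(360 / a): one camera per divided area, no overlap needed."""
    return math.ceil(360 / view_angle_deg)

def binocular_camera_count(n_regions: int) -> int:
    """2N cameras for N user regions; each camera must see more than 360/N degrees."""
    return 2 * n_regions

assert monocular_camera_count(135) == 3  # the 3-camera monocular example
assert binocular_camera_count(3) == 6    # the 6-camera binocular example
```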
In the embodiment of the present invention, specifically, in the direction definition the vertical direction is the normal direction and the horizontal direction is the tangential direction.
As shown in fig. 5, when the user touches point P with a finger, point P is obtained according to its normal and tangential positions.
In the steps of the embodiment of the invention, distortion correction is also involved. Specifically, the camera intrinsic matrix K and the distortion matrix D are obtained according to Zhang Zhengyou's calibration method and a radial distortion model calculation;
where, for a real fisheye lens, Kannala proposed a general polynomial approximation model of the fisheye camera in order to facilitate its calibration. θ_d is an odd function of θ; expanding the equation according to the Taylor series, θ_d can be expressed as an odd-degree polynomial of θ.
For practical computational convenience, the degree up to which θ_d is taken must be determined; Kannala proposed that the first 5 terms can approximate the various projection models. With d_0 = 1, the equation is:
θ_d = θ + d_1·θ³ + d_2·θ⁵ + d_3·θ⁷ + d_4·θ⁹ (2)
The transformation from a point in space to a point on the fisheye image is then as follows, where (X_c, Y_c, Z_c)^T is the point in the camera coordinate system, X is the point in the three-dimensional world coordinate system, and R and T are their rotation matrix and translation matrix:
(X_c, Y_c, Z_c)^T = R·X + T (3)
x_c = X_c/Z_c, y_c = Y_c/Z_c (4)
r² = x_c² + y_c² (5)
θ = arctan(r) (6)
θ_d = θ + d_1·θ³ + d_2·θ⁵ + d_3·θ⁷ + d_4·θ⁹ (7)
x_d = (θ_d/r)·x_c, y_d = (θ_d/r)·y_c (8)
u = f_x·x_d + c_x, v = f_y·y_d + c_y (9)
where (u, v)^T is the corresponding coordinate point on the fisheye image.
By this method the camera distortion can be removed and undistorted points obtained.
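For illustration, OpenCV's fisheye module implements this same Kannala-Brandt model; the sketch below is a hedged example of undistorting a point, assuming K and D came from a Zhang-style calibration (the numeric values are placeholders).

```python
import cv2
import numpy as np

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])               # intrinsic matrix (f_x, f_y, c_x, c_y)
D = np.array([0.02, -0.003, 0.001, -0.0002])  # d_1..d_4 of the theta_d polynomial

pts = np.array([[[500.0, 300.0]]])            # distorted pixel (u, v)
undistorted = cv2.fisheye.undistortPoints(pts, K, D, P=K)
print(undistorted)                            # distortion-free pixel coordinates
```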
Assume, without loss of generality, the particular scenario of 4 users, 4 working areas and 8 cameras, and for simplicity of illustration a camera viewing angle of 135° (by the assumption above, a viewing angle greater than 360/4 = 90° would already suffice).
As shown in fig. 6, the main optical axis of each camera need not coincide with the boundary or angular bisector of its divided area, nor lie on a 360/N bisector; the common practice, however, is to divide equally. For camera cam_i, the included angle θ between the main optical axis and the region boundary can take any value.
As shown in fig. 7, the viewing angle of each camera is 135 degrees and two cameras are responsible for each working area (the dotted-line area in the figure); i, j denotes the boundary shared by the i-th and j-th cameras. Each area has a relative coordinate system; the position of camera No. 0 can be defined as the standard world coordinate system, and every other camera has a rotation matrix R and a translation matrix T relative to camera No. 0.
The rotation-translation matrix combination RT of each of the 8 cameras (camera No. 0 through camera No. 7) then follows from this layout: camera No. 0 is the reference, with R_0 = I and T_0 = 0, and each camera i has its pair (R_i, T_i) expressed relative to camera No. 0.
Now assume that the intrinsic matrices of all the cameras are identical (non-identical intrinsic matrices would not affect the final result; this is assumed only for ease of calculation). Let the intrinsic matrix of each camera be
K = | f_x  0   c_x |
    | 0   f_y  c_y |
    | 0    0    1  |
where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis (f_x = s·f_y, because industrial CMOS pixels are not necessarily strictly square), and c_x and c_y are the image coordinates of the principal point;
the rotation matrix R and translation matrix T between the double shots of each working area can be obtained by using PnP and Given decomposition, respectively, and according to the camera arrangement design described above, the visible areas of the cameras form a loop, and the arrangement can enable each camera to have intersecting visible areas between every two cameras.
After obtaining the extrinsic matrices of the cameras, the relative relations between the cameras can be established according to the working areas; these relations are given by the following expressions, with a visual illustration in fig. 8 (where x_r and x_l are the matching points in the right and left camera coordinate systems respectively, X is a point in the world coordinate system, R_r and T_r are the rotation and translation matrices of the right camera, R_l and T_l are those of the left camera, K_r and K_l are the intrinsic matrices of the right and left cameras, and e_l and e_r are the epipoles of the left and right cameras):
x_r = R_r·X + T_r ……①
x_l = R_l·X + T_l ……②
Combining ① and ②: the left camera rotation matrix is I and its translation matrix is 0;
the right camera rotation matrix is R_l·R_r^T and its translation matrix is T_l − R_l·R_r^T·T_r.
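A minimal numpy sketch of this relative-pose step, under the same convention (each camera's pose given in the world frame), might look as follows:

```python
import numpy as np

def relative_pose(R_l, T_l, R_r, T_r):
    """Pose mapping right-camera coordinates into the left-camera frame,
    following x_l = (R_l R_r^T) x_r + (T_l - R_l R_r^T T_r)."""
    R = R_l @ R_r.T
    T = T_l - R_l @ R_r.T @ T_r
    return R, T
```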
By the above transformation, the pose matrices of all 8 cameras can be obtained, taking camera No. 0 as the standard world coordinate system.
In order to let users operate the system normally from their respective areas, the coordinate systems must also be adjusted: each user moves within the relative coordinate system of their own area, so the relative coordinate transformations of the four areas are recorded as follows:
Camera No. 0 and camera No. 1 control area A; their rotation-translation matrices are as follows:
camera No. 0: R_0′ = I, T_0′ = 0;
camera No. 1: R_1′ = R_0·R_1^T = R_1^T, T_1′ = −R_1^T·T_1.
Camera No. 2 and camera No. 3 control area B; their rotation-translation matrices are as follows:
camera No. 2: R_2′ = I, T_2′ = 0;
camera No. 3: R_3′ = R_2·R_3^T, T_3′ = T_2 − R_2·R_3^T·T_3.
Camera No. 4 and camera No. 5 control area C; their rotation-translation matrices are as follows:
camera No. 4: R_4′ = I, T_4′ = 0;
camera No. 5: R_5′ = R_4·R_5^T, T_5′ = T_4 − R_4·R_5^T·T_5.
Camera No. 6 and camera No. 7 control area D; their rotation-translation matrices are as follows:
camera No. 6: R_6′ = I, T_6′ = 0;
camera No. 7: R_7′ = R_6·R_7^T, T_7′ = T_6 − R_6·R_7^T·T_7.
The four transformations above allow ranging within each area; the three-dimensional coordinates obtained are expressed relative to the respective standard coordinate systems, so by transforming the coordinates through the 8 relation matrices above, the coordinates of every user can be obtained in the frame of camera No. 0. The system can then perform global planning and obtain the usage trajectory and gesture of each user.
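As an illustration (a sketch, not the patent's code), a point triangulated in an area's standard frame can be pulled back into the camera-No.-0 world frame by inverting x = R_i·X + T_i:

```python
import numpy as np

def area_point_to_world(X_area, R_i, T_i):
    """Invert x = R_i X + T_i to express an area-frame point in camera-0 coordinates."""
    return R_i.T @ (X_area - T_i)
```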
In the embodiment of the invention, contour recognition and matching processing is also involved. Because the scene operates under infrared laser illumination, a Canny operator can be used to obtain the corresponding contour; a click-position coordinate point x on the contour of one of the images is obtained, and the problem then becomes finding the matching point in the corresponding camera.
Consider the two cameras corresponding to it; their extrinsic matrix relations are respectively as follows:
define the left camera as the standard coordinate system, with rotation matrix I, translation matrix 0 and intrinsic matrix K;
the rotation matrix of the right camera is R, its translation matrix is T, and its intrinsic matrix is K′. The projection matrix of the left camera is P = K[I|0]; the projection matrix of the right camera is P′ = K′[R|T];
and the fundamental matrix F = [e′]_× P′P^+, where P^+ is the pseudo-inverse of P.
Then P·P^+ = I and P·C = 0, where C is the world coordinate of the optical center;
e′ = P′·C; F = [P′·C]_× P′P^+ = [K′T]_× K′RK^{-1}.
According to the above relations, the epipolar line on the other image is l = F·x; having obtained the epipolar line equation on the corresponding image, the SGBM algorithm can be run within the corresponding epipolar region to search for the matching point x′.
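Under stated assumptions, the sketch below illustrates this pipeline: building F from the calibrated pair via F = [K′T]_× K′RK^{-1}, computing the epipolar line l = F·x for a clicked point, and (commented) a rectified-pair SGBM disparity search; all parameter values are illustrative.

```python
import cv2
import numpy as np

def fundamental_from_calib(K_l, K_r, R, T):
    """F = [K'T]_x K'R K^-1 for a left camera at (I, 0) and right camera at (R, T)."""
    t = (K_r @ np.asarray(T).reshape(3, 1)).ravel()
    t_cross = np.array([[0, -t[2], t[1]],
                        [t[2], 0, -t[0]],
                        [-t[1], t[0], 0]])
    return t_cross @ K_r @ R @ np.linalg.inv(K_l)

def epipolar_line(F, x):
    """Line l = F x on the other image for a clicked point x = (u, v)."""
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.hypot(l[0], l[1])  # normalize so (a, b) is a unit normal

# Disparity search with SGBM on a rectified grayscale pair (imL, imR):
# matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
# disparity = matcher.compute(imL, imR)
```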
At this point the three-dimensional coordinate point must be calculated. Because x × (P·X) = 0, the following equations hold (p^{iT} denoting the i-th row of P):
x·(p^{3T}X) − (p^{1T}X) = 0 ……③
y·(p^{3T}X) − (p^{2T}X) = 0 ……④
x·(p^{2T}X) − y·(p^{1T}X) = 0 ……⑤
Since ⑤ can be represented as a linear combination of ③ and ④, it can be omitted.
Let
A = [ x·p^{3T} − p^{1T}; y·p^{3T} − p^{2T}; x′·p′^{3T} − p′^{1T}; y′·p′^{3T} − p′^{2T} ] ……⑥
The least-squares solution of the homogeneous equation A·X = 0 can then be solved to obtain the three-dimensional point.
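A minimal sketch of this DLT triangulation, solving A·X = 0 by SVD (numpy assumed; P and Pp are the two 3×4 projection matrices):

```python
import numpy as np

def triangulate(P, Pp, x, xp):
    """Stack rows (3)-(4) for both cameras and take the least-squares
    null vector of A via SVD, then dehomogenize."""
    A = np.vstack([
        x[0] * P[2] - P[0],     # x (p^3T X) - (p^1T X) = 0
        x[1] * P[2] - P[1],     # y (p^3T X) - (p^2T X) = 0
        xp[0] * Pp[2] - Pp[0],
        xp[1] * Pp[2] - Pp[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]         # 3-D point
```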
In order to achieve the above objective, as shown in fig. 9, the present invention further provides a 360-degree omnibearing virtual touch control system, the system comprising:
the first acquisition unit, used for acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser;
the second acquisition unit, used for acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data;
the third acquisition unit, used for obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and the control unit, used for sending control instruction information according to the obtained distance to realize real-time touch control.
Accordingly, as shown in fig. 10, the system further includes:
the first acquisition module, used for acquiring the intrinsic matrix and extrinsic matrix of the camera;
the second acquisition unit further includes:
the second acquisition module, used for performing contour recognition and matching processing on fingertip position image data within the field of view of the camera;
the third acquisition unit further includes:
the third acquisition module, used for acquiring normal and tangential position data of the finger touch point;
and the fourth acquisition module, used for obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
The invention also provides a 360-degree omnibearing virtual touch control platform, as shown in fig. 11, comprising: a processor, a memory, and a 360-degree omnibearing virtual touch control platform control program;
the 360-degree omnibearing virtual touch control platform control program is executed on the processor and stored in the memory, and when executed it implements the steps of the 360-degree omnibearing virtual touch control method, for example:
according to the laser projection parallel light, acquiring illuminated partial data for background modeling;
acquiring parallel laser illuminated finger tip position image data in combination with the illuminated portion data;
obtaining the distance between the finger tip and the center of the camera device through the camera and combining with the triangular ranging;
and sending control instruction information according to the obtained distance to realize real-time touch control.
The specific details of the steps are set forth above and are not repeated here.
In the embodiment of the invention, the processor built into the 360-degree omnibearing virtual touch control platform may be composed of integrated circuits, for example a single packaged integrated circuit, or of several integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, various control chips, and the like. The processor connects the various components through various interfaces and lines, and executes the various functions of 360-degree omnibearing virtual touch control and processes data by running or executing the programs or units stored in the memory and calling the data stored in the memory;
the memory is used for storing program code and various data, is arranged in the 360-degree omnibearing virtual touch control platform, and realizes high-speed, automatic access to programs or data during operation.
The memory includes read-only memory (ROM), random-access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, tape storage, or any other medium that can be used to carry or store computer data.
The present invention also proposes a computer-readable storage medium, as shown in fig. 12, which stores a 360-degree omnidirectional virtual touch platform control program; the program implements the steps of the 360-degree omnidirectional virtual touch method, for example:
according to the laser projection parallel light, acquiring illuminated partial data for background modeling;
acquiring parallel laser illuminated finger tip position image data in combination with the illuminated portion data;
obtaining the distance between the finger tip and the center of the camera device through the camera and combining with the triangular ranging;
and sending control instruction information according to the obtained distance to realize real-time touch control.
The specific details of the steps are set forth above and are not repeated here;
in the description of embodiments of the invention, it should be noted that any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and that scope of preferred embodiments of the invention includes additional implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, as would be understood by those reasonably skilled in the art of the embodiments of the invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, system that includes a processing module, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM).
In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
Through the above method steps, system, platform and storage medium, simultaneous operation by multiple users can be supported, effectively improving sharing efficiency and enjoyment; the up-down and left-right directions of each user are defined relative to that user, so every user feels the touch panel is directly in front of them.
That is, the system and device of the method can support simultaneous operation by multiple users, effectively improving sharing efficiency and enjoyment: background modeling of the illuminated region is performed by the laser emitting parallel light; when the parallel laser lights up a fingertip, the distance between the fingertip and the center of the device is obtained through binocular or monocular camera vision; when several light spots are found, each light spot has its own direction coordinate axes, the movement offset of each is calculated separately, and mouse messages are sent to the system at the same time for real-time operation.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (8)

1. A 360-degree omnibearing virtual touch control method, characterized by comprising the following steps:
according to the parallel light projected by the laser, acquiring the illuminated-region data for background modeling;
acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data; performing contour recognition and matching processing on the fingertip position image data within the field of view of the camera; obtaining the corresponding contour using a Canny operator, and obtaining the click-position coordinate point x of the contour of one of the images; the problem at this point is to find the matching point in the corresponding camera;
considering the two cameras corresponding to it, their extrinsic matrix relations are respectively as follows:
the left camera is defined as the standard coordinate system, with rotation matrix I, translation matrix 0 and intrinsic matrix K;
the rotation matrix of the right camera is R, its translation matrix is T, and its intrinsic matrix is K′; the projection matrix of the left camera is P = K[I|0]; the projection matrix of the right camera is P′ = K′[R|T];
the fundamental matrix F = [e′]_× P′P^+, where P^+ is the pseudo-inverse of P;
then P·P^+ = I and P·C = 0, where C is the world coordinate of the optical center;
e′ = P′·C; then F = [P′·C]_× P′P^+ = [K′T]_× K′RK^{-1};
according to the above relations, the epipolar line on the other image is l = F·x; having obtained the epipolar line equation on the corresponding image, the SGBM algorithm is run within the corresponding epipolar region to search for the matching point x′;
the three-dimensional coordinate point must then be calculated; because x × (P·X) = 0, the following equations hold (p^{iT} denoting the i-th row of P):
x·(p^{3T}X) − (p^{1T}X) = 0 ……③
y·(p^{3T}X) − (p^{2T}X) = 0 ……④
x·(p^{2T}X) − y·(p^{1T}X) = 0 ……⑤
since ⑤ can be represented as a linear combination of ③ and ④, it can be eliminated;
let A = [ x·p^{3T} − p^{1T}; y·p^{3T} − p^{2T}; x′·p′^{3T} − p′^{1T}; y′·p′^{3T} − p′^{2T} ] ……⑥
the least-squares solution of the homogeneous equation A·X = 0 is solved to obtain the three-dimensional point;
obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and sending control instruction information according to the obtained distance to realize real-time touch control.
2. The 360-degree omnibearing virtual touch method of claim 1, wherein the camera device provides 360-degree coverage either with monocular cameras or with binocular cameras;
the camera device specifically operates in a multi-person mode or a single-person mode.
3. The 360-degree omnibearing virtual touch method of claim 1, wherein before the step of acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser, the method further comprises the following step:
acquiring the intrinsic matrix and extrinsic matrix of the camera.
4. The 360-degree omnibearing virtual touch control method according to claim 1, wherein the step of obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging comprises the following steps:
acquiring normal and tangential position data of the finger touch point;
and obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
5. A 360-degree omnibearing virtual touch control system, characterized in that the system specifically comprises:
the first acquisition unit, used for acquiring the illuminated-region data for background modeling according to the parallel light projected by the laser;
the second acquisition unit, used for acquiring fingertip position image data illuminated by the parallel laser in combination with the illuminated-region data;
the second acquisition unit further includes:
the second acquisition module, used for performing contour recognition and matching processing on fingertip position image data within the field of view of the camera; obtaining the corresponding contour using a Canny operator, and obtaining the click-position coordinate point x of the contour of one of the images; the problem at this point is to find the matching point in the corresponding camera;
considering the two cameras corresponding to it, their extrinsic matrix relations are respectively as follows:
the left camera is defined as the standard coordinate system, with rotation matrix I, translation matrix 0 and intrinsic matrix K;
the rotation matrix of the right camera is R, its translation matrix is T, and its intrinsic matrix is K′; the projection matrix of the left camera is P = K[I|0]; the projection matrix of the right camera is P′ = K′[R|T];
the fundamental matrix F = [e′]_× P′P^+, where P^+ is the pseudo-inverse of P;
then P·P^+ = I and P·C = 0, where C is the world coordinate of the optical center;
e′ = P′·C; then F = [P′·C]_× P′P^+ = [K′T]_× K′RK^{-1};
according to the above relations, the epipolar line on the other image is l = F·x; having obtained the epipolar line equation on the corresponding image, the SGBM algorithm is run within the corresponding epipolar region to search for the matching point x′;
the three-dimensional coordinate point must then be calculated; because x × (P·X) = 0, the following equations hold (p^{iT} denoting the i-th row of P):
x·(p^{3T}X) − (p^{1T}X) = 0 ……③
y·(p^{3T}X) − (p^{2T}X) = 0 ……④
x·(p^{2T}X) − y·(p^{1T}X) = 0 ……⑤
since ⑤ can be represented as a linear combination of ③ and ④, it can be eliminated;
let A = [ x·p^{3T} − p^{1T}; y·p^{3T} − p^{2T}; x′·p′^{3T} − p′^{1T}; y′·p′^{3T} − p′^{2T} ] ……⑥
the least-squares solution of the homogeneous equation A·X = 0 is solved to obtain the three-dimensional point;
the third acquisition unit, used for obtaining the distance between the fingertip and the center of the camera device through the cameras combined with triangulation ranging;
and the control unit, used for sending control instruction information according to the obtained distance to realize real-time touch control.
6. The 360-degree omnibearing virtual touch system of claim 5, further comprising:
the first acquisition module, used for acquiring the intrinsic matrix and extrinsic matrix of the camera;
the third acquisition unit further includes:
the third acquisition module, used for acquiring normal and tangential position data of the finger touch point;
and the fourth acquisition module, used for obtaining corresponding usage trajectory and gesture data through the rotation matrices and translation matrices of the cameras.
7. A 360-degree omnibearing virtual touch platform, characterized by comprising:
a processor, a memory, and a 360-degree omnibearing virtual touch control platform control program;
wherein the 360-degree omnibearing virtual touch platform control program is executed on the processor and stored in the memory, and when executed it implements the steps of the 360-degree omnibearing virtual touch method of any one of claims 1 to 4.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a 360-degree omnibearing virtual touch platform control program, and the 360-degree omnibearing virtual touch platform control program implements the steps of the 360-degree omnibearing virtual touch method of any one of claims 1 to 4.
CN201910759816.5A 2018-08-17 2019-08-16 360-degree omnibearing virtual touch control method, system, platform and storage medium Active CN110471577B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810940137 2018-08-17
CN2018109401373 2018-08-17

Publications (2)

Publication Number Publication Date
CN110471577A CN110471577A (en) 2019-11-19
CN110471577B true CN110471577B (en) 2023-08-22

Family

ID=68510997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910759816.5A Active CN110471577B (en) 2018-08-17 2019-08-16 360-degree omnibearing virtual touch control method, system, platform and storage medium

Country Status (1)

Country Link
CN (1) CN110471577B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882883A (en) * 2020-08-28 2020-11-03 智慧互通科技有限公司 Roadside parking management method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799318A (en) * 2012-08-13 2012-11-28 深圳先进技术研究院 Human-machine interaction method and system based on binocular stereoscopic vision
CN103383731A (en) * 2013-07-08 2013-11-06 深圳先进技术研究院 Projection interactive method and system based on fingertip positioning and computing device
CN103914152A (en) * 2014-04-11 2014-07-09 周光磊 Recognition method and system for multi-point touch and gesture movement capturing in three-dimensional space
CN105991929A (en) * 2016-06-21 2016-10-05 浩云科技股份有限公司 Extrinsic parameter calibration and whole-space video stitching method for whole-space camera
CN106095309A (en) * 2016-06-03 2016-11-09 广东欧珀移动通信有限公司 The method of controlling operation thereof of terminal and device
CN107918507A (en) * 2016-10-10 2018-04-17 广东技术师范学院 A kind of virtual touchpad method based on stereoscopic vision


Also Published As

Publication number Publication date
CN110471577A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
US10936874B1 (en) Controller gestures in virtual, augmented, and mixed reality (xR) applications
US10854012B1 (en) Concealing loss of distributed simultaneous localization and mapping (SLAM) data in edge cloud architectures
US20170076497A1 (en) Computer program for directing line of sight
US20220358663A1 (en) Localization and Tracking Method and Platform, Head-Mounted Display System, and Computer-Readable Storage Medium
WO2022078467A1 (en) Automatic robot recharging method and apparatus, and robot and storage medium
TW202205059A (en) Control method, electronic device and computer-readable storage medium for virtual object
JP2022524718A (en) Relative spatial positioning of mobile devices
JP2019537023A (en) Positioning method and device
JP2004062758A (en) Information processor and information processing method
CN110782492B (en) Pose tracking method and device
CN110362193A (en) With hand or the method for tracking target and system of eyes tracking auxiliary
CN108885487B (en) Gesture control method of wearable system and wearable system
JP2023509291A (en) Joint infrared and visible light visual inertial object tracking
CN112288825A (en) Camera calibration method and device, electronic equipment, storage medium and road side equipment
US20210256733A1 (en) Resolving region-of-interest (roi) overlaps for distributed simultaneous localization and mapping (slam) in edge cloud architectures
US7377650B2 (en) Projection of synthetic information
US20230098910A1 (en) Method for tracking head mounted display device and head mounted display system
CN116261706A (en) System and method for object tracking using fused data
CN110471577B (en) 360-degree omnibearing virtual touch control method, system, platform and storage medium
CN110850973B (en) Audio device control method, audio device and storage medium
CN113918015B (en) Interaction method and device for augmented reality
CN114549285A (en) Controller positioning method and device, head-mounted display equipment and storage medium
CN110750094A (en) Method, device and system for determining pose change information of movable equipment
WO2022183372A1 (en) Control method, control apparatus, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant