CN106251323A - Method, device and electronic equipment for naked-eye stereoscopic tracking
Abstract
The invention provides a method, a device and electronic equipment for naked-eye stereoscopic tracking, relating to the technical field of naked-eye stereoscopic display, and solves the problem that existing naked-eye stereoscopic display technology cannot accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system. The method includes: establishing a screen coordinate system of a screen; performing binocular optical-axis parallel correction on a binocular camera; under a first camera coordinate system of the binocular camera, acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system, and acquiring a second coordinate of the target point in the screen coordinate system according to the conversion variable and the first coordinate. The solution of the invention can accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system.
Description
Technical Field
The invention relates to the technical field of naked eye stereoscopic display, in particular to a naked eye stereoscopic tracking method, a naked eye stereoscopic tracking device and electronic equipment.
Background
After years of research, naked-eye stereoscopic display technology has made considerable progress. Stereoscopic display products using this technology are gradually attracting the attention of consumers, because a stereoscopic display effect can be viewed without wearing special glasses. Generally, the technology uses the light-splitting effect of a device such as a grating so that the left and right eyes of a viewer see different content, forming parallax and producing a stereoscopic sensation. For the human eyes to accurately receive the images projected by the light-splitting device, the most efficient and accurate current method is to capture a mark point with an eye-capturing device such as a camera, where the mark point is the position of the human eyes or of an object that has obvious features and keeps a fixed distance from the eyes; the light-splitting device is then adaptively adjusted, or the image is rearranged, according to the positions of the viewer's left and right eyes, so that the viewer can move freely within a certain range while observing a stereoscopic display effect.
The mark-point capture is performed in the camera coordinate system, while the light splitting and image rearrangement are performed in the screen coordinate system, so a conversion from the camera coordinate system to the screen coordinate system is required. In the case of a binocular camera, assembly errors mean that this conversion may involve not only translation but also uncontrollable rotation, which affects the stereoscopic display effect. How to accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system has therefore become a difficult problem for current naked-eye stereoscopic display technology.
Disclosure of Invention
The invention aims to provide a method, a device and electronic equipment for naked eye three-dimensional tracking, and solves the problem that the naked eye three-dimensional display technology in the prior art cannot accurately and efficiently establish the conversion relation between a camera coordinate system and a screen coordinate system.
In order to solve the above technical problem, an embodiment of the present invention provides a method for naked eye stereo tracking, where the method includes:
establishing a screen coordinate system of a screen;
performing binocular optical axis parallel correction on a binocular camera;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
The acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system, and acquiring a second coordinate of the target point in the screen coordinate system according to the conversion variable and the first coordinate includes:
under a first camera coordinate system of the binocular camera, acquiring a rotation variable from the first camera coordinate system to the screen coordinate system, and performing rotation correction on the first camera coordinate system according to the rotation variable;
acquiring a first coordinate of a target point in the first camera coordinate system after rotation correction;
and further carrying out translation correction on the first camera coordinate system after the rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system, and determining the coordinates of the first coordinates in the first camera coordinate system after the translation correction as the second coordinates.
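For illustration, the rotate-then-translate correction described above can be sketched as follows. This is a minimal sketch; the names `camera_to_screen`, `R_cs` and `t_cs` for the rotation and origin-translation variables are assumptions, not from the patent:

```python
import numpy as np

def camera_to_screen(p_cam, R_cs, t_cs):
    """Map a target point from the first camera coordinate system
    to the screen coordinate system (illustrative sketch)."""
    # Rotation correction: align the camera axes with the screen axes.
    p_rot = R_cs @ p_cam        # the "first coordinate" after rotation correction
    # Translation correction: shift the origin to the screen center.
    return p_rot + t_cs         # the "second coordinate" in the screen system
```

The second preferred implementation (translate the corrected frame first, then read off the coordinate) produces the same result, since the translation is applied in the already-rotated frame either way.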
The acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system, and acquiring a second coordinate of the target point in the screen coordinate system according to the conversion variable and the first coordinate includes:
under a first camera coordinate system of the binocular camera, acquiring a rotation variable from the first camera coordinate system to the screen coordinate system, and performing rotation correction on the first camera coordinate system according to the rotation variable;
further carrying out translation correction on the first camera coordinate system after rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system;
and acquiring a first coordinate of a target point in the first camera coordinate system after translation correction, and determining the first coordinate as a second coordinate of the target point in the screen coordinate system.
Wherein, the establishing of the screen coordinate system of a screen comprises the following steps:
taking the center of the screen as the origin, the direction of the optical-center connecting line of the binocular camera as the X coordinate axis, the component direction of the optical axis of one of the two cameras in the plane perpendicular to the optical-center connecting line as the Y coordinate axis, and the vector product of the first two coordinate axes as the Z coordinate axis, establishing the screen coordinate system of the screen;
and the optical center connecting line is parallel to the plane where the screen is located.
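This axis construction can be sketched numerically as below. Sign and orientation conventions here are assumptions (the patent fixes only the roles of the three axes), and the helper name `screen_axes` is illustrative:

```python
import numpy as np

def screen_axes(c1, c2, optical_axis):
    """Build the screen coordinate axes from the two optical centers
    c1, c2 and the optical axis of one camera (illustrative sketch)."""
    # X axis: direction of the optical-center connecting line.
    x = (c2 - c1) / np.linalg.norm(c2 - c1)
    # Y axis: component of the optical axis in the plane
    # perpendicular to the optical-center connecting line.
    y = optical_axis - np.dot(optical_axis, x) * x
    y = y / np.linalg.norm(y)
    # Z axis: vector product of the first two axes.
    z = np.cross(x, y)
    return x, y, z
```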
Wherein, under the first camera coordinate system of binocular camera, acquire first camera coordinate system to the rotational variable of screen coordinate system includes:
acquiring a first vector from the optical center of the first camera to the optical center of a second camera in the binocular camera under the first camera coordinate system;
acquiring a Z axis of the first camera coordinate system as a second vector;
acquiring an included angle between the first vector and the second vector, and acquiring a first rotation variable of the first camera coordinate system rotating around the Y axis of the screen coordinate system according to the included angle between the first vector and the second vector; and/or
Acquiring a Y axis of the first camera coordinate system as a third vector;
acquiring an included angle between the first vector and the third vector, and acquiring a second rotation variable of the first camera coordinate system rotating around the Z axis of the screen coordinate system according to the included angle between the first vector and the third vector; and/or
Acquiring an X axis of the first camera coordinate system as a fourth vector;
and acquiring an included angle between the first vector and the fourth vector, and acquiring a third rotation variable of the first camera coordinate system rotating around the X axis of the screen coordinate system according to the included angle between the first vector and the fourth vector.
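Each rotation variable above is derived from the included angle between two vectors. A minimal sketch of that angle computation (the helper name is illustrative):

```python
import numpy as np

def included_angle(u, v):
    """Included angle (radians) between two vectors, used as the
    rotation variable about the corresponding screen axis."""
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding slightly outside [-1, 1]
    return np.arccos(np.clip(cos_ang, -1.0, 1.0))
```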
Wherein, under the first camera coordinate system of binocular camera, acquire the first camera coordinate system to the conversion variable of screen coordinate system includes:
acquiring, in the first camera coordinate system, a third coordinate of each of at least three test points selected in advance on the perpendicular bisector of the optical-center connecting line, and acquiring a fourth coordinate, measured in advance, of each test point in the screen coordinate system;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system according to the conversion relation between the third coordinate and the fourth coordinate.
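The patent does not spell out how the conversion variable is solved from the test-point coordinate pairs. One standard choice, assumed here for illustration, is a least-squares rigid fit (Kabsch algorithm) over the three or more correspondences:

```python
import numpy as np

def fit_rigid_transform(P_cam, P_screen):
    """Estimate R, t with R @ p_cam + t ~= p_screen from >= 3 point
    pairs (Kabsch / Procrustes fit -- an assumed concrete method)."""
    cc, cs = P_cam.mean(axis=0), P_screen.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P_cam - cc).T @ (P_screen - cs)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cs - R @ cc
    return R, t
```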
After acquiring the second coordinate of the target point in the screen coordinate system according to the conversion variable and the first coordinate, the method further includes:
and acquiring the coordinates of the human eyes corresponding to the target points under the screen coordinate system according to the second coordinates and the pre-calibrated position relationship between the target points and the human eyes.
The acquiring a first coordinate of a target point in the first camera coordinate system includes:
acquiring pixel coordinates of the target point under the pixel coordinate systems of at least two cameras and the internal reference matrix of the corresponding camera;
acquiring a ray equation where the target point is located according to the pixel coordinate and the internal reference matrix;
determining the object point of the target point in the space by adopting a least square criterion according to the ray equation;
and acquiring the coordinate of the object point under the first camera coordinate system as the first coordinate.
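A sketch of the least-squares step, assuming the ray equations are given in origin-plus-direction form (the helper name `triangulate_rays` is hypothetical):

```python
import numpy as np

def triangulate_rays(origins, dirs):
    """Least-squares object point: minimizes the summed squared
    distance to a set of back-projected rays o_i + s * d_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    # Normal equations of sum_i |P_i (p - o_i)|^2 -> A p = b.
    return np.linalg.solve(A, b)
```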
The acquiring a first coordinate of a target point in the first camera coordinate system includes:
acquiring two pixel points of the target point on two shot images of the binocular camera respectively;
obtaining rays respectively formed by the two pixel points according to the reversibility of the light path;
determining an object point of the target point in the space according to a common perpendicular line between the two rays;
and acquiring the coordinate of the object point under the first camera coordinate system as the first coordinate.
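The common-perpendicular construction can be sketched as taking the midpoint of the closest points on the two (generally skew) rays, a standard midpoint method; all names here are illustrative:

```python
import numpy as np

def midpoint_of_common_perpendicular(o1, d1, o2, d2):
    """Object point as the midpoint of the common perpendicular
    between rays o1 + s*d1 and o2 + t*d2 (illustrative sketch)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    d, e = np.dot(d1, w), np.dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom    # closest-point parameter on ray 1
    t = (a * e - b * d) / denom    # closest-point parameter on ray 2
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```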
In order to solve the above technical problem, an embodiment of the present invention further provides an apparatus for naked eye stereo tracking, where the apparatus includes:
the establishing module is used for establishing a screen coordinate system of a screen;
the correction module is used for carrying out binocular optical axis parallel correction on a binocular camera;
the first acquisition module is used for acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
Wherein the first obtaining module comprises:
the first acquisition unit is used for acquiring a rotation variable from a first camera coordinate system to a screen coordinate system under the first camera coordinate system of the binocular camera and performing rotation correction on the first camera coordinate system according to the rotation variable;
the second acquisition unit is used for acquiring a first coordinate of a target point in the first camera coordinate system after rotation correction;
and the first translation unit is used for further performing translation correction on the rotation-corrected first camera coordinate system according to an origin translation variable from the first camera coordinate system to the screen coordinate system, and determining the coordinates of the first coordinates in the translation-corrected first camera coordinate system as the second coordinates.
Wherein the first obtaining module comprises:
the first acquisition unit is used for acquiring a rotation variable from a first camera coordinate system to a screen coordinate system under the first camera coordinate system of the binocular camera and performing rotation correction on the first camera coordinate system according to the rotation variable;
the second translation unit is used for further performing translation correction on the first camera coordinate system after rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system;
and the third acquisition unit is used for acquiring a first coordinate of a target point in the first camera coordinate system after translation correction, and determining the first coordinate as a second coordinate of the target point in the screen coordinate system.
Wherein the establishing module comprises:
the device comprises an establishing unit, a display unit and a control unit, wherein the establishing unit is used for establishing a screen coordinate system of the screen by taking the center of the screen as an origin, taking the optical center connecting line direction of the binocular cameras as an X coordinate axis, taking the component direction of the optical axis of one camera in the binocular cameras on the optical center connecting line vertical plane as a Y coordinate axis, and taking the vector product of the first coordinate axis and the second coordinate axis as a Z coordinate axis;
and the optical center connecting line is parallel to the plane where the screen is located.
Wherein the first acquisition unit includes:
the first acquisition subunit is used for acquiring a first vector from the optical center of the first camera to the optical center of a second camera in the binocular camera under the first camera coordinate system;
the second acquisition subunit is used for acquiring a Z axis of the first camera coordinate system as a second vector;
the third obtaining subunit is configured to obtain an included angle between the first vector and the second vector, and obtain a first rotation variable of the first camera coordinate system rotating around a Y axis of the screen coordinate system according to the included angle between the first vector and the second vector; and/or
The fourth acquisition subunit is used for acquiring a Y axis of the first camera coordinate system as a third vector;
a fifth obtaining subunit, configured to obtain an included angle between the first vector and the third vector, and obtain, according to the included angle between the first vector and the third vector, a second rotation variable of the first camera coordinate system rotating around a Z axis of the screen coordinate system; and/or
A sixth obtaining subunit, configured to obtain an X axis of the first camera coordinate system as a fourth vector;
and the seventh obtaining subunit is configured to obtain an included angle between the first vector and the fourth vector, and obtain a third rotation variable of the first camera coordinate system rotating around the X axis of the screen coordinate system according to the included angle between the first vector and the fourth vector.
Wherein the first acquisition unit includes:
the eighth acquiring subunit is configured to acquire, in the first camera coordinate system, a third coordinate of each of at least three test points selected in advance on the perpendicular bisector of the optical-center connecting line, and to acquire a fourth coordinate, measured in advance, of each test point in the screen coordinate system;
and the ninth acquisition subunit is configured to acquire a conversion variable from the first camera coordinate system to the screen coordinate system according to a conversion relationship between the third coordinate and the fourth coordinate.
Wherein the apparatus further comprises:
and the second acquisition module is used for acquiring the coordinates of the human eyes corresponding to the target points under the screen coordinate system according to the second coordinates and the pre-calibrated position relationship between the target points and the human eyes.
Wherein the first obtaining module comprises:
the fourth acquisition unit is used for acquiring pixel coordinates of the target point under the pixel coordinate systems of the at least two cameras and the internal reference matrix of the corresponding camera;
a fifth obtaining unit, configured to obtain a ray equation where the target point is located according to the pixel coordinate and the internal reference matrix;
the first determining unit is used for determining an object point of the target point in the space by adopting a least square criterion according to the ray equation;
and the sixth acquisition unit is used for acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
Wherein the first obtaining module comprises:
the seventh acquiring unit is used for acquiring two pixel points of the target point on two shot images of the binocular camera respectively;
the eighth obtaining unit is used for obtaining rays formed by the two pixel points respectively according to the reversibility of the light path;
the second determining unit is used for determining an object point of the target point in the space according to a common perpendicular line between the two rays;
and the ninth acquisition unit is used for acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
To solve the above technical problem, an embodiment of the present invention further provides an electronic device, including:
the device comprises a shell, a processor, a memory, a display screen, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for performing the steps of:
establishing a screen coordinate system of a screen;
performing binocular optical axis parallel correction on a binocular camera;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
The technical scheme of the invention has the following beneficial effects:
in the method for naked-eye stereoscopic tracking of the embodiment of the invention, a screen coordinate system of a screen is first established, and binocular optical-axis parallel correction is performed on a binocular camera; a conversion variable from a first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system are then acquired under the first camera coordinate system of the binocular camera; and finally a second coordinate of the target point in the screen coordinate system is acquired according to the conversion variable and the first coordinate. The method can accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system, accurately acquire the spatial coordinate of the target point in the screen coordinate system, guarantee the stereoscopic display effect, and solve the problem that the naked-eye stereoscopic display technology in the prior art cannot accurately and efficiently establish this conversion relation.
Drawings
FIG. 1 is a flow chart of a method for naked eye stereo tracking according to the present invention;
FIG. 2 is a first schematic diagram of a camera coordinate system of the method for naked eye three-dimensional tracking according to the invention;
FIG. 3 is a second schematic view of a camera coordinate system of the method for naked eye three-dimensional tracking according to the present invention;
FIG. 4 is a schematic diagram of test point selection in the naked eye three-dimensional tracking method of the present invention;
FIG. 5 is a schematic diagram of projection point ray non-coplanar straight lines in the method for naked eye stereo tracking of the present invention;
fig. 6 is a schematic structural diagram of the device for naked eye stereo tracking of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
In the method for naked-eye stereoscopic tracking provided by the embodiment of the invention, considering that the installation position of the camera usually deviates from the center of the display screen and that the world coordinate system assumed by the computer is uncertain, the coordinate system in which the target point obtained by the camera lies is corrected according to the conversion relation between the camera coordinate system and the screen coordinate system, so that the image finally displayed on the screen conforms to the viewing habits of the audience.
As shown in fig. 1, a method for naked eye stereo tracking according to an embodiment of the present invention includes:
step 101, establishing a screen coordinate system of a screen.
After the screen coordinate system of the screen is established, the conversion relation between the camera coordinate system and the screen coordinate system can be obtained through the subsequent steps, and then the camera coordinate system is corrected to obtain the object point coordinates in the screen coordinate system.
And 102, performing binocular optical axis parallel correction on a binocular camera.
Performing parallel correction on the binocular optical axes of the binocular camera ensures that the axial directions of the two camera coordinate systems are consistent, so that the conversion relation between the binocular-camera coordinate system and the screen coordinate system can be obtained through the conversion relation between a monocular camera coordinate system and the screen coordinate system. After the parallel correction of the binocular optical axes, the spatial coordinates of the target point can be accurately acquired.
Specifically, binocular optical axes can be corrected in parallel by using a binocular relative position relationship (external reference matrix) obtained by calibrating a binocular camera.
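One concrete way to apply the calibrated external-reference matrix, assumed here for illustration (the patent only states that the relative position relationship is used), is to split the relative rotation evenly between the two cameras so that both optical axes end up parallel:

```python
import numpy as np

def split_rotation(R_rel):
    """Half of the relative rotation between the two cameras
    (axis-angle form); applying it to each camera makes the
    optical axes parallel. Illustrative sketch."""
    theta = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.eye(3)
    # Rotation axis recovered from the skew-symmetric part of R_rel.
    axis = np.array([R_rel[2, 1] - R_rel[1, 2],
                     R_rel[0, 2] - R_rel[2, 0],
                     R_rel[1, 0] - R_rel[0, 1]]) / (2.0 * np.sin(theta))
    half = theta / 2.0
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' formula for a rotation of theta/2 about the same axis.
    return np.eye(3) + np.sin(half) * K + (1.0 - np.cos(half)) * (K @ K)
```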
The screen and the binocular camera described above should be a screen and a camera installed on the same electronic equipment.
Step 103, acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point in the screen coordinate system according to the conversion variable and the first coordinate.
The first camera may be any one of binocular cameras (left camera or right camera).
After a conversion variable from a certain camera coordinate system of the binocular camera to a screen coordinate system is obtained, the coordinates of the target point in the screen coordinate system can be accurately obtained according to the conversion variable and the coordinates of the target point in the camera coordinate system, and therefore a display image can be adjusted according to the coordinates of the target point in the screen coordinate system, and the image displayed on the screen can meet the watching habits of audiences.
The method for naked-eye stereoscopic tracking of the embodiment of the invention can accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system, accurately acquire the spatial coordinate of the target point in the screen coordinate system, guarantee the stereoscopic display effect, and solve the problem that the conversion relation between the camera coordinate system and the screen coordinate system cannot be accurately and efficiently established by the naked-eye stereoscopic display technology in the prior art.
In the case of a binocular camera, there is not only translation but also uncontrollable rotation of the conversion of the camera coordinate system to the screen coordinate system due to errors in assembly, and therefore the camera coordinate system should be corrected in both translation and rotation.
As a preferred implementation manner, the step of step 103 may include:
and step 1031, acquiring a rotation variable from the first camera coordinate system to the screen coordinate system under the first camera coordinate system of the binocular camera, and performing rotation correction on the first camera coordinate system according to the rotation variable.
Here, by acquiring a rotation variable from the camera coordinate system to the screen coordinate system, rotation correction of the camera coordinate system is realized, so that the axial direction of the camera coordinate system is consistent with the axial direction of the screen coordinate system.
Step 1032, acquiring a first coordinate of a target point in the first camera coordinate system after the rotation correction.
Here, after the camera coordinate system is subjected to rotation correction, the spatial coordinates of the target point can be acquired in the coordinate system, but this is not the final coordinate, and it is also necessary to translate the origin of the coordinate system of the target point to obtain the final coordinate.
Specifically, the object point coordinates of the target point in the space can be obtained by using a binocular stereo reconstruction method.
Step 1033, further performing translation correction on the rotation-corrected first camera coordinate system according to the origin translation variable from the first camera coordinate system to the screen coordinate system, and determining the coordinate of the first coordinate in the translation-corrected first camera coordinate system as the second coordinate.
After the camera coordinate system is subjected to rotation correction, the camera coordinate system is continuously subjected to translation correction, so that the space coordinates of the target point under the screen coordinate system can be obtained, the display image can be adjusted according to the coordinates of the target point under the screen coordinate system, and the stereoscopic display effect of the image is ensured.
At the moment, firstly, the camera coordinate system is subjected to rotation correction, then the first coordinate of the target point under the camera coordinate system is obtained, then the camera coordinate system is subjected to translation correction, so that the target point also carries out origin translation along with the camera coordinate system, and finally the space coordinate of the target point under the screen coordinate system is obtained, so that the camera coordinate system is corrected in the aspects of rotation and translation, and the three-dimensional display effect of the image is fully ensured.
In steps 1031 to 1033, the first coordinate is acquired before the translation correction is performed; the order of these two operations can also be exchanged. Therefore, as another preferred implementation manner, step 103 may also include:
under a first camera coordinate system of the binocular camera, acquiring a rotation variable from the first camera coordinate system to the screen coordinate system, and performing rotation correction on the first camera coordinate system according to the rotation variable;
further carrying out translation correction on the first camera coordinate system after rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system;
and acquiring a first coordinate of a target point in the first camera coordinate system after translation correction, and determining the first coordinate as a second coordinate of the target point in the screen coordinate system.
Here, after the rotation correction of the camera coordinate system is performed, the translation correction of the camera coordinate system is continued, and only then are the coordinates of the target point determined as the spatial coordinates in the screen coordinate system. In this way, as with the above-mentioned way of steps 1031 to 1033, the camera coordinate system is corrected in both rotation and translation, and the stereoscopic display effect of the image is fully ensured.
If the deviation between the installation position of the camera and the center of the screen is not considered, according to an effective imaging model, the relationship between the camera coordinate system and the screen coordinate system can be established through the following formula (1):
s·m'=A·[R|T]·M' (1);
Firstly, the internal parameters (the intrinsic matrix A) and the external parameters [R|T] of the camera need to be determined, so as to correctly establish the relation between the target point in the world coordinate system and its image point. Wherein fx and fy are the focal lengths of the camera along the two image axes, (cx, cy) is the principal point (the origin of the image coordinate system), R is the rotation matrix of the camera coordinate system relative to the world coordinate system, T is the translation vector of the camera coordinate system relative to the world coordinate system, s is the zoom factor of the camera, m' is the coordinate of the image point in the image pixel coordinate system, and M' is the coordinate of the target point in the screen coordinate system.
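Formula (1) is the standard pinhole projection relation. As an illustrative sketch (the intrinsic values fx, fy, cx, cy and the world point below are assumptions for demonstration, not values from the patent), it can be evaluated with NumPy as follows:

```python
import numpy as np

# hypothetical intrinsic parameters (illustration only)
fx, fy = 800.0, 800.0          # focal lengths in pixels
cx, cy = 320.0, 240.0          # principal point
A = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                        # camera aligned with the world axes
T = np.array([[0.0], [0.0], [0.0]])  # no translation

M = np.array([[0.1], [0.2], [2.0], [1.0]])  # homogeneous world point M'

# formula (1): s . m' = A . [R|T] . M'
sm = A @ np.hstack([R, T]) @ M
m = sm / sm[2]   # divide out s (the depth) to obtain pixel coordinates
```

With these assumed values the point projects to pixel (cx + fx·X/Z, cy + fy·Y/Z), illustrating how the zoom factor s is simply the depth of the point along the optical axis.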
Based on the theoretical basis of the formula (1), the embodiment of the invention firstly corrects the binocular optical axes in parallel, then establishes the conversion relation between the camera coordinate system and the screen coordinate system by analyzing the geometric position relation of the camera coordinate system and the screen coordinate system, and determines the space coordinate of the target point under the screen coordinate system, so that the image displayed on the screen accords with the watching habit of audiences, and the three-dimensional display effect of the image is ensured.
As a preferred implementation manner, the step of step 101 may include:
Step 1011, setting the center of the screen as the origin, setting the optical center connecting line direction of the binocular cameras as the X coordinate axis, setting the component direction of the optical axis of one of the binocular cameras in the plane perpendicular to the optical center connecting line as the Y coordinate axis, and setting the vector product of the X coordinate axis and the Y coordinate axis as the Z coordinate axis, thereby establishing the screen coordinate system of the screen; the optical center connecting line is parallel to the plane where the screen is located.
The origin of the screen coordinate system of the embodiment of the present invention may also be arbitrarily selected, for example, the optical center of a certain camera, but in order to sufficiently ensure the accuracy of screen display, the center of the screen is used as the origin of the screen coordinate system.
At the moment, based on the screen coordinate system, the conversion relation from the camera coordinate system to the screen coordinate system can be accurately and efficiently established through subsequent steps, so that the images displayed on the screen conform to the watching habits of audiences, and the three-dimensional display effect of the images is ensured.
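The axis construction of step 1011 can be sketched as follows; the optical-center positions and optical-axis direction used in the example are hypothetical illustration values. X runs along the optical-center line, Y is the component of one camera's optical axis perpendicular to X, and Z is their vector product:

```python
import numpy as np

def screen_axes(oc_left, oc_right, optical_axis):
    """Unit X/Y/Z axes of the screen coordinate system (step 1011)."""
    x = oc_right - oc_left                  # optical-center connecting line
    x = x / np.linalg.norm(x)
    # component of the optical axis perpendicular to X
    y = optical_axis - np.dot(optical_axis, x) * x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                      # vector product of X and Y
    return x, y, z

# hypothetical optical centers and one optical axis, in a common frame
x, y, z = screen_axes(np.array([-0.1, 0.0, 0.0]),
                      np.array([0.1, 0.0, 0.0]),
                      np.array([0.0, 0.1, 1.0]))
```

Subtracting the projection onto X before normalizing guarantees the resulting three axes are mutually orthogonal unit vectors even when the optical axis is not exactly perpendicular to the baseline.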
Based on the screen coordinate system, the embodiment of the invention can calculate the conversion relationship from the camera coordinate system to the screen coordinate system by a manual calibration method or a binocular camera calibration method, which is described in detail below.
As a preferred implementation manner, the step of acquiring, in the first camera coordinate system of the binocular camera, a rotation variable from the first camera coordinate system to the screen coordinate system may include:
step 10311, acquiring a first vector from the optical center of the first camera to the optical center of a second camera in the binocular camera in the first camera coordinate system;
step 10312, acquiring a Z axis of the first camera coordinate system as a second vector;
step 10313, acquiring an included angle between the first vector and the second vector, and acquiring a first rotation variable of the first camera coordinate system rotating around the Y axis of the screen coordinate system according to the included angle between the first vector and the second vector; and/or
Step 10314, acquiring a Y axis of the first camera coordinate system as a third vector;
step 10315, obtaining an included angle between the first vector and the third vector, and obtaining a second rotation variable of the first camera coordinate system rotating around the Z axis of the screen coordinate system according to the included angle between the first vector and the third vector; and/or
Step 10316, obtaining the X axis of the first camera coordinate system as a fourth vector;
and step 10317, acquiring an included angle between the first vector and the fourth vector, and acquiring a third rotation variable of the first camera coordinate system rotating around the X axis of the screen coordinate system according to the included angle between the first vector and the fourth vector.
At the moment, the rotation variables of the camera coordinate system to the screen coordinate system in three axial directions are obtained, so that the camera coordinate system can be subjected to rotation correction, and the stereoscopic display effect of the image is ensured.
The steps for acquiring the three rotational variables are further described below with reference to the drawings.
As shown in fig. 2, assume that the first camera is the right camera and the second camera is the left camera of the binocular camera. In the right camera coordinate system, the left optical center O_CL has coordinates (X2, Y2, Z2), and the projection of this point on the XZ plane has coordinates (X2, Z2); from the right optical center O_CR to the left optical center O_CL, a first vector can be determined. The principal point of the right camera has coordinates (X1, Y1, Z1), and the projection of the principal point on the XZ plane has coordinates (X1, Z1); from the right optical center O_CR to the right principal point, a second vector can be determined, and the second vector lies along the Z axis Z_CR of the right camera coordinate system. The included angle between the projections of the two vectors on the XZ plane is α, which can be determined by the following formulas (2) and (3):
cos α = (X1·X2 + Z1·Z2) / (sqrt(X1² + Z1²) · sqrt(X2² + Z2²)) (2);
α = arccos[(X1·X2 + Z1·Z2) / (sqrt(X1² + Z1²) · sqrt(X2² + Z2²))] (3);
The left and right camera coordinate systems then need to be rotated around the Y axis of the screen coordinate system by an angle β = α − 90°, and a first rotation variable can be obtained according to the rotation angle β; the first rotation variable is expressed by the following formula (4):
Ry = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]] (4);
after rotation by the first rotation variable, the optical axes of the binocular cameras will be simultaneously perpendicular to the base line (X-axis of the screen coordinate system).
As shown in fig. 3, similar to the manner of acquiring the first rotation variable, the first vector determined by the binocular optical centers is first acquired in the right camera coordinate system, together with a third vector determined by the Y axis of the right camera coordinate system, and then the included angle γ between the two vectors is obtained. The left and right camera coordinate systems need to be rotated around the Z axis of the screen coordinate system by an angle γ − 90°, and a second rotation variable can be obtained according to this rotation angle, as shown in the following formula (5):
Rz = [[cos(γ − 90°), −sin(γ − 90°), 0], [sin(γ − 90°), cos(γ − 90°), 0], [0, 0, 1]] (5);
After rotation by the second rotation variable, the X axis of the binocular camera coordinate system coincides with the base line.
Similar to the acquisition of the first rotation variable and the second rotation variable, the first vector determined by the binocular optical centers is first acquired in the right camera coordinate system, together with a fourth vector determined by the X axis of the right camera coordinate system, and then the included angle ω between the two vectors is obtained. The left and right camera coordinate systems need to be rotated around the X axis of the screen coordinate system by an angle θ = ω − 90°, and a third rotation variable can be obtained according to the rotation angle θ, as shown in the following formula (6):
Rx = [[1, 0, 0], [0, cos θ, −sin θ], [0, sin θ, cos θ]] (6);
Assuming that the coordinate of the target point before the rotation correction of the camera coordinate system is m_c = (x_p, y_p, z_p), the coordinate of the target point after rotation correction in the camera coordinate system can be obtained by the following formula (7):
m'_c = Rx · Rz · Ry · m_c (7).
here, although it is theoretically possible to achieve the purpose of correcting the rotation of the camera coordinate system by only rotating the camera coordinate system about one or more of the X axis, the Y axis, and the Z axis, in practice, it is generally necessary to rotate the camera coordinate system about three axes.
The order of rotation about the X-axis, the Y-axis, and the Z-axis may be adjusted as needed, and is not limited to the order described in the above embodiments.
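The rotation correction of formulas (4) to (7) amounts to composing three elementary rotation matrices. A minimal sketch (the angles below are illustrative values, not calibrated ones):

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

# formula (7): m'_c = Rx . Rz . Ry . m_c, with illustrative angles
ang_y, ang_z, ang_x = np.radians([5.0, -3.0, 2.0])
m_c = np.array([0.1, 0.2, 1.5])
m_corrected = rot_x(ang_x) @ rot_z(ang_z) @ rot_y(ang_y) @ m_c
```

Because each factor is orthogonal, the composed correction preserves distances; only the orientation of the coordinate axes changes, which is exactly what rotation correction requires.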
As another implementation manner, the step of acquiring, in the first camera coordinate system of the binocular camera, a rotation variable from the first camera coordinate system to the screen coordinate system may also include:
Step 10318, acquiring a third coordinate, in the first camera coordinate system, of each of at least three test points selected in advance on a perpendicular of the optical center connecting line, and acquiring a fourth coordinate, measured in advance, of each test point in the screen coordinate system.
Here, the three-dimensional coordinates of the test point in the camera coordinate system may be calculated by a binocular stereo reconstruction method.
And 10319, acquiring a conversion variable from the first camera coordinate system to the screen coordinate system according to the conversion relation between the third coordinate and the fourth coordinate.
Here, the rotational variable and the translational variable from the camera coordinate system to the screen coordinate system may be simultaneously acquired according to a conversion relationship of the third coordinate and the fourth coordinate.
At the moment, the conversion variable from the camera coordinate system to the screen coordinate system is obtained, so that the camera coordinate system can be subjected to rotation correction and translation correction, and the stereoscopic display effect of the image is ensured.
As shown in fig. 4, in the manual calibration method, the measured coordinates of the test point P in the screen coordinate system and its calculated coordinates in the camera coordinate system are recorded separately, the conversion relationship between them is obtained from the geometric relationship, and coordinates in the camera coordinate system can then be converted into the screen coordinate system using this conversion relationship.
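The patent does not spell out the geometric derivation of the conversion relationship from the measured point pairs. One standard way to recover a rotation and translation from at least three non-collinear correspondences is the SVD-based (Kabsch) fit, sketched below as an assumed stand-in; the point values are synthetic:

```python
import numpy as np

def estimate_rigid_transform(cam_pts, screen_pts):
    """Estimate R, T such that screen = R @ cam + T from >= 3 pairs.

    SVD-based (Kabsch) fit; an assumed stand-in for the unspecified
    geometric derivation of steps 10318-10319.
    """
    cam = np.asarray(cam_pts, dtype=float)
    scr = np.asarray(screen_pts, dtype=float)
    cc, sc = cam.mean(axis=0), scr.mean(axis=0)
    H = (cam - cc).T @ (scr - sc)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = sc - R @ cc
    return R, T

# synthetic check: a known rotation about Z and a known translation
ang = np.radians(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang), np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([5.0, -2.0, 100.0])
cam = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0],
                [0.0, 1.0, 3.0], [1.0, 1.0, 1.5]])
scr = cam @ R_true.T + T_true
R, T = estimate_rigid_transform(cam, scr)
```

Fitting over all point pairs in a least-squares sense also averages out small measurement errors in the manually recorded screen coordinates.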
As mentioned above, in the embodiment of the present invention, the coordinates of the target point in the space can be solved by using the binocular stereo reconstruction method, and the following describes in detail how to solve the coordinates of the target point.
As a preferred implementation manner, in step 103, the step of acquiring a first coordinate of a target point in the first camera coordinate system may include:
and 1034, acquiring pixel coordinates of the target point in the pixel coordinate systems of the at least two cameras and the reference matrix corresponding to the cameras.
And 1035, acquiring a ray equation where the target point is located according to the pixel coordinate and the internal reference matrix.
Here, knowing the pixel coordinates of the target point on an image captured by a camera and that camera's internal reference matrix, the equation of the ray on which the target point lies can be calculated by the following formula (8); the target point lies on this ray in space:
α_j · u_j = A_j · X (j = 1...n) (8);
wherein α_j is the conversion factor from the pixel coordinate system of camera j to the image coordinate system, n is the number of cameras contained in the camera module, u_j is the pixel coordinate of the target point on camera j, and A_j is the internal reference matrix of camera j.
The target point lies on this ray in space, and similarly the corresponding rays can be calculated for the other cameras. Theoretically the rays intersect at one point, i.e. the spatial position of the current target point; but in practice, due to the digitization error of the cameras, calibration errors of the internal and external parameters, and so on, the rays do not intersect at exactly the same point, so the spatial position of the target point needs to be approximated using a triangulation method.
And 1036, determining the object point of the target point in the space by adopting a least square criterion according to the ray equation.
Here, the least square criterion is adopted to determine the point closest to all the rays as the object point of the target point in space. Stacking formula (8) over all cameras and eliminating the scale factors α_j yields an over-determined linear system whose least-squares solution can be written as the following formula (9):
X = (Σ_{j=1..n} Ã_j^T · Ã_j)^(-1) · (Σ_{j=1..n} Ã_j^T · ũ_j) (9);
wherein Ã_j is the matrix factor derived from A_j, and ũ_j is the matrix factor derived from u_j.
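An equivalent least-squares triangulation can be written in terms of ray origins and directions, which is a common reformulation of the normal equations above (the ray form below is an assumption about how formula (8) would be reduced, not the patent's exact factorization):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to a set of rays o_j + t*d_j.

    Minimizing the summed squared perpendicular distances to all
    rays leads to the 3x3 linear system solved below.
    """
    S = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        S += P
        b += P @ o
    return np.linalg.solve(S, b)

# two rays that pass exactly through a known point (ideal, noise-free case)
X_true = np.array([1.0, 2.0, 10.0])
origins = [np.zeros(3), np.array([6.0, 0.0, 0.0])]
directions = [X_true - o for o in origins]
X = triangulate(origins, directions)
```

When the rays do not quite meet, the same solve returns the unique point minimizing the total squared distance to all of them, which is the least-square criterion of step 1036.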
Step 1037, acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
The coordinates of the object point corresponding to the target point under the first camera coordinate system are obtained, and the coordinates of the target point under the screen coordinate system can be accurately obtained through the conversion relation between the camera coordinate system and the screen coordinate system, so that the displayed image can be adjusted, and the watching experience of a user is guaranteed.
At the moment, the object point coordinates corresponding to the target point can be accurately obtained through the least square criterion, and a foundation is laid for obtaining the space coordinates of the target point under a screen coordinate system.
As another preferred implementation manner, in step 103, the step of acquiring a first coordinate of a target point in the first camera coordinate system may also include:
acquiring two pixel points of the target point on two shot images of the binocular camera respectively; obtaining rays respectively formed by the two pixel points according to the reversibility of the light path; determining an object point of the target point in the space according to a common perpendicular line between the two rays; and acquiring the coordinate of the object point under the first camera coordinate system as the first coordinate.
By approximating the target point with the midpoint of the common perpendicular of two skew (non-coplanar) straight lines, the object point coordinates corresponding to the target point can be accurately obtained, laying a foundation for acquiring the spatial coordinates of the target point in the screen coordinate system.
The following describes in detail how the object point of the target point is found by this midpoint-of-common-perpendicular method.
As shown in fig. 5, pixel points P1 and P2 are the imaging points formed by the target point on the CCDs (Charge Coupled Devices) of the left and right cameras. Due to the reversibility of the optical path, under an ideal model the two rays meet at one point, namely the spatial position of the current target point. In practice, because the camera's imaging model is not an ideal pinhole model, and because of noise during CCD imaging, small errors in the calibration parameters, the digitization error of the camera, and other factors, the two straight lines do not necessarily intersect at one point; but since the two straight lines are not parallel, they may be treated as skew (non-coplanar) straight lines (for generality of the solution, intersecting straight lines are regarded as special skew lines whose common perpendicular has length zero).
It can be proved that, geometrically, the error of approximating the spatial point by the midpoint of the common perpendicular of the skew lines is small. Specifically, the shortest distance between the two skew lines, i.e. the length of the common perpendicular, may be calculated first. If the common perpendicular is relatively short, its midpoint is taken as the position of the target point to be solved (when its length is zero, the two lines intersect, and the intersection point is the target point to be solved); if the common perpendicular is too long, the matching error between the projection points P1 and P2 is too large, and the point is discarded.
The detailed calculation is as follows. Assume that P1 and P2 are points on the two skew lines L1 and L2, whose direction vectors are l1 and l2 respectively, that Q1 and Q2 are points sliding on the two lines, that k1 is the position coefficient of Q1 on L1 and k2 is the position coefficient of Q2 on L2, i.e. Q1 = P1 + k1·l1 and Q2 = P2 + k2·l2. When the distance |Q1Q2| is at its minimum, Q1Q2 is the common perpendicular of the two skew lines, the midpoint Q of the common perpendicular is the object point of the target point in space, and (Q1 + Q2)/2 is the coordinate of Q. Because the common perpendicular is perpendicular to both skew lines, the following holds:
(Q2 − Q1) · l1 = 0, (Q2 − Q1) · l2 = 0 (10);
Solving formula (10) yields the position coefficients k1 and k2. Substituting the obtained position coefficients into the linear equations of Q1 and Q2 gives Q1 and Q2, and thereby the coordinate of Q, (Q1 + Q2)/2, where Q is the object point of the target point in space.
At the moment, the object point coordinates corresponding to the target point are accurately obtained by a method of approaching the midpoint of the common perpendicular line of the non-coplanar straight line to the target point, and a foundation is laid for obtaining the space coordinates of the target point in a screen coordinate system.
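The common-perpendicular construction above can be sketched as follows; solving the two perpendicularity conditions of formula (10) for k1 and k2 gives Q1, Q2 and the midpoint Q:

```python
import numpy as np

def common_perpendicular_midpoint(p1, l1, p2, l2):
    """Midpoint Q of the common perpendicular between the lines
    P1 + k1*l1 and P2 + k2*l2 (formula (10) solved for k1, k2)."""
    a, b_, c = np.dot(l1, l1), np.dot(l1, l2), np.dot(l2, l2)
    w = p1 - p2
    d, e = np.dot(l1, w), np.dot(l2, w)
    denom = a * c - b_ * b_          # zero only for parallel lines
    k1 = (b_ * e - c * d) / denom
    k2 = (a * e - b_ * d) / denom
    q1 = p1 + k1 * l1
    q2 = p2 + k2 * l2
    return (q1 + q2) / 2.0

# two intersecting lines: the common perpendicular has length zero,
# so the midpoint is the intersection point itself
q = common_perpendicular_midpoint(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                  np.array([1.0, -1.0, 0.0]),
                                  np.array([0.0, 1.0, 0.0]))
```

The 2x2 system comes from substituting Q1 and Q2 into the two dot-product conditions; intersecting lines fall out as the special case where Q1 and Q2 coincide.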
Preferably, after the step 103, the method may further include:
and 104, acquiring coordinates of the human eyes corresponding to the target points in the screen coordinate system according to the second coordinates and the pre-calibrated position relationship between the target points and the human eyes.
At the moment, the spatial position of the human eye can be obtained according to the position relation between the target point and the human eye which is calibrated in advance, so that the image displayed on the screen can be accurately adjusted according to the spatial position of the human eye, and the watching experience of a user is ensured.
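As a minimal sketch of step 104, assume the pre-calibrated position relationship is a constant offset vector in the screen coordinate system (the simplest possible case; the numbers are hypothetical, and a real calibration could equally yield a full transform):

```python
import numpy as np

# second coordinate of the tracked target point in the screen
# coordinate system (illustrative values, millimetres)
target = np.array([10.0, 5.0, 400.0])

# hypothetical pre-calibrated offsets from the target point to the eyes
offset_left = np.array([-32.0, 0.0, 0.0])
offset_right = np.array([32.0, 0.0, 0.0])

left_eye = target + offset_left
right_eye = target + offset_right
```

With the eye positions in screen coordinates, the displayed left and right views can be steered to the corresponding viewing zones.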
The method for naked eye three-dimensional tracking of the embodiment of the invention can accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system, accurately acquire the spatial coordinates of the target point in the screen coordinate system, ensure the three-dimensional display effect, and solve the problem that the naked eye three-dimensional display technology in the prior art cannot accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system.
As shown in fig. 6, an embodiment of the present invention further provides an apparatus for autostereoscopic tracking, including:
the establishing module is used for establishing a screen coordinate system of a screen;
the correction module is used for carrying out binocular optical axis parallel correction on a binocular camera;
the first acquisition module is used for acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
The naked eye three-dimensional tracking device provided by the embodiment of the invention can accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system, accurately acquire the spatial coordinates of the target point in the screen coordinate system, ensure the three-dimensional display effect, and solve the problem that the naked eye three-dimensional display technology in the prior art cannot accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system.
Preferably, the first obtaining module may include:
the first acquisition unit is used for acquiring a rotation variable from a first camera coordinate system to a screen coordinate system under the first camera coordinate system of the binocular camera and performing rotation correction on the first camera coordinate system according to the rotation variable;
the second acquisition unit is used for acquiring a first coordinate of a target point in the first camera coordinate system after rotation correction;
and the first translation unit is used for further performing translation correction on the rotation-corrected first camera coordinate system according to an origin translation variable from the first camera coordinate system to the screen coordinate system, and determining the coordinates of the first coordinates in the translation-corrected first camera coordinate system as the second coordinates.
Preferably, the first obtaining module may include:
the first acquisition unit is used for acquiring a rotation variable from a first camera coordinate system to a screen coordinate system under the first camera coordinate system of the binocular camera and performing rotation correction on the first camera coordinate system according to the rotation variable;
the second translation unit is used for further performing translation correction on the first camera coordinate system after rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system;
and the third acquisition unit is used for acquiring a first coordinate of a target point in the first camera coordinate system after translation correction, and determining the first coordinate as a second coordinate of the target point in the screen coordinate system.
Preferably, the establishing module may include:
the device comprises an establishing unit, a display unit and a control unit, wherein the establishing unit is used for establishing a screen coordinate system of the screen by taking the center of the screen as an origin, taking the optical center connecting line direction of the binocular cameras as an X coordinate axis, taking the component direction of the optical axis of one camera in the binocular cameras on the optical center connecting line vertical plane as a Y coordinate axis, and taking the vector product of the first coordinate axis and the second coordinate axis as a Z coordinate axis; and the optical center connecting line is parallel to the plane where the screen is located.
Preferably, the first obtaining unit may include:
the first acquisition subunit is used for acquiring a first vector from the optical center of the first camera to the optical center of a second camera in the binocular camera under the first camera coordinate system;
the second acquisition subunit is used for acquiring a Z axis of the first camera coordinate system as a second vector;
the third obtaining subunit is configured to obtain an included angle between the first vector and the second vector, and obtain a first rotation variable of the first camera coordinate system rotating around the Y axis of the screen coordinate system according to the included angle between the first vector and the second vector; and/or
The fourth acquisition subunit is used for acquiring a Y axis of the first camera coordinate system as a third vector;
the fifth obtaining subunit is configured to obtain an included angle between the first vector and the third vector, and obtain, according to the included angle between the first vector and the third vector, a second rotation variable of the first camera coordinate system rotating around the Z axis of the screen coordinate system; and/or
A sixth obtaining subunit, configured to obtain an X axis of the first camera coordinate system as a fourth vector;
and the seventh obtaining subunit is configured to obtain an included angle between the first vector and the fourth vector, and obtain a third rotation variable of the first camera coordinate system rotating around the X axis of the screen coordinate system according to the included angle between the first vector and the fourth vector.
Preferably, the first obtaining unit may include:
the eighth acquiring subunit is configured to acquire a third coordinate, in the first camera coordinate system, of each of at least three test points selected in advance on a perpendicular of the optical center connecting line, and to acquire a fourth coordinate, acquired in advance by measurement, of each test point in the screen coordinate system;
and the ninth acquisition subunit is configured to acquire a conversion variable from the first camera coordinate system to the screen coordinate system according to a conversion relationship between the third coordinate and the fourth coordinate.
Preferably, the apparatus may further include:
and the second acquisition module is used for acquiring the coordinates of the human eyes corresponding to the target points under the screen coordinate system according to the second coordinates and the pre-calibrated position relationship between the target points and the human eyes.
Preferably, the first obtaining module may include:
the fourth acquisition unit is used for acquiring pixel coordinates of the target point under the pixel coordinate systems of the at least two cameras and the internal reference matrix of the corresponding camera;
a fifth obtaining unit, configured to obtain a ray equation where the target point is located according to the pixel coordinate and the internal reference matrix;
the first determining unit is used for determining an object point of the target point in the space by adopting a least square criterion according to the ray equation;
and the sixth acquisition unit is used for acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
Preferably, the first obtaining module may include:
the seventh acquiring unit is used for acquiring two pixel points of the target point on two shot images of the binocular camera respectively;
the eighth obtaining unit is used for obtaining rays formed by the two pixel points respectively according to the reversibility of the light path;
the second determining unit is used for determining an object point of the target point in the space according to a common perpendicular line between the two rays;
and the ninth acquisition unit is used for acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
The naked eye three-dimensional tracking device provided by the embodiment of the invention can accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system, accurately acquire the spatial coordinates of the target point in the screen coordinate system, ensure the three-dimensional display effect, and solve the problem that the naked eye three-dimensional display technology in the prior art cannot accurately and efficiently establish the conversion relation between the camera coordinate system and the screen coordinate system.
It should be noted that the apparatus for autostereoscopic tracking is an apparatus corresponding to the above method for autostereoscopic tracking, and all the implementation manners in the above method embodiments are applicable to the embodiment of the apparatus, and the same technical effect can be achieved.
In order to better achieve the above object, an embodiment of the present invention further provides an electronic device, including:
the device comprises a shell, a processor, a memory, a display screen, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for performing the steps of:
establishing a screen coordinate system of a screen;
performing binocular optical axis parallel correction on a binocular camera;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
The electronic device exists in a variety of forms, including but not limited to:
(1) a mobile communication device: such devices are characterized by mobile communication capabilities and are primarily targeted at providing voice and data communications. Such terminals include: smart phones (e.g., iPhones or Android operating system based phones), multimedia phones, feature phones, and low-end phones, etc.;
(2) ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PC, PDA, MID, and UMPC devices, etc., such as iPad;
(3) a portable entertainment device: such devices can display and play multimedia content. This type of device comprises: audio, video players (e.g., ipods), handheld game consoles, electronic books, and smart toys and portable car navigation devices;
(4) and other electronic devices with data interaction functions.
It should be noted that the electronic device provided in the embodiment of the present invention is an electronic device to which the above method for naked eye stereo tracking can be applied, and all embodiments and advantageous effects of the above method for naked eye stereo tracking are applicable to the electronic device.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (19)
1. A method of autostereoscopic tracking, the method comprising:
establishing a screen coordinate system of a screen;
performing binocular optical axis parallel correction on a binocular camera;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
2. The method of claim 1, wherein the obtaining a transformation variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system, and obtaining a second coordinate of the target point in the screen coordinate system according to the transformation variable and the first coordinate comprises:
under a first camera coordinate system of the binocular camera, acquiring a rotation variable from the first camera coordinate system to the screen coordinate system, and performing rotation correction on the first camera coordinate system according to the rotation variable;
acquiring a first coordinate of a target point in the first camera coordinate system after rotation correction;
and further carrying out translation correction on the first camera coordinate system after the rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system, and determining, as the second coordinate, the coordinate of the first coordinate in the first camera coordinate system after the translation correction.
3. The method of claim 1, wherein the obtaining a transformation variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point in the first camera coordinate system, and obtaining a second coordinate of the target point in the screen coordinate system according to the transformation variable and the first coordinate comprises:
under a first camera coordinate system of the binocular camera, acquiring a rotation variable from the first camera coordinate system to the screen coordinate system, and performing rotation correction on the first camera coordinate system according to the rotation variable;
further carrying out translation correction on the first camera coordinate system after rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system;
and acquiring a first coordinate of a target point in the first camera coordinate system after translation correction, and determining the first coordinate as a second coordinate of the target point in the screen coordinate system.
4. The method of claim 2 or 3, wherein the establishing a screen coordinate system of a screen comprises:
taking the center of the screen as an origin, taking the direction of the line connecting the optical centers of the binocular cameras as an X coordinate axis, taking, as a Y coordinate axis, the direction of the component of the optical axis of one of the binocular cameras in the plane perpendicular to the optical center connecting line, and taking the vector product of the X coordinate axis and the Y coordinate axis as a Z coordinate axis, and establishing a screen coordinate system of the screen;
and the optical center connecting line is parallel to the plane where the screen is located.
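The axis construction of claim 4 can be sketched as follows (function and variable names are assumptions for illustration): X follows the line joining the two optical centers, Y is the component of one camera's optical axis perpendicular to that line, and Z is their cross product.

```python
import numpy as np

def screen_axes(c1, c2, optical_axis):
    """Build the screen coordinate axes of claim 4 from the two optical
    centers c1, c2 and one camera's optical axis direction."""
    x = c2 - c1
    x = x / np.linalg.norm(x)
    # Component of the optical axis in the plane perpendicular to X:
    y = optical_axis - np.dot(optical_axis, x) * x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)          # vector product of the X and Y axes
    return x, y, z

# Assumed example: cameras 10 cm apart, optical axis tilted slightly up.
x, y, z = screen_axes(np.array([0.0, 0.0, 0.0]),
                      np.array([0.1, 0.0, 0.0]),
                      np.array([0.0, 0.1, 1.0]))
```

By construction the three axes are mutually orthogonal unit vectors, so the screen frame is right-handed.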
5. The method of claim 4, wherein the obtaining of the rotation variables of the first camera coordinate system to the screen coordinate system under the first camera coordinate system of the binocular camera comprises:
acquiring a first vector from the optical center of the first camera to the optical center of a second camera in the binocular camera under the first camera coordinate system;
acquiring a Z axis of the first camera coordinate system as a second vector;
acquiring an included angle between the first vector and the second vector, and acquiring a first rotation variable of the first camera coordinate system rotating around the Y axis of the screen coordinate system according to the included angle between the first vector and the second vector; and/or
Acquiring a Y axis of the first camera coordinate system as a third vector;
acquiring an included angle between the first vector and the third vector, and acquiring a second rotation variable of the first camera coordinate system rotating around the Z axis of the screen coordinate system according to the included angle between the first vector and the third vector; and/or
Acquiring an X axis of the first camera coordinate system as a fourth vector;
and acquiring an included angle between the first vector and the fourth vector, and acquiring a third rotation variable of the first camera coordinate system rotating around the X axis of the screen coordinate system according to the included angle between the first vector and the fourth vector.
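The rotation variables of claim 5 come from included angles between pairs of vectors. A minimal sketch (names assumed; only the Y-axis case is shown) computes the angle via the normalized dot product and forms the corresponding rotation matrix:

```python
import numpy as np

def included_angle(u, v):
    """Included angle between two vectors, as used for claim 5's
    rotation variables."""
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def rot_y(theta):
    """Rotation matrix for a rotation by theta about the screen Y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Assumed example: optical-center line vs. the camera Z axis.
theta = included_angle(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
R_y = rot_y(theta)
```

Analogous matrices about Z and X would use the second and third rotation variables of the claim.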
6. The method of claim 4, wherein the obtaining, under the first camera coordinate system of the binocular camera, a transformation variable of the first camera coordinate system to the screen coordinate system comprises:
acquiring, in the first camera coordinate system, a third coordinate of each of at least three test points selected in advance on a perpendicular to the optical center connecting line, and acquiring a fourth coordinate of each test point in the screen coordinate system, obtained in advance by measurement;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system according to the conversion relation between the third coordinate and the fourth coordinate.
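Claim 6 derives the conversion variables from paired coordinates of the same points in both frames. One standard way to do this (a sketch, not necessarily the patent's method; the Kabsch/Procrustes fit shown here needs at least three non-collinear pairs) is:

```python
import numpy as np

def fit_rigid_transform(cam_pts, screen_pts):
    """Estimate R, t with screen = R @ cam + t from paired points
    (the 'conversion relation' of claim 6), via the Kabsch method."""
    cam_pts = np.asarray(cam_pts, dtype=float)
    screen_pts = np.asarray(screen_pts, dtype=float)
    cc, sc = cam_pts.mean(axis=0), screen_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (screen_pts - sc)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ cc
    return R, t

# Assumed correspondences: screen frame = camera frame rotated 90 degrees
# about Z and shifted by [1, 2, 3].
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
cam = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
screen = cam @ R_true.T + t_true
R_est, t_est = fit_rigid_transform(cam, screen)
```

Note that with points chosen on a single line, as the claim permits, the rotation about that line is not determined by the fit alone and must come from other constraints.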
7. The method of claim 1, wherein after obtaining the second coordinate of the target point in the screen coordinate system according to the transformed variable and the first coordinate, the method further comprises:
and acquiring the coordinates of the human eyes corresponding to the target points under the screen coordinate system according to the second coordinates and the pre-calibrated position relationship between the target points and the human eyes.
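Claim 7's step is a vector addition: the eye position in screen coordinates is the tracked target point plus a pre-calibrated target-to-eye offset. A tiny sketch with assumed values:

```python
import numpy as np

# Second coordinate of the target point in the screen frame (metres),
# and a target-to-eye offset calibrated beforehand (values assumed).
p_target = np.array([0.00, 0.05, 0.60])
eye_offset = np.array([0.0, 0.03, 0.0])
p_eye = p_target + eye_offset
```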
8. The method of claim 1, wherein obtaining first coordinates of a target point in the first camera coordinate system comprises:
acquiring pixel coordinates of the target point under the pixel coordinate systems of at least two cameras and the internal reference matrix of the corresponding camera;
acquiring a ray equation where the target point is located according to the pixel coordinate and the internal reference matrix;
determining the object point of the target point in the space by adopting a least square criterion according to the ray equation;
and acquiring the coordinate of the object point under the first camera coordinate system as the first coordinate.
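The ray-based triangulation of claim 8 can be sketched as follows (names, the intrinsic matrix, and the example geometry are assumptions): each pixel is back-projected through its camera's intrinsic matrix to a ray, and the object point is the least-squares point closest to all rays, found by a small linear solve.

```python
import numpy as np

def triangulate_least_squares(centers, pixels, Ks):
    """Claim 8 sketch: find the space point minimizing the summed squared
    distance to the back-projected rays. All rays are assumed expressed
    in one common (first camera) coordinate system."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, (u, v), K in zip(centers, pixels, Ks):
        d = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Assumed setup: two identical cameras 10 cm apart (after parallel
# correction), observing a point at [0.1, 0, 1] in the first camera frame.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
centers = [np.zeros(3), np.array([0.1, 0.0, 0.0])]
pixels = [(370.0, 240.0), (320.0, 240.0)]
p = triangulate_least_squares(centers, pixels, [K, K])
```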
9. The method of claim 1, wherein obtaining first coordinates of a target point in the first camera coordinate system comprises:
acquiring two pixel points of the target point on two shot images of the binocular camera respectively;
obtaining rays respectively formed by the two pixel points according to the reversibility of the light path;
determining an object point of the target point in the space according to a common perpendicular line between the two rays;
and acquiring the coordinate of the object point under the first camera coordinate system as the first coordinate.
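The common-perpendicular construction of claim 9 can be sketched with the standard closest-points formula for two skew lines (names and the example rays are assumptions); the object point is taken as the midpoint of the common perpendicular segment:

```python
import numpy as np

def midpoint_of_skew_rays(c1, d1, c2, d2):
    """Claim 9 sketch: c1, c2 are the optical centers (ray origins) and
    d1, d2 the ray directions; returns the midpoint of the common
    perpendicular between the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom      # parameter on ray 1
    u = (a * e - b * d) / denom      # parameter on ray 2
    p1, p2 = c1 + s * d1, c2 + u * d2
    return (p1 + p2) / 2.0

# Assumed example: two rays that actually intersect at [0.1, 0, 1].
p = midpoint_of_skew_rays(np.zeros(3), np.array([0.1, 0.0, 1.0]),
                          np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

For rays that truly intersect, the midpoint coincides with the intersection; with noisy pixel measurements it gives a symmetric compromise between the two rays.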
10. An apparatus for autostereoscopic tracking, the apparatus comprising:
the establishing module is used for establishing a screen coordinate system of a screen;
the correction module is used for carrying out binocular optical axis parallel correction on a binocular camera;
the first acquisition module is used for acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
11. The apparatus of claim 10, wherein the first obtaining module comprises:
the first acquisition unit is used for acquiring a rotation variable from a first camera coordinate system to a screen coordinate system under the first camera coordinate system of the binocular camera and performing rotation correction on the first camera coordinate system according to the rotation variable;
the second acquisition unit is used for acquiring a first coordinate of a target point in the first camera coordinate system after rotation correction;
and the first translation unit is used for further performing translation correction on the rotation-corrected first camera coordinate system according to an origin translation variable from the first camera coordinate system to the screen coordinate system, and determining the coordinates of the first coordinates in the translation-corrected first camera coordinate system as the second coordinates.
12. The apparatus of claim 10, wherein the first obtaining module comprises:
the first acquisition unit is used for acquiring a rotation variable from a first camera coordinate system to a screen coordinate system under the first camera coordinate system of the binocular camera and performing rotation correction on the first camera coordinate system according to the rotation variable;
the second translation unit is used for further performing translation correction on the first camera coordinate system after rotation correction according to an origin translation variable from the first camera coordinate system to the screen coordinate system;
and the third acquisition unit is used for acquiring a first coordinate of a target point in the first camera coordinate system after translation correction, and determining the first coordinate as a second coordinate of the target point in the screen coordinate system.
13. The apparatus of claim 11 or 12, wherein the establishing module comprises:
an establishing unit, configured to establish a screen coordinate system of the screen by taking the center of the screen as an origin, taking the direction of the line connecting the optical centers of the binocular cameras as an X coordinate axis, taking, as a Y coordinate axis, the direction of the component of the optical axis of one of the binocular cameras in the plane perpendicular to the optical center connecting line, and taking the vector product of the X coordinate axis and the Y coordinate axis as a Z coordinate axis;
and the optical center connecting line is parallel to the plane where the screen is located.
14. The apparatus of claim 13, wherein the first obtaining unit comprises:
the first acquisition subunit is used for acquiring a first vector from the optical center of the first camera to the optical center of a second camera in the binocular camera under the first camera coordinate system;
the second acquisition subunit is used for acquiring a Z axis of the first camera coordinate system as a second vector;
the third obtaining subunit is configured to obtain an included angle between the first vector and the second vector, and obtain a first rotation variable of the first camera coordinate system rotating around a Y axis of the screen coordinate system according to the included angle between the first vector and the second vector; and/or
The fourth acquisition subunit is used for acquiring a Y axis of the first camera coordinate system as a third vector;
a fifth obtaining subunit, configured to obtain an included angle between the first vector and the third vector, and obtain, according to the included angle between the first vector and the third vector, a second rotation variable of the first camera coordinate system rotating around a Z axis of the screen coordinate system; and/or
A sixth obtaining subunit, configured to obtain an X axis of the first camera coordinate system as a fourth vector;
and the seventh obtaining subunit is configured to obtain an included angle between the first vector and the fourth vector, and obtain a third rotation variable of the first camera coordinate system rotating around the X axis of the screen coordinate system according to the included angle between the first vector and the fourth vector.
15. The apparatus of claim 13, wherein the first obtaining unit comprises:
the eighth acquiring subunit is configured to acquire, in the first camera coordinate system, a third coordinate of each of at least three test points selected in advance on a perpendicular to the optical center connecting line, and to acquire a fourth coordinate of each test point in the screen coordinate system, obtained in advance by measurement;
and the ninth acquisition subunit is configured to acquire a conversion variable from the first camera coordinate system to the screen coordinate system according to a conversion relationship between the third coordinate and the fourth coordinate.
16. The apparatus of claim 10, further comprising:
and the second acquisition module is used for acquiring the coordinates of the human eyes corresponding to the target points under the screen coordinate system according to the second coordinates and the pre-calibrated position relationship between the target points and the human eyes.
17. The apparatus of claim 10, wherein the first obtaining module comprises:
the fourth acquisition unit is used for acquiring pixel coordinates of the target point under the pixel coordinate systems of the at least two cameras and the internal reference matrix of the corresponding camera;
a fifth obtaining unit, configured to obtain a ray equation where the target point is located according to the pixel coordinate and the internal reference matrix;
the first determining unit is used for determining an object point of the target point in the space by adopting a least square criterion according to the ray equation;
and the sixth acquisition unit is used for acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
18. The apparatus of claim 10, wherein the first obtaining module comprises:
the seventh acquiring unit is used for acquiring two pixel points of the target point on two shot images of the binocular camera respectively;
the eighth obtaining unit is used for obtaining rays formed by the two pixel points respectively according to the reversibility of the light path;
the second determining unit is used for determining an object point of the target point in the space according to a common perpendicular line between the two rays;
and the ninth acquisition unit is used for acquiring the coordinate of the object point in the first camera coordinate system as the first coordinate.
19. An electronic device, comprising:
the device comprises a shell, a processor, a memory, a display screen, a circuit board and a power supply circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or component of the electronic device; the memory is used for storing executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the following steps:
establishing a screen coordinate system of a screen;
performing binocular optical axis parallel correction on a binocular camera;
and acquiring a conversion variable from the first camera coordinate system to the screen coordinate system and a first coordinate of a target point under the first camera coordinate system of the binocular camera, and acquiring a second coordinate of the target point under the screen coordinate system according to the conversion variable and the first coordinate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510970248.5A CN106251323A (en) | 2015-12-22 | 2015-12-22 | Method, device and the electronic equipment of a kind of bore hole three-dimensional tracking |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106251323A true CN106251323A (en) | 2016-12-21 |
Family
ID=57626494
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510970248.5A Pending CN106251323A (en) | 2015-12-22 | 2015-12-22 | Method, device and the electronic equipment of a kind of bore hole three-dimensional tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106251323A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108012139A (en) * | 2017-12-01 | 2018-05-08 | 北京理工大学 | The image generating method and device shown applied to the nearly eye of the sense of reality |
CN108133366A (en) * | 2017-12-22 | 2018-06-08 | 恒宝股份有限公司 | The method of payment and payment system and mobile terminal of a kind of fiscard |
CN109842793A (en) * | 2017-09-22 | 2019-06-04 | 深圳超多维科技有限公司 | A kind of naked eye 3D display method, apparatus and terminal |
CN110309486A (en) * | 2019-06-24 | 2019-10-08 | 宁波大学 | Coordinate transformation method and laser microprobe dating method |
CN110390686A (en) * | 2019-07-24 | 2019-10-29 | 张天 | Naked eye 3D display method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103091849A (en) * | 2011-11-08 | 2013-05-08 | 原创奈米科技股份有限公司 | Three-dimensional image display method |
CN105005986A (en) * | 2015-06-19 | 2015-10-28 | 北京邮电大学 | Three-dimensional registering method and apparatus |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109842793A (en) * | 2017-09-22 | 2019-06-04 | 深圳超多维科技有限公司 | A kind of naked eye 3D display method, apparatus and terminal |
CN108012139A (en) * | 2017-12-01 | 2018-05-08 | 北京理工大学 | The image generating method and device shown applied to the nearly eye of the sense of reality |
CN108012139B (en) * | 2017-12-01 | 2019-11-29 | 北京理工大学 | The image generating method and device shown applied to the nearly eye of the sense of reality |
CN108133366A (en) * | 2017-12-22 | 2018-06-08 | 恒宝股份有限公司 | The method of payment and payment system and mobile terminal of a kind of fiscard |
CN110309486A (en) * | 2019-06-24 | 2019-10-08 | 宁波大学 | Coordinate transformation method and laser microprobe dating method |
CN110309486B (en) * | 2019-06-24 | 2022-12-09 | 宁波大学 | Coordinate conversion method and laser microdissection method |
CN110390686A (en) * | 2019-07-24 | 2019-10-29 | 张天 | Naked eye 3D display method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11928838B2 (en) | Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display | |
CN108765498B (en) | Monocular vision tracking, device and storage medium | |
CN105704468B (en) | Stereo display method, device and electronic equipment for virtual and reality scene | |
US10269139B2 (en) | Computer program, head-mounted display device, and calibration method | |
US10019831B2 (en) | Integrating real world conditions into virtual imagery | |
US20180332222A1 (en) | Method and apparatus for obtaining binocular panoramic image, and storage medium | |
US9324190B2 (en) | Capturing and aligning three-dimensional scenes | |
CN106251323A (en) | Method, device and the electronic equipment of a kind of bore hole three-dimensional tracking | |
US20140192164A1 (en) | System and method for determining depth information in augmented reality scene | |
CN105704478A (en) | Stereoscopic display method, device and electronic equipment used for virtual and reality scene | |
CN110335307B (en) | Calibration method, calibration device, computer storage medium and terminal equipment | |
CN104169965A (en) | Systems, methods, and computer program products for runtime adjustment of image warping parameters in a multi-camera system | |
CN106383596A (en) | VR (virtual reality) dizzy prevention system and method based on space positioning | |
CN104204848B (en) | There is the search equipment of range finding camera | |
CN106231292B (en) | A kind of stereoscopic Virtual Reality live broadcasting method, device and equipment | |
US20120293549A1 (en) | Computer-readable storage medium having information processing program stored therein, information processing apparatus, information processing system, and information processing method | |
CN110377148A (en) | Computer-readable medium, the method for training object detection algorithm and training equipment | |
CN110337674A (en) | Three-dimensional rebuilding method, device, equipment and storage medium | |
CN107015655A (en) | Museum virtual scene AR experiences eyeglass device and its implementation | |
CN106296589A (en) | The processing method and processing device of panoramic picture | |
CN108668108A (en) | A kind of method, apparatus and electronic equipment of video monitoring | |
CN108174178A (en) | A kind of method for displaying image, device and virtual reality device | |
CN104216533B (en) | A kind of wear-type virtual reality display based on DirectX9 | |
KR101545633B1 (en) | Method and System for Vehicle Stereo Camera Calibration | |
CN106228613B (en) | A kind of construction method, device and the stereoscopic display device of virtual three-dimensional scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2016-12-21