CN109272570B - Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model - Google Patents

Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model

Info

Publication number
CN109272570B
CN109272570B (application CN201810931481.6A)
Authority
CN
China
Prior art keywords
camera
coordinate system
target point
image
right camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810931481.6A
Other languages
Chinese (zh)
Other versions
CN109272570A (en)
Inventor
张进
柴志文
邓华夏
余寰
马孟超
钟翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN201810931481.6A
Publication of CN109272570A
Application granted
Publication of CN109272570B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a high-precision method for solving the three-dimensional coordinates of space points in a world coordinate system, based on a stereoscopic vision mathematical model that accounts for multiple factors among the internal parameters. Based on a binocular stereo vision system, initial values of the left and right camera parameters are first obtained through calibration and then iteratively optimized on the principle of minimizing the image reprojection error, yielding the unequal focal lengths of the left camera, the unequal focal lengths of the right camera, and the offsets of the left and right camera optical axes in the left and right image coordinate systems. The same target point is then photographed, and the optimized left and right camera parameters improve the precision with which its three-dimensional information is recovered. Relations are established between the left image coordinate system and the world coordinate system, between the right camera coordinate system and the world coordinate system, and between the right image coordinate system and the right camera coordinate system; through these relations, the three-dimensional coordinates of the space points are solved with high precision.

Description

Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
Technical Field
The invention relates to the technical field of three-dimensional reconstruction of photogrammetry, in particular to a space point three-dimensional coordinate solving method based on a stereoscopic vision mathematical model.
Background
Photogrammetry techniques are widely used in artificial intelligence, vision measurement and robotics. Binocular three-dimensional reconstruction for vision measurement involves image acquisition, camera calibration, image correction, image feature extraction and matching, and three-dimensional point computation. Camera calibration typically uses a checkerboard method, which sits between traditional calibration and self-calibration; for nonlinear distortion, correction parameters reflecting the distortion can be introduced and the correction coefficients solved from control points or by other methods to rectify the image; a typical feature extraction and matching algorithm is the extraction and matching of SIFT points. These steps yield the projection coordinates of the spatial three-dimensional points on the image plane.
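As a concrete illustration of this standard pipeline (a minimal sketch only, not part of the patent; OpenCV is assumed as the tooling, and the checkerboard size, image file names and ratio-test threshold are invented placeholders):

import cv2
import numpy as np
import glob

pattern = (9, 6)                                   # inner-corner count of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib_*.png"):             # placeholder calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic matrix, distortion coefficients and per-view extrinsics of one camera
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# Undistort a stereo pair, then extract and match SIFT feature points
left = cv2.undistort(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE), K, dist)
right = cv2.undistort(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE), K, dist)
sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test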
As the last step of three-dimensional reconstruction, the three-dimensional points in a binocular stereo vision system are usually computed by a parallax-based method: imaging devices acquire two images of the measured object from different positions, and the three-dimensional geometric information of the object is obtained from the positional disparity between corresponding image points. However, when no particular requirement is placed on the placement of the two cameras, and especially when the vertical offset between the two cameras is large in a head-up binocular stereoscopic vision system, the approach of rectifying the two image planes into exact row alignment and then solving from the disparity and the reprojection matrix sometimes cannot recover the three-dimensional coordinates of the space points, and in particular the depth information, with high precision.
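For contrast, a minimal sketch of the parallax and reprojection-matrix route described above (OpenCV assumed; the intrinsics, extrinsics and disparity values below are invented stand-ins for real calibration and block-matching results):

import cv2
import numpy as np

K_L = np.array([[1000.0, 0, 640], [0, 1010.0, 360], [0, 0, 1]])
K_R = np.array([[995.0, 0, 645], [0, 1005.0, 355], [0, 0, 1]])
dist = np.zeros(5)
R = np.eye(3)                                      # relative rotation between the cameras
T = np.array([-120.0, 0.0, 0.0])                   # baseline, e.g. in millimetres
image_size = (1280, 720)

# Rectify both views so that epipolar lines become image rows, then use the
# 4x4 reprojection matrix Q: [X Y Z W]^T = Q @ [u v d 1]^T, 3D point = (X/W, Y/W, Z/W).
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K_L, dist, K_R, dist, image_size, R, T)
disparity = np.full((720, 1280), 16.0, np.float32) # stands in for a stereo-matching result
points_3d = cv2.reprojectImageTo3D(disparity, Q)   # per-pixel (X, Y, Z)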
A binocular-vision-based large-size geometric measurement method is proposed in patent application No. CN201610429820.1 by Liuhongxia Xixia et al.: a calibration method combining a planar target with active vision is first used to accurately calibrate the parameters of the vision system, and the spatial coordinates of the target point are then computed from the binocular stereo vision model. However, in that solution each of the left and right cameras is modelled with only one effective focal length, and the offset of the camera optical axis in the image coordinate system is not considered.
An omnidirectional stereoscopic vision three-dimensional reconstruction method based on a Taylor series model is proposed in patent application No. CN200810120794.X of Zhejiang University et al.: the Taylor series model is used to calibrate an omnidirectional vision sensor and obtain the camera internal parameters, epipolar rectification is applied to the captured omnidirectional stereo image pair so that the rectified polar conic coincides with the image scan line, feature points are matched on the rectified stereo pair, and the three-dimensional coordinates of the points are computed from the matching result.
An image-based three-dimensional reconstruction method is proposed in patent application No. CN200810224347.9 of Li Renzhen et al. of Beijing University: a group of images is acquired, a point on one image is designated as the point to be reconstructed, a 128-dimensional descriptor of the target point is computed, and the three-dimensional coordinates of the feature point are then solved back from the descriptor.
A binocular three-dimensional reconstruction method is proposed in patent application No. CN201210543958.6 of Xuxian et al. of South China University: the images are subjected to layered feature extraction and matching according to the strength of the image feature lines, the number of matching passes is determined by the reconstruction requirements, and the three-dimensional coordinates in physical space are computed from the binocular parallax principle and the known parameters of the binocular lenses.
In practice, however, because of defects in the digital camera sensor, non-uniform scaling of the image in post-processing, unintended distortion introduced by the lens, use of an anamorphic format in which the lens compresses a wide-screen scene onto a standard-size sensor, errors in camera calibration, and so on, a camera effectively has two different focal length parameters; and since the intersection of the imager with the optical axis is unlikely to lie exactly at the centre of the imager, there is always some offset. Computing the three-dimensional coordinates of space points with a single equivalent focal length therefore degrades their precision and, in turn, the quality of the whole three-dimensional reconstruction.
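The two effects named here are precisely the entries of a full pinhole intrinsic matrix; a small illustration with invented numbers (not taken from the patent):

import numpy as np

fx, fy = 1453.2, 1461.8              # two unequal effective focal lengths, in pixels
cx, cy = 652.4, 331.7                # principal point: offset of the optical axis in the image
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Projection of a camera-frame point (Xc, Yc, Zc) under this model; replacing fx and fy
# by a single equivalent focal length, or (cx, cy) by the image centre, biases (u, v).
Xc, Yc, Zc = 0.10, -0.05, 2.0
u = fx * Xc / Zc + cx
v = fy * Yc / Zc + cy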
Disclosure of Invention
In view of the above problems in the prior art, an object of the present invention is to provide a method for solving three-dimensional coordinates of spatial points based on a stereoscopic vision mathematical model, which realizes high-precision solution of three-dimensional coordinates of spatial points in a world coordinate system.
The aim of the invention is the high-precision solution of the three-dimensional coordinates of space points under binocular stereoscopic vision. The current mathematical model of binocular stereo vision considers only one effective focal length for each of the left and right cameras and ignores the offset of the camera optical axis in the image coordinate system. In the invention, after the initial values of the internal parameter matrices and the external parameters of the binocular stereoscopic vision system are obtained through calibration, these parameters are optimized. The same target point is photographed, and the binocular stereo vision mathematical model fully accounts for the unequal focal lengths of the left camera, the unequal focal lengths of the right camera and the offsets of the camera optical axes in the left and right image coordinate systems, reducing their impact on the recovery of the three-dimensional information of the target point. The relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz is established, the relation between the right camera coordinate system O_r-x_ry_rz_r and the world coordinate system O-xyz is established, and the relation between the right image coordinate system O_R-X_RY_R and the right camera coordinate system O_r-x_ry_rz_r is established; through the relationships among the left image coordinate system, the right image coordinate system, the world coordinate system and the right camera coordinate system, the three-dimensional coordinates of the space points are solved with high precision.
Specifically, in order to achieve the above object, the present invention provides a method for solving the three-dimensional coordinates of space points based on a stereoscopic vision mathematical model. The method is based on a binocular stereoscopic vision system and comprises:
S1, obtaining the camera parameters of the left camera and the right camera in the binocular stereoscopic vision system through calibration;
S2, optimizing the camera parameters according to the principle of minimizing the image reprojection error, to obtain the final camera parameters of the left camera and the right camera respectively (a tooling sketch of S1 and S2 follows this list);
S3, photographing a target point simultaneously with the binocular stereo vision system, and obtaining the coordinates (X_L, Y_L) of the matched target point in the left image taken by the left camera and the coordinates (X_R, Y_R) of the matched target point in the right image taken by the right camera;
S4, establishing, according to the perspective transformation model of the camera, the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz; with the scale factor denoted S_L, obtaining from the camera parameters of the left and right cameras the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L;
S5, establishing the relation between the right camera coordinate system O_r-x_ry_rz_r and the world coordinate system O-xyz; with the scale factor denoted S_R, obtaining the relation between the right camera coordinate system and z in the three-dimensional coordinates of the space point from the camera parameters of the left and right cameras and the relationship between the three-dimensional coordinates (x, y, z) and the scale factor S_L;
S6, establishing, according to the perspective transformation model of the camera, the relation between the right image coordinate system O_R-X_RY_R and the right camera coordinate system O_r-x_ry_rz_r, and establishing the correspondence between the image planes of the left and right cameras from the relation between the right camera coordinate system and z in the three-dimensional coordinates of the space point;
S7, obtaining the two values z1 and z2 of z in the three-dimensional coordinates of the target point from the correspondence between the left and right image planes;
S8, obtaining the expected value of z in the three-dimensional coordinates of the target point from the two values z1 and z2, and then obtaining the x and y values of the target point from the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L.
In some embodiments, the camera parameters of the left camera and the right camera comprise: the left camera focal lengths f_Lx, f_Ly and the offset position C_Lx, C_Ly of the left camera optical axis in the left image coordinate system; the right camera focal lengths f_Rx, f_Ry and the offset position C_Rx, C_Ry of the right camera optical axis in the right image coordinate system; and the rotation matrix R and translation vector T between the left and right camera coordinate systems.
In some embodiments, in step S2, the two-dimensional projection of the target point is computed from the calibration result, the computed result is compared with the actual measurement, and a nonlinear global optimization objective function is established on the principle of minimizing the difference between the two [the objective function is shown only as an image in the original], where m_ij is the measured image point, A is the internal parameter matrix, K1 and K2 represent the radial distortion parameters, W is the external parameter matrix, and M_j is the spatial point in the world coordinate system matched to the image point m_ij.
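Since the objective appears only as an image, the sketch below assumes the standard reprojection-error form, minimizing the sum over views i and points j of ||m_ij - projection(A, K1, K2, W_i, M_j)||^2; OpenCV's projectPoints stands in for the projection, and the toy data are invented:

import cv2
import numpy as np

def view_reprojection_error(M, m_measured, A, K1, K2, rvec, tvec):
    # Sum over j of || m_ij - projection(A, K1, K2, W, M_j) ||^2 for one view,
    # with the view's external parameters W given as (rvec, tvec).
    dist = np.array([K1, K2, 0.0, 0.0, 0.0])        # OpenCV order: k1, k2, p1, p2, k3
    m_proj, _ = cv2.projectPoints(M, rvec, tvec, A, dist)
    return float(np.sum((m_proj.reshape(-1, 2) - m_measured) ** 2))

A = np.array([[1000.0, 0, 640], [0, 1005.0, 360], [0, 0, 1]])   # internal parameter matrix
M = np.random.rand(10, 3) + np.array([0.0, 0.0, 2.0])           # world points in front of the camera
rvec, tvec = np.zeros(3), np.zeros(3)                           # external parameters of this view
m_measured, _ = cv2.projectPoints(M, rvec, tvec, A, np.zeros(5))
print(view_reprojection_error(M, m_measured.reshape(-1, 2), A, 0.0, 0.0, rvec, tvec))   # ~0 here

The full step S2 would minimize the sum of this quantity over all views with respect to A, K1, K2 and the per-view external parameters, for example with a Levenberg-Marquardt solver.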
In some embodiments, the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L is:
x = S_L (X_L - C_Lx) / f_Lx (1)
y = S_L (Y_L - C_Ly) / f_Ly (2)
z = S_L (3)
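Equations (1)-(3) transcribe directly into code. The sketch below follows the document's convention (see claim 10) that the world coordinate system coincides with the left camera coordinate system, so the scale factor S_L is simply the depth z of the target point:

def left_backproject(X_L, Y_L, S_L, f_Lx, f_Ly, C_Lx, C_Ly):
    x = S_L * (X_L - C_Lx) / f_Lx        # equation (1)
    y = S_L * (Y_L - C_Ly) / f_Ly        # equation (2)
    z = S_L                              # equation (3)
    return x, y, z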
In some embodiments, the relationship between the right camera coordinate system and z in the three-dimensional coordinates of the space points is given by equation (4) [shown only as an image in the original].
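Because equation (4) survives only as an image, the sketch below is an assumed reconstruction from the surrounding text rather than the patent's exact formula: the right camera frame is related to the world (left camera) frame by the rigid transform R, T, so z_r is the third component of R @ (x, y, z) + T, and with equations (1)-(3) substituted it is linear in the scale factor S_L:

import numpy as np

def right_camera_coords(p_world, R, T):
    # Map a world/left-frame point (x, y, z) into the right camera frame; the
    # last component of the result is z_r, the quantity equation (4) expresses.
    return R @ np.asarray(p_world, dtype=float) + np.asarray(T, dtype=float).ravel()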
in some embodiments, the correspondence between the image planes of the left camera and the right camera is:
Figure BDA0001766734610000043
in some embodiments, the two values Z1 and Z2 of Z in the three-dimensional coordinates of the target point are derived by:
Figure BDA0001766734610000051
Figure BDA0001766734610000052
Figure BDA0001766734610000053
Figure BDA0001766734610000054
wherein A and B are internal parameter matrixes.
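Equations (6)-(9) are likewise available only as images; the function below is an assumed but standard derivation consistent with the text. Writing the target point as (x, y, z) = z * (A, B, 1) with A = (X_L - C_Lx)/f_Lx and B = (Y_L - C_Ly)/f_Ly (so that x = Az and y = Bz, cf. equations (11)-(12)), mapping it into the right camera frame with R and T, and imposing the right-camera pinhole projection gives one linear equation in z per right-image coordinate, hence the two estimates z1 and z2:

import numpy as np

def solve_z1_z2(X_L, Y_L, X_R, Y_R, K_L, K_R, R, T):
    f_Lx, f_Ly, C_Lx, C_Ly = K_L[0, 0], K_L[1, 1], K_L[0, 2], K_L[1, 2]
    f_Rx, f_Ry, C_Rx, C_Ry = K_R[0, 0], K_R[1, 1], K_R[0, 2], K_R[1, 2]
    A = (X_L - C_Lx) / f_Lx                       # so that x = A * z
    B = (Y_L - C_Ly) / f_Ly                       # so that y = B * z
    c = R @ np.array([A, B, 1.0])                 # direction of the left-camera ray in the right frame
    t1, t2, t3 = np.asarray(T, dtype=float).ravel()
    # (X_R - C_Rx) * z_r = f_Rx * x_r, with x_r = c[0]*z + t1 and z_r = c[2]*z + t3
    z1 = (f_Rx * t1 - (X_R - C_Rx) * t3) / ((X_R - C_Rx) * c[2] - f_Rx * c[0])
    # (Y_R - C_Ry) * z_r = f_Ry * y_r, with y_r = c[1]*z + t2
    z2 = (f_Ry * t2 - (Y_R - C_Ry) * t3) / ((Y_R - C_Ry) * c[2] - f_Ry * c[1])
    return z1, z2, A, B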
In some embodiments, the three-dimensional coordinates of the target point are determined by equation (10) [shown only as an image in the original], which gives the expected value of z, together with:
x = Az (11)
y = Bz (12)
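A short sketch completing the computation: equation (10) is reproduced only as an image, so the arithmetic mean of z1 and z2 is assumed here as the expected value; equations (11) and (12) then recover x and y:

def target_point(z1, z2, A, B):
    z = 0.5 * (z1 + z2)          # assumed form of equation (10): expected value of z
    return A * z, B * z, z       # x = A*z (11), y = B*z (12)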
in some embodiments, the left camera focal length is f Lx 、f Ly And f is a Lx ≠f Ly The right camera focal length is f Lx 、f Ly And f is Rx ≠f Ry
In some embodiments, in step S4, when establishing the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz, the left camera coordinate system is taken to coincide with the world coordinate system, so that establishing the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz is equivalent to establishing the relation between the left image coordinate system O_L-X_LY_L and the left camera coordinate system.
Compared with the prior art, the invention has the following advantages:
(1) The method is applicable to a head-up binocular stereoscopic vision system, and also to a binocular stereoscopic vision system in which no special requirement is placed on the positions of the two cameras. In particular, when the two cameras in a head-up binocular stereoscopic vision system are not strictly in the assumed working configuration, a parallax-based method sometimes cannot solve the three-dimensional coordinates of the space points with high precision.
(2) In practice, for a variety of reasons, a single pixel on the imager is rectangular rather than square. Two different focal length parameters and the camera optical-axis offsets are therefore introduced, and the camera parameters are optimized on the principle of minimizing the image reprojection error, which greatly improves their accuracy. Combined with the binocular stereoscopic vision mathematical model, this reduces the influence of the unequal focal lengths of the left and right cameras and of the optical-axis offsets in the image coordinate systems on recovering the three-dimensional information of the target point from the two-dimensional pixel coordinates.
(3) In solving the three-dimensional coordinates of the space points, the expected value of the coordinate z is obtained from the results of different formulas. Actual computations show that the invention further improves the accuracy with which the three-dimensional coordinates of space points are solved in a head-up binocular stereoscopic vision system, and achieves high-precision solution of the three-dimensional coordinates when no special requirement is placed on the positions of the two cameras in the binocular stereo vision system.
Drawings
Fig. 1 is a flow chart of a three-dimensional coordinate solving method of a space point based on a stereoscopic vision mathematical model.
Fig. 2 is a diagram of a mathematical model of binocular stereo vision.
Detailed Description
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, but the present application is not limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the foregoing description should not be considered as limiting, but merely as exemplifications of embodiments. Other modifications within the scope and spirit of the present disclosure will occur to those skilled in the art.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
The process of the present invention will now be described in detail with reference to the accompanying figures 1 and 2 and the specific process steps set out below.
Specifically, in some embodiments, the method of the present invention comprises the following steps:
Step one: obtaining the initial parameters of the left camera and the right camera in the binocular stereoscopic vision system through calibration, wherein these initial parameters comprise: the left camera focal lengths f_Lx, f_Ly; the offset position C_Lx, C_Ly of the left camera optical axis in the left image coordinate system; the right camera focal lengths f_Rx, f_Ry; the offset position C_Rx, C_Ry of the right camera optical axis in the right image coordinate system; and the rotation matrix R and translation vector T between the two camera coordinate systems;
Step two: following maximum likelihood estimation, the optimal solution is the one whose difference from the true value, namely the reprojection error, is minimal, and it is obtained by establishing a nonlinear minimization model; the two-dimensional projections of the space points are computed from the calibration result, the computed results are compared with the actual measurements, and a nonlinear global optimization objective function is established on the principle of minimizing this difference [the objective function is shown only as an image in the original], where m_ij is the measured image point, A is the internal parameter matrix, K1 and K2 represent the radial distortion parameters, W is the external parameter matrix, and M_j is the space point in the world coordinate system matched to the image point m_ij;
Step three: as shown in the binocular stereo vision mathematical model of fig. 2, the target point P is photographed simultaneously by the binocular stereo vision system, and the coordinates (X_L, Y_L) of the matched target point in the left image and the coordinates (X_R, Y_R) of the matched target point in the right image are obtained;
Step four: establishing, according to the camera perspective transformation model, the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz; with the scale factor denoted S_L, obtaining from the two sets of camera parameters of step one the relationship between the three-dimensional coordinates (x, y, z) of the space point and the scale factor S_L:
x = S_L (X_L - C_Lx) / f_Lx (1)
y = S_L (Y_L - C_Ly) / f_Ly (2)
z = S_L (3)
Step five: establishing the relation between the right camera coordinate system O_r-x_ry_rz_r and the world coordinate system O-xyz; with the scale factor denoted S_R, deducing from the rotation matrix R and translation vector T of step one, together with formulas (1), (2) and (3) of step four, the relation (4) between the right camera coordinate system and z in the three-dimensional coordinates of the space point [equation (4) is shown only as an image in the original];
Step six: establishing, according to the camera perspective transformation model, the relation between the right image coordinate system O_R-X_RY_R and the right camera coordinate system O_r-x_ry_rz_r, and establishing from formula (4) of step five the correspondence (5) between the left and right camera image planes [equation (5) is shown only as an image in the original];
Step seven: deducing, from the equality of the corresponding rows of the left and right matrices of formula (5) obtained in step six, the two expressions for z in the three-dimensional coordinates of the space point, given by equations (6)-(9) [shown only as images in the original], wherein A and B are internal parameter matrices;
Step eight: computing the expected value of z in the three-dimensional coordinates of the space point from the values z1 and z2 given by formulas (6) and (7) of step seven [equation (10) is shown only as an image in the original], and then deducing the x and y values of the space point from formulas (1) and (2) of step four:
x = Az (11)
y = Bz (12)
the above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (10)

1. A three-dimensional coordinate solving method for a space point based on a stereoscopic vision mathematical model, the method being based on a binocular stereoscopic vision system and comprising the following steps:
S1, obtaining initial camera parameters of a left camera and a right camera in the binocular stereoscopic vision system through calibration;
S2, optimizing the camera parameters according to the principle of minimizing the image reprojection error, to obtain the final camera parameters of the left camera and the right camera respectively;
S3, photographing a target point simultaneously with the binocular stereo vision system, and obtaining the coordinates (X_L, Y_L) of the matched target point in the left image taken by the left camera and the coordinates (X_R, Y_R) of the matched target point in the right image taken by the right camera;
S4, establishing, according to the perspective transformation model of the camera, the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz; with the scale factor denoted S_L, obtaining from the camera parameters of the left and right cameras the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L;
S5, establishing the relation between the right camera coordinate system O_r-x_ry_rz_r and the world coordinate system O-xyz; with the scale factor denoted S_R, obtaining the relation between the right camera coordinate system and z in the three-dimensional coordinates of the space point from the camera parameters of the left and right cameras and the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L;
S6, establishing, according to the camera perspective transformation model, the relation between the right image coordinate system O_R-X_RY_R and the right camera coordinate system O_r-x_ry_rz_r, and establishing the correspondence between the image planes of the left and right cameras from the relation between the right camera coordinate system and z in the three-dimensional coordinates of the space point;
S7, obtaining the two values z1 and z2 of z in the three-dimensional coordinates of the target point from the correspondence between the left and right image planes;
S8, obtaining the expected value of z in the three-dimensional coordinates of the target point from the two values z1 and z2, and then obtaining the x and y values of the target point from the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L.
2. The method of claim 1, wherein the camera parameters of the left camera and the right camera comprise: the left camera focal lengths f_Lx, f_Ly and the offset position C_Lx, C_Ly of the left camera optical axis in the left image coordinate system; the right camera focal lengths f_Rx, f_Ry and the offset position C_Rx, C_Ry of the right camera optical axis in the right image coordinate system; and the rotation matrix R and translation vector T between the left and right camera coordinate systems.
3. The method according to claim 1, characterized in that in step S2 the two-dimensional projection of the target point is computed from the calibration result, the computed result is compared with the actual measurement, and a nonlinear global optimization objective function is established on the principle of minimizing the difference between the two [the objective function is shown only as an image in the original], where m_ij is the measured image point, A is the internal parameter matrix, K1 and K2 represent the radial distortion parameters, W is the external parameter matrix, and M_j is the target point in the world coordinate system matched to the image point m_ij.
4. The method of claim 2, wherein the relationship between the three-dimensional coordinates (x, y, z) of the target point and the scale factor S_L is:
x = S_L (X_L - C_Lx) / f_Lx (1)
y = S_L (Y_L - C_Ly) / f_Ly (2)
z = S_L (3)
5. The method of claim 2, wherein the relationship between the right camera coordinate system and z in the three-dimensional coordinates of the space points is given by equation (4) [shown only as an image in the original].
6. The method of claim 5, wherein the correspondence between the image planes of the left camera and the right camera is given by equation (5) [shown only as an image in the original].
7. The method according to claim 6, characterized in that the two values z1 and z2 of z in the three-dimensional coordinates of the target point are derived from equations (6)-(9) [shown only as images in the original], wherein A and B are internal parameter matrices.
8. The method of claim 7, wherein the three-dimensional coordinates of the target point are determined by equation (10) [shown only as an image in the original], together with:
x = Az (11)
y = Bz (12)
9. The method of claim 1, wherein the left camera focal lengths are f_Lx and f_Ly with f_Lx ≠ f_Ly, and the right camera focal lengths are f_Rx and f_Ry with f_Rx ≠ f_Ry.
10. The method of claim 1, wherein in step S4, when establishing the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz, the left camera coordinate system coincides with the world coordinate system, so that establishing the relation between the left image coordinate system O_L-X_LY_L and the world coordinate system O-xyz is equivalent to establishing the relation between the left image coordinate system O_L-X_LY_L and the left camera coordinate system.
CN201810931481.6A 2018-08-16 2018-08-16 Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model Active CN109272570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810931481.6A CN109272570B (en) 2018-08-16 2018-08-16 Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810931481.6A CN109272570B (en) 2018-08-16 2018-08-16 Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model

Publications (2)

Publication Number Publication Date
CN109272570A CN109272570A (en) 2019-01-25
CN109272570B true CN109272570B (en) 2022-10-25

Family

ID=65153828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810931481.6A Active CN109272570B (en) 2018-08-16 2018-08-16 Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model

Country Status (1)

Country Link
CN (1) CN109272570B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135277B (en) * 2019-07-05 2022-08-16 南京邮电大学 Human behavior recognition method based on convolutional neural network
CN110332930B (en) * 2019-07-31 2021-09-17 小狗电器互联网科技(北京)股份有限公司 Position determination method, device and equipment
CN110728721B (en) * 2019-10-21 2022-11-01 北京百度网讯科技有限公司 Method, device and equipment for acquiring external parameters
CN110812710B (en) * 2019-10-22 2021-08-13 苏州雷泰智能科技有限公司 Accelerator frame rotation angle measuring system and method and radiotherapy equipment
CN111192321B (en) * 2019-12-31 2023-09-22 武汉市城建工程有限公司 Target three-dimensional positioning method and device
CN111325803B (en) * 2020-02-12 2023-05-12 清华大学深圳国际研究生院 Calibration method for evaluating internal and external participation time synchronization of binocular camera
CN112002016B (en) * 2020-08-28 2024-01-26 中国科学院自动化研究所 Continuous curved surface reconstruction method, system and device based on binocular vision
CN112085770A (en) * 2020-09-10 2020-12-15 上海庞勃特科技有限公司 Binocular multi-target matching and screening method for table tennis track capture
CN112085657B (en) * 2020-09-10 2023-09-26 北京信息科技大学 OCT image stitching method based on binocular stereoscopic vision tracking and retinal vascular characteristics
CN112907654B (en) * 2021-02-08 2024-03-26 上海汽车集团股份有限公司 Method and device for optimizing external parameters of multi-camera, electronic equipment and storage medium
CN113112553B (en) * 2021-05-26 2022-07-29 北京三快在线科技有限公司 Parameter calibration method and device for binocular camera, electronic equipment and storage medium
CN113551611B (en) * 2021-06-15 2022-04-22 西安交通大学 Stereo vision measuring method, system, equipment and storage medium for large-size moving object
CN113513981B (en) * 2021-06-15 2022-10-25 西安交通大学 Multi-target parallel measurement method, system, equipment and storage medium based on binocular stereo vision
CN113538592B (en) * 2021-06-18 2023-10-27 深圳奥锐达科技有限公司 Calibration method and device for distance measuring device and camera fusion system
CN113284196B (en) * 2021-07-20 2021-10-22 杭州先奥科技有限公司 Camera distortion pixel-by-pixel calibration method
CN113724337B (en) * 2021-08-30 2024-02-23 合肥工业大学 Camera dynamic external parameter calibration method and device without depending on tripod head angle
CN113850815B (en) * 2021-11-29 2022-03-08 季华实验室 Workpiece point cloud obtaining method and device, electronic equipment and storage medium
CN114708335B (en) * 2022-03-20 2023-03-14 元橡科技(苏州)有限公司 External parameter calibration system, calibration method, application and storage medium of binocular stereo camera

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358631A (en) * 2017-06-27 2017-11-17 大连理工大学 A kind of binocular vision method for reconstructing for taking into account three-dimensional distortion
CN107907048A (en) * 2017-06-30 2018-04-13 长沙湘计海盾科技有限公司 A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9509979B2 (en) * 2013-11-26 2016-11-29 Mobileye Vision Technologies Ltd. Stereo auto-calibration from structure-from-motion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358631A (en) * 2017-06-27 2017-11-17 大连理工大学 A kind of binocular vision method for reconstructing for taking into account three-dimensional distortion
CN107907048A (en) * 2017-06-30 2018-04-13 长沙湘计海盾科技有限公司 A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Flexible calibration of binocular stereo vision with a large field of view; Ni Zhangsong et al.; Optics and Precision Engineering; 2017-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN109272570A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN109272570B (en) Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
CN107194972B (en) Camera calibration method and system
CN111243033B (en) Method for optimizing external parameters of binocular camera
CN105096329B (en) Method for accurately correcting image distortion of ultra-wide-angle camera
WO2019179200A1 (en) Three-dimensional reconstruction method for multiocular camera device, vr camera device, and panoramic camera device
CN107767420B (en) Calibration method of underwater stereoscopic vision system
WO2016037486A1 (en) Three-dimensional imaging method and system for human body
CN110345921B (en) Stereo visual field vision measurement and vertical axis aberration and axial aberration correction method and system
Chatterjee et al. Algorithms for coplanar camera calibration
CN109325981B (en) Geometric parameter calibration method for micro-lens array type optical field camera based on focusing image points
CN102567989A (en) Space positioning method based on binocular stereo vision
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN113129430B (en) Underwater three-dimensional reconstruction method based on binocular structured light
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN112734863A (en) Crossed binocular camera calibration method based on automatic positioning
CN113450416B (en) TCSC method applied to three-dimensional calibration of three-dimensional camera
CN111435539A (en) Multi-camera system external parameter calibration method based on joint optimization
CN108269234A (en) A kind of lens of panoramic camera Attitude estimation method and panorama camera
Chan et al. An improved method for fisheye camera calibration and distortion correction
CN116740187A (en) Multi-camera combined calibration method without overlapping view fields
CN116957987A (en) Multi-eye polar line correction method, device, computer equipment and storage medium
CN109990756B (en) Binocular ranging method and system
CN109345595B (en) Stereoscopic vision sensor calibration method based on spherical lens
CN112950727B (en) Large-view-field multi-target simultaneous ranging method based on bionic curved compound eye
CN107806861B (en) Inclined image relative orientation method based on essential matrix decomposition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant