KR101387692B1 - Method for adjusting optimum optical axis distance of stereo camera - Google Patents
Method for adjusting optimum optical axis distance of stereo camera
- Publication number
- KR101387692B1 (granted from application KR1020120110398A)
- Authority
- KR
- South Korea
- Prior art keywords
- camera
- coordinate values
- optical axis
- dimensional coordinate
- noise
- Prior art date
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to a method for adjusting the optical axis of a stereo camera, and more particularly, to a method of adjusting the optical axis of a stereo camera that can minimize the restoration error of a 3D image. According to the present invention, the optical axis spacing between the two cameras constituting the stereo camera can be adjusted so as to minimize distortion of the restored 3D image caused by the noise generated when the two cameras acquire images, which has the effect of improving the restoration accuracy of the 3D image.
Description
The present invention relates to a method for adjusting the optical axis of a stereo camera, and more particularly, to a method of adjusting the optical axis of a stereo camera that can minimize the restoration error of a 3D image obtained by the stereo camera.
In general, a stereo camera is used in various fields because a 3D image can be generated from two 2D images simultaneously acquired by two cameras spaced at a predetermined interval.
In particular, when the 3D coordinates of key feature points are computed from the coordinates of those feature points in the 2D images obtained by the two cameras constituting a stereo camera, as in a motion capture device, noise caused by the external environment during image acquisition and noise caused by the lens distortion of the two cameras are included in the 2D images. As a result, a restoration error occurs when restoring the 3D image, so it is essential to adjust the optical axis spacing of the two cameras so as to minimize this error.
However, the prior art (US Pat. Nos. 7,933,512, 6,701,081, and 8,139,935) describes only mechanical configurations capable of adjusting the optical axis spacing or the lens angle of the two cameras used to obtain stereo images. Because the prior art does not present a method for adjusting the optical axis spacing of the two cameras so as to minimize the noise generated in the 2D image acquisition process, it cannot guarantee the restoration accuracy of a 3D image reconstructed from the 2D images obtained by the stereo camera.
The present invention has been made to solve the above problems. An object of the present invention is to provide a method of adjusting the optical axis of a stereo camera that determines the optimal optical axis spacing between the two cameras constituting the stereo camera in consideration of the noise caused by lens distortion and by the external environment that is included in the images obtained by the two cameras.
In order to achieve the above object, a method of adjusting the optical axis spacing of a stereo camera according to a preferred embodiment of the present invention includes: (a) generating arbitrary three-dimensional coordinates within an area where the fields of view (FOV) of a first camera and a second camera constituting the stereo camera intersect; (b) generating a plurality of camera matrices for the second camera by adjusting values of the camera matrix of the second camera among the camera matrices of the first camera and the second camera calculated in advance; (c) generating virtual two-dimensional coordinate values obtainable by the first camera from the coordinate values of the arbitrary three-dimensional coordinates and the camera matrix of the first camera, and generating a plurality of two-dimensional coordinate values obtainable by the second camera from the coordinate values of the arbitrary three-dimensional coordinates and the plurality of camera matrices generated for the second camera; (d) generating a plurality of two-dimensional coordinate values including noise values by using the virtual two-dimensional coordinate values obtainable by the first camera, the plurality of two-dimensional coordinate values obtainable by the second camera, and the noise values of the first and second cameras calculated in advance; (e) generating a plurality of three-dimensional coordinate values including the noise values by using the plurality of two-dimensional coordinate values including the noise values; (f) comparing the errors between the plurality of three-dimensional coordinate values including the noise values and the coordinate values of the arbitrary three-dimensional coordinates, and determining the optical axis spacing corresponding to the three-dimensional coordinate value including the noise value that minimizes the error; and (g) adjusting the optical axis spacing of the first camera and the second camera according to the determined optical axis spacing.
The method may further include calculating lens distortion coefficients of the first camera and the second camera before step (a).
Further, in step (d), the noise values of the first camera and the second camera may be radial distortion noise values of the first camera and the second camera determined by the lens distortion coefficients of the first camera and the second camera.
In addition, in step (b), the adjustment range of the values of the camera matrix of the second camera may be 100 mm to 900 mm.
According to the present invention, the optical axis spacing between the two cameras constituting the stereo camera can be adjusted so as to minimize distortion of the restored 3D image caused by the noise generated when the two cameras acquire images, which has the effect of improving the restoration accuracy of the 3D image.
FIG. 1 is a flowchart illustrating a method of adjusting an optical axis of a stereo camera according to a preferred embodiment of the present invention, and
FIGS. 2 to 6 are reference diagrams for the optimal optical axis spacing method of a stereo camera according to a preferred embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the drawings, the same reference numerals are used to designate the same or similar components throughout. In the following description, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention unclear. Preferred embodiments of the present invention are described below, but it goes without saying that the technical idea of the present invention is not limited thereto and may be variously practiced by those skilled in the art.
FIG. 1 is a flowchart illustrating a method of adjusting an optical axis of a stereo camera according to an exemplary embodiment of the present invention.
As shown in FIG. 1, in S10, arbitrary three-dimensional coordinates to be applied to the stereo camera are generated.
In this case, in S10, the arbitrary 3D coordinates may be generated in an area where the fields of view (FOV) of the first camera and the second camera constituting the stereo camera intersect, and may be coordinates generated by a computer program to which the optimal optical axis spacing adjustment method of the stereo camera according to the preferred embodiment of the present invention is applied for 3D reconstruction of images acquired by the stereo camera.
In this case, prior to S10, the method may further include calculating the camera matrices and the lens distortion coefficients of the first camera and the second camera, determining whether the optical axes of the first camera and the second camera are parallel, and, if the optical axes are not parallel, adjusting them to be parallel. Here, a camera matrix is a value representing the unique characteristics of a camera, used to calculate the coordinates of an image obtained by that camera, and is represented by Equation 1 below.
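The image for Equation 1 does not survive in this text. Based on the symbol definitions that follow, it is the standard pinhole camera matrix decomposition (a reconstruction, not the original figure):

```latex
P = K \left[ R \mid t \right], \qquad
K = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
```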
Here, P is the camera matrix, fx and fy are the focal lengths of the camera, s is the skew value of the camera, u0 and v0 are the coordinates of the camera's principal point, R is the rotation of the camera about the x, y, and z axes, and t is the translation of the camera in the x, y, and z directions.
In addition, the camera matrices and the radial distortion coefficients of the first camera and the second camera may be calculated by camera calibration. Since camera calibration is a known technique, a detailed description thereof is omitted. Stereo rectification, the method used to make the optical axes parallel, is also a well-known technique, so a detailed description thereof is likewise omitted.
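As an illustration of the matrix in Equation 1, the sketch below assembles P = K[R | t] in plain Python; all numeric values (focal length, principal point) are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: assembling a camera matrix P = K[R | t] as in Equation 1.
# Focal lengths, skew, and principal point values are illustrative only.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def camera_matrix(fx, fy, s, u0, v0, R, t):
    """Build the 3x4 camera matrix P = K [R | t]."""
    K = [[fx, s, u0],
         [0.0, fy, v0],
         [0.0, 0.0, 1.0]]
    Rt = [R[i] + [t[i]] for i in range(3)]   # [R | t], a 3x4 matrix
    return matmul(K, Rt)

# First camera: identity rotation, zero translation (t1 = [0 0 0]^T).
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
P1 = camera_matrix(800.0, 800.0, 0.0, 320.0, 240.0, I, [0.0, 0.0, 0.0])
```

With an identity rotation and zero translation, P1 is simply K padded with a zero fourth column.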
In S20, a plurality of camera matrices for the second camera are generated by changing the values of the camera matrix of the second camera among the camera matrices of the first camera and the second camera calculated in advance.
In other words, referring to Equation 1, when the camera matrix of the first camera is P1 and the camera matrix of the second camera is P2, they can be represented as P1 = K1[R1 t1] and P2 = K2[R2 t2], and by the optical axis parallel adjustment step described above, t1 = [0 0 0]^T and t2 = [0 ty 0]^T.
Here, ty denotes the distance between the two cameras along the y-axis. If the ty value is adjusted within a predetermined range (for example, from 100 mm to 900 mm in 100 mm steps), t2 can take multiple values such as [0 100 0]^T, [0 200 0]^T, ..., [0 900 0]^T. Substituting each generated t2 value into P2 yields multiple P2 values; in other words, a plurality of camera matrices for the second camera can be generated.
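The sweep of candidate t2 vectors in step S20 can be sketched as follows (the 100–900 mm range and 100 mm step are the example values given above):

```python
# Illustrative sketch of step S20: generating candidate translation vectors
# t2 = [0, ty, 0]^T by sweeping ty from 100 mm to 900 mm in 100 mm steps.

def candidate_translations(start_mm=100, stop_mm=900, step_mm=100):
    """Return the list of candidate t2 vectors for the second camera."""
    return [[0.0, float(ty), 0.0]
            for ty in range(start_mm, stop_mm + step_mm, step_mm)]

t2_list = candidate_translations()   # nine candidates: [0 100 0] ... [0 900 0]
```

Each entry of `t2_list` would then be substituted into P2 = K2[R2 t2] to produce one candidate camera matrix.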
The range of 100 mm to 900 mm presented as the adjustment range of ty is merely an example of optical axis spacings at which a 3D image of an object can be obtained from the 2D images acquired by the first camera and the second camera, and the ty value is not limited thereto.
In S30, a plurality of two-dimensional coordinate values are generated using the coordinate values of the arbitrary three-dimensional coordinates generated in S10, the camera matrix of the first camera, and the plurality of camera matrices for the second camera generated in S20.
In other words, a virtual two-dimensional coordinate value obtainable by the first camera may be generated from the coordinate values of the arbitrary three-dimensional coordinates and the camera matrix of the first camera, and a plurality of two-dimensional coordinate values obtainable by the second camera may be generated from the coordinate values of the arbitrary three-dimensional coordinates and the plurality of camera matrices generated for the second camera. This can be expressed as Equation 2 below.
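The image for Equation 2 is missing. From the symbols defined below, it is the standard projective relation (a reconstruction, with λ the homogeneous scale factor):

```latex
x = P X, \qquad
\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
```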
Here, x denotes a two-dimensional coordinate value obtainable by the camera, P denotes the camera matrix, and X denotes a coordinate value of the arbitrary three-dimensional coordinates.
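A plain-Python sketch of this projection x = PX; the camera matrix below is illustrative only (fx = fy = 800, principal point (320, 240)), not a value from the patent:

```python
# Sketch of the projection of Equation 2: homogenize, multiply by P, dehomogenize.

def project(P, X):
    """Project a 3D point X = (x, y, z) with a 3x4 camera matrix P."""
    Xh = [X[0], X[1], X[2], 1.0]                                   # homogeneous 3D point
    xh = [sum(P[i][j] * Xh[j] for j in range(4)) for i in range(3)]
    return (xh[0] / xh[2], xh[1] / xh[2])                          # dehomogenized (u, v)

P1 = [[800.0, 0.0, 320.0, 0.0],
      [0.0, 800.0, 240.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]
uv_center = project(P1, (0.0, 0.0, 2000.0))    # a point on the optical axis
uv_off = project(P1, (100.0, 0.0, 2000.0))     # an off-axis point
```

A point on the optical axis projects to the principal point; off-axis points shift in proportion to focal length over depth.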
In S40, a plurality of two-dimensional coordinate values including noise values are generated using the plurality of two-dimensional coordinate values generated in S30 (in other words, the virtual two-dimensional coordinate value obtainable by the first camera and the plurality of virtual two-dimensional coordinate values obtainable by the second camera) and the noise values of the first camera and the second camera.
In this case, in S40, the noise values of the first camera and the second camera may be radial distortion noise values of the first camera and the second camera determined by their lens distortion coefficients. The reason for generating a plurality of two-dimensional coordinate values including noise values is as follows.
In general, in the case of a stereo camera, if the two two-dimensional coordinate values of the same object included in the images acquired by the two cameras are known, the three-dimensional coordinate values of the object can be calculated from them, and a three-dimensional image of the object can then be generated. However, during image acquisition by each camera, noise caused by the external environment (noise due to illumination or the sensitivity of the camera's image sensor) and noise caused by lens distortion (radial distortion noise) are included in the images. This noise changes the two-dimensional coordinate values of the object, and as a result, an error occurs in the process of generating the three-dimensional image of the object.
For example, in the case of an image sensor, as shown in FIG. 2, the error due to the sensitivity of the image sensor (that is, noise) is larger when the image is located at the center of the image sensor than when the image is located at the edge of the image sensor.
Accordingly, in the case of a stereo camera composed of two cameras, as shown in FIG. 3, the farther apart the optical axes of the two cameras are, the closer the acquired image lies to the edge of the image sensor, and, as shown in FIG. 4, the noise caused by the image sensor (DepthRMSError in FIG. 4) decreases as the distance between the two cameras, that is, the optical axis spacing (Camera distance in FIG. 4), increases.
However, since most camera lenses have the shape of a convex lens, as shown in FIG. 5, radial distortion, in which the image bulges outward, grows as the image moves away from the center point of the image sensor. When the radial distortion in the images acquired by the two cameras becomes large (in other words, when the noise caused by radial distortion increases), the accuracy of reconstruction of the three-dimensional image of the object is consequently reduced.
Accordingly, to take this radial distortion noise into account, a plurality of two-dimensional coordinate values including noise values are generated in S40. The noise values (that is, the radial distortion noise values) can be calculated as in Equation 3 below.
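The image for Equation 3 is missing. A common form of the radial distortion noise model that is consistent with the symbols defined below is (a reconstruction under that assumption):

```latex
N_u = (u_1 - u_0)\,(k_1 r^2 + k_2 r^4), \qquad
N_v = (v_1 - v_0)\,(k_1 r^2 + k_2 r^4), \qquad
r^2 = (u_1 - u_0)^2 + (v_1 - v_0)^2
```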
Here, Nu and Nv are the radial distortion noise values, u1 and v1 are the two-dimensional coordinates of the object, k1 and k2 are the radial distortion coefficients of the camera, and u0 and v0 are the coordinates of the camera's principal point.
In addition, the two-dimensional coordinate values of the object containing the noise values may be calculated as shown in Equation 4 below.
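Equation 4's image is missing; from the symbol definitions that follow, the noisy coordinates are the ideal coordinates offset by the radial distortion noise (reconstructed):

```latex
(u_1^{*},\, v_1^{*}) = (u_1 + N_{u1},\; v_1 + N_{v1}), \qquad
(u_2^{*},\, v_2^{*}) = (u_2 + N_{u2},\; v_2 + N_{v2})
```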
Here, (u1*, v1*) and (u2*, v2*) are the two-dimensional coordinate values of the object containing the noise values, (Nu1, Nv1) and (Nu2, Nv2) denote the radial distortion noise values, and (u1, v1) and (u2, v2) denote the two-dimensional coordinate values of the object not including the noise values.
In S50, a plurality of three-dimensional coordinate values including the noise values are calculated using the plurality of two-dimensional coordinate values including the noise values generated in S40.
In this case, the plurality of 3D coordinate values including the noise values in S50 may be calculated as in Equation 5 below.
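The image for Equation 5 is missing. Given the symbols defined below, the standard linear (DLT) triangulation system it most likely denotes is (a reconstruction, with p_i^{jT} the j-th row of P_i, solved in the least-squares sense):

```latex
\begin{bmatrix}
u_1\, p_1^{3T} - p_1^{1T} \\
v_1\, p_1^{3T} - p_1^{2T} \\
u_2\, p_2^{3T} - p_2^{1T} \\
v_2\, p_2^{3T} - p_2^{2T}
\end{bmatrix}
\begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix} = 0
```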
Here, (Xi, Yi, Zi) is the three-dimensional coordinate value, P1 is the camera matrix of the first camera, P2 is the camera matrix of the second camera, (u1, v1) is the two-dimensional coordinate value from the first camera, and (u2, v2) is the two-dimensional coordinate value from the second camera.
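The triangulation is a general linear system, but for the rectified configuration assumed throughout this description (t1 = [0 0 0]^T, t2 = [0 ty 0]^T, shared intrinsics fx, fy, u0, v0 — all assumptions of this sketch, not stated limitations of the patent), it collapses to depth from the disparity along v:

```python
# Minimal sketch of step S50 for the rectified case: with a y-axis baseline,
# only the v coordinates of the two views differ, so u2 is not needed.

def triangulate_rectified(u1, v1, v2, fx, fy, u0, v0, ty):
    """Recover (X, Y, Z) of a point seen at (u1, v1) in camera 1 and at v2 in camera 2."""
    d = v2 - v1                  # disparity produced by the baseline ty
    Z = fy * ty / d              # depth from disparity
    X = (u1 - u0) * Z / fx
    Y = (v1 - v0) * Z / fy
    return (X, Y, Z)

# Round trip with illustrative values: a point at (100, 50, 2000) mm seen by
# cameras with fx = fy = 800, principal point (320, 240), baseline ty = 100 mm
# projects to v1 = 260 and v2 = 300.
point = triangulate_rectified(360.0, 260.0, 300.0, 800.0, 800.0, 320.0, 240.0, 100.0)
```

The closed form makes the baseline trade-off visible: for fixed pixel noise, a larger ty produces a larger disparity and hence a smaller depth error.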
In other words, a plurality of three-dimensional coordinate values including the noise values can be generated by using the camera matrix of the first camera, the plurality of camera matrices generated for the second camera, the two-dimensional coordinate values from the first camera including noise values, and the plurality of two-dimensional coordinate values from the second camera including noise values.
In S60, the errors between the plurality of three-dimensional coordinate values including the noise values calculated in S50 and the coordinate values of the arbitrary three-dimensional coordinates are compared, and the optical axis spacing corresponding to the three-dimensional coordinate value including the noise value with the minimum error (that is, the ty value of the second camera matrix corresponding to that three-dimensional coordinate value) is determined.
Then, in S70, the optical axis spacing of the first camera and the second camera is adjusted according to the optical axis spacing determined in S60, and the method ends.
For example, as shown in FIG. 6, if the error comparison in S60 between the plurality of three-dimensional coordinate values including the noise values and the coordinate values of the arbitrary three-dimensional coordinates shows that the error (DepthRMSError in FIG. 6) is smallest, about 60 mm, when the optical axis spacing (Camera distance in FIG. 6) is about 350 mm, then the optical axis spacing of the first camera and the second camera is adjusted to 350 mm in S70. Reconstructing the 3D image from the 2D images subsequently obtained by the two cameras then yields an optimal 3D image of the object in which the reconstruction error is minimized.
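The S10 through S60 procedure can be sketched end-to-end under simplifying assumptions (rectified cameras sharing intrinsics, baseline along y, and radial distortion noise as the only noise source); every numeric constant below — focal length, principal point, distortion coefficients, test points — is illustrative, not taken from the patent:

```python
import math

# Shared intrinsics for both cameras (illustrative values).
FX = FY = 800.0
U0, V0 = 320.0, 240.0
K1_COEF, K2_COEF = 1e-7, 1e-13   # hypothetical radial distortion coefficients

def project(X, Y, Z, ty=0.0):
    """Ideal pinhole projection for a camera translated by t = [0, ty, 0]^T."""
    return (FX * X / Z + U0, FY * (Y + ty) / Z + V0)

def distortion_noise(u, v):
    """Radial distortion noise in the Equation 3 form."""
    r2 = (u - U0) ** 2 + (v - V0) ** 2
    g = K1_COEF * r2 + K2_COEF * r2 ** 2
    return ((u - U0) * g, (v - V0) * g)

def triangulate(u1, v1, v2, ty):
    """Depth from disparity along v for the rectified pair (cf. Equation 5)."""
    Z = FY * ty / (v2 - v1)
    return ((u1 - U0) * Z / FX, (v1 - V0) * Z / FY, Z)

def rms_error(points, ty):
    """Restoration error compared in S60 for one candidate baseline ty."""
    se = 0.0
    for X, Y, Z in points:
        u1, v1 = project(X, Y, Z)            # first camera, ideal (S30)
        u2, v2 = project(X, Y, Z, ty)        # second camera, ideal (S30)
        nu1, nv1 = distortion_noise(u1, v1)  # noise injection (S40)
        nu2, nv2 = distortion_noise(u2, v2)
        Xr, Yr, Zr = triangulate(u1 + nu1, v1 + nv1, v2 + nv2, ty)  # S50
        se += (Xr - X) ** 2 + (Yr - Y) ** 2 + (Zr - Z) ** 2
    return math.sqrt(se / len(points))

# Arbitrary 3D points in the overlapping field of view (S10), in mm.
points = [(100.0, 50.0, 2000.0), (-80.0, 30.0, 1800.0), (0.0, -60.0, 2200.0)]
errors = {ty: rms_error(points, float(ty)) for ty in range(100, 1000, 100)}
best_ty = min(errors, key=errors.get)        # S60: the baseline with minimum error
```

`best_ty` plays the role of the optical axis spacing to which the two cameras would be adjusted in S70; with a fuller noise model (sensor sensitivity plus lens distortion, as in FIGS. 4 and 6) the error curve exhibits the interior minimum the description discusses.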
As described above, the optical axis spacing adjustment method of the stereo camera of the present invention generates a plurality of two-dimensional coordinate values reflecting the noise (for example, noise from the external environment and radial distortion noise) included in the two-dimensional images of an object obtained from the two cameras constituting the stereo camera, compares the errors between the plurality of three-dimensional coordinate values calculated from them and the coordinate values of predetermined three-dimensional coordinates, and determines the optical axis spacing corresponding to the three-dimensional coordinate value that minimizes the error as the optimal optical axis spacing between the two cameras, thereby minimizing the restoration error that occurs when restoring a three-dimensional image.
Therefore, the optical axis spacing between the two cameras can be adjusted in the direction that minimizes the errors occurring when the 3D image is restored from the images obtained by the two cameras constituting the stereo camera, so the restoration accuracy of the generated 3D image can be greatly improved.
It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the embodiments disclosed in the present invention and the accompanying drawings are intended to illustrate, not to limit, the technical spirit of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments and drawings. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents thereof should be construed as being included in the scope of the present invention.
Claims (5)
A method of adjusting the optical axis spacing of a stereo camera, the method comprising:
(a) generating arbitrary three-dimensional coordinates within an area where the fields of view (FOV) of a first camera and a second camera constituting the stereo camera intersect;
(b) generating a plurality of camera matrices for the second camera by adjusting values of the camera matrix of the second camera among camera matrices of the first camera and the second camera calculated in advance;
(c) generating virtual two-dimensional coordinate values obtainable by the first camera from the coordinate values of the arbitrary three-dimensional coordinates and the camera matrix of the first camera, and generating a plurality of two-dimensional coordinate values obtainable by the second camera from the coordinate values of the arbitrary three-dimensional coordinates and the plurality of camera matrices generated for the second camera;
(d) generating a plurality of two-dimensional coordinate values including noise values by using the virtual two-dimensional coordinate values obtainable by the first camera, the plurality of two-dimensional coordinate values obtainable by the second camera, and the noise values of the first and second cameras calculated in advance;
(e) generating a plurality of three-dimensional coordinate values including the noise values by using the plurality of two-dimensional coordinate values including the noise values;
(f) comparing errors between the plurality of three-dimensional coordinate values including the noise values and the coordinate values of the arbitrary three-dimensional coordinates, and determining the optical axis spacing corresponding to the three-dimensional coordinate value including the noise value that minimizes the error; and
(g) adjusting the optical axis spacing of the first camera and the second camera according to the determined optical axis spacing.
Prior to step (a),
the method further comprising calculating lens distortion coefficients of the first camera and the second camera.
In step (d),
the noise values of the first camera and the second camera are radial distortion noise values of the first camera and the second camera determined by the lens distortion coefficients of the first camera and the second camera.
In step (b),
the adjustment range of the values of the camera matrix of the second camera is 100 mm to 900 mm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120110398A KR101387692B1 (en) | 2012-10-05 | 2012-10-05 | Method for adjusting optimum optical axis distance of stereo camera |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20140044443A | 2014-04-15 |
KR101387692B1 | 2014-04-22 |
Family
ID=50652399
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101387692B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111402315B (en) * | 2020-03-03 | 2023-07-25 | 四川大学 | Three-dimensional distance measurement method for adaptively adjusting binocular camera baseline |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070101580A (en) * | 2006-04-11 | 2007-10-17 | 엘아이지넥스원 주식회사 | Matrix presumption method of presumption camera |
KR20110003611A (en) * | 2009-07-06 | 2011-01-13 | (주) 비전에스티 | High speed camera calibration and rectification method and apparatus for stereo camera |
KR20110081714A (en) * | 2010-01-08 | 2011-07-14 | (주)한비젼 | 3-dimensional image sensor and sterioscopic camera having the same sensor |
- 2012-10-05: Application KR1020120110398A filed; patent KR101387692B1 active (IP Right Grant)
Also Published As
Publication number | Publication date |
---|---|
KR20140044443A (en) | 2014-04-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| A201 | Request for examination | |
| E902 | Notification of reason for refusal | |
| E701 | Decision to grant or registration of patent right | |
| GRNT | Written decision to grant | |
2017-04-12 | FPAY | Annual fee payment | Year of fee payment: 4 |
2018-04-17 | FPAY | Annual fee payment | Year of fee payment: 5 |