CN111553955A - Multi-view camera three-dimensional system and calibration method thereof - Google Patents

Multi-view camera three-dimensional system and calibration method thereof

Info

Publication number
CN111553955A
Authority
CN
China
Prior art keywords
camera
image
cameras
module
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010360630.5A
Other languages
Chinese (zh)
Other versions
CN111553955B (en)
Inventor
黄兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hangda Qingyun Technology Co ltd
Original Assignee
Suzhou Longtou Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Longtou Intelligent Technology Co ltd filed Critical Suzhou Longtou Intelligent Technology Co ltd
Priority to CN202010360630.5A priority Critical patent/CN111553955B/en
Publication of CN111553955A publication Critical patent/CN111553955A/en
Application granted granted Critical
Publication of CN111553955B publication Critical patent/CN111553955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a multi-view camera three-dimensional system and a calibration method thereof, and relates to the technical field of industrial three-dimensional vision. The multi-view camera three-dimensional system comprises a camera module, which comprises cameras and adjustable straight rods that fix the cameras and adjust their relative positions; an image acquisition module, which acquires the internal and external parameters of each camera in the multi-view stereo video acquisition system and obtains the basic image data by capturing images; and an image preprocessing module, which preprocesses the images from the image acquisition module to improve their signal-to-noise ratio and reduce the later processing load. By providing a high-precision calibration scheme and a multi-view vision scheme with an adjustable structure, controllable precision and a single reconstruction time within 100 ms, the invention solves the problem of depth information loss caused by blind areas in the field of view of traditional three-dimensional vision schemes.

Description

Multi-view camera three-dimensional system and calibration method thereof
Technical Field
The invention relates to the field of industrial three-dimensional vision, in particular to a multi-view camera three-dimensional system and a calibration method thereof.
Background
Current vision technology in the industrial field mainly relies on monocular, binocular or structured-light modules. Monocular structured light lacks information in the depth direction, binocular vision and structured-light modules are prone to measurement blind areas, and large-scale, high-precision measurement with complete information is therefore difficult to achieve. Non-industrial panoramic reconstruction is mainly based on SFM (structure from motion) and is not suitable for the industrial field. Most existing multi-view vision systems are mounted on a motion linkage mechanism; the dynamic error of such a linkage has many degrees of freedom, varies over time and propagates through the mechanism, which greatly limits measurement precision.
Monocular vision is mainly based on pinhole imaging: it projects an object in three-dimensional space onto the image plane, detects and locates the object according to its surface contour features, and is mainly applied to surface inspection of articles and to rough positioning and measurement in combination with tooling. The principle of binocular vision is triangulation: corner matching is performed with projected speckle spots serving as prior knowledge of the corners, and depth and spatial coordinates are then calculated. The main principle of the structured-light module is the coupling between the laser or coded light pattern and the camera imaging. Both are used for three-dimensional positioning and inspection of general objects.
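As a brief illustration of the binocular triangulation principle mentioned above (a minimal sketch, not taken from the patent: for rectified cameras the depth of a matched corner is Z = f·B/d, where f is the focal length in pixels, B is the baseline and d is the disparity; the numeric values below are purely illustrative):

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth (metres) of one matched corner pair for a rectified stereo rig."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / disparity_px

# assumed values: 1200 px focal length, 0.1 m baseline, 24 px disparity
print(depth_from_disparity(1200.0, 0.1, 24.0))  # -> 5.0 m
```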
The prior art has the following defects: 1. monocular structured light lacks information in the depth direction, and measurement blind areas easily occur in binocular and structured-light modules;
2. SFM-based panoramic reconstruction is time-consuming, its accuracy is difficult to guarantee, and it is not suitable for industrial scenes;
3. existing multi-view vision is tied to linkage mechanisms, whose errors are uncontrollable.
Therefore, it is necessary to invent a multi-view camera three-dimensional system and a calibration method thereof.
Disclosure of Invention
Therefore, the embodiments of the invention provide a multi-view camera three-dimensional system and a calibration method thereof. The invention provides a high-precision calibration scheme and a multi-view vision scheme with an adjustable structure, controllable precision and a single reconstruction time within 100 ms, so as to solve the problem of depth information loss caused by blind areas in the field of view of traditional three-dimensional vision schemes.
In order to achieve the above object, the embodiments of the present invention provide the following technical solution: a multi-view camera three-dimensional system and a calibration method thereof, comprising
a camera module: the camera module comprises cameras and adjustable straight rods that fix the cameras and adjust their relative positions;
an image acquisition module: used for acquiring the internal and external parameters of each camera in the multi-view stereo video acquisition system and for obtaining the basic image data by capturing images;
an image preprocessing module: used for preprocessing the images from the image acquisition module, so that the signal-to-noise ratio of the images is improved and the later processing load is reduced;
a camera calibration module: when an object appears in only two cameras, depth calculation is carried out according to the binocular vision principle; if the object appears in the range of more cameras, the three-dimensional coordinates of a target point can be expressed as:
x=cotα1·(cotα1+cotα2)·1/2d
y=(cotα1+cotα2)/2d
[equation image in the original giving the z coordinate in terms of α3 and the projection Pxy]
wherein: camera1, camera2 and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured point, Pxy is the projection of the object on the xy plane, the included angle between camera1 and the x axis is defined as α1, the included angle between camera2 and the x axis is defined as α2, and the included angle between camera3 and the xoy plane is defined as α3;
the parameter matrix of the cameras is obtained from these coordinates, and the calibrated parameters are then obtained by solving the parameter matrix;
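As an illustration of the angle-based triangulation described above, the following Python sketch computes a target point from the angles α1, α2, α3 and a baseline d. It is only a sketch under assumptions: the exact coordinate expressions in the patent are partly given as embedded formula images, so the sketch uses one common form consistent with the variables defined above (camera1 and camera2 lying on the x axis separated by baseline d, α3 being camera3's elevation angle from the xoy plane); it is not the patent's exact formula.

```python
import math

def triangulate(alpha1: float, alpha2: float, alpha3: float, d: float):
    """Hedged sketch of angle-based triangulation under the assumed geometry above."""
    cot1, cot2 = 1.0 / math.tan(alpha1), 1.0 / math.tan(alpha2)
    y = d / (cot1 + cot2)        # distance of Pxy from the camera1-camera2 baseline
    x = y * cot1                 # position of Pxy along the baseline
    r_xy = math.hypot(x, y)      # length of the projection of P in the xy plane
    z = r_xy * math.tan(alpha3)  # height recovered from the third camera's elevation angle
    return x, y, z

# assumed example values: 60 and 70 degree baseline angles, 30 degree elevation, 0.5 m baseline
print(triangulate(math.radians(60), math.radians(70), math.radians(30), 0.5))
```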
a three-dimensional reconstruction module: used for recovering the geometric information of a spatial object from the multi-view two-dimensional images, reconstructing each spatial point from its corresponding coordinates in the multiple images and the parameter matrices of the cameras.
Preferably, the specific reconstruction method of the three-dimensional reconstruction module is as follows:
S1: start, load the calibrated system parameters, and trigger the cameras to shoot when the article enters the measuring area;
S2: store and record the corner points of interest of the speckle structured light;
S3: according to the corner points in S2, the multi-view corner points generate point cloud data by the bundle adjustment method combined with the camera calibration parameters, the dual-view corner points generate point cloud data according to the general binocular measurement principle, and both are passed to the next step;
S4: the missing points are densified according to the Poisson reconstruction principle;
S5: match the process requirements, output the relevant data results, and end.
Preferably, the system further comprises an optimization module, which is used for obtaining the re-projection error from the three-dimensional point cloud coordinates and the internal and external parameters of the cameras, and for optimizing the re-projection error together with the internal and external parameters of the cameras.
A calibration method of a multi-view camera three-dimensional system comprises the following specific calibration steps:
S1: first adjust the positional relation between the cameras through the straight rods, and place the calibration plate in the common field of view multiple times;
S2: calibrate the internal parameters of the cameras, and perform binocular calibration between each pair of cameras;
S3: output the pairwise relations between the camera groups, carry out nonlinear optimization, and solve the relation matrix of the whole system;
S4: optimize the relations among the camera groups, and end.
The embodiments of the invention have the following advantages:
1. the position and posture relation between the cameras can be adjusted according to the actual application scene;
2. after the camera groups are calibrated, a reconstruction result can be obtained from the calibration parameters in about 100 ms;
3. the blind areas and information loss of existing 3D vision modules are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description are merely exemplary, and other embodiments can be derived from them by those of ordinary skill in the art without inventive effort.
The structures, ratios, sizes and the like shown in this specification are only used to match the contents disclosed in the specification so that they can be understood and read by those skilled in the art; they are not used to limit the conditions under which the present invention can be implemented and have no substantive technical meaning. Any structural modification, change of ratio or adjustment of size that does not affect the effects and objectives achievable by the present invention shall still fall within the scope covered by the technical contents disclosed herein.
FIG. 1 is a flow chart of the calibration provided by the present invention;
FIG. 2 is a diagram of a camera and a straight rod provided by the present invention;
FIG. 3 is a schematic diagram of the basic three-dimensional vision provided by the present invention.
Detailed Description
The present invention is described below in terms of particular embodiments. Other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure; the described embodiments are merely some, not all, of the embodiments of the invention and are not intended to limit it. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to the attached fig. 1-3 of the specification, the multi-view camera three-dimensional system and the calibration method thereof of this embodiment comprise
a camera module: the camera module comprises cameras and adjustable straight rods that fix the cameras and adjust their relative positions;
an image acquisition module: used for acquiring the internal and external parameters of each camera in the multi-view stereo video acquisition system and for obtaining the basic image data by capturing images;
an image preprocessing module: used for preprocessing the images from the image acquisition module, so that the signal-to-noise ratio of the images is improved and the later processing load is reduced;
a camera calibration module: when an object appears in only two cameras, depth calculation is carried out according to the binocular vision principle; if the object appears in the range of more cameras, the three-dimensional coordinates of a target point can be expressed as (as shown in fig. 3):
x=cotα1·(cotα1+cotα2)·1/2d
y=(cotα1+cotα2)/2d
[equation image in the original giving the z coordinate in terms of α3 and the projection Pxy]
wherein: camera1, camera2 and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured point, Pxy is the projection of the object on the xy plane, the included angle between camera1 and the x axis is defined as α1, the included angle between camera2 and the x axis is defined as α2, and the included angle between camera3 and the xoy plane is defined as α3;
the parameter matrix of the cameras is obtained from these coordinates, and the calibrated parameters are then obtained by solving the parameter matrix;
For multi-view vision, suppose there are n scene points Xi (i = 1, 2, ..., n) and m cameras with projection matrices Mj (j = 1, 2, ..., m). The projection of a scene point onto a camera image satisfies
mij ≅ Mj·Xi (equality up to scale)
where mij is the image point of the i-th scene point in the j-th image. For the whole reconstruction process the scene points Xi are fixed by the captured images themselves, and the extrinsic parameters between the camera groups roughly determine where a common scene point appears in the different images. For the common area of several cameras, the system of equations for solving Xi and Mj during re-projection has far more corresponding points than unknowns, so the reprojection error is minimized, i.e.
min Σi Σj ‖ mij − Mj·Xi ‖²
An initial estimate is given from the existing basic parameters of the cameras, and the optimization is solved with a nonlinear least-squares method (the Levenberg-Marquardt algorithm), i.e. the parameter matrices are obtained by solving this minimization;
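The following Python sketch illustrates the reprojection-error minimization described above using SciPy's Levenberg-Marquardt solver. It is a minimal sketch, not the patent's implementation: the array shapes, the visibility mask and the parameterization (raw 3x4 projection matrices plus 3D points) are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def project(M, X):
    """Project 3D points X (N,3) with a 3x4 projection matrix M -> (N,2) pixel coordinates."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    p = Xh @ M.T
    return p[:, :2] / p[:, 2:3]

def residuals(params, n_cams, n_pts, obs, mask):
    """Stack the reprojection residuals m_ij - proj(M_j, X_i) over all visible points."""
    M = params[:n_cams * 12].reshape(n_cams, 3, 4)
    X = params[n_cams * 12:].reshape(n_pts, 3)
    res = []
    for j in range(n_cams):
        vis = mask[:, j]                               # which points camera j actually sees
        res.append((project(M[j], X[vis]) - obs[j][vis]).ravel())
    return np.concatenate(res)

def bundle_adjust(M0, X0, obs, mask):
    """M0: (n_cams,3,4) initial matrices, X0: (n_pts,3) initial points,
    obs: (n_cams,n_pts,2) pixel observations, mask: (n_pts,n_cams) visibility flags."""
    n_cams, n_pts = M0.shape[0], X0.shape[0]
    x0 = np.concatenate([M0.ravel(), X0.ravel()])
    sol = least_squares(residuals, x0, method="lm",   # Levenberg-Marquardt
                        args=(n_cams, n_pts, obs, mask))
    return (sol.x[:n_cams * 12].reshape(n_cams, 3, 4),
            sol.x[n_cams * 12:].reshape(n_pts, 3))
```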
a three-dimensional reconstruction module: used for recovering the geometric information of a spatial object from the multi-view two-dimensional images, reconstructing each spatial point from its corresponding coordinates in the multiple images and the parameter matrices of the cameras.
Further, the specific reconstruction method of the three-dimensional reconstruction module is as follows:
S1: start, load the calibrated system parameters, and trigger the cameras to shoot when the article enters the measuring area;
S2: store and record the corner points of interest of the speckle structured light;
S3: according to the corner points in S2, the multi-view corner points generate point cloud data by the bundle adjustment method combined with the camera calibration parameters, the dual-view corner points generate point cloud data according to the general binocular measurement principle, and both are passed to the next step;
S4: the missing points are densified according to the Poisson reconstruction principle (see the sketch after this list);
S5: match the process requirements, output the relevant data results, and end.
Further, the system also comprises an optimization module, which is used for obtaining the re-projection error from the three-dimensional point cloud coordinates and the internal and external parameters of the cameras, and for optimizing the re-projection error together with the internal and external parameters of the cameras.
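As an illustration of the error term used by this optimization module, the sketch below computes the RMS reprojection error of a point cloud for one camera with OpenCV. It is a hedged sketch: the variable names and the use of OpenCV's projectPoints routine are assumptions for illustration, not the patent's implementation.

```python
import cv2
import numpy as np

def reprojection_rmse(points_3d, corners_2d, K, dist, rvec, tvec):
    """RMS reprojection error (pixels) of the 3D point cloud for one calibrated camera."""
    projected, _ = cv2.projectPoints(points_3d.reshape(-1, 1, 3), rvec, tvec, K, dist)
    err = projected.reshape(-1, 2) - corners_2d.reshape(-1, 2)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```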
A calibration method of the multi-view camera three-dimensional system comprises the following specific calibration steps:
S1: first adjust the positional relation between the cameras through the straight rods, and place the calibration plate in the common field of view multiple times;
S2: calibrate the internal parameters of the cameras, and perform binocular calibration between each pair of cameras (a code sketch of steps S1 to S3 follows after this list);
S3: output the pairwise relations between the camera groups, carry out nonlinear optimization, and solve the relation matrix of the whole system;
S4: optimize the relations among the camera groups, and end.
The implementation scenario is specifically as follows: by providing a high-precision calibration scheme and a multi-view vision scheme with an adjustable structure, controllable precision and a single reconstruction time within 100 ms, the invention solves the problem of depth information loss caused by blind areas in the field of view of traditional three-dimensional vision schemes.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (4)

1. A multi-view camera three-dimensional system, characterized in that it comprises
a camera module: the camera module comprises cameras and adjustable straight rods that fix the cameras and adjust their relative positions;
an image acquisition module: used for acquiring the internal and external parameters of each camera in the multi-view stereo video acquisition system and for obtaining the basic image data by capturing images;
an image preprocessing module: used for preprocessing the images from the image acquisition module, so that the signal-to-noise ratio of the images is improved and the later processing load is reduced;
a camera calibration module: when an object appears in only two cameras, depth calculation is carried out according to the binocular vision principle; if the object appears in the range of more cameras, the three-dimensional coordinates of a target point can be expressed as:
x=cotα1·(cotα1+cotα2)·1/2d
y=(cotα1+cotα2)/2d
[equation image in the original giving the z coordinate in terms of α3 and the projection Pxy]
wherein: camera1, camera2 and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured point, Pxy is the projection of the object on the xy plane, the included angle between camera1 and the x axis is defined as α1, the included angle between camera2 and the x axis is defined as α2, and the included angle between camera3 and the xoy plane is defined as α3;
the parameter matrix of the cameras is obtained from these coordinates, and the calibrated parameters are then obtained by solving the parameter matrix;
a three-dimensional reconstruction module: used for recovering the geometric information of a spatial object from the multi-view two-dimensional images, reconstructing each spatial point from its corresponding coordinates in the multiple images and the parameter matrices of the cameras.
2. The multi-view camera three-dimensional system according to claim 1, wherein the specific reconstruction method of the three-dimensional reconstruction module comprises the following steps:
S1: start, load the calibrated system parameters, and trigger the cameras to shoot when the article enters the measuring area;
S2: store and record the corner points of interest of the speckle structured light;
S3: according to the corner points in S2, the multi-view corner points generate point cloud data by the bundle adjustment method combined with the camera calibration parameters, the dual-view corner points generate point cloud data according to the general binocular measurement principle, and both are passed to the next step;
S4: the missing points are densified according to the Poisson reconstruction principle;
S5: match the process requirements, output the relevant data results, and end.
3. The multi-view camera three-dimensional system according to claim 1, wherein the system further comprises an optimization module, which is used for obtaining the re-projection error from the three-dimensional point cloud coordinates and the internal and external parameters of the cameras, and for optimizing the re-projection error together with the internal and external parameters of the cameras.
4. A calibration method of a multi-view camera three-dimensional system, characterized by comprising the following specific calibration steps:
S1: first adjust the positional relation between the cameras through the straight rods, and place the calibration plate in the common field of view multiple times;
S2: calibrate the internal parameters of the cameras, and perform binocular calibration between each pair of cameras;
S3: output the pairwise relations between the camera groups, carry out nonlinear optimization, and solve the relation matrix of the whole system;
S4: optimize the relations among the camera groups, and end.
CN202010360630.5A 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof Active CN111553955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010360630.5A CN111553955B (en) 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010360630.5A CN111553955B (en) 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof

Publications (2)

Publication Number Publication Date
CN111553955A true CN111553955A (en) 2020-08-18
CN111553955B CN111553955B (en) 2024-03-15

Family

ID=72000374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010360630.5A Active CN111553955B (en) 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof

Country Status (1)

Country Link
CN (1) CN111553955B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090097039A1 (en) * 2005-05-12 2009-04-16 Technodream21, Inc. 3-Dimensional Shape Measuring Method and Device Thereof
CN102982548A (en) * 2012-12-11 2013-03-20 清华大学 Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof
CN106803273A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 A kind of panoramic camera scaling method
CN110509281A (en) * 2019-09-16 2019-11-29 中国计量大学 The apparatus and method of pose identification and crawl based on binocular vision

Also Published As

Publication number Publication date
CN111553955B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN106981083B (en) The substep scaling method of Binocular Stereo Vision System camera parameters
CN109727290B (en) Zoom camera dynamic calibration method based on monocular vision triangulation distance measurement method
CN109323650B (en) Unified method for measuring coordinate system by visual image sensor and light spot distance measuring sensor in measuring system
CN111369630A (en) Method for calibrating multi-line laser radar and camera
CN109737913B (en) Laser tracking attitude angle measurement system and method
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN113175899B (en) Camera and galvanometer combined three-dimensional imaging model of variable sight line system and calibration method thereof
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN110889873A (en) Target positioning method and device, electronic equipment and storage medium
CN115861445B (en) Hand-eye calibration method based on three-dimensional point cloud of calibration plate
CN111024047B (en) Six-degree-of-freedom pose measurement device and method based on orthogonal binocular vision
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN109087339A (en) A kind of laser scanning point and Image registration method
CN112229323A (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
KR20220113781A (en) How to measure the topography of your environment
CN116681827A (en) Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion
KR101597163B1 (en) Method and camera apparatus for calibration of stereo camera
CN115359127A (en) Polarization camera array calibration method suitable for multilayer medium environment
CN111829435A (en) Multi-binocular camera and line laser cooperative detection method
CN114037768A (en) Method and device for joint calibration of multiple sets of tracking scanners
CN111721194A (en) Multi-laser-line rapid detection method
CN111429571A (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
CN106934861B (en) Object three-dimensional reconstruction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240205

Address after: 86-A3101, Wanxing Road, Changyang, Fangshan District, Beijing, 102400
Applicant after: Beijing Hangda Qingyun Technology Co.,Ltd.
Country or region after: China

Address before: Room 504, Science and Technology Square, Qianjin East Road, Kunshan Economic Development Zone, Suzhou City, Jiangsu Province, 215323
Applicant before: Suzhou Longtou Intelligent Technology Co.,Ltd.
Country or region before: China

GR01 Patent grant