CN112991372B - 2D-3D camera external parameter calibration method based on polygon matching - Google Patents

2D-3D camera external parameter calibration method based on polygon matching

Info

Publication number
CN112991372B
CN112991372B (application CN202110430415.2A)
Authority
CN
China
Prior art keywords
camera
point cloud
point
edge
polygon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110430415.2A
Other languages
Chinese (zh)
Other versions
CN112991372A (en)
Inventor
熊鑫鑫
陈浩
郑军
Current Assignee
Jushi Technology Jiangsu Co ltd
Original Assignee
Jushi Technology Jiangsu Co ltd
Priority date
Filing date
Publication date
Application filed by Jushi Technology Jiangsu Co ltd
Priority to CN202110430415.2A
Publication of CN112991372A
Application granted
Publication of CN112991372B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses a 2D-3D camera external parameter calibration method based on polygon matching. The method extracts the 2D and 3D polygon information of a calibration plate from the image and point cloud data, places the edges of the two polygons in correspondence, calculates the distances from 2D points to straight lines, and obtains the external parameters between the 2D camera and the 3D camera by minimizing a global loss function. Compared with feature-point-based 2D-3D camera calibration methods, the proposed method replaces point features with straight-line features, which greatly improves the accuracy of feature detection in the point cloud data.

Description

2D-3D camera external parameter calibration method based on polygon matching
Technical Field
The invention relates to the technical field of image processing, and in particular to a 2D-3D camera external parameter calibration method based on polygon matching.
Background
Multi-source sensor fusion is an effective way to represent the environment accurately, since fusing sensors compensates for the limited information of any single sensor. In recent years, multi-sensor fusion technology has been widely applied in both military and civil fields, and has attracted attention in military, industrial and high-tech development. It is widely used in complex industrial process control, robotics, automatic target recognition, inertial navigation, agriculture, remote sensing, medical diagnosis and treatment, image processing, pattern recognition and other fields. Practice has proved that, compared with a single-sensor system, applying multi-sensor data fusion to problems such as detection, tracking and target identification can effectively enhance the survivability of a system, improve the reliability and robustness of the whole system, strengthen the credibility of the data and improve its precision, thereby extending the temporal and spatial coverage of the whole system.
Rigid transformations between different sensors, also called external parameters, are a prerequisite for fusing multi-sensor information. External parameter calibration methods can be divided into online and offline calibration. Online calibration estimates the external parameter matrix in real time while an unmanned vehicle is running, and can correct external parameter drift caused by factors such as jolting, but an initial value must be obtained by offline calibration or manual measurement to improve the convergence rate. At present, offline multi-sensor calibration mainly relies on feature detection and matching to compute the external parameters. In the external parameter calibration of a 2D-3D camera pair, however, the resolution of the point cloud acquired by the 3D sensor is low compared with that of the higher-resolution image, so point cloud feature points precise enough to match the image feature points cannot be detected, which severely degrades the external parameter calibration accuracy between the 2D and 3D cameras.
Disclosure of Invention
The invention aims to provide a 2D-3D camera external parameter calibration method based on polygon matching, so as to improve the accuracy of external parameter calibration between 2D and 3D cameras.
In view of this, the scheme of the invention is as follows:
A 2D-3D camera external parameter calibration method based on polygon matching comprises the following steps:
calibrating the internal parameters of the 2D camera;
extracting the 2D polygon information of the calibration plate from the image;
extracting the straight-line edge information of the 3D polygon to obtain point clouds each satisfying a single straight-line characteristic;
placing the edges of the 2D polygon and the 3D polygon in one-to-one correspondence;
calculating the distances from points to straight lines, and obtaining the external parameters between the 2D camera and the 3D camera by minimizing a global loss function.
According to an embodiment of the invention, the edges of the 2D polygon and the 3D polygon are placed in correspondence by sorting both polygons, with the index compensation value offset:

offset = argmin_{n ∈ {0,1,2,3}} Σ_i Σ_a f(Q_i^b, E_i^a), with b = (a + n) mod 4

wherein

f(Q, E) = Σ_j g( K (R'·P_j + t'), E )

In the formulas, n ∈ {0,1,2,3}; i is the index of the polygon matching pair; Q_i^a denotes the a-th edge of the 3D polygon in the i-th group of matching data; E_i^a denotes the a-th edge of the 2D polygon in the i-th group of matching data; f(Q, E) is the 2D-3D point-line error function; g(.) is the distance function from a 2D point to a straight line; K is the internal parameter matrix of the 2D camera; P_j is the j-th point of Q; and R' and t' are the initial external parameters between the 2D camera and the 3D camera.
According to an embodiment of the present invention, the 3D polygon straight-line edge information is constructed as follows:
S31, extracting the calibration plate plane point cloud, and slicing it in the horizontal direction into a plurality of point cloud blocks; the left and right end points of each point cloud block form the left and right edge point clouds of the calibration plate;
S32, dividing the obtained calibration plate edge point clouds, according to the shape of the calibration plate, into point cloud blocks each having a single straight-line characteristic.
Further, the step S32 includes:
S3201, sorting the edge point cloud according to the polar-angle sorting principle;
S3202, traversing each point P_seg in the edge point cloud, and dividing the edge point cloud into two parts P_1 and P_2 according to the point index;
S3203, respectively calculating the ratios ratio_1 and ratio_2 between the largest and second-largest principal component values of the point clouds P_1 and P_2, and recording the score score = ratio_1 * ratio_2;
S3204, repeating S3202 and S3203, taking the division point P_seg with the largest score, and dividing the edge point cloud into point clouds each satisfying a single straight-line characteristic.
According to an embodiment of the invention, the loss function is of the form:

(R, t) = argmin_{R,t} Σ_i Σ_k f(Q_i^k, E_i^k)

where f(.) is the 2D-3D point-line error function. Further, gross errors are removed from the loss function, and the Levenberg-Marquardt algorithm is then used for iterative solution to obtain the 2D-3D camera external parameters.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides a 2D-3D camera external parameter calibration method based on a polygonal calibration plate. Compared with feature-point-based 2D-3D camera calibration methods, replacing point features with straight-line features greatly improves the accuracy of feature detection in the point cloud data, thereby guaranteeing the accuracy of the external parameter calibration;
2. By using the edge point cloud segmentation method based on principal component analysis, point clouds containing multiple straight-line features can be accurately segmented into point cloud blocks with a single straight-line feature without setting any segmentation threshold, which improves both the efficiency and the accuracy of point cloud edge straight-line extraction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a general flowchart of the method for calibrating the external parameters of the 2D-3D camera based on polygon matching.
Fig. 2 is a schematic diagram of the "anchor" connection principle of the present invention.
FIG. 3 is a schematic view of a horizontal point cloud slice according to the present invention.
FIG. 4 is a diagram illustrating the result of extracting the straight edge of the image according to the present invention.
FIG. 5 is a diagram illustrating the result of extracting the edge straight line of the calibration plate according to the present invention.
FIG. 6 is a schematic diagram of the point cloud extraction effect of the left and right edges of the calibration plate according to the present invention.
FIG. 7 is a schematic diagram of the extraction effect of the point cloud at the edge of the calibration plate according to the present invention.
FIG. 8 is a schematic diagram illustrating the effect of projecting an image to a point cloud according to the present invention.
FIG. 9 is a schematic diagram illustrating the effect of point cloud projection on an image according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a 2D-3D camera external parameter calibration method based on polygon matching. A 2D camera and a 3D camera simultaneously acquire data of a rectangular calibration plate (for convenience, a checkerboard is adopted as the calibration plate), and the edges of the calibration plate are extracted from the image data and the point cloud data and matched, so that the external parameters of the 2D camera and the 3D camera are accurately solved. The method runs on a CPU; the technical route is shown in figure 1.
The method comprises the following steps: 2D camera internal reference calibration
Since a 2D camera can only represent the planar information of the environment while a 3D camera represents its three-dimensional spatial information, the external parameter calibration between the 2D camera and the 3D camera is essentially the solution of the rigid transformation parameters between the image space auxiliary coordinate system of the 2D camera and the coordinate system of the 3D camera. However, the information collected by the 2D camera lies only in a two-dimensional space; therefore, the focal length and principal point translation parameters (i.e., the camera internal parameters) of the 2D camera need to be solved so that the two-dimensional information can be projected into three-dimensional space, realizing the conversion from the image plane coordinate system to the image space auxiliary coordinate system.
Zhang Zhengyou's camera calibration method is used. In order to improve the calibration precision of the 2D camera, a circular calibration plate is used for the calibration, and the finally obtained reprojection error of the 2D camera calibration is 0.05 pixel.
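The conversion between the image plane coordinate system and the image space auxiliary coordinate system that the calibrated intrinsics enable can be sketched with a minimal pinhole model. The focal length and principal point below are illustrative values for the example, not the parameters of the calibrated camera:

```python
import numpy as np

# Illustrative intrinsics: focal length 800 px, principal point (320, 240).
# Example values only, not those of the calibrated camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, P):
    """Pinhole projection of a 3D point in the camera frame to pixel coordinates."""
    x = K @ P
    return x[:2] / x[2]

def backproject(K, uv):
    """Lift a pixel back to a unit-depth direction, i.e. the conversion from
    the image plane coordinate system to the image space auxiliary system."""
    u, v = uv
    return np.linalg.inv(K) @ np.array([u, v, 1.0])
```

For example, a camera-frame point (0.1, -0.2, 2.0) projects to pixel (360, 160); back-projecting that pixel and scaling by the depth recovers the 3D point.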
Step two: 2D polygon information extraction
The invention uses a rectangular calibration plate as the calibration tool, so the 2D polygon information of the four rectangle edges needs to be extracted from the image. Traditional straight-line segment detection generally extracts, from an edge image, all segments containing a certain number of edge points through the Hough transform and selects lines by a length threshold; however, because of the large data volume, this approach is generally inefficient and requires a fixed detection threshold. The invention therefore uses an anchor-based line segment extraction algorithm and verifies the validity of each segment with the Helmholtz Principle:
1. image edge extraction
The invention uses an image edge extraction algorithm based on gradient, and the detailed steps are as follows:
1) Image smoothing
The aim of image smoothing is to reduce the influence of noise on the image edge extraction algorithm. The invention smooths the image with a 5 × 5 Gaussian smoothing operator with standard deviation δ = 1;
2) Edge pixel computation
In order to improve the efficiency of image edge extraction, the invention uses a gradient-based image edge extraction technique. A Sobel kernel is convolved with the image pixel by pixel to obtain the gradient values G_x and G_y in the x and y directions. If G_x + G_y is greater than a given threshold, the pixel is labeled an "edge pixel"; otherwise it is labeled a "non-edge pixel". Meanwhile, to speed up the subsequent "anchor" connection algorithm, the direction of each "edge pixel" is marked: if G_x > G_y, the "edge pixel" is marked as vertical; otherwise, it is marked as horizontal;
3) Edge "anchor" computation
After the edge pixels are calculated, a certain number of edge pixels are selected as "anchors" in order to accelerate the anchor connection algorithm. Considering that the anchors need to be distributed uniformly over the image, interval sampling is adopted: every n_row image rows, one "edge pixel" is selected as an "anchor". The value of n_row determines the level of detail of the image edges; the smaller n_row is, the richer the image edge details;
4) "Anchor" connection
After the anchors are extracted, they need to be connected into lines. The invention guides the connection path of the anchors according to the edge pixels and their directions calculated above. As shown in fig. 2, the values in the boxes are the gradient values of the pixels, and three red circular boxes mark three "anchors"; assume the connection procedure starts from the middle "anchor". If the direction of the anchor is vertical, the point with the maximum gradient among the three pixels above and the three pixels below is searched and connected to the anchor; if the direction of the anchor is horizontal, the point with the maximum gradient among the three pixels to the left and right is searched and connected to the anchor. The anchor connection results are shown by the circled pixels in fig. 2; continuous, single-pixel-wide image edges are obtained in this way;
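The gradient computation and direction labeling of steps 2) and 3) can be sketched as follows. This is a plain-NumPy illustration with an assumed gradient threshold; a real implementation would vectorize the convolution:

```python
import numpy as np

def sobel_edge_pixels(img, grad_thresh=30.0):
    """Label edge pixels and their directions from Sobel gradients.

    Returns (is_edge, is_vertical) boolean maps. A pixel is an "edge
    pixel" when G_x + G_y exceeds the threshold; it is marked "vertical"
    when the horizontal gradient dominates (G_x > G_y), i.e. the edge
    itself runs vertically.
    """
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):          # plain convolution, border skipped
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = abs((win * kx).sum())
            gy[r, c] = abs((win * ky).sum())
    is_edge = (gx + gy) > grad_thresh
    is_vertical = gx > gy
    return is_edge, is_vertical
```

On a synthetic image with a vertical intensity step, the pixels along the step are labeled as vertical edge pixels.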
2. calibration plate edge line extraction
Based on the method in step 1, the edge information of the image can be extracted, which contains the edge straight lines of the calibration plate. The invention adopts a least-squares principle: points in the edge information are added one by one into a least-squares line extractor, so that all the straight-line information in the image is obtained. In consideration of the robustness of straight-line extraction, lines supported by too few points are discarded, and lines of poor robustness are removed using the Helmholtz Principle. In this way, stable and reliable straight-line features in the image are obtained.
Step three: 3D polygon information extraction
1. Point cloud edge extraction
In order to accurately extract the point cloud data of the calibration plate edges, the RANSAC algorithm is first used to extract the calibration plate plane point cloud; the plane point cloud is then sliced in the horizontal direction into a plurality of point cloud blocks, and the left and right end points of each block form the left and right edge point clouds of the calibration plate. As shown in fig. 3, different stripes represent different point cloud slices.
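The horizontal slicing step can be sketched as follows. This is a minimal NumPy illustration; the axis conventions (x across the plate, z up the plate) and the slice count are assumptions of the example:

```python
import numpy as np

def slice_plane_edges(points, n_slices=20):
    """Slice a calibration-plate plane point cloud into horizontal bands and
    collect each band's extreme points as left/right edge points.

    points: (N, 3) array; assumed frame: x runs across the plate, z up it.
    Returns (left_edge, right_edge) arrays of band-extreme points.
    """
    z = points[:, 2]
    edges_l, edges_r = [], []
    bins = np.linspace(z.min(), z.max(), n_slices + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        band = points[(z >= lo) & (z <= hi)]
        if len(band) == 0:
            continue
        edges_l.append(band[band[:, 0].argmin()])  # leftmost point of the band
        edges_r.append(band[band[:, 0].argmax()])  # rightmost point of the band
    return np.array(edges_l), np.array(edges_r)
```

On a regular grid of plate points, the collected left and right edge points lie exactly on the plate borders.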
2. point cloud edge segmentation
After the left and right edge point clouds of the calibration plate are obtained according to the method in step 1, in order to determine the correspondence between each calibration plate edge point cloud and an image edge line, the obtained edge point clouds need to be divided, according to the shape of the calibration plate, into point cloud blocks each having a single straight-line characteristic. The invention provides a method for dividing the left and right edges of the calibration plate based on principal component analysis. For the left or right edge extracted in step 1, the point cloud segmentation comprises the following steps:
(1) sorting the edge point cloud according to the polar-angle sorting principle;
(2) traversing each point P_seg in the edge point cloud, and dividing the edge point cloud into two parts P_1 and P_2 according to the point index;
(3) respectively calculating the ratios ratio_1 and ratio_2 between the largest and second-largest principal component values of the point clouds P_1 and P_2, and recording the score score = ratio_1 * ratio_2;
(4) repeating (2) and (3), taking the division point P_seg with the largest score, and dividing the edge point cloud into point clouds each satisfying a single straight-line characteristic.
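The PCA-based split of steps (1) to (4) can be sketched as follows. This is a minimal NumPy illustration; the requirement of at least three points per side is an assumption added here to keep the covariance well-defined:

```python
import numpy as np

def pc_ratio(pts):
    """Ratio of the largest to the second-largest principal component
    (eigenvalues of the covariance); large when the points lie near one line."""
    cov = np.cov(pts.T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return w[0] / max(w[1], 1e-12)

def split_edge(points):
    """Find the index that splits an ordered edge point cloud into two
    maximally line-like parts: score(i) = ratio(P_1) * ratio(P_2)."""
    best_i, best_score = None, -1.0
    for i in range(3, len(points) - 3):    # keep at least 3 points per side
        score = pc_ratio(points[:i]) * pc_ratio(points[i:])
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

On an L-shaped point set (two straight segments meeting at a corner), the score peaks at the corner index, with no segmentation threshold needed.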
Step four: polygon matching
Based on the 2D polygon information and the 3D polygon information solved in steps two and three, the edges of the 2D polygon and the edges of the 3D polygon need to be placed in one-to-one correspondence. To simplify the correspondence problem, the invention sorts the 2D polygons and the 3D polygons, turning the matching problem into an index compensation problem: for the same group of 2D and 3D polygons, there must exist a compensation value offset between the polygon edge indexes satisfying

offset = argmin_{n ∈ {0,1,2,3}} Σ_i Σ_a f(Q_i^b, E_i^a), b = (a + n) mod 4    (1)

f(Q, E) = Σ_j g( K (R'·P_j + t'), E )    (2)

For the rectangular calibration plate used in the invention, n ∈ {0,1,2,3}; i is the index of the polygon matching pair; Q_i^a denotes the a-th edge of the 3D polygon in the i-th group of matching data; E_i^a denotes the a-th edge of the 2D polygon in the i-th group of matching data; f(Q, E) is the 2D-3D point-line error function; g(.) is the distance function from a 2D point to a straight line; K is the internal parameter matrix of the 2D camera; P_j is the j-th point of Q; and R' and t' are the initial external parameters between the 2D camera and the 3D camera, set to the identity.
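For a single matching group, the offset search over the four candidate shifts can be sketched as follows. This is an illustrative implementation; the line representation ax + by + c = 0 with normalized (a, b) and the use of already-projected edge points are assumptions of the sketch:

```python
def point_line_dist(p, line):
    """Distance from a 2D point p to a line (a, b, c): ax + by + c = 0,
    with (a, b) assumed normalized to unit length."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c)

def edge_offset(proj_edges, img_lines):
    """Recover the cyclic index offset between projected 3D polygon edge
    points and 2D image edge lines by minimizing the summed point-to-line
    distance over the four candidate offsets (single match group).

    proj_edges: list of 4 lists of already-projected 2D points, one per
    3D edge; img_lines: list of 4 normalized lines (a, b, c).
    """
    def cost(n):
        return sum(point_line_dist(p, img_lines[a])
                   for a in range(4)
                   for p in proj_edges[(a + n) % 4])
    return min(range(4), key=cost)
```

With edges of a unit square listed in a cyclically shifted order, the search recovers the shift exactly, since the correct offset drives every point-to-line distance to zero.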
Step five: L-M based extrinsic solution
The invention obtains the external parameters between the 2D camera and the 3D camera by minimizing a global loss function, which adds the matching data one by one through the point-to-line distance. The specific form of the loss function is:

(R, t) = argmin_{R,t} Σ_i Σ_k f(Q_i^k, E_i^k)    (3)

where f(.) is the 2D-3D point-line error function defined in equation (2). To avoid the effect of gross errors on the loss function, a robust kernel function φ(.) is added to limit the residuals to a specified function space, and equation (3) becomes:

(R, t) = argmin_{R,t} Σ_i Σ_k φ( f(Q_i^k, E_i^k) )    (4)

The robust kernel function takes many forms. In practical engineering applications the form φ(u) = u² is often used; however, this kernel is highly sensitive to gross errors. The invention therefore uses the Soft-L1 robust kernel, which reduces the influence of a gross error on the loss function from a quadratic level to a linear level and greatly improves the effect of the camera external parameter calibration. The kernel takes the form:

φ(u) = 2( √(1 + u) - 1 )    (5)

According to the principle of formula (4), the Levenberg-Marquardt (L-M) algorithm is used to iteratively solve formula (4), obtaining the 2D-3D camera external parameters.
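The Soft-L1 kernel and its effect on a gross error can be sketched as follows. This is a minimal illustration of the kernel itself, not of the full L-M solver:

```python
import math

def soft_l1(u):
    """Soft-L1 robust kernel: behaves like u for small residuals and like
    2*sqrt(u) for large ones, so a gross error enters the loss roughly
    linearly in the residual instead of quadratically."""
    return 2.0 * (math.sqrt(1.0 + u) - 1.0)

def robust_loss(residuals):
    """Robustified loss: apply the kernel to each squared point-to-line
    residual before summing."""
    return sum(soft_l1(r * r) for r in residuals)
```

A residual of 100 contributes about 2 * 100 to the robust loss instead of 100², so a single outlier no longer dominates the optimization.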
Examples
(1) 2D polygon extraction
Adopting the edge straight-line detection method based on image gradient information, the invention determines the edge anchors by computing the gradient information of the image and then connects the anchors according to the method given in step two, obtaining the edge information of the image. The straight lines in the edges are then extracted one by one by repeatedly adding the edge points into a least-squares fitter, as shown in fig. 4.
As can be seen from fig. 4, the edge extraction algorithm extracts not only the edges of the calibration board but also the edges of the checkerboard inside the board and edges of the external environment. The checkerboard and environment edges are redundant information; the invention filters all the extracted edge information by adding the four vertices of the calibration board as prior information, obtaining image edge information containing only the calibration board edges, as shown in fig. 5.
(2) 3D polygon extraction
Adopting the point cloud edge extraction algorithm based on point cloud slicing and principal component analysis, the point cloud edges can be divided into left and right parts according to the method in step three, based on the horizontal point cloud slices, as shown in fig. 6.
As can be seen from fig. 6, due to partial loss of the original point cloud data, measurement noise and other factors, some noise exists on the extracted left and right edges. The calibration plate edge point cloud obtained by combining a RANSAC-based straight-line extraction method with the principal-component-analysis-based point cloud segmentation is shown in fig. 7.
(3) Polygon matching
The purpose of polygon matching is to correctly match the 2D polygon data with the 3D polygon data. The 2D and 3D polygons are placed in one-to-one correspondence according to the polygon matching method in step four, completing the correspondence between the image features and the point cloud features.
(4) L-M optimization
After the image features and the point cloud features are placed in one-to-one correspondence, the invention solves the external parameters between the 2D camera and the 3D camera using a nonlinear optimization method based on L-M optimization. To evaluate the camera external parameter calibration result, the point cloud is projected into the image and the image information is back-projected into the point cloud; the effects are shown in fig. 8 and fig. 9.
As can be seen from fig. 8 and fig. 9, the 2D-3D external parameter calibration method provided by the invention can effectively and accurately calibrate the external parameters of the two cameras.
The invention is not limited solely to that described in the specification and embodiments, and additional advantages and modifications will readily occur to those skilled in the art, so that the invention is not limited to the specific details, representative apparatus, and illustrative examples shown and described herein, without departing from the spirit and scope of the general concept as defined by the appended claims and their equivalents.

Claims (5)

1. A 2D-3D camera external parameter calibration method based on polygon matching, characterized by comprising the following steps:
calibrating the internal parameters of the 2D camera;
extracting the 2D polygon information of the calibration plate from the image;
extracting the straight-line edge information of the 3D polygon to obtain point clouds each satisfying a single straight-line characteristic;
placing the edges of the 2D polygon and the 3D polygon in one-to-one correspondence;
calculating the distances from points to straight lines, and obtaining the external parameters between the 2D camera and the 3D camera by minimizing a global loss function;
the 3D polygon straight-line edge information being constructed as follows:
S31, extracting the calibration plate plane point cloud, and slicing it in the horizontal direction into a plurality of point cloud blocks, the left and right end points of each point cloud block forming the left and right edge point clouds of the calibration plate;
S32, dividing the obtained calibration plate edge point cloud, according to the shape of the calibration plate, into point cloud blocks each having a single straight-line characteristic.
2. The external parameter calibration method according to claim 1, wherein the edges of the 2D polygon and the 3D polygon are placed in correspondence by sorting the 2D polygon and the 3D polygon, with the index compensation value offset:

offset = argmin_{n ∈ {0,1,2,3}} Σ_i Σ_a f(Q_i^b, E_i^a), with b = (a + n) mod 4

wherein

f(Q, E) = Σ_j g( K (R'·P_j + t'), E )

In the formulas, n ∈ {0,1,2,3}; i is the index of the polygon matching pair; Q_i^a denotes the a-th edge of the 3D polygon in the i-th group of matching data; E_i^a denotes the a-th edge of the 2D polygon in the i-th group of matching data; Q_i^b denotes the b-th edge of the 3D polygon in the i-th group of matching data; E_i^b denotes the b-th edge of the 2D polygon in the i-th group of matching data; f(Q, E) is the 2D-3D point-line error function; g(.) is the distance function from a 2D point to a straight line; R' and t' are the initial external parameters between the 2D camera and the 3D camera; E represents an edge of the 2D polygon; K represents the internal parameters of the 2D camera; and P_j represents the j-th point of Q.
3. The external parameter calibration method according to claim 1, wherein the step S32 comprises:
S3201, sorting the edge point cloud according to the polar-angle sorting principle;
S3202, traversing each point P_seg in the edge point cloud, and dividing the edge point cloud into two parts P_1 and P_2 according to the point index;
S3203, respectively calculating the ratios ratio_1 and ratio_2 between the largest and second-largest principal component values of the point clouds P_1 and P_2, and recording the score score = ratio_1 * ratio_2;
S3204, repeating S3202 and S3203, taking the division point P_seg with the largest score, and dividing the edge point cloud into point clouds each satisfying a single straight-line characteristic.
4. The external parameter calibration method according to claim 1, wherein the loss function is of the form:

(R, t) = argmin_{R,t} Σ_i Σ_k f(Q_i^k, E_i^k)

where f(.) is the 2D-3D point-line error function, Q_i^k denotes the k-th edge of the 3D polygon in the i-th group of matching data, and E_i^k denotes the k-th edge of the 2D polygon in the i-th group of matching data.
5. The external parameter calibration method according to claim 4, wherein gross errors are removed from the loss function, and the Levenberg-Marquardt algorithm is then used for iterative solution to obtain the 2D-3D camera external parameters.
CN202110430415.2A 2021-04-21 2021-04-21 2D-3D camera external parameter calibration method based on polygon matching Active CN112991372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110430415.2A CN112991372B (en) 2021-04-21 2021-04-21 2D-3D camera external parameter calibration method based on polygon matching


Publications (2)

Publication Number Publication Date
CN112991372A CN112991372A (en) 2021-06-18
CN112991372B (en) 2021-10-22

Family

ID=76341513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110430415.2A Active CN112991372B (en) 2021-04-21 2021-04-21 2D-3D camera external parameter calibration method based on polygon matching

Country Status (1)

Country Link
CN (1) CN112991372B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241063A (en) * 2022-02-22 2022-03-25 聚时科技(江苏)有限公司 Multi-sensor external parameter online calibration method based on depth Hough transform

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754578B (en) * 2019-03-26 2023-09-19 舜宇光学(浙江)研究院有限公司 Combined calibration method for laser radar and camera, system and electronic equipment thereof
CN111640158B (en) * 2020-06-11 2023-11-10 武汉斌果科技有限公司 End-to-end camera and laser radar external parameter calibration method based on corresponding mask



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant