CN114842093A - Automatic calibration system and calibration method for external parameters of vehicle-mounted monocular camera based on key points


Info

Publication number
CN114842093A
CN114842093A CN202210550682.8A CN202210550682A CN114842093A CN 114842093 A CN114842093 A CN 114842093A CN 202210550682 A CN202210550682 A CN 202210550682A CN 114842093 A CN114842093 A CN 114842093A
Authority
CN
China
Prior art keywords
camera
vehicle
solving
points
matching
Prior art date
Legal status: Pending
Application number
CN202210550682.8A
Other languages
Chinese (zh)
Inventor
张德泽
单玉梅
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202210550682.8A
Publication of CN114842093A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a key-point-based automatic calibration system and calibration method for the external parameters of a vehicle-mounted monocular camera. The system comprises a feature point extraction and matching module, which extracts and matches feature points in consecutive frame images; a relative motion solving module, which solves the relative motion between consecutive frames from the matched feature point pairs using epipolar geometry; a homography solving module, which screens out well-matched road-surface feature points according to the homography relation; an external parameter solving module, which solves the external parameters of the camera; and a result output module, which accumulates results, computes their mean once a fixed number has been reached, and outputs it. The method comprises the following steps: setting initial values; extracting and matching feature points; solving the relative motion; extracting and matching road-surface feature points; solving the homography; solving the external parameters; and accumulating and outputting. Without relying on calibration objects, lane lines, or other information, and without manual participation, the invention can correct small camera deflections caused by vehicle vibration or installation deviation, and is also suitable for calibrating side-view cameras.

Description

Automatic calibration system and calibration method for external parameters of vehicle-mounted monocular camera based on key points
Technical Field
The invention relates to the technical field of intelligent automobiles, in particular to a system and a method for automatically calibrating external parameters of a vehicle-mounted monocular camera based on key points.
Background
The vehicle-mounted camera is an important sensor of the vehicle-mounted vision system: it provides the vehicle with driving-environment perception and supplies the basic data required for decision making in driver-assistance systems. The effectiveness of the vision system depends on accurate camera calibration, yet the camera may deflect during service due to body vibration or vehicle maintenance, degrading the accuracy of the vision system. A simple and efficient camera external parameter calibration method is therefore needed.
Current camera calibration methods based on pure vision algorithms mainly comprise traditional calibration methods and automatic calibration methods.
In traditional calibration methods, for example the invention patent application CN202110511210.7, "A calibration method for a vehicle-mounted panoramic camera in a simple calibration environment", the camera external parameters are obtained using calibration objects of known size, such as calibration boards: in a fixed field, a correspondence is established between points of known coordinates on the calibration object and their image pixels, and the external parameters are then solved from these correspondences. Although this approach achieves high calibration accuracy, the operation is complex, a high-precision calibration object and manual participation are required, and the parameters of a vehicle-mounted camera cannot be calibrated or corrected while the vehicle is in motion.
Most commonly used automatic calibration methods rely on parallel or orthogonal structures in the camera images. For example, the invention patent application CN202011551440.8, "An on-line calibration method for a vehicle-mounted camera, and a vehicle-mounted infotainment system", calibrates the camera external parameters using regular straight lane lines: lane-line images are collected by the vehicle-mounted camera, the world-coordinate line equations of the lane lines are obtained by lane-line detection on the collected images, and the external parameters are then optimized, starting from the current external parameter values as the initial guess, with the straightness of those line equations as the optimization target. Although this calibration method is computationally fast, it is only applicable to front-view and rear-view cameras, and cannot be used on road sections without lane-line information or to calibrate side-view cameras.
Disclosure of Invention
In view of the deficiencies of the prior art, the technical problem to be solved by the invention is: how to provide a key-point-based automatic calibration system and calibration method for the external parameters of a vehicle-mounted monocular camera that does not rely on calibration objects, lane lines, or other information or on manual participation, that can correct small camera deflections caused by vehicle vibration or installation deviation, and that is also suitable for calibrating side-view cameras.
In order to solve the technical problems, the invention adopts the following technical scheme:
A key-point-based automatic calibration system for the external parameters of a vehicle-mounted monocular camera, comprising:
a feature point extraction and matching module, used for extracting and matching feature points in consecutive frame images;
a relative motion solving module, used for solving the relative motion between consecutive frames from the feature point pairs using epipolar geometry;
a homography solving module, used for extracting feature points on the road surface and screening out well-matched road-surface feature points according to the homography relation;
an external parameter solving module, used for solving the external parameters of the camera based on the consecutive-frame relative motion and the road-surface feature point pairs;
and a result output module, used for accumulating the results obtained by the external parameter solving module, computing their mean once a fixed number has been reached, and finally outputting it.
A key-point-based automatic calibration method for the external parameters of a vehicle-mounted monocular camera, using the above key-point-based automatic calibration system and comprising the following steps:
step 1) setting initial values;
step 2) feature point extraction and matching: the feature point extraction and matching module extracts and matches feature points in consecutive frame images;
step 3) relative motion solving: the relative motion solving module solves the relative motion between consecutive frames from the feature point pairs using epipolar geometry;
step 4) road-surface feature point extraction and matching: the homography solving module extracts and matches road-surface feature points;
step 5) homography solving: the homography solving module solves the homography;
step 6) external parameter solving: the external parameter solving module solves the external parameters of the camera based on the consecutive-frame relative motion and the road-surface feature point pairs;
step 7) accumulation and output: the result output module accumulates the results obtained by the external parameter solving module, computes their mean once a fixed number has been reached, and finally outputs it.
Preferably, the initial value setting in step 1) includes setting the world coordinates of the camera installation position and the initial value of the camera Euler angles;
the camera installation position is set as follows: the point on the ground below the center of the vehicle's rear axle is taken as the origin of the world coordinate system, the forward direction of the vehicle as the positive X axis, and the direction perpendicular to the ground pointing upward as the positive Z axis; the world coordinates of the camera installation position are then determined in this world coordinate system;
the initial value of the camera Euler angles is set as follows: the last known state value of the camera Euler angles is used as the initial value.
Preferably, the inputs in step 1) further include the consecutive frame images Im1 and Im2 from the camera, the vehicle body speed, and the vehicle body angular speed. The consecutive frame images all carry timestamps; the time difference between the two camera frames is obtained from their timestamps, and the relative motion between the two frames is determined by solving the epipolar constraint between them.
When the vehicle body angular speed is greater than 1 degree/second, the vehicle is not considered to be moving in a straight line, and the computation is abandoned.
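The frame-pair gating described above can be sketched as follows (a minimal sketch; the function name and return convention are illustrative, and only the 1 degree/second straight-line threshold comes from the text):

```python
def should_calibrate(ts1, ts2, yaw_rate_deg_s, max_yaw_rate=1.0):
    """Gate a pair of consecutive frames for calibration.

    ts1, ts2       -- timestamps of the two frames, in seconds
    yaw_rate_deg_s -- vehicle body angular speed, in degrees/second

    Returns (ok, dt): dt is the inter-frame time difference, and ok is
    False whenever the angular speed exceeds the threshold, i.e. the
    vehicle is not considered to be driving in a straight line.
    """
    dt = ts2 - ts1
    ok = abs(yaw_rate_deg_s) <= max_yaw_rate
    return ok, dt
```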
Preferably, in step 2), when extracting and matching feature points in consecutive frame images, the feature point extraction and matching module performs corner detection on the two input frames, extracts the corner points and their feature vectors, matches the feature points based on the Hamming distance between the feature vectors, and finally retains all successfully matched feature point pairs.
Preferably, in step 2), an ORB corner detection method is adopted to extract feature points from the consecutive frame images, and at most 3000 feature points are extracted per frame;
the feature points are matched based on the Hamming distance between their feature vectors, and the top 30% of matches with the highest matching degree are selected as the final matching result.
Preferably, when solving the relative motion in step 3), the feature point pairs with high matching degree from step 2) are first screened out, the essential matrix between the two frames is then solved from these pairs, and finally the relative rotation matrix and the relative translation vector between the two frames are recovered from the essential matrix.
Preferably, in step 3), if fewer than 8 feature point pairs have been extracted, the computation is abandoned; if 8 or more pairs are available, the essential matrix between the two frames is solved using the five-point method, and after the essential matrix has been computed, the relative rotation matrix and the relative translation vector between the two frames are recovered from it.
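The recovery of relative rotation and translation from the essential matrix can be illustrated with the standard SVD decomposition (a sketch of the textbook method, not the patent's own solver; the cheirality test that selects the physically valid solution among the candidates is omitted):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Split an essential matrix E = [t]x R into its two candidate
    rotations and the translation direction (up to sign) via SVD."""
    U, _, Vt = np.linalg.svd(E)
    # force proper rotations: det(U) = det(V) = +1
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # unit translation direction
    return R1, R2, t
```

One of R1 and R2, combined with +t or -t, is the true relative motion; in practice the pair for which triangulated points lie in front of both cameras is kept.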
Preferably, step 3) further judges whether the estimate of the camera's relative motion state is correct; if so, step 4) is executed; if not, the computation is abandoned.
Whether the estimate of the camera's relative motion state is correct is judged as follows:
1) from the initial Euler angles [ω0, θ0, τ0], compute the corresponding initial rotation matrix R_e0 (the formula is given as an image in the original);
2) compute the initial-value relative translation vector t_c0: for a vehicle in pure translation, the normalized translation vector between two consecutive frames is t_car = [-1, 0, 0]; then, by the rotation-translation relation between the coordinate systems, the camera's relative translation vector is
t_c0 = R_e0 * t_car
3) compare the estimated relative translation vector t_c with the initial-value vector t_c0 against a distance threshold: if the distance between t_c and t_c0 is greater than the threshold, the computation is abandoned; if it is less than or equal to the threshold, step 4) is executed.
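The plausibility check in steps 1) to 3) can be sketched as follows (the Euler-angle rotation order and the threshold value are assumptions, since the patent gives the rotation-matrix formula only as an image):

```python
import numpy as np

def euler_to_rot(omega, theta, tau):
    """Rotation matrix from three Euler angles (radians). The patent's
    exact rotation order is given only as an image; a Z-Y-X composition
    is assumed here for illustration."""
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(tau), np.sin(tau)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def motion_estimate_ok(t_est, euler0, thresh=0.2):
    """Check the estimated camera translation against the prediction
    t_c0 = R_e0 @ [-1, 0, 0] for straight-line driving. The distance
    threshold value is an assumption."""
    t_c0 = euler_to_rot(*euler0) @ np.array([-1.0, 0.0, 0.0])
    t_n = np.asarray(t_est, float)
    t_n = t_n / np.linalg.norm(t_n)  # normalize the estimate
    return bool(np.linalg.norm(t_n - t_c0) <= thresh)
```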
Preferably, in step 4), when extracting and matching road-surface feature points, the number of non-road-surface feature points is reduced by restricting the feature point extraction area, and the feature points are matched based on vehicle body motion information and local image information.
Preferably, with the camera image width W and height H, the feature point extraction area of the front-view and rear-view cameras is set as
(formula given as an image in the original)
and the feature point extraction area of the side-view camera is set as
(formula given as an image in the original)
Preferably, feature points are matched in step 4) based on vehicle body motion information and local image information as follows: a camera model is built from the initial input values, and the pixel coordinates of each road-surface feature point in the first frame are converted to world coordinates; the position of the same feature point in the second frame is then predicted from the vehicle body motion information. Meanwhile, a square local patch of the first frame, centered on the feature point with a first set length as side length, is cut out as the template to be matched. After the position prediction is complete, the world coordinates of the predicted position are converted back to pixel coordinates, and a search area is constructed as a circle centered on these pixel coordinates with a second set length as radius. Feature points of the second frame are searched within this area: if corresponding feature points exist in the area, a square local patch of the second frame, centered on each such feature point with the first set length as side length, is cut out and matched against the first-frame patch, and the feature point with the highest matching degree is selected as the final match; if no corresponding feature point exists in the area, the matching of this feature point fails and the next match is processed.
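The search-circle gating and local-patch comparison described above can be sketched as follows (a minimal illustration; the SSD matching score and all names are assumptions, and the camera-model prediction step is taken as already done):

```python
import numpy as np

def ssd_score(patch_a, patch_b):
    """Sum of squared differences between two equal-sized patches;
    lower means a better match."""
    d = patch_a.astype(float) - patch_b.astype(float)
    return float((d * d).sum())

def pick_match(predicted_uv, candidates, frame1_patch, frame2_patches, radius):
    """Select the best frame-2 feature for one frame-1 road point.

    predicted_uv   -- pixel position predicted from vehicle motion
    candidates     -- (u, v) pixel coordinates of frame-2 features
    frame2_patches -- frame2_patches[i] is the square patch around
                      candidates[i]
    Returns the index of the best candidate inside the search circle,
    or None when no candidate lies within `radius` (matching failed).
    """
    best, best_score = None, np.inf
    for i, (u, v) in enumerate(candidates):
        if np.hypot(u - predicted_uv[0], v - predicted_uv[1]) > radius:
            continue  # outside the search area
        score = ssd_score(frame1_patch, frame2_patches[i])
        if score < best_score:
            best, best_score = i, score
    return best
```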
Preferably, when solving the homography in step 5), the homography relation between the two frames is solved from the feature point pairs successfully matched in step 4); for points lying on the same plane in the two frames, the pixel coordinates satisfy a homography relation, and the homography solution screens out the feature point pairs that satisfy it;
if fewer than 4 feature point pairs were successfully matched in step 4), the homography solution fails and the computation ends;
if at least 4 pairs were successfully matched in step 4), the homography matrix is estimated by the least squares method, and all feature point pairs satisfying the homography relation are retained.
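The least-squares homography estimate from at least 4 point pairs can be sketched with the standard direct linear transform (an illustrative implementation of the textbook method, not the patent's exact solver):

```python
import numpy as np

def fit_homography(pts1, pts2):
    """Direct linear transform estimate of H with pts2 ~ H @ pts1 in
    homogeneous coordinates, solved in the least-squares sense via SVD.
    Requires at least 4 point pairs."""
    if len(pts1) < 4:
        raise ValueError("homography needs at least 4 point pairs")
    A = []
    for (x, y), (u, v) in zip(pts1, pts2):
        # two linear equations in the 9 entries of H per point pair
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)  # null-space vector = flattened H
    return H / H[2, 2]
```

With noisy matches, the retained pairs are those whose reprojection error under the fitted H stays below a tolerance.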
Preferably, when solving the camera external parameters in step 6), three orthogonal unit vectors are constructed from the relative translation vector of step 3) as the three columns of the camera's external rotation matrix, with one parameter left undetermined; all possible selections of two successfully matched feature point pairs from step 5) are then enumerated, and for each selection the undetermined parameter and the corresponding external rotation matrix are solved, the corresponding Euler angles being obtained through the conversion between rotation matrix and Euler angles; if the difference between the obtained Euler angles and the initial values is greater than 5 degrees, they are considered outliers and discarded; if it is less than or equal to 5 degrees, all Euler angles meeting the requirement are kept, their mean is computed, and the external rotation matrix corresponding to the mean is taken, together with the mean Euler angles, as the result of this calibration pass.
Preferably, the camera's external rotation matrix in step 6) is computed as follows:
1) construct a set of orthonormal vectors from the relative translation vector t_c solved in step 3) (the formulas defining r_1 and n_1 are given as images in the original):
n_2 = r_1 × n_1
where r_11, r_21, r_31 are the three components of r_1;
from these orthonormal vectors, a single-parameter camera rotation matrix can be constructed:
R_e(α) = [r_1, -r_1 × r_3(α), r_3(α)]
r_3(α) = cos(α) n_1 + sin(α) n_2
where α is the parameter to be determined, r_1 is the first column of the rotation matrix, r_3 is the third column, and R_e is the rotation matrix;
2) enumerate all combinations of two feature point pairs from those extracted in step 5): if step 5) yielded N pairs in total, there are C(N, 2) = N(N-1)/2 such combinations;
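The single-parameter rotation family R_e(α) constructed in step 1) above can be sketched as follows (the formula for n_1 is an image in the original, so a particular unit vector orthogonal to r_1 is assumed here; any such choice yields a valid orthonormal construction, though this one degenerates when r_1 is parallel to the z axis):

```python
import numpy as np

def single_param_rotation(t_c, alpha):
    """One-parameter rotation family R_e(alpha) built from the relative
    translation direction t_c, following R_e = [r1, -r1 x r3, r3] with
    r3 = cos(a) n1 + sin(a) n2 and n2 = r1 x n1."""
    r1 = np.asarray(t_c, float)
    r1 = r1 / np.linalg.norm(r1)
    # assumed choice: unit vector orthogonal to r1 in the xy-plane
    n1 = np.array([-r1[1], r1[0], 0.0])
    n1 = n1 / np.linalg.norm(n1)
    n2 = np.cross(r1, n1)
    r3 = np.cos(alpha) * n1 + np.sin(alpha) * n2
    r2 = -np.cross(r1, r3)
    return np.column_stack([r1, r2, r3])
```

Since r_3 lies in the plane orthogonal to r_1 for every α, the result is a proper rotation matrix for every parameter value.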
3) taking the i-th combination as an example, solve for the feature points' coordinates in the camera coordinate system: if the homogeneous pixel coordinates of the feature points are m_p11, m_p12, m_p21, m_p22, the camera-coordinate-system coordinates are
m_c11 = K⁻¹ m_p11,  m_c12 = K⁻¹ m_p12,  m_c21 = K⁻¹ m_p21,  m_c22 = K⁻¹ m_p22
where K is the camera intrinsic matrix, m_c11, m_c12, m_c21, m_c22 are the feature points' camera-coordinate-system coordinates, and [u_c11, u_c12, u_c21, u_c22], [v_c11, v_c12, v_c21, v_c22] denote the x-axis and y-axis components of each camera-coordinate-system coordinate;
4) solve for the rotation-matrix parameter α from the feature points' camera-coordinate-system coordinates (the computation formulas are given as images in the original).
Preferably, in step 6), the dynamic calibration value of the camera Euler angles is computed as follows:
if the obtained external rotation matrix of the camera is R_e (its entries are given as an image in the original), the Euler angles ω_e, θ_e, τ_e are obtained from it (formulas given as images in the original),
where R_e11 to R_e33 are the nine components of the rotation matrix R_e, and ω_e, θ_e, τ_e are the three components of the Euler angles;
if the difference between the computed Euler angles and the initial values is greater than 5 degrees, those Euler angles are discarded; if it is less than or equal to 5 degrees, all Euler angles meeting the requirement are kept and their mean is computed, which is the dynamic calibration value of the camera Euler angles obtained in this pass.
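The 5-degree outlier rejection and averaging can be sketched as follows (names are illustrative; angles are taken per axis, in degrees):

```python
import numpy as np

def filter_and_average(euler_samples, euler_init, max_dev_deg=5.0):
    """Per-pass outlier rejection: drop Euler-angle solutions deviating
    from the initial value by more than 5 degrees on any axis, then
    average the survivors. Returns None when every sample is rejected
    (the pass yields no calibration value)."""
    init = np.asarray(euler_init, float)
    kept = [s for s in np.asarray(euler_samples, float)
            if np.all(np.abs(s - init) <= max_dev_deg)]
    if not kept:
        return None
    return np.mean(kept, axis=0)
```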
Preferably, in step 7), the result output module accumulates the single-pass Euler angles obtained in step 6); when the accumulated Euler-angle data reach a set amount, the mean of all Euler angles is computed, and the corresponding external rotation matrix is computed from that mean and output as the final calibration result.
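The result-output accumulation can be sketched as follows (the batch size is an assumed value; the text only says "a set amount"):

```python
class EulerAccumulator:
    """Accumulate single-pass Euler angles and emit their mean once a
    fixed number of samples has been collected."""

    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.samples = []

    def add(self, euler):
        """Add one (omega, theta, tau) sample; return the averaged
        calibration once enough samples are present, else None."""
        self.samples.append(tuple(euler))
        if len(self.samples) < self.batch_size:
            return None
        n = len(self.samples)
        mean = tuple(sum(s[i] for s in self.samples) / n for i in range(3))
        self.samples.clear()  # start a fresh batch
        return mean
```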
Compared with the prior art, the invention has the following advantages:
1. The monocular camera automatic calibration method provided by the invention realizes on-line dynamic calibration of the camera's external rotation matrix and Euler angles based only on the camera's own images and the vehicle body motion information. Compared with traditional calibration methods, it requires neither a high-precision calibration object nor manual participation, and can calibrate and compensate the camera rotation matrix while the vehicle is in motion; compared with existing automatic calibration methods, it does not need the assistance of lane lines and can calibrate cameras at any installation angle, not only front-view and rear-view cameras. The method also has low hardware requirements and no strict implementation conditions; it is simple, efficient, and accurate, and is applicable to a wide range of vehicle types.
2. The invention realizes automatic calibration of the camera rotation matrix or Euler angles while the vehicle drives in a straight line, needs no lane-line information, and can be used to calibrate front-view and side-view cameras.
3. Relying only on consecutive camera frames and vehicle motion information as input, the method realizes on-line calibration of the camera's three rotation Euler angles; it depends on no calibration board, lane lines, or other information and needs no manual participation, can correct small camera deflections caused by vehicle vibration or installation deviation, and is flexible and simple to implement.
Drawings
FIG. 1 is a flow chart of a method for automatically calibrating external parameters of a vehicle-mounted monocular camera based on key points according to the present invention;
FIG. 2 is a typical feature point matching result diagram in the on-vehicle monocular camera external parameter automatic calibration method based on the key points;
FIG. 3 is a flow chart of the calculation of the relative movement in the automatic calibration method of the vehicle-mounted monocular camera external parameter based on the key point;
FIG. 4 is a flowchart of homography solving in the on-vehicle monocular camera external parameter automatic calibration method based on key points according to the present invention;
fig. 5 is a flow chart of external parameter calculation in the method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on key points.
Detailed Description
The invention will be further explained with reference to the drawings and the embodiments.
First, it should be noted that the algorithm of the invention rests on the following four assumptions:
1) the vehicle moves in a straight line;
2) the camera undergoes only a change of installation angle relative to the vehicle body, with no change of position;
3) the difference between the initial and true Euler angles does not exceed 5 degrees;
4) the camera intrinsic parameters are known and do not change throughout the process. The camera external parameters are the camera's parameters in the world coordinate system, such as its position and rotation; the camera intrinsic parameters are parameters tied to the camera itself, such as its focal length and pixel size.
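The distinction between intrinsic and extrinsic parameters can be made concrete with the standard pinhole projection (an illustrative sketch, not the patent's code):

```python
import numpy as np

def project(K, R, t, X_w):
    """Pinhole projection: the extrinsics (R, t) map a world point into
    the camera frame, then the intrinsics K map it to pixel coordinates
    -- the two parameter sets the description distinguishes."""
    X_c = R @ np.asarray(X_w, float) + np.asarray(t, float)
    u, v, w = K @ X_c
    return u / w, v / w
```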
Specifically, the invention provides a key-point-based automatic calibration system for the external parameters of a vehicle-mounted monocular camera, comprising:
a feature point extraction and matching module, used for extracting and matching feature points in consecutive frame images;
a relative motion solving module, used for solving the relative motion between consecutive frames from the feature point pairs using epipolar geometry;
a homography solving module, used for extracting feature points on the road surface and screening out well-matched road-surface feature points according to the homography relation;
an external parameter solving module, used for solving the external parameters of the camera based on the consecutive-frame relative motion and the road-surface feature point pairs;
and a result output module, used for accumulating the results obtained by the external parameter solving module, computing their mean once a fixed number has been reached, and finally outputting it.
In addition, the invention also provides a key-point-based automatic calibration method for the external parameters of a vehicle-mounted monocular camera, which uses the above key-point-based automatic calibration system and, as shown in FIG. 1, comprises the following steps:
step 1) setting initial values;
step 2) feature point extraction and matching: the feature point extraction and matching module extracts and matches feature points in consecutive frame images;
step 3) relative motion solving: the relative motion solving module solves the relative motion between consecutive frames from the feature point pairs using epipolar geometry;
step 4) road-surface feature point extraction and matching: the homography solving module extracts and matches road-surface feature points;
step 5) homography solving: the homography solving module solves the homography;
step 6) external parameter solving: the external parameter solving module solves the external parameters of the camera based on the consecutive-frame relative motion and the road-surface feature point pairs;
step 7) accumulation and output: the result output module accumulates the results obtained by the external parameter solving module, computes their mean once a fixed number has been reached, and finally outputs it.
In the present embodiment, the initial value setting in step 1) includes setting the world coordinates [x_w0, y_w0, z_w0] of the camera installation position and the initial value [ω0, θ0, τ0] of the camera Euler angles. The Euler angles are a set of three independent angular parameters that determine the orientation of a rigid body rotating about a fixed point, consisting of the nutation angle ω, the precession angle θ, and the rotation angle τ, first proposed by Euler.
The method for setting the installation position of the camera comprises the following steps: the ground below the center of the rear axle of the vehicle is taken as the origin of a world coordinate system, the front direction of the vehicle is the positive direction of the X axis of the world coordinate system, the upward direction vertical to the ground is the positive direction of the Z axis of the world coordinate system to establish the world coordinate system, and the world coordinate [ X ] of the installation position of the camera is determined according to the established world coordinate system w0 ,y w0 ,z w0 ]In specific implementation, the world coordinates of the installation position of the camera can be determined in a manual measurement mode;
the method for setting the initial value of the camera Euler angles is as follows: the last known state value of the camera Euler angles is used as the initial value; if no known state exists, the initial value is determined by manual visual measurement.
In the present embodiment, the inputs required by the algorithm of the invention are the consecutive camera frames Im1 and Im2, the vehicle body speed V, and the yaw rate YawRate. The consecutive frames are time-stamped, so the time difference dt between two frames can be calculated. The algorithm determines the relative motion by solving the epipolar constraint between the two images. If the vehicle body YawRate exceeds 1 degree/second, the vehicle is not considered to be moving in a straight line and the calculation is abandoned.
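The gating rule above can be sketched as a small helper (the function name is illustrative; the 1 degree/second threshold default matches the embodiment):

```python
# Hypothetical helper illustrating the straight-line-driving gate described
# above: a frame pair is only used when the yaw rate stays below 1 deg/s.
def accept_frame_pair(ts1, ts2, yaw_rate_deg_s, max_yaw_rate=1.0):
    """Return (ok, dt): dt from the frame timestamps, ok=False when turning."""
    dt = ts2 - ts1
    if abs(yaw_rate_deg_s) > max_yaw_rate:
        return False, dt   # vehicle not moving straight -> abandon this pair
    return True, dt
```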
In this embodiment, in step 2), when extracting and matching feature points in consecutive frame images, the feature point extraction and matching module performs corner detection on the two input frame images, then extracts corner points in the images and feature vectors thereof, matches the feature points based on hamming distances between the feature vectors, and finally retains all feature point pairs successfully matched.
Specifically, the algorithm first extracts feature points in the consecutive camera frames Im1 and Im2, using ORB corner detection. ORB is short for Oriented FAST and Rotated BRIEF; this feature detector builds on the well-known FAST detector and the BRIEF descriptor, runs far faster than SIFT and SURF, and is suitable for real-time feature detection. ORB features are invariant to scale and rotation and robust to noise and perspective transformation. Feature points are first detected with the FAST operator, and a Steered BRIEF descriptor, a one-dimensional binary vector, is then extracted for each point to describe it. Finally, matching is performed based on the Hamming distance between the feature vectors, and the 30% of matches with the highest matching degree are kept as the final result. Fig. 2 shows a typical feature point matching result; the algorithm limits the number of feature points extracted per frame to at most 3000.
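A minimal sketch of the matching rule just described, with plain integers standing in for 256-bit ORB descriptors (the real descriptors would come from an ORB detector; function names here are illustrative):

```python
# Sketch of the matching step above: binary descriptors are paired by
# Hamming distance and only the best-matching 30% of pairs are kept.
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_top_30_percent(desc1, desc2):
    """Nearest-neighbour matching, then keep the top 30% by distance."""
    matches = []
    for i, d1 in enumerate(desc1):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc2)),
                      key=lambda t: t[1])
        matches.append((i, j, dist))
    matches.sort(key=lambda m: m[2])          # smaller distance = better match
    keep = max(1, int(len(matches) * 0.30))   # retain the best 30%
    return matches[:keep]
```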
In this embodiment, when solving the relative motion in step 3), the feature point pairs with high matching degree from step 2) are first screened out, the essential matrix E between the two frames is then solved from these pairs, and finally the relative rotation matrix Rc and the relative translation vector tc between the two frames are recovered from E.
Specifically, fig. 3 shows the flow of the relative motion calculation. After the feature point matching in step 2) is completed, the essential matrix E between the two frames is solved by the five-point method. At least 5 point pairs are needed to solve for E; to ensure accuracy, the calculation is abandoned when fewer than 8 pairs are available, and E is estimated when 8 or more pairs are available. Once the essential matrix E has been computed, the relative rotation matrix Rc and relative translation vector tc between the two frames can be recovered from it.
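The epipolar relation underlying this step can be illustrated numerically. The sketch below builds E = [t_c]× · R_c for a synthetic motion and checks that a matched point pair satisfies the constraint x2ᵀ·E·x1 = 0; a real implementation would instead estimate E from at least 8 matched pairs with a five-point solver and decompose it back into R_c and t_c:

```python
import numpy as np

# E = [tc]_x @ Rc satisfies x2^T E x1 = 0 for normalized image points
# x1, x2 of the same 3-D point seen under relative motion (Rc, tc).
def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_from_motion(Rc, tc):
    return skew(tc) @ Rc

# Synthetic check: pure forward translation, no rotation.
Rc = np.eye(3)
tc = np.array([0.0, 0.0, 1.0])
E = essential_from_motion(Rc, tc)
X = np.array([0.3, -0.2, 5.0])   # a 3-D point in front of the camera
x1 = X / X[2]                    # normalized projection in frame 1
X2 = Rc @ X + tc                 # the same point in frame-2 coordinates
x2 = X2 / X2[2]
residual = float(x2 @ E @ x1)    # epipolar constraint -> should vanish
```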
At this time, it is necessary to determine whether the relative motion state of the camera is estimated correctly. If the vehicle is in a translation state, the relative rotation matrix of the vehicle is a unit matrix. The invention adopts the following method to judge whether the relative motion estimation is correct:
1) Calculate the relative translation vector t_c0 based on the initial values. First, compute the initial rotation matrix R_e0 from the initial Euler angles [ω_0, θ_0, τ_0]:

[R_e0 expression — presented as an equation image in the original]

Then calculate the relative translation vector t_c0 based on the initial values. For a vehicle in pure translation, the normalized translation vector between two consecutive frames is t_car = [−1, 0, 0]; according to the rotation-translation transformation between the coordinate systems, the camera's relative translation vector t_c0 is:

t_c0 = R_e0 * t_car

2) Compare the estimated value t_c with the initial-value prediction t_c0, with the threshold set to 0.15. If ‖t_c − t_c0‖ > 0.15, the estimation error is judged to be too large and the calculation is abandoned; if the threshold condition is met, proceed to the next calculation.
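The plausibility check above can be sketched as follows (R_e0 and the test vectors below are synthetic examples, not the patent's actual mounting orientation; the 0.15 threshold matches the embodiment):

```python
import numpy as np

# Compare the estimated translation direction tc against the direction
# tc0 = Re0 @ t_car predicted from the initial extrinsics for a vehicle
# driving straight (t_car = [-1, 0, 0]); reject if they differ too much.
def motion_estimate_ok(tc, Re0, threshold=0.15):
    t_car = np.array([-1.0, 0.0, 0.0])   # normalized translation, straight driving
    tc0 = Re0 @ t_car                    # predicted camera-frame translation
    tc_unit = tc / np.linalg.norm(tc)    # estimated direction (normalized)
    return np.linalg.norm(tc_unit - tc0) <= threshold
```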
In this embodiment, in the step 4), when extracting and matching the road surface feature points, the number of non-road surface feature points is reduced by using a method of limiting the feature point extraction area, and the feature points are matched based on the vehicle body motion information and the local image information.
Specifically, the camera extrinsic calculation depends on feature points located on the road plane, so the feature points in the image need to be extracted again. The extraction again uses ORB corner detection; the difference is that detection is restricted to a limited region. With the camera frame width W and height H, the feature point extraction region for the forward-looking and backward-looking cameras is

[region expression — presented as an equation image in the original]

and for a side-view camera the feature point extraction region is

[region expression — presented as an equation image in the original]
This ensures that the extracted feature points are located on the road surface as far as possible.
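A sketch of such a restricted detection window; since the exact region formulas appear only as images in the original, the bounds below (lower-central area for front/rear view, lower half for side view) are illustrative stand-ins only:

```python
# Illustrative detection window for road-surface feature extraction in a
# W x H frame. The concrete fractions are assumptions, not the patent's.
def road_roi(width, height, side_view=False):
    """Return (x0, y0, x1, y1) of the corner-detection window."""
    if side_view:
        return (0, height // 2, width, height)                 # lower half
    return (width // 4, height // 2, 3 * width // 4, height)   # lower centre
```

Corner detection would then be run only on the sub-image given by this rectangle, so that extracted features mostly lie on the road.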
In this embodiment, the method for matching feature points based on vehicle body motion information and local image information in step 4) is as follows. A camera model is established from the initial input values, and the pixel coordinates of each road surface feature point in the first frame image are converted into world coordinates. The position of the same feature point in the second frame image is predicted from the vehicle body motion information. Meanwhile, a square local image of the first frame, centered on the feature point with a first set length as side length, is cropped as the template to be matched. After the position prediction is completed, the world coordinates of the predicted position are converted back into pixel coordinates, and a search region with a second set length as radius is constructed around them. Feature points of the second frame are searched within this region. If corresponding feature points exist in the region, square local images of the second frame, centered on each candidate with the first set length as side length, are cropped and matched against the first-frame template, and the feature point with the highest matching degree is selected as the final matching point. If no corresponding feature point exists in the region, the matching of this feature point fails and the next matching is started.
Specifically, after feature point extraction is completed, the feature points in the two frames need to be matched. Because there are too many similar points on the road surface, directly matching by the Hamming distance of the BRIEF features would produce a large number of wrong matches and affect the subsequent calculation. The invention therefore adopts a feature point matching algorithm based on vehicle body motion and image matching, as follows, taking the ith feature point in the first frame image as an example (the other feature points are matched similarly):
1) a pinhole camera model is established based on the initial values.
P_i = K [R_e0 | t_w] P_wi

where K is the camera intrinsic matrix, a known parameter; R_e0 is the initial rotation matrix; and t_w is the translation vector, determined by the camera mounting position:

[t_w expression — presented as an equation image in the original]

P_i = [x_i, y_i, 1] are the homogeneous pixel coordinates of the ith feature point, and P_wi = [x_wi, y_wi, z_wi, 1] are the corresponding homogeneous world coordinates.
2) With [x_i, y_i] as the center and d as the side length, crop the local region Rect1 from the first frame image as the template to be matched.
3) Calculate the world coordinates P_wi = [x_wi, y_wi, z_wi, 1] of feature point i according to the camera model; since the feature point lies on the road surface, its Z coordinate z_wi = 0. Predict the position of the feature point in the next frame from the vehicle body speed: with body speed V and inter-frame interval dt, the world coordinates of the predicted point are P_wi_pre = [x_wi − V·dt, y_wi, z_wi, 1]. The pixel coordinates of the predicted point in the second frame image, P_i_pre = [x_i_pre, y_i_pre, 1], are then obtained by back-projecting through the camera model.
4) With P_i_pre as the center and r as the radius, establish a search region and search for second-frame feature points inside it. If the region contains no feature point, the matching fails and the next feature point is processed. If the region contains n feature points, crop the local regions Rect2_1 ~ Rect2_n of the second frame image, each centered on one of these feature points with d as the side length, as the images to be matched.
5) Calculate the sum of absolute differences SAD between Rect1 and each region Rect2_j, for the jth feature point in the region with 1 ≤ j ≤ n:

SAD_j = Σ_u Σ_v | Rect1(u, v) − Rect2_j(u, v) |
and calculating SADs of all the feature points to be matched, and selecting the point with the minimum SAD as a final matching result.
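The SAD comparison can be sketched as follows (patch prediction and cropping via the camera model are omitted; only the scoring and minimum-selection of step 5) are shown):

```python
import numpy as np

# Score each candidate patch Rect2_j against the template Rect1 by the
# sum of absolute pixel differences and pick the smallest score.
def sad(patch_a, patch_b):
    """Sum of absolute differences between two same-sized patches."""
    return int(np.abs(patch_a.astype(np.int32) - patch_b.astype(np.int32)).sum())

def best_match(rect1, candidates):
    """Index of the candidate with minimal SAD w.r.t. rect1, plus all scores."""
    scores = [sad(rect1, c) for c in candidates]
    return int(np.argmin(scores)), scores
```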
In this embodiment, as shown in fig. 4, during the homography solution in step 5), the homography relationship between two frames of images is solved based on the feature point pairs successfully matched in step 4); for points on the same plane in the two frames of images, pixel coordinates of the points meet the homography relationship, and a characteristic point pair meeting the homography is screened out through homography solution to provide a basis for subsequent calculation;
if the number of the feature point pairs successfully matched in the step 4) is less than 4, the homography solution fails, and the calculation is finished;
and if the successfully matched characteristic point pairs in the step 4) are not less than 4 pairs, estimating a homography matrix by adopting a least square method, and reserving all the characteristic point pairs meeting the homography relation.
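The least-squares homography estimate with the 4-pair minimum can be sketched with the standard DLT formulation (a generic reconstruction, not necessarily the patent's exact solver):

```python
import numpy as np

# DLT homography fit: with >= 4 matched pairs, stack the linear system
# A h = 0 and take the SVD null vector as the 3x3 homography (up to scale).
def fit_homography(pts1, pts2):
    if len(pts1) < 4:
        return None                      # fewer than 4 pairs: solution fails
    rows = []
    for (x, y), (u, v) in zip(pts1, pts2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the overall scale

# Check on a known homography (pure translation by (2, 3)).
pts1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 2.0)]
pts2 = [(x + 2.0, y + 3.0) for x, y in pts1]
H = fit_homography(pts1, pts2)
```

Point pairs whose reprojection through H stays within a small tolerance would then be retained as the pairs satisfying the homography relation.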
In this embodiment, when solving the camera extrinsic parameters in step 6), three orthogonal unit vectors are first constructed from the relative translation vector of step 3) to serve as the three columns of the camera extrinsic rotation matrix, with one parameter left undetermined. Then, for every possible selection of two successfully matched feature point pairs from step 5), the undetermined parameter and the corresponding extrinsic rotation matrix are solved, and the corresponding Euler angles are obtained through the conversion between the extrinsic rotation matrix and the Euler angles. If the Euler angles obtained from a combination differ from the initial values by more than 5 degrees, they are considered outliers and discarded; if they differ by no more than 5 degrees, they are retained. The mean of all retained Euler angles is computed, and the corresponding extrinsic rotation matrix is calculated from this mean; these are the camera Euler angles and extrinsic rotation matrix produced by this round of calibration.
Specifically, as shown in fig. 5, the extrinsic calculation in step 6) solves the camera rotation matrix by a parameterization method: the rotation matrix is reduced to a single parameter and solved through a linear equation. The specific steps are as follows:
1) Construct a set of orthonormal vectors based on the relative translation vector t_c solved in step 3):

r_1 = t_c / ‖t_c‖

[n_1 expression — presented as an equation image in the original]

n_2 = r_1 × n_1
Based on these orthonormal vectors, a single-parameter camera rotation matrix can be constructed:

R_e(α) = [r_1  −r_1 × r_3(α)  r_3(α)]

r_3(α) = cos(α)·n_1 + sin(α)·n_2

where α is the parameter to be determined, r_1 is the first column of the rotation matrix, r_3 is the third column of the rotation matrix, and R_e is the rotation matrix.
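The single-parameter rotation family can be sketched numerically. Because the exact n_1 construction is given only as an image in the original, any unit vector orthogonal to r_1 is used here as a stand-in; the sketch verifies that R_e(α) is a proper rotation for an arbitrary α:

```python
import numpy as np

# r1 is the unit translation direction, (n1, n2) span its orthogonal
# complement, and R(alpha) = [r1, -r1 x r3(alpha), r3(alpha)] with
# r3(alpha) = cos(alpha) n1 + sin(alpha) n2.
def rotation_family(tc):
    r1 = tc / np.linalg.norm(tc)
    # Any unit vector orthogonal to r1 works as n1 for this illustration.
    helper = np.array([0.0, 0.0, 1.0]) if abs(r1[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    n1 = np.cross(r1, helper)
    n1 /= np.linalg.norm(n1)
    n2 = np.cross(r1, n1)
    def R(alpha):
        r3 = np.cos(alpha) * n1 + np.sin(alpha) * n2
        return np.column_stack([r1, -np.cross(r1, r3), r3])
    return R

R = rotation_family(np.array([1.0, 2.0, 2.0]))(0.7)
```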
2) Each extrinsic matrix solution requires two feature point pairs, so all the feature point pairs retained in step 5) are combined to enumerate every possible selection. If step 5) yields N feature point pairs in total, the number of possible selections is

C(N, 2) = N(N − 1) / 2.
3) Taking the ith combination as an example, solve the camera-coordinate-system coordinates of the feature points. If the homogeneous pixel coordinates of the feature points are m_p11, m_p12, m_p21, m_p22, their camera-coordinate-system coordinates are obtained through the intrinsic matrix:

m_c11 = K^-1 · m_p11,  m_c12 = K^-1 · m_p12,  m_c21 = K^-1 · m_p21,  m_c22 = K^-1 · m_p22

where K is the camera intrinsic matrix; m_c11, m_c12, m_c21, m_c22 are the camera-coordinate-system coordinates of the feature points; and [u_c11, u_c12, u_c21, u_c22], [v_c11, v_c12, v_c21, v_c22] are respectively the x-axis and y-axis components of these camera-coordinate-system coordinates.
4) Solve the rotation matrix parameter based on the camera-coordinate-system coordinates of the feature points. The calculation method is:

[linear equations for the parameter α — presented as equation images in the original]
5) The above operation is performed for all possible combinations to obtain all the R_e(α) rotation matrices, and the camera Euler angles are then recovered from the relation between the Euler angles and the rotation matrix. The specific method is as follows:
If the obtained extrinsic rotation matrix is

R_e = [R_e11 R_e12 R_e13; R_e21 R_e22 R_e23; R_e31 R_e32 R_e33]

then the Euler angles are obtained as:

[ω_e, θ_e, τ_e expressions — presented as equation images in the original]
in the formula: r e11 To R e3 Is a rotation matrix R e Nine components of (a), omega eee Three components of the euler angle, respectively;
If the difference between a calculated Euler angle and its initial value is more than 5 degrees, the Euler angle is discarded; if the difference is less than 5 degrees, it is kept. All Euler angles meeting the requirement are stored and averaged; the mean is the dynamic calibration value of the camera Euler angles obtained by this calculation.
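The outlier gate and averaging can be sketched as below; the atan2 relations assume a Z-Y-X Euler convention, since the patent's exact conversion formulas appear only as images:

```python
import numpy as np

# Recover Euler angles from each candidate rotation matrix, drop any
# solution more than 5 degrees from the initial guess, average the rest.
def euler_zyx_deg(R):
    """Euler angles (deg) from a rotation matrix, assuming Z-Y-X order."""
    return np.degrees([np.arctan2(R[2, 1], R[2, 2]),
                       np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])),
                       np.arctan2(R[1, 0], R[0, 0])])

def filter_and_average(rotations, initial_deg, tol_deg=5.0):
    kept = [euler_zyx_deg(R) for R in rotations]
    kept = [e for e in kept if np.all(np.abs(e - np.asarray(initial_deg)) <= tol_deg)]
    return None if not kept else np.mean(kept, axis=0)

# Synthetic example: identity (correct) plus a 30-degree yaw outlier.
a = np.radians(30.0)
outlier = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a), np.cos(a), 0.0],
                    [0.0, 0.0, 1.0]])
avg = filter_and_average([np.eye(3), outlier], [0.0, 0.0, 0.0])
```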
In this embodiment, in step 7), camera extrinsic calibration is a continuous process: a steady-state average is obtained through repeated calculation over a period of time. The result output module accumulates the single-shot Euler angles obtained in step 6); when the accumulated Euler angle data reach a set amount (500 groups in this embodiment), the mean of all Euler angles is computed, the corresponding extrinsic rotation matrix is calculated from this mean, and both are output as the final calibration result.
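The accumulate-then-average behaviour of the result output module can be sketched as a small accumulator class (class and method names are illustrative; the 500-group default matches the embodiment):

```python
# Collect single-shot Euler-angle results; only once a fixed count is
# reached is their component-wise mean emitted as the calibration output.
class ResultAccumulator:
    def __init__(self, target=500):
        self.target, self.samples = target, []

    def add(self, euler):
        """Store one result; return the averaged calibration when full."""
        self.samples.append(euler)
        if len(self.samples) < self.target:
            return None
        mean = [sum(c) / len(self.samples) for c in zip(*self.samples)]
        self.samples.clear()             # start the next accumulation window
        return mean
```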
Compared with the prior art, the monocular camera automatic calibration method provided by the invention realizes on-line dynamic calibration of the camera extrinsic rotation matrix and Euler angles based only on the camera's own images and the vehicle body motion information. Compared with traditional calibration methods, it requires neither a high-precision calibration object nor manual participation, and can calibrate and compensate the camera rotation matrix while the vehicle is in motion. Compared with existing automatic calibration methods, it does not rely on road lane lines and can calibrate cameras at any installation angle, not only forward-looking and backward-looking cameras. The method has low hardware requirements and no strict implementation conditions; it is simple, efficient, and accurate, and suitable for various vehicle types. The invention realizes automatic calibration of the camera rotation matrix and Euler angles while the vehicle drives in a straight line, relying only on consecutive camera frames and vehicle motion information, without a calibration board, lane lines, other external information, or manual participation. It can correct small camera deflections caused by vehicle vibration or installation deviation, and is flexible and easy to implement.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the technical solutions, and those skilled in the art should understand that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all that should be covered by the claims of the present invention.

Claims (17)

1. A vehicle-mounted monocular camera external parameter automatic calibration system based on key points, characterized by comprising:
the characteristic point extraction and matching module is used for extracting and matching characteristic points in the continuous frame images;
the relative motion resolving module is used for solving the relative motion between the continuous frames by adopting an epipolar geometry method based on the characteristic point pairs;
the homography solving module is used for extracting the characteristic points on the road surface and screening out the road surface characteristic points with high matching according to the homography relation;
the external parameter solving module is used for solving the external parameters of the camera based on the continuous frame relative motion and the road surface feature point pairs;
and the result output module is used for accumulating the results obtained by the external parameter solving module, and calculating the average value after a fixed amount is reached and finally outputting the average value.
2. A method for automatically calibrating vehicle-mounted monocular camera external parameters based on key points is characterized in that the system for automatically calibrating the vehicle-mounted monocular camera external parameters based on key points as claimed in claim 1 is adopted, and the method comprises the following steps:
step 1) setting an initial value;
step 2) extracting and matching feature points, wherein the feature point extracting and matching module extracts and matches feature points in continuous frame images;
step 3) solving the relative motion, wherein the relative motion solving module adopts an epipolar geometry method to solve the relative motion between continuous frames based on the characteristic point pairs;
step 4), extracting and matching the road surface feature points, wherein the homography solving module extracts and matches the road surface feature points;
step 5), homography solving, wherein the homography solving module is used for solving homography;
step 6), solving external parameters, wherein the external parameter solving module is used for solving the external parameters of the camera based on the continuous frame relative motion and road surface feature point pairs;
and 7) accumulating and outputting, wherein the result output module accumulates the result obtained by the external parameter solving module, and calculates and finally outputs the average value after the result reaches a fixed amount.
3. The method for automatically calibrating the vehicle-mounted monocular camera external parameter based on the key point according to claim 2, wherein the setting of the initial value in the step 1) comprises setting of a world coordinate of a camera mounting position and setting of an initial value of an Euler angle of the camera;
the setting method of the installation position of the camera comprises the following steps: the method comprises the following steps of establishing a world coordinate system by taking the ground below the center of a rear axle of a vehicle as the origin of the world coordinate system, taking the front direction of the vehicle as the positive direction of an X axis of the world coordinate system, taking the direction vertical to the upward direction of the ground as the positive direction of a Z axis of the world coordinate system, and determining the world coordinate of the installation position of a camera according to the established world coordinate system;
the setting method of the initial value of the Euler angle of the camera comprises the following steps: the known state value of the last camera euler angle is used as the initial value of the camera euler angle.
4. The method for automatically calibrating the vehicle-mounted monocular camera external parameters based on the key points as claimed in claim 3, wherein the initial value setting in step 1) further comprises continuous frame images Im1 and Im2, vehicle body speed and vehicle body angular speed of the camera, the continuous frame images are all provided with time stamps, the time difference between the two frame camera images is obtained by calculating the time stamps of the two frame continuous frame images, and the relative motion of the two frame camera images is determined by solving epipolar constraint between the two frame camera images;
and when the angular speed of the vehicle body is more than 1 degree/second, the vehicle is not considered to move linearly, and the calculation is quit.
5. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points as claimed in claim 4, wherein in the step 2), when the feature point extraction matching module extracts the feature points in the continuous frame images and matches the feature points, the feature point extraction matching module firstly detects the corner points of the two input frame images, then extracts the corner points and the feature vectors of the two input frame images, then matches the feature points based on the hamming distance between the feature vectors, and finally retains all the successfully matched feature point pairs.
6. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points according to claim 5, wherein in the step 2), an ORB corner point detection method is adopted to extract the feature points in the continuous frame images, and the number of the extracted feature points in each frame image is less than or equal to 3000;
and matching the feature points based on the Hamming distance between the feature vectors, and selecting the first 30 percent with the highest matching degree as a final matching result.
7. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points according to claim 6, wherein when solving the relative motion in the step 3), firstly, the feature point pair with high matching degree in the step 2) is screened out, then, the intrinsic matrix between the two frames of images is solved based on the feature point pair, and finally, the relative rotation matrix and the relative displacement vector between the two frames of images are solved based on the intrinsic matrix.
8. The method according to claim 7, characterized in that in step 3), if fewer than 8 feature point pairs are extracted, the calculation is exited; if 8 or more feature point pairs are extracted, the essential matrix between the two frames of images is solved by the five-point method, and after the calculation of the essential matrix is completed, the relative rotation matrix and relative translation vector between the two frames of images are calculated based on the essential matrix.
9. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points as claimed in claim 8, wherein in step 3), it is further determined whether the estimation of the relative motion state of the camera is correct, if so, step 4) is executed, and if not, the calculation is exited;
the method for judging whether the estimation of the relative motion state of the camera is correct comprises the following steps:
1) calculate the initial rotation matrix R_e0 from the initial Euler angles [ω_0, θ_0, τ_0]:

[R_e0 expression — presented as an equation image in the original]

2) calculate the relative translation vector t_c0 based on the initial values; for a vehicle in a translation state, the normalized translation vector between two consecutive frames of images is t_car = [−1, 0, 0], and according to the rotation-translation transformation between the coordinate systems, the camera's relative translation vector t_c0 is:

t_c0 = R_e0 * t_car

3) compare the distance between the estimated relative translation vector t_c and the initial-value relative translation vector t_c0 with a set threshold; if the distance is greater than the threshold, exit the calculation; if the distance is less than or equal to the threshold, execute step 4).
10. The method for automatically calibrating the vehicle-mounted monocular camera external parameters based on the key points as claimed in claim 9, wherein in the step 4), when the road surface feature points are extracted and matched, the number of non-road surface feature points is reduced by adopting a method for limiting a feature point extraction area, and the feature points are matched based on the vehicle body motion information and the local image information.
11. The method for automatically calibrating the external reference of the vehicle-mounted monocular camera based on the key points as claimed in claim 10, characterized in that, with the camera frame width set as W and the height as H, the feature point extraction region of the front-view and rear-view cameras is

[region expression — presented as an equation image in the original]

and the feature point extraction region of the side-view camera is

[region expression — presented as an equation image in the original]
12. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points as claimed in claim 11, characterized in that the method for matching the feature points based on the vehicle body motion information and the local image information in step 4) comprises: establishing a camera model based on the initial input values; converting the pixel coordinates of a road surface feature point in the first frame image into world coordinates; predicting the position of the same feature point in the second frame image based on the vehicle body motion information; meanwhile, cropping a square local image of the first frame, centered on the feature point with a first set length as side length, as the template to be matched; after the position prediction is completed, converting the world coordinates of the predicted position into pixel coordinates and constructing a search area centered on them with a second set length as radius; searching for second-frame feature points within this area; if corresponding feature points exist in the area, cropping square local images of the second frame, centered on each such feature point with the first set length as side length, matching them against the first-frame template, and selecting the feature point with the highest matching degree as the final matching point; and if no corresponding feature point exists in the area, the matching of this feature point fails and the next matching is started.
13. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points according to claim 12, wherein when the homography in step 5) is solved, the homography relation between the two frames of images is solved based on the feature point pairs successfully matched in step 4); for points on the same plane in the two frames of images, pixel coordinates of the points meet the homography relationship, and feature point pairs meeting the homography are screened out through homography solution;
if the number of the feature point pairs successfully matched in the step 4) is less than 4, the homography solution fails, and the calculation is finished;
and if the number of the successfully matched characteristic point pairs in the step 4) is not less than 4, estimating a homography matrix by adopting a least square method, and reserving all the characteristic point pairs meeting the homography relation.
14. The method for automatically calibrating the vehicle-mounted monocular camera extrinsic parameter based on the key points according to claim 13, characterized in that, when solving the camera extrinsic parameters in step 6), three orthogonal unit vectors are constructed based on the relative translation vector of step 3) as the three columns of the camera extrinsic rotation matrix, with one parameter left undetermined; then, for every possible selection of two successfully matched feature point pairs from step 5), the undetermined parameter and the corresponding extrinsic rotation matrix are solved, and the corresponding Euler angles are obtained through the conversion between the extrinsic rotation matrix and the Euler angles; if the Euler angles obtained differ from the initial values by more than 5 degrees, they are regarded as outliers and discarded; if they differ by no more than 5 degrees, all qualifying Euler angles are retained and averaged, and the extrinsic rotation matrix corresponding to the mean is calculated; the mean Euler angles and this rotation matrix are the camera Euler angles and extrinsic rotation matrix obtained by this calibration calculation.
15. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points as claimed in claim 14, wherein the method for calculating the external parameter rotation matrix of the camera in the step 6) comprises the following steps:
1) based on the relative translation vector t_c obtained by solving in step 3), constructing a set of orthonormal unit vectors r_1, n_1, n_2, the expressions being:

r_1 = t_c / ||t_c||

n_1 = [-r_21, r_11, 0]^T / sqrt(r_11^2 + r_21^2)

n_2 = r_1 × n_1

in the formula: r_11, r_21, r_31 are the three components of r_1;
based on the orthonormal vectors, a single-parameter camera rotation matrix can be constructed:

R_e(α) = [r_1, -r_1 × r_3(α), r_3(α)]

r_3(α) = cos(α) n_1 + sin(α) n_2

in the formula: α is the parameter to be determined, r_1 is the first column of the rotation matrix, r_3(α) is the third column of the rotation matrix, and R_e is the rotation matrix;
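This single-parameter construction can be written out directly. The sketch below (Python/NumPy, illustrative names) assumes r_1 is the normalized translation direction and builds n_1 orthogonal to r_1 from its first two components; the exact n_1 formula appears only as an image in the original publication, so that choice is an assumption:

```python
import numpy as np

def single_param_rotation(t_c):
    """Build r1, n1, n2 from the relative translation t_c and return a
    function alpha -> R_e(alpha) with
    R_e(a) = [r1, -r1 x r3(a), r3(a)],  r3(a) = cos(a)*n1 + sin(a)*n2."""
    t_c = np.asarray(t_c, dtype=float)
    r1 = t_c / np.linalg.norm(t_c)
    # a unit vector orthogonal to r1 (assumes t_c is not parallel to the
    # camera z-axis, which holds for a forward-moving vehicle camera)
    n1 = np.array([-r1[1], r1[0], 0.0]) / np.hypot(r1[0], r1[1])
    n2 = np.cross(r1, n1)

    def R_e(alpha):
        r3 = np.cos(alpha) * n1 + np.sin(alpha) * n2
        return np.column_stack([r1, -np.cross(r1, r3), r3])

    return R_e
```

For any alpha the result is a proper rotation: the three columns are orthonormal and the determinant is +1, with the first column fixed to the translation direction.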
2) arranging and combining the feature point pairs extracted in step 5) to obtain all possible selections of two pairs; if there are N feature point pairs in step 5) in total, the number of selections is

C(N, 2) = N(N-1)/2;
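Enumerating every way of picking two matched pairs out of N is plain combinatorics, giving N(N-1)/2 selections; a short illustration:

```python
from itertools import combinations

def two_pair_selections(point_pairs):
    """All selections of two feature point pairs from the matched set;
    for N pairs there are N*(N-1)//2 such selections."""
    return list(combinations(point_pairs, 2))
```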
3) taking the i-th combination as an example, solving the feature-point coordinates in the camera coordinate system; if the homogeneous pixel coordinates of the feature points are m_p11, m_p12, m_p21, m_p22, the corresponding camera-coordinate-system coordinates are:

m_c11 = K^(-1) m_p11

m_c12 = K^(-1) m_p12

m_c21 = K^(-1) m_p21

m_c22 = K^(-1) m_p22

in the formula: K is the camera intrinsic parameter matrix, m_c11, m_c12, m_c21, m_c22 are the feature-point coordinates in the camera coordinate system, and [u_c11, u_c12, u_c21, u_c22], [v_c11, v_c12, v_c21, v_c22] are respectively the x-axis and y-axis components of those coordinates.
4) solving the rotation matrix parameter based on the camera-coordinate-system coordinates of the feature points; the calculation formulas are reproduced only as images (FDA0003650576310000054 to FDA0003650576310000056) in the original publication.
16. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points as claimed in claim 15, wherein in step 6), the method for calculating the dynamic calibration value of the camera Euler angles is as follows:
if the solved camera extrinsic rotation matrix is:

        [ R_e11  R_e12  R_e13 ]
R_e  =  [ R_e21  R_e22  R_e23 ]
        [ R_e31  R_e32  R_e33 ]

then the Euler angles obtained are:

ω_e = arctan2(R_e32, R_e33)

θ_e = arctan2(-R_e31, sqrt(R_e32^2 + R_e33^2))

ψ_e = arctan2(R_e21, R_e11)

in the formula: R_e11 to R_e33 are the nine components of the rotation matrix R_e, and ω_e, θ_e, ψ_e are the three components of the Euler angle;
and if the difference between a calculated Euler angle and the initial value is greater than 5 degrees, that Euler angle is discarded; all Euler angles whose difference from the initial value is less than or equal to 5 degrees are retained, and their mean value is the dynamic calibration value of the camera Euler angles obtained by this computation.
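The rotation-matrix-to-Euler-angle conversion and the 5-degree outlier gate can be sketched as below (Python/NumPy). The Z-Y-X (yaw-pitch-roll) angle convention is an assumption here, since the patent's conversion formulas are reproduced only as images:

```python
import numpy as np

def rotation_to_euler(R_e):
    """Extract Euler angles (roll, pitch, yaw) from a rotation matrix under
    the common Z-Y-X decomposition R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    roll = np.arctan2(R_e[2, 1], R_e[2, 2])
    pitch = np.arctan2(-R_e[2, 0], np.hypot(R_e[2, 1], R_e[2, 2]))
    yaw = np.arctan2(R_e[1, 0], R_e[0, 0])
    return np.array([roll, pitch, yaw])

def filter_and_average(eulers, init, tol_deg=5.0):
    """Keep only Euler triples within tol_deg of the initial value on every
    axis (the 5-degree gate of the claim) and return their mean; returns
    None when every triple is rejected."""
    eulers = np.asarray(eulers, dtype=float)
    init = np.asarray(init, dtype=float)
    keep = np.all(np.abs(np.degrees(eulers - init)) <= tol_deg, axis=1)
    return eulers[keep].mean(axis=0) if keep.any() else None
```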
17. The method for automatically calibrating the external parameters of the vehicle-mounted monocular camera based on the key points as claimed in claim 16, wherein in step 7), the result output module accumulates the single-computation Euler angles obtained in step 6); when the accumulated Euler angle data reach a set amount, the mean value of all the Euler angles is computed, and the corresponding extrinsic rotation matrix is calculated from the mean Euler angles and output as the final calibration result.
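A sketch of this result-output stage (class and function names are illustrative; the Euler-to-rotation conversion assumes the Z-Y-X convention, which the patent does not state in text):

```python
import numpy as np

def euler_to_rotation(e):
    """(roll, pitch, yaw) -> R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    r, p, y = e
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y), np.cos(y), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

class EulerAccumulator:
    """Accumulate per-computation Euler angles; once `target` samples have
    been collected, emit their mean and the corresponding extrinsic
    rotation matrix as the final calibration result."""
    def __init__(self, target=100):
        self.target = target
        self.samples = []

    def add(self, euler):
        self.samples.append(np.asarray(euler, dtype=float))
        if len(self.samples) < self.target:
            return None  # still accumulating
        mean = np.mean(self.samples, axis=0)
        return mean, euler_to_rotation(mean)
```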
CN202210550682.8A 2022-05-18 2022-05-18 Automatic calibration system and calibration method for external parameters of vehicle-mounted monocular camera based on key points Pending CN114842093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210550682.8A CN114842093A (en) 2022-05-18 2022-05-18 Automatic calibration system and calibration method for external parameters of vehicle-mounted monocular camera based on key points


Publications (1)

Publication Number Publication Date
CN114842093A true CN114842093A (en) 2022-08-02

Family

ID=82569068


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036505A (en) * 2023-08-23 2023-11-10 长和有盈电子科技(深圳)有限公司 On-line calibration method and system for vehicle-mounted camera
CN117036505B (en) * 2023-08-23 2024-03-29 长和有盈电子科技(深圳)有限公司 On-line calibration method and system for vehicle-mounted camera

Similar Documents

Publication Publication Date Title
CN110910453B (en) Vehicle pose estimation method and system based on non-overlapping view field multi-camera system
CN110567469B (en) Visual positioning method and device, electronic equipment and system
JP4956452B2 (en) Vehicle environment recognition device
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
US11398051B2 (en) Vehicle camera calibration apparatus and method
US8498479B2 (en) Image processing device for dividing an image into a plurality of regions
KR20170077223A (en) Online calibration of a motor vehicle camera system
US20150279017A1 (en) Stereo image processing device for vehicle
JP2001082955A (en) Device for adjusting dislocation of stereoscopic image
US20100013908A1 (en) Asynchronous photography automobile-detecting apparatus
CN109801220B (en) Method for solving mapping parameters in vehicle-mounted video splicing on line
DE102016104732A1 (en) Method for motion estimation between two images of an environmental region of a motor vehicle, computing device, driver assistance system and motor vehicle
JP3765862B2 (en) Vehicle environment recognition device
EP3293700A1 (en) 3d reconstruction for vehicle
CN112184792A (en) Road slope calculation method and device based on vision
CN112819711A (en) Monocular vision-based vehicle reverse positioning method utilizing road lane line
CN114842093A (en) Automatic calibration system and calibration method for external parameters of vehicle-mounted monocular camera based on key points
CN114550042A (en) Road vanishing point extraction method, vehicle-mounted sensor calibration method and device
EP3410705B1 (en) 3d vision system for a motor vehicle and method of controlling a 3d vision system
WO2023093515A1 (en) Positioning system and positioning method based on sector depth camera
JP6768554B2 (en) Calibration device
CN112116644B (en) Obstacle detection method and device based on vision and obstacle distance calculation method and device
JPH07152914A (en) Distance detecting device for vehicle
CN112132902B (en) Vehicle-mounted camera external parameter adjusting method and device, electronic equipment and medium
CN111402593A (en) Video traffic parameter acquisition method based on polynomial fitting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination