WO2023103679A1 - Method and device for fast automatic calibration of vehicle-mounted surround-view cameras - Google Patents

Method and device for fast automatic calibration of vehicle-mounted surround-view cameras

Info

Publication number: WO2023103679A1
Authority: WIPO (PCT)
Prior art keywords: cameras, calibration, calibration pattern, points, image
Application number: PCT/CN2022/130545
Other languages: English (en), French (fr)
Inventors: 顾乐妍, 张笑东, 王云鹏
Original Assignee: 纵目科技(上海)股份有限公司
Application filed by 纵目科技(上海)股份有限公司
Publication of WO2023103679A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • This application generally relates to perception technology in intelligent driving, and in particular relates to the calibration of vehicle-mounted surround-view cameras.
  • Most ADAS (Advanced Driver Assistance Systems) functions are developed based on visual image processing, and vehicle-mounted cameras have thus become the hardware basis for numerous perception functions such as warning and recognition.
  • Car cameras can be divided into many types according to function, characteristics, and installation location, such as surround-view cameras, front-view binocular or multi-view cameras, side-view cameras, and so on; a car usually carries several or even a dozen or more cameras. To make reasonable use of the cameras' perception information, camera calibration is an indispensable step, and calibration accuracy directly affects the accuracy of visual perception.
  • An exemplary aspect of the present disclosure includes a method for calibrating vehicle-mounted surround-view cameras, comprising: obtaining one or more frames captured by a plurality of cameras installed at a plurality of locations on a vehicle, wherein each frame includes a plurality of images captured simultaneously by the plurality of cameras; for each frame, detecting image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern comprising one or more feature points; for each frame, determining, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship; using the respective image coordinates of each group of matching points, establishing a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern, wherein the number of the plurality of constraint equations is greater than or equal to the total number of extrinsic parameters to be solved for the plurality of cameras; and simultaneously performing nonlinear optimization on the plurality of constraint equations to simultaneously determine the respective extrinsic parameters of the plurality of cameras.
  • The calibration pattern includes an existing, non-preset pattern on the ground or a preset calibration pattern placed on the ground, and may consist of one or more identical or different calibration patterns.
  • the plurality of cameras includes any plurality of cameras capable of forming a surround view.
  • the one or more frames are taken while the vehicle is in motion or in a stationary state.
  • The relative physical relationship of the feature points of the calibration pattern includes one or more of the following, or any combination thereof: at least two image points, detected in the same frame and corresponding to the same feature point of the calibration pattern, map to the same physical point; a known distance, length, or area value of the feature points of the calibration pattern; and a relative physical relationship of distances, coordinates, or lengths among the feature points of the calibration pattern.
  • the feature points include corner points, and may optionally further include one or more of the following or any combination thereof: straight lines, parallel lines, and vertical lines.
  • Detecting the image points related to the calibration pattern in the plurality of images captured by the plurality of cameras further includes recording the detected image points and their corresponding frame numbers, camera numbers, and image coordinates.
  • Determining image points as a group of matching points further includes recording the frame number, camera number, and image coordinates of each image point in the determined group of matching points.
  • The cameras include cameras with distortion, and the calibration method further includes, before detecting the image points, using the initial extrinsic parameters of the cameras to convert the images captured by the cameras into a surround-view stitched bird's-eye view; and detecting the image points on the surround-view stitched bird's-eye view.
  • Using the initial extrinsic parameters of the cameras to convert the captured images into a surround-view stitched bird's-eye view includes: establishing a world coordinate system with a specific point on the calibration pattern as the origin; determining, based on prior information of the calibration pattern, the coordinates that the feature points in the calibration pattern should have in the world coordinate system; and calculating the initial value of the extrinsic parameter matrix based on the detected image points corresponding to the feature points.
  • Aspects of the present disclosure also include corresponding apparatuses, devices, and computer-readable media.
  • FIG. 1 shows a schematic diagram of a calibration data collection scheme for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.
  • FIG. 2 shows a diagram of a calibration data processing scheme for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.
  • FIG. 3 shows a diagram of an example of corner point recognition according to an aspect of the present disclosure.
  • FIG. 4 shows a diagram of a calibration pattern according to some embodiments of the present disclosure.
  • FIG. 5 shows a schematic diagram of a vehicle during calibration according to an aspect of the present disclosure.
  • FIG. 6 shows a flow chart of a method for fast automatic calibration of vehicle-mounted surround-view cameras according to an aspect of the present disclosure.
  • FIG. 7 shows a fisheye image captured by a fisheye camera according to an exemplary embodiment of the present disclosure.
  • FIG. 8 shows edges and corners detected on an original image captured by a camera according to an exemplary embodiment of the present disclosure.
  • FIG. 9 shows edges and corners detected on a bird's-eye view generated using initialization data according to an exemplary embodiment of the present disclosure.
  • FIG. 10 shows surround-view effect diagrams after calibration is completed using dynamic and static calibration processes, respectively, according to an exemplary embodiment of the present disclosure.
  • FIG. 11 shows a block diagram of a fast automatic calibration device for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.
  • In, for example, a parking assistance system, wide-angle cameras (such as, but not limited to, fisheye cameras) installed on the front, rear, left, and right of the vehicle capture video images around the vehicle, and image fusion and surround-view stitching techniques are used to synthesize a panoramic view and/or bird's-eye view around the vehicle body, which is finally displayed on the screen of the vehicle's center console.
  • With such a system, the driver can intuitively see from inside the car whether there are obstacles around the vehicle, as well as their relative orientation and distance, and can therefore park safely in narrow, congested parking lots, pass through complex sites, and effectively prevent accidents such as scratches, collisions, and falls.
  • The system can also provide support for algorithms such as recognition, detection, and tracking in an automatic driving system.
  • According to one scheme, calibration images with checkerboard grids pre-placed in at least the four directions of front, rear, left, and right of the vehicle can first be collected, and the front, rear, left, and right cameras calibrated separately using the calibration images, with each camera's parameters computed and saved.
  • The image distortion correction parameters are used to undistort the calibration images and thereby eliminate camera imaging distortion.
  • A projective transformation can then be applied to the undistorted calibration images, and the projective transformation parameters computed and saved.
  • In actual use, the video images of the front, rear, left, and right cameras can be stitched together to generate a virtual bird's-eye view.
  • Since the accuracy of the cameras' intrinsic and extrinsic parameters greatly affects the image projection result, the camera calibration scheme directly affects the effectiveness and safety of such systems.
  • Camera calibration mainly includes intrinsic calibration and extrinsic calibration.
  • The intrinsic parameters of a camera include eccentricity, distortion, and so on, and do not change after the camera is manufactured. Intrinsic calibration can therefore be completed when the camera leaves the factory, and no re-calibration of the intrinsics is needed however the camera is subsequently used.
  • Camera extrinsic parameters include rotation and translation, that is, the transformation between the camera coordinate system and the vehicle body coordinate system.
  • Rotation can be expressed in many ways, such as a rotation matrix, Euler angles, or a rotation vector, and these representations can be converted into one another.
  • Extrinsic calibration therefore only needs to determine the value of one such representation. After the camera is installed on the vehicle, the extrinsics must be re-calibrated if the camera's position changes or the camera is replaced. Moreover, extrinsic calibration must be carried out on the whole vehicle, so its difficulty and demand far exceed those of intrinsic calibration.
  • Given a feature point's physical coordinates (X, Y, Z) and its image coordinates (u, v) on the normalized plane, the camera imaging equation can be written and solved by least squares for the extrinsic matrix. An example camera imaging equation is as follows (1):

$$ s\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = A\,[R \mid t]\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix} \tag{1} $$

  • Here s is a scale factor and A is the intrinsic matrix. The extrinsic matrix (R | t) includes the rotation matrix R and the translation matrix t, which together describe how to transform a point from the world coordinate system to the camera coordinate system: the rotation matrix describes the orientation of the world coordinate axes relative to the camera coordinate axes, and the translation matrix describes the position of the spatial origin in the camera coordinate system.
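  • As a concrete illustration of equation (1), the following minimal sketch (with made-up intrinsics and extrinsics; none of the values come from this patent) projects a world point into pixel coordinates:

```python
import numpy as np

# Hypothetical intrinsic matrix A and extrinsics (R | t); all values are placeholders.
A = np.array([[400.0,   0.0, 640.0],
              [  0.0, 400.0, 480.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # rotation from world to camera coordinates
t = np.array([0.0, 0.0, 2.0])    # world origin expressed in the camera frame

X = np.array([0.5, 0.3, 3.0])    # physical coordinates (X, Y, Z) of a feature point
p = A @ (R @ X + t)              # right-hand side of equation (1): s * [u, v, 1]
u, v = p[0] / p[2], p[1] / p[2]  # divide out the scale factor s to get pixels
```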
  • Because every camera on every vehicle must be calibrated, and re-calibration is required whenever a camera is rotated or moved, the calibration workload is considerable; the calibration success rate is therefore an evaluation item that the algorithm must consider.
  • Calibration also demands high precision and is greatly affected by factors such as the environment, so the calibration result largely depends on the calibration site.
  • One aspect of the present disclosure relates to a solution for fast, automatic calibration of vehicle-mounted surround-view cameras, in which the surround-view cameras are not limited in type or number.
  • The following solution of the present disclosure aims to address the problems of traditional calibration methods: significant sensitivity to the environment, a cumbersome calibration process, and unsatisfactory calibration results.
  • The automatic calibration technology implemented by the present disclosure, described below, is little affected by environmental factors, calibrates well, and is fast and effective. Through this technology, calibration accuracy and quality can be improved while the calibration process is simplified and efficiency increased.
  • FIG. 1 shows a diagram of a calibration data collection scheme 100 for a vehicle-mounted surround view camera according to an aspect of the present disclosure.
  • two calibration patterns 104-a and 104-b may be provided on the ground.
  • Preferably, the distance between the two calibration patterns 104-a and 104-b slightly exceeds the width of the vehicle body, so that complete images of the calibration patterns can be collected as the vehicle 102 drives between them.
  • the cameras on the vehicle 102 in directions such as front, rear, left, and right continuously collect multiple frames of images.
  • Although the figure shows the two calibration patterns as the same pattern, other embodiments of the present disclosure may use different calibration patterns.
  • the present disclosure is not limited thereto.
  • The purpose of setting two calibration patterns is to enable the vehicle to calibrate all the vehicle-mounted surround-view cameras in a single pass.
  • the present disclosure may also include the provision of more or fewer calibration patterns.
  • For example, only one calibration pattern (for example, 104-a or 104-b) may be set, and the left and right sides calibrated by driving the vehicle past the calibration pattern twice, once in each direction.
  • Alternatively, more than two calibration patterns may be provided, for example one each at the front, rear, left, and right, or two each on the left and right, and so on, for more precise calibration.
  • Because the scheme of the present disclosure uses feature points on the ground and completes calibration through the relative positional relationships between points, without measuring the feature points' coordinates, the calibration patterns actually need not be set according to conditions such as relative position, distance, or angle.
  • Although the calibration patterns are shown in the example of FIG. 1 as positioned parallel to the lateral and longitudinal axes of the vehicle, the present disclosure is not so limited.
  • The present disclosure does not need to restrict the parking position of the vehicle during calibration; it is only necessary to ensure that the calibration pattern has appeared in the cameras' common field of view during the calibration process.
  • The common field of view refers to the area that two or more cameras can capture at the same moment.
  • the calibration process shown in FIG. 1 may be dynamic, that is, the vehicle is in a driving state during the calibration process.
  • the calibration process can also be static, that is, the vehicle is stationary.
  • For example, at least one calibration pattern can be placed at multiple different positions around the stationary vehicle, so that each camera on the vehicle shares a common field of view of the calibration pattern with at least one other camera and can thus capture images of the calibration pattern in the same frame.
  • If the vehicle is driven, an effective driving pass is one in which the calibration pattern can be detected in the cameras' fields of view; the positions of the feature points on the images in different frames are collected and recorded. This process collects image point coordinates for the calibration pattern at different positions relative to the vehicle, which greatly increases the amount of data used to compute the extrinsics without increasing the range or number of feature points, thereby improving calibration accuracy.
  • After the cameras have collected the frames of the calibration patterns, the collected image data can be processed.
  • FIG. 2 shows a diagram of a calibration data processing solution 200 for a vehicle-mounted surround view camera according to an aspect of the present disclosure.
  • the calibration data processing scheme 200 may include:
  • Depending on whether a dynamic or static calibration process is used, the calibration data collection scheme may collect, through the multiple cameras, one or more frames of images at least partially containing the calibration patterns at one or more (for example, M) moments.
  • The multiple (for example, N) images collected by the multiple (for example, N) cameras at the same moment i constitute the i-th frame corresponding to that moment.
  • The multiple images at the same moment may not all include images of the same calibration pattern, but at least some of the image frames (e.g., from two or more of the cameras) may include images of the same calibration pattern.
  • Image point detection for the calibration pattern can be performed on the camera 1 image, camera 2 image, ..., camera N image of a frame, and the image coordinates and camera numbers of the image points detected in each camera image of that frame recorded.
  • In this way, the image points of the calibration pattern in each camera image of each frame can be detected, and the image coordinates, camera number, and frame number of each corresponding image point recorded.
  • the detection method can be flexibly adjusted according to different calibration patterns.
  • Image point detection may include using an edge detection algorithm (including but not limited to findContours in OpenCV, the Hough transform, or operators such as the Roberts, Prewitt, Sobel, and Laplacian operators) to detect edges in the captured image, and using a corner detection algorithm (including but not limited to taking the intersections of the detected edges as image corners, or using the Harris, KLT, SIFT, SUSAN, Trajkovic, or Moravec operators) to detect corner points as the image points.
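  • As a minimal sketch of this step (assuming OpenCV, a dark square calibration block on lighter ground, and placeholder thresholds; this is an illustration, not the patent's implementation):

```python
import cv2
import numpy as np

def detect_square_corners(gray):
    """Detect dark quadrilateral blocks and return their corner image points."""
    # Binarize so the dark calibration blocks become foreground.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    corners = []
    for contour in contours:
        # Approximate each contour with a polygon and keep quadrilaterals only.
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 500:
            # Refine the four corner locations to sub-pixel accuracy.
            pts = approx.reshape(-1, 1, 2).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
            cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), criteria)
            corners.append(pts.reshape(-1, 2))
    return corners
```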
  • Table 1 shows the detected image points recorded in sequence (for example, image points 1...L) and their corresponding frame numbers, camera numbers, and image coordinates.
  • Which feature point in the calibration pattern a detected image point corresponds to can be identified or determined later, or at the same time as detection.
  • For a camera with severe distortion, the initial extrinsic parameters may also be used to convert the fisheye image into a bird's-eye view, and the image points then detected on that view.
  • For example, a world coordinate system can be established based on specific points in the fisheye image, and, based on the prior information of the calibration pattern, the coordinates that the main feature points of the calibration pattern should have in the world coordinate system determined.
  • An initial extrinsic parameter matrix is then calculated based on the detected image points; a specific embodiment is described in detail below.
  • Converting the fisheye image into a more regular bird's-eye view before feature point detection can improve both the detection rate and the detection accuracy.
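  • A minimal sketch of such a conversion, assuming the image has already been undistorted with the camera intrinsics and that an initial ground-plane homography is available (the matrix values and file name below are placeholders):

```python
import cv2
import numpy as np

# Hypothetical initial extrinsics expressed as a homography that maps
# undistorted image coordinates to ground-plane (bird's-eye) pixels.
H_ground = np.array([[0.8, 0.1,  -120.0],
                     [0.0, 1.1,   -40.0],
                     [0.0, 0.001,   1.0]])

undistorted = cv2.imread("front_camera_undistorted.png")
# Warp into an 800x800 bird's-eye view; feature detection then runs on it.
birdseye = cv2.warpPerspective(undistorted, H_ground, (800, 800))
```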
  • the calibration data processing solution 200 may further include:
  • The image points detected in the same frame in step 1) can be sorted, that is, it is confirmed which feature point in the calibration pattern each detected image point corresponds to, and the points are stored in the corresponding order.
  • Feature point matching means that, for feature points with a relative physical relationship that appear in the common area of two or more cameras in the same frame, the image points to which they map in each camera image form a set of matching points, and each set of matching points is likewise stored.
  • Relative physical relationships may include, but are not limited to, belonging to the same feature point and therefore having equal coordinates, belonging to two feature points at a known distance (for example, from side-length information), or equal distances between two pairs of feature points, and so on.
  • Table 2 shows the results of sorting and matching feature points according to an example in which 4 cameras capture M frames in total. For clarity, only one image point (for example, the upper-left corner of the white outer circle) of the calibration pattern 104-a of FIG. 1 is shown, together with the coordinates (u, v) of the matched corresponding image points. Some coordinates in the table are empty (NULL), indicating that in that frame the corresponding camera did not capture the feature point (that is, the image does not contain the corresponding image point).
  • the detected feature points may relate to one or more image points on one or more calibration patterns, as will be described in detail below.
  • The sorting and matching of feature points can adopt various methods and techniques.
  • For example, it can be based on prior knowledge of the calibration patterns, and/or utilize various image processing, feature extraction, and recognition techniques.
  • The sorting and matching of feature points may include putting corner points into one-to-one correspondence according to their relative positions to obtain 4 sets of matching points. If the calibration pattern is irregular or relatively complex, feature point detection and descriptor matching can be used instead, for example ORB feature point detection followed by extraction of feature point descriptors for matching; the resulting matching points are obtained and saved, as in the sketch below.
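  • A minimal sketch of the descriptor-matching alternative, assuming OpenCV's ORB detector (the parameter values are placeholders):

```python
import cv2

def match_feature_points(img_a, img_b, max_matches=50):
    """Match feature points between two camera images using ORB descriptors."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # Hamming distance is the appropriate metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # Each matched pair of image points is one set of matching points.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:max_matches]]
```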
  • According to some embodiments, auxiliary information (including but not limited to one or more of the vehicle's direction of travel, the vehicle's speed, the relative position and orientation of the calibration pattern placement, time information, and so on) can be used to identify the image points respectively corresponding to one or more different feature points of one or more calibration patterns.
  • For example, on the left side of the vehicle's driving direction, the nearer outer corner point that first passes the front of the vehicle can be determined as feature point 1 and the farther corner point as feature point 2, while the nearer corner point that passes the front of the vehicle next can be determined as feature point 3 and the farther corner point as feature point 4.
  • Similarly, on the right side of the vehicle's driving direction, the nearer corner point that first passes the front of the vehicle can be determined as feature point 5 and the farther corner point as feature point 6, while the nearer corner point that passes next can be determined as feature point 7 and the farther corner point as feature point 8, and so on.
  • FIG. 3 shows a diagram of an example 300 of corner point recognition according to an aspect of the present disclosure.
  • For example, the findChessboardCorners function in OpenCV can be used to detect checkerboard corners on the original image.
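  • For instance (a minimal sketch assuming a checkerboard with 9x6 inner corners; the pattern size and file name are placeholders):

```python
import cv2

gray = cv2.imread("front_camera.png", cv2.IMREAD_GRAYSCALE)
pattern_size = (9, 6)  # inner corners per row and per column (assumed)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Refine to sub-pixel accuracy before recording the image coordinates.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```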
  • Sorting and matching of the feature points can then be performed to merge the image points of feature points with relative physical relationships into sets of matching points; for example, when the feature pattern is at the front-left of the vehicle, the coordinates of feature point 1 on the front and left fisheye images form a set of matching points.
  • the sorting and matching of feature points may include utilizing a calibration pattern with identifiable feature points.
  • a calibration pattern may include, for example, identifiable feature points marked with numbers, different colors, or different shapes, which will be described in detail below.
  • the calibration data processing solution 200 may further include:
  • The matched image points from different cameras actually map to the same physical point in the common field of view.
  • Two or more image points corresponding to a feature point detected in the common field of view of two or more cameras can each be converted to the body coordinate system through the respective camera's intrinsic and extrinsic parameters. Because the resulting physical coordinates should be equal, a constraint equation can be established. The constraint equation does not need the specific coordinate values in the vehicle body coordinate system; it uses the relative physical relationship between the converted coordinates of the two or more image points instead.
  • For example, suppose the same feature point is captured in the same frame by a first camera as image point p1(u1, v1) and by a second camera as image point p2(u2, v2).
  • The image point p1 can be converted to the vehicle body coordinate system through the intrinsic and extrinsic parameter matrices of the first camera to obtain (x1, y1, z1).
  • Likewise, the image point p2 can be converted to the vehicle body coordinate system through the intrinsic and extrinsic parameter matrices of the second camera to obtain (x2, y2, z2). Since, in the body coordinate system, these two points correspond to the same feature point and their physical coordinates should therefore be equal, a constraint equation can be established, for example as shown in equation (2):

$$ R_1^{-1}\left(s_1 A_1^{-1}\begin{bmatrix}u_1\\ v_1\\ 1\end{bmatrix} - t_1\right) = R_2^{-1}\left(s_2 A_2^{-1}\begin{bmatrix}u_2\\ v_2\\ 1\end{bmatrix} - t_2\right) \tag{2} $$

  • Here R1, A1, and t1 are the rotation matrix, intrinsic matrix, and translation vector associated with the first camera; R2, A2, and t2 are the rotation matrix, intrinsic matrix, and translation vector associated with the second camera; and s1 and s2 are the corresponding scale factors.
  • Based on the data sorted and matched in step 2) above, a set of constraint equations like equation (2) can be established for each group of matching points, so that enough constraint equations can be obtained.
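  • A minimal sketch of how such residuals might be assembled in code (assuming undistorted pinhole projection and ground-plane feature points at z = 0 in the body frame; the function names are illustrative, not the patent's implementation):

```python
import numpy as np

def pixel_to_body(p_uv, A, R, t):
    """Back-project an undistorted pixel onto the ground plane (z = 0) of the
    body frame, given intrinsics A and extrinsics R, t (body -> camera)."""
    ray_cam = np.linalg.inv(A) @ np.array([p_uv[0], p_uv[1], 1.0])
    ray_body = R.T @ ray_cam      # rotate the viewing ray into the body frame
    origin = -R.T @ t             # camera center expressed in the body frame
    s = -origin[2] / ray_body[2]  # scale at which the ray meets z = 0
    return origin + s * ray_body

def matching_point_residual(p1_uv, p2_uv, A1, R1, t1, A2, R2, t2):
    """Constraint (2): both image points must map to the same physical point."""
    return pixel_to_body(p1_uv, A1, R1, t1) - pixel_to_body(p2_uv, A2, R2, t2)
```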
  • Notably, the solution of the present disclosure does not need to solve for the relative spatial displacement between cameras; it can directly and simultaneously determine the respective extrinsic parameters of the multiple cameras by performing nonlinear optimization on the multiple constraint equations at once.
  • According to some embodiments, the size of the calibration pattern may also be determined in advance, for example a square with a side length of 1.5 meters. This information can then be used when building the equations.
  • For example, the relationships that can be established with respect to feature point 1 may include, but are not limited to, one or more of the following:
  • the distance between feature point 2 and feature point 4 equals the distance between feature point 1 and feature point 3;
  • the distance between feature point 1 and feature point 2 equals the distance between feature point 3 and feature point 4; and so on.
  • For the other feature points, corresponding relative physical relationships can likewise be established.
  • The above distances can be obtained from simple geometric operations on the known dimensions of the calibration patterns.
  • The present disclosure is not limited to establishing relationships using distances; it may additionally or alternatively establish relationships using other parameters, such as length relationships, relative positions, and the like. From such relationships, constraint equations can also be established.
  • Even when such size information is unavailable, corresponding relationships can be established; for example, image points from two cameras map to the same feature point in the physical world, and an equation is established using the coordinates of the image points after conversion through their respective intrinsic and extrinsic parameters.
  • Although image points and feature points are described above using "points" as examples, the present disclosure is not limited to point-shaped image points and feature points.
  • For example, image points and feature points may refer to other features, such as straight lines, parallel lines, vertical lines, right angles, areas, specific angles, colors, characters, and the like.
  • In that case, the equations that can be established may include parallel or perpendicular relationships between detected lines, or other geometric relationships, and may further include the distance between parallel lines, and so on.
  • the calibration data processing solution 200 may further include:
  • The nonlinear optimization method generally adopts an iterative approach: given an arbitrary starting point, new solutions are generated through continual iteration until convergence to the optimal solution.
  • Commonly used nonlinear optimization methods include the gradient descent method, steepest descent method, Gauss-Newton method, Levenberg-Marquardt method, nonlinear least squares, and so on. The present disclosure is not limited in this respect.
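  • As an illustrative sketch of this global solve (using SciPy's least-squares solver and reusing the matching_point_residual sketch above; the parameter packing is an assumption for illustration):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def unpack(params, num_cameras):
    """Each camera's extrinsics: 3 rotation-vector plus 3 translation entries."""
    extrinsics = []
    for i in range(num_cameras):
        rvec, tvec = params[6 * i:6 * i + 3], params[6 * i + 3:6 * i + 6]
        extrinsics.append((Rotation.from_rotvec(rvec).as_matrix(), tvec))
    return extrinsics

def residuals(params, matches, intrinsics):
    """Stack the residuals of all constraint equations over all matching points."""
    extrinsics = unpack(params, len(intrinsics))
    res = []
    for cam_i, p_i, cam_j, p_j in matches:  # one group of matching points each
        R_i, t_i = extrinsics[cam_i]
        R_j, t_j = extrinsics[cam_j]
        res.append(matching_point_residual(p_i, p_j, intrinsics[cam_i], R_i, t_i,
                                           intrinsics[cam_j], R_j, t_j))
    return np.concatenate(res)

# x0 holds the initial extrinsics of all cameras, e.g. from the coarse
# initialization described below; "lm" selects Levenberg-Marquardt.
# result = least_squares(residuals, x0, args=(matches, intrinsics), method="lm")
```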
  • Because the solution of the present disclosure can establish a large number of equations, it can reduce the impact of individual detection errors.
  • Traditional calibration methods calibrate each camera independently; no matter how accurate that calibration is, the surround-view effect cannot be guaranteed, because the calibration process does not account for surround-view stitching misalignment.
  • The present disclosure instead exploits the closed-loop characteristic of the surround-view cameras and globally optimizes the extrinsic parameters with a nonlinear solving method, which can greatly improve calibration accuracy and enhance the robustness of the calibration results.
  • With the calibrated data, the points of each camera image converted to the bird's-eye view can be made to coincide, reducing surround-view stitching misalignment.
  • FIG. 4 shows a diagram of a calibration pattern 400 according to some embodiments of the present disclosure.
  • The calibration pattern 400 may be built from feature points on the ground: existing feature points (such as lane lines or parking space lines), artificial feature points (such as a calibration cloth or calibration board), or irregular patterns (such as numbers or letters). Calibration patterns of different designs, colors, and sizes can all use this scheme, as long as the pattern contains feature points.
  • the calibration pattern may include various geometric shapes, characters, or combinations thereof, and may include feature points identified by different colors, characters, numbers, or symbols.
  • Feature points may include, but are not limited to, easy-to-recognize image features such as corner points, intersection points, parallel lines, and vertical lines.
  • The calibration pattern of the present disclosure is easy to set up at the calibration site: there is no need to know the coordinates of the feature points, to place the calibration pattern according to conditions such as position and distance, to restrict the parking position of the vehicle during calibration, or to drive past the calibration pattern along a strict route. It is only necessary to ensure that the calibration pattern has appeared in the cameras' common field of view during the calibration process.
  • the calibration process of the present disclosure may be dynamic, that is, the vehicle is in a driving state during the calibration process.
  • FIG. 5 shows a schematic diagram 500 of a vehicle during calibration according to an aspect of the present disclosure.
  • For example, at least one (or two or more) calibration cloths can be laid to the front side of the vehicle, or an existing pattern on the ground can be used as the calibration pattern. Starting with the front of the vehicle behind or to the side of the calibration pattern, the vehicle drives past the calibration pattern until the rear of the vehicle has left it, at which point the calibration data collection is complete. Depending on circumstances, this process can be done in one pass or repeated several times.
  • Calibration is then completed. This process only needs to ensure that the calibration pattern appears in each camera's field of view within a certain period of time; the distance from the vehicle to the calibration cloth, the driving speed, and the driving trajectory do not affect the implementation of this scheme.
  • the calibration process can also be static, that is, the vehicle is stationary.
  • For example, the vehicle may be parked in an environment with characteristic patterns around it, such as parking spaces or a calibration room, without needing to restrict the position of the vehicle or know the size and position of the characteristic patterns in advance.
  • Alternatively, calibration patterns can be set around the vehicle, without using any props to place them according to conditions such as position and distance.
  • FIG. 6 shows a flow chart of a method 600 for fast and automatic calibration of a vehicle-mounted surround view camera according to an aspect of the present disclosure.
  • Method 600 may include, for example, at block 602, performing feature point detection.
  • the method 600 further includes, at block 604, performing feature point sorting and matching.
  • Method 600 further includes, at block 606, establishing an equation.
  • The method 600 further includes, at block 608, performing a nonlinear optimization solve on all established equations to determine the camera extrinsic parameters.
  • ground features of each camera image in each frame may be detected and the image coordinates, camera numbers, and frame numbers of corresponding feature points recorded.
  • the feature points in each camera image in the same frame can be detected and the image coordinates, camera numbers, and frame numbers of the corresponding feature points can be recorded.
  • the detection methods disclosed in the present disclosure are varied and can be flexibly adjusted according to different calibration patterns.
  • FIG. 7 shows a look-around fisheye image 700 captured by a fisheye camera according to an exemplary embodiment of the present disclosure.
  • In this example, only the black blocks of the pattern are used as the calibration pattern; the calibration patterns described above with reference to FIG. 4, or other suitable calibration patterns, may also be used.
  • To detect the black square contours on the original image, a contour extraction algorithm can be used, such as the findContours algorithm in OpenCV, or operators such as the Roberts, Prewitt, Sobel, and Laplacian operators.
  • The quadrilaterals can then be filtered out according to the number of corner points of each contour, and the coordinates of the corner points in the camera coordinate system obtained. Corner detection may use detection algorithms including but not limited to the Harris, KLT, SIFT, SUSAN, Trajkovic, and Moravec operators.
  • FIG. 8 shows the edges and corner points of a black block detected on an original image captured by a camera according to an exemplary embodiment of the present disclosure. As can be seen, considerable distortion is introduced when, for example, a fisheye camera is used as the vehicle's surround-view camera.
  • When a camera with severe distortion such as a fisheye camera is used, the fisheye image can also be converted into a bird's-eye view using the initial extrinsic parameters, and the calibration pattern then detected on the bird's-eye view, in order to improve the detection rate and detection accuracy.
  • Edge detection methods such as the Hough transform can be used to detect the edges of the black block, and the intersection points of the edges detected as feature corner points. If the calibration pattern is a parking space line, this method can also be used for detection.
  • The initial extrinsic parameters can be solved, for example, as follows: take a specific point on the calibration pattern (for example, the upper-left corner point of the black block) as the origin (0, 0) of a world coordinate system; based on the prior information of the calibration pattern, determine the coordinates that its feature points should have in the world coordinate system; and then, from the correspondence between these world coordinates and the detected image points, solve for R and t, that is, the rotation matrix and displacement vector of the extrinsic parameters.
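  • A minimal sketch of this initialization (assuming known intrinsics and a square block of assumed side length 1.5 m; cv2.solvePnP is one standard way to recover R and t from such 2D-3D correspondences, though the patent does not prescribe it):

```python
import cv2
import numpy as np

side = 1.5  # assumed known side length of the square block, in meters
# World coordinates of the four corners, origin at the upper-left corner.
object_points = np.array([[0, 0, 0], [side, 0, 0],
                          [side, side, 0], [0, side, 0]], dtype=np.float64)
# The four detected corner pixels, in the same order (illustrative values).
image_points = np.array([[402., 310.], [531., 318.],
                         [522., 441.], [395., 430.]])
A = np.array([[400., 0., 640.], [0., 400., 480.], [0., 0., 1.]])  # intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, A, None)
R0, _ = cv2.Rodrigues(rvec)  # initial rotation matrix; tvec is the translation
```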
  • If the characteristic pattern used for calibration is a checkerboard or another feature-rich pattern, the ORB feature point detection method can also be used to detect the feature points.
  • FIG. 9 shows edges and corners of black blocks detected on a bird's-eye view generated using initialization data according to an exemplary embodiment of the present disclosure.
  • the detection rate and detection accuracy can be significantly improved by using the initial external parameters to convert the fisheye image into a bird's-eye view, and then detecting the calibration pattern on the bird's-eye view.
  • At block 604, the feature points detected in the same frame in block 602 may be sorted. According to an exemplary embodiment, it may be confirmed which feature point in the calibration pattern each detected image point corresponds to. According to some exemplary embodiments, the confirmed image points may be stored in an order corresponding to the feature points.
  • Feature point matching means that, for feature points that appear in the common area of two or more cameras in the same frame, the image points to which they map in each camera's image form a set of matching points, and each set of matching points is likewise stored.
  • For example, the calibration pattern shown in FIG. 9 is a black square with only four corner points: upper-left, upper-right, lower-left, and lower-right.
  • These can be put into one-to-one correspondence by relative position, yielding 4 sets of matching points.
  • If the calibration pattern is irregular or relatively complex, feature point detection and descriptor matching can be used, for example ORB feature point detection followed by extraction of feature point descriptors for matching; the matching points are then obtained and saved.
  • An equation can be established according to the relative physical relationships, such as positions and distances, between the physical points to which different image points map in each frame. No specific physical coordinate values need to appear in the equation.
  • Example 1: Two or more image points of a certain feature point detected in the cameras' common field of view in the same frame (i.e., the matching points stored in block 604) actually map to the same physical point, so their physical coordinates are equal.
  • The two image points are passed through their respective intrinsic and extrinsic parameter matrices to obtain physical coordinates in the body coordinate system, and equating the two physical coordinates establishes an equation.
  • Example 2: Two image points in the same frame map to adjacent corner points of the square black block; the distance between the two corner points equals the side length, and an equation can be established.
  • This equation is established on the premise that the size of the simple pattern is known.
  • Example 3: Even without knowing a specific distance or size value, an equation can be established from a known relative relationship of distances, sizes, or areas (for example, equality or a difference value). For example, the calibration pattern is known to be a square but its side length is unknown; an equation can then be established using the fact that the square's side lengths are equal.
  • In general, any feature pattern can be used, under the constraint that the physical coordinates of each group of image matching points mapped to the same feature point are equal after conversion through their respective intrinsic and extrinsic parameters.
  • All the established equations can then be used, with a nonlinear optimization method, to simultaneously solve for all unknown variables, that is, all camera extrinsic parameters.
  • There are various ways to express the extrinsic parameters when solving, such as solving in the form of angles and displacement, in matrix form, or in the form of quaternions and rotation vectors.
  • Nonlinear optimization methods such as the Gauss-Newton method, the Levenberg-Marquardt method, or nonlinear least squares can be used; no specific solving method is required.
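  • Converting between these extrinsic representations is routine; for example (a sketch using SciPy, with placeholder angle values):

```python
from scipy.spatial.transform import Rotation

# A hypothetical solved rotation, expressed here as Euler angles in degrees.
rot = Rotation.from_euler("xyz", [0.5, -1.2, 90.0], degrees=True)

R_matrix = rot.as_matrix()  # 3x3 rotation matrix form
quat = rot.as_quat()        # quaternion form (x, y, z, w)
rotvec = rot.as_rotvec()    # rotation vector form (axis * angle)
```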
  • FIG. 10 shows surround-view effect diagrams 1000 after calibration is completed using dynamic and static calibration processes, respectively, according to an exemplary embodiment of the present disclosure; the left figure shows the surround-view effect after calibration with the dynamic calibration process, and the right figure shows the surround-view effect after calibration with the static calibration process.
  • FIG. 11 shows a block diagram of a fast automatic calibration device 1100 for a vehicle-mounted surround view camera according to an aspect of the present disclosure.
  • A fast automatic calibration device 1100 for vehicle-mounted surround-view cameras may include, but is not limited to, one or more cameras 1102, a processor 1104, a memory 1106, a display 1108, an image processing module 1110, an image point detection module 1112, a feature point sorting and matching module 1114, a constraint equation establishment and solution module 1116, and so on.
  • the one or more cameras 1102 may include an in-vehicle surround view camera.
  • the one or more cameras 1102 may include wide-angle cameras (eg, fisheye cameras) installed on the front, rear, left, and right sides of the vehicle, and the like.
  • the one or more cameras 1102 can collect video images around the vehicle and provide them to the image processing module 1110 in the form of image frames.
  • the image processing module 1110 can perform various image preprocessing, including denoising, initial fisheye bird's-eye view conversion, and the like.
  • the preprocessed images may be provided to the image point detection module 1112 .
  • the image point detection module 1112 can detect the image points in each frame image by using the methods described above in conjunction with the embodiments of the present disclosure, and record information such as image coordinates, camera numbers, and frame numbers of the corresponding image points.
  • the recorded information can be stored in the memory 1106 and/or passed to the feature point sorting and matching module 1114 .
  • The feature point sorting and matching module 1114 can sort the detected image points to determine which feature point in the calibration pattern each detected image point corresponds to in each frame, treat the image points corresponding to the same feature point of the calibration pattern in the same frame as a group of matching points, and record information such as their frame numbers, camera numbers, and image coordinates.
  • the recorded information may be stored in memory 1106 and/or passed to constraint equation creation and solution module 1116 .
  • the constraint equation establishment and solution module 1116 can establish the constraint equation based on the prior knowledge about the feature points, and adopt the non-linear optimization solution method to solve and calculate the camera extrinsic parameters.
  • the resolved camera extrinsics may be provided to the image processing module 1110 and/or stored in the memory 1106 .
  • one or more cameras 1102 may capture video images around the vehicle and provide them to the image processing module 1110 in the form of image frames.
  • The image processing module 1110 may preprocess (for example, undistort, etc.) the captured image frames based on the camera extrinsic parameters and data such as the camera intrinsic parameters stored in the memory.
  • the processed data may be provided to an automatic driving system (not shown) for automatic driving.
  • the data obtained through processing may be stitched around.
  • the obtained results are provided to a display 1108 for display to a user (eg, driver) to assist driving/parking, etc.
  • The image processing module 1110, the image point detection module 1112, the feature point sorting and matching module 1114, and the constraint equation establishment and solution module 1116 are described as independent modules, which can be implemented by, for example, dedicated hardware, firmware, programmable circuits, ASICs, and the like. In an alternative embodiment, one or more or all of these modules can also be stored as software modules in the memory 1106 and executed by the processor 1104 to realize the functions described above in conjunction with each module. In addition, the above division of functions into modules is exemplary; the functions of the above modules may be combined in one module, or further divided and performed by different modules.
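  • A schematic sketch of how these modules might be wired together in software (the module names mirror the block diagram; the interfaces are assumptions for illustration):

```python
class CalibrationDevice:
    """Illustrative wiring of the modules of device 1100."""

    def __init__(self, cameras, image_processing, point_detection,
                 matching, equation_solver):
        self.cameras = cameras                    # cameras 1102
        self.image_processing = image_processing  # module 1110
        self.point_detection = point_detection    # module 1112
        self.matching = matching                  # module 1114
        self.equation_solver = equation_solver    # module 1116

    def calibrate(self, num_frames):
        records = []
        for frame_id in range(num_frames):
            # One frame = simultaneous images from all cameras.
            images = [self.image_processing.preprocess(cam.capture())
                      for cam in self.cameras]
            records.extend(self.point_detection.detect(frame_id, images))
        matches = self.matching.match(records)
        return self.equation_solver.solve(matches)  # per-camera extrinsics
```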
  • In summary, the present invention makes use of the common field of view between the surround-view cameras and the closed loop they form, and uses the relative relationships between physical points to bypass the physical point coordinate values that are necessary in traditional calibration methods but difficult to obtain accurately. The constraint on where the feature points must be located is thereby released, the problem of the calibration site affecting the accuracy of physical coordinates in traditional calibration is solved, and the impact of the calibration environment on calibration is reduced.
  • The invention collects a large amount of calibration pattern data and establishes many constraint equations, so the calibration accuracy is high and the results are very robust.
  • By combining the surround-view cameras with global optimization, the surround-view stitching effect is optimized.
  • The absolute coupling between site accuracy and calibration accuracy is removed, the negative impact of the calibration environment is reduced, and the calibration process is greatly simplified.
  • The present invention does not need an accurately calibrated site, does not need the coordinates of feature points to be measured, does not need the calibration pattern to be set according to conditions such as position or distance, and does not need to restrict the parking position of the vehicle during calibration.
  • The method of setting up the calibration environment is simple, requires no additional measurement props, and the calibration conditions are easy to realize.
  • The calibration process is fast and convenient, fully automatic without manual intervention, with high calibration accuracy and excellent surround-view stitching.
  • The invention can be used for mass calibration on production lines, as well as calibration in harsh environments such as after-sales or aftermarket installation.
  • the embodiments of the present disclosure can be realized by corresponding methods, apparatuses, devices, programs (for example, programs stored on a computer-readable medium and executable by a processor), and the like.
  • The methods, apparatuses, devices, etc. and/or detection methods, apparatuses, devices, etc. of the present disclosure may be implemented on a client, an enterprise end, a third-party server, or the like.
  • The methods, apparatuses, devices, etc. that include or implement the embodiments of the present disclosure may be implemented in software, hardware, or firmware, all of which fall within the scope of the present disclosure.
  • the corresponding program code can be stored in a floppy disk, CD, DVD, hard disk, flash memory, U disk, CF card, SD card, MMC card, SM card, memory stick, XD card, SDHC card and other media, or can be transmitted through communication media, and executed by, for example, a processor to realize corresponding functions or parts thereof, or any combination of functions.
  • The functions described herein may be implemented with digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices (PLDs), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • a general-purpose processor can be a microprocessor, but in the alternative, the processor can be any commercially available processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • a software module may reside in any form of storage medium known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disks, removable disks, CD-ROMs, and the like.
  • a software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs and across multiple storage media.
  • a storage medium may be coupled to a processor such that the processor can read and write information from, and to, the storage medium. Alternatively, the storage medium may be integrated into the processor.
  • Methods disclosed herein include one or more steps or actions for achieving the described method.
  • the method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
  • the order and/or use of specific steps and/or actions may be altered without departing from the scope of the claims.
  • the processor can execute software stored on the machine-readable medium.
  • a processor can be implemented with one or more general and/or special purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry capable of executing software.
  • Software should be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • a machine-readable medium may include RAM (Random Access Memory), Flash memory, ROM (Read Only Memory), PROM (Programmable Read Only Memory), EPROM (Erasable Programmable Read Only Memory), EEPROM (Electrical erasable programmable read-only memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof.
  • a machine-readable medium can be embodied in a computer program product.
  • the computer program product may include packaging materials.
  • the machine-readable medium may be a portion of the processing system separate from the processor.
  • the machine-readable medium, or any portion thereof can be external to the processing system, as will be readily appreciated by those skilled in the art.
  • a machine-readable medium may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from a wireless node, all of which are accessible by a processor through a bus interface.
  • the machine-readable medium, or any portion thereof may be integrated into the processor, such as may be the case with cache memory and/or general register files.
  • The processing system may be configured as a general-purpose processing system having one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable medium, all linked together with other supporting circuitry.
  • the processing system may be implemented using an ASIC (Application Specific Integrated Circuit) with a processor, bus interface, user interface (in the case of an access terminal), supporting circuitry, and at least a portion of the machine-readable medium integrated into a single chip.
  • a machine-readable medium may include several software modules. These software modules include instructions that, when executed by a device such as a processor, cause the processing system to perform various functions. These software modules may include a transmitting module and a receiving module. Each software module may reside on a single storage device or be distributed across multiple storage devices. As an example, a software module may be loaded from a hard drive into RAM when a triggering event occurs. During execution of a software module, the processor may load some instructions into the cache for faster access. The one or more cache lines can then be loaded into the general purpose register file for execution by the processor. Where the functionality of a software module is discussed below, it will be understood that such functionality is implemented by the processor when it executes instructions from the software module.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage media may be any available media that can be accessed by a computer.
  • By way of example, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code and that can be accessed by a computer. Any connection is also properly termed a computer-readable medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • Computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media).
  • Computer-readable media may also comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
  • certain aspects may include a computer program product for performing the operations presented herein.
  • a computer program product may include a computer-readable medium having stored thereon (and/or encoded) instructions executable by one or more processors to perform the operations described herein.
  • a computer program product may include packaging materials.

Abstract

One aspect of the present disclosure relates to a calibration method for vehicle-mounted surround-view cameras, comprising: obtaining one or more frames captured by a plurality of cameras installed at a plurality of positions on a vehicle, wherein each frame includes a plurality of images captured simultaneously by the plurality of cameras; for each frame, detecting image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern including one or more feature points; for each frame, determining, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship; using the respective image coordinates of each group of matching points, establishing a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern; and simultaneously performing nonlinear optimization on the plurality of constraint equations to simultaneously determine the respective extrinsic parameters to be solved for the plurality of cameras. The present disclosure also relates to other related aspects.

Description

Method and device for fast automatic calibration of vehicle-mounted surround-view cameras

Technical Field

This application relates generally to perception technology in intelligent driving, and in particular to the calibration of vehicle-mounted surround-view cameras.

Background

Among the sensors used in intelligent driving, cameras are widely applied for their low cost and good adaptability. Most ADAS (Advanced Driver Assistance System) functions are developed based on visual image processing, and vehicle-mounted cameras have thus become the hardware basis for implementing numerous perception functions such as warning and recognition.

Vehicle-mounted cameras can be divided into many types according to function, characteristics, and installation position, such as surround-view cameras, front-view binocular or multi-view cameras, side-view cameras, and so on; a vehicle usually carries several or even a dozen or more cameras. To make reasonable use of the cameras' perception information, camera calibration is an indispensable step, and calibration accuracy directly affects the accuracy of visual perception.

Summary

An exemplary aspect of the present disclosure includes a calibration method for vehicle-mounted surround-view cameras, comprising: obtaining one or more frames captured by a plurality of cameras installed at a plurality of positions on a vehicle, wherein each frame includes a plurality of images captured simultaneously by the plurality of cameras; for each frame, detecting image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern including one or more feature points; for each frame, determining, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship; using the respective image coordinates of each group of matching points, establishing a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern, wherein the number of the plurality of constraint equations is greater than or equal to the total number of extrinsic parameters to be solved for the plurality of cameras; and simultaneously performing nonlinear optimization on the plurality of constraint equations to simultaneously determine the respective extrinsic parameters to be solved for the plurality of cameras.
According to some exemplary embodiments, the calibration pattern includes an existing, non-preset pattern on the ground or a preset calibration pattern placed on the ground, and the calibration pattern includes one or more identical or different calibration patterns.

According to some exemplary embodiments, the plurality of cameras includes any plurality of cameras capable of forming a surround view.

According to some exemplary embodiments, the one or more frames are captured while the vehicle is in a driving state or in a stationary state.

According to some exemplary embodiments, the relative physical relationship of the feature points of the calibration pattern includes one or more of the following, or any combination thereof: at least two image points, detected in the common field of view of at least two cameras in the same frame and corresponding to the same feature point of the calibration pattern, map to the same physical point; a known distance, length, or area value of the feature points of the calibration pattern; and a relative physical relationship of distances, coordinates, or lengths among the feature points of the calibration pattern.

According to some exemplary embodiments, the feature points include corner points, and may optionally further include one or more of the following, or any combination thereof: straight lines, parallel lines, and perpendicular lines.

According to some exemplary embodiments, for each frame, detecting the image points related to the calibration pattern in the plurality of images captured by the plurality of cameras further includes recording the detected image points and their corresponding frame numbers, camera numbers, and image coordinates.

According to some exemplary embodiments, for each frame, determining as a group of matching points the plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship further includes recording the frame number, camera number, and image coordinates of each image point in the determined group of matching points.

According to some exemplary embodiments, the cameras include cameras with distortion, and the calibration method further includes, before detecting the image points, using the initial extrinsic parameters of the cameras to convert the images captured by the cameras into a surround-view stitched bird's-eye view; and detecting the image points on the surround-view stitched bird's-eye view.

According to some exemplary embodiments, using the initial extrinsic parameters of the cameras to convert the captured images into a surround-view stitched bird's-eye view includes: establishing a world coordinate system with a specific point on the calibration pattern as the origin; determining, based on prior information of the calibration pattern, the coordinates that the feature points of the calibration pattern should have in the world coordinate system; and solving for the initial value of the extrinsic parameter matrix based on the detected image points corresponding to the feature points.

Other aspects of the present disclosure also include corresponding apparatuses, devices, computer-readable media, and the like.
Brief Description of the Drawings

FIG. 1 shows a diagram of a calibration data acquisition scheme for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.

FIG. 2 shows a diagram of a calibration data processing scheme for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.

FIG. 3 shows a diagram of an example of corner point identification according to an aspect of the present disclosure.

FIG. 4 shows a diagram of calibration patterns according to some embodiments of the present disclosure.

FIG. 5 shows a schematic diagram of a vehicle during the calibration process according to an aspect of the present disclosure.

FIG. 6 shows a flowchart of a method for fast automatic calibration of vehicle-mounted surround-view cameras according to an aspect of the present disclosure.

FIG. 7 shows a fisheye image captured by a fisheye camera according to an exemplary embodiment of the present disclosure.

FIG. 8 shows edges and corner points detected on an original image captured by a camera according to an exemplary embodiment of the present disclosure.

FIG. 9 shows edges and corner points detected on a bird's-eye view generated using initialization data according to an exemplary embodiment of the present disclosure.

FIG. 10 shows surround-view results after calibration using the dynamic and static calibration processes, respectively, according to exemplary embodiments of the present disclosure.

FIG. 11 shows a block diagram of a fast automatic calibration apparatus for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.
Detailed Description

In a parking assistance system, for example, wide-angle cameras (for example, including but not limited to fisheye cameras) mounted at the front, rear, left, and right of the vehicle capture video images of the vehicle's surroundings; image fusion and surround-view stitching techniques synthesize a panoramic view and/or bird's-eye view of the area around the vehicle body, which is finally displayed on the screen of the vehicle's center console. With such a parking assistance system, the driver can intuitively see from inside the car whether there are obstacles around the vehicle, along with their relative bearing and distance, and can thus park safely in narrow, crowded parking lots, pass through complex terrain, and effectively avoid scrapes, collisions, drop-offs, and other accidents. It can also support recognition, detection, tracking, and other algorithms in autonomous driving systems.

According to one scheme, calibration images of checkerboards placed in advance in at least the four directions (front, rear, left, right) of the vehicle may first be collected; these calibration images are used to calibrate the parameters of each of the front, rear, left, and right cameras individually, the image distortion correction parameters of each camera are computed and saved, and the calibration images are distortion-corrected to eliminate camera imaging distortion. Next, a projective transformation may be applied to the distortion-corrected calibration images, and the projective transformation parameters computed and saved. Subsequently, specific images rich in feature points, placed in advance at the four directions of the vehicle, may be collected, distortion-corrected by looking up the camera distortion correction parameters, and transformed into bird's-eye views by looking up the projective transformation parameters. Further, features may be extracted from the bird's-eye views and coarsely matched, an initial value of the homography matrix fitted, and, after image registration, fusion, and stitching, a bird's-eye panoramic view generated.

During actual use, by looking up the saved camera distortion correction parameters, projective transformation parameters, and homography matrix parameters, the video images from the four cameras can be stitched into a virtual bird's-eye panoramic view.

Since the accuracy of the cameras' intrinsic and extrinsic parameter correction has a great influence on the image projection result, the camera calibration scheme directly affects the effectiveness and safety of such systems.

Camera calibration mainly includes intrinsic calibration and extrinsic calibration. Camera intrinsics, including decentering, distortion, and so on, do not change after the camera leaves production, so intrinsic calibration can be completed at the factory; however the camera is subsequently used, intrinsic calibration need not be repeated.

Extrinsic calibration, on the other hand, must be performed after the camera is mounted on the vehicle. Camera extrinsics comprise rotation and translation, i.e., the transformation between the camera coordinate system and the vehicle body coordinate system. Rotation can be expressed in many ways, such as rotation matrices, Euler angles, and rotation vectors; these representations are mutually convertible, so extrinsic calibration only needs to solve for one of them. If the camera's position changes after installation, or if the camera is replaced, the extrinsics must be recalibrated; moreover, extrinsic calibration must be performed on the complete vehicle, so its difficulty and demand far exceed those of intrinsic calibration.
Most vehicle camera calibration schemes in wide use today solve for the extrinsic matrix, i.e., the rotation matrix and translation vector, using the image coordinates of feature points on the calibration site together with their physical coordinates in the vehicle body coordinate system. Whatever calibration pattern is used, one must first know the exact dimensions and position of the site's pattern, as well as the parking position of the vehicle after entering the site, in order to convert the pattern into physical coordinates in the current vehicle body coordinate system; one must then detect the required feature points and their coordinates on the surround-view camera images. From the physical coordinates (X, Y, Z) of a feature point and its image coordinates (u, v) on the normalized plane obtained above, the camera imaging equation can be written, and the camera's extrinsic matrix, i.e., the rotation matrix R and translation vector t, can be solved by least squares; the intrinsic matrix A can also be solved if needed. An example camera imaging equation is shown in equation (1):

$$s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=A\,[R\mid t]\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}\tag{1}$$

where s is a scale factor, and the camera extrinsic matrix (R|t) comprises the rotation matrix R and the translation matrix t. Together, R and t describe how a point is transformed from the world coordinate system to the camera coordinate system: the rotation matrix describes the orientation of the world coordinate axes relative to the camera coordinate axes, and the translation matrix describes the position of the spatial origin in the camera coordinate system.
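By way of a purely illustrative, non-limiting aid (not part of the original filing), equation (1) can be evaluated numerically as in the following Python sketch; the intrinsic matrix A, the rotation R, the translation t, and the feature point are hypothetical placeholder values:

    import numpy as np

    # Numeric sketch of equation (1): s * [u, v, 1]^T = A [R|t] [X, Y, Z, 1]^T.
    # All values below are illustrative placeholders, not data from the disclosure.
    A = np.array([[400.0,   0.0, 640.0],
                  [  0.0, 400.0, 360.0],
                  [  0.0,   0.0,   1.0]])   # intrinsic matrix
    R = np.eye(3)                           # rotation: world (body) axes to camera axes
    t = np.array([0.0, 0.0, 1.5])           # position of the body origin in the camera frame

    P = np.array([2.0, 0.5, 0.0])           # feature point (X, Y, Z) in the body frame
    p = A @ (R @ P + t)                     # right-hand side of equation (1)
    u, v = p[:2] / p[2]                     # dividing out the scale factor s recovers (u, v)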
Since every camera on every vehicle must be calibrated, and recalibration is required whenever a camera is rotated or moved, the calibration workload is considerable. The calibration success rate is therefore also an evaluation item the algorithm must consider. At the same time, calibration demands high accuracy and is strongly affected by the environment and other factors, so the calibration result depends to a great extent on the calibration site.

For end-of-line calibration, vehicle manufacturers usually build dedicated high-precision calibration sites, where the flatness of the ground and the dimensions of the calibration patterns are finely tuned, and wheel aligners are installed to fix the vehicle's position. Yet even such high-precision sites often cannot fully meet the requirements of high-precision, high-pass-rate calibration; fisheye camera calibration in particular suffers from the cameras' inherent heavy distortion, and the results are poor.

In after-sales or aftermarket scenarios, if a camera needs repair or replacement, high-quality calibration is often even harder to achieve. Dealerships rarely have high-precision calibration sites; most simply find an open area and lay out calibration cloth. Since calibration accuracy is closely tied to the site, the results under such adverse conditions (uneven ground, wrinkled or non-standard cloth placement, poor lighting, and so on) speak for themselves. Meanwhile, laying out the cloth requires considerable time for placement and measurement to preserve site accuracy as much as possible, so calibration is inefficient and the results can hardly meet requirements. Most of these problems arise because traditional calibration methods require accurate coordinates of the calibration points in the vehicle body coordinate system, and these physical coordinates are very sensitive to the calibration environment and its precision; the end result is that in adverse calibration environments, and with cumbersome, time-consuming site setup, the calibration results remain poor.

An aspect of the present disclosure relates to a scheme for fast automatic calibration of vehicle-mounted surround-view cameras, in which the surround-view cameras are not limited in camera type or number. The following scheme of the present disclosure aims to solve the problems of traditional calibration methods: significant environmental sensitivity, cumbersome procedures, and unsatisfactory results. Described below is the automatic calibration technique realized by the present disclosure, which is little affected by environmental factors, produces good calibration results, and is fast and effective. With the technique of the present disclosure, calibration accuracy and results can be improved while the calibration procedure is simplified and efficiency increased.
FIG. 1 shows a diagram of a calibration data acquisition scheme 100 for vehicle-mounted surround-view cameras according to an aspect of the present disclosure. As shown in the figure, two calibration patterns 104-a and 104-b may be placed on the ground. Preferably, the two calibration patterns 104-a and 104-b are separated by slightly more than the vehicle's width, so that the vehicle 102 can capture complete images of the calibration patterns while driving between them. During this process, the cameras at the front, rear, left, and right of the vehicle 102 continuously capture multiple frames of images. Although the two calibration patterns are shown as identical in the figure, other embodiments of the present disclosure may use different calibration patterns.

As can be appreciated, although the above example places two calibration patterns 104-a and 104-b, the present disclosure is not limited thereto. In fact, placing two calibration patterns allows the vehicle to calibrate all of its surround-view cameras in a single pass; the present disclosure also encompasses schemes with more or fewer calibration patterns.

For example, according to at least some embodiments, only one calibration pattern (e.g., 104-a or 104-b) may be placed, and calibration of both sides can be completed by driving the vehicle past the pattern once in each direction. As another example, according to other embodiments, more than two calibration patterns may be placed, e.g., one each at the front, rear, left, and right, or two each on the left and right sides, etc., to achieve more accurate calibration.

Since the scheme of the present disclosure uses feature points on the ground and completes calibration through the relative positional relationships between points, without measuring the feature points' coordinates, there is in fact no need to place the calibration patterns according to relative position, distance, or angle conditions. For example, although the calibration patterns in the example of FIG. 1 are shown placed parallel to the vehicle's lateral and longitudinal axes, the present disclosure is not limited thereto.

The present disclosure does not restrict where the vehicle stops during calibration; it is only necessary to ensure that the calibration pattern appears in the cameras' common field of view at some point during the calibration process. The common field of view refers to the area that two or more cameras can capture at the same moment.

In addition, the calibration process shown in FIG. 1 may be dynamic, i.e., the vehicle is in motion during calibration. However, if the site is small and the vehicle cannot drive, the calibration process may also be static, i.e., the vehicle is stationary. For example, according to exemplary embodiments, in such relatively constrained scenarios, at least one calibration pattern may be placed at multiple different positions around the stationary vehicle, so that every camera on the vehicle shares a common field of view of the calibration pattern with at least one other camera and can thus capture images of the pattern in the same frame.

During calibration, if the vehicle is in motion, the effective driving period is while the calibration pattern can be detected within the cameras' fields of view; the positions of the feature points on the images are acquired and recorded across different frames, so the process collects image point coordinates for the calibration pattern at different positions relative to the vehicle. This greatly increases the amount of data available for computing the extrinsics without increasing the range or number of feature points, thereby improving calibration accuracy.
After the multiple vehicle-mounted surround-view cameras have captured multiple frames of images of the calibration patterns, for example via the calibration data acquisition scheme 100 of FIG. 1, the collected image data can be processed.

FIG. 2 shows a diagram of a calibration data processing scheme 200 for vehicle-mounted surround-view cameras according to an aspect of the present disclosure. The calibration data processing scheme 200 may include:

1) Image point detection

Depending on whether a dynamic or static calibration process is used, the calibration data acquisition scheme may capture, at one or more (e.g., M) moments, one or more frames of images at least partially containing the calibration pattern through the multiple cameras. The multiple (e.g., N) images captured by the multiple (e.g., N) cameras at the same moment constitute the i-th frame corresponding to that moment i. As can be appreciated, not all images at a given moment necessarily contain the image of the same calibration pattern, but at least some of the images (e.g., from two or more of the cameras) may contain images of the same calibration pattern.

For a static calibration process, image points of the calibration pattern can be detected in the camera-1 image, camera-2 image, ..., camera-N image of the frame, and the image coordinates and camera numbers of the image points detected in each camera image of the frame recorded.

For a dynamic calibration process, the image points of the calibration pattern can be detected in each camera image of every frame, and the image coordinates, camera number, and frame number of the corresponding image points recorded. The detection method can be flexibly adapted to different calibration patterns.

According to an exemplary embodiment, image point detection may include using an edge detection algorithm (including but not limited to edge detection methods such as findContours and Hough in OpenCV, or operators such as Roberts, Prewitt, Sobel, and Laplacian) to detect edges in the captured images, and using a corner detection algorithm (including but not limited to, e.g., taking the intersections of the detected edges as image corner points, or using the Harris, KLT, SIFT, SUSAN, Trajkovic, or Moravec operators) to detect corner points as image points.
Image point   Frame   Camera   Coordinates
1             1       1        (u_1, v_1)
2             1       1        (u_2, v_2)
3             1       2        (u_3, v_3)
4             1       2        (u_4, v_4)
5             1       3        (u_5, v_5)
6             1       4        (u_6, v_6)
7             2       1        (u_7, v_7)
8             2       1        (u_8, v_8)
...           ...     ...      ...
L             M       1        (u_L, v_L)

Table 1

Table 1 shows the detected image points (e.g., image points 1...L) recorded in order, together with their corresponding frame numbers, camera numbers, and image coordinates. Depending on the calibration pattern and/or detection scheme used, which feature point of which calibration pattern each detected image point corresponds to may be identified or determined later, or may be identified or determined at the time of detection.
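As a purely illustrative, non-limiting sketch of this detection-and-recording step (in Python with OpenCV; the container layout, the detector choice, and all parameter values are assumptions of this sketch, not the disclosure's exact procedure):

    import cv2

    # Hypothetical sketch: detect corner-like image points in every camera image of
    # every frame and record them in the order of Table 1 as (frame, camera, u, v).
    # `frames` is assumed to be a list of dicts mapping camera number -> grayscale image.
    def detect_image_points(frames):
        records = []
        for frame_no, images in enumerate(frames, start=1):
            for cam_no, img in images.items():
                # Shi-Tomasi corners stand in for any detector named above (Harris, SIFT, ...)
                corners = cv2.goodFeaturesToTrack(img, maxCorners=50,
                                                  qualityLevel=0.05, minDistance=10)
                if corners is None:
                    continue
                for (u, v) in corners.reshape(-1, 2):
                    records.append((frame_no, cam_no, float(u), float(v)))
        return records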
According to some exemplary embodiments, to improve the detection rate and accuracy, the fisheye image may first be converted to a bird's-eye view using the initial extrinsics before detecting image points. For example, a world coordinate system can be established based on a specific point in the fisheye image, and, based on prior information about the calibration pattern, the coordinates that the main feature points of the pattern should have in the world coordinate system can be determined. The extrinsic matrix is then solved from the detected image points; specific embodiments are described in detail below. On this basis, the fisheye image is converted to a more regular bird's-eye view, and performing feature point detection on it can improve the detection rate and accuracy.
The calibration data processing scheme 200 may further include:

2) Feature point sorting and matching

After image point detection is completed, or while it is in progress, the image points detected in the same frame in step 1) can be sorted, i.e., it is determined which feature point of the calibration pattern each detected image point corresponds to, and they are saved in the corresponding order. Feature point matching means that, for feature points with relative physical relationships that appear in the common area of two or more cameras in the same frame, the image points to which they map in the respective camera images form a group of matching points; each group of matching points is likewise saved.

Relative physical relationships may include, but are not limited to, for example: belonging to the same feature point and thus having equal coordinates; belonging to two feature points whose distance is some known value (e.g., edge-length information); or two pairs of feature points being equidistant; and so on.
Feature point     Frame   Camera   Coordinates
104-a top-left    1       1        (u_a0101, v_a0101)
104-a top-left    1       2        (u_a0102, v_a0102)
104-a top-left    1       3        (u_a0103, v_a0103)
104-a top-left    1       4        NULL
104-a top-left    2       1        (u_a0201, v_a0201)
104-a top-left    2       2        (u_a0202, v_a0202)
104-a top-left    2       3        NULL
104-a top-left    2       4        NULL
...               ...     ...      ...
104-a top-left    M       1        (u_a0M01, v_a0M01)
104-a top-left    M       2        NULL
104-a top-left    M       3        (u_a0M03, v_a0M03)
104-a top-left    M       4        (u_a0M04, v_a0M04)

Table 2

Table 2 shows the results of feature point sorting and matching according to an example, corresponding to the case of four cameras capturing a total of M frames. As can be seen, Table 2 shows only the coordinates (u, v) of the image points sorted and matched across the M frames captured by the four cameras for one image point of, e.g., calibration pattern 104-a of FIG. 1 (e.g., the top-left corner of the white outer ring). Some coordinates in the table are NULL, indicating that the corresponding camera did not capture that feature point in that frame (i.e., the image does not contain the corresponding image point). Depending on the calibration patterns deployed, the detected feature points may involve one or more image points on one or more calibration patterns, as will be described in detail below.
Feature point sorting and matching can take various approaches and techniques. For example, it can be based on prior knowledge of the calibration pattern and/or make use of various image processing, feature extraction, and recognition techniques.

According to some exemplary embodiments, feature point sorting and matching may include establishing a one-to-one correspondence from the relative positions of the corner points to obtain four groups of matching points. If the calibration pattern is irregular or relatively complex, feature point matching can be performed using feature detection and descriptor matching methods, e.g., ORB feature detection followed by extracting and matching the feature descriptors, to obtain and save the matching points.

Additionally or alternatively, the image points corresponding to one or more different feature points of one or more calibration patterns may be identified, with or without other auxiliary information (including but not limited to, e.g., one or more of vehicle heading, vehicle speed, relative placement position and orientation of the calibration patterns, time information, etc.).

Taking FIG. 1 as an example, on the left of the vehicle's direction of travel, the nearer outer corner point that passes the vehicle front first may be designated feature point 1 and the farther corner point feature point 2, while the nearer corner point that passes the vehicle front later is designated feature point 3 and the farther corner point feature point 4. Similarly, on the right of the direction of travel, the nearer corner point passing the vehicle front first may be designated feature point 5 and the farther one feature point 6, while the nearer corner point passing later is feature point 7 and the farther one feature point 8, and so on. FIG. 3 shows a diagram of a corresponding example 300 of corner point identification according to an aspect of the present disclosure.

According to other embodiments, for example, if the feature pattern used for calibration is a checkerboard, the findChessboardCorners function in OpenCV can be used to detect the checkerboard corners on the original image.

In any of the above ways, and optionally based on auxiliary information, feature point sorting and matching can be performed to group the image points of feature points having relative physical relationships into a group of matching points; for example, when the feature pattern is at the front-left of the vehicle, the coordinates of feature point 1 in the front and left fisheye images form a group of matching points.

According to other exemplary embodiments, feature point sorting and matching may include using a calibration pattern with identifiable feature points. Such calibration patterns may include feature points identifiable by, e.g., numeric labels, different colors, or different shape markers, as described in detail below. With such calibration patterns, image points belonging to the same feature point can be directly identified and grouped into a group of matching points corresponding to that feature point.
The calibration data processing scheme 200 may further include:

3) Establishing equations

Based on the property that surround-view cameras have common fields of view, and on the principle that matched image points from different cameras actually map the same physical point in the common field of view, the scheme of the present disclosure, after data acquisition and feature point sorting and matching, can take two or more image points detected in the common field of view of two or more cameras in the same frame that correspond to a given feature point, transform them through their respective intrinsics and extrinsics into the vehicle body coordinate system, and, since the resulting physical coordinates should be equal, establish constraint equations. The constraint equations do not need the specific coordinate values in the vehicle body coordinate system; they instead use the relative physical relationships of the transformed coordinates of these two or more image points.

Based on the relative physical relationships, such as relative position, distance, and edge length, between the physical points of the ground calibration pattern mapped by different image points, multiple constraint equations can be established, in which no specific physical coordinate values need appear. The constraint equations established for all or at least some of the relative physical relationships across all frames can be enumerated. In general, the more equations, the more robust the result. The number of equations must at least exceed the total number of extrinsic variables to be solved, and every extrinsic variable must appear in the equations. Likewise, for static calibration it suffices to enumerate only the relative constraint equations established for the points detected in a single frame.
According to some exemplary embodiments, for example, in a certain frame, image points p_1(u_1, v_1) and p_2(u_2, v_2) on the normalized imaging plane, corresponding to a feature point P, are detected in the common fields of view of a first camera and a second camera, respectively. Image point p_1 can be transformed into the vehicle body coordinate system through the first camera's intrinsic and extrinsic matrices to obtain (x_1, y_1, z_1), and image point p_2 can be transformed into the vehicle body coordinate system through the second camera's intrinsic and extrinsic matrices to obtain (x_2, y_2, z_2). Since, in the vehicle body coordinate system, these two points correspond to the same feature point and their physical coordinates should therefore be equal, a constraint equation can be established, for example as in equation (2):

$$R_1^{-1}\left(s_1 A_1^{-1}\begin{bmatrix}u_1\\ v_1\\ 1\end{bmatrix}-t_1\right)=R_2^{-1}\left(s_2 A_2^{-1}\begin{bmatrix}u_2\\ v_2\\ 1\end{bmatrix}-t_2\right)\tag{2}$$

where R_1 is the rotation matrix associated with the first camera, A_1 is the first camera's intrinsic matrix, t_1 is the translation vector associated with the first camera, R_2 is the rotation matrix associated with the second camera, A_2 is the second camera's intrinsic matrix, t_2 is the translation vector associated with the second camera, and s_1, s_2 are scale factors as in equation (1).

In this way, based on the data obtained from feature point sorting and matching in step 2) above, constraint equations similar to equation (2) can be established pairwise for each group of matching points, yielding a sufficient number of constraint equations. In this manner, the scheme of the present disclosure does not need to solve for the relative spatial displacement between cameras; instead, it simultaneously performs nonlinear optimization over the multiple constraint equations to directly and simultaneously determine the extrinsics of each of the multiple cameras.
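As a purely illustrative, non-limiting sketch of how such constraints can be assembled for joint solving (in Python with numpy and scipy; the Euler-angle parameterization, the data layout, and all names here are assumptions of this sketch rather than the filing's implementation):

    import numpy as np
    from scipy.optimize import least_squares

    def euler_to_R(rx, ry, rz):
        # Rotation matrix from three Euler angles (one of many equivalent representations).
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def backproject_to_ground(p_uv, A, R, t):
        # Intersect the viewing ray of pixel (u, v) with the ground plane Z = 0 of the
        # body frame; this resolves the scale factor s of equations (1) and (2).
        d = np.linalg.inv(A) @ np.array([p_uv[0], p_uv[1], 1.0])
        d_body = R.T @ d                 # ray direction in the body frame
        o_body = -R.T @ t                # camera center in the body frame
        lam = -o_body[2] / d_body[2]
        return o_body + lam * d_body

    def residuals(params, matches, intrinsics):
        # params packs 6 unknown extrinsics (3 rotation, 3 translation) per camera,
        # cameras numbered 0..N-1; matches is a list of tuples
        # (cam_i, (u_i, v_i), cam_j, (u_j, v_j)), one per pair in a matching group.
        res = []
        for cam_i, p_i, cam_j, p_j in matches:
            pts = []
            for cam, p in ((cam_i, p_i), (cam_j, p_j)):
                rx, ry, rz, tx, ty, tz = params[6 * cam: 6 * cam + 6]
                pts.append(backproject_to_ground(p, intrinsics[cam],
                                                 euler_to_R(rx, ry, rz),
                                                 np.array([tx, ty, tz])))
            res.extend(pts[0] - pts[1])  # equation (2): the two body-frame points coincide
        return np.asarray(res)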
Additionally or alternatively, according to other exemplary embodiments, e.g., when the calibration pattern described above in connection with FIG. 1 is used, the dimensions of the pattern can also be determined in advance, e.g., a square with a side length of 1.5 meters. This information can then be used when establishing equations. For example, based on the feature point numbering illustrated above, the relations that can be established for feature point 1 may include, but are not limited to, one or more of the following:

distance between feature point 1 and feature point 2 = 1.5 meters;
distance between feature point 1 and feature point 3 = 1.5 meters;
distance between feature point 2 and feature point 4 = distance between feature point 1 and feature point 3;
distance between feature point 1 and feature point 2 = distance between feature point 3 and feature point 4; and so on.

Corresponding relative physical relationships can also be established for the other feature points. The above distances can be obtained by simple geometric computation on the known dimensions of the calibration pattern. The present disclosure is not limited to using distances to establish relations; additionally or alternatively, other parameters, such as length relations, relative positions, and so on, may be used. Constraint equations can likewise be established through such relations.
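Continuing the same illustrative sketch (reusing backproject_to_ground and numpy as np from it), a known dimension such as the 1.5 m side length above can be expressed as one more residual; the function name and default value are hypothetical:

    def side_length_residual(p1_uv, p2_uv, A, R, t, side=1.5):
        # Distance between two back-projected corner points minus the known side
        # length; zero when the reconstructed edge measures exactly `side` meters.
        P1 = backproject_to_ground(p1_uv, A, R, t)
        P2 = backproject_to_ground(p2_uv, A, R, t)
        return np.linalg.norm(P1 - P2) - side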
According to other exemplary embodiments, if the shape or geometric properties of the calibration pattern are known, corresponding relations can be established even without knowing the pattern's specific dimensions. For example, image points from two cameras map the same feature point in the physical world, and an equation is established from the equality of the coordinates of the image points after conversion through their respective intrinsics and extrinsics.

Although image points and feature points are described above using "points" as an example, the present disclosure is not limited to point-like image points and feature points. In fact, image points and feature points can refer to other features, e.g., straight lines, parallel lines, perpendicular lines, right angles, areas, specific angles, colors, characters, and so on. For example, according to other embodiments, for line-type feature points, the equations that can be established may include parallel or perpendicular relations between detected lines, or other geometric relations, and may further include distances between parallel lines, and so on.
The calibration data processing scheme 200 may further include:

4) Nonlinear optimization

Using all the equations established in step 3), nonlinear optimization can solve for all unknown variables, i.e., all camera extrinsics, simultaneously. Extrinsics can be expressed in many ways: they can be solved in angle-and-displacement form, matrix form, quaternion form, rotation vector form, etc. As those skilled in the art know, these representations can readily be converted into one another.

Nonlinear optimization methods generally adopt iteration: starting from an arbitrary initial point, new solutions are produced by repeated iteration, finally converging to the optimum. Commonly used nonlinear optimization solvers include, e.g., gradient descent, steepest descent, Gauss-Newton, Levenberg-Marquardt, and nonlinear least squares. The present disclosure is not limited in this respect.
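By way of a non-limiting illustration of invoking one such solver (continuing the sketch after equation (2); residuals, matches, and intrinsics are assumed defined there, and num_cameras is a hypothetical example value):

    import numpy as np
    from scipy.optimize import least_squares

    num_cameras = 4                                # e.g., front, rear, left, right
    x0 = np.zeros(6 * num_cameras)                 # arbitrary start, or homography-based initial extrinsics
    sol = least_squares(residuals, x0, args=(matches, intrinsics),
                        method="lm")               # "lm": Levenberg-Marquardt
    extrinsics = sol.x.reshape(num_cameras, 6)     # 3 rotation + 3 translation values per camera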
The scheme of the present disclosure can establish a large number of equations and can therefore reduce the impact of certain detection errors. Traditional calibration methods calibrate each camera independently; however accurate that calibration may be, it cannot guarantee the surround-view result, because the calibration process does not account for stitching misalignment. The present disclosure creatively exploits the fact that the surround-view cameras form a closed loop and globally optimizes the extrinsics with a nonlinear solver, which can greatly improve calibration accuracy and enhance the robustness of the results.

After the globally optimized extrinsics are solved, the points of the camera images transformed into the bird's-eye view through the calibration data can be made to coincide, thereby reducing stitching misalignment in the surround view.

FIG. 4 shows a diagram of calibration patterns 400 according to some embodiments of the present disclosure. According to various embodiments, the calibration patterns 400 may include feature points on the ground, existing feature points (e.g., lane lines, parking space lines, etc.), man-made feature points (e.g., calibration cloth, calibration boards, etc.), or some irregular patterns (e.g., numbers, letters, etc.). Different calibration patterns, colors, and sizes can all be used with this scheme, as long as the pattern contains feature points.

As shown in FIG. 4, the calibration pattern may include various geometric shapes, characters, or combinations thereof, and may include feature points identified by different colors, characters, numbers, or symbols. Feature points may include, but are not limited to, easily recognizable image features such as corner points, intersections, parallel lines, and perpendicular lines.

The calibration pattern of the present disclosure is simple to set up at the calibration site: there is no need to know the feature points' coordinates, no need to place the pattern according to position, distance, or other conditions, and no need to restrict where the vehicle stops during calibration or the exact route it drives past the pattern; it is only necessary to ensure that the pattern appears in the cameras' common field of view during the calibration process.

The calibration process of the present disclosure may be dynamic, i.e., the vehicle is in motion during calibration. FIG. 5 shows a schematic diagram 500 of a vehicle during the calibration process according to an aspect of the present disclosure. As shown, at least one piece (or two or more pieces) of calibration cloth can be laid at the front side of the vehicle, or an existing ground pattern used as the calibration pattern; starting with the vehicle front behind or diagonally behind the pattern, the vehicle drives past the side of the pattern, and calibration data acquisition is complete when the vehicle rear has passed the pattern. Depending on the circumstances, this process can be completed in one pass or repeated several times. After the calibration data is processed, calibration is complete. The process ensures that the pattern appears in the cameras' fields of view during certain periods; the vehicle-to-cloth distance, driving speed, and trajectory do not affect the implementation of this example.

If the site is small and the vehicle cannot drive, the calibration process can also be static, i.e., the vehicle is stationary. For example, the vehicle can be parked in an environment surrounded by feature patterns, such as a parking space or calibration bay, without restricting the vehicle's position or knowing the patterns' dimensions and positions in advance. As another example, calibration patterns can be placed around the vehicle without using any props to arrange them by position, distance, or other conditions.
FIG. 6 shows a flowchart of a method 600 for fast automatic calibration of vehicle-mounted surround-view cameras according to an aspect of the present disclosure. The method 600 may include, e.g., at block 602, performing feature point detection.

The method 600 further includes, at block 604, performing feature point sorting and matching.

The method 600 further includes, at block 606, establishing equations.

The method 600 further includes, at block 608, performing nonlinear optimization on all established equations to determine the camera extrinsics.

For block 602, according to at least some exemplary embodiments, if a dynamic calibration process is used, the ground features in each camera image of every frame can be detected and the image coordinates, camera number, and frame number of the corresponding feature points recorded.

According to at least some other exemplary embodiments, if a static calibration process is used, the feature points in each camera image of the same frame can be detected and the image coordinates, camera number, and frame number of the corresponding feature points recorded. The detection methods of the present disclosure are varied and can be flexibly adapted to different calibration patterns.
FIG. 7 shows a surround-view fisheye image 700 captured by a fisheye camera according to an exemplary embodiment of the present disclosure. In this example, only the black blocks of the pattern are used as the calibration pattern; the calibration patterns described above with reference to FIG. 4, or other suitable patterns, can also be used.

To determine the feature points in the calibration pattern, a contour extraction algorithm can be used, e.g., the findContours algorithm in OpenCV, or operators such as Roberts, Prewitt, Sobel, and Laplacian.

Using the contour extraction algorithm, the outlines of the black squares can be detected on the original image. Quadrilaterals can then be filtered out according to the number of corner points of the contours, and the corners' coordinates in the camera coordinate system obtained. Corner detection can use detection algorithms including but not limited to, e.g., the Harris, KLT, SIFT, SUSAN, Trajkovic, or Moravec operators.
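As a purely illustrative, non-limiting sketch of this contour-and-corner step (in Python with OpenCV; the threshold and area values are assumptions of this sketch):

    import cv2

    def detect_square_corners(gray):
        # Dark squares become foreground after inverse thresholding.
        _, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        quads = []
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4 and cv2.contourArea(approx) > 500:  # keep quadrilaterals
                quads.append(approx.reshape(4, 2))                  # four corners (u, v)
        return quads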
FIG. 8 shows the edges and corner points of one black block detected on the original image captured by the camera according to an exemplary embodiment of the present disclosure. As can be seen, using a fisheye camera or the like as the vehicle's surround-view camera introduces considerable distortion.

According to at least some further embodiments, when cameras with severe distortion such as fisheye cameras are used, the fisheye image may also be converted to a bird's-eye view using the initial extrinsics, and the calibration pattern then detected on the bird's-eye view, to improve the detection rate and accuracy.

Edge detection methods such as Hough can be used to detect the black blocks' edges, and the edge intersections can be detected as feature corner points; if the calibration pattern consists of parking space lines, this method can also be used for detection.

According to an exemplary embodiment, the initial extrinsics can be solved, for example, as follows: take a specific point on the calibration pattern (e.g., the top-left corner of a black square) as the origin, i.e., the (0, 0) point, and the two edges passing through (0, 0) as the X and Y axes to establish a world coordinate system; using the square's side-length information, enumerate the coordinates of the four corner points in the world coordinate system and, combining the image points detected as described above, solve for the homography matrix H as the initial value (e.g., using the findHomography function in OpenCV), where H = A[R t] and A is the intrinsic matrix, assumed known by default in this application.

In this way, R and t, i.e., the extrinsic rotation matrix and translation vector, are easily solved. For example, if the feature pattern used for calibration is a checkerboard, the findChessboardCorners function in OpenCV can detect the checkerboard corners on the original image, or on a bird's-eye view generated by the same method as above. As another example, if the calibration pattern is irregular, ORB feature detection or similar methods can be used to detect the feature points.
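A purely illustrative, non-limiting sketch of this initialization (in Python with OpenCV and numpy; the corner ordering and the function name are assumptions of this sketch): the square's four corners are written in the world coordinate system just described, H is estimated with findHomography, and R and t are recovered from H = A[r1 r2 t], which holds for points on the Z = 0 plane:

    import numpy as np
    import cv2

    def initial_extrinsics(img_corners, A, side=1.5):
        # World coordinates of the square's corners, ordered to match img_corners:
        # top-left (the origin), top-right, bottom-right, bottom-left.
        world = np.array([[0, 0], [side, 0], [side, side], [0, side]], dtype=np.float64)
        H, _ = cv2.findHomography(world, np.asarray(img_corners, dtype=np.float64))
        M = np.linalg.inv(A) @ H
        s = 1.0 / np.linalg.norm(M[:, 0])    # scale making the rotation columns unit length
        r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
        R = np.column_stack([r1, r2, np.cross(r1, r2)])  # third axis completes the rotation
        return R, t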
FIG. 9 shows the edges and corner points of a black block detected on a bird's-eye view generated using the initialization data according to an exemplary embodiment of the present disclosure. As can be seen, converting the fisheye image to a bird's-eye view using the initial extrinsics, and then detecting the calibration pattern on the bird's-eye view, can significantly improve the detection rate and accuracy.

For block 604, the feature points detected in the same frame in block 602 can be sorted. According to exemplary embodiments, it can be determined which feature point of the calibration pattern each detected image point corresponds to. According to some exemplary embodiments, the confirmed image points can be saved in the order corresponding to the feature points.

Feature point matching means that, for feature points that appear in the common area of two or more cameras in the same frame, the image points to which they map in the respective camera images form a group of matching points, and each group is saved accordingly.

If the pattern used for calibration is relatively simple or regular, e.g., the calibration pattern shown in FIG. 9 is a black square with only four corner points (top-left, top-right, bottom-left, bottom-right), then even if they appear in different camera images, they can be matched one-to-one based on the corners' relative positions, yielding four groups of matching points.

If the calibration pattern is irregular or relatively complex, feature point matching can be performed using feature detection and descriptor matching methods, e.g., ORB feature detection followed by extracting and matching the feature descriptors, to obtain and save the matching points.
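As a purely illustrative, non-limiting sketch of such descriptor matching (in Python with OpenCV; the feature budget and match cap are assumptions of this sketch):

    import cv2

    def orb_match(img1, img2, max_matches=50):
        orb = cv2.ORB_create(nfeatures=500)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            return []
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        # Each match pairs an image point in one camera image with one in the other.
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_matches]]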
For block 606, if a dynamic calibration process is used, equations can be established from the relative physical relationships, such as position and distance, between the physical points mapped by different image points across the frames. No specific physical coordinate values need appear in the equations.

Similarly, if a static calibration process is used, it suffices to enumerate the relative physical relationship equations established from points detected in the same frame. The number of equations must at least exceed the total number of extrinsic variables to be solved, and every extrinsic variable must appear in the equations.

There are many relative relationships between physical points; a few typical examples are listed here:

Example 1: two or more image points of a feature point detected in the cameras' common field of view in the same frame (i.e., the matching points stored in block 604) actually map the same physical point, i.e., their physical coordinates are equal. The two image points are passed through their respective intrinsic and extrinsic matrices to obtain physical coordinates in the vehicle body coordinate system, and an equation is established from the equality of the two physical coordinates.

Example 2: two image points map adjacent corner points of the same square black block in a frame; the distance between the two corners equals the side length, so an equation can also be established. This equation relies on the premise that the dimensions of a simple pattern are known.

Example 3: even without specific distance or dimension values, equations can be established from known relative relationships (e.g., equality or known differences) of certain distances, dimensions, or areas. For example, the calibration pattern is known to be a square, but its side length is unknown; the equality of the square's side lengths can then be used to establish equations.

For different feature patterns, the relative physical relationships between feature points differ and can be used flexibly. But the constraint that each group of matched image points mapping the same feature point has equal physical coordinates after conversion through their respective intrinsics and extrinsics is usable with any feature pattern.
For block 608, all the equations established in block 606 can be used to solve for all unknown variables, i.e., all camera extrinsics, simultaneously via nonlinear optimization. Extrinsics can be expressed in many ways, e.g., solved in angle-and-displacement form, matrix form, or quaternion or rotation vector form. Nonlinear optimization methods are likewise varied, e.g., Gauss-Newton, Levenberg-Marquardt, and nonlinear least squares; no particular solver is required.

FIG. 10 shows surround-view results 1000 after calibration using the dynamic and static calibration processes, respectively, according to exemplary embodiments of the present disclosure, where the left image shows the surround view after calibration with the dynamic calibration process and the right image shows the surround view after calibration with the static calibration process.
FIG. 11 shows a block diagram of a fast automatic calibration apparatus 1100 for vehicle-mounted surround-view cameras according to an aspect of the present disclosure.

As shown in FIG. 11, a fast automatic calibration apparatus 1100 for vehicle-mounted surround-view cameras according to an aspect of the present disclosure may include, but is not limited to, one or more cameras 1102, a processor 1104, a memory 1106, a display 1108, an image processing module 1110, an image point detection module 1112, a feature point sorting and matching module 1114, and a constraint equation establishment and solving module 1116.

According to exemplary embodiments, the one or more cameras 1102 may include vehicle-mounted surround-view cameras. For example, the one or more cameras 1102 may include wide-angle cameras (e.g., fisheye cameras) mounted at the front, rear, left, and right of the vehicle. The one or more cameras 1102 can capture video images of the vehicle's surroundings and provide them to the image processing module 1110 in the form of image frames.

The image processing module 1110 can perform various image preprocessing, including denoising, initial fisheye-to-bird's-eye-view conversion, and so on. The preprocessed images can be provided to the image point detection module 1112.

The image point detection module 1112 can detect the image points in each frame in the manner described above in connection with the embodiments of the present disclosure, and record information such as the image coordinates, camera number, and frame number of the corresponding image points. For example, the recorded information can be saved in the memory 1106 and/or passed to the feature point sorting and matching module 1114.

The feature point sorting and matching module 1114 can sort the detected image points to determine which feature point of the calibration pattern each image point detected in each frame corresponds to, treat the image points in the same frame that correspond to the same feature point of the calibration pattern as a group of matching points, and record information such as their frame numbers, camera numbers, and image coordinates. For example, the recorded information can be saved in the memory 1106 and/or passed to the constraint equation establishment and solving module 1116.

The constraint equation establishment and solving module 1116 can establish constraint equations based on prior knowledge about the feature points and solve for the camera extrinsics using nonlinear optimization. The solved camera extrinsics can be provided to the image processing module 1110 and/or stored in the memory 1106.

Thereafter, during normal driving, the one or more cameras 1102 can capture video images of the vehicle's surroundings and provide them to the image processing module 1110 in the form of image frames. The image processing module 1110 can process (e.g., pre-distort) the captured image frames based on the camera extrinsics and data such as the camera intrinsics stored in the memory. According to exemplary embodiments, the processed data can be provided to an autonomous driving system (not shown) for autonomous driving. According to other exemplary embodiments, the processed data can be surround-stitched, and the result provided to the display 1108 for display to a user (e.g., the driver) to assist driving, parking, and the like.

In the example of FIG. 11, the image processing module 1110, the image point detection module 1112, the feature point sorting and matching module 1114, and the constraint equation establishment and solving module 1116 are described as independent modules, which can be implemented by, e.g., dedicated hardware, firmware, programmable circuits, or ASICs. In alternative embodiments, however, one or more or all of these modules can also be stored as software modules in the memory 1106 and executed by the processor 1104 to implement the functions described above in connection with the respective modules. In addition, the above functional division of the modules is exemplary; the functions of the modules can be merged into a single module or further split to be performed by different modules.
The present invention exploits characteristics of surround-view cameras such as their shared fields of view and the closed loop they form, and uses the relative relationships between physical points to bypass the physical point coordinate values that traditional calibration methods require but that are difficult to obtain accurately. This releases the constraint on where the feature points must be located, solves the problem in traditional calibration of the calibration site affecting the accuracy of the physical coordinates, and reduces the influence of the calibration environment on calibration.

Since the method no longer needs physical coordinates, there is no need to measure coordinate values when setting up the calibration site, or to place calibration patterns according to position, distance, or other conditions. This simplifies the calibration procedure, improves calibration accuracy and the robustness of the results, increases calibration efficiency, reduces the impact of environmental factors on calibration, and improves the surround-view stitching result.

The present invention collects a large amount of calibration pattern data and establishes many constraint equations, so the calibration accuracy is high and the results are very robust. With the joint global optimization across the surround-view cameras, the surround-view stitching result is optimized. At the same time, since the solving process does not need the calibration points' coordinates in the vehicle body coordinate system, the absolute coupling between site precision and calibration precision is broken, the negative impact of the calibration environment is reduced, and the calibration process is greatly simplified.

The present invention requires no precise calibration site, no measurement of feature point coordinates, no placement of calibration patterns by position or distance conditions, and no restriction on where the vehicle stops during calibration. The calibration environment is simple to set up, requires no other measuring props, and the calibration conditions are easy to realize. The calibration process is fast and convenient, fully automatic without manual intervention, with high accuracy and excellent surround-view stitching. The present invention can be used for high-volume production-line calibration, as well as for after-sales or aftermarket calibration in adverse environments.
Embodiments of the present disclosure can be implemented by corresponding methods, apparatuses, devices, and programs (e.g., programs stored on a computer-readable medium and executable by a processor). For example, the methods, apparatuses, devices, etc. of the present disclosure and/or the detection methods, apparatuses, devices, etc. can be implemented on clients, enterprise servers, third-party servers, and other equipment. Methods, apparatuses, and devices containing or implementing embodiments of the present disclosure can be implemented in software, hardware, or firmware, all of which fall within the scope of the present disclosure. When implemented in software or firmware, the corresponding program code can be stored on media such as floppy disks, optical discs, DVDs, hard disks, flash memory, USB drives, CF cards, SD cards, MMC cards, SM cards, Memory Sticks, XD cards, or SDHC cards, or transmitted via communication media, and executed by, e.g., a processor to implement the corresponding functions or parts thereof, or any combination of the functions.

The above are merely exemplary specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or substitution readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium known in the art. Some examples of usable storage media include random access memory (RAM), read-only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disks, removable disks, CD-ROMs, and so on. A software module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read and write information from/to the storage medium. In the alternative, the storage medium may be integral to the processor.

The methods disclosed herein comprise one or more steps or actions for achieving the described methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
A processor may execute software stored on a machine-readable medium. A processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry capable of executing software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. By way of example, machine-readable media may include RAM (random access memory), flash memory, ROM (read-only memory), PROM (programmable read-only memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable medium may be embodied in a computer program product. The computer program product may comprise packaging materials.

In a hardware implementation, the machine-readable medium may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable medium, or any portion thereof, may be external to the processing system. By way of example, the machine-readable medium may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the wireless node, all of which may be accessed by the processor through a bus interface. Alternatively or additionally, the machine-readable medium, or any portion thereof, may be integrated into the processor, as may be the case with a cache and/or general register file.

The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable medium, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may be implemented with an ASIC (application-specific integrated circuit) with the processor, the bus interface, the user interface (in the case of an access terminal), supporting circuitry, and at least a portion of the machine-readable medium integrated into a single chip, or with one or more FPGAs (field-programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.
A machine-readable medium may include a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When the functionality of a software module is referred to below, it will be understood that such functionality is implemented by the processor when it executes instructions from that software module.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects, computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects, computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
Accordingly, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging materials.
It will be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.

Claims (13)

  1. A calibration method for vehicle-mounted surround-view cameras, comprising:
    obtaining one or more frames captured by a plurality of cameras mounted at a plurality of positions on a vehicle, wherein each frame comprises a plurality of images captured simultaneously by the plurality of cameras;
    for each frame, detecting image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern comprising one or more feature points;
    for each frame, determining, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship;
    using the respective image coordinates of each group of matching points, establishing a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern, wherein the number of the plurality of constraint equations is greater than or equal to the total number of extrinsic parameters to be solved of the plurality of cameras; and
    simultaneously performing nonlinear optimization on the plurality of constraint equations to simultaneously determine the extrinsic parameters to be solved of each of the plurality of cameras, wherein
    the relative physical relationships of the feature points of the calibration pattern include one or more of the following, or any combination thereof:
    at least two image points, detected in the same frame in the common field of view of at least two cameras and corresponding to the same feature point of the calibration pattern, map to the same physical point;
    known distance, length, or area values of the feature points of the calibration pattern; and
    relative physical relationships among the distances, coordinates, or lengths of the feature points of the calibration pattern.
  2. The calibration method of claim 1, wherein the calibration pattern comprises an existing, non-preset pattern on the ground or a preset calibration pattern placed on the ground, and the calibration pattern comprises one or more identical or different calibration patterns.
  3. The calibration method of claim 1, wherein the plurality of cameras comprises any plurality of cameras capable of forming a surround view.
  4. The calibration method of claim 1, wherein the one or more frames are captured while the vehicle is in motion or at rest.
  5. The calibration method of claim 1, wherein the feature points comprise corner points and optionally further comprise one or more of the following, or any combination thereof: straight lines, parallel lines, and perpendicular lines.
  6. The calibration method of claim 1, wherein, for each frame, detecting the image points related to the calibration pattern in the plurality of images captured by the plurality of cameras further comprises:
    recording the detected image points together with their corresponding frame numbers, camera numbers, and image coordinates.
  7. The calibration method of claim 6, wherein, for each frame, determining, as a group of matching points, the plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship further comprises:
    recording the frame number, camera number, and image coordinates of each image point in the determined group of matching points.
  8. The calibration method of claim 1, wherein the cameras comprise cameras with distortion, and the calibration method further comprises:
    before detecting the image points, converting the images captured by the cameras into a surround-stitched bird's-eye view using the cameras' initial extrinsic parameters; and
    detecting the image points on the surround-stitched bird's-eye view.
  9. The calibration method of claim 8, wherein converting the images captured by the cameras into a surround-stitched bird's-eye view using the cameras' initial extrinsic parameters comprises:
    establishing a world coordinate system with a specific point on the calibration pattern as the origin;
    determining, based on prior information about the calibration pattern, the coordinates that the feature points of the calibration pattern should have in the world coordinate system; and
    solving for initial values of the extrinsic parameter matrix based on the detected image points corresponding to the feature points.
  10. The calibration method of claim 1, further comprising determining the plurality of image points corresponding to feature points of the calibration pattern having a relative physical relationship.
  11. A calibration apparatus for vehicle-mounted surround-view cameras, comprising:
    a module for obtaining one or more frames captured by a plurality of cameras mounted at a plurality of positions on a vehicle, wherein each frame comprises a plurality of images captured simultaneously by the plurality of cameras;
    a module for detecting, for each frame, image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern comprising one or more feature points;
    a module for determining, for each frame, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship;
    a module for establishing, using the respective image coordinates of each group of matching points, a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern, wherein the number of the plurality of constraint equations is greater than or equal to the total number of extrinsic parameters to be solved of the plurality of cameras; and
    a module for simultaneously performing nonlinear optimization on the plurality of constraint equations to simultaneously determine the extrinsic parameters to be solved of each of the plurality of cameras, wherein
    the relative physical relationships of the feature points of the calibration pattern include one or more of the following, or any combination thereof:
    at least two image points, detected in the same frame in the common field of view of at least two cameras and corresponding to the same feature point of the calibration pattern, map to the same physical point;
    known distance, length, or area values of the feature points of the calibration pattern; and
    relative physical relationships among the distances, coordinates, or lengths of the feature points of the calibration pattern.
  12. A calibration device for vehicle-mounted surround-view cameras, comprising:
    a memory; and
    a processor coupled to the memory and configured to:
    obtain one or more frames captured by a plurality of cameras mounted at a plurality of positions on a vehicle, wherein each frame comprises a plurality of images captured simultaneously by the plurality of cameras;
    for each frame, detect image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern comprising one or more feature points;
    for each frame, determine, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship;
    using the respective image coordinates of each group of matching points, establish a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern, wherein the number of the plurality of constraint equations is greater than or equal to the total number of extrinsic parameters to be solved of the plurality of cameras; and
    simultaneously perform nonlinear optimization on the plurality of constraint equations to simultaneously determine the extrinsic parameters to be solved of each of the plurality of cameras, wherein
    the relative physical relationships of the feature points of the calibration pattern include one or more of the following, or any combination thereof:
    at least two image points, detected in the same frame in the common field of view of at least two cameras and corresponding to the same feature point of the calibration pattern, map to the same physical point;
    known distance, length, or area values of the feature points of the calibration pattern; and
    relative physical relationships among the distances, coordinates, or lengths of the feature points of the calibration pattern.
  13. A computer-readable medium storing processor-executable instructions that, when executed by a processor, cause the processor to:
    obtain one or more frames captured by a plurality of cameras mounted at a plurality of positions on a vehicle, wherein each frame comprises a plurality of images captured simultaneously by the plurality of cameras;
    for each frame, detect image points related to a calibration pattern in the plurality of images captured by the plurality of cameras, the calibration pattern comprising one or more feature points;
    for each frame, determine, as a group of matching points, a plurality of image points in one or more images captured by the same or different cameras among the plurality of cameras that correspond to feature points of the calibration pattern having a relative physical relationship;
    using the respective image coordinates of each group of matching points, establish a plurality of constraint equations based on the relative physical relationships of the feature points of the calibration pattern, wherein the number of the plurality of constraint equations is greater than or equal to the total number of extrinsic parameters to be solved of the plurality of cameras; and
    simultaneously perform nonlinear optimization on the plurality of constraint equations to simultaneously determine the extrinsic parameters to be solved of each of the plurality of cameras, wherein
    the relative physical relationships of the feature points of the calibration pattern include one or more of the following, or any combination thereof:
    at least two image points, detected in the same frame in the common field of view of at least two cameras and corresponding to the same feature point of the calibration pattern, map to the same physical point;
    known distance, length, or area values of the feature points of the calibration pattern; and
    relative physical relationships among the distances, coordinates, or lengths of the feature points of the calibration pattern.
PCT/CN2022/130545 2021-12-09 2022-11-08 Fast automatic calibration method and apparatus for vehicle-mounted surround-view cameras WO2023103679A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111497464.4A CN114202588B (zh) 2021-12-09 2021-12-09 Fast automatic calibration method and apparatus for vehicle-mounted surround-view cameras
CN202111497464.4 2021-12-09

Publications (1)

Publication Number Publication Date
WO2023103679A1 true WO2023103679A1 (zh) 2023-06-15

Family

ID=80651511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/130545 WO2023103679A1 (zh) 2021-12-09 2022-11-08 一种车载环视相机快速自动标定方法和装置

Country Status (2)

Country Link
CN (1) CN114202588B (zh)
WO (1) WO2023103679A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102470298B1 * 2017-12-01 2022-11-25 엘지이노텍 주식회사 Camera calibration method and apparatus
CN115601450B * 2022-11-29 2023-03-31 浙江零跑科技股份有限公司 Surround-view calibration method and related apparatus, device, system, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160343136A1 (en) * 2014-01-27 2016-11-24 Xylon d.o.o. Data-processing system and method for calibration of a vehicle surround view system
US20180150976A1 (en) * 2016-11-25 2018-05-31 Continental Teves Ag & Co. Ohg Method for automatically establishing extrinsic parameters of a camera of a vehicle
CN109615659A * 2018-11-05 2019-04-12 成都西纬科技有限公司 Camera parameter acquisition method and apparatus for a vehicle-mounted multi-camera surround-view system
CN110288527A * 2019-06-24 2019-09-27 北京智行者科技有限公司 Method for generating a panoramic bird's-eye view from vehicle-mounted surround-view cameras

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014217939A1 * 2014-09-08 2016-03-10 Continental Automotive Gmbh Calibration of surround-view systems
CN105818746A * 2015-01-05 2016-08-03 上海纵目科技有限公司 Calibration method and system for a surround-view advanced driver assistance system
CN107622513A * 2017-07-31 2018-01-23 惠州市德赛西威汽车电子股份有限公司 Stitching-seam calibration point detection apparatus and automatic calibration method for a surround-view system
CN111435540A * 2019-01-15 2020-07-21 苏州沃迈智能科技有限公司 Surround-view stitching method for a vehicle-mounted surround-view system
CN109903341B * 2019-01-25 2023-09-08 东南大学 Dynamic self-calibration method for vehicle-mounted camera extrinsic parameters
US11435756B2 * 2019-12-01 2022-09-06 Nvidia Corporation Visual odometry in autonomous machine applications
WO2021196108A1 * 2020-04-02 2021-10-07 深圳市瑞立视多媒体科技有限公司 Method, apparatus, device, and storage medium for calibrating while scanning the field in large-space environments
CN111667538B * 2020-04-20 2023-10-24 长城汽车股份有限公司 Calibration method, apparatus, and system for a panoramic surround-view system
CN111640062B * 2020-05-15 2023-06-09 上海赫千电子科技有限公司 Automatic stitching method for vehicle-mounted surround-view images
CN111784778B * 2020-06-04 2022-04-12 华中科技大学 Binocular camera extrinsic calibration method and system based on linear solution and nonlinear optimization
CN111985300B * 2020-06-29 2023-11-03 魔门塔(苏州)科技有限公司 Dynamic target localization method and apparatus for autonomous driving, electronic device, and storage medium
CN112529966B * 2020-12-17 2023-09-15 豪威科技(武汉)有限公司 Online calibration method for a vehicle-mounted surround-view system, and the vehicle-mounted surround-view system
CN113281723B * 2021-05-07 2022-07-22 北京航空航天大学 AR-tag-based calibration method for structural parameters between a 3D lidar and a camera
CN113160336B * 2021-05-11 2023-07-07 北京易航远智科技有限公司 Vehicle-mounted surround-view camera calibration method for simple calibration environments

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160343136A1 (en) * 2014-01-27 2016-11-24 Xylon d.o.o. Data-processing system and method for calibration of a vehicle surround view system
US20180150976A1 (en) * 2016-11-25 2018-05-31 Continental Teves Ag & Co. Ohg Method for automatically establishing extrinsic parameters of a camera of a vehicle
CN109615659A * 2018-11-05 2019-04-12 成都西纬科技有限公司 Camera parameter acquisition method and apparatus for a vehicle-mounted multi-camera surround-view system
CN110288527A * 2019-06-24 2019-09-27 北京智行者科技有限公司 Method for generating a panoramic bird's-eye view from vehicle-mounted surround-view cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG TAO, XIE SHAO-RONG, PAN ZHI-HAO, LUO JUN(SCHOOL OF MECHANICAL AND ELECTRONIC ENGINEERING AND AUTOMATION, SHANGHAI UNIVERSITY: "A Quick and on-Line Self-Calibration Algorithm for External Camera Parameters Based on Binocular Vision", MECHANICAL ENGINEER., no. 1, 10 January 2009 (2009-01-10), pages 104 - 106, XP093069344 *

Also Published As

Publication number Publication date
CN114202588B (zh) 2022-09-23
CN114202588A (zh) 2022-03-18

Similar Documents

Publication Publication Date Title
US11157766B2 (en) Method, apparatus, device and medium for calibrating pose relationship between vehicle sensor and vehicle
WO2023103679A1 (zh) Fast automatic calibration method and apparatus for vehicle-mounted surround-view cameras
TWI517670B (zh) Automatic calibration of vehicle-mounted lenses and image conversion method and apparatus using the same
JP5588812B2 (ja) Image processing apparatus and imaging apparatus using the same
US20210103741A1 (en) Detection method and apparatus for automatic driving sensor, and electronic device
CN103985118A (zh) Camera parameter calibration method for a vehicle-mounted surround-view system
CN103177439A (zh) Automatic calibration method based on black-and-white checkerboard corner matching
WO2022078074A1 (zh) Method, system, and storage medium for detecting the positional relationship between a vehicle and lane lines
CN108596982A (zh) Simple calibration method and apparatus for a vehicle-mounted multi-camera surround-view system
EP3998580A2 (en) Camera calibration method and apparatus, electronic device, storage medium, program product, and road side device
US20200074660A1 (en) Image processing device, driving assistance system, image processing method, and program
CN110766762A (zh) Calibration method and calibration system for panoramic parking
CN111210386A (zh) Image capture and stitching method and system
CN110766761A (zh) Method, apparatus, device, and storage medium for camera calibration
CN104282010A (zh) Curve calibration method for stitching 360-degree top-view images from multiple vehicle fisheye cameras
CN113409396A (zh) Calibration method for an ADAS monocular camera
CN104276102B (zh) Surround-view system calibration apparatus based on vehicle position detection
CN111382591B (zh) Binocular camera ranging correction method and vehicle-mounted device
KR101697229B1 (ko) Automatic calibration apparatus based on lane information for vehicle image registration and method thereof
JP2012007972A (ja) Vehicle dimension measuring apparatus
CN203966198U (zh) Automobile panoramic camera calibration system
CN108376384A (zh) Disparity map correction method, apparatus, and storage medium
CN110543612A (zh) Container truck positioning method based on monocular vision measurement
WO2023184869A1 (zh) Semantic map construction and localization method and apparatus for indoor parking lots
CN113012239B (zh) Quantitative calculation method for focal-length change of roadside perception cameras in vehicle-road cooperation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22903102

Country of ref document: EP

Kind code of ref document: A1