CN115205396A - Combined calibration method and system for vehicle sensor, vehicle and storage medium - Google Patents

Combined calibration method and system for vehicle sensor, vehicle and storage medium

Info

Publication number
CN115205396A
CN115205396A
Authority
CN
China
Prior art keywords
edge feature
vehicle
edge
point cloud
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210748221.1A
Other languages
Chinese (zh)
Inventor
康宇宸
谭海宇
费贤松
彭思崴
吴绍权
夏宇峰
崔国才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Weilai Zhijia Technology Co Ltd filed Critical Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202210748221.1A priority Critical patent/CN115205396A/en
Publication of CN115205396A publication Critical patent/CN115205396A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application relates to a method and a system for jointly calibrating vehicle sensors, a vehicle and a storage medium. The method comprises the following steps: simultaneously detecting the environment of the vehicle through an image sensor and a laser radar; performing semantic segmentation on an image acquired through the image sensor to extract a first edge feature; extracting a second edge feature from the point cloud acquired by the laser radar, wherein the second edge feature comprises a plurality of points in the point cloud; aligning the first edge feature and the second edge feature to calibrate coarse external parameters of the image sensor and the laser radar respectively; and minimizing the distance between the first edge feature and the second edge feature to optimize the coarse external parameters and obtain respective optimized external parameters.

Description

Combined calibration method and system for vehicle sensor, vehicle and storage medium
Technical Field
The present application relates to calibration of vehicle sensors, and more particularly, to a method of joint calibration of vehicle sensors, a system for joint calibration of vehicle sensors, a vehicle, and a computer-readable storage medium.
Background
Autonomous driving and driver-assistance technologies have developed rapidly in recent years, and laser radar (LiDAR) and cameras are the two most important sensors in the field of autonomous driving perception. The environmental information acquired by a camera is regular, ordered and dense, but a camera is sensitive only to light and provides no distance information. A laser radar makes up for this weakness: it can accurately measure the distance to objects, although its point cloud is much sparser than image data. Fusing these two sensors therefore enables an autonomous vehicle to better understand its external environment, and this fusion has become an important research direction in recent years. Sensor fusion algorithms require precise external parameters to transform coordinates between the two sensors and achieve data alignment. Online automatic calibration algorithms with high accuracy and robustness are thus becoming increasingly important.
Current mainstream laser radar-camera external parameter calibration methods depend heavily on high-precision calibration targets and specific environments. In conventional calibration, calibration features are manually detected, extracted and matched between the image and the point cloud. These methods are time consuming and labor intensive and are not feasible in many outdoor scenarios. Furthermore, sensor calibration is usually performed only once, under the assumption that the external calibration remains unchanged for the remaining life cycle of the sensor suite. In practice, the calibration may drift due to calibration errors and the various conditions encountered during driving.
In view of this, an online calibration algorithm is needed to detect and correct such calibration errors.
Disclosure of Invention
Embodiments of the application provide a joint calibration method for vehicle sensors, a joint calibration system for vehicle sensors, a vehicle and a computer-readable storage medium, which are used to jointly calibrate a camera sensor and a laser radar of the vehicle online.
According to one aspect of the application, a joint calibration method for vehicle sensors is provided. The method comprises the following steps: simultaneously detecting the environment of the vehicle through an image sensor and a laser radar; performing semantic segmentation on an image acquired through the image sensor to extract a first edge feature; extracting a second edge feature from the point cloud acquired by the laser radar, wherein the second edge feature comprises a plurality of points in the point cloud; aligning the first edge feature and the second edge feature to calibrate coarse external parameters of the image sensor and the lidar, respectively; and minimizing the distance between the first edge feature and the second edge feature to optimize the coarse external parameters and obtain respective optimized external parameters.
In some embodiments of the present application, optionally, performing semantic segmentation on the image acquired by the image sensor to extract the first edge feature includes: performing semantic segmentation on the image to identify a calibration object; and extracting the edge of the calibration object to obtain the first edge feature.
In some embodiments of the present application, optionally, the semantic segmentation is performed based on a Mask-RCNN network, and/or the edge of the calibration object is extracted based on a Canny edge detection algorithm.
In some embodiments of the present application, optionally, extracting a second edge feature from the point cloud acquired by the lidar includes: extracting the second edge feature using depth discontinuities of the point cloud.
In some embodiments of the present application, optionally, extracting the second edge feature using depth discontinuities of the point cloud comprises: extracting transverse adjacent points and longitudinal adjacent points of each point in the point cloud; determining a transverse threshold and a longitudinal threshold according to the depth distances between each point and its transverse and longitudinal adjacent points, respectively; and extracting the second edge feature according to the transverse threshold and the longitudinal threshold, respectively.
In some embodiments of the present application, optionally, extracting the second edge feature using depth discontinuity of the point cloud further comprises: determining a sky position according to invalid points in the point cloud; determining a roof position of a vehicle as a calibration object based on the sky position; and using a point cloud representing the roof as part of the second edge feature.
In some embodiments of the present application, optionally, aligning the first edge feature and the second edge feature to calibrate the coarse external parameters of the image sensor and the lidar, respectively, comprises: using a grid search method to maximize the number of matched feature points between the first edge feature and the second edge feature, so as to calibrate the coarse external parameters of the image sensor and the lidar.
In some embodiments of the present application, optionally, minimizing the distance between the first edge feature and the second edge feature to optimize the coarse external parameters to obtain optimized external parameters comprises: determining an optimal solution of the distance from each point in the second edge feature to the first edge feature by a least square method; and determining the optimized external parameters according to the optimal solution.
In some embodiments of the present application, optionally, the image sensor and the lidar detect an environment in which the vehicle is located during operation of the vehicle.
In some embodiments of the present application, optionally, the image sensor and the lidar detect the environment of the vehicle again at a predetermined period.
According to another aspect of the present application, a joint calibration system for vehicle sensors is provided. The system comprises: a memory configured to store instructions; and a processor configured to execute the instructions to perform any of the above-described methods of joint calibration of vehicle sensors.
According to another aspect of the application, a vehicle is provided, comprising a joint calibration system for any one of the vehicle sensors as described above.
According to another aspect of the present application, there is provided a computer readable storage medium having instructions stored therein, wherein the instructions, when executed by a processor, cause the processor to perform any one of the above-described methods for joint calibration of a vehicle sensor.
According to the joint calibration method for vehicle sensors, the joint calibration system for vehicle sensors, the vehicle and the computer-readable storage medium of the application, the edge features extracted from the depth discontinuities of the laser radar point cloud and the semantically segmented vehicle edge features can be combined for calibration, and the external parameter calibration result is optimized through alignment and minimization of the projection distance.
Drawings
The above and other objects and advantages of the present application will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar elements are designated by like reference numerals.
FIG. 1 illustrates a method for joint calibration of vehicle sensors according to an embodiment of the present application;
FIG. 2 illustrates a joint calibration system for vehicle sensors according to an embodiment of the present application;
FIG. 3 illustrates a method for joint calibration of vehicle sensors according to an embodiment of the present application;
FIG. 4 illustrates a first edge feature extracted according to an embodiment of the present application;
FIG. 5 illustrates a second edge feature extracted according to an embodiment of the present application.
Detailed Description
For the purposes of brevity and explanation, the principles of the present application are described herein with reference primarily to exemplary embodiments thereof. However, those skilled in the art will readily appreciate that the same principles are equally applicable to all types of methods of joint calibration of vehicle sensors, joint calibration systems for vehicle sensors, vehicles, and computer-readable storage media, and that these same or similar principles may be implemented therein, with any such variations not departing from the true spirit and scope of the present application.
In conventional calibration methods, a high-precision calibration board is generally used for joint calibration. In recent years, with the penetration of deep learning into various fields, calibration methods based on deep neural networks have also been proposed. However, these methods place high requirements on the scene and are not well suited to adjusting the extrinsic parameters in ordinary driving scenes to maximize mutual information and thereby obtain optimal extrinsic parameters. The application provides an online calibration method that uses the edge features extracted from the depth discontinuities of the laser radar point cloud and the semantically segmented vehicle edge features as calibration target features, and optimizes the external parameter calibration result through alignment and minimization of the projection distance.
According to one aspect of the application, a method for joint calibration of vehicle sensors is provided. As shown in FIG. 1, a method 10 for jointly calibrating vehicle sensors (hereinafter referred to as method 10) includes the following steps: in step S102, the environment of the vehicle is detected by the image sensor and the lidar simultaneously; in step S104, semantic segmentation is performed on the image acquired through the image sensor to extract a first edge feature; in step S106, a second edge feature is extracted from the point cloud acquired by the lidar; in step S108, the first edge feature and the second edge feature are aligned to calibrate coarse external parameters of the image sensor and the lidar respectively; and in step S110, the distance between the first edge feature and the second edge feature is minimized to optimize the coarse external parameters and obtain respective optimized external parameters. The above steps of method 10 enable simultaneous online calibration of the image sensor and the lidar, and the calibration does not depend on a specific calibration object (e.g., a checkerboard). The working principle of the individual steps of method 10 is explained in detail below.
In step S102, method 10 detects the environment of the vehicle with the image sensor and the lidar simultaneously. With reference to the example in FIG. 3, the purpose of step S102 is to acquire environmental data, specifically an image and point cloud data of the environment captured by the image sensor and the lidar, respectively. The two sensors are required to detect at the same time because the calibration objects used for the joint calibration must be consistent between them: if the scenes observed by the two sensors deviate from each other, the joint calibration result will be biased, or calibration may fail entirely.
On the other hand, as will be appreciated, in a vehicle configured for driver assistance or autonomous driving the fields of view of the image sensor and the lidar overlap at least partially, and the scene in this overlapping portion of the field of view is the target of the subsequent operations. In some vehicle configurations the two fields of view are nearly identical, or the field of view of one sensor completely encompasses that of the other. Furthermore, a vehicle may have more than one image sensor, and the image used for calibration may be combined from the fields of view of these image sensors. In this case, the calibration external parameters obtained below are applied to the system composed of these image sensors.
As shown in FIG. 3, the image acquired by the image sensor is used to extract features. First, the image is fed into a semantic segmentation network to obtain a semantic image, and then the edges of the objects in the semantic image are detected to obtain the edge features of the image. To this end, in step S104 method 10 performs semantic segmentation on the image acquired by the image sensor to extract an edge feature (hereinafter referred to as the first edge feature for distinction). Specifically, in step S104 the image may first be semantically segmented to identify the calibration object, and the edge of the calibration object is then extracted to obtain the edge feature. It should be noted that the calibration target is not specified in advance but is obtained by analyzing the live image captured by the image sensor. Of course, to improve the calibration effect, step S102 may deliberately perform detection when the scene features are more distinct, for example when the objects in the scene remain stationary and, empirically, have more recognizable features.
In some embodiments of the present application, semantic segmentation may be implemented with a Mask-RCNN network, and the edge of the calibration object may be extracted with the Canny edge detection algorithm. With the rapid development and maturation of deep learning, semantic-segmentation-based learning algorithms can provide better accuracy and robustness than traditional edge extraction methods such as Canny and LSD. Therefore, in step S104 the original image data may first be segmented using the Mask-RCNN network, with the semantic categories "Car", "Truck" and "Bus" selected as calibration target objects. The corresponding regions are marked as white areas, and finally the edge features of the target image are obtained using, for example, the Canny algorithm. As shown in FIG. 4, the top shows the edges extracted directly with the Canny algorithm, and the bottom shows the edge features obtained after using the semantic segmentation information. It can be seen that extracting edge features with the semantic segmentation method is clearly superior to the traditional Canny algorithm alone.
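As an illustration of this image-side pipeline, the following Python sketch (using OpenCV) applies Canny edge detection to a pre-computed segmentation mask. The function name, the Canny thresholds and the synthetic mask are illustrative assumptions, not part of the patent.

import cv2
import numpy as np

def extract_first_edge_feature(seg_mask):
    """Return a binary edge map (the 'first edge feature') from a semantic
    segmentation mask in which calibration objects (e.g. vehicles) are 255."""
    # Light smoothing suppresses jagged mask borders before edge detection.
    blurred = cv2.GaussianBlur(seg_mask, (5, 5), 0)
    # Canny on the (near-)binary mask yields only the object contours,
    # avoiding the texture edges a raw camera image would produce.
    return cv2.Canny(blurred, 50, 150)

# Usage with a synthetic mask standing in for a Mask-RCNN output:
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(mask, (200, 250), (440, 400), 255, thickness=-1)
first_edge_feature = extract_first_edge_feature(mask)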
In step S106, method 10 extracts an edge feature (hereinafter referred to as the second edge feature for distinction) from the point cloud acquired by the lidar, where the second edge feature comprises a plurality of points in the point cloud. In some embodiments of the present application, as shown in FIG. 3, the point cloud acquired by the lidar is used to extract features. The feature extraction is divided into transverse feature extraction and longitudinal feature extraction, and the two results are combined into the LiDAR edge features. Specifically, in step S106 the second edge feature may be extracted using the depth discontinuities of the point cloud. Extracting the second edge feature in step S106 may include: first, extracting the transverse adjacent points and longitudinal adjacent points of each point in the point cloud; second, determining a transverse threshold and a longitudinal threshold according to the depth distances between each point and its transverse and longitudinal adjacent points; and finally, extracting the second edge feature according to the transverse threshold and the longitudinal threshold, respectively. Here, the transverse threshold and the longitudinal threshold are used to decide whether a depth jump in the point cloud is large enough for the point to be kept as part of the second edge feature. In general the transverse threshold and the longitudinal threshold may take the same value; in other examples they may differ.
Due to occlusion between objects, the neighboring points of a depth discontinuity are very likely to lie on the edge contour of an object. Considering the laser radar measurement principle and computational efficiency, the laser radar points can be organized into an image form so that the transverse and longitudinal adjacent points are easy to obtain, after which operations such as ground removal and point interpolation are performed. Transverse edge features and longitudinal edge features can then be extracted separately using the depth discontinuities.
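A minimal Python sketch of this depth-discontinuity test is given below. It assumes the point cloud has already been organized into a range image (rows = beams, columns = azimuth) with invalid returns stored as NaN; the 0.5 m thresholds are illustrative values, not taken from the patent.

import numpy as np

def extract_second_edge_feature(range_img, lateral_thresh=0.5, longitudinal_thresh=0.5):
    """Return a boolean mask marking depth-discontinuity (edge) points."""
    depth = np.nan_to_num(range_img, nan=0.0)

    # Depth jump to the transverse neighbours (same ring, adjacent columns).
    lateral = np.zeros_like(depth)
    lateral[:, 1:-1] = np.maximum(np.abs(depth[:, 1:-1] - depth[:, :-2]),
                                  np.abs(depth[:, 1:-1] - depth[:, 2:]))

    # Depth jump to the longitudinal neighbours (adjacent rings).
    longitudinal = np.zeros_like(depth)
    longitudinal[1:-1, :] = np.maximum(np.abs(depth[1:-1, :] - depth[:-2, :]),
                                       np.abs(depth[1:-1, :] - depth[2:, :]))

    # Keep valid points whose depth jump exceeds either threshold.
    valid = depth > 0.0
    return valid & ((lateral > lateral_thresh) | (longitudinal > longitudinal_thresh))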
In some embodiments of the present application, the invalid points in the point cloud can be used to identify the position of the sky, and the vehicle roof can then be filtered out as an important transverse feature. Specifically, extracting the second edge feature in step S106 further includes the following steps: determining the sky position according to the invalid points in the point cloud; determining the roof position of a vehicle serving as the calibration object based on the sky position; and using the point cloud representing the roof as part of the second edge feature. Since the vehicle roof is a distinct edge, including the roof point cloud in the edge feature benefits calibration accuracy. If the roof point cloud is not introduced into the calibration process, the matching degree with the edge features extracted from the image may be low, which is unfavorable for the subsequent coarse calibration and fine calibration.
FIG. 5 illustrates the second edge feature extraction according to some embodiments of the present application, with the acquired point cloud array shown at the top and the second edge feature extracted from it at the bottom. As shown, the large dark area above the vehicle is the sky, where no valid point cloud is generated. The position of the roof can be determined from the position of the sky, and the point cloud representing the roof can then be used as part of the second edge feature.
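A possible implementation of this roof extraction is sketched below. It assumes row 0 of the range image corresponds to the highest beam, so that the sky appears as invalid cells at the top of a column; the layout and the 30 m range limit are assumptions for illustration only.

import numpy as np

def extract_roof_points(range_img, max_roof_range=30.0):
    """Mark the first valid return below a sky gap in each column as a roof candidate."""
    rows, cols = range_img.shape
    roof_mask = np.zeros((rows, cols), dtype=bool)
    for c in range(cols):
        column = range_img[:, c]
        valid = np.isfinite(column) & (column > 0.0)
        # Require a sky gap (invalid cells) at the top of the column.
        if not valid.any() or valid[0]:
            continue
        first_valid = int(np.argmax(valid))  # first return below the sky
        if column[first_valid] < max_roof_range:
            roof_mask[first_valid, c] = True
    return roof_mask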
In step S108, method 10 aligns the first edge feature with the second edge feature to calibrate the coarse external parameters of the image sensor and the lidar, respectively (relative to the optimized external parameters below, these may also be called initial external parameters). This process is referred to as coarse calibration in FIG. 3. In the optimization process shown in FIG. 3, the coarse calibration receives the extracted image edge features and LiDAR edge features and aligns them as well as possible to solve for the initial external parameters. Most current optimization-based calibration methods need a reasonable initial external parameter; otherwise the optimization is likely to fall into a local optimum, or the iterative optimization may not even converge. The feature correspondences generated by the initial external parameters play an important role in the optimization result. To this end, methods such as grid search may be used to provide reasonable initial external parameters before the fine optimization. In some examples, the rotation angle search range is [-2, 2] degrees with a grid step size of 0.5 degrees. In general, the coarse optimization obtains the initial external parameters by adjusting the external parameters so as to maximize the matching rate between corresponding edges. In other words, in step S108 a grid search may be used to maximize the number of matched feature points between the first edge feature and the second edge feature, so as to calibrate the coarse external parameters of the image sensor and the lidar.
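The grid search can be sketched in Python as follows. Only the [-2, 2] degree range and 0.5 degree step come from the text; the intrinsic matrix K, the initial extrinsics R0/t0, the pinhole projection and all names are assumptions for illustration.

import itertools
import numpy as np
from scipy.spatial.transform import Rotation

def count_matches(edge_img, lidar_pts, K, R, t):
    """Count lidar edge points that project onto an image edge pixel."""
    cam = (R @ lidar_pts.T + t.reshape(3, 1)).T   # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]                    # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)     # pixel coordinates
    h, w = edge_img.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[ok]
    return int(edge_img[uv[:, 1], uv[:, 0]].astype(bool).sum())

def coarse_calibrate(edge_img, lidar_edge_pts, K, R0, t0, search_deg=2.0, step_deg=0.5):
    """Grid-search small rotation perturbations and keep the best-matching one."""
    angles = np.arange(-search_deg, search_deg + 1e-9, step_deg)
    best_R, best_score = R0, -1
    for roll, pitch, yaw in itertools.product(angles, repeat=3):
        dR = Rotation.from_euler('xyz', [roll, pitch, yaw], degrees=True).as_matrix()
        score = count_matches(edge_img, lidar_edge_pts, K, dR @ R0, t0)
        if score > best_score:
            best_R, best_score = dR @ R0, score
    return best_R, t0, best_score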
In step S110, method 10 minimizes the distance between the first edge feature and the second edge feature to optimize the coarse external parameters and obtain the respective optimized external parameters. In some embodiments of the present application, in step S110 an optimal solution for the distances from each point in the second edge feature to the first edge feature may be determined by a least squares method, and the optimized external parameters are then determined from this optimal solution.
After the coarse optimization in step S108, the initial external parameters and the best correspondences between the laser radar features and the camera features are available. Based on this, a least squares problem may be formulated in step S110 by minimizing the point-to-point distance residuals between the image edge features and the extracted LiDAR edge features. For example, the problem may be modeled and solved with Ceres Solver. During the iterative optimization, the correspondence state can be adjusted according to the intermediate optimization result to weaken the influence of abnormal matches, and the final calibration result is eventually obtained. As shown in FIG. 3, the respective optimized external parameters of the image sensor and the lidar obtained through this fine calibration process are used as the final calibration result.
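The patent names Ceres Solver for this step; as a Python stand-in, the sketch below formulates the same idea with scipy.optimize.least_squares, using a distance transform of the image edge map so that each residual is the pixel distance from a projected lidar edge point to the nearest image edge. The 6-DoF delta parameterization, the Huber loss and all names are assumptions for illustration, not the patent's exact formulation.

import cv2
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fine_calibrate(edge_img, lidar_edge_pts, K, R_init, t_init):
    """Refine the coarse extrinsics by minimizing point-to-edge distances."""
    # Distance (in pixels) from every pixel to the nearest image edge pixel.
    dist_map = cv2.distanceTransform((edge_img == 0).astype(np.uint8), cv2.DIST_L2, 5)
    h, w = edge_img.shape

    def residuals(x):
        dR = Rotation.from_rotvec(x[:3]).as_matrix()
        R, t = dR @ R_init, t_init + x[3:]
        cam = (R @ lidar_edge_pts.T + t.reshape(3, 1)).T
        z = np.maximum(cam[:, 2], 0.1)                       # guard against z <= 0
        uv = (K @ cam.T).T
        u = np.clip(uv[:, 0] / z, 0, w - 1).astype(int)
        v = np.clip(uv[:, 1] / z, 0, h - 1).astype(int)
        res = dist_map[v, u].astype(np.float64)
        res[cam[:, 2] <= 0.1] = 0.0                          # ignore points behind the camera
        return res

    # The robust (Huber) loss weakens the influence of abnormal matches.
    sol = least_squares(residuals, x0=np.zeros(6), loss='huber', f_scale=3.0)
    dR = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return dR @ R_init, t_init + sol.x[3:]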
In some embodiments of the present application, the image sensor and the lidar detect the environment of the vehicle during vehicle operation. As highlighted above, the solution of the present application does not require a specific calibration object (e.g., a calibration board) for joint calibration. In most cases, the scheme of the application can operate online as long as the other functions of the vehicle are not seriously or adversely affected during operation. In some embodiments of the present application, the image sensor and the lidar may re-detect the environment of the vehicle at a predetermined period, so that method 10 can be repeated to perform the online joint calibration of the sensors again.
According to another aspect of the present application, a joint calibration system for vehicle sensors is provided. As shown in FIG. 2, the joint calibration system 20 for vehicle sensors (hereinafter referred to as system 20) includes a memory 202 and a processor 204. The memory 202 is configured to store instructions, and the processor 204 is configured to execute the instructions to perform any one of the above methods for joint calibration of vehicle sensors.
According to another aspect of the application, a vehicle is provided, comprising any one of the joint calibration systems for vehicle sensors described above. The present application is not limited to the layout of the vehicle (e.g., wheeled vehicle, tracked vehicle, etc.) or its driving force (e.g., electric motor, gasoline engine, etc.), and encompasses a variety of vehicles currently known in the art as well as vehicles developed in the future.
According to another aspect of the present application, there is provided a computer-readable storage medium having instructions stored therein that, when executed by a processor, cause the processor to perform a joint calibration method for vehicle sensors as described above. Computer-readable media as referred to in the present application include various types of computer storage media and can be any available media that can be accessed by a general-purpose or special-purpose computer. By way of example, computer-readable media may include RAM, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other temporary or non-temporary medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. A disk, as used herein, typically reproduces data magnetically, whereas a disc reproduces data optically with a laser. Combinations of the above should also be included within the scope of computer-readable media. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
The above embodiments of the present application provide a lidar-camera automatic calibration method that performs calibration during operation of an automated driving system and avoids the dependence of traditional calibration methods on a high-precision calibration board. The method provided by the application adapts well to outdoor scenes and can also provide the capability, mentioned in the background, of automatically identifying, detecting and correcting calibration errors while the system is running. The above description covers only specific embodiments of the present application, and the scope of the present application is not limited thereto. Other possible variations or substitutions may occur to those skilled in the art based on the teachings herein and are intended to be covered by the present disclosure. The embodiments and features of the embodiments of the present application may be combined with each other where no conflict arises. The scope of protection of the present application is subject to the claims.

Claims (13)

1. A method for jointly calibrating vehicle sensors, the method comprising:
simultaneously detecting the environment of the vehicle through an image sensor and a laser radar;
performing semantic segmentation on an image acquired through the image sensor to extract a first edge feature;
extracting a second edge feature from the point cloud acquired through the laser radar, wherein the second edge feature comprises a plurality of points in the point cloud;
aligning the first edge feature and the second edge feature to calibrate coarse external parameters of the image sensor and the laser radar respectively; and
minimizing the distance between the first edge feature and the second edge feature to optimize the coarse external parameters to obtain respective optimized external parameters.
2. The method of claim 1, wherein semantically segmenting the image acquired by the image sensor to extract the first edge feature comprises:
performing semantic segmentation on the image to identify a calibration object; and
and extracting the edge of the calibration object to obtain the first edge feature.
3. The method according to claim 2, wherein the semantic segmentation is performed based on a Mask-RCNN network and/or the edges of the calibration objects are extracted based on a Canny edge detection algorithm.
4. The method of claim 1, wherein extracting the second edge feature from the point cloud acquired by the laser radar comprises: extracting the second edge feature using depth discontinuities of the point cloud.
5. The method of claim 4, wherein extracting the second edge feature using depth discontinuities of the point cloud comprises:
extracting transverse adjacent points and longitudinal adjacent points of each point in the point cloud;
determining a transverse threshold and a longitudinal threshold according to the depth distances between each point and its transverse and longitudinal adjacent points, respectively; and
and extracting the second edge feature according to the transverse threshold and the longitudinal threshold respectively.
6. The method of claim 5, wherein extracting the second edge feature using depth discontinuities of the point cloud further comprises:
determining a sky position according to invalid points in the point cloud;
determining a roof position of a vehicle as a calibration object based on the sky position; and
using a point cloud representing the roof as part of the second edge feature.
7. The method of claim 1, wherein aligning the first edge feature and the second edge feature to calibrate the coarse external parameters of the image sensor and the laser radar respectively comprises:
using a grid search method to maximize the number of matched feature points between the first edge feature and the second edge feature, so as to calibrate the coarse external parameters of the image sensor and the laser radar.
8. The method of claim 1, wherein minimizing the distance between the first edge feature and the second edge feature to optimize the coarse external parameters to obtain the optimized external parameters comprises:
determining an optimal solution of the distance from each point in the second edge feature to the first edge feature by a least square method; and
and determining the optimized external parameters according to the optimal solution.
9. The method of claim 1, wherein the image sensor and the lidar detect an environment in which the vehicle is located during operation of the vehicle.
10. The method of claim 9, wherein the image sensor and lidar re-detect the environment of the vehicle at a predetermined period.
11. A system for joint calibration of vehicle sensors, the system comprising:
a memory configured to store instructions; and
a processor configured to execute the instructions so as to perform the method of any one of claims 1-10.
12. A vehicle characterized in that it comprises a system according to claim 11.
13. A computer-readable storage medium having instructions stored therein, which when executed by a processor, cause the processor to perform the method of any one of claims 1-10.
CN202210748221.1A 2022-06-29 2022-06-29 Combined calibration method and system for vehicle sensor, vehicle and storage medium Pending CN115205396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210748221.1A CN115205396A (en) 2022-06-29 2022-06-29 Combined calibration method and system for vehicle sensor, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210748221.1A CN115205396A (en) 2022-06-29 2022-06-29 Combined calibration method and system for vehicle sensor, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN115205396A true CN115205396A (en) 2022-10-18

Family

ID=83578200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210748221.1A Pending CN115205396A (en) 2022-06-29 2022-06-29 Combined calibration method and system for vehicle sensor, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115205396A (en)

Similar Documents

Publication Publication Date Title
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
US8121400B2 (en) Method of comparing similarity of 3D visual objects
US5867591A (en) Method of matching stereo images and method of measuring disparity between these image
CN109801333B (en) Volume measurement method, device and system and computing equipment
KR101244498B1 (en) Method and Apparatus for Recognizing Lane
US20150363668A1 (en) Traffic lane boundary line extraction apparatus and method of extracting traffic lane boundary line
KR102145557B1 (en) Apparatus and method for data fusion between heterogeneous sensors
CN111179152A (en) Road sign identification method and device, medium and terminal
KR20150112656A (en) Method to calibrate camera and apparatus therefor
CN113008247B (en) High-precision map construction method and device for mining area
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN113885046A (en) Intelligent internet automobile laser radar positioning system and method for low-texture garage
CN111723724A (en) Method and related device for identifying road surface obstacle
CN113298026A (en) Lane line determination method and system, vehicle, and storage medium
CN114578328B (en) Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors
CN115272478A (en) Combined calibration method and system for vehicle sensor, vehicle and storage medium
CN115729245A (en) Obstacle fusion detection method, chip and terminal for mine ramp
CN115205396A (en) Combined calibration method and system for vehicle sensor, vehicle and storage medium
CN110322508B (en) Auxiliary positioning method based on computer vision
CN112001357A (en) Target identification detection method and system
CN114612563B (en) Automatic splicing method, system and storage medium for aerial cable
CN115468576A (en) Automatic driving positioning method and system based on multi-mode data fusion
CN115656991A (en) Vehicle external parameter calibration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination