CN115272478A - Combined calibration method and system for vehicle sensor, vehicle and storage medium - Google Patents

Combined calibration method and system for vehicle sensor, vehicle and storage medium

Info

Publication number
CN115272478A
Authority
CN
China
Prior art keywords
image data
point cloud
multiframe
calibration
calibration results
Prior art date
Legal status
Pending
Application number
CN202210778064.9A
Other languages
Chinese (zh)
Inventor
康宇宸
谭海宇
费贤松
吴绍权
崔国才
彭思崴
夏宇峰
Current Assignee
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Weilai Zhijia Technology Co Ltd filed Critical Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202210778064.9A
Publication of CN115272478A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computational Linguistics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application relates to a combined calibration method and system for vehicle sensors, a vehicle, and a storage medium. The method comprises the following steps: receiving first multi-frame image data generated by an image sensor and second multi-frame point cloud data generated by a laser radar; temporally aligning the first multi-frame image data with the second multi-frame point cloud data; performing scene recognition on the aligned image data and screening out third multi-frame image data according to the scene recognition result; calibrating the image sensor and the laser radar using the third multi-frame image data and contour information in the point cloud data aligned with it, to obtain a third plurality of calibration results; and deleting abnormal values from the third plurality of calibration results to generate a valid calibration result.

Description

Combined calibration method and system for vehicle sensor, vehicle and storage medium
Technical Field
The present application relates to calibration of vehicle sensors, and more particularly, to a method and system for joint calibration of vehicle sensors, a vehicle, and a storage medium.
Background
Autonomous driving and driver-assistance technologies have developed rapidly in recent years, and the laser radar (LiDAR) and the image sensor (camera) are two of the most important sensors in autonomous-driving perception. Accurate external parameters (extrinsics) between the two are the basis on which modules such as sensor fusion operate. The environmental information acquired by the camera is regular, ordered and dense, but the camera is only sensitive to light and provides no distance information. The laser radar compensates for this weakness: it can accurately measure the distance to objects, although its point cloud is sparser than the image. Fusing the two sensors therefore allows an autonomous vehicle to better understand its external environment, and has become a research focus in recent years. Sensor fusion algorithms require precise external parameters to transform coordinates between the two sensors and achieve data alignment. Accurate and robust online auto-calibration algorithms are therefore becoming increasingly important.
The currently mainstream extrinsic calibration methods for lidar-camera pairs depend heavily on high-precision calibration targets and specific environments. In conventional calibration, calibration features are detected, extracted and matched manually from the image and the point cloud. These methods are time-consuming and laborious, and are not feasible in many outdoor scenarios. Furthermore, sensor calibration is usually performed only once, under the assumption that the extrinsic calibration remains unchanged for the rest of the lifetime of the sensor suite. In reality, the calibration may drift because of calibration errors and the various conditions encountered during driving.
In view of the above, it is desirable to provide an online calibration algorithm that relaxes the calibration requirements and reduces calibration error.
Disclosure of Invention
Embodiments of the present application provide a joint calibration method and system for vehicle sensors, a vehicle, and a storage medium, which jointly calibrate a vehicle's camera sensor and laser radar online.
According to one aspect of the application, a method for joint calibration of vehicle sensors is provided. The method comprises: receiving first multi-frame image data generated by an image sensor and second multi-frame point cloud data generated by a laser radar; temporally aligning the first multi-frame image data with the second multi-frame point cloud data; performing scene recognition on the aligned image data and screening out third multi-frame image data according to the scene recognition result; calibrating the image sensor and the laser radar using the third multi-frame image data and contour information in the point cloud data aligned with it, to obtain a third plurality of calibration results; and deleting abnormal values from the third plurality of calibration results to generate a valid calibration result.
In some embodiments of the present application, optionally, temporally aligning the first multi-frame image data with the second multi-frame point cloud data comprises: timestamping the first multi-frame image data and the second multi-frame point cloud data with a common time source; and aligning a frame of the first multi-frame image data with a frame of the second multi-frame point cloud data when the difference between their timestamps is below a threshold.
In some embodiments of the present application, optionally, temporally aligning the first plurality of frames of image data with the second plurality of frames of point cloud data further comprises: discarding the image data and the point cloud data which are not aligned.
In some embodiments of the present application, optionally, performing scene recognition on the aligned image data and screening out third multi-frame image data according to the result of the scene recognition includes: performing scene recognition on each frame of the aligned image data through semantic segmentation to obtain semantic features; and screening out the third multi-frame image data according to the number of the semantic features and their positions in the image data.
In some embodiments of the present application, optionally, screening out the third multiframe image data according to the number of semantic features and the positions of the semantic features in the image data includes: determining whether the number of semantic features is within a preset range.
In some embodiments of the present application, optionally, screening out the third multiframe image data according to the number of the semantic features and the positions of the semantic features in the image data includes: determining whether the semantic features are uniformly distributed in the image.
In some embodiments of the present application, optionally, screening out the third multiframe image data according to the number of semantic features and the positions of the semantic features in the image data includes: determining whether the objects corresponding to the semantic features occlude one another in the image.
In some embodiments of the present application, optionally, calibrating the image sensor and the lidar using the third multi-frame image data and the contour information in the point cloud data aligned therewith, to obtain a third plurality of calibration results, includes: extracting image contour information from the image data based on the semantic features; extracting point cloud contour information from depth information in the point cloud data; and constructing an optimization problem on the degree of matching between the image contour information and the point cloud contour information, and determining the third plurality of calibration results from the solution of the optimization problem, wherein the third plurality of calibration results comprise external parameters relating the image sensor and the laser radar.
In some embodiments of the present application, optionally, deleting outliers from the third plurality of calibration results to generate a valid calibration result comprises: performing unsupervised classification of the third plurality of calibration results by an isolation forest algorithm and eliminating isolated points as abnormal values.
In some embodiments of the present application, optionally, the valid calibration result is an average of the calibration results retained in the third plurality of calibration results.
According to another aspect of the present application, a joint calibration system for vehicle sensors is provided. The system comprises: a memory configured to store instructions; and a processor configured to execute the instructions to perform any of the above-described methods of joint calibration of vehicle sensors.
According to another aspect of the application, a vehicle is provided, comprising a joint calibration system for any one of the vehicle sensors as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium having instructions stored therein, wherein the instructions, when executed by a processor, cause the processor to perform any one of the above-mentioned methods for joint calibration of vehicle sensors.
The joint calibration method and system, vehicle, and computer-readable storage medium provided by some embodiments of the present application calibrate using edge features extracted from depth discontinuities in the laser radar point cloud together with semantically segmented vehicle edge features, and delete obviously unreasonable values from the plurality of calibration results to generate a valid calibration result.
Drawings
The above and other objects and advantages of the present application will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar elements are designated by like reference numerals.
FIG. 1 illustrates a method for joint calibration of vehicle sensors according to one embodiment of the present application;
FIG. 2 illustrates a joint calibration system for vehicle sensors according to an embodiment of the present application;
FIG. 3 illustrates a method for joint calibration of vehicle sensors according to one embodiment of the present application.
Detailed Description
For the purposes of brevity and explanation, the principles of the present application are described herein with reference primarily to exemplary embodiments thereof. However, those skilled in the art will readily recognize that the same principles are equally applicable to joint calibration methods and systems for vehicle sensors of all types, vehicles, and storage media, and that these same or similar principles may be implemented therein, with any such variations not departing from the true spirit and scope of the present application.
Conventional calibration methods generally use a high-precision calibration plate for joint calibration. In recent years, as deep learning has penetrated various fields, calibration methods based on deep neural networks have also been proposed. However, such methods place high demands on the scene and are poorly suited to adjusting the external parameters in ordinary driving scenes (for example, by maximizing mutual information) to obtain optimal external parameters. The present application provides an online calibration method that uses edge features extracted from depth discontinuities in the laser radar point cloud and semantically segmented vehicle edge features as calibration target features, and optimizes the extrinsic calibration result by aligning them and minimizing the projection distance.
According to one aspect of the application, a method for joint calibration of vehicle sensors is provided. As shown in fig. 1, a method 10 for jointly calibrating vehicle sensors (hereinafter referred to as method 10) includes the steps of: receiving first multiframe image data generated by an image sensor and second multiframe point cloud data generated by a laser radar in step S102; aligning the first multiframe image data and the second multiframe point cloud data in time in step S104; performing scene recognition on the aligned image data in step S106, and screening out a third multi-frame image data according to a result of the scene recognition; calibrating the image sensor and the laser radar by using the third multi-frame image data and the outline information in the point cloud data aligned with the third multi-frame image data in step S108 to obtain a third plurality of respective calibration results; and deleting abnormal values in the third plurality of calibration results in step S110 to generate valid calibration results. Simultaneous online calibration of the image sensor and the lidar may be achieved via the above steps of method 10, and the calibration is not dependent on a particular calibration object (e.g., checkerboard). The working principle of the individual steps of the method 10 will be explained in detail below.
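Purely as an illustration of how the five steps might fit together in software, a minimal orchestration sketch follows. It is not part of the patent disclosure: the function names, the frame attributes (seg_mask, edge_pixels, edge_points), the 5 ms threshold and the initial guess are all assumptions, and the helper functions are sketched in the later examples.

```python
import numpy as np

# Hypothetical orchestration of method 10 (steps S102-S110); every name here is
# an illustrative assumption, not part of the patent disclosure.
def joint_calibration(image_frames, cloud_frames, K, x0=np.zeros(6)):
    # S104: temporally align camera frames with lidar frames (common clock assumed)
    pairs = align_in_time(image_frames, cloud_frames, max_dt=0.005)

    # S106: keep only key frames whose scene content suits auto-calibration
    key_pairs = [(img, cloud) for img, cloud in pairs if scene_is_usable(img.seg_mask)]

    # S108: one extrinsic estimate (theta1..theta3, t1..t3) per key-frame pair
    estimates = [calibrate_pair(cloud.edge_points, img.edge_pixels, K, x0)
                 for img, cloud in key_pairs]

    # S110: reject outlier estimates and merge the remainder
    return merge_valid_results(estimates)
```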
In step S102, method 10 receives first multi-frame image data generated by the image sensor and second multi-frame point cloud data generated by the lidar. The terms "first", "second", "third" and the like in this application (as in "first multi-frame", "third plurality") serve only to distinguish the subject matter they accompany and do not imply that the quantities are necessarily unequal. Furthermore, unless otherwise indicated or a contrary conclusion follows from the context, "multi-frame" or "plurality" generally means "at least two frames" or "at least two". In other words, for calibration accuracy, image data and point cloud data generated over a period of time are received in step S102. Ideally, for convenience of calibration, the image sensor and lidar would always acquire a frame of image data and a frame of point cloud data "simultaneously". In practice, however, the sampling frequencies of the image sensor and the lidar differ, so the numbers of image frames and point cloud frames acquired over the same period are different.
To address the mismatch between the sampling frequencies of the image sensor and the lidar, method 10 temporally aligns, in step S104, the first multi-frame image data and the second multi-frame point cloud data received in step S102. In other words, the purpose of step S104 is to put image data and point cloud data with different frame rates into temporal correspondence: the calibration targets seen by the two sensors should be consistent, and if the scenes they observe deviate from each other, the joint calibration result will be biased or calibration may even become impossible.
Ideally, every frame of point cloud data can be matched with a frame of image data, because the point cloud is sampled at a lower rate than the image data and there are therefore more image frames available to match against. In practice, however, an image frame and a point cloud frame sampled at exactly the same moment may not exist. It is therefore necessary in step S104 to screen out the data that can be aligned.
In some embodiments of the present application, in step S104, a common time source may be used to timestamp the first multi-frame image data and the second multi-frame point cloud data, and a frame of the first multi-frame image data may be aligned with a frame of the second multi-frame point cloud data when the difference between their timestamps is below a threshold. Specifically, a common clock source is provided so that the image data and the point cloud data can be put into correspondence. This clock source may serve as the clock for both the image sensor and the lidar in step S104, with the current time recorded each time a frame of data is captured. In practice, even with every frame timestamped, very little data will have been sampled at exactly the same instant. A time threshold (e.g., 5 ms) may therefore be set in step S104, and image data and point cloud data whose sampling times differ by less than this threshold are regarded as sampled at the same time, i.e., as "temporally aligned".
As shown in FIG. 3, when searching the time axis for the image frame corresponding to a given point cloud frame, one may first check whether an image frame sampled at the same time exists. If so, that image frame is aligned with the point cloud frame. If not, the nearest image frames to the left and right of the point cloud frame on the time axis are found, their time intervals to the point cloud frame are compared, and the closer one is taken as the candidate frame (if they are equally close, either may be chosen). The time interval between the candidate image frame and the point cloud frame is then compared with the preset time threshold: if it is below the threshold, the corresponding image frame is considered found; otherwise, no corresponding image data is deemed to exist.
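For illustration only, the nearest-frame matching just described could be sketched as follows; the Frame structure, the align_in_time name, and the 5 ms default threshold are assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    stamp: float   # seconds, from the common time source
    data: object   # image or point cloud payload

def align_in_time(images, clouds, max_dt=0.005):
    """Pair each point cloud frame with the nearest camera frame within max_dt seconds."""
    pairs = []
    for cloud in clouds:
        if not images:
            break
        # candidate: the image frame closest in time to this point cloud frame
        best = min(images, key=lambda img: abs(img.stamp - cloud.stamp))
        if abs(best.stamp - cloud.stamp) <= max_dt:
            pairs.append((best, cloud))
        # point cloud frames with no partner within the threshold are simply discarded
    return pairs
```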
Unaligned image data and point cloud data may also be discarded in step S104, since only aligned image data and point cloud data qualify for the subsequent steps of method 10. Several of the retained, aligned image frames (denoted "image frames") and point cloud frames (denoted "LiDAR") are shown in FIG. 3; the remaining data is discarded and does not enter the subsequent steps.
The method 10 performs scene recognition on the aligned image data in step S106, and screens out the third multiframe image data according to the result of the scene recognition. As shown in fig. 3, scene recognition may be performed on the image data aligned in step S104 by a scene classifier (scene recognition module), so as to determine whether the scene meets the requirement of the automatic calibration algorithm. The third multiframe is generally smaller in number than the first multiframe, as will be appreciated by those skilled in the art after reading this application.
In some embodiments of the present application, in step S106, scene recognition may be performed on the aligned image data through semantic segmentation to obtain semantic features, and the third multi-frame image data is screened out according to the number of the semantic features and their positions in the image data. For example, it can be determined whether the number of semantic features lies within a preset range, whether the semantic features are uniformly distributed over the image, and whether the objects corresponding to the semantic features occlude one another; the third multi-frame image data is then screened out according to these judgments. As further shown in FIG. 3, the frames passed by the scene classifier (scene recognition module) enter subsequent steps of method 10 as key frames (their number being the third plurality mentioned above). The scene classifier is built as a semantic segmentation network that segments the image, and its results are aggregated to judge whether the scene contains enough semantic features. In some examples, the semantic features are required to be sufficiently uniformly distributed, with requirements on where they appear, so that they provide sufficient constraints for the optimization. It is also judged whether the segmentation result is clear enough and free of obvious occlusion. Scenes that are too cluttered or too open can likewise be removed so that the subsequent algorithm can proceed.
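As a rough illustration of such a screening rule (not the patent's own implementation), the following sketch checks feature count and spatial spread on a semantic segmentation mask; all thresholds and the scene_is_usable name are assumptions, and the occlusion test is only hinted at in a comment.

```python
import numpy as np

def scene_is_usable(seg_mask, min_objects=2, max_objects=15, min_spread=0.2):
    """Judge whether a semantically segmented frame qualifies as a key frame.

    seg_mask: 2-D array of instance ids (0 = background, >0 = e.g. vehicle instances).
    The thresholds are illustrative; the disclosure only requires that semantic
    features be sufficiently numerous, well distributed, and not obviously occluded.
    """
    ids = np.unique(seg_mask)
    ids = ids[ids != 0]

    # enough, but not too many, semantic objects (too dense / too open scenes rejected)
    if not (min_objects <= ids.size <= max_objects):
        return False

    # objects should be spread over the image rather than clustered in one region
    ys, xs = np.nonzero(seg_mask)
    h, w = seg_mask.shape
    if xs.std() / w < min_spread or ys.std() / h < min_spread:
        return False

    # an occlusion test (e.g. overlap between instance bounding boxes) would go here
    return True
```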
In step S108, the method 10 calibrates the image sensor and the lidar by using the third multi-frame image data and the profile information in the point cloud data aligned therewith to obtain a third plurality of calibration results.
In some embodiments of the present application, the calibration in step S108 may be implemented by: extracting image contour information from the image data based on the semantic features; extracting point cloud contour information from depth information in the point cloud data; and constructing an optimization problem on the degree of matching between the image contour information and the point cloud contour information, and determining the third plurality of calibration results from the solution of the optimization problem, wherein the third plurality of calibration results comprise external parameters relating the image sensor and the laser radar.
Specifically, the online calibration module (abbreviated "calibration module" in the figure) shown in FIG. 3 may establish an optimization problem from semantic features visible to both the laser radar and the image sensor, constructing the error terms and solving the optimization problem. For the image data, the vehicle contour obtained from semantic segmentation is used; for the point cloud data, horizontal/vertical depth-discontinuity information is used; and the automatic calibration is optimized and solved with the vehicle contour as the reference feature. The final result comprises a third plurality of sets of external parameters corresponding to the third plurality of frames of image data (and point cloud data). As shown in FIG. 3, the external parameters may include the rotation parameters θ1, θ2, θ3 and the translation parameters t1, t2, t3 of the lidar with respect to the image sensor.
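As an illustrative sketch only, the per-key-frame optimization over the six extrinsic parameters could be posed as a non-linear least-squares problem that minimizes the projection distance from lidar edge points to the segmented vehicle contour; the use of scipy, the pinhole model, and all names below are assumptions, not the patent's stated implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def contour_residuals(params, cloud_edges, image_edges, K):
    """Projection distance from lidar edge points to the nearest image contour pixel.

    params:      [theta1, theta2, theta3, t1, t2, t3], lidar-to-camera extrinsics
    cloud_edges: Nx3 lidar points on depth discontinuities (assumed in front of the camera)
    image_edges: Mx2 pixel coordinates of the semantically segmented vehicle contour
    K:           3x3 camera intrinsic matrix, assumed known
    """
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    t = params[3:]
    cam = cloud_edges @ R.T + t                # lidar frame -> camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                # pinhole projection to pixels
    dists, _ = cKDTree(image_edges).query(uv)  # distance to nearest contour pixel
    return dists

def calibrate_pair(cloud_edges, image_edges, K, x0):
    """Solve one key-frame calibration; returns one set of six extrinsic parameters."""
    sol = least_squares(contour_residuals, x0, args=(cloud_edges, image_edges, K),
                        loss="huber")
    return sol.x
```

The robust (Huber) loss is one way to keep stray edge points from dominating the fit; the disclosure itself only specifies aligning the contours and minimizing the projection distance.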
In step S110, method 10 deletes abnormal values from the third plurality of calibration results to generate a valid calibration result. In some embodiments of the present application, unsupervised classification may be performed in step S110 on the third plurality (group) of calibration results using an isolation forest algorithm, and the isolated points are removed as abnormal values. Although the preceding steps strive to keep the image data and point cloud data aligned, some unreasonable external parameters may still be produced in step S108 owing to the limitations of the algorithm. Step S110 is intended to filter out these unreasonable parameters and retain only the reasonable ones. Specifically, the third plurality of calibration results may be classified without supervision by the anomaly detector shown in FIG. 3 so as to reject obvious outliers. The isolation forest algorithm can be implemented according to the prior art and is not described in detail here.
In some embodiments of the present application, the valid calibration result is the average of the calibration results retained from the third plurality of calibration results. After the filtering in step S110, the calibration results that satisfy the filtering condition are obtained; in some examples, the calibration results generated within a period of time may be smoothed (for example, averaged), and the smoothed result is used as the valid calibration result for the lidar and the image sensor over that period.
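A minimal sketch of this outlier rejection and smoothing, using scikit-learn's IsolationForest, is given below; the contamination value, the simple mean, and the merge_valid_results name are assumptions made for the illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def merge_valid_results(estimates, contamination=0.1):
    """Drop outlier extrinsic estimates and average the remainder.

    estimates: list of 6-vectors [theta1, theta2, theta3, t1, t2, t3], one per
    key-frame pair produced in step S108.
    """
    X = np.asarray(estimates, dtype=float)
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(X)
    inliers = X[labels == 1]              # fit_predict marks isolated points with -1
    if inliers.size == 0:
        return None                       # nothing valid in this batch
    # averaging Euler angles is only reasonable when the retained results are
    # tightly clustered, which the outlier rejection is intended to ensure
    return inliers.mean(axis=0)
```

In practice one might instead average the rotations on the rotation manifold (for example via quaternions), but the simple mean matches the smoothing described above for tightly clustered results.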
According to another aspect of the present application, a joint calibration system for vehicle sensors is provided. As shown in FIG. 2, the joint calibration system 20 for vehicle sensors (hereinafter referred to as system 20) includes a memory 202 and a processor 204. Wherein the memory 202 is configured to store instructions and the processor 204 is configured to execute the instructions to perform a method of jointly calibrating a vehicle sensor as any one of the above.
According to another aspect of the application, a vehicle is provided, the vehicle comprising a joint calibration system of any one of the vehicle sensors as described above. The present application is not limited to the layout of the vehicle (e.g., wheeled vehicle, tracked vehicle, etc.) nor the driving force of the vehicle (e.g., motor drive, gasoline drive, etc.), and encompasses a variety of vehicles currently known in the art as well as vehicles developed in the future.
According to another aspect of the present application, there is provided a computer-readable storage medium having instructions stored therein which, when executed by a processor, cause the processor to perform a joint calibration method for vehicle sensors as described above. Computer-readable media, as referred to in this application, include all types of computer storage media, which can be any available media that can be accessed by a general-purpose or special-purpose computer. By way of example, computer-readable media may include RAM, ROM, EPROM, EEPROM, registers, hard disks, removable disks, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor, or any other transitory or non-transitory medium. A magnetic disk, as used herein, typically reproduces data magnetically, while an optical disc reproduces data optically with a laser. Combinations of the above should also be included within the scope of computer-readable media. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
Some of the embodiments above provide an automatically operating online calibration framework which, by introducing semantic information and combining scene information with abnormal-value handling, allows the calibration algorithm to run automatically under all operating conditions. The above description covers only specific embodiments of the present application; the scope of the application is not limited to them. Other variations or substitutions will occur to those skilled in the art based on the teachings herein and are intended to be covered by this disclosure. The embodiments and their features may be combined with one another where no conflict arises. The scope of protection of the present application is defined by the claims.

Claims (13)

1. A method for jointly calibrating vehicle sensors, the method comprising:
receiving first multiframe image data generated by an image sensor and second multiframe point cloud data generated by a laser radar;
temporally aligning the first multi-frame image data with the second multi-frame point cloud data;
performing scene recognition on the aligned image data, and screening out third multi-frame image data according to the result of the scene recognition;
calibrating the image sensor and the laser radar by using third multi-frame image data and contour information in point cloud data aligned with the third multi-frame image data to obtain a third plurality of calibration results; and
deleting outliers in the third plurality of calibration results to generate valid calibration results.
2. The method of claim 1, wherein temporally aligning the first multiframe image data with the second multiframe point cloud data comprises:
respectively stamping timestamps on the first multiframe image data and the second multiframe point cloud data by using a uniform time service source; and
aligning a frame of the first multiframe image data with a frame of the second multiframe point cloud data when the difference between their timestamps is below a threshold value.
3. The method of claim 2, wherein temporally aligning the first multiframe image data with the second multiframe point cloud data further comprises: discarding the image data and the point cloud data that are not aligned.
4. The method of claim 1, wherein performing scene recognition on the aligned image data, and filtering out the third multiframe image data according to a result of the scene recognition comprises:
respectively carrying out scene recognition on the aligned image data through semantic segmentation to obtain semantic features; and
screening out the third multi-frame image data according to the number of the semantic features and their positions in the image data.
5. The method of claim 4, wherein screening the third plurality of frames of image data according to the number of semantic features and their locations in the image data comprises: determining whether the number of semantic features is within a preset range.
6. The method of claim 4, wherein screening the third plurality of frames of image data according to the number of semantic features and their locations in the image data comprises: determining whether the semantic features are evenly distributed in the image.
7. The method of claim 4, wherein screening the third plurality of frames of image data according to the number of semantic features and their locations in the image data comprises: and determining whether the objects corresponding to the semantic features are mutually occluded in the image.
8. The method of claim 4, wherein calibrating the image sensor and the lidar using contour information in a third plurality of frames of image data and point cloud data aligned therewith to obtain a respective third plurality of calibration results comprises:
extracting image contour information included in the image data based on the semantic features;
extracting point cloud contour information according to depth information in the point cloud data; and
constructing an optimization problem about the matching degree of the image contour information and the point cloud contour information, and determining the third plurality of calibration results according to the solution of the optimization problem, wherein the third plurality of calibration results comprise external parameters aiming at the image sensor and the laser radar.
9. The method of claim 1, wherein deleting outliers in the third plurality of calibration results to generate valid calibration results comprises:
performing unsupervised classification of the third plurality of calibration results by an isolation forest algorithm and eliminating isolated points as abnormal values.
10. The method of claim 1, wherein the valid calibration result is an average of the retained calibration results in the third plurality of calibration results.
11. A system for joint calibration of vehicle sensors, the system comprising:
a memory configured to store instructions; and
a processor configured to execute the instructions so as to perform the method of any one of claims 1-10.
12. A vehicle characterized in that it comprises a system according to claim 11.
13. A computer-readable storage medium having instructions stored therein, which when executed by a processor, cause the processor to perform the method of any one of claims 1-10.
CN202210778064.9A 2022-07-04 2022-07-04 Combined calibration method and system for vehicle sensor, vehicle and storage medium Pending CN115272478A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210778064.9A CN115272478A (en) 2022-07-04 2022-07-04 Combined calibration method and system for vehicle sensor, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210778064.9A CN115272478A (en) 2022-07-04 2022-07-04 Combined calibration method and system for vehicle sensor, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN115272478A true CN115272478A (en) 2022-11-01

Family

ID=83763979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210778064.9A Pending CN115272478A (en) 2022-07-04 2022-07-04 Combined calibration method and system for vehicle sensor, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115272478A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524014A (en) * 2023-05-23 2023-08-01 斯乾(上海)科技有限公司 Method and device for calibrating external parameters on line
CN116938960A (en) * 2023-08-07 2023-10-24 北京斯年智驾科技有限公司 Sensor data processing method, device, equipment and computer readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination