CN113947134A - Multi-sensor registration fusion system and method under complex terrain - Google Patents

Multi-sensor registration fusion system and method under complex terrain

Info

Publication number
CN113947134A
CN113947134A (application CN202111130555.4A)
Authority
CN
China
Prior art keywords
data, camera, sensor, depth, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111130555.4A
Other languages
Chinese (zh)
Inventor
高浩
朱海鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202111130555.4A priority Critical patent/CN113947134A/en
Publication of CN113947134A publication Critical patent/CN113947134A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D18/00Testing or calibrating apparatus or arrangements provided for in groups G01D1/00 - G01D15/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00Measuring or testing not otherwise provided for
    • G01D21/02Measuring two or more variables by means not covered by a single other subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a multi-sensor registration fusion system under complex terrain, comprising: a multi-sensor module for acquiring data from multiple sensors, including a laser radar, a color camera, a depth camera, a thermal infrared camera, a multispectral camera and an IMU; a data synchronization module for achieving temporal and spatial consistency across the sensors; a data registration and alignment module for registering and aligning the laser radar data, color image, depth image, thermal image and multispectral image after data synchronization, using the depth information of the depth camera and the laser radar; and a data fusion module for pixel-level fusion of the sensor images using deep learning. The invention performs pixel-level registration with the time- and space-synchronized sensor data and the depth information, then realizes data fusion between the different sensors by means of deep learning, and provides data support for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.

Description

Multi-sensor registration fusion system and method under complex terrain
Technical Field
The invention belongs to the technical field of unmanned-vehicle multi-sensor fusion, and in particular relates to a multi-sensor registration fusion system and method under complex terrain.
Background
Perception and localization are key technologies for unmanned vehicles and form the basis of path planning, intelligent decision-making and automatic control: the environment around the vehicle body must be perceived and the vehicle's own position determined, and various sensors are required to perceive the shape, speed, distance, category and other attributes of dynamic and static obstacles. These sensors are mainly inertial sensors, optical cameras, laser radars and so on. Each sensor has its own advantages and disadvantages, and the environment around an unmanned vehicle cannot be effectively perceived with a single sensor; the inputs of different sensors must be acquired and fused with appropriate algorithms so that the surroundings of the unmanned vehicle can be perceived more accurately.
With the gradual development of unmanned driving, unmanned exploration and related fields, unmanned vehicles increasingly rely on various sensors to fuse more information so that subsequent tasks such as perception and planning can be carried out conveniently. Multi-sensor data fusion on an unmanned vehicle integrates the local data provided by sensors deployed at different positions on the vehicle: computer techniques are used to comprehensively analyze several sensors of the same type together with sensors of different types, eliminate possible redundancy and contradictions among the multi-sensor information, remove interference, complement the information, and reduce the uncertainty of any single sensor, so as to obtain a consistent interpretation and description of the environment and obstacles around the unmanned vehicle, thereby providing more complete and accurate information for the vehicle's decision-planning and autonomous-control layers.
Since 2007, unmanned driving systems can be roughly divided into four generations. The first generation mainly adopted a separate-processing scheme of a Velodyne 64-line laser radar and cameras; the second generation mainly fuses several 16-line and 32-line laser radars with cameras and other sensors for localization and target recognition; the third generation mainly upgrades the second generation's laser radars to solid-state laser radars mounted at the front of the vehicle; the fourth generation removes the steering wheel and adopts the concept of a mobile living space, which is the ultimate goal pursued by all manufacturers. In summary, the sensor fusion in the currently studied second- and third-generation schemes is a difficulty that must be overcome.
The invention thoroughly investigates the multi-sensor registration and fusion problem under complex terrain: all sensors are synchronized in time and space, all optical camera data are aligned at the pixel level using depth information, and the data are pre-fused by deep learning, providing a data matrix containing multi-dimensional information and key data support for reconstruction, perception, planning, decision-making and other tasks.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a multi-sensor registration fusion system and method under complex terrain.
According to one aspect of the present specification, there is provided a multi-sensor registration fusion system under complex terrain, comprising a multi-sensor module, a data synchronization module, a data registration and alignment module and a data fusion module, connected in sequence;
the multi-sensor module is used for acquiring data from the multiple sensors, including data from a laser radar, a color camera, a depth camera, a thermal infrared camera, a multispectral camera and an IMU;
the data synchronization module is used for realizing the time consistency and the space consistency of the multiple sensors;
the data registration and alignment module is used, after data synchronization, for registering and aligning the laser radar data, the color image, the depth image, the thermal image and the multispectral image using the depth information of the depth camera and the laser radar;
and the data fusion module is used for performing pixel level fusion on the sensor images by utilizing deep learning after data registration alignment.
In this solution, the data streams of the multiple sensors are first time- and space-synchronized to achieve temporal and spatial consistency; the synchronized sensor data are then registered and aligned at the pixel level based on depth information; finally, data fusion between the different sensors is realized by means of deep learning, forming sensor data in which color, depth, thermal infrared, multispectral and three-dimensional point cloud information can all be seen, and providing more useful information for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.
As a further technical solution, the multi-sensor module further comprises a data preprocessing unit used for integrating the SDK data-acquisition code of each sensor into the ROS operating system and publishing the data to the data synchronization module in the form of ROS messages. Because the sensors are of different types and have different drivers, the SDK acquisition code of each sensor is first integrated into the ROS operating system and the data are published as ROS messages, which makes subsequent synchronization, registration, fusion and localization convenient.
As a further technical solution, achieving temporal consistency of the multiple sensors further comprises: the laser radar is synchronized using GPS timestamps; the color camera, depth camera, thermal infrared camera and multispectral camera are synchronized by hardware triggering; and the IMU is synchronized with the laser radar and the cameras by nearest-timestamp matching.
Specifically, in time synchronization the laser radar uses GPS timestamp synchronization: an Xavier industrial personal computer supplies the GPS and PPS signals to the laser radar and writes the current UTC time into the laser radar's internal clock, so that the laser radar timestamps stay consistent with UTC time.
Specifically, in time synchronization each camera's exposure time is set in advance, and the trigger mode is set to hardware trigger on the rising edge. The trigger signal is sent from an ordinary IO port of the Xavier industrial personal computer; it can be configured as a square wave of fixed frequency, or a rising-edge signal can be issued at any time to trigger the cameras, ensuring that the cameras are triggered synchronously.
Specifically, in time synchronization an ordinary IMU that does not support hardware triggering is used for cost reasons; taking advantage of the IMU's high output frequency, it is time-synchronized by nearest-timestamp matching with the hard-triggered cameras and the GPS-timestamped laser radar.
As a further technical solution, achieving spatial consistency of the multiple sensors further comprises: first calibrating the intrinsic parameters of each camera and the IMU, and then performing joint extrinsic calibration of the sensors using the calibrated intrinsics.
Specifically, the intrinsic parameters of each sensor need to be calibrated during spatial synchronization. The IMU's bias (zero drift) and white noise are calibrated mainly from data recorded while the vehicle is stationary; the camera intrinsics are calibrated with Zhang's calibration method from calibration-board images taken at different angles, obtaining each camera's focal length, principal point offset and distortion coefficients in pixel units.
As a further technical solution, performing joint extrinsic calibration of the sensors using the calibrated intrinsics further comprises: calibrating the color camera against the IMU first, and then calibrating the color camera against the depth camera, against the multispectral camera, against the thermal imaging camera and against the laser radar.
Specifically, the extrinsic parameters between the sensors need to be calibrated during spatial synchronization. The IMU is fixed at the center of the unmanned vehicle body; calibrating the color camera against the IMU places the camera in the body-center coordinate system, and the remaining sensors are calibrated against the color camera, so that with the color camera as the intermediate medium all sensors are expressed in the body-center coordinate system.
As a further technical solution, the data registration and alignment module further comprises: first aligning the depth image with the color image and aligning the laser radar with the color image using the depth information; then mapping the aligned depth information onto the other camera images so that all camera data are aligned, forming high-dimensional data containing color, depth, thermal and multispectral information. Registration and alignment of the data requires the depth information of the depth camera and the laser radar to align the different cameras with each other.
As a further technical solution, the data fusion module comprises: performing pixel-level data fusion on the registered and aligned data through the deep-learning DIDFuse fusion network, forming sensor data containing color, depth, thermal, multispectral and three-dimensional point cloud information. Using the registered and aligned data, pixel-level fusion through the DIDFuse algorithm yields "super sensor" data in which color, depth, thermal infrared, multispectral and three-dimensional point cloud information can all be seen, providing more useful information for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.
Furthermore, the DIDFuse fusion network can accept pixel-aligned input images of any size. The network takes its input in a U-net style, so the data retain the original information through the convolutions, and it consists of an Encoder network and a Decoder network. In training, the Encoder extracts image information from the input images and decomposes it into foreground and background, and the Decoder restores the images. In fusion testing, the two images to be fused are each passed through the Encoder to extract image features, the features are combined with a chosen fusion strategy, and the Decoder finally reconstructs the fused image.
According to another aspect of the present specification, there is provided a multi-sensor registration fusion method under complex terrain, including:
data input: the data input of each sensor is unified under the ROS operating system, including data from a laser radar, a color camera, a depth camera, a thermal infrared camera, a multispectral camera and an IMU;
time synchronization: the color camera, depth camera, thermal infrared camera and multispectral camera are synchronized by hardware triggering, the laser radar is synchronized by GPS timestamps, and the IMU, thanks to its high frame rate, is synchronized with the other sensors by nearest-timestamp matching;
spatial synchronization: the intrinsic parameters of each camera and the IMU are calibrated first, and joint extrinsic calibration of the sensors is then performed using the calibrated intrinsics;
registration and alignment of data: the depth image and the laser radar are first aligned with the color image using the calibrated extrinsic parameters, and the thermal imaging camera and multispectral camera are then aligned with the color image using the depth information;
fusion of data: after the data are registered and aligned, the data from each sensor are fused at the pixel level using deep learning.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention provides a system in which the data streams of the multiple sensors are time- and space-synchronized to achieve temporal and spatial consistency; the synchronized sensor data are then registered and aligned at the pixel level based on depth information; finally, data fusion between the different sensors is realized by means of deep learning, forming sensor data in which color, depth, thermal infrared, multispectral and three-dimensional point cloud information can all be seen, and providing more useful information for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.
(2) The invention provides a method in which the data of the multiple sensors are first brought into a unified system so that they can be published in a unified format; the sensors are then synchronized in time and space to achieve temporal and spatial consistency; the synchronized multi-sensor data are then registered and aligned, and data fusion is performed after registration and alignment, achieving pixel-level fusion of the data from each sensor and yielding high-dimensional data containing color, depth, thermal, multispectral and three-dimensional point cloud information.
Drawings
Fig. 1 is a schematic diagram of a multi-sensor registration fusion system under complex terrain according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a data fusion algorithm according to an embodiment of the present invention.
Fig. 3 is a flowchart of a multi-sensor registration fusion method under complex terrain according to an embodiment of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort fall within the scope of protection of the invention.
Example 1
This embodiment provides a multi-sensor registration fusion system under complex terrain which, as shown in fig. 1, includes a multi-sensor module, a data synchronization module, a data registration and alignment module and a data fusion module, connected in sequence.
The multi-sensor module comprises a laser radar, a color camera, a depth camera, a thermal infrared camera, a multispectral camera and an IMU. Because the sensors are of different types and have different drivers, the SDK data-acquisition code of each sensor is first integrated into the ROS operating system and the data are published as ROS messages, which makes subsequent synchronization, registration, fusion and localization convenient.
The data synchronization module is used for time synchronization and spatial synchronization of the multiple sensors.
In the time synchronization: the laser radar is synchronized using GPS timestamps; the color camera, depth camera, thermal infrared camera and multispectral camera are synchronized by hardware triggering; and the IMU is synchronized with the laser radar and the cameras by nearest-timestamp matching.
The laser radar uses GPS timestamp synchronization: the Xavier industrial personal computer writes the current UTC time into the laser radar's internal clock, so that the laser radar timestamps stay consistent with UTC time.
The exposure time of each camera is set in advance, and the trigger mode is set to hardware trigger on the rising edge. The trigger signal is sent from an ordinary IO port of the Xavier industrial personal computer; it can be configured as a square wave of fixed frequency, or a rising-edge signal can be issued at any time to trigger the cameras, ensuring that the cameras are triggered synchronously.
For cost reasons an ordinary IMU that does not support hardware triggering is used; taking advantage of the IMU's high output frequency, it is time-synchronized by nearest-timestamp matching with the hard-triggered cameras and the GPS-timestamped laser radar.
In the spatial synchronization: the intrinsic parameters of each camera and the IMU are calibrated first, and joint extrinsic calibration of the sensors is then performed using the calibrated intrinsics.
In spatial synchronization, the intrinsic calibration of the IMU mainly calibrates its bias (zero drift) and white noise from data recorded at rest. The camera intrinsics are calibrated with Zhang's calibration method from calibration-board images taken at different angles, obtaining each camera's focal length, principal point offset and distortion coefficients in pixel units.
After the intrinsics are calibrated, the color camera is calibrated against the IMU first, followed by calibration of the color camera against the depth camera, against the multispectral camera, against the thermal imaging camera and against the laser radar. The IMU is fixed at the center of the vehicle body; calibrating the color camera against the IMU places the camera in the body-center coordinate system, and the remaining sensors are calibrated against the color camera, so that with the color camera as the intermediate medium all sensors are expressed in the body-center coordinate system.
The data registration and alignment module needs the depth information of the depth camera and the laser radar to align the data of the different cameras with each other.
Data registration and alignment first aligns the depth image with the color image and the laser radar with the color image using the depth information; the aligned depth information is then mapped onto the other camera images so that all camera data are aligned, forming high-dimensional data containing color, depth, thermal and multispectral information.
The data fusion module performs pixel-level data fusion on the registered and aligned data through the deep-learning DIDFuse fusion network, forming sensor data containing color, depth, thermal, multispectral and three-dimensional point cloud information. Using the registered and aligned data, pixel-level fusion through the DIDFuse algorithm yields "super sensor" data in which color, depth, thermal infrared, multispectral and three-dimensional point cloud information can all be seen, providing more useful information for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.
As shown in fig. 2, the DIDFuse fusion network can accept pixel-aligned input images of any size. The network takes its input in a U-net style, so the data retain the original information through the convolutions, and it consists of an Encoder network and a Decoder network. In training, the Encoder extracts image information from the input images and decomposes it into foreground and background, and the Decoder restores the images. In fusion testing, the two images to be fused are each passed through the Encoder to extract image features, the features are combined with a chosen fusion strategy, and the Decoder finally reconstructs the fused image.
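For illustration only, the following is a highly simplified PyTorch sketch of the "encode each image separately, fuse the features, then decode" pattern described above; it is not the published DIDFuse architecture, and the layer widths and the additive fusion strategy are assumptions.

```python
# Highly simplified sketch of the "encode separately, fuse, decode" pattern.
# This is NOT the published DIDFuse network; layer widths and the additive
# fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, f):
        return self.net(f)

encoder, decoder = Encoder(), Decoder()

# Fusion test: two pixel-aligned single-channel images (e.g. grayscale + thermal).
img_a = torch.rand(1, 1, 256, 256)
img_b = torch.rand(1, 1, 256, 256)

feat_a = encoder(img_a)          # extract features from each image separately
feat_b = encoder(img_b)
fused_feat = feat_a + feat_b     # one possible fusion strategy (assumption)
fused_img = decoder(fused_feat)  # reconstruct the fused image
print(fused_img.shape)           # torch.Size([1, 1, 256, 256])
```

Because the sketch is fully convolutional, it accepts pixel-aligned inputs of any spatial size, which mirrors the property of the fusion network described above.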
In this embodiment, the data streams of the multiple sensors are time- and space-synchronized to achieve temporal and spatial consistency; the synchronized sensor data are then registered and aligned at the pixel level based on depth information; finally, data fusion between the different sensors is realized by means of deep learning, forming sensor data in which color, depth, thermal infrared, multispectral and three-dimensional point cloud information can all be seen, and providing more useful information for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.
This embodiment builds a complete and unified multi-sensor system and adopts a deep-learning-based fusion algorithm to accomplish the multi-sensor registration fusion task under complex terrain.
Example 2
As shown in fig. 3, the present embodiment provides a multi-sensor registration fusion method under complex terrain, including the following steps:
1. input of data
First, the data input is unified. Since the sensors in this embodiment include a color camera, a depth camera, a thermal imaging camera, a multispectral camera, a laser radar and an IMU, each with its own driver, the SDK data-acquisition code of each sensor must be integrated into the ROS operating system: each optical camera acquires its data stream through the corresponding SDK, the stream is converted to the OpenCV image format, cv_bridge converts it to the ROS message format, and finally the data acquired by each sensor are published separately in a unified way as ROS topics.
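As a minimal sketch of this unification step, assuming ROS 1 with rospy and cv_bridge, and with a hypothetical grab_frame() standing in for the vendor SDK call, one camera's acquisition node might look like the following; the topic name and frame rate are illustrative.

```python
# Minimal sketch: wrap one camera's SDK output as a ROS topic.
# Assumptions: ROS 1 (rospy) and cv_bridge are installed; grab_frame() is a
# hypothetical placeholder for the vendor SDK call that returns an OpenCV
# BGR image (numpy array).
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def grab_frame():
    """Placeholder for the camera SDK: return one OpenCV BGR frame."""
    return np.zeros((480, 640, 3), dtype=np.uint8)  # replace with the SDK call

def main():
    rospy.init_node("color_camera_node")
    pub = rospy.Publisher("/camera/color/image_raw", Image, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(30)  # publish at the camera frame rate (illustrative)
    while not rospy.is_shutdown():
        frame = grab_frame()                                # SDK -> OpenCV image
        msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")  # OpenCV -> ROS message
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```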
2. Time consistency
This part combines GPS timestamp synchronization, hardware trigger synchronization and nearest-timestamp alignment to achieve temporal consistency across all sensors.
The laser radar uses GPS timestamp synchronization: the industrial personal computer supplies the GPS and PPS signals to the laser radar, and on the rising edge of the first PPS pulse after the GPS message is received the current UTC time is written into the laser radar's internal clock, so that the laser radar timestamps stay consistent with UTC time.
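A minimal sketch of the UTC-time message is given below; it assumes the laser radar accepts standard NMEA GPRMC sentences over a serial line alongside the PPS pulse, and the serial port name, baud rate and dummy position fields are illustrative only.

```python
# Minimal sketch: emit an NMEA GPRMC sentence carrying the current UTC time,
# which many lidars use (together with a PPS pulse) to set their internal clock.
# Assumptions: the lidar accepts GPRMC over a serial line; the port name,
# baud rate and fixed dummy position are illustrative values.
import datetime
import time
import serial  # pyserial

def nmea_checksum(body: str) -> str:
    csum = 0
    for ch in body:
        csum ^= ord(ch)           # XOR of all characters between '$' and '*'
    return f"{csum:02X}"

def build_gprmc(now: datetime.datetime) -> str:
    body = "GPRMC,{},A,0000.000,N,00000.000,E,0.0,0.0,{},,,A".format(
        now.strftime("%H%M%S.00"), now.strftime("%d%m%y"))
    return f"${body}*{nmea_checksum(body)}\r\n"

port = serial.Serial("/dev/ttyTHS0", baudrate=9600)  # illustrative UART
while True:
    # Send the UTC time once per second; the PPS pulse itself comes from
    # separate hardware and is not shown here.
    port.write(build_gprmc(datetime.datetime.utcnow()).encode("ascii"))
    time.sleep(1.0)
```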
The optical cameras, namely the color camera, depth camera, thermal imaging camera and multispectral camera, are synchronized by external hardware triggering: the camera exposure time is set, and the trigger mode is set to hardware trigger on the rising edge. The trigger signal is sent from an ordinary IO port of the industrial personal computer; it can be configured as a square wave of fixed frequency, or a rising-edge signal can be issued at any time to trigger the cameras externally, ensuring that the cameras are triggered synchronously.
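A minimal sketch of such a fixed-frequency trigger signal is shown below, assuming the IO port is driven through the Jetson.GPIO library on the Xavier; the pin number and the 10 Hz rate are illustrative, not taken from this embodiment.

```python
# Minimal sketch: drive a fixed-frequency square wave on one GPIO pin so
# that all hard-triggered cameras expose on the same rising edge.
# Assumptions: Jetson.GPIO is available on the Xavier industrial PC;
# TRIGGER_PIN and the 10 Hz rate are illustrative values only.
import time
import Jetson.GPIO as GPIO

TRIGGER_PIN = 18      # illustrative board pin wired to all camera trigger inputs
TRIGGER_HZ = 10.0     # illustrative trigger frequency

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    half_period = 0.5 / TRIGGER_HZ
    while True:
        GPIO.output(TRIGGER_PIN, GPIO.HIGH)  # rising edge -> cameras expose
        time.sleep(half_period)
        GPIO.output(TRIGGER_PIN, GPIO.LOW)
        time.sleep(half_period)
finally:
    GPIO.cleanup()
```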
The IMU is synchronized with the other sensors by nearest-timestamp matching. For cost reasons an ordinary IMU that does not support external triggering is used; taking advantage of the IMU's high output frequency, the message_filters message synchronizer in ROS is used to time-synchronize the IMU with the hard-triggered cameras and the GPS-timestamped laser radar.
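A minimal sketch of this nearest-timestamp synchronization with the ROS message_filters package follows; the topic names and the 10 ms tolerance window are illustrative.

```python
# Minimal sketch: approximate-time synchronization of IMU, camera and lidar
# messages in ROS using message_filters.
# Assumptions: topic names and the 10 ms slop window are illustrative.
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu, PointCloud2

def synced_callback(img_msg, imu_msg, cloud_msg):
    # All three messages fall within the allowed timestamp window.
    rospy.loginfo("synced stamps: %s %s %s",
                  img_msg.header.stamp, imu_msg.header.stamp,
                  cloud_msg.header.stamp)

rospy.init_node("nearest_timestamp_sync")
img_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
imu_sub = message_filters.Subscriber("/imu/data", Imu)
lidar_sub = message_filters.Subscriber("/lidar/points", PointCloud2)

# queue_size=50, slop=0.01 s: match the closest stamps within 10 ms.
sync = message_filters.ApproximateTimeSynchronizer(
    [img_sub, imu_sub, lidar_sub], queue_size=50, slop=0.01)
sync.registerCallback(synced_callback)
rospy.spin()
```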
3. Spatial consistency
This part performs spatial synchronization, i.e., intrinsic and extrinsic calibration of the sensors. Intrinsic calibration determines the transformation between a single sensor and the world coordinate system, while extrinsic calibration determines the transformations between different sensors in the world coordinate system; the extrinsic calibration of a sensor depends on accurate intrinsics.
In spatial synchronization, the intrinsic parameters of each sensor need to be calibrated. The IMU intrinsics are calibrated with the six-face method, mainly calibrating the bias (zero drift) and white noise from data recorded at rest. The camera intrinsics are estimated with Zhang's calibration method: feature points are extracted from calibration-board images taken at different angles, the intrinsics are estimated under the ideal distortion-free assumption, the radial distortion coefficients are then estimated by least squares, and finally a maximum-likelihood optimization refines the estimate to improve accuracy. The camera intrinsic calibration mainly yields each camera's focal length, principal point offset and distortion coefficients in pixel units.
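A minimal sketch of the camera intrinsic calibration using OpenCV's implementation of Zhang's method is given below; the chessboard geometry, square size and image folder are illustrative assumptions.

```python
# Minimal sketch: camera intrinsic calibration (Zhang's method) with OpenCV.
# Assumptions: a 9x6 inner-corner chessboard with 25 mm squares and a folder
# of board images; these values are illustrative, not from the patent.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row/column (illustrative)
SQUARE_SIZE = 0.025     # meters (illustrative)

# 3D corner coordinates of the board in its own plane (Z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
gray = None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K holds focal length and principal point (pixels); dist holds distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsics:\n", K, "\ndistortion:", dist.ravel())
```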
In spatial synchronization, the extrinsic parameters between the sensors also need to be calibrated. First, the color camera and the IMU are calibrated using the calibrated intrinsics; camera-IMU extrinsic calibration with the Kalibr tool can be roughly divided into three steps: (1) coarsely estimating the time delay between the camera and the IMU; (2) obtaining the initial rotation between the IMU and the camera, the gravitational acceleration and the gyroscope bias; (3) a large joint optimization including the corner reprojection error and the IMU accelerometer and gyroscope measurement errors. Next, the color camera is calibrated against the depth camera, against the multispectral camera and against the thermal imaging camera; the thermal imaging camera requires a special heatable calibration board, and these camera-to-camera extrinsic calibrations follow the same principle as binocular (stereo) camera calibration. Finally, the color camera and the laser radar are calibrated with an automatic tool, which finds nine pairs of corresponding points in the image and the point cloud and solves for the relative pose of the camera and the radar by optimization. The IMU is fixed at the center of the vehicle body; calibrating the color camera against the IMU places the camera in the body-center coordinate system, and the remaining sensors are calibrated against the color camera, so that with the color camera as the intermediate medium all sensors are expressed in the body-center coordinate system.
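Chaining the pairwise extrinsics through the color camera amounts to simple pose composition; a minimal sketch with 4x4 homogeneous transforms follows (the matrix names are illustrative, and identity matrices stand in for the real calibration results).

```python
# Minimal sketch: propagate extrinsics through the color camera so that every
# sensor ends up in the vehicle body-center (IMU) coordinate system.
# T_a_b denotes the 4x4 homogeneous transform taking points from frame b to
# frame a; the matrices here would come from the pairwise calibrations.
import numpy as np

def compose(T_a_b, T_b_c):
    """Points in frame c -> frame a."""
    return T_a_b @ T_b_c

# Pairwise calibration results (placeholders; identity for illustration).
T_body_color    = np.eye(4)  # color camera -> body center (from camera-IMU calibration)
T_color_depth   = np.eye(4)  # depth camera -> color camera
T_color_thermal = np.eye(4)  # thermal camera -> color camera
T_color_lidar   = np.eye(4)  # laser radar -> color camera

# Chain through the color camera as the intermediate medium.
T_body_depth   = compose(T_body_color, T_color_depth)
T_body_thermal = compose(T_body_color, T_color_thermal)
T_body_lidar   = compose(T_body_color, T_color_lidar)

print(T_body_lidar)
```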
4. Registration alignment of data
This part uses the depth information of the depth camera and the laser radar to align the data of the different cameras with each other.
First, the depth image is back-projected into a three-dimensional point cloud in the depth camera coordinate system using the depth camera intrinsics; this point cloud is transformed into the color camera coordinate system using the extrinsics between the depth camera and the color camera; finally, the color camera intrinsics project it into the color camera pixel coordinate system for pixel alignment with the color image. The laser radar is aligned with the color image in the same way.
Second, the aligned depth information is transformed into the other camera coordinate systems through the extrinsics and then projected into the corresponding camera pixel coordinate systems through the intrinsics of each camera, so that the depth is aligned with all camera data, forming high-dimensional data containing color, depth, thermal and multispectral information.
5. Fusion of data
This part performs pixel-level data fusion on the registered and aligned data through the deep-learning DIDFuse fusion network, forming "super sensor" data in which color, depth, thermal, multispectral and three-dimensional point cloud information can all be seen, and providing more useful information for three-dimensional reconstruction, perception, path planning and other tasks under complex terrain.
This embodiment first acquires the multi-sensor data and synchronizes the sensors in time and space to achieve temporal and spatial consistency, then performs pixel-level registration based on depth information with the synchronized sensor data, and finally realizes data fusion between the different sensors on the registered and aligned data by means of deep learning.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.

Claims (8)

1. A multi-sensor registration fusion system under complex terrain, comprising a multi-sensor module, a data synchronization module, a data registration and alignment module and a data fusion module, connected in sequence, wherein:
the multi-sensor module is used for acquiring data from the multiple sensors, including data from a laser radar, a color camera, a depth camera, a thermal infrared camera, a multispectral camera and an IMU;
the data synchronization module is used for achieving temporal and spatial consistency across the multiple sensors;
the data registration and alignment module is used, after data synchronization, for registering and aligning the laser radar data, the color image, the depth image, the thermal image and the multispectral image using the depth information of the depth camera and the laser radar; and
the data fusion module is used, after data registration and alignment, for performing pixel-level fusion of the sensor images using deep learning.
2. The multi-sensor registration fusion system under complex terrain of claim 1, wherein the multi-sensor module further comprises a data preprocessing unit for integrating the SDK data-acquisition code of each sensor into the ROS operating system and publishing the data to the data synchronization module in the form of ROS messages.
3. The multi-sensor registration fusion system under complex terrain of claim 1, wherein achieving temporal consistency of the multiple sensors further comprises: synchronizing the laser radar using GPS timestamps; synchronizing the color camera, depth camera, thermal infrared camera and multispectral camera by hardware triggering; and synchronizing the IMU with the laser radar and the cameras by nearest-timestamp matching.
4. The multi-sensor registration fusion system under complex terrain of claim 1, wherein achieving spatial consistency of the multiple sensors further comprises: first calibrating the intrinsic parameters of each camera and the IMU, and then performing joint extrinsic calibration of the sensors using the calibrated intrinsics.
5. The multi-sensor registration fusion system under complex terrain of claim 4, wherein performing joint extrinsic calibration of the sensors using the calibrated intrinsics further comprises: calibrating the color camera against the IMU first, and then calibrating the color camera against the depth camera, against the multispectral camera, against the thermal imaging camera and against the laser radar.
6. The multi-sensor registration fusion system under complex terrain of claim 1, wherein the data registration and alignment module further comprises: first aligning the depth image with the color image and aligning the laser radar with the color image using the depth information; and then mapping the aligned depth information onto the other camera images so that all camera data are aligned, forming high-dimensional data containing color, depth, thermal and multispectral information.
7. The multi-sensor registration fusion system under complex terrain of claim 1, wherein the data fusion module comprises: performing pixel-level data fusion on the registered and aligned data through the deep-learning DIDFuse fusion network to form sensor data containing color, depth, thermal, multispectral and three-dimensional point cloud information.
8. A multi-sensor registration fusion method under complex terrain, comprising the following steps:
data input: the data input of each sensor is unified under the ROS operating system, including data from a laser radar, a color camera, a depth camera, a thermal infrared camera, a multispectral camera and an IMU;
time synchronization: the color camera, depth camera, thermal infrared camera and multispectral camera are synchronized by hardware triggering, the laser radar is synchronized by GPS timestamps, and the IMU, thanks to its high frame rate, is synchronized with the other sensors by nearest-timestamp matching;
spatial synchronization: the intrinsic parameters of each camera and the IMU are calibrated first, and joint extrinsic calibration of the sensors is then performed using the calibrated intrinsics;
registration and alignment of data: the depth image and the laser radar are first aligned with the color image using the calibrated extrinsic parameters, and the thermal imaging camera and multispectral camera are then aligned with the color image using the depth information;
fusion of data: after the data are registered and aligned, the data from each sensor are fused at the pixel level using deep learning.
CN202111130555.4A 2021-09-26 2021-09-26 Multi-sensor registration fusion system and method under complex terrain Pending CN113947134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111130555.4A CN113947134A (en) 2021-09-26 2021-09-26 Multi-sensor registration fusion system and method under complex terrain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111130555.4A CN113947134A (en) 2021-09-26 2021-09-26 Multi-sensor registration fusion system and method under complex terrain

Publications (1)

Publication Number Publication Date
CN113947134A true CN113947134A (en) 2022-01-18

Family

ID=79328690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111130555.4A Pending CN113947134A (en) 2021-09-26 2021-09-26 Multi-sensor registration fusion system and method under complex terrain

Country Status (1)

Country Link
CN (1) CN113947134A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597074A (en) * 2023-04-18 2023-08-15 五八智能科技(杭州)有限公司 Method, system, device and medium for multi-sensor information fusion
CN116165958A (en) * 2023-04-25 2023-05-26 舜泰汽车有限公司 Automatic driving system of amphibious special unmanned platform
CN117113284A (en) * 2023-10-25 2023-11-24 南京舜云智慧城市科技有限公司 Multi-sensor fusion data processing method and device and multi-sensor fusion method
CN117113284B (en) * 2023-10-25 2024-01-26 南京舜云智慧城市科技有限公司 Multi-sensor fusion data processing method and device and multi-sensor fusion method

Similar Documents

Publication Publication Date Title
Heng et al. Self-calibration and visual slam with a multi-camera system on a micro aerial vehicle
CN113947134A (en) Multi-sensor registration fusion system and method under complex terrain
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN112197770B (en) Robot positioning method and positioning device thereof
US10109104B2 (en) Generation of 3D models of an environment
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN102313536B (en) Method for barrier perception based on airborne binocular vision
WO2019241782A1 (en) Deep virtual stereo odometry
CN106055104A (en) Methods and apparatus for providing snapshot truthing system for tracker
CN108917753B (en) Aircraft position determination method based on motion recovery structure
CN109917419B (en) Depth filling dense system and method based on laser radar and image
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
WO2021150784A1 (en) System and method for camera calibration
CN112861660A (en) Laser radar array and camera synchronization device, method, equipment and storage medium
US20210295537A1 (en) System and method for egomotion estimation
He et al. Relative motion estimation using visual–inertial optical flow
CN116430403A (en) Real-time situation awareness system and method based on low-altitude airborne multi-sensor fusion
CN113587934A (en) Robot, indoor positioning method and device and readable storage medium
CN116684740A (en) Perception training data generation method, device, computer equipment and storage medium
CN117115271A (en) Binocular camera external parameter self-calibration method and system in unmanned aerial vehicle flight process
Liu et al. Integrated velocity measurement algorithm based on optical flow and scale-invariant feature transform
WO2023283929A1 (en) Method and apparatus for calibrating external parameters of binocular camera
KR102225321B1 (en) System and method for building road space information through linkage between image information and position information acquired from a plurality of image sensors
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
Wang et al. Slam-based cooperative calibration for optical sensors array with gps/imu aided

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination