CN116597074A - Method, system, device and medium for multi-sensor information fusion - Google Patents

Method, system, device and medium for multi-sensor information fusion

Info

Publication number
CN116597074A
Authority
CN
China
Prior art keywords
point cloud
data
camera
cloud data
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310412995.1A
Other languages
Chinese (zh)
Inventor
姜峰
朱志远
黄志勇
查长海
陈鹏
徐天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuba Intelligent Technology Hangzhou Co ltd
Original Assignee
Wuba Intelligent Technology Hangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuba Intelligent Technology Hangzhou Co ltd filed Critical Wuba Intelligent Technology Hangzhou Co ltd
Priority to CN202310412995.1A
Publication of CN116597074A
Pending legal-status Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005 Tree description, e.g. octree, quadtree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a method, a system, a device and a medium for multi-sensor information fusion. The method comprises the following steps: acquiring image data and point cloud data of a current scene through multiple sensors; calculating the mapping relationship between the point cloud data and the camera imaging plane according to the camera internal and external parameters calibrated in advance for each sensor, screening out the current-frame point cloud data within the camera view angle range according to the mapping relationship, and projecting the current-frame point cloud data within the camera view angle range into the image data according to the internal and external parameters, so as to obtain point cloud data containing multiple kinds of information; and constructing a high-dimensional octree map according to the three-dimensional coordinates of the point cloud data containing multiple kinds of information. The application solves the problems in the related art that a large amount of input data is difficult to process and that data fusion performance is low when multi-sensor data are fused, improves data fusion performance, and provides more accurate information for operations such as robot positioning, navigation and environment perception.

Description

Method, system, device and medium for multi-sensor information fusion
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, a system, an apparatus, and a medium for multi-sensor information fusion.
Background
Multi-sensor data fusion is a technology for the centralized processing and integration of data from various sensors, and it is a vital link in robotics. A robot mainly senses its surroundings through various sensors, such as a visual RGB camera, an RGBD camera, a laser radar and an infrared camera, so that it can acquire information about the surrounding environment across multiple dimensions. Multi-sensor data fusion is widely applied in fields such as positioning and navigation, environment operation and environment perception.
In the related art, multi-sensor data fusion mainly revolves around the preprocessing and packaging of data. The typical approach is to input the collected sensor data into an upper computer, preprocess the different sensor data, package the preprocessed data belonging to the same frame, and send it to the algorithm end for subsequent processing. However, this approach has the following drawbacks: 1. for a large amount of input data, it is difficult for subsequent algorithms to process such a volume of data at the same time, even if the data have been fused; 2. the characteristics of the various sensor data are not analysed and the data are merely forwarded, so true fusion of the data is difficult to achieve.
Accordingly, there is a need to propose a corresponding solution to the problems existing in the prior art.
Disclosure of Invention
The embodiment of the application provides a method, a system, a device and a medium for multi-sensor information fusion, which at least solve the problems that a large amount of input data is difficult to process during multi-sensor data fusion and the data fusion performance is low in the related technology.
In a first aspect, an embodiment of the present application provides a method for multi-sensor information fusion, where the method includes:
acquiring image data and point cloud data of a current scene through multiple sensors;
calculating the mapping relation between the point cloud data and a camera imaging plane according to camera internal parameters and external parameters calibrated in advance in a sensor, screening out current frame point cloud data in a camera view angle range according to the mapping relation, and projecting the current frame point cloud data in the camera view angle range into the image data according to internal and external parameters to obtain point cloud data containing various information;
and constructing a high-dimensional octree map according to the three-dimensional coordinates of the point cloud data containing various information.
In some of these embodiments, the multisensor includes a lidar, an RGB camera, an infrared camera, and a multispectral camera.
In some of these embodiments, acquiring image data and point cloud data of the current scene by the multi-sensor includes:
acquiring point cloud data of a current scene through the laser radar;
and respectively acquiring an RGB image, an infrared image and a multispectral image of the current scene through the RGB camera, the infrared camera and the multispectral camera.
In some of these embodiments, prior to acquiring image data and point cloud data of a current scene by the multisensor, the method includes:
the front end of the robot chassis is sequentially provided with a plurality of sensors according to a preset distance, and the optical centers of the sensors are on the same vertical line and parallel to the ground.
In some embodiments, a specific calculation formula of the mapping relationship between the point cloud data and the camera imaging plane is as follows:
u = f_x · (x / z) + c_x ,   v = f_y · (y / z) + c_y
wherein (u, v) are the horizontal and vertical pixel coordinates in the camera image, f_x and f_y are the focal lengths of the camera in the lateral and longitudinal directions, c_x and c_y are the camera's lateral and longitudinal principal-point (optical center) coordinates, and (x, y, z) is the actual spatial coordinate position of the point cloud point in the camera coordinate system.
In some embodiments, projecting the current frame point cloud data within the camera view angle range into the image data according to the internal and external parameters, and obtaining the point cloud data containing various information comprises:
when the image data is an RGB image, mapping each point in the point cloud data to a pixel point of the image, reading RGB color values according to pixel coordinates, and writing the RGB color values into an original point cloud;
when the image data is an infrared image, mapping gray data of the infrared image into RGB data, and writing the infrared image data into an original point cloud through the RGB data;
when the image data is a multispectral image, mapping 9-dimensional gray data of the multispectral image into 9-dimensional RGB data, and writing the multispectral image data into an original point cloud through the RGB data.
In some of these embodiments, prior to constructing a high-dimensional octree map from three-dimensional coordinates of the point cloud data containing the plurality of information, the method comprises:
and modifying the structure of the octree map according to the data structure of the point cloud data containing various information.
In a second aspect, an embodiment of the present application provides a system for multi-sensor information fusion, where the system includes:
the acquisition module is used for acquiring image data and point cloud data of the current scene through a plurality of sensors;
the information fusion module is used for calculating the mapping relation between the point cloud data and the camera imaging plane according to camera internal parameters and external parameters calibrated in advance in the sensor, screening out current frame point cloud data in the camera view angle range according to the mapping relation, and projecting the current frame point cloud data in the camera view angle range into the image data according to internal and external parameters to obtain point cloud data containing various information;
and the construction module is used for constructing a high-dimensional octree map according to the three-dimensional coordinates of the point cloud data containing various information.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in the first aspect above.
Compared with the related art, the method for multi-sensor information fusion provided by the embodiments of the application acquires the image data and point cloud data of the current scene through multiple sensors; calculates the mapping relationship between the point cloud data and the camera imaging plane according to the camera internal and external parameters calibrated in advance for each sensor, screens out the current-frame point cloud data within the camera view angle range according to the mapping relationship, and projects that point cloud data into the image data according to the internal and external parameters to obtain point cloud data containing multiple kinds of information; and constructs a high-dimensional octree map according to the three-dimensional coordinates of that point cloud data. This solves the problems in the related art that a large amount of input data is difficult to process during multi-sensor data fusion and that data fusion performance is low, improves data fusion performance, and provides more accurate information for operations such as robot positioning, navigation and environment perception.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method of multi-sensor information fusion according to an embodiment of the present application;
FIG. 2 is a block diagram of a system for multi-sensor information fusion in accordance with an embodiment of the present application;
fig. 3 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated below with reference to the accompanying drawings and embodiments in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort fall within the scope of protection of the present application. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The embodiment provides a method for multi-sensor information fusion, and fig. 1 is a flowchart of the method for multi-sensor information fusion according to an embodiment of the application, as shown in fig. 1, and the flowchart includes the following steps:
step S101, acquiring image data and point cloud data of a current scene through multiple sensors.
Before the image data and point cloud data of the current scene are acquired, the multiple sensors need to be mounted in sequence at the front end of the robot chassis at preset distances, with the optical centers of the sensors on the same vertical line and parallel to the ground. Preferably, in this embodiment, an RGB camera 80-90 cm above the ground, an infrared camera 90-100 cm above the ground, a multispectral camera 100-110 cm above the ground and a 16-line laser radar 120-130 cm above the ground are installed from low to high in the vertical direction at the front end of the robot chassis, so that the optical centers of the three cameras lie on the same vertical line and the radar center lies on that same vertical line as well. The internal and external parameters of each sensor, namely the 16-line laser radar, the RGB camera, the infrared camera and the multispectral camera, are then calibrated.
Then, the indoor environment is scanned with the 16-line laser radar to obtain the point cloud data P_t of the current frame, where t is the current moment; at the same time the RGB camera, the infrared camera and the multispectral camera are driven to acquire image data of the current scene, giving an RGB image C_t, an infrared image I_t and a multispectral image M_t respectively.
Step S102, calculating the mapping relation between the point cloud data and the camera imaging plane according to the camera internal parameters and external parameters calibrated in advance in the sensor, screening out the current frame point cloud data in the camera view angle range according to the mapping relation, and projecting the current frame point cloud data in the camera view angle range into the image data according to the internal parameters and the external parameters to obtain the point cloud data containing various information.
In one embodiment, step S102 includes substeps S1-S3.
S1: calculate the mapping relationship between each point in the point cloud data and the camera imaging plane according to the camera internal and external parameters calibrated in advance for each sensor, namely the RGB camera, the infrared camera and the multispectral camera. The specific calculation is given by formula (1):
u = f_x · (x / z) + c_x ,   v = f_y · (y / z) + c_y        (1)
wherein (u, v) are the horizontal and vertical pixel coordinates in the camera image, f_x and f_y are the focal lengths of the camera in the lateral and longitudinal directions, c_x and c_y are the camera's lateral and longitudinal principal-point (optical center) coordinates, and (x, y, z) is the actual spatial coordinate position of the point cloud point, expressed in the camera coordinate system after applying the external parameters.
S2: through the above mapping formula, each point of the point cloud in space can be projected onto the imaging plane. If the projection result does not fall within the image size range, the point lies outside the RGB camera's view angle range and is screened out of the current-frame point cloud; the remaining points constitute the point cloud data within the camera view angle range (a code sketch of this projection and screening step is given after step S3 below).
And S3, projecting the point cloud data of the current frame within the field angle range of the camera into the image data according to the internal and external parameters to obtain the point cloud data containing various information.
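As an illustration of steps S1 and S2, the following is a minimal Python/NumPy sketch of the projection and screening, assuming the external parameters are supplied as a rotation matrix R and translation vector t and the point cloud is an N x 3 array; the function and variable names are illustrative and not taken from the patent:

```python
import numpy as np

def lidar_to_camera(points_lidar, R, t):
    """Transform an N x 3 array of laser-radar points into the camera frame
    using the pre-calibrated external parameters (rotation R, translation t)."""
    return points_lidar @ R.T + t

def project_and_screen(points_cam, fx, fy, cx, cy, img_w, img_h):
    """Apply the pinhole mapping of formula (1) and keep only the points whose
    projection falls inside the image, i.e. within the camera view angle range."""
    pts = points_cam[points_cam[:, 2] > 0]    # points behind the camera cannot be imaged
    u = fx * pts[:, 0] / pts[:, 2] + cx       # horizontal pixel coordinate
    v = fy * pts[:, 1] / pts[:, 2] + cy       # vertical pixel coordinate
    keep = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return u[keep].astype(int), v[keep].astype(int), pts[keep]
```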
Preferably, when the original image data is an RGB image, each point in the current frame point cloud data within the camera view angle range is mapped onto a pixel point of the image, and an RGB color value is read according to pixel coordinates and written into the original point cloud, thereby obtaining a point cloud containing RGB color information.
However, since the point cloud data structure contains no fields for infrared or multispectral data, the point cloud data structure must first be given some simple processing before the infrared image information can be written into the original point cloud. Specifically, when the original image data is an infrared image, because the raw data of the infrared image is gray data, the gray data is first mapped into RGB data, and the infrared image data is then written into the original point cloud through that RGB data, following the same method as for the RGB image. Similarly, when the original image data is a multispectral image, because the raw data of the multispectral image is 9-dimensional gray data, the 9-dimensional gray data is mapped into 9-dimensional RGB data, and the multispectral image data is then written into the original point cloud through that RGB data, again following the method used for the RGB image (a sketch of this gray-to-RGB mapping and write-back is given below). In this way, the fusion of the multi-sensor data is improved and more accurate information can be provided for operations such as robot positioning, navigation and environment perception.
And finally obtaining the point cloud data containing RGB, infrared and multispectral information.
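The following Python sketch illustrates this write-back under several assumptions: the images are 8-bit, the gray-to-RGB mapping uses an OpenCV color map (the patent does not specify which mapping is used), the multispectral image is an H x W x 9 array, and each fused point is stored as its xyz coordinates followed by 11 RGB triplets; all names are illustrative:

```python
import numpy as np
import cv2  # OpenCV, assumed available for the gray-to-RGB mapping

def gray_to_rgb(gray_channel):
    """Map one 8-bit gray channel (H x W) to an RGB image. The JET color map
    used here is an assumption for illustration only."""
    bgr = cv2.applyColorMap(gray_channel, cv2.COLORMAP_JET)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

def colorize_points(points_cam, u, v, rgb_img, ir_img, ms_img):
    """Attach color information to every point that projects to pixel (u, v).
    Returns an N x 36 array: xyz followed by 11 RGB triplets (33 floats),
    i.e. one triplet from the RGB image, one from the infrared image and
    nine from the 9-band multispectral image."""
    colors = [rgb_img[v, u]]                          # RGB image: 3 values per point
    colors.append(gray_to_rgb(ir_img)[v, u])          # infrared image: 3 values per point
    for band in range(ms_img.shape[2]):               # 9 multispectral bands: 27 values
        colors.append(gray_to_rgb(ms_img[:, :, band])[v, u])
    return np.hstack([points_cam] + colors).astype(np.float32)
```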
Step S103, constructing a high-dimensional octree map according to the three-dimensional coordinates of the point cloud data containing various information.
It should be noted that, the conventional octree map structure describes a map by using an occupancy probability, and the occupancy probability calculation formula is shown in the following formula (2):
x = 1 / (1 + e^(-y))        (2)
where y is the intensity value within the voxel and x is the occupancy probability between 0 and 1. It can be seen that as y goes from negative infinity to positive infinity, x correspondingly goes from 0 to 1, and when y is 0, x is 0.5. The intensity value can therefore be converted into a probability between 0 and 1 according to this formula, and the occupancy probability is written into the octree voxels to form the octree map.
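For illustration, formula (2) is the logistic function; a minimal Python sketch of the conversion follows (the function name is illustrative):

```python
import math

def intensity_to_probability(y):
    """Formula (2): convert the intensity (log-odds) value y stored in a voxel
    into an occupancy probability x between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-y))

# intensity_to_probability(0.0) == 0.5, and the result tends towards 0 or 1
# as y goes towards negative or positive infinity, as described above.
```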
However, this conventional octree map structure is not suitable for multi-dimensional color data. Since the point cloud data in this embodiment contains multi-dimensional color data, the conventional octree map structure needs to be modified to accommodate multi-dimensional point cloud data containing RGB, infrared and multispectral information. A conventional octree voxel stores a single float value; here a data structure containing 11 RGB triplets (one from the RGB image, one from the infrared image and nine from the multispectral image), i.e. 33 float values in total, is written into each voxel. The modified octree map structure is then able to handle the large amount of input data.
After the structure of the octree map has been modified in the above manner, a high-dimensional octree map is constructed according to the three-dimensional coordinates of the point cloud data containing multiple kinds of information, where "high-dimensional" means that each voxel in the octree map contains high-dimensional information including RGB information, infrared information and multispectral information.
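Below is a minimal Python sketch of such an extended voxel payload together with a dictionary-backed stand-in for the octree map; a real implementation would extend an actual octree library (e.g. OctoMap) with the enlarged payload, and all names and default values are assumptions for illustration:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HighDimVoxel:
    """One leaf voxel of the modified octree: in addition to the usual single
    float occupancy value it stores 11 RGB triplets (33 floats): one from the
    RGB camera, one from the infrared camera and nine from the multispectral camera."""
    occupancy: float = 0.0
    color: np.ndarray = field(default_factory=lambda: np.zeros(33, dtype=np.float32))

class HighDimOctoMap:
    """Dictionary-backed stand-in for the high-dimensional octree map, keyed by
    voxel index; only meant to illustrate the enlarged per-voxel payload."""
    def __init__(self, resolution=0.05):
        self.resolution = resolution
        self.voxels = {}

    def insert_point(self, xyz, color33, hit_intensity=0.85):
        """Insert one fused point: accumulate the hit intensity (convertible to
        a probability via formula (2)) and store the 33-dimensional color payload."""
        key = tuple(np.floor(np.asarray(xyz) / self.resolution).astype(int))
        voxel = self.voxels.setdefault(key, HighDimVoxel())
        voxel.occupancy += hit_intensity
        voxel.color = np.asarray(color33, dtype=np.float32)
```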
In addition, the octree map is constructed by a conventional construction method, which is not described in detail herein.
The high-dimensional octree map of the current scene can be constructed through the above steps. The robot chassis is then driven to move; after it travels one full clockwise circle along the indoor environment, the collection and map construction of the entire surrounding environment is complete, yielding a complete high-dimensional octree map fused with the multi-sensor data.
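Tying the sketches above together, a hypothetical per-frame fusion loop might look as follows; sensor_rig, grab_frame and calib are placeholders for the real sensor drivers and the pre-calibrated internal and external parameters, and for simplicity a single set of camera parameters is reused for all three images, whereas the method described above projects into each camera with its own calibration:

```python
def build_map(sensor_rig, calib, octo_map, num_frames):
    """Hypothetical per-frame fusion loop: grab one synchronized frame, fuse
    the sensor data, and insert the colored points into the octree map."""
    for _ in range(num_frames):
        P_t, C_t, I_t, M_t = sensor_rig.grab_frame()   # point cloud + RGB/IR/multispectral images
        pts_cam = lidar_to_camera(P_t, calib.R, calib.t)
        u, v, pts_vis = project_and_screen(pts_cam, calib.fx, calib.fy,
                                           calib.cx, calib.cy,
                                           C_t.shape[1], C_t.shape[0])
        fused = colorize_points(pts_vis, u, v, C_t, I_t, M_t)
        for row in fused:
            octo_map.insert_point(row[:3], row[3:])
```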
Through the above steps S101 to S103, this embodiment processes the different sensor data to obtain point cloud data fused with multiple kinds of information, and constructs a high-dimensional octree map from that point cloud data by modifying the octree map structure, thereby obtaining a complete scene map fused with multi-sensor data. This solves the problems in the related art that a large amount of input data is difficult to process during multi-sensor data fusion and that data fusion performance is low, improves data fusion performance, and provides more accurate information for operations such as robot positioning, navigation and environment perception.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment also provides a system for fusing information of multiple sensors, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 2 is a block diagram of a system for multi-sensor information fusion according to an embodiment of the present application, and as shown in fig. 2, the system includes an acquisition module 21, an information fusion module 22, and a construction module 23:
an acquisition module 21, configured to acquire image data and point cloud data of a current scene through multiple sensors; the information fusion module 22 is configured to calculate a mapping relationship between the point cloud data and the camera imaging plane according to the camera internal parameters and the camera external parameters calibrated in advance in the sensor, screen out current frame point cloud data within the camera view angle range according to the mapping relationship, and project the current frame point cloud data within the camera view angle range into the image data according to the internal parameters and the external parameters, so as to obtain point cloud data containing multiple information; the construction module 23 is configured to construct a high-dimensional octree map according to three-dimensional coordinates of point cloud data including various information.
Through the above system, the information fusion module 22 processes the different sensor data to obtain point cloud data fused with multiple kinds of information, and the construction module 23 constructs a high-dimensional octree map from that point cloud data by modifying the octree map structure, thereby obtaining a complete scene map fused with multi-sensor data. This solves the problems in the related art that a large amount of input data is difficult to process during multi-sensor data fusion and that data fusion performance is low, improves data fusion performance, and provides more accurate information for operations such as robot positioning, navigation and environment perception.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
In addition, in combination with the method for fusing multi-sensor information in the above embodiment, the embodiment of the present application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements the method of any of the multi-sensor information fusion of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of multi-sensor information fusion. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 3 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application, and as shown in fig. 3, an electronic device, which may be a server, is provided, and an internal structure diagram thereof may be as shown in fig. 3. The electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, computer programs, and a database. The processor is used for providing computing and control capability, the network interface is used for communicating with an external terminal through network connection, the internal memory is used for providing environment for the operation of an operating system and a computer program, the computer program is executed by the processor to realize a multi-sensor information fusion method, and the database is used for storing data.
It will be appreciated by those skilled in the art that the structure shown in fig. 3 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A method of multi-sensor information fusion, the method comprising:
acquiring image data and point cloud data of a current scene through multiple sensors;
calculating the mapping relation between the point cloud data and a camera imaging plane according to camera internal parameters and external parameters calibrated in advance in a sensor, screening out current frame point cloud data in a camera view angle range according to the mapping relation, and projecting the current frame point cloud data in the camera view angle range into the image data according to internal and external parameters to obtain point cloud data containing various information;
and constructing a high-dimensional octree map according to the three-dimensional coordinates of the point cloud data containing various information.
2. The method of claim 1, wherein the multisensor comprises a lidar, an RGB camera, an infrared camera, and a multispectral camera.
3. The method of claim 2, wherein acquiring image data and point cloud data of a current scene by multiple sensors comprises:
acquiring point cloud data of a current scene through the laser radar;
and respectively acquiring an RGB image, an infrared image and a multispectral image of the current scene through the RGB camera, the infrared camera and the multispectral camera.
4. The method of claim 1, wherein prior to acquiring image data and point cloud data of a current scene by the multi-sensor, the method comprises:
the front end of the robot chassis is sequentially provided with a plurality of sensors according to a preset distance, and the optical centers of the sensors are on the same vertical line and parallel to the ground.
5. The method of claim 1, wherein the specific calculation formula of the mapping relationship between the point cloud data and the camera imaging plane is as follows:
u = f_x · (x / z) + c_x ,   v = f_y · (y / z) + c_y
wherein (u, v) are the horizontal and vertical pixel coordinates in the camera image, f_x and f_y are the focal lengths of the camera in the lateral and longitudinal directions, c_x and c_y are the camera's lateral and longitudinal principal-point (optical center) coordinates, and (x, y, z) is the actual spatial coordinate position of the point cloud point in the camera coordinate system.
6. The method according to claim 1 or 2, wherein projecting current frame point cloud data within a camera angle of view into the image data according to the inside and outside parameters, obtaining point cloud data containing a plurality of information comprises:
when the image data is an RGB image, mapping each point in the point cloud data to a pixel point of the image, reading RGB color values according to pixel coordinates, and writing the RGB color values into an original point cloud;
when the image data is an infrared image, mapping gray data of the infrared image into RGB data, and writing the infrared image data into an original point cloud through the RGB data;
when the image data is a multispectral image, mapping 9-dimensional gray data of the multispectral image into 9-dimensional RGB data, and writing the multispectral image data into an original point cloud through the RGB data.
7. The method of claim 1, wherein prior to constructing a high-dimensional octree map from three-dimensional coordinates of the point cloud data containing the plurality of information, the method comprises:
and modifying the structure of the octree map according to the data structure of the point cloud data containing various information.
8. A system for multi-sensor information fusion, the system comprising:
the acquisition module is used for acquiring image data and point cloud data of the current scene through a plurality of sensors;
the information fusion module is used for calculating the mapping relation between the point cloud data and the camera imaging plane according to camera internal parameters and external parameters calibrated in advance in the sensor, screening out current frame point cloud data in the camera view angle range according to the mapping relation, and projecting the current frame point cloud data in the camera view angle range into the image data according to internal and external parameters to obtain point cloud data containing various information;
and the construction module is used for constructing a high-dimensional octree map according to the three-dimensional coordinates of the point cloud data containing various information.
9. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of claims 1 to 7.
10. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when run.
CN202310412995.1A 2023-04-18 2023-04-18 Method, system, device and medium for multi-sensor information fusion Pending CN116597074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310412995.1A CN116597074A (en) 2023-04-18 2023-04-18 Method, system, device and medium for multi-sensor information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310412995.1A CN116597074A (en) 2023-04-18 2023-04-18 Method, system, device and medium for multi-sensor information fusion

Publications (1)

Publication Number Publication Date
CN116597074A true CN116597074A (en) 2023-08-15

Family

ID=87598103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310412995.1A Pending CN116597074A (en) 2023-04-18 2023-04-18 Method, system, device and medium for multi-sensor information fusion

Country Status (1)

Country Link
CN (1) CN116597074A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183247A (en) * 2020-09-14 2021-01-05 广东工业大学 Laser point cloud data classification method based on multispectral image
CN113269837A (en) * 2021-04-27 2021-08-17 西安交通大学 Positioning navigation method suitable for complex three-dimensional environment
WO2023045271A1 (en) * 2021-09-24 2023-03-30 奥比中光科技集团股份有限公司 Two-dimensional map generation method and apparatus, terminal device, and storage medium
CN113947134A (en) * 2021-09-26 2022-01-18 南京邮电大学 Multi-sensor registration fusion system and method under complex terrain
CN114519681A (en) * 2021-12-31 2022-05-20 上海仙途智能科技有限公司 Automatic calibration method and device, computer readable storage medium and terminal
CN115876198A (en) * 2022-11-28 2023-03-31 烟台艾睿光电科技有限公司 Target detection and early warning method, device, system and medium based on data fusion
CN115830143A (en) * 2022-12-16 2023-03-21 苏州万集车联网技术有限公司 Joint calibration parameter adjusting method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜峰: "Meteor Tail: Octomap Based Multi-sensor Data Fusion Method", IEEE, 28 September 2021 (2021-09-28), pages 118-120 *

Similar Documents

Publication Publication Date Title
WO2021233029A1 (en) Simultaneous localization and mapping method, device, system and storage medium
CN111968235B (en) Object attitude estimation method, device and system and computer equipment
CN111563923B (en) Method for obtaining dense depth map and related device
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
WO2020206708A1 (en) Obstacle recognition method and apparatus, computer device, and storage medium
CN111353969B (en) Method and device for determining road drivable area and computer equipment
Panek et al. Meshloc: Mesh-based visual localization
CN112444242A (en) Pose optimization method and device
US11461911B2 (en) Depth information calculation method and device based on light-field-binocular system
CN111295667B (en) Method for stereo matching of images and auxiliary driving device
CN113689578A (en) Human body data set generation method and device
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN115035235A (en) Three-dimensional reconstruction method and device
CN114119992A (en) Multi-mode three-dimensional target detection method and device based on image and point cloud fusion
CN118115762A (en) Binocular stereo matching model training method, device, equipment and storage medium
US8884950B1 (en) Pose data via user interaction
CN117235299A (en) Quick indexing method, system, equipment and medium for oblique photographic pictures
CN110148086B (en) Depth filling method and device for sparse depth map and three-dimensional reconstruction method and device
CN116012805A (en) Object perception method, apparatus, computer device, storage medium, and program product
CN116469101A (en) Data labeling method, device, electronic equipment and storage medium
CN116597074A (en) Method, system, device and medium for multi-sensor information fusion
CN116704151B (en) Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device
JP2024521816A (en) Unrestricted image stabilization
CN116188349A (en) Image processing method, device, electronic equipment and storage medium
CN114004839A (en) Image segmentation method and device of panoramic image, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination