CN113688900A - Radar and visual data fusion processing method, road side equipment and intelligent traffic system - Google Patents

Radar and visual data fusion processing method, road side equipment and intelligent traffic system Download PDF

Info

Publication number
CN113688900A
Authority
CN
China
Prior art keywords
information
data
brightness
determining
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110967643.3A
Other languages
Chinese (zh)
Inventor
张庆舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202110967643.3A priority Critical patent/CN113688900A/en
Publication of CN113688900A publication Critical patent/CN113688900A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The disclosure provides a radar and visual data fusion processing method, a roadside device and an intelligent traffic system, relates to the technical field of sensing devices, and particularly to the technical field of roadside devices in vehicle-road cooperation. The specific implementation scheme is as follows: point cloud data and image data are respectively received from a radar sensor and a vision sensor, wherein the point cloud data comprises reflection intensity information of each laser point and the image data comprises brightness information of each pixel point; a correspondence between the laser points and the pixel points is determined; based on the correspondence, the reflection intensity information of the laser points and the brightness information of the corresponding pixel points are fused to obtain brightness fusion information; and structured data is generated based on the brightness fusion information. With the disclosed technology, the point cloud data of the radar sensor can be used to enhance the image quality of the image data acquired by the vision sensor, reduce the noise in the image data, and improve the signal-to-noise ratio.

Description

Radar and visual data fusion processing method, road side equipment and intelligent traffic system
Technical Field
The present disclosure relates to the technical field of intelligent transportation, and in particular to the technical field of roadside equipment in vehicle-road cooperation.
Background
In the related art, for the original data of the camera, the image quality is usually improved by using a deep learning low-illumination enhancement algorithm, but the image quality cannot be accurately restored by using the deep learning low-illumination enhancement algorithm under the condition of extremely low illumination, and the improvement of the signal-to-noise ratio is limited.
Disclosure of Invention
The disclosure provides a radar and visual data fusion processing method, a roadside device and an intelligent traffic system.
According to an aspect of the present disclosure, there is provided a radar and visual data fusion processing method, including:
respectively receiving point cloud data and image data from a radar sensor and a vision sensor, wherein the point cloud data comprises reflection intensity information of each laser point, and the image data comprises brightness information of each pixel point;
determining the corresponding relation between the laser point and the pixel point;
based on the corresponding relation, fusing the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point to obtain brightness fused information;
structured data is generated based on the luminance fusion information.
According to another aspect of the present disclosure, there is provided a radar and visual data fusion processing apparatus including:
the data receiving module is used for respectively receiving point cloud data and image data from the radar sensor and the visual sensor, wherein the point cloud data comprises reflection intensity information of each laser point, and the image data comprises brightness information of each pixel point;
the corresponding relation determining module is used for determining the corresponding relation between the laser point and the pixel point;
the fusion processing module is used for carrying out fusion processing on the reflection intensity information of the laser points and the brightness information of the corresponding pixel points based on the corresponding relation to obtain brightness fusion information;
and the structured data generation module is used for generating structured data based on the brightness fusion information.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method in any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a detection apparatus including:
the radar sensor is used for detecting a target area and generating point cloud data;
the vision sensor is used for monitoring the target area and generating image data;
and the data processing unit is used for executing the radar and visual data fusion processing method according to the above embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a roadside apparatus including:
the detection device according to the above-described embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided an intelligent transportation system including:
the roadside apparatus according to the above-described embodiment of the present disclosure;
and the road side calculating unit is used for receiving the structured data from the road side equipment and executing data calculating processing on the structured data.
According to the disclosed technology, the point cloud data of the radar sensor and the image data of the vision sensor are received, and the reflection intensity information of the laser points and the brightness information of the corresponding pixel points are fused to obtain the brightness fusion information. Based on the brightness fusion information, the image quality of the image data collected by the vision sensor can be enhanced, the noise in the image data is reduced, the signal-to-noise ratio is improved, and the recognition accuracy of the target area is improved. Moreover, the signal synchronization information of the radar sensor and the vision sensor is not delayed, so that the reflection intensity information and the brightness information can be fused at the signal front end; compared with a processing mode that fuses data after output, the radar and visual data fusion processing method can avoid amplifying erroneous data, thereby further improving the signal-to-noise ratio and the data reliability and reducing the false detection probability. In addition, a detection device using the radar and visual data fusion processing method of the embodiment of the present disclosure does not need to be provided with a separate light supplement unit to supplement light for the vision sensor, which simplifies the structure of the detection device and reduces its cost.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 illustrates a flow diagram of a radar and visual data fusion processing method according to an embodiment of the present disclosure;
FIG. 2 shows a detailed flow chart of obtaining luminance fusion information for a radar and visual data fusion processing method according to an embodiment of the present disclosure;
fig. 3 shows a specific flowchart of determining correspondence between laser points and pixel points according to the radar and visual data fusion processing method of the embodiment of the present disclosure;
FIG. 4 illustrates a detailed flow chart of determining a coordinate transformation matrix for a radar and visual data fusion processing method according to an embodiment of the present disclosure;
FIG. 5 illustrates a detailed flow diagram for structured data generation for a radar and visual data fusion processing method according to an embodiment of the present disclosure;
FIG. 6 shows a detailed flow diagram of video stream data generation for a radar and visual data fusion processing method according to an embodiment of the present disclosure;
FIG. 7 illustrates a diagram of one particular example of a radar and visual data fusion processing method according to an embodiment of the disclosure;
FIG. 8 shows a schematic diagram of a radar and visual data fusion processing apparatus according to an embodiment of the present disclosure;
FIG. 9 is a block diagram of an electronic device for implementing the radar and visual data fusion processing method of an embodiment of the present disclosure;
fig. 10 shows a schematic diagram of a detection device according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A radar and visual data fusion processing method according to an embodiment of the present disclosure is described below with reference to fig. 1 to 7.
FIG. 1 shows a flow diagram of a radar and visual data fusion processing method according to an embodiment of the present disclosure. As shown in fig. 1, the radar and visual data fusion processing method includes the following steps:
s101: respectively receiving point cloud data and image data from a radar sensor and a vision sensor, wherein the point cloud data comprises reflection intensity information of each laser point, and the image data comprises brightness information of each pixel point;
s102: determining the corresponding relation between the laser point and the pixel point;
s103: based on the corresponding relation, fusing the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point to obtain brightness fused information;
s104: structured data is generated based on the luminance fusion information.
The radar and visual data fusion processing method of the embodiment of the disclosure can be applied to a detection device with a radar sensor and a visual sensor, and more specifically, the detection device can be applied to roadside equipment which is installed in a roadside environment, and the radar sensor and the visual sensor are used for detecting the same target area in the roadside environment. It will be appreciated that the detection field of view of the radar sensor and the monitoring field of view of the vision sensor at least partially coincide, and that the portions of the detection field of view and the monitoring field of view that coincide form the same target region in the roadside environment.
The radar sensor and the vision sensor are both photoelectric sensors and share essentially similar photoelectric principles. The radar sensor uses light as its carrier medium; unlike a millimeter-wave radar in the related art, the sensing data of the radar sensor and the sensing data of the vision sensor in the embodiment of the present disclosure are fused through pixel-level raw brightness data. Therefore, the sensing data generated by the radar sensor and the sensing data generated by the vision sensor can be combined at the raw-data layer, which improves the recognition accuracy of the final output data.
For example, in step S101, the point cloud data output by the radar sensor may specifically include reflection intensity information of each laser point, three-dimensional coordinate values of each laser point, and the like. The image data output by the vision sensor may specifically include brightness information, chrominance information, time synchronization compensation information, and the like of each pixel point.
The brightness information may be obtained by converting the RGB (red, green, blue) color information of each pixel point into YUV (luminance and chrominance) information.
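As an illustration only, the following is a minimal sketch of this color-space conversion in Python, assuming the common BT.601 coefficients (the disclosure does not fix a particular RGB-to-YUV formula); the helper names rgb_to_yuv and yuv_to_rgb are introduced here for convenience and are not part of the disclosure.

```python
# Minimal sketch: extract per-pixel brightness (Y) from RGB image data.
# BT.601 coefficients are an assumption; the disclosure does not fix a formula.
import numpy as np

def rgb_to_yuv(rgb):
    """rgb: H x W x 3 float array in [0, 1]; returns an H x W x 3 YUV array."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y (brightness)
                  [-0.147, -0.289,  0.436],   # U (chrominance)
                  [ 0.615, -0.515, -0.100]])  # V (chrominance)
    return rgb @ m.T

def yuv_to_rgb(yuv):
    """Inverse transform, used later to restore RGB from the fused luminance."""
    m_inv = np.array([[1.0,  0.000,  1.140],
                      [1.0, -0.395, -0.581],
                      [1.0,  2.032,  0.000]])
    return np.clip(yuv @ m_inv.T, 0.0, 1.0)

frame = np.random.rand(720, 1280, 3)   # placeholder for camera image data
luma = rgb_to_yuv(frame)[..., 0]       # per-pixel brightness information Y
```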
Exemplarily, in step S102, the detection field of view of the radar sensor and the monitoring field of view of the vision sensor may be matched. This specifically includes time synchronization, motion compensation, and calibration of the internal and external parameters of the radar sensor and the vision sensor, respectively, so as to realize the transformation between the three-dimensional coordinate system of the radar sensor and the world coordinate system and the transformation between the imaging coordinate system of the vision sensor and the world coordinate system. The correspondence between each laser point in the point cloud data and each pixel point in the image data is thereby determined, so that the laser points in the point cloud data can be projected one-to-one onto the pixel points in the image data.
For example, in step S103, given the high correlation between the reflection intensity information and the camera brightness information, the reflection intensity information of each laser point and the brightness information of its corresponding pixel point (determined by the correspondence between the point cloud data and the image data) may be weighted and superimposed to obtain the brightness fusion information.
Illustratively, in step S104, the brightness fusion information and the image data may be fused through a pre-trained deep learning network to obtain image data with enhanced image quality, and structured feature extraction may then be performed on the enhanced image data to obtain the structured data.
The structured data has certain physical meaning and can be used for representing certain semantic information. The generated structured data can be transmitted to a decision layer of the road side computing unit, and the decision layer realizes other functions such as prediction perception, path planning and early warning of the target object in the target area according to the structured data.
It can be understood that, in a scene with dark ambient light, image data acquired by the vision sensor has more noise and lower signal-to-noise ratio. In the related art, a deep learning method is usually adopted to improve the image quality of image data, but the improvement effect under extremely low illumination is limited, the precision is low, and the detection requirement cannot be met; or, a light supplement strategy is adopted, and a flash lamp is independently arranged to supplement light for the vision sensor so as to improve the imaging quality of the image data, but correspondingly the equipment cost is greatly increased.
According to the radar and visual data fusion processing method of the embodiment of the present disclosure, the point cloud data of the radar sensor and the image data of the vision sensor are received, and the reflection intensity information of the laser points and the brightness information of the corresponding pixel points are fused to obtain the brightness fusion information. Based on the brightness fusion information, the image quality of the image data acquired by the vision sensor can be enhanced, the noise in the image data is reduced, and the signal-to-noise ratio is improved, thereby improving the recognition accuracy of the target area. Moreover, the signal synchronization information of the radar sensor and the vision sensor is not delayed, so that the reflection intensity information and the brightness information can be fused at the signal front end; compared with a processing mode that fuses data after output, the radar and visual data fusion processing method can avoid amplifying erroneous data, thereby further improving the signal-to-noise ratio and the data reliability and reducing the false detection probability. In addition, a detection device using the radar and visual data fusion processing method of the embodiment of the present disclosure does not need to be provided with a separate light supplement unit to supplement light for the vision sensor, which simplifies the structure of the detection device and reduces its cost.
As shown in fig. 2, in one embodiment, step S103 includes:
s201: determining weight coefficients respectively corresponding to the reflection intensity information of the laser spot and the brightness information of the pixel point;
s202: and according to the weight coefficient, weighting and superposing the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point to obtain brightness fusion information.
In one example, before step S201, the point cloud data may be normalized to obtain preprocessed reflection intensity information. Specifically, external-parameter calibration matching may be performed on the angle information and position information contained in the sensing data of the radar sensor, and calibration matching may be performed on the intensity information, so that the normalization of the scanning data is completed and a plurality of preprocessed data are obtained. This reduces the data volume of the sensing data of the radar sensor and improves the computational efficiency of the subsequent brightness fusion.
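As an illustration only, a minimal sketch of one possible normalization of the reflection intensity follows; min-max scaling is an assumption, since the disclosure does not specify a particular normalization formula.

```python
# Minimal sketch: scale raw per-point reflection intensity to [0, 1]
# (min-max normalization is assumed here).
import numpy as np

def normalize_intensity(intensity, eps=1e-6):
    lo, hi = float(intensity.min()), float(intensity.max())
    return (intensity - lo) / (hi - lo + eps)

intensity = np.random.rand(100000) * 255.0   # placeholder raw intensities
intensity_norm = normalize_intensity(intensity)
```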
Illustratively, the luminance fusion information may be calculated according to the following formula:
Y' = aY + bI,
where Y' represents the brightness fusion information, Y represents the brightness information, a represents a first weight coefficient corresponding to the brightness information, I represents the reflection intensity information, and b represents a second weight coefficient corresponding to the reflection intensity information.
The first weight coefficient a and the second weight coefficient b can be determined by dynamic adjustment of a deep learning algorithm. The luminance information Y may be obtained by converting RGB color parameters of the image data into YUV parameters.
Further, after the brightness fusion information Y' is obtained, Y'UV may be converted into new RGB color parameters based on the chrominance information UV, and the image quality of the image data may be enhanced and restored based on the new RGB color parameters, so as to obtain image data with better contrast and higher brightness.
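As an illustration only, a minimal sketch of the weighted superposition Y' = aY + bI and of restoring RGB from Y'UV follows; the fixed coefficients a and b are placeholders, whereas in the described scheme they would be adjusted dynamically by a deep learning algorithm, and the helpers rgb_to_yuv and yuv_to_rgb refer to the earlier sketch.

```python
# Minimal sketch: Y' = aY + bI, followed by Y'UV -> RGB restoration.
import numpy as np

def fuse_luminance(luma, intensity_map, a=0.7, b=0.3):
    """luma: H x W brightness Y; intensity_map: H x W reflection intensity I
    already projected onto the image grid (zero where no laser point falls).
    a and b are placeholder weight coefficients."""
    return np.clip(a * luma + b * intensity_map, 0.0, 1.0)

# Usage with the YUV helpers sketched earlier (assumed names):
# yuv = rgb_to_yuv(frame)
# yuv[..., 0] = fuse_luminance(yuv[..., 0], intensity_map)   # Y -> Y'
# enhanced_rgb = yuv_to_rgb(yuv)                             # Y'UV -> new RGB
```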
According to the embodiment, the weight coefficient is determined through a deep learning algorithm, and the brightness information and the reflection intensity information are subjected to weighted superposition based on the weight coefficient, so that the accuracy of the obtained brightness fusion information is high, the brightness of the image data is remarkably improved, and the signal-to-noise ratio of the image data is further improved.
As shown in fig. 3, in one embodiment, step S102 includes:
s301: determining a coordinate transformation matrix of the radar sensor relative to the vision sensor;
s302: and determining the corresponding relation between the laser points and the pixel points based on the coordinate transformation matrix.
The coordinate transformation matrix of the radar sensor relative to the visual sensor refers to a coordinate transformation matrix of a three-dimensional coordinate system of the radar sensor relative to an imaging coordinate system of the visual sensor.
Based on the coordinate transformation matrix, and according to the time synchronization information and motion compensation information of the vision sensor, an overlapping area between the detection field of view of the radar sensor and the monitoring field of view of the vision sensor is determined; the laser points of the point cloud data located in the overlapping area have one-to-one corresponding pixel points. Based on the correspondence between the laser points and the pixel points, the laser points of the point cloud data can be projected one-to-one onto the pixel points in the image data.
Through the embodiment, a plurality of laser points contained in the point cloud data and a plurality of pixel points contained in the image data can be matched, so that the accuracy of the subsequently obtained brightness fusion information is ensured.
As shown in fig. 4, exemplarily, the step S301 specifically includes the following steps:
s401: and respectively carrying out internal and external reference calibration processing on the radar sensor and the vision sensor, and determining a coordinate transformation matrix of the radar sensor relative to the vision sensor.
In one specific example, the internal and external parameters of the radar sensor and the vision sensor may be calibrated as follows: a plurality of groups of calibration images are acquired; for each group, the edge line features of the calibration plate in the imaging coordinate system of the vision sensor are determined from the two-dimensional images, and the three-dimensional coordinates of the edge points of the calibration plate in the three-dimensional coordinate system of the radar sensor are determined from the point cloud data. From the relationship between the edge-point three-dimensional coordinates of the calibration plate in the three-dimensional coordinate system of the lidar sensor and its edge line features in the imaging coordinate system of the vision sensor, a nonlinear relationship of the external parameters between the vision sensor and the radar sensor can be established, and the coordinate transformation matrix of the radar sensor relative to the vision sensor can thus be determined.
In another specific example, internal parameters and external parameters of the radar sensor are respectively calibrated to obtain a first coordinate transformation matrix of the radar sensor relative to a world coordinate system; and respectively calibrating the internal parameters and the external parameters of the vision sensor to obtain a second coordinate transformation matrix of the vision sensor relative to a world coordinate system. Based on the first coordinate transformation matrix, the world coordinate values of all laser points in the point cloud data in a world coordinate system can be obtained; similarly, based on the second coordinate transformation matrix, the world coordinate value of each pixel point in the image data in the world coordinate system can be obtained. Further, a coordinate transformation matrix of the radar sensor relative to the visual sensor is determined based on the world coordinate values of the laser points and the world coordinate values of the pixel points.
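As an illustration only, a minimal sketch of projecting laser points onto image pixels once such transformation matrices are known follows; the pinhole camera model, the 4x4 lidar-to-camera extrinsic matrix T and the 3x3 intrinsic matrix K are assumptions introduced for this sketch.

```python
# Minimal sketch: project 3-D laser points into the image plane to obtain the
# laser-point-to-pixel correspondence (pinhole camera model assumed).
import numpy as np

def project_points(points_xyz, T, K, image_shape):
    """points_xyz: N x 3 lidar coordinates; T: 4 x 4 lidar-to-camera extrinsics;
    K: 3 x 3 camera intrinsics. Returns (pixel_uv, point_indices)."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous coordinates
    cam = (T @ homo.T).T[:, :3]                       # lidar frame -> camera frame
    idx = np.where(cam[:, 2] > 1e-6)[0]               # keep points in front of camera
    uvw = (K @ cam[idx].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                     # perspective division
    h, w = image_shape
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[keep].astype(int), idx[keep]            # pixel coords, source indices
```

The returned pixel coordinates could then be used to scatter each laser point's reflection intensity onto the image grid, giving the intensity map assumed in the earlier fusion sketch.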
Through the above embodiment, the radar sensor and the vision sensor can be calibrated and matched, which facilitates accurately determining the correspondence between the laser points and the pixel points.
As shown in fig. 5, in one embodiment, step S104 includes:
s501: and inputting the brightness fusion information and the point cloud data into a pre-trained deep learning network to obtain the structured data output by the deep learning network.
The deep learning network may adopt various networks known to those skilled in the art, for example, a Fully Connected Neural Network (FCNN), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and the like.
Taking a convolutional neural network as an example, a convolutional neural network is a neural network specialized for processing data having a grid-like structure, in particular image data (which may be viewed as a two-dimensional grid of pixels). A convolutional neural network may comprise a plurality of layers, each of which transforms one representation into another through a differentiable function; the layers mainly include convolutional layers, pooling layers, and fully connected (FC) layers. The convolutional layers perform structured feature extraction on the input brightness fusion information and point cloud data, and the structured data is output through the fully connected layer.
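As an illustration only, a minimal sketch of such a network is given below in PyTorch; the layer sizes, the four-channel input (e.g., RGB restored from Y'UV plus one projected-intensity channel) and the class name FusionFeatureNet are assumptions for this sketch, not the disclosure's trained network.

```python
# Minimal sketch: convolutional layers extract features from the fused input,
# and a fully connected head outputs a fixed-length structured feature vector.
import torch
import torch.nn as nn

class FusionFeatureNet(nn.Module):
    def __init__(self, in_channels=4, feature_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, feature_dim)     # fully connected output layer

    def forward(self, x):
        return self.head(self.backbone(x).flatten(1))

net = FusionFeatureNet()
features = net(torch.randn(1, 4, 256, 256))        # 1 x 128 structured features
```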
It is understood that structured data has a certain physical meaning and can be used to characterize certain semantic information. And a decision layer of the road side computing unit realizes the functions of prediction perception, path planning, early warning and the like according to the structured data.
According to the embodiment, the structured data are obtained by utilizing the deep learning network, so that the extraction precision and the extraction efficiency of the structured data are improved, and the roadside computing unit can directly execute the relevant decision processing based on the structured data, so that the computation amount and the performance requirement of the roadside computing unit are reduced, and the data processing efficiency of the roadside computing unit is improved.
As shown in fig. 6, in an embodiment, after step S103, the method further includes:
s601: updating color information of each pixel point contained in the image data based on the brightness fusion information;
s602: generating an updated frame image based on the updated color information;
s603: and generating video stream data based on the updated frame image and other frame images.
Illustratively, based on the brightness fusion information Y' and the chrominance information UV of the pixel points, the Y'UV parameters are converted into new RGB color parameters. Image quality enhancement and restoration are performed on the image data based on the new RGB color parameters of each pixel point, so as to obtain a frame image with good contrast and high brightness. The above steps are performed on the image data at different moments, respectively, to obtain a plurality of updated frame images. Finally, the plurality of updated frame images are merged to generate high-definition video stream data.
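As an illustration only, a minimal sketch of merging the enhanced frames into video stream data follows, assuming OpenCV's VideoWriter is available; the file name and codec are placeholders.

```python
# Minimal sketch: merge successive enhanced frames into a video stream.
import cv2

def frames_to_video(frames, path="enhanced.mp4", fps=25):
    """frames: list of H x W x 3 uint8 BGR images, all the same size."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()

# Each enhanced frame would be obtained by converting the fused Y'UV values
# back to RGB (see the earlier sketches) and casting to uint8 BGR order.
```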
According to the above embodiment, the radar and visual data fusion processing method according to the embodiment of the disclosure can enhance the image quality of image data by using the brightness fusion information generated by the brightness information and the reflection intensity information on the basis of the image data, so as to obtain high-definition video stream data, thereby improving the application range of the detection device and meeting the requirement of high-precision monitoring on a target area.
A radar and visual data fusion processing method according to an embodiment of the present disclosure is described below in one specific example with reference to fig. 7.
As shown in fig. 7, point cloud data including three-dimensional coordinate values and reflection intensity information of respective laser points and image data including YUV information (luminance information and chromaticity information) of respective pixel points and time-synchronized compensation information are received from a radar sensor and a vision sensor, respectively. And respectively carrying out internal and external reference calibration on the radar sensor and the visual sensor to determine the world coordinate value of each laser point in the point cloud data in the world coordinate system and the world coordinate value of each pixel point in the image data in the world coordinate system, and determining a coordinate transformation matrix of the radar sensor relative to the visual sensor, thereby determining the corresponding relation between the laser point in the point cloud data and the pixel point in the image data.
Determining weight coefficients corresponding to the brightness information and the reflection intensity information respectively according to the corresponding relation between the laser points and the pixel points, performing weighted superposition on the brightness information and the reflection intensity information according to the weight coefficients to determine brightness fusion information, and fusing the brightness fusion information and image data according to a fusion algorithm to obtain a frame image with improved image quality. And merging to obtain high-image-quality video stream data based on a plurality of image-quality-improved frame images.
And based on the brightness fusion information obtained in the last step, performing structured feature extraction on the point cloud data by using a deep learning network to obtain structured data, and transmitting the structured data to a decision layer of a roadside computing unit. Wherein, the deep learning network can be a convolutional neural network.
According to the embodiment of the disclosure, the disclosure further provides a radar and visual data fusion processing device.
As shown in fig. 8, the radar and visual data fusion processing apparatus includes:
a data receiving module 801, configured to receive point cloud data and image data from a radar sensor and a visual sensor, respectively, where the point cloud data includes reflection intensity information of each laser point, and the image data includes brightness information of each pixel point;
a correspondence determining module 802, configured to determine a correspondence between a laser point and a pixel point;
a fusion processing module 803, configured to perform fusion processing on the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point based on the correspondence relationship, to obtain brightness fusion information;
and a structured data generation module 804, configured to generate structured data based on the brightness fusion information.
In one embodiment, the fusion processing module 803 includes:
the weight coefficient determining submodule is used for determining weight coefficients corresponding to the reflection intensity information of the laser spot and the brightness information of the pixel point respectively;
and the brightness fusion information calculation submodule is used for weighting and superposing the reflection intensity information of the laser points and the brightness information of the corresponding pixel points according to the weight coefficient to obtain brightness fusion information.
In one embodiment, the correspondence determining module 802 includes:
the coordinate transformation matrix determining submodule is used for determining a coordinate transformation matrix of the radar sensor relative to the vision sensor;
and the corresponding relation determining submodule is used for determining the corresponding relation between the laser point and the pixel point based on the coordinate transformation matrix.
In one embodiment, the coordinate transformation matrix determination sub-module is further configured to:
and respectively carrying out internal and external reference calibration processing on the radar sensor and the visual sensor, and determining a coordinate transformation matrix of the radar sensor relative to the visual sensor.
In one embodiment, the structured data generation module 804 is further configured to:
and inputting the brightness fusion information and the point cloud data into a pre-trained deep learning network to obtain the structured data output by the deep learning network.
In one embodiment, the apparatus further comprises:
the color information updating module is used for updating the color information of each pixel point contained in the image data based on the brightness fusion information;
the frame image generation module is used for generating an updated frame image based on the updated color information;
and the video stream data generating module is used for generating video stream data based on the updated frame image and other frame images.
The functions of each module or sub-module in the radar and visual data fusion processing apparatus in the embodiment of the present disclosure may refer to the corresponding description in the above method embodiment, and are not described herein again.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the device 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 902 or a computer program loaded from a storage unit 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 901 performs the respective methods and processes described above, such as the radar and visual data fusion processing methods. For example, in some embodiments, the radar and vision data fusion processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM 902 and/or communications unit 909. When the computer program is loaded into RAM 903 and executed by computing unit 901, one or more steps of the radar and visual data fusion processing methods described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the radar and visual data fusion processing methods by any other suitable means (e.g., by means of firmware).
According to an embodiment of the present disclosure, the present disclosure also provides a detection apparatus.
As shown in fig. 10, the detecting device includes a radar sensor 1001, a vision sensor 1002, and a data processing unit 1003.
Specifically, the radar sensor 1001 is used to detect a target area and generate point cloud data. The vision sensor 1002 is used to monitor a target area and generate image data. The data processing unit 1003 is used to execute the radar and visual data fusion processing method according to the above-described embodiment of the present disclosure.
Illustratively, the radar sensor 1001 may be a detection module of a lidar, and the vision sensor 1002 may be a camera or a video camera. The radar sensor 1001 and the vision sensor 1002 may be provided integrally or separately. For example, the radar sensor 1001 and the vision sensor 1002 may be jointly arranged on a support pole at the roadside and mounted at a certain height above the ground.
The radar sensor 1001 and the vision sensor 1002 jointly detect a target area at the roadside and generate point cloud data and image data. The data processing unit 1003 generates structured data having a certain physical meaning based on the point cloud data and the image data, and transmits the structured data to the roadside computing unit.
According to an embodiment of the present disclosure, the present disclosure also provides a roadside apparatus including the detection device according to the above-described embodiment of the present disclosure.
The roadside equipment comprises a base body, and the detection device is arranged on the base body. A control unit and a power supply module are integrated inside the base body; the control unit is used for controlling the radar sensor and the vision sensor to work synchronously, and the power supply module is used for supplying power to the radar sensor and the vision sensor respectively.
According to an embodiment of the present disclosure, the present disclosure further provides an intelligent transportation system, including the roadside device and the roadside computing unit according to the above-mentioned embodiment of the present disclosure. The roadside computing unit is used for receiving the structured data from the roadside device and executing data computing processing on the structured data.
Illustratively, the roadside computing unit may be an edge computing unit, and is configured to receive structured data sent by the roadside device, and perform data computing processing on the structured data to obtain relevant information of a target object in a target environment, so as to implement other functions such as prediction perception, path planning, and early warning for the target object.
The intelligent transportation system can further comprise a cloud server and a vehicle-end server, and any two of the roadside computing unit, the cloud server and the vehicle-end server can perform information interaction.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program codes, when executed by the processor or controller, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (18)

1. A radar and visual data fusion processing method, comprising:
respectively receiving point cloud data and image data from a radar sensor and a vision sensor, wherein the point cloud data comprises reflection intensity information of each laser point, and the image data comprises brightness information of each pixel point;
determining the corresponding relation between the laser point and the pixel point;
based on the corresponding relation, fusing the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point to obtain brightness fused information;
and generating structured data based on the brightness fusion information.
2. The method of claim 1, wherein fusing the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point to obtain brightness fused information comprises:
determining weight coefficients respectively corresponding to the reflection intensity information of the laser spot and the brightness information of the pixel point;
and according to the weight coefficient, performing weighted superposition on the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point to obtain the brightness fusion information.
3. The method of claim 1, wherein determining the correspondence of the laser points to the pixel points comprises:
determining a coordinate transformation matrix of the radar sensor relative to the vision sensor;
and determining the corresponding relation between the laser point and the pixel point based on the coordinate transformation matrix.
4. The method of claim 3, wherein determining a coordinate transformation matrix of the radar sensor relative to the vision sensor comprises:
and respectively carrying out internal and external reference calibration processing on the radar sensor and the vision sensor, and determining a coordinate transformation matrix of the radar sensor relative to the vision sensor.
5. The method of claim 1, wherein generating structured data based on the luminance fusion information comprises:
inputting the brightness fusion information and the point cloud data into a pre-trained deep learning network, and receiving structured data from the deep learning network.
6. The method of claim 1, wherein after obtaining the luminance fusion information, further comprising:
updating color information of each pixel point contained in the image data based on the brightness fusion information;
generating an updated frame image based on the updated color information;
and generating video stream data based on the updated frame image and other frame images.
7. A radar and visual data fusion processing apparatus comprising:
the data receiving module is used for respectively receiving point cloud data and image data from the radar sensor and the visual sensor, wherein the point cloud data comprises reflection intensity information of each laser point, and the image data comprises brightness information of each pixel point;
the corresponding relation determining module is used for determining the corresponding relation between the laser point and the pixel point;
the fusion processing module is used for carrying out fusion processing on the reflection intensity information of the laser spot and the brightness information of the corresponding pixel point based on the corresponding relation to obtain brightness fusion information;
and the structured data generation module is used for generating structured data based on the brightness fusion information.
8. The apparatus of claim 7, wherein the fusion processing module comprises:
the weight coefficient determining submodule is used for determining weight coefficients corresponding to the reflection intensity information of the laser spot and the brightness information of the pixel point respectively;
and the brightness fusion information calculation submodule is used for weighting and superposing the reflection intensity information of the laser points and the brightness information of the corresponding pixel points according to the weight coefficients to obtain the brightness fusion information.
9. The apparatus of claim 7, wherein the correspondence determining module comprises:
the coordinate transformation matrix determining submodule is used for determining a coordinate transformation matrix of the radar sensor relative to the vision sensor;
and the corresponding relation determining submodule is used for determining the corresponding relation between the laser point and the pixel point based on the coordinate transformation matrix.
10. The apparatus of claim 9, wherein the coordinate transformation matrix determination submodule is further to:
and respectively carrying out internal and external reference calibration processing on the radar sensor and the vision sensor, and determining a coordinate transformation matrix of the radar sensor relative to the vision sensor.
11. The apparatus of claim 7, wherein the structured data generation module is further configured to:
and inputting the brightness fusion information and the point cloud data into a pre-trained deep learning network, and acquiring structured data output by the deep learning network.
12. The apparatus of claim 7, further comprising:
the color information updating module is used for updating the color information of each pixel point contained in the image data based on the brightness fusion information;
the frame image generation module is used for generating an updated frame image based on the updated color information;
and the video stream data generating module is used for generating video stream data based on the updated frame image and other frame images.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
16. A detection apparatus, comprising:
the radar sensor is used for detecting a target area and generating point cloud data;
the vision sensor is used for monitoring the target area and generating image data;
a data processing unit for performing the radar and visual data fusion processing method according to any one of claims 1 to 6.
17. A roadside apparatus comprising:
the detection apparatus of claim 16.
18. An intelligent transportation system comprising:
the roadside apparatus of claim 17;
and the road side computing unit is used for receiving the structured data from the road side equipment and executing data computing processing on the structured data.
CN202110967643.3A 2021-08-23 2021-08-23 Radar and visual data fusion processing method, road side equipment and intelligent traffic system Pending CN113688900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110967643.3A CN113688900A (en) 2021-08-23 2021-08-23 Radar and visual data fusion processing method, road side equipment and intelligent traffic system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110967643.3A CN113688900A (en) 2021-08-23 2021-08-23 Radar and visual data fusion processing method, road side equipment and intelligent traffic system

Publications (1)

Publication Number Publication Date
CN113688900A true CN113688900A (en) 2021-11-23

Family

ID=78581417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110967643.3A Pending CN113688900A (en) 2021-08-23 2021-08-23 Radar and visual data fusion processing method, road side equipment and intelligent traffic system

Country Status (1)

Country Link
CN (1) CN113688900A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9215382B1 (en) * 2013-07-25 2015-12-15 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for data fusion and visualization of video and LADAR data
CN108802760A (en) * 2017-04-27 2018-11-13 德尔福技术公司 Laser radar and camera data for automated vehicle merge
CN107316325A (en) * 2017-06-07 2017-11-03 华南理工大学 Airborne laser point cloud and image registration and fusion method based on image registration
US20200174130A1 (en) * 2017-08-04 2020-06-04 Bayerische Motoren Werke Aktiengesellschaft Method, Apparatus and Computer Program for a Vehicle
CN110799918A (en) * 2017-08-04 2020-02-14 宝马股份公司 Method, apparatus and computer program for a vehicle
EP3438777A1 (en) * 2017-08-04 2019-02-06 Bayerische Motoren Werke Aktiengesellschaft Method, apparatus and computer program for a vehicle
US20190065878A1 (en) * 2017-08-22 2019-02-28 GM Global Technology Operations LLC Fusion of radar and vision sensor systems
US20200018869A1 (en) * 2018-07-16 2020-01-16 Faro Technologies, Inc. Laser scanner with enhanced dymanic range imaging
CN109495694A (en) * 2018-11-05 2019-03-19 福瑞泰克智能系统有限公司 A kind of environment perception method and device based on RGB-D
KR102145557B1 (en) * 2019-02-21 2020-08-18 재단법인대구경북과학기술원 Apparatus and method for data fusion between heterogeneous sensors
CN111062378A (en) * 2019-12-23 2020-04-24 重庆紫光华山智安科技有限公司 Image processing method, model training method, target detection method and related device
US10929694B1 (en) * 2020-01-22 2021-02-23 Tsinghua University Lane detection method and system based on vision and lidar multi-level fusion
CN112541886A (en) * 2020-11-27 2021-03-23 北京佳力诚义科技有限公司 Laser radar and camera fused artificial intelligence ore identification method and device
CN112766135A (en) * 2021-01-14 2021-05-07 北京航空航天大学杭州创新研究院 Target detection method, target detection device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张合新; 王强; 宋睿: "Laser Image Fusion Algorithm" (激光图像融合算法), Navigation Positioning and Timing (导航定位与授时), no. 06, pages 54-60 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496975A (en) * 2022-08-29 2022-12-20 锋睿领创(珠海)科技有限公司 Auxiliary weighted data fusion method, device, equipment and storage medium
CN115496975B (en) * 2022-08-29 2023-08-18 锋睿领创(珠海)科技有限公司 Auxiliary weighted data fusion method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US11436748B2 (en) Volume measurement method and system, apparatus and computer-readable storage medium
US10176543B2 (en) Image processing based on imaging condition to obtain color image
CN110793544B (en) Method, device and equipment for calibrating parameters of roadside sensing sensor and storage medium
CN113012210B (en) Method and device for generating depth map, electronic equipment and storage medium
CN110458826B (en) Ambient brightness detection method and device
EP4072131A1 (en) Image processing method and apparatus, terminal and storage medium
CN112634343A (en) Training method of image depth estimation model and processing method of image depth information
CN111027415A (en) Vehicle detection method based on polarization image
KR20190027131A (en) Apparatus and method for display visibility enhancement
CN112863187A (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN113688900A (en) Radar and visual data fusion processing method, road side equipment and intelligent traffic system
CN113344906B (en) Camera evaluation method and device in vehicle-road cooperation, road side equipment and cloud control platform
CN116823674B (en) Cross-modal fusion underwater image enhancement method
CN113888509A (en) Method, device and equipment for evaluating image definition and storage medium
CN112509058B (en) External parameter calculating method, device, electronic equipment and storage medium
US20230328396A1 (en) White balance correction method and apparatus, device, and storage medium
US11379692B2 (en) Learning method, storage medium and image processing device
CN112966599A (en) Training method of key point identification model, and key point identification method and device
CN117078767A (en) Laser radar and camera calibration method and device, electronic equipment and storage medium
WO2022022136A1 (en) Depth image generation method and apparatus, reference image generation method and apparatus, electronic device, and computer readable storage medium
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN113379884B (en) Map rendering method, map rendering device, electronic device, storage medium and vehicle
CN113222968B (en) Detection method, system, equipment and storage medium fusing millimeter waves and images
CN114742726A (en) Blind area detection method and device, electronic equipment and storage medium
CN113312979B (en) Image processing method and device, electronic equipment, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination