CN117764849A - Image compensation method, device, equipment and storage medium based on image fusion


Info

Publication number
CN117764849A
Authority
CN
China
Prior art keywords
dimensional
target
current
detection data
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311797709.4A
Other languages
Chinese (zh)
Inventor
王星宇
陈洋
冯站银
石本义
梁林林
童城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infiray Technologies Co Ltd
Original Assignee
Infiray Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infiray Technologies Co Ltd filed Critical Infiray Technologies Co Ltd
Priority to CN202311797709.4A priority Critical patent/CN117764849A/en
Publication of CN117764849A publication Critical patent/CN117764849A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image compensation method, device, equipment and storage medium based on image fusion, wherein the method comprises the steps of acquiring a three-dimensional point cloud image under a current frame and a visible light image under the current frame, which are synchronously acquired by vehicle equipment in a driving process; based on a three-dimensional point cloud image under a current frame and a visible light image under the current frame, current detection data under the current frame of a target to be optimized is obtained, historical detection data of the target to be optimized is obtained, and the historical detection data are obtained from a historical frame corresponding to the current frame; splicing the current detection data and the historical detection data of the target to be optimized to obtain a three-dimensional spliced color point cloud picture corresponding to the target to be optimized; and correcting the current three-dimensional detection data of the target to be optimized based on the three-dimensional spliced color point cloud picture corresponding to the target to be optimized.

Description

Image compensation method, device, equipment and storage medium based on image fusion
Technical Field
The present invention relates to the field of autonomous driving technology, and in particular, to an image compensation method, apparatus, device and computer readable storage medium based on image fusion.
Background
In the fields of assisted driving and automatic driving, sensors capable of accurately detecting environmental depth information, such as lidar sensors, are widely used. However, the point cloud becomes sparse as distance increases, so distant objects such as pedestrians and vehicles often present only a single plane of points, and occlusion frequently causes data loss, which adversely affects target perception. In the prior art, some schemes combine images and radar: different regions are segmented through image recognition and segmentation strategies, and the point cloud is then completed to obtain a dense point cloud. However, such completion can introduce distortion, the problems of sparse far-range pixels and object occlusion remain, and only the part of the point cloud unaffected by occlusion can be completed. Therefore, for the far-range environment in front of the vehicle, pixels are sparse and information about distant targets is relatively scarce, so recognition accuracy is low; and when an object is occluded, information is lost, which affects the accuracy of target detection.
Disclosure of Invention
In order to solve the existing technical problems, embodiments of the present invention provide an image compensation method, apparatus, device and computer readable storage medium based on image fusion, which compensate the information in the current frame by combining information from historical frames, so that the vehicle device can continuously accumulate feature data of a target during driving. Color information from the two-dimensional detection data is introduced in the matching and splicing process, which adds more features and improves the accuracy of target detection, thereby improving driving safety.
In a first aspect, an image compensation method based on image fusion is provided, including:
acquiring a three-dimensional point cloud image under a current frame and a visible light image under the current frame, which are synchronously acquired by vehicle equipment in the driving process;
acquiring current detection data of a target to be optimized under a current frame based on a three-dimensional point cloud image under the current frame and a visible light image under the current frame, wherein the current detection data comprises current three-dimensional detection data obtained based on the three-dimensional point cloud image under the current frame and current two-dimensional detection data obtained based on the visible light image under the current frame;
acquiring historical detection data of the target to be optimized, wherein the historical detection data are data acquired from a historical frame corresponding to the current frame, and the historical detection data comprise historical three-dimensional detection data and historical two-dimensional detection data;
splicing the current detection data and the historical detection data of the target to be optimized to obtain a three-dimensional spliced color point cloud picture corresponding to the target to be optimized;
and correcting the current three-dimensional detection data of the target to be optimized based on the three-dimensional spliced color point cloud picture corresponding to the target to be optimized.
In a second aspect, a vehicle device is provided, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the image compensation method based on image fusion provided in the embodiments of the present application.
In a third aspect, a storage medium is provided, storing a computer program, which when executed by a processor, causes the processor to perform the steps of the image compensation method based on image fusion provided in the embodiments of the present application.
In the above embodiment, the three-dimensional point cloud image under the current frame and the visible light image under the current frame of the target to be optimized are obtained, so that the current three-dimensional detection data and the current two-dimensional detection data under the current frame of the target to be optimized are obtained, the historical three-dimensional detection data and the historical two-dimensional detection data of the target to be optimized are obtained, the current detection data and the historical detection data of the target to be optimized are spliced, and the three-dimensional spliced color point cloud image corresponding to the target to be optimized is obtained, so that the three-dimensional spliced color point cloud image comprises the point cloud information in the historical frame and the color information of the historical frame, and therefore the three-dimensional spliced color point cloud image comprises more information, and compensation can be performed on the current three-dimensional detection data. By combining the information in the history frame to compensate the information in the current frame, the characteristic data of the target can be accumulated continuously in the driving process of the vehicle equipment, so that the accuracy of target tracking is improved, in addition, the color information in the two-dimensional detection data is introduced in the matching and splicing process, more characteristics are added, the accuracy of target detection is improved, and the driving safety is improved.
Drawings
FIG. 1 is an application environment diagram of an image compensation method based on image fusion in an embodiment;
FIG. 2 is a block diagram schematic of a vehicle device in an embodiment;
FIG. 3 is a flowchart of an image compensation method based on image fusion in an embodiment;
FIG. 4 is a schematic diagram of a three-dimensional point cloud before stitching and a three-dimensional point cloud after stitching;
FIG. 5 is a flowchart of an image compensation method based on image fusion in another embodiment;
FIG. 6 is a schematic diagram of a three-dimensional detection frame, a two-dimensional detection frame, and a projection of the three-dimensional detection frame onto two dimensions;
FIG. 7 is a schematic diagram of a process for splicing current detection data and the historical detection data;
FIG. 8 is a flow chart of modifying data in a current frame of a target to be optimized;
FIG. 9 is a schematic diagram of target detection before correction data and target detection after correction data;
FIG. 10 is a schematic diagram of an image compensation apparatus based on image fusion in an embodiment;
FIG. 11 is a schematic diagram of a vehicle apparatus in an embodiment.
Detailed Description
The technical scheme of the invention is further elaborated below by referring to the drawings in the specification and the specific embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the scope of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In the following description, reference is made to the expression "some embodiments" which describe a subset of all possible embodiments, but it should be understood that "some embodiments" may be the same subset or a different subset of all possible embodiments and may be combined with each other without conflict.
Referring to fig. 1, an application environment diagram of an image compensation method based on image fusion in an embodiment is shown. The image compensation method based on image fusion is applied to the vehicle device 10, and the vehicle device 10 comprises a display terminal 11, a sensor module 12 and a controller 13. The sensor module 12 is used for acquiring data during the running process of the vehicle device 10, and the controller 13 recognizes the target based on the image data acquired by the sensor module 12 and displays the image data and the target on the screen of the vehicle device 10.
The sensor module 12 may be one or more integrated sensor modules; there may be a plurality of sensor modules 12 mounted at different locations of the vehicle device 10. The sensor module 12 includes, but is not limited to, a multi-spectral vision sensor, an environmental sensor, and a motion attitude sensor. The multispectral vision sensor includes, but is not limited to, one or a combination of the following: an infrared thermal imaging sensor, a visible light image sensor, a millimeter wave sensor, a lidar sensor and a depth sensor. The environmental sensor includes, but is not limited to, one or a combination of the following: a brightness sensor, a temperature sensor, a haze sensor and other environmental sensors. The motion attitude sensor includes, but is not limited to, one or a combination of the following: an inertial sensor (Inertial Measurement Unit, IMU), a speed sensor, an acceleration sensor, a gyroscope sensor, a geomagnetic sensor, a rotation vector sensor, a steering wheel angle sensor, a level sensor, a tilt sensor, a vibration sensor, a displacement sensor, a gravity sensor, and the like.
Wherein the screen is a surface of the display terminal 11 displaying an image, the screen being disposed in front of the driver for the driver to view the picture, wherein the display terminal 11 includes, but is not limited to, a display such as a liquid crystal display, a projector with a projector screen, and the like. The display terminal 11 may be a center up display located in the vehicle apparatus 10 or may be a projector provided at the rear of the vehicle apparatus 10 to project an image onto a projector screen.
Where there are multiple controllers 13, the controllers 13 may be integrated on a single chip or may be separately provided on each chip. Wherein the vehicle device 10 is a device mounted on any type of mobile body, such as a vehicle, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobile device, an aircraft, a drone, a boat or robot, or the like.
As shown in fig. 2, fig. 2 is a block diagram of a vehicle apparatus in an embodiment. The vehicle apparatus 10 includes a display terminal 11, a controller 13, a three-dimensional point cloud image sensor 14, and a visible light image sensor 15. The controller 13 is in communication connection with the display terminal 11, the three-dimensional point cloud image sensor 14, and the visible light image sensor 15, respectively. The three-dimensional point cloud image sensor 14 is configured to collect three-dimensional point cloud image data including three-dimensional position information, that is, the three-dimensional point cloud image sensor 14 is capable of collecting depth information indicating a distance in an environment, and thus the three-dimensional point cloud image can provide not only position information in two dimensions but also distance information between the environment in front of the vehicle device 10 and the vehicle device 10. The three-dimensional point cloud image is a raw intensity map acquired by the three-dimensional point cloud image sensor 14, wherein the three-dimensional point cloud image sensor 14 includes, but is not limited to: lidar sensors, or combinations of lidar sensors with other types of sensors. The visible light image sensor 15 is configured to collect a visible light image including color information, so that the visible light image data can provide color texture information, and in an environment with better visibility, the visible light image data can provide richer color texture information. Wherein the visible light image sensor 15 includes, but is not limited to, a visible light sensor, or a combination of a visible light sensor and other types of sensors.
Referring to fig. 3, a flowchart of an image compensation method based on image fusion according to an embodiment of the present application is shown. The image compensation method based on the image fusion is applied to vehicle equipment and comprises the following steps of:
s11, acquiring a three-dimensional point cloud image under a current frame and a visible light image under the current frame, which are synchronously acquired by vehicle equipment in the driving process.
In the present embodiment, the three-dimensional point cloud image sensor 14 acquires three-dimensional point cloud images in real time, presented as consecutive frames of three-dimensional point cloud images, and the visible light image sensor 15 acquires visible light images in real time, presented as consecutive frames of visible light images. The current frame includes the three-dimensional point cloud image acquired by the three-dimensional point cloud image sensor 14 and the visible light image acquired by the visible light image sensor 15 at the same current time. While the vehicle device 10 is driving, the controller 13 controls the three-dimensional point cloud image sensor 14 and the visible light image sensor 15 to synchronously acquire the environmental image in front of the vehicle device 10. In some embodiments, the manner in which the controller 13 controls the two sensors to synchronously acquire the environmental image in front of the vehicle device 10 includes, but is not limited to: sending a trigger signal to the plurality of sensors simultaneously so that they acquire the environmental image at the same time, or having the plurality of sensors acquire the environmental image in front of the vehicle device 10 simultaneously at preset time intervals.
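As an illustration of the synchronous acquisition described above, the following Python sketch pairs lidar and camera frames by nearest timestamp; the function names and the skew tolerance are assumptions made for this example and are not prescribed by the embodiment.

```python
from bisect import bisect_left

def pair_synchronized_frames(lidar_frames, camera_frames, max_skew_s=0.05):
    """lidar_frames / camera_frames: lists of (timestamp, data) sorted by timestamp."""
    cam_ts = [t for t, _ in camera_frames]
    pairs = []
    if not cam_ts:
        return pairs
    for t_lidar, cloud in lidar_frames:
        i = bisect_left(cam_ts, t_lidar)
        # look at the camera frames on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(cam_ts)]
        j = min(candidates, key=lambda k: abs(cam_ts[k] - t_lidar))
        if abs(cam_ts[j] - t_lidar) <= max_skew_s:
            pairs.append((cloud, camera_frames[j][1]))   # one synchronized "frame"
    return pairs
```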
S12, acquiring current detection data of a target to be optimized under the current frame based on the three-dimensional point cloud image under the current frame and the visible light image under the current frame.
In this embodiment, the target to be optimized is a target corresponding to current three-dimensional detection data and current two-dimensional detection data under the current frame, and the target to be optimized also corresponds to stored historical detection data, so that the image in the current frame in the target to be optimized can be compensated by using the information of the previous historical frame.
The current detection data comprises current three-dimensional detection data and current two-dimensional detection data, wherein the current three-dimensional detection data is data obtained based on a three-dimensional point cloud image under a current frame, and the current two-dimensional detection data is data obtained based on a visible light image under the current frame. During the running process of the vehicle device 10, the controller 13 obtains current detection data based on the three-dimensional point cloud image under the current frame and the visible light image under the current frame by using the target recognition method, and then stores the current detection data, so that the current detection data can be directly called when the history data is needed for the subsequent frames. Among them are target recognition algorithms including, but not limited to: color-based recognition methods, texture-based recognition methods, shape-based recognition methods, and deep learning-based recognition methods. The color-based identification method is to identify the target from the preprocessed image according to the color information of the pixel points in the preprocessed image. The texture-based identification method is to identify a target from a preprocessed image according to texture information of pixel points in the preprocessed image. The shape-based recognition method is to recognize a target from a preprocessed image based on shape information of the target object in the preprocessed image. The recognition method based on the deep learning is a deep learning algorithm based on a neural network, and features in the image can be automatically learned and targets in the preprocessed image can be recognized by training the neural network.
The current three-dimensional detection data comprises, but is not limited to, three-dimensional targets identified from the three-dimensional point cloud image under the current frame, categories of the three-dimensional targets, position data of the three-dimensional targets, attitude data of the three-dimensional targets, course angles of the three-dimensional targets and three-dimensional detection frame data of the three-dimensional targets. Wherein the three-dimensional detection frame data includes, but is not limited to: the position of the three-dimensional detection frame, the size of the three-dimensional detection frame, the center of the three-dimensional detection frame and point cloud data in the three-dimensional detection frame. Current two-dimensional detection data includes, but is not limited to: the method comprises the steps of identifying a two-dimensional target, a category of the two-dimensional target, position data of the two-dimensional target, gesture data of the two-dimensional target and two-dimensional detection frame data of the two-dimensional target from a visible light image under a current frame. Wherein the two-dimensional detection frame data includes, but is not limited to: the position of the two-dimensional detection frame, the size of the two-dimensional detection frame, the center of the two-dimensional detection frame, color data within the two-dimensional detection frame, and the like.
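For clarity, the sketch below shows one possible way to organize the detection data described above; the class and field names are illustrative assumptions, not identifiers used by this embodiment.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection3D:
    category: str
    center: np.ndarray       # (3,) centre of the 3D detection frame in the lidar frame
    size: np.ndarray         # (3,) length, width, height of the 3D detection frame
    heading: float           # course angle of the target, in radians
    points: np.ndarray       # (N, 3) point cloud inside the 3D detection frame

@dataclass
class Detection2D:
    category: str
    box: np.ndarray          # (4,) x_min, y_min, x_max, y_max of the 2D detection frame
    colors: np.ndarray       # (H, W, 3) color data (image crop) inside the 2D detection frame

@dataclass
class TargetRecord:
    track_id: int            # identification used to match the target across frames
    det3d: Detection3D       # current or historical three-dimensional detection data
    det2d: Detection2D       # current or historical two-dimensional detection data
```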
S13, acquiring historical detection data of the target to be optimized.
In this embodiment, during the running process of the vehicle device 10, the three-dimensional point cloud image sensor 14 is used for acquiring three-dimensional point cloud images in real time, the visible light image sensor 15 is used for acquiring visible light images in real time, the controller 13 obtains current detection data based on the three-dimensional point cloud images under the current frame and the visible light images under the current frame, and then stores the current detection data, so that when the history data is needed for the subsequent frame, the history detection data of the object to be optimized can be directly called, and therefore, the controller 13 acquires the history detection data of the object to be optimized from the storage device of the vehicle device 10. Wherein the history detection data is data obtained from a history frame corresponding to the current frame. Wherein the history frame is a pre-set frame before the current frame. The history frame may be one or more frames. The historical frame may be a frame that is continuous with the current frame or a partially continuous frame. For example, the 20 th frame image is acquired at the current time, then the history frame may be 15 th frame, 17 th frame, 19 th frame, and so on. The history frame may be 17 th, 18 th, or 19 th frame.
Wherein the historical detection data comprises historical three-dimensional detection data and historical two-dimensional detection data. The historical three-dimensional detection data comprises, but is not limited to, three-dimensional targets identified from three-dimensional point cloud images under historical frames, categories of the three-dimensional targets, position data of the three-dimensional targets, attitude data of the three-dimensional targets, course angles of the three-dimensional targets and three-dimensional detection frame data of the three-dimensional targets. Wherein the three-dimensional detection frame data includes, but is not limited to: the position of the three-dimensional detection frame, the size of the three-dimensional detection frame, the center of the three-dimensional detection frame and point cloud data in the three-dimensional detection frame. Historical two-dimensional detection data includes, but is not limited to: the two-dimensional target, the category of the two-dimensional target, the position data of the two-dimensional target, the gesture data of the two-dimensional target and the two-dimensional detection frame data of the two-dimensional target are identified from the visible light image under the history frame. Wherein the two-dimensional detection frame data includes, but is not limited to: the position of the two-dimensional detection frame, the size of the two-dimensional detection frame, the center of the two-dimensional detection frame, color data within the two-dimensional detection frame, and the like.
And S14, splicing the current detection data and the historical detection data of the target to be optimized to obtain a three-dimensional spliced color point cloud picture corresponding to the target to be optimized.
In the present embodiment, the vehicle apparatus 10 collects data in continuous frames in real time during traveling, and thus is partially overlapped data for the current detected data in the current frame and the history detected data in the history frame. By using the image stitching method, the present detection data in the present frame and the history detection data in the history frame are stitched, so that the feature data of the target in the running process of the vehicle device 10 can be accumulated. Because the current detection data and the historical detection data both comprise two-dimensional detection data obtained based on visible light images, each point cloud in the three-dimensional spliced color point cloud image comprises point cloud data and color information.
Wherein the image stitching method includes, but is not limited to: spatial domain based image stitching algorithms, feature based image stitching algorithms, mutual information based image stitching algorithms, and so forth. Wherein the spatial domain based image stitching algorithm performs image matching based on the attributes of the pixels. The feature-based image stitching algorithm is to extract main feature points from images for matching stitching, and the mutual information-based image stitching algorithm is to perform matching stitching based on the similarity of the shared information quantity among the images.
When the current detection data and the historical detection data of the target to be optimized are spliced, color information is added, providing more color features and improving the accuracy of splicing and matching. Fig. 4 is a schematic diagram of a three-dimensional point cloud before stitching and after stitching. Comparing the three-dimensional point cloud image before stitching with the three-dimensional spliced color point cloud image, the latter includes more point cloud information and color information; as shown in fig. 4, the target in the three-dimensional detection frame contains more point cloud information and color information in the three-dimensional spliced color point cloud image, so the three-dimensional spliced color point cloud image includes more features and the accuracy of target tracking can be improved.
And S15, correcting current three-dimensional detection data of the target to be optimized based on the three-dimensional spliced color point cloud image corresponding to the target to be optimized.
In this embodiment, the three-dimensional spliced color point cloud image includes point cloud information in the history frame and color information of the history frame, so that the three-dimensional spliced color point cloud image includes more information, and thus, compensation can be performed on the current three-dimensional detection data. In addition, in the running process of the vehicle device 10, the current three-dimensional detection data of each frame is repaired by using the data in the history frame, so that the point cloud data of the target can be continuously accumulated in the running process of the vehicle device 10, missing or incomplete data of the current frame are supplemented, and the motion trail of the target can be accurately captured in the follow-up process of tracking the target, thereby improving the running safety.
In the above embodiment, the three-dimensional point cloud image under the current frame and the visible light image under the current frame of the target to be optimized are obtained, so that the current three-dimensional detection data and the current two-dimensional detection data under the current frame of the target to be optimized are obtained, the historical three-dimensional detection data and the historical two-dimensional detection data of the target to be optimized are obtained, the current detection data and the historical detection data of the target to be optimized are spliced, and the three-dimensional spliced color point cloud image corresponding to the target to be optimized is obtained, so that the three-dimensional spliced color point cloud image comprises the point cloud information in the historical frame and the color information of the historical frame, and therefore the three-dimensional spliced color point cloud image comprises more information, and compensation can be performed on the current three-dimensional detection data. By combining the information in the history frame to compensate the information in the current frame, the characteristic data of the target can be accumulated continuously in the driving process of the vehicle equipment, so that the accuracy of target tracking is improved, in addition, the color information in the two-dimensional detection data is introduced in the matching and splicing process, more characteristics are added, the accuracy of target detection is improved, and the driving safety is improved.
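The schematic Python sketch below ties steps S11 to S15 together for a single frame; every helper it receives (detect_3d, detect_2d, match_targets, stitch_colored_clouds, correct_box, and the history store) is a placeholder standing for the corresponding stage described above, not an actual implementation of this embodiment.

```python
def compensate_frame(cloud, image, history, detect_3d, detect_2d,
                     match_targets, stitch_colored_clouds, correct_box):
    """history is assumed to expose get/add/update keyed by a track id."""
    dets3d = detect_3d(cloud)                        # S12: 3D targets from the point cloud image
    dets2d = detect_2d(image)                        # S12: 2D targets from the visible light image
    targets = match_targets(dets3d, dets2d)          # bind 3D and 2D data of the same target
    for tgt in targets:
        past = history.get(tgt.track_id)             # S13: historical detection data
        if past is None:
            history.add(tgt)                         # new target: just store it for later frames
            continue
        stitched = stitch_colored_clouds(past, tgt)  # S14: 3D spliced color point cloud
        tgt.det3d = correct_box(tgt.det3d, stitched) # S15: compensate the current 3D detection data
        history.update(tgt)
    return targets
```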
In some embodiments, the obtaining the current detection data of the target to be optimized under the current frame based on the three-dimensional point cloud image under the current frame and the visible light image under the current frame includes:
Identifying targets in the three-dimensional point cloud image under the current frame to obtain at least one three-dimensional target and current three-dimensional detection data of each three-dimensional target;
identifying targets in the visible light image under the current frame to obtain at least one two-dimensional target and current two-dimensional detection data of each two-dimensional target;
acquiring at least one identical target from the at least one three-dimensional target and the at least one two-dimensional target, and binding current three-dimensional detection data and current two-dimensional detection data of the at least one identical target;
the target to be optimized is determined from the at least one identical target.
Specifically, as shown in fig. 5, a flowchart of an image compensation method based on image fusion in another embodiment includes the following steps:
s41, acquiring a three-dimensional point cloud image under a current frame and a visible light image under the current frame, which are synchronously acquired by vehicle equipment in the driving process.
S42, identifying targets in the three-dimensional point cloud image under the current frame to obtain at least one three-dimensional target and current three-dimensional detection data of each three-dimensional target.
In this embodiment, input data of a trained three-dimensional object recognition model is formed based on the three-dimensional point cloud image under the current frame, and at least one three-dimensional object and the current three-dimensional detection data of each three-dimensional object are output through the trained three-dimensional object recognition model. The three-dimensional object recognition model includes, but is not limited to: a convolutional neural network (CNN), a recurrent neural network (RNN), and the like. The input data is fed into the three-dimensional target recognition model, and the trained model outputs the current three-dimensional detection data of the three-dimensional target corresponding to the input data.
In some embodiments, training the three-dimensional object recognition model includes: acquiring a first training data set, wherein each training sample in the first training data set comprises three-dimensional sample point cloud image data and label data corresponding to the three-dimensional sample point cloud image data, the label data comprises sample target types corresponding to the three-dimensional sample point cloud image data, three-dimensional detection frame data of a sample target and the gesture of the sample target, and the three-dimensional detection frame data comprises position data of a three-dimensional detection frame and size data of the three-dimensional detection frame;
constructing an initial three-dimensional target recognition model;
and carrying out iterative training on the initial three-dimensional target recognition model through the first training data set, acquiring training samples from the first training data set to form input sample data, outputting output data corresponding to the input sample data based on the three-dimensional target recognition model in the iterative process, and calculating a loss value in each iterative process based on a first loss function, label data corresponding to the input sample data and output data corresponding to the input sample data until an iteration termination condition is reached, so as to obtain the trained three-dimensional target recognition model. Wherein the first loss function may be a mean square error function, a cross entropy function, or other functions, etc.
The acquired training samples can come from different environment scenes, such as sand dust scenes, night scenes, foggy scenes, rainy scenes, strong light scenes, weak light scenes or environments under various scene combinations, the training samples are formed by acquiring data under different environments, so that the training samples are richer, when the three-dimensional target recognition model is trained based on the training samples, the three-dimensional target recognition model can learn characteristic information under different environment scenes from the training samples, parameters in the three-dimensional target recognition model are updated through continuous iteration, and after the three-dimensional target recognition model is trained, the three-dimensional target recognition model can mine the characteristic information from input data of the model, thereby improving the target detection precision and driving safety.
S43, identifying targets in the visible light image of the current frame to obtain at least one two-dimensional target and current two-dimensional detection data of each two-dimensional target.
In some embodiments, the input data of a trained two-dimensional object recognition model is formed based on the visible light image under the current frame, and at least one two-dimensional object and the current two-dimensional detection data of each two-dimensional object are output through the trained two-dimensional object recognition model. The two-dimensional object recognition model includes, but is not limited to, a YOLOv5 network, a convolutional neural network (CNN), a recurrent neural network (RNN), and the like.
In some implementations, training the two-dimensional object recognition model includes: acquiring a second training data set, wherein each training sample in the second training data set comprises visible light image data and label data corresponding to the visible light image data, the label data comprises sample target types corresponding to the visible light image data and two-dimensional detection frame data of sample targets, and the two-dimensional detection frame data comprises position data of a two-dimensional detection frame and size data of the two-dimensional detection frame;
constructing an initial two-dimensional target recognition model;
and carrying out iterative training on the initial two-dimensional target recognition model through the second training data set, acquiring training samples from the second training data set to form input sample data, outputting output data corresponding to the input sample data based on the two-dimensional target recognition model in the iterative process, and calculating a loss value in each iterative process based on a second loss function, label data corresponding to the input sample data and output data corresponding to the input sample data until an iteration termination condition is reached, so as to obtain the trained two-dimensional target recognition model. Wherein the second loss function may be a mean square error function, a cross entropy function, or other functions, etc.
The collected training samples can come from different environment scenes, such as sand dust scenes, night scenes, foggy scenes, rainy scenes, strong light scenes, weak light scenes or environments under various scene combinations, the training samples are formed by collecting data under different environments, so that the training samples are richer, when the two-dimensional target recognition model is trained based on the training samples, the two-dimensional target recognition model can learn characteristic information under different environment scenes from the training samples, parameters in the two-dimensional target recognition model are updated through continuous iteration, and after the two-dimensional target recognition model is trained, the two-dimensional target recognition model can mine the characteristic information from input data of the model, thereby improving the accuracy of target detection and driving safety.
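The iterative training described above can be sketched as a generic supervised loop; the PyTorch snippet below uses a simplified classification-style loss (cross-entropy, with MSE as another option), whereas the embodiment's models also regress detection frames and poses, so this is only an illustrative stand-in.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model: nn.Module, dataset, epochs=10, lr=1e-3, batch_size=8):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()       # cross-entropy here; MSE is another possible loss
    for _ in range(epochs):
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)   # loss between output and label data
            loss.backward()
            optimizer.step()
    return model
```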
S44, acquiring at least one same target from the at least one three-dimensional target and the at least one two-dimensional target, and binding current three-dimensional detection data and current two-dimensional detection data of the at least one same target.
During the driving of the vehicle device, the targets in the three-dimensional point cloud image under the current frame are identified by the target identification method to obtain three-dimensional targets, and the targets in the visible light image under the current frame are identified by the target identification method to obtain two-dimensional targets. Some targets appear both among the three-dimensional targets and among the two-dimensional targets, i.e., the same target is detected as a three-dimensional target and as a two-dimensional target. Such a target therefore has both current three-dimensional detection data and current two-dimensional detection data. The target to be optimized is then selected from these identical targets.
Optionally, the acquiring at least one identical target from the at least one three-dimensional target and the at least one two-dimensional target, and binding the current three-dimensional detection data and the current two-dimensional detection data of the at least one identical target includes:
based on a space mapping relation between the three-dimensional point cloud image sensor and the visible light image sensor, projecting the at least one three-dimensional target to a coordinate system where the visible light image is positioned to obtain a plurality of projected targets;
and determining a plurality of targets with overlapping degrees larger than a preset overlapping degree between the targets after the projection and the at least one two-dimensional target as the same target, and binding current three-dimensional detection data and current two-dimensional detection data of the same target.
Specifically, in the present embodiment, when the plurality of sensors are mounted on the vehicle device 10, the initial external parameters are essentially fixed once the design of the vehicle device 10 is completed. However, errors in the mounting process and changes in tire pressure, load, deformation, etc. during travel cause the external parameters to change, so the external parameters between the plurality of sensors and the vehicle device 10 need to be jointly calibrated in advance. After the external parameters are calibrated, the mapping relationship between the data of different sensors can be determined, so that the data collected by different sensors can be projected and mapped onto each other and the various data can be fused. The acquired data are projection-mapped based on the spatial mapping relationship among the plurality of sensors.
In some embodiments, external parameter calibration is performed on a plurality of sensors by using a calibration algorithm and a marker, specifically, a calibration object is placed in a world coordinate system where a known vehicle device is located, images of the calibration object are collected by using the plurality of sensors, feature points in the collected images are extracted, the extracted feature points are matched with coordinate points in the known world coordinates, and based on the extracted feature points and the coordinate points in the known world coordinates, a mutual spatial mapping relationship between the plurality of sensors and a spatial mapping relationship between each sensor and the vehicle device 10 are calculated, wherein the spatial mapping relationship includes at least one of translation relationship and rotation relationship.
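A minimal sketch of this external parameter calibration, assuming the camera intrinsics are already known and using OpenCV's PnP solver to recover the rotation and translation between the world (vehicle) coordinate system and the visible-light camera; the function name and the choice of solvePnP are assumptions for the example, not the embodiment's prescribed algorithm.

```python
import cv2
import numpy as np

def calibrate_extrinsics(object_points_world, image_points, camera_matrix, dist_coeffs):
    """object_points_world: (N, 3) known calibration points in the world/vehicle frame.
       image_points: (N, 2) matched feature points extracted from the visible light image."""
    ok, rvec, tvec = cv2.solvePnP(object_points_world.astype(np.float64),
                                  image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solution failed")
    rotation, _ = cv2.Rodrigues(rvec)        # rotation part of the spatial mapping relationship
    return rotation, tvec                    # translation part of the spatial mapping relationship
```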
After the external parameter calibration is completed, at least one three-dimensional target is projected into the coordinate system of the visible light image based on the spatial mapping relationship between the three-dimensional point cloud image sensor 14 and the visible light image sensor 15, and a plurality of projected targets are obtained. Fig. 6 is a schematic diagram of a three-dimensional detection frame, a two-dimensional detection frame, and the projection of the three-dimensional detection frame onto two dimensions. In fig. 6, the three-dimensional point cloud image acquired in front of the vehicle device 10 is recognized to obtain the three-dimensional detection result shown in fig. 6, the visible light image acquired at the same time is recognized to obtain the two-dimensional detection result shown in fig. 6, and the three-dimensional detection result is projected into the coordinate system of the visible light image sensor 15 to obtain the projection of the three-dimensional detection frame onto two dimensions shown in fig. 6.
In the above embodiment, the plurality of targets with overlapping degrees between the plurality of projected targets and the at least one two-dimensional target being larger than the preset overlapping degree are determined to be the same target, so that the same target can be accurately found for tracking during tracking, the accuracy of target tracking can be improved, and driving safety is improved.
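The matching step can be sketched as follows: project the corners of each three-dimensional detection frame into the image with the calibrated mapping, take the two-dimensional bounding rectangle of the projection, and bind it to the two-dimensional detection frame whose overlap (IoU) exceeds the preset overlap degree. The names and the 0.5 threshold below are illustrative assumptions.

```python
import numpy as np

def project_box(corners_3d, rotation, translation, camera_matrix):
    """corners_3d: (8, 3) 3D detection frame corners (assumed in front of the camera)
       -> (4,) 2D bounding box x_min, y_min, x_max, y_max in pixels."""
    cam = corners_3d @ rotation.T + translation.reshape(1, 3)
    proj = cam @ camera_matrix.T
    uv = proj[:, :2] / proj[:, 2:3]
    return np.array([uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()])

def iou(a, b):
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def bind_same_targets(projected_boxes, boxes_2d, min_iou=0.5):
    """Pair each projected 3D detection with the 2D detection of largest overlap."""
    pairs = []
    if not boxes_2d:
        return pairs
    for i, pb in enumerate(projected_boxes):
        overlaps = [iou(pb, b2) for b2 in boxes_2d]
        j = int(np.argmax(overlaps))
        if overlaps[j] >= min_iou:            # overlap larger than the preset overlap degree
            pairs.append((i, j))              # same target: bind its 3D and 2D detection data
    return pairs
```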
S45, determining a target to be optimized from at least one same target, and acquiring current detection data of the target to be optimized.
Optionally, the determining the target to be optimized from the at least one same target includes:
acquiring a historical target in a historical frame corresponding to the current frame;
and determining the target belonging to the historical target in the at least one identical target as a target to be optimized.
Further, in some embodiments, the identification of the two-dimensional object in the current two-dimensional detection data is compared with the identification of the two-dimensional object in the historical two-dimensional detection data; objects with the same identification are determined to be the same object, and objects carrying such identifications are searched for among the at least one identical target, so that a target found both in a history frame and in the current frame can be located. For example, the ByteTrack method may be used to detect objects that appear repeatedly across multiple frames. For instance, a vehicle A appears in the 10th-frame visible light image, is then occluded so that it temporarily cannot be found, and reappears and is identified in the 15th frame; the ByteTrack method can recognize that this vehicle and the vehicle that disappeared earlier are the same vehicle, because it assigns an ID mark to each two-dimensional detection frame and judges detections with the same ID mark to be the same target.
In some embodiments, the overlapping degree of the three-dimensional detection frame in the current three-dimensional detection data and the three-dimensional detection frame in the historical detection data is calculated, and the target with the overlapping degree larger than the preset overlapping degree threshold value is taken as the same target. I.e. objects with an overlap greater than the preset overlap threshold occur in the history frame as well as in the current frame.
In the embodiment, the target which appears in the history frame is determined to be the target to be optimized, so that the same target can be accurately found for tracking during tracking, the accuracy of target tracking can be improved, and the driving safety is improved.
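A small sketch of determining the targets to be optimized from the identical targets: a target qualifies if it already has stored historical detection data (e.g., its track ID was seen in a history frame) and, as mentioned in a later embodiment, its current three-dimensional detection frame contains enough point clouds. The history-store interface and the point-count threshold are assumptions for the example.

```python
def select_targets_to_optimize(current_targets, history, min_points=30):
    """current_targets: matched targets of the current frame; history: stored target records."""
    selected = []
    for tgt in current_targets:
        has_history = history.get(tgt.track_id) is not None   # target appeared in a history frame
        enough_points = len(tgt.det3d.points) > min_points     # not a sparse false detection
        if has_history and enough_points:
            selected.append(tgt)
    return selected
```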
S46, acquiring historical detection data of the target to be optimized.
And S47, splicing the current detection data of the target to be optimized and the historical detection data to obtain a three-dimensional spliced color point cloud picture corresponding to the target to be optimized.
S48, correcting current three-dimensional detection data of the target to be optimized based on the three-dimensional spliced color point cloud picture corresponding to the target to be optimized.
Among the above steps, step S41 is the same as step S11, and steps S46 to S48 are the same as steps S13 to S15, respectively, and are not described again here; steps S42 to S45 are one implementation of step S12.
In the above embodiment, when a target appears during driving both as a three-dimensional target and as a two-dimensional target, its confidence can be considered higher; such a target needs to be tracked continuously and optimized, so the accuracy of target tracking can be improved and driving safety is improved.
In some embodiments, the current two-dimensional detection data and the historical two-dimensional detection data include color information, and the stitching the current detection data and the historical detection data of the target to be optimized to obtain the three-dimensional stitched color point cloud image corresponding to the target to be optimized includes:
fusing each point cloud in the three-dimensional detection frame in the historical three-dimensional detection data with color information in the historical two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data;
fusing each point cloud in the three-dimensional detection frame in the current three-dimensional detection data with the color information in the current two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data;
and registering and splicing the color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data and the color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data to obtain the three-dimensional spliced color point cloud image corresponding to the target to be optimized.
Fig. 7 is a schematic diagram of a process of splicing the current detection data and the historical detection data, which includes:
And S71, fusing each point cloud in the three-dimensional detection frame in the historical three-dimensional detection data with the color information in the historical two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data.
And fusing each point cloud in the three-dimensional detection frame in the historical three-dimensional detection data with the color information in the historical two-dimensional detection data based on the space mapping relation between the sensor for collecting the three-dimensional point cloud image and the sensor for collecting the visible light image to obtain a color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data. And then fusing the three-dimensional point cloud image in the three-dimensional detection frame under the history frame with the visible light image in the three-dimensional detection frame under the history frame to obtain the color point cloud image in the three-dimensional detection frame under the history frame. For one object, each point cloud in the three-dimensional detection frame of the object under the history frame comprises point cloud data and color data, and more features are included.
And S72, fusing each point cloud in the three-dimensional detection frame in the current three-dimensional detection data with the color information in the current two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data.
And fusing each point cloud in the three-dimensional detection frame in the current three-dimensional detection data with the color information in the current two-dimensional detection data based on the space mapping relation between the sensor for acquiring the three-dimensional point cloud image and the sensor for acquiring the visible light image to obtain a color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data. And then fusing the three-dimensional point cloud image in the three-dimensional detection frame under the current frame with the visible light image in the three-dimensional detection frame under the current frame to obtain the color point cloud image in the three-dimensional detection frame under the current frame. For one object, each point cloud in the three-dimensional detection frame of the object under the current frame comprises point cloud data and color data, and more features are included.
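Steps S71 and S72 can both be sketched with the same colorization routine: each point cloud inside a three-dimensional detection frame is projected into the corresponding visible light image through the calibrated spatial mapping, and each point takes the RGB value of the pixel it lands on. The rounding scheme and variable names below are assumptions for the example.

```python
import numpy as np

def colorize_points(points, rotation, translation, camera_matrix, image):
    """points: (N, 3) point cloud in a 3D detection frame; image: (H, W, 3) visible light image."""
    cam = points @ rotation.T + translation.reshape(1, 3)
    proj = cam @ camera_matrix.T
    z = np.clip(proj[:, 2:3], 1e-6, None)            # avoid division by zero
    uv = proj[:, :2] / z
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points.shape[0], 3), dtype=np.float32)
    colors[valid] = image[v[valid], u[valid]] / 255.0  # attach the RGB of the pixel each point hits
    return np.hstack([points, colors])                 # (N, 6) color point cloud
```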
And S73, registering and splicing the color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data with the color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data to obtain the three-dimensional spliced color point cloud image corresponding to the target to be optimized.
And then splicing the color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data with the color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data by using an image splicing method. Wherein the image stitching method includes, but is not limited to: spatial domain based image stitching algorithms, feature based image stitching algorithms, mutual information based image stitching algorithms, and so forth. Wherein the spatial domain based image stitching algorithm performs image matching based on the attributes of the pixels. The feature-based image stitching algorithm is to extract main feature points from images for matching stitching, and the mutual information-based image stitching algorithm is to perform matching stitching based on the similarity of the shared information quantity among the images.
Optionally, after the three-dimensional spliced color point cloud images are obtained, the spliced three-dimensional point cloud images can be filtered by using a filtering algorithm, so that the three-dimensional spliced color point cloud images are not too dense.
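Step S73 and the optional filtering can be sketched with Open3D, assuming its colored-ICP registration and voxel down-sampling APIs are available; the correspondence distance, normal-estimation radius and voxel size below are illustrative values, not parameters prescribed by this embodiment.

```python
import numpy as np
import open3d as o3d

def stitch_colored_clouds(points_hist, colors_hist, points_cur, colors_cur, voxel=0.05):
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_hist))
    src.colors = o3d.utility.Vector3dVector(colors_hist)
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_cur))
    dst.colors = o3d.utility.Vector3dVector(colors_cur)
    for pc in (src, dst):                              # colored ICP needs normals
        pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
    reg = o3d.pipelines.registration.registration_colored_icp(src, dst, 0.3, np.eye(4))
    src.transform(reg.transformation)                  # align the historical cloud to the current one
    src += dst                                         # splice the two color point clouds
    return src.voxel_down_sample(voxel)                # filter so the result is not too dense
```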
When the current detection data and the historical detection data of the target to be optimized are spliced, each point cloud in the three-dimensional detection frame of the target under the current frame and under the history frame includes both point cloud data and color data. When the three-dimensional point cloud is sparse, or the point cloud is incomplete because the object is occluded, the color information added during splicing provides more color features and can improve the accuracy of splicing and matching.
In the above embodiment, when the current detection data and the historical detection data of the target to be optimized are spliced, the three-dimensional point cloud is sparse or incomplete, color information is added during the splicing, more color features are added, the accuracy in the splicing matching can be improved, and the information in the historical frame and the information in the previous frame are spliced, so that the characteristic data of the target can be continuously accumulated in the running process of the vehicle equipment, and the accuracy of target tracking is improved.
In some embodiments, the correcting the current three-dimensional detection data of the target to be optimized based on the three-dimensional stitched color point cloud image corresponding to the target to be optimized includes:
Acquiring the center of a three-dimensional detection frame in the current three-dimensional detection data of the target to be optimized and acquiring the center of a three-dimensional spliced color point cloud picture corresponding to the target to be optimized;
calculating the offset angle between the center of the three-dimensional detection frame and the center of the three-dimensional spliced color point cloud picture in the current three-dimensional detection data;
and correcting the current three-dimensional detection data according to the offset angle and the three-dimensional spliced color point cloud picture to obtain corrected current three-dimensional detection data.
Fig. 8 is a schematic flow chart of correcting data in a current frame of a target to be optimized, including:
s81, acquiring the center of a three-dimensional detection frame in the current three-dimensional detection data of the target to be optimized, and acquiring the center of a three-dimensional spliced color point cloud picture corresponding to the target to be optimized.
In this embodiment, one target corresponds to one three-dimensional detection frame. And calculating the center of the three-dimensional detection frame in the current three-dimensional detection data according to the size data of the three-dimensional detection frame in the current three-dimensional detection data. And calculating the center of the three-dimensional spliced color point cloud picture according to the size data of the three-dimensional spliced color point cloud picture.
S82, calculating the offset angle between the center of the three-dimensional detection frame in the current three-dimensional detection data and the center of the three-dimensional spliced color point cloud picture.
In this embodiment, the two centers calculated above are located at the same coordinate, and the offset angle can be calculated from the positions of the two centers with respect to the origin of the coordinates.
And S83, correcting the current three-dimensional detection data according to the offset angle and the three-dimensional spliced color point cloud image to obtain corrected current three-dimensional detection data.
In this embodiment, according to the offset angle, the three-dimensional detection frame in the current three-dimensional detection data is moved to the position of the three-dimensional spliced color point cloud image, so that the center of the three-dimensional detection frame in the current three-dimensional detection data coincides with the center of the three-dimensional spliced color point cloud image, and according to the range of the three-dimensional spliced color point cloud image and the size data of the three-dimensional detection frame in the current three-dimensional detection data, the size of the three-dimensional detection frame in the current three-dimensional detection data is adjusted and corrected in equal proportion, and finally the corrected three-dimensional detection frame is generated. Such corrected three-dimensional detection frames for the current frame include, but are not limited to: point cloud data under a history frame, color data under a history frame, point cloud data under a current frame, color data under a current frame, and the like. For each target, after the three-dimensional point cloud data and the color data in the three-dimensional detection frames of the front frame and the rear frame are spliced, the data in the three-dimensional detection frame under the current frame is corrected, so that when the target is tracked, the historical frame data can be used for reference to correct the data, the color characteristics are increased, the matching and splicing precision is improved, and the target tracking accuracy is improved.
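A simplified numpy sketch of this correction: the centre of the three-dimensional detection frame in the current three-dimensional detection data is moved onto the centre of the three-dimensional spliced color point cloud, and the frame size is then enlarged in equal proportion until it covers the range of the stitched cloud. The specific scaling rule here is an assumption for illustration.

```python
import numpy as np

def correct_box(center_cur, size_cur, stitched_points):
    """center_cur, size_cur: (3,) current 3D detection frame; stitched_points: (N, 3).
       Assumes all entries of size_cur are positive."""
    lo, hi = stitched_points.min(axis=0), stitched_points.max(axis=0)
    center_stitched = (lo + hi) / 2.0                  # centre of the stitched color point cloud
    scale = float(np.max((hi - lo) / size_cur))        # ratio needed to cover the stitched cloud
    corrected_size = size_cur * max(scale, 1.0)        # adjust in equal proportion, never shrink
    return center_stitched, corrected_size             # frame moved onto the stitched cloud centre
```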
Fig. 9 is a schematic diagram of target detection before and after data correction. For a target, the result before correction, namely the three-dimensional detection frame obtained from the three-dimensional point cloud image of the current frame as shown in fig. 9, is corrected to obtain the corrected result shown in fig. 9; comparing the two yields the comparison diagram shown in fig. 9, which contains both the result before correction and the result after correction. The orientation of each 3D frame and the arrow mark inside it indicate the heading of the target.
In some embodiments, the number of point clouds in the three-dimensional detection frame of the current three-dimensional detection data of the target to be optimized is greater than the preset number of point clouds.
Since some false detections may occur during target recognition, when the target to be optimized is determined from the at least one identical target, a target whose three-dimensional detection frame contains more than the preset number of point clouds is determined as the target to be optimized.
In the above embodiment, the target to be optimized is determined by the number of point clouds in the three-dimensional detection frame of the current three-dimensional detection data, so that targets with sparse point clouds can be excluded, thereby improving the accuracy of target tracking.
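For illustration only, such a filter may be sketched as follows; the key name and the threshold value are assumptions, not values fixed by this application.

```python
def select_targets_to_optimize(matched_targets, min_point_count=50):
    """Keep only matched targets whose current 3D detection frame contains more
    than the preset number of points; 'num_points' and 50 are illustrative."""
    return [t for t in matched_targets if t["num_points"] > min_point_count]
```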
In some embodiments, the method further comprises:
determining a target without history detection data in the at least one same target as a new target, and storing current detection data of the new target under a current frame;
and deleting the historical targets which are not updated in the preset time when the stored target number is greater than the preset target number.
When a new target is detected in the current frame while the vehicle device 10 is driving, its current detection data under the current frame needs to be stored so that it can be tracked subsequently. However, because the targets keep changing while the vehicle device 10 is driving, some targets will not always appear in the image of the environment in front of the vehicle device 10. The stored targets therefore need to be updated so that new targets are tracked in time, and when the number of stored targets is greater than the preset number of targets, the historical targets that have not been updated within the preset time are deleted. If a historical target has no updated frame image within the preset time, it no longer needs to be tracked and can be deleted.
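By way of illustration and not limitation, such target maintenance may be sketched as a small store keyed by target identity; the class name, the capacity of 100 targets and the 2-second age limit below are assumptions.

```python
import time

class TrackedTargetStore:
    """Illustrative target store: new targets are added with their current detection
    data, and once the store grows past `max_targets`, historical targets not updated
    within `max_age_s` seconds are deleted."""

    def __init__(self, max_targets: int = 100, max_age_s: float = 2.0):
        self.max_targets = max_targets
        self.max_age_s = max_age_s
        self.targets = {}  # target_id -> {"data": ..., "last_seen": timestamp}

    def update(self, target_id, detection_data):
        self.targets[target_id] = {"data": detection_data, "last_seen": time.time()}
        if len(self.targets) > self.max_targets:
            now = time.time()
            stale = [tid for tid, t in self.targets.items()
                     if now - t["last_seen"] > self.max_age_s]
            for tid in stale:
                del self.targets[tid]
```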
In the above embodiment, new targets are discovered in time during tracking and targets that have not been updated for a long time are cleaned up, which improves the target tracking precision and thus the driving safety.
Referring to fig. 10, an embodiment of the present application provides an image compensation device based on image fusion, including: the current data acquisition module 21 is used for acquiring a three-dimensional point cloud image under a current frame and a visible light image under the current frame, which are synchronously acquired by vehicle equipment in the driving process; the target obtaining module 22 is configured to obtain current detection data under a current frame of a target to be optimized based on a three-dimensional point cloud image under the current frame and a visible light image under the current frame, where the current detection data includes current three-dimensional detection data obtained based on the three-dimensional point cloud image under the current frame and current two-dimensional detection data obtained based on the visible light image under the current frame; a historical data obtaining module 23, configured to obtain historical detection data of the target to be optimized, where the historical detection data is data obtained from a historical frame corresponding to the current frame, and the historical detection data includes historical three-dimensional detection data and historical two-dimensional detection data; the splicing module 24 is configured to splice the current detection data and the historical detection data of the target to be optimized to obtain a three-dimensional spliced color point cloud image corresponding to the target to be optimized; and the correction module 25 is used for correcting the current three-dimensional detection data of the target to be optimized based on the three-dimensional spliced color point cloud image corresponding to the target to be optimized.
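By way of illustration only, the following sketch shows how the five modules may be chained per frame; the class and method names are assumptions rather than elements recited in this application.

```python
class ImageCompensationDevice:
    """Illustrative orchestration of modules 21-25; all names are assumptions."""

    def __init__(self, acquirer, target_getter, history_getter, stitcher, corrector):
        self.acquirer = acquirer              # current data acquisition module 21
        self.target_getter = target_getter    # target obtaining module 22
        self.history_getter = history_getter  # historical data obtaining module 23
        self.stitcher = stitcher              # splicing module 24
        self.corrector = corrector            # correction module 25

    def process_frame(self):
        cloud, image = self.acquirer.acquire()               # synchronized 3D cloud + visible image
        for target in self.target_getter.get(cloud, image):  # targets to be optimized
            history = self.history_getter.get(target)        # detection data from historical frames
            stitched = self.stitcher.stitch(target, history)  # 3D stitched colored point cloud
            self.corrector.correct(target, stitched)          # corrected current 3D detection data
```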
Optionally, the target acquisition module 22 is further configured to:
identifying targets in the three-dimensional point cloud image under the current frame to obtain at least one three-dimensional target and current three-dimensional detection data of each three-dimensional target;
identifying targets in the visible light image under the current frame to obtain at least one two-dimensional target and current two-dimensional detection data of each two-dimensional target;
acquiring at least one identical target from the at least one three-dimensional target and the at least one two-dimensional target, and binding current three-dimensional detection data and current two-dimensional detection data of the at least one identical target;
the target to be optimized is determined from the at least one identical target.
Optionally, the target acquisition module 22 is further configured to:
based on the space mapping relation between the sensor for collecting the three-dimensional point cloud image and the sensor for collecting the visible light image, projecting the at least one three-dimensional target to a coordinate system where the visible light image is located, and obtaining a plurality of projected targets;
and determining a plurality of targets with overlapping degrees larger than a preset overlapping degree between the targets after the projection and the at least one two-dimensional target as the same target, and binding current three-dimensional detection data and current two-dimensional detection data of the same target.
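For illustration only, the projection and overlap test may be sketched as follows, assuming a 4×4 LiDAR-to-camera extrinsic matrix and a 3×3 camera intrinsic matrix; a projected three-dimensional target can then be reduced to its 2D bounding rectangle and matched to the two-dimensional target whose overlap exceeds the preset overlap degree.

```python
import numpy as np

def project_to_image(points_xyz: np.ndarray, T_lidar_to_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project LiDAR points (N, 3) into the visible-light image; returns (N, 2) pixels."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_lidar_to_cam @ pts_h.T)[:3]   # points expressed in the camera frame
    uv = (K @ cam) / cam[2]                # perspective division (assumes points in front of camera)
    return uv[:2].T

def overlap_iou(a, b) -> float:
    """Overlap (IoU) of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)
```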
Optionally, the target acquisition module 22 is further configured to:
acquiring a historical target in a historical frame corresponding to the current frame;
and determining the target belonging to the historical target in the at least one identical target as a target to be optimized.
Optionally, the splicing module 24 is further configured to:
fusing each point cloud in the three-dimensional detection frame in the historical three-dimensional detection data with color information in the historical two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data;
fusing each point cloud in the three-dimensional detection frame in the current three-dimensional detection data with the color information in the current two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data;
and registering and splicing the color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data and the color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data to obtain the three-dimensional spliced color point cloud image corresponding to the target to be optimized.
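By way of illustration and not limitation, the colorization and the registration-and-stitching steps may be sketched as below. The sketch reuses the projection helper from the earlier sketch and assumes the third-party library Open3D for point-to-point ICP; any suitable registration implementation could be substituted, and the correspondence distance of 0.2 m is an assumption.

```python
import numpy as np
import open3d as o3d  # assumed third-party library; any ICP implementation would do

def colorize_points(points_xyz, image_rgb, T_lidar_to_cam, K):
    """Attach image colors to the points inside a detection frame.
    image_rgb is an (H, W, 3) uint8 array; uses project_to_image() from the earlier sketch."""
    uv = np.round(project_to_image(points_xyz, T_lidar_to_cam, K)).astype(int)
    h, w = image_rgb.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz[ok])
    pcd.colors = o3d.utility.Vector3dVector(image_rgb[uv[ok, 1], uv[ok, 0]] / 255.0)
    return pcd

def stitch_colored_clouds(history_pcd, current_pcd, max_dist=0.2):
    """Register the historical colored cloud onto the current one with point-to-point
    ICP and merge the two into one stitched colored point cloud."""
    reg = o3d.pipelines.registration.registration_icp(
        history_pcd, current_pcd, max_dist,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    history_pcd.transform(reg.transformation)
    pts = np.vstack([np.asarray(history_pcd.points), np.asarray(current_pcd.points)])
    cols = np.vstack([np.asarray(history_pcd.colors), np.asarray(current_pcd.colors)])
    stitched = o3d.geometry.PointCloud()
    stitched.points = o3d.utility.Vector3dVector(pts)
    stitched.colors = o3d.utility.Vector3dVector(cols)
    return stitched
```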
Optionally, the correction module 25 is further configured to:
acquiring the center of a three-dimensional detection frame in the current three-dimensional detection data of the target to be optimized and acquiring the center of a three-dimensional spliced color point cloud picture corresponding to the target to be optimized;
calculating the offset angle between the center of the three-dimensional detection frame in the current three-dimensional detection data and the center of the three-dimensional spliced color point cloud picture;
and correcting the current three-dimensional detection data according to the offset angle and the three-dimensional spliced color point cloud picture to obtain corrected current three-dimensional detection data.
Optionally, the number of the point clouds in the three-dimensional detection frame of the current three-dimensional detection data of the target to be optimized is larger than the preset number of the point clouds.
Optionally, the target acquisition module 22 is further configured to:
determining a target without history detection data in the at least one same target as a new target, and storing current detection data of the new target under a current frame;
and deleting the historical targets which are not updated in the preset time when the stored target number is greater than the preset target number.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 does not constitute a limitation of the image compensation apparatus based on image fusion, and the respective modules may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a controller in the vehicle device, or may be stored in software form in a memory of the vehicle device, so that the controller can call and execute the operations corresponding to the above modules. In other embodiments, the image compensation apparatus based on image fusion may include more or fewer modules than illustrated.
Referring to fig. 11, in another aspect of the embodiments of the present application, there is further provided a vehicle apparatus 10, including a memory 3011 and a processor 3012, where the memory 3011 stores a computer program, and the computer program when executed by the processor causes the processor 3012 to perform the steps of the image compensation method based on image fusion provided in any of the embodiments of the present application.
The processor 3012 is the control center of the vehicle device: it uses various interfaces and lines to connect the parts of the whole vehicle device, and performs the various functions of the vehicle device and processes data by running or executing the software programs and/or modules stored in the memory 3011 and invoking the data stored in the memory 3011. Optionally, the processor 3012 may include one or more processing cores; preferably, the processor 3012 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 3012.
The memory 3011 may be used to store software programs and modules, and the processor 3012 executes various functional applications and data processing by running the software programs and modules stored in the memory 3011. The memory 3011 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the use of the vehicle device 10. In addition, the memory 3011 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 3011 may also include a memory controller to provide the processor 3012 with access to the memory 3011.
In another aspect of the embodiments of the present application, there is further provided a storage medium storing a computer program, where the computer program when executed by a processor causes the processor to execute the steps of the image compensation method based on image fusion provided in any of the embodiments of the present application.
Those skilled in the art will appreciate that implementing all or part of the processes of the methods provided in the above embodiments may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The foregoing is merely an illustration of the present invention and does not limit it; any variation or substitution that can readily be conceived by a person skilled in the art falls within the scope of the present invention. The scope of the invention shall be determined by the appended claims.

Claims (10)

1. An image compensation method based on image fusion, characterized in that the method comprises the following steps:
acquiring a three-dimensional point cloud image under a current frame and a visible light image under the current frame, which are synchronously acquired by vehicle equipment in the driving process;
acquiring current detection data of a target to be optimized under a current frame based on a three-dimensional point cloud image under the current frame and a visible light image under the current frame, wherein the current detection data comprises current three-dimensional detection data obtained based on the three-dimensional point cloud image under the current frame and current two-dimensional detection data obtained based on the visible light image under the current frame;
acquiring historical detection data of the target to be optimized, wherein the historical detection data are data acquired from a historical frame corresponding to the current frame, and the historical detection data comprise historical three-dimensional detection data and historical two-dimensional detection data;
splicing the current detection data and the historical detection data of the target to be optimized to obtain a three-dimensional spliced color point cloud picture corresponding to the target to be optimized;
and correcting the current three-dimensional detection data of the target to be optimized based on the three-dimensional spliced color point cloud picture corresponding to the target to be optimized.
2. The image compensation method based on image fusion according to claim 1, wherein the obtaining current detection data of the target to be optimized in the current frame based on the three-dimensional point cloud image in the current frame and the visible light image in the current frame includes:
identifying targets in the three-dimensional point cloud image under the current frame to obtain at least one three-dimensional target and current three-dimensional detection data of each three-dimensional target;
identifying targets in the visible light image under the current frame to obtain at least one two-dimensional target and current two-dimensional detection data of each two-dimensional target;
acquiring at least one identical target from the at least one three-dimensional target and the at least one two-dimensional target, and binding current three-dimensional detection data and current two-dimensional detection data of the at least one identical target;
the target to be optimized is determined from the at least one identical target.
3. The image compensation method based on image fusion according to claim 2, wherein the acquiring at least one identical target from the at least one three-dimensional target and the at least one two-dimensional target and binding current three-dimensional detection data and current two-dimensional detection data of the at least one identical target comprises:
based on a space mapping relation between the three-dimensional point cloud image sensor and the visible light image sensor, projecting the at least one three-dimensional target to a coordinate system where the visible light image is positioned to obtain a plurality of projected targets;
and determining a plurality of targets with overlapping degrees larger than a preset overlapping degree between the targets after the projection and the at least one two-dimensional target as the same target, and binding current three-dimensional detection data and current two-dimensional detection data of the same target.
4. The image compensation method based on image fusion according to claim 2, wherein the determining the target to be optimized from the at least one identical target comprises:
acquiring a historical target in a historical frame corresponding to the current frame;
and determining the target belonging to the historical target in the at least one identical target as a target to be optimized.
5. The image compensation method based on image fusion according to claim 1, wherein the current two-dimensional detection data and the historical two-dimensional detection data include color information, and the stitching the current detection data and the historical detection data of the target to be optimized to obtain the three-dimensional stitched color point cloud image corresponding to the target to be optimized includes:
fusing each point cloud in the three-dimensional detection frame in the historical three-dimensional detection data with color information in the historical two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data;
fusing each point cloud in the three-dimensional detection frame in the current three-dimensional detection data with the color information in the current two-dimensional detection data to obtain a color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data;
and registering and splicing the color point cloud image in the three-dimensional detection frame in the historical three-dimensional detection data and the color point cloud image in the three-dimensional detection frame in the current three-dimensional detection data to obtain the three-dimensional spliced color point cloud image corresponding to the target to be optimized.
6. The image compensation method based on image fusion according to claim 1, wherein the correcting the current three-dimensional detection data of the target to be optimized based on the three-dimensional stitched color point cloud image corresponding to the target to be optimized comprises:
acquiring the center of a three-dimensional detection frame in the current three-dimensional detection data of the target to be optimized and acquiring the center of a three-dimensional spliced color point cloud picture corresponding to the target to be optimized;
calculating the offset angle between the center of the three-dimensional detection frame in the current three-dimensional detection data and the center of the three-dimensional spliced color point cloud picture;
and correcting the current three-dimensional detection data according to the offset angle and the three-dimensional spliced color point cloud picture to obtain corrected current three-dimensional detection data.
7. The image compensation method based on image fusion according to any one of claims 1 to 6, wherein the number of point clouds in a three-dimensional detection frame of the current three-dimensional detection data of the target to be optimized is larger than a preset number of point clouds.
8. The image compensation method based on image fusion according to any one of claims 1 to 6, characterized in that the method further comprises:
determining a target without history detection data in the at least one same target as a new target, and storing current detection data of the new target under a current frame;
and deleting the historical targets which are not updated in the preset time when the stored target number is greater than the preset target number.
9. A vehicle apparatus comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 8.
10. A computer readable storage medium storing a computer program, which when executed by a processor causes the processor to perform the steps of the method according to any one of claims 1 to 8.