CN109495694B - RGB-D-based environment sensing method and device

RGB-D-based environment sensing method and device

Info

Publication number
CN109495694B
CN109495694B
Authority
CN
China
Prior art keywords
data acquisition
acquisition module
exposure
image
unit
Prior art date
Legal status
Active
Application number
CN201811307133.8A
Other languages
Chinese (zh)
Other versions
CN109495694A (en)
Inventor
岳越
刘智
张介迟
Current Assignee
Freetech Intelligent Systems Co Ltd
Original Assignee
Freetech Intelligent Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Freetech Intelligent Systems Co Ltd filed Critical Freetech Intelligent Systems Co Ltd
Priority to CN201811307133.8A
Publication of CN109495694A
Application granted
Publication of CN109495694B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Abstract

The invention discloses an RGB-D based environment sensing method and device. The method includes the steps of: synchronously exposing a laser radar data acquisition module and an image data acquisition module through a hardware synchronization mechanism; the image data acquisition module acquiring a color image of the environment picture through a single exposure while the laser radar data acquisition module acquires a plurality of groups of intensity information of the environment picture through a plurality of exposures and generates N gray level images, wherein N is greater than or equal to 6; and fusing the color image and the N gray level images at the original data level to obtain an environment picture with improved brightness. Because the two modules are exposed synchronously, and the gray level images and the color image are acquired at different acquisition frequencies before being fused at the original data level, the method and device obtain good environmental information under various weather and natural environments.

Description

RGB-D-based environment sensing method and device
Technical Field
The invention relates to the technical field of intelligent driving, in particular to an environment sensing method and device based on RGB-D.
Background
Under the background of continuous development and innovation in computing, artificial intelligence, machine vision and related technologies, images acquired by cameras have become increasingly valuable, and image-based applications increasingly common. A typical application is to install a camera on a robot, continuously acquire environment images, and process them with computer vision algorithms, so that the robot can observe its surroundings as a human does with eyes. On this basis, a camera can be applied to an automobile to sense the environment around the vehicle.
A common color camera passively receives light from an object and images it at its image plane. Images obtained under this principle are easily affected by external factors such as illumination, shadow and shooting angle, and current computer vision algorithms are not robust in processing such images, so the application of these algorithms is correspondingly limited. A depth camera, by contrast, actively senses the distance of an object by means of the light it emits, and its imaging is less affected by illumination, shadow and shooting angle.
With the continuous development of the ADAS and AD industries, the demand for environment perception is increasingly strong, but it is constrained by multiple factors such as cost and the current state of industrial technology. The monocular cameras currently used in industry have two main limitations: 1) they provide no distance information; and 2) they cannot be used in application scenarios with insufficient light or complex weather.
Therefore, it is necessary to provide an RGB-D based environment sensing method and apparatus aimed at solving the bottleneck of current monocular cameras in application scenarios with complicated lighting.
Disclosure of Invention
The invention provides an environment sensing method and device based on RGB-D (red, green, blue plus depth), and aims to solve the technical problem in the prior art that a camera cannot acquire a high-brightness picture in an application scenario with insufficient light. The invention aims to break through the current industrial bottleneck and provide a better environment perception method and equipment for the intelligent automobile.
In order to solve the above technical problem, in a first aspect, the present invention provides an RGB-D based environment sensing method, including the steps of:
synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism;
the image data acquisition module acquires a color image of the environment picture through a single exposure; meanwhile, the laser radar data acquisition module acquires a plurality of groups of intensity information of the environment picture through a plurality of exposures and generates N gray level images, wherein N is greater than or equal to 6; the single exposure time of the laser radar data acquisition module is less than 1 ms;
and carrying out fusion processing on the original data level on the color image and the N gray level images to obtain an environment image with improved brightness.
Further, the lidar data acquisition module acquires a plurality of groups of intensity information of the environmental picture through a plurality of exposures, and generates N gray level maps with the intensity information, specifically including:
and converting the intensity information into gray scale information to generate the gray scale map.
Further, the intensity information is converted into gray scale information by a maximum value and mean gradient method.
Further, the performing of fusion processing on the original data level on the color image and the N gray level images to obtain an environment image with improved brightness specifically includes:
converting the color domain of the color map into YcCbcCrc;
acquiring a low-brightness area in the converted color image, wherein the low-brightness area is an area with brightness lower than a preset threshold value;
supplementing the brightness information of the part, corresponding to the low-brightness area, in the gray-scale image to the low-brightness area to obtain a fused image;
acquiring attribute information of the fused picture, wherein the attribute information comprises gradient amplitude, gradient direction, gradient loss, shadow and noise;
performing adaptive smoothing processing on the data corresponding to the attribute information and eliminating motion blur in the fused picture based on the gray-scale image;
and carrying out convergence processing on the data corresponding to the attribute information.
Further, synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism includes:
the laser radar data acquisition module emits an exposure signal and performs exposure according to it, and simultaneously sends the exposure signal to the image data acquisition module, which receives the exposure signal and performs exposure accordingly;
or, alternatively,
the image data acquisition module emits an exposure signal and performs exposure according to it, and simultaneously sends the exposure signal to the laser radar data acquisition module, which receives the exposure signal and performs exposure accordingly.
Further, the image data acquisition module acquires a color image of the environmental picture through one exposure, and the laser radar data acquisition module acquires a plurality of sets of intensity information of the environmental picture through a plurality of exposures, and specifically includes:
acquiring laser point cloud data and image data of synchronous exposure; obtaining the intensity information through the laser point cloud data, and obtaining a color image of the environment picture through the image data;
and after the laser point cloud data is processed, fusing the laser point cloud data into the image data to obtain distance information of the environment.
Further, the laser radar data acquisition module is a Flash Lidar, and in the synchronous exposure process the ratio of the acquisition frequency of the laser radar data acquisition module to that of the image data acquisition module is M:1, wherein M is greater than or equal to 100; the image data acquisition module is a camera module, and the color filter pattern adopted by the camera module is RGGB, RCCB or RCCC.
Preferably, the single exposure time of the laser radar data acquisition module is 50ns-1 ms.
Preferably, the single exposure time of the laser radar data acquisition module is 1-50 ns.
Preferably, the single exposure time of the laser radar data acquisition module is less than 1 ns.
In a second aspect, the present invention provides an RGB-D based environment sensing apparatus, comprising: an image data acquisition module, a laser radar data acquisition module, a synchronous exposure module and an environment picture fusion module;
the synchronous exposure module is used for synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism;
the image data acquisition module is used for acquiring a color image of the environment picture by exposure at one time;
the laser radar data acquisition module is used for acquiring a plurality of groups of intensity information of the environment picture through a plurality of exposures and generating N gray level images, wherein N is greater than or equal to 6, the single exposure time of the laser radar data acquisition module being less than 1 ms; and the environment picture fusion module is used for carrying out fusion processing at the original data level on the color image and the N gray level images to obtain an environment picture with improved brightness.
Furthermore, the laser radar data acquisition module further comprises an information conversion unit, and the information conversion unit is used for converting the intensity information into gray scale information.
Further, the information conversion unit includes a first information conversion unit, and the first information conversion unit is configured to convert the intensity information into grayscale information by using a maximum value and mean gradient method.
Further, the information conversion unit further includes a second information conversion unit, and the second information conversion unit is configured to convert the intensity information into grayscale information by using an ROI method.
Further, the environment picture fusion module includes: a color domain conversion unit, a picture fusion unit, an attribute information acquisition unit, a motion blur elimination unit and a convergence unit,
the color domain conversion unit is used for converting the color domain of the color image into YcCbcCrc;
the image fusion unit is used for acquiring a low-brightness area in the converted color image, wherein the low-brightness area is an area with brightness lower than a preset threshold value; supplementing the brightness information of the part, corresponding to the low-brightness area, in the gray-scale image to the low-brightness area to obtain a fused image;
the attribute information acquisition unit is used for acquiring attribute information of the fused picture, wherein the attribute information comprises gradient amplitude, gradient direction, gradient loss, shadow and noise;
the motion blur elimination unit is used for performing self-adaptive smoothing processing on the data corresponding to the attribute information and eliminating motion blur in the fused picture based on the gray image;
and the convergence unit is used for carrying out convergence processing on the data corresponding to the attribute information.
Further, the synchronous exposure module comprises a first synchronous exposure module and a second synchronous exposure module, the first synchronous exposure module and the second synchronous exposure module are respectively arranged in the laser radar data acquisition module or the image data acquisition module, the first synchronous exposure module comprises a trigger generation unit, a first exposure control unit and a signal sending unit, and the second synchronous exposure module comprises a signal receiving unit and a second exposure control unit;
the trigger generation unit is used for sending out an exposure signal;
the first exposure control unit is used for receiving the exposure signal sent by the trigger generation unit and performing exposure accordingly;
the signal sending unit is used for receiving the exposure signal sent by the trigger generating unit and sending the exposure signal to the signal receiving unit;
the signal receiving unit is used for receiving the exposure signal sent by the signal sending unit;
and the second exposure control unit is used for receiving the exposure signal sent by the signal receiving unit and performing exposure accordingly.
Further, the laser radar data acquisition module comprises a first data acquisition unit and an intensity information acquisition unit,
the first data acquisition unit is used for acquiring laser point cloud data synchronously exposed with the second data acquisition unit;
the intensity information acquisition unit is used for acquiring the intensity information through the laser point cloud data;
the image data acquisition module comprises a second data acquisition unit and a color image acquisition unit;
the second data acquisition unit is used for acquiring image data synchronously exposed with the first data acquisition unit;
the color image acquisition unit is used for acquiring the color image of the environment picture through the image data;
and the distance information acquisition module is used for processing the laser point cloud data and then fusing the laser point cloud data into the image data to obtain the distance information of the environment.
Further, the laser radar data acquisition module is a Flash Lidar, and in the synchronous exposure process the ratio of the acquisition frequency of the laser radar data acquisition module to that of the image data acquisition module is M:1, wherein M is greater than or equal to 100; the image data acquisition module is a camera module, and the color filter pattern adopted by the camera module is RGGB, RCCB or RCCC.
Preferably, the single exposure time of the laser radar data acquisition module is 50ns-1 ms.
Preferably, the single exposure time of the laser radar data acquisition module is 1-50 ns.
Preferably, the single exposure time of the laser radar data acquisition module is less than 1 ns.
The environment sensing method and device based on RGB-D provided by the invention have the following beneficial effects:
(1) the RGB-D based environment sensing method adopts synchronous exposure of a laser radar data acquisition module and an image data acquisition module, and after gray level images and a color image are acquired at different acquisition frequencies, the two kinds of images are fused at the original data level, so that an environment picture with improved brightness is obtained; the method obtains good environmental information under various weather and natural environments and has better robustness;
(2) according to the invention, the laser radar data acquisition module realizes synchronous exposure with the image data acquisition module through a hardware synchronization mechanism, and the laser point cloud data is little affected by motion distortion, so an environment picture with higher fusion precision can be obtained;
(3) compared with a camera, the laser radar data acquisition module has a high acquisition frequency, so data with better real-time performance can be obtained while the vehicle travels at high speed, improving the accuracy of the data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an RGB-D based environment sensing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a process of synchronously exposing a laser radar data acquisition module and an image data acquisition module through a hardware synchronization mechanism according to an embodiment of the present invention;
fig. 3 is another schematic flow chart of synchronously exposing the lidar data acquisition module and the image data acquisition module through a hardware synchronization mechanism according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a process for fusing a color map and a gray map into an original data plane according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of another RGB-D based environment sensing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a Flash Lidar and RGB camera synchronous exposure signal provided by an embodiment of the invention;
FIG. 7 is a block diagram of an RGB-D based environment sensing apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram of a synchronous exposure module provided by an embodiment of the present invention;
fig. 9 is a block diagram of a first synchronous exposure module according to an embodiment of the present invention;
FIG. 10 is a block diagram of a second synchronous exposure module according to an embodiment of the present invention;
fig. 11 is a block diagram illustrating a hardware synchronization mechanism for synchronously exposing the lidar data acquisition module and the image data acquisition module according to an embodiment of the present invention;
FIG. 12 is another block diagram illustrating a hardware synchronization mechanism for synchronously exposing a laser radar data acquisition module and an image data acquisition module according to an embodiment of the present invention;
FIG. 13 is a block diagram of a lidar data acquisition module according to an embodiment of the present invention;
FIG. 14 is a block diagram of an image data acquisition module according to an embodiment of the present invention;
FIG. 15 is a block diagram of an environment picture fusion module according to an embodiment of the present invention;
FIG. 16 is another block diagram of an RGB-D based environment sensing apparatus according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of an installation of a lidar data acquisition module and an image data acquisition module according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
In the several embodiments provided in this application, the described apparatus embodiments are only illustrative, for example, the division of the modules is only one logical division, and there may be other divisions when the actual implementation is performed, for example, a plurality of modules or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of modules or units through some interfaces, and may be in an electrical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The invention is applied to the scene of the intelligent driving field, and mainly aims to sense the surrounding environment of the vehicle in the ADAS and AD processes.
Example 1
As shown in fig. 1, an embodiment of the present invention provides an RGB-D based environment sensing method, including:
s110, synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism;
at the hardware level, synchronous exposure of the laser radar data acquisition module and the image data acquisition module is realized through hardware communication. In the prior art each device exposes independently: the radar has its own exposure frequency, the camera has its own, and after each outputs its exposure data, a downstream processing system fuses the radar output with the camera video data. By contrast, realizing synchronous exposure through a hardware mechanism time-synchronizes the laser point cloud data with the image data, so the laser point cloud data does not suffer motion distortion.
Fig. 2 is a schematic flow chart illustrating a process of synchronously exposing a laser radar data acquisition module and an image data acquisition module through a hardware synchronization mechanism in an embodiment of the present invention:
s210, a trigger generating unit in the laser radar data acquisition module sends out an exposure signal and exposes the exposure signal, meanwhile, the exposure signal is sent to the image data acquisition module through channels such as MIPI/LVDS/parallel port, and a signal unit in the image data acquisition module receives the exposure signal and exposes the exposure signal.
Fig. 3 is another schematic flow chart illustrating the synchronous exposure of the lidar data acquisition module and the image data acquisition module by a hardware synchronization mechanism according to the embodiment of the present invention:
and S310, a trigger generation unit in the image data acquisition module emits an exposure signal and performs exposure according to it; meanwhile, the exposure signal is sent to the laser radar data acquisition module through a channel such as MIPI, LVDS or a parallel port, and the laser radar data acquisition module receives the exposure signal and performs exposure accordingly.
In the embodiment of the invention, a main control chip may be provided, and when the main control chip outputs a signal, the trigger generation unit is triggered so that the laser radar data acquisition module and the image data acquisition module expose synchronously. Alternatively, a specific interface may be provided, and entering the interface triggers the trigger generation unit; or a hardware synchronous exposure mode may be provided, and entering this mode triggers the trigger generation unit; or a time period may be set, and when the time period elapses, the trigger generation unit is triggered so that the two modules expose synchronously.
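For illustration, the following is a minimal sketch of this trigger fan-out logic. The class and method names (TriggerGenerator, subscribe, fire) are hypothetical, not from the patent, and a Python callback stands in for the MIPI/LVDS/parallel-port channel of a real system.

```python
from typing import Callable, List

class TriggerGenerator:
    """One exposure signal drives both acquisition modules (hardware sync)."""
    def __init__(self) -> None:
        self._handlers: List[Callable[[float], None]] = []

    def subscribe(self, handler: Callable[[float], None]) -> None:
        self._handlers.append(handler)

    def fire(self, timestamp_s: float) -> None:
        # Every subscriber sees the same timestamp, so the lidar burst and the
        # camera exposure start together instead of free-running independently.
        for handler in self._handlers:
            handler(timestamp_s)

trigger = TriggerGenerator()
trigger.subscribe(lambda t: print(f"lidar exposure burst starts at t = {t:.6f} s"))
trigger.subscribe(lambda t: print(f"camera exposure starts at t = {t:.6f} s"))
trigger.fire(0.0)
```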
S120, within the same time window, the image data acquisition module acquires a color image of the environment picture through a single exposure; meanwhile, the laser radar data acquisition module acquires a plurality of groups of intensity information of the environment picture through a plurality of exposures and generates N gray level images, wherein N is greater than or equal to 6; the single exposure time of the laser radar data acquisition module is 50 ns-1 ms;
in the embodiment of the invention, laser point cloud data synchronously exposed with the image data acquisition module is acquired through the laser radar data acquisition module, and image data synchronously exposed with the laser radar data acquisition module is acquired through the image data acquisition module.
Obtaining the intensity information through the laser point cloud data, and converting the intensity information into gray scale information by a maximum value and mean value gradient method, wherein the specific calculation formula is:
y = kx + b;
where b = 0 and k = max(A)/max(B), max(A) being the maximum gray level (255) and max(B) being the maximum ambient reflectance intensity;
as for the other parameters, x is the intensity value corresponding to a given point in the actual environment, and y is the resulting gray scale value;
in this way the data collected by the radar (intensity values of 1 to 10,000) are mapped into a gray scale map (0-255), for example by adjusting the density of the data;
obtaining a color map of the environment picture through the image data;
and after the laser point cloud data is processed, fusing the laser point cloud data into the image data to obtain distance information of the environment.
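A minimal sketch of the linear mapping y = kx + b described above follows, assuming the intensity range 1 to 10,000 and gray range 0-255 stated in the text; the function name and use of NumPy are illustrative assumptions.

```python
import numpy as np

def intensity_to_gray(intensity: np.ndarray, max_intensity: float = 10_000.0) -> np.ndarray:
    # y = kx + b with b = 0 and k = max(A)/max(B) = 255 / max_intensity
    k = 255.0 / max_intensity
    return np.clip(k * intensity, 0, 255).astype(np.uint8)

print(intensity_to_gray(np.array([1.0, 5_000.0, 10_000.0])))  # -> [  0 127 255]
```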
S130, carrying out fusion processing on an original data layer on the color image and the N gray level images to obtain an environment image with improved brightness; as shown in fig. 4, the method specifically includes the following steps:
s410, converting the color domain of the color image into YcCbcCrc;
s420, acquiring a low-brightness area in the converted color image, wherein the low-brightness area is an area with brightness lower than a preset threshold value; the low-brightness area is defined relative to the normal brightness, wherein a preset threshold value is lower than the normal brightness value, and the value can be set according to an actual application scene;
s430, supplementing brightness information of a part, corresponding to the low-brightness area, in the gray-scale image to the low-brightness area to obtain a fused image;
s440, acquiring attribute information of the fused picture, wherein the attribute information comprises gradient amplitude, gradient direction, gradient loss, shadow and noise;
s450, performing self-adaptive smoothing processing on data corresponding to the attribute information and eliminating motion blur in the fused picture based on the gray-scale image;
the adaptive smoothing process can be realized through convolution: a filter W(x, y) of size m × n is convolved with the image f(x, y) according to the formula
g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} W(s, t) f(x − s, y − t)
where a = (m − 1)/2 and b = (n − 1)/2; the minus signs on the right side of the equation indicate flipping f, i.e., rotating it by 180°.
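The following is a runnable sketch of this convolution; the 3 × 3 averaging filter is purely an illustrative choice, not specified by the patent. scipy.ndimage.convolve performs exactly the flipped-kernel convolution the formula describes.

```python
import numpy as np
from scipy.ndimage import convolve

f = np.random.rand(480, 640).astype(np.float32)    # fused picture (toy data)
W = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)   # m = n = 3, so a = b = 1
g = convolve(f, W, mode="nearest")                 # g(x, y) = sum_s sum_t W(s, t) f(x - s, y - t)
```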
And S460, performing convergence processing on the data corresponding to the attribute information to obtain an environment picture with improved brightness; the convergence processing may be performed using an IRLS (iteratively reweighted least squares) method and a CG (conjugate gradient) method. The brightness improvement means that the brightness of the obtained environment picture is improved compared with the color image of step S120.
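As an illustration of the IRLS convergence step, the snippet below solves a generic robust fitting problem: each iteration solves a weighted least-squares problem whose weights come from the current residuals. The L1-style reweighting, iteration count and epsilon are assumptions for illustration only, since the patent does not specify the cost function.

```python
import numpy as np

def irls(A: np.ndarray, y: np.ndarray, iters: int = 20, eps: float = 1e-6) -> np.ndarray:
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # ordinary least-squares start
    for _ in range(iters):
        r = np.abs(A @ x - y)                       # residuals of the current fit
        w = 1.0 / np.maximum(r, eps)                # reweight: approximates an L1 fit
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x
```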
The image data acquisition module in the embodiment of the invention includes, but is not limited to, a vehicle-mounted camera. The laser radar data acquisition module is a Flash Lidar, a non-scanning lidar that emits and receives area-array laser light and outputs point cloud data arranged in the form of a two-dimensional image, including distance, reflection intensity, speed and other information. Being non-scanning, it is free of the dependence on scanning equipment and offers higher detection precision and reliability at lower cost. The wavelength of the infrared light emitted by the Flash Lidar can be set according to actual conditions; for example, for harsher environments, the Flash Lidar supports SWIR.
As can be seen from the technical solutions disclosed in the embodiments of the present specification, an RGB-D based environment sensing method is provided in which a laser radar data acquisition module and an image data acquisition module are exposed synchronously through a hardware synchronization mechanism; the image data acquisition module acquires a color image of the environment picture through a single exposure; meanwhile, the laser radar data acquisition module acquires a plurality of groups of intensity information of the environment picture through a plurality of exposures and generates N gray level images, wherein N is greater than or equal to 6, the single exposure time of the laser radar data acquisition module being less than 1 ms; and the color image and the N gray level images are fused at the original data level to obtain an environment picture with improved brightness. The method has the following advantages: the laser radar data acquisition module and the image data acquisition module are exposed synchronously, and after the gray level images and the color image are acquired at different acquisition frequencies, the two kinds of images are fused at the original data level, so an environment picture with improved brightness is obtained without adding further image data acquisition modules; the laser radar data acquisition module realizes synchronous exposure with the image data acquisition module through a hardware synchronization mechanism, so the laser point cloud data does not suffer motion distortion and an environment picture with higher fusion precision can be obtained; and compared with a camera, the laser radar data acquisition module has a high acquisition frequency, so data with better real-time performance can be obtained when the vehicle travels at high speed, improving the accuracy of the data.
Example 2
Fig. 5 is a schematic flow chart of another RGB-D based environment sensing method according to an embodiment of the present invention, where the method specifically includes:
s510, carrying out combined calibration on a laser radar data acquisition module and an image data acquisition module to align the spatial positions of laser point cloud data and image data;
specifically, in the embodiment of the present invention, the joint calibration of the laser radar data acquisition module and the image data acquisition module may be performed by using a method in the prior art, which is not described herein again.
In the embodiment of the present invention, before the joint calibration of the laser radar data acquisition module and the image data acquisition module, a step of positioning the two modules at the same location is further included, which ensures the spatial alignment of the subsequently acquired laser point cloud data and image data. To ensure the accuracy of the acquired data and the coverage of the effective data range, the fusion field of view of the laser radar data acquisition module may be set larger than that of the image data acquisition module; as shown in fig. 17, A is the image data acquisition module with fusion field of view α, B is the laser radar data acquisition module with fusion field of view β, and β is larger than α.
It should be noted that, in the embodiment of the present invention, the image data acquisition module and the laser radar data acquisition module are complementary, and the solution only requires the coincident fields of view (i.e., the intersection region) of the two acquisition modules; the solution therefore does not limit the specific relationship between α and β, and β may also be set equal to or smaller than α.
In this embodiment, the lidar data acquisition module and the image data acquisition module may be mounted on the top, side, or front of the vehicle, and the mounting positions of the lidar data acquisition module and the image data acquisition module are not limited in the embodiment of the present invention.
S520, synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism, and synchronizing the time of laser point cloud data and the time of image data;
s530, the image data acquisition module acquires a color image of the environment picture through one-time exposure; meanwhile, the laser radar data acquisition module acquires a plurality of groups of intensity information of the environmental picture through a plurality of times of exposure and generates N gray level images, wherein N is more than or equal to 6, the single exposure time of the laser radar data acquisition module is 1-50ns, and the time can be set according to actual needs and can also be set to be less than 1 ns; the method comprises the following specific steps:
acquiring laser point cloud data and image data which are spatially aligned and synchronously exposed, wherein the laser point cloud data comprises the distance and reflection intensity information of the point cloud; converting the intensity information into gray scale information by an ROI (region of interest) method, in which the mapping relation between the intensity information and the gray scale information is divided into n segments and the gray scale information corresponding to each segment is calculated independently;
the specific calculation formula is as follows:
y = k_i·x + b_i, for x in the i-th segment, i = 1, …, n;
where b_i is a constant and k_i = max(A_i)/max(B_i), A_i being the gray scale values and B_i the ambient reflectance intensity values of the i-th segment; as for the other parameters, x is the intensity value corresponding to a given point in the actual environment, and y is the resulting gray scale value.
The segmentation parameters and their effects are adjusted in combination with field tests at the present stage, and are used to differentiate the intensity information of characteristic objects and the gray scale information after the corresponding mapping;
in this way the data collected by the radar (intensity values of 1 to 10,000) are mapped into a gray scale map (0-255), for example by adjusting the density of the data;
obtaining a color map of the environment picture through the image data;
and after the laser point cloud data is processed, fusing the laser point cloud data into the image data to obtain distance information of the environment.
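A minimal sketch of the segmented mapping y = k_i·x + b_i described above follows; np.interp applies the piecewise-linear map in one call. The segment breakpoints and gray levels chosen here are illustrative assumptions (denser gray resolution for low intensities), not values from the patent.

```python
import numpy as np

def segmented_intensity_to_gray(intensity: np.ndarray,
                                intensity_edges: np.ndarray,
                                gray_edges: np.ndarray) -> np.ndarray:
    # intensity_edges: n + 1 breakpoints; gray_edges: the matching gray levels.
    return np.interp(intensity, intensity_edges, gray_edges).astype(np.uint8)

edges = np.array([1.0, 500.0, 2_000.0, 10_000.0])  # n = 3 segments
grays = np.array([0.0, 96.0, 192.0, 255.0])        # per-segment gray breakpoints
print(segmented_intensity_to_gray(np.array([250.0, 1_000.0, 9_000.0]), edges, grays))
```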
S540, carrying out fusion processing on an original data layer on the color image and the N gray level images to obtain an environment image with improved brightness; for the specific operation, refer to the above steps S410-S460, which are not described herein again.
In this embodiment, the laser radar data acquisition module is a Flash Lidar, and during synchronous exposure the ratio of its acquisition frequency to that of the image data acquisition module is M:1, where M is greater than or equal to 100; a schematic diagram of the synchronous exposure is shown in fig. 6. The camera module is an RGB camera with a single exposure time greater than or equal to 5 μs, and the color filter pattern adopted by the camera module is RGGB; it should be noted that the pattern is not limited to RGGB and may also be RCCB, RCCC or the like. The image data acquisition module is a camera module, which may be, but is not limited to, a monocular camera or a binocular camera.
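As a back-of-the-envelope check that these figures are mutually consistent (a toy calculation under assumed values, not part of the patent): with M = 100 lidar exposures of 50 ns each, the lidar burst fits within one camera exposure of 5 μs.

```python
M = 100                   # lidar frames per camera frame (M >= 100)
lidar_exposure_ns = 50    # upper bound of the 1-50 ns range in this embodiment
camera_exposure_us = 5.0  # camera single exposure >= 5 us

burst_us = M * lidar_exposure_ns / 1_000.0
print(f"lidar burst: {burst_us} us vs camera exposure: {camera_exposure_us} us")
# -> lidar burst: 5.0 us vs camera exposure: 5.0 us
```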
Example 3
An embodiment of the present invention provides an RGB-D based environment sensing apparatus, as shown in fig. 7. The apparatus includes: a synchronous exposure module 710, an image data acquisition module 720, a laser radar data acquisition module 730, an environment picture fusion module 740 and a distance information acquisition module 750. The synchronous exposure module 710 is configured to synchronously expose the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism; specifically, as shown in fig. 8, the synchronous exposure module 710 includes a first synchronous exposure module 7101 and a second synchronous exposure module 7102;
specifically, as shown in fig. 9 and 10, the first synchronous exposure module 7101 includes: a trigger generating unit 71011, a first exposure control unit 71012 and a signal transmitting unit 71013, wherein the second synchronous exposure module 7102 comprises: a signal receiving unit 71021 and a second exposure control unit 71022;
the trigger generation unit 71011 is used for sending out an exposure signal;
the first exposure control unit 71012 is configured to receive the exposure signal sent by the trigger generation unit and expose the exposure signal;
the signal sending unit 71013 is configured to receive the exposure signal sent by the trigger generating unit and send the exposure signal to the signal receiving unit through a channel such as MIPI/LVDS/parallel port;
the signal receiving unit 71021 is used for receiving the exposure signal sent by the signal sending unit;
the second exposure control unit 71022 is configured to receive the exposure signal sent by the signal receiving unit and expose the exposure signal.
Fig. 11 is a block diagram illustrating a laser radar data acquisition module and an image data acquisition module synchronously exposed by a hardware synchronization mechanism according to an embodiment of the present invention, where the first synchronous exposure module 7101 is disposed in the laser radar data acquisition module 730, and the second synchronous exposure module 7102 is disposed in the image data acquisition module 720.
Fig. 12 is another block diagram illustrating a laser radar data acquisition module and an image data acquisition module synchronously exposed by a hardware synchronization mechanism according to an embodiment of the present invention, where the first synchronous exposure module 7101 is disposed in the image data acquisition module 720, and the second synchronous exposure module 7102 is disposed in the laser radar data acquisition module 730.
The image data acquisition module 720 is used for acquiring a color image of the environment picture by exposure once;
the laser radar data acquisition module 730 is used for acquiring a plurality of groups of intensity information of the environmental picture by exposure for a plurality of times and generating N gray level images, wherein N is more than or equal to 6; the single exposure time of the laser radar data acquisition module is 50ns-1 ms; the ratio of the acquisition frequency of the laser radar data acquisition module to the acquisition frequency of the image data acquisition module is M: 1, wherein M is more than or equal to 100; the image data acquisition module is a camera module, and the color card modes adopted by the camera module are RGGB, RCCB and RCCC.
As shown in fig. 13, the lidar data acquisition module 730 includes a first data acquisition unit 7301, an intensity information acquisition unit 7302 and an information conversion unit 7303, where the first data acquisition unit 7301 is configured to acquire laser point cloud data synchronously exposed with the second data acquisition unit; the intensity information acquisition unit 7302 is configured to acquire the intensity information from the laser point cloud data; the information conversion unit 7303 includes a first information conversion unit configured to convert the intensity information into gray scale information by the maximum value and mean gradient method; for the specific calculation, refer to the description of embodiment 1.
As shown in fig. 14, the image data acquisition module 720 includes a second data acquisition unit 7201 and a color image acquisition unit 7202; the second data acquisition unit 7201 is configured to acquire image data exposed synchronously with the first data acquisition unit;
the color image obtaining unit 7202 configured to obtain a color image of the environment picture from the image data;
the distance information obtaining module 750 is configured to process the laser point cloud data and then fuse the laser point cloud data into the image data to obtain the distance information of the environment.
As shown in fig. 15, the environment picture fusion module 740 includes a color domain conversion unit 7401, a picture fusion unit 7402, an attribute information acquisition unit 7403, a motion blur elimination unit 7404, and a convergence unit 7405;
the color domain converting unit 7401 is configured to convert the color domain of the color map into YcCbcCrc;
the image fusion unit 7402 is configured to obtain a low brightness region in the converted color image, where the low brightness region is a region with brightness lower than a preset threshold; supplementing the brightness information of the part, corresponding to the low-brightness area, in the gray-scale image to the low-brightness area to obtain a fused image;
the attribute information acquiring unit 7403 is configured to acquire attribute information of the fused picture, where the attribute information includes a gradient amplitude, a gradient direction, a gradient loss, a shadow, and noise;
the motion blur removing unit 7404 is configured to perform adaptive smoothing processing on the data corresponding to the attribute information and remove motion blur in the fused picture based on the grayscale map;
the convergence unit 7405 is configured to perform convergence processing on the data corresponding to the attribute information, and may specifically adopt an IRLS and CG method.
Example 4
Fig. 16 is a block diagram illustrating another RGB-D based environment sensing apparatus according to an embodiment of the present invention, the apparatus including: a calibration module 700, a synchronous exposure module 710, an image data acquisition module 720, a laser radar data acquisition module 730, an environment picture fusion module 740 and a distance information acquisition module 750. The calibration module 700 is configured to perform joint calibration of the laser radar data acquisition module and the image data acquisition module, so that the spatial positions of the laser point cloud data and the image data are aligned;
the synchronous exposure module 710 is configured to synchronously expose the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism;
the image data acquisition module 720 is used for acquiring a color image of the environment picture by exposure once;
the laser radar data acquisition module 730 is used for acquiring a plurality of groups of intensity information of the environmental picture by exposure for a plurality of times and generating N gray level images, wherein N is more than or equal to 6; the single exposure time of the laser radar data acquisition module is 1-50ns, and the time is set according to actual needs and can also be set to be less than 1 ns; the ratio of the acquisition frequency of the laser radar data acquisition module to the acquisition frequency of the image data acquisition module is M: 1, wherein M is more than or equal to 100; the image data acquisition module is a camera module, and the color card modes adopted by the camera module are RGGB, RCCB and RCCC.
The distance information obtaining module 750 is configured to process the laser point cloud data and then fuse the laser point cloud data into the image data to obtain the distance information of the environment.
The synchronous exposure module 710, the image data acquisition module 720, the laser radar data acquisition module 730 and the environment picture fusion module 740 can refer to the description in embodiment 3 and are not described here again. Different from embodiment 3, the information conversion unit 7303 in the lidar data acquisition module 730 includes a second information conversion unit, which is configured to convert the intensity information into gray scale information by the ROI method; for the specific calculation, refer to the description in embodiment 2.
It should be noted that the device and method embodiments in the device embodiment are based on the same inventive concept.
The RGB-D based environment sensing method and device provided by the invention adopt synchronous exposure of the laser radar data acquisition module and the image data acquisition module, and after gray level images and a color image are acquired at different acquisition frequencies, the two kinds of images are fused at the original data level, so that an environment picture with improved brightness is obtained. The method obtains good environmental information under various weather and natural environments and has better robustness. According to the invention, the laser radar data acquisition module realizes synchronous exposure with the image data acquisition module through a hardware synchronization mechanism, and the laser point cloud data is little affected by motion distortion, so an environment picture with higher fusion precision can be obtained. Compared with a camera, the laser radar data acquisition module has a high acquisition frequency, so data with better real-time performance can be obtained while the vehicle travels at high speed, improving the accuracy of the data.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device, since it is basically similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (18)

1. An RGB-D based environment sensing method is characterized by comprising the following steps:
synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism;
the image data acquisition module acquires a color image of the environment picture through one exposure; meanwhile, the laser radar data acquisition module acquires a plurality of groups of intensity information of the environment picture through a plurality of exposures and generates N gray level images, wherein N is greater than or equal to 6; the single exposure time of the laser radar data acquisition module is less than 1 ms;
the image data acquisition module acquiring a color image of the environment picture through one exposure and the laser radar data acquisition module acquiring a plurality of groups of intensity information of the environment picture through a plurality of exposures specifically includes:
acquiring laser point cloud data and image data of synchronous exposure; obtaining the intensity information through the laser point cloud data, and obtaining a color image of the environment picture through the image data; processing the laser point cloud data, and fusing the laser point cloud data into the image data to obtain distance information of the environment;
and carrying out fusion processing on the original data level on the color image and the N gray level images to obtain an environment image with improved brightness.
2. The RGB-D based environment sensing method of claim 1, wherein the lidar data acquisition module acquires a plurality of sets of intensity information of the environment picture through a plurality of exposures and generates N gray-scale maps, which specifically includes:
and converting the intensity information into gray scale information to generate the gray scale map.
3. The RGB-D based environment sensing method of claim 2, wherein the intensity information is converted into gray scale information using a maximum value and mean gradient method.
4. The RGB-D based environment sensing method as recited in claim 1, wherein the fusion processing at the original data level is performed on the color image and the N gray level images to obtain an environment image with improved brightness, which specifically includes:
converting the color domain of the color map into YcCbcCrc;
acquiring a low-brightness area in the converted color image, wherein the low-brightness area is an area with brightness lower than a preset threshold value;
supplementing the brightness information of the part, corresponding to the low-brightness area, in the gray-scale image to the low-brightness area to obtain a fused image;
acquiring attribute information of the fused picture, wherein the attribute information comprises gradient amplitude, gradient direction, gradient loss, shadow and noise;
performing adaptive smoothing processing on the data corresponding to the attribute information and eliminating motion blur in the fused picture based on the gray-scale image;
and carrying out convergence processing on the data corresponding to the attribute information.
5. The RGB-D based environment sensing method according to claim 1, wherein the exposing the laser radar data acquisition module and the image data acquisition module synchronously through a hardware synchronization mechanism comprises:
the laser radar data acquisition module emits an exposure signal and performs exposure according to it, and simultaneously sends the exposure signal to the image data acquisition module, which receives the exposure signal and performs exposure accordingly;
or, alternatively,
the image data acquisition module emits an exposure signal and performs exposure according to it, and simultaneously sends the exposure signal to the laser radar data acquisition module, which receives the exposure signal and performs exposure accordingly.
6. The RGB-D based environment sensing method according to claim 1, wherein the laser radar data acquisition module is a Flash Lidar, and a ratio of the acquisition frequency of the laser radar data acquisition module to the acquisition frequency of the image data acquisition module in the synchronous exposure process is M:1, wherein M is greater than or equal to 100; the image data acquisition module is a camera module, and the color filter pattern adopted by the camera module is RGGB, RCCB or RCCC.
7. The RGB-D based environment sensing method of claim 1, wherein the single exposure time of the laser radar data acquisition module is 50 ns to 1 ms.
8. The RGB-D based environment sensing method of claim 1, wherein the single exposure time of the laser radar data acquisition module is 1 ns to 50 ns.
9. The RGB-D based environment sensing method of claim 1, wherein the single exposure time of the laser radar data acquisition module is less than 1 ns.
10. An RGB-D based environment sensing apparatus, comprising: an image data acquisition module, a laser radar data acquisition module, a synchronous exposure module, a distance information acquisition module, and an environment picture fusion module;
the synchronous exposure module is used for synchronously exposing the laser radar data acquisition module and the image data acquisition module through a hardware synchronization mechanism;
the image data acquisition module is used for acquiring a color image of the environment picture through a single exposure;
the laser radar data acquisition module is used for acquiring a plurality of groups of intensity information of the environment picture through a plurality of exposures and generating N gray-scale maps, wherein N ≥ 6, and the single exposure time of the laser radar data acquisition module is less than 1 ms;
the laser radar data acquisition module comprises a first data acquisition unit and an intensity information acquisition unit, and the image data acquisition module comprises a second data acquisition unit and a color image acquisition unit; the first data acquisition unit is used for acquiring laser point cloud data synchronously exposed with the second data acquisition unit; the intensity information acquisition unit is used for obtaining the intensity information from the laser point cloud data; the second data acquisition unit is used for acquiring image data synchronously exposed with the first data acquisition unit; the color image acquisition unit is used for obtaining the color image of the environment picture from the image data;
the distance information acquisition module is used for processing the laser point cloud data and fusing it into the image data to obtain the distance information of the environment;
and the environment picture fusion module is used for performing fusion processing at the raw data level on the color image and the N gray-scale images to obtain an environment image with improved brightness.
11. The RGB-D based environment sensing apparatus of claim 10, wherein the laser radar data acquisition module further comprises an information conversion unit configured to convert the intensity information into gray-scale information.
12. The RGB-D based environment sensing apparatus of claim 11, wherein the information conversion unit comprises a first information conversion unit,
the first information conversion unit being configured to convert the intensity information into gray-scale information using the maximum-and-mean gradient method.
13. The RGB-D based environment sensing apparatus of claim 10, wherein the environment picture fusion module comprises: a color space conversion unit, a picture fusion unit, an attribute information acquisition unit, a motion blur elimination unit, and a convergence unit,
the color space conversion unit is used for converting the color space of the color image to YcCbcCrc;
the picture fusion unit is used for acquiring a low-brightness region in the converted color image, wherein the low-brightness region is a region whose brightness is lower than a preset threshold, and for supplementing the low-brightness region with the brightness information of the corresponding portion of the gray-scale image to obtain a fused image;
the attribute information acquisition unit is used for acquiring attribute information of the fused image, wherein the attribute information comprises gradient amplitude, gradient direction, gradient loss, shadow, and noise;
the motion blur elimination unit is used for performing adaptive smoothing on the data corresponding to the attribute information and eliminating motion blur in the fused image based on the gray-scale image;
and the convergence unit is used for performing convergence processing on the data corresponding to the attribute information.
14. The RGB-D based environment sensing apparatus of claim 10, wherein the synchronous exposure module comprises a first synchronous exposure module and a second synchronous exposure module, disposed respectively in the laser radar data acquisition module and the image data acquisition module; the first synchronous exposure module comprises a trigger generation unit, a first exposure control unit, and a signal sending unit, and the second synchronous exposure module comprises a signal receiving unit and a second exposure control unit;
the trigger generation unit is used for issuing an exposure signal;
the first exposure control unit is used for receiving the exposure signal issued by the trigger generation unit and performing exposure;
the signal sending unit is used for receiving the exposure signal issued by the trigger generation unit and sending it to the signal receiving unit;
the signal receiving unit is used for receiving the exposure signal sent by the signal sending unit;
and the second exposure control unit is used for receiving the exposure signal from the signal receiving unit and performing exposure.
15. The RGB-D based environment sensing apparatus of claim 10, wherein the laser radar data acquisition module is a Flash Lidar, and the ratio of the acquisition frequency of the laser radar data acquisition module to the acquisition frequency of the image data acquisition module during synchronous exposure is M:1, wherein M ≥ 100; the image data acquisition module is a camera module, and the color filter array pattern adopted by the camera module is one of RGGB, RCCB, and RCCC.
16. The RGB-D based environment sensing apparatus of claim 10, wherein the single exposure time of the laser radar data acquisition module is 50 ns to 1 ms.
17. The RGB-D based environment sensing apparatus of claim 10, wherein the single exposure time of the laser radar data acquisition module is 1 ns to 50 ns.
18. The RGB-D based environment sensing apparatus of claim 10, wherein the single exposure time of the laser radar data acquisition module is less than 1 ns.
CN201811307133.8A 2018-11-05 2018-11-05 RGB-D-based environment sensing method and device Active CN109495694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811307133.8A CN109495694B (en) 2018-11-05 2018-11-05 RGB-D-based environment sensing method and device

Publications (2)

Publication Number Publication Date
CN109495694A CN109495694A (en) 2019-03-19
CN109495694B (en) 2021-03-05

Family

ID=65693802

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant