CN112017128B - Image self-adaptive defogging method - Google Patents

Image self-adaptive defogging method

Info

Publication number: CN112017128B
Application number: CN202010856742.XA
Authority: CN (China)
Prior art keywords: image, module, defogging, adaptive, self
Other versions: CN112017128A (in Chinese)
Inventors: 俞峰, 汤勇明, 郑姚生
Current and original assignee: Southeast University
Application filed by Southeast University; priority to CN202010856742.XA
Legal status: Active (application granted)

Classifications

    • G — PHYSICS; G06 — COMPUTING, CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration; G06T5/73 Deblurring, sharpening
    • G06T5/70 Denoising, smoothing
    • G06T7/00 Image analysis; G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or enhancement; G06T2207/10 Image acquisition modality; G06T2207/10016 Video, image sequence


Abstract

The invention discloses an image adaptive defogging method whose framework comprises an image sensor module, an image adaptive defogging module and a universal video transmission interface module. Data collected by the image sensor are processed by the image adaptive defogging module and then encoded and output in a universal video transmission format. The image adaptive defogging module automatically analyzes the input image data, decides whether the scene needs defogging, delineates the sky region, and applies the defogging operation only to the pixel regions that need it. Integrating this adaptive defogging function into a device improves its ability to acquire information in foggy scenes and provides richer raw information for subsequent digital image processing.

Description

Image self-adaptive defogging method
Technical Field
The invention relates to the field of digital image processing, and in particular to an image processing method tightly coupled to its hardware platform. It is closely related to image adaptive defogging algorithms and to the functional integration of cameras in edge-computing application scenarios.
Background
Image defogging is widely used in traffic systems, security, remote sensing, unmanned aerial vehicles, maritime traffic and other fields; defogging the original input image effectively increases the amount of information that can be extracted from it. Existing defogging techniques based on image enhancement usually neglect color fidelity and local detail. Image restoration algorithms offer better color fidelity, but they are generally hard to run in real time on platforms with limited computing power, so they are difficult to apply in video surveillance, and imaging devices with a defogging function often perform unsatisfactorily. Moreover, in modern highly integrated image capture devices, a camera with a defogging function usually requires the user to specify the defogging parameters manually and to decide whether the scene needs defogging at all, so it cannot adapt to different complex scenes. Designing a deployable image defogging method that adapts itself to the scene is therefore of great significance in practical applications.
Current processor platforms for image defogging mainly include CPUs, GPUs and FPGAs, each with different advantages in different applications. A CPU suits flexible serial execution but offers little advantage in large-scale, high-throughput image processing. A GPU can efficiently execute complex defogging algorithms, but its space and power cost limits its use in small edge devices. An FPGA combines a high energy-efficiency ratio with strong computing power, suits the parallel processing demanded by high-data-flux defogging algorithms, is easy to integrate into small edge devices, and can bring a breakthrough in device performance.
Prior papers and patents on image defogging follow three main design ideas: defogging based on image enhancement, defogging based on Retinex enhancement, and defogging based on the dark channel prior. These algorithms share a similar framework: atmospheric light estimation, propagation (transmission) function estimation, and image refinement. However, conventional methods have no mechanism for automatically deciding whether an input image is foggy. They also fail to handle the halo effect at depth discontinuities, leaving visible artifacts in the processed image. The image adaptive defogging method of this invention establishes an effective adaptive defogging mechanism, removes the halo effect at depth discontinuities, and markedly improves the color, contrast and saturation of the processed image.
Disclosure of Invention
Technical problems: the invention discloses an image adaptive defogging method that combines an image sensor module, an adaptive defogging module and a universal video transmission interface to realize an image defogging operation that integrates well with its platform and provides richer raw information for subsequent digital image processing.
The technical scheme is as follows: to achieve the above object, the invention provides a framework for an image adaptive defogging method comprising an image sensor module, an image adaptive defogging module and a universal video transmission interface connected in sequence;
the image sensor module outputs the acquired video image data to the image adaptive defogging module for processing, which then encodes and outputs the data in a universal video transmission format;
the image adaptive defogging module caches the input video image data in the memory module and passes it to the processor module; the processor module calibrates the sky area in the input image through the sky-area calibration module, uses the fog-judgment module to compute the image saturation of the non-sky area, and judges from the saturation distribution whether the image is foggy, then selects either direct output or output through the defogging module according to the result; the defogging module performs the defogging operation on the non-sky area of an input image to be defogged through the atmospheric light value estimation module, the propagation function estimation module and the image refinement module, and outputs the defogged image.
Wherein,
The image adaptive defogging module comprises a memory module, connected to it for caching video image data, and a processor module that implements the image adaptive defogging function; the processor module comprises a fog-judgment module, a sky-area calibration module and an image defogging module.
The operation of the image adaptive defogging function depends on the processor module and invokes the memory module.
The fog-judgment module in the processor module decides whether the scene is foggy from the computed saturation distribution data of the input image; the image is judged foggy when the statistic exceeds its threshold.
The sky-area calibration module in the processor module judges whether a local area is sky from the brightness and gradient values of the pixels in that local pixel area: the area is considered sky when the brightness values of its pixels exceed a specified threshold and the gradient values lie within a specified range.
The defogging module in the processor module comprises an atmospheric light value estimation module, a propagation function estimation module and an image refinement module.
The atmospheric light value estimation module in the defogging module computes the atmospheric light estimate from the brightness values of the pixels in the dark channel of the input image.
The propagation function estimation module in the defogging module computes a propagation function estimate for each pixel from the input image data according to the dark channel theory.
The image refinement module in the defogging module refines the image from the previous step using a guided filtering algorithm whose filter size has an automatic adjustment mechanism.
The general video transmission format is a high-definition video transmission format conforming to HDMI protocol or DisplayPort protocol.
The beneficial effects are that: the invention realizes an adaptive defogging function for the input original video image and obtains a better display effect than the original image. The defogging function adapts its strength to different scenes and is particularly effective in depth-discontinuity regions. Compared with the original image, the defogged image shows clearly improved color saturation without distortion and retains good structural similarity to the original.
Drawings
Fig. 1 is a block diagram of an adaptive defogging camera composition using an image adaptive defogging method.
Fig. 2 is a schematic diagram of an algorithm execution framework of the image adaptive defogging method.
Fig. 3 is a framework of FPGA algorithm execution modules of the image adaptive defogging method.
Detailed Description
The design framework of the image adaptive defogging method comprises an image sensor module, an image adaptive defogging module and a universal video transmission interface. The modules are combined into an integrated whole; the processing does not depend on external computing power such as a cloud server or a PC, and raw data acquisition is combined with the adaptive defogging preprocessing operation.
Image sensor module: acquires digital image information in different scenes and feeds the acquired digital image data into the image adaptive defogging module in a standard data transmission format.
Image self-adaption defogging module: comprising a memory module for buffering video image data and a processor module for implementing an image adaptive defogging function.
The memory module is used for realizing data caching and interacting with the processor module.
The processor module executes the image adaptive defogging function and outputs the processed data to the universal video transmission interface module. It comprises a fog-judgment module, a sky-area calibration module and an image defogging module.
The fog-judgment module works by computing the saturation of the non-sky region of the input image; if the saturation is below a set threshold the image is considered foggy, otherwise it is not.
The sky-area calibration module first traverses all input pixels to find flat, high-brightness pixel regions; a large connected high-brightness region is taken to be a sky pixel region.
Further, the image defogging module of this application is divided into an atmospheric light value estimation module, a propagation function estimation module and an image refinement module.
The atmospheric light value estimation module first computes the dark channel value of every input pixel and takes the brighter points of the dark channel; if the brightness distribution in a point's 3×3 neighborhood is smooth and uniform, its brightness becomes a candidate for the atmospheric light. All candidate points are averaged to obtain the atmospheric light value.
The propagation function estimation module computes a coarse propagation function estimate for each pixel according to the dark channel theory, then uses the atmospheric light value and a guided filtering algorithm to compute a rough defogged image.
The image refinement module selects the filter size from the edge-pixel information in the neighborhood, realizing adaptive adjustment of the filter size, and applies it in the propagation function estimation to further refine the output image, yielding the final output. Images defogged this way avoid the halo effect in depth-discontinuity regions.
The objects, technical solutions and advantages of the invention are made clearer by the following examples; it should be understood that the specific examples described here only explain the invention and do not limit it.
Image sensor module: a CMOS image sensor outputting a standard 1080P@60Hz video signal.
Image adaptive defogging module: comprises a memory module and a processor module.
Memory module: four MT41J256M16HA-107 memory chips forming a 64-bit-wide memory.
Processor module: comprises the fog-judgment module, the sky-area calibration module and the image defogging module.
Fog-judgment module: built on an XC7K325TFFG-2 series FPGA chip; the IP core of the fog-judgment module is implemented in the Verilog programming language.
Sky-area calibration module: built on the same FPGA chip; the IP core of the sky-area calibration module is implemented in Verilog.
Image defogging module: built on the same FPGA chip; the IP core of the image defogging module is implemented in Verilog.
Universal video transmission interface: outputs the 1080P@60Hz signal using the standard HDMI protocol.
Fig. 1 shows a camera platform with the adaptive defogging function; its hardware integrates an image sensor, an FPGA module, a DDR module and a universal video transmission interface. The FPGA and DDR modules form the image adaptive defogging module of the method, and the adaptive defogging algorithm is the software deployed on this hardware platform. Data interaction happens mainly between the image sensor and the FPGA and between the FPGA and the DDR; the FPGA is both the processor platform on which the algorithm runs and the controller for data interaction. The DDR module caches data and exchanges it with the FPGA.
In the image adaptive defogging algorithm, the fog-judgment module implements the following criterion:

S̄ = [ Σ_{i=1..m} Σ_{j=1..n} (1 − sky(i,j)) · S(i,j) ] / [ Σ_{i=1..m} Σ_{j=1..n} (1 − sky(i,j)) ]

where the input image size is m × n, sky(i,j) marks sky-area pixels (sky = 1 for every pixel calibrated as sky) and S(i,j) is the saturation of pixel (i,j). In practical application, the algorithm performs the defogging operation on images whose mean saturation S̄ is below 0.2 and judges images with S̄ above 0.2 to be fog-free, leaving them unprocessed.
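This decision rule can be sketched as follows; the normalized RGB input and the HSV-style saturation 1 − min/max are assumptions, since the patent does not fix a saturation formula:

```python
import numpy as np

def mean_saturation(img, sky):
    """Mean saturation over non-sky pixels.

    img: float RGB array of shape (m, n, 3), values in [0, 1].
    sky: array of shape (m, n), 1 where the pixel was calibrated as sky.
    The saturation definition 1 - min/max is an assumed stand-in.
    """
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    sat = np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-6), 0.0)
    return float(sat[sky == 0].mean())

def needs_defogging(img, sky, threshold=0.2):
    # Defog only when the non-sky mean saturation falls below 0.2.
    return mean_saturation(img, sky) < threshold
```

A near-gray (hazy) image has saturation close to 0 and triggers defogging; a vividly colored scene does not.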
In the above formula, the detailed steps for sky-area determination are:
1. Compute the two-dimensional gradient of each of the R, G and B channels for every pixel; a pixel whose three channel gradients are all below a set threshold (0.03) is a suspected sky pixel.
2. Average each of the R, G and B channels over all suspected sky pixels to obtain the mean intensity of the suspected sky region.
3. A suspected sky pixel whose ratio of each RGB channel value to the corresponding mean from step 2 exceeds a set threshold (0.85) is confirmed as a sky-area pixel.
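The three steps above can be sketched as follows; the use of NumPy's central-difference gradient and the Euclidean gradient magnitude are assumptions, since the patent only names the thresholds 0.03 and 0.85:

```python
import numpy as np

def sky_mask(img, grad_thresh=0.03, ratio_thresh=0.85):
    """Sky-area calibration sketch; img is float RGB in [0, 1], shape (H, W, 3)."""
    # Step 1: per-channel 2-D gradient; all three magnitudes must be small.
    gy, gx = np.gradient(img, axis=(0, 1))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)           # shape (H, W, 3)
    suspected = (grad_mag < grad_thresh).all(axis=2)
    if not suspected.any():
        return suspected
    # Step 2: per-channel mean over the suspected region.
    mean_rgb = img[suspected].mean(axis=0)          # shape (3,)
    # Step 3: keep pixels whose channel/mean ratio exceeds the threshold.
    bright = (img / np.maximum(mean_rgb, 1e-6) > ratio_thresh).all(axis=2)
    return suspected & bright
```

On a frame with a flat bright top half and a textured darker bottom half, only the flat bright pixels survive both tests.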
The atmospheric light value in the adaptive defogging algorithm is determined as follows:
1. Take the 0.1% of pixels with the highest brightness in the dark channel of the image to be processed and use their mean as the preliminary atmospheric light estimate A0.
2. Let l be the gray value of a pixel, ε the smoothness threshold and η the luminance threshold. Traverse every pixel; if all pixels in the 3×3 neighborhood centered on it satisfy the smoothness condition and the brightness condition

max(l) − min(l) ≤ ε and l ≥ η (over the 3×3 neighborhood),

the point is considered a reference point. Specifically, ε is set to 0.03 and η to 0.15.
3. Take the mean gray value of all reference points from step 2 as the final atmospheric light estimate A.
In fog, the image acquired by the image sensor follows the Bouguer attenuation law with linear superposition:

I(x) = J(x) · e^(−β(λ)x) + A · (1 − e^(−β(λ)x))

where A is the light intensity of the atmospheric environment, J(x) the light the object reflects toward the observer, and I(x) the light the observer receives from the object. The propagation function is taken as t(x) = e^(−β(λ)x).
According to the dark channel prior theory, the propagation function of the image is estimated roughly. Because a realistically defogged image should retain some particle scattering, an influence factor ω (0 < ω < 1) is introduced and the propagation function is estimated as

t̃(x) = 1 − ω · min_{y∈Ω(x)} min_{c∈{R,G,B}} ( I^c(y) / A^c )

where Ω(x) is the local region centered on x and the dark channel filter size over Ω(x) is set to R1. The haze retention of the output image is controlled by adjusting ω.
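This estimate can be sketched as follows; ω = 0.95 is a conventional choice for the influence factor, not a value fixed by the patent:

```python
import numpy as np

def coarse_transmission(img, A, omega=0.95, r=1):
    """t(x) = 1 - omega * min over Omega(x) of min_c I_c(y) / A_c.

    img: float RGB array (H, W, 3); A: per-channel atmospheric light (3,).
    The window radius r corresponds to the dark-channel filter size R1.
    """
    norm = img / np.maximum(A, 1e-6)    # divide each channel by A_c
    mins = norm.min(axis=2)
    H, W = mins.shape
    padded = np.pad(mins, r, mode='edge')
    t = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            t[i, j] = 1.0 - omega * padded[i:i + 2 * r + 1, j:j + 2 * r + 1].min()
    return t
```

Where the image equals the atmospheric light (dense fog), t drops to 1 − ω, so a small residue of haze is deliberately kept.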
The guided filtering algorithm used to refine the propagation function assumes that the filtered output image has a local linear relationship with its reference (guide) image. With guide image I, output image q and constant linear coefficients a_k, b_k in the local window ω_k of the image:

q_i = a_k · I_i + b_k for every i ∈ ω_k.

Let p be the image to be smoothed. To prevent overfitting, a regularization factor ε is introduced; based on the linear model above, the local difference between the filtered output image and the original can be expressed as

E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k · I_i + b_k − p_i)² + ε · a_k² ].

Minimizing E(a_k, b_k) by setting the partial derivatives with respect to a_k and b_k to zero gives

a_k = cov_k(p, I) / (var_k(I) + ε), b_k = p̄_k − a_k · Ī_k,

where cov_k(p, I) is the covariance of the guide map I and the map p to be smoothed in the region ω_k and var_k(I) is the variance of the guide map I in that region; substituting back yields the guided filtering output.
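A compact sketch of these formulas; the naive box mean stands in for whatever mean filter the hardware actually uses:

```python
import numpy as np

def box(x, r):
    """Box mean over a (2r+1) x (2r+1) window with edge padding."""
    H, W = x.shape
    p = np.pad(x, r, mode='edge')
    out = np.empty_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """a_k = cov_k(p, I) / (var_k(I) + eps); b_k = mean(p) - a_k * mean(I);
    the output averages a and b over all windows covering each pixel."""
    mI, mp = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mI * mp
    var_I = box(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

On a constant input the variance and covariance vanish, so a = 0 and the filter passes the constant through unchanged.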
Let R2 be the guided filter size in the target window ω_k. To effectively remove the blocking artifacts of the coarse propagation function, the guided filter must be larger than the dark channel filter; in this example the guided filter size is set to twice the dark channel filter size, R2 = 2 × R1. In the adaptive filter-size adjustment of the method, the dark channel filter size is chosen as follows: first, edge pixels of the preprocessed image are detected; for a given pixel (x, y), if no edge pixel lies in the local region Ω_xy, the region is considered flat; otherwise the region is shrunk until it contains no edge pixels or reaches the minimum preset by the algorithm. The resulting region size is used as the dark channel filter size and substituted into the relation above to obtain the guided filter size.
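The shrink-until-flat policy can be sketched as follows; the concrete start radius and the shrink-by-one step are assumptions, since the patent only states that the region shrinks until it is edge-free or reaches a preset minimum:

```python
import numpy as np

def adaptive_dark_radius(edges, x, y, r_max=7, r_min=1):
    """Shrink the window around (x, y) until it contains no edge pixels
    or reaches the preset minimum; the guided-filter radius is then
    twice the dark-channel radius (R2 = 2 * R1).

    edges: bool array (H, W), True at edge pixels (from any edge detector).
    Returns (dark_channel_radius, guided_filter_radius).
    """
    r = r_max
    while r > r_min:
        window = edges[max(0, x - r):x + r + 1, max(0, y - r):y + r + 1]
        if not window.any():
            break       # flat region found at this radius
        r -= 1
    return r, 2 * r
```

Far from any edge the full radius is kept; near an edge the window shrinks just enough to exclude it, which is what suppresses halos at depth discontinuities.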
The overall algorithm flow of this example is shown in Fig. 2; the defogged image is finally output through the FPGA interface.
This example also describes the adaptive defogging algorithm for an FPGA platform using a modular design; the FPGA module layout is shown in Fig. 3. The input is a 1080P@60Hz video signal, and the FPGA modules are described below.
2× downsampling module: downsamples the input video stream and outputs 24-bit-wide video data.
RGB minimum channel selection module: takes the minimum of the R, G and B channels of each input pixel; its input is the output of the 2× downsampling module and its output is 8-bit single-channel video data.
Minimum value filter module: computes the dark channel value of each pixel; its input is the 8-bit single-channel data from the RGB minimum channel selection module and its output is 8-bit dark channel video data.
Atmospheric light value calculation module: computes the global atmospheric light value from the dark channel data output by the minimum value filter module.
RGB-to-grayscale conversion module: computes the gray value of each pixel from the video data output by the 2× downsampling module and outputs gray-value video data.
Coarse propagation function calculation module: computes the coarse propagation function; its inputs are the dark channel data from the minimum value filter module and the atmospheric light value from the atmospheric light value calculation module, and its output is a coarse propagation function value per pixel.
Adaptive guided filtering module: adaptively refines the coarse propagation function; its inputs are the gray-value video data from the RGB-to-grayscale conversion module and the coarse propagation function values, and its output is the refined propagation function value.
Image defogging module: performs the image defogging operation; its inputs are the refined propagation function values from the adaptive guided filtering module and the corresponding video pixel data, and its output is the defogged video data stream.
Codec chip configuration module: generates the configuration data stream for the video codec chip.
The processed defogging image data flow is output to external equipment through an FPGA interface.
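The recovery step inside the image defogging module is not written out in the text; under the fog model above it is conventionally J(x) = (I(x) − A) / t(x) + A. A sketch, with an assumed lower bound t0 on the transmission (a standard safeguard, not specified by the patent):

```python
import numpy as np

def recover(img, t, A, t0=0.1):
    """Invert I = J * t + A * (1 - t) per channel.

    img: float RGB array (H, W, 3); t: transmission map (H, W);
    A: per-channel atmospheric light (3,). The bound t0 avoids
    amplifying noise where the transmission is tiny.
    """
    t = np.maximum(t, t0)[..., None]          # broadcast over channels
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

With t = 1 (no fog) the image is returned unchanged; smaller t pulls pixel values away from the atmospheric light, restoring contrast.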
The invention has a high degree of modular independence: the implementation of the image adaptive defogging method has strong platform compatibility and does not depend on specific external equipment. It can be applied to image acquisition devices such as cameras and CCDs and can complete the adaptive image defogging function independently.
The foregoing describes only one embodiment of the invention in some detail, but this should not be construed as limiting the scope of the patent. Those skilled in the art can make several changes without departing from the inventive concept, and such changes fall within the protection scope of this patent. The protection scope of the invention is defined by the appended claims.

Claims (10)

1. The image self-adaptive defogging method is characterized in that the frame of the method comprises an image sensor module, an image self-adaptive defogging module and a universal video transmission interface which are sequentially connected;
the image sensor module outputs the acquired video image data to the image self-adaptive defogging module for processing, and codes and outputs the video image data according to a universal video transmission format;
The image adaptive defogging module caches the input video image data through the memory module and outputs the data to the processor module; the processor module calibrates the sky area in the input image through the sky-area calibration module and uses the fog-judgment module to compute the image saturation of the non-sky area of the input image, judging from the saturation distribution whether the image is foggy; in the image adaptive defogging module, the fog-judgment criterion is the mean saturation of the non-sky region:
with input image size m × n and sky(i, j) = 1 for every pixel calibrated as sky, the mean saturation is taken over all pixels with sky(i, j) = 0; a defogging operation is performed on images whose saturation is below 0.2, while images whose saturation is above 0.2 are judged fog-free and left unprocessed; the direct output or the output through the defogging module is then selected according to this judgment; the defogging module performs the defogging operation on the non-sky area of an input image to be defogged through the atmospheric light value estimation module, the propagation function estimation module and the image refinement module, and outputs the defogged image.
2. The method according to claim 1, wherein the image adaptive defogging module comprises a memory module connected with the image adaptive defogging module for buffering video image data and a processor module for realizing the image adaptive defogging function, and the processor module comprises a scene defogging module for judging whether a scene is fogged or not, a sky area module in the calibrated scene, and an image defogging module.
3. The method of claim 2, wherein the image adaptive defogging module operates in dependence on the processor module and invokes the memory module.
4. The method according to claim 2, wherein the processor module determines whether the scene is fogged by calibrating the image according to the calculated saturation distribution data of the input image, and the image is considered fogged when the saturation distribution data exceeds a threshold.
5. The method for adaptive defogging of an image according to claim 2, wherein the sky-area calibration module in the processor module judges whether a local area is sky from the brightness and gradient values of the pixels in that local pixel area: the area is considered sky when the brightness values of its pixels exceed a specified threshold and the gradient values lie within a specified range.
6. The method according to claim 2, wherein the defogging module in the processor module comprises an atmospheric light value estimation module, a propagation function estimation module and an image refinement module.
7. The method of claim 6, wherein the atmospheric light value estimation module calculates the atmospheric light estimation value according to the brightness value of the pixel point in the dark channel of the input image.
8. The method according to claim 6, wherein the propagation function estimation module in the defogging module calculates the propagation function estimation value of each pixel point from the input image data according to the dark channel theory.
9. The method of claim 6, wherein the image refinement module of the defogging module refines the previous image by using a guided filtering algorithm with an automatic filter size adjustment mechanism.
10. The method according to claim 1, wherein the universal video transmission format is a high definition video transmission format conforming to HDMI protocol or DisplayPort protocol.
CN202010856742.XA 2020-08-24 2020-08-24 Image self-adaptive defogging method Active CN112017128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010856742.XA CN112017128B (en) 2020-08-24 2020-08-24 Image self-adaptive defogging method


Publications (2)

Publication Number Publication Date
CN112017128A 2020-12-01
CN112017128B 2024-05-03

Family

ID=73504222


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155173A (en) * 2022-02-10 2022-03-08 山东信通电子股份有限公司 Image defogging method and device and nonvolatile storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182671A (en) * 2018-01-25 2018-06-19 南京信息职业技术学院 A kind of single image to the fog method based on sky areas identification
CN108596849A (en) * 2018-04-23 2018-09-28 南京邮电大学 A kind of single image to the fog method based on sky areas segmentation




Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant