CN111915496A - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN111915496A
CN111915496A (application CN201910380316.0A)
Authority
CN
China
Prior art keywords
image
frame image
value
frame
defogged
Prior art date
Legal status
Granted
Application number
CN201910380316.0A
Other languages
Chinese (zh)
Other versions
CN111915496B (en)
Inventor
邹瑞波
黄攀
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910380316.0A
Publication of CN111915496A
Application granted
Publication of CN111915496B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging


Abstract

The present disclosure provides an image processing method, including: extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images; respectively filtering the extracted first frame image and the second frame image to obtain an image to be enhanced; performing image enhancement processing based on the obtained image to be enhanced to form a fused image; and replacing the first frame image with the fused image in the target video. The present disclosure also provides an image processing apparatus and a storage medium.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to image adjustment technologies, and in particular, to an image processing method and apparatus, and a storage medium.
Background
When an electronic device shoots video at night under dim-light conditions, the dim lighting leaves the image frames of the video with heavy noise and overly low contrast. The noise and the low contrast not only destroy the real information of the image but also seriously degrade the visual effect of the video. Video image frames containing noise therefore require noise reduction processing, restoring clear video image frames so that a clear video can be formed.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide an image processing method, an image processing apparatus, and a storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
the embodiment of the disclosure provides an image processing method, which includes:
extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images;
respectively filtering the extracted first frame image and the second frame image to obtain an image to be enhanced;
performing image enhancement processing based on the obtained image to be enhanced to form a fused image;
and replacing the first frame image with the fused image in the target video.
In the foregoing solution, the performing filtering processing on the extracted first frame image and the extracted second frame image respectively includes:
respectively carrying out Gaussian blur processing on the first frame image and the second frame image;
determining an absolute value of a difference of the Gaussian blur of the first frame image and the second frame image;
constructing a Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
determining a filter parameter of the Kalman filter matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image.
In the foregoing solution, the determining a filtering parameter of the kalman filtering matrix based on an absolute value of a difference between gaussian blurs corresponding to the first frame image and the second frame image includes:
determining an update value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a first adjustment value;
determining a weight adjustment value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a second adjustment value;
determining the superposition weight of the first frame image and the second frame image based on the updated value of the Kalman filtering matrix and the weight adjustment value of the Kalman filtering matrix.
In the above scheme, the method further comprises:
and determining a corresponding image to be enhanced based on the addition operation result of a first parameter and the first frame image, wherein the first parameter represents the product of the difference value corresponding to the first frame image and the second frame image and the superposition weight of the first frame image and the second frame image.
In the foregoing solution, the image enhancement processing based on the obtained image to be enhanced to form a fused image includes:
carrying out inverse logarithm processing on the image to be enhanced to form an image to be defogged;
carrying out dark channel defogging treatment on the formed image to be defogged to form an enhanced image;
and carrying out inverse logarithm processing on the formed enhanced image to form a fused image.
In the foregoing solution, the performing the dark channel defogging process on the formed image to be defogged to form an enhanced image includes:
determining a dark channel value of the image to be defogged;
determining the gray value of the image to be defogged;
determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjusting value and the gray value of the image to be defogged;
and processing the image to be defogged according to the atmospheric light value and the fourth adjusting value of the image to be defogged to form an enhanced image.
In the above scheme, the method further comprises:
determining the minimum value of three channels of each pixel point of the image to be defogged;
and assigning the minimum value of the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
An embodiment of the present disclosure further provides an image processing apparatus, including:
the image processing module is used for extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images;
the image filtering module is used for respectively carrying out filtering processing on the extracted first frame image and the second frame image to obtain an image to be enhanced;
the image enhancement module is used for enhancing and processing the image based on the obtained image to be enhanced so as to form a fusion image;
the image processing module is used for replacing the first frame image with the fused image in the target video.
In the foregoing solution,
the image filtering module is used for respectively carrying out Gaussian blur processing on the first frame image and the second frame image;
the image filtering module is used for determining the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
the image filtering module is used for constructing a Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
the image filtering module is configured to determine a filtering parameter of the kalman filtering matrix based on an absolute value of a difference between gaussian blurs of the first frame image and the second frame image.
In the foregoing solution,
the image filtering module is used for determining an update value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a first adjusting value;
the image filtering module is used for determining a weight adjustment value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a second adjustment value;
the image filtering module is configured to determine a superposition weight of the first frame image and the second frame image based on the updated value of the kalman filter matrix and the weight adjustment value of the kalman filter matrix.
In the foregoing solution,
the image filtering module is configured to determine a corresponding image to be enhanced based on a sum operation result of a first parameter and the first frame image, where the first parameter represents a product of a difference value corresponding to the first frame image and the second frame image and a superposition weight of the first frame image and the second frame image.
In the foregoing solution,
the image enhancement module is used for carrying out inverse logarithm processing on the image to be enhanced to form an image to be defogged;
the image enhancement module is used for carrying out dark channel defogging treatment on the formed image to be defogged to form an enhanced image;
and the image enhancement module is used for carrying out inverse logarithm processing on the formed enhanced image to form a fused image.
In the foregoing solution,
the image enhancement module is used for determining a dark channel value of the image to be defogged;
the image enhancement module is used for determining the gray value of the image to be defogged;
the image enhancement module is used for determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjusting value and the gray value of the image to be defogged;
and the image enhancement module is used for processing the image to be defogged according to the atmospheric light value and the fourth adjusting value of the image to be defogged so as to form an enhanced image.
In the foregoing solution,
the image enhancement module is used for determining the minimum value of each pixel point of the image to be defogged in three channels;
and the image enhancement module is used for assigning the minimum value of each pixel point of the image to be defogged in the three channels to the corresponding pixel point in the dark channel image.
An embodiment of the present disclosure further provides an image processing apparatus, including:
a memory for storing executable instructions;
and the processor is used for realizing the image processing method provided by the embodiment of the disclosure when executing the executable instruction.
The embodiment of the disclosure also provides a storage medium, which stores executable instructions, and when the executable instructions are executed, the image processing method provided by the embodiment of the disclosure is realized.
The embodiments of the present disclosure provide an image processing method, an image processing apparatus, and a storage medium, and the embodiments of the present disclosure have the following technical effects:
A first frame image and a second frame image are extracted from a target video, the extracted first frame image and second frame image are respectively filtered, image enhancement processing is performed based on the obtained image to be enhanced to form a fused image, and finally the first frame image is replaced with the fused image in the target video. Each frame of image shot by the electronic device under dim-light conditions can thus be processed into a clear image frame, and the formed image frames can replace the corresponding frames in the target video to form a clear target video, avoiding the heavy noise and low contrast that dim-light shooting otherwise causes.
Drawings
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an alternative hardware structure of an image processing apparatus 200 according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of an alternative structure of an image processing apparatus according to an embodiment of the disclosure;
fig. 4 is an optional flowchart of an image processing method according to an embodiment of the disclosure;
fig. 5 is an optional flowchart of an image processing method according to an embodiment of the disclosure;
fig. 6 is a schematic front end view of an image processing method according to an embodiment of the present disclosure;
fig. 7a and 7b are schematic diagrams illustrating effects of the image processing method according to the embodiment of the disclosure.
Detailed Description
To make the purpose, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, so that a method or server including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or server. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other related elements in a method or server comprising that element (for example, steps in a method or elements in a server, where an element may be part of a circuit, part of a processor, part of a program or software, and so on).
For example, the image processing method provided by the embodiment of the present disclosure includes a series of steps, but the image processing method provided by the embodiment of the present disclosure is not limited to the described steps, and similarly, the terminal provided by the embodiment of the present disclosure includes a series of units, but the terminal provided by the embodiment of the present disclosure is not limited to include the explicitly described units, and may further include units that are required to acquire related information or perform processing based on the information. It should be noted that in the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before further detailed description of the embodiments of the present disclosure, terms and expressions referred to in the embodiments of the present disclosure are explained, and the terms and expressions referred to in the embodiments of the present disclosure are applied to the following explanations.
1) Target video: a video shot by an electronic device in a dim-light environment. Because of the dim-light environment, each image frame of the target video contains considerable noise, which affects the viewing experience of the video.
2) Filtering processing, which here includes Kalman filtering: an algorithm that optimally estimates the system state from a linear system state equation and the system's input and output observation data. Since the observation data include the influence of noise and interference in the system, the optimal estimation can also be regarded as a filtering process. Here, the Kalman filtering is executed by a Graphics Processing Unit (GPU) through a corresponding Kalman filtering algorithm.
3) Dark channel defogging: the dark channel, the atmospheric light parameters, and the refined dark-channel transmission are calculated by a dark channel defogging algorithm in the GPU of the electronic device, and the input image to be defogged is then defogged to form an enhanced image.
4) Client: a carrier in the terminal for implementing specific functions; for example, a mobile client (APP) is the carrier of specific functions in a mobile terminal, such as live streaming (video push streaming) or online video playback.
The following describes exemplary applications of the apparatus implementing the embodiments of the present disclosure. The apparatus provided by the embodiments of the present disclosure may be implemented as various types of electronic devices equipped with a GPU, such as a tablet computer, a notebook computer, and the like.
A usage scenario of the image processing method of the embodiments of the present disclosure is now described with reference to the drawings. Referring to fig. 1, fig. 1 is an application scenario diagram of the image processing method provided by an embodiment of the present disclosure. To support an exemplary application, a server implementing the embodiment of the present disclosure may be a video server; taking a video server 30 as an example, a user terminal 10 (user terminals 10-1 and 10-2 are shown as examples) is connected to the video server 30 through a network 20, which may be a wide area network, a local area network, or a combination of the two, with data transmission over wireless links. The graphics processor of the terminal 10 can process a target video shot by the terminal 10, and the processed target video is sent to the video server 30 through the network 20.
The terminal 10 is configured to perform filtering processing on the extracted first frame image and second frame image respectively to obtain an image to be enhanced; perform image enhancement processing based on the obtained image to be enhanced to form a fused image; and replace the first frame image with the fused image in the target video. The user terminal 10 displays the processed image frames to the user through a graphical interface 110 (graphical interfaces 110-1 and 110-2 are shown as examples), enabling the user to judge whether the processed video frames meet the noise requirement. The video server 30 is configured to cooperate with the user terminal 10 during image processing by providing background data support, implementing different functions of the terminal's image processing application, such as pushing the processed target video to the video server 30.
Based on the usage environment of the image processing method shown in fig. 1, an image processing apparatus implementing the embodiment of the present disclosure is described first. The image processing apparatus may be provided as hardware, software, or a combination of the two. Various exemplary implementations of the image processing apparatus provided by the embodiments of the present disclosure are described below.
The following describes an implementation of a combination of hardware and software of the image processing apparatus. Specifically, the hardware structure of an image processing apparatus implementing an embodiment of the present disclosure will now be described with reference to the drawings, and fig. 2 is a schematic diagram of an alternative hardware structure of an image processing apparatus 200 provided in an embodiment of the present disclosure.
The image processing apparatus 200 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD, Portable Android Device), a Portable Multimedia Player (PMP), a car terminal (e.g., a car navigation terminal), and the like, as well as various types of electronic devices with image processing functions such as a digital Television (TV) and a desktop computer. The image processing apparatus 200 shown in fig. 2 is only an example and should not impose any limitation on the functions and the range of use of the embodiments of the present disclosure.
As shown in fig. 2, the image processing apparatus 200 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 201; the graphics processor is capable of executing the Kalman filtering algorithm and the dark channel defogging algorithm. The processing apparatus 201 may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 202 or loaded from a storage apparatus 208 into a Random Access Memory (RAM) 203. The RAM 203 also stores various programs and data necessary for the operation of the image processing apparatus 200. The processing apparatus 201, the ROM 202, and the RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
Generally, the following devices may be connected to the I/O interface 205: input devices 206 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 207 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 208 including, for example, magnetic tape, hard disk, etc.; and a communication device 209. The communication means 209 may allow the image processing apparatus 200 to perform wireless or wired communication with other devices to exchange data. While FIG. 2 illustrates an image processing device 200 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 209, or installed from the storage means 208, or installed from the ROM 202. The computer program, when executed by the processing device 201, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an Erasable Programmable Read-Only Memory (EPROM), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the image processing apparatus; or may exist separately without being incorporated into the image processing apparatus.
The computer readable medium carries one or more programs which, when executed by the image processing apparatus, cause the image processing apparatus to: extract a first frame image and a second frame image from a target video; perform filtering processing on the extracted first frame image and second frame image respectively to obtain an image to be enhanced; perform image enhancement processing based on the obtained image to be enhanced to form a fused image; and replace the first frame image with the fused image in the target video.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or image processing apparatus. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of Network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the name of a module in some cases does not constitute a limitation on the module itself.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.
As an example of a hardware implementation or a software implementation of the image processing apparatus, the image processing apparatus may be provided as a series of modules having a coupling relationship at a signal/information/data level, which will be described below with reference to fig. 3. Referring to fig. 3, fig. 3 is a schematic diagram of an alternative composition structure of the image processing apparatus provided in the embodiment of the present disclosure, which shows a series of modules included in the image processing apparatus, but the module structure of the image processing apparatus is not limited to that shown in fig. 3, for example, the modules in the image processing apparatus may be further split or combined according to different functions implemented.
The following describes a purely hardware implementation of the image processing apparatus. Fig. 3 is an optional functional structure diagram of the image processing apparatus provided in an embodiment of the present disclosure. As shown in fig. 3, the image processing apparatus 300 includes: an image processing module 301, an image filtering module 302, and an image enhancement module 303. The functions of the respective modules are explained in detail below.
The image processing module 301 is configured to extract a first frame image and a second frame image in a target video, where the first frame image and the second frame image are consecutive images. The image filtering module 302 is configured to perform filtering processing on the extracted first frame image and the extracted second frame image, respectively, to obtain an image to be enhanced. And an image enhancement module 303, configured to perform image enhancement processing based on the obtained image to be enhanced to form a fused image. The image processing module 301 is configured to replace the first frame image with the fused image in the target video.
In some embodiments of the present disclosure, the image filtering module 302 is configured to perform gaussian blurring processing on the first frame image and the second frame image respectively. The image filtering module 302 is configured to determine an absolute value of a difference between gaussian blurs of the first frame image and the second frame image. The image filtering module 302 is configured to construct a kalman filtering matrix based on an absolute value of a difference between gaussian blurs of the first frame image and the second frame image. The image filtering module 302 is configured to determine a filtering parameter of the kalman filtering matrix based on an absolute value of a difference between gaussian blurs of the first frame image and the second frame image.
In some embodiments of the present disclosure, the image filtering module 302 is configured to determine an updated value of the kalman filter matrix based on an absolute value of a difference between gaussian blurs of the first frame image and the second frame image and a first adjustment value. The image filtering module 302 is configured to determine a weight adjustment value of the kalman filter matrix based on an absolute value of a difference between gaussian blurs of the first frame image and the second frame image and a second adjustment value. The image filtering module 302 is configured to determine a superposition weight of the first frame image and the second frame image based on the updated value of the kalman filter matrix and the weight adjustment value of the kalman filter matrix.
In some embodiments of the present disclosure, the image filtering module 302 is configured to determine a corresponding image to be enhanced based on a sum operation result of the first parameter and the first frame image, where the first parameter represents a product of a difference value corresponding to the first frame image and the second frame image and a superposition weight of the first frame image and the second frame image.
In some embodiments of the present disclosure, the image enhancement module 303 is configured to perform inverse logarithm processing on the image to be enhanced to form an image to be defogged. The image enhancement module 303 is configured to perform dark channel defogging on the formed image to be defogged to form an enhanced image. The image enhancement module 303 is configured to perform inverse logarithm processing on the formed enhanced image to form a fused image.
In some embodiments of the present disclosure, the image enhancement module 303 is configured to determine a dark channel value of the image to be defogged; the image enhancement module 303 is configured to determine the gray value of the image to be defogged; the image enhancement module 303 is configured to determine an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value, and the gray value of the image to be defogged; the image enhancement module 303 is configured to process the image to be defogged according to the atmospheric light value and the fourth adjustment value of the image to be defogged, so as to form an enhanced image. The image enhancement module 303 is configured to determine the minimum value of the three channels of each pixel point of the image to be defogged, and to assign that minimum value to the corresponding pixel point in the dark channel image.
Referring to fig. 4, fig. 4 is an optional flowchart of the image processing method provided by the embodiment of the present disclosure, and it is understood that the steps shown in fig. 4 may be executed by a terminal running the image processing apparatus 300, for example, the image processing apparatus 300 may be a functional module coupled to an internal/external interface of the terminal; the steps shown in fig. 4 may also be performed by a server running the image processing apparatus 300, for example, the image processing apparatus 300 may be a functional module coupled to an internal/external interface of the server. The following is a description of the steps shown in fig. 4.
Step 401: extracting a first frame image and a second frame image in a target video;
wherein the first frame image and the second frame image are continuous images;
step 402: respectively filtering the extracted first frame image and the second frame image to obtain an image to be enhanced;
step 403: based on the obtained image to be enhanced, carrying out image enhancement processing to form a fusion image;
step 404: and replacing the first frame image with the fused image in the target video.
In some embodiments of the present disclosure, the respectively filtering the extracted first frame image and the second frame image includes:
respectively carrying out Gaussian blur processing on the first frame image and the second frame image; determining the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image; constructing a Kalman filtering matrix based on that absolute value; and determining the filtering parameters of the Kalman filtering matrix based on that absolute value. When the target video is processed, it is first initialized, and two consecutive frames, a first frame image T-1 and a second frame image T, are extracted from the target video through the processing queue of the graphics processor (GPU). Because the noise of each frame in a target video shot under dim-light conditions usually follows a Gaussian distribution, the first frame image T-1 and the second frame image T are first subjected to Gaussian blur processing when removing the noise of the image frames. Denoting the Gaussian-blurred first frame image by GT-1 and the Gaussian-blurred second frame image by GT, the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image is determined by the formula:

Delta = |GT-1 - GT|
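As an illustrative sketch of this step (a CPU re-implementation in Python with OpenCV and NumPy; the patent executes the equivalent on the GPU's processing queue, and the kernel size and sigma below are assumed values, since the disclosure does not fix them):

```python
import cv2
import numpy as np

def gaussian_delta(frame_prev: np.ndarray, frame_curr: np.ndarray,
                   ksize: int = 5, sigma: float = 1.5) -> np.ndarray:
    """Per-pixel absolute difference of the Gaussian-blurred frames T-1 and T.

    frame_prev, frame_curr: float32 RGB images normalized to [0, 1].
    ksize and sigma are illustrative choices, not values from the disclosure.
    """
    g_prev = cv2.GaussianBlur(frame_prev, (ksize, ksize), sigma)  # GT-1
    g_curr = cv2.GaussianBlur(frame_curr, (ksize, ksize), sigma)  # GT
    return np.abs(g_prev - g_curr)                                # Delta
```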
Further, Kalman filtering is a recursive estimation: as long as the estimate of the state at the previous time and the observation of the current state are known, the estimate of the current state can be calculated. To apply Kalman filtering to the denoising of video image frames, a Kalman filtering matrix is first established and the filtering parameters in the Kalman filtering matrix are determined.
In some embodiments of the present disclosure, the determining, based on an absolute value of a difference between corresponding gaussian blurs of the first frame image and the second frame image, a filtering parameter of the kalman filtering matrix includes:
determining an update value of the Kalman filtering matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image and a first adjustment value; determining a weight adjustment value of the Kalman filtering matrix based on that absolute value and a second adjustment value; and determining the superposition weight of the first frame image and the second frame image based on the update value and the weight adjustment value of the Kalman filtering matrix. Denote the update value of the Kalman filtering matrix as Update and the first adjustment value as downSpeed; the update value can then be determined by the formula:

Update = ((1.0 / (1.0 + exp(-(downSpeed × Delta)))) - 0.5) × 2.0
Because the number of noise points differs when the electronic device shoots in a dim-light environment, the weight adjustment value of the Kalman filtering matrix needs to be calculated while the matrix processes the image frames, so that different target images can be weighted differently. Denote the weight adjustment value of the Kalman filtering matrix as fpredicted and the second adjustment value as global_q; global_q adjusts the influence of the Delta component on the final prediction weight. The weight adjustment value is then determined by the formula:

fpredicted = 1.0 + global_q × Delta × 255.0

where the constant 255.0 is the maximum value of an RGB channel.
In the process of processing the image frames through the Kalman filtering matrix, the motion estimate of a static area of the image frame is small and its Gaussian weight is larger, while the motion estimate of a moving area is large and its Gaussian weight is small; the superposition weight of the first frame image and the second frame image therefore needs to be calculated. Denoting the superposition weight of the first frame image and the second frame image as Kcurr, it is determined by the formula:

Kcurr = fpredicted / (fpredicted + Update × 1.5)
In some embodiments of the present disclosure, the method further comprises:
and determining the corresponding image to be enhanced based on the result of adding a first parameter to the first frame image, where the first parameter represents the product of the difference between the first frame image and the second frame image and the superposition weight of the two frames. Denoting the image to be enhanced as Result_dense, it is given by the formula:

Result_dense = first frame image + Kcurr × (second frame image - first frame image)

to obtain the image to be enhanced formed from the first image frame and the second image frame. The filter parameters in the Kalman filtering matrix change dynamically, and the computation is executed by the graphics processor (GPU) of the electronic device.
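Putting the three filter parameters together, the per-pixel blend can be sketched as follows (a hedged NumPy reading of the reconstructed formulas above; it builds on gaussian_delta from the earlier sketch, and the variable names are illustrative):

```python
import numpy as np

def kalman_blend(frame_prev: np.ndarray, frame_curr: np.ndarray,
                 delta: np.ndarray, down_speed: float,
                 global_q: float) -> np.ndarray:
    """One temporal denoising step, per the formulas in this section.

    frame_prev = first frame image (T-1), frame_curr = second frame image (T),
    delta = |GT-1 - GT| from gaussian_delta(); all float32 in [0, 1].
    down_speed and global_q are the first and second adjustment values.
    """
    # Update value: a sigmoid of downSpeed * Delta, rescaled to [0, 1)
    update = (1.0 / (1.0 + np.exp(-(down_speed * delta))) - 0.5) * 2.0
    # Weight adjustment value; 255.0 scales the normalized Delta to 8-bit range
    fpredicted = 1.0 + global_q * delta * 255.0
    # Superposition weight of the two frames
    k_curr = fpredicted / (fpredicted + update * 1.5)
    # Result_dense = first frame + Kcurr * (second frame - first frame)
    return frame_prev + k_curr * (frame_curr - frame_prev)
```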
Based on the example of the above embodiment, when the length of the computation queue of the GPU is 10, the GPU can process 10 consecutive frames of the target video in parallel in its computation queue.
In some embodiments of the present disclosure, the performing image enhancement processing based on the obtained image to be enhanced to form a fused image includes:
carrying out inverse logarithm processing on the image to be enhanced to form an image to be defogged; carrying out dark channel defogging processing on the formed image to be defogged to form an enhanced image; and carrying out inverse logarithm processing on the formed enhanced image to form a fused image. Specifically, the minimum gray value over the three channels of each pixel in the image frame to be enhanced is first taken to obtain a gray image; then, in the gray image, a rectangular window of a certain size is taken with each pixel as its center, and the minimum gray value within the window replaces the gray value of the central pixel, yielding the dark channel image of the input image. The dark channel image is a gray image; extensive statistics and observation show that its gray values are very low, so the gray values of all pixels in the whole dark channel image approximate 0. The image to be defogged is subjected to dark channel defogging processing to form an enhanced image, and the enhanced image is subjected to inverse logarithm processing to obtain the fused image of the first image frame and the second image frame.
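The dark channel construction described above can be sketched as follows (illustrative only: SciPy's minimum_filter stands in for the per-window minimum the disclosure describes, and the window size is an assumption):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, window: int = 15) -> np.ndarray:
    """Dark channel image of an RGB image (float32, values in [0, 1]).

    Step 1: per-pixel minimum over the three channels, giving a gray image.
    Step 2: replace each pixel with the minimum inside a window x window
    rectangle centred on it. window=15 is a common choice, not a value
    fixed by the disclosure.
    """
    per_pixel_min = img.min(axis=2)                    # min over R, G, B
    return minimum_filter(per_pixel_min, size=window)  # windowed minimum
```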
In some embodiments of the present disclosure, the performing the dark channel defogging process on the formed image to be defogged to form an enhanced image includes:
determining a dark channel value of the image to be defogged; determining the gray value of the image to be defogged; determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value, and the gray value of the image to be defogged; and processing the image to be defogged according to the atmospheric light value and the fourth adjustment value to form an enhanced image. Denote the dark channel value as Dark_channel, the gray statistics of the image to be defogged as Mean_H and Mean_V, and the atmospheric light value of the image to be defogged as AirLight; the third adjustment value is P, the fourth adjustment value is A, the image to be enhanced is Input, and the result of taking the inverse is IR. For any input image, the atmospheric light value of each channel is obtained by taking, for the pixels whose dark channel gray values fall within the brightest 0.1%, the average gray value of each channel at the corresponding pixel positions of the original input image; the atmospheric light value AirLight is therefore a three-element vector, each element corresponding to one color channel. Thus, in some embodiments of the present disclosure, the method further comprises:
determining the minimum value of three channels of each pixel point of the image to be defogged;
and assigning the minimum value of the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
Wherein the dark channel value of the image to be defogged may be determined by the formula: Dark_channel = min(Input_R, Input_G, Input_B).
In determining the gray value of the image to be defogged, the average of each row is first taken and stored in the first column of an image, denoted Mean_H; the first column of Mean_H is then averaged to obtain an approximation of the overall image mean, Mean_V, and the remaining columns are not processed.
In determining the atmospheric light value AirLight, it may be determined by the formula:

AirLight = min(min(P × Mean_V, 0.9) × mean-filtered Input, Input)
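A sketch of the row-column averaging and the AirLight formula (a hedged reading of the reconstructed formulas; the "mean-filtered Input" is approximated here with an OpenCV box filter of assumed size):

```python
import cv2
import numpy as np

def air_light(img: np.ndarray, p: float, blur_size: int = 31) -> np.ndarray:
    """Atmospheric light estimate per the Mean_H / Mean_V scheme above.

    img: RGB float32 image in [0, 1]; p is the third adjustment value.
    Returns a per-pixel, per-channel AirLight map.
    """
    mean_h = img.mean(axis=1, keepdims=True)          # row averages ("first column")
    mean_v = float(mean_h.mean())                     # approximate overall mean
    smoothed = cv2.blur(img, (blur_size, blur_size))  # mean-filtered Input
    cap = min(p * mean_v, 0.9)                        # min(P x Mean_V, 0.9)
    return np.minimum(cap * smoothed, img)            # AirLight
```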
Further, for the processing queue of the graphics processor (GPU) of the electronic device, the image to be defogged needs to be normalized, that is, both sides of the imaging equation are divided by the atmospheric light value of each channel. Assuming that the transmission is constant within a rectangular window of a certain size in the image, the minimization operator is applied to both sides of the above formula; since the transmission is constant within the rectangular area, it can be taken outside the operator, which suits the GPU's processing.
Because the dim-light conditions under which the target video is shot are not exactly the same, the noise positions and noise amounts of the image frames of the target video differ; the dark channel defogging of the image to be enhanced can therefore be adjusted with the third adjustment value P and the fourth adjustment value A, so that the GPU does not defog too thoroughly and render the restored scene unnatural.
In some embodiments of the present disclosure, an enhanced image may also be formed by the formula:

Result_tmp = (Input - AirLight) / (1 - AirLight / A)

and the formed enhanced image is subjected to inverse logarithm processing to form a fused image.
Fig. 5 is an optional schematic flow chart of the image processing method according to the embodiment of the present disclosure, where the image processing method is implemented by the GPU of an electronic device, and the target video is shot in a dim-light environment by a client controlling the camera module of the electronic device. As shown in fig. 5, an optional flow of the image processing method provided in the embodiment of the present disclosure includes the following steps:
step 501: extracting two continuous frames of images T and T-1 in the target video;
step 502: selecting a first adjusting value and a second adjusting value according to the dim light environment of the target video;
step 503: constructing the Kalman filtering matrix through a processing queue of the GPU (Graphics Processing Unit);
wherein corresponding filtering parameters need to be determined in the process of constructing the Kalman filtering matrix, and the filtering parameters include: the update value of the Kalman filtering matrix, the weight adjustment value of the Kalman filtering matrix, and the superposition weight of the image frames.
Step 504: filtering the two continuous frames of images T and T-1 through the Kalman filtering matrix to obtain an image to be enhanced;
the obtained image to be enhanced serves as the input image in the image enhancement processing process;
step 505: carrying out inverse logarithm processing on the image to be enhanced to form an image to be defogged;
step 506: selecting a third adjusting value and a fourth adjusting value according to the dim light environment of the target video;
step 507: and determining a dark channel value, a gray value and an atmospheric light value of the image to be defogged.
Determining the minimum value of the three channels of each pixel point of the image to be defogged;
and assigning that minimum value to the corresponding pixel point in the dark channel image, so as to determine the atmospheric light value of the image to be defogged.
Step 508: and processing the image to be defogged according to the atmospheric light value and the fourth adjusting value of the image to be defogged to form an enhanced image, and processing the enhanced image to form a fused image.
Step 509: and judging whether the fused image meets the noise requirement, if so, executing the step 510, otherwise, returning to execute the step 502.
Step 510: and in the target video, replacing the first frame image with the fused image to form a new target video.
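The flow of steps 501 to 510 could be orchestrated as below (a CPU sketch tying together the gaussian_delta, kalman_blend, and enhance helpers from the earlier sketches; the step-509 noise check and the re-selection of adjustment values are omitted, since the disclosure does not define the noise criterion):

```python
def process_video(frames, down_speed, global_q, p, a):
    """Replace each frame with its denoised, defogged fusion (steps 501-510).

    frames: list of float32 RGB images in [0, 1]. Using the already-processed
    previous frame as T-1 is an assumption about how the recursion chains.
    """
    out = list(frames)
    for t in range(1, len(frames)):
        delta = gaussian_delta(out[t - 1], frames[t])         # steps 501-503
        filtered = kalman_blend(out[t - 1], frames[t],
                                delta, down_speed, global_q)  # step 504
        out[t] = enhance(filtered, p, a)                      # steps 505-508, 510
    return out
```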
Because the dim-light conditions under which target videos are shot are not exactly the same, the noise positions and noise amounts of the image frames differ; the filter coefficients of the Kalman filtering matrix can therefore be adjusted through the first adjustment value and the second adjustment value to avoid filter imbalance, and the dark channel defogging of the image to be enhanced can be adjusted through the third adjustment value and the fourth adjustment value, so that the GPU does not defog too thoroughly and render the restored scene unnatural. Fig. 6 is a schematic front-end diagram of the image processing method provided in the embodiment of the present disclosure. Because frequently adjusting the first and second adjustment values and/or the third and fourth adjustment values is bad for the user experience, the adjustment values for common low-light shooting environments may be packaged as fixed parameter modules for the GPU to invoke; the user can switch between icons corresponding to different fixed parameter modules in the display interface of the front-end application. Fig. 6 exemplarily shows types 1 to 5 implementing the adjustment of the first and second adjustment values and/or the third and fourth adjustment values.
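The fixed parameter modules of fig. 6 amount to named presets over the four adjustment values. One way to package them is sketched below; the preset names and every numeric value are invented for illustration, since the disclosure does not publish preset values.

```python
from typing import NamedTuple

class NightPreset(NamedTuple):
    down_speed: float  # first adjustment value
    global_q: float    # second adjustment value
    p: float           # third adjustment value
    a: float           # fourth adjustment value

# Hypothetical presets behind the "type 1" ... "type 5" icons of fig. 6.
PRESETS = {
    "type 1": NightPreset(down_speed=4.0, global_q=0.02, p=1.0, a=0.95),
    "type 2": NightPreset(down_speed=6.0, global_q=0.05, p=1.2, a=0.90),
    "type 3": NightPreset(down_speed=8.0, global_q=0.08, p=1.4, a=0.85),
}
```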
Fig. 7a and 7b are schematic effect diagrams of the image processing method provided by the embodiment of the disclosure. Two consecutive frames, a first frame image and a second frame image, are extracted from the target video through the processing queue of the graphics processor (GPU), and the extracted first frame image and second frame image are respectively filtered to obtain an image to be enhanced; image enhancement processing is performed based on the obtained image to be enhanced to form a fused image; and in the target video, the first frame image is replaced with the fused image. All image frames in the target video are processed in this loop in turn until the processing queue of the GPU has finished processing every image frame, forming a new target video.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, embodiments of the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the disclosed embodiments may take the form of a computer program product embodied on one or more computer-usable storage media (including magnetic disk storage, optical storage, and so forth) having computer-usable program code embodied therein.
Embodiments of the present disclosure are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit its scope; any variation, modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included within the scope of the present disclosure.

Claims (16)

1. An image processing method, characterized in that the method comprises:
extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images;
respectively filtering the extracted first frame image and the second frame image to obtain an image to be enhanced;
performing image enhancement processing based on the obtained image to be enhanced to form a fused image;
and replacing the first frame image with the fused image in the target video.
2. The method according to claim 1, wherein the performing filtering processing on the extracted first frame image and the extracted second frame image respectively comprises:
respectively carrying out Gaussian blur processing on the first frame image and the second frame image;
determining an absolute value of a difference of the Gaussian blur of the first frame image and the second frame image;
constructing a Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
determining a filter parameter of the Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image.
3. The method according to claim 2, wherein the determining the filter parameter of the Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image comprises:
determining an update value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a first adjustment value;
determining a weight adjustment value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a second adjustment value;
determining the superposition weight of the first frame image and the second frame image based on the update value of the Kalman filtering matrix and the weight adjustment value of the Kalman filtering matrix.
4. The method of claim 3, further comprising:
and determining a corresponding image to be enhanced based on the addition operation result of a first parameter and the first frame image, wherein the first parameter represents the product of the difference value corresponding to the first frame image and the second frame image and the superposition weight of the first frame image and the second frame image.
5. The method according to claim 1, wherein the performing the image enhancement processing based on the obtained image to be enhanced to form a fused image comprises:
carrying out inverse logarithm processing on the image to be enhanced to form an image to be defogged;
carrying out dark channel defogging processing on the formed image to be defogged to form an enhanced image;
and carrying out inverse logarithm processing on the formed enhanced image to form a fused image.
6. The method according to claim 5, wherein the performing the dark channel defogging processing on the formed image to be defogged to form an enhanced image comprises:
determining a dark channel value of the image to be defogged;
determining the gray value of the image to be defogged;
determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value and the gray value of the image to be defogged;
and processing the image to be defogged according to the atmospheric light value of the image to be defogged and the fourth adjustment value to form an enhanced image.
7. The method of claim 6, further comprising:
determining the minimum value of the three channels of each pixel point of the image to be defogged;
and assigning the minimum value of the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
8. An image processing apparatus, characterized in that the apparatus comprises:
the image processing module is used for extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images;
the image filtering module is used for respectively carrying out filtering processing on the extracted first frame image and the second frame image to obtain an image to be enhanced;
the image enhancement module is used for performing image enhancement processing based on the obtained image to be enhanced to form a fused image;
the image processing module is used for replacing the first frame image with the fused image in the target video.
9. The apparatus of claim 8,
the image filtering module is used for respectively carrying out Gaussian blur processing on the first frame image and the second frame image;
the image filtering module is used for determining the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
the image filtering module is used for constructing a Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
the image filtering module is used for determining the filter parameter of the Kalman filtering matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image.
10. The apparatus of claim 9,
the image filtering module is used for determining an update value of the Kalman filtering matrix based on an absolute value of a difference value of the Gaussian blur of the first frame image and the second frame image and a first adjustment value;
the image filtering module is used for determining a weight adjustment value of the Kalman filtering matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a second adjustment value;
the image filtering module is used for determining the superposition weight of the first frame image and the second frame image based on the update value of the Kalman filtering matrix and the weight adjustment value of the Kalman filtering matrix.
11. The apparatus of claim 10,
the image filtering module is used for determining a corresponding image to be enhanced based on the addition operation result of a first parameter and the first frame image, wherein the first parameter represents the product of the difference value corresponding to the first frame image and the second frame image and the superposition weight of the first frame image and the second frame image.
12. The apparatus of claim 8,
the image enhancement module is used for carrying out inverse logarithm processing on the image to be enhanced to form an image to be defogged;
the image enhancement module is used for carrying out dark channel defogging processing on the formed image to be defogged to form an enhanced image;
and the image enhancement module is used for carrying out inverse logarithm processing on the formed enhanced image to form a fused image.
13. The apparatus of claim 12,
the image enhancement module is used for determining a dark channel value of the image to be defogged;
the image enhancement module is used for determining the gray value of the image to be defogged;
the image enhancement module is used for determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value and the gray value of the image to be defogged;
and the image enhancement module is used for processing the image to be defogged according to the atmospheric light value of the image to be defogged and the fourth adjustment value to form an enhanced image.
14. The apparatus of claim 13,
the image enhancement module is used for determining the minimum value of the three channels of each pixel point of the image to be defogged;
and the image enhancement module is used for assigning the minimum value of the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
15. An image processing apparatus characterized by comprising:
a memory for storing executable instructions;
a processor for implementing the image processing method of any one of claims 1 to 7 when executing the executable instructions.
16. A storage medium storing executable instructions for implementing the image processing method of any one of claims 1 to 7 when executed.
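For orientation only, the sketch below maps the steps recited in claims 2 to 7 onto NumPy/OpenCV code. The Gaussian kernel size, the exact forms of the update value, the weight adjustment value and the superposition weight, the stand-in for the inverse-logarithm mapping, and the atmospheric light estimate are all assumptions; the claims name these operations without fixing their formulas.

```python
import cv2
import numpy as np

def filter_frames(first, second, adj1, adj2):
    """Claims 2-4: Kalman-style filtering of two consecutive frames."""
    b1 = cv2.GaussianBlur(first, (5, 5), 0).astype(np.float32)   # claim 2: blur each frame
    b2 = cv2.GaussianBlur(second, (5, 5), 0).astype(np.float32)
    abs_diff = np.abs(b1 - b2)                  # claim 2: absolute blur difference
    update = adj1 + abs_diff                    # claim 3: update value (assumed form)
    weight_adj = adj2 * (1.0 + abs_diff)        # claim 3: weight adjustment value (assumed form)
    weight = update / (update + weight_adj)     # claim 3: superposition weight (assumed form)
    frame_diff = second.astype(np.float32) - first.astype(np.float32)
    enhanced = first.astype(np.float32) + weight * frame_diff   # claim 4: first frame + product
    return np.clip(enhanced, 0, 255).astype(np.uint8)

def enhance_image(img, adj3, adj4):
    """Claims 5-7: inversion, dark channel defogging, inversion back."""
    f = img.astype(np.float32) / 255.0
    inv = 1.0 - f                               # claim 5: stand-in for inverse-log processing
    dark = inv.min(axis=2)                      # claim 7: per-pixel minimum of the 3 channels
    gray = inv.mean(axis=2)                     # claim 6: gray value (assumed as channel mean)
    light = adj3 * max(float(dark.max()), float(gray.max()))      # claim 6: atmospheric light (assumed)
    t = np.clip(1.0 - adj4 * dark / max(light, 1e-6), 0.1, 1.0)   # claim 6: fourth adjustment (assumed)
    defogged = (inv - light) / t[..., None] + light               # standard dark-channel scene recovery
    return ((1.0 - np.clip(defogged, 0.0, 1.0)) * 255).astype(np.uint8)  # claim 5: invert back
```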

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910380316.0A CN111915496B (en) 2019-05-08 2019-05-08 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111915496A (en) 2020-11-10
CN111915496B CN111915496B (en) 2024-04-23

Family

ID=73241829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910380316.0A Active CN111915496B (en) 2019-05-08 2019-05-08 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111915496B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000063151A (en) * 2000-03-02 2000-11-06 신천우 An apparatus for warning lane deviation and a method for warning the same
CN103927526A (en) * 2014-04-30 2014-07-16 长安大学 Vehicle detecting method based on Gauss difference multi-scale edge fusion
CN104103053A (en) * 2013-11-25 2014-10-15 北京华科创智健康科技股份有限公司 Electronic endoscope image enhancement method and device
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse
CN105631825A (en) * 2015-12-28 2016-06-01 西安电子科技大学 Image defogging method based on rolling guidance
CN105913404A (en) * 2016-07-01 2016-08-31 湖南源信光电科技有限公司 Low-illumination imaging method based on frame accumulation
CN107993245A (en) * 2017-11-15 2018-05-04 湖北三江航天红峰控制有限公司 A kind of sky day background multi-target detection and tracking
CN108257101A (en) * 2018-01-16 2018-07-06 上海海洋大学 A kind of underwater picture Enhancement Method based on optimal recovery parameter

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529813A (en) * 2020-12-18 2021-03-19 四川云从天府人工智能科技有限公司 Image defogging processing method and device and computer storage medium
CN112529813B (en) * 2020-12-18 2024-05-24 四川云从天府人工智能科技有限公司 Image defogging processing method and device and computer storage medium
CN112819007A (en) * 2021-01-07 2021-05-18 北京百度网讯科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN112819007B (en) * 2021-01-07 2023-08-01 北京百度网讯科技有限公司 Image recognition method, device, electronic equipment and storage medium
CN114390307A (en) * 2021-12-28 2022-04-22 广州虎牙科技有限公司 Image quality enhancement method, device, terminal and readable storage medium

Also Published As

Publication number Publication date
CN111915496B (en) 2024-04-23

Similar Documents

Publication Publication Date Title
CN111915496B (en) Image processing method, device and storage medium
CN110889802B (en) Image processing method and device
CN110898429B (en) Game scenario display method and device, electronic equipment and storage medium
CN114298944A (en) Image enhancement method, device, equipment and storage medium
CN110070495B (en) Image processing method and device and electronic equipment
CN110958481A (en) Video page display method and device, electronic equipment and computer readable medium
CN111833269B (en) Video noise reduction method, device, electronic equipment and computer readable medium
JP2013041565A (en) Image processor, image display device, image processing method, computer program, and recording medium
CN112183173B (en) Image processing method, device and storage medium
CN113038176B (en) Video frame extraction method and device and electronic equipment
CN114817630A (en) Card display method, card display device, electronic device, storage medium, and program product
CN111738950B (en) Image processing method and device
CN110719407A (en) Picture beautifying method, device, equipment and storage medium
CN111915532B (en) Image tracking method and device, electronic equipment and computer readable medium
CN115086686A (en) Video processing method and related device
CN111626921A (en) Picture processing method and device and electronic equipment
CN114640796B (en) Video processing method, device, electronic equipment and storage medium
CN115272061A (en) Method, device and equipment for generating special effect video and storage medium
CN111369472B (en) Image defogging method and device, electronic equipment and medium
CN115526796A (en) Image processing method, device, equipment and storage medium
JP2011516911A (en) Color image enhancement
CN110599437A (en) Method and apparatus for processing video
CN113014745A (en) Video image noise reduction method and device, storage medium and electronic equipment
CN111756954B (en) Image processing method, image processing device, electronic equipment and computer readable medium
US12022204B2 (en) System and method to improve quality in under-display camera system with radially-increasing distortion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant