CN111915496B - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN111915496B
CN111915496B (application CN201910380316.0A)
Authority
CN
China
Prior art keywords
image
frame image
value
frame
defogged
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910380316.0A
Other languages
Chinese (zh)
Other versions
CN111915496A (en)
Inventor
邹瑞波
黄攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910380316.0A priority Critical patent/CN111915496B/en
Publication of CN111915496A publication Critical patent/CN111915496A/en
Application granted granted Critical
Publication of CN111915496B publication Critical patent/CN111915496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 5/70
    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 - Image enhancement or restoration
                    • G06T 5/20 - Image enhancement or restoration by the use of local operators
                    • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
                • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 - Image acquisition modality
                        • G06T 2207/10016 - Video; Image sequence
                    • G06T 2207/20 - Special algorithmic details
                        • G06T 2207/20212 - Image combination
                            • G06T 2207/20221 - Image fusion; Image merging

Abstract

The present disclosure provides an image processing method, including: extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images; respectively filtering the extracted first frame image and second frame image to obtain an image to be enhanced; performing image enhancement processing based on the obtained image to be enhanced to form a fused image; and replacing the first frame image in the target video with the fused image. The present disclosure also provides an image processing apparatus and a storage medium.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to image adjustment technology, and in particular, to an image processing method, an image processing device, and a storage medium.
Background
When electronic equipment shoots a video in a dark-light environment at night, the image frames in the video suffer from heavy noise and low contrast due to the poor lighting. The noise and low contrast not only destroy the real information of the image but also seriously degrade the visual effect of the video, so the noisy video image frames need to be denoised and restored into clear video image frames to form a clear video.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide an image processing method, apparatus, and storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the embodiment of the disclosure provides an image processing method, which comprises the following steps:
Extracting a first frame image and a second frame image in a target video, wherein the first frame image and the second frame image are continuous images;
respectively filtering the extracted first frame image and the second frame image to obtain an image to be enhanced;
Performing image enhancement processing based on the obtained image to be enhanced to form a fusion image;
and replacing the first frame image in the target video with the fused image.
In the above aspect, the filtering processing for the extracted first frame image and the second frame image respectively includes:
respectively carrying out Gaussian blur processing on the first frame image and the second frame image;
determining an absolute value of a difference of Gaussian blur of the first frame image and the second frame image;
Constructing a Kalman filter matrix based on the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
The filtering parameters of the Kalman filtering matrix are determined based on the absolute value of the difference of the Gaussian blur of the first frame image and the second frame image.
In the above solution, the determining, based on the absolute value of the difference of the Gaussian blur corresponding to the first frame image and the second frame image, a filtering parameter of the Kalman filter matrix includes:
determining an updated value of the Kalman filter matrix based on an absolute value of a difference of Gaussian blur of the first frame image and the second frame image and a first adjustment value;
determining a weight adjustment value of the Kalman filter matrix based on an absolute value of a difference value of Gaussian blur of the first frame image and the second frame image and a second adjustment value;
And determining the superposition weight of the first frame image and the second frame image based on the updated value of the Kalman filtering matrix and the weight adjustment value of the Kalman filtering matrix.
In the above scheme, the method further comprises:
And determining a corresponding image to be enhanced based on the result of a summation operation of a first parameter and the first frame image, wherein the first parameter represents the product of the difference between the second frame image and the first frame image and the superposition weight of the first frame image and the second frame image.
In the above solution, the processing for image enhancement based on the obtained image to be enhanced to form a fused image includes:
performing anti-logarithmic processing on the image to be enhanced to form an image to be defogged;
performing dark channel defogging treatment on the formed image to be defogged to form an enhanced image;
and performing anti-logarithm processing on the formed enhanced image to form a fusion image.
In the above scheme, the performing dark channel defogging treatment on the formed image to be defogged to form the enhanced image includes:
Determining a dark channel value of the image to be defogged;
Determining a gray value of the image to be defogged;
determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value and the gray value of the image to be defogged;
And processing the image to be defogged according to the atmospheric light value of the image to be defogged and the fourth adjustment value, so as to form an enhanced image.
In the above scheme, the method further comprises:
determining the minimum value in three channels of each pixel point of the image to be defogged;
and assigning the minimum value in the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
The embodiment of the disclosure also provides an image processing apparatus, including:
The image processing module is used for extracting a first frame image and a second frame image in the target video, wherein the first frame image and the second frame image are continuous images;
the image filtering module is used for respectively carrying out filtering processing on the extracted first frame image and the second frame image to obtain an image to be enhanced;
the image enhancement module is used for carrying out image enhancement processing based on the obtained image to be enhanced so as to form a fusion image;
the image processing module is used for replacing the first frame image with the fusion image in the target video.
In the above scheme,
The image filtering module is used for respectively carrying out Gaussian blur processing on the first frame image and the second frame image;
The image filtering module is used for determining the absolute value of the difference value of the Gaussian blur of the first frame image and the second frame image;
The image filtering module is used for constructing a Kalman filtering matrix based on the absolute value of the Gaussian blur difference value of the first frame image and the second frame image;
The image filtering module is used for determining filtering parameters of the Kalman filtering matrix based on absolute values of Gaussian blur difference values of the first frame image and the second frame image.
In the above scheme,
The image filtering module is used for determining an updated value of the Kalman filtering matrix based on the absolute value of the Gaussian blur difference value of the first frame image and the second frame image and a first adjusting value;
the image filtering module is used for determining a weight adjusting value of the Kalman filtering matrix based on an absolute value of a Gaussian blur difference value of the first frame image and the second frame image and a second adjusting value;
The image filtering module is used for determining the superposition weight of the first frame image and the second frame image based on the updated value of the Kalman filtering matrix and the weight adjustment value of the Kalman filtering matrix.
In the above scheme,
The image filtering module is configured to determine a corresponding image to be enhanced based on a result of a summation operation of a first parameter and the first frame image, where the first parameter characterizes a product of a difference value corresponding to the first frame image and the second frame image and a superposition weight of the first frame image and the second frame image.
In the above scheme,
The image enhancement module is used for carrying out anti-logarithmic processing on the image to be enhanced to form an image to be defogged;
the image enhancement module is used for carrying out dark channel defogging treatment on the formed image to be defogged so as to form an enhanced image;
the image enhancement module is used for carrying out anti-logarithmic processing on the formed enhanced image to form a fusion image.
In the above scheme,
The image enhancement module is used for determining a dark channel value of the image to be defogged;
The image enhancement module is used for determining the gray value of the image to be defogged;
the image enhancement module is used for determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjusting value and the gray value of the image to be defogged;
The image enhancement module is used for processing the image to be defogged according to the atmospheric light value and the fourth adjustment value of the image to be defogged so as to form an enhanced image.
In the above scheme,
The image enhancement module is used for determining the minimum value in three channels of each pixel point of the image to be defogged;
And the image enhancement module is used for assigning the minimum value in the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
The embodiment of the disclosure also provides an image processing apparatus, including:
A memory for storing executable instructions;
And the processor is used for realizing the image processing method provided by the embodiment of the disclosure when executing the executable instructions.
The embodiment of the disclosure also provides a storage medium, which stores executable instructions that, when executed, implement the image processing method provided by the embodiment of the disclosure.
The embodiments of the present disclosure provide an image processing method, device and storage medium, with the following technical effects:
A first frame image and a second frame image are extracted from a target video; the extracted first frame image and second frame image are respectively filtered, and image enhancement processing is performed based on the obtained image to be enhanced to form a fused image; finally, the first frame image in the target video is replaced with the fused image. Each frame of image shot by the electronic equipment under a dim-light condition can thus be processed into a clear image frame, and the formed image frames can replace the image frames in the target video to form a clear target video, avoiding the heavy noise and low contrast that dim-light shooting would otherwise cause.
Drawings
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an alternative hardware structure of an image processing apparatus 200 according to an embodiment of the disclosure;
FIG. 3 is a schematic view showing an optional composition of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of an alternative image processing method according to an embodiment of the disclosure;
FIG. 5 is a schematic flow chart of an alternative image processing method according to an embodiment of the disclosure;
fig. 6 is a front-end schematic diagram of an image processing method according to an embodiment of the disclosure;
Fig. 7a and 7b are schematic diagrams illustrating effects of an image processing method according to an embodiment of the present disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be further described in detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by those skilled in the art without making inventive efforts are within the scope of protection of the present disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the present disclosure only and is not intended to be limiting of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a method or server comprising a list of elements includes not only those elements explicitly recited, but may also include other elements not expressly listed or inherent to such method or server. Without further limitation, an element defined by the phrase "comprising a/an..." does not exclude the presence of other related elements in a method or server comprising that element (e.g., a step in a method or a unit in a server; a unit may be, for example, part of a circuit, part of a processor, part of a program or software, etc.).
For example, the image processing method provided in the embodiment of the present disclosure includes a series of steps, but the image processing method provided in the embodiment of the present disclosure is not limited to the described steps, and similarly, the terminal provided in the embodiment of the present disclosure includes a series of units, but the terminal provided in the embodiment of the present disclosure is not limited to including the explicitly described units, and may include units that are required to be set for acquiring related information or performing processing based on the information. It should be noted that in the following description reference is made to "some embodiments" which describe a subset of all possible embodiments, but it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with each other without conflict.
Before explaining the embodiments of the present disclosure in further detail, terms and terminology involved in the embodiments of the present disclosure are explained, and the terms and terminology involved in the embodiments of the present disclosure are applicable to the following explanation.
1) Target video: a video shot by electronic equipment in a dark-light environment. Due to the dark-light environment, the image frames in the target video contain more noise, which affects the viewing experience of the video.
2) Filtering processing: including Kalman filtering, an algorithm that uses a linear system state equation and system input/output observation data to optimally estimate the system state. Since the observation data include the effects of noise and interference in the system, the optimal estimation can also be regarded as a filtering process. Here, the Kalman filtering is performed by a graphics processor (GPU, Graphics Processing Unit) through a corresponding Kalman filtering algorithm.
3) Dark channel defogging: calculating the transmittance of the dark channel, the atmospheric light parameter and the refined dark channel through a dark channel defogging algorithm in the graphics processor (GPU) of the electronic equipment, thereby defogging the input image to be defogged to form an enhanced image.
4) Client: a carrier in a terminal that implements a specific function; for example, a mobile client (APP) is the carrier of specific functions in a mobile terminal, such as live online streaming (video push) or online video playback.
An exemplary application of an apparatus implementing the embodiments of the present disclosure is described below. The apparatus provided by the embodiments of the present disclosure may be implemented as various types of electronic devices with a graphics processor (GPU), such as a tablet computer, a notebook computer, a central processing unit, and the like.
The use scenario of the image processing method according to the embodiments of the present disclosure is now described with reference to the accompanying drawings. Referring to fig. 1, fig. 1 is a schematic application scenario of the image processing method provided by an embodiment of the present disclosure. To support an exemplary application, a server implementing an embodiment of the present disclosure may be a video server; taking a video server 30 as an example, a user terminal 10 (the user terminal 10-1 and the user terminal 10-2 are exemplarily shown) is connected to the video server 30 through a network 20. The network 20 may be a wide area network or a local area network, or a combination of the two, using wireless links for data transmission. The graphics processor of the terminal 10 can process a target video captured by the terminal 10, and the processed target video is sent to the video server 30 through the network 20.
The terminal 10 is configured to perform filtering processing on the extracted first frame image and second frame image, respectively, to obtain an image to be enhanced; perform image enhancement processing based on the obtained image to be enhanced to form a fused image; and replace the first frame image in the target video with the fused image. The user terminal 10 displays the processed image frames to the user through a graphical interface 110 (the graphical interface 110-1 and the graphical interface 110-2 are shown by way of example), so that the user can judge whether the processed video frames meet the noise requirement. The video server 30 cooperates with the user terminal 10 to provide background data support for image processing, enabling different functions in the terminal's image processing application, such as pushing the processed target video to the video server 30.
Based on the use environment of the image processing method shown in fig. 1, an image processing apparatus implementing an embodiment of the present disclosure is described first. The image processing apparatus may be provided in hardware, in software, or in a combination of hardware and software. Various exemplary implementations of the image processing apparatus provided by the embodiments of the present disclosure are described below.
The implementation of the combination of hardware and software of the image processing apparatus will be described first. Specifically, a hardware configuration of an image processing apparatus implementing an embodiment of the present disclosure will now be described with reference to the accompanying drawings, and referring to fig. 2, fig. 2 is a schematic diagram of an alternative hardware configuration of an image processing apparatus 200 provided by an embodiment of the present disclosure.
The image processing apparatus 200 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA, Personal Digital Assistant), tablet computers (PAD, Portable Android Device), portable multimedia players (PMP, Portable Media Player), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, as well as various types of electronic devices with image processing functions such as digital televisions (TVs) and desktop computers. The image processing apparatus 200 shown in fig. 2 is only one example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 2, the image processing apparatus 200 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 201, where the graphics processor is capable of executing a Kalman filtering algorithm and a dark channel defogging algorithm. The processing apparatus 201 may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 202 or a program loaded from a storage apparatus 208 into a random access memory (RAM) 203. The RAM 203 also stores various programs and data necessary for the operation of the image processing apparatus 200. The processing apparatus 201, the ROM 202, and the RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
In general, the following devices may be connected to the I/O interface 205: input devices 206 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 207 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 208 including, for example, magnetic tape, hard disk, etc.; and a communication device 209. The communication means 209 may allow the image processing apparatus 200 to perform wireless or wired communication with other devices to exchange data. While fig. 2 shows an image processing apparatus 200 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 209, or from the storage means 208, or from the ROM 202. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 201.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wire, optical fiber cable, radio frequency (RF), and the like, or any suitable combination thereof.
The computer readable medium may be contained in the image processing apparatus; or may exist alone without being incorporated into the image processing apparatus.
The computer readable medium carries one or more programs which, when executed by the image processing apparatus, cause the image processing apparatus to: acquiring calculation parameters of interest points in media information to be released; determining the interest points to be recommended according to the calculation parameters of the interest points; pushing the determined interest points to the client so as to insert the interest points in the corresponding media information.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or image processing apparatus. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not in some cases define the module itself.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
As an example of a hardware implementation or a software implementation of the image processing apparatus, the image processing apparatus may be provided as a series of modules having a coupling relationship at a signal/information/data level, which will be described below in connection with fig. 3. Referring to fig. 3, fig. 3 is a schematic diagram showing an optional composition structure of an image processing apparatus according to an embodiment of the present disclosure, which shows a series of modules included in implementing the image processing apparatus, but the module structure of the image processing apparatus is not limited to that shown in fig. 3, and for example, the modules therein may be further split or combined according to different functions implemented.
The following describes a pure hardware implementation of the image processing apparatus, which may run in various types of clients for applications. Fig. 3 is a schematic diagram of an optional functional structure of the image processing apparatus provided by an embodiment of the present disclosure; as shown in fig. 3, the image processing apparatus 300 includes: an image processing module 301, an image filtering module 302 and an image enhancement module 303. The functions of the respective modules are described in detail below.
The image processing module 301 is configured to extract a first frame image and a second frame image in a target video, where the first frame image and the second frame image are continuous images. The image filtering module 302 is configured to perform filtering processing on the extracted first frame image and the second frame image respectively, so as to obtain an image to be enhanced. The image enhancement module 303 is configured to perform image enhancement processing based on the obtained image to be enhanced, so as to form a fused image. The image processing module 301 is configured to replace the fused image with the first frame image in the target video.
In some embodiments of the present disclosure, the image filtering module 302 is configured to perform gaussian blur processing on the first frame image and the second frame image respectively. The image filtering module 302 is configured to determine an absolute value of a difference value of gaussian blur between the first frame image and the second frame image. The image filtering module 302 is configured to construct a kalman filter matrix based on an absolute value of a difference value of gaussian blur of the first frame image and the second frame image. The image filtering module 302 is configured to determine a filtering parameter of the kalman filtering matrix based on an absolute value of a difference value of gaussian blur of the first frame image and the second frame image.
In some embodiments of the present disclosure, the image filtering module 302 is configured to determine the updated value of the kalman filter matrix based on the absolute value of the difference of the gaussian blur of the first frame image and the second frame image and the first adjustment value. The image filtering module 302 is configured to determine a weight adjustment value of the kalman filter matrix based on an absolute value of a difference value of gaussian blur of the first frame image and the second frame image and a second adjustment value. The image filtering module 302 is configured to determine a superposition weight of the first frame image and the second frame image based on the updated value of the kalman filter matrix and the weight adjustment value of the kalman filter matrix.
In some embodiments of the present disclosure, the image filtering module 302 is configured to determine a corresponding image to be enhanced based on the result of the summation operation of the first parameter and the first frame image, where the first parameter represents the product of the difference between the second frame image and the first frame image and the superposition weight of the first frame image and the second frame image.
In some embodiments of the present disclosure, the image enhancement module 303 is configured to perform an anti-logarithm process on the image to be enhanced to form an image to be defogged. The image enhancement module 303 is configured to perform dark channel defogging processing on the formed image to be defogged, so as to form an enhanced image. The image enhancement module 303 is configured to perform an anti-logarithm process on the formed enhanced image to form a fused image.
In some embodiments of the present disclosure, the image enhancement module 303 is configured to determine a dark channel value of the image to be defogged; determine a gray value of the image to be defogged; determine an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value, and the gray value of the image to be defogged; and process the image to be defogged according to the atmospheric light value of the image to be defogged and the fourth adjustment value, so as to form an enhanced image. The image enhancement module 303 is configured to determine the minimum value in the three channels of each pixel point of the image to be defogged, and to assign that minimum value to the corresponding pixel point in the dark channel image.
Describing the image processing method provided by the embodiment of the present disclosure with reference to the image processing apparatus 300 shown in fig. 3, referring to fig. 4, fig. 4 is an alternative flowchart of the image processing method provided by the embodiment of the present disclosure, it will be understood that the steps shown in fig. 4 may be performed by a terminal running the image processing apparatus 300, for example, the image processing apparatus 300 may be a functional module coupled to an internal/external interface of the terminal; the steps shown in fig. 4 may also be performed by a server running the image processing apparatus 300, for example, the image processing apparatus 300 may be a functional module coupled to an internal/external interface of the server. The following is a description of the steps shown in fig. 4.
Step 401: extracting a first frame image and a second frame image in a target video;
wherein the first frame image and the second frame image are continuous images;
Step 402: respectively filtering the extracted first frame image and the second frame image to obtain an image to be enhanced;
Step 403: performing image enhancement processing based on the obtained image to be enhanced to form a fusion image;
step 404: and replacing the fused image with the first frame image in the target video.
In some embodiments of the present disclosure, the filtering the extracted first frame image and the second frame image respectively includes:
Respectively carrying out Gaussian blur processing on the first frame image and the second frame image; determining the absolute value of the difference of the Gaussian blur of the first frame image and the second frame image; constructing a Kalman filter matrix based on that absolute value; and determining the filtering parameters of the Kalman filter matrix based on that absolute value. When the target video is processed, it is first initialized, and two continuous frame images, namely a first frame image T-1 and a second frame image T, are extracted from the target video through a processing queue of the GPU. The noise of each frame image in a target video shot under a dim-light condition usually follows a Gaussian distribution, so when removing the noise of the image frames, the extracted first frame image T-1 and second frame image T are first subjected to Gaussian blur processing. The absolute value of the difference of the Gaussian blur of the first frame image and the second frame image can then be determined by the formula:
Delta = |G(T-1) - G(T)|,
where G(T-1) and G(T) are the Gaussian-blurred first frame image and second frame image, respectively.
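For illustration, a minimal CPU-side sketch of this step follows, assuming NumPy/SciPy arrays (the patent executes it in a GPU processing queue); the function name frame_delta and the sigma value are assumptions, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frame_delta(prev_frame: np.ndarray, curr_frame: np.ndarray,
                sigma: float = 1.5) -> np.ndarray:
    """Delta = |G(T-1) - G(T)|: absolute difference of Gaussian-blurred frames."""
    # Blur spatially only; sigma 0 on the channel axis avoids mixing R/G/B.
    s = (sigma, sigma, 0) if prev_frame.ndim == 3 else sigma
    g_prev = gaussian_filter(prev_frame.astype(np.float32), sigma=s)
    g_curr = gaussian_filter(curr_frame.astype(np.float32), sigma=s)
    return np.abs(g_prev - g_curr)
```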
Further, the Kalman filter is a recursive estimator: the estimated value of the current state can be calculated as long as the estimated value of the state at the previous time and the observed value of the current state are known. To apply Kalman filtering to the noise reduction of video image frames, a Kalman filter matrix is first established, and the filtering parameters in the Kalman filter matrix are determined.
In some embodiments of the present disclosure, the determining the filtering parameters of the kalman filter matrix based on the absolute value of the difference value of the gaussian blur corresponding to the first frame image and the second frame image includes:
Determining an updated value of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blur of the first frame image and the second frame image and a first adjustment value; determining a weight adjustment value of the Kalman filter matrix based on that absolute value and a second adjustment value; and determining the superposition weight of the first frame image and the second frame image based on the updated value of the Kalman filter matrix and the weight adjustment value of the Kalman filter matrix. The updated value of the Kalman filter matrix is denoted Update, and the first adjustment value is denoted downSpeed; by the formula:
Update = init_update - ((1.0 / (1.0 + exp(-downSpeed × Delta))) - 0.5) × 2.0,
the updated value of the Kalman filter matrix can be determined.
Because the amount of noise when the electronic equipment shoots in a dark environment is not always the same, the weight adjustment value of the Kalman filter matrix needs to be calculated during the processing of image frames through the Kalman filter matrix, so as to weight different target images differently. The weight adjustment value of the Kalman filter matrix is denoted fPredicted, and the second adjustment value is denoted global_q, which adjusts the influence of the Delta component on the final prediction weight. By the formula:
fPredicted = 1.0 + global_q × Delta × 255.0 × 255.0,
the weight adjustment value of the Kalman filter matrix is determined, where 255.0 is the maximum value of an RGB channel.
In the process of processing the image frames through the Kalman filter matrix, the motion estimation value of a static area of an image frame is small and its Gaussian weight value is larger, while the motion estimation value of a motion region is large and its Gaussian weight value is smaller; therefore, the superposition weight of the first frame image and the second frame image needs to be calculated. Specifically, the superposition weight of the first frame image and the second frame image is denoted Kcurr; by the formula:
Kcurr = fPredicted / (fPredicted + Update × 1.5),
the superposition weight of the first frame image and the second frame image is determined.
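Gathering the three formulas, a sketch of the filter-parameter computation could look as follows. The default values of init_update, downSpeed and global_q are illustrative assumptions (the patent leaves them to be selected per dark-light environment), and Delta is assumed normalized to [0, 1] as in a GPU texture, which is why the 255.0 × 255.0 factor appears.

```python
import numpy as np

def kalman_weights(delta: np.ndarray,
                   init_update: float = 1.0,
                   down_speed: float = 32.0,
                   global_q: float = 0.0005) -> np.ndarray:
    """Superposition weight Kcurr from Delta and the two adjustment values."""
    # Update = init_update - ((1 / (1 + exp(-downSpeed * Delta))) - 0.5) * 2.0
    update = init_update - (1.0 / (1.0 + np.exp(-down_speed * delta)) - 0.5) * 2.0
    # fPredicted = 1.0 + global_q * Delta * 255.0 * 255.0
    f_predicted = 1.0 + global_q * delta * 255.0 * 255.0
    # Kcurr = fPredicted / (fPredicted + Update * 1.5)
    return f_predicted / (f_predicted + update * 1.5)
```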
In some embodiments of the present disclosure, the method further comprises:
And determining a corresponding image to be enhanced based on the result of a summation operation of a first parameter and the first frame image, wherein the first parameter represents the product of the difference between the second frame image and the first frame image and the superposition weight of the first frame image and the second frame image. The image to be enhanced is denoted result_denoise; by the formula:
result_denoise = first frame image + Kcurr × (second frame image - first frame image),
the image to be enhanced formed based on the first image frame and the second image frame is obtained. The filtering parameters in the Kalman filter matrix change dynamically, and the computation is executed by the graphics processor GPU of the electronic device.
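With the two sketches above, the whole filtering step reduces to a per-pixel blend; again a sketch, with frames assumed to be float arrays normalized to [0, 1]:

```python
import numpy as np

def temporal_denoise(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """result_denoise = frame T-1 + Kcurr * (frame T - frame T-1)."""
    delta = frame_delta(prev_frame, curr_frame)
    k_curr = kalman_weights(delta)
    return prev_frame + k_curr * (curr_frame - prev_frame)
```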
Based on the example of the above embodiment, when the length of the computation queue of the GPU is 10, the GPU computation queue can process 10 consecutive frames of the target video in parallel.
In some embodiments of the present disclosure, the processing for image enhancement based on the obtained image to be enhanced to form a fused image includes:
Performing anti-logarithmic processing on the image to be enhanced to form an image to be defogged; performing dark channel defogging treatment on the formed image to be defogged to form an enhanced image; and performing anti-logarithmic processing on the formed enhanced image to form a fused image. In the image frames to be enhanced formed by the Kalman filter matrix processing, among the three RGB color channels of each image frame, the gray value of one channel is very low and tends toward 0. Specifically, the minimum gray value among the three channels of each pixel in the image frame to be enhanced is first taken to obtain a grayscale image; then, in the grayscale image, a rectangular window of a certain size is taken with each pixel as its center, and the minimum gray value within the window replaces the gray value of the center pixel, yielding the dark channel image of the input image. The dark channel image is a grayscale image; a large number of statistics and observations show that its gray values are very low, so the gray values of all pixels in the whole dark channel image approach 0. The image to be defogged is defogged using the dark channel image to form an enhanced image, and the enhanced image is subjected to anti-logarithmic processing to obtain the fused image of the first image frame and the second image frame.
In some embodiments of the present disclosure, the performing dark channel defogging processing on the formed image to be defogged to form an enhanced image includes:
Determining a dark channel value of the image to be defogged; determining a gray value of the image to be defogged; determining an atmospheric light value of the image to be defogged based on the dark channel value, the third adjustment value and the gray value of the image to be defogged; and processing the image to be defogged according to the atmospheric light value of the image to be defogged and the fourth adjustment value, so as to form an enhanced image. The dark channel value is denoted dark_channel, the gray values of the image to be defogged are denoted mean_h and mean_v, and the atmospheric light value of the image to be defogged is denoted AirLight; the third adjustment value is denoted P, the fourth adjustment value is denoted A, the image to be enhanced is denoted Input, and the result of taking the opposite number is denoted IR. For any input image, the atmospheric light value of each channel is calculated by taking, for the brightest 0.1% of pixels of the dark channel image, the average of the gray values of each channel at the corresponding pixel positions of the original input image; the atmospheric light value AirLight is therefore a three-element vector, each element corresponding to one color channel. Thus, in some embodiments of the present disclosure, the method further comprises:
determining the minimum value in three channels of each pixel point of the image to be defogged;
and assigning the minimum value in the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
Wherein, through the formula: dark_channel = min(Input_R, Input_G, Input_B), the dark channel value of the image to be defogged may be determined.
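A sketch of the dark channel computation described above (per-pixel channel minimum, then a windowed minimum); the 15-pixel window is an assumed value, since the patent only specifies "a rectangular window of a certain size":

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image: np.ndarray, window: int = 15) -> np.ndarray:
    """dark_channel = min(Input_R, Input_G, Input_B), refined by a window minimum."""
    per_pixel_min = image.min(axis=2)                  # minimum over the three channels
    return minimum_filter(per_pixel_min, size=window)  # minimum within the window
```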
In determining the gray value of the image to be defogged, first, by the formula mean_h = (average of each row), the average value of each row is stored in the first column of an image; then, by the formula mean_v = (average of the first column of mean_h), the first column of mean_h is averaged to obtain an approximation of the full-image average, while the remaining columns are not processed. The gray value of the image to be defogged is thereby determined.
In determining the atmospheric light value AirLight, the following formula may be used:
AirLight = min(min(P × mean_v, 0.9) × filtered Input, Input),
whereby the corresponding atmospheric light value AirLight is determined.
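Since the original formula is partially garbled, the sketch below reflects one plausible reading: the row/column averaging yields a per-channel mean, which is scaled by P, clamped at 0.9, and combined with the min-filtered input; every detail here, including the default P, is an assumption.

```python
import numpy as np

def air_light(image: np.ndarray, p: float = 0.96) -> np.ndarray:
    """AirLight = min(min(P * mean_v, 0.9) * filtered Input, Input) (assumed reading)."""
    mean_h = image.mean(axis=1)                      # average of each row, per channel
    mean_v = mean_h.mean(axis=0)                     # average of the row averages
    filtered = dark_channel(image)[..., np.newaxis]  # min-filtered input as "filtered Input"
    return np.minimum(np.minimum(p * mean_v, 0.9) * filtered, image)
```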
Further, in the processing queue of the graphics processor GPU of the electronic device, normalization processing is also required for the image to be defogged, that is, both sides of the equation are divided by the atmospheric light value of each channel. Assuming that the transmittance is constant within a rectangular window of a certain size in the image, a minimization operation is applied to both sides using the minimum operator; since the transmittance is constant within the rectangular region, it can be moved outside the operator executed by the graphics processor GPU.
Because the dark-light conditions under which the target video is shot are not exactly the same, the noise positions and noise amounts of the image frames of the target video differ. Therefore, the dark channel defogging process of the image to be enhanced can be adjusted through the third adjustment value P and the fourth adjustment value A, so as to avoid the graphics processor GPU defogging so thoroughly that the restored scenery looks unnatural.
In some embodiments of the present disclosure, the following formula may also be used:
result_tmp = (Input - AirLight) / (1 - AirLight / A), to form an enhanced image; the formed enhanced image is then subjected to anti-logarithmic processing to form a fused image.
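A sketch of this recovery step, reading the final "anti-logarithmic processing" as inverting the image back (per the "taking the opposite number" remark above); the default A is an assumption:

```python
import numpy as np

def recover(inverted: np.ndarray, airlight: np.ndarray, a: float = 0.95) -> np.ndarray:
    """result_tmp = (Input - AirLight) / (1 - AirLight / A), then invert back."""
    # Guard the denominator against division by zero.
    result_tmp = (inverted - airlight) / np.maximum(1.0 - airlight / a, 1e-3)
    return 1.0 - np.clip(result_tmp, 0.0, 1.0)  # inversion back yields the fused image
```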
Fig. 5 is an optional flowchart of an image processing method according to an embodiment of the present disclosure, where the image processing method is implemented by a graphics processor GPU of an electronic device, and the target video is captured by a client in a dark environment by controlling an image capturing module of the electronic device. As shown in fig. 5, an optional flow of the image processing method provided in the embodiment of the disclosure includes the following steps:
step 501: extracting continuous two-frame images T and T-1 in the target video;
Step 502: selecting a first adjusting value and a second adjusting value according to the dim light environment of the target video;
Step 503: constructing the Kalman filtering matrix through a processing queue of a Graphic Processor (GPU);
Wherein, in the process of constructing the Kalman filtering matrix, corresponding filtering parameters need to be determined, and the filtering parameters comprise: the updated value of the Kalman filter matrix, the weight adjustment value of the Kalman filter matrix and the superposition weight of the image frames.
Step 504: filtering the continuous two-frame images T and T-1 through the Kalman filtering matrix to obtain an image to be enhanced;
Wherein the obtained image to be enhanced needs to be used as an input image in the image enhancement processing process;
Step 505: performing anti-logarithmic processing on the image to be enhanced to form an image to be defogged;
step 506: selecting a third adjusting value and a fourth adjusting value according to the dim light environment of the target video;
step 507: and determining a dark channel value, a gray value and an atmospheric light value of the image to be defogged.
Determining the minimum value in three channels of each pixel point of the image to be defogged;
And assigning the minimum value in the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image, so as to determine the atmospheric light value of the image to be defogged.
Step 508: and processing the image to be defogged according to the atmospheric light value and the fourth regulating value of the image to be defogged so as to form an enhanced image, and processing the enhanced image so as to form a fusion image.
Step 509: and judging whether the fused image meets the noise requirement, if so, executing step 510, otherwise, returning to executing step 502.
Step 510: and replacing the first frame image with the fusion image in the target video to form a new target video.
Because the dim-light conditions under which the target video is shot are not exactly the same, the noise positions and noise amounts of the image frames of the target video differ. Therefore, the filter coefficients of the Kalman filter matrix can be adjusted through the first adjustment value and the second adjustment value to avoid unbalanced filtering, and the dark channel defogging process of the image to be enhanced can be adjusted through the third adjustment value and the fourth adjustment value to avoid the graphics processor GPU defogging so thoroughly that the restored scenery looks unnatural. Fig. 6 is a schematic front-end diagram of the image processing method provided by the embodiment of the present disclosure. Because frequently adjusting the first and second adjustment values and/or the third and fourth adjustment values would harm the user experience, the adjustment values for common night-shooting environments can be packaged as fixed parameter modules for convenient invocation by the graphics processor GPU; in the display interface of the front-end application, the user can switch between icons corresponding to different fixed parameter modules. Fig. 6 exemplarily shows types 1 to 5 for adjusting the first and second adjustment values and/or the third and fourth adjustment values.
Fig. 7a and fig. 7b are schematic diagrams of the effect of the image processing method provided by the embodiments of the present disclosure. A first frame image and a second frame image, being two continuous frame images in a target video, are extracted through the processing queue of the graphics processor GPU, and the extracted first frame image and second frame image are respectively filtered to obtain an image to be enhanced; image enhancement processing is performed based on the obtained image to be enhanced to form a fused image; and in the target video, the first frame image is replaced with the fused image. All image frames in the target video are processed cyclically in this way until the processing queue of the GPU has processed all image frames in the target video, forming a new target video.
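Composing the earlier sketches, a per-frame loop corresponding to steps 501-510 of fig. 5 might look as follows; frames are assumed to be float arrays in [0, 1], and the noise check of step 509 is omitted:

```python
def process_video(frames: list) -> list:
    """Replace each frame T-1 with the fused image formed from frames T-1 and T."""
    output = []
    for t in range(len(frames) - 1):
        denoised = temporal_denoise(frames[t], frames[t + 1])  # steps 501-504
        inverted = 1.0 - denoised                              # step 505
        airlight = air_light(inverted)                         # steps 506-507
        output.append(recover(inverted, airlight))             # steps 508 and 510
    output.append(frames[-1])  # the last frame has no successor (an assumption)
    return output
```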
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the disclosed embodiments may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present disclosure may take the form of a computer program product on one or more computer-usable storage media (including disk storage, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present disclosure are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a description of the preferred embodiments of the present disclosure and is not intended to limit its scope of protection; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within the scope of protection of the present disclosure.

Claims (14)

1. An image processing method, the method comprising:
extracting a first frame image and a second frame image from a target video, wherein the first frame image and the second frame image are consecutive images;
respectively filtering the extracted first frame image and second frame image to obtain an image to be enhanced, wherein the image to be enhanced is the result of summing a first parameter and the first frame image, and the first parameter represents the product of the difference value corresponding to the first frame image and the second frame image and the superposition weight of the first frame image and the second frame image;
performing image enhancement processing based on the obtained image to be enhanced to form a fused image; and
in the target video, replacing the first frame image with the fused image.
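For illustration only (not part of the claim language): read literally, the filtering of claim 1 amounts to a weighted temporal blend of the two frames. A numpy sketch under that reading, with the difference taken as (second − first) and the superposition weight simplified to a scalar:

```python
import numpy as np

def image_to_enhance(first: np.ndarray, second: np.ndarray, w: float) -> np.ndarray:
    """Claim 1 read literally: output = first + w * difference, where the
    difference is assumed to be (second - first) and w is the
    superposition weight, reduced to a scalar for illustration."""
    f = first.astype(np.float32)
    s = second.astype(np.float32)
    out = f + w * (s - f)
    return np.clip(out, 0, 255).astype(np.uint8)
```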
2. The method according to claim 1, wherein the respectively filtering the extracted first frame image and second frame image comprises:
respectively performing Gaussian blur processing on the first frame image and the second frame image;
determining an absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image;
constructing a Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image; and
determining filtering parameters of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image.
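An illustrative sketch of claim 2's difference computation using OpenCV; the Gaussian kernel size and sigma are assumptions, as the claim does not fix them:

```python
import cv2

def blurred_abs_diff(first, second, ksize=(5, 5), sigma=1.5):
    """Per-pixel absolute difference of the Gaussian-blurred frames.
    Kernel size and sigma are illustrative choices, not from the claim."""
    b1 = cv2.GaussianBlur(first, ksize, sigma)
    b2 = cv2.GaussianBlur(second, ksize, sigma)
    return cv2.absdiff(b1, b2)
```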
3. The method of claim 2, wherein the determining the filtering parameters of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image comprises:
determining an update value of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image and a first adjustment value;
determining a weight adjustment value of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image and a second adjustment value; and
determining the superposition weight of the first frame image and the second frame image based on the update value of the Kalman filter matrix and the weight adjustment value of the Kalman filter matrix.
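Claim 3 does not recite the functional forms of the update value, the weight adjustment value, or their combination. The sketch below is therefore one plausible reading, in which each adjustment value linearly scales the normalized blur difference and larger motion lowers the superposition weight; every formula here is an assumption:

```python
import numpy as np

def superposition_weight(abs_diff, adj1, adj2):
    """Assumed forms: the update value and the weight adjustment value
    each scale the normalized blur difference linearly; their sum lowers
    the per-pixel weight so strongly moving regions are filtered less
    aggressively (avoiding ghosting)."""
    d = abs_diff.astype(np.float32) / 255.0
    update_value = adj1 * d            # assumed form of the update value
    weight_adjustment = adj2 * d       # assumed form of the weight adjustment value
    return np.clip(1.0 - (update_value + weight_adjustment), 0.0, 1.0)
```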
4. The method of claim 1, wherein the performing image enhancement processing based on the obtained image to be enhanced to form a fused image comprises:
performing anti-logarithmic processing on the image to be enhanced to form an image to be defogged;
performing dark channel defogging processing on the formed image to be defogged to form an enhanced image; and
performing anti-logarithmic processing on the formed enhanced image to form a fused image.
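One reading of claim 4 (illustrative only) is that the two "anti-logarithmic" steps map the image into a log domain for defogging and back out with the inverse exponential transform; the sketch below assumes that reading and takes the dark channel defogging step as a callable:

```python
import numpy as np

def enhance_via_defog(img, defog):
    """Assumed reading of claim 4: transform into a log domain, apply the
    dark channel defogging step (passed in as `defog`), then map back
    with the inverse exponential transform."""
    x = img.astype(np.float32) / 255.0
    log_img = np.log1p(x)          # first transform, as read here
    defogged = defog(log_img)      # dark channel defogging (see claim 5)
    fused = np.expm1(defogged)     # inverse transform back to intensity
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Trivial usage with an identity defogging step:
# out = enhance_via_defog(frame, lambda x: x)
```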
5. The method of claim 4, wherein the performing dark channel defogging processing on the formed image to be defogged to form an enhanced image comprises:
determining a dark channel value of the image to be defogged;
determining a gray value of the image to be defogged;
determining an atmospheric light value of the image to be defogged based on the dark channel value, a third adjustment value and the gray value of the image to be defogged; and
processing the image to be defogged according to the atmospheric light value of the image to be defogged and a fourth adjustment value to form the enhanced image.
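An illustrative sketch of claim 5 under standard dark-channel-prior assumptions: the third adjustment value is taken as the fraction of brightest dark-channel pixels used to estimate the atmospheric light, and the fourth adjustment value damps the transmission so that defogging is not too thorough. Both interpretations go beyond the claim text:

```python
import numpy as np

def dark_channel_defog(img, dark, adj3, adj4):
    """Sketch of claim 5. img: H x W x 3 float32 in [0, 1]; dark: its
    dark channel map (see claim 6). adj3 is read as the fraction of
    brightest dark-channel pixels used for the atmospheric light;
    adj4 damps the transmission. Both readings are assumptions."""
    gray = img.mean(axis=2)                      # gray value of the image
    n = max(1, int(adj3 * dark.size))            # e.g. adj3 = 0.001
    idx = np.argsort(dark.ravel())[-n:]          # brightest dark-channel pixels
    a = float(gray.ravel()[idx].mean())          # atmospheric light value
    t = 1.0 - adj4 * (dark / max(a, 1e-6))       # damped transmission estimate
    t = np.clip(t, 0.1, 1.0)
    return np.clip((img - a) / t[..., None] + a, 0.0, 1.0)
```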
6. The method of claim 5, wherein the method further comprises:
determining the minimum value among the three channels of each pixel point of the image to be defogged; and
assigning the minimum value among the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
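As recited, the dark channel of claim 6 is simply the per-pixel minimum over the three color channels (no spatial minimum window is recited); in numpy:

```python
import numpy as np

def dark_channel(img: np.ndarray) -> np.ndarray:
    """Per claim 6: each pixel of the dark channel image is the minimum
    of the three color channels at that pixel. img: H x W x 3."""
    return img.min(axis=2)
```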
7. An image processing apparatus, characterized in that the apparatus comprises:
an image processing module, configured to extract a first frame image and a second frame image from a target video, wherein the first frame image and the second frame image are consecutive images;
an image filtering module, configured to respectively filter the extracted first frame image and second frame image to obtain an image to be enhanced, wherein the image to be enhanced is the result of summing a first parameter and the first frame image, and the first parameter represents the product of the difference value corresponding to the first frame image and the second frame image and the superposition weight of the first frame image and the second frame image; and
an image enhancement module, configured to perform image enhancement processing based on the obtained image to be enhanced to form a fused image;
wherein the image processing module is further configured to replace, in the target video, the first frame image with the fused image.
8. The apparatus of claim 7, wherein:
the image filtering module is configured to respectively perform Gaussian blur processing on the first frame image and the second frame image;
the image filtering module is configured to determine an absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image;
the image filtering module is configured to construct a Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image; and
the image filtering module is configured to determine filtering parameters of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image.
9. The apparatus of claim 8, wherein:
the image filtering module is configured to determine an update value of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image and a first adjustment value;
the image filtering module is configured to determine a weight adjustment value of the Kalman filter matrix based on the absolute value of the difference of the Gaussian blurs of the first frame image and the second frame image and a second adjustment value; and
the image filtering module is configured to determine the superposition weight of the first frame image and the second frame image based on the update value of the Kalman filter matrix and the weight adjustment value of the Kalman filter matrix.
10. The apparatus of claim 7, wherein:
the image enhancement module is configured to perform anti-logarithmic processing on the image to be enhanced to form an image to be defogged;
the image enhancement module is configured to perform dark channel defogging processing on the formed image to be defogged to form an enhanced image; and
the image enhancement module is configured to perform anti-logarithmic processing on the formed enhanced image to form a fused image.
11. The apparatus of claim 10, wherein:
the image enhancement module is configured to determine a dark channel value of the image to be defogged;
the image enhancement module is configured to determine a gray value of the image to be defogged;
the image enhancement module is configured to determine an atmospheric light value of the image to be defogged based on the dark channel value, a third adjustment value and the gray value of the image to be defogged; and
the image enhancement module is configured to process the image to be defogged according to the atmospheric light value of the image to be defogged and a fourth adjustment value to form the enhanced image.
12. The apparatus of claim 11, wherein:
the image enhancement module is configured to determine the minimum value among the three channels of each pixel point of the image to be defogged; and
the image enhancement module is configured to assign the minimum value among the three channels of each pixel point of the image to be defogged to the corresponding pixel point in the dark channel image.
13. An image processing apparatus, comprising:
a memory, configured to store executable instructions; and
a processor, configured to implement the image processing method according to any one of claims 1 to 6 when executing the executable instructions.
14. A storage medium storing executable instructions which, when executed, implement the image processing method according to any one of claims 1 to 6.
CN201910380316.0A 2019-05-08 2019-05-08 Image processing method, device and storage medium Active CN111915496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910380316.0A CN111915496B (en) 2019-05-08 2019-05-08 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111915496A CN111915496A (en) 2020-11-10
CN111915496B true CN111915496B (en) 2024-04-23

Family

ID=73241829

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant