CN113781336B - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN113781336B
Authority
CN
China
Prior art keywords
frame
image frame
image
reference image
deblurring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111012338.5A
Other languages
Chinese (zh)
Other versions
CN113781336A (en)
Inventor
邹子杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111012338.5A priority Critical patent/CN113781336B/en
Publication of CN113781336A publication Critical patent/CN113781336A/en
Application granted granted Critical
Publication of CN113781336B publication Critical patent/CN113781336B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20216Image averaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method, an image processing device, electronic equipment and a storage medium, and relates to the technical field of image and video processing. The image processing method comprises the following steps: determining a first image frame; determining at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, wherein N and M are positive integers; performing single-frame deblurring and image fusion processing on the reference image frames of each frame to obtain second image frames; and determining a fusion image frame of the second image frame and the first image frame, and performing single-frame deblurring processing on the fusion image frame to generate a processed first image frame.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image and video processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
Background
During image capture, images become blurred due to shake, defocus, movement of the photographed object, or the like. In some cases, because the environment is complex and the background is highly similar to the photographed object, it is difficult to deblur the photographed object effectively.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby improving an image deblurring effect at least to some extent.
According to a first aspect of the present disclosure, there is provided an image processing method including: determining a first image frame; determining at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, wherein N and M are positive integers; performing single-frame deblurring and image fusion processing on the reference image frames of each frame to obtain second image frames; and determining a fusion image frame of the second image frame and the first image frame, and performing single-frame deblurring processing on the fusion image frame to generate a processed first image frame.
According to a second aspect of the present disclosure, there is provided an apparatus for image processing, comprising: a first determining module for determining a first image frame; a second determining module, configured to determine at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, where N and M are positive integers; the fusion module is used for carrying out single-frame deblurring and image fusion processing on the reference image frames of each frame to obtain second image frames; the fusion module is further configured to determine a fused image frame of the second image frame and the first image frame, and perform single-frame deblurring processing on the fused image frame to obtain a deblurred first image frame.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the image processing method as described above in the first aspect.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the image processing method of the first aspect as described above.
Drawings
In order to more clearly illustrate the examples of the present application or the technical solutions in the prior art, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
fig. 2 shows a schematic structural diagram of an electronic device in the present exemplary embodiment;
Fig. 3 shows a flowchart of an image processing method in the present exemplary embodiment;
fig. 4 shows a schematic diagram of determining a first image frame in the present exemplary embodiment;
fig. 5 shows a schematic diagram of determining a reference image frame in the present exemplary embodiment;
fig. 6 shows a flowchart of determining image frame blurriness in the present exemplary embodiment;
fig. 7A shows a schematic diagram of determining image frame blurriness in the present exemplary embodiment;
fig. 7B shows a schematic diagram of determining image frame blurriness in the present exemplary embodiment;
fig. 8 is a schematic diagram showing the structure of a circulation network in the present exemplary embodiment;
fig. 9 shows a flowchart of obtaining a second image frame through a loop network in the present exemplary embodiment;
Fig. 10 shows a schematic diagram of image fusion in the present exemplary embodiment;
fig. 11 is a schematic diagram showing the structure of a single frame deblurring network in the present exemplary embodiment;
FIG. 12 shows a schematic diagram of a training single frame deblurring network in the present exemplary embodiment;
fig. 13 shows a flowchart of image registration in the present exemplary embodiment;
fig. 14 shows a schematic diagram of matching of an image pyramid with feature points in the present exemplary embodiment;
Fig. 15 shows a flowchart of another image processing method in the present exemplary embodiment;
fig. 16 shows a schematic configuration diagram of an image processing apparatus in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Image blur can be classified by cause into lens hardware blur, motion blur, defocus blur, and other types. Motion blur is the situation that occurs most often when shooting with a terminal such as a smartphone, and its causes are very complex. For face images in particular, the scene is usually a natural environment with a complex background; the blurred textures and colours bear an extremely high similarity to backgrounds of varying degrees, and the blur is discontinuous and hard to recognise, so deblurring such images is more difficult.
In one scheme of the related art, a blur kernel of the image is estimated by a data model, and deblurring is performed using the blur kernel. However, when the causes of blur are complex and difficult to analyse, the blur kernel cannot be estimated accurately, which degrades the deblurring quality of the image.
In view of the above, exemplary embodiments of the present disclosure provide an image processing method. The system architecture of the image processing method operating environment is described first.
Fig. 1 shows a schematic diagram of a system architecture, which system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a terminal device such as a desktop computer, a notebook computer, a smart phone, a tablet computer, etc., and the server 120 may be a server providing services related to image processing, or a cluster formed by a plurality of servers. The terminal 110 and the server 120 may form a connection through a wired or wireless communication link for data interaction. The terminal 110 may capture or otherwise obtain a first image frame or a reference image frame from another device. In one embodiment, the terminal 110 may send the first image frame and the reference image frame to the server 120, and the server 120 outputs a deblurred image (e.g., the deblurred first image frame or the deblurred fused reference image frame) by performing the image processing method in the present exemplary embodiment, and returns to the terminal 110. In one embodiment, the image deblurring method in the present exemplary embodiment may be performed by the terminal 110 to obtain a deblurred image.
Application scenarios of the image deblurring method include, but are not limited to: a user opens a photographing function on the terminal 110, and the terminal 110 acquires image frames through a built-in camera; the terminal 110 performs the above image processing method, or sends the acquired image frame to the server 120, and the server 120 performs the above image processing method, so as to finally obtain a deblurred first image frame, and displays and stores the deblurred first image frame on a shooting interface of the terminal 110, so as to implement image deblurring processing synchronous with shooting.
As described above, the execution subject of the image processing method may be the terminal 110 or the server 120. The exemplary embodiments of the present disclosure also provide an electronic device for performing the image processing method, which may be the terminal 110 or the server 120. The configuration of the above-described electronic device will be exemplarily described below taking the mobile terminal 200 in fig. 2 as an example. It will be appreciated by those skilled in the art that the configuration of fig. 2 can also be applied to stationary type devices in addition to components specifically for mobile purposes.
As shown in fig. 2, the mobile terminal 200 may specifically include: processor 210, internal memory 221, external memory interface 222, USB (Universal Serial Bus ) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, flash 292, motor 293, keys 294, and SIM (Subscriber Identification Module, subscriber identity module) card interface 295, and the like.
Processor 210 may include one or more processing units, for example: an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor and/or an NPU (Neural-Network Processing Unit), and the like.
The encoder may encode (i.e., compress) the image or video, for example, the current image, to obtain the bitstream data; the decoder may decode (i.e., decompress) the code stream data of the image or video to restore the image or video data. The mobile terminal 200 may support one or more encoders and decoders. In this way, the mobile terminal 200 can process images or videos in various encoding formats, such as: image formats such as JPEG (Joint Photographic Experts Group ), PNG (Portable Network Graphics, portable network graphics), BMP (Bitmap), and video formats such as MPEG (Moving Picture Experts Group ) 1, MPEG2, h.263, h.264, HEVC (High Efficiency Video Coding ).
In one embodiment, processor 210 may include one or more interfaces through which connections are made with other components of mobile terminal 200.
Internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a volatile memory and a nonvolatile memory. The processor 210 performs various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing files of images, videos, and the like. The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or may be connected to a headset or other electronic device.
The charge management module 240 is configured to receive a charge input from a charger. The charging management module 240 may also supply power to the device through the power management module 241 while charging the battery 242; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide a 2G, 3G, 4G, 5G, etc. mobile communication solution applied on the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions of WLAN (Wireless Local Area Networks, wireless local area network) such as Wi-Fi (Wireless Fidelity ) network, BT (Bluetooth), GNSS (Global Navigation Satellite System ), FM (Frequency Modulation, frequency modulation), NFC (Near Field Communication, short range wireless communication technology), IR (Infrared technology) and the like applied on the mobile terminal 200.
The mobile terminal 200 may implement a display function through a GPU, a display screen 290, an AP, and the like, and display a user interface. For example, when a user performs camera detection, the mobile terminal 200 may display an interface of a camera detection App (Application) in the display screen 290.
The mobile terminal 200 may implement a photographing function through an ISP, a camera module 291, an encoder, a decoder, a GPU, a display 290, an AP, and the like. For example, the user may start an image or video capturing function in the hidden camera detection App, and at this time, an image of the space to be detected may be acquired through the image capturing module 291.
The mobile terminal 200 may implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone interface 274, an AP, and the like.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyro sensor 2803, a barometric sensor 2804, etc. to implement a corresponding sensing function.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The motor 293 may generate vibration cues, may also be used for touch vibration feedback, or the like. The keys 294 include a power on key, a volume key, etc.
The mobile terminal 200 may support one or more SIM card interfaces 295 for interfacing with a SIM card to enable telephony and mobile communications, among other functions.
Fig. 3 illustrates an exemplary flow of the image processing method described above, which may include:
s301, determining a first image frame.
In a specific embodiment, a method of determining a first image frame includes:
s3011, acquiring an original image frame acquired at a shooting trigger time.
S3012, determining the image frame with the lowest blur degree as the first image frame from among the P image frames before the original image frame, the Q image frames after the original image frame, and the original image frame itself, where P and Q are positive integers.
Referring to fig. 4, when the user triggers the camera or a photographing mode of the terminal, the camera of the terminal automatically acquires image frames and temporarily stores them in the terminal. When the photographing key of the camera or photographing mode is triggered, the image frame captured at that moment is acquired as the original image frame, together with the P image frames temporarily stored in the terminal before the original image frame. The terminal also automatically buffers the Q image frames after the shooting moment. P and Q may differ, and their values are not limited here. As shown in fig. 4, the photographing key is triggered at time t2; the image frame F2 acquired at time t2 is the original image frame, and the image frame F1 acquired at time t1 before t2 is the P frames (here, 1 frame) acquired before the original frame. Similarly, the image frame F3 acquired at time t3 and the image frame F4 acquired at time t4 after t2 are the Q frames (here, 2 frames) acquired after the original frame. Among the original image frame F2, the preceding image frame F1, and the subsequent image frames F3 and F4, the image frame with the lowest blur degree is determined as the first image frame. If image frame F1 has the lowest blur degree, then F1 is the clearest frame near the trigger moment of the photographing key and the picture the user most wished to capture, so image frame F1 is taken as the first image frame.
S302, determining at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, wherein N and M are positive integers.
In a specific embodiment, a method of determining at least one frame of reference image frames includes:
determining the image frames whose blur degree is lower than a first threshold as the reference image frames from among the N image frames before the first image frame and the M image frames after the first image frame.
Referring to fig. 5, if the image frame F4 acquired at time t4 is the first image frame, the image frame F3 acquired at time t3, the image frame F2 acquired at time t2, and the image frame F1 acquired at time t1 before t4 are acquired; image frames F1, F2, F3 are the N frames (here, 3 frames) preceding the first image frame. Similarly, the image frame F5 acquired at time t5, the image frame F6 acquired at time t6, and the image frame F7 acquired at time t7 after t4 are acquired; image frames F5, F6, F7 are the M frames (here, 3 frames) following the first image frame. M and N may be equal or different. The blur degrees of F1, F2, F3, F5, F6, F7 are determined, and the image frames whose blur degree is below the first threshold are screened out as reference image frames. For example, if the blur degrees of image frames F1, F3, F5 and F6 are below the first threshold, then image frames F1, F3, F5 and F6 are the reference image frames.
In an alternative example, the first threshold may be set by human, or factory set by the terminal.
By this method, clearer frames among those adjacent to the first image frame can be selected as reference image frames, overly blurred image frames (such as image frames F2 and F7) are excluded, and the interference of the acquired reference frames on the first image frame in subsequent image processing is reduced.
In a specific embodiment, referring to fig. 6, a method of determining the blur degree of an image frame includes:
s510, determining a pixel value of each pixel point of the image frame, determining a corresponding pixel point with the pixel value larger than a second threshold value as a first pixel point, and determining a corresponding pixel point with the pixel value smaller than or equal to the second threshold value as a second pixel point;
s520, obtaining a first average value of pixel values of all the first pixel points of the image frame and a second average value of pixel values of all the second pixel points of the image frame;
and S530, determining the ambiguity of the image frame by determining the difference value of the first mean value and the second mean value.
In an alternative embodiment, a HOG value (Histogram of Oriented Gradients) is obtained for each pixel of the image frame; a pixel whose HOG value is greater than the second threshold is a first pixel point, and a pixel whose HOG value is less than or equal to the second threshold is a second pixel point. The mean of the HOG values of all first pixel points of the image frame is taken as the first mean value, and the mean of the HOG values of all second pixel points of the image frame is taken as the second mean value.
Alternatively, the second threshold may be the median of all HOG values acquired.
Referring to fig. 7A and 7B, fig. 7A is a clear face image and fig. 7B is a blurred face image. In a clear face image the texture is clearly visible, whereas in a blurred face image the texture is largely smeared. The gradient therefore changes strongly in a clear face image and only weakly in a blurred one. In the image histogram, the HOG gradient values of a blurred face are relatively concentrated, while those of a clear face are either large or small and are distributed towards the two extremes. In a clear picture the difference between the first mean value and the second mean value is therefore large; in a blurred picture it is small. The magnitude of this difference can thus be used to judge the blur of an image frame: the larger the difference, the clearer the frame; the smaller the difference, the more blurred the frame.
The difference can be obtained by formula (1):

Diff_HOG = α · AVG(HOG_high(x, y)) − β · AVG(HOG_low(x, y)), with α > 1 and β < 1   (1)

where Diff_HOG is the difference between the first mean value and the second mean value, AVG denotes averaging, HOG_high(x, y) denotes the HOG value of the pixel at coordinates (x, y) when that value is greater than the second threshold, and HOG_low(x, y) denotes the HOG value of the pixel at coordinates (x, y) when that value is less than or equal to the second threshold. α is used to make the first term larger and β to make the second term smaller, so that the difference Diff_HOG is more pronounced and the situation where slight blur yields a difference too small to judge is avoided. Computing the image blur degree from the difference of these two terms keeps the calculation simple and reduces the computational load of the whole scheme.
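As a purely illustrative sketch (not part of the claimed method), the blur measure above could be approximated in Python as follows, assuming the per-pixel "HOG value" is taken as the gradient magnitude, the second threshold is the median of those values, and α = 1.5, β = 0.5 are example weights:

```python
import numpy as np

def blur_score(gray: np.ndarray, alpha: float = 1.5, beta: float = 0.5) -> float:
    """Diff_HOG-style sharpness score: the larger the value, the sharper the frame.
    The per-pixel "HOG value" is approximated here by the gradient magnitude."""
    g = gray.astype(np.float32)
    gy, gx = np.gradient(g)                  # per-pixel gradients
    mag = np.sqrt(gx * gx + gy * gy)         # gradient magnitude per pixel
    thresh = np.median(mag)                  # second threshold: median of all values
    high = mag[mag > thresh]                 # first pixel points
    low = mag[mag <= thresh]                 # second pixel points
    if high.size == 0 or low.size == 0:
        return 0.0
    return float(alpha * high.mean() - beta * low.mean())
```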
In addition, gradient statistics, a deep learning algorithm, etc. may be used to calculate the image blur (or sharpness).
The above method of determining the blur degree of an image frame may be used both to determine, from the P image frames before the original image frame, the Q image frames after it and the original image frame itself, the image frame with the lowest blur degree as the first image frame, and to determine, from the N image frames before the first image frame and the M image frames after it, the image frames whose blur degree is below the first threshold as the reference image frames.
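For illustration only, these two selection steps could then be sketched as below; blur_score is the sketch above, and because it is a sharpness score, "blur degree below the first threshold" corresponds to a score above a threshold expressed on the same scale:

```python
def pick_first_frame(candidate_frames):
    """S3011/S3012: among the original frame and its P preceding / Q following
    neighbours, the frame with the lowest blur (highest sharpness score)
    becomes the first image frame."""
    return max(candidate_frames, key=blur_score)

def pick_reference_frames(neighbour_frames, first_threshold):
    """S302: from the N preceding and M following frames, keep those whose
    blur is below the first threshold, i.e. whose sharpness score exceeds
    the corresponding threshold value."""
    return [f for f in neighbour_frames if blur_score(f) > first_threshold]
```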
And S303, performing single-frame deblurring and image fusion processing on each frame of reference image frame to obtain a second image frame.
S304, determining a fused image frame of the second image frame and the first image frame, and performing single-frame deblurring processing on the fused image frame to obtain a deblurred first image frame.
In one embodiment, referring to fig. 8, a cyclic network is constructed and the reference image frames are input into it. The first reference image frame is input into the single-frame deblurring network within the cyclic network for single-frame deblurring, and the deblurred first reference image frame is fused with the second reference image frame to obtain a fused reference image frame. The fused reference image frame is input into the single-frame deblurring network for deblurring, yielding a deblurred fused reference image frame. The deblurred fused reference image frame is then fused with the next single reference image frame to obtain a new fused reference frame, which is again input into the single-frame deblurring network for deblurring; this fusing and deblurring is repeated until no unfused single reference image frame remains, at which point the deblurred fused reference frame is the second image frame. The second image frame is fused with the first image frame, and the fused image frame is input into the single-frame deblurring network for single-frame deblurring, obtaining the deblurred first image frame. The processed first image frame is clearer.
In this way, multi-frame processing and single-frame deblurring are built into one cyclic network: at least one reference image frame is fused and deblurred to obtain the second image frame, which contains the screened image-frame data adjacent to the first image frame with noise factors removed, so its image quality is higher. After the second image frame is fused with the first image frame and deblurred, the resulting deblurred first image frame has higher quality and is clearer, and noise in the original image is removed. Moreover, within the cyclic network, single-frame deblurring is applied to each (fused) reference image frame, guaranteeing the quality of every (fused) reference frame, while multi-frame fusion merges these high-quality reference frames into a clearer second image frame; the first image frame is then processed on the basis of the second image frame to obtain a clear frame. The single-frame deblurring inside the cyclic network guarantees per-frame image quality, and combining it with multi-frame fusion overcomes the lack of preceding and following temporal information in a purely single-frame algorithm, improves the effect of the multi-frame algorithm, simplifies the pipeline, and solves single-frame processing and multi-frame fusion with one network.
In another embodiment, referring to fig. 9, fig. 9 is a flow chart of a process for obtaining a second image frame through a looped network.
S1110, inputting a first reference image frame, and performing single-frame deblurring processing on the first reference image frame to obtain a deblurred first reference image frame.
S1120, judging whether the unfused reference image frames exist or not.
If not, i.e., the input reference image frame has only the first reference image frame, S1130 is performed; if there are other reference image frames in addition to the first reference image frame, S1140 is performed.
S1130, determining the deblurred first reference image frame as the second image frame.
When the input reference image frame only has the first reference image frame, the first reference image frame is subjected to single-frame deblurring treatment, and the deblurred reference image frame is the second image frame.
S1140, fusing the deblurred first reference image frame and the second reference image frame.
And S1150, obtaining the fusion reference image frame.
S1160, performing single-frame deblurring processing on the fusion reference image frame.
S1170, obtaining the deblurred fusion reference image frame.
S1180, judging whether there is an unfused reference image frame.
If not, then S1190 is performed; if there are other unfused reference image frames, S1210 is performed.
And S1190, determining the deblurred fusion reference image frame as a second image frame.
S1210, re-fusing a frame of unfused reference image frame with the deblurred fused reference image frame, and continuing to execute S1150 until there is no unfused reference image frame.
In one embodiment, at least one reference image frame is input, including a first reference image frame F1. Single-frame deblurring is performed on F1 to obtain the deblurred image frame F1'. If there is only one reference image frame, F1' is the second image frame. If there is also a second reference image frame F2, F2 and F1' are fused to obtain a fused reference image frame F2m, and single-frame deblurring is performed on F2m to obtain the deblurred fused reference image frame F2m'. If there are no further reference image frames at this point, F2m' is the second image frame. If there is still a reference image frame F3, F3 and F2m' are fused to obtain a new fused reference image frame F3m, and single-frame deblurring is performed on F3m to obtain the deblurred fused reference image frame F3m'. If there are no further reference image frames, F3m' is the second image frame. If there are still reference image frames F4, F5, ..., Fn (n a positive integer), the above fusing and deblurring is repeated until the deblurred fused reference image frame Fnm' is obtained; at that point no unfused reference image frame remains, and Fnm' is the second image frame.
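A minimal sketch of this loop, assuming deblur_single_frame stands in for the single-frame deblurring network described below and fuse is any two-frame fusion operation (for example the weighted average given next):

```python
def cyclic_fuse_and_deblur(reference_frames, deblur_single_frame, fuse):
    """Fig. 9: deblur the first reference frame (S1110); while unfused
    reference frames remain, fuse the next one with the current deblurred
    result (S1140/S1210) and deblur again (S1160). The final deblurred
    fusion is the second image frame (S1130/S1190)."""
    second = deblur_single_frame(reference_frames[0])
    for i, ref in enumerate(reference_frames[1:], start=2):
        merged = fuse(ref, second, i)       # i-th frame joins the running fusion
        second = deblur_single_frame(merged)
    return second
```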
In another embodiment, the image frames may be fused by weighted averaging, and the weights may be determined according to the number of image frames actually fused. In the fusion process, as shown in fig. 10, the images may be superimposed by weighted averaging. Assume that n+1 reference image frames are fused in order starting from the 1st and 2nd frames, referring to the following formula:

F_im = (1 · F_i + (i − 1) · F_(i−1)m) / i

where i is any positive integer in [2, n+1]. When the i-th frame is fused with the fusion of the previous i−1 frames (i.e. F_(i−1)m), their weights are 1 and i−1 respectively. It follows that in fig. 10 the weight M1 is 1 and the weight M2 is 2.
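One possible reading of these weights, as a sketch:

```python
import numpy as np

def weighted_fuse(frame_i, fused_prev, i):
    """The i-th frame gets weight 1, the previous fusion result weight i - 1,
    and the weighted sum is normalised by the total weight i."""
    return (frame_i.astype(np.float32) + (i - 1) * fused_prev) / i
```

Ignoring the deblurring applied between steps, this recursion gives every frame an equal share: fusing frame 3 with the fusion of frames 1 and 2 at weights 1 and 2 yields (F1 + F2 + F3) / 3.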
In one embodiment, a single frame image frame may be deblurred through a single frame deblurring network, a reference image frame may be deblurred to obtain a deblurred reference image frame, a fused reference frame may be deblurred to obtain a deblurred fused reference frame, and so on.
The single-frame deblurring network is a pre-trained neural network for single-frame image deblurring, and an end-to-end network structure can be adopted. FIG. 11 illustrates an exemplary architecture of a single frame deblurring network that employs a U-Net like architecture. After inputting the image frames into the network, the process is as follows:
First, the pixel rearrangement layer at the input end performs a pixel rearrangement operation. The input image is usually large, and it can be rearranged into more channels through a space_to_depth function so that the image size per channel is reduced. For example, when the block_size of space_to_depth is set to 2, the pixels of each 2×2 grid in a channel are split into 4 different channels, which is equivalent to splitting a one-channel image into 4 new channel images whose width and height are half those of the original; a three-channel input of size H × W × 3 thus becomes a feature image of size H/2 × W/2 × 12. This facilitates subsequent operations such as convolution on the smaller feature image.
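A sketch of this input-side rearrangement, assuming a PyTorch tensor in (batch, channel, height, width) layout; pixel_unshuffle is the standard equivalent of space_to_depth:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 256, 256)               # one H x W x 3 input frame
y = F.pixel_unshuffle(x, downscale_factor=2)  # split each 2x2 grid into channels
print(y.shape)                                # torch.Size([1, 12, 128, 128]) -> H/2 x W/2 x 12
```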
Second, a down-sampling part consisting of a 2D convolution layer, residual blocks and down-sampling layers down-samples the feature image and applies multi-scale convolutions during down-sampling, so as to extract image features at multiple scales. The down-sampling layers may be implemented with pooling operations.
Then, an up-sampling part consisting of residual blocks, up-sampling layers and a 2D convolution layer up-samples the down-sampled feature image and applies multi-scale convolutions during up-sampling, so as to restore image detail at multiple scales. The up-sampling layers may be implemented with transposed convolution, interpolation (e.g. bilinear interpolation), and the like. The structure of the up-sampling part may be symmetrical to that of the down-sampling part, and its operations may be the inverse of those of the down-sampling part.
Finally, the pixel rearrangement layer at the output end performs a pixel rearrangement operation, which may be the inverse of the rearrangement at the input end and restores the original image size; for example, the H/2 × W/2 × 12 feature image may be rearranged into a three-channel image of H × W × 3, so that the output deblurred image has the same size as the input image.
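A rough structural sketch of such a U-Net-like network in PyTorch; the layer counts, channel widths and the omission of residual blocks are simplifying assumptions, not the network actually claimed:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleFrameDeblurSketch(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(12, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)   # up-sampling layer
        self.out = nn.Conv2d(ch * 2, 12, 3, padding=1)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = F.pixel_unshuffle(x, 2)            # input rearrangement: (B, 12, H/2, W/2)
        e1 = self.enc1(x)                      # down-sampling path
        e2 = self.enc2(F.max_pool2d(e1, 2))    # pooling as the down-sampling layer
        d1 = self.up(e2)                       # up-sampling path (transposed convolution)
        y = self.out(torch.cat([d1, e1], 1))   # symmetric skip connection
        return F.pixel_shuffle(y, 2)           # output rearrangement: (B, 3, H, W)
```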
Fig. 12 shows a schematic diagram of training the single-frame deblurring network. An image frame to be processed and a standard image frame (ground truth) are acquired, the standard image frame being the clear image frame corresponding to the image frame to be processed; for example, a large number of clear image frames may be collected as standard image frames and blurred to obtain the corresponding image frames to be processed. The image frame to be processed is input into the single-frame deblurring network to be trained and compared with the standard image frame to output a first comparison result; the first comparison result is substituted into a loss function to obtain a loss result. The loss function may be L1, L2, or the like, and its choice is not limited here. The loss result is compared with the standard image frame to obtain a second comparison result, which is again substituted into the loss function for iterative calculation. The parameters of the single-frame deblurring network are updated using the loss function; for example, the gradient of each parameter can be computed through a back-propagation algorithm and each parameter updated by gradient descent. The single-frame deblurring network is trained iteratively until the result of the iterative calculation is smaller than a third threshold and the accuracy reaches the preset requirement.
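A minimal sketch of one training iteration under these assumptions (an L1 loss and gradient-descent updates via back-propagation; the patent's comparison/iteration scheme is only paraphrased here):

```python
import torch.nn.functional as F

def train_step(model, optimizer, blurred, sharp):
    """One iteration: compare the network output with the standard (ground
    truth) frame under an L1 loss and update the parameters by gradient
    descent; training stops once the loss falls below the third threshold."""
    optimizer.zero_grad()
    loss = F.l1_loss(model(blurred), sharp)
    loss.backward()        # back-propagation computes each parameter's gradient
    optimizer.step()       # gradient-descent update of the parameters
    return loss.item()
```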
It should be appreciated that a single frame deblurring network may employ a different network structure than that shown in fig. 11 or 12, for example, a GAN (Generative Adversarial Network, generation countermeasure network) structure may be employed.
Single-frame deblurring processing deblurs a single image frame at the spatial level; compared with the unprocessed image frame, the sharpness of the resulting deblurred image frame is noticeably improved.
In another embodiment, referring to fig. 13, registering each reference image frame with the first image frame before the reference image frames are input into the cyclic network comprises:
S710, performing a pyramid operation on each reference image frame and the first image frame to obtain sampled images of each reference image frame and the first image frame at a plurality of resolutions;
the pyramid operation refers to a series of downsampling of image frames by different low resolution, resulting in a set of sampled images of progressively lower resolution. The above-described multiple resolution sampled image frames may include the original image at the original resolution as a particular sampled image.
In general, a stop condition, such as a specific resolution or down-sampling factor, may be set for the pyramid operation; when this condition is reached, further down-sampling stops. The present disclosure does not limit the stop condition or the down-sampling factor of each pyramid layer. For the reference image frame P and the first image frame Q, down-sampling is performed layer by layer at factors of 1/2, 1/4 and 1/8, giving the sampled images P (i.e. the original image), P(1/2) (the sampled image obtained by down-sampling at a factor of 1/2), P(1/4) and P(1/8), and the sampled images Q, Q(1/2), Q(1/4) and Q(1/8) of Q, as illustrated in fig. 14.
And S720, performing feature point matching on the sampled image at each resolution of each reference image frame and the sampled image at the corresponding resolution of the first image frame to obtain a matched feature point pair of each reference image frame and the first image frame.
The type of feature points and the feature point detection algorithm are not limited in the present disclosure; for example, Harris corners, SIFT (Scale-Invariant Feature Transform) feature points and their detection algorithms may be used. Feature point matching is performed between the two sampled images at each resolution. Specifically, after a feature point is detected in the reference image frame P, the corresponding feature point is detected in each of its sampled images, giving a set of feature points of P, for example (P1, P2, P3, P4) shown in fig. 14; if (P1, P2, P3, P4) is successfully matched with a set of feature points (Q1, Q2, Q3, Q4) of the first image frame Q, P1 and Q1 are determined to be one matching feature point pair of P and Q. In the same way, the matching feature point pairs of each reference image frame and the first image frame are determined, giving a plurality of matching feature point pairs. Feature point matching at different scales is thus realised; since images at different scales carry different semantics, the stability of the feature point semantics across scales is ensured and the accuracy of the matching feature point pairs is improved.
And S730, registering each reference image frame and the first image frame according to the matched characteristic point pairs.
After the matching point pairs are obtained, a transformation matrix between each reference image frame and the first image frame can be calculated, for example through an optical-flow algorithm, and the reference image frame is transformed accordingly to realise registration. In this way, the reference frames are registered with the first image frame before entering the cyclic network, so that the picture in each reference frame stays consistent with the first image frame and the noise introduced during subsequent fusion and deblurring by differing viewing angles is reduced.
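For illustration, the registration of S710–S730 could be approximated with OpenCV as below; the Harris-style corner detector, pyramidal optical-flow matcher and homography are stand-ins for the feature matching and transformation matrix described above, not the exact procedure of the method:

```python
import cv2
import numpy as np

def register_reference(reference, first):
    """Warp a reference image frame onto the first image frame."""
    g_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    g_first = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(g_ref, maxCorners=500, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True)
    # maxLevel=3 tracks the corners through a 4-level image pyramid
    matched, status, _ = cv2.calcOpticalFlowPyrLK(g_ref, g_first, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts[good], matched[good], cv2.RANSAC)
    return cv2.warpPerspective(reference, H, (first.shape[1], first.shape[0]))
```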
Fig. 15 shows another exemplary flow showing an image deblurring method in the present exemplary embodiment, including:
S1310, acquiring the original image frame captured at the shooting trigger moment;
S1320, determining the image frame with the lowest blur degree as the first image frame from among the P image frames before the original image frame, the Q image frames after it, and the original image frame itself, where P and Q are positive integers;
S1330, determining the image frames whose blur degree is lower than the first threshold as reference image frames from among the N image frames before the first image frame and the M image frames after it;
S1340, performing optical-flow registration through image pyramid construction and Harris corner detection and matching;
S1350, inputting the registered reference image frames and the first image frame into the cyclic network for image fusion and deblurring;
S1360, outputting the deblurred first image frame.
In this way, the image is processed by a cyclic network that combines single-frame deblurring with multi-frame fusion, which addresses image blur in scenes such as motion and outputs a clear image.
Exemplary embodiments of the present disclosure also provide an image deblurring apparatus. Referring to fig. 16, the image deblurring apparatus 900 includes:
a first determining module 910, configured to determine a first image frame;
a second determining module 920, configured to determine at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, where N and M are positive integers;
the fusion module 930 is configured to perform single-frame deblurring and image fusion processing on each frame of reference image frame to obtain a second image frame;
the fusion module 930 is further configured to determine a fused image frame of the second image frame and the first image frame, and perform single-frame deblurring processing on the fused image to generate a processed first image frame.
In a specific embodiment, the first determining module 910 is configured to acquire an original image frame acquired at a shooting trigger time; from the P-frame image frame before the original image frame, the Q-frame image frame after the original image frame, and the original image frame, the image frame with the lowest blur degree is determined as the first image frame, and P and Q are positive integers.
In a specific embodiment, the second determining module 920 is configured to determine, from N image frames before the first image frame and M image frames after the first image frame, an image frame with a blur level lower than the first threshold as the reference image frame.
In a specific embodiment, the first determining module 910 and/or the second determining module 920 may be configured to determine the blur degree of an image frame, where the method of determining the blur degree includes: determining a pixel value of each pixel point of the image frame, determining the pixel points whose pixel value is greater than a second threshold as first pixel points, and determining the pixel points whose pixel value is less than or equal to the second threshold as second pixel points; acquiring a first mean value of the pixel values of all first pixel points of the image frame and a second mean value of the pixel values of all second pixel points of the image frame; and determining the blur degree of the image frame from the difference between the first mean value and the second mean value.
In a specific embodiment, the second determining module 920 is configured to perform single-frame deblurring and image fusion processing on each reference image frame, and perform pyramid operation on each reference image frame and the first image frame before obtaining the second image frame, so as to obtain sampled images of each reference image frame and the first image frame under multiple resolutions; performing feature point matching on the sampling image under each resolution of each reference image frame and the sampling image under the corresponding resolution of the first image frame to obtain a matching feature point pair of each reference image frame and the first image frame; and registering each reference image frame and the first image frame according to the matched characteristic point pairs.
In a specific embodiment, the fusion module 930 is configured to perform single-frame deblurring on the first reference image frame to obtain a deblurred first reference image frame, and determine that the deblurred first reference image frame is a second image frame.
In a specific embodiment, the fusion module 930 is further configured to, when the reference image frames further include a second reference image frame, perform single-frame deblurring processing on the first reference image frame to obtain a deblurred first reference image frame, and fuse the deblurred first reference image frame with the second reference image frame to obtain a fused reference image frame; and perform single-frame deblurring processing on the fused reference image frame to obtain the second image frame.
In a specific embodiment, the fusion module 930 is further configured to, after fusing the deblurred first reference image frame and the second reference image frame to obtain a fused reference image frame, re-fuse an unfused reference image frame with the fused reference image frame to obtain an updated fused reference image frame and perform single-frame deblurring processing on it; repeat the re-fusing of an unfused reference image frame with the fused reference image frame and the single-frame deblurring of the updated fused reference image frame until no unfused reference image frame remains; and take the deblurred updated fused reference image frame as the second image frame.
In a specific embodiment, the fusing module 930 is configured to perform deblurring on an image frame using a single-frame deblurring network, where the deblurring processing includes substituting the image frame to be processed into a single-frame deblurring network model, and the training the deblurring network model includes: comparing the image frame to be processed with a standard image frame to obtain a first comparison result; substituting the first comparison result into a loss function to obtain a loss result; comparing the loss result with the standard image frame to obtain a second comparison result; substituting the second comparison result into the loss function to perform iterative calculation; and generating the deblurring network model according to the loss function of which the iterative calculation result is smaller than a third threshold value.
By means of this apparatus, multi-frame processing and single-frame deblurring are built into one cyclic network: at least one reference image frame is fused and deblurred to obtain the second image frame, which contains the screened image-frame data adjacent to the first image frame with noise factors removed, so its image quality is higher. After the second image frame is fused with the first image frame and deblurred, the resulting deblurred first image frame has higher quality and is clearer, and noise in the original image is removed. Moreover, within the cyclic network, single-frame deblurring is applied to each (fused) reference image frame, guaranteeing the quality of every (fused) reference frame, while multi-frame fusion merges these high-quality reference frames into a clearer second image frame; the first image frame is then processed on the basis of the second image frame to obtain a clear frame. The single-frame deblurring inside the cyclic network guarantees per-frame image quality, and combining it with multi-frame fusion overcomes the lack of preceding and following temporal information in a purely single-frame algorithm, improves the effect of the multi-frame algorithm, simplifies the pipeline, and solves single-frame processing and multi-frame fusion with one network; the model training is not complex, the hardware requirements on the device are not particularly high, and the scheme is easy to engineer.
Exemplary embodiments of the present disclosure also provide a computer readable storage medium, which may be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the above section of the "exemplary method" when the program product is run on the electronic device. In one embodiment, the program product may be implemented as a portable compact disc read only memory (CD-ROM) and includes program code and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
It should be noted that although several modules or units of the device for performing actions are mentioned in the detailed description above, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of image processing, comprising:
determining a first image frame;
determining at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, wherein N and M are positive integers;
performing single-frame deblurring and image fusion processing on each frame of the reference image frames to obtain a second image frame;
determining a fused image frame of the second image frame and the first image frame, and performing single-frame deblurring processing on the fused image frame to generate a processed first image frame;
wherein, if the at least one frame of reference image frame includes a plurality of frames of reference image frames, performing the single-frame deblurring and image fusion processing on each frame of reference image frame to obtain the second image frame specifically includes:
constructing a cyclic network, inputting a first frame of reference image frame into a single-frame deblurring network of the cyclic network to perform single-frame deblurring processing, and fusing the deblurred first frame of reference image frame with a second frame of reference image frame to obtain a fused reference image frame;
inputting the fused reference image frame into the single-frame deblurring network to perform deblurring processing, fusing the deblurred fused reference image frame with another frame of reference image frame, and cyclically performing the deblurring and fusing operations until no single frame of reference image frame remains unfused;
cyclically fusing an unfused reference image frame with the deblurred fused reference image frame to obtain an updated fused reference image frame, and performing single-frame deblurring processing on the updated fused reference image frame, until no unfused reference image frame exists;
and obtaining the deblurred updated fused reference image frame as the second image frame.
2. The method of claim 1, wherein determining the first image frame comprises:
acquiring an original image frame acquired at a shooting trigger moment;
and determining, as the first image frame, an image frame with the lowest ambiguity from P frames of image frames before the original image frame, Q frames of image frames after the original image frame, and the original image frame, wherein P and Q are positive integers.
3. The method of claim 1, wherein determining at least one reference image frame from N image frames preceding the first image frame and M image frames following the first image frame comprises:
and determining, as the reference image frame, an image frame whose ambiguity is lower than a first threshold value from the N frames of image frames before the first image frame and the M frames of image frames after the first image frame.
4. A method according to claim 2 or 3, wherein determining the ambiguity of an image frame comprises:
determining a pixel value of each pixel point of the image frame, determining a corresponding pixel point with the pixel value larger than a second threshold value as a first pixel point, and determining a corresponding pixel point with the pixel value smaller than or equal to the second threshold value as a second pixel point;
acquiring a first average value of pixel values of all the first pixel points of the image frame and a second average value of pixel values of all the second pixel points of the image frame;
and determining the ambiguity of the image frame by determining the difference between the first average value and the second average value.
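By way of non-limiting illustration of claims 2 to 4 only, the ambiguity measure and the frame selection based on it may be sketched as follows; the grayscale input, the concrete mapping from the brightness difference to an ambiguity value, and the helper names are assumptions made solely for this sketch.

import numpy as np

def ambiguity(gray_frame, second_threshold=128):
    """Sketch of the ambiguity measure of claim 4: pixels above the second threshold are
    the first pixel points, the rest are the second pixel points, and the ambiguity is
    derived from the difference of their mean values (here a larger bright/dark contrast
    is mapped to a lower ambiguity; the exact mapping is an assumption)."""
    pixels = gray_frame.astype(np.float32).ravel()
    first_points = pixels[pixels > second_threshold]    # first pixel points
    second_points = pixels[pixels <= second_threshold]  # second pixel points
    if first_points.size == 0 or second_points.size == 0:
        return 255.0  # degenerate frame: treat as maximally ambiguous
    return 255.0 - float(first_points.mean() - second_points.mean())

def pick_first_image_frame(candidate_frames):
    """Claim 2: choose the frame with the lowest ambiguity among the original frame
    and its P preceding and Q following frames."""
    return min(candidate_frames, key=ambiguity)

def pick_reference_frames(neighbour_frames, first_threshold):
    """Claim 3: keep neighbouring frames whose ambiguity is below the first threshold."""
    return [frame for frame in neighbour_frames if ambiguity(frame) < first_threshold]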
5. The method of claim 1, wherein, before the single-frame deblurring and image fusion processing is performed on each frame of reference image frame to obtain the second image frame, the method further comprises:
performing a pyramid operation on each frame of reference image frame and on the first image frame, to obtain sampled images of each frame of reference image frame and of the first image frame at a plurality of resolutions;
performing feature point matching between the sampled image of the reference image frame at each resolution and the sampled image of the first image frame at the corresponding resolution, to obtain matched feature point pairs between each frame of reference image frame and the first image frame;
and registering each frame of reference image frame with the first image frame according to the matched feature point pairs.
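By way of non-limiting illustration of claim 5 only, the pyramid-based feature point matching and registration might be realized with OpenCV as follows; the choice of ORB features, the brute-force matcher, and the homography warp are assumptions made solely for this sketch and are not the claimed implementation.

import cv2
import numpy as np

def register_reference_to_first(reference_frame, first_frame, levels=3):
    """Build image pyramids for both frames, match feature points at each resolution,
    map the matched pairs back to the original resolution, and warp the reference
    frame onto the first image frame."""
    def to_gray(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    src_pts, dst_pts = [], []

    ref_level, first_level, scale = to_gray(reference_frame), to_gray(first_frame), 1.0
    for _ in range(levels):
        kp_ref, des_ref = orb.detectAndCompute(ref_level, None)
        kp_first, des_first = orb.detectAndCompute(first_level, None)
        if des_ref is not None and des_first is not None:
            for m in matcher.match(des_ref, des_first):
                # A matched feature point pair, rescaled back to the original resolution.
                src_pts.append(np.array(kp_ref[m.queryIdx].pt) / scale)
                dst_pts.append(np.array(kp_first[m.trainIdx].pt) / scale)
        ref_level, first_level = cv2.pyrDown(ref_level), cv2.pyrDown(first_level)
        scale *= 0.5  # each pyramid level halves the resolution

    if len(src_pts) < 4:
        return reference_frame  # too few matches to estimate a warp
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts), cv2.RANSAC)
    h, w = first_frame.shape[:2]
    return cv2.warpPerspective(reference_frame, H, (w, h))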
6. The method of claim 1, wherein the at least one frame of reference image frame comprises a first reference image frame, and wherein performing the single-frame deblurring and image fusion processing on each frame of reference image frame to obtain the second image frame comprises:
performing single-frame deblurring processing on the first reference image frame to obtain a deblurred first reference image frame;
and determining the deblurred first reference image frame as the second image frame.
7. The method of claim 1, 2, 3, 5 or 6, wherein the single-frame deblurring comprises inputting an image frame to be processed into a single-frame deblurring network model for deblurring processing, and wherein training the deblurring network model comprises:
comparing the image frame to be processed with a standard image frame to obtain a first comparison result;
substituting the first comparison result into a loss function to obtain a loss result;
comparing the loss result with the standard image frame to obtain a second comparison result;
substituting the second comparison result into the loss function to perform iterative calculation;
and generating the deblurring network model according to the loss function whose iterative calculation result is smaller than a third threshold value.
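By way of non-limiting illustration of claim 7 only, a conventional supervised training loop that iterates until the loss result falls below the third threshold might look as follows; the PyTorch framework, the L1 loss function, the optimizer, and the hyper-parameter values are assumptions made solely for this sketch, and the claim's specific comparison steps are simplified here to a standard restored-versus-standard-frame comparison.

import torch
import torch.nn as nn

def train_deblur_network(model, data_loader, third_threshold=1e-3, max_epochs=100):
    """Compare restored frames against standard (sharp) frames via a loss function and
    iterate until the averaged loss falls below the third threshold."""
    loss_fn = nn.L1Loss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for _ in range(max_epochs):
        epoch_loss = 0.0
        for blurred, standard in data_loader:   # image frame to be processed / standard image frame
            restored = model(blurred)
            loss = loss_fn(restored, standard)  # comparison result fed into the loss function
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(data_loader) < third_threshold:
            break  # iterative calculation result is below the third threshold
    return model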
8. An apparatus for image processing, comprising:
a first determining module for determining a first image frame;
a second determining module, configured to determine at least one frame of reference image frame from N frames of image frames before the first image frame and M frames of image frames after the first image frame, where N and M are positive integers;
a fusion module, configured to perform single-frame deblurring and image fusion processing on each frame of reference image frame to obtain a second image frame;
the fusion module is specifically configured to: if the at least one frame of reference image frame includes a plurality of frames of reference image frames, construct a cyclic network, input a first frame of reference image frame into a single-frame deblurring network of the cyclic network to perform single-frame deblurring processing, and fuse the deblurred first frame of reference image frame with a second frame of reference image frame to obtain a fused reference image frame; input the fused reference image frame into the single-frame deblurring network to perform deblurring processing, fuse the deblurred fused reference image frame with another frame of reference image frame, and cyclically perform the deblurring and fusing operations until no single frame of reference image frame remains unfused; cyclically fuse an unfused reference image frame with the deblurred fused reference image frame to obtain an updated fused reference image frame, and perform single-frame deblurring processing on the updated fused reference image frame, until no unfused reference image frame exists; and obtain the deblurred updated fused reference image frame as the second image frame;
The fusion module is further configured to determine a fused image frame of the second image frame and the first image frame, and perform single-frame deblurring processing on the fused image frame to generate a processed first image frame.
9. An electronic device, comprising:
a processor;
a memory for storing one or more programs that, when executed by the processor, cause the processor to implement the image processing method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the image processing method according to any one of claims 1 to 7.
CN202111012338.5A 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and storage medium Active CN113781336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111012338.5A CN113781336B (en) 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113781336A CN113781336A (en) 2021-12-10
CN113781336B true CN113781336B (en) 2024-02-02

Family

ID=78840377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111012338.5A Active CN113781336B (en) 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113781336B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118158464A (en) * 2024-04-10 2024-06-07 腾讯科技(深圳)有限公司 Video data processing method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0525408A2 (en) * 1991-07-01 1993-02-03 Eastman Kodak Company Method for multiframe Wiener restoration of noisy and blurred image sequences
CN111275626A (en) * 2018-12-05 2020-06-12 深圳市炜博科技有限公司 Video deblurring method, device and equipment based on ambiguity
CN111629262A (en) * 2020-05-08 2020-09-04 Oppo广东移动通信有限公司 Video image processing method and device, electronic equipment and storage medium
CN111932480A (en) * 2020-08-25 2020-11-13 Oppo(重庆)智能科技有限公司 Deblurred video recovery method and device, terminal equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070511B (en) * 2019-04-30 2022-01-28 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN113781336A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN111598776B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111580765B (en) Screen projection method, screen projection device, storage medium, screen projection equipment and screen projection equipment
CN111641835B (en) Video processing method, video processing device and electronic equipment
CN111445392B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN112927271B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN112767295A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN113409203A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN111161176A (en) Image processing method and device, storage medium and electronic equipment
CN111696039B (en) Image processing method and device, storage medium and electronic equipment
CN111835973A (en) Shooting method, shooting device, storage medium and mobile terminal
CN113781336B (en) Image processing method, device, electronic equipment and storage medium
CN113658073A (en) Image denoising processing method and device, storage medium and electronic equipment
CN113343895B (en) Target detection method, target detection device, storage medium and electronic equipment
CN113538225A (en) Model training method, image conversion method, device, equipment and storage medium
CN113409209B (en) Image deblurring method, device, electronic equipment and storage medium
CN113658128A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN115205164B (en) Training method of image processing model, video processing method, device and equipment
CN115550669A (en) Video transcoding method and device, electronic equipment and storage medium
CN113364964B (en) Image processing method, image processing apparatus, storage medium, and terminal device
CN114240750A (en) Video resolution improving method and device, storage medium and electronic equipment
CN113592009A (en) Image semantic segmentation method and device, storage medium and electronic equipment
CN113627314A (en) Face image blur detection method and device, storage medium and electronic equipment
CN116033155A (en) Compression method, device and readable storage medium for binocular image
CN115205126A (en) Image distortion correction processing method, device, storage medium and electronic equipment
CN113658070A (en) Image processing method, image processing apparatus, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant