CN113409203A - Image blurring degree determining method, data set constructing method and deblurring method - Google Patents

Image blurring degree determining method, data set constructing method and deblurring method Download PDF

Info

Publication number
CN113409203A
Authority
CN
China
Prior art keywords
image
deblurring
processed
degree
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110649891.3A
Other languages
Chinese (zh)
Inventor
辛明远 (Xin Mingyuan)
胡攀 (Hu Pan)
刘悠欣 (Liu Youxin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110649891.3A priority Critical patent/CN113409203A/en
Publication of CN113409203A publication Critical patent/CN113409203A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image blur degree determining method, a blurred image data set constructing method, an image deblurring method, corresponding apparatuses, a storage medium, and an electronic device, and relates to the technical field of image and video processing. The image blur degree determining method includes: acquiring n consecutive frames of original images and a target image synthesized by weighted fusion of pixel points at the same positions in the n consecutive frames of original images, where n is a positive integer not less than 2; acquiring, for each pixel point in the target image, the pixel value of the corresponding pixel point at the same position in each frame of original image; and determining blur degree data of the target image according to the degree of deviation of each pixel point among its n corresponding pixel values in the n frames of original images, where the blur degree data includes a blur degree value for each pixel point in the target image. The method can characterize the blur degree of different areas in an image, which helps improve the image deblurring effect.

Description

Image blurring degree determining method, data set constructing method and deblurring method
Technical Field
The present disclosure relates to the field of image and video processing technologies, and in particular, to an image blur degree determining method, a blurred image data set constructing method, an image deblurring method, an image blur degree determining apparatus, a blurred image data set constructing apparatus, an image deblurring apparatus, a computer-readable storage medium, and an electronic device.
Background
During image capture, blurring caused by shake, defocus, or other factors is common. When deblurring an image, the blur degree of the image usually needs to be calculated first, so that a matching criterion can be used to implement the specific deblurring process.
In the related art, most blur degree calculation methods compute a single blur degree for the whole image, and therefore cannot reflect differences in blur degree between different areas of the image; the same standard is then applied to the entire image during deblurring, resulting in a poor deblurring effect.
Disclosure of Invention
The present disclosure provides an image blur degree determining method, a blurred image data set constructing method, an image deblurring method, an image blur degree determining apparatus, a blurred image data set constructing apparatus, an image deblurring apparatus, a computer-readable storage medium, and an electronic device, thereby solving, at least to a certain extent, the problems in the related art that blur degrees cannot be determined separately for different regions of an image and that the image deblurring effect is poor.
According to a first aspect of the present disclosure, there is provided an image blur degree determining method, including: acquiring n consecutive frames of original images and a target image synthesized by weighted fusion of pixel points at the same positions in the n consecutive frames of original images, where n is a positive integer not less than 2; acquiring, for each pixel point in the target image, the pixel value of the corresponding pixel point at the same position in each frame of original image; and determining blur degree data of the target image according to the degree of deviation of each pixel point among its n corresponding pixel values in the n frames of original images, where the blur degree data of the target image includes a blur degree value for each pixel point in the target image.
According to a second aspect of the present disclosure, there is provided a blurred image data set construction method, comprising: acquiring a target image and blur degree data of the target image determined according to the image blur degree determining method of the first aspect; and constructing a blurred image data set by taking the target image as a sample image and the blur degree data of the target image as a first label; wherein the blurred image data set is used for training a blur degree perception network, and the blur degree perception network is used for determining blur degree data of an image input into it.
According to a third aspect of the present disclosure, there is provided an image deblurring method, comprising: acquiring an image to be processed; determining blur degree data of the image to be processed by taking the image to be processed as the target image, according to the image blur degree determining method of the first aspect; and deblurring the image to be processed based on its blur degree data to obtain a deblurred image corresponding to the image to be processed.
According to a fourth aspect of the present disclosure, there is provided an image deblurring method, comprising: acquiring an image to be processed; processing the image to be processed by using a blur degree perception network to obtain blur degree data of the image to be processed; and deblurring the image to be processed based on its blur degree data to obtain a deblurred image corresponding to the image to be processed; wherein the blur degree perception network is trained using the blurred image data set constructed by the blurred image data set construction method of the second aspect.
According to a fifth aspect of the present disclosure, there is provided an image blur degree determination apparatus including: the image acquisition module is configured to acquire n continuous frames of original images and a target image synthesized by performing weighted fusion on pixel points at the same positions in the n continuous frames of original images, wherein n is a positive integer not less than 2; the pixel value acquisition module is configured to acquire the pixel value of each pixel point in the target image at the corresponding pixel point at the same position in each frame of the original image; a blur degree data determining module configured to determine blur degree data of the target image according to a deviation degree of each pixel point between the n corresponding pixel values in the n frames of original images, where the blur degree data of the target image includes a blur degree value of each pixel point in the target image.
According to a sixth aspect of the present disclosure, there is provided a blurred image data set construction apparatus, comprising: a data acquisition module configured to acquire a target image and blur degree data of the target image determined according to the image blur degree determining method of the first aspect; and a data set construction module configured to construct a blurred image data set by taking the target image as a sample image and the blur degree data of the target image as a first label; wherein the blurred image data set is used for training a blur degree perception network, and the blur degree perception network is used for determining blur degree data of an image input into it.
According to a seventh aspect of the present disclosure, there is provided an image deblurring apparatus comprising: an image acquisition module configured to acquire an image to be processed; a blur degree data determining module configured to determine blur degree data of the image to be processed, with the image to be processed as a target image, according to the image blur degree determining method of the first aspect; and the deblurring processing module is configured to deblur the image to be processed based on the blur degree data of the image to be processed to obtain a deblurred image corresponding to the image to be processed.
According to an eighth aspect of the present disclosure, there is provided an image deblurring apparatus, comprising: an image acquisition module configured to acquire an image to be processed; a blur degree data determining module configured to process the image to be processed by using a blur degree perception network to obtain blur degree data of the image to be processed; and a deblurring processing module configured to deblur the image to be processed based on its blur degree data to obtain a deblurred image corresponding to the image to be processed; wherein the blur degree perception network is trained using the blurred image data set constructed by the blurred image data set construction method of the second aspect.
According to a ninth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image blur degree determination method of the first aspect, the blurred image data set construction method of the second aspect, the image deblurring method of the third aspect, or the image deblurring method of the fourth aspect described above.
According to a tenth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image blur degree determination method of the first aspect, the blurred image data set construction method of the second aspect, the image deblurring method of the third aspect, or the image deblurring method of the fourth aspect described above, via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
the disclosure provides a technical scheme for determining a blur degree value for each pixel point in an image. On one hand, compared with schemes in the related art that calculate a single blur degree value for the whole image, this scheme characterizes the blur degree of different areas in the image more finely and reflects the blur degree differences between areas, so that suitable methods and parameters can be adopted to further optimize each area; for example, if different areas of the image are deblurred with differentiated parameters, the deblurring effect can be improved. On the other hand, the disclosure determines the blur degree data based on the degree of deviation of pixel values between the original images; the calculation process is simple, and the implementation cost is low.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Fig. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
Fig. 2 shows a schematic structural diagram of an electronic device in the present exemplary embodiment;
Fig. 3 shows a flowchart of an image blur degree determining method in the present exemplary embodiment;
Fig. 4 shows a flowchart of synthesizing a target image in the present exemplary embodiment;
Fig. 5 shows a schematic diagram of generating a target image and a blur degree image in the present exemplary embodiment;
Fig. 6 shows a flowchart of a blurred image data set construction method in the present exemplary embodiment;
Fig. 7 shows a schematic structural diagram of a blur degree perception network in the present exemplary embodiment;
Fig. 8 shows a schematic structural diagram of a deblurring network and a feature perception network in the present exemplary embodiment;
Fig. 9 shows a flowchart of an image deblurring method in the present exemplary embodiment;
Fig. 10 shows a flowchart of another image deblurring method in the present exemplary embodiment;
Fig. 11 shows a schematic flow diagram of image deblurring in the present exemplary embodiment;
Fig. 12 shows a flowchart of network training in the present exemplary embodiment;
Fig. 13 shows a schematic structural diagram of an image blur degree determining apparatus in the present exemplary embodiment;
Fig. 14 shows a schematic structural diagram of a blurred image data set constructing apparatus in the present exemplary embodiment;
Fig. 15 shows a schematic structural diagram of an image deblurring apparatus in the present exemplary embodiment;
Fig. 16 shows a schematic structural diagram of another image deblurring apparatus in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In one scheme of the related art, the overall blur degree of the whole image is calculated, a corresponding blur kernel is then generated, and the blur kernel is used to deblur the whole image. However, differences in scene depth, texture sparsity, and the like between different regions of an image cause their blur degrees to differ. Deblurring different regions with a single blur kernel cannot achieve optimal deblurring for each region, and the deblurred image exhibits numerous defects, such as noise in some regions, residual blur in others, and even artifacts and ringing.
In view of the above, exemplary embodiments of the present disclosure provide an image blur degree determination method. The system architecture of the operating environment of the image blur degree determination method will be described first.
Fig. 1 shows a schematic diagram of a system architecture; the system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a desktop computer, a notebook computer, a smartphone, a tablet computer, or another terminal device, and the server 120 may be a server providing image-processing-related services, or a cluster of multiple servers. The terminal 110 and the server 120 may form a connection through a wired or wireless communication link for data interaction. The terminal 110 may capture, or otherwise receive or extract, multiple consecutive frames of original images. In one embodiment, the terminal 110 may transmit the consecutive frames of original images to the server 120; the server 120 synthesizes the target image from them, outputs the blur degree data of the target image by performing the image blur degree determining method of the present exemplary embodiment, and returns the target image and the blur degree data to the terminal 110. In another embodiment, the terminal 110 may itself synthesize the target image from the consecutive frames of original images and transmit the original images and the target image to the server 120; the server 120 outputs the blur degree data of the target image by performing the image blur degree determining method and returns it to the terminal 110. In yet another embodiment, after synthesizing the target image from the consecutive frames of original images, the terminal 110 may continue to perform the image blur degree determining method locally to obtain the blur degree data of the target image.
As can be seen from the above, in the present exemplary embodiment, the execution body of the image blur degree determining method may be the terminal 110 or the server 120. Exemplary embodiments of the present disclosure also provide an electronic device for performing the image blur degree determining method, which may be the terminal 110 or the server 120. The structure of the electronic device is exemplarily described below by taking the mobile terminal 200 in fig. 2 as an example. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile purposes, the configuration of fig. 2 can also be applied to fixed-type devices.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display 290, a camera module 291, an indicator 292, a motor 293, keys 294, and a SIM (Subscriber Identity Module) card interface 295.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc.
The encoder may encode (i.e., compress) an image or a video, for example, encode a current image to obtain code stream data; the decoder may decode (i.e., decompress) the code stream data of an image or video to restore the image or video data. The mobile terminal 200 may support one or more encoders and decoders, so that it can process images or videos in a variety of coding formats, such as image formats like JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and video formats like MPEG-1 (Moving Picture Experts Group), MPEG-2, H.263, H.264, and HEVC (High Efficiency Video Coding).
In one embodiment, processor 210 may include one or more interfaces through which connections are made to other components of mobile terminal 200.
Internal memory 221 may be used to store computer-executable program code, including instructions. The internal memory 221 may include volatile memory and nonvolatile memory. The processor 210 executes various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, for expanding the storage capability of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing images, videos, and other files.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or connect an earphone or other electronic devices.
The charge management module 240 is configured to receive a charging input from a charger. While the charging management module 240 charges the battery 242, the power management module 241 may also supply power to the device; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide mobile communication solutions such as 2G, 3G, 4G, and 5G applied to the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions applied to the mobile terminal 200, such as WLAN (Wireless Local Area Network, e.g., Wi-Fi (Wireless Fidelity)), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared).
The mobile terminal 200 may implement a display function through the GPU, the display screen 290, the AP, and the like, and display a user interface. For example, when the user shoots images, the mobile terminal 200 may display the interface of a camera App (Application) in the display screen 290.
The mobile terminal 200 may implement a photographing function through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display 290, the AP, and the like. For example, the user may start an image or video shooting function in the camera App, and images can then be acquired through the camera module 291.
The mobile terminal 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the AP, and the like.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, etc. to implement a corresponding inductive detection function.
The indicator 292 may be an indicator light, which may be used to indicate charging status and battery level changes, or to indicate messages, missed calls, notifications, and the like. The motor 293 may generate vibration cues and may also be used for touch vibration feedback. The keys 294 include a power key, volume keys, and the like.
The mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards to implement functions such as call and mobile communication.
Fig. 3 shows an exemplary flow of the image blur degree determining method described above, which may include:
step S310, acquiring continuous n frames of original images and a target image synthesized by carrying out weighted fusion on pixel points at the same position in the continuous n frames of original images, wherein n is a positive integer not less than 2;
step S320, acquiring the pixel value of each pixel point in the target image at the corresponding pixel point at the same position in each frame of original image;
step S330, determining the fuzzy degree data of the target image according to the deviation degree of each pixel point in the target image among the corresponding n pixel values in the n frames of original images, wherein the fuzzy degree data comprises the fuzzy degree value of each pixel point in the target image.
This method provides a technical scheme for determining a blur degree value for each pixel point in an image. On one hand, compared with schemes in the related art that calculate a single blur degree value for the whole image, it characterizes the blur degree of different areas in the image more finely and reflects the blur degree differences between areas, so that suitable methods and parameters can be adopted to further optimize each area; for example, if different areas of the image are deblurred with differentiated parameters, the deblurring effect can be improved. On the other hand, the blur degree data is determined based on the degree of deviation of pixel values between the original images; the calculation process is simple, and the implementation cost is low.
Each step in fig. 3 will be described in detail below.
Referring to fig. 3, in step S310, n consecutive frames of original images and a target image synthesized by performing weighted fusion on pixels at the same position in the n consecutive frames of original images are obtained, where n is a positive integer not less than 2.
The n frames of original images may be n frames of images continuously acquired during shooting, or may be n frames of images continuously acquired in a video, where the n frames of original images are images acquired for the same shooting object or scene. In the n consecutive frames of original images, there is generally a moving object, or the camera moves when the n consecutive frames of original images are captured, so that there is a certain degree of blurring in the target image composed of the n consecutive frames of original images, that is, the target image can be regarded as a blurred image.
The number of pixels of n continuous frames of original images can be the same, and pixel-level synthesis can be adopted when synthesizing the target image, that is, weighted fusion is carried out on pixel points at the same position in the n frames of original images to obtain a pixel value which is used as the pixel value of the corresponding position in the target image. Or, when the number of pixels of the n consecutive original images is different, some of the n original images may be up-sampled or down-sampled to make the number of pixels of all the n original images equal, and then pixel-level synthesis is performed.
In one embodiment, referring to fig. 4, step S310 may include the following steps S410 and S420:
step S410, acquiring n consecutive frames of original images.
Here, n may be a fixed value or a variable value. In general, the more original images participate in synthesizing the target image (i.e., the larger n), and the greater the motion of moving objects or of the camera when these original images were captured, the more blurred the synthesized target image. In one embodiment, the image blur degree determining method may further include the following step:
the value of n is determined based on the expected global blurriness value of the target image before acquiring n consecutive frames of original images.
The global blur degree value is a quantitative parameter representing the overall blur degree of an image. It may be a user-defined parameter, for example the average of the blur degree values of all pixel points in the image, or a blur parameter commonly used in the industry, for example a blur degree value based on the gradients of pixel values in the image. In the present exemplary embodiment, the expected global blur degree value of the target image, i.e. the global blur degree value the target image is expected to reach, is determined according to the actual application requirements of the target image.
It should be noted that the positive correlation between the blur degree of the target image and the value of n mainly means that the global blur degree value is positively correlated with n; the blur degree of every individual region in the target image is not necessarily positively correlated with n. For example, a solid-color background region looks the same in each of the consecutive frames of original images, so as n increases, the blur degree of that region in the synthesized target image does not change; the blur degree of such a region is uncorrelated with n.
By computing in advance the global blur degree values of a large number of synthesized images, together with the number of original images participating in each synthesis, the correspondence between the global blur degree value and n can be obtained; for example, the two parameters may be linearly related. Thus, after the expected global blur degree value of the target image is determined, the corresponding value of n can be determined from this correspondence.
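As a hedged illustration only: assuming the pre-computed statistics yield an approximately linear fit between the global blur degree value and n, inverting that fit might look as follows (the slope k, intercept b, and n_max are hypothetical fitted parameters, not values given by the disclosure).

```python
# Sketch under the assumption of a linear fit blur ~ k * n + b obtained
# from pre-computed statistics; k, b, and n_max are hypothetical.
def choose_n(expected_global_blur: float, k: float, b: float, n_max: int) -> int:
    n = round((expected_global_blur - b) / k)  # invert the fitted relation
    return max(2, min(n, n_max))               # n is a positive integer not less than 2
```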
In one embodiment, n_max consecutive frames of original images may be obtained, where n_max is the maximum value that n may take and may be determined empirically or based on actual scene requirements. After the value of n is determined, n consecutive frames of original images are selected from the n_max frames; the starting frame may be any of frames 1 to n_max - n + 1, i.e., the selected n consecutive frames of original images may be located at any position within the n_max frames of original images.
In one embodiment, it is considered that during the process of shooting the n continuous frames of original images, the camera may have a certain displacement or shake, which causes the shooting positions and the viewing angles corresponding to the n continuous frames of original images to have deviations. In order to facilitate accurate fusion in the subsequent process, n frames of original images may be registered, for example, a previous frame may be sequentially registered to a next frame, or a reference frame is selected from the n frames of original images, and other frames are registered to the reference frame. The specific algorithm of the registration is not limited in the present disclosure, for example, feature points may be extracted from two frames of original images to be registered, feature points between the two frames of original images are matched to obtain a feature point matching pair, and then the pose transformation relationship between the two frames of original images is solved by using the feature point matching pair, so that any one of the frames is transformed to realize the registration.
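The disclosure leaves the registration algorithm open; the following is a minimal sketch of the feature-point approach described above using OpenCV, with all parameter choices (ORB features, 200 matches, RANSAC threshold) being illustrative assumptions.

```python
import cv2
import numpy as np

def register_to_reference(frame, reference):
    """Warp `frame` onto `reference` via ORB feature matching and a homography."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(frame, None)      # feature points in frame
    kp2, des2 = orb.detectAndCompute(reference, None)  # feature points in reference
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # pose transformation
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))          # transform the frame
```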
Step S420, performing weighted fusion on the pixel points at the same position in the n consecutive frames of original images to synthesize the n consecutive frames of original images into a target image.
For example, assume the n consecutive frames of original images each have W × H pixels. Starting from a certain position, the pixel point at that position in each frame of original image is selected and weighted-fused. For example, starting from the pixel coordinate (1,1) at the upper left corner, the pixel point at position (1,1) in each frame of original image (the (1,1) pixel point for short) is selected, giving n (1,1) pixel points, whose pixel values are weighted-fused to obtain the fused pixel value at (1,1). In the present exemplary embodiment, the weighted fusion of the n pixel points may be performed on each color channel, for example on the values of the R, G, and B channels; alternatively, the RGB (or other color mode) pixel values may first be converted into gray values and the gray values weighted-fused. The present disclosure does not limit the weights used for the weighted fusion. For example: equal weights may be set for the n frames of original images, in which case the weighted fusion is equivalent to averaging the pixel values of the n pixel points. Different weights may also be set for the n frames according to actual scene requirements, for example increasing or decreasing with their shooting order. Different weights may even be set per pixel position, for example by computing the gradient value of the (1,1) pixel point in each of the n frames and setting the weights according to the relative magnitudes of the n gradient values; generally, the larger the gradient value, the sharper the (1,1) pixel point in that frame, and the larger its weight. Illustratively, the gradient values of the (1,1) pixel points in the n frames may be normalized to obtain the weights of the (1,1) pixel points. With this method of pixel value weighted fusion, the pixel points at every position are traversed in a certain order, for example from position (1,1) to position (W, H), left to right and top to bottom, and the n pixel points at each position are weighted-fused in the same manner as the (1,1) pixel points to obtain a fused pixel value for each position. The target image is then formed from the fused pixel values at all positions.
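A minimal NumPy sketch of the pixel-level weighted fusion described above, assuming the frames are already registered and of equal size; the per-pixel gradient-based weighting would replace the scalar frame weights with per-pixel weight maps.

```python
import numpy as np

def synthesize_target(frames, frame_weights=None):
    """Weighted per-pixel fusion of n registered frames of shape (H, W) or (H, W, C).

    frame_weights=None assigns equal weights, i.e. the per-pixel mean.
    """
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)  # (n, ...)
    if frame_weights is None:
        return stack.mean(axis=0)
    w = np.asarray(frame_weights, dtype=np.float64)
    w = w / w.sum()                          # normalize the frame weights
    return np.tensordot(w, stack, axes=1)    # weighted sum over the frame axis
```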
With reference to fig. 3, in step S320, for each pixel point in the target image, the pixel value of the corresponding pixel point at the same position in each frame of original image is obtained.
As can be seen from the above, the target image is formed by fusing the pixel points at each position in the n frames of original images, and each pixel point in the target image has a corresponding pixel point at the same position in each frame of original image, and the pixel values of the pixel points are obtained, so that n pixel values corresponding to each pixel point in the target image in the n frames of original images can be obtained. For example, for a (1,1) pixel point in the target image, the pixel value of the (1,1) position in each frame of the original image corresponds to, and therefore n pixel values corresponding to the (1,1) pixel point can be obtained. In the same way, n pixel values corresponding to each pixel point in the target image can be obtained.
With reference to fig. 3, in step S330, blur degree data of the target image, including a blur degree value of each pixel point in the target image, is determined according to a deviation degree of each pixel point in the target image between n corresponding pixel values in the n frames of original images.
Wherein the degree of deviation between the n pixel values can be quantitatively expressed by the variance or standard deviation of the n pixel values. Generally, the higher the degree of deviation among the n pixel values corresponding to a pixel point in the target image, the higher the blur degree value of that pixel point; the two values may satisfy a linear or nonlinear positive correlation. For example, the standard deviation of the n pixel values corresponding to each pixel point may be used as the blur degree value of that pixel point, as follows:

$$D(x,y)=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(L_i(x,y)-\bar{L}(x,y)\right)^2},\qquad \bar{L}(x,y)=\frac{1}{n}\sum_{i=1}^{n}L_i(x,y) \tag{1}$$

where (x, y) denotes the position coordinates of a pixel point, D(x, y) denotes the blur degree value of the (x, y) pixel point in the target image, and L_i(x, y) denotes the pixel value of the (x, y) pixel point in the i-th frame of original image. Formula (1) computes the standard deviation of the pixel values of the (x, y) pixel points in the n frames of original images and uses it as the blur degree value of the (x, y) pixel point in the target image. The larger the standard deviation, the larger the difference in the image information at point (x, y) across the n frames of original images, and the more image information is lost at (x, y) when the target image is synthesized; the blur degree value is therefore higher.

It should be noted that when the standard deviation is directly adopted as the blur degree value in formula (1), the value range of D(x, y) follows that of L_i(x, y): for example, when L_i(x, y) is within 0 to 1, D(x, y) is generally within 0 to 1, and when L_i(x, y) is within 0 to 255, D(x, y) is generally within 0 to 255. In one embodiment, D(x, y) may be mapped into a preset value range of the blur degree value (related to actual scene requirements, e.g. 0 to 1, 0 to 100, etc.); for example, when the preset range is 0 to 1, D(x, y) may be normalized.

When calculating the degree of deviation between pixel values, the pixel values may first be converted into gray values, i.e. L_i(x, y) in formula (1) may be the gray value of the (x, y) pixel point in the i-th frame of original image; alternatively, the calculation may be performed on each color channel, e.g. computing D(x, y) separately from the R, G, and B values of L_i(x, y) and then fusing the three channels' D(x, y), such as by an average or weighted average, to obtain the blur degree value of the (x, y) pixel point.
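A hedged NumPy sketch of equation (1), assuming single-channel (gray value) frames; `normalize=True` maps D(x, y) into the preset range 0 to 1 mentioned above.

```python
import numpy as np

def blur_degree_map(frames, normalize=True):
    """Per-pixel blur degree D(x, y): the standard deviation of each pixel's
    values across the n frames, as in equation (1)."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    d = stack.std(axis=0)            # population std, matching the 1/n in (1)
    if normalize and d.max() > 0:
        d = d / d.max()              # map into the preset value range 0..1
    return d
```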
In one embodiment, the image blur degree determining method may further include the steps of:
and generating a fuzzy degree image corresponding to the target image according to the fuzzy degree data of the target image.
And the pixel value of each pixel point in the fuzzy degree image is the fuzzy degree value of each pixel point in the target image. That is, the blur degree image is blur degree data of the target image, which is displayed in a visual form by representing the blur degree value of each pixel point in the target image as a pixel value. The image with the fuzzy degree has the same pixel number as the target image, and is convenient to store and process.
Fig. 5 shows a schematic process of generating a target image and a blur degree image from n consecutive frames of original images. And extracting pixel points at the same position in the n frames of original images, calculating an average value of the pixel values to obtain the pixel value of the corresponding position in the target image, and calculating a standard deviation of the pixel values to obtain the pixel value of the corresponding position in the fuzzy degree image.
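As one hedged convention for storing such a blur degree image, the blur map can be rendered as an 8-bit image of the same size as the target image:

```python
import numpy as np

def to_blur_degree_image(blur_map):
    """Render per-pixel blur degree data as a storable 8-bit blur degree image."""
    scaled = 255.0 * blur_map / max(float(blur_map.max()), 1e-12)
    return scaled.astype(np.uint8)
```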
Exemplary embodiments of the present disclosure also provide a blurred image data set construction method. Fig. 6 shows an exemplary flow of the blurred image data set construction method, including the following steps S610 and S620:
step S610, acquiring a target image and blur degree data of the target image determined according to the above image blur degree determining method;
step S620, constructing a blurred image data set by taking the target image as a sample image and the blur degree data of the target image as a first label.
For example, the target image obtained by the image blur degree determining method of fig. 3 is regarded as a blurred image and may be used as a sample image, with the blur degree data of the target image as the corresponding label (Ground Truth); to distinguish it from labels of other forms, this label is referred to as the first label. The sample image and the first label form a training array, and a blurred image data set can be constructed by obtaining a large number of such training arrays.
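A minimal sketch of such a data set of (sample image, first label) training arrays as a PyTorch Dataset; the in-memory list layout is an assumption for illustration.

```python
import torch
from torch.utils.data import Dataset

class BlurredImageDataset(Dataset):
    """Training arrays pairing a sample image (H, W, 3) with its first label,
    a per-pixel blur degree map (H, W)."""
    def __init__(self, pairs):
        self.pairs = pairs  # assumed list of (target_image, blur_map) arrays

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        target, blur_map = self.pairs[idx]
        x = torch.from_numpy(target).float().permute(2, 0, 1)  # sample image
        y = torch.from_numpy(blur_map).float().unsqueeze(0)    # first label
        return x, y
```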
The blurred image data set may be used to train a blur degree perception network, which determines blur degree data for an image input into it. The blur degree perception network may be an end-to-end network. Fig. 7 shows a schematic block diagram of a blur degree perception network, which may adopt a U-Net structure. Exemplarily, after a sample image is input into the blur degree perception network, convolutional layer 1 performs one or more convolution operations (fig. 7 shows convolutional layer 1 performing two convolution operations; the disclosure does not limit the number of convolution operations in each convolutional layer) followed by a pooling operation, producing a feature image of reduced size; convolutional layer 2 performs another round of convolution and pooling to further reduce the feature image size; convolutional layer 3 performs yet another round to obtain a still smaller feature image; convolutional layer 4 performs convolution operations but no pooling. The data then enters transposed convolutional layer 1, which first performs a transposed convolution, then concatenates the result with the feature image from convolutional layer 3, and performs one or more convolution operations, producing a feature image of increased size; transposed convolutional layer 2 performs another round of transposed convolution, concatenation with the feature image from convolutional layer 2, and convolution, further increasing the feature image size; finally, transposed convolutional layer 3 performs one more such round and outputs the blur degree data. It should be noted that the disclosure does not limit the numbers of convolutional and transposed convolutional layers in the blur degree perception network, and other types of intermediate layers, such as Dropout layers or fully connected layers, may be added according to actual scene requirements.
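A minimal PyTorch sketch of a three-scale U-Net of the kind described above; channel counts and depths are illustrative assumptions and do not reproduce fig. 7 exactly.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    """Two 3x3 convolutions with ReLU, the repeated unit of the U-Net."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class BlurPerceptionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)    # convolutional layer 1
        self.enc2 = conv_block(32, 64)   # convolutional layer 2
        self.enc3 = conv_block(64, 128)  # convolutional layer 3
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(128, 256)                    # convolutional layer 4
        self.up3 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec3 = conv_block(256, 128)                      # transposed conv layer 1
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)                       # transposed conv layer 2
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)                        # transposed conv layer 3
        self.head = nn.Conv2d(32, 1, 1)  # one-channel blur degree map

    def forward(self, x):                # x: (B, 3, H, W), H and W divisible by 8
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottom(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # skip connection
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```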
A loss function value is calculated from the difference between the sample blur degree data output by the blur degree perception network (the output here is the blur degree data of a sample image, hence "sample blur degree data") and the blur degree data serving as the first label. For example, a loss function may be established based on the MSE (Mean Square Error) between the sample blur degree data and the first label, and the loss function value obtained by substituting them in. The parameters of the blur degree perception network are updated using the loss function value, for example by gradient descent. When the network reaches a certain accuracy after multiple iterations, or the loss function value converges, the training of the blur degree perception network is complete.
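A hedged sketch of one training step under this MSE setup, reusing the network sketch above:

```python
import torch
import torch.nn as nn

model = BlurPerceptionNet()                       # sketched above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_step(sample_images, first_labels):
    """sample_images: (B, 3, H, W); first_labels: (B, 1, H, W) blur maps."""
    optimizer.zero_grad()
    sample_blur_data = model(sample_images)           # sample blur degree data
    loss = criterion(sample_blur_data, first_labels)  # MSE against first label
    loss.backward()                                   # gradient-descent update
    optimizer.step()
    return loss.item()
```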
In one embodiment, the blurred image data set construction method may further comprise the steps of:
and acquiring one frame of original image in the n frames of original images for synthesizing the target image, and adding the one frame of original image as a second label to the blurred image data set.
This original image can be regarded as a sharp image corresponding to the sample image (i.e., the target image) and serves as another type of label for the sample image, referred to herein as the second label to distinguish it from the first label. The second label is added to the blurred image data set and may form a binary training array with the sample image, or a ternary training array with the sample image and the first label.
In the present exemplary embodiment, any one of the n consecutive frames of original images may be used as the second label; alternatively, after evaluating the blur degree of the original images, the frame with the lowest blur degree may be selected as the second label. For example, the blur degree perception network may be used to process each frame of original image to obtain its blur degree data, an average blur degree value may be calculated for each frame based on that data, and the frame with the lowest average blur degree value selected as the second label.
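A short sketch of that selection, assuming a trained perception network `blur_net` and frames given as (3, H, W) tensors (both illustrative):

```python
import torch

@torch.no_grad()
def pick_second_label(frames, blur_net):
    """Return the frame with the lowest mean predicted blur degree."""
    scores = [blur_net(f.unsqueeze(0)).mean().item() for f in frames]
    return frames[scores.index(min(scores))]
```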
The blurred image data set containing the second label can also be used for training a deblurring network, and the deblurring network is used for deblurring an image input into the deblurring network and outputting a deblurred image corresponding to the image.
In one embodiment, the deblurring network may be an end-to-end network, such as one with a U-Net structure. Illustratively, the deblurring network may also adopt the network architecture of fig. 7, although the arrangement of specific convolutional layers, transposed convolutional layers, or other types of intermediate layers may differ from fig. 7. The parameters of the deblurring network are updated according to the difference between the deblurred image it outputs and the second label, thereby training the deblurring network. It should be understood that in the present exemplary embodiment both the blur degree perception network and the deblurring network may adopt a U-Net structure, but their detailed structures may differ; after training, the two networks have different parameters and therefore implement different functions.
In one embodiment, the blur degree data may be integrated into the deblurring network. Generally, the blur degree data can be processed and then input into an intermediate layer of the deblurring network to be fused with the image information. Fig. 8 shows a schematic block diagram of a deblurring network and a feature perception network for processing the blur degree data, the output of the feature perception network being connected to an intermediate layer of the deblurring network. Illustratively, the sample image is input into the deblurring network and undergoes convolution and pooling operations in convolutional layers 1 to 3, producing feature images of successively reduced size, before entering convolutional layer 4. The blur degree data of the sample image (usually in the form of a blur degree image, which may be the first label or the blur degree data output by the blur degree perception network) is input into the feature perception network and processed by convolutional layer 1' (fig. 8 shows convolutional layer 1' comprising four convolution operations; the disclosure does not specifically limit this) to obtain a multi-channel feature ψ representing the blur degree features. ψ is then input into multiple different convolutional layers, such as convolutional layers 2' and 3' in fig. 8 (each shown comprising two convolution operations, again without limitation). The feature image output by convolutional layer 2' is input into fully connected layer 1 to obtain a sample modulation parameter α, and the feature image output by convolutional layer 3' is input into fully connected layer 2 to obtain a sample modulation parameter β (α and β are the modulation parameters corresponding to the sample image and are therefore called sample modulation parameters). Modulation refers to fusing the blur degree information with the image information. In one embodiment, the modulation may be an affine transformation of the image, with α and β being different types of transformation parameters; for example, affine transformations generally include rotation, translation, scaling, and the like, so α may be a scaling parameter and β a translation parameter. Fully connected layers 1 and 2 may be connected to any intermediate layer in the deblurring network, for example to convolutional layer 4 in fig. 8, representing an affine transformation of the feature image in convolutional layer 4, as follows:
$$\hat{F}=\alpha\odot F+\beta \tag{2}$$

where F is the feature image, α and β have the same dimensionality as F, $\hat{F}$ is the feature image after the affine transformation, and ⊙ denotes element-wise multiplication. The affine-transformed feature image is processed by transposed convolutional layers 1 to 3, and a sample deblurred image is output (this deblurred image corresponds to the sample image and is therefore called the sample deblurred image).
In fig. 8, the feature perception network characterizes the blur degree data as modulation parameters and modulates the feature image of the sample image, thereby fusing the image information with the blur degree information, which helps improve the deblurring quality of the deblurring network.
It should be understood that the feature perception network in fig. 8 adopts the structure of an SFT (Spatial Feature Transform) layer, and this structure is only schematic. In other embodiments, a feature perception network of another structure may be adopted, for example one with a U-Net structure: after the blur degree image is input into the feature perception network, it is processed by several convolutional and transposed convolutional layers, and one or more of these layers may be connected to the corresponding convolutional or transposed convolutional layer in the deblurring network, so that feature images from the feature perception network are input into the deblurring network and concatenated with its feature images; this likewise fuses the image information with the blur degree information.
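A minimal sketch of the SFT-style modulation: here convolutional branches stand in for fully connected layers 1 and 2 of fig. 8 so that α and β match the modulated feature F spatially (resizing the blur map to the feature resolution is assumed to happen upstream); all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FeaturePerception(nn.Module):
    """Maps a one-channel blur degree image to modulation parameters (alpha, beta)."""
    def __init__(self, feat_ch=128):
        super().__init__()
        self.shared = nn.Sequential(                       # convolutional layer 1'
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.alpha = nn.Conv2d(64, feat_ch, 3, padding=1)  # scaling branch
        self.beta = nn.Conv2d(64, feat_ch, 3, padding=1)   # translation branch

    def forward(self, blur_map):
        psi = self.shared(blur_map)     # multi-channel blur degree feature psi
        return self.alpha(psi), self.beta(psi)

def modulate(feature, alpha, beta):
    """Equation (2): element-wise affine transform of the feature image."""
    return alpha * feature + beta
```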
With the deblurring network and feature perception network structured as above, a ternary training array of sample image, first label, and second label may be used for training: the sample image and the first label are input into the deblurring network and the feature perception network respectively, processed, and a sample deblurred image is output; according to the difference between the sample deblurred image and the second label (e.g. a loss function value such as MSE), the parameters of the deblurring network, or of the deblurring network and the feature perception network simultaneously, are updated to realize the training.
Existing public blurred image data sets (e.g., the GoPro data set) typically average a plurality of consecutive sharp images to obtain a blurred image. By using the image blur degree determining method in the present exemplary embodiment, blur degree data can be determined for the blurred images in such a public data set, thereby expanding it into a more comprehensive blurred image data set that can be applied to the training of the blur degree perception network and the deblurring network.
Exemplary embodiments of the present disclosure also provide an image deblurring method. Fig. 9 shows an exemplary flow of the image deblurring method, including the following steps S910 to S930:
step S910, acquiring an image to be processed synthesized by performing weighted fusion on pixel points at the same position in n continuous frames of original images, wherein n is a positive integer not less than 2.
The image to be processed is an image that needs deblurring and that was synthesized from n consecutive frames of original images. For example, in some dynamic shooting modes, a camera continuously captures multiple frames of original images in a short time and synthesizes them into one output image; such an image may contain motion blur and can serve as the image to be processed here.
For a specific embodiment of synthesizing an image to be processed from n consecutive frames of original images, please refer to the content of fig. 4.
Step S920, determining blur degree data of the image to be processed by using the image to be processed as a target image according to the image blur degree determining method.
Illustratively, with the image to be processed as a target image, n consecutive frames of original images for synthesizing the image to be processed are obtained, and the image blur degree determining method shown in fig. 3 is executed to obtain blur degree data of the image to be processed, including a blur degree value of each pixel point in the image to be processed.
Step S930, based on the blur degree data of the image to be processed, performing deblurring processing on the image to be processed to obtain a deblurred image corresponding to the image to be processed.
The blur degree data of the image to be processed may provide auxiliary or reference information for the deblurring process. For example, different deblurring kernels can be set for different areas of the image to be processed based on the blur degree data, and each area can then be deblurred with its own kernel to obtain the deblurred image. Compared with applying a single kernel to the whole image, this allows targeted deblurring of each area and improves the deblurring effect.
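Purely as an illustration of region-adaptive processing, the blur map can drive the per-pixel deblurring strength. The sketch below is a classical stand-in (an unsharp mask), not the patent's learned approach; its form and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_unsharp(img, blur_map, max_amount=2.0):
    """img: (H, W) grayscale array in [0, 1]; sharpen more where blur is higher."""
    smooth = gaussian_filter(img, sigma=1.5)
    amount = max_amount * blur_map / (blur_map.max() + 1e-8)
    return np.clip(img + amount * (img - smooth), 0.0, 1.0)
```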
In one embodiment, step S930 may include the steps of:
processing the blur degree data of the image to be processed by using a feature perception network to obtain modulation parameters;
and performing deblurring processing on the image to be processed according to the modulation parameters to obtain a deblurred image corresponding to the image to be processed.
The feature perception network converts the blur degree data into modulation parameters of a specific form, so that they can be conveniently fused with the image information of the image to be processed. The feature perception network and the modulation parameters may be those shown in fig. 8, but other forms are possible; for example, the feature perception network may have a U-Net structure and the modulation parameters may be feature maps of the blur degree data. The present disclosure is not limited thereto.
According to the modulation parameters, the blur degree information and the image information can be fused, so that the image to be processed carries more comprehensive information, which facilitates high-quality image deblurring.
In one embodiment, the deblurring processing may be performed by the above deblurring network. Illustratively, deblurring the image to be processed according to the modulation parameters to obtain the corresponding deblurred image may include the following step:
inputting the image to be processed into the input layer of the deblurring network, inputting the modulation parameters into an intermediate layer of the deblurring network, and outputting the deblurred image corresponding to the image to be processed through the deblurring network.
For the specific processing flow of the deblurring network, refer to fig. 8. Fig. 8 shows the deblurring network processing a sample image and its blur degree data, i.e., the training process. When deblurring the image to be processed, the processing flow of the network is the same; the difference is that the network has been fully trained and can directly output a higher-quality deblurred image.
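A minimal sketch of such a two-input deblurring network follows; only the injection point (image into the input layer, modulation into an intermediate layer) is taken from the text, while the depth, widths, and the residual output are assumptions:

```python
import torch
import torch.nn as nn

class DeblurNet(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, img, modulation):
        gamma, beta = modulation       # produced by the feature perception network
        feat = self.head(img)          # input layer: the image to be processed
        feat = feat * gamma + beta     # intermediate layer: modulation injected here
        return img + self.tail(self.body(feat))  # residual estimate of the sharp image
```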
Exemplary embodiments of the present disclosure also provide another image deblurring method. Fig. 10 shows an exemplary flow of the image deblurring method, including the following steps S1010 to S1030:
step S1010, an image to be processed is acquired.
The image to be processed is an image that needs deblurring and may be acquired in any way, for example, an image currently being taken or an image selected by the user. Compared with step S910, step S1010 does not require the image to be processed to be synthesized from n consecutive frames of original images; for example, the image to be processed may be a single captured frame that is blurred due to hand shake or the like during shooting.
Step S1020, processing the image to be processed by using the blur degree perception network to obtain blur degree data of the image to be processed.
The blur degree perception network is trained with the blurred image data set constructed by the above blurred image data set construction method, and may be the network shown in fig. 7. Illustratively, the blurred image data set is constructed by the blurred image data set construction method of fig. 6, and the blur degree perception network is trained with it. In step S1020, the image to be processed is input into the trained blur degree perception network, and blur degree data of the image to be processed is output. The blur degree data may take the form of a blur degree image corresponding to the image to be processed.
It should be noted that the blur degree perception network can learn the feature association between a target image and its blur degree data, and this feature association is common to blurred images in all scenes. After training, the blur degree perception network can apply the learned feature association to a blurred image in any scene, i.e., it can process blurred images in any scene. For this reason, the present exemplary embodiment does not particularly limit the image to be processed, and in particular does not require it to be an image synthesized from n consecutive frames of original images.
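For concreteness, a hedged sketch of a single-image blur degree perception network follows: a small encoder-decoder that maps an image to a per-pixel blur degree map. The depth, channel counts, and output activation are assumptions; fig. 7's exact topology is not reproduced here.

```python
import torch.nn as nn

class BlurPerceptionNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        # Output: one blur degree value in [0, 1] per pixel.
        return self.decode(self.encode(img))
```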
Step S1030, based on the blur degree data of the image to be processed, performing deblurring processing on the image to be processed to obtain a deblurred image corresponding to the image to be processed.
The blur degree data of the image to be processed may provide auxiliary or reference information for the deblurring process. For example, different deblurring kernels can be set for different areas of the image to be processed based on the blur degree data, and each area can then be deblurred with its own kernel to obtain the deblurred image. Compared with applying a single kernel to the whole image, this allows targeted deblurring of each area and improves the deblurring effect.
In one embodiment, step S1030 may include the steps of:
processing the blur degree data of the image to be processed by using a feature perception network to obtain modulation parameters;
and performing deblurring processing on the image to be processed according to the modulation parameters to obtain a deblurred image corresponding to the image to be processed.
The feature perception network converts the blur degree data into modulation parameters of a specific form, so that they can be conveniently fused with the image information of the image to be processed. The feature perception network and the modulation parameters may be those shown in fig. 8, but other forms are possible; for example, the feature perception network may have a U-Net structure and the modulation parameters may be feature maps of the blur degree data. The present disclosure is not limited thereto.
According to the modulation parameters, the blur degree information and the image information can be fused, so that the image to be processed carries more comprehensive information, which facilitates high-quality image deblurring.
In one embodiment, the deblurring processing may be performed by the above deblurring network. Illustratively, deblurring the image to be processed according to the modulation parameters to obtain the corresponding deblurred image may include the following step:
inputting the image to be processed into the input layer of the deblurring network, inputting the modulation parameters into an intermediate layer of the deblurring network, and outputting the deblurred image corresponding to the image to be processed through the deblurring network.
For the specific processing flow of the deblurring network, refer to fig. 8. Fig. 8 shows the deblurring network processing a sample image and its blur degree data, i.e., the training process. When deblurring the image to be processed, the processing flow of the network is the same; the difference is that the network has been fully trained and can directly output a higher-quality deblurred image.
Fig. 11 shows a schematic diagram of deblurring an image to be processed through the blur degree perception network, the feature perception network, and the deblurring network. First, the image to be processed is input into the blur degree perception network, which outputs the corresponding blur degree image; the blur degree image is input into the feature perception network, which outputs the modulation parameters; the image to be processed is input into the deblurring network, the modulation parameters are input into its intermediate layer, and the deblurring network outputs the corresponding deblurred image. In the flow of fig. 11, the image information of the image to be processed and its pixel-level blur degree information are fused, so that targeted deblurring of different areas of the image can be realized and a high-quality deblurred image obtained.
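Chaining the earlier sketches gives the fig. 11 inference flow in a few lines; all class and function names here are assumptions:

```python
import torch

@torch.no_grad()
def deblur_image(image, blur_net, feature_net, deblur_net):
    blur_map = blur_net(image)            # blur degree perception network
    modulation = feature_net(blur_map)    # feature perception network -> (gamma, beta)
    return deblur_net(image, modulation)  # deblurring network, mid-layer injection
```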
In the above, the blur degree perception network is trained with sample image-first label pairs, while the deblurring network (or the deblurring network together with the feature perception network) is trained with sample image-first label-second label triples; the two trainings are carried out separately. As can be seen from fig. 11, the three networks are connected through their inputs and outputs, so the blur degree perception network, the deblurring network, and the feature perception network can also be trained jointly. A complete training flow is exemplified below. Referring to fig. 12, the image deblurring method may further include the following steps S1210 to S1240:
step S1210, acquiring a blurred image data set comprising a sample image, a first label, and a second label; the sample image is a target image, the first label is the blur degree data of the target image, and the second label is one frame of original image among the n consecutive frames of original images used to synthesize the target image;
step S1220, pre-training the blur degree perception network by using the sample image and the first label;
step S1230, pre-training the deblurring network, or the deblurring network and the feature perception network, by using the sample image, the first label, and the second label;
step S1240, respectively inputting the sample image into the blur degree perception network and the deblurring network, inputting the sample blur degree data output by the blur degree perception network into the feature perception network, inputting the sample modulation parameters output by the feature perception network into the intermediate layer of the deblurring network, and fine-tuning the parameters of the blur degree perception network, the deblurring network, and the feature perception network according to the difference between the sample deblurred image output by the deblurring network and the second label.
The whole training process is divided into two stages: pre-training and fine-tuning. In the pre-training stage, the blur degree perception network is trained alone, and the deblurring network is trained alone or together with the feature perception network as a whole; each trained part has a relatively small structure, so its parameters can be adjusted quickly to obtain a preliminary training result. In the fine-tuning stage, the three networks are treated as one large network: the sample image is input into the blur degree perception network and the deblurring network, the blur degree perception network outputs sample blur degree data, the feature perception network processes this data and outputs sample modulation parameters, the modulation parameters are input into the intermediate layer of the deblurring network, and the deblurring network finally outputs a sample deblurred image as the output of the whole structure. According to the difference between the sample deblurred image and the second label, such as an MSE loss value, the parameters of the three networks are updated together. This realizes full end-to-end training: on the basis of pre-training, the parameters are adjusted more finely, yielding more accurate networks and further improving the image deblurring quality.
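A hedged sketch of the fine-tuning stage follows; the data loader fields, optimizer choice, and hyper-parameters are assumptions:

```python
import itertools
import torch
import torch.nn.functional as F

def fine_tune(blur_net, feature_net, deblur_net, loader, epochs=10, lr=1e-5):
    params = itertools.chain(blur_net.parameters(),
                             feature_net.parameters(),
                             deblur_net.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for sample_img, _, sharp_label in loader:  # first label unused here:
            # the blur degree perception network now predicts the blur map itself
            modulation = feature_net(blur_net(sample_img))
            pred = deblur_net(sample_img, modulation)
            loss = F.mse_loss(pred, sharp_label)   # difference vs the second label
            opt.zero_grad()
            loss.backward()
            opt.step()                             # joint update of all three networks
```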
Exemplary embodiments of the present disclosure also provide an image blur degree determining apparatus. Referring to fig. 13, the image blur degree determining apparatus 1300 may include:
an image obtaining module 1310 configured to obtain n consecutive frames of original images and a target image synthesized by performing weighted fusion on pixels at the same positions in the n consecutive frames of original images, where n is a positive integer not less than 2;
a pixel value obtaining module 1320, configured to obtain a pixel value of a pixel point at the same position corresponding to each pixel point in each frame of the original image in the target image;
the blur degree data determining module 1330 is configured to determine blur degree data of the target image according to a deviation degree of each pixel point in the target image between n corresponding pixel values in the n frames of original images, where the blur degree data of the target image includes a blur degree value of each pixel point in the target image.
In one embodiment, the blur level data determining module 1330 is further configured to:
and generating a blur degree image corresponding to the target image according to the blur degree data of the target image, wherein the pixel value of each pixel point in the blur degree image is the blur degree value of the corresponding pixel point in the target image.
In one embodiment, the image acquisition module 1310 is configured to:
acquiring continuous n frames of original images;
and carrying out weighted fusion on pixel points at the same position in the continuous n frames of original images so as to synthesize the continuous n frames of original images into a target image.
In one embodiment, the image acquisition module 1310 is further configured to:
the value of n is determined based on the expected global blurriness value of the target image before acquiring n consecutive frames of original images.
Exemplary embodiments of the present disclosure also provide a blurred image data set construction apparatus. Referring to fig. 14, the blurred image data set constructing apparatus 1400 may include:
a data obtaining module 1410 configured to obtain a target image and blur degree data of the target image determined according to the image blur degree determining method;
a data set construction module 1420 configured to construct a blurred image data set with the target image as a sample image and the blur degree data of the target image as a first label; the blurred image data set is used for training a blur degree perception network, and the blur degree perception network is used for determining blur degree data of an image input into it.
In one embodiment, the data obtaining module 1410 is further configured to obtain one original image of n consecutive original images for synthesizing the target image;
a data set constructing module 1420, configured to add the frame of original image as a second label to the blurred image data set; the blurred image data set is also used to train a deblurring network for deblurring images input to the deblurring network.
Exemplary embodiments of the present disclosure also provide an image deblurring apparatus. Referring to fig. 15, the image deblurring apparatus 1500 may include:
an image obtaining module 1510 configured to obtain an image to be processed synthesized by performing weighted fusion on pixel points at the same position in n consecutive frames of original images, where n is a positive integer not less than 2;
a blur degree data determining module 1520 configured to determine blur degree data of the image to be processed, taking the image to be processed as a target image, according to the image blur degree determining method;
the deblurring processing module 1530 is configured to perform deblurring processing on the image to be processed based on the blur degree data of the image to be processed, so as to obtain a deblurred image corresponding to the image to be processed.
Exemplary embodiments of the present disclosure also provide another image deblurring apparatus. Referring to fig. 16, the image deblurring apparatus 1600 may include:
an image acquisition module 1610 configured to acquire an image to be processed;
a blur degree data determining module 1620 configured to process the image to be processed by using a blur degree perception network to obtain blur degree data of the image to be processed;
a deblurring processing module 1630 configured to deblur the image to be processed based on the blur degree data of the image to be processed to obtain a deblurred image corresponding to the image to be processed;
wherein the blur degree perception network is trained with a blurred image data set constructed by the above blurred image data set construction method.
In one embodiment, the deblurring processing module 1630 is configured to:
processing the blur degree data of the image to be processed by using a feature perception network to obtain modulation parameters;
and performing deblurring processing on the image to be processed according to the modulation parameters to obtain a deblurred image corresponding to the image to be processed.
In one embodiment, the deblurring processing module 1630 is configured to:
inputting the image to be processed into the input layer of the deblurring network, inputting the modulation parameters into an intermediate layer of the deblurring network, and outputting the deblurred image through the deblurring network.
In one embodiment, the image deblurring apparatus 1600 may further include a network training module configured to:
acquiring a blurred image data set, wherein the blurred image data set comprises a sample image, a first label, and a second label, the sample image is a target image, the first label is the blur degree data of the target image, and the second label is one frame of original image among the n consecutive frames of original images used to synthesize the target image;
pre-training the blur degree perception network by using the sample image and the first label;
pre-training the deblurring network, or the deblurring network and the feature perception network, by using the sample image, the first label, and the second label;
and respectively inputting the sample image into the blur degree perception network and the deblurring network, inputting the sample blur degree data output by the blur degree perception network into the feature perception network, inputting the sample modulation parameters output by the feature perception network into the intermediate layer of the deblurring network, and fine-tuning the parameters of the blur degree perception network, the deblurring network, and the feature perception network according to the difference between the sample deblurred image output by the deblurring network and the second label.
The details of the above-mentioned parts of the apparatus have been described in detail in the method part embodiments, and thus are not described again.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product, including program code for causing an electronic device to perform the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification, when the program product is run on the electronic device. In one embodiment, the program product may be embodied as a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (17)

1. An image blur degree determination method, characterized by comprising:
acquiring n continuous frames of original images and a target image synthesized by carrying out weighted fusion on pixel points at the same position in the n continuous frames of original images, wherein n is a positive integer not less than 2;
acquiring the pixel value of each pixel point in the target image at the corresponding pixel point at the same position in each frame of the original image;
and determining the blur degree data of the target image according to the deviation degree of each pixel point in the target image among its n corresponding pixel values in the n frames of original images, wherein the blur degree data of the target image comprises the blur degree value of each pixel point in the target image.
2. The method of claim 1, further comprising:
and generating a fuzzy degree image corresponding to the target image according to the fuzzy degree data of the target image, wherein the pixel value of each pixel point in the fuzzy degree image is the fuzzy degree value of each pixel point in the target image.
3. The method according to claim 1, wherein the obtaining of the n consecutive frames of original images and the target image synthesized by performing weighted fusion on pixels at the same positions in the n consecutive frames of original images comprises:
acquiring continuous n frames of original images;
and carrying out weighted fusion on pixel points at the same position in the continuous n frames of original images so as to synthesize the continuous n frames of original images into one target image.
4. The method of claim 3, wherein prior to acquiring n consecutive frames of original images, the method further comprises:
determining a value of n based on an expected global blurriness value of the target image.
5. A blurred image data set construction method, comprising:
acquiring a target image and blur degree data of the target image determined by the image blur degree determination method according to any one of claims 1 to 4;
taking the target image as a sample image, taking the blur degree data of the target image as a first label, and constructing a blurred image data set; the blurred image data set is used for training a blur degree perception network, and the blur degree perception network is used for determining blur degree data of the image input into the blur degree perception network.
6. The method of claim 5, further comprising:
acquiring one original image in continuous n frames of original images for synthesizing the target image, and adding the original image as a second label to the blurred image data set; the blurred image data set is further used for training a deblurring network, and the deblurring network is used for deblurring an image input into the deblurring network.
7. An image deblurring method, comprising:
acquiring an image to be processed synthesized by carrying out weighted fusion on pixel points at the same position in n continuous frames of original images, wherein n is a positive integer not less than 2;
determining blur degree data of the image to be processed according to the image blur degree determination method of any one of claims 1 to 4, with the image to be processed as a target image;
and based on the blur degree data of the image to be processed, performing deblurring processing on the image to be processed to obtain a deblurred image corresponding to the image to be processed.
8. An image deblurring method, comprising:
acquiring an image to be processed;
processing the image to be processed by using a blur degree perception network to obtain blur degree data of the image to be processed;
based on the blur degree data of the image to be processed, performing deblurring processing on the image to be processed to obtain a deblurred image corresponding to the image to be processed;
wherein the blur degree perception network is trained by using the blurred image data set constructed by the blurred image data set construction method of claim 5 or 6.
9. The method according to claim 8, wherein the deblurring processing is performed on the image to be processed based on the blur degree data of the image to be processed to obtain a deblurred image corresponding to the image to be processed, and the method comprises:
processing the blur degree data of the image to be processed by using a feature perception network to obtain modulation parameters;
and deblurring the image to be processed according to the modulation parameters to obtain a deblurred image corresponding to the image to be processed.
10. The method according to claim 9, wherein the deblurring the image to be processed according to the modulation parameter to obtain a deblurred image corresponding to the image to be processed includes:
and inputting the image to be processed into an input layer of a deblurring network, inputting the modulation parameter into a middle layer of the deblurring network, and outputting the deblurring image through the deblurring network.
11. The method of claim 10, further comprising:
acquiring a blurred image data set, wherein the blurred image data set comprises a sample image, a first label and a second label, the sample image is the target image, the first label is the blur degree data of the target image, and the second label is one frame of original image in the n consecutive frames of original images for synthesizing the target image;
pre-training the blur degree perception network by using the sample image and the first label;
pre-training the deblurring network by using the sample image, the first label and the second label, or pre-training the deblurring network and the feature perception network;
inputting the sample image into the blur degree perception network and the deblurring network respectively, inputting sample blur degree data output by the blur degree perception network into the feature perception network, inputting sample modulation parameters output by the feature perception network into an intermediate layer of the deblurring network, and fine-tuning the parameters of the blur degree perception network, the deblurring network and the feature perception network according to the difference between the sample deblurred image output by the deblurring network and the second label.
12. An image blur degree determination device characterized by comprising:
the image acquisition module is configured to acquire n continuous frames of original images and a target image synthesized by performing weighted fusion on pixel points at the same positions in the n continuous frames of original images, wherein n is a positive integer not less than 2;
the pixel value acquisition module is configured to acquire the pixel value of each pixel point in the target image at the corresponding pixel point at the same position in each frame of the original image;
a blur degree data determining module configured to determine blur degree data of the target image according to a deviation degree of each pixel point between the n corresponding pixel values in the n frames of original images, where the blur degree data of the target image includes a blur degree value of each pixel point in the target image.
13. A blurred image data set construction apparatus, comprising:
a data acquisition module configured to acquire a target image and blur degree data of the target image determined by the image blur degree determination method according to any one of claims 1 to 4;
a data set construction module configured to construct a blurred image data set by using the target image as a sample image and the blur degree data of the target image as a first label; the blurred image data set is used for training a blur degree perception network, and the blur degree perception network is used for determining blur degree data of the image input into the blur degree perception network.
14. An image deblurring apparatus, comprising:
the image acquisition module is configured to acquire an image to be processed which is synthesized by carrying out weighted fusion on pixel points at the same position in n continuous frames of original images, wherein n is a positive integer not less than 2;
a blur degree data determination module configured to determine blur degree data of the image to be processed, with the image to be processed as a target image, according to the image blur degree determination method according to any one of claims 1 to 4;
and the deblurring processing module is configured to deblur the image to be processed based on the blur degree data of the image to be processed to obtain a deblurred image corresponding to the image to be processed.
15. An image deblurring apparatus, comprising:
an image acquisition module configured to acquire an image to be processed;
a blur degree data determining module configured to process the image to be processed by using a blur degree perception network to obtain blur degree data of the image to be processed;
a deblurring processing module configured to deblur the image to be processed based on the blur degree data of the image to be processed to obtain a deblurred image corresponding to the image to be processed;
wherein the blur degree perception network is trained by using the blurred image data set constructed by the blurred image data set construction method of claim 5 or 6.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 11.
17. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 11 via execution of the executable instructions.
CN202110649891.3A 2021-06-10 2021-06-10 Image blurring degree determining method, data set constructing method and deblurring method Pending CN113409203A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110649891.3A CN113409203A (en) 2021-06-10 2021-06-10 Image blurring degree determining method, data set constructing method and deblurring method

Publications (1)

Publication Number Publication Date
CN113409203A true CN113409203A (en) 2021-09-17

Family

ID=77683632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110649891.3A Pending CN113409203A (en) 2021-06-10 2021-06-10 Image blurring degree determining method, data set constructing method and deblurring method

Country Status (1)

Country Link
CN (1) CN113409203A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373196A (en) * 2021-12-31 2022-04-19 北京极豪科技有限公司 Effective acquisition region determining method, program product, storage medium, and electronic device
CN114373196B (en) * 2021-12-31 2023-09-19 天津极豪科技有限公司 Effective acquisition area determination method, program product, storage medium and electronic device
CN116012675A (en) * 2023-02-14 2023-04-25 荣耀终端有限公司 Model training method, image processing method and electronic equipment
CN116012675B (en) * 2023-02-14 2023-08-11 荣耀终端有限公司 Model training method, image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination