CN116993620A - Deblurring method and electronic equipment - Google Patents

Deblurring method and electronic equipment

Info

Publication number
CN116993620A
Authority
CN
China
Prior art keywords
image
electronic device
blurred
frame
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311109764.XA
Other languages
Chinese (zh)
Other versions
CN116993620B (en)
Inventor
邵扬
黄雅琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202311109764.XA priority Critical patent/CN116993620B/en
Publication of CN116993620A publication Critical patent/CN116993620A/en
Application granted granted Critical
Publication of CN116993620B publication Critical patent/CN116993620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The present application provides a deblurring method and an electronic device. The method can be applied to electronic devices equipped with cameras, such as mobile phones and tablet computers. By implementing the method, the electronic device can identify whether an image captured by the camera is blurred, and further determine whether it is a locally blurred image or a globally blurred image. For a locally blurred image, the electronic device can extract the blurred image blocks from the image, input them into a deblurring network to obtain clear image blocks, and backfill the clear image blocks into their corresponding positions in the original locally blurred image. This deblurs the image while avoiding processing the originally clear content of the locally blurred image into blur, thereby improving the overall deblurring effect and the shooting quality.

Description

Deblurring method and electronic equipment
Technical Field
The present application relates to the field of terminals, and in particular, to a deblurring method and an electronic device.
Background
With the development of mobile terminals and the maturation of image processing technology, users' expectations for photography on terminal devices keep rising. In practice, when there is relative motion between the camera sensor and the photographed object, the captured image may be blurred. To improve imaging quality, a device typically applies a deblurring algorithm to the blurred image.
Currently, a device generally feeds the blurred image output by the camera directly into a deblurring network. However, this tends to cause the network to process originally clear image content as blurred, so the intended deblurring effect is not achieved and image quality is instead reduced.
Disclosure of Invention
The present application provides a deblurring method and an electronic device. An electronic device implementing the method can process an image captured by the camera and determine whether it is a locally blurred image or a globally blurred image. For a locally blurred image, the electronic device can extract blurred image blocks from it, process them to obtain corresponding clear image blocks, and backfill the clear blocks into the locally blurred image. This deblurs the locally blurred image while avoiding processing its originally clear content into blur, thereby achieving the deblurring effect and improving shooting quality.
In a first aspect, the present application provides a deblurring method applied to an electronic device. The method comprises: acquiring a first image; when the first image is a locally blurred image, obtaining one or more first image blocks from the first image, and processing the one or more first image blocks to obtain an equal number of second image blocks, wherein the definition of each second image block is higher than that of the corresponding first image block; and backfilling each second image block over the corresponding first image block in the first image to obtain a fourth image, wherein the definition of the fourth image is higher than that of the first image.
By implementing the method provided in the first aspect, the electronic device may first determine whether the acquired reference frame (i.e., the first image) is a locally blurred image or a globally blurred image. For a locally blurred image, the electronic device may extract the blurred image blocks from it, input them into a deblurring network to obtain clear image blocks, and backfill the clear image blocks into the corresponding positions in the original locally blurred image.
Compared with the existing practice of feeding the locally blurred image directly into the deblurring network, first classifying the image and then feeding only its blurred image blocks into the network effectively prevents the originally clear content of the locally blurred image from being processed into blur, thereby improving the overall deblurring effect and the shooting quality.
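The crop-deblur-backfill flow described above can be illustrated with a minimal Python/NumPy sketch. The function names, the (top, left, height, width) ROI layout, and the stand-in deblurring function are all hypothetical, not part of the patent:

```python
import numpy as np

def deblur_patches(image, rois, deblur_fn):
    """Crop each ROI, run it through a deblurring function, and
    backfill the result into a copy of the original image.

    rois: list of (top, left, height, width) tuples (hypothetical layout).
    deblur_fn: stand-in for the deblurring network.
    """
    out = image.copy()
    for top, left, h, w in rois:
        patch = image[top:top + h, left:left + w]
        out[top:top + h, left:left + w] = deblur_fn(patch)
    return out

# Toy usage: "deblurring" is stubbed as a brightness lift.
img = np.zeros((8, 8), dtype=np.float32)
sharp = deblur_patches(img, [(2, 2, 3, 3)], lambda p: p + 1.0)
```

Because only the ROI regions are rewritten, every pixel outside the blurred blocks is left untouched, which is precisely how the method avoids re-processing already-clear content.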
With reference to the method provided in the first aspect, in some embodiments, the method further includes: when the first image is a globally blurred image, processing the first image to obtain a fifth image, wherein the definition of the fifth image is higher than that of the first image.
When the method provided in this embodiment is implemented and the reference frame is determined to be a globally blurred image, the electronic device may input the globally blurred image as a whole into the deblurring network to obtain a clear reference frame, so that globally blurred images can also be deblurred by this method.
With reference to the method provided in the first aspect, in some embodiments, the method further includes: determining a second image and a third image from an image stream containing the first image, wherein the second image is the first image or a frame before the first image, the third image is the first image or a frame after the first image, and the second image and the third image are not both the first image; and processing the second image and the third image to obtain a first optical flow map and one or more regions of interest (ROIs), and determining whether the first image is a locally blurred image according to the first optical flow map and the one or more ROIs.
By implementing the method provided in the above embodiment, after obtaining the reference frame, the electronic device may determine two frames adjacent to it, also referred to as target images (i.e., the second image and the third image). The electronic device may then input the two target images into a motion region discrimination network to obtain the optical flow map and one or more ROIs, and use the optical flow map and the ROIs to determine whether the reference frame is a globally blurred image or a locally blurred image.
With reference to the method provided in the first aspect, in some embodiments, determining whether the first image is a locally blurred image according to the first optical flow map and the one or more ROIs specifically includes: determining that the first image is a locally blurred image when the mean of the first optical flow map is smaller than a first threshold and the standard deviation of the first optical flow map is greater than or equal to a second threshold.
With reference to the method provided in the first aspect, in some embodiments, determining whether the first image is a locally blurred image according to the first optical flow map and the one or more ROIs specifically includes: determining that the first image is a locally blurred image when the mean of the first optical flow map is greater than or equal to the first threshold, the standard deviation of the first optical flow map is greater than or equal to the second threshold, and the mean of the non-ROI regions of the first optical flow map is smaller than the first threshold.
In combination with the method provided in the first aspect, in some embodiments, obtaining one or more first image blocks from the first image specifically includes: acquiring, from the first image, the one or more first image blocks indicated by the one or more ROIs.
By implementing the method provided in this embodiment, the electronic device can obtain ROIs from the motion region discrimination network, use the ROIs to locate the specific blurred regions of the blurred image, and then extract the blurred image blocks accordingly.
With reference to the method provided in the first aspect, in some embodiments, obtaining the one or more first image blocks indicated by the one or more ROIs from the first image specifically includes: selecting a second number of first image blocks from a first number of first image blocks indicated by a first number of ROIs, the second number being less than or equal to the first number; when the mean of every ROI region in the first optical flow map is greater than or equal to the first threshold, the second number equals the first number; when at least one ROI has a mean smaller than the first threshold, the second number is smaller than the first number. Processing the one or more first image blocks to obtain an equal number of second image blocks specifically includes: processing the second number of first image blocks to obtain a second number of second image blocks.
By implementing the method provided in this embodiment, the electronic device can screen the ROIs output by the motion region discrimination network, keeping only the valid ROIs that meet the preset requirement, and then acquire the blurred image blocks from the valid ROIs, thereby locating the blurred image content more accurately.
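The ROI screening step can be sketched as follows. This is an assumed interpretation in Python/NumPy with hypothetical names: an ROI is kept only if the mean flow magnitude inside it reaches the first threshold:

```python
import numpy as np

def screen_rois(flow_mag, rois, t1):
    """Keep only ROIs whose mean optical-flow magnitude reaches the
    first threshold t1. flow_mag is an H x W array of per-pixel flow
    magnitudes; rois are (top, left, height, width) tuples (both
    hypothetical layouts)."""
    valid = []
    for top, left, h, w in rois:
        if flow_mag[top:top + h, left:left + w].mean() >= t1:
            valid.append((top, left, h, w))
    return valid

# Toy usage: only the first ROI overlaps the moving (high-flow) area.
flow_mag = np.zeros((10, 10))
flow_mag[0:4, 0:4] = 3.0
valid = screen_rois(flow_mag, [(0, 0, 4, 4), (6, 6, 3, 3)], t1=1.0)
```

Here the second ROI sits in a static region (mean flow 0 < t1) and is discarded, so only the ROI that actually contains motion blur is passed on to the deblurring network.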
With reference to the method provided in the first aspect, in some embodiments, the method further includes: determining that the first image is a globally blurred image when the mean of the first optical flow map is greater than or equal to the first threshold and the standard deviation of the first optical flow map is smaller than the second threshold.
With reference to the method provided in the first aspect, in some embodiments, the method further includes: determining that the first image is a globally blurred image when the mean of the first optical flow map is greater than or equal to the first threshold, the standard deviation of the first optical flow map is greater than or equal to the second threshold, and the mean of the non-ROI regions of the first optical flow map is greater than or equal to the first threshold.
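The four threshold conditions stated in the embodiments above can be distilled into one decision function. The sketch below is a hedged Python rendering with hypothetical names; the remaining case (mean below the first threshold and standard deviation below the second) is not spelled out in the text, so labeling it "sharp" is my own assumption:

```python
def classify(mean_all, std_all, mean_non_roi, t1, t2):
    """Classify the reference frame from optical-flow statistics.

    mean_all / std_all: mean and standard deviation of the flow map.
    mean_non_roi: mean of the flow map over non-ROI regions.
    t1 / t2: the first and second thresholds.
    Returns 'local', 'global', or 'sharp' (the last label is an assumption).
    """
    if mean_all < t1:
        # Low overall motion: high spread means a moving subject (local blur).
        return "local" if std_all >= t2 else "sharp"
    # High overall motion (mean_all >= t1):
    if std_all < t2:
        return "global"           # uniform motion, e.g. camera shake
    # High motion and high spread: decide by the non-ROI background.
    return "local" if mean_non_roi < t1 else "global"
```

For example, `classify(3.0, 5.0, 0.5, t1=1.0, t2=2.0)` matches the condition "mean ≥ first threshold, std ≥ second threshold, non-ROI mean < first threshold" and therefore returns `"local"`.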
In combination with the method provided in the first aspect, in some embodiments, before acquiring the first image, the method further includes: detecting a photographing operation of a user; and determining the first image according to the photographing operation.
By implementing the method provided by the embodiment, the electronic device can determine the reference frame after detecting the photographing operation of the user, so that the operation frequency of the whole method is reduced, the occupation of computing resources is reduced, and the power consumption of the electronic device is further reduced.
In combination with the method provided in the first aspect, in some embodiments, determining the first image according to the photographing operation specifically includes: the first image is a frame acquired by the camera at the moment the photographing operation occurs; or, the first image is a frame acquired by the camera before the photographing operation occurs.
In combination with the method provided in the first aspect, in some embodiments, before acquiring the first image, the method further includes: displaying a shooting preview interface, and displaying images acquired by a camera in real time in the shooting preview interface; after obtaining the fourth image, the method further comprises: and displaying a fourth image in the shooting preview interface.
By implementing the method provided by the embodiment, the electronic device can also determine the reference frame from the preview stream, and display the clear image obtained by deblurring in real time in the preview interface, so that the definition of the preview image is improved, and the preview experience of the user is improved.
With reference to the method provided in the first aspect, in some embodiments, the second image is the 1st frame image before the first image, and the third image is the 1st frame image after the first image; or, the second image is the first image, and the third image is the 1st frame image after the first image; or, the second image is the 1st frame image before the first image, and the third image is the first image.
In some embodiments, the second image and the third image are images in RGGB format, and after determining the second image and the third image from the image stream containing the first image, the method further includes: determining a single-channel first gray map from the second image in RGGB format, and determining a single-channel second gray map from the third image in RGGB format. Processing the second image and the third image then specifically includes: processing the first gray map and the second gray map.
By implementing the method provided in this embodiment, the electronic device can first convert the two target images into single-channel gray maps, reducing the image size and thus the computational cost of the motion region discrimination network while improving efficiency.
After obtaining the single-channel gray maps, the electronic device can further downsample them and feed the downsampled images into the motion region discrimination network, further reducing the network's computational cost and improving efficiency.
In a second aspect, the present application provides an electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories being for storing a computer executable program which, when executed by the one or more processors, causes the electronic device to perform the method as described in the first aspect and any possible implementation of the first aspect.
In a third aspect, embodiments of the present application provide a chip system, where the chip system is applied to an electronic device, and the chip system includes one or more processors, such as an application processor, a graphics processor, an image signal processor, a digital signal processor, and so on; the chip system further comprises an input-output interface through which the chip system may receive computer instructions to cause the electronic device to perform the method as described in the first aspect and any possible implementation of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium comprising a computer program which, when run on an electronic device, causes the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
In a fifth aspect, the application provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
It will be appreciated that the electronic device provided in the second aspect, the chip system provided in the third aspect, the computer storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all configured to perform the method provided by the present application. For the advantages they achieve, reference may be made to the advantages of the corresponding method, which are not repeated here.
Drawings
FIG. 1 is a flow chart of a deblurring method provided by an embodiment of the present application;
fig. 2 is a schematic diagram of a working principle of a buffer according to an embodiment of the present application;
FIG. 3 is a flowchart of acquiring an optical flow map and a region of interest provided by an embodiment of the present application;
fig. 4 is a schematic diagram of converting a RAW image in RGGB format into a gray image according to an embodiment of the present application;
FIG. 5 is a schematic illustration of an ROI provided in an embodiment of the present application;
FIG. 6 is a flow chart of determining a global blurred image or a local blurred image provided by an embodiment of the present application;
fig. 7A is a block diagram of a motion area discrimination network according to an embodiment of the present application;
FIG. 7B is a block diagram of a deblurring network provided by embodiments of the present application;
FIG. 8 is a schematic structural diagram of the electronic device 100.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Electronic devices 100 such as mobile phones and tablet computers are provided with cameras, through which the electronic device 100 can capture images. During capture, relative motion between the camera and the photographed object can blur the images the camera outputs. To improve output quality, the electronic device 100 needs to process the camera's output with a deblurring algorithm, thereby improving image definition and the user's shooting experience.
In one embodiment, the electronic device 100 may directly input the blurred image output by the camera into a deblurring network and obtain a clear image from its output. However, in this method the deblurring network cannot tell whether the input is a locally blurred image caused by the motion of the photographed subject or a globally blurred image caused by camera shake. As a result, when processing a locally blurred image, the network easily renders originally clear content blurred, degrading the deblurring effect.
In view of this, an embodiment of the present application provides a deblurring method. The method can be applied to electronic devices equipped with cameras, such as mobile phones and tablet computers, denoted as the electronic device 100.
By implementing the deblurring method provided by the embodiment of the present application, the electronic device 100 may select a frame from the photographing stream as a reference frame and determine two target images from it. The electronic device 100 may then input the target images into a motion region discrimination network to obtain an optical flow map and one or more motion regions of interest (ROIs). From the optical flow map and the ROIs, the electronic device 100 can determine whether the reference frame corresponding to the two target images is a globally blurred image or a locally blurred image. For a globally blurred image, the electronic device 100 may input the reference frame directly into the deblurring network to obtain a clear image. For a locally blurred image, the electronic device 100 may first extract blurred image blocks from the locally blurred reference frame and input them into the deblurring network to obtain clear image blocks. The electronic device 100 then backfills the clear image blocks into the original reference frame, replacing the blurred image blocks and yielding a clear image.
By implementing the method, the electronic device 100 can avoid processing the originally clear image content in the partially blurred image into blur, thereby improving the deblurring effect and the shooting quality.
The electronic device 100 is not limited to a mobile phone or a tablet computer; it may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device. The embodiments of the present application do not particularly limit the specific type of the electronic device.
Fig. 1 is a flowchart of a deblurring method according to an embodiment of the present application.
S101, the electronic device 100 acquires a reference frame from the photographing stream, and determines a target image based on the reference frame.
The electronic device 100 captures images in real time through the camera. Taking a frame rate of 30 FPS as an example, the camera generates one image every 33.3 ms. A fixed-size buffer may be preset in the electronic device 100; after generating each frame, the camera sends it into the buffer. An image stored in the buffer may be referred to as an original image (also called a RAW image). The electronic device 100 manages the original images in the buffer with a first-in, first-out (FIFO) strategy.
Fig. 2 is a schematic diagram of a working principle of a buffer according to an embodiment of the present application.
As shown in fig. 2, the buffer preset in the electronic device 100 may have a size of 10, i.e., it can hold 10 frames of original images reported by the camera. Each time the camera reports a frame, the electronic device 100 writes it into the buffer. When the buffer is full and a new image arrives, the earliest-written original image is read out to make room for the newly received image.
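The FIFO behavior of the buffer can be sketched in a few lines of Python. `FrameBuffer` and its method are illustrative names; `collections.deque` with `maxlen` gives exactly the drop-oldest-on-overflow behavior described above:

```python
from collections import deque

class FrameBuffer:
    """Fixed-size FIFO frame buffer: when full, the oldest RAW frame
    is evicted to make room for the newest one (illustrative sketch)."""

    def __init__(self, size=10):
        self.frames = deque(maxlen=size)

    def push(self, frame):
        # deque with maxlen drops the oldest element automatically.
        self.frames.append(frame)

# Toy usage: push 12 frames into a 10-slot buffer.
buf = FrameBuffer(size=10)
for i in range(12):
    buf.push(i)
# Frames 0 and 1 have been evicted; frames 2..11 remain.
```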
Based on different purposes, the electronic device 100 may split the original images stored in the buffer into different image streams, such as a photographing stream, a preview stream, an analysis stream, and a video stream. The deblurring method provided by the embodiment of the present application can be applied to any of these streams. The following embodiments take the photographing stream as an example to describe how the electronic device 100 implements the deblurring method and thereby deblurs images in the photographing stream.
Taking the i-th frame in the photographing stream as an example, the electronic device 100 may first determine the i-th frame as the reference frame. The electronic device 100 may then determine frames adjacent to the reference frame as target images. The reference frame is also referred to as the first image.
Alternatively, the electronic device 100 may determine an nth frame (e.g., an i-1 th frame) before the reference frame and an mth frame (e.g., an i+1 th frame) after the reference frame as the target image. N and M may be the same or different. Optionally, the electronic device 100 may further determine the reference frame and the nth frame before the reference frame as the target image, or determine the reference frame and the mth frame after the reference frame as the target image, and so on, which the embodiment of the present application is not limited. Taking the i-1 st frame and the i+1 st frame as an example, the two frame images may be denoted as a target image 1 and a target image 2. Wherein the target image 1 is also referred to as a second image and the target image 2 is also referred to as a third image.
In one implementation, the electronic device 100 may determine every frame in the photographing stream as a reference frame. In another implementation, the electronic device 100 may determine reference frames in a frame-skipping manner. For example, after determining the 1st frame as a reference frame, the electronic device 100 may determine the 11th frame, 10 frames later, as the next reference frame. In this way, the electronic device 100 reduces the operating frequency of the whole method, lowers the occupation of computing resources, and thus reduces its power consumption.
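The frame-skipping selection can be sketched as a one-line predicate. The helper name is hypothetical, and 0-based indices are used, so the 1st and 11th frames of the example above correspond to indices 0 and 10:

```python
def is_reference(frame_index, interval=10):
    """Treat every `interval`-th frame as a reference frame,
    e.g. frames 0, 10, 20, ... for interval=10 (illustrative sketch)."""
    return frame_index % interval == 0
```

With `interval=1` every frame becomes a reference frame, recovering the first implementation; larger intervals trade deblurring coverage for lower power consumption.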
S102, the electronic device 100 inputs the target images into a motion region discrimination network to obtain an optical flow map and one or more regions of interest (ROIs).
Fig. 3 is a flowchart of acquiring an optical flow map and a region of interest according to an embodiment of the present application.
As shown in fig. 3, after acquiring the target images, the electronic device 100 may first convert them into gray maps. Taking target image 1 (the (i-1)-th frame) and target image 2 (the (i+1)-th frame) acquired in S101 as an example, the conversion yields gray map 1 and gray map 2, also referred to as the first gray map and the second gray map, respectively.
The image format of a RAW map may be the RGGB format. The images in the photographing stream obtained by splitting the buffer are identical to the original images in the buffer, so the reference frame and the target images that the electronic device 100 determines from the photographing stream are in RGGB format. Any frame in the photographing stream may also be referred to as a RAW image.
Fig. 4 is a schematic diagram of converting a RAW image in RGGB format into a gray image according to an embodiment of the present application.
As shown in fig. 4, the electronic device 100 may divide the RGGB-format RAW map using non-overlapping 2×2 windows. A 2×2 window covers exactly one RGGB group: R (red channel), G1 (green channel), G2 (green channel), and B (blue channel). The electronic device 100 computes the average of the four channel values in each window, producing an image composed of these averages (P), i.e., a gray map. An average value in the gray map is therefore also called a gray value.
In the RGGB-format RAW map, one channel value (R/G1/G2/B) may be regarded as one pixel; in the gray map, one gray value P is one pixel. Thus, when the initial RGGB RAW map has size H1×W1, the converted gray map has size H2×W2, where H1 = 2×H2 and W1 = 2×W2. For example, H1×W1 = 1920×1080 gives H2×W2 = 960×540.
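The 2×2 averaging described above can be written compactly with NumPy. `rggb_to_gray` is an illustrative name, and a single-channel mosaic array stands in for the RAW map:

```python
import numpy as np

def rggb_to_gray(raw):
    """Average each non-overlapping 2x2 RGGB cell (R, G1, G2, B)
    into one gray value P, halving both dimensions."""
    h, w = raw.shape
    # Group pixels into 2x2 cells, then average within each cell:
    # element [i, a, j, b] of the reshape is raw[2i+a, 2j+b].
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy usage: a 4x4 mosaic becomes a 2x2 gray map.
raw = np.arange(16, dtype=np.float32).reshape(4, 4)
gray = rggb_to_gray(raw)
```

Applied to a 1920×1080 mosaic, this produces the 960×540 gray map of the example above.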
As shown in fig. 3, after obtaining the gray scale image, the electronic device 100 may downsample the gray scale image to obtain a downsampled image, thereby further reducing the image size, reducing the computational complexity of the subsequent network, and improving the operation efficiency. At this time, the electronic device 100 may obtain the downsampled image 1 and the downsampled image 2 based on the gray map 1 and the gray map 2.
The size of the downsampled image is denoted H3×W3. Illustratively, when the electronic device 100 applies one 1/2 downsampling to gray map 1 or 2, the downsampled image has size H3×W3 = (1/2)H2 × (1/2)W2. Taking the 960×540 gray map above as an example, the image after 1/2 downsampling has size 480×270.
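The patent does not specify the downsampling method; the sketch below uses simple decimation (keeping every other pixel) purely as one assumed way to realize a 1/2 downsampling:

```python
import numpy as np

def downsample_half(gray):
    """1/2 downsampling by decimation: keep every other row and column.
    (Block averaging or another low-pass method would serve equally;
    this choice is an assumption, not taken from the patent.)"""
    return gray[::2, ::2]
```

For a 960×540 gray map this yields the 480×270 downsampled image used in the example above.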
In some embodiments, the image format of the RAW graph may alternatively be RYYB format or RGBW format, among other formats. Electronic device 100 may process the RAW image in RYYB format, RGBW format, or other formats by using the method shown in fig. 4, to obtain a corresponding gray-scale image and a downsampled image.
The electronic device 100 may then input the downsampled image into a motion area discrimination network. Through the processing of the motion region discrimination network, the electronic device 100 may obtain a light flow map and a motion region of interest (ROI). The above optical flow map is also referred to as a first optical flow map. The following embodiments will specifically describe the structure of the motion area discrimination network, and will not be expanded here.
The optical flow map is a matrix of two-dimensional vectors P(x, y). The size of the matrix can be noted as h4×w4. Generally, the size of the optical flow map matches the size of the input downsampled image, i.e., h3 = h4 and w3 = w4. Taking the 480×270 downsampled image above as an example, the size of the optical flow map is 480×270. In this case, each two-dimensional vector P(x, y) in the optical flow map corresponds to the optical flow of one pixel in the downsampled image, where x in P(x, y) represents the displacement of the pixel's optical flow in the horizontal direction and y represents its displacement in the vertical direction.
One ROI indicates one image block in the reference frame. In an embodiment of the application, the image block is a blurred image block, also referred to as a first image block. For an input set of downsampled images, the motion region discrimination network may output one or more ROIs, the number of which is also referred to as a first number. Any two ROIs may intersect, or may be independent (i.e., disjoint).
Fig. 5 is a schematic diagram of an ROI according to an embodiment of the present application. As shown in fig. 5, the a region in the (i-1)-th frame image (target image 1) is the region in which photographic subject A is located, and the b region in the (i+1)-th frame image (target image 2) is the region in which photographic subject A is located. The two frame images reflect that photographic subject A moved from the a region to the b region during photographing. In this case, the motion region discrimination network may output ROI1. ROI1 is the blurred image block generated by the motion of photographic subject A in the reference frame (i.e., the i-th frame image) corresponding to the (i-1)-th and (i+1)-th frame images. Preferably, the ROI output by the motion region discrimination network completely covers both the a region and the b region.
S103, the electronic device 100 determines whether the reference frame is a global blurred image or a local blurred image according to the optical flow map and the ROI.
Fig. 6 is a flowchart of determining a global blurred image or a local blurred image provided by an embodiment of the present application.
S201, the electronic device 100 calculates the modulus of each vector in the optical flow map.
Taking the (i, j)-th vector P(i, j) = (x, y) in the optical flow map as an example, the modulus S(i, j) of the vector is:

S(i, j) = √(x² + y²)
S202, the electronic device 100 determines an average value S_avg and a standard deviation S_std of the optical flow map.
After determining the modulus of each vector in the optical flow map, electronic device 100 may determine the average value S_avg and the standard deviation S_std of the optical flow map based on those moduli. Wherein:

S_avg = (1/(n×m)) × Σᵢ Σⱼ S(i, j)

S_std = √( (1/(n×m)) × Σᵢ Σⱼ (S(i, j) − S_avg)² )

Where n represents the number of rows of the optical flow map matrix and m represents the number of columns of the optical flow map matrix.
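The statistics of S201 and S202 can be computed in a few lines; the (n, m, 2) array layout for the optical flow map is an assumption of this sketch:

```python
import numpy as np

def flow_stats(flow):
    """flow: (n, m, 2) array of 2-D optical-flow vectors P(x, y).
    Returns (S_avg, S_std) of the per-vector moduli."""
    # Modulus of each vector: S(i, j) = sqrt(x^2 + y^2).
    s = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return s.mean(), s.std()

# Example: a uniform (3, 4) flow field has modulus 5 everywhere.
flow = np.zeros((4, 5, 2))
flow[..., 0], flow[..., 1] = 3.0, 4.0
s_avg, s_std = flow_stats(flow)
```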
S203, the electronic device 100 determines whether the reference frame is a global blurred image or a local blurred image according to the average value and the standard deviation of the optical flow map.
After determining the average value S_avg and the standard deviation S_std of the optical flow map, the electronic device 100 may compare them with preset thresholds, thereby determining whether the reference frame is a global blurred image or a local blurred image. The thresholds may be set empirically by a developer.
Specifically, electronic device 100 may compare the average value S_avg and the standard deviation S_std with an average value threshold M1 and a standard deviation threshold M2, respectively. The average value threshold M1 is also referred to as the first threshold, and the standard deviation threshold M2 is also referred to as the second threshold. When S_avg < M1 and S_std < M2, the electronic device 100 may determine that the reference frame corresponding to the target images is a clear image and perform no deblurring. S_avg ≥ M1 and S_std < M2 means that the image content of a large area of the reference frame has changed significantly. Thus, when S_avg ≥ M1 and S_std < M2, electronic device 100 may determine that the reference frame is a global blurred image.
S_avg < M1 and S_std ≥ M2 means that the image content of a partial region of the reference frame has changed significantly. Thus, when S_avg < M1 and S_std ≥ M2, electronic device 100 may determine that the reference frame is a locally blurred image. Further, the electronic device 100 may determine the average value S_roi of the moduli within each ROI. For any one ROI, when S_roi of that ROI is greater than or equal to M1, the electronic device 100 may determine that the ROI is a valid ROI; conversely, when S_roi of that ROI is less than M1, the electronic device 100 may determine that the ROI is an invalid ROI.
S_avg ≥ M1 and S_std ≥ M2 means that there may be multiple moving objects in the image, causing significant changes in the image content of the entire image. Thus, when S_avg ≥ M1 and S_std ≥ M2, electronic device 100 may calculate the average value S_Rroi of the moduli of the non-ROI areas in the optical flow map. When S_Rroi of the non-ROI areas is greater than or equal to M1, the electronic device 100 may determine that the reference frame is a global blurred image. When S_Rroi of the non-ROI areas is less than M1, the electronic device 100 may determine that the reference frame is a locally blurred image. The electronic device 100 may then determine the average value S_roi of the moduli within each ROI. Similarly, for any one ROI, when S_roi of that ROI is greater than or equal to M1, the electronic device 100 may determine that the ROI is a valid ROI; conversely, when S_roi of that ROI is less than M1, the electronic device 100 may determine that the ROI is an invalid ROI.
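Putting S201–S203 together, the decision logic can be sketched as follows; the (top, left, height, width) ROI box layout is an assumption of this sketch:

```python
import numpy as np

def classify_reference_frame(flow, roi_list, M1, M2):
    """Classify the reference frame as 'sharp', 'global_blur' or
    'local_blur' from flow statistics. flow: (n, m, 2) vectors;
    roi_list: (top, left, height, width) boxes in flow coordinates."""
    s = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    s_avg, s_std = s.mean(), s.std()
    if s_avg < M1 and s_std < M2:
        return "sharp"            # no deblurring needed
    if s_avg >= M1 and s_std < M2:
        return "global_blur"      # large-area change
    if s_avg < M1:                # s_std >= M2: partial-region change
        return "local_blur"
    # s_avg >= M1 and s_std >= M2: decide by the non-ROI average S_Rroi.
    mask = np.ones(s.shape, dtype=bool)
    for (t, l, h, w) in roi_list:
        mask[t:t + h, l:l + w] = False
    s_rroi = s[mask].mean() if mask.any() else s_avg
    return "global_blur" if s_rroi >= M1 else "local_blur"
```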
S104, when the reference frame is a global blurred image, the electronic device 100 inputs the reference frame into a deblurring network for deblurring, obtaining a clear image, i.e., a clear reference frame.
Alternatively, the electronic device 100 may directly input the reference frame into the deblurring network to perform deblurring processing, so as to obtain a clear image.
Optionally, for larger reference frames, the electronic device 100 may also segment the reference frame into blocks. For example, the electronic device 100 may segment a 1080×1920 image into 4 blocks of 540×960. Then, the electronic device 100 may input each image block obtained by segmentation into the deblurring network for processing, obtaining the clear image blocks in sequence. The electronic device 100 may then combine the above clear image blocks to obtain a clear image. The clear reference frame obtained by inputting the reference frame into the deblurring network is also referred to as a fifth image.
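The segmentation described above can be sketched as follows; the 2×2 block grid is an assumed parameter:

```python
import numpy as np

def split_into_blocks(img, rows=2, cols=2):
    """Split an H x W image into rows*cols equal blocks
    (H and W are assumed divisible by rows and cols)."""
    h, w = img.shape[:2]
    bh, bw = h // rows, w // cols
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

# Example: a 1080x1920 frame splits into four 540x960 blocks.
blocks = split_into_blocks(np.zeros((1080, 1920)))
```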
S105, when the reference frame is a local blurred image, the electronic device 100 determines blurred image blocks in the reference frame according to the ROI, and inputs the blurred image blocks into a deblurring network to deblur so as to obtain clear image blocks.
In one embodiment, after determining that the reference frame is a partially blurred image, electronic device 100 may determine one or more blurred image blocks in the reference frame directly from one or more ROIs output by the motion area discrimination network. The electronic device 100 may then input the one or more blurred image blocks into a deblurring network for deblurring to obtain one or more sharp image blocks.
In another embodiment, after determining that the reference frame is a locally blurred image, the electronic device 100 may further determine whether each of the one or more ROIs output by the motion area discrimination network is valid. The number of effective ROIs described above is also referred to as a second number. Then, the electronic device 100 may input the blurred image blocks corresponding to the effective ROIs into the deblurring network to deblur, so as to obtain clear image blocks. The above-mentioned clear image block output by the deblurring network is also referred to as a second image block.
Since the target images input to the motion region discrimination network are downsampled images, the ROI output by the motion region discrimination network is directly aligned with the h3×w3 downsampled image. Thus, when determining the blurred image blocks of the reference frame from the ROI, the electronic device 100 needs to scale the ROI so that it aligns with the h1×w1 reference frame.
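The scaling can be sketched as follows. With the 2×2 RGGB binning and one 1/2 downsampling described earlier, the scale factor between the downsampled image and the RAW reference frame is 4 in each direction; the code derives it from the two sizes. The (top, left, height, width) box layout is an assumption of this sketch:

```python
def scale_roi_to_reference(roi, down_size, ref_size):
    """Map an ROI box from downsampled-image coordinates (h3 x w3)
    to reference-frame coordinates (h1 x w1).
    roi: (top, left, height, width); down_size/ref_size: (rows, cols)."""
    t, l, h, w = roi
    sy = ref_size[0] / down_size[0]  # vertical scale, e.g. 1080/270 = 4
    sx = ref_size[1] / down_size[1]  # horizontal scale, e.g. 1920/480 = 4
    return (int(t * sy), int(l * sx), int(h * sy), int(w * sx))

# Example: an ROI on the 270x480 flow/downsampled grid mapped to 1080x1920.
ref_roi = scale_roi_to_reference((10, 20, 30, 40), (270, 480), (1080, 1920))
```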
S106, the electronic device 100 backfills the clear image block to the reference frame to replace the original blurred image block, and the clear reference frame is obtained.
After obtaining the clear image block, the electronic device 100 may backfill the clear image block to the reference frame to replace the blurred image block in the original reference frame, thereby obtaining the clear reference frame. The clear reference frame obtained by replacing the original blurred image blocks by backfilling the clear image blocks to the reference frame is also referred to as a fourth image. Optionally, after the backfilling is completed, the electronic device 100 may perform image fusion on the edges of the backfilled clear image block to avoid edge aliasing.
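The backfill-and-fuse step might look like the following; the application only states that the edges are fused, so the linear feathering used here is an assumed choice:

```python
import numpy as np

def backfill_block(ref, sharp_block, top, left, feather=2):
    """Paste a deblurred block back into the reference frame, linearly
    blending `feather` border pixels to avoid visible seams."""
    out = ref.astype(np.float64)
    h, w = sharp_block.shape[:2]
    # Blend weight: 1 in the interior, ramping down toward the block edges.
    weight = np.ones((h, w))
    for i in range(feather):
        a = (i + 1) / (feather + 1)
        weight[i, :] = np.minimum(weight[i, :], a)
        weight[h - 1 - i, :] = np.minimum(weight[h - 1 - i, :], a)
        weight[:, i] = np.minimum(weight[:, i], a)
        weight[:, w - 1 - i] = np.minimum(weight[:, w - 1 - i], a)
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = weight * sharp_block + (1 - weight) * region
    return out

# Example: backfill a 10x10 sharp block into a 20x20 blurred frame.
result = backfill_block(np.zeros((20, 20)), np.ones((10, 10)), 5, 5)
```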
Fig. 7A is a block diagram of a motion area discrimination network according to an embodiment of the present application.
As shown in FIG. 7A, the motion region discrimination network includes convolution layers C1~C13, downsampling layers D1~D6, and upsampling layers U1~U4. C1~C4 and D1~D4 are alternately arranged; D4 is followed by three convolution layers C5~C7; C7 is followed by upsampling layers U1~U4 and convolution layers C8~C11, which are alternately arranged. Downsampled images 1 and 2, obtained from target images 1 and 2, are processed through C1~D4, C5~C7, and U1~C11 to produce the optical flow map.
As shown in FIG. 7A, downsampling layers D5~D6 and convolution layers C12~C13 may also be connected after C11, alternately arranged. The data output by C11 is processed through D5~C13 to obtain the ROI.
Fig. 7B is a block diagram of a deblurring network according to an embodiment of the present application.
As shown in fig. 7B, the deblurring network includes convolution layers C21~C29, downsampling layers D21 and D22, and upsampling layers U21 and U22. C21, C22 and D21, D22 are alternately arranged; D22 is followed by four convolution layers C23~C26; C26 is followed by upsampling layers U21, U22 and convolution layers C27, C28, alternately arranged; C28 is followed by convolution layer C29.
The input to the deblurring network includes optical flow information and blurred images. Specifically, after determining that the reference frame is a global blurred image based on the optical flow map and the ROI output from the motion area discrimination network, the input to the deblurring network includes the above optical flow map (optical flow information) and the reference frame (blurred image). At this time, the output of the deblurring network is a clear reference frame (clear image).
When the reference frame is determined to be a partially blurred image based on the optical flow map and the ROI output from the motion region discrimination network, the input to the deblurring network includes an optical flow block (optical flow information) and a blurred image block (blurred image). Wherein, the optical flow block refers to a local optical flow graph extracted from the optical flow graph according to the ROI, which is also called an optical flow subgraph. An optical flow sub-image in an optical flow diagram corresponding to an ROI and a blurred image block in a reference frame corresponding to the ROI are a set of inputs to the deblurring network. When the motion region discrimination network outputs multiple ROIs, the electronic device 100 may determine multiple sets of optical flow subgraphs and blurred image blocks, i.e., multiple sets of deblurred network input data. At this time, the electronic device 100 may input the optical flow subgraph and the blurred image blocks into the deblurring network in a group-by-group manner, resulting in corresponding sharp image blocks (sharp images).
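Assembling the per-ROI input groups can be sketched as follows; the (top, left, height, width) box layout and the ×4 scale between the flow map and the reference frame are assumptions of this sketch:

```python
import numpy as np

def build_deblur_inputs(flow, ref, rois, scale=4):
    """Pair each ROI's optical flow subgraph with the matching blurred
    image block in the reference frame: one (flow_block, image_block)
    tuple per ROI, i.e., one group of deblurring-network input data."""
    pairs = []
    for (t, l, h, w) in rois:
        flow_block = flow[t:t + h, l:l + w]                      # optical flow subgraph
        img_block = ref[t * scale:(t + h) * scale,
                        l * scale:(l + w) * scale]               # blurred image block
        pairs.append((flow_block, img_block))
    return pairs

# Example: one ROI on a 270x480 flow map paired against a 1080x1920 frame.
pairs = build_deblur_inputs(np.zeros((270, 480, 2)),
                            np.zeros((1080, 1920)),
                            [(0, 0, 10, 10)])
```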
Referring to the description of S101, the deblurring method provided by the embodiment of the present application may be applied to a photographing stream.
In one embodiment, the electronic device 100 may determine each frame image in the photographing stream as a reference frame, further determine a blurred image block in the reference frame, and perform deblurring processing on the blurred image block to obtain a deblurred clear reference frame.
Alternatively, the electronic device 100 may determine one frame in the photographing stream at intervals (i.e., frame extraction) as a reference frame, determine the blurred image blocks in that reference frame, and then perform deblurring processing on the blurred image blocks to obtain a clear reference frame. In this way, the electronic device 100 can reduce the operation frequency of the algorithm, reduce the occupation of computing resources, and save power consumption.
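The frame-extraction variant can be sketched in one line; the extraction period is an assumed parameter, not fixed by the application:

```python
def pick_reference_frames(num_frames, interval=5):
    """Select every `interval`-th frame index of the stream as a
    reference frame (the period is an assumed parameter)."""
    return list(range(0, num_frames, interval))

# Example: 12 frames with an extraction period of 5.
chosen = pick_reference_frames(12, 5)
```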
In addition, in some embodiments, the electronic device 100 may, after detecting a photographing operation, determine the specific frame in the photographing stream corresponding to that photographing operation as the reference frame. For example, the electronic device 100 may determine, according to the time T1 when the photographing operation is detected, that the i-th frame image corresponding to time T1 in the photographing stream is the reference frame; alternatively, the electronic device 100 may determine that a frame before the i-th frame image corresponding to time T1 is the reference frame, for example, the 3rd frame before it. In this case, the electronic device 100 does not need to process each frame in the photographing stream, nor process periodically, which further reduces the operation frequency of the algorithm, reduces the occupation of computing resources, and saves power consumption.
The method is not limited to the photographing stream: the deblurring method provided by the embodiment of the present application may also be applied to a preview stream. Similarly, the electronic device 100 may determine each frame image in the preview stream as a reference frame, or may determine one frame in the preview stream at intervals as a reference frame, then determine the blurred image blocks in that reference frame, and perform deblurring processing on the blurred image blocks to obtain a clear image.
In the scene applied to the preview stream, the electronic device 100 may also display the clear image obtained by deblurring in real time in the preview interface, so as to improve the definition of the preview image and improve the preview experience of the user.
Fig. 8 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data.
The charge management module 140 is configured to receive a charge input from a charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information. The display screen 194 is used to display images, videos, and the like.
In the embodiment of the present application, the electronic device 100 may display the image acquired by the camera and the user interaction interface of the electronic device 100 through the GPU, the display screen 194, and the display function provided by the application processor. In embodiments applied to preview streams, the electronic device 100 may directly display the deblurred sharp image in the capture preview interface. In other embodiments that do not apply to preview streams, the electronic device 100 may display the image that is not deblurred in the capture preview interface. In some embodiments, after detecting the shooting operation, the electronic device 100 displays, on a shooting software gallery interface, a clear image corresponding to the shooting operation that has undergone deblurring.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like. Wherein the camera 193 is used to capture images. In an embodiment of the present application, the camera 193 captures images in real time, and the captured images are RAW images arranged in RGGB mode. The reference frames, target images, and images in the various image streams used in the subsequent methods are all acquired by a camera 193.
The digital signal processor is used for processing digital signals, and the video codec is used for compressing or decompressing digital video.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information and continuously perform self-learning by referring to a biological neural network structure. The NPU can implement applications such as intelligent cognition of the electronic device 100. In the embodiment of the present application, the electronic device 100 may execute the motion area discrimination network and the deblurring network algorithm through the NPU, so as to obtain the deblurred image.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like. The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
In an embodiment of the present application, application code corresponding to the deblurring method may be stored in the NVM. Upon deblurring, application code corresponding to the deblurring method may be loaded into RAM.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. The gyro sensor 180B may be used to determine the angular velocity of the electronic device 100 about three axes (i.e., x, y, and z axes). The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor, and the opening and closing of the flip cover can be detected by the magnetic sensor 180D. The distance sensor 180F is used to measure a distance. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector that may be used to detect a scene of the user holding the electronic device 100 in close proximity to the user. The ambient light sensor 180L is used to sense ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is for detecting temperature. The bone conduction sensor 180M may acquire a vibration signal. The touch sensor 180K, also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The keys 190 include a power-on key, a volume key, etc. The motor 191 may generate a vibration cue. The indicator 192 may be an indicator light. The SIM card interface 195 is used to connect a SIM card.
As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items. As used in the above embodiments, the term "when" may be interpreted to mean "if," "after," "in response to determining," or "in response to detecting," depending on the context. Similarly, the phrase "when it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined," "in response to determining," "when (a stated condition or event) is detected," or "in response to detecting (a stated condition or event)," depending on the context.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program to instruct related hardware, the program may be stored in a computer readable storage medium, and the program may include the above-described method embodiments when executed. And the aforementioned storage medium includes: ROM or random access memory RAM, magnetic or optical disk, etc.

Claims (17)

1. A deblurring method for use in an electronic device, the method comprising:
acquiring a first image;
when the first image is a locally blurred image, obtaining one or more first image blocks from the first image, and processing the one or more first image blocks to obtain an equal number of second image blocks, the definition of the second image blocks being higher than that of the first image blocks;
and backfilling the second image block into a corresponding first image block in the first image to obtain a fourth image, wherein the definition of the fourth image is higher than that of the first image.
2. The method according to claim 1, wherein the method further comprises:
When the first image is a global fuzzy image, the first image is processed to obtain a fifth image, and the definition of the fifth image is higher than that of the first image.
3. The method according to claim 2, wherein the method further comprises:
determining a second image and a third image from an image stream comprising the first image, wherein the second image is the first image or a frame of image before the first image, the third image is the first image or a frame of image after the first image, and the second image and the third image are not the first image at the same time;
and processing the second image and the third image to obtain a first optical flow map and one or more regions of interest (ROIs), and determining whether the first image is a locally blurred image according to the first optical flow map and the one or more regions of interest (ROIs).
4. A method according to claim 3, wherein said determining whether said first image is a locally blurred image according to said first optical flow map and said one or more regions of interest ROIs specifically comprises:
and when the average value of the first optical flow map is smaller than a first threshold and the standard deviation of the first optical flow map is greater than or equal to a second threshold, determining that the first image is a locally blurred image.
5. A method according to claim 3, wherein said determining whether said first image is a locally blurred image according to said first optical flow map and said one or more regions of interest ROIs specifically comprises:
and when the average value of the first optical flow map is greater than or equal to a first threshold, the standard deviation of the first optical flow map is greater than or equal to a second threshold, and the average value of non-ROI areas in the first optical flow map is smaller than the first threshold, determining that the first image is a locally blurred image.
6. The method according to any one of claims 3-5, wherein said obtaining one or more first image blocks from said first image, in particular comprises:
one or more first image blocks indicated by the one or more ROIs are acquired from the first image.
7. The method according to claim 6, wherein said obtaining one or more first image blocks from said first image indicated by said one or more ROIs, in particular comprises:
selecting a second number of first image blocks from the first number of first image blocks indicated by a first number of ROIs, the second number being equal to or less than the first number;
Wherein the second number is equal to the first number when the average value of every ROI in the first optical flow map is greater than or equal to a first threshold; the second number is smaller than the first number when there is at least one ROI whose average value is smaller than the first threshold;
the processing the one or more first image blocks to obtain an equal amount of second image blocks specifically includes: and processing the second number of first image blocks to obtain the second number of second image blocks.
8. A method according to claim 3, characterized in that the method further comprises:
and when the average value of the first optical flow map is greater than or equal to a first threshold and the standard deviation of the first optical flow map is smaller than a second threshold, determining that the first image is a global blurred image.
9. A method according to claim 3, characterized in that the method further comprises:
and when the average value of the first optical flow map is greater than or equal to a first threshold, the standard deviation of the first optical flow map is greater than or equal to a second threshold, and the average value of non-ROI areas in the first optical flow map is greater than or equal to the first threshold, determining that the first image is a global blurred image.
10. The method of claim 1, wherein prior to the acquiring the first image, the method further comprises: detecting shooting operation of a user; the first image is determined according to the photographing operation.
11. The method according to claim 10, wherein the first image is determined according to the photographing operation, specifically comprising: the first image is a frame of image acquired by a camera at the moment of shooting operation; or the first image is a frame of image acquired by the camera before the shooting operation occurs.
12. The method of claim 1, wherein prior to the acquiring the first image, the method further comprises: displaying a shooting preview interface, wherein images acquired by a camera are displayed in real time in the shooting preview interface;
after the fourth image is obtained, the method further includes: and displaying the fourth image in the shooting preview interface.
13. The method according to claim 3, wherein the second image is the first image or a frame of image before the first image, the third image is the first image or a frame of image after the first image, and the second image and the third image are not both the first image, specifically comprising:
the second image is the 1st frame image before the first image, and the third image is the 1st frame image after the first image;
or the second image is the first image, and the third image is the 1st frame image after the first image;
or the second image is the 1st frame image before the first image, and the third image is the first image.
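The three pairings enumerated above can be illustrated with a small, hypothetical helper: around the blurred frame at index `idx`, prefer the (previous, next) pair and fall back to the first image itself at a stream boundary, so the two reference frames are never both the first image. The function and its stream layout are assumptions for illustration.

```python
def pick_reference_frames(stream, idx):
    """Pick the second and third images around the blurred frame stream[idx]."""
    assert len(stream) > 1, "need at least one neighbouring frame"
    second = stream[idx - 1] if idx > 0 else stream[idx]
    third = stream[idx + 1] if idx + 1 < len(stream) else stream[idx]
    return second, third

frames = ["f0", "f1", "f2"]
print(pick_reference_frames(frames, 1))  # ('f0', 'f2'): previous and next frame
print(pick_reference_frames(frames, 0))  # ('f0', 'f1'): second image is the first image
print(pick_reference_frames(frames, 2))  # ('f1', 'f2'): third image is the first image
```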
14. The method of claim 3, wherein the second image and the third image are images in RGGB format, and wherein after determining the second image and the third image from the image stream containing the first image, the method further comprises:
determining a single-channel first gray-scale image according to the second image in RGGB format, and determining a single-channel second gray-scale image according to the third image in RGGB format;
the processing the second image and the third image specifically includes: processing the first gray-scale image and the second gray-scale image.
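One plausible reading of the RGGB-to-gray conversion above (the claim does not fix the exact weighting) is to collapse each 2x2 Bayer tile (R, Gr, Gb, B) into a single averaged value, yielding a half-resolution single-channel image suitable for optical-flow estimation. This is a sketch under that assumption, not the patent's stated method.

```python
import numpy as np

def rggb_to_gray(raw):
    """Average each 2x2 RGGB tile of a Bayer mosaic into one gray pixel."""
    r = raw[0::2, 0::2].astype(np.float32)   # red samples
    g1 = raw[0::2, 1::2].astype(np.float32)  # green samples on red rows
    g2 = raw[1::2, 0::2].astype(np.float32)  # green samples on blue rows
    b = raw[1::2, 1::2].astype(np.float32)   # blue samples
    return (r + g1 + g2 + b) / 4.0

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
gray = rggb_to_gray(raw)
print(gray.shape)         # (2, 2): half resolution in each dimension
print(float(gray[0, 0]))  # 2.5 = (0 + 1 + 4 + 5) / 4
```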
15. An electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories being configured to store a computer program that, when executed by the one or more processors, causes the method of any of claims 1-14 to be performed.
16. A chip system for application to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause performance of the method of any of claims 1-14.
17. A computer readable storage medium comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the execution of the method according to any one of claims 1-14.
CN202311109764.XA 2023-08-31 2023-08-31 Deblurring method and electronic equipment Active CN116993620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311109764.XA CN116993620B (en) 2023-08-31 2023-08-31 Deblurring method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311109764.XA CN116993620B (en) 2023-08-31 2023-08-31 Deblurring method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116993620A true CN116993620A (en) 2023-11-03
CN116993620B CN116993620B (en) 2023-12-15

Family

ID=88523273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311109764.XA Active CN116993620B (en) 2023-08-31 2023-08-31 Deblurring method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116993620B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN110111282A (en) * 2019-05-09 2019-08-09 杭州电子科技大学上虞科学与工程研究院有限公司 A kind of video deblurring method based on motion vector and CNN
CN113284080A (en) * 2021-06-17 2021-08-20 Oppo广东移动通信有限公司 Image processing method and device, electronic device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN110111282A (en) * 2019-05-09 2019-08-09 杭州电子科技大学上虞科学与工程研究院有限公司 A kind of video deblurring method based on motion vector and CNN
CN113284080A (en) * 2021-06-17 2021-08-20 Oppo广东移动通信有限公司 Image processing method and device, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KOH, Y. J. et al.: "Reliable Optical Flow Estimation in Motion-Blurred Regions", 15th IEEE International Workshop on Multimedia Signal Processing (MMSP), pages 396-401 *
ZHANG Zhonghan et al.: "Target Edge Detection with the Anisotropic Optical Flow Method", Journal of Shijiazhuang Tiedao University (Natural Science Edition), no. 2, pages 108-114 *

Also Published As

Publication number Publication date
CN116993620B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN108898567B (en) Image noise reduction method, device and system
CN113592887B (en) Video shooting method, electronic device and computer-readable storage medium
US20090066693A1 (en) Encoding A Depth Map Into An Image Using Analysis Of Two Consecutive Captured Frames
CN113706414B (en) Training method of video optimization model and electronic equipment
US9058655B2 (en) Region of interest based image registration
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN115061770B (en) Method and electronic device for displaying dynamic wallpaper
WO2024011976A1 (en) Method for expanding dynamic range of image and electronic device
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN116916151B (en) Shooting method, electronic device and storage medium
CN115633262B (en) Image processing method and electronic device
CN113744139A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116993620B (en) Deblurring method and electronic equipment
CN116055895A (en) Image processing method and related device
CN115908120A (en) Image processing method and electronic device
CN115150542B (en) Video anti-shake method and related equipment
CN114827442B (en) Method for generating image and electronic equipment
CN112752086A (en) Image signal processor, method and system for environment mapping
CN114970576A (en) Identification code identification method, related electronic equipment and computer readable storage medium
CN108431867B (en) Data processing method and terminal
CN116668773B (en) Method for enhancing video image quality and electronic equipment
CN111353929A (en) Image processing method and device and electronic equipment
CN115767287B (en) Image processing method and electronic equipment
CN116012262B (en) Image processing method, model training method and electronic equipment
CN116109828B (en) Image processing method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant