CN111968052B - Image processing method, image processing apparatus, and storage medium - Google Patents

Image processing method, image processing apparatus, and storage medium

Info

Publication number
CN111968052B
Authority
CN
China
Prior art keywords
image
aberration
region
weight
acquisition device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010803059.XA
Other languages
Chinese (zh)
Other versions
CN111968052A (en)
Inventor
万韶华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202010803059.XA priority Critical patent/CN111968052B/en
Publication of CN111968052A publication Critical patent/CN111968052A/en
Application granted granted Critical
Publication of CN111968052B publication Critical patent/CN111968052B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, and a storage medium. The image processing method is applied to an electronic device in which an image acquisition apparatus is installed, and includes: acquiring an original image acquired by the image acquisition device, the original image being an image with aberration at pixel positions; and inputting the original image into a deep convolutional neural network model to obtain a processed image, the deep convolutional neural network being obtained by training based on an aberration function of the image acquisition device. According to the embodiments of the present disclosure, the aberration function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, the deep convolutional neural network is trained based on the obtained aberration function, and the trained deep convolutional neural network is used to remove aberration from aberration-affected images acquired by the image acquisition device, thereby improving image processing quality.

Description

Image processing method, image processing apparatus, and storage medium
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to an image processing method, an image processing apparatus, and a storage medium.
Background
With the rapid development of intelligent terminal technology, intelligent terminals have become increasingly common in people's work and life, and to better meet users' needs, their performance keeps improving in every respect. Users can shoot with a terminal anytime and anywhere, which is highly convenient, so the shooting performance of the terminal is an aspect of particular interest.
The shooting performance of a terminal is directly reflected in the quality of the captured image. The larger the terminal's sensor, the more light each unit area of pixels receives when a picture is taken, and the better the imaging quality. The larger the camera aperture, the more light passes through per unit time and the shorter the required exposure time, since the aperture ultimately determines how much light reaches the sensor. Therefore, for a terminal camera, a larger aperture generally means better image quality.
In pursuit of better image quality, terminals are being equipped with ever larger image sensors and apertures. At the same time, the optical aberration of a terminal with a large image sensor and a large aperture blurs the captured image, degrading the shooting result and the user experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, an image processing apparatus, and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method applied to an electronic device in which an image acquisition apparatus is installed, the image processing method including: acquiring an original image acquired by the image acquisition device, wherein the original image is an image with aberration at pixel positions; and inputting the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is obtained by training based on an aberration function of the image acquisition device.
In one embodiment, the deep convolutional neural network is trained based on the aberration function in the following manner: acquiring an aberration reference image group acquired by an image acquisition device having the same attributes as the installed image acquisition device, wherein each aberration reference image in the aberration reference image group includes pixels with aberration at pixel positions; acquiring a clear sample image group, wherein the clear sample images included in the clear sample image group are images without aberration; determining the aberration function based on the aberration reference image group, and convolving the aberration function with the clear sample images in the clear sample image group to obtain a blurred sample image group; and forming sample image pairs from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network.
In an embodiment, the images in the aberration reference image group are point light source images captured by an image acquisition device having the same attributes as the installed image acquisition device; determining the aberration function based on the aberration reference image group includes: measuring all pixels in the point light source image at which aberration occurs to obtain the aberrated pixel positions; and mapping the pixel positions to pixel brightness, and normalizing the pixel brightness of all the pixels to obtain the aberration function corresponding to the point light source image.
In an embodiment, after obtaining the processed image, the method further comprises: and carrying out fusion processing on the original image and the processed image to obtain a fused image.
In an embodiment, the fusing the original image and the processed image to obtain a fused image includes: determining a first region and a second region in the original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold value, and the second region is a strong edge region with the pixel gradient larger than a second gradient threshold value; determining a first weight corresponding to the first region and a second weight corresponding to the second region; and carrying out fusion processing on the original image and the processed image based on the first weight and the second weight to obtain a fused image.
In an embodiment, the determining the first weight corresponding to the first area and the second weight corresponding to the second area includes: determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is larger than the first weight of the first region corresponding to the processed image; and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus applied to an electronic device in which an image acquisition apparatus is installed, the image processing apparatus including: an acquisition module configured to acquire an original image acquired by the image acquisition device, wherein the original image is an image with aberration at pixel positions; and a processing module configured to input the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is obtained by training based on an aberration function of the image acquisition device, the input of the deep convolutional neural network is a blurred sample image, and the output is a clear sample image.
In an embodiment, the deep convolutional neural network is obtained by training based on the point spread function in the following manner: acquiring an aberration reference image group acquired by an image acquisition device having the same attributes as the installed image acquisition device, wherein each aberration reference image in the aberration reference image group includes pixels with aberration at pixel positions; acquiring a clear sample image group, wherein the clear sample images included in the clear sample image group are images without aberration; determining the aberration function based on the aberration reference image group, and convolving the aberration function with the clear sample images in the clear sample image group to obtain a blurred sample image group; and forming sample image pairs from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network, wherein the input of the deep convolutional neural network is a blurred sample image and the output is a clear sample image.
In an embodiment, the images in the aberration reference image group are point light source images captured by an image acquisition device having the same attributes as the installed image acquisition device; determining the aberration function based on the aberration reference image group includes: measuring all pixels in the point light source image at which aberration occurs to obtain the aberrated pixel positions; and mapping the pixel positions to pixel brightness, and normalizing the pixel brightness of all the pixels to obtain the aberration function corresponding to the point light source image.
In an embodiment, the image processing apparatus further includes: and the fusion module is used for carrying out fusion processing on the original image and the processed image to obtain a fused image.
In an embodiment, the fusion module performs fusion processing on the original image and the processed image in the following manner, so as to obtain a fused image: determining a first region and a second region in the original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold value, and the second region is a strong edge region with the pixel gradient larger than a second gradient threshold value; determining a first weight corresponding to the first region and a second weight corresponding to the second region; and carrying out fusion processing on the original image and the processed image based on the first weight and the second weight to obtain a fused image.
In an embodiment, the fusing module determines the first weight corresponding to the first area and the second weight corresponding to the second area by adopting the following manner: determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is larger than the first weight of the first region corresponding to the processed image; and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
According to a third aspect of embodiments of the present disclosure, there is provided an image processing apparatus including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the image processing method described in any of the foregoing embodiments.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the image processing method described in any of the foregoing embodiments.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: the aberration function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, the deep convolutional neural network is trained based on the obtained aberration function, and the trained deep convolutional neural network is used to remove aberration from aberration-affected images acquired by the image acquisition device, so that image processing quality can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic view showing aberrations occurring in photographing by an image pickup device according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flow chart illustrating a convolutional neural network training, according to an exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram of an image processing apparatus according to still another exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram of an apparatus according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
In people's work and life, users take photos with mobile terminals anytime and anywhere, which greatly facilitates acquiring and recording information. Improving the photographing performance of terminals has therefore drawn increasing attention in the industry.
The shooting performance of a terminal is directly reflected in the quality of the captured image. The larger the terminal's sensor, the more light each unit area of pixels receives when a picture is taken, and the better the imaging quality. The larger the camera aperture, the more light passes through per unit time and the shorter the required exposure time, since the aperture ultimately determines how much light reaches the sensor. Therefore, for a terminal camera, a larger aperture generally means better image quality.
In pursuit of better image quality, terminals are being equipped with ever larger image sensors and apertures. At the same time, the optical aberration of a terminal with a large image sensor and a large aperture blurs the captured image, degrading the shooting result and the user experience.
Fig. 1 is a schematic view showing aberration occurring in photographing by an image acquisition device according to an exemplary embodiment. Fig. 1 shows an image obtained when a flat sheet is photographed at close range by a terminal equipped with a 1/1.33-inch image sensor and an f/1.69 aperture. As shown in fig. 1, the degree of blurring increases from the center of the image toward its edges; the blurring at the image edges seriously reduces the resolution of the image, yields a poor imaging result, and degrades the user experience.
The imaging of a real optical system deviates from the result predicted by Gaussian optics; this deviation of actual optical imaging from paraxial imaging is called aberration. Paraxial rays are rays incident near the optical axis at a small angle to it, the angle approaching zero. After passing through the optical system, paraxial rays may be regarded as intersecting at a single point, whereas non-paraxial rays passing through the lens cannot be focused to a single point on the imaging surface.
The mathematical model of optical aberration can be described by an aberration function, for example by a point spread function (PSF), which is the impulse response of a focusing optical system. Functionally, it is the spatial-domain form of the imaging system's optical transfer function and is an indicator of imaging quality. After imaging, a point light source spreads and blurs into a speckle whose shape and intensity can be described by the PSF.
When the scene is regarded as a collection of discrete points of different intensities, the captured point light source image is the sum of the PSFs of all of these points. The PSF is determined by the image acquisition device itself, so the imaging characteristics of the acquisition process can be described once the PSF of the device is determined; these characteristics can be expressed by a convolution equation, and determining the PSF of the image acquisition device is therefore of great significance for image processing.
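To make this convolution model concrete, the following sketch (an illustrative assumption, not part of the patent text) blurs a sharp image by convolving it with a normalized PSF; the Gaussian kernel and random image merely stand in for a measured PSF and a real sharp sample.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_psf(sharp: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Imaging model: observed image = sharp image convolved with the PSF."""
    kernel = psf / psf.sum()            # normalize so total brightness is preserved
    return fftconvolve(sharp, kernel, mode="same")

# Illustrative stand-ins: a Gaussian spot for the PSF, random data for the sharp image.
coords = np.arange(-7, 8)
xx, yy = np.meshgrid(coords, coords)
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
sharp = np.random.rand(256, 256)        # single-channel sharp sample image
blurred = apply_psf(sharp, psf)
```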
In the related art, deblurring techniques are divided into non-blind deblurring and blind deblurring, depending on whether the point spread function is known. Non-blind deblurring includes methods such as inverse filtering, Wiener filtering, and least-squares filtering, but these models are overly simple, and the restored images suffer from problems such as heavy noise and severe loss of edge information. Image deblurring based on a hyper-Laplacian prior restores image edges well, but the algorithm is computationally inefficient.
Thus, non-blind deblurring, in which the point spread function of the image is known, suffers from poor edge recovery and low computational efficiency. With the breakthrough progress of deep learning on many computer vision problems, many researchers have applied convolutional neural networks to image deblurring with excellent results, but problems such as complex network training and laborious data collection remain. Some approaches estimate the blur kernel of an image block with a convolutional neural network and obtain the motion blur kernels of different image points by optimizing a Markov random field model; obtaining a restored image by deconvolution with the estimated motion blur kernel is cumbersome in practical applications.
In blind deblurring methods in the related art, the PSF is determined by estimation, for example from prior knowledge, from the degraded image of a known point in the original scene, or from error-parameter curve analysis. Such estimation is complex and difficult to implement, cannot distinguish between different image acquisition devices, and therefore has low accuracy; image processing with an estimated PSF gives poor results.
Accordingly, the present disclosure provides an image processing method in which a point spread function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, and a deep convolutional neural network is trained based on the obtained point spread function. The trained deep convolutional neural network is then used to remove aberration from aberration-affected images acquired by the image acquisition device, improving image processing quality.
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the image processing method is applied to an electronic device in which an image acquisition apparatus is installed, and includes the following steps.
In step S101, an original image acquired by an image acquisition device is acquired.
In the embodiments of the present disclosure, the original image acquired by the image acquisition device may be understood as an unprocessed image, and may be generally understood as a blurred image in which aberration occurs at a pixel position.
In step S102, the original image is input into a deep convolutional neural network model, resulting in a processed image.
The deep convolutional neural network in the embodiments of the present disclosure is obtained by training based on an aberration function of the image acquisition device. The aberration function may be represented by a point spread function (PSF) corresponding to images that have aberration at pixel positions and are acquired by an image acquisition device with the same attributes as the installed one. The PSF describes the extent to which pixels spread when a point light source in the scene is imaged by the camera. The brightness of a pixel spreads from its center to the surrounding pixels; the degree of spreading at each pixel is related to the distance between that pixel and the central pixel of the image acquisition device, and the greater this distance, the stronger the spreading and the lower the sharpness. The brightness value of each pixel in the original image acquired by the image acquisition device is the brightness obtained after the light reflected by the photographed object is mapped onto the corresponding pixel of the original image, and pixels at the image edges are blurred to a large degree. The embodiments of the present disclosure are described below taking a PSF as the aberration function by way of example; it will be appreciated that the aberration function of the image acquisition device may also be described by other functions.
In the embodiments of the present disclosure, the PSF describes the shape and intensity of the speckle formed when a point light source is imaged by the image acquisition device. Because the PSF is determined with an image acquisition device having the same attributes as the installed one, it reflects the parameter attributes of that device. The attributes of the image acquisition device include parameters such as the image sensor and the aperture with which it is equipped; image acquisition devices fitted with image sensors and apertures of different performance acquire images with different aberrations.
PSF information measured at basic pixel points can be used to model the PSF of the remaining parts of the whole picture, so that different correction strengths can be applied to different fields of view and the corresponding PSF can be determined according to the performance of the image acquisition device. Determining the PSF with an image acquisition device that has the same attributes as the installed one improves the accuracy of the PSF and provides accurate data support for the subsequent image processing.
Training the deep convolutional neural network model with a PSF determined by an image acquisition device having the same attributes as the installed one allows the PSF to embody the parameter attributes of the device, so that the trained deep convolutional neural network can then be used to remove the aberration of the original image. The deep convolutional neural network in the present disclosure may include layers of different types, such as convolutional layers, batch normalization layers, and activation layers; by arranging these layers, feature extraction, feature fitting, and the like in model learning are realized.
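As a rough illustration of such a network, the sketch below stacks convolution, batch-normalization, and activation layers and runs an original image through it. The architecture, residual connection, and tensor sizes are assumptions for illustration only, not the network specified by the patent.

```python
import torch
import torch.nn as nn

class DeblurCNN(nn.Module):
    """A minimal stack of conv / batch-norm / activation layers (hypothetical)."""
    def __init__(self, channels: int = 3, width: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Predict a correction and add it to the input (an assumed residual design).
        return x + self.net(x)

model = DeblurCNN().eval()
original = torch.rand(1, 3, 256, 256)   # stand-in for an original image with aberration
with torch.no_grad():
    processed = model(original)
```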
According to the embodiments of the present disclosure, the aberration function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, the deep convolutional neural network is trained based on the obtained aberration function, and the trained deep convolutional neural network is used to remove aberration from aberration-affected images acquired by the image acquisition device, so that image processing quality can be improved.
Fig. 3 is a flowchart illustrating training of a convolutional neural network according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the deep convolutional neural network in the embodiment of the present disclosure is trained based on the aberration function in the following manner.
In step S201, an aberration reference image group acquired by an image acquisition device having the same attribute as the image acquisition device is acquired.
In the embodiment of the disclosure, the aberration reference image group includes a plurality of aberration reference images, and each aberration reference image includes pixels with aberration at pixel positions.
In the embodiment of the disclosure, the plurality of aberration reference images in the aberration reference image group may be a plurality of photographs taken by the image acquisition device at different distances and/or different angles, serving as reference images that characterize the aberration produced by the image acquisition device.
In step S202, a sharp sample image group is acquired, and sharp sample images included in the sharp sample image group are images in which no aberration occurs.
In the embodiment of the disclosure, the plurality of clear sample images in the clear sample image group may be acquired by an image acquisition device having the same attributes as the installed one, or may be sharp, aberration-free images acquired by other image acquisition devices or downloaded from the Internet.
In the embodiment of the disclosure, each aberration reference image in the aberration reference image group is an image in which aberration occurs. An image acquisition device having the same attributes as the installed one can capture a plurality of aberration reference images of the same point light source at different positions and different angles. The distance between the image acquisition device and the point light source may be any distance between 30 cm and 2 m, and the angle between them may be any angle within ±45 degrees. Photographing the same point light source at different distances and angles with the image acquisition device yields the plurality of aberration reference images.
In step S203, an aberration function is determined based on the aberration reference image group, and the aberration function is convolved with the clear sample images in the clear sample image group to obtain a blurred sample image group.
In the embodiment of the disclosure, the image acquisition device is used to photograph the same point light source at different distances and angles to obtain a plurality of aberration reference images. The PSF at each aberrated pixel position of each aberration reference image can be determined by measurement. After the PSFs are determined, the clear sample images in the clear sample image group are convolved with the determined PSFs to obtain blurred sample images corresponding to the clear sample images; these blurred sample images form the blurred sample image group. It is understood that the clear sample images in the clear sample image group correspond one-to-one with the blurred sample images in the blurred sample image group, thereby forming image pairs each composed of a clear sample image and a blurred sample image.
Determining the PSF from an image group consisting of multiple aberration reference images makes the determined PSF more accurate, accommodates images captured under the complex conditions of multiple angles and multiple distances, improves robustness, and provides accurate data support for the subsequent image processing, thereby improving image processing quality.
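A minimal sketch of this pairing step, under the assumption that the measured PSFs and the clear sample images are available as NumPy arrays (the function and variable names are hypothetical):

```python
import numpy as np
from scipy.signal import fftconvolve

def build_sample_pairs(clear_images, psfs):
    """Form (blurred, clear) training pairs: each clear sample is convolved with
    each measured PSF from the aberration reference group."""
    pairs = []
    for clear in clear_images:
        for psf in psfs:
            kernel = psf / psf.sum()                      # normalized PSF
            blurred = fftconvolve(clear, kernel, mode="same")
            pairs.append((blurred, clear))                # one-to-one image pair
    return pairs
```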
In step S204, sample image pairs are formed from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and the deep convolutional neural network is obtained by training on these sample image pairs.
In the embodiment of the disclosure, the deep convolutional neural network is trained on sample image pairs, each consisting of a clear sample image from the clear sample image group and the corresponding blurred sample image from the blurred sample image group.
For training and verifying the results, an L1-norm loss function, i.e., least absolute deviation or least absolute error, may be selected. The L1-norm loss minimizes the sum of the absolute differences between the target value and the estimated value: with the blurred image as the input of the deep convolutional neural network, the absolute difference between the network output and the clear image is taken as the loss, and the loss can be minimized using a steepest gradient descent method.
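The following is a hedged sketch of one training pass with the L1 loss described above; PyTorch is assumed, and `model` and `loader` are placeholders for the deblurring network and a loader of (blurred, clear) tensor pairs such as those built in the earlier sketch.

```python
import torch
import torch.nn as nn

def train_one_epoch(model: nn.Module, loader) -> None:
    """One pass over (blurred, clear) pairs with an L1 (least absolute error) loss."""
    criterion = nn.L1Loss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient-descent update
    model.train()
    for blurred, clear in loader:
        optimizer.zero_grad()
        restored = model(blurred)            # the blurred sample is the network input
        loss = criterion(restored, clear)    # absolute difference to the clear sample
        loss.backward()
        optimizer.step()
```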
According to the embodiments of the present disclosure, the PSF is determined by measurement using an image acquisition device having the same attributes as the installed one, the clear sample images acquired by such a device are convolved with the obtained PSF to produce the corresponding blurred sample images, and the deep convolutional neural network is trained on the image pairs formed by clear and blurred sample images. This improves the accuracy of the deep convolutional neural network and thus further improves image processing quality.
Fig. 4 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure, and as shown in fig. 4, determining an aberration function based on an aberration reference image group in step S203 in fig. 3 includes the following steps.
In step S2031, all pixels in the spot light source image where aberration has occurred are measured, and the pixel positions where aberration has occurred are obtained.
In the embodiment of the disclosure, when a point light source in the photographed scene is imaged into the original image through the camera, the pixel brightness spreads from the pixel center to the surrounding pixels, and the positions of all pixels at which aberration occurs are measured. Each pixel position has a corresponding pixel brightness, i.e., each pixel position corresponds to one brightness value.
In step S2032, the pixel positions are mapped to pixel intensities, and the pixel intensities of all the pixels are normalized to obtain an aberration function corresponding to the point light source image.
In the embodiment of the disclosure, the brightness of the point light source image spreads from the center outward, and the sum of the pixel brightness values at all positions after spreading equals the brightness of the original point light source. Normalizing the pixel brightness of all pixels of the spread point light source image yields the PSF corresponding to the point light source image.
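A minimal sketch of this normalization, assuming the captured point-light-source image is available as a NumPy array and that a simple brightness threshold separates the spread (aberrated) pixels from the background; the threshold and function name are assumptions.

```python
import numpy as np

def psf_from_point_source(point_image: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Estimate a PSF from a captured point-light-source image: keep the pixels
    where the point has spread and normalize their brightness to sum to 1."""
    spread = np.where(point_image > threshold, point_image, 0.0).astype(np.float64)
    total = spread.sum()
    if total == 0:
        raise ValueError("no aberrated pixels found above the threshold")
    return spread / total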
According to the embodiments of the present disclosure, when determining the aberration function, all pixels in the point light source image at which aberration occurs are measured to obtain the aberrated pixel positions, the pixel positions are mapped to pixel brightness, and the pixel brightness of all pixels is normalized. Determining the aberration function for an individual image acquisition device by such measurement improves the accuracy of the aberration function and thus the image processing quality.
Fig. 5 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure, and the image processing method includes the following steps as shown in fig. 5.
In step S301, an original image acquired by an image acquisition device is acquired.
In step S302, the original image is input into a deep convolutional neural network model, resulting in a processed image.
In step S303, fusion processing is performed on the original image and the processed image, and a fused image is obtained.
In the embodiment of the disclosure, to improve the quality and precision of image processing, the original image and the processed image are fused: image data of the original image and of the processed (deblurred) image of the same scene are extracted, and the useful information in each is combined into a single high-quality image.
When the original image and the processed image are fused, the following fusion formula can be adopted:
I(x,y)=A(x,y)*α(x,y)+B(x,y)*β(x,y)
wherein I is the fused pixel value, (x, y) are the pixel coordinates, A is the pixel value of the original image, B is the pixel value of the processed image, α is the weight of the original image, and β is the weight of the processed image. The formula shows that the fusion result combines the characteristics of the original image and the processed image, enhancing the image processing effect.
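A direct sketch of this fusion formula, assuming the images and the per-pixel weight maps are NumPy arrays of the same shape (the weights are often, though not necessarily, chosen so that α + β = 1 at every pixel):

```python
import numpy as np

def fuse_images(original: np.ndarray, processed: np.ndarray,
                alpha: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """I(x, y) = A(x, y) * alpha(x, y) + B(x, y) * beta(x, y)."""
    return original * alpha + processed * beta
```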
According to the embodiment of the disclosure, the original image and the processed image are subjected to fusion processing to obtain the fused image, the beneficial information in the image data of the original image and the processed image is extracted, fusion is carried out to obtain a high-quality image, the image processing quality is further improved, and the imaging effect is further improved.
Fig. 6 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure, and as shown in fig. 6, the fusion processing is performed on the original image and the processed image in step S303 in fig. 5, and the obtaining the fused image includes the following steps.
In step S3031, a first region and a second region in the original image are determined, where the first region is a weak texture region having a pixel gradient less than a first gradient threshold, and the second region is a strong edge region having a pixel gradient greater than a second gradient threshold.
In the embodiment of the disclosure, texture is a visual characteristic reflecting homogeneity in an image; it represents the slowly or periodically varying arrangement of an object's surface structure. A weak texture region is a region in which the pixels are so similar in color and brightness that they are difficult to distinguish, for example the sky, a water surface, a road surface, or a wall in an image. In contrast, a strong edge region is a region whose gray values vary markedly and regularly, for example plants, mountains, fields, or household objects in an image. In a weak texture region the gray levels of the pixels change little or not at all, so the gradient of the pixel gray values can be used to identify such regions.
In the embodiment of the disclosure, the aberration removal processing tends to erase the weak texture details originally present in the original image while removing its optical aberration, so the weak texture parts of the processed, deblurred image are handled poorly. A region in which the colors and brightness of pixels differ greatly and are easy to distinguish is a strong edge region, the opposite of a weak texture region. When the original image is input into the deep convolutional neural network model to obtain the processed image, removing the aberration of the original image may also remove weak texture details, because pixel discrimination in weak texture regions is low, resulting in image distortion.
In the embodiment of the disclosure, the brightness and color of pixels inside a weak texture region are similar and change little, so the corresponding gradient values are small; that is, a region with a small average gradient is a weak texture region. Gradient information for the original image is computed with a gradient algorithm, i.e., the gradient of every pixel in the image is obtained; the average gradient of the pixels in each of a number of candidate regions is then computed, and regions whose average gradient lies within a preset range are selected as weak texture regions.
The original image can be regarded as a two-dimensional discrete function I(i, j), where (i, j) are the coordinates of a pixel and I(i, j) is its pixel value, which may be an RGB value, a YUV value, or a gray value. The gradient information of the original image is the derivative of this two-dimensional discrete function.
The gradient information of the original image may be computed as:
G(i, j) = dx(i, j) + dy(i, j)
where dx(i, j) = I(i+1, j) - I(i, j)
dy(i, j) = I(i, j+1) - I(i, j)
The gradient information of the original image can also be determined using the central difference method:
dx(i, j) = [I(i+1, j) - I(i-1, j)] / 2
dy(i, j) = [I(i, j+1) - I(i, j-1)] / 2
Gradient information for the original image may also be determined using gradient calculation formulas from other image processing techniques. The first gradient threshold and the second gradient threshold are preset; the first gradient threshold may be equal to or smaller than the second gradient threshold. The region of pixels whose gradient is smaller than the first gradient threshold is the first region in the original image, and the region of pixels whose gradient is larger than the second gradient threshold is the second region in the original image.
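The sketch below illustrates this region selection with forward differences; taking absolute gradient magnitudes is an assumption on my part, and the thresholds t1 and t2 are placeholders for the preset first and second gradient thresholds.

```python
import numpy as np

def classify_regions(image: np.ndarray, t1: float, t2: float):
    """Forward-difference gradient G = |dx| + |dy|; pixels with G < t1 form the
    weak-texture (first) region, pixels with G > t2 the strong-edge (second) region."""
    dx = np.zeros_like(image, dtype=np.float64)
    dy = np.zeros_like(image, dtype=np.float64)
    dx[:-1, :] = image[1:, :] - image[:-1, :]   # I(i+1, j) - I(i, j), rows as i
    dy[:, :-1] = image[:, 1:] - image[:, :-1]   # I(i, j+1) - I(i, j), columns as j
    grad = np.abs(dx) + np.abs(dy)
    weak_texture = grad < t1
    strong_edge = grad > t2
    return weak_texture, strong_edge
```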
In step S3032, a first weight corresponding to the first region and a second weight corresponding to the second region are determined.
In the embodiment of the present disclosure, different weight values are used in the fusion process for the weak texture region, the strong edge region, and the general regions that belong to neither. A first weight for the weak texture region and a second weight for the strong edge region in the original image are determined.
In step S3033, fusion processing is performed on the original image and the processed image based on the first weight and the second weight, so as to obtain a fused image.
According to the embodiment of the disclosure, when the fusion processing is performed on the original image and the processed image, a first weight is given to the original image in a first area, a second weight is given to the image processed by the deep convolutional neural network in a second area, and the fusion processing is performed on the original image and the image processed by the deep convolutional neural network, so that the processing of a weak texture area in the original image is improved, the image processing quality is further improved, and the imaging effect is improved.
When determining the first region, i.e., the weak texture region, and the second region, i.e., the strong edge region, in the original image, they may be determined from the color and brightness information of the image. The weak texture region and the strong edge region may also be determined using a threshold-based segmentation method, an edge-based segmentation method, a cluster analysis method, or the like. It will be appreciated that the embodiments of the present disclosure do not limit the method of determining the weak texture region and the strong edge region in the original image.
Fig. 7 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure, and as shown in fig. 7, step S3032 in fig. 6 determines a first weight corresponding to a first region and a second weight corresponding to a second region, including the following steps.
In step S30321, the first weights of the first region corresponding to the original image and to the processed image are determined.
The first weight of the first region corresponding to the original image is greater than the first weight of the first region corresponding to the processed image.
In step S30322, the second weights of the second region corresponding to the original image and to the processed image are determined.
The second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
In the embodiment of the disclosure, the aberration removal processing easily erases the weak texture details originally present in the original image along with the optical aberration, so the weak texture parts of the processed, deblurred image are handled poorly. Because pixel discrimination in weak texture regions is low, removing the aberration of the original image may also remove the weak texture details of those regions.
The first weight of the first region corresponding to the original image and the first weight of the first region corresponding to the processed image are determined respectively. To avoid over-processing the weak texture details of the first region in the processed image, the first weight of the first region corresponding to the original image is set to be greater than the first weight corresponding to the processed image; that is, in the weak texture region the unprocessed original image is given the greater weight, so that the image characteristics of the weak texture region are preserved.
Similarly, the second weight of the second region corresponding to the original image and the second weight corresponding to the processed image are determined. Since the strong edge regions of the image are restored well by the aberration removal processing, the processed image is given the greater weight in the strong edge region.
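A minimal sketch of such weight maps, built from the region masks computed earlier; the specific weight values 0.8 and 0.5 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def build_weight_maps(weak_texture: np.ndarray, strong_edge: np.ndarray,
                      w_weak: float = 0.8, w_edge: float = 0.8):
    """Per-pixel fusion weights: in the weak-texture region the original image
    dominates (alpha > beta); in the strong-edge region the processed image
    dominates (beta > alpha); elsewhere the two are balanced."""
    alpha = np.full(weak_texture.shape, 0.5)      # weight of the original image
    alpha[weak_texture] = w_weak
    alpha[strong_edge] = 1.0 - w_edge
    beta = 1.0 - alpha                            # weight of the processed image
    return alpha, beta
```

These maps can then be passed directly to the fusion formula sketched above.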
According to the embodiments of the present disclosure, when the original image and the processed image are fused, the original image is given the larger first weight in the first region and the image processed by the deep convolutional neural network is given the larger second weight in the second region. Fusing the two in this way mitigates over-deblurring of the weak texture region in the original image while incorporating the strong edge characteristics of the processed image, improving image processing quality.
Based on the same conception, the embodiment of the disclosure also provides an image processing device.
It will be appreciated that, in order to implement the above-described functions, the image processing apparatus provided in the embodiments of the present disclosure includes corresponding hardware structures and/or software modules that perform the respective functions. The disclosed embodiments may be implemented in hardware or a combination of hardware and computer software, in combination with the various example elements and algorithm steps disclosed in the embodiments of the disclosure. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as beyond the scope of the embodiments of the present disclosure.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 8, the image processing apparatus is applied to an electronic device in which an image pickup apparatus is mounted, and the image processing apparatus 100 includes: an acquisition module 101 and a processing module 102.
The acquisition module 101 is configured to acquire an original image acquired by the image acquisition device, where the original image is an image with aberration occurring at a pixel position.
The processing module 102 is configured to input the original image into a deep convolutional neural network model to obtain a processed image, where the deep convolutional neural network is obtained by training based on an aberration function of the image acquisition device.
In one embodiment, the deep convolutional neural network is trained based on the aberration function in the following manner: acquiring an aberration reference image group acquired by an image acquisition device having the same attributes as the installed image acquisition device, where each aberration reference image in the aberration reference image group includes pixels with aberration at pixel positions; acquiring a clear sample image group, where the clear sample images included in the clear sample image group are images without aberration; determining the aberration function based on the aberration reference image group, and convolving the aberration function with the clear sample images in the clear sample image group to obtain a blurred sample image group; and forming sample image pairs from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training on the sample image pairs to obtain the deep convolutional neural network, where the input of the deep convolutional neural network is a blurred sample image and the output is a clear sample image.
In an embodiment, the images in the aberration reference image group are point light source images captured by an image acquisition device having the same attributes as the installed image acquisition device; determining the aberration function based on the aberration reference image group includes: measuring all pixels in the point light source image at which aberration occurs to obtain the aberrated pixel positions; and mapping the pixel positions to pixel brightness, and normalizing the pixel brightness of all pixels to obtain the aberration function corresponding to the point light source image.
Fig. 9 is a block diagram of an image processing apparatus according to still another exemplary embodiment of the present disclosure, and referring to fig. 9, the image processing apparatus 100 further includes a fusing module 103.
And the fusion module 103 is used for carrying out fusion processing on the original image and the processed image to obtain a fused image.
In one embodiment, the fusion module 103 performs fusion processing on the original image and the processed image in the following manner, to obtain a fused image: determining a first region and a second region in an original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold value, and the second region is a strong edge region with a pixel gradient larger than a second gradient threshold value; determining a first weight corresponding to the first region and a second weight corresponding to the second region; and carrying out fusion processing on the original image and the processed image based on the first weight and the second weight to obtain a fused image.
In an embodiment, the fusion module determines a first weight corresponding to the first region and a second weight corresponding to the second region by: determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is larger than the first weight of the first region corresponding to the processed image; and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 10 is a block diagram illustrating an apparatus 200 for image processing according to an exemplary embodiment. For example, apparatus 200 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 10, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the apparatus 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 202 may include one or more processors 220 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interactions between the processing component 202 and other components. For example, the processing component 202 may include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and the like. The memory 204 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 206 provides power to the various components of the device 200. The power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 200.
The multimedia component 208 includes a screen providing an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 208 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 200 is in an operation mode, such as a photographing mode or a video mode. Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a Microphone (MIC) configured to receive external audio signals when the device 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 further includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 214 includes one or more sensors for providing status assessments of various aspects of the device 200. For example, the sensor assembly 214 may detect the on/off state of the device 200 and the relative positioning of components, such as the display and keypad of the device 200. The sensor assembly 214 may also detect a change in position of the device 200 or a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and a change in temperature of the device 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate communication between the apparatus 200 and other devices in a wired or wireless manner. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 216 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 204, including instructions executable by the processor 220 of the apparatus 200 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It is understood that the term "plurality" in this disclosure means two or more, and other quantifiers are to be construed similarly. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is further understood that the terms "first," "second," and the like are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that "connected" includes both direct connection where no other member is present and indirect connection where other element is present, unless specifically stated otherwise.
It will be further understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, characterized by being applied to an electronic device in which an image acquisition device is installed, comprising:
acquiring an original image acquired by the image acquisition device, wherein the original image is an image with aberration at a pixel position;
inputting the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is obtained by training based on an aberration function of the image acquisition device;
the deep convolutional neural network is obtained based on aberration function training in the following mode:
acquiring an aberration reference image group acquired by an image acquisition device with the same attribute as the image acquisition device, wherein each aberration reference image in the aberration reference image group comprises pixels with aberration at pixel positions;
acquiring a clear sample image group, wherein the clear sample images included in the clear sample image group are images without aberration;
determining the aberration function based on the aberration reference image group, and carrying out a convolution operation on the aberration function and the clear sample images in the clear sample image group to obtain a blurred sample image group;
forming a sample image pair based on corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pair to obtain the deep convolutional neural network;
wherein the images in the aberration reference image group are point light source images captured by an image acquisition device with the same attribute as the image acquisition device;
wherein determining the aberration function of the image includes:
measuring all pixels with aberration in the point light source image to obtain pixel positions with aberration; and
mapping the pixel positions to pixel brightness, and carrying out normalization processing on the pixel brightness of all the pixels to obtain the aberration function corresponding to the point light source image.
2. The image processing method according to claim 1, wherein after obtaining the processed image, the method further comprises:
carrying out fusion processing on the original image and the processed image to obtain a fused image.
3. The image processing method according to claim 2, wherein the fusing the original image and the processed image to obtain a fused image includes:
determining a first region and a second region in the original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold value, and the second region is a strong edge region with the pixel gradient larger than a second gradient threshold value;
determining a first weight corresponding to the first region and a second weight corresponding to the second region;
and carrying out fusion processing on the original image and the processed image based on the first weight and the second weight to obtain the fused image.
4. The image processing method according to claim 3, wherein the determining the first weight corresponding to the first region and the second weight corresponding to the second region includes:
determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is larger than the first weight of the first region corresponding to the processed image;
and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
5. An image processing apparatus, characterized by being applied to an electronic device in which an image acquisition device is installed, comprising:
The acquisition module is used for acquiring an original image acquired by the image acquisition device, wherein the original image is an image with aberration at a pixel position;
The processing module is used for inputting the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is obtained by training based on an aberration function of the image acquisition device;
the deep convolutional neural network is obtained based on aberration function training in the following mode:
acquiring an aberration reference image group acquired by an image acquisition device with the same attribute as the image acquisition device, wherein each aberration reference image in the aberration reference image group comprises pixels with aberration at pixel positions;
acquiring a clear sample image group, wherein the clear sample images included in the clear sample image group are images without aberration;
determining the aberration function based on the aberration reference image group, and carrying out a convolution operation on the aberration function and the clear sample images in the clear sample image group to obtain a blurred sample image group;
forming a sample image pair based on corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pair to obtain the deep convolutional neural network;
wherein the images in the aberration reference image group are point light source images captured by an image acquisition device with the same attribute as the image acquisition device;
wherein determining the aberration function of the image includes:
measuring all pixels with aberration in the point light source image to obtain pixel positions with aberration; and
mapping the pixel positions to pixel brightness, and carrying out normalization processing on the pixel brightness of all the pixels to obtain the aberration function corresponding to the point light source image.
6. The image processing apparatus according to claim 5, further comprising:
a fusion module, configured to carry out fusion processing on the original image and the processed image to obtain a fused image.
7. The image processing apparatus according to claim 6, wherein the fusion module performs fusion processing on the original image and the processed image to obtain a fused image by:
determining a first region and a second region in the original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold value, and the second region is a strong edge region with the pixel gradient larger than a second gradient threshold value;
determining a first weight corresponding to the first region and a second weight corresponding to the second region;
and carrying out fusion processing on the original image and the processed image based on the first weight and the second weight to obtain the fused image.
8. The image processing apparatus of claim 7, wherein the fusion module determines the first weight corresponding to the first region and the second weight corresponding to the second region by:
determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is larger than the first weight of the first region corresponding to the processed image;
and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
9. An image processing apparatus, comprising:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to perform the image processing method according to any one of claims 1 to 4.
10. A non-transitory computer readable storage medium having stored thereon instructions which, when executed by a processor of a mobile terminal, cause the mobile terminal to perform the image processing method of any one of claims 1 to 4.
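For orientation only, the following Python sketch illustrates the training-data preparation recited in claim 1: estimating a normalized aberration function from a point light source image and convolving it with aberration-free sample images to synthesize blurred counterparts for training pairs. The brightness threshold used to locate aberrated pixels, the FFT-based convolution, and all function names are assumptions made for this illustration; they are not the claimed implementation.

import numpy as np
from scipy.signal import fftconvolve

def aberration_function(point_source_image, threshold=0.05):
    """Estimate a normalized aberration function from a point light source shot."""
    img = point_source_image.astype(np.float32)
    # Pixels whose brightness exceeds the (assumed) threshold are treated as the
    # pixel positions affected by aberration; all other pixels are suppressed.
    psf = np.where(img > threshold, img, 0.0)
    # Normalize the pixel brightness so that the aberration function sums to one
    # (assumes at least one pixel lies above the threshold).
    return psf / psf.sum()

def make_sample_pairs(sharp_images, psf):
    """Convolve clear samples with the aberration function to obtain blurred ones."""
    pairs = []
    for sharp in sharp_images:
        blurred = fftconvolve(sharp.astype(np.float32), psf, mode="same")
        pairs.append((blurred, sharp))  # (network input, training target)
    return pairs

The resulting (blurred, clear) pairs would then be used to train the deep convolutional neural network in a supervised manner; that training step is outside the scope of this sketch.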
CN202010803059.XA 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium Active CN111968052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010803059.XA CN111968052B (en) 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN111968052A (en) 2020-11-20
CN111968052B (en) 2024-04-30

Family

ID=73365721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010803059.XA Active CN111968052B (en) 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN111968052B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989800B (en) * 2021-12-29 2022-04-12 南京南数数据运筹科学研究院有限公司 Intestinal plexus auxiliary identification method based on improved progressive residual error network
CN114022484B (en) * 2022-01-10 2022-04-29 深圳金三立视频科技股份有限公司 Image definition value calculation method and terminal for point light source scene
CN114863506B (en) * 2022-03-18 2023-05-26 珠海优特电力科技股份有限公司 Authentication method, device and system of admission permission and identity authentication terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345474A (en) * 2018-05-22 2019-02-15 南京信息工程大学 Image motion based on gradient field and deep learning obscures blind minimizing technology
CN109889724A (en) * 2019-01-30 2019-06-14 北京达佳互联信息技术有限公司 Image weakening method, device, electronic equipment and readable storage medium storing program for executing
CN110533607A (en) * 2019-07-30 2019-12-03 北京威睛光学技术有限公司 A kind of image processing method based on deep learning, device and electronic equipment
CN111223062A (en) * 2020-01-08 2020-06-02 西安电子科技大学 Image deblurring method based on generation countermeasure network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI462054B (en) * 2012-05-15 2014-11-21 Nat Univ Chung Cheng Estimation Method of Image Vagueness and Evaluation Method of Image Quality
CN109644230B (en) * 2016-08-25 2020-10-30 佳能株式会社 Image processing method, image processing apparatus, image pickup apparatus, and storage medium
KR102550175B1 (en) * 2016-10-21 2023-07-03 삼성전기주식회사 Camera module and electronic device including the same
CN110473147A (en) * 2018-05-09 2019-11-19 腾讯科技(深圳)有限公司 A kind of video deblurring method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Dual-frame convolutional neural network for blind restoration of motion-blurred images; Wu Mengting; Li Weihong; Gong Weiguo; Journal of Computer-Aided Design & Computer Graphics (Issue 12); full text *
Image deblurring based on fast convolutional neural networks; Ren Jingjing; Fang Xianyong; Chen Shangwen; Wang Linbo; Zhou Jian; Journal of Computer-Aided Design & Computer Graphics (Issue 08); full text *
Semi-blind deconvolution restoration of retinal images based on wavefront detection; Niu Saisai; Shen Jianxin; Liang Chun; Zhang Yunhai; Journal of Nanjing University of Aeronautics and Astronautics (Issue 04); full text *
Motion blur removal algorithm based on deep convolutional neural networks; Guo Yecai; Zhu Wenjun; Journal of Nanjing University of Science and Technology (Issue 03); full text *

Also Published As

Publication number Publication date
CN111968052A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN107798669B (en) Image defogging method and device and computer readable storage medium
CN111968052B (en) Image processing method, image processing apparatus, and storage medium
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
CN108234858B (en) Image blurring processing method and device, storage medium and electronic equipment
CN110060215B (en) Image processing method and device, electronic equipment and storage medium
CN107948510B (en) Focal length adjusting method and device and storage medium
US11580327B2 (en) Image denoising model training method, imaging denoising method, devices and storage medium
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN112258404A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112634160A (en) Photographing method and device, terminal and storage medium
US11222235B2 (en) Method and apparatus for training image processing model, and storage medium
CN113706421B (en) Image processing method and device, electronic equipment and storage medium
CN115660945A (en) Coordinate conversion method and device, electronic equipment and storage medium
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN113660531B (en) Video processing method and device, electronic equipment and storage medium
CN111553865B (en) Image restoration method and device, electronic equipment and storage medium
CN110807745B (en) Image processing method and device and electronic equipment
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN112288657A (en) Image processing method, image processing apparatus, and storage medium
WO2021145913A1 (en) Estimating depth based on iris size
CN113313788A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113592733A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114640815A (en) Video processing method and device, electronic equipment and storage medium
CN110910304B (en) Image processing method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant