CN111968052A - Image processing method, image processing apparatus, and storage medium - Google Patents

Image processing method, image processing apparatus, and storage medium

Info

Publication number
CN111968052A
Authority
CN
China
Prior art keywords
image
aberration
region
weight
neural network
Prior art date
Legal status
Granted
Application number
CN202010803059.XA
Other languages
Chinese (zh)
Other versions
CN111968052B (en)
Inventor
万韶华
Current Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202010803059.XA priority Critical patent/CN111968052B/en
Publication of CN111968052A publication Critical patent/CN111968052A/en
Application granted granted Critical
Publication of CN111968052B publication Critical patent/CN111968052B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, and a storage medium. The image processing method is applied to an electronic device in which an image acquisition device is installed and includes: acquiring an original image captured by the image acquisition device, the original image being an image in which aberration occurs at pixel positions; and inputting the original image into a deep convolutional neural network model to obtain a processed image, the deep convolutional neural network being trained based on an aberration function of the image acquisition device. According to embodiments of the disclosure, the aberration function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, the deep convolutional neural network is trained based on the obtained aberration function, and the trained network is used to remove aberration from the aberrated images captured by the image acquisition device, thereby improving image processing quality.

Description

Image processing method, image processing apparatus, and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
With the rapid development of intelligent terminal technology, intelligent terminals have become increasingly common in people's work and life, and the performance of every aspect of the intelligent terminal keeps improving to better meet users' needs. Users shoot with their terminals anytime and anywhere, which brings great convenience. The shooting capability of the terminal is therefore one of its most closely watched features.
The shooting performance of a terminal is reflected most intuitively in the quality of the captured image: the larger the terminal's sensor, the more light each unit-area pixel receives when taking a picture, and the better the imaging quality. Likewise, the larger the aperture of the camera, i.e., the passage through which light entering the camera reaches the sensor, the more light passes per unit time and the shorter the required exposure time. Therefore, for a terminal camera, a larger aperture generally yields better image quality.
In pursuit of superior image quality, the image sensors and apertures configured in terminals keep growing larger. At the same time, the optical aberration of a terminal with a large image sensor and a large aperture blurs the captured image and degrades the shooting effect, which affects the user experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, an image processing apparatus, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method applied to an electronic device in which an image acquisition device is installed, the image processing method including: acquiring an original image captured by the image acquisition device, wherein the original image is an image in which aberration occurs at pixel positions; and inputting the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is trained based on an aberration function of the image acquisition device.
In one embodiment, the deep convolutional neural network is trained based on the aberration function in the following manner: acquiring an aberration reference image group captured by an image acquisition device having the same attributes as the installed image acquisition device, wherein each aberration reference image in the aberration reference image group includes pixels at whose positions aberration occurs; acquiring a clear sample image group, wherein each clear sample image in the clear sample image group is an image without aberration; determining the aberration function based on the aberration reference image group, and performing a convolution operation between the aberration function and the clear sample images in the clear sample image group to obtain a blurred sample image group; and forming sample image pairs from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network.
In one embodiment, the images in the aberration reference image group are point light source images shot by an image acquisition device with the same attribute as the image acquisition device; the determining the aberration function based on the aberration reference image set comprises: measuring all pixels with aberration in the point light source image to obtain pixel positions with aberration; and mapping the pixel position to pixel brightness, and performing normalization processing on the pixel brightness of all the pixels to obtain an aberration function corresponding to the point light source image.
In an embodiment, after obtaining the processed image, the method further comprises: and carrying out fusion processing on the original image and the processed image to obtain an image subjected to fusion processing.
In an embodiment, the fusing the original image and the processed image to obtain a fused image includes: determining a first region and a second region in the original image, wherein the first region is a weak texture region with pixel gradient smaller than a first gradient threshold, and the second region is a strong edge region with pixel gradient larger than a second gradient threshold; determining a first weight corresponding to the first area and a second weight corresponding to the second area; and performing fusion processing on the original image and the processed image based on the first weight and the second weight to obtain a fused image.
In an embodiment, the determining a first weight corresponding to the first area and a second weight corresponding to the second area includes: determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is greater than the first weight of the first region corresponding to the processed image; and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus applied to an electronic device in which an image acquisition device is installed, the image processing apparatus including: an acquisition module configured to acquire an original image captured by the image acquisition device, wherein the original image is an image in which aberration occurs at pixel positions; and a processing module configured to input the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is trained based on an aberration function of the image acquisition device, the input of the deep convolutional neural network is a blurred sample image, and the output of the deep convolutional neural network is a clear sample image.
In an embodiment, the deep convolutional neural network is obtained by training based on a point spread function in the following manner: acquiring an aberration reference image group captured by an image acquisition device having the same attributes as the installed image acquisition device, wherein each aberration reference image in the aberration reference image group includes pixels at whose positions aberration occurs; acquiring a clear sample image group, wherein each clear sample image in the clear sample image group is an image without aberration; determining the aberration function based on the aberration reference image group, and performing a convolution operation between the aberration function and the clear sample images in the clear sample image group to obtain a blurred sample image group; and forming sample image pairs from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network, wherein the input of the deep convolutional neural network is a blurred sample image and the output is a clear sample image.
In an embodiment, the images in the aberration reference image group are point light source images shot by an image acquisition device with the same attribute as the image acquisition device; the determining the aberration function based on the aberration reference image group comprises: measuring all pixels with aberration in the point light source image to obtain pixel positions with aberration; and mapping the pixel position to pixel brightness, and performing normalization processing on the pixel brightness of all the pixels to obtain an aberration function corresponding to the point light source image.
In one embodiment, the image processing apparatus further includes: and the fusion module is used for carrying out fusion processing on the original image and the processed image to obtain the image after the fusion processing.
In an embodiment, the fusion module performs fusion processing on the original image and the processed image in the following manner to obtain a fused image: determining a first region and a second region in the original image, wherein the first region is a weak texture region with pixel gradient smaller than a first gradient threshold, and the second region is a strong edge region with pixel gradient larger than a second gradient threshold; determining a first weight corresponding to the first area and a second weight corresponding to the second area; and performing fusion processing on the original image and the processed image based on the first weight and the second weight to obtain a fused image.
In an embodiment, the fusion module determines a first weight corresponding to the first region and a second weight corresponding to the second region by: determining a first weight of the first region corresponding to the original image and the processed image, wherein the first weight of the first region corresponding to the original image is greater than the first weight of the first region corresponding to the processed image; and determining a second weight of the second region corresponding to the original image and the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform any of the image processing methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform any one of the image processing methods described above.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: the aberration function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, a deep convolutional neural network is trained based on the obtained aberration function, and the trained deep convolutional neural network is used to remove aberration from the aberrated images captured by the image acquisition device, thereby improving image processing quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram illustrating the occurrence of aberration in photographing by an image acquisition device according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
FIG. 3 is a flow chart illustrating a convolutional neural network training in accordance with an exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating an image processing method according to still another exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram illustrating an image processing apparatus according to still another exemplary embodiment of the present disclosure.
FIG. 10 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In people's work and life, users shoot with their mobile terminals anytime and anywhere, which brings great convenience for acquiring and recording information. Improving the shooting performance of terminals has therefore attracted much attention both inside and outside the industry.
The shooting performance of a terminal is reflected most intuitively in the quality of the captured image: the larger the terminal's sensor, the more light each unit-area pixel receives when taking a picture, and the better the imaging quality. Likewise, the larger the aperture of the camera, i.e., the passage through which light entering the camera reaches the sensor, the more light passes per unit time and the shorter the required exposure time. Therefore, for a terminal camera, a larger aperture generally yields better image quality.
In pursuit of superior image quality, the image sensors and apertures configured in terminals keep growing larger. At the same time, the optical aberration of a terminal with a large image sensor and a large aperture blurs the captured image and degrades the shooting effect, which affects the user experience.
Fig. 1 is a schematic view illustrating the occurrence of aberration in photographing by an image acquisition device according to an exemplary embodiment. Fig. 1 shows an image obtained when a flat sheet of paper is photographed at a close distance by a terminal equipped with a 1/1.33-inch image sensor and a 1/1.69 aperture. As shown in Fig. 1, the degree of blur in the shot image increases from the center of the image toward its edges; the blur occurring at the edges of the image seriously reduces the resolution of the image, yields a poor imaging effect, and affects user experience.
The image formed by a real optical system deviates from the result predicted by Gaussian optics, and the deviation of the actual optical image from the paraxial image is called aberration. A paraxial ray is a ray incident near the optical axis whose angle with the optical axis is small and approaches 0. After passing through the optical system, paraxial rays may be considered to intersect at a single point, whereas non-paraxial rays passing through the lens cannot be focused to a point on the image plane.
The mathematical model of optical aberration can be described by an aberration function, for example by a point spread function (PSF), which is the impulse response of the focusing optical system. Functionally, the PSF is the spatial-domain form of the optical transfer function of the imaging system and is an index for measuring imaging quality. After a point light source is imaged, it is diffused and blurred into a speckle, and the shape and intensity of the speckle can be described by the PSF.
When a point source is spread into discrete points of different intensities, the point-source image is computed as the sum of the PSFs of the individual points. The PSF is normally determined from images acquired by the image acquisition device, so the imaging characteristics of the acquisition process can be described once the PSF of the device is known. These imaging characteristics can be represented by a convolution equation, and determining the PSF of the image acquisition device is therefore of great significance for image processing.
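The disclosure does not write this convolution equation out explicitly; as an illustrative note, it is commonly given in the following form, where the symbol names are assumptions used only for this illustration:
B(x,y)=(S*PSF)(x,y)+N(x,y)
wherein S is the ideal sharp scene, PSF is the point spread function of the image acquisition device, * denotes two-dimensional convolution, N is sensor noise, and B is the blurred image recorded by the sensor.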
In the prior art, deblurring techniques are mainly divided into non-blind deblurring and blind deblurring according to whether the point spread function is known. Non-blind deblurring techniques include inverse filtering, Wiener filtering, least-squares filtering and the like, but their models are too simple, and the restored images suffer from heavy noise, severe loss of edge information and other problems. Image deblurring based on a hyper-Laplacian prior recovers image edges well, but the algorithm runs inefficiently.
Non-blind deblurring techniques, in which the point spread function of the image is known, thus suffer from poor edge recovery and low algorithm efficiency. With the breakthroughs of deep learning on many computer vision problems, many researchers have applied convolutional neural networks to image deblurring with excellent results, but problems such as complex network training and troublesome data collection remain. One approach estimates a blur kernel for each image block with a convolutional neural network and then obtains different motion blur kernels for individual image points by optimizing a Markov random field model; obtaining the restored image by deconvolution with the estimated motion blur kernels is cumbersome in practical applications.
In prior-art blind deblurring methods, the PSF is determined by estimation, for example by estimating the point spread function from prior knowledge, from the degraded image of a certain point in the original scene, or from error-parameter curve analysis. Determining the PSF by estimation is complex and difficult to implement, and cannot distinguish between different image acquisition devices, so its accuracy is low and image processing with the estimated PSF is not effective.
Therefore, the present disclosure provides an image processing method in which a point spread function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, and a deep convolutional neural network is trained based on the obtained point spread function. The trained deep convolutional neural network is then used to remove aberration from the aberrated images captured by the image acquisition device, thereby improving image processing quality.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure. As shown in Fig. 2, the image processing method is applied to an electronic device in which an image acquisition device is installed and includes the following steps.
In step S101, an original image captured by an image capturing device is acquired.
In the embodiment of the present disclosure, the original image acquired by the image acquisition device may be understood as an unprocessed image, and may be generally understood as a blurred image with aberration occurring at the pixel position.
In step S102, the original image is input into the deep convolutional neural network model to obtain a processed image.
The deep convolutional neural network involved in the embodiments of the present disclosure is trained based on an aberration function of the image acquisition device. The aberration function may be represented by a point spread function, i.e., a PSF function, corresponding to an image that is acquired by an image acquisition device having the same attributes as the installed image acquisition device and in which aberration occurs at pixel positions. The PSF function represents the degree to which pixel points are spread when a point light source in the photographed object is imaged by the camera. The brightness of a pixel point spreads from its center toward the periphery, and the degree of point spread of each pixel point is related to the distance between that pixel point and the center pixel point of the image acquisition device: the larger the distance, the stronger the point spread and the lower the sharpness. The brightness value of each pixel point in the original image collected by the image acquisition device is therefore the brightness of the photographed object after its reflected light is mapped onto the corresponding pixel point and spread, and pixel points at the edges of the image exhibit a high degree of blur. The embodiments of the present disclosure are described below taking the case in which the aberration function is a PSF function as an example; it will be appreciated that the aberration function of the image acquisition device may also be described by other functions.
In the embodiments of the present disclosure, the PSF function is used to describe the shape and intensity of the speckle formed when a point light source is imaged by the image acquisition device. Because the PSF function is determined with an image acquisition device having the same attributes as the installed image acquisition device, it reflects the parameter attributes of that device. The attributes of the image acquisition device include parameters such as its image sensor and aperture; image acquisition devices equipped with image sensors and apertures of different performance produce images with different aberrations.
The PSF information of a set of basic pixel points can be fitted to simulate the PSF information of the remaining parts of the whole picture, so that different correction intensities can be guaranteed for different fields of view, and the corresponding PSF functions are determined according to the performance of the image acquisition device. Determining the PSF function with an image acquisition device having the same attributes as the installed image acquisition device improves the accuracy of the PSF function and provides accurate data support for the subsequent image processing.
The deep convolutional neural network model is trained on the basis of the PSF function determined with an image acquisition device having the same attributes as the installed image acquisition device, so that the PSF function reflects the parameter attributes of the device and the trained deep convolutional neural network can be used to remove the aberration of an original image. The deep convolutional neural network in the present disclosure can be composed of layers of different types, including convolutional layers, batch normalization layers, activation layers and the like; by arranging layers of different types in the network, feature extraction, feature fitting and the like are realized during model learning.
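The disclosure does not fix a specific network architecture. Purely as an illustrative sketch, assuming a Python/PyTorch implementation (the class name, channel count, block count, and residual connection below are assumptions rather than part of the disclosure), a deblurring network built from the layer types just mentioned might look like this:

import torch
import torch.nn as nn

class DeblurCNN(nn.Module):
    # Stacked convolution + batch normalization + activation blocks that map a
    # blurred RGB image to a sharp one; the residual connection is an assumption.
    def __init__(self, channels=64, num_blocks=8):
        super().__init__()
        layers = [nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_blocks):
            layers += [
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),  # convolutional layer
                nn.BatchNorm2d(channels),                                 # batch normalization layer
                nn.ReLU(inplace=True),                                    # activation layer
            ]
        layers += [nn.Conv2d(channels, 3, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, blurred):
        # Predict the correction to the blurred input rather than the sharp image directly.
        return blurred + self.body(blurred)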
According to the embodiments of the present disclosure, the aberration function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, the deep convolutional neural network is trained based on the obtained aberration function, and the trained network is used to remove aberration from the aberrated images captured by the image acquisition device, so that the image processing quality can be improved.
Fig. 3 is a flowchart illustrating the training of a convolutional neural network according to an exemplary embodiment of the present disclosure. As shown in Fig. 3, the deep convolutional neural network in the embodiments of the present disclosure is trained based on the aberration function in the following manner.
In step S201, an aberration reference image group acquired by an image acquisition apparatus having the same attribute as the image acquisition apparatus is acquired.
In the embodiment of the disclosure, the aberration reference image group includes a plurality of aberration reference images, and each aberration reference image includes a pixel with a pixel position having aberration.
In the embodiment of the disclosure, the plurality of aberration reference images in the aberration reference image group may be a plurality of photographs taken by the image capturing device at different distances and/or different angles, so as to serve as reference images for determining that the image capturing device generates aberration.
In step S202, a clear sample image group is acquired, and a clear sample image included in the clear sample image group is an image in which no aberration occurs.
In the embodiment of the present disclosure, the plurality of clear sample images in the clear sample image group may be acquired by an image acquisition device having the same attribute as the image acquisition device, or may be clear images which are acquired by other image acquisition devices or downloaded from the internet to the local and have no aberration.
In the embodiment of the present disclosure, each aberration reference image in the aberration reference image group is an image with aberration, and may be a plurality of aberration reference images acquired by an image acquisition device having the same attribute as the image acquisition device from the same point light source at different positions and different angles. The distance between the image acquisition device and the point light source can be any distance between 30cm and 2m, and the angle between the image acquisition device and the point light source can be any angle between +/-45 degrees. And shooting the same point light source at different distances and different angles by using an image acquisition device to obtain a plurality of aberration reference images.
In step S203, an aberration function is determined based on the aberration reference image group, and the aberration function is convolved with a sharp sample image in the sharp sample image group to obtain a blurred sample image group.
In the embodiments of the present disclosure, the same point light source is shot at different distances and different angles by the image acquisition device to obtain a plurality of aberration reference images, and the PSF function describing the aberration at each pixel position of each aberration reference image can be determined by measurement. After the PSF function at each aberrated pixel position is determined, a convolution operation is performed on the clear sample images in the clear sample image group with the determined PSF function to obtain blurred sample images corresponding to the clear sample images, and these blurred sample images form the blurred sample image group. It may be understood that the clear sample images in the clear sample image group correspond one-to-one to the blurred sample images in the blurred sample image group, so that image pairs each consisting of a clear sample image and a blurred sample image are formed.
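As a minimal sketch of this step, assuming NumPy/SciPy, sample images stored as float arrays in [0, 1], and a PSF kernel that has already been measured and normalized (the function names are illustrative assumptions only):

import numpy as np
from scipy.signal import convolve2d

def blur_with_psf(clear_image, psf):
    # Convolve each channel of a clear sample image (H x W x 3) with the measured,
    # normalized PSF to synthesize the corresponding blurred sample image.
    assert np.isclose(psf.sum(), 1.0), "PSF is expected to be normalized to sum to 1"
    blurred = np.stack(
        [convolve2d(clear_image[..., c], psf, mode="same", boundary="symm")
         for c in range(clear_image.shape[-1])],
        axis=-1,
    )
    return np.clip(blurred, 0.0, 1.0)

# Building the blurred sample image group from the clear sample image group:
# blurred_group = [blur_with_psf(img, psf) for img in clear_group]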
Determining the PSF function from an image group consisting of a plurality of aberration reference images makes the determined PSF function more accurate, supports the processing of pictures acquired under complex multi-angle and multi-distance conditions, improves robustness, provides accurate data support for the subsequent image processing, and improves image processing quality.
In step S204, sample image pairs are formed from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and the deep convolutional neural network is obtained through training based on the sample image pairs.
In the embodiments of the present disclosure, sample image pairs, each consisting of a corresponding clear sample image and blurred sample image from the clear sample image group and the blurred sample image group, are used for training the deep convolutional neural network.
For evaluating the training result, an L1-norm loss function, i.e., the least absolute deviation or least absolute error, may be selected. The L1-norm loss minimizes the sum of the absolute differences between the target value and the estimated value: with the blurred image as the input of the deep convolutional neural network, the difference between the network output and the clear image is taken, its absolute value is used as the loss, and the loss can be minimized using steepest gradient descent.
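As an illustrative training-loop sketch under the same Python/PyTorch assumption as above (the optimizer settings, learning rate, and data-loader format are assumptions, not values given by the disclosure):

import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-4, device="cpu"):
    # `loader` is assumed to yield (blurred, clear) sample image pairs as tensors.
    model = model.to(device)
    criterion = nn.L1Loss()                                   # L1 norm / least absolute deviation
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)    # plain gradient descent
    for _ in range(epochs):
        for blurred, clear in loader:
            blurred, clear = blurred.to(device), clear.to(device)
            optimizer.zero_grad()
            loss = criterion(model(blurred), clear)           # |network output - clear image|
            loss.backward()
            optimizer.step()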
According to the embodiments of the present disclosure, the PSF function is determined by measurement using an image acquisition device having the same attributes as the installed image acquisition device, the clear sample images acquired by an image acquisition device having the same attributes are convolved with the obtained PSF function to obtain corresponding blurred sample images, and the image pairs formed by the clear and blurred sample images are used to train the deep convolutional neural network, so that the precision of the deep convolutional neural network can be improved and the image processing quality further improved.
Fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure, and as shown in fig. 4, the determining an aberration function based on the aberration reference image group in step S203 in fig. 3 includes the following steps.
In step S2031, all pixels of the point light source image where aberration occurs are measured to obtain the pixel positions where aberration occurs.
In the embodiments of the present disclosure, when a point light source in the photographed object is imaged into the original image by the camera, the brightness of the corresponding pixel point spreads from the center of the pixel point toward the periphery, and the pixel positions of all pixels where aberration occurs are measured. Each pixel position corresponds to one pixel brightness.
In step S2032, the pixel positions are mapped to pixel brightness, and the pixel brightness of all pixels is normalized to obtain an aberration function corresponding to the point light source image.
In the embodiments of the present disclosure, the brightness of the pixel points of the point light source image spreads from the center toward the periphery, and the sum of the pixel brightness over all pixel positions after spreading equals the brightness of the original point light source. Normalizing the pixel brightness of all pixels of the spread point light source image yields the functional model of the PSF corresponding to the point light source image.
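As an illustrative sketch of this measurement step, assuming NumPy, a grayscale point-light-source photograph, and a fixed window size (the window size and background subtraction are assumptions for illustration):

import numpy as np

def measure_psf(point_source_image, kernel_size=31):
    # Crop a window around the brightest pixel of the point-light-source photo and
    # normalize the pixel brightness so that the resulting PSF kernel sums to 1.
    img = point_source_image.astype(np.float64)
    cy, cx = np.unravel_index(np.argmax(img), img.shape)   # center of the spread speckle
    r = kernel_size // 2
    patch = img[max(cy - r, 0):cy + r + 1, max(cx - r, 0):cx + r + 1]
    patch = patch - patch.min()                            # remove background offset (assumption)
    return patch / patch.sum()                             # normalization of pixel brightness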
According to the embodiment of the disclosure, when the aberration function is determined, all pixels with aberration in the point light source image are measured to obtain the pixel position with aberration, the pixel position is mapped to the pixel brightness, normalization processing is performed on the pixel brightness of all pixels, the aberration function of a single image acquisition device is determined through an experimental means, the precision of the aberration function is improved, and therefore the image processing quality is improved.
Fig. 5 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure, the image processing method including the following steps, as shown in fig. 5.
In step S301, an original image captured by an image capturing device is acquired.
In step S302, the original image is input into the deep convolutional neural network model to obtain a processed image.
In step S303, the original image and the processed image are subjected to fusion processing to obtain a fusion-processed image.
In the embodiments of the present disclosure, in order to improve the quality and accuracy of image processing, the original image and the processed image are fused: favorable image data of the same target are extracted from the original image and from the processed, deblurred image and combined into a single high-quality image.
When the original image and the processed image are fused, the following fusion formula can be adopted:
I(x,y)=A(x,y)*α(x,y)+B(x,y)*β(x,y)
wherein I is the fused pixel value, (x, y) are the pixel position coordinates, A is the pixel value of the original image, B is the pixel value of the processed image, α is the weight of the original image, and β is the weight of the processed image. The formula shows that the fusion result integrates the characteristics of the original image and the processed image, enhancing the image processing effect.
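As a minimal sketch of this pixel-wise fusion, assuming NumPy arrays for both images and per-pixel weight maps (the function and variable names are illustrative assumptions):

import numpy as np

def fuse(original, processed, alpha, beta):
    # I(x, y) = A(x, y) * alpha(x, y) + B(x, y) * beta(x, y); the H x W weight maps
    # are broadcast over the color channels when the images are H x W x 3.
    if original.ndim == 3:
        alpha = alpha[..., np.newaxis]
        beta = beta[..., np.newaxis]
    return original * alpha + processed * beta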
According to the embodiments of the present disclosure, the original image and the processed image are fused to obtain the fused image, and the favorable information in the image data of both images is extracted and combined into a high-quality image, which further improves image processing quality and the imaging effect.
Fig. 6 is a flowchart illustrating an image processing method according to an exemplary embodiment of the disclosure, and as shown in fig. 6, the step S303 in fig. 5 of performing fusion processing on the original image and the processed image to obtain a fusion processed image includes the following steps.
In step S3031, a first region and a second region in the original image are determined, where the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold, and the second region is a strong edge region with a pixel gradient larger than a second gradient threshold.
In the embodiments of the present disclosure, texture is a visual feature reflecting homogeneous phenomena in an image and represents the slowly or periodically varying surface structure and arrangement of object surfaces. A weak texture region is a region in which the pixel points of the image are so similar in color and brightness that they are hard to distinguish; for example, the sky, a water surface, a road surface, or a wall in the image. In contrast, a strong edge region is a region, such as plants, mountains, fields, or daily articles in the image, whose gray values show clearly distinguishable and regularly repeated arrangements. The gray levels of pixels in a weak texture region change little or not at all, so the gradient of the pixel gray levels of the image can be used to identify such regions.
In the embodiments of the present disclosure, the aberration-removal process tends to wipe out the original weak texture details of the original image while removing its optical aberration, so the weak texture parts of the processed, deblurred image are handled poorly. Regions whose pixel points differ greatly in color and brightness and are easy to distinguish are the strong edge regions of the image, in contrast to the weak texture regions. When the original image is input into the deep convolutional neural network model to obtain the processed image, the removal of aberration may also remove weak texture details of the original image because the pixels in weak texture regions are hard to distinguish, causing image distortion.
In the embodiments of the present disclosure, the pixel points inside a weak texture region are similar in brightness and color, change little, and therefore have small gradient values; that is, a region with a small average gradient is a weak texture region. Gradient information of the original image is computed with a gradient algorithm, i.e., the gradients of all pixel points in the image are obtained, the average gradients of the pixel points in a number of regions are computed, and a region whose average gradient lies within a preset range is selected as a weak texture region.
The original image can be regarded as a two-dimensional discrete function I(i, j), where (i, j) are the coordinates of a pixel point in the image, I(i, j) is the pixel value of the pixel point (i, j), and the pixel value can be represented by an RGB value, a YUV value, or a gray value. The gradient information of the original image is the derivative of this two-dimensional discrete function.
The gradient information of the original image may be:
G(i,j)=dx(i,j)+dy(i,j)
wherein dx(i,j)=I(i+1,j)-I(i,j)
dy(i,j)=I(i,j+1)-I(i,j)
The gradient information of the original image can also be determined by the central difference method:
dx(i,j)=[I(i+1,j)-I(i-1,j)]/2
dy(i,j)=[I(i,j+1)-I(i,j-1)]/2
The gradient information of the original image can also be determined using gradient formulas from other image processing techniques. A first gradient threshold and a second gradient threshold are preset; the first gradient threshold may be equal to or smaller than the second gradient threshold. The area formed by the pixels whose gradient is smaller than the first gradient threshold is the first region of the original image, and the area formed by the pixels whose gradient is larger than the second gradient threshold is the second region of the original image.
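As an illustrative sketch of computing the gradient and selecting the two regions, assuming NumPy, a grayscale original image, and forward differences as in the formulas above (taking the absolute value for thresholding is an added assumption):

import numpy as np

def texture_regions(gray, t1, t2):
    # Classify pixels into the weak texture (first) region, gradient < t1,
    # and the strong edge (second) region, gradient > t2.
    dx = np.zeros_like(gray, dtype=np.float64)
    dy = np.zeros_like(gray, dtype=np.float64)
    dx[:-1, :] = gray[1:, :] - gray[:-1, :]      # dx(i,j) = I(i+1,j) - I(i,j)
    dy[:, :-1] = gray[:, 1:] - gray[:, :-1]      # dy(i,j) = I(i,j+1) - I(i,j)
    grad = np.abs(dx) + np.abs(dy)               # G(i,j) = dx(i,j) + dy(i,j)
    weak_texture = grad < t1                     # first region
    strong_edge = grad > t2                      # second region
    return weak_texture, strong_edge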
In step S3032, a first weight corresponding to the first region and a second weight corresponding to the second region are determined.
In the embodiment of the present disclosure, in the fusion process, different weight values are applied to the weak texture region and the strong edge region in the original image and a general region different from the weak texture region and the strong edge region. A first weight of a weak texture region and a second weight of a strong edge region in an original image are determined.
In step S3033, the original image and the processed image are fused based on the first weight and the second weight, so as to obtain a fused image.
According to the embodiments of the present disclosure, when the original image and the processed image are fused, the first weight is applied to the original image in the first region and the second weight is applied to the image processed by the deep convolutional neural network in the second region, and the two images are fused accordingly, which improves the handling of the weak texture regions of the original image, further improves image processing quality, and improves the imaging effect.
When the first region, i.e., the weak texture region, and the second region, i.e., the strong edge region, are determined in the original image, the weak texture region and the strong edge region in the original image may be determined according to color and brightness information of the image. The weak texture region and the strong edge region in the original image may also be determined by using a threshold-based segmentation method, an edge-based segmentation method, a cluster analysis method, and the like. It is to be understood that the embodiments of the present disclosure do not limit the method for determining the weak texture region and the strong edge region in the original image.
Fig. 7 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure, and as shown in fig. 7, step S3032 in fig. 6 determines a first weight corresponding to the first region and a second weight corresponding to the second region, including the following steps.
In step S30321, a first weight of the first region corresponding to the original image and the processed image is determined.
The first weight of the first area corresponding to the original image is greater than the first weight of the first area corresponding to the processed image.
In step S30322, a second weight of the second region corresponding to the original image and the processed image is determined.
The second weight of the second area corresponding to the original image is smaller than the second weight of the second area corresponding to the processed image.
During the aberration-removal processing, the original weak texture details of the original image are easily erased along with its optical aberration, so the weak texture parts of the processed, deblurred image are handled poorly: while the aberration of the original image is removed, weak texture details may also be removed because the pixels in weak texture regions are hard to distinguish.
A first weight of the first region is determined separately for the original image and for the processed image. To avoid over-processing the weak texture details of the first region, the first weight of the first region for the original image is set greater than the first weight of the first region for the processed image; that is, the unprocessed original image is given the greater weight in the weak texture region, so that the image characteristics of that region are retained.
Similarly, a second weight of the second region is determined for the original image and for the processed image. Because strong edge regions are handled well by the aberration-removal processing, the processed image is given the larger weight in the strong edge region.
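Continuing the earlier fusion sketch, the per-region weight maps might be built as follows; the specific weight values (0.8/0.2 in the two regions, 0.5 elsewhere) are illustrative assumptions, not values given by the disclosure:

import numpy as np

def region_weights(weak_texture, strong_edge, w=0.8):
    # alpha weights the original image, beta the processed image; the original image
    # gets the larger weight in the weak texture region, the processed image gets the
    # larger weight in the strong edge region, and both are equal elsewhere.
    alpha = np.full(weak_texture.shape, 0.5)
    beta = np.full(weak_texture.shape, 0.5)
    alpha[weak_texture] = w
    beta[weak_texture] = 1.0 - w
    alpha[strong_edge] = 1.0 - w
    beta[strong_edge] = w
    return alpha, beta

# fused = fuse(original, processed, *region_weights(weak_texture, strong_edge))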
According to the embodiments of the present disclosure, when the original image and the processed image are fused, the original image is given the larger first weight in the first region and the image processed by the deep convolutional neural network is given the larger second weight in the second region, and the two images are fused accordingly, which mitigates excessive deblurring of the weak texture regions of the original image, incorporates the characteristics of the strong edge regions of the processed image, and improves image processing quality.
Based on the same conception, the embodiment of the disclosure also provides an image processing device.
It is understood that the image processing apparatus provided by the embodiments of the present disclosure includes a hardware structure and/or a software module for performing each function in order to realize the above functions. The disclosed embodiments can be implemented in hardware or a combination of hardware and computer software, in combination with the exemplary elements and algorithm steps disclosed in the disclosed embodiments. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Fig. 8 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 8, the image processing apparatus is applied to an electronic device in which an image capturing apparatus is installed, and the image processing apparatus 100 includes: an acquisition module 101 and a processing module 102.
The acquisition module 101 is configured to acquire an original image captured by the image acquisition device, where the original image is an image in which aberration occurs at pixel positions.
The processing module 102 is configured to input the original image into a deep convolutional neural network model to obtain a processed image, where the deep convolutional neural network is trained based on an aberration function of the image acquisition device.
In one embodiment, the deep convolutional neural network is trained based on the aberration function in the following manner: acquiring an aberration reference image group captured by an image acquisition device having the same attributes as the installed image acquisition device, wherein each aberration reference image in the aberration reference image group includes pixels at whose positions aberration occurs; acquiring a clear sample image group, wherein each clear sample image in the clear sample image group is an image without aberration; determining an aberration function based on the aberration reference image group, and performing a convolution operation between the aberration function and the clear sample images in the clear sample image group to obtain a blurred sample image group; and forming sample image pairs from corresponding clear sample images and blurred sample images in the clear sample image group and the blurred sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network, wherein the input of the deep convolutional neural network is a blurred sample image and the output is a clear sample image.
In one embodiment, the images in the aberration reference image group are point light source images shot by an image acquisition device with the same attribute as the image acquisition device; determining an aberration function based on the aberration reference image set, comprising: measuring all pixels with aberration in the point light source image to obtain pixel positions with aberration; and mapping the pixel position to pixel brightness, and performing normalization processing on the pixel brightness of all the pixels to obtain an aberration function corresponding to the point light source image.
Fig. 9 is a block diagram illustrating an image processing apparatus according to still another exemplary embodiment of the present disclosure, and referring to fig. 9, the image processing apparatus 100 further includes a fusion module 103.
And the fusion module 103 is configured to perform fusion processing on the original image and the processed image to obtain an image after the fusion processing.
In an embodiment, the fusion module 103 performs fusion processing on the original image and the processed image in the following manner to obtain a fused image: determining a first region and a second region in the original image, wherein the first region is a weak texture region with pixel gradient smaller than a first gradient threshold value, and the second region is a strong edge region with pixel gradient larger than a second gradient threshold value; determining a first weight corresponding to the first area and a second weight corresponding to the second area; and performing fusion processing on the original image and the processed image based on the first weight and the second weight to obtain the image after the fusion processing.
In an embodiment, the fusion module determines a first weight corresponding to the first region and a second weight corresponding to the second region by: determining a first weight of the first area corresponding to the original image and the processed image, wherein the first weight of the first area corresponding to the original image is greater than the first weight of the first area corresponding to the processed image; and determining a second weight of the second area corresponding to the original image and the processed image, wherein the second weight of the second area corresponding to the original image is smaller than the second weight of the second area corresponding to the processed image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating an apparatus 200 for image processing according to an exemplary embodiment. For example, the apparatus 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 206 provides power to the various components of the device 200. The power component 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 208 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 200 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, audio component 210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 214 includes one or more sensors for providing various aspects of status assessment for the device 200. For example, the sensor component 214 may detect an open/closed state of the device 200 and the relative positioning of components, such as the display and keypad of the device 200. The sensor component 214 may also detect a change in the position of the device 200 or of a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and a change in the temperature of the device 200. The sensor component 214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 204 including instructions executable by the processor 220 of the device 200 to perform the above-described method, is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is understood that "a plurality" in this disclosure means two or more, and other quantifying terms are analogous. "And/or" describes the association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that, unless otherwise specified, "connected" includes direct connections between the two without the presence of other elements, as well as indirect connections between the two with the presence of other elements.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An image processing method, applied to an electronic device in which an image acquisition device is installed, the image processing method comprising:
acquiring an original image acquired by the image acquisition device, wherein the original image is an image having aberration at pixel positions;
inputting the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is trained based on an aberration function of the image acquisition device.
2. The image processing method according to claim 1, wherein the deep convolutional neural network is trained based on the aberration function in the following manner:
acquiring an aberration reference image group acquired by an image acquisition device having the same attributes as the image acquisition device, wherein each aberration reference image in the aberration reference image group comprises pixels whose positions have aberration;
acquiring a clear sample image group, wherein each clear sample image in the clear sample image group is an image without aberration;
determining the aberration function based on the aberration reference image group, and performing a convolution operation on the aberration function and the clear sample images in the clear sample image group to obtain a fuzzy sample image group;
forming sample image pairs from corresponding clear sample images in the clear sample image group and fuzzy sample images in the fuzzy sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network, wherein the input of the deep convolutional neural network is a fuzzy sample image and the output is the corresponding clear sample image.
3. The image processing method according to claim 2, wherein the images in the aberration reference image group are point light source images captured by an image acquisition device having the same attributes as the image acquisition device; and
the determining the aberration function based on the aberration reference image group comprises:
measuring all pixels with aberration in the point light source image to obtain the pixel positions having aberration;
mapping the pixel positions to pixel brightness, and normalizing the pixel brightness of all the pixels to obtain the aberration function corresponding to the point light source image.
4. The image processing method according to claim 1, wherein after obtaining the processed image, the method further comprises:
performing fusion processing on the original image and the processed image to obtain a fused image.
5. The image processing method according to claim 4, wherein the performing fusion processing on the original image and the processed image to obtain a fused image comprises:
determining a first region and a second region in the original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold, and the second region is a strong edge region with a pixel gradient larger than a second gradient threshold;
determining a first weight corresponding to the first region and a second weight corresponding to the second region;
performing fusion processing on the original image and the processed image based on the first weight and the second weight to obtain the fused image.
6. The image processing method according to claim 5, wherein the determining a first weight corresponding to the first region and a second weight corresponding to the second region comprises:
determining first weights of the first region corresponding to the original image and to the processed image, wherein the first weight of the first region corresponding to the original image is greater than the first weight of the first region corresponding to the processed image;
determining second weights of the second region corresponding to the original image and to the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
7. An image processing apparatus, applied to an electronic device in which an image acquisition device is installed, the image processing apparatus comprising:
an acquisition module configured to acquire an original image acquired by the image acquisition device, wherein the original image is an image having aberration at pixel positions;
a processing module configured to input the original image into a deep convolutional neural network model to obtain a processed image, wherein the deep convolutional neural network is trained based on an aberration function of the image acquisition device.
8. The image processing apparatus according to claim 7, wherein the deep convolutional neural network is trained based on the aberration function in the following manner:
acquiring an aberration reference image group acquired by an image acquisition device having the same attributes as the image acquisition device, wherein each aberration reference image in the aberration reference image group comprises pixels whose positions have aberration;
acquiring a clear sample image group, wherein each clear sample image in the clear sample image group is an image without aberration;
determining the aberration function based on the aberration reference image group, and performing a convolution operation on the aberration function and the clear sample images in the clear sample image group to obtain a fuzzy sample image group;
forming sample image pairs from corresponding clear sample images in the clear sample image group and fuzzy sample images in the fuzzy sample image group, and training based on the sample image pairs to obtain the deep convolutional neural network, wherein the input of the deep convolutional neural network is a fuzzy sample image and the output is the corresponding clear sample image.
9. The image processing apparatus according to claim 8, wherein the images in the aberration reference image group are point light source images captured by an image acquisition device having the same attributes as the image acquisition device; and
the determining the aberration function based on the aberration reference image group comprises:
measuring all pixels with aberration in the point light source image to obtain the pixel positions having aberration;
mapping the pixel positions to pixel brightness, and normalizing the pixel brightness of all the pixels to obtain the aberration function corresponding to the point light source image.
10. The image processing apparatus according to claim 7, further comprising:
a fusion module configured to perform fusion processing on the original image and the processed image to obtain a fused image.
11. The image processing apparatus according to claim 10, wherein the fusion module performs fusion processing on the original image and the processed image to obtain a fused image in the following manner:
determining a first region and a second region in the original image, wherein the first region is a weak texture region with a pixel gradient smaller than a first gradient threshold, and the second region is a strong edge region with a pixel gradient larger than a second gradient threshold;
determining a first weight corresponding to the first region and a second weight corresponding to the second region;
performing fusion processing on the original image and the processed image based on the first weight and the second weight to obtain the fused image.
12. The image processing apparatus according to claim 11, wherein the fusion module determines the first weight corresponding to the first region and the second weight corresponding to the second region in the following manner:
determining first weights of the first region corresponding to the original image and to the processed image, wherein the first weight of the first region corresponding to the original image is greater than the first weight of the first region corresponding to the processed image;
determining second weights of the second region corresponding to the original image and to the processed image, wherein the second weight of the second region corresponding to the original image is smaller than the second weight of the second region corresponding to the processed image.
13. An image processing apparatus characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the image processing method of any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the image processing method of any one of claims 1 to 6.
CN202010803059.XA 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium Active CN111968052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010803059.XA CN111968052B (en) 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010803059.XA CN111968052B (en) 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN111968052A true CN111968052A (en) 2020-11-20
CN111968052B CN111968052B (en) 2024-04-30

Family

ID=73365721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010803059.XA Active CN111968052B (en) 2020-08-11 2020-08-11 Image processing method, image processing apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN111968052B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989800A (en) * 2021-12-29 2022-01-28 南京南数数据运筹科学研究院有限公司 Intestinal plexus auxiliary identification method based on improved progressive residual error network
CN114022484A (en) * 2022-01-10 2022-02-08 深圳金三立视频科技股份有限公司 Image definition value calculation method and terminal for point light source scene
CN114863506A (en) * 2022-03-18 2022-08-05 珠海优特电力科技股份有限公司 Method, device and system for verifying access permission and identity authentication terminal

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130308866A1 (en) * 2012-05-15 2013-11-21 National Chung Cheng University Method for estimating blur degree of image and method for evaluating image quality
US20180061020A1 (en) * 2016-08-25 2018-03-01 Canon Kabushiki Kaisha Image processing method, image processing apparatus, image pickup apparatus, and storage medium
US20180115704A1 (en) * 2016-10-21 2018-04-26 Samsung Electro-Mechanics Co., Ltd. Camera module and electronic device including the same
CN109345474A (en) * 2018-05-22 2019-02-15 南京信息工程大学 Image motion based on gradient field and deep learning obscures blind minimizing technology
CN109889724A (en) * 2019-01-30 2019-06-14 北京达佳互联信息技术有限公司 Image weakening method, device, electronic equipment and readable storage medium storing program for executing
CN110533607A (en) * 2019-07-30 2019-12-03 北京威睛光学技术有限公司 A kind of image processing method based on deep learning, device and electronic equipment
CN111223062A (en) * 2020-01-08 2020-06-02 西安电子科技大学 Image deblurring method based on generation countermeasure network
US20200372618A1 (en) * 2018-05-09 2020-11-26 Tencent Technology (Shenzhen) Company Limited Video deblurring method and apparatus, storage medium, and electronic apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130308866A1 (en) * 2012-05-15 2013-11-21 National Chung Cheng University Method for estimating blur degree of image and method for evaluating image quality
US20180061020A1 (en) * 2016-08-25 2018-03-01 Canon Kabushiki Kaisha Image processing method, image processing apparatus, image pickup apparatus, and storage medium
US20180115704A1 (en) * 2016-10-21 2018-04-26 Samsung Electro-Mechanics Co., Ltd. Camera module and electronic device including the same
US20200372618A1 (en) * 2018-05-09 2020-11-26 Tencent Technology (Shenzhen) Company Limited Video deblurring method and apparatus, storage medium, and electronic apparatus
CN109345474A (en) * 2018-05-22 2019-02-15 南京信息工程大学 Image motion based on gradient field and deep learning obscures blind minimizing technology
CN109889724A (en) * 2019-01-30 2019-06-14 北京达佳互联信息技术有限公司 Image weakening method, device, electronic equipment and readable storage medium storing program for executing
CN110533607A (en) * 2019-07-30 2019-12-03 北京威睛光学技术有限公司 A kind of image processing method based on deep learning, device and electronic equipment
CN111223062A (en) * 2020-01-08 2020-06-02 西安电子科技大学 Image deblurring method based on generation countermeasure network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
任静静; 方贤勇; 陈尚文; 汪粼波; 周健: "Image deblurring based on fast convolutional neural networks", Journal of Computer-Aided Design & Computer Graphics, no. 08 *
吴梦婷; 李伟红; 龚卫国: "Dual-frame convolutional neural network for blind restoration of motion-blurred images", Journal of Computer-Aided Design & Computer Graphics, no. 12 *
郭业才; 朱文军: "Motion blur removal algorithm based on deep convolutional neural networks", Journal of Nanjing University of Science and Technology, no. 03 *
钮赛赛; 沈建新; 梁春; 张运海: "Semi-blind deconvolution restoration of retinal images based on wavefront sensing", Journal of Nanjing University of Aeronautics and Astronautics, no. 04 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989800A (en) * 2021-12-29 2022-01-28 南京南数数据运筹科学研究院有限公司 Intestinal plexus auxiliary identification method based on improved progressive residual error network
CN114022484A (en) * 2022-01-10 2022-02-08 深圳金三立视频科技股份有限公司 Image definition value calculation method and terminal for point light source scene
CN114022484B (en) * 2022-01-10 2022-04-29 深圳金三立视频科技股份有限公司 Image definition value calculation method and terminal for point light source scene
CN114863506A (en) * 2022-03-18 2022-08-05 珠海优特电力科技股份有限公司 Method, device and system for verifying access permission and identity authentication terminal

Also Published As

Publication number Publication date
CN111968052B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN107798669B (en) Image defogging method and device and computer readable storage medium
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
CN108154465B (en) Image processing method and device
CN111968052B (en) Image processing method, image processing apparatus, and storage medium
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
CN108234858B (en) Image blurring processing method and device, storage medium and electronic equipment
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN107948510B (en) Focal length adjusting method and device and storage medium
US11580327B2 (en) Image denoising model training method, imaging denoising method, devices and storage medium
US20120300115A1 (en) Image sensing device
CN106557759B (en) Signpost information acquisition method and device
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN112634160A (en) Photographing method and device, terminal and storage medium
CN110827219B (en) Training method, device and medium of image processing model
CN113706421B (en) Image processing method and device, electronic equipment and storage medium
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN109509195B (en) Foreground processing method and device, electronic equipment and storage medium
CN111741187B (en) Image processing method, device and storage medium
CN112288657A (en) Image processing method, image processing apparatus, and storage medium
CN113313788A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN107451972B (en) Image enhancement method, device and computer readable storage medium
CN110807745A (en) Image processing method and device and electronic equipment
CN110602397A (en) Image processing method, device, terminal and storage medium
CN115601316A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant