CN113674171A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents

Image processing method, image processing device, electronic equipment and computer readable storage medium

Info

Publication number
CN113674171A
CN113674171A (application CN202110930303.3A)
Authority
CN
China
Prior art keywords
image
target
restoration
view
color channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110930303.3A
Other languages
Chinese (zh)
Inventor
吴青峻
韦怡
张海裕
陈嘉伟
李响
谭耀成
高玉婵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110930303.3A priority Critical patent/CN113674171A/en
Publication of CN113674171A publication Critical patent/CN113674171A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G06T 2207/10061 Microscopic image from scanning electron microscope

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, belonging to the technical field of image processing. The method comprises the following steps: acquiring an original image shot by a target shooting component; acquiring preset image restoration parameters, the image restoration parameters being determined according to the point spread functions corresponding to the respective fields of view of the target shooting component; and performing deconvolution processing on the original image based on the image restoration parameters to obtain a restored image. The technical solution provided by the embodiments of the application can improve the sharpness of images shot by a terminal.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous upgrading of the hardware carried by terminals, most terminals have an image shooting function, and some even have a microscopic imaging function; microscopic imaging is generally realized by configuring a microscope lens in the terminal.
Taking the microscope lens as an example, its depth of field is very small. If the surface of the photographed object is uneven, some parts of an image shot through the microscope lens are clear while others are blurred; that is, the overall sharpness of the image is poor. In addition, when the user holds the terminal by hand and shoots through the microscope lens, hand shake can cause the focusing distance to exceed the depth-of-field range of the microscope lens, so the sharpness of the shot image is poor.
In view of this, how to improve the sharpness of an image shot by a terminal through a microscope lens is an urgent problem to be solved.
Disclosure of Invention
The embodiments of the application provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the sharpness of images shot by a terminal.
In a first aspect, an image processing method is provided, which includes:
acquiring an original image shot by a target shooting component;
acquiring preset image restoration parameters, wherein the image restoration parameters are determined according to the point spread functions corresponding to the respective fields of view of the target shooting component;
and performing deconvolution processing on the original image based on the image restoration parameters to obtain a restored image.
In a second aspect, there is provided an image processing apparatus comprising:
the first acquisition module is used for acquiring an original image shot by the target shooting component;
the second acquisition module is used for acquiring preset image restoration parameters, the image restoration parameters being determined according to the point spread functions corresponding to the respective fields of view of the target shooting component;
and the processing module is used for carrying out deconvolution processing on the original image based on the image restoration parameters to obtain a restored image.
In a third aspect, an electronic device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the image processing method as described in the first aspect above.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as described in the first aspect above.
The beneficial effects of the technical solutions provided by the embodiments of the application include at least the following:
the original image shot by the target shooting component is obtained, the preset image restoration parameters are obtained, the image restoration parameters are determined according to the point spread functions corresponding to the fields of view of the target shooting component, then the original image is subjected to deconvolution processing based on the image restoration parameters, the restored image is obtained, in this way, in the image shooting process of the target shooting component, an image formed by one point of the surface of the shot object passing through the target shooting component is a plurality of points formed by performing convolution processing on the point by using the corresponding point spread function, namely, the original image shot by the target shooting component is obtained by performing convolution processing and spreading on each point of the surface of the shot object by using the corresponding point spread function, therefore, the embodiment of the application utilizes the image restoration parameters determined according to the point spread functions corresponding to the fields of view of the target shooting component, the original image is deconvoluted, so that the diffusion phenomenon in the original image can be restored to obtain a clear restored image, and the definition of the restored image is greater than that of the original image, so that the definition of the image shot by the target shooting assembly is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow diagram of a method of image processing in one embodiment;
FIG. 2 is a field-of-view division diagram of an exemplary target shooting component;
FIG. 3 is a flow chart of an image processing method in another embodiment;
FIG. 4 is a flow diagram illustrating the acquisition of the target point spread function for each field of view of the target shooting component in another embodiment;
FIG. 5 is a sampling schematic of the 49 fields of view of an exemplary target shooting component for one color channel at a single wavelength;
FIG. 6 is a flow chart of step 402 in another embodiment;
FIG. 7 is a flowchart of step 103 in another embodiment;
FIG. 8 is a diagram illustrating an exemplary sharpness comparison of an original image and a restored image;
FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
fig. 10 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the conventional technology, whether the shooting component carried by a terminal is a conventional shooting lens or a microscope lens, a point on the surface of the photographed object is imaged through the shooting component as a plurality of spread points produced by convolving that point with the corresponding Point Spread Function (PSF). That is, one point on the object surface appears as a spread-out "light spot" in the original image shot by the shooting component, so the sharpness of the original image is poor. Moreover, when the shooting component is a microscope lens, its small depth of field makes the sharpness of the original image even worse, seriously affecting the user's shooting experience.
In view of this, embodiments of the present application provide an image processing method in which an original image shot by a target shooting component is acquired, preset image restoration parameters are acquired, the image restoration parameters being determined according to the point spread functions corresponding to the respective fields of view of the target shooting component, and the original image is then deconvolved based on the image restoration parameters to obtain a restored image. By deconvolving the original image with image restoration parameters determined from the point spread functions of all fields of view, the spread in the original image can be reversed to obtain a clear restored image whose sharpness is greater than that of the original image, thereby improving the sharpness of images shot by the target shooting component and achieving a clear photo effect over a large depth-of-field range.
In the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, and the image processing apparatus may be implemented as part or all of a terminal by software, hardware, or a combination of software and hardware.
In the following method embodiments, the execution subject is described by taking a terminal as an example. The terminal may be a device such as a smartphone, a notebook computer, a tablet computer, a smart watch, a smart band, or smart glasses; the type of the terminal is not specifically limited herein.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present application is shown. As shown in fig. 1, the image processing method may include the steps of:
Step 101: the terminal acquires an original image shot by the target shooting component.
The terminal uses the target shooting component to capture an image of a photographed object, which may be of any kind, such as a person, an animal, a building, or a plant, to obtain an original image.
The target shooting component may be a front or rear shooting component configured in the terminal. It may be a conventional shooting lens or a microscope lens for achieving high magnification; the magnification of the microscope lens is greater than that of a conventional shooting lens.
Step 102: the terminal acquires preset image restoration parameters.
The image restoration parameters are determined according to the point spread functions corresponding to the respective fields of view of the target shooting component. In the embodiments of the application, the field-of-view range of the target shooting component can be divided into a plurality of different fields of view.
Referring to fig. 2, a field-of-view division diagram of an exemplary target shooting component is shown. As shown in fig. 2, each square may represent one field of view of the target shooting component; the field-of-view range shown in fig. 2 is divided into 49 fields of view.
It should be noted that, in the embodiments of the present application, the manner of dividing the field of view of the target shooting component is not limited to that shown in fig. 2. For example, the field-of-view range may instead be divided into a plurality of different fields of view as concentric circles, in which case the fields of view comprise a circular central field of view containing the center and annular fields of view surrounding it. The field-of-view division manner of the target shooting component is not specifically limited here.
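As a concrete illustration of the grid division above, a pixel can be assigned to its field of view with simple integer arithmetic. The helper below is a sketch of one possible row-major indexing, not taken from the patent; a concentric-circle division would need a radius-based rule instead:

```python
def field_index(x, y, width, height, grid=7):
    """Map a pixel coordinate to the index of its field of view.

    The field-of-view range is split into a grid x grid lattice
    (7 x 7 = 49 fields, matching fig. 2). Returns a row-major index
    in [0, grid * grid - 1].
    """
    col = min(int(x * grid / width), grid - 1)
    row = min(int(y * grid / height), grid - 1)
    return row * grid + col
```

For a 700 x 700 image, `field_index(350, 350, 700, 700)` lands in field 24, the central field of the 7 x 7 grid.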
In the embodiments of the application, the terminal is preset with image restoration parameters determined according to the point spread functions corresponding to the respective fields of view of the target shooting component. Optionally, the actual point spread function of each field of view may be measured in advance by a point-spread-function measurement device; the terminal then processes the measured point spread functions to obtain the image restoration parameters and stores them at a preset storage address of the terminal. Optionally, the image restoration parameters may instead be determined according to the lens type of the target shooting component during its production and burned into a preset module of the terminal.
In this way, when the original image shot by the target shooting component needs to be deconvolved to restore its sharpness, the terminal can read the image restoration parameters locally.
Step 103: the terminal performs deconvolution processing on the original image based on the image restoration parameters to obtain a restored image.
The terminal performs deconvolution on the original image using a deconvolution algorithm with the image restoration parameters as algorithm parameters, obtaining a restored image whose sharpness is greater than that of the original image. The deconvolution algorithm may be, for example, the Richardson-Lucy deconvolution algorithm or the Wiener deconvolution algorithm.
In this way, the spread in the original image can be reversed through deconvolution to obtain a clear restored image whose sharpness is greater than that of the original image. The restored image is output as the image finally obtained by the target shooting component, which improves the sharpness of the shot image and achieves a clear photo effect over a large depth-of-field range.
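To make the deconvolution step concrete, the following is a minimal sketch of the Wiener deconvolution named above, using NumPy's FFT. The noise-to-signal constant `k` and the use of a single full-frame PSF are simplifications for illustration, since the patent derives its restoration parameters from per-field, per-channel point spread functions:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-2):
    """Wiener deconvolution in the frequency domain.

    image : 2-D array, the blurred original image
    psf   : 2-D array, same shape as image, with its peak at the
            array origin (apply np.fft.ifftshift to a centred PSF)
    k     : noise-to-signal power ratio (a tuning constant here,
            not a value given by the patent)
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(image)
    # Wiener filter: F = conj(H) * G / (|H|^2 + k)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

With a small `k` and a known blur kernel, an image blurred by circular convolution with that kernel is recovered almost exactly; larger `k` trades restoration accuracy for noise suppression.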
The image restoration effect of deconvolution is closely related to the accuracy of the image restoration parameters. For the target shooting component, the point spread functions at different depths of field and in different fields of view differ greatly. If the original image is deconvolved using the point spread function of a single depth of field or a single field of view (for example, the central field of view) as the image restoration parameter, the large error between the point spread functions of the edge fields of view and the point spread function actually used causes an obvious ringing effect, so the restored image has a shallow depth of field and poor sharpness.
In view of this, in the embodiments of the present application the image restoration parameters are determined according to the point spread functions corresponding to all fields of view of the target shooting component, which improves the accuracy of the image restoration parameters. During deconvolution of the original image, the point spread functions of all fields of view thus participate in the process, so the sharpness of the restored image can be effectively improved and its depth-of-field range extended.
Further, in a possible embodiment, the target shooting component includes a phase plate, and the phase plate is used to modulate the variation of the point spread function of the target shooting component across different depths of field to be smaller than a preset variation threshold.
Exemplarily, the phase plate may rest against the inner wall of the lens barrel of the target shooting component on its peripheral side and is provided with a diffraction microstructure. When light passes through the phase plate, it is diffracted by the diffraction microstructure, which adjusts the phase of the light and thereby increases the depth of field when the target shooting component shoots. Through this phase adjustment, the variation of the point spread function across different depths of field becomes smaller than the preset variation threshold; that is, the degree to which the point spread function disperses with depth of field is significantly reduced, and the imaging sharpness of scenery over a long range before and after the optimal imaging object distance becomes nearly uniform. This improves the sharpness of the original image shot by the terminal and, in turn, the sharpness of the restored image.
In addition, in the related art, to improve the sharpness of the captured image, a plurality of images at different depth positions are fused into one clear full-field image. However, that method must control the shooting component to move to multiple positions for shooting, the clear region differs from image to image, and the images must then be fused by an algorithm to obtain a clear full-field image; the image acquisition stage is cumbersome, so shooting takes too long. In the embodiments of the present application, only one original image shot by the target shooting component is acquired, the preset image restoration parameters are obtained as described above, and the original image is deconvolved based on them. A clear full-field restored image can thus be obtained without moving the shooting component to multiple positions, which shortens shooting time and improves shooting efficiency.
In the above embodiment, the original image shot by the target shooting component is acquired and the preset image restoration parameters are acquired, the image restoration parameters being determined according to the point spread functions corresponding to the respective fields of view of the target shooting component; the original image is then deconvolved based on the image restoration parameters to obtain a restored image. During image capture, each point on the surface of the photographed object is imaged as a spread of points produced by convolving that point with the corresponding point spread function; that is, the original image is the result of convolving and spreading every point on the object surface with its corresponding point spread function. By deconvolving the original image with image restoration parameters determined from the point spread functions of all fields of view, the spread in the original image can be reversed to obtain a clear restored image whose sharpness is greater than that of the original image, thereby improving the sharpness of images shot by the target shooting component.
In one embodiment, based on the embodiment shown in fig. 1, referring to fig. 3, the present embodiment relates to a setting process of image restoration parameters. In this embodiment, the image restoration parameters include target restoration parameters of each color channel, and as shown in fig. 3, the setting process of the image restoration parameters includes step 104:
and 104, the terminal acquires each view field of the target shooting assembly and a target point diffusion function corresponding to the color channel for each color channel, and acquires a target restoration parameter of the color channel according to the target point diffusion function corresponding to each view field.
The image captured by the target shooting component may have one or more color channels; for example, the original image may have three color channels: the R channel, the G channel, and the B channel. In this embodiment, the terminal obtains the target restoration parameter of each color channel, and these together form the image restoration parameters.
In the embodiments of the application, the point spread function corresponding to each field of view includes a target point spread function for each color channel. Therefore, for each color channel, the terminal first acquires the target point spread function corresponding to that color channel in each field of view of the target shooting component, and then obtains the target restoration parameter of the color channel according to the target point spread functions corresponding to the respective fields of view.
For example, suppose the field-of-view range of the target shooting component is divided into 49 fields of view. For the R channel, the terminal acquires the target point spread functions of the 49 fields of view corresponding to the R channel, and then obtains the target restoration parameter of the R channel from them. Similarly, the terminal obtains the target restoration parameters of the G channel and the B channel; the target restoration parameters of the R, G, and B channels form the image restoration parameters.
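The per-channel structure can be sketched as follows. The `deconvolve` callable and the use of one effective PSF per channel are assumptions for illustration; the patent builds each channel's restoration parameter from the PSFs of all 49 fields of view, which this sketch does not model:

```python
import numpy as np

def restore_rgb(image, channel_psfs, deconvolve, k=1e-2):
    """Deconvolve each color channel with its own restoration parameter.

    image        : H x W x 3 array holding the R, G and B channels
    channel_psfs : sequence of three PSF arrays, one per channel
    deconvolve   : callable(channel_2d, psf, k) -> restored_2d,
                   e.g. a Wiener or Richardson-Lucy routine
    """
    restored = np.empty(image.shape, dtype=float)
    for c in range(3):
        restored[..., c] = deconvolve(image[..., c], channel_psfs[c], k)
    return restored
```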
In one possible implementation, referring to fig. 4, the terminal may execute steps 401 and 402 shown in fig. 4 to acquire the target point spread function corresponding to each color channel in each field of view of the target shooting component:
Step 401: for each field of view, the terminal obtains candidate point spread functions of the field of view corresponding to a plurality of different wavelengths.
The point spread function describes the response of the imaging system (the target shooting component) to a point of light. Generally, the light in a shooting environment is varied and spans different wavelengths, and the target shooting component responds differently to light of different wavelengths. In view of this, in the embodiments of the application, for each field of view in each color channel, the terminal first acquires candidate point spread functions corresponding to that field of view at a plurality of different wavelengths.
In a possible embodiment, taking measurement by the point-spread-function measurement device described above as an example: the wavelength of light in a shooting environment generally lies in the range of 400 nm to 700 nm, so 61 wavelengths are set at equal 5 nm intervals. For each field of view in each color channel, light of the different wavelengths is projected onto the point-spread-function measurement device from that field of view (the device simulates the target shooting component; the pupil sampling during projection may be set freely in implementation, for example to 128 × 128). Candidate point spread functions corresponding to that field of view at the plurality of different wavelengths are then obtained according to the device's responses to the different wavelengths.
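The wavelength grid described above can be written down in two lines; the 128 × 128 pupil sampling is the example value from the text:

```python
import numpy as np

# 400 nm to 700 nm at equal 5 nm intervals yields the 61 wavelengths
# used when measuring the per-field candidate point spread functions.
wavelengths_nm = np.arange(400, 701, 5)
pupil_sampling = (128, 128)  # example pupil sampling from the text
```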
In another possible implementation, the terminal may further perform the following steps A1 and A2 to obtain the candidate point spread functions of the field of view corresponding to the plurality of different wavelengths:
Step A1: the terminal obtains the initial point spread functions of the field of view corresponding to the respective wavelengths.
Step A2: the terminal samples each initial point spread function according to a preset first sampling strategy to obtain the candidate point spread functions.
That is, for each field of view in each color channel, light of the different wavelengths is projected onto the point-spread-function measurement device from that field of view, and initial point spread functions corresponding to the field of view at the respective wavelengths are obtained from the device's responses to the different wavelengths. The terminal then samples the initial point spread functions according to the preset first sampling strategy to obtain the candidate point spread functions. In other words, this second embodiment, shown in steps A1 and A2, adds a sampling process compared with the first embodiment.
The first sampling strategy may be a sampling strategy corresponding to image-plane sampling. For example, the image-plane sampling interval is chosen as 0.1 μm; correspondingly, the image-plane sampling rate is determined as 1024 × 1024 according to the size of the image shot by the target shooting component, and the terminal samples each initial point spread function at this 1024 × 1024 rate to obtain the candidate point spread functions. Choosing a higher image-plane sampling rate increases the data density of the candidate point spread functions and thus their accuracy.
Illustratively, referring to fig. 5, fig. 5 is a sampling schematic of the 49 fields of view of an exemplary target shooting component for one color channel at a single wavelength.
Step 402: the terminal fuses the candidate point spread functions to obtain the target point spread function corresponding to the field of view.
That is, the terminal fuses the candidate point spread functions of the field of view; exemplarily, it may average them or compute their weighted sum to obtain the target point spread function corresponding to the field of view.
Thus, according to the above embodiment, for each color channel the terminal can obtain the target point spread function corresponding to that color channel in each field of view, and can then obtain the target restoration parameter of the color channel according to the target point spread functions corresponding to the respective fields of view.
In this way, because the differing responses of the target shooting component to light of different wavelengths are taken into account, the target point spread function of each field of view in each color channel is obtained by fusing the candidate point spread functions corresponding to the different wavelengths. This improves the accuracy of the target point spread function, and in turn the accuracy of the target restoration parameter of the color channel, which helps improve the sharpness of the restored image.
In an embodiment, based on the embodiment shown in fig. 4, referring to fig. 6, this embodiment relates to the process in which the terminal fuses the candidate point spread functions to obtain the target point spread function corresponding to a field of view.
As shown in fig. 6, step 402 includes steps 601 and 602:
step 601, the terminal acquires a weight coefficient corresponding to each candidate point diffusion function.
In this embodiment, after the terminal acquires candidate point spread functions corresponding to the field and a plurality of different wavelengths for each field in each color channel through the implementation of the above embodiment, the terminal acquires a weight coefficient corresponding to each candidate point spread function to perform weighted summation on each candidate point spread function.
In a possible implementation manner, the terminal may read, in a preset storage address of the terminal, a weight coefficient corresponding to each candidate point spread function, where the weight coefficient corresponding to each candidate point spread function may be manually set based on experience.
In another possible implementation manner, the terminal may perform the following steps B1 and B2 to implement the process of obtaining the weight coefficients corresponding to the candidate point spread functions:
and step B1, the terminal acquires a spectral response curve corresponding to the color channel.
The spectral response curve comprises spectral response data corresponding to each wavelength.
Illustratively, light of different wavelengths (such as the 61 wavelengths of light exemplified above) may be projected onto the spectrometer in turn, and the spectral response curve may be determined based on the spectral response data of the spectrometer for each of the different wavelengths of light.
The horizontal axis of the spectral response curve may be, for example, the wavelength of the light, and the vertical axis may be, for example, the spectral response data corresponding to each wavelength; the spectral response data may be response data such as the energy of light of that wavelength absorbed by the spectrometer.
And step B2, the terminal determines the weight coefficient of the candidate point spread function corresponding to the wavelength according to the spectral response data corresponding to the wavelength for each wavelength.
For example, the terminal may normalize the spectral response data corresponding to each wavelength to the [0,1] interval in equal proportion, and then use the normalized value for each wavelength as the weight coefficient of the candidate point spread function corresponding to that wavelength. It is understood that the weight coefficients are positively correlated with the spectral response data.
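The normalization described above can be sketched as follows; the 61-wavelength sampling grid and the Gaussian-shaped response values are illustrative assumptions, not measured data:

```python
import numpy as np

# Hypothetical spectral response data for one color channel, sampled at
# 61 wavelengths (e.g. 400-700 nm in 5 nm steps); values are illustrative.
wavelengths = np.linspace(400, 700, 61)                       # nm
response = np.exp(-0.5 * ((wavelengths - 530) / 40.0) ** 2)   # e.g. a G channel

# Normalize the response data to [0, 1] in equal proportion; each normalized
# value is the weight coefficient of the candidate PSF at that wavelength.
weights = response / response.max()
```

Dividing by the maximum keeps the weights positively correlated with the spectral response data, as the text requires.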
Step 602, the terminal performs weighted summation on each candidate point diffusion function according to the weight coefficient corresponding to each candidate point diffusion function, so as to obtain a target point diffusion function corresponding to the field of view.
In a possible implementation manner, the terminal multiplies each candidate point spread function by its corresponding weight coefficient to obtain a product, and adds the products corresponding to the candidate point spread functions to obtain the target point spread function corresponding to the field of view.
In another possible implementation manner, the terminal may perform the following steps C1 and C2, implementing the process of step 602:
and step C1, the terminal performs weighted summation on each candidate point diffusion function according to each weight coefficient to obtain a summation point diffusion function.
And step C2, the terminal samples the summation point diffusion function according to a preset second sampling strategy to obtain a target point diffusion function corresponding to the field of view.
For each view field in each color channel, the terminal performs weighted summation on each candidate point diffusion function according to each weight coefficient, the obtained result is used as a summation point diffusion function, and then the summation point diffusion function is sampled according to a preset second sampling strategy to obtain a target point diffusion function corresponding to the view field.
Wherein the second sampling strategy may be to down-sample the summation point spread function to the image sensor pixel size of the target shooting assembly. For example, if the image-plane sampling intervals of the summation point spread function and the candidate point spread functions are consistent, both being 0.1 μm, the terminal down-samples the data interval of the summation point spread function to the pixel size of the image sensor by a down-sampling algorithm, that is, sums every 11 × 11 sampling points in the summation point spread function into one point, thereby obtaining the target point spread function corresponding to the field of view.
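The weighted summation and the second sampling strategy can be sketched together as follows; the 0.1 μm sampling interval and the 11 × 11 binning follow the example above, while the function name and the random candidate data are assumptions for illustration:

```python
import numpy as np

def fuse_and_downsample(candidate_psfs, weights, bin_size=11):
    """Weighted-sum the candidate PSFs into a summation PSF, then bin it so
    its sampling interval matches the sensor pixel size (0.1 um x 11)."""
    summed = np.tensordot(weights, candidate_psfs, axes=1)  # summation PSF
    h, w = summed.shape
    h, w = h - h % bin_size, w - w % bin_size               # crop to a multiple
    # Sum every bin_size x bin_size block of samples into one point.
    binned = summed[:h, :w].reshape(h // bin_size, bin_size,
                                    w // bin_size, bin_size).sum(axis=(1, 3))
    return binned

# Illustrative sizes: 3 wavelengths, 1023 x 1023 samples at 0.1 um spacing.
psfs = np.random.rand(3, 1023, 1023)
target_psf = fuse_and_downsample(psfs, np.array([0.2, 1.0, 0.5]))
print(target_psf.shape)  # (93, 93)
```

Because the binning sums rather than averages, the total energy of the summation PSF is preserved in the target PSF.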
Therefore, the weight coefficient of the candidate point diffusion function corresponding to each wavelength is determined through the spectral response curve, and then the candidate point diffusion functions are subjected to weighted summation according to the weight coefficients corresponding to the candidate point diffusion functions to obtain the target point diffusion function corresponding to the view field.
In an embodiment, based on the embodiment shown in fig. 3, this embodiment relates to a process of how the terminal obtains the target restoration parameters of the color channels according to the target point spread function corresponding to each field of view.
For example, the terminal may perform non-negative matrix decomposition on the target point spread function corresponding to each field of view to obtain the target restoration parameter of the color channel. The target recovery parameters comprise a plurality of intrinsic point diffusion functions and space variation coefficients corresponding to the intrinsic point diffusion functions, the intrinsic point diffusion functions are used for representing data characteristics of the target point diffusion functions, and the space variation coefficients are used for representing the space variation characteristics of the target point diffusion functions.
That is, for each color channel, after the terminal acquires each view field of the target shooting component and the target point spread function corresponding to the color channel, the terminal performs non-negative matrix decomposition on the target point spread function corresponding to each view field in the color channel to obtain the target restoration parameter of the color channel.
In a possible embodiment, before the non-negative matrix decomposition, the terminal may first form the target point spread functions corresponding to the fields of view into a matrix and perform singular value decomposition on that matrix, and then use the result of the singular value decomposition as the initial value of the subsequent non-negative matrix decomposition.
In the following, the procedure of singular value decomposition is briefly described, and for example, the terminal may perform singular value decomposition by the following equation 1:
A = U Σ V^H (Equation 1)
Wherein A is a matrix formed by the target point spread functions corresponding to the respective fields of view (in the above example, the target shooting assembly has 49 fields of view, and the target point spread function obtained for each field of view under the second sampling strategy has a size of 93 × 93, since 93 ≈ 1024/11); U and V are both square matrices, Σ has the same size as A, and V^H is the Hermitian transpose of V.
In this way, since each 93 × 93 two-dimensional matrix for a field of view is flattened into a one-dimensional vector of 8649 elements for the singular value decomposition, the matrix A becomes 8649 × 49. Performing the reduced singular value decomposition according to Equation 1 yields U (8649 × 49), Σ (49 × 49) and V (49 × 49), and then the first N columns of U (1 ≤ N ≤ 49; assuming N is 25, the first N columns form an 8649 × 25 matrix) may be used as the initial value of the non-negative matrix decomposition.
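The flattening and singular value decomposition may be sketched as follows. The random PSF stack and variable names are illustrative assumptions, and taking the absolute value of the truncated U for the non-negative initial value is an added assumption (non-negative matrix decomposition requires a non-negative starting point, which the text does not spell out):

```python
import numpy as np

# Hypothetical stack of target PSFs: 49 fields of view, each 93 x 93.
psf_stack = np.random.rand(49, 93, 93)

# Flatten each 93 x 93 PSF into an 8649-element column -> A is 8649 x 49.
A = psf_stack.reshape(49, -1).T

# Reduced SVD: A = U @ diag(s) @ Vh, with U 8649 x 49 and Vh 49 x 49.
U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Keep the first N modes of U as the starting point for the subsequent
# non-negative matrix decomposition (N = 25 in the text's example).
N = 25
W0 = np.abs(U[:, :N])  # abs(): assumed, since a non-negative init is needed
print(A.shape, U.shape, W0.shape)  # (8649, 49) (8649, 49) (8649, 25)
```

Operating on the 8649 × 25 truncation instead of all 49 modes is what reduces the data volume of the subsequent decomposition, as the next paragraph notes.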
Therefore, the two-dimensional matrix is converted into the one-dimensional matrix through singular value decomposition to carry out the next non-negative matrix decomposition, so that the data operation amount of the non-negative matrix decomposition can be reduced, the non-negative matrix decomposition efficiency is improved, and the storage space is saved.
After the singular value decomposition is completed, the terminal performs non-negative matrix decomposition on the initial value obtained by the singular value decomposition to obtain a plurality of orthogonal, spatially independent intrinsic point spread functions and the spatial variation coefficients corresponding to the intrinsic point spread functions. The intrinsic point spread functions are used for representing the data characteristics of the target point spread functions, and the spatial variation coefficients are used for representing the spatial variation characteristics of the target point spread functions; together they constitute the target restoration parameters of the color channel, and the sum of products of the intrinsic point spread functions and their corresponding spatial variation coefficients reconstructs the target point spread functions.
Thus, for a color channel, the relationship between the target restoration parameter of the color channel, the decomposed image (obtained by decomposing the original image captured by the target capture component) corresponding to the color channel, and the decomposed restoration image corresponding to the color channel can be as follows:
I = Σ_{i=1}^{N} (a_i · S) ⊗ p_i
wherein I represents the decomposed image corresponding to the color channel, S represents the decomposed restored image corresponding to the color channel, a_i represents a spatial variation coefficient in the target restoration parameters of the color channel, p_i represents an intrinsic point spread function in the target restoration parameters of the color channel, and N is the number of modes.
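The relationship above can be sketched as a forward blur. Treating each spatial variation coefficient a_i as a per-pixel map and using circular FFT convolution for ⊗ are assumptions of this illustration, as are the function and variable names:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def forward_blur(S, coeffs, eigen_psfs):
    """I = sum_i (a_i * S) conv p_i: a spatially varying blur expressed with
    N intrinsic (eigen) PSFs p_i and coefficient maps a_i of the same size
    as the decomposed restored image S."""
    I = np.zeros_like(S)
    for a_i, p_i in zip(coeffs, eigen_psfs):
        # Weight S pixel-wise by a_i, then circularly convolve with p_i.
        I += np.real(ifft2(fft2(a_i * S) * fft2(p_i)))
    return I
```

With a single mode, a unit coefficient map and a delta PSF, the "blur" leaves the image unchanged, which is a convenient sanity check on the model.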
Therefore, after the terminal acquires the target restoration parameters of each color channel, the target restoration parameters of each color channel can be used as image restoration parameters, deconvolution processing is carried out on the original image based on the image restoration parameters, a clear restoration image with large depth of field is obtained, and the quality of the image shot by the terminal is improved.
In one embodiment, referring to fig. 7, the present embodiment relates to a process of how a terminal performs deconvolution processing on an original image based on an image restoration parameter to obtain a restored image. As shown in fig. 7, step 103 may include step 701, step 702, and step 703:
in step 701, the terminal decomposes an original image into decomposed images corresponding to each color channel.
As described above, the original image captured by the target shooting assembly may have one or more color channels, for example, three color channels: an R channel, a G channel and a B channel. The terminal decomposes the original image into decomposed images respectively corresponding to the color channels, that is, obtains a decomposed image corresponding to the R channel, a decomposed image corresponding to the G channel and a decomposed image corresponding to the B channel.
And step 702, the terminal performs deconvolution processing on the decomposed image corresponding to each color channel according to the target restoration parameters of the color channel to obtain the decomposed restoration image corresponding to the color channel.
The image restoration parameters comprise target restoration parameters of each color channel, so that the terminal performs deconvolution processing on the decomposed image of the corresponding color channel by adopting the target restoration parameters of each color channel to obtain the decomposed restoration image corresponding to each color channel.
Hereinafter, a process of performing deconvolution processing on a decomposed image by a terminal according to a target restoration parameter of a color channel to obtain a decomposed restored image corresponding to the color channel will be described by taking one color channel as an example.
For example, for a decomposed image corresponding to one color channel, the terminal may perform iterative deconvolution processing on the decomposed image according to the target restoration parameter of the color channel, so as to obtain a decomposed restoration image corresponding to the color channel.
Wherein, the K-th iteration deconvolution processing comprises the following steps: acquiring image restoration correction data according to the target restoration parameters, the exploded image and the intermediate image, and acquiring an output image corresponding to the K-th iterative deconvolution processing according to the intermediate image and the image restoration correction data; and under the condition that K is equal to 1, the intermediate image is a decomposition image, under the condition that K is larger than 1, the intermediate image is an output image corresponding to the last iteration deconvolution processing, and under the condition that K is equal to the preset iteration number, the output image corresponding to the Kth iteration deconvolution processing is a decomposition restoration image corresponding to the color channel.
In one possible implementation, referring to the following Equations 2 to 4, the terminal may perform the K-th iterative deconvolution processing according to Equations 2 to 4:

B_{K-1} = Σ_{i=1}^{N} (a_i · S_{K-1}) ⊗ p_i (Equation 2)

Ẽ_{K-1} = Σ_{i=1}^{N} F{a_i · (I / B_{K-1})} · F{p̄_i} (Equation 3)

S_K = S_{K-1} · F^{-1}{Ẽ_{K-1}} (Equation 4)

wherein a_i represents a spatial variation coefficient in the target restoration parameters of the color channel, and p_i represents an intrinsic point spread function in the target restoration parameters of the color channel; N is the number of modes, which may be 25, for example; S_{K-1} is the intermediate image; p̄_i is the flipped intrinsic point spread function obtained by matrix-flipping p_i; I is the decomposed image corresponding to the color channel; B_{K-1} is the blurred estimate of the intermediate image; E_{K-1} is the image restoration correction data; the tilde denotes the frequency-domain transform of a variable; F represents the Fourier transform and F^{-1} the inverse Fourier transform; S_K is the output image corresponding to the K-th iterative deconvolution processing. When K is equal to the preset number of iterations, the output image corresponding to the K-th iterative deconvolution processing is the decomposed restored image corresponding to the color channel.
In practical implementation, the number of iterations may be set as required, for example, to 20.
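Since the full spatially varying update is lengthy, the sketch below shows the same multiplicative structure for the simplified case of a single, spatially invariant PSF (N = 1, a_1 ≡ 1), where it reduces to the classical Richardson–Lucy iteration; the function name and the small guard constant are assumptions:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def rl_deconvolve(I, psf, iterations=20):
    """Classical Richardson-Lucy iteration (single, spatially invariant PSF):
    each pass blurs the current estimate, compares it with the decomposed
    image I to form correction data E, and applies E as a multiplicative
    update -- the same structure as the K-th iteration described above."""
    psf_f = fft2(psf)
    S = I.copy()  # S_0: start from the decomposed image itself
    for _ in range(iterations):
        blurred = np.real(ifft2(fft2(S) * psf_f))   # forward blur of S_{K-1}
        ratio = I / np.maximum(blurred, 1e-12)      # guard against divide-by-zero
        # conj(psf_f) in the frequency domain plays the role of the flipped PSF
        E = np.real(ifft2(fft2(ratio) * np.conj(psf_f)))
        S = S * E                                   # multiplicative update
    return S
```

With a delta-function PSF the blur is the identity, so the iteration returns the input unchanged, which makes the update easy to sanity-check.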
In another possible implementation, the terminal obtaining the output image corresponding to the K-th iterative deconvolution processing according to the intermediate image and the image restoration correction data may include steps D1 and D2 as follows:
and D1, the terminal acquires regular term data according to the intermediate image and a preset regular term coefficient.
For example, see Equation 5:

TV_{K-1} = 1 − λ_TV · div(∇S_{K-1} / |∇S_{K-1}|) (Equation 5)

wherein TV_{K-1} is the regular term data; λ_TV is the preset regular term coefficient, which may be set as required, for example, to 0.001; ∇S_{K-1} represents the gradient of the intermediate image S_{K-1}; and div represents the divergence.
And D2, the terminal acquires an output image corresponding to the K-th iteration deconvolution processing according to the intermediate image, the image restoration correction data and the regular item data.
Referring to the following Equation 6, the terminal may perform the K-th iterative deconvolution processing according to Equations 2, 3 and 5 above together with Equation 6:

S_K = (S_{K-1} / TV_{K-1}) · F^{-1}{Ẽ_{K-1}} (Equation 6)

wherein S_{K-1} is the intermediate image; E_{K-1} is the image restoration correction data; the tilde denotes the frequency-domain transform of a variable; S_K is the output image corresponding to the K-th iterative deconvolution processing; and TV_{K-1} is the regular term data. When K is equal to the preset number of iterations, the output image corresponding to the K-th iterative deconvolution processing is the decomposed restored image corresponding to the color channel.
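Assuming the regular term takes the usual total-variation form 1 − λ_TV · div(∇S/|∇S|), the regular term data of step D1 might be computed as follows; the function name and the small epsilon guard are illustrative additions:

```python
import numpy as np

def tv_term(S, lam_tv=0.001, eps=1e-8):
    """Regular term data 1 - lam_tv * div(grad(S)/|grad(S)|), the usual
    total-variation factor of TV-regularized deconvolution (assumed form)."""
    gy, gx = np.gradient(S)                   # gradient of the intermediate image
    norm = np.sqrt(gx**2 + gy**2) + eps       # |grad S|; eps avoids divide-by-zero
    div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    return 1.0 - lam_tv * div
```

The output would then divide the intermediate image in the multiplicative update, so that flat regions (where the term equals 1) are left untouched while noisy gradients are damped.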
Therefore, the terminal can obtain a clear decomposition restoration image corresponding to each color channel through a deconvolution processing mode of total variation regularization.
And 703, the terminal performs fusion processing on the decomposed restored images corresponding to the color channels to obtain restored images.
And after the terminal obtains the decomposed restoration images corresponding to the color channels through deconvolution processing, fusing the decomposed restoration images corresponding to the color channels to obtain a clear restoration image, wherein the definition of the restoration image is greater than that of the original image.
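Steps 701 to 703 amount to a split–process–merge over color channels, which can be sketched as follows (an identity "restoration" stands in for the per-channel deconvolution, and the array names are illustrative):

```python
import numpy as np

# Hypothetical per-channel pipeline: split an RGB original image, restore
# each channel independently, then fuse the restored channels back together.
rgb = np.random.rand(64, 64, 3)
channels = [rgb[:, :, c] for c in range(3)]   # step 701: decompose by channel
restored = [ch.copy() for ch in channels]     # step 702: deconvolution placeholder
fused = np.stack(restored, axis=-1)           # step 703: fuse restored channels
print(fused.shape)  # (64, 64, 3)
```

Because each channel uses its own target restoration parameters, the per-channel processing in step 702 is independent and could also run in parallel.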
Referring to fig. 8, fig. 8 is a schematic diagram of an exemplary original image and restored image. In fig. 8, the left side is the original image and the right side is the restored image. It can be seen that, when the acquired image restoration parameters are used for image restoration by the total-variation regularized deconvolution algorithm, the definition of regions at different depths of field and fields of view in the image is obviously improved, and a restored image that is clear in each field of view with smooth transitions is obtained.
Therefore, the restored image is output as the image finally shot by the target shooting component, the definition of the image shot by the target shooting component is improved, and the clear photo effect in a larger depth of field range is obtained.
In one embodiment, there is provided an image processing method including:
step a, for each view field of a target shooting assembly under each color channel, a terminal acquires an initial point diffusion function corresponding to each wavelength of the view field, and samples each initial point diffusion function according to a preset first sampling strategy to obtain each candidate point diffusion function; and the terminal acquires the weight coefficient corresponding to each candidate point diffusion function, and performs weighted summation on each candidate point diffusion function according to the weight coefficient corresponding to each candidate point diffusion function to obtain a target point diffusion function corresponding to the field of view.
The obtaining of the weight coefficient corresponding to each candidate point spread function includes: acquiring a spectral response curve corresponding to the color channel, wherein the spectral response curve comprises spectral response data corresponding to each wavelength; and for each wavelength, determining a weight coefficient of a candidate point spread function corresponding to the wavelength according to the spectral response data corresponding to the wavelength, wherein the weight coefficient and the spectral response data are in positive correlation.
The method for performing weighted summation on each candidate point diffusion function according to the weight coefficient corresponding to each candidate point diffusion function to obtain the target point diffusion function corresponding to the field of view includes: carrying out weighted summation on each candidate point diffusion function according to each weight coefficient to obtain a summation point diffusion function; and sampling the summation point diffusion function according to a preset second sampling strategy to obtain a target point diffusion function corresponding to the field of view.
And b, the terminal carries out nonnegative matrix decomposition on the target point diffusion function corresponding to each field of view in the color channel to obtain the target recovery parameter of the color channel.
The target recovery parameters comprise a plurality of intrinsic point diffusion functions and space variation coefficients corresponding to the intrinsic point diffusion functions, the intrinsic point diffusion functions are used for representing data characteristics of the target point diffusion functions, and the space variation coefficients are used for representing the space variation characteristics of the target point diffusion functions.
And c, the terminal acquires an original image shot by the target shooting component.
The target shooting assembly comprises a phase plate, and the phase plate is used for modulating that the variation of a point spread function of the target shooting assembly under different depths of field is smaller than a preset variation threshold.
And d, the terminal acquires preset image restoration parameters, wherein the image restoration parameters comprise target restoration parameters of each color channel.
E, the terminal decomposes the original image into decomposed images corresponding to each color channel;
and f, the terminal conducts iterative deconvolution processing on the decomposed image corresponding to each color channel according to the target restoration parameters of the color channel to obtain the decomposed restoration image corresponding to the color channel.
Wherein, the K-th iteration deconvolution processing comprises the following steps: acquiring image restoration correction data according to the target restoration parameters, the exploded image and the intermediate image, and acquiring an output image corresponding to the K-th iterative deconvolution processing according to the intermediate image and the image restoration correction data; and under the condition that K is equal to 1, the intermediate image is a decomposition image, under the condition that K is larger than 1, the intermediate image is an output image corresponding to the last iteration deconvolution processing, and under the condition that K is equal to the preset iteration number, the output image corresponding to the Kth iteration deconvolution processing is a decomposition restoration image corresponding to the color channel.
The terminal obtains an output image corresponding to the K-th iteration deconvolution processing according to the intermediate image and the image restoration correction data, and the method comprises the following steps: acquiring regular term data according to the intermediate image and a preset regular term coefficient; and acquiring an output image corresponding to the K-th iterative deconvolution processing according to the intermediate image, the image restoration correction data and the regular term data.
And g, the terminal performs fusion processing on the decomposed restoration images corresponding to the color channels to obtain restoration images.
It should be understood that, although the steps in the above-described flowcharts are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in the above-described flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or the stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least a portion of the sub-steps or stages of other steps.
Fig. 9 is a block diagram showing the configuration of an image processing apparatus according to an embodiment. As shown in fig. 9, the apparatus includes:
a first obtaining module 100, configured to obtain an original image captured by a target capturing component;
a second obtaining module 200, configured to obtain preset image restoration parameters, where the image restoration parameters are determined according to point spread functions corresponding to each field of view of the target shooting assembly;
and the processing module 300 is configured to perform deconvolution processing on the original image based on the image restoration parameter to obtain a restored image.
Optionally, the point spread function corresponding to each of the fields of view includes a target point spread function of each color channel, and the image restoration parameters include target restoration parameters of each color channel, and the apparatus further includes:
a third obtaining module, configured to obtain, for each color channel, a target point spread function corresponding to each color channel and each field of view of the target shooting assembly;
and the fourth acquisition module is used for acquiring the target restoration parameters of the color channels according to the target point spread functions corresponding to the fields of view.
Optionally, the third obtaining module includes:
an acquisition unit configured to acquire, for each of the fields of view, a candidate point spread function of the field of view corresponding to a plurality of different wavelengths;
and the fusion unit is used for performing fusion processing on each candidate point diffusion function to obtain the target point diffusion function corresponding to the view field.
Optionally, the obtaining unit is specifically configured to obtain an initial point spread function of the field of view corresponding to each of the wavelengths; and sampling each initial point diffusion function according to a preset first sampling strategy to obtain each candidate point diffusion function.
Optionally, the fusion unit is specifically configured to obtain a weight coefficient corresponding to each candidate point spread function; and performing weighted summation on each candidate point diffusion function according to the weight coefficient corresponding to each candidate point diffusion function to obtain the target point diffusion function corresponding to the view field.
Optionally, the fusion unit is specifically configured to obtain a spectral response curve corresponding to the color channel, where the spectral response curve includes spectral response data corresponding to each of the wavelengths; for each wavelength, determining the weight coefficient of the candidate point spread function corresponding to the wavelength according to the spectral response data corresponding to the wavelength, and performing weighted summation on each candidate point spread function according to each weight coefficient to obtain a summation point spread function; and sampling the summing point diffusion function according to a preset second sampling strategy to obtain the target point diffusion function corresponding to the view field, wherein the weight coefficient and the spectral response data are in a positive correlation relationship.
Optionally, the fourth obtaining module is specifically configured to perform non-negative matrix decomposition on the target point spread function corresponding to each of the fields of view to obtain the target restoration parameter of the color channel;
the target restoration parameters comprise a plurality of intrinsic point diffusion functions and space variation coefficients corresponding to the intrinsic point diffusion functions, the intrinsic point diffusion functions are used for representing data characteristics of the target point diffusion functions, and the space variation coefficients are used for representing space variation characteristics of the target point diffusion functions.
Optionally, the image restoration parameters include target restoration parameters of respective color channels, and the processing module 300 includes:
the decomposition unit is used for decomposing the original image into decomposition images corresponding to the color channels;
a first processing unit, configured to perform deconvolution processing on the decomposed image corresponding to each color channel according to the target restoration parameter of the color channel, to obtain a decomposed restoration image corresponding to the color channel;
and a second processing unit, configured to perform fusion processing on the decomposed restored images corresponding to the color channels to obtain the restored image.
Optionally, the first processing unit is specifically configured to perform iterative deconvolution processing on the decomposed image according to the target restoration parameter of the color channel, so as to obtain the decomposed restoration image corresponding to the color channel; wherein, the K-th iteration deconvolution processing comprises the following steps: acquiring image restoration correction data according to the target restoration parameters, the decomposed image and the intermediate image, and acquiring an output image corresponding to the K-th iterative deconvolution processing according to the intermediate image and the image restoration correction data; and under the condition that K is equal to 1, the intermediate image is the decomposed image, under the condition that K is larger than 1, the intermediate image is an output image corresponding to the last iteration deconvolution processing, and under the condition that K is equal to the preset iteration number, the output image corresponding to the Kth iteration deconvolution processing is the decomposed restored image corresponding to the color channel.
Optionally, the first processing unit is specifically configured to obtain regular term data according to the intermediate image and a preset regular term coefficient; and acquiring an output image corresponding to the K-th iteration deconvolution processing according to the intermediate image, the image restoration correction data and the regular item data.
Optionally, the object capturing assembly includes a phase plate, and the phase plate is configured to modulate a variation of a point spread function of the object capturing assembly at different depths of field to be smaller than a preset variation threshold.
The division of the modules in the image processing apparatus is merely for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the electronic device, or can be stored in a memory in the electronic device in a software form, so that the processor can call and execute operations corresponding to the modules.
The implementation of each module in the image processing apparatus provided by the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. Program modules constituted by such computer programs may be stored on the memory of the electronic device. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
Fig. 10 is a schematic diagram of an internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and a wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units. The processor may be a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the above embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. The non-volatile memory may include a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), or a flash memory. Volatile memory can include RAM (Random Access Memory), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Synchronous Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory).
The above-mentioned embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An image processing method, comprising:
acquiring an original image shot by a target shooting assembly;
acquiring preset image restoration parameters, wherein the image restoration parameters are determined according to point spread functions corresponding to all fields of view of the target shooting assembly;
and performing deconvolution processing on the original image based on the image restoration parameters to obtain a restored image.
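The deconvolution in claim 1 can be illustrated, for the simplified case of a single spatially invariant PSF, by a frequency-domain Wiener filter; the function name and the `snr` noise constant below are illustrative assumptions, not part of the claimed method, which uses spatially varying restoration parameters.

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=1e-2):
    """Minimal Wiener deconvolution sketch: restore `image` given a
    single PSF. `snr` is a hypothetical noise-to-signal constant."""
    # Pad the PSF to the image size and centre it at the origin.
    psf_pad = np.zeros_like(image, dtype=float)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    # Wiener filter: H* . G / (|H|^2 + snr)
    F = np.conj(H) * G / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(F))
```

With a delta PSF and a negligible `snr`, the filter reduces to the identity, which gives a quick sanity check of the implementation.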
2. The method according to claim 1, wherein the point spread function corresponding to each of the fields of view comprises a target point spread function of each color channel, the image restoration parameters comprise target restoration parameters of each color channel, and the setting process of the image restoration parameters comprises:
for each color channel, acquiring the target point spread function corresponding to the color channel for each field of view of the target shooting assembly, and obtaining the target restoration parameters of the color channel according to the target point spread functions corresponding to the fields of view.
3. The method of claim 2, wherein said acquiring the target point spread function corresponding to the color channel for each field of view of the target shooting assembly comprises:
for each field of view, acquiring candidate point spread functions of the field of view corresponding to a plurality of different wavelengths;
and performing fusion processing on the candidate point spread functions to obtain the target point spread function corresponding to the field of view.
4. The method of claim 3, wherein obtaining candidate point spread functions for the field of view corresponding to a plurality of different wavelengths comprises:
acquiring an initial point spread function corresponding to each wavelength and the field of view;
and sampling each initial point spread function according to a preset first sampling strategy to obtain the candidate point spread functions.
5. The method according to claim 3, wherein the fusing the candidate point spread functions to obtain the target point spread function corresponding to the field of view comprises:
acquiring a weight coefficient corresponding to each candidate point spread function;
and performing weighted summation on the candidate point spread functions according to the weight coefficients to obtain the target point spread function corresponding to the field of view.
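The weighted summation of claim 5 can be sketched as below; normalising the weights and the fused result is an extra assumption made here so that the fused PSF keeps unit energy, and is not stated in the claim.

```python
import numpy as np

def fuse_psfs(candidate_psfs, weights):
    """Weighted sum of per-wavelength candidate PSFs into one target
    PSF for a field of view (sketch of claim 5)."""
    psfs = np.asarray(candidate_psfs, dtype=float)  # (n_wavelengths, h, w)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalise the weights
    fused = np.tensordot(w, psfs, axes=1)           # weighted summation
    return fused / fused.sum()                      # keep unit total energy
```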
6. The method of claim 5, wherein said acquiring the weight coefficient corresponding to each candidate point spread function comprises:
acquiring a spectral response curve corresponding to the color channel, wherein the spectral response curve comprises spectral response data corresponding to each wavelength;
for each wavelength, determining the weight coefficient of the candidate point spread function corresponding to the wavelength according to the spectral response data corresponding to the wavelength, wherein the weight coefficient is in positive correlation with the spectral response data.
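Claim 6 only requires the weight coefficient to be positively correlated with the spectral response data; the sketch below assumes the simplest such mapping, direct proportionality, and a `response` dictionary from wavelength to response value.

```python
def weights_from_spectral_response(response):
    """Derive PSF weight coefficients from a colour channel's spectral
    response curve (sketch of claim 6). `response` maps wavelength (nm)
    to spectral response data; proportional weighting is an assumption."""
    total = sum(response.values())
    return {wavelength: value / total for wavelength, value in response.items()}
```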
7. The method of claim 5, wherein said performing weighted summation on the candidate point spread functions according to the weight coefficients to obtain the target point spread function corresponding to the field of view comprises:
performing weighted summation on the candidate point spread functions according to the weight coefficients to obtain a summed point spread function;
and sampling the summed point spread function according to a preset second sampling strategy to obtain the target point spread function corresponding to the field of view.
8. The method according to claim 2, wherein said obtaining the target restoration parameters of the color channels according to the target point spread function corresponding to each of the fields of view comprises:
performing non-negative matrix factorization on the target point spread functions corresponding to the fields of view to obtain the target restoration parameters of the color channel;
wherein the target restoration parameters comprise a plurality of intrinsic point spread functions and spatial variation coefficients corresponding to the intrinsic point spread functions, the intrinsic point spread functions are used for representing data characteristics of the target point spread functions, and the spatial variation coefficients are used for representing spatial variation characteristics of the target point spread functions.
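A minimal sketch of the factorization in claim 8, using Lee–Seung multiplicative updates (one standard NMF algorithm, chosen here as an assumption): each row of V is one flattened per-field PSF; H holds the intrinsic (eigen) PSFs and W the per-field spatial variation coefficients.

```python
import numpy as np

def nmf_psfs(psf_stack, rank, iters=500, eps=1e-9):
    """Non-negative matrix factorization V ~ W @ H of the stacked
    field-of-view PSFs. W: (n_fields, rank) spatial-variation
    coefficients; H: (rank, n_pixels) intrinsic PSFs."""
    V = np.asarray(psf_stack, dtype=float)
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update intrinsic PSFs
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update coefficients
    return W, H
```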
9. The method of claim 1, wherein the image restoration parameters comprise target restoration parameters of each color channel, and said performing deconvolution processing on the original image based on the image restoration parameters to obtain a restored image comprises:
decomposing the original image into decomposed images corresponding to the color channels;
for the decomposed image corresponding to each color channel, performing deconvolution processing on the decomposed image according to the target restoration parameters of the color channel to obtain a decomposed restoration image corresponding to the color channel;
and performing fusion processing on the decomposed restoration images corresponding to the color channels to obtain the restored image.
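The split / per-channel deconvolution / merge flow of claim 9 can be sketched as follows; `deconvolve(plane, params)` stands in for the per-channel iterative routine of claim 10 and is a hypothetical callable, not an API from the patent.

```python
import numpy as np

def restore_rgb(image, channel_params, deconvolve):
    """Decompose an H x W x C image into colour-channel planes,
    deconvolve each plane with that channel's target restoration
    parameters, then fuse the restored planes back (sketch of claim 9)."""
    planes = [deconvolve(image[..., c], channel_params[c])
              for c in range(image.shape[-1])]
    return np.stack(planes, axis=-1)   # fuse restored channels
```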
10. The method according to claim 9, wherein said performing deconvolution processing on the decomposed image according to the target restoration parameters of the color channel to obtain the decomposed restoration image corresponding to the color channel comprises:
performing iterative deconvolution processing on the decomposed image according to the target restoration parameters of the color channel to obtain the decomposed restoration image corresponding to the color channel;
wherein the K-th iterative deconvolution processing comprises: acquiring image restoration correction data according to the target restoration parameters, the decomposed image, and an intermediate image, and acquiring an output image corresponding to the K-th iterative deconvolution processing according to the intermediate image and the image restoration correction data; when K is equal to 1, the intermediate image is the decomposed image; when K is greater than 1, the intermediate image is the output image of the previous iterative deconvolution processing; and when K is equal to a preset number of iterations, the output image of the K-th iterative deconvolution processing is the decomposed restoration image corresponding to the color channel.
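One concrete instance of this iterative scheme is Richardson–Lucy deconvolution, sketched below under the assumptions of a single spatially invariant PSF centred at the origin and circular boundary handling: each iteration computes a correction image from the PSF, the observed (decomposed) image, and the intermediate estimate, and multiplies the intermediate by it, with the first intermediate being the observed image itself.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iters=10, eps=1e-12):
    """Iterative deconvolution in the style of claim 10 (Richardson-Lucy
    variant). `observed` must be non-negative; `psf` is assumed to have
    its peak at index (0, 0) (origin-centred)."""
    def conv(a, k):
        # Circular convolution via FFT; k is zero-padded to a's shape.
        K = np.fft.fft2(k, s=a.shape)
        return np.real(np.fft.ifft2(np.fft.fft2(a) * K))

    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]                 # adjoint (flipped) kernel
    estimate = observed.astype(float)          # K = 1: intermediate = input
    for _ in range(n_iters):
        blurred = conv(estimate, psf)
        ratio = observed / (blurred + eps)     # restoration correction data
        estimate = estimate * conv(ratio, psf_flip)
    return estimate
```

With a delta PSF the update is a fixed point, so the output equals the input, which gives a simple correctness check.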
11. The method according to claim 10, wherein said acquiring an output image corresponding to the K-th iterative deconvolution processing according to the intermediate image and the image restoration correction data comprises:
acquiring regular term data according to the intermediate image and a preset regular term coefficient;
and acquiring the output image corresponding to the K-th iterative deconvolution processing according to the intermediate image, the image restoration correction data, and the regular term data.
12. The method of claim 1, wherein the target shooting assembly comprises a phase plate for modulating the variation of the point spread function of the target shooting assembly at different depths of field to be less than a preset variation threshold.
13. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an original image shot by a target shooting assembly;
the second acquisition module is used for acquiring preset image restoration parameters, and the image restoration parameters are determined according to point spread functions corresponding to all fields of view of the target shooting assembly;
and the processing module is used for carrying out deconvolution processing on the original image based on the image restoration parameters to obtain a restored image.
14. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the method according to any of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202110930303.3A 2021-08-13 2021-08-13 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN113674171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110930303.3A CN113674171A (en) 2021-08-13 2021-08-13 Image processing method, image processing device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113674171A 2021-11-19

Family

ID=78542669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110930303.3A Pending CN113674171A (en) 2021-08-13 2021-08-13 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113674171A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533617A (en) * 2019-08-30 2019-12-03 Oppo广东移动通信有限公司 Image processing method and device, storage medium
CN111062895A (en) * 2019-11-29 2020-04-24 宁波永新光学股份有限公司 Microscopic image restoration method based on multi-view-field segmentation


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114760449A (en) * 2022-03-31 2022-07-15 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal, and readable storage medium
CN114760449B (en) * 2022-03-31 2024-10-18 Oppo广东移动通信有限公司 Image processing method, image processing device, terminal and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination