CN114445315A - Image quality enhancement method and electronic device - Google Patents
- Publication number
- CN114445315A (application number CN202210111810.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- images
- original
- aligned
- quality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T5/00: Image enhancement or restoration)
- G06T2207/20221: Image fusion; image merging (G06T2207/00: Indexing scheme for image analysis or image enhancement; G06T2207/20: Special algorithmic details; G06T2207/20212: Image combination)
Abstract
The application discloses an image quality enhancement method and an electronic device, belonging to the field of image processing. The method comprises: controlling a zoom lens module to change its focal length and capturing images at different focal lengths to obtain a plurality of original images, where the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping; performing image alignment processing on the plurality of original images to obtain a plurality of aligned images; and fusing the sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
Description
Technical Field
The application belongs to the field of image processing, and particularly relates to an image quality enhancement method and electronic equipment.
Background
When a user shoots a photo with a terminal, the optical characteristics of the lens limit the depth-of-field space in front of and behind the focus plane. A sharply focused image is formed only within this limited depth-of-field space; regions outside it appear blurred, so the sharply focused area of the captured image is small.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image quality enhancement method and an electronic device, which can address the problem that the sharply focused region of a captured image is small.
In a first aspect, an embodiment of the present application provides an image quality enhancement method, where the method includes:
controlling a zoom lens module to change its focal length and capturing images at different focal lengths to obtain a plurality of original images, where the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping;
performing image alignment processing on the plurality of original images to obtain a plurality of aligned images; and
fusing the sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
In a second aspect, an embodiment of the present application provides an image quality enhancing apparatus, including:
an acquisition unit, configured to control a zoom lens module to change its focal length and capture images at different focal lengths to obtain a plurality of original images, where the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping;
an alignment unit, configured to perform image alignment processing on the plurality of original images to obtain a plurality of aligned images; and
a fusion unit, configured to fuse the sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, the present application further provides a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, the zoom lens module is controlled to change its focal length and images are captured at different focal lengths, so that a plurality of original images with complementary or partially overlapping depths of field are obtained. Image alignment processing is performed on the plurality of original images to obtain a plurality of aligned images, and the sharp regions of the aligned images are fused into a quality-enhanced image. In this way, the sharp regions of multiple images captured at different focal lengths are combined into a single image with a larger sharply focused area, which addresses the problem that the sharply focused region of a captured image is small and improves image quality.
Drawings
Fig. 1 is a schematic flowchart of an image quality enhancement method provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a zoom lens module;
FIG. 3 is a schematic interface diagram of an image quality enhancement method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an image quality enhancement method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a plurality of original images obtained by an image quality enhancement method according to an embodiment of the present application;
FIG. 6 is a schematic representation of an affine transformation;
fig. 7 is a schematic diagram of a quality enhanced image of an image quality enhancement method provided by an embodiment of the present application;
fig. 8 is a block diagram illustrating an image quality enhancing apparatus according to an embodiment of the present disclosure;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 10 is a block diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. The terms "first", "second", etc. are generally used in a generic sense and do not limit the number of objects; for example, a first object can be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image quality enhancement method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an image quality enhancement method provided by an embodiment of the present application may include the following steps:
Step 101, controlling a zoom lens module to change its focal length and capturing images at different focal lengths to obtain a plurality of original images.
The depths of field of the plurality of original images captured at different focal lengths are complementary or partially overlapping. The depth of field (DOF) corresponding to a focal length is the space within which a photographed object is imaged with a blur no larger than the permissible circle of confusion. Because the depth of field corresponding to each focal length has a certain spatial range, when the zoom lens module is controlled to change the focal length, the depths of field of the individual focal lengths can be made complementary, or adjacent depths of field can be allowed to overlap only partially.
Complementary depths of field means that the individual depth-of-field ranges do not coincide with each other; when all of them are combined, a larger overall depth-of-field range is obtained, so that objects within this larger range, or even the whole range, are imaged sharply in the final image. Of course, the depth-of-field ranges corresponding to two adjacent focal lengths may partially overlap. Here, "adjacent" refers to two neighbouring focal lengths among the different focal lengths used.
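To make the complementary-depth-of-field idea concrete, the sketch below (not part of the patent; it uses the standard thin-lens depth-of-field approximation, and the lens parameters and focus distances are chosen purely for illustration) computes the near and far sharpness limits for three hypothetical focus settings and checks that the resulting ranges join into one continuous depth interval:

```python
# Illustrative sketch only: thin-lens depth-of-field limits for several
# hypothetical focus settings, used to check that the depth-of-field ranges
# are complementary (each range starts before the previous one ends).

def dof_limits(f_mm, n_stop, coc_mm, focus_mm):
    """Near/far depth-of-field limits (mm) from the standard DOF approximation."""
    hyperfocal = f_mm * f_mm / (n_stop * coc_mm) + f_mm
    near = focus_mm * (hyperfocal - f_mm) / (hyperfocal + focus_mm - 2 * f_mm)
    if focus_mm >= hyperfocal:
        far = float("inf")                      # focused at/beyond hyperfocal
    else:
        far = focus_mm * (hyperfocal - f_mm) / (hyperfocal - focus_mm)
    return near, far

# Hypothetical parameters: 8 mm lens at f/2, 0.02 mm circle of confusion,
# three focus distances chosen so that the ranges tile.
focus_settings = [400.0, 700.0, 1500.0]
ranges = [dof_limits(8.0, 2.0, 0.02, s) for s in focus_settings]
for s, (near, far) in zip(focus_settings, ranges):
    print(f"focus {s:6.0f} mm -> sharp from {near:7.1f} mm to {far:9.1f} mm")

# Adjacent ranges overlap only partially, so their union is one continuous
# depth interval: the "complementary or partially overlapping" case.
assert all(ranges[i + 1][0] <= ranges[i][1] for i in range(len(ranges) - 1))
```

The three focus distances are tuned so each range begins before the previous one ends; with gaps between ranges, some depths would stay blurred in every shot.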
Referring to fig. 5, the three images from left to right are captured at three different focal lengths. As can be seen from the images, their depth-of-field ranges are, from left to right, the far, middle and near ranges, and the three ranges are complementary or only partially overlapping.
The terminal capturing the images can control the focal length; specifically, it can control the lens, the optical sensor, or both to move so as to capture images at different focal lengths. In particular, the terminal can change the focal length by controlling the zoom lens module and capture images at the different focal lengths, thereby obtaining a plurality of original images. The zoom lens module is a lens module with a variable focal length; it may include a lens and an optical sensor, and the plane on which the optical sensor lies is the imaging plane.
Optionally, the lens may include a plurality of lens elements whose optical centers lie on a straight line; the example shown in fig. 2 includes five such elements. Optionally, at least some of the lens elements may be movable, and the optical sensor may be movable as well. The focal length may be changed in hardware, by controlling the movement of the lens and/or the optical sensor, or in software, by changing the effective magnification through digital processing.
When the zoom lens module is controlled to change the focal length, there are two cases: one in which the image distance remains unchanged, and one in which it changes.
For the case of a constant image distance, the relative positions of the lens and the optical sensor can be controlled so that the focal length changes while the image distance stays constant. Because the image distance must not change, the lens and the optical sensor have to move in coordination. Specifically, the positions of the lens and the optical sensor at which the image distance is constant while the focal length changes may be calculated from the optical imaging formula, and the lens and the optical sensor are then moved to the calculated positions.
For the case of a change in image distance, the lens can be controlled to move relative to the optical sensor to change the focal length. Since the optical sensor does not move in coordination with the lens, the image distance must change. Alternatively, in another embodiment, the optical sensor may be controlled to move relative to the lens when the image distance changes, and similarly, the image distance may also change because the lens does not move in coordination with the optical sensor. This embodiment is similar to the case where the lens moves but the optical sensor does not move, and will not be described in detail.
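As a hedged illustration of the "optical imaging formula" calculation mentioned above (the patent does not spell out the formula; this sketch assumes the Gaussian thin-lens equation 1/f = 1/u + 1/v with object distance u and image distance v, and the numeric values are invented), one can compute which object plane is in focus for each focal length while the image distance stays fixed:

```python
# Illustrative sketch: with the image distance v held fixed, each focal
# length f focuses a different object plane u, per 1/f = 1/u + 1/v.

def object_distance(f_mm, v_mm):
    """Object distance u that is in focus at image distance v for focal length f."""
    if v_mm <= f_mm:
        raise ValueError("image distance must exceed the focal length")
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

V = 10.0                                   # fixed image distance (mm), assumed
for f in (8.0, 8.5, 9.0):                  # three focal lengths, assumed
    u = object_distance(f, V)
    print(f"f = {f} mm -> in-focus object plane at u = {u:.1f} mm")
```

Increasing the focal length at fixed image distance pushes the in-focus plane farther away, which is why stepping through focal lengths sweeps the depth-of-field space through the scene.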
After the focal length is changed by moving the lens or the optical sensor, the depth-of-field space at each focal length also changes. As shown in fig. 4, the three imaging optical paths from top to bottom show that the focal length changes after the lens is moved. As can be seen from the imaging optical paths at the three different (image-side) focal lengths F1, F2 and F3 in fig. 4, a change of focal length also changes the depth of field: the three depth-of-field spaces differ in size, and the center of each depth-of-field space (i.e., the object-side focal point) also shifts. The depth-of-field spaces at the three focal lengths are ΔL1, ΔL2 and ΔL3, and as fig. 4 shows, the three spaces are complementary and can be combined into a larger depth-of-field space ΔL1 + ΔL2 + ΔL3.
Illustratively, based on the different focal lengths shown in fig. 4, the three original images shown in fig. 5 can be obtained. The photographed scene contains three paper boxes at different distances. In each image, because the boxes lie at different distances from the lens, the box that falls inside the depth-of-field space of the focal length used for that shot appears sharp, while the other boxes appear blurred, and the farther a box is from that depth-of-field space, the more blurred it is.
It should be noted that the lens movement may be a physical structure movement, or a movement implemented by a digital processing technology, that is, a software implementation, rather than a hardware implementation. The embodiments of the present application do not limit this.
In one example, the embodiment of the present application may be implemented on a mobile phone. Before step 101 is executed, the user may open the camera application and tap the "more" option at the lower right corner; as shown in fig. 3, a "full depth of field" icon is displayed. If the user taps it, the "full depth of field" mode is selected and the interface switches back to shooting. The user then keeps the phone steady while it automatically adjusts the zoom lens module and captures a plurality of original images at different focal lengths, after which the following steps are carried out until a fused quality-enhanced image is obtained.
Step 102, performing image alignment processing on the plurality of original images to obtain a plurality of aligned images.
Image alignment processing, also called image registration, matches feature points extracted from different images to obtain the positional correspondence of the feature points across the images, and then uses this correspondence to align the images.
In one example, when performing image alignment processing on a plurality of original images to obtain a plurality of aligned images, the method may include the following steps:
step 1021, feature point extraction is performed on each original image, and feature points in each original image are obtained.
The image feature points may be extracted with the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, and the like, to obtain feature points of the corresponding type.
Step 1022, taking one of the plurality of original images as a reference image, feature point matching is performed between the reference image and each of the other original images to obtain the positional correspondence of the feature points of the other images relative to the reference image.
The feature point matching may use a matching algorithm corresponding to the feature point extraction algorithm, and the matching result is the correspondence of the same feature points across different images. One of the original images is taken as the reference, and the other images are matched against it.
Step 1023, performing affine transformation on the other images according to the positional correspondence of their feature points relative to the reference image, so as to align the other images with the reference image and obtain a plurality of aligned images.
The principle of affine transformation is shown in fig. 6. An affine transformation deforms an image: the deformation matrix of each image (except the reference image) can be determined from the positional correspondence of the feature points, the position of each pixel after deformation is then computed from this matrix, and a new image aligned with the reference image is obtained. A specific affine transformation algorithm from the related art may be used and is not described here again.
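The alignment steps above can be sketched as follows. This is an illustrative NumPy example, not the patent's implementation: real correspondences would come from SIFT/SURF matching (steps 1021 and 1022), so the matched point pairs here are synthetic, and the 2x3 affine deformation matrix is estimated by ordinary least squares:

```python
import numpy as np

# Sketch of steps 1022-1023: estimate the affine matrix from matched
# feature-point pairs, then warp points with it. Correspondences are
# synthetic stand-ins for real SIFT/SURF matches.

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine A such that dst ~= A @ [x, y, 1]^T."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2, solves X @ M ~= dst
    return M.T                                     # 2 x 3

def apply_affine(A, pts):
    pts = np.asarray(pts, dtype=float)
    return pts @ A[:, :2].T + A[:, 2]

# The "other image" is the reference rotated slightly and shifted.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 100, size=(20, 2))            # reference keypoints
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
other = ref @ R.T + np.array([3.0, -2.0])          # matched keypoints

A = estimate_affine(other, ref)                    # maps "other" onto reference
aligned = apply_affine(A, other)
print("max alignment error:", np.abs(aligned - ref).max())
```

In practice the estimated matrix would be applied to every pixel of the non-reference image (e.g. by inverse warping with interpolation) to produce the aligned image.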
Step 103, fusing the sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
The plurality of images may be fused with image fusion algorithms from the related art, for example the Karhunen-Loève (KL, i.e. principal component analysis) transform fusion algorithm, the high-pass filter fusion algorithm, the Discrete Wavelet Transform (DWT) fusion algorithm, and the like.
Specifically, when step 103 is executed to fuse the sharp regions of the multiple aligned images to obtain the quality-enhanced image, the method includes the following steps:
step 1031, extracting a clear region in each alignment image;
step 1032, the clear areas in each aligned image are combined into one image, and a quality enhanced image is obtained.
An optional discrete wavelet transform fusion algorithm is used as an example to illustrate one specific implementation of step 103. When step 1031 is executed to extract the sharp region in each aligned image, a discrete wavelet transform may be applied to each aligned image to decompose it into multiple decomposition layers, yielding the coefficients of each aligned image in each layer; in each decomposition layer, the coefficient with the highest sharpness among the coefficients of the aligned images is then selected as the fusion coefficient of that layer.
Correspondingly, step 1032 synthesizes the sharp regions of the aligned images into one image to obtain the quality-enhanced image; specifically, an inverse discrete wavelet transform is performed on the fusion coefficients selected for the decomposition layers. This implementation may also be called discrete wavelet transform focus measure (DWTFM): its main principle is to use the coefficients after the discrete wavelet transform as a focus measure, pick the best-focused content, and then synthesize a sharp image.
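A minimal sketch of this DWTFM-style fusion, assuming a single-level Haar transform implemented directly in NumPy (the patent does not fix the wavelet, the number of levels, or the sharpness criterion; the per-location focus measure used here, the sum of detail-coefficient magnitudes, is likewise an assumption):

```python
import numpy as np

def haar2(img):
    """Single-level 2D Haar transform: approximation + 3 detail subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def fuse(images):
    """At each location, take all subband coefficients from the image whose
    detail coefficients are largest there (i.e. the locally sharpest image)."""
    bands = [haar2(im) for im in images]
    activity = np.stack([np.abs(lh) + np.abs(hl) + np.abs(hh)
                         for _, lh, hl, hh in bands])
    pick = activity.argmax(axis=0)                 # index of sharpest image
    fused = [np.take_along_axis(np.stack([bnd[k] for bnd in bands]),
                                pick[None], axis=0)[0]
             for k in range(4)]
    return ihaar2(*fused)

# Toy demo: a checkerboard scene; each "shot" is sharp on one half and
# defocused (replaced by its mean) on the other. Fusion recovers the scene.
scene = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
left_sharp = scene.copy();  left_sharp[:, 4:] = 0.5
right_sharp = scene.copy(); right_sharp[:, :4] = 0.5
fused = fuse([left_sharp, right_sharp])
print("max reconstruction error:", np.abs(fused - scene).max())
```

A production implementation would use a multi-level transform (e.g. via a wavelet library) and a smoother selection rule, but the select-by-detail-energy idea is the same.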
In an alternative embodiment, because the affine transformation may deform the original images, an aligned image after affine transformation is generally not a regular rectangle, and irregular border regions may remain in the fused image, as shown on the left of fig. 7. These irregular regions can then be cropped out while the central main scene is retained, yielding the quality-enhanced image shown on the right of fig. 7.
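One possible way to implement this cropping is sketched below, under assumptions the patent does not spell out: each warped image is assumed to carry a boolean validity mask marking where real pixels landed, and the border of the fused image is trimmed greedily until the remaining window is fully covered by every image:

```python
import numpy as np

def crop_to_valid(img, masks):
    """Trim border rows/columns until every remaining pixel is covered by all
    masks; greedily peels the edge containing the most uncovered pixels."""
    valid = np.logical_and.reduce(masks)
    top, bottom, left, right = 0, valid.shape[0], 0, valid.shape[1]
    while top < bottom and left < right:
        win = valid[top:bottom, left:right]
        if win.all():
            break
        bad = [(~win[0]).sum(), (~win[-1]).sum(),
               (~win[:, 0]).sum(), (~win[:, -1]).sum()]
        edge = int(np.argmax(bad))
        if edge == 0:   top += 1
        elif edge == 1: bottom -= 1
        elif edge == 2: left += 1
        else:           right -= 1
    return img[top:bottom, left:right]

# Synthetic footprints standing in for two warped images: one shifted
# down-left, one up-right; their common coverage is an inner 8x8 window.
fused_img = np.arange(100.0).reshape(10, 10)
m1 = np.zeros((10, 10), dtype=bool); m1[1:, :9] = True
m2 = np.zeros((10, 10), dtype=bool); m2[:9, 1:] = True
cropped = crop_to_valid(fused_img, [m1, m2])
print(cropped.shape)   # (8, 8)
```

For parallelogram-shaped warp footprints this greedy peel gives a reasonable inner window; finding the exact largest inscribed rectangle is a harder problem that the patent does not require.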
In the embodiments of the application, the zoom lens module is controlled to change its focal length and images are captured at different focal lengths, so that a plurality of original images with complementary or partially overlapping depths of field are obtained. Image alignment processing is performed on the plurality of original images to obtain a plurality of aligned images, and the sharp regions of the aligned images are fused into a quality-enhanced image. In this way, the sharp regions of multiple images captured at different focal lengths are combined into a single image with a larger sharply focused area, which addresses the problem that the sharply focused region of a captured image is small and improves image quality.
Optionally, the image quality enhancement method provided in the embodiment of the present application may also perform multi-focal-length image acquisition on the same spatial region, and fuse a plurality of images with different focal lengths into an image with higher resolution to enhance image details.
It should be noted that, in the image quality enhancement method provided in the embodiment of the present application, the execution subject may be an image quality enhancement apparatus, or a control module in the image quality enhancement apparatus for executing the image quality enhancement method. The embodiment of the present application takes an example in which an image quality enhancement apparatus executes an image quality enhancement method, and describes an image quality enhancement apparatus provided in the embodiment of the present application.
As shown in fig. 8, the image quality enhancement apparatus provided by the embodiment of the present application includes an acquisition unit 11, an alignment unit 12, and a fusion unit 13.
The acquiring unit 11 is configured to control the zoom lens module to change its focal length and capture images at different focal lengths to obtain a plurality of original images, where the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping;
the alignment unit 12 is configured to perform image alignment processing on the plurality of original images to obtain a plurality of aligned images; and
the fusion unit 13 is configured to fuse the sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
Optionally, the zoom lens module includes a lens and an optical sensor, and a control subunit can be configured to control the relative positions of the lens and the optical sensor so as to change the focal length while keeping the image distance unchanged; or it can be configured to control the lens to move relative to the optical sensor so that both the focal length and the image distance change.
Alternatively, the alignment unit 12 may include:
the first extraction subunit is used for extracting the feature points of each original image to obtain the feature points in each original image;
the matching subunit is configured to take one of the plurality of original images as a reference image and perform feature point matching between the reference image and each of the other original images, obtaining the positional correspondence of the feature points of the other images relative to the reference image;
and the affine transformation subunit is configured to perform affine transformation on the other images according to this positional correspondence, so as to align them with the reference image and obtain a plurality of aligned images.
Alternatively, the fusion unit 13 may include:
a second extraction subunit, configured to extract a clear region in each of the aligned images;
and the synthesizing subunit is used for synthesizing the clear area in each aligned image into one image to obtain the quality-enhanced image.
Optionally, the second extraction subunit may include:
a wavelet transform subunit, configured to perform discrete wavelet transform on each aligned image to decompose each aligned image into multiple decomposition layers, so as to obtain a coefficient of each aligned image in each decomposition layer;
the selection subunit is used for selecting the coefficient with the highest definition from the coefficients of the multiple aligned images in each decomposition layer to serve as the fusion coefficient of the corresponding decomposition layer;
the synthesis subunit may comprise:
and the inverse transformation subunit is used for executing the inverse discrete wavelet transformation based on the fusion coefficients selected by the plurality of decomposition layers to obtain the quality enhanced image.
In the embodiments of the application, the zoom lens module is controlled to change its focal length and images are captured at different focal lengths, so that a plurality of original images with complementary or partially overlapping depths of field are obtained. Image alignment processing is performed on the plurality of original images to obtain a plurality of aligned images, and the sharp regions of the aligned images are fused into a quality-enhanced image. In this way, the sharp regions of multiple images captured at different focal lengths are combined into a single image with a larger sharply focused area, which addresses the problem that the sharply focused region of a captured image is small and improves image quality.
The image quality enhancement device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile or non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the like; the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like. The embodiments of the present application are not particularly limited.
The image quality enhancing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The image quality enhancement device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 7, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and executable on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the above-mentioned embodiment of the image quality enhancement method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system. The structure shown in fig. 10 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; details are not repeated here.
Wherein the processor 1010 is configured to perform the following steps:
controlling a zoom lens module to change its focal length and capturing images at different focal lengths to obtain a plurality of original images, where the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping;
performing image alignment processing on the plurality of original images to obtain a plurality of aligned images; and
fusing the sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
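The depth-of-field condition above can be illustrated with the standard thin-lens depth-of-field approximation. The following sketch is not from the patent itself; the focal lengths, f-number, and circle-of-confusion values are illustrative assumptions, chosen only to show how two focal settings can yield overlapping in-focus ranges.

```python
# Sketch (illustrative values, not from the patent): near/far limits of
# acceptable sharpness under the standard thin-lens depth-of-field model.

def dof_limits(f_mm, n, c_mm, u_mm):
    """Near/far limits of acceptable sharpness for focal length f_mm,
    f-number n, circle of confusion c_mm, and focus distance u_mm."""
    h = f_mm * f_mm / (n * c_mm) + f_mm          # hyperfocal distance
    near = u_mm * (h - f_mm) / (h + u_mm - 2.0 * f_mm)
    far = u_mm * (h - f_mm) / (h - u_mm) if u_mm < h else float("inf")
    return near, far

# Two captures at different focal lengths, same focus distance (1 m):
a = dof_limits(f_mm=20.0, n=1.8, c_mm=0.005, u_mm=1000.0)
b = dof_limits(f_mm=35.0, n=1.8, c_mm=0.005, u_mm=1000.0)
print(a, b)  # two in-focus ranges that partially overlap around 1 m
```

Under these assumed parameters, the shorter focal length produces the wider in-focus range, so the two captures cover complementary or partially overlapping depth regions, matching the condition stated above.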
Optionally, the zoom lens module includes a lens and an optical sensor, and when controlling the zoom lens module to change the focal length, the processor 1010 may perform the following steps:
controlling the relative position of the lens and the optical sensor to change, so as to change the focal length while keeping the image distance unchanged;
or,
controlling the lens to move relative to the optical sensor, so as to change both the focal length and the image distance.
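A minimal sketch of the Gaussian thin-lens relation 1/f = 1/v + 1/u that underlies both control modes above (f: focal length, v: image distance, u: object distance). The numeric values are illustrative assumptions, not figures from the patent.

```python
# Sketch of the thin-lens relation 1/f = 1/v + 1/u (illustrative values).

def image_distance(f, u):
    """Image distance v at which an object at distance u is in focus (u > f)."""
    return 1.0 / (1.0 / f - 1.0 / u)

def focused_object_distance(f, v):
    """Object distance u brought into focus for focal length f and image distance v (v > f)."""
    return 1.0 / (1.0 / f - 1.0 / v)

# First control mode: image distance held fixed at 27 mm while the focal
# length changes, so a different object plane comes into focus.
print(focused_object_distance(26.0, 27.0))   # ~702 mm
print(focused_object_distance(25.5, 27.0))   # a nearer plane
```

This shows why the first mode refocuses without moving the image plane: with v fixed, each focal length f selects a different sharply imaged object distance u.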
Optionally, when performing the image alignment processing on the plurality of original images to obtain the plurality of aligned images, the processor 1010 may perform the following steps:
performing feature point extraction on each original image to obtain the feature points in each original image;
taking one of the plurality of original images as a reference image, and matching the feature points of the other images (the original images other than the reference image) against those of the reference image to obtain the positional correspondence between the feature points of the other images and those of the reference image; and
performing an affine transformation on the other images according to this positional correspondence, so as to align the other images with the reference image and obtain the plurality of aligned images.
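The alignment step above can be sketched as a least-squares fit of a 2x3 affine transform to matched feature-point pairs. In practice the keypoints and matches would come from a detector/matcher such as ORB or SIFT (e.g., via OpenCV); here the matches are synthetic so the sketch stays self-contained, and it is a generic illustration rather than the patent's specific implementation.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine A such that dst ≈ A @ [x, y, 1]^T
    for each matched pair (src[i], dst[i]); src, dst have shape (N, 2)."""
    n = src.shape[0]
    x = np.hstack([src, np.ones((n, 1))])          # homogeneous source points (N, 3)
    sol, *_ = np.linalg.lstsq(x, dst, rcond=None)  # solves x @ sol ≈ dst, sol is (3, 2)
    return sol.T                                   # (2, 3) affine matrix

# Synthetic matches: points related by a known rotation + translation.
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 100.0, size=(8, 2))
theta = 0.1
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ r.T + np.array([5.0, -3.0])

a = estimate_affine(src, dst)
warped = np.hstack([src, np.ones((8, 1))]) @ a.T   # "align" src onto dst
print(np.allclose(warped, dst))  # exact fit for noise-free matches
```

With real detector output the matches are noisy and contain outliers, so a robust estimator (e.g., RANSAC) would typically wrap this least-squares core.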
Optionally, when fusing the sharp regions of the plurality of aligned images to obtain the quality-enhanced image, the processor 1010 may perform the following steps:
extracting the sharp region in each aligned image; and
synthesizing the sharp regions of all the aligned images into one image to obtain the quality-enhanced image.
Optionally, when extracting the sharp region in each aligned image, the processor 1010 may perform the following steps:
performing a discrete wavelet transform on each aligned image to decompose it into a plurality of decomposition layers, obtaining the coefficients of each aligned image at each decomposition layer; and
selecting, from the coefficients of the plurality of aligned images at each decomposition layer, the coefficient with the highest sharpness as the fusion coefficient of that decomposition layer.
Correspondingly, synthesizing the sharp regions of the aligned images into one image to obtain the quality-enhanced image includes:
performing an inverse discrete wavelet transform based on the fusion coefficients selected for the plurality of decomposition layers to obtain the quality-enhanced image.
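The wavelet fusion described above can be sketched with a single-level 2-D Haar transform, used here as a minimal stand-in for the discrete wavelet transform; the maximum-magnitude rule for detail coefficients is one common sharpness criterion, and averaging the approximation band is a simplifying assumption, not the patent's stated method.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def fuse(img1, img2):
    """Keep, per position, the detail coefficient with the larger magnitude
    (a simple sharpness proxy) and average the approximation band, then
    invert the transform to obtain the fused image."""
    s1, s2 = haar2d(img1), haar2d(img2)
    ll = (s1[0] + s2[0]) / 2
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(s1[1:], s2[1:])]
    return ihaar2d(ll, *details)
```

A production version, as the description suggests, would use several decomposition levels and a local sharpness measure over a neighborhood of coefficients rather than single-coefficient magnitude; libraries such as PyWavelets provide the multi-level transforms.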
In the embodiments of the present application, the focal length is changed by controlling the zoom lens module, and images are captured at different focal lengths, yielding a plurality of original images whose depths of field are complementary or partially overlapping. Image alignment processing is performed on the plurality of original images to obtain a plurality of aligned images, and the sharp regions of the plurality of aligned images are fused to obtain a quality-enhanced image. Because the sharp regions of multiple images captured at different focal lengths are fused into a single image, the resulting quality-enhanced image has a larger in-focus area; this solves the problem that the sharply focused region of a single captured image is small, and thereby improves image quality.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the GPU 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 1007 includes a touch panel 10071, also referred to as a touch screen, and other input devices 10072. The touch panel 10071 may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instructions that, when executed by a processor, implement each process of the above embodiment of the image quality enhancement method and achieve the same technical effect; to avoid repetition, the details are not repeated here.
The processor is the processor of the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip including a processor and a communication interface coupled to the processor, where the processor is configured to execute a program or instructions to implement each process of the above embodiment of the image quality enhancement method and achieve the same technical effect; to avoid repetition, the details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order illustrated or discussed; depending on the functionality involved, functions may be performed substantially simultaneously or in reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments may be implemented by software plus a necessary general-purpose hardware platform, or, of course, by hardware alone; in many cases, however, the former is the better implementation. Based on this understanding, the technical solution of the present application may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), which includes instructions for causing a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described above with reference to the accompanying drawings, the invention is not limited to the precise embodiments described, which are illustrative rather than restrictive; various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An image quality enhancement method, comprising:
controlling a zoom lens module to change its focal length and capturing images at different focal lengths to obtain a plurality of original images, wherein the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping;
performing image alignment processing on the plurality of original images to obtain a plurality of aligned images; and
fusing sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
2. The method of claim 1, wherein the zoom lens module comprises a lens and an optical sensor, and wherein controlling the zoom lens module to change the focal length comprises:
controlling a relative position of the lens and the optical sensor to change, so as to change the focal length while keeping an image distance unchanged;
or,
controlling the lens to move relative to the optical sensor, so as to change both the focal length and the image distance.
3. The method according to claim 1 or 2, wherein the performing image alignment processing on the plurality of original images to obtain a plurality of aligned images comprises:
performing feature point extraction on each original image to obtain feature points in each original image;
taking one of the plurality of original images as a reference image, and matching the feature points of the other images (the original images other than the reference image) against those of the reference image to obtain a positional correspondence between the feature points of the other images and those of the reference image; and
performing an affine transformation on the other images according to the positional correspondence, so as to align the other images with the reference image and obtain the plurality of aligned images.
4. The method according to claim 1 or 2, wherein the fusing sharp regions of the plurality of aligned images to obtain a quality-enhanced image comprises:
extracting a sharp region in each of the aligned images; and
synthesizing the sharp regions of the aligned images into one image to obtain the quality-enhanced image.
5. The method according to claim 4, wherein the extracting a sharp region in each of the aligned images comprises:
performing a discrete wavelet transform on each of the aligned images to decompose each aligned image into a plurality of decomposition layers, obtaining coefficients of each aligned image at each decomposition layer; and
selecting, from the coefficients of the plurality of aligned images at each decomposition layer, the coefficient with the highest sharpness as a fusion coefficient of that decomposition layer;
and wherein the synthesizing the sharp regions of the aligned images into one image to obtain the quality-enhanced image comprises:
performing an inverse discrete wavelet transform based on the fusion coefficients selected for the plurality of decomposition layers to obtain the quality-enhanced image.
6. An image quality enhancement apparatus, comprising:
an acquisition unit, configured to control a zoom lens module to change its focal length and capture images at different focal lengths to obtain a plurality of original images, wherein the different focal lengths make the depths of field of the plurality of original images complementary or partially overlapping;
an alignment unit, configured to perform image alignment processing on the plurality of original images to obtain a plurality of aligned images; and
a fusion unit, configured to fuse sharp regions of the plurality of aligned images to obtain a quality-enhanced image.
7. The apparatus according to claim 6, wherein the zoom lens module comprises a lens and an optical sensor, and the acquisition unit comprises a control subunit configured to control a relative position of the lens and the optical sensor to change, so as to change the focal length while keeping an image distance unchanged; or configured to control the lens to move relative to the optical sensor, so as to change both the focal length and the image distance.
8. The apparatus according to claim 6 or 7, wherein the alignment unit comprises:
a first extraction subunit, configured to perform feature point extraction on each original image to obtain feature points in each original image;
a matching subunit, configured to take one of the plurality of original images as a reference image and match the feature points of the other images (the original images other than the reference image) against those of the reference image, so as to obtain a positional correspondence between the feature points of the other images and those of the reference image; and
an affine transformation subunit, configured to perform an affine transformation on the other images according to the positional correspondence, so as to align the other images with the reference image and obtain the plurality of aligned images.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image quality enhancement method according to any one of claims 1 to 5.
10. A readable storage medium, having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image quality enhancement method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210111810.9A CN114445315A (en) | 2022-01-29 | 2022-01-29 | Image quality enhancement method and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210111810.9A CN114445315A (en) | 2022-01-29 | 2022-01-29 | Image quality enhancement method and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114445315A true CN114445315A (en) | 2022-05-06 |
Family
ID=81371859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210111810.9A Pending CN114445315A (en) | 2022-01-29 | 2022-01-29 | Image quality enhancement method and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114445315A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114866699A (en) * | 2022-05-23 | 2022-08-05 | Oppo广东移动通信有限公司 | Image processing method and device, computer readable storage medium and electronic device |
CN115567783A (en) * | 2022-08-29 | 2023-01-03 | 荣耀终端有限公司 | Image processing method |
CN115567783B (en) * | 2022-08-29 | 2023-10-24 | 荣耀终端有限公司 | Image processing method |
CN116128782A (en) * | 2023-04-19 | 2023-05-16 | 苏州苏映视图像软件科技有限公司 | Image generation method, device, equipment and storage medium |
CN116630220A (en) * | 2023-07-25 | 2023-08-22 | 江苏美克医学技术有限公司 | Fluorescent image depth-of-field fusion imaging method, device and storage medium |
CN116630220B (en) * | 2023-07-25 | 2023-11-21 | 江苏美克医学技术有限公司 | Fluorescent image depth-of-field fusion imaging method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10311649B2 (en) | Systems and method for performing depth based image editing | |
CN114445315A (en) | Image quality enhancement method and electronic device | |
CA2919253C (en) | Method and apparatus for generating an all-in-focus image | |
CN114245905A (en) | Depth aware photo editing | |
CN109691080B (en) | Image shooting method and device and terminal | |
CN104463817A (en) | Image processing method and device | |
CN112637500B (en) | Image processing method and device | |
CN110766706A (en) | Image fusion method and device, terminal equipment and storage medium | |
WO2022161260A1 (en) | Focusing method and apparatus, electronic device, and medium | |
CN113141450A (en) | Shooting method, shooting device, electronic equipment and medium | |
CN114390201A (en) | Focusing method and device thereof | |
CN114025100B (en) | Shooting method, shooting device, electronic equipment and readable storage medium | |
CN112672058B (en) | Shooting method and device | |
CN113873160B (en) | Image processing method, device, electronic equipment and computer storage medium | |
CN115134532A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN112383708B (en) | Shooting method and device, electronic equipment and readable storage medium | |
CN112738399B (en) | Image processing method and device and electronic equipment | |
CN114390206A (en) | Shooting method and device and electronic equipment | |
CN114241127A (en) | Panoramic image generation method and device, electronic equipment and medium | |
CN113012085A (en) | Image processing method and device | |
CN112653841A (en) | Shooting method and device and electronic equipment | |
CN112911148B (en) | Image processing method and device and electronic equipment | |
CN114143442B (en) | Image blurring method, computer device, and computer-readable storage medium | |
CN112333388B (en) | Image display method and device and electronic equipment | |
CN116957926A (en) | Image amplification method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||