CN111835968B - Image definition restoration method and device and image shooting method and device - Google Patents


Info

Publication number
CN111835968B
Authority
CN
China
Prior art keywords
image
target image
target
circle
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010471627.0A
Other languages
Chinese (zh)
Other versions
CN111835968A (en)
Inventor
唐金伟
梁钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN202010471627.0A priority Critical patent/CN111835968B/en
Publication of CN111835968A publication Critical patent/CN111835968A/en
Application granted granted Critical
Publication of CN111835968B publication Critical patent/CN111835968B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/672: Focus control based on electronic image sensor signals based on the phase difference signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

The invention provides an image sharpness restoration method and device and an image shooting method and device, relating to the technical field of image processing. The image sharpness restoration method comprises the following steps: acquiring depth information of each pixel point in a target image, wherein the target image is an image containing a blurred region; determining the circle of confusion diameter of each pixel point in the target image based on the depth information; and performing deblurring processing on the target image based on the circle of confusion diameter to obtain a sharp image corresponding to the target image. The method and the device can restore the sharpness of a blurred image and effectively improve the user's shooting experience.

Description

Image definition restoration method and device and image shooting method and device
Technical Field
The invention relates to the technical field of image processing, in particular to an image definition restoration method and device and an image shooting method and device.
Background
In the prior art, when an image acquisition device with a photographing function, such as a mobile phone, is used to take a picture, a failed focus may leave the captured image blurred, or focusing may blur the image background; this is especially likely when shooting a close-up at short range. These problems occur frequently on mobile terminals such as mobile phones: a blurred image is easily produced during shooting, and the user must focus repeatedly to obtain a sharp image. The process is cumbersome, and the user's shooting experience is poor.
Disclosure of Invention
In view of the above, the present invention provides an image sharpness restoration method and apparatus, and an image capturing method and apparatus, which can restore the sharpness of a blurred image and effectively improve the user's shooting experience.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image sharpness restoration method, including: acquiring depth information of each pixel point in a target image, wherein the target image is an image containing a blurred region; determining the circle of confusion diameter of each pixel point in the target image based on the depth information; and performing deblurring processing on the target image based on the circle of confusion diameter to obtain a sharp image corresponding to the target image.
Further, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of obtaining depth information of each pixel point in the target image includes: performing image interpolation processing on the target image to obtain phase difference information of each pixel point in the target image; performing left-right image separation on the target image based on the phase difference information to obtain a left phase image and a right phase image of the target image; and determining the depth information of each pixel point in the target image based on the left phase image and the right phase image.
Further, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of determining depth information of each pixel point in the target image based on the left phase image and the right phase image includes: performing stereo matching on the left phase image and the right phase image to obtain the parallax of each pixel point between the left phase image and the right phase image; obtaining depth information of each pixel point in the target image based on the parallax and a preset depth function formula; and the preset depth function formula is obtained by performing curve fitting on the parallax and the corresponding depth information of the image acquisition equipment for acquiring the target image.
Further, an embodiment of the present invention provides a third possible implementation manner of the first aspect, wherein the step of determining, based on the depth information, the circle of confusion diameter of each pixel point in the target image includes: for each pixel point in the target image, obtaining the circle of confusion diameter of the pixel point based on the depth information of the focus point of the target image, the depth information of the pixel point, and a preset circle of confusion diameter calculation formula.
Further, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the preset circle of confusion diameter is calculated as:
δ_pixel = f² · |u_A − u_B| / (F · u_B · (u_A − f) · ccdSize)
wherein δ_pixel is the circle of confusion diameter in units of pixels, u_A is the depth of the focus point of the target image, u_B is the depth of the pixel point in the target image, f is the lens focal length of the image acquisition device that acquired the target image, F is the aperture value, and ccdSize is the pixel size of the image acquisition device.
Further, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the step of performing deblurring processing on the target image based on the circle of confusion to obtain a clear image corresponding to the target image includes: obtaining a circle of confusion image according to the circle of confusion diameter of each pixel point in the target image; and based on the target image and the circle of confusion image, carrying out deblurring processing on the target image by using a least square method to obtain a clear image corresponding to the target image.
Further, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the step of obtaining, based on the target image and the circle of confusion image, a sharp image corresponding to the target image by using a least square method includes: constructing an objective function based on the target image and the circle of confusion image, where the objective function is X ⊗ c = b, ⊗ denotes convolution, b is the target image containing the blurred region, c is the circle of confusion image, and X is the sharp image corresponding to the target image; and solving for the sharp image corresponding to the target image in the objective function by using a least square method.
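As a concrete illustration of the least-squares step, the sketch below solves a Tikhonov-regularized version of X ⊗ c = b in the Fourier domain, assuming a single uniform disk-shaped circle of confusion rather than the per-pixel kernel the patent describes; the function names and the regularization weight are illustrative, not taken from the patent.

```python
import numpy as np

def disk_kernel(diameter):
    """Build a normalized disk (pillbox) kernel approximating a circle
    of confusion of the given diameter in pixels."""
    radius = max(diameter / 2.0, 0.5)
    size = int(2 * np.ceil(radius) + 1)
    yy, xx = np.mgrid[:size, :size] - size // 2
    kernel = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return kernel / kernel.sum()

def deblur_least_squares(blurred, kernel, lam=1e-2):
    """Recover X from b = X conv c by minimizing
    ||c conv X - b||^2 + lam * ||X||^2 in the Fourier domain."""
    h, w = blurred.shape
    kh, kw = kernel.shape
    # Embed the kernel in a full-size frame and center it at the origin
    # so that the circular convolution introduces no spatial shift.
    pad = np.zeros((h, w))
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

With a noiseless synthetic image blurred by the same kernel, the restored image is substantially closer to the original than the blurred input; real images would need a larger `lam` to suppress noise amplification.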
In a second aspect, an embodiment of the present invention further provides an image capturing method, where the method is applied to an image acquisition device, and the method includes: monitoring whether a blurred region exists in a target image acquired by the image acquisition device; and if so, performing image sharpness restoration on the target image by using the method of any one of the first aspect to obtain a sharp image corresponding to the target image.
In a third aspect, an embodiment of the present invention provides an image sharpness restoration apparatus, including: an information acquisition module, configured to acquire depth information of the target image, wherein the target image is an image containing a blurred region; a diameter determination module, configured to determine the circle of confusion diameter of each pixel point in the target image based on the image depth information; and a deblurring module, configured to perform deblurring processing on the target image based on the circle of confusion diameter to obtain a sharp image corresponding to the target image.
In a fourth aspect, an embodiment of the present invention provides an image capturing apparatus, where the apparatus is applied to an image acquisition device, and the apparatus includes: a monitoring module, configured to monitor whether a blurred region exists in a target image acquired by the image acquisition device; and an image processing module, configured to, when the monitoring result of the monitoring module is yes, perform image sharpness restoration on the target image by using the method of any one of the first aspect to obtain a sharp image corresponding to the target image.
In a fifth aspect, an embodiment of the present invention provides an image capturing apparatus, including: a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the first aspect or the method of the second aspect.
In a sixth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above first aspect or the steps of the method according to the second aspect.
The embodiment of the invention provides an image sharpness restoration method and device. The method acquires the depth information of each pixel point in the captured target image, calculates the circle of confusion diameter of each pixel point based on the depth information, and uses the circle of confusion diameters to restore the sharpness of the blurred region in the target image. The captured image can thus be deblurred automatically, a sharp image is obtained without the user focusing repeatedly, and the user's shooting experience is effectively improved.
The embodiment of the invention provides an image capturing method and device, which can monitor whether a blurred region exists in a target image acquired by the image acquisition device, and if so, perform image sharpness restoration on the target image by using the above image sharpness restoration method to obtain a sharp image corresponding to the target image. The method automatically performs the sharpness restoration operation on a target image containing a blurred region, so the user can obtain a sharp image without focusing repeatedly, which effectively improves the user's shooting experience.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the embodiments of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 shows a schematic structural diagram of an image capturing device according to an embodiment of the present invention;
fig. 2 shows a flowchart of an image sharpness restoration method provided in an embodiment of the present invention;
FIG. 3a is a schematic diagram of an image array provided by an embodiment of the present invention;
FIG. 3b illustrates an image exposure map provided by an embodiment of the present invention;
FIG. 3c is a diagram illustrating Y-channel phase difference information provided by an embodiment of the present invention;
FIG. 4a is a schematic diagram of another image array provided by an embodiment of the present invention;
FIG. 4b shows another image exposure provided by an embodiment of the present invention;
FIG. 4c is a graph of G-channel phase difference information provided by an embodiment of the present invention;
FIG. 5a is a schematic diagram of a Y-channel left phase image according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of a Y-channel right phase image according to an embodiment of the present invention;
FIG. 6a is a schematic diagram of a G-channel left phase image according to an embodiment of the present invention;
FIG. 6b is a schematic diagram of a G-channel right phase image according to an embodiment of the present invention;
fig. 7 shows an imaging schematic diagram of an image acquisition device provided by an embodiment of the invention;
FIG. 8 is a diagram illustrating an objective function construction process according to an embodiment of the present invention;
FIG. 9 is a flow chart of an image capture method provided by an embodiment of the invention;
fig. 10 is a schematic structural diagram illustrating an image sharpness reducing apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, not all, embodiments of the present invention.
At present, existing image acquisition devices such as mobile phones easily capture blurred images, and the user needs to focus multiple times to obtain a sharp image. To address this problem, embodiments of the present invention provide an image sharpness restoration method and apparatus, and an image capturing method and apparatus.
The first embodiment is as follows:
first, an example image capturing apparatus 100 for implementing an image sharpness restoration method and apparatus, an image capturing method and apparatus according to an embodiment of the present invention will be described with reference to fig. 1.
As shown in fig. 1, an image capture device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and configuration of the image capturing apparatus 100 shown in fig. 1 are exemplary only, and not limiting, and the image capturing apparatus may have other components and configurations as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). The processor 102 may be one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other forms of processing units having data processing capability and/or instruction execution capability, and may control other components in the image capturing device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processor 102 to implement client-side functionality (implemented by the processor) and/or other desired functionality in embodiments of the invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
Exemplary image capturing devices for implementing the image sharpness restoration method and apparatus, and the image capturing method and apparatus according to embodiments of the present invention may be implemented as smart terminals such as smart phones, tablet computers, and the like.
Example two:
the embodiment provides an image definition restoring method, which can be applied to an image acquisition device such as a mobile phone, and refer to a flowchart of the image definition restoring method shown in fig. 2, and the method mainly includes the following steps S202 to S206:
step S202, obtaining depth information of each pixel point in the target image.
The target image may be an image acquired by the image acquisition device that carries scene depth information, and the target image includes a focus point and a foreground or background. The focus point may be generated when the user focuses on the shooting target, for example when the image acquisition device receives a specified operation from the user and generates the focus point (generally a central point) according to the area the user specified; the specified operation may be a single-click or double-click on the device screen that selects the focus target. The depth information, also called image depth, generally represents the distance between the scene in the target image and the image acquisition device, and may be represented by pixel values in the range 0-255; for example, a point closer to the image acquisition device may have a smaller pixel value and appear darker. The depth information of each pixel point in the target image may be presented directly as a depth value per pixel point, or in the form of a depth map.
The target image may be an image captured by the image acquisition device during preview or shooting, that is, a blurred region exists in the target image. The blurred region may be produced by focusing during image preview or shooting. For example, when a close-range close-up is shot with the image acquisition device, natural blurring in the preview or shooting picture causes the captured image to be blurred; or, when the focus distance of the lens is too short, the parts of the captured image other than the focused subject blur naturally (that is, the focused target in the captured image is sharp while its background is blurred).
Step S204: determine the circle of confusion diameter of each pixel point in the target image based on the depth information.
When a point on the object side is imaged off the focal plane on the image side, it forms a diffused circular projection on the image plane, presenting a circle of confusion; when a point on the object side is imaged exactly in the focal plane, it appears as a point on the image plane. The circle of confusion diameter is related to image sharpness: the larger the diameter, the more blurred the image; when the diameter is 0, a point on the object side appears as a point on the image plane and the resulting image is sharp. When the image is captured, the circle of confusion diameter of each pixel point in the target image can be calculated from the depth information of each pixel point according to the imaging principle of the image acquisition device.
Step S206: perform deblurring processing on the target image based on the circle of confusion diameter to obtain a sharp image corresponding to the target image.
A circle of confusion image can be obtained from the circle of confusion diameter of each pixel point in the target image. In the sharp image corresponding to the target scene, the circle of confusion diameter of every pixel point is 0; where the diameter is greater than 0, the corresponding pixel region of the image is blurred, and the larger the diameter, the blurrier the region. Combining the sharp image with the circle of confusion image (e.g., convolving the sharp image with the circle of confusion image) yields a target image containing a blurred region.
According to the image sharpness restoration method provided by this embodiment, the depth information of each pixel point in the captured target image is acquired, the circle of confusion diameter of each pixel point is calculated based on the depth information, and the circle of confusion diameters are used to restore the sharpness of the blurred region in the target image. The captured image can thus be deblurred automatically, a sharp image is obtained without the user focusing repeatedly, and the user's shooting experience is effectively improved.
Current approaches to acquiring the depth information of image pixel points mainly use two cameras for triangulation, or use structured light or a TOF (time of flight) algorithm for three-dimensional reconstruction; the computation is complex and the hardware cost is high. To simplify the acquisition of depth information and save hardware cost, this embodiment provides an implementation for acquiring the depth information of each pixel point in the target image, which may be executed with reference to the following steps (1) to (3):
step (1): and performing image interpolation processing on the target image to obtain phase difference information of each pixel point in the target image.
The phase difference information of each pixel point in the target image can be presented directly in the form of a phase information map, or the phase information of each pixel can be output directly. The image interpolation processing may also be called pixel interpolation or pixel rearrangement, that is, converting the RGB arrangement of the raw format into phase information Y or phase information G.
In one embodiment, the target image may be converted from RGB image data into Y-channel data (a Y-channel phase difference information map), and phase difference estimation may be performed using the Y-channel data. For example, refer to the schematic diagram of an image array shown in fig. 3a (the image may be image data output by a sensor such as a Bayer full-PD sensor), where R, G and B in fig. 3a are the three primary colors: R represents Red, G represents Green, and B represents Blue, and the image resolution is 4 × 4. When performing interpolation processing to calculate the phase difference information of the image, an exposure map is first acquired (see the image exposure map shown in fig. 3b), and the phase difference information of the image is calculated from the exposure map. The Y-channel phase difference information is calculated as: YL = a × GL + w × BL + t × RL and YR = a′ × GR + w′ × BR + t′ × RR, where a, w, t, a′, w′ and t′ are amplification coefficients that may be set manually according to actual conditions; their values may be increased appropriately when the image scene is dark (for example, a may be set to 0.6, w to 0.1 and t to 0.3, and correspondingly a′ to 0.6, w′ to 0.1 and t′ to 0.3).
In another embodiment, the target image may be converted from RGB image data into G-channel data (a G-channel phase difference information map), and phase difference estimation may be performed using the G-channel data. For example, refer to the schematic diagram of the image array shown in fig. 4a, where R, G and B in fig. 4a are again the three primary colors and the image resolution is 4 × 4. When performing interpolation processing to calculate the phase difference information of the image, an exposure map is first acquired (see the image exposure map shown in fig. 4b), and the phase difference information is calculated from the exposure map. The G-channel phase difference information is calculated as: GL = d × GL and GR = d′ × GR, where d and d′ are amplification coefficients that may be set manually according to actual conditions, and their values may be increased appropriately when the image scene is dark. A G-channel phase difference information map may be obtained from the calculated phase difference information (see the G-channel phase difference information map shown in fig. 4c); the phase image resolution is 4 × 2.
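To make the interpolation step concrete, the sketch below combines per-channel left/right exposure planes into Y-channel phase images using the weighted sum described above (YL = a·GL + w·BL + t·RL); the function name and array layout are illustrative assumptions, since the exact sensor readout format depends on the figures.

```python
import numpy as np

def rgb_to_y_phase(rl, gl, bl, rr, gr, br, a=0.6, w=0.1, t=0.3):
    """Convert left/right R, G, B exposure planes into Y-channel phase
    images via the weighted sum YL = a*GL + w*BL + t*RL (and likewise
    for the right side). The weights are the 'amplification
    coefficients' from the text; larger values suit darker scenes."""
    yl = a * gl + w * bl + t * rl
    yr = a * gr + w * br + t * rr
    return yl, yr
```

The same pattern applies to the G-channel variant, where each side is simply scaled by a single coefficient (GL′ = d·GL, GR′ = d′·GR).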
Step (2): and performing left-right image separation on the target image based on the phase difference information to obtain a left phase image and a right phase image of the target image.
When the phase difference information of each pixel point in the target image is presented in the form of a Y-channel phase difference information image of the target image as a whole, performing left-right image separation on the Y-channel phase difference information image to obtain two images, namely a left phase image and a right phase image; when the phase difference information of each pixel point in the target image is presented in the form of a G channel phase difference information graph of the target image as a whole, the G channel phase difference information graph is subjected to left-right graph separation, and a left phase image and a right phase image can be obtained respectively. For example, left-right map separation is performed on the Y-channel phase difference information map shown in fig. 3c, so that a Y-channel left phase image map shown in fig. 5a and a Y-channel right phase image map shown in fig. 5b can be obtained; left-right graph separation is performed on the G-channel phase difference information graph shown in fig. 4c, so that a G-channel left phase image graph shown in fig. 6a and a G-channel right phase image graph shown in fig. 6b can be obtained.
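A minimal sketch of the separation step, assuming the left and right phase samples are interleaved column-by-column in the phase difference information map (the actual interleaving depends on the sensor layout shown in the figures):

```python
import numpy as np

def split_left_right(phase_map):
    """Split an interleaved phase difference map into left and right
    phase images by taking alternating columns; this halves the
    horizontal resolution, matching the 4x4 -> 4x2 example in the text."""
    phase_map = np.asarray(phase_map)
    return phase_map[:, 0::2], phase_map[:, 1::2]
```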
Step (3): determine the depth information of each pixel point of the target image based on the left phase image and the right phase image.
This embodiment presents a specific implementation of the step (3) as above, which can be performed with reference to the following steps 3a and 3 b:
step 3 a: and performing stereo matching on the left phase image and the right phase image to obtain the parallax of each pixel point between the left phase image and the right phase image. Specifically, the left phase image and the right phase image obtained by separating the target image are subjected to stereo Matching to obtain the parallax of each pixel point (the pixel may be a sub-pixel unit) in the target image, and stereo Matching algorithms such as SGBM (Semi-Global Block Matching), SGM (Semi-Global Block Matching), or stereo Matching based on deep learning may be used in stereo Matching.
Step 3b: obtain the depth information of each pixel point in the target image based on the parallax and a preset depth function formula, where the preset depth function formula is obtained by curve-fitting the parallax of the image acquisition device that acquired the target image against the corresponding depth information. Because the relation between depth and parallax is fixed for each image acquisition device, the correspondence between the device's parallax and the real depth of an image can be found by curve fitting. To obtain the depth function formula of an image acquisition device, the device can be aimed at a target object for image acquisition: shoot the target object from near to far, starting from the nearest position at which the lens can accurately focus on the target object (e.g., 10 cm from the target object); record the parallax of the focus point in each acquired image and the corresponding depth information (i.e., the actual distance between the image acquisition device and the target object); and move the device backwards at preset intervals until the parallax of the focus point in the acquired image is 0. Curve-fitting the recorded parallax and depth data of the focus points yields the correspondence between parallax and depth information for that device, and the function corresponding to the fitted curve is the preset depth function formula. (Parallax and depth information are inversely proportional; for example, the preset depth function formula may be Z = m/n, where Z is the depth information, m is a constant obtained by curve fitting, and n is the parallax.)
Normally, curve fitting of the relation between parallax and depth information can be performed for each image acquisition device before it leaves the factory, and the fitted curve can be burned into the memory of the image acquisition device; in actual use, the preset depth function formula can then be read directly from the memory, and the depth information of each pixel point in the target image is obtained from the parallax and the formula. Alternatively, the curve fitting of the relation between parallax and depth information can be performed in real time in the manner described above, the preset depth function formula obtained from the fitted curve, and the depth information of each pixel point in the target image obtained from the parallax and the formula.
In order to accurately calculate the diameter of the circle of confusion of each pixel point in the target image, the embodiment provides an implementation manner for determining the diameter of the circle of confusion of each pixel point in the target image based on the depth information, which can be specifically executed with reference to the following steps: and for each pixel point in the target image, obtaining the diameter of the dispersion circle of the pixel point based on the depth information of the focusing point of the target image, the depth information of the pixel point and a preset dispersion circle diameter calculation formula. The preset circle-of-confusion diameter formula can be derived according to the imaging principle of image acquisition equipment, the depth of the focus of the target image and the depth of each pixel point in the target image are respectively input into the preset circle-of-confusion diameter formula, and the circle-of-confusion diameter of each pixel point in the target image can be respectively calculated.
For easy understanding, refer to the imaging schematic diagram of the image acquisition device shown in FIG. 7. Point A in FIG. 7 is the focus point of the image acquisition device, and point B is any defocused background point. The depth u_A of the focus point A (i.e. the straight-line distance between point A and the optical center) and the depth u_B of each pixel point in the target image (i.e. the straight-line distance between point B and the optical center) are obtained. Point A is imaged on the CCD or film of the image acquisition device as point A1, with image distance v_A; point B is imaged on the CCD or film as point B1, a circle of confusion with diameter δ, with image distance v_B. The lens focal length of the image acquisition device is f, the aperture diameter is D, and the aperture value is F. According to the imaging principle, the following equations hold:

1/f = 1/u_A + 1/v_A,  1/f = 1/u_B + 1/v_B

According to the similar-triangle relation in the figure, it can be obtained that

δ/D = |v_A − v_B| / v_B

Combining the above equations yields

δ = f·D·|u_A − u_B| / (u_B·(u_A − f))

According to

F = f/D

it can further be obtained that

δ = f²·|u_A − u_B| / (F·u_B·(u_A − f))

The circle-of-confusion diameter δ above is expressed in mm. To convert it into a diameter δ_pixel expressed in pixels, the pixel size ccdSize of the image acquisition device is obtained, and the preset circle-of-confusion diameter formula becomes:

δ_pixel = f²·|u_A − u_B| / (F·u_B·(u_A − f)·ccdSize)

where δ_pixel is the diameter of the circle of confusion in pixels, u_A is the depth of the focus point of the target image, u_B is the depth of a pixel point in the target image, f is the lens focal length of the image acquisition device that acquires the target image, F is the aperture value, and ccdSize is the pixel size of the image acquisition device.
Taking any pixel point in the target image as point B, the depth of the focus point of the target image and the depth of point B are input into the preset circle-of-confusion diameter formula to obtain the circle-of-confusion diameter of point B. Each pixel point in the target image is taken as point B in turn, so that the circle-of-confusion diameter of each pixel point in the target image is obtained.
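Under the reconstructed formula δ_pixel = f²·|u_A − u_B| / (F·u_B·(u_A − f)·ccdSize), the per-pixel computation can be sketched as follows; the focal length, aperture value, and pixel size used are illustrative assumptions, not values from the patent:

```python
import numpy as np

def coc_diameter_pixels(u_a, u_b, f=4.0, F=2.0, ccd_size=0.0015):
    """Circle-of-confusion diameter in pixels for a point at depth u_b when
    the lens (focal length f, all lengths in mm) is focused at depth u_a,
    with aperture value F and pixel pitch ccd_size (mm/pixel). Accepts
    scalars or whole depth maps via numpy broadcasting."""
    u_a = np.asarray(u_a, dtype=np.float64)
    u_b = np.asarray(u_b, dtype=np.float64)
    delta_mm = f ** 2 * np.abs(u_a - u_b) / (F * u_b * (u_a - f))
    return delta_mm / ccd_size
```

Applying the function to a full depth map (with u_b the per-pixel depths) yields the circle-of-confusion image used in the next step; a pixel at the focus depth gets diameter 0.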
In order to obtain a sharp image corresponding to the target image, this embodiment provides an implementation manner of deblurring the target image based on the diameter of the circle of confusion to obtain a sharp image corresponding to the target image, and the following steps 1) to 2) may be specifically referred to:
step 1): and obtaining a circle-of-confusion image according to the circle-of-confusion diameter of each pixel point in the target image.
The circle-of-confusion diameter δ_pixel of each pixel point in the target image can be calculated based on the depth information of each pixel point, and the circle-of-confusion diameters δ_pixel of all pixel points are combined to obtain a circle-of-confusion image. When the focus point in the target image is clear, the circle-of-confusion diameter of the focus point is 0, while the circle-of-confusion diameters of the pixel points of the background image are larger.
Step 2): and based on the target image and the circle of confusion image, carrying out deblurring processing on the target image by using a least square method to obtain a clear image corresponding to the target image.
An objective function is constructed based on the target image and the circle-of-confusion image, and the sharp image corresponding to the target image is solved from the objective function by using the least square method. The objective function is X ⊗ c = b, where b is the target image with a blurred region, c is the circle-of-confusion image, and X is the sharp image corresponding to the target image. Referring to the schematic diagram of the objective-function construction process shown in FIG. 8, the shooting focus of the image acquisition device is on the tree trunk, so the circle-of-confusion diameter of the trunk is 0 and the target image is an image b with a blurred background; the blurred image b can be obtained from the sharp image X corresponding to the target image and the circle-of-confusion image c, that is, wherever the circle of confusion of the sharp image X is not 0, the corresponding region of b is blurred. The least square method is used to solve the sharp image corresponding to the target image in the objective function, that is, X is solved from

min_X ‖X ⊗ c − b‖² + λ·‖∇X‖²

wherein λ·‖∇X‖² is a gradient regularization term for suppressing noise in the image.
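For intuition, when the blur kernel is spatially uniform the minimisation above has a closed-form solution in the Fourier domain; a minimal sketch follows. The patent's circle-of-confusion map varies per pixel, so this uniform-kernel version only illustrates the least-squares-plus-gradient-regulariser formulation, and the regularisation weight lam is an assumed parameter:

```python
import numpy as np

def pad_to(k, h, w):
    """Embed a small kernel at the top-left of an h-by-w array."""
    out = np.zeros((h, w))
    kh, kw = k.shape
    out[:kh, :kw] = k
    return out

def deblur_least_squares(b, kernel, lam=1e-3):
    """Minimise ||kernel * X - b||^2 + lam * ||grad X||^2 for a spatially
    uniform kernel via the Fourier-domain normal equations."""
    h, w = b.shape
    K = np.fft.fft2(pad_to(kernel, h, w))
    # Forward-difference gradient operators, expressed as convolutions
    Dx = np.fft.fft2(pad_to(np.array([[1.0, -1.0]]), h, w))
    Dy = np.fft.fft2(pad_to(np.array([[1.0], [-1.0]]), h, w))
    B = np.fft.fft2(b)
    X = np.conj(K) * B / (np.abs(K) ** 2
                          + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2))
    return np.real(np.fft.ifft2(X))
```

Handling a per-pixel kernel, as the patent requires, turns the objective into a large sparse linear system that would typically be solved iteratively (e.g. by conjugate gradients) rather than in closed form.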
In the image sharpness restoration method provided by this embodiment, an image acquisition device with a phase-difference information collection function is used to output the phase-difference information of each pixel point of the target image, and depth estimation is performed on the phase-difference information to obtain the depth information of the scene captured in the target image. Because the depth information is obtained during image acquisition, the captured image and the depth information can be placed in one-to-one correspondence with high precision, which reduces calculation errors; moreover, the depth information of the image can be obtained without using multiple sensor devices, which saves material cost.
Example three:
the present embodiment provides an image capturing method, which can be applied to an image capturing device such as a mobile phone, and referring to a flowchart of the image capturing method shown in fig. 9, the method mainly includes the following steps S902 to S904:
and step S902, monitoring whether a fuzzy area exists in the target image acquired by the image acquisition equipment.
A fuzzy area (blurred region) refers to a region of pixel points whose circle-of-confusion diameter is larger than 0 in an image. When the lens of the image acquisition device forms an image, the depth-of-field problem easily causes the focused subject to be sharp while the background or foreground is blurred. Whether a blurred region exists in the target image can be monitored by acquiring the circle-of-confusion diameter of each pixel point in the target image acquired by the image acquisition device, and the circle-of-confusion diameter of each pixel point can be calculated from the depth information of each pixel point in the target image.
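A minimal sketch of this monitoring check, assuming the circle-of-confusion map of the frame has already been computed as described (the tolerance eps is an assumption added to absorb numerical noise):

```python
import numpy as np

def has_blur_region(coc_map, eps=1e-6):
    """True when any pixel's circle-of-confusion diameter exceeds zero
    (beyond a small tolerance), i.e. the frame contains a blurred region
    and sharpness restoration should be triggered."""
    return bool(np.any(np.asarray(coc_map) > eps))
```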
And step S904: if so, performing image sharpness restoration on the target image by using the above image sharpness restoration method to obtain a sharp image corresponding to the target image.
When it is detected that a blurred region exists in the target image acquired by the image acquisition device, that is, when pixel points with a circle-of-confusion diameter larger than 0 cause image blurring in the target image, the image sharpness restoration method provided by the second embodiment is used to deblur the target image based on the circle-of-confusion diameter of each pixel point, so as to obtain a sharp image corresponding to the target image. In this way, the image acquisition device can capture an image that is sharp over the whole frame (a full-depth-of-field image).
According to the image shooting method provided by this embodiment, the sharpness restoration operation can be performed automatically on a target image with a blurred region, so the user does not need to refocus repeatedly to obtain a sharp image, which effectively improves the user's shooting experience. That is, when a blurred region appears in a target image acquired by the image acquisition device, the method deblurs the acquired target image by using the image sharpness restoration method, which can solve problems that easily appear in existing image acquisition devices, such as a sharp subject with a blurred background or foreground, and improves the user's shooting experience.
Example four:
as to the image sharpness reducing method provided in the second embodiment, an embodiment of the present invention provides an image sharpness reducing apparatus, and referring to a schematic structural diagram of the image sharpness reducing apparatus shown in fig. 10, the apparatus includes the following modules:
the information acquisition module 11 is configured to acquire depth information of each pixel point in a target image; wherein, the target image is an image with a fuzzy area.
And the diameter determining module 12 is configured to determine a diameter of a diffusion circle of each pixel point in the target image based on the depth information.
And the blurring processing module 13 is configured to perform deblurring processing on the target image based on the diameter of the circle of confusion to obtain a clear image corresponding to the target image.
According to the image definition restoring device provided by the embodiment, the depth information of each pixel point in the shot target image is acquired, the diameter of the circle of confusion of each pixel point in the target image is calculated based on the depth information, the diameter of the circle of confusion of each pixel point is utilized to restore the definition of the fuzzy area in the target image, the shot image can be automatically deblurred, a user does not need to repeatedly focus for many times to obtain a clear image, and the shooting experience of the user is effectively improved.
In an embodiment, the information obtaining module 11 is further configured to perform image interpolation processing on the target image to obtain phase difference information of each pixel in the target image; performing left-right image separation on the target image based on the phase difference information to obtain a left phase image and a right phase image of the target image; and determining the depth information of each pixel point in the target image based on the left phase image and the right phase image.
In an embodiment, the information obtaining module 11 is further configured to perform stereo matching on the left phase image and the right phase image to obtain a parallax of each pixel point between the left phase image and the right phase image; obtaining depth information of each pixel point in the target image based on the parallax and a preset depth function formula; the preset depth function formula is obtained by performing curve fitting on the parallax and the corresponding depth information of the image acquisition equipment for acquiring the target image.
In an embodiment, the diameter determining module 12 is further configured to, for each pixel point in the target image, obtain a diameter of a circle of confusion of the pixel point based on the depth information of the focus of the target image, the depth information of the pixel point, and a preset diameter calculation formula of the circle of confusion.
In one embodiment, the predetermined circle of confusion diameter is calculated as:
δ_pixel = f²·|u_A − u_B| / (F·u_B·(u_A − f)·ccdSize)

wherein δ_pixel is the diameter of the circle of confusion in pixels, u_A is the depth of the focus point of the target image, u_B is the depth of a pixel point in the target image, f is the lens focal length of the image acquisition device that acquires the target image, F is the aperture value, and ccdSize is the pixel size of the image acquisition device.
In an embodiment, the blurring module 13 is further configured to obtain a circle of confusion image according to a diameter of a circle of confusion of each pixel point in the target image; and based on the target image and the circle of confusion image, carrying out deblurring processing on the target image by using a least square method to obtain a clear image corresponding to the target image.
In an embodiment, the blurring processing module 13 is further configured to construct an objective function based on the target image and the circle-of-confusion image, the objective function being X ⊗ c = b, where b is the target image with a blurred region, c is the circle-of-confusion image, and X is the sharp image corresponding to the target image; and to solve the sharp image corresponding to the target image from the objective function by using the least square method.
The image sharpness restoration apparatus provided in this embodiment outputs phase difference information of each pixel point in a target image by using an image acquisition device having a phase difference information collection function, and performs depth estimation according to the phase difference information to obtain depth information of a scene photographed in the target image, and the depth information can be obtained in an image acquisition process, so that the photographed image and the depth information can be in one-to-one correspondence with high accuracy, a calculation error is reduced, the depth information of the image can be obtained without using a plurality of sensor devices, and a material cost is saved.
The device provided by the embodiment has the same implementation principle and technical effect as the foregoing embodiment, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiment for the portion of the embodiment of the device that is not mentioned.
Example five:
corresponding to the image capturing method provided in the foregoing embodiment, referring to a schematic structural diagram of an image capturing apparatus shown in fig. 11, an embodiment of the present invention further provides an image capturing apparatus, including the following modules:
and the monitoring module 111 is used for monitoring whether a fuzzy area exists in the target image acquired by the image acquisition equipment.
And the image processing module 113 is configured to, when the monitoring result of the monitoring module is yes, perform image sharpness restoration on the target image by using the image sharpness restoration method, so as to obtain a sharp image corresponding to the target image.
According to the image shooting device provided by the embodiment, when the fuzzy area appears in the target image collected by the image collecting equipment, the collected target image is deblurred by using the image definition restoring method, so that the problems of clear main body and fuzzy background or foreground caused by the problem of depth of field in the convex lens imaging can be solved, and the shooting experience of a user is improved.
Example six:
the embodiment of the invention provides a computer-readable medium, wherein the computer-readable medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the image definition restoring method or the image shooting method described in the above embodiment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not described herein again.
The image sharpness restoration method and apparatus, and the image capturing method and apparatus provided in the embodiments of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementations may refer to the method embodiments and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. An image sharpness restoration method, comprising:
acquiring depth information of each pixel point in a target image; wherein the target image is an image with a fuzzy area;
determining the diameter of a diffusion circle of each pixel point in the target image based on the depth information;
deblurring processing is carried out on the target image based on the diameter of the diffusion circle, and a clear image corresponding to the target image is obtained;
the step of obtaining the depth information of each pixel point in the target image includes:
performing image interpolation processing on the target image to obtain phase difference information of each pixel point in the target image;
performing left-right image separation on the target image based on the phase difference information to obtain a left phase image and a right phase image of the target image;
and determining the depth information of each pixel point in the target image based on the left phase image and the right phase image.
2. The method of claim 1, wherein the step of determining depth information for each pixel point in the target image based on the left and right phase images comprises:
performing stereo matching on the left phase image and the right phase image to obtain the parallax of each pixel point between the left phase image and the right phase image;
obtaining depth information of each pixel point in the target image based on the parallax and a preset depth function formula; and the preset depth function formula is obtained by performing curve fitting on the parallax and the corresponding depth information of the image acquisition equipment for acquiring the target image.
3. The method of claim 1, wherein the step of determining the circle of confusion diameter for each pixel point in the target image based on the depth information comprises:
and for each pixel point in the target image, obtaining the diameter of the diffusion circle of the pixel point based on the depth information of the focusing point of the target image, the depth information of the pixel point and a preset diffusion circle diameter calculation formula.
4. The method of claim 3, wherein the predetermined circle of confusion diameter is calculated as:
δ_pixel = f²·|u_A − u_B| / (F·u_B·(u_A − f)·ccdSize)

wherein δ_pixel is the circle-of-confusion diameter in pixels, u_A is the depth of the focus point of the target image, u_B is the depth of a pixel point in the target image, f is the lens focal length of the image acquisition device that acquires the target image, F is the aperture value, and ccdSize is the pixel size of the image acquisition device.
5. The method according to claim 1, wherein the step of deblurring the target image based on the circle of confusion diameter to obtain a sharp image corresponding to the target image comprises:
obtaining a circle of confusion image according to the circle of confusion diameter of each pixel point in the target image;
and based on the target image and the circle of confusion image, carrying out deblurring processing on the target image by using a least square method to obtain a clear image corresponding to the target image.
6. The method according to claim 5, wherein the step of obtaining the sharp image corresponding to the target image by using a least square method based on the target image and the circle of confusion image comprises:
constructing an objective function based on the target image and the circle of confusion image; the objective function is X ⊗ c = b, wherein b is the target image with a blurred region, c is the circle of confusion image, and X is the sharp image corresponding to the target image;
and solving a clear image corresponding to the target image in the target function by using a least square method.
7. An image shooting method is applied to an image acquisition device, and comprises the following steps:
monitoring whether a fuzzy area exists in a target image acquired by the image acquisition equipment;
if so, performing image sharpness restoration on the target image by using the method of any one of claims 1 to 6 to obtain a sharp image corresponding to the target image.
8. An image sharpness restoration apparatus, comprising:
the information acquisition module is used for acquiring depth information of each pixel point in the target image; wherein the target image is an image with a fuzzy area;
the diameter determining module is used for determining the diameter of the diffusion circle of each pixel point in the target image based on the depth information;
the blurring processing module is used for performing deblurring processing on the target image based on the diameter of the diffusion circle to obtain a clear image corresponding to the target image;
the information acquisition module is further configured to: performing image interpolation processing on the target image to obtain phase difference information of each pixel point in the target image;
performing left-right image separation on the target image based on the phase difference information to obtain a left phase image and a right phase image of the target image;
and determining the depth information of each pixel point in the target image based on the left phase image and the right phase image.
9. An image shooting device, characterized in that the device is applied to an image acquisition device, and the device comprises:
the monitoring module is used for monitoring whether a fuzzy area exists in a target image acquired by the image acquisition equipment;
an image processing module, configured to, when the monitoring result of the monitoring module is yes, perform image sharpness restoration on the target image by using the method according to any one of claims 1 to 6, to obtain a sharp image corresponding to the target image.
10. An image acquisition apparatus, characterized by comprising: a processor and a storage device;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 6 or the method of claim 7.
11. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any of the preceding claims 1 to 6 or the steps of the method of claim 7.
CN202010471627.0A 2020-05-28 2020-05-28 Image definition restoration method and device and image shooting method and device Active CN111835968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010471627.0A CN111835968B (en) 2020-05-28 2020-05-28 Image definition restoration method and device and image shooting method and device

Publications (2)

Publication Number Publication Date
CN111835968A CN111835968A (en) 2020-10-27
CN111835968B true CN111835968B (en) 2022-02-08

Family

ID=72913726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010471627.0A Active CN111835968B (en) 2020-05-28 2020-05-28 Image definition restoration method and device and image shooting method and device

Country Status (1)

Country Link
CN (1) CN111835968B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104155470A (en) * 2014-07-15 2014-11-19 华南理工大学 Detecting method and system based on binocular camera for real-time vehicle speed
CN107592455A (en) * 2017-09-12 2018-01-16 北京小米移动软件有限公司 Shallow Deep Canvas imaging method, device and electronic equipment
CN107784631A (en) * 2016-08-24 2018-03-09 中安消物联传感(深圳)有限公司 Image deblurring method and device
JP6515423B2 (en) * 2016-12-28 2019-05-22 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd CONTROL DEVICE, MOBILE OBJECT, CONTROL METHOD, AND PROGRAM

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665494A (en) * 2017-03-27 2018-10-16 北京中科视维文化科技有限公司 Depth of field real-time rendering method based on quick guiding filtering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Depth recovery from a single natural-scene image; Cao Fengyun, Fang Shuai, Hu Yujuan et al.; Journal of Image and Graphics (《中国图像图形学报》); 2014-05-31; Vol. 19, No. 5; Section 1 *


Similar Documents

Publication Publication Date Title
CN108898567B (en) Image noise reduction method, device and system
US10997696B2 (en) Image processing method, apparatus and device
EP1924966B1 (en) Adaptive exposure control
WO2017016050A1 (en) Image preview method, apparatus and terminal
KR102229811B1 (en) Filming method and terminal for terminal
CN109474780B (en) Method and device for image processing
WO2017045558A1 (en) Depth-of-field adjustment method and apparatus, and terminal
JP5468404B2 (en) Imaging apparatus and imaging method, and image processing method for the imaging apparatus
CN107749944A (en) A kind of image pickup method and device
KR20090101239A (en) Image stabilization using multi-exposure pattern
CN110324532B (en) Image blurring method and device, storage medium and electronic equipment
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
JP6308748B2 (en) Image processing apparatus, imaging apparatus, and image processing method
JP2005309559A (en) Image processing method, device and program
JP5766077B2 (en) Image processing apparatus and image processing method for noise reduction
JP2011166588A (en) Imaging apparatus and imaging method, and image processing method for the imaging apparatus
WO2014002521A1 (en) Image processing device and image processing method
JP2015046678A (en) Image processing device, image processing method and imaging device
JP6624785B2 (en) Image processing method, image processing device, imaging device, program, and storage medium
CN111866369B (en) Image processing method and device
CN109672810B (en) Image processing apparatus, image processing method, and storage medium
CN111835968B (en) Image definition restoration method and device and image shooting method and device
CN114745502A (en) Shooting method and device, electronic equipment and storage medium
CN112866554B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN109582811B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant