CN107979715B - Image pickup apparatus - Google Patents


Info

Publication number
CN107979715B
CN107979715B
Authority
CN
China
Prior art keywords
image
unit
correction data
correction
imaging device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710318575.1A
Other languages
Chinese (zh)
Other versions
CN107979715A (en)
Inventor
酒本章人
萩原泰文
安藤洋史
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang OFilm Optoelectronics Technology Co Ltd
Original Assignee
Nanchang OFilm Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang OFilm Optoelectronics Technology Co Ltd filed Critical Nanchang OFilm Optoelectronics Technology Co Ltd
Publication of CN107979715A publication Critical patent/CN107979715A/en
Application granted granted Critical
Publication of CN107979715B publication Critical patent/CN107979715B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Abstract

The invention provides an imaging device that can efficiently correct both the image quality degradation caused by the optical characteristics of the lens in the imaging device and the degradation caused by the image enlargement processing used for zooming, and that can obtain a natural, high-resolution image even after digital zooming. The imaging device has an optical lens (1) and a solid-state imaging element (2). It also has a correction data holding unit (6) that stores correction data calculated, for each region of the image output from the solid-state imaging element (2) and divided into a plurality of regions, from both the point spread function of the optical lens (1) in that region and the defocus function that arises when the image is enlarged by digital zooming. An image correction calculation unit (7) extracts the correction data from the correction data holding unit (6) for each captured image and uses it to perform calculation processing that corrects the image degradation described by the point spread function and the defocus function.

Description

Image pickup apparatus
Technical Field
The present invention relates to an imaging device having an optical lens and a solid-state imaging element.
Background
A typical digital camera includes an imaging device having an optical lens and a solid-state imaging element, which converts the image of the captured subject into an electric signal and outputs it as image data. Such imaging devices are used not only in digital cameras but also as camera modules incorporated in mobile devices such as smartphones and tablet computers. In the image data captured by these imaging devices, light from a point light source cannot be converged to a single point, mainly because of optical aberration, and the resulting optical blur degrades the image quality. The function that represents the spread of light corresponding to this blur is the point spread function (PSF). The PSF varies with the position of the region in the image (the distance from the image center, that is, the image height).
Blur due to optical aberration is conventionally handled, for example, as follows. An edge sharpening filter process (a Laplacian filter or the like) is generally applied to an image degraded by optical characteristics such as lens aberration. However, if the edge sharpening is excessive, unwanted overshoot or undershoot tends to appear around edges. To obtain a high-quality image including the peripheral portion of the frame, correction based on the PSF is necessary. For example, a technique has been proposed in which the PSF is measured for each camera module and used as correction data in arithmetic processing, thereby correcting the variation between regions of the image and the individual differences between camera modules (see, for example, Patent Document 1).
Patent Document 1: Japanese Patent Application Laid-Open No. 2010-177918
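For illustration only, the sketch below (Python/NumPy, not part of the patent) shows the conventional Laplacian-type edge sharpening referred to above; the function name and the strength parameter are assumptions, and setting the strength too high produces exactly the overshoot and undershoot around edges described in the text.

    import numpy as np
    from scipy.ndimage import convolve

    def laplacian_sharpen(y, strength=1.0):
        # Conventional edge sharpening: subtract a scaled Laplacian of the
        # luminance from the luminance itself.
        lap_kernel = np.array([[0, 1, 0],
                               [1, -4, 1],
                               [0, 1, 0]], dtype=float)
        laplacian = convolve(y.astype(float), lap_kernel, mode='nearest')
        return y.astype(float) - strength * laplacian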
On the other hand, the zoom function has recently become increasingly important, particularly in camera modules for mobile devices such as smartphones. An optical zoom mechanism requires a lens moving mechanism in the lens unit, which enlarges the module, raises its cost, and may make it vulnerable to drop impact. For these reasons, mobile devices place importance on image enlargement by image processing, the so-called digital zoom. A typical digital zoom performs interpolation by nearest neighbor, linear, or cubic convolution interpolation, or processing such as edge sharpening filtering; but edge sharpening, as described above, can produce overshoot or undershoot near edges and yield an unnatural image. Moreover, linear or cubic convolution interpolation fundamentally cannot recover the high-frequency components lost at sampling, so processing to restore those components is required in addition to the interpolation itself. An imaging device that relies heavily on digital zoom therefore has to run the processing for image degradation due to optical characteristics separately from the processing for degradation due to enlargement, which increases the amount of restoration processing. Furthermore, when the magnification is changed with digital zoom during movie shooting at a high frame rate, either the arithmetic processing device of the imaging device must have high processing capability or the changes of frame rate and digital zoom magnification must be restricted.
Disclosure of Invention
The present invention has been made in view of the above circumstances, and an object thereof is to provide an imaging apparatus capable of efficiently correcting image quality degradation due to optical characteristics of lenses in the imaging apparatus and image quality degradation due to image enlargement processing at the time of zooming, and obtaining a natural high-resolution image even in an image subjected to digital zooming.
In order to achieve the above object, an imaging device according to the present invention includes an optical lens and a solid-state imaging element, and includes: a storage unit that stores correction data calculated based on both a point spread function obtained from the optical lens in each region of an image that is output from the solid-state imaging element and divided into a plurality of regions, and an out-of-focus function when the image is enlarged by digital zooming; and an arithmetic processing unit that extracts the correction data from the storage unit for each of the captured images, and performs arithmetic processing for correcting degradation of the image based on the point spread function and the defocus function for each of the captured images using the extracted correction data.
According to the above configuration, in the imaging apparatus that performs digital zooming, both degradation of a captured image due to optical characteristics (optical aberration and the like) of the optical lens and degradation of a captured image due to image enlargement processing by digital zooming can be efficiently corrected using integrated correction data.
That is, it is possible to efficiently correct a digitally zoomed image using correction data corresponding to both a point spread function relating to degradation of an image due to optical characteristics of an optical lens and a defocus function relating to degradation of an image due to magnification processing.
In the configuration of the present invention, it is preferable that the correction data is obtained by transforming the point spread function and the defocus function into the frequency domain by Fourier transform, multiplying the two transformed functions together, taking the reciprocal of the product, applying an inverse Fourier transform to the reciprocal, converting the result into a deconvolution filter in real space, and storing it in the storage unit.
According to the above configuration, it is possible to efficiently correct a digitally zoomed image using correction data obtained by combining a point spread function corresponding to image degradation due to optical characteristics of an optical lens and a defocus function corresponding to image degradation due to magnification processing.
In the configuration of the present invention, it is preferable that the imaging device further includes an edge detection unit that detects the edge intensity in the captured image,
and that the arithmetic processing unit adjusts the intensity of the correction performed by the arithmetic processing according to the edge intensity detected by the edge detection unit at each position of the image.
With the above configuration, when correction based on the point spread function and the defocus function is performed, regions of the image where the edge intensity is low, that is, where the light intensity varies little with position and is nearly uniform, can be prevented from being corrected as strongly as regions with high edge intensity, which would otherwise increase noise. The correction strength can be varied, for example, by not correcting where the edge intensity is below a predetermined value and correcting where it is above that value. Alternatively, an upper limit and a lower limit may be set for the edge intensity: no correction is applied at or below the lower limit, full correction is applied at or above the upper limit, and a reduced correction strength is applied between the two limits. In other words, the edge intensity may be divided into several levels and an appropriate correction strength assigned to each level.
In the above configuration of the present invention, it is preferable that the imaging device further includes: a plurality of imaging units, each having the lens and the solid-state imaging element, with different focal distances; and a switching unit that switches the output of the images from the plurality of imaging units according to the zoom magnification at the time of shooting,
the storage unit stores the correction data corresponding to each imaging unit,
and the arithmetic processing unit performs arithmetic processing for correcting the image output from each imaging unit using the correction data corresponding to that imaging unit.
With the above configuration, providing a plurality of imaging units with different focal distances and switching the image output among them makes it possible to change the magnification in the manner of interchangeable lenses. By combining this switching with digital zooming, performance approaching that of an optical zoom can be obtained, and the images can still be corrected efficiently because each image is corrected using the correction data corresponding to its own imaging unit.
Advantageous Effects of Invention
According to the present invention, the degradation of resolution in each area of an image and the blur produced by the image enlargement processing of a zoom operation can be corrected efficiently using the point spread function, one of the optical characteristics of the lens used in the imaging device, so that a high-quality image can be obtained more easily.
Drawings
Fig. 1 is a block diagram showing an imaging apparatus according to embodiment 1 of the present invention.
Fig. 2 is a block diagram showing an image correction arithmetic unit of the imaging device according to embodiment 1.
Fig. 3A is a diagram showing an ideal image (A) of a part of the resolution test board according to embodiment 1.
Fig. 3B is a diagram showing a degraded image (B) obtained by imaging a local area of the resolution test board according to embodiment 1.
Fig. 3C is a graph comparing the luminance changes of image (A) and image (B) in embodiment 1.
Fig. 3D is a diagram showing the values obtained by differentiating the luminance change of image (B) with respect to image (A) in embodiment 1.
Fig. 4A is a diagram showing an ideal image (A) of a part of the resolution test board according to embodiment 1.
Fig. 4B is a diagram showing a degraded image (B) according to embodiment 1, obtained by imaging and enlarging a local area of the resolution test board.
Fig. 4C is a graph comparing the luminance changes of image (A) and image (B) in embodiment 1.
Fig. 4D is a diagram showing the values obtained by differentiating the luminance change of image (B) with respect to image (A) in embodiment 1.
Fig. 5 is a block diagram showing an image pickup apparatus according to embodiment 2 of the present invention.
Fig. 6A is a diagram showing an image captured by one lens.
Fig. 6B is a diagram showing a restored image of the image of fig. 6A.
Fig. 6C is a diagram showing an image captured by another lens.
Fig. 6D is a diagram showing a restored image of the image of fig. 6C.
Detailed Description
Hereinafter, embodiments of the present invention will be described.
(embodiment 1)
First, embodiment 1 of the present invention is explained.
As shown in fig. 1, the imaging device according to embodiment 1 of the present invention includes: an optical lens 1 that forms an image of light from the subject; a solid-state imaging element 2 that converts the light into signal charges when the image of the subject is formed on its photosensitive surface by the optical lens 1, generates image data, and outputs an image signal representing the image data; a demosaic unit 3 that performs demosaic processing (interpolation processing) on the image signal output from the solid-state imaging element 2; an enlargement processing unit 4 that enlarges the demosaiced image data for zooming; a YC separation unit 5 that converts the RGB signals representing the image data into a luminance signal and color difference signals; and a correction data holding unit 6, a storage circuit (memory) that holds correction data for restoring the image data.
Further, the imaging apparatus includes: an image correction calculation unit 7 that performs a calculation for correcting the luminance signal using the correction data stored in the correction data holding unit 6; a timing adjustment unit 8 for matching the timing of the color difference signal with the corrected luminance signal; a noise reduction unit 9 that reduces unnecessary noise generated by the processing in the image correction calculation unit 7; and a synthesizing unit 10 for synthesizing the corrected luminance signal and the delayed color difference signal and outputting the synthesized signal.
The image correction calculation unit 7 includes: an image data holding unit 11 as a storage unit for holding N × N image data necessary for image correction calculation; a deconvolution operation unit 12 that performs a deconvolution filter process; an edge detection unit 13 that detects an edge intensity of the image data; a luminance difference calculation unit 14 that calculates a luminance difference in the N × N image data; an addition unit 15 that calculates a pixel value from the output values of the deconvolution unit 12 and the edge detection unit 13; and a color matrix adjustment unit 16 for performing color adjustment.
The optical lens 1 may be a single lens or a lens group composed of a plurality of lenses. The solid-state imaging element 2 may be an element using a CCD, a CMOS, or the like, for example. The solid-state imaging element 2 is provided with a color filter, generates a signal charge corresponding to each color of the color filter, and obtains image data by digitizing the value of the signal charge by an analog/digital conversion circuit built in the solid-state imaging element.
The demosaic unit 3 performs demosaic processing on the image data output from the solid-state imaging element 2, calculating (interpolating) the pixel values of the colors other than the color of each pixel's filter from the surrounding pixel data: for example, the blue and green values are calculated for a pixel whose color filter is red, the blue and red values for a pixel whose color filter is green, and the red and green values for a pixel whose color filter is blue.
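Purely as an illustration of the interpolation just described, the following sketch performs a simple bilinear demosaic of an RGGB Bayer mosaic; the RGGB layout, the interpolation kernels, and the function name are assumptions, since the patent does not prescribe any particular demosaic method.

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic_rggb(raw):
        # Fill in the missing colors of each pixel from neighboring pixels of
        # the same color (bilinear interpolation), assuming an RGGB layout.
        h, w = raw.shape
        rows, cols = np.indices((h, w))
        r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)
        g_mask = ((rows + cols) % 2 == 1).astype(float)
        b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
        r = convolve(raw * r_mask, k_rb, mode='mirror')
        g = convolve(raw * g_mask, k_g, mode='mirror')
        b = convolve(raw * b_mask, k_rb, mode='mirror')
        return np.stack([r, g, b], axis=-1)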
The enlargement processing unit 4 enlarges the image data subjected to the demosaic processing by the demosaic unit 3 at an arbitrary magnification in accordance with a zoom command a by a user operation or the like. At this time, the number of pixels is increased by interpolation. In addition, when the zoom command a is not given, that is, when the enlargement is not instructed, the enlargement processing unit 4 outputs the inputted image as it is, for example.
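For reference, a digital zoom by interpolation of the kind performed by the enlargement processing unit 4 could look like the sketch below, which crops the center of the frame and resamples it back to the full frame size with cubic interpolation; the center-crop interpretation and the choice of cubic interpolation are assumptions, since the text only requires enlargement at an arbitrary magnification with the pixel count increased by interpolation.

    import numpy as np
    from scipy.ndimage import zoom as resample

    def digital_zoom(image, magnification):
        # Crop the central 1/magnification of the frame, then interpolate it
        # back up to the original frame size (cubic interpolation, order=3).
        if magnification <= 1.0:
            return image  # no enlargement requested: pass the image through
        h, w = image.shape[:2]
        ch = max(1, int(round(h / magnification)))
        cw = max(1, int(round(w / magnification)))
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = image[top:top + ch, left:left + cw]
        factors = (h / crop.shape[0], w / crop.shape[1]) + (1.0,) * (image.ndim - 2)
        return resample(crop, factors, order=3)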
The YC separation unit 5 converts the RGB data output from the enlargement processing unit 4 into luminance and color difference data; because the human eye is more sensitive to luminance, the correction processing described later is applied only to the luminance (Y).
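The YC separation could be sketched as follows; the BT.601 coefficients are an assumption, since the patent does not specify a particular luminance/color-difference matrix.

    import numpy as np

    def rgb_to_yc(rgb):
        # Split RGB into luminance Y and color differences Cb/Cr so that the
        # later correction can operate on Y alone (BT.601 coefficients assumed).
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, np.stack([cb, cr], axis=-1)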
The correction data holding unit 6 is a storage circuit (memory) that holds the correction data for restoring the obtained image data, and switches the correction data it outputs according to the zoom command a and the position signal b indicating the position on the image data. Here, the zoom command a includes the enlargement ratio used when zooming the image data. The position signal b indicates a position on the image data, for example as coordinates, and refers to the position on the image data before zooming. When zooming requires the enlargement base point to remain at the same position as in the original image, that base point may be taken as the origin of the coordinates, or positions may be expressed as distances from the base point.
As will be described later, the correction data is obtained by combining optical correction data, which restores the degradation of the image data caused by optical characteristics such as lens aberration, with enlargement correction data, which restores the degradation caused by the enlargement processing of digital zooming; correction data for the different positions on the original image data and for the different magnifications are stored in the correction data holding unit 6. Alternatively, for magnifications whose data is not stored, the correction data corresponding to the magnification may be calculated from reference correction data.
The image correction calculation unit 7 reads the luminance signal of the image data from the YC separation unit 5 and also reads the correction data from the correction data holding unit 6, and corrects the image data based on the read data. The details of the image correction arithmetic unit 7 will be described later.
The timing adjustment unit 8 performs a delay process in order to match the timing of the color difference signal output from the YC separation unit 5 with the timing of the image correction arithmetic unit 7.
The noise reduction unit 9 reduces unnecessary noise generated by the processing in the image correction calculation unit 7; median filtering, epsilon filtering, bilateral filtering, or the like may be applied as appropriate, and the filtering strength may be varied according to the luminance information of the surrounding pixels (the absolute luminance, luminance differences, and so on). The noise reduction process may also be omitted where appropriate.
The synthesis unit 10 synthesizes the restored luminance signal output from the noise reduction unit 9 and the color difference signal output from the timing adjustment unit 8, converts the image format into RGB, YUV, RAW, or the like as necessary, and outputs the image data as the final output of the imaging device.
As shown in fig. 2, the image data holding unit 11 of the image correction arithmetic unit 7 stores the luminance signal of the image data from the YC separating unit 5, and stores the image data of N × N pixels necessary for the image correction arithmetic for each pixel.
The deconvolution operation unit 12 multiplies the N × N luminance signal input from the image data holding unit 11 by the correction data read from the correction data holding unit 6, element by element, and sums the products to obtain the pixel value at the center position of the N × N image data (output value d of the deconvolution operation unit 12). In the present embodiment, N = 9.
The edge detection unit 13 calculates the edge intensities in the vertical and horizontal directions of the image using the central M × M portion of the data in the image data holding unit 11, and outputs them as its output value e. For the edge detection, Sobel filters with M = 3 are used and the mean square of the vertical and horizontal responses is calculated. Other algorithms may also be used to determine the edge strength.
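A minimal sketch of these two computations is given below; the exact way the vertical and horizontal Sobel responses are combined into a single edge strength value is an assumption, since the text only states that the mean square of the two directions is calculated.

    import numpy as np

    def deconvolve_pixel(patch, kernel):
        # Deconvolution operation unit 12: element-wise product of the N x N
        # luminance patch (N = 9 in this embodiment) and the N x N correction
        # data, summed to give the corrected center pixel (output value d).
        return float(np.sum(patch * kernel))

    def edge_strength(patch3):
        # Edge detection unit 13: Sobel responses on the central M x M
        # neighborhood (M = 3), combined here as the root of the mean square
        # of the horizontal and vertical gradients.
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        gx = float(np.sum(patch3 * kx))
        gy = float(np.sum(patch3 * ky))
        return float(np.sqrt((gx * gx + gy * gy) / 2.0))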
The addition unit 15 calculates the pixel value of the target pixel by the following algorithm based on the output values of the deconvolution unit 12 and the edge detection unit 13.
[Equation 1]
Out = d                        (when e ≥ th_H)
Out = α·d + (1 − α)·I          (when th_L < e < th_H)
Out = I                        (when e ≤ th_L)
Here:
d: output value of the deconvolution operation unit 12 (corrected pixel value)
e: output value of the edge detection unit 13
I: target pixel value (pixel value before correction)
th_H, th_L: upper and lower thresholds for the edge strength
α: mixing ratio between the deconvolution result and the original image
The parameters th_H, th_L, α, and the like may be determined as appropriate, for example while viewing images at the design, sample production, or mass production stage, or they may be made adjustable according to the shooting scene.
The processing in the addition unit 15 is performed after the image has been corrected by the deconvolution operation unit 12, described later, and the correction strength is changed according to the output value e of the edge detection unit 13. As described above, the upper and lower thresholds th_H and th_L are set for the output value e of the edge detection unit 13. When the edge intensity output value e is at or below the lower threshold th_L, the output from the addition unit 15 is the pixel value I of the target pixel of the uncorrected image data; that is, no correction is applied to pixels whose edge intensity e is at or below th_L.
When the edge intensity output value e is at or above the upper threshold th_H, the output from the addition unit 15 is the corrected pixel value of the target pixel, that is, the output value d of the deconvolution operation unit 12; in other words, pixels whose edge intensity e is at or above th_H are corrected.
When the edge intensity output value e falls between the upper threshold th_H and the lower threshold th_L, the output from the addition unit 15 is a value obtained by combining the pixel value I of the target pixel before correction with the corrected output value d of the deconvolution operation unit 12; that is, for such pixels the pixel value is set between the uncorrected value I and the corrected value.
It is also possible to use a single edge intensity threshold: when the output value e of the edge intensity is at or above the threshold, the output from the addition unit 15 is the corrected pixel value, that is, the output value d of the deconvolution operation unit 12; when e is below the threshold, the output is the pixel value I of the target pixel before correction. Three or more thresholds may also be set, with pixel values that combine the pre-correction and post-correction values in several steps inserted between the uncorrected and corrected values. By not correcting, or weakening the correction of, portions with low edge strength in this way, it is possible to prevent the noise that would otherwise arise from amplifying the small luminance changes in those portions.
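The behavior of the addition unit 15 described above can be summarized by the following sketch; the linear ramp of the mixing weight between the two thresholds is an assumed interpolation, since the text only requires an intermediate correction strength in that range.

    def blend_pixel(d, e, i, th_l, th_h, alpha=1.0):
        # d: corrected value from the deconvolution operation unit 12
        # e: edge strength from the edge detection unit 13
        # i: pixel value of the target pixel before correction
        if e <= th_l:
            return i                         # flat region: keep the original value
        if e >= th_h:
            return d                         # strong edge: use the corrected value
        w = alpha * (e - th_l) / (th_h - th_l)
        return w * d + (1.0 - w) * i         # intermediate edge: mix the two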
The output from the addition unit 15 (image correction calculation unit 7) and the color difference signal from the YC separation unit 5 are combined by the synthesis unit 10 and converted back into RGB signals. The color matrix adjustment unit 16 then performs color adjustment to obtain the final image data.
Next, the generation of the image correction data using the point spread function (PSF) of the lens and the defocus function describing the blur caused by enlargement will be described with reference to schematic diagrams.
Fig. 3A is an ideal, undegraded image (A) showing an enlarged view of a part of the resolution test board used as the subject, in other words, the image that would ideally be obtained by capturing that subject. Fig. 3B is the degraded image (B) of the same portion actually captured by the imaging device without any processing. Fig. 3C corresponds to scanning images (A) and (B) along the arrows in fig. 3A and fig. 3B, and shows, pixel by pixel, how the light intensity (luminance) of the ideal image (A) and of the captured image (B) changes with position along the arrow. The horizontal axis represents the pixels along the arrow, the vertical axis represents luminance (intensity), and the intensities are normalized to the same scale. Fig. 3D shows the derivative of the luminance change of image (B) in fig. 3B with respect to image (A) in fig. 3A; it corresponds to the point spread function of the lens, that is, the spread of the light distribution on the light-receiving surface of the solid-state imaging element 2. In other words, convolving the luminance change of fig. 3A in real space with fig. 3D yields image (B) of fig. 3B, the image actually captured.
To restore the captured image (B) to the ideal image (A), it is known that the luminance change of image (B) shown in fig. 3C and the point spread function shown in fig. 3D can each be Fourier-transformed into the spatial frequency domain, the former divided by the latter, and the result restored by an inverse Fourier transform.
On the other hand, in image enlargement by zoom processing, if the image size is enlarged while the original number of pixels is kept unchanged, the area covered by a single pixel grows; the image becomes larger but the pixel count does not, so a reduction in resolution is observed. Consequently, an image enlarged by interpolation, in which the number of pixels is increased, has smaller pixels than an image obtained by simply magnifying the original image to the same size.
As described above, making the pixels of the enlarged image smaller than those of the original requires raising the spatial frequency content, as shown in fig. 4; with ordinary simple interpolation, however, the content remains limited by the original sampling frequency, so a deterioration of resolution is observed.
Here, fig. 4A and 4B show images of a part of the resolution test board, fig. 4A shows an image (a) which is an ideal original image before enlargement, and fig. 4B shows an image (B) which is an enlarged image. Fig. 4C shows changes in luminance (light intensity) when the images (a) and (B) are scanned as shown by arrows in fig. 4A and 4B, where the horizontal axis indicates pixels at respective positions of the images (a) and (B) indicated by the arrows and the vertical axis indicates luminance.
This deterioration of resolution appears as the luminance change becoming gentler, as shown in fig. 4C: the enlarged image (B) of fig. 4B is obtained by convolving image (A) of fig. 4A with the defocus function shown in fig. 4D, which degrades the resolution. Therefore, just as with the correction of resolution degradation by the point spread function of the lens, the luminance change of image (B) in fig. 4B and the defocus function of fig. 4D are each Fourier-transformed, the former is divided by the latter, and the result is restored by an inverse Fourier transform, yielding an enlarged image with substantially the same resolution as image (A).
The formula is shown below.
G: luminance distribution of a subject, I: luminance distribution of captured image, P: point spread function
I = G * P    ...(1)
F(I) = F(G * P) = F(G) × F(P)    ...(2)
G = F⁻¹(F(I) / F(P)) = I * F⁻¹(1 / F(P))    ...(3)
Here, the symbol * denotes convolution, and F( ) and F⁻¹( ) denote the Fourier transform and the inverse Fourier transform, respectively. F⁻¹(1 / F(P)) is commonly referred to as a deconvolution filter.
When the image is further enlarged, the following holds.
I' = I * B = G * P * B    ...(4)
Here, I': enlarged image, B: defocus function produced by the enlargement
To recover G from I', the following operation is required.
G = I' * F⁻¹(1 / F(P)) * F⁻¹(1 / F(B))    ...(5)
Since a deconvolution filter is usually expressed as a (2N+1) × (2N+1) matrix, processing one pixel requires (2N+1) × (2N+1) multiplications and additions. Moreover, to obtain the result of one convolution it is necessary to wait until at least the next N lines of image data have been input. In the present invention, expression (5) is rearranged as shown below, and correction data (a deconvolution filter) corresponding to the magnification is calculated and stored in advance, which reduces the number of operations.
G = I' * F⁻¹(1 / F(P) × 1 / F(B)) = I' * F⁻¹(1 / (F(P) × F(B)))    ...(6)
The point spread function of the lens may be taken from the optical design values, measured after the imaging device is assembled, or approximated by a suitable substitute function. Examples of suitable functions include a Gaussian function, a Lorentzian function, and an appropriate polynomial. When design values or measured values are used, the strength of the correction can be adjusted while observing the image quality, and the optimum value selected.
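To illustrate how a combined correction kernel following expression (6) might be generated in advance, the sketch below builds Gaussian stand-ins for the point spread function P and the defocus function B, multiplies their Fourier transforms, takes the reciprocal, and inverse-transforms the result into a real-space deconvolution filter. The Gaussian models, the kernel size, and the regularization constant eps are assumptions added for illustration; the patent itself does not specify how to handle frequencies where F(P) × F(B) is close to zero.

    import numpy as np

    def gaussian_kernel(size, sigma):
        # Simple Gaussian stand-in for P or B (one of the substitute functions
        # mentioned above).
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()

    def combined_deconvolution_filter(psf, defocus, size=9, eps=1e-3):
        # Expression (6): F^-1( 1 / (F(P) x F(B)) ), evaluated on a size x size
        # grid.  eps keeps the reciprocal finite where F(P) x F(B) is near zero.
        P = np.fft.fft2(np.fft.ifftshift(psf), (size, size))
        B = np.fft.fft2(np.fft.ifftshift(defocus), (size, size))
        H = P * B
        H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized reciprocal
        kernel = np.real(np.fft.ifft2(H_inv))
        return np.fft.fftshift(kernel)

    # Example: correction data for one image region at one zoom magnification.
    correction_data = combined_deconvolution_filter(
        gaussian_kernel(9, sigma=1.2),   # lens PSF for this region (assumed)
        gaussian_kernel(9, sigma=0.8))   # defocus from the enlargement (assumed)

A kernel of this kind would be computed for each image region and each zoom magnification and stored in the correction data holding unit 6, leaving only the multiply-and-accumulate of the deconvolution operation unit 12 to be performed while shooting.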
(embodiment 2)
Next, embodiment 2 of the present invention will be explained.
As shown in fig. 5, the imaging device according to embodiment 2 of the present invention is the imaging device of embodiment 1 shown in fig. 1 to which an optical lens 1' and a solid-state imaging element 2' are added in addition to the optical lens 1 and the solid-state imaging element 2, together with a switching circuit 17 that switches between the output signal of the solid-state imaging element 2 and that of the solid-state imaging element 2'. The optical lens 1 with the solid-state imaging element 2, and the optical lens 1' with the solid-state imaging element 2', each constitute an imaging unit.
The optical lenses 1 and 1' have different angles of view and focal distances, and therefore different optical magnifications. For example, if the focal distance of the optical lens 1' is twice that of the optical lens 1, corresponding to a 2x optical zoom, the switching circuit 17 switches so that the solid-state imaging element 2 with the optical lens 1 outputs the signal when the overall zoom ratio Z of the imaging device satisfies 1 ≤ Z < 2, and the solid-state imaging element 2' with the optical lens 1' outputs the signal when Z ≥ 2. The rest of the configuration is the same as in the imaging device of embodiment 1.
The solid-state imaging elements 2 and 2' may have the same specification or different specifications, but in the latter case, adjustment according to different specifications such as zoom ratio adjustment is required.
In embodiment 2, the processing described above for embodiment 1 is also applied to the image data captured by each imaging unit. Thus, in the imaging device of embodiment 2, switching the imaging unit that outputs the image allows the magnification to be changed optically in as many steps as there are imaging units, while digital zooming changes the magnification continuously between those steps. Correction data generated, as described above, from the point spread function of the optical characteristics and from the defocus function produced by the enlargement processing is stored for each imaging unit, so that the image degradation caused by the optical characteristics and by the enlargement processing can be recovered efficiently using that correction data.
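A sketch of the switching decision for the two imaging units, under the 2x focal-distance example above, might look as follows; how the residual digital zoom factor is split between the two units is an assumption for illustration.

    def select_imaging_unit(zoom_ratio):
        # Switching circuit 17: the wide unit (lens 1 / element 2) covers
        # 1 <= Z < 2, and the tele unit (lens 1' / element 2'), with twice the
        # focal distance, covers Z >= 2.  The returned factor is the digital
        # zoom still to be applied by the enlargement processing unit 4.
        if zoom_ratio < 2.0:
            return "wide", zoom_ratio        # digital zoom of 1x to just under 2x
        return "tele", zoom_ratio / 2.0      # the tele unit contributes 2x optically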
Fig. 6 shows a part of an image obtained by the image pickup apparatus according to the present invention.
Fig. 6A shows an ordinary image obtained with the combination of the optical lens 1 and the solid-state imaging element 2, and fig. 6B shows the image corrected by the correction method of embodiment 1; fig. 6C and fig. 6D show, in the same way, the ordinary image and the corrected image obtained on the optical lens 1' side with the zoom ratio set to 2.
In either case, it was confirmed that the present invention provides good image quality.
In the imaging device of each of the above embodiments, the arithmetic processing section that processes the output signal of the solid-state imaging element 2 (2') may be located inside the camera module serving as the imaging device or outside it. The arithmetic processing section may be a dedicated circuit, a programmed general-purpose circuit, or a combination of the two. When the imaging device is incorporated in an electronic apparatus that has a general-purpose arithmetic processing device, such as a smartphone or tablet PC, all or part of the arithmetic processing may be carried out by the arithmetic processing device on the apparatus side, operating under an application (program) on that apparatus.
Description of the reference numerals
1 optical lens
2 solid-state imaging element
6 correction data holding part (storage unit)
7 image correction arithmetic unit (arithmetic processing unit)
13 edge detection unit (edge detection unit)
17 switching circuit (switching unit)

Claims (3)

1. An imaging device having an optical lens and a solid-state imaging element,
the imaging device includes: a storage unit that stores correction data calculated based on both a point spread function obtained from the optical lens in each region of an image that is output from the solid-state imaging element and divided into a plurality of regions, and an out-of-focus function when the image is enlarged by digital zooming; and
an arithmetic processing unit that extracts the correction data from the storage unit for each of the captured images, and performs arithmetic processing for correcting degradation of the image based on the point spread function and the defocus function for each of the captured images using the extracted correction data;
wherein the correction data is obtained by transforming the point spread function and the defocus function into the frequency domain by Fourier transform, multiplying the two transformed functions together, taking the reciprocal of the product, applying an inverse Fourier transform to the reciprocal, converting the result into a deconvolution filter in real space, and storing it in the storage unit.
2. The image pickup apparatus according to claim 1,
having an edge detection unit that detects an edge intensity in the captured image,
the arithmetic processing unit adjusts the intensity of the correction by the arithmetic processing in accordance with the edge intensity of each position of the image detected by the edge detection unit.
3. The image pickup apparatus according to claim 1,
comprising: a plurality of imaging units, each having the lens and the solid-state imaging element, with different focal distances; and a switching unit that switches the output of the images from the plurality of imaging units according to the zoom magnification at the time of shooting,
the storage unit stores the correction data corresponding to each imaging unit,
the arithmetic processing unit performs arithmetic processing for correcting the image output from the imaging unit using the correction data corresponding to each imaging unit.
CN201710318575.1A 2016-10-21 2017-05-08 Image pickup apparatus Expired - Fee Related CN107979715B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016206729A JP6857006B2 (en) 2016-10-21 2016-10-21 Imaging device
JP2016-206729 2016-10-21

Publications (2)

Publication Number Publication Date
CN107979715A CN107979715A (en) 2018-05-01
CN107979715B (en) 2021-04-13

Family

Family ID: 62012188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710318575.1A Expired - Fee Related CN107979715B (en) 2016-10-21 2017-05-08 Image pickup apparatus

Country Status (2)

Country Link
JP (1) JP6857006B2 (en)
CN (1) CN107979715B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410152A (en) * 2018-11-26 2019-03-01 Oppo广东移动通信有限公司 Imaging method and device, electronic equipment, computer readable storage medium
CN114630008B (en) * 2020-12-10 2023-09-12 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004328506A (en) * 2003-04-25 2004-11-18 Sony Corp Imaging apparatus and image recovery method
CN101222583A (en) * 2007-01-09 2008-07-16 奥林巴斯映像株式会社 Imaging apparatus adapted to implement electrical image restoration processing
CN102930507A (en) * 2011-08-08 2013-02-13 佳能株式会社 Image processing method, image processing apparatus, and image pickup apparatus
CN103201765A (en) * 2010-09-28 2013-07-10 马普科技促进协会 Method and device for recovering a digital image from a sequence of observed digital images
CN105684417A (en) * 2013-10-31 2016-06-15 富士胶片株式会社 Image processing device, image capture device, parameter generating method, image processing method, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06261238A (en) * 1993-03-05 1994-09-16 Canon Inc Image pickup device
JP4341019B2 (en) * 2004-01-09 2009-10-07 カシオ計算機株式会社 Imaging apparatus and program
JP2010141661A (en) * 2008-12-12 2010-06-24 Toshiba Corp Image processing device
US8553106B2 (en) * 2009-05-04 2013-10-08 Digitaloptics Corporation Dual lens digital zoom
JP2013021509A (en) * 2011-07-11 2013-01-31 Canon Inc Image processing apparatus and method


Also Published As

Publication number Publication date
CN107979715A (en) 2018-05-01
JP2018067868A (en) 2018-04-26
JP6857006B2 (en) 2021-04-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210413