WO2017158690A1 - Image processing device, image processing method, recording medium, program, and imaging device - Google Patents
- Publication number
- WO2017158690A1 (PCT/JP2016/057997)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel
- unit
- position information
- code
Classifications
- G06T5/80—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0025—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for optical correction, e.g. distorsion, aberration
- G02B27/0037—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for optical correction, e.g. distorsion, aberration with diffracting elements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T5/00—Image enhancement or restoration
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
Definitions
- the present invention relates to an image processing apparatus, an image processing method, a recording medium, a program, and an imaging apparatus.
- Solid-state imaging devices such as CMOS (complementary metal oxide semiconductor) and CCD (charge coupled device) sensors are widely used. The output image of such a solid-state imaging device suffers from image-quality deterioration such as distortion, blurring, and darkening, and the degree of deterioration increases with distance from the image center.
- Patent Document 1 discloses a technique for correcting the amount of distortion of an image by using an approximate expression based on the distortion aberration information of the photographing lens, as a function of an arbitrary image height from the center of the photographing lens's image plane.
- Patent Document 2 discloses a technique for suppressing jaggies that may occur in an image at the time of distortion correction.
- Patent Document 3 discloses a technique for preventing image quality deterioration at the time of resolution conversion.
- Patent Document 4 and Patent Document 5 disclose techniques for accurately determining the correction amount for correcting the distortion of an image caused by lens aberration.
- JP-A-4-348343, JP-A-6-165024, JP-A-2005-311473, JP-A-7-193790, and JP-A-2000-4391
- The present invention addresses these problems of the prior art.
- A first aspect of the present invention provides an image processing apparatus comprising: an image storage unit that stores a first image affected by an aberration of an optical system; a position information generation unit that, each time a pixel of interest is scanned in a predetermined order over a second image from which the influence of the aberration is removed in order to generate the pixel value of each pixel of the second image, generates position information of the pixel of the first image corresponding to the scanned pixel of interest, based on a distortion aberration table indicating the correspondence between position information of each pixel of the first image and position information of each pixel of the second image; a first aberration correction unit that corrects a phase shift due to distortion for each pixel of the first image read from the image storage unit, using the decimal (fractional) part of the position information generated by the position information generation unit; and a second aberration correction unit that generates the second image by correcting aberrations other than distortion in the first image corrected by the first aberration correction unit.
- A second aspect of the present invention provides an image processing method comprising: a position information generation step of generating, each time a pixel of interest is scanned in a predetermined order to generate the pixel value of each pixel of a second image from which the influence of an aberration of an optical system has been removed, position information of the pixel of the first image affected by the aberration that corresponds to the scanned pixel of interest, based on a distortion aberration table indicating the correspondence between position information of each pixel of the first image and position information of each pixel of the second image; a first aberration correction step of correcting a phase shift due to distortion for each pixel of the first image read from an image storage unit storing the first image, using the decimal part of the position information generated in the position information generation step; and a second aberration correction step of generating the second image by correcting aberrations other than distortion in the first image corrected in the first aberration correction step.
- A third aspect of the present invention provides a recording medium recording a program for causing a computer to function as: a position information generation unit that, each time a pixel of interest is scanned in a predetermined order over a second image from which the influence of an aberration of an optical system is removed in order to generate the pixel value of each pixel of the second image, generates position information of the pixel of the first image corresponding to the scanned pixel of interest, based on a distortion aberration table indicating the correspondence between position information of each pixel of the first image affected by the aberration and position information of each pixel of the second image; a first aberration correction unit that corrects a phase shift due to distortion for each pixel of the first image read from an image storage unit storing the first image, using the decimal part of the position information generated by the position information generation unit; and a second aberration correction unit that generates the second image by correcting aberrations other than distortion in the first image corrected by the first aberration correction unit.
- A fourth aspect of the present invention provides an imaging apparatus comprising: an image pickup device that generates a first image according to imaging light incident through an optical system; an image storage unit that stores the first image generated by the image pickup device; a position information generation unit that, each time a pixel of interest is scanned in a predetermined order over a second image from which the influence of the aberration has been removed in order to generate the pixel value of each pixel of the second image, generates position information of the pixel of the first image corresponding to the scanned pixel of interest, based on a distortion aberration table indicating the correspondence between position information of each pixel of the first image and position information of each pixel of the second image; a first aberration correction unit that corrects a phase shift due to distortion for each pixel of the first image read from the image storage unit, using the decimal part of the generated position information; and a second aberration correction unit that generates the second image by correcting aberrations other than distortion in the first image corrected by the first aberration correction unit.
- The present invention can correct, with high accuracy, an image whose quality has been degraded under the influence of an optical system.
- FIG. 1 is a block diagram showing a configuration example according to an embodiment of a digital camera to which the present technology is applied. The other drawings include: a block diagram showing a configuration example of the image conversion apparatus that performs the aberration correction processing executed by the image correction unit; a conceptual diagram showing the characteristics of the distortion produced by an imaging apparatus; conceptual diagrams showing how taps are set and how nearby pixels are read out of a first image having distortion; a conceptual diagram showing the processing of the position information generation unit; diagrams showing configuration examples of the filter tap selected by the filter tap selection unit and of the code tap selected by the code tap selection unit; a block diagram showing a configuration example of the code operation unit; and a conceptual diagram showing the division of position information.
- The code classification type adaptive filter performs image conversion processing that converts a first image into a second image, and realizes various kinds of signal processing depending on how the first and second images are defined. For example, if the first image is a low-resolution image and the second image is a high-resolution image, the code classification type adaptive filter performs super-resolution processing that improves the resolution.
- When the first image is a low S/N (signal-to-noise ratio) image and the second image is a high S/N image, the code classification type adaptive filter performs noise removal processing. If the second image has more or fewer pixels than the first image, the filter performs image resizing (enlargement or reduction) processing.
- Similarly, depending on how the first and second images are related, the code classification type adaptive filter performs deblurring processing or phase shift processing.
- The code classification type adaptive filter calculates the pixel value of the pixel of interest in the second image using the tap coefficients of the code of that pixel and the pixel values of the pixels of the first image selected for the pixel of interest.
- The code is obtained by code classification, which classifies the pixel of interest of the second image into one of a plurality of codes.
- FIG. 1 is a block diagram showing a configuration example of an image conversion apparatus 10 that performs image conversion processing by a code classification type adaptive filter.
- the first image is supplied to the image conversion device 10.
- The first image is supplied to the filter tap selection unit 12 and the code tap selection unit 13.
- the target pixel selection unit 11 sequentially selects each pixel constituting the second image as a target pixel, and supplies information representing the selected target pixel to a predetermined block.
- The filter tap selection unit 12 selects, as the filter tap, the pixel values of a plurality of pixels forming the first image in order to obtain the pixel value of the pixel of interest by filter calculation. Specifically, the filter tap selection unit 12 selects, as the filter tap, the pixel values of a plurality of pixels of the first image located close to the position of the pixel of interest, and supplies the selected filter tap to the product-sum operation unit 16.
- The code tap selection unit 13 selects, as the code tap, the pixel values of a plurality of pixels of the first image near the position of the pixel of interest in order to classify the pixel of interest into one of several codes, and supplies the selected code tap to the code operation unit 14. Note that the filter tap and the code tap may have the same tap structure (the structure of the pixels to be selected) or different tap structures.
- The code operation unit 14 classifies the pixel of interest according to a predetermined rule based on the code tap from the code tap selection unit 13, and supplies the code of the pixel of interest to the coefficient storage unit 15.
- As a method of performing code classification, there is, for example, a DR (dynamic range) quantization method that quantizes the pixel values forming the code tap.
- The DR quantization method quantizes the pixel values of the pixels that make up the code tap, and determines the code of the pixel of interest according to the resulting DR quantization code.
- Specifically, the maximum value MAX and the minimum value MIN of the pixel values of the pixels forming the code tap are detected first, and the dynamic range of the set is taken as DR = MAX - MIN. Based on this dynamic range DR, the pixel value of each pixel constituting the code tap is requantized to N bits: the minimum value MIN is subtracted from the pixel value of each pixel, and the difference is divided (quantized) by DR/2^N.
- The N-bit pixel values of the pixels constituting the code tap obtained as described above are arranged in a predetermined order, and the resulting bit string is output as the DR quantization code.
- In 1-bit quantization (N = 1), for example, the pixel value of each pixel constituting the code tap is divided by the average value of the maximum value MAX and the minimum value MIN (integer division), whereby the pixel value of each pixel becomes 1 bit (is binarized). A bit string in which these 1-bit pixel values are arranged in a predetermined order is then output as the DR quantization code.
- the DR quantization code is a code to be calculated by the code operation unit 14.
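A minimal sketch of the 1-bit DR quantization code described above, using the equivalent threshold-comparison form (binarizing each tap pixel against the average of MAX and MIN). The function name and tap layout are illustrative assumptions, not taken from the patent.

```python
# Illustrative 1-bit DR (dynamic-range) quantization of a code tap.
def dr_quantization_code(code_tap):
    """Binarize each tap pixel against the mean of MAX and MIN,
    then pack the bits (in tap order) into an integer code."""
    mx, mn = max(code_tap), min(code_tap)
    threshold = (mx + mn) // 2           # integer average of MAX and MIN
    code = 0
    for value in code_tap:
        # 1 if the pixel is at or above the threshold, else 0
        code = (code << 1) | (1 if value >= threshold else 0)
    return code
```

For a 4-pixel tap [10, 200, 30, 180], the threshold is 105 and the bit string 0101 gives code 5.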
- The code operation unit 14 can also output, for example, the pattern of the level distribution of the pixel values of the pixels forming the code tap directly as the code.
- In that case, if the code tap is formed of the pixel values of M pixels and A bits are assigned to the pixel value of each pixel, the number of codes output by the code operation unit 14 is (2^A)^M, an enormous number that is exponentially proportional to the number of bits A of the pixel values.
- the coefficient storage unit 15 stores tap coefficients for each code obtained by learning described later.
- the coefficient storage unit 15 outputs the tap coefficient stored in the address corresponding to the code, and supplies the tap coefficient to the product-sum operation unit 16.
- The tap coefficient corresponds to the coefficient by which input data is multiplied at a so-called tap of a digital filter.
- The product-sum operation unit 16 performs a product-sum operation to obtain a predicted value of the pixel value of the pixel of interest, using the filter tap output from the filter tap selection unit 12 and the tap coefficients output from the coefficient storage unit 15. The product-sum operation unit 16 thereby obtains the pixel value of the pixel of interest, that is, the pixel value of a pixel forming the second image.
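The filter calculation performed by the product-sum operation unit 16 can be sketched as follows; the function name and the coefficient table are illustrative assumptions rather than part of the patent.

```python
# Sketch of the per-pixel prediction: the code selects a coefficient set,
# and the predicted pixel value is the inner product of the filter tap
# with those tap coefficients.
def predict_pixel(filter_tap, code, coeff_table):
    """y = sum_n w_n * x_n, with the weights w chosen by the pixel's code."""
    weights = coeff_table[code]
    return sum(w * x for w, x in zip(weights, filter_tap))
```

For example, with a 4-tap averaging coefficient set for code 0, a flat tap of value 100 predicts 100.0.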
- FIG. 2 is a flow chart for explaining the image conversion process by the image conversion apparatus 10.
- In step S11, the pixel-of-interest selection unit 11 selects, as the pixel of interest, one of the pixels constituting the second image for the first image input to the image conversion device 10 that has not yet been taken as the pixel of interest (has not yet been converted). Specifically, the pixel-of-interest selection unit 11 selects the pixel of interest from the not-yet-selected pixels constituting the second image in raster scan order. Then, the process proceeds to step S12.
- In step S12, the code tap selection unit 13 selects, from the pixels of the first image input to the image conversion device 10, the pixels forming the code tap for the pixel of interest, and supplies the selected code tap to the code operation unit 14. Likewise, the filter tap selection unit 12 selects, from the pixels of the first image input to the image conversion device 10, the pixels forming the filter tap for the pixel of interest, and supplies the selected filter tap to the product-sum operation unit 16. Then, the process proceeds to step S13.
- In step S13, the code operation unit 14 classifies the pixel of interest based on the code tap for the pixel of interest supplied from the code tap selection unit 13, and supplies the code of the pixel of interest obtained as a result to the coefficient storage unit 15. Then, the process proceeds to step S14.
- In step S14, the coefficient storage unit 15 acquires and outputs the tap coefficients stored at the address corresponding to the code supplied from the code operation unit 14, and the product-sum operation unit 16 acquires the tap coefficients output from the coefficient storage unit 15. Then, the process proceeds to step S15.
- In step S15, the product-sum operation unit 16 obtains the pixel value of the pixel of interest by performing a predetermined product-sum operation using the filter tap output from the filter tap selection unit 12 and the tap coefficients acquired from the coefficient storage unit 15. Then, the process proceeds to step S16.
- step S16 the target pixel selection unit 11 determines whether or not there is a pixel not selected as a target pixel among the pixels of the second image. In the case of a positive determination result, that is, when there is a pixel not selected as the target pixel, the process returns to step S11, and the processes after step S11 are performed again. In the case of a negative determination result, that is, when there is no pixel not selected as the target pixel, the pixel values have been obtained for all the pixels of the second image, and the processing is ended.
- Consider the case where the second image is a high-quality image and the first image is a low-quality image obtained by subjecting the high-quality image to LPF (low-pass filter) processing to reduce its image quality (resolution).
- In this case, the product-sum operation unit 16 performs, for example, a linear first-order prediction operation. The pixel value y of a high-quality pixel is then obtained by the linear first-order expression of equation (1): y = Σ_{n=1}^{N} w_n·x_n.
- Here, x_n represents the pixel value of the n-th pixel of the low-quality image (low-quality pixel) constituting the filter tap for the high-quality pixel y, and w_n represents the n-th tap coefficient multiplied by the pixel value x_n of the n-th low-quality pixel. The filter tap is configured by N low-quality pixels x_1, x_2, ..., x_N.
- Note that the pixel value y of the high-quality pixel may be obtained not by the linear first-order expression shown in equation (1) but by a second- or higher-order expression.
- Now, let y_k denote the true value of the pixel value of the high-quality pixel of the k-th sample, and let y_k' denote the predicted value of the true value y_k obtained by equation (1). The prediction error e_k of the predicted value y_k' with respect to the true value y_k is expressed by equation (2): e_k = y_k - y_k'.
- Since the predicted value y_k' of equation (2) is obtained according to equation (1), replacing y_k' in equation (2) according to equation (1) yields equation (3): e_k = y_k - Σ_{n=1}^{N} w_n·x_{n,k}. Here, x_{n,k} represents the n-th low-quality pixel constituting the filter tap for the k-th high-quality pixel y_k.
- A tap coefficient w_n that makes the prediction error e_k of equation (3) (or equation (2)) zero is optimal for predicting the high-quality pixel y_k, but determining such a tap coefficient w_n for all high-quality pixels y_k is generally difficult.
- As a criterion indicating that the tap coefficient w_n is optimal, the method of least squares can be adopted, for example: the optimal tap coefficient w_n is obtained by minimizing the sum E of squared errors represented by equation (4): E = Σ_{k=1}^{K} e_k^2. Here, K represents the number of sample sets of a high-quality pixel y_k and the low-quality pixels x_{1,k}, x_{2,k}, ..., x_{N,k} constituting the filter tap for that high-quality pixel y_k (the number of samples for learning).
- The minimum of the sum E of squared errors in equation (4) is attained by the tap coefficient w_n for which the partial derivative of E with respect to w_n becomes zero, as shown in equation (5): ∂E/∂w_n = e_1·(∂e_1/∂w_n) + e_2·(∂e_2/∂w_n) + ... + e_K·(∂e_K/∂w_n) = 0.
- Differentiating equation (3) with respect to w_n gives equation (6): ∂e_k/∂w_n = -x_{n,k}. From equations (5) and (6), equation (7) is obtained: Σ_{k} e_k·x_{n,k} = 0.
- Substituting equation (3) for e_k in equation (7), equation (7) can be expressed by the normal equation shown in equation (8): Σ_{k} (Σ_{n'} x_{n,k}·x_{n',k})·w_{n'} = Σ_{k} x_{n,k}·y_k for n = 1, 2, ..., N, from which the tap coefficients w_n are derived. By formulating and solving the normal equation of equation (8) for each code, the optimal tap coefficients (the tap coefficients minimizing the sum E of squared errors) w_n can be obtained for each code.
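As a hedged illustration of solving equation (8) for one code, the following sketch fits tap coefficients from synthetic sample pairs with NumPy; the variable names and data are assumptions, not part of the patent.

```python
import numpy as np

# Least-squares fit of tap coefficients for a single code:
# solve the normal equation (X^T X) w = X^T y of equation (8).
rng = np.random.default_rng(0)
w_true = np.array([0.1, 0.2, 0.3, 0.4])   # ground-truth coefficients (synthetic)
X = rng.normal(size=(1000, 4))            # filter taps x_{n,k} (student data)
y = X @ w_true                            # teacher pixels y_k (noise-free here)

A = X.T @ X                               # left-side matrix:  sum x_{n,k} x_{n',k}
b = X.T @ y                               # right-side vector: sum x_{n,k} y_k
w = np.linalg.solve(A, b)                 # tap coefficients w_n
```

With noise-free samples the recovered coefficients match the ground truth; in practice one such system is accumulated and solved per code.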
- FIG. 3 is a block diagram showing a configuration example of a learning device that performs learning for obtaining a tap coefficient w n by formulating and solving the normal equation of equation (8) for each code.
- The learning image storage unit 21 of the learning device 20 stores learning images used for learning of the tap coefficients w_n.
- the learning image corresponds to, for example, a high-quality image with high resolution.
- The teacher data generation unit 22 reads a learning image from the learning image storage unit 21 and generates from it teacher data (a teacher image), which serves as the teacher (true value) for learning the tap coefficients, that is, the pixel values of the mapping destination of the mapping performed as the prediction operation of equation (1), and supplies the teacher data to the teacher data storage unit 23.
- the teacher data generation unit 22 may supply the high quality image, which is a learning image, as it is to the teacher data storage unit 23 as teacher data.
- the teacher data storage unit 23 stores the high-quality image supplied from the teacher data generation unit 22 as teacher data.
- The student data generation unit 24 reads the learning image from the learning image storage unit 21 and generates from it student data (a student image), which serves as the student for learning the tap coefficients, that is, the pixel values to be converted by the mapping performed as the prediction operation of equation (1), and supplies the student data to the student data storage unit 25.
- the student data generation unit 24 filters a high quality image as a learning image to generate a low quality image with a reduced resolution, and supplies the low quality image as student data to the student data storage unit 25.
- the student data storage unit 25 stores the low image quality image supplied from the student data generation unit 24 as student data.
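The student-data generation described above can be sketched as follows, assuming a simple 2x2 box-filter average stands in for the patent's unspecified LPF; the function name and parameters are illustrative.

```python
# Degrade a teacher (high-quality) image into a student (low-quality)
# image by averaging each factor x factor block of pixels.
def make_student_image(teacher, factor=2):
    h, w = len(teacher), len(teacher[0])
    student = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [teacher[i + di][j + dj]
                     for di in range(factor)
                     for dj in range(factor)]
            row.append(sum(block) / len(block))  # box-filter average
        student.append(row)
    return student
```

For example, the 2x2 teacher [[0, 2], [4, 6]] is reduced to the single student pixel 3.0.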
- the learning unit 26 sequentially selects each pixel of the high-quality image stored in the teacher data storage unit 23 as a target pixel.
- For the selected pixel of interest, the learning unit 26 selects, from the low-quality pixels constituting the low-quality image stored in the student data storage unit 25, low-quality pixels having the same tap structure as those selected by the filter tap selection unit 12 shown in FIG. 1, as the filter tap.
- Furthermore, the learning unit 26 formulates and solves the normal equation of equation (8) for each code, using each pixel constituting the teacher data and the filter tap selected when that pixel is the pixel of interest, thereby obtaining the tap coefficients for each code.
- FIG. 4 is a block diagram showing a configuration example of the learning unit 26.
- the pixel-of-interest selection unit 31 sequentially selects pixels constituting the teacher data stored in the teacher data storage unit 23 as pixels of interest, and supplies information representing the pixel-of-interest to a predetermined block.
- The filter tap selection unit 32 selects, as the filter tap corresponding to the pixel of interest, the same pixels as those selected by the filter tap selection unit 12 in FIG. 1 from the low-quality pixels constituting the low-quality image stored in the student data storage unit 25. A filter tap having the same tap structure as that obtained by the filter tap selection unit 12 is thereby obtained and supplied to the adding unit 35.
- The code tap selection unit 33 selects, as the code tap corresponding to the pixel of interest, the same pixels as those selected by the code tap selection unit 13 in FIG. 1 from the low-quality pixels constituting the low-quality image stored in the student data storage unit 25. A code tap having the same tap structure as that obtained by the code tap selection unit 13 is thereby obtained and supplied to the code operation unit 34.
- the code operation unit 34 performs the same code operation as that of the code operation unit 14 of FIG. 1 based on the code tap output from the code tap selection unit 33, and supplies the resulting code to the addition unit 35.
- The adding unit 35 reads out the teacher data (pixel) that is the pixel of interest from the teacher data storage unit 23. For each code supplied from the code operation unit 34, the adding unit 35 performs an addition over the pixel of interest read from the teacher data storage unit 23 and the student data (pixels) constituting the filter tap for that pixel of interest supplied from the filter tap selection unit 32.
- That is, the adding unit 35 is supplied with the teacher data y_k read from the teacher data storage unit 23, the filter tap x_{n,k} selected by the filter tap selection unit 32, and the code calculated by the code operation unit 34.
- For each code supplied from the code operation unit 34, the adding unit 35 uses the filter tap (student data) x_{n,k} to perform the operations corresponding to the multiplications of student data (x_{n,k}·x_{n',k}) and the summation (Σ) in the matrix on the left side of equation (8).
- Further, for each code, the adding unit 35 uses the student data x_{n,k} and the teacher data y_k to perform the operations corresponding to the multiplications (x_{n,k}·y_k) and the summation (Σ) in the vector on the right side of equation (8).
- The adding unit 35 stores, in a built-in storage unit (not shown), the components (Σ x_{n,k}·x_{n',k}) of the left-side matrix and the components (Σ x_{n,k}·y_k) of the right-side vector of equation (8) obtained for the teacher data previously taken as the pixel of interest, and adds to those components the corresponding terms computed, using the teacher data and student data, for the teacher data newly taken as the pixel of interest.
- The adding unit 35 performs the above addition with all the teacher data stored in the teacher data storage unit 23 as the pixel of interest, thereby formulating the normal equation shown in equation (8) for each code, and supplies the normal equations to the tap coefficient calculator 36.
- the tap coefficient calculator 36 solves the normal equation for each code supplied from the adding unit 35 to obtain an optimum tap coefficient w n for each code.
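The per-code accumulation performed by the adding unit 35 and the per-code solve performed by the tap coefficient calculator 36 can be sketched together as follows; the sample data and variable names are illustrative assumptions.

```python
import numpy as np

N = 3                                      # taps per filter (assumed)
A = {}                                     # code -> accumulated left-side matrix
b = {}                                     # code -> accumulated right-side vector

# Synthetic (code, filter tap x, teacher pixel y) training samples.
samples = [
    (0, np.array([1.0, 0.0, 0.0]), 2.0),
    (0, np.array([0.0, 1.0, 0.0]), 3.0),
    (0, np.array([0.0, 0.0, 1.0]), 4.0),
]

for code, x, y in samples:
    A.setdefault(code, np.zeros((N, N)))
    b.setdefault(code, np.zeros(N))
    A[code] += np.outer(x, x)              # accumulate sum x_{n,k} x_{n',k}
    b[code] += x * y                       # accumulate sum x_{n,k} y_k

# Solve each code's normal equation for its tap coefficients.
coeffs = {c: np.linalg.solve(A[c], b[c]) for c in A}
```

Each code keeps its own accumulator pair, so a single pass over the training pixels yields one normal equation per code.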
- the coefficient storage unit 15 in the image conversion apparatus 10 of FIG. 1 stores the tap coefficients w n for each code obtained as described above.
- FIG. 5 is a flowchart illustrating the learning process of the learning device 20 of FIG.
- In step S21, the teacher data generation unit 22 generates teacher data from a learning image stored in the learning image storage unit 21 and supplies the generated teacher data to the teacher data storage unit 23, and the student data generation unit 24 generates student data from the learning image and supplies the generated student data to the student data storage unit 25. Then, the process proceeds to step S22.
- An image for learning that is optimal as teacher data or student data is selected according to what kind of image conversion processing the code classification adaptive filter performs.
- In step S22, the pixel-of-interest selection unit 31 of the learning unit 26 shown in FIG. 4 selects, as the pixel of interest, a pixel that has not yet been taken as the pixel of interest from the teacher data stored in the teacher data storage unit 23 shown in FIG. 3. Then, the process proceeds to step S23.
- In step S23, the filter tap selection unit 32 shown in FIG. 4 selects, for the pixel of interest, the pixels of the student data that form the filter tap from the student data stored in the student data storage unit 25 shown in FIG. 3, and supplies them to the adding unit 35. Likewise, the code tap selection unit 33 shown in FIG. 4 selects the student data that forms the code tap from the student data stored in the student data storage unit 25 shown in FIG. 3, and supplies it to the code operation unit 34. Then, the process proceeds to step S24.
- step S24 the code operation unit 34 performs code operation on the target pixel based on the code tap for the target pixel, and supplies the code obtained as a result to the adding unit 35. Then, the process proceeds to step S25.
- step S25 the adding unit 35 reads the pixel of interest from the teacher data storage unit 23.
- for each code supplied from the code operation unit 34, the adding unit 35 performs the addition of equation (8) on the read pixel of interest and the student data constituting the filter tap selected for that pixel and supplied from the filter tap selection unit 32. Then, the process proceeds to step S26.
- step S26 the pixel-of-interest selection unit 31 determines whether any pixel of the teacher data stored in the teacher data storage unit 23 has not yet been selected as the pixel of interest. In the case of a positive determination result, that is, when such a pixel remains in the teacher data, the process returns to step S22, and the processing from step S22 onward is executed again.
- in the case of a negative determination result, the adding unit 35 supplies, to the tap coefficient calculator 36, the left-side matrix and the right-side vector of equation (8) obtained for each code by the processing of steps S22 to S26. Then, the process proceeds to step S27.
- step S27 the tap coefficient calculator 36 calculates the tap coefficients w n for each code by solving the normal equation for that code, which is formed by the left-side matrix and the right-side vector of equation (8) supplied from the adding unit 35 for each code. Then, the series of processing ends.
- the tap coefficient calculator 36 may output, for example, a preset tap coefficient.
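- the accumulation of steps S22 to S26 and the solution of step S27 can be sketched as follows, assuming that equation (8) is the standard least-squares normal equation, with a left-side matrix accumulated from outer products of the filter taps and a right-side vector accumulated from tap-times-teacher products; function and variable names here are illustrative, not from the embodiment:

```python
import numpy as np

def learn_tap_coefficients(samples_per_code, num_taps):
    """Solve the per-code normal equation A w = b for the tap coefficients.

    samples_per_code maps a code to a list of (filter_tap, teacher_pixel)
    pairs; filter_tap is a length-num_taps array of student pixels.
    """
    coefficients = {}
    for code, samples in samples_per_code.items():
        A = np.zeros((num_taps, num_taps))  # left-side matrix of eq. (8)
        b = np.zeros(num_taps)              # right-side vector of eq. (8)
        for taps, teacher in samples:       # the "adding" of steps S22-S26
            A += np.outer(taps, taps)
            b += np.asarray(taps) * teacher
        # Solve the normal equation; fall back to least squares if singular.
        try:
            coefficients[code] = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            coefficients[code] = np.linalg.lstsq(A, b, rcond=None)[0]
    return coefficients
```

- in this sketch, each code accumulates its own matrix and vector, mirroring the per-code addition performed by the adding unit 35.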
- FIG. 6 is a block diagram showing a configuration example of the digital camera 40 according to the first embodiment of the present invention.
- the digital camera 40 captures an aberration-free still image or moving image using design information of the optical system.
- the digital camera 40 includes an optical system 41, an image sensor 42, a storage unit 43, a signal processing unit 44, an image correction unit 45, an output unit 46, and a control unit 47.
- the optical system 41 includes, for example, a zoom lens (not shown), a focus lens, an aperture, an optical low pass filter, and the like.
- the optical system 41 causes light from the outside to be incident on the image sensor 42.
- the optical system 41 supplies shooting information such as zoom information of a zoom lens (not shown) and aperture information of an aperture to the image correction unit 45.
- the image sensor 42 is, for example, a CMOS image sensor.
- the image sensor 42 receives incident light from the optical system 41, performs photoelectric conversion, and outputs an image as an electrical signal corresponding to the incident light from the optical system 41.
- the storage unit 43 temporarily stores the image output from the image sensor 42.
- the signal processing unit 44 performs signal processing such as white balance processing, demosaicing processing, gamma correction processing, and noise removal processing on the image stored in the storage unit 43, and supplies the signal-processed image to the image correction unit 45.
- the image correction unit 45 performs image correction processing such as aberration correction on the image supplied from the signal processing unit 44 using the imaging information supplied from the optical system 41.
- the output unit 46 includes, for example, (a) a display configured of liquid crystal, (b) a driver that drives a recording medium such as a semiconductor memory, a magnetic disk, or an optical disk, or (c) a communicator connected to a communication path such as a network, various cables, or a wireless link, and outputs the image from the image correction unit 45 in a manner corresponding to its form.
- when the output unit 46 is a display, it displays the image from the image correction unit 45 as a so-called through image.
- when the output unit 46 is a driver that drives a recording medium, it records the image from the image correction unit 45 on the recording medium.
- when the output unit 46 is a communicator, it outputs the image from the image correction unit 45 to the outside via a communication path such as a network.
- the control unit 47 controls each block constituting the digital camera 40 in accordance with the user's operation or the like.
- the image sensor 42 receives the incident light from the optical system 41, and outputs an image according to the incident light.
- the image output from the image sensor 42 is supplied to the storage unit 43 and stored.
- the image stored in the storage unit 43 is subjected to signal processing by the signal processing unit 44 and to image correction by the image correction unit 45.
- the image after the image correction by the image correction unit 45 is output to the outside through the output unit 46.
- FIG. 7 is a block diagram showing an example of the configuration of the image conversion apparatus 50 that performs the aberration correction process performed by the image correction unit 45 of FIG.
- the image correction unit 45 includes, for example, a plurality of image conversion devices 50 provided for each of RGB color components.
- the image conversion device 50 performs an aberration correction process on an image for each of RGB color components using a table or a coefficient corresponding to each color component.
- the image conversion device 50 obtains, by two-dimensional image simulation (optical simulation), the correspondence between the ideal aberration-free first image and the aberration-affected second image obtained through the optical system 41. Then, the image conversion device 50 performs the aberration correction on the second image using the distortion aberration correction table and the coefficients obtained by the learning process of the code classification type adaptive filter described above.
- Distortion aberration means that an image is expanded or contracted in the radial direction from the center of the image to distort the image.
- the center of the image is the intersection of the optical axis of the imaging optical system and the imaging surface of the image sensor.
- point P on the first image shown in FIG. 8A expands and contracts in the radial direction to correspond to point P 'on the second image shown in FIG. 8B.
- the degree of distortion depends on the distance r from the center of the image.
- the distortion in an ideal imaging optical system is point symmetric with respect to the center of the image.
- the correspondence between the image without distortion and the image with distortion is as shown in the following equations (9) to (11).
- in general, the correspondence between the distance r and the distance r' cannot be approximated with high accuracy by a function. Therefore, in the present embodiment, a table obtained by optical simulation or from measured values of the imaging optical system is used as the correspondence between the distance r and the distance r'.
- specifically, a distortion aberration table d(r), which is obtained by optical simulation or from actually measured values of the imaging optical system and which shows the correspondence between the distance r and the distance r', is used.
- with this table, (x', y') can be obtained from (x, y) without using a large amount of data.
- (x', y') is obtained by equations (13) and (14).
- the image conversion device 50 acquires the pixel value at (x', y') of the input second image and converts it into the pixel value at (x, y) of the first image; by doing so for every pixel, the first image without distortion can be obtained.
- r' is given by equation (12). That is, the distortion aberration table d(r) is a table for converting the distance r into the distance r', and its value changes depending on the distance r.
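- although equations (9) to (14) are not reproduced here, the mapping can be sketched as follows, assuming the usual radially symmetric reading: the distance r of equation (9) is the Euclidean distance from the image center, the table d(r) converts r into r' as in equation (12), and equations (13) and (14) scale (x, y) by the ratio r'/r. Names are illustrative:

```python
import math

def map_to_distorted(x, y, distortion_table, cx=0.0, cy=0.0):
    """Map an ideal-image position (x, y) to the distorted-image
    position (x', y') using a distortion table d(r).

    distortion_table is a sorted list of (r, r_prime) pairs obtained by
    optical simulation or measurement; intermediate r values are
    linearly interpolated. Coordinates are relative to the image center
    (cx, cy). The radial model x' = x * r'/r is an assumed reading of
    equations (9) and (13)-(14), which the text does not reproduce.
    """
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)                      # eq. (9): distance from center
    if r == 0.0:
        return cx, cy                           # the center does not move
    # Linearly interpolate r' between the bracketing table entries.
    for (r0, rp0), (r1, rp1) in zip(distortion_table, distortion_table[1:]):
        if r0 <= r <= r1:
            t = (r - r0) / (r1 - r0)
            r_prime = rp0 + t * (rp1 - rp0)
            break
    else:
        r_prime = distortion_table[-1][1]       # clamp beyond the table
    scale = r_prime / r                         # eqs. (13) and (14)
    return cx + dx * scale, cy + dy * scale
```

- the table lookup with interpolation reflects the point made above: the r-to-r' correspondence is carried by data, not by a closed-form function.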
- the distortion aberration table d (r) is obtained by optical simulation or actual measurement of the optical system, and is stored in the distortion aberration table storage unit 52 of FIG.
- the distortion aberration table storage unit 52 of FIG. 7 stores a plurality of distortion aberration tables d (r) corresponding to various zoom amounts.
- the distortion aberration correction position information generation processing unit 53 is supplied with photographing information, such as the zoom amount of the zoom lens, from the optical system 41 of FIG. 6. When the pixel of interest is selected by the pixel-of-interest selection unit 51, the unit 53 generates distortion aberration correction position information and supplies it to the distortion aberration correction position information storage unit 54.
- FIG. 10 is a flowchart for explaining the processing of the distortion aberration correction position information generation processing unit 53 of FIG. 7.
- step S31 the distortion aberration correction position information generation processing unit 53 refers to the photographing information supplied from the optical system 41, for example the zoom amount of the zoom lens, and reads out, from the distortion aberration table storage unit 52 of FIG. 7, the plurality of distortion aberration tables associated with zoom amounts in the vicinity of the current zoom amount. Then, the process proceeds to step S32.
- step S32 the distortion aberration correction position information generation processing unit 53 interpolates each value of the plurality of distortion aberration tables read in step S31 using an interpolation method such as linear interpolation or Lagrange interpolation. Thereby, a distortion aberration table corresponding to the current zoom amount is obtained. Then, the distortion correction position information generation processing unit 53 selects an address which has not been processed yet in the address space corresponding to the first image as the processing address (x, y). Then, the process proceeds to step S33.
- step S33 the distortion aberration correction position information generation processing unit 53 calculates the distance r from the center of the processing address (x, y) to a predetermined number of decimal places using Equation (9). Then, the process proceeds to step S34.
- step S34 the distortion aberration correction position information generation processing unit 53 obtains the distance r', expanded or contracted by the distortion, using the interpolated distortion aberration table obtained in step S32 and the distance r calculated in step S33.
- specifically, the distances corresponding to the table entries immediately before and after the distance r obtained in step S33 are read out from the interpolated distortion aberration table, and the read distances are linearly interpolated. As a result, a highly accurate distance r' is obtained. Then, the process proceeds to step S35.
- step S35 the distortion aberration correction position information generation processing unit 53 uses the ratio of the distance r obtained in step S33 to the distance r' obtained in step S34 to calculate (x', y') to a predetermined number of decimal places according to equations (13) and (14). The unit 53 supplies (x', y') to the distortion aberration correction position information storage unit 54 as distortion aberration correction position information. Then, the process proceeds to step S36.
- step S36 the distortion correction position information generation processing unit 53 determines whether or not there is an unprocessed address. If there is a positive determination result, that is, there is an unprocessed address, the process returns to step S33. Then, the processes from step S33 to step S36 are repeated until there is no unprocessed address. Then, when a negative determination result is obtained in step S36, the series of processes is ended.
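- steps S31 to S36 can be sketched as follows, assuming equations (9), (13), and (14) take the usual radially symmetric form (r is the distance from the image center, and (x', y') is (x, y) scaled by r'/r); table format and names are illustrative:

```python
import math

def generate_correction_positions(width, height, table_lo, table_hi, zoom_t):
    """Sketch of steps S31-S36: build distortion aberration correction
    position information (x', y') for every address (x, y).

    table_lo and table_hi are distortion tables for the two stored zoom
    amounts bracketing the current zoom (step S31); zoom_t in [0, 1] is
    the blend factor. Each table maps an integer distance r to r'.
    """
    # Step S32: interpolate a table for the current zoom amount.
    table = [lo + zoom_t * (hi - lo) for lo, hi in zip(table_lo, table_hi)]
    cx, cy = width / 2.0, height / 2.0
    positions = {}
    for y in range(height):                        # steps S33-S36 over addresses
        for x in range(width):
            r = math.hypot(x - cx, y - cy)         # step S33: eq. (9)
            i = min(int(r), len(table) - 2)        # step S34: bracket r and
            frac = r - i                           # linearly interpolate r'
            r_prime = table[i] + frac * (table[i + 1] - table[i])
            scale = r_prime / r if r > 0 else 1.0  # step S35: eqs. (13), (14)
            positions[(x, y)] = (cx + (x - cx) * scale, cy + (y - cy) * scale)
    return positions
```

- the outer loop plays the role of the "unprocessed address" check of step S36: every address of the first image is visited exactly once.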
- the coefficient storage unit 55 of FIG. 7 stores coefficients for correcting resolution deterioration for each piece of imaging information (zoom amount, aperture value, etc.).
- here, resolution degradation refers to degradation caused by aberrations other than distortion, by aperture blur due to diffraction, or by the optical low-pass filter.
- the coefficient is determined by a learning method described later.
- when photographing information such as the zoom amount of the zoom lens and the aperture value is supplied from the optical system 41 of FIG. 6, the coefficient interpolation processing unit 56 of FIG. 7 generates position-specific coefficients corresponding to the current zoom amount and aperture value and supplies them to the position-specific coefficient storage unit 57.
- FIG. 11 is a flowchart for explaining the processing of the coefficient interpolation processing unit 56.
- step S41 the coefficient interpolation processing unit 56 refers to the photographing information (for example, the zoom information of the zoom lens, the F-number, and the type of interchangeable lens) supplied from the optical system 41, and reads out, from the coefficient storage unit 55, a plurality of coefficients corresponding to values in the vicinity of the current photographing information. Then, the process proceeds to step S42.
- step S42 the coefficient interpolation processing unit 56 interpolates the plurality of coefficients read out in step S41 using an interpolation method such as linear interpolation or Lagrange interpolation, and supplies the interpolated coefficients (position-specific coefficients) to the position-specific coefficient storage unit 57 of FIG. 7. Then, the process proceeds to step S43.
- step S43 the coefficient interpolation processing unit 56 determines whether there is a code (unprocessed code) that has not been processed yet, and in the case of a positive determination result, selects an unprocessed code. Then, the process returns to step S41. If the determination result is negative, the process proceeds to step S44. That is, the process from step S41 to step S43 is repeated until there is no unprocessed code.
- step S44 the coefficient interpolation processing unit 56 determines whether or not there is an unprocessed position; in the case of a positive determination result, it selects the unprocessed position, and the process returns to step S41. In the case of a negative determination result, the series of processing ends. That is, the processing from step S41 to step S44 is repeated until there is no unprocessed position.
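- steps S41 to S44 reduce to interpolating, for every position and code, between the coefficient sets stored for the nearest items of photographing information; a minimal sketch with linear interpolation standing in for the "linear interpolation or Lagrange interpolation" named above (names illustrative):

```python
def interpolate_coefficients(coeffs_lo, coeffs_hi, t):
    """Sketch of steps S41-S44: interpolate the stored coefficients
    toward the current photographing information.

    coeffs_lo / coeffs_hi map a (position, code) key to a list of tap
    coefficients stored for the two nearest photographing-information
    values (e.g. zoom amounts); t in [0, 1] is where the current value
    falls between them.
    """
    interpolated = {}
    for key in coeffs_lo:                 # loop over positions and codes
        lo, hi = coeffs_lo[key], coeffs_hi[key]
        interpolated[key] = [a + t * (b - a) for a, b in zip(lo, hi)]
    return interpolated
```

- iterating the dictionary keys covers the double loop of the flowchart: the "unprocessed code" check of step S43 and the "unprocessed position" check of step S44.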
- as described above, the distortion aberration correction position information corresponding to the current imaging condition is written in the distortion aberration correction position information storage unit 54, and the position-specific coefficients are written in the position-specific coefficient storage unit 57. The preparation for the image conversion processing in which the image conversion device 50 converts the first image into the second image is then complete.
- Image Conversion Processing by Image Conversion Device 50: when the imaging information is supplied from the optical system 41 and the signal-processed image (first image) is supplied from the signal processing unit 44, the image conversion device 50 performs the following processing.
- the target pixel selection unit 51 sequentially selects pixels constituting the second image as a target pixel, and supplies information representing the selected target pixel to a predetermined block.
- the second image is an image to be generated, and is an image in which the influence of the aberration is removed from the first image.
- based on the address (x, y) representing the pixel of interest supplied from the pixel-of-interest selection unit 51, the distortion aberration correction position information storage unit 54 reads out the distortion aberration correction position information (x', y') stored by the above-described processing. The distortion aberration correction position information storage unit 54 then rounds off the decimal places of the distortion aberration correction position information (x', y').
- the distortion aberration correction position information storage unit 54 supplies, to the frame storage unit 58, integer position information which is information obtained by rounding off the distortion aberration correction position information (x ', y').
- the distortion aberration correction position information storage unit 54 supplies the lower several bits of the integer position information to the position information alignment unit 59.
- the distortion aberration correction position information storage unit 54 supplies the fractional part (the information below the decimal point) of the distortion aberration correction position information (x', y') to the first aberration correction processing unit 60.
- the frame storage unit 58 reads out the pixel values of the pixels of the first image in raster scan order according to the integer position information supplied from the distortion aberration correction position information storage unit 54, and thereby outputs a first image whose distortion is corrected on a per-pixel basis. Further, as shown in FIG. 12B, the frame storage unit 58 reads the pixel values of four or more pixels in the vicinity of the pixel indicated by the integer position information, and supplies the read pixel values to the position information alignment unit 59.
- the first image (FIG. 8B) affected by the distortion has an extended portion and a reduced portion as compared to the second image without distortion (FIG. 8A).
- therefore, the pixels of the first image with distortion must be linked with their surrounding pixels so that they can be used as taps (a plurality of taps is required).
- accordingly, when integer position information is supplied from the distortion aberration correction position information storage unit 54, the frame storage unit 58, as shown in FIG. 12B, reads from the first image the pixel values of the pixel at the integer position and of its neighboring pixels, and supplies the read pixel values to the position information alignment unit 59. This avoids the problem that necessary pixel values are not read out.
- the number of pixels read in the vicinity of each item of integer position information is 1 pixel when the scaling factor r'/r is 1 or less, and 2 pixels each in the horizontal and vertical directions, for a total of 4 or more pixels, when the scaling factor r'/r is more than 1 and 2 or less.
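- the rule just described can be expressed as a per-axis neighborhood size derived from the local scaling factor; the extension beyond a factor of 2 is an assumption here:

```python
import math

def neighborhood_size(scale):
    """Number of neighboring pixels to read per axis for a local
    scaling factor r'/r, following the rule in the text: 1 pixel when
    the factor is 1 or less, 2 per axis (4 in total) when it is more
    than 1 and 2 or less; rounding up beyond 2 is an assumed extension.
    """
    return max(1, math.ceil(scale))
```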
- the position information alignment unit 59 receives the pixel values of the first image supplied from the frame storage unit 58 in raster scan order and the lower position information (the lower several bits of the integer position information) supplied from the distortion aberration correction position information storage unit 54, and performs alignment processing based on the position information so that the neighboring pixels of the pixel of interest can be read out.
- FIG. 13 is a conceptual view showing processing of the position information alignment unit 59.
- the position information alignment unit 59 has a storage medium such as a buffer memory or registers, and stores the pixel values sequentially supplied from the frame storage unit 58 linked with the lower position information supplied from the distortion aberration correction position information storage unit 54.
- a pixel value in the vicinity of the pixel of interest is read out as tap information from the position information alignment unit 59.
- the tap information is supplied to the first aberration correction processing unit 60.
- the first aberration correction processing unit 60 has an image conversion device 70 shown in FIG.
- the image conversion device 70 performs phase shift correction, as the first aberration correction, on the input first image, and outputs a third image that has undergone the first aberration correction.
- the third image is supplied to the second aberration correction processing unit 61.
- the second aberration correction processing unit 61 has an image conversion device 110 shown in FIG.
- the image conversion apparatus 110 performs an aberration correction process mainly for improving sharpness as a second aberration correction on the input third image, and outputs a second image after the second aberration correction.
- the image conversion device 70 is a code classification adaptive filter that performs phase shift correction on an input image while suppressing jaggies and ringing that may occur in the input image.
- the image conversion device 70 includes a pixel-of-interest selection unit 71, a filter tap selection unit 72, a code tap selection unit 73, a code operation unit 74, a coefficient storage unit 75, a product-sum operation unit 76, and an interpolation operation unit 77.
- the input image (first image) supplied to the image conversion device 70 is supplied to the filter tap selection unit 72 and the code tap selection unit 73.
- the input image is, for example, pixel values of RGB color components.
- the pixel-of-interest selection unit 71 sequentially selects, as pixels of interest, pixels constituting the third image which is an output image of the image conversion device 70, and supplies information representing the selected pixel-of-interest to a predetermined block.
- the filter tap selection unit 72 selects, as filter taps, the pixel values of a plurality of pixels of the first image near the position of the pixel of interest, and supplies the selected filter taps to the product-sum operation unit 76.
- the code tap selection unit 73 selects, as a code tap, the pixel values of a plurality of pixels forming the first image near the position of the pixel of interest, as in the filter tap selection unit 13 of FIG. 1, for example.
- the selected code tap is supplied to the code operation unit 74.
- the code operation unit 74 calculates, based on the code tap from the code tap selection unit 73, for example a quantization value obtained by DR quantization and a difference ratio from the central pixel value, generates a code representing the features of the code tap, and supplies the code to the coefficient storage unit 75.
- the coefficient storage unit 75 stores tap coefficients for each code.
- the tap coefficient is obtained by learning described later.
- when the code is supplied from the code operation unit 74 and the phase information (the fractional part of the distortion aberration correction position information) is supplied from the distortion aberration correction position information storage unit 54 shown in FIG. 7, the coefficient storage unit 75 reads out a plurality of tap coefficients that correspond to the supplied code and lie in the vicinity of the supplied phase information. The plurality of tap coefficients read out are supplied to the product-sum operation unit 76.
- for each code, the coefficient storage unit 75 stores a total of 25 tap coefficients, shown as white circles or black circles with respect to the origin (the position of the pixel of interest), one for each combination of the five 1/4-phase shifts {-2/4, -1/4, 0, +1/4, +2/4} in the horizontal direction and in the vertical direction.
- the first circle from the right and the first circle from the top represent tap coefficients to shift the phase by +2/4 in the horizontal direction and +2/4 in the vertical direction with respect to the origin.
- the second circle from the right and the fourth circle from the top represent tap coefficients for shifting the phase by +1/4 in the horizontal direction and -1/4 in the vertical direction with respect to the origin.
- when the code is supplied to the coefficient storage unit 75, the 25 tap coefficients corresponding to the supplied code are identified, and 16 tap coefficients centered on the phase information are selected from among them. For example, as shown in FIG. 15, when generating the pixel value at point P, 4 tap coefficients in each of the horizontal and vertical directions, shifted in 1/4-phase steps around the supplied phase information, that is, a total of 16 tap coefficients (black circles), are selected. The selected tap coefficients are supplied to the product-sum operation unit 76.
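- the selection of 16 of the 25 stored tap-coefficient sets can be sketched as follows, assuming that "centered on the phase information" means taking the 4 grid offsets nearest the phase in each direction (an assumed reading of FIG. 15; names are illustrative):

```python
def select_tap_coefficients(coeff_grid, phase_h, phase_v):
    """Select the 16 of the 25 stored tap-coefficient sets nearest the
    supplied phase information.

    coeff_grid maps a (horizontal, vertical) phase offset, each drawn
    from {-2/4, -1/4, 0, +1/4, +2/4}, to one tap-coefficient set.
    phase_h / phase_v are the fractional phases of the pixel to
    generate.
    """
    offsets = [-0.5, -0.25, 0.0, 0.25, 0.5]

    def nearest4(p):
        # The 4 of the 5 stored phase offsets closest to phase p.
        return sorted(offsets, key=lambda o: abs(o - p))[:4]

    return {(h, v): coeff_grid[(h, v)]
            for h in nearest4(phase_h) for v in nearest4(phase_v)}
```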
- the product-sum operation unit 76 includes 16 product-sum operation units 91 to 106.
- the number of product-sum operation units 91 to 106 may be any number as long as it corresponds to the number of pixels required in the interpolation process of the interpolation operation unit 77, and is not limited to sixteen.
- a filter tap (for example, each pixel value of 13 pixels shown in FIG. 17 described later) is input to the product-sum operation units 91 to 106.
- 16 tap coefficients shown in FIG. 15 are input to the product-sum operation units 91 to 106, respectively.
- in this example, the number of selected tap coefficients is the same as the number of product-sum operation units, but it may also differ from the number of product-sum operation units.
- the product-sum operation unit 91 performs a product-sum operation using coefficient group 1 (for example, the tap coefficients corresponding to the first circle from the left and the first from the top among the 16 black circles in FIG. 15) and the filter tap (for example, each pixel value of the 13 pixels described above), and outputs pixel value 1.
- the product-sum operation unit 92 performs a product-sum operation using coefficient group 2 (for example, the tap coefficients corresponding to the first circle from the left and the second from the top among the 16 black circles in FIG. 15) and the filter tap (for example, each pixel value of the 13 pixels described above), and outputs pixel value 2.
- similarly, the product-sum operation units 93 and 94 perform product-sum operations using coefficient groups 3 and 4 (for example, the tap coefficients corresponding to the first circle from the left and the third and fourth from the top among the 16 black circles in FIG. 15) and the filter tap (for example, each pixel value of the 13 pixels), and output pixel values 3 and 4.
- the product-sum operation units 95 to 98 perform product-sum operations using coefficient groups 5 to 8 (for example, the tap coefficients respectively corresponding to the second circle from the left and the first to fourth from the top among the 16 black circles in FIG. 15) and the filter taps (for example, each pixel value of the 13 pixels described above), and output pixel values 5 to 8.
- the product-sum operation units 99 to 102 perform product-sum operations using coefficient groups 9 to 12 (for example, the tap coefficients respectively corresponding to the third circle from the left and the first to fourth from the top among the 16 black circles in FIG. 15) and the filter taps (for example, each pixel value of the 13 pixels described above), and output pixel values 9 to 12.
- the product-sum operation units 103 to 106 perform product-sum operations using coefficient groups 13 to 16 (for example, the tap coefficients respectively corresponding to the first circle from the right and the first to fourth from the top among the 16 black circles in FIG. 15) and the filter taps (for example, each pixel value of the 13 pixels described above), and output pixel values 13 to 16.
- in this way, the product-sum operation unit 76 supplies the plurality of tap coefficients (coefficient groups 1 to 16) supplied from the coefficient storage unit 75 to the product-sum operation units 91 to 106, obtains the plurality of pixel values 1 to 16 for the pixel of interest, and supplies these pixel values to the interpolation operation unit 77.
- the interpolation operation unit 77 applies, to the plurality of pixel values 1 to 16 supplied from the product-sum operation unit 76, interpolation processing such as linear interpolation or Lagrange interpolation using the phase information supplied from the distortion aberration correction position information storage unit 54, with a precision finer than 1/4 phase. It thereby calculates the pixel value of the pixel of interest and outputs the third image obtained by the interpolation processing.
- the product-sum operation unit 76 and the interpolation operation unit 77 can be interchanged. That is, the interpolation operation unit 77 can interpolate a plurality of tap coefficients supplied from the coefficient storage unit 75 by an interpolation method such as linear interpolation or Lagrange interpolation. At this time, the product-sum operation unit 76 may perform product-sum operation using the interpolated tap coefficient.
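- the combination of the product-sum operation unit 76 and the interpolation operation unit 77 can be sketched as follows, with bilinear interpolation of the intermediate pixel values standing in for the linear or Lagrange interpolation named above; the 4x4 ordering of the coefficient sets and the choice of the central cell are assumptions:

```python
import numpy as np

def phase_shift_pixel(filter_taps, coeff_sets, frac_h, frac_v):
    """Sketch of the product-sum operation unit 76 followed by the
    interpolation operation unit 77.

    filter_taps: the pixel values of the filter tap (e.g. 13 pixels).
    coeff_sets: a 4x4 grid of tap-coefficient vectors (the 16 selected
    sets), ordered by increasing phase in each direction.
    frac_h / frac_v in [0, 1]: where the pixel of interest falls inside
    the central cell of the grid.
    """
    taps = np.asarray(filter_taps, dtype=float)
    # Product-sum units 91-106: one intermediate pixel value per set.
    values = np.array([[np.dot(coeff_sets[i][j], taps) for j in range(4)]
                       for i in range(4)])
    # Interpolation unit 77: bilinear blend of the four central values.
    v00, v01 = values[1][1], values[1][2]
    v10, v11 = values[2][1], values[2][2]
    top = v00 + frac_h * (v01 - v00)
    bottom = v10 + frac_h * (v11 - v10)
    return top + frac_v * (bottom - top)
```

- swapping the two stages, as the text notes, would mean bilinearly blending the coefficient vectors first and performing a single product-sum afterwards; the result is the same because both operations are linear.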
- FIG. 17 is a diagram showing a configuration example of the filter taps selected by the filter tap selection unit 72 of FIG.
- Thin circle marks represent pixels (input pixels) of the input image and also represent pixels (output pixels) of the output image.
- the circle of the dot pattern indicates the output pixel in which the phase difference of the pixel position occurs with respect to the input pixel. That is, the output pixel is present at a position shifted in phase from the input pixel. Therefore, the first image, which is an input image, is converted into a phase-shifted third image based on the phase information from the distortion correction position information storage unit 54.
- Solid circles indicate the target pixel, that is, the output pixel.
- a bold circle represents an input pixel to be a filter tap. The reason why the black circle mark and the bold line circle overlap is that the input pixel at the position corresponding to the target pixel is one of the filter taps.
- an input pixel serving as a filter tap for the target pixel is selected based on the input pixel closest to the position of the input pixel corresponding to the target pixel.
- FIG. 18 is a diagram showing a configuration example of a code tap selected by the code tap selection unit 73 of FIG.
- a bold circle represents an input pixel to be a code tap.
- the other circles are the same as in FIG.
- the input pixel to be the code tap for the target pixel is selected, for example, based on the input pixel closest to the position of the input pixel corresponding to the target pixel.
- the filter taps and the code taps may have the same pattern as shown in FIGS. 17 and 18 or different patterns.
- FIG. 19 is a block diagram showing a configuration example of the code operation unit 74.
- the code operation unit 74 includes a quantization operation unit 81, a center pixel difference ratio detection unit 82, and a conversion table storage unit 83.
- the quantization operation unit 81 quantizes the pixel values of the input pixels forming the code tap supplied from the code tap selection unit 73 using, for example, 1-bit DR quantization, arranges the resulting quantization values of the input pixels in a predetermined order, and supplies them to the conversion table storage unit 83 as a quantization code.
- FIG. 20 is a diagram for explaining an example of 1-bit DR quantization.
- the horizontal axis represents the order (or position) of the input pixels constituting the code tap.
- the vertical axis represents the pixel value of the input pixel constituting the code tap.
- the minimum pixel value Min is subtracted from the maximum pixel value Max among the pixel values of the input pixels constituting the code tap, and the simple dynamic range DR is obtained.
- a level dividing the simple dynamic range DR into two is set as a threshold.
- the pixel value of each input pixel constituting the code tap is binarized based on the set threshold value and converted into a 1-bit quantization value.
- a quantization code is obtained by arranging the quantization values of each pixel in a predetermined order. For example, in the case of the code tap shown in FIG. 18, 13 pixels constituting the code tap are subjected to the 1-bit DR quantization process. As a result, a 13-bit DR quantization code representing the feature quantity of the input code tap is obtained.
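- 1-bit DR quantization of a code tap can be sketched as follows; whether a pixel exactly equal to the threshold maps to 1 or to 0 is not specified in the text and is an assumption here:

```python
def dr_quantization_code(code_tap):
    """1-bit DR quantization as in FIG. 20: subtract the minimum pixel
    value Min from the maximum pixel value Max to get the simple
    dynamic range DR, threshold each pixel at the level dividing DR in
    two, and pack the resulting 1-bit values in tap order into a
    quantization code.
    """
    lo, hi = min(code_tap), max(code_tap)
    threshold = (lo + hi) / 2.0         # level dividing the simple DR in two
    code = 0
    for pixel in code_tap:              # predetermined order: tap order
        code = (code << 1) | (1 if pixel >= threshold else 0)
    return code                         # 13 taps yield a 13-bit code
```

- with the 13-pixel code tap of FIG. 18, this yields the 13-bit DR quantization code described above.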
- the central pixel difference ratio detection unit 82 obtains a code by the method described later from the pixel value of the input pixel forming the code tap supplied from the code tap selection unit 73, and supplies the code to the coefficient storage unit 75 in FIG.
- FIG. 21 is a diagram for explaining the central pixel difference ratio processing performed by the central pixel difference ratio detection unit 82.
- the horizontal axis represents the position of the input pixel constituting the code tap.
- the vertical axis represents the pixel value of the input pixel constituting the code tap.
- the central pixel difference ratio detection unit 82 performs the following process. First, in the pixel position (horizontal axis) direction, the central pixel difference ratio detection unit 82 sets a predetermined range centered on the target pixel (center pixel) as range 1, and sets a wider range that is also centered on the target pixel and includes range 1 as range 2.
- the central pixel difference ratio detection unit 82 calculates, within range 1, the maximum value of the difference between the pixel value of each pixel and the pixel value of the target pixel.
- the central pixel difference ratio detection unit 82 likewise calculates, within range 2, the maximum value of the difference between the pixel value of each pixel and the pixel value of the target pixel.
- the central pixel difference ratio detection unit 82 outputs the code "1" when the ratio of the two maximum difference values is equal to or greater than a predetermined value, and outputs the code "0" when the ratio is less than the predetermined value. As a result, a 1-bit code is obtained.
- in this way, the central pixel difference ratio detection unit 82 classifies the code tap into code "0" or "1" according to the ratio of the pixel differences between the narrow range and the wide range centered on the target pixel (center pixel). The code "0" indicates that the pixel correlation between the target pixel (center pixel) and the far-distance taps is high and ringing does not occur. The code "1" indicates that the pixel correlation between the target pixel (center pixel) and the far-distance taps is low and ringing may occur.
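The classification above can be sketched as follows; the concrete ranges, the ratio threshold, and the tie-breaking for a flat range 1 are my assumptions, not values from the patent:

```python
def center_pixel_difference_code(tap, center, r1, r2, ratio_threshold=2.0):
    """Classify a 1-D code tap by the central pixel difference ratio.

    d1 and d2 are the maximum absolute differences from the center pixel
    within the narrow range 1 (half-width r1) and the wider range 2
    (half-width r2); code 1 (ringing possible) is output when d2/d1
    reaches the threshold, otherwise code 0."""
    c = tap[center]
    d1 = max(abs(tap[i] - c) for i in range(center - r1, center + r1 + 1))
    d2 = max(abs(tap[i] - c) for i in range(center - r2, center + r2 + 1))
    if d1 == 0:                 # flat near the center: any far edge dominates
        return 1 if d2 > 0 else 0
    return 1 if d2 / d1 >= ratio_threshold else 0
```

For example, a completely flat tap yields code 0 (high correlation with far taps), while a tap that is flat around the center but has a step near its edge yields code 1 (low correlation, ringing possible).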
- the code obtained by the central pixel difference ratio detection unit 82 is supplied to the coefficient storage unit 75 of FIG. 14.
- the conversion table storage unit 83 stores in advance a conversion table for converting the quantization code obtained by the quantization operation unit 81 into an address (new code).
- the conversion table storage unit 83 refers to the conversion table, converts the supplied quantization code into the new code corresponding to the quantization code, and supplies the converted code to the coefficient storage unit 75 of FIG. 14.
- alternatively, the conversion table storage unit 83 may be omitted; in this case, each code obtained by the quantization operation unit 81 and the central pixel difference ratio detection unit 82 is supplied to the coefficient storage unit 75 as it is.
- the conversion table storage unit 83 may convert a code whose occurrence frequency is lower than a threshold into a representative predetermined address (new code).
- when the tap coefficients generated for different converted new codes approximate each other, the conversion table storage unit 83 may convert those different new codes into the same code (that is, unify them by selecting one of the different new codes). Thereby, the storage capacity of the coefficient storage unit 75 can be reduced.
- the conversion table stored in the conversion table storage unit 83 is prepared for each product-sum operation unit, but the present technology is not limited to this.
- alternatively, a conversion table common to a predetermined plurality of product-sum operation units may be prepared; that is, a plurality of conversion tables can be integrated into one.
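A sketch of this conversion-table idea is shown below; the frequency threshold, the address assignment order, and all names are my assumptions:

```python
from collections import Counter

def build_conversion_table(observed_codes, min_count, fallback_address=0):
    """Map quantization codes to compact addresses (new codes).

    Codes occurring at least min_count times each receive their own address;
    rarely occurring codes all share one representative fallback address,
    which shrinks the required coefficient storage."""
    counts = Counter(observed_codes)
    table = {}
    next_address = fallback_address + 1
    for code in sorted(counts):
        if counts[code] >= min_count:
            table[code] = next_address
            next_address += 1
        else:
            table[code] = fallback_address   # low-frequency codes unified
    return table

# codes 5 and 9 are frequent enough to keep their own address; code 2 is not
table = build_conversion_table([5, 5, 5, 9, 9, 2], min_count=2)
```

The coefficient memory then only needs one coefficient set per address instead of one per raw quantization code.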
- FIG. 23 is a flowchart for explaining an example of image conversion processing by the image conversion device 70 of FIG. 14.
- in step S51, the target pixel selection unit 71 selects, from among the pixels constituting the output image for the input image input to the image conversion device 70, one pixel that has not yet been focused on as the target pixel. Then, the process proceeds to step S52.
- the target pixel selection unit 71 selects, for example, a pixel not selected as a target pixel among the pixels constituting the output image as a target pixel in the raster scan order.
- in step S52, the code tap selection unit 73 selects, from the input image, the pixels forming the code tap for the target pixel.
- similarly, the filter tap selection unit 72 selects, from the input image, the pixels constituting the filter tap for the pixel of interest.
- the code taps are supplied to the code operation unit 74, and the filter taps are supplied to the product-sum operation unit 76. Then, the process proceeds to step S53.
- in step S53, the code operation unit 74 performs a code operation on the target pixel based on the code tap for the target pixel supplied from the code tap selection unit 73.
- the code operation unit 74 supplies the code of the pixel of interest obtained by the code operation to the coefficient storage unit 75. Then, the process proceeds to step S54.
- in step S54, from among the tap coefficients stored at the address corresponding to the code supplied from the code operation unit 74, the coefficient storage unit 75 selects and outputs a plurality of nearby tap coefficients corresponding to the phase information supplied from the distortion correction position information storage unit 54 of FIG. 7.
- the product-sum operation unit 76 acquires a plurality of tap coefficients from the coefficient storage unit 75. Then, the process proceeds to step S55.
- in step S55, the product-sum operation unit 76 performs a plurality of predetermined product-sum operations using the filter tap selected by the filter tap selection unit 72 and the plurality of tap coefficients acquired from the coefficient storage unit 75. Thereby, the product-sum operation unit 76 obtains and outputs a plurality of pixel values. Then, the process proceeds to step S56.
- in step S56, the interpolation operation unit 77 performs interpolation by an interpolation method such as linear interpolation or Lagrange interpolation, based on the plurality of pixel values output from the product-sum operation unit 76 and the phase information supplied from the optical system 41. The pixel value of the target pixel is thereby determined. Then, the process proceeds to step S57.
- in step S57, the target pixel selection unit 71 determines whether or not there is a pixel in the output image that has not yet been selected as a target pixel. If the determination result is affirmative, the process returns to step S51, and the processes from step S51 are performed again. If the determination result is negative, this series of processes ends.
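The loop of steps S51 to S57 can be sketched as follows; every callable is a hypothetical stand-in for the corresponding unit, and the toy values below are mine:

```python
def convert_image(out_pixels, select_code_tap, select_filter_tap,
                  compute_code, coefficients, phase_of, interpolate):
    """Skeleton of the per-pixel loop of FIG. 23 (steps S51-S57)."""
    output = {}
    for p in out_pixels:                          # S51/S57: pixels of interest
        code_tap = select_code_tap(p)             # S52: build the code tap
        filter_tap = select_filter_tap(p)         # S52: build the filter tap
        code = compute_code(code_tap)             # S53: classify into a code
        values = [sum(c * x for c, x in zip(coefs, filter_tap))
                  for coefs in coefficients[code]]    # S54/S55: product-sums
        output[p] = interpolate(values, phase_of(p))  # S56: phase interpolation
    return output

# toy example: one pixel, two coefficient sets, linear interpolation between them
result = convert_image(
    out_pixels=[(0, 0)],
    select_code_tap=lambda p: [3.0, 7.0],
    select_filter_tap=lambda p: [3.0, 7.0],
    compute_code=lambda tap: 0,
    coefficients={0: [[1.0, 0.0], [0.0, 1.0]]},
    phase_of=lambda p: 0.5,
    interpolate=lambda v, ph: (1 - ph) * v[0] + ph * v[1],
)
```

Each coefficient set yields one candidate pixel value; the interpolation step then blends the candidates according to the phase of the pixel of interest.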
- the tap coefficients stored in the coefficient storage unit 75 shown in FIG. 14 are obtained by the learning device 20 shown in FIG. 3. Specifically, a high-quality image is stored as a learning image in the learning image storage unit 21 shown in FIG. 3.
- the teacher data generation unit 22 generates teacher data (teacher images) by filtering the learning image stored in the learning image storage unit 21 or the like while shifting the phase to each of a plurality of phases relative to the student data (student image) described later.
- the teacher data generation unit 22 supplies the generated teacher data to the teacher data storage unit 23.
- the student data generation unit 24 generates student data (student image) by filtering the learning image stored in the learning image storage unit 21 or the like.
- the student data generation unit 24 supplies the generated student data to the student data storage unit 25.
- the learning unit 26 reads teacher data from the teacher data storage unit 23, and reads student data from the student data storage unit 25.
- the learning unit 26 derives tap coefficients for each code and for each phase by setting up and solving the normal equation of Equation (8) for each code and each phase using the read teacher data and student data.
- the codes are classified according to the ratio of the pixel difference between the narrow range and the wide range centered on the pixel of interest (center pixel).
- for the code indicating that the pixel correlation between the pixel of interest (center pixel) and the far-distance taps is high and ringing does not occur, tap coefficients that emphasize high frequencies are obtained.
- conversely, for the code indicating that the pixel correlation is low and ringing may occur, tap coefficients that suppress ringing are obtained.
- by using the above-described tap coefficients, the image conversion device 70 of FIG. 14 can output an image in which distortion is corrected without losing sharpness, while preventing image quality deterioration due to the occurrence of jaggies and ringing.
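As a sketch of how such coefficients can be learned per (code, phase) bin, assuming Equation (8) is the usual least-squares normal equation of the code classification adaptive filter (the helper name and NumPy usage are mine):

```python
import numpy as np

def learn_tap_coefficients(filter_taps, teacher_values):
    """Solve the normal equation (A^T A) w = A^T b for one (code, phase) bin.

    Each row of A is a filter tap taken from the student image; the matching
    entry of b is the teacher pixel that tap should reproduce."""
    A = np.asarray(filter_taps, dtype=float)
    b = np.asarray(teacher_values, dtype=float)
    return np.linalg.solve(A.T @ A, A.T @ b)

# toy bin: each teacher pixel is 2*tap[0] + 3*tap[1], so learning recovers [2, 3]
w = learn_tap_coefficients([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                           [2.0, 3.0, 5.0])
```

Repeating this solve for every code and every phase yields the coefficient sets that the coefficient storage unit holds.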
- the second aberration correction processing unit 61 of FIG. 7 has an image conversion device 110 shown in FIG. 24.
- the image conversion device 110 mainly performs, using a code classification type adaptive filter, aberration correction processing for improving sharpness on the third image whose distortion has been corrected by the first aberration correction processing unit 60.
- in addition to improving sharpness, the image conversion device 110 also performs aberration correction for improving image quality degradation caused by any aberration.
- the input image (third image) supplied to the image conversion device 110 is supplied to the code tap selection unit 113 and the filter tap selection unit 112.
- the pixel-of-interest selection unit 111 sequentially selects, as the pixel of interest, the pixels constituting the second image that is the output image of the image conversion device 110, and supplies information representing the selected pixel of interest to a predetermined block.
- the filter tap selection unit 112 selects, as filter taps, the pixel values of a plurality of pixels of the third image located near the position of the pixel of interest.
- the filter taps are supplied to the product-sum operation unit 116.
- the code tap selection unit 113 selects, as a code tap, the pixel values of a plurality of pixels of the third image near the position of the pixel of interest, in the same manner as, for example, the code tap selection unit 13 of FIG. 1.
- the selected code tap is supplied to the code operation unit 114.
- the code calculation unit 114 calculates the quantization value by DR quantization, the difference ratio from the central pixel value, etc. based on the code tap from the code tap selection unit 113, and classifies the pixel of interest into a code.
- the code is supplied to the coefficient storage unit 115.
- the coefficient storage unit 115 stores, for each code, the plurality of tap coefficients obtained by the coefficient interpolation processing unit 56 of FIG. 7, divided by the position information (block information) obtained by dividing the position information from the target pixel selection unit 111 as shown in FIG. 28.
- from among the stored tap coefficients, the coefficient storage unit 115 selects and reads out the tap coefficients stored at the address corresponding to the code supplied from the code operation unit 114, namely the nearby tap coefficients corresponding to the block information supplied from the target pixel selection unit 111.
- the plurality of read tap coefficients are supplied to the product-sum operation unit 116. For example, when processing the pixel of interest P, as shown in FIG. 28, the coefficients of a total of 16 blocks (four in each of the horizontal and vertical directions), indicated by the bold line in FIG. 28, which are necessary for third-order Lagrange interpolation, are selected and supplied.
- the product-sum operation unit 116 includes a plurality of (16 in the present embodiment) product-sum operation units 91 to 106 for the pixels required for interpolation by the interpolation operation unit 117.
- the product-sum operation unit 116 calculates a plurality of pixel values 1 to 16 by supplying the plurality of tap coefficients supplied from the coefficient storage unit 115 to the respective product-sum operation units 91 to 106, and supplies the calculated pixel values to the interpolation operation unit 117.
- the interpolation operation unit 117 interpolates the plurality of pixel values 1 to 16 supplied from the product-sum operation unit 116 by an interpolation method such as linear interpolation or Lagrange interpolation, using the position information supplied from the target pixel selection unit 111, and outputs the result as an output image (second image).
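As a sketch of the third-order Lagrange interpolation over the 16 (4 x 4) product-sum outputs, assuming interpolation nodes at positions -1, 0, 1, 2 around the pixel of interest (the node layout is my assumption):

```python
def lagrange_weights(t):
    """Cubic (third-order) Lagrange weights for nodes at -1, 0, 1, 2
    and an interpolation position t in [0, 1)."""
    return [
        -t * (t - 1) * (t - 2) / 6,
        (t + 1) * (t - 1) * (t - 2) / 2,
        -(t + 1) * t * (t - 2) / 2,
        (t + 1) * t * (t - 1) / 6,
    ]

def lagrange_interp_4x4(values, tx, ty):
    """Separable cubic Lagrange interpolation over a 4x4 block of the
    16 product-sum outputs (values[row][col])."""
    wx, wy = lagrange_weights(tx), lagrange_weights(ty)
    return sum(wy[r] * sum(wx[c] * values[r][c] for c in range(4))
               for r in range(4))
```

At (tx, ty) = (0, 0) the weights collapse onto the node at position 0, so the interpolation simply returns that product-sum output; for intermediate positions the 16 values are blended with weights that sum to 1.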
- the order of the product-sum operation unit 116 and the interpolation operation unit 117 can be interchanged. That is, the interpolation operation unit 117 can interpolate the plurality of tap coefficients supplied from the coefficient storage unit 115 by an interpolation method such as linear interpolation or Lagrange interpolation. In this case, the product-sum operation unit 116 performs the product-sum operation using the interpolated tap coefficients.
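The two orders are interchangeable because both the product-sum and the interpolation are linear operations; a small numerical check (toy values mine) illustrates this for linear interpolation:

```python
def product_sum(coeffs, taps):
    # one product-sum operation: sum of coefficient * filter-tap pixel value
    return sum(c * x for c, x in zip(coeffs, taps))

def lerp(a, b, t):
    # linear interpolation between two values
    return (1 - t) * a + t * b

taps = [3.0, -1.0, 4.0]                    # a filter tap (toy values)
c1, c2 = [0.2, 0.5, 0.3], [0.1, 0.8, 0.1]  # two neighboring coefficient sets
t = 0.25                                   # interpolation position

# (a) product-sum per coefficient set, then interpolate the pixel values
v_a = lerp(product_sum(c1, taps), product_sum(c2, taps), t)

# (b) interpolate the coefficient sets first, then a single product-sum
c_mix = [lerp(a, b, t) for a, b in zip(c1, c2)]
v_b = product_sum(c_mix, taps)
```

Order (b) trades the multiple product-sum units for a single one at the cost of interpolating every coefficient, which can be attractive in hardware.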
- FIG. 25 is a diagram showing a configuration example of the filter taps selected by the filter tap selection unit 112 of FIG. 24. Thin circles indicate input pixels, which are also output pixels. Unlike FIG. 17, each output pixel is generated at the same position as an input pixel.
- Solid circles indicate the target pixel, that is, the output pixel.
- a bold circle represents an input pixel serving as a filter tap. The solid circle mark and a bold circle overlap because the input pixel at the position corresponding to the target pixel is one of the filter taps.
- an input pixel serving as a filter tap for the target pixel is selected based on the input pixel closest to the position of the input pixel corresponding to the target pixel.
- FIG. 26 is a diagram showing a configuration example of the code tap selected by the code tap selection unit 113 of FIG. 24.
- a bold circle represents an input pixel to be a code tap.
- the other circles are the same as in FIG.
- the input pixel to be the code tap for the target pixel is selected, for example, based on the input pixel closest to the position of the input pixel corresponding to the target pixel.
- the filter tap and the code tap may have the same pattern as shown in FIGS. 25 and 26, or may have different patterns.
- FIG. 27 is a block diagram showing a configuration example of the code operation unit 114.
- the code operation unit 114 includes a quantization operation unit 121 and a conversion table storage unit 122.
- the quantization operation unit 121 quantizes the pixel values of the input pixels forming the code tap supplied from the code tap selection unit 113, using, for example, the above-described 1-bit DR quantization, arranges the quantized values of the input pixels in a predetermined order, and supplies the arranged quantized values to the conversion table storage unit 122 as a quantization code. For example, in the case of the code tap shown in FIG. 26, the nine pixels forming the code tap are subjected to the 1-bit DR quantization process. As a result, a 9-bit DR quantization code representing the feature amount of the input code tap is obtained.
- the conversion table storage unit 122 stores in advance a conversion table for converting the quantization code obtained by the quantization operation unit 121 into an address (new code).
- the conversion table storage unit 122 refers to the conversion table, converts the supplied quantization code into the new code corresponding to the quantization code, and supplies the converted code to the coefficient storage unit 115 of FIG. 24.
- alternatively, the conversion table storage unit 122 may be omitted. In this case, the quantization code obtained by the quantization operation unit 121 is supplied to the coefficient storage unit 115 as it is.
- the conversion table storage unit 122 may convert a code whose occurrence frequency is lower than the threshold value into a representative predetermined address (new code) as shown in FIG. 22B.
- likewise, when the tap coefficients generated for different converted new codes approximate each other, those different new codes may be converted into the same code (that is, unified by selecting one of the different new codes). As a result, the storage capacity of the coefficient storage unit 115 can be reduced.
- the conversion table stored in the conversion table storage unit 122 is prepared for each product-sum operation unit, but the present technology is not limited to this.
- alternatively, a conversion table common to a predetermined plurality of product-sum operation units may be prepared; that is, a plurality of conversion tables can be integrated into one.
- FIG. 29 is a flowchart for explaining an example of image conversion processing by the image conversion device 110 of FIG.
- in step S71, the pixel-of-interest selection unit 111 selects, from among the pixels constituting the output image for the input image input to the image conversion device 110, one pixel that has not yet been focused on as the target pixel. Then, the process proceeds to step S72.
- the target pixel selection unit 111 selects, for example, a pixel not selected as a target pixel among the pixels constituting the output image as a target pixel in the raster scan order.
- in step S72, the code tap selection unit 113 selects, from the input image, the pixels forming the code tap for the target pixel.
- similarly, the filter tap selection unit 112 selects, from the input image, the pixels constituting the filter tap for the target pixel.
- the code taps are supplied to the code operation unit 114, and the filter taps are supplied to the product-sum operation unit 116. Then, the process proceeds to step S73.
- in step S73, the code operation unit 114 performs a code operation on the target pixel based on the code tap for the target pixel supplied from the code tap selection unit 113.
- the code operation unit 114 supplies the code of the target pixel obtained by the code operation to the coefficient storage unit 115. Then, the process proceeds to step S74.
- in step S74, the coefficient storage unit 115, which stores tap coefficients for each piece of position information (block information), selects and reads out, from among the stored tap coefficients, the tap coefficients stored at the address corresponding to the code supplied from the code operation unit 114, namely the nearby tap coefficients corresponding to the block information supplied from the target pixel selection unit 111.
- the product-sum operation unit 116 obtains a plurality of tap coefficients output from the coefficient storage unit 115. Then, the process proceeds to step S75.
- in step S75, the product-sum operation unit 116 performs a plurality of predetermined product-sum operations using the filter tap selected by the filter tap selection unit 112 and the plurality of tap coefficients acquired from the coefficient storage unit 115. Thereby, the product-sum operation unit 116 obtains and outputs a plurality of pixel values. Then, the process proceeds to step S76.
- in step S76, the interpolation operation unit 117 performs interpolation by an interpolation method such as linear interpolation or Lagrange interpolation, based on the plurality of pixel values output from the product-sum operation unit 116 and the phase information supplied from the optical system 41. The pixel value of the target pixel is thereby obtained. Then, the process proceeds to step S77.
- in step S77, the target pixel selection unit 111 determines whether or not there is a pixel in the output image that has not yet been selected as a target pixel. If the determination result is affirmative, the process returns to step S71, and the processes from step S71 are performed again. If the determination result is negative, this series of processes ends.
- the tap coefficients stored in the coefficient storage unit 115 shown in FIG. 24 are obtained by the learning device 20 shown in FIG. 3. Specifically, the learning image storage unit 21 illustrated in FIG. 3 stores a high-quality image with high sharpness as a learning image.
- the teacher data generation unit 22 supplies the learning image stored in the learning image storage unit 21 to the teacher data storage unit 23 as teacher data (teacher image) as it is.
- the student data generation unit 24 reads a learning image from the learning image storage unit 21, and generates student data (student images) with degraded image quality from the read learning image using optical simulation data, distortion correction data, and the like based on the design data of the optical system 41 of FIG. 6.
- the student data generation unit 24 supplies the generated student data to the student data storage unit 25.
- the learning unit 26 reads teacher data from the teacher data storage unit 23, and reads student data from the student data storage unit 25.
- the learning unit 26 derives tap coefficients for each code and for each position by setting up and solving the normal equation of Equation (8) for each code and each position using the read teacher data and student data.
- the learning device 20 can obtain tap coefficients for correcting any aberration by classifying and learning codes in accordance with position information of an image (lens).
- by using the above-described tap coefficients, the image conversion device 110 of FIG. 24 can correct an image whose quality has been degraded by aberrations other than distortion, for example, spherical aberration, coma, astigmatism, curvature of field, and lateral chromatic aberration, and can obtain a homogeneous output image free of peripheral light falloff and loss of sharpness.
- FIG. 30 is a flowchart for explaining an example of image conversion processing by the image conversion device 50 of FIG. 7.
- in step S91, the image conversion device 50 acquires shooting information, such as the zoom value of the imaging lens and the F value of the aperture, from the optical system 41 of FIG. 6. Then, the process proceeds to step S92.
- in step S92, the distortion aberration correction position information generation processing unit 53 of FIG. 7 reads out, from the aberration correction table storage unit 52, the aberration correction tables in the vicinity of the shooting information, such as a predetermined zoom value, based on the shooting information such as the zoom value input in step S91.
- the distortion aberration correction position information generation processing unit 53 generates distortion aberration correction position information up to the decimal point in accordance with the flowchart shown in FIG. 10 described above.
- the generated distortion aberration correction position information is stored in the distortion aberration correction position information storage unit 54.
- similarly, the coefficient interpolation processing unit 56 reads out from the coefficient storage unit 55 the coefficient information in the vicinity of the shooting information, such as the zoom value and the F value, input in step S91, and generates position-specific coefficients in accordance with the flowchart of FIG. The generated position-specific coefficients are stored in the position-specific coefficient storage unit 57.
- in step S93, the first image is written to the frame storage unit 58 of FIG. 7. Then, the process proceeds to step S94.
- in step S94, the pixel-of-interest selection unit 51 selects, from among the pixels constituting the second image for the first image stored in the frame storage unit 58, one pixel that has not yet been focused on as the pixel of interest. That is, the pixel-of-interest selection unit 51 selects, as the pixel of interest, a pixel not yet selected as a pixel of interest from among the pixels constituting the second image, for example in raster scan order. Then, the process proceeds to step S95.
- in step S95, the distortion aberration correction position information storage unit 54 reads out the stored distortion aberration correction position information (x', y') based on the address (x, y) representing the pixel of interest supplied from the pixel-of-interest selection unit 51. The distortion correction position information storage unit 54 rounds off the decimal part of the read distortion correction position information (x', y').
- the distortion aberration correction position information storage unit 54 supplies, to the frame storage unit 58, integer position information which is information obtained by rounding off the distortion aberration correction position information (x ', y').
- in addition, the distortion correction position information storage unit 54 supplies the lower several bits of the integer position information to the position information alignment unit 59.
- the distortion aberration correction position information storage unit 54 supplies the first aberration correction unit 60 with information on the decimal point of the distortion aberration correction position information (x ′, y ′). Then, the process proceeds to step S96.
- in step S96, the frame storage unit 58 reads out the pixel values of the pixels of the first image, for example in raster scan order, in accordance with the integer position information supplied from the distortion correction position information storage unit 54, thereby outputting a first image whose distortion has been corrected in pixel units.
- at this time, the frame storage unit 58 reads out the pixel values of four or more pixels in the vicinity of the pixel indicated by the integer position information.
- the frame storage unit 58 supplies the pixel value to the position information alignment unit 59. Then, the process proceeds to step S97.
- in step S97, the position information alignment unit 59 arranges the pixel values supplied in raster scan order from the frame storage unit 58 in accordance with the lower position information (the lower several bits of the position information) supplied from the distortion correction position information storage unit 54. As a result, the position information alignment unit 59 identifies the neighboring pixels of the pixel of interest shown in FIG. 12A, and supplies the tap information (pixel values) in the vicinity of the pixel of interest to the first aberration correction processing unit 60 as the first image. Then, the process proceeds to step S98.
- in step S98, the first aberration correction processing unit 60 performs distortion correction processing based on the tap information in the vicinity of the pixel of interest supplied from the position information alignment unit 59 and the position information after the decimal point supplied from the distortion aberration correction position information storage unit 54, and outputs a third image. The third image is supplied to the second aberration correction processing unit 61. Then, the process proceeds to step S99.
- in step S99, the second aberration correction processing unit 61 performs aberration correction other than distortion correction on the third image supplied from the first aberration correction processing unit 60, using the target pixel information supplied from the target pixel selection unit 51 and the position-specific coefficients supplied from the position-specific coefficient storage unit 57, and outputs a second image. Then, the process proceeds to step S100.
- in step S100, the target pixel selection unit 51 determines whether or not there is a pixel in the output image that has not yet been selected as a target pixel. If the determination result is affirmative, the process returns to step S94, and the processes from step S94 are repeated. If the determination result is negative, this series of processes ends.
- the first aberration correction processing unit 60 may be configured to have an image conversion device 150 shown in FIG. 31 instead of the image conversion device 70 shown in FIG.
- for example, pixel values of RGB color components are input to the image conversion device 150 as an input image (first image).
- the image conversion device 150 corrects minute distortion of the input image and obtains an output image in which jaggies and ringing that may occur in the image are suppressed.
- as minor distortion aberration correction of the input image, the image conversion device 150 performs image processing for converting the input image into an image with reduced distortion, such as interpolation processing using a sinc function or interpolation processing using a triangular wave filter (linear interpolation processing).
- for example, the image conversion device 150 performs horizontal and vertical interpolation processing using a sinc function to obtain the image of an image signal Vc, as described below.
- let the pixel interval of the image signal Va be Δt, the pixel data at pixel position nΔt of the image signal Va be x(nΔt), and a pixel position of the image signal Vc be t.
- the pixel data x(t) at the pixel position t is then obtained according to the following Equation (15), using the pixel values of N (an appropriate finite number of) pixels of the image signal Va located before and after the pixel position t.
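Equation (15) itself is not reproduced in this text; assuming it is the standard finite sinc (sampling-function) interpolation sum, a sketch is:

```python
import math

def sinc_interpolate(x, t, dt, N):
    """Rebuild x(t) from the N samples x(n*dt) of the image signal Va
    located around the position t, using the sinc sampling kernel."""
    n0 = int(math.floor(t / dt))
    total = 0.0
    for n in range(n0 - N // 2 + 1, n0 + N // 2 + 1):
        if 0 <= n < len(x):
            u = (t - n * dt) / dt
            k = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
            total += x[n] * k          # sinc-weighted contribution of sample n
    return total
```

When t falls exactly on a sample position, the kernel is 1 there and (numerically) 0 at the other samples, so the interpolation returns the sample value itself; applying this kernel along the vertical and then the horizontal direction corresponds to the separable processing of the vertical interpolation unit 152 and the horizontal interpolation unit 153.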
- as shown in FIG. 31, the image conversion device 150 has a pixel-of-interest selection unit 151, a vertical interpolation unit 152, and a horizontal interpolation unit 153.
- the input image supplied to the image conversion device 150 is supplied to the vertical interpolation unit 152.
- the target pixel selection unit 151 sequentially selects the pixels forming the output image as a target pixel, and supplies information representing the selected target pixel to a predetermined block.
- the vertical interpolation unit 152 performs vertical interpolation processing on the pixel of interest using the pixel values of the input image, and supplies the vertically interpolated image to the horizontal interpolation unit 153.
- the horizontal interpolation unit 153 performs horizontal interpolation processing on the pixel of interest using the image supplied from the vertical interpolation unit 152, and outputs the horizontally interpolated image as an output image (third image).
- the digital camera 40 according to the first embodiment captures aberration-free still images or moving images using design information of the optical system.
- however, the digital camera 40 according to the first embodiment cannot be used when the design information of the optical system cannot be obtained.
- in contrast, the digital camera according to the second embodiment can capture still images or moving images even when design information of the optical system is not available, such as when an interchangeable lens made by another company is used.
- the digital camera according to the second embodiment is configured substantially the same as the first embodiment.
- the image correction unit 45 shown in FIG. 6 has an image conversion device 160 shown in FIG. 32 in place of the image conversion device 50 shown in FIG.
- the image conversion device 160 can also be used alone for geometric conversion such as image editing.
- FIG. 32 is a block diagram showing a configuration example of the image conversion device 160. In the following, the same parts as those described above are denoted by the same reference numerals, and redundant description is omitted. Similar to the image conversion device 50 shown in FIG. 7, the image conversion device 160 includes a pixel-of-interest selection unit 51, a distortion aberration table storage unit 52, a distortion aberration correction position information generation processing unit 53, a frame storage unit 58, a position information alignment unit 59, and a first aberration correction processing unit 60.
- the image conversion device 160 is the image conversion device 50 shown in FIG. 7 with the coefficient storage unit 55, the coefficient interpolation processing unit 56, and the position-specific coefficient storage unit 57 removed, and with a distortion aberration correction position information storage unit 161 and a second aberration correction processing unit 162 provided in place of the distortion aberration correction position information storage unit 54 and the second aberration correction processing unit 61.
- The distortion aberration correction position information storage unit 161 reads the distortion aberration correction position information (x', y') stored by the above-described processing, based on the address (x, y) representing the pixel of interest supplied from the pixel-of-interest selection unit 51. The distortion aberration correction position information storage unit 161 rounds off the decimal places of the distortion aberration correction position information (x', y').
- The distortion aberration correction position information storage unit 161 supplies the frame storage unit 58 with the integer position information, that is, the information obtained by rounding off the distortion aberration correction position information (x', y').
- The distortion aberration correction position information storage unit 161 supplies the lower several bits of the integer position information to the position information alignment unit 59.
- The distortion aberration correction position information storage unit 161 supplies the fractional part (the digits below the decimal point) of the distortion aberration correction position information (x′, y′) to the first aberration correction processing unit 60. Further, it supplies the distance (Δx, Δy) to the adjacent pixel to the second aberration correction processing unit 162.
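The split performed here — an integer pixel address for the frame-memory read and a fractional (phase) part for the correction filters — can be sketched as follows. The truncation scheme and `frac_bits` precision are illustrative assumptions, not taken from the embodiment:

```python
def split_position(x_prime: float, y_prime: float, frac_bits: int = 4):
    """Split a distortion-corrected sampling position (x', y') into the
    integer position information used to address the frame memory and the
    fractional phase used by the phase-shift correction filter.

    `frac_bits` (quantization of the phase so it can index a finite
    coefficient table) is a hypothetical choice for this sketch.
    """
    xi, yi = int(x_prime), int(y_prime)      # integer position information
    fx, fy = x_prime - xi, y_prime - yi      # information below the decimal point
    # quantize the fractional phase to frac_bits of precision
    qx = round(fx * (1 << frac_bits)) % (1 << frac_bits)
    qy = round(fy * (1 << frac_bits)) % (1 << frac_bits)
    return (xi, yi), (qx, qy)
```

For example, position (10.25, 3.5) yields integer address (10, 3) and a quantized phase usable as a coefficient-table index.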
- The second aberration correction processing unit 162 includes the image conversion device 170 of FIG. 33.
- the image conversion device 170 mainly performs aberration correction processing for improving sharpness. Similar to the image conversion device 110 of FIG. 24, the image conversion device 170 includes a pixel of interest selection unit 111, a code tap selection unit 112, a filter tap selection unit 113, a code calculation unit 114, and a product-sum operation unit 116.
- the image conversion device 170 has a coefficient storage unit 171 and an interpolation operation unit 172 instead of the coefficient storage unit 115 and the interpolation operation unit 117 in the image conversion device 110 of FIG.
- the coefficient storage unit 171 stores a plurality of tap coefficients obtained by a learning method described later.
- Based on the distance information (Δx, Δy) to the adjacent pixel supplied from the distortion aberration correction position information storage unit 161 of FIG. 32 and the code supplied from the code calculation unit 114, the coefficient storage unit 171 selects a plurality of tap coefficients stored at the address corresponding to the code and in the vicinity of the distance information (Δx, Δy). The coefficient storage unit 171 then reads out the selected coefficients and supplies them to the product-sum operation unit 116.
- The product-sum operation unit 116 includes a product-sum operator for each of the pixels required for interpolation by the interpolation operation unit 172.
- The product-sum operation unit 116 calculates a plurality of pixel values by supplying the plurality of tap coefficients supplied from the coefficient storage unit 171 to the respective product-sum operators, and supplies the calculated pixel values to the interpolation operation unit 172.
- The interpolation operation unit 172 interpolates the plurality of pixel values supplied from the product-sum operation unit 116, using the distance information (Δx, Δy) to the adjacent pixel supplied from the distortion aberration correction position information storage unit 161 of FIG. 32 and an interpolation method such as linear interpolation or Lagrange interpolation, to obtain the output image (second image).
- The order of the product-sum operation unit 116 and the interpolation operation unit 172 can be interchanged. That is, the interpolation operation unit 172 may first interpolate the plurality of tap coefficients supplied from the coefficient storage unit 171 by an interpolation method such as linear interpolation or Lagrange interpolation, and the product-sum operation unit 116 may then perform the product-sum operation using the interpolated tap coefficients.
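As a minimal sketch of the interpolation step above, assuming the linear (bilinear) variant over four candidate values weighted by the fractional distance (Δx, Δy); the unit may equally use Lagrange interpolation with more points:

```python
def bilinear(p00, p10, p01, p11, dx, dy):
    """Linearly interpolate between four candidate values computed by the
    product-sum operators, weighted by the fractional distances dx, dy
    (each in [0, 1)) to the adjacent pixel.

    p00..p11 name the four neighbors; these names are illustrative.
    """
    top = p00 * (1 - dx) + p10 * dx       # blend along x at the top row
    bottom = p01 * (1 - dx) + p11 * dx    # blend along x at the bottom row
    return top * (1 - dy) + bottom * dy   # blend the two rows along y
```

With dx = dy = 0.5 the result is the plain average of the four candidates.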
- The image conversion device 170 configured as described above can perform image conversion processing in the same manner as the flowchart shown in FIG. 29.
- The tap coefficients stored in the coefficient storage unit 171 shown in FIG. 33 are obtained by the learning device 20 shown in FIG. 3. Specifically, the learning image storage unit 21 illustrated in FIG. 3 stores high-quality images with high sharpness as learning images.
- the teacher data generation unit 22 supplies the learning image stored in the learning image storage unit 21 to the teacher data storage unit 23 as teacher data (teacher image) as it is.
- The student data generation unit 24 reads the learning image from the learning image storage unit 21 and filters the read learning image with a low-pass filter or the like, thereby generating student data (student images) with lower sharpness than the teacher data.
- the student data generation unit 24 supplies the generated student data to the student data storage unit 25.
- the learning unit 26 reads teacher data from the teacher data storage unit 23, and reads student data from the student data storage unit 25.
- The learning unit 26 uses the read teacher data and student data to set up and solve the normal equations of equation (8) for each code and for each coefficient of the horizontal and vertical low-pass filters, thereby deriving tap coefficients for each code and for each of the horizontal and vertical low-pass filters.
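Equation (8) is not reproduced in this excerpt; assuming it is the standard least-squares normal equation for a linear filter, the per-code learning can be sketched as follows (NumPy is used for the linear solve; dictionary keying by code is an illustrative layout):

```python
import numpy as np

def learn_tap_coefficients(taps_by_code, targets_by_code):
    """Solve, independently for each code, the least-squares normal
    equations A w = b, with A = sum_k x_k x_k^T and b = sum_k x_k y_k over
    the (student filter tap, teacher pixel) pairs classified into that code.
    Returns one tap-coefficient vector per code.
    """
    coeffs = {}
    for code, X in taps_by_code.items():
        X = np.asarray(X, dtype=float)                       # (K, N) student taps
        y = np.asarray(targets_by_code[code], dtype=float)   # (K,) teacher pixels
        A = X.T @ X                                          # normal-equation matrix
        b = X.T @ y                                          # right-hand side
        coeffs[code] = np.linalg.solve(A, b)
    return coeffs
```

For a code whose teacher pixels are exactly y = 2·x0 + 3·x1, the learned coefficients recover (2, 3).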
- FIG. 34 is a flowchart explaining an example of image conversion processing by the image conversion device 160 of FIG. 32.
- The processing from step S121 to step S126 is performed in the same manner as the processing from step S91 to step S96 shown in FIG. 30, except for the acquisition of the F-number (step S91) and the generation of position-specific coefficients (step S92). Accordingly, the processes from step S127 onward will be described.
- In step S127, the position information alignment unit 59 performs the same processing as in step S97 and supplies tap information (pixel values) of the first image in the vicinity of the pixel of interest to the first aberration correction processing unit 60. Further, the position information alignment unit 59 supplies the distance information (Δx, Δy) to the adjacent pixel to the second aberration correction processing unit 162. Then, the process proceeds to step S128.
- In step S128, the first aberration correction processing unit 60 performs processing similar to that in step S98, performs distortion aberration correction processing, and outputs a third image. Then, the process proceeds to step S129.
- In step S129, the second aberration correction processing unit 162 uses the pixel-of-interest information supplied from the pixel-of-interest selection unit 51 and the distance information (Δx, Δy) to the adjacent pixel supplied from the distortion aberration correction position information storage unit 161 to apply, as described above, aberration correction other than distortion aberration correction (for example, processing for improving sharpness) to the third image supplied from the first aberration correction processing unit 60, and outputs the second image. Then, the process proceeds to step S130.
- In step S130, the pixel-of-interest selection unit 51 determines whether the output image contains any pixel that has not yet been selected as the pixel of interest. If the determination is affirmative, the process returns to step S124 and the processes from step S124 onward are repeated. If the determination is negative, the series of processes ends.
- the present invention is not limited to the embodiments described above, and various modifications are possible within the scope of the matters described in the claims.
- the present invention is also applicable to cloud computing in which one function is shared and processed by a plurality of devices via a network.
- one apparatus may execute each step described in the above-described flowchart, or a plurality of apparatuses may share and execute each step. Furthermore, when one step is configured by a plurality of processes, one apparatus may execute a plurality of processes, or a plurality of apparatuses may share and execute the respective processes.
- In the embodiments above, the targets of the image processing are the pixel values of the three RGB color components, but the present invention is not limited to this.
- Pixel values of four or more color components including white, yellow, and the like, pixel values of CMYK color components, and pixel values of luminance signals can also be processing targets.
- The present invention is also applicable to imaging devices such as digital cameras, so-called smartphones, surveillance cameras, endoscopes, microscopes, and cinema cameras, as well as to applications for editing images.
- Broadcast station cameras, endoscopes, microscopes, and the like often shoot in real time for long periods, and are therefore often divided into a camera head and an image processing apparatus.
- a system divided into a camera head and an image processing device is called a system camera.
- the present invention is also applicable to such a system camera.
- Images taken with interchangeable-lens digital cameras, such as high-end single-lens digital cameras and professional cameras, are often edited by a signal processing apparatus.
- the lens-interchangeable digital camera records data such as a photographed image and photographing information on a memory card.
- the signal processing apparatus edits an image by reading data such as an image and shooting information recorded on a memory card.
- the present invention is also applicable to this signal processing device.
- the means for supplying an image or the like from the lens-interchangeable digital camera to the signal processing apparatus is not limited to the memory card, but may be a magnetic disk, an optical disk, or communication means such as a network, various cables, or wireless.
- the image correction unit 45 shown in FIG. 6 may be configured by hardware, or may be configured by a computer (processor) on which a program capable of executing the above-described series of processes is installed.
Abstract
Description
Patent Document 3 discloses a technique for preventing image quality deterioration when performing resolution conversion.
Patent Documents 4 and 5 disclose techniques for accurately obtaining a correction amount for correcting image distortion caused by lens aberration.
The code classification adaptive filter is an image conversion process that converts a first image into a second image, and performs various kinds of signal processing depending on how the first and second images are defined.
For example, when the first image is a low-resolution image and the second image is a high-resolution image, the code classification adaptive filter performs super-resolution processing to improve the resolution.
When the second image has more or fewer pixels than the first image, the code classification adaptive filter performs image resizing (enlargement or reduction) processing.
When the first image is a phase-shifted image and the second image is an image whose phase is not shifted, the code classification adaptive filter performs phase shift processing.
The pixel-of-interest selection unit 11 sequentially selects each pixel constituting the second image as the pixel of interest, and supplies information representing the selected pixel of interest to a predetermined block.
The tap structures (the structures of the selected pixels) of the filter taps and the code taps may be the same or different.
The pixel values of the N-bit pixels constituting the code tap described above are arranged in a predetermined order, and the arranged bit string is output as a DR quantization code.
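The 1-bit DR quantization of a code tap described above can be sketched as follows (pixel values and tap lengths in the example are illustrative):

```python
def dr_quantization_code(code_tap):
    """1-bit DR (dynamic range) quantization: each pixel of the code tap is
    re-quantized to a single bit against the tap's own dynamic range
    (max - min), and the bits are concatenated in a fixed order into an
    integer classification code.
    """
    mn, mx = min(code_tap), max(code_tap)
    threshold = mn + (mx - mn) / 2.0          # midpoint of the dynamic range
    bits = ['1' if v >= threshold else '0' for v in code_tap]
    return int(''.join(bits), 2)              # bit string -> integer code
```

A 13-pixel code tap thus yields a 13-bit DR quantization code, i.e. a value in [0, 2^13).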
In step S11, the pixel-of-interest selection unit 11 selects, as the pixel of interest, one of the pixels constituting the second image corresponding to the first image input to the image conversion device 10 that has not yet been processed. For example, the pixel-of-interest selection unit 11 selects the pixel of interest, in raster scan order, from the pixels constituting the second image that have not yet been selected. The process then proceeds to step S12.
Next, the product-sum operation of the product-sum operation unit 16 and the learning of the tap coefficients stored in the coefficient storage unit 15 will be described. Here, it is assumed that the second image is a high-quality image and the first image is a low-quality image obtained by applying LPF (Low Pass Filter) processing to the high-quality image to lower its image quality (resolution).
Let yk denote the true pixel value of the k-th sample high-quality pixel, and yk' denote the predicted value of the true value yk obtained by equation (1). The prediction error ek of the predicted value yk' with respect to the true value yk is expressed by equation (2).
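Equations (1) and (2) are not displayed in this excerpt; written out under the linear prediction used throughout this description (with x_{n,k} the filter-tap pixel values of the k-th sample and w_n the tap coefficients), they are, as a reconstruction consistent with the surrounding text:

```latex
y'_k = \sum_{n=1}^{N} w_n \, x_{n,k} \tag{1}
```

```latex
e_k = y_k - y'_k \tag{2}
```

The learning described later chooses the w_n that minimize the sum of squared errors e_k over the training samples of each code.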
The learning image storage unit 21 of the learning device 20 stores learning images used for learning the tap coefficients wn. The learning images are, for example, high-quality images with high resolution.
The student data generation unit 24 reads a learning image from the learning image storage unit 21. From the learning image, the student data generation unit 24 generates student data (student images), that is, the students for tap coefficient learning whose pixel values are to be converted by the mapping serving as the prediction operation of equation (1), and supplies them to the student data storage unit 25.
The student data storage unit 25 stores the low-quality images supplied from the student data generation unit 24 as student data.
The pixel-of-interest selection unit 31 sequentially selects the pixels constituting the teacher data stored in the teacher data storage unit 23 as the pixel of interest, and supplies information representing the pixel of interest to a predetermined block.
The coefficient storage unit 15 in the image conversion device 10 of FIG. 1 stores the tap coefficients wn for each code obtained as described above.
In step S21, the teacher data generation unit 22 generates teacher data from the learning images stored in the learning image storage unit 21 and supplies the generated teacher data to the teacher data storage unit 23. The student data generation unit 24 generates student data from the learning images stored in the learning image storage unit 21 and supplies the generated student data to the student data storage unit 25. The process then proceeds to step S22. The learning images best suited as teacher data or student data are selected according to the image conversion processing performed by the code classification adaptive filter.
The code tap selection unit 33 shown in FIG. 4 selects, for the pixel of interest, the student data to be used as code taps from the student data stored in the student data storage unit 25 shown in FIG. 3, and supplies it to the code calculation unit 34. The process then proceeds to step S24.
FIG. 6 is a block diagram showing a configuration example of the digital camera 40 according to the first embodiment of the present invention. The digital camera 40 uses design information of the optical system to capture still images or moving images free of aberration.
The optical system 41 includes, for example, a zoom lens, a focus lens, an aperture, and an optical low-pass filter (not shown). The optical system 41 causes light from the outside to enter the image sensor 42. The optical system 41 supplies shooting information, such as the zoom information of the zoom lens and the aperture information of the aperture, to the image correction unit 45.
The storage unit 43 temporarily stores the image output by the image sensor 42.
The image correction unit 45 uses the shooting information supplied from the optical system 41 to perform image correction processing, such as aberration correction, on the image supplied from the signal processing unit 44.
The control unit 47 controls each block constituting the digital camera 40 in accordance with user operations and the like.
The image output by the image sensor 42 is supplied to and stored in the storage unit 43. The image stored in the storage unit 43 is subjected to signal processing by the signal processing unit 44 and the image correction unit 45. The image corrected by the image correction unit 45 is output to the outside via the output unit 46.
FIG. 7 is a block diagram showing a configuration example of the image conversion device 50 that performs the aberration correction processing of the image correction unit 45 of FIG. 6.
Distortion aberration refers to the distortion of an image caused by the image expanding or contracting in the radial direction from the image center. Here, the image center is the intersection of the optical axis of the imaging optical system and the imaging surface of the image sensor.
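For intuition only, a one-parameter closed-form radial model (not the table-driven correction of the embodiment) shows how distortion moves pixels along the radial direction from the image center; the parameter `k1` and its sign convention are illustrative assumptions:

```python
def distort_radial(x, y, cx, cy, k1):
    """Toy one-parameter radial distortion: a point expands or contracts
    along the radial direction from the image center (cx, cy).
    k1 > 0 pushes points outward (pincushion-like), k1 < 0 pulls them
    inward (barrel-like); k1 = 0 is the identity.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy          # squared radial distance from center
    scale = 1.0 + k1 * r2           # radial magnification
    return cx + dx * scale, cy + dy * scale
```

Note the image center itself never moves, which matches the definition above: the distortion is purely radial about the optical-axis intersection point.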
FIG. 10 is a flowchart explaining the processing of the distortion aberration correction position information generation processing unit 53 of FIG. 7.
When shooting information, such as the zoom amount of the zoom lens and the aperture value, is supplied from the optical system 41 of FIG. 6, the coefficient interpolation processing unit 56 of FIG. 7 generates position-specific coefficients corresponding to the current zoom amount or aperture value and supplies them to the position-specific coefficient storage unit 57.
In step S41, the coefficient interpolation processing unit 56 refers to the shooting information supplied from the optical system 41 (for example, the zoom information and F-number of the zoom lens, and a plurality of coefficient values corresponding to the interchangeable lens) and reads, from the coefficient storage unit 55 of FIG. 7, a plurality of coefficients corresponding to values in the vicinity of the current shooting information. The process then proceeds to step S42.
When the shooting information is supplied from the optical system 41 and the signal-processed image (first image) is supplied from the signal processing unit 44, the image conversion device 50 performs the following processing.
The image conversion device 70 is a code classification adaptive filter that performs minute aberration correction processing on the input image; specifically, it performs phase shift correction on the input image while suppressing jaggies and ringing that may occur in the input image.
The input image (first image) supplied to the image conversion device 70 is supplied to the filter tap selection unit 72 and the code tap selection unit 73. The input image consists of, for example, the pixel values of the RGB color components.
Like the filter tap selection unit 12 of FIG. 1, the filter tap selection unit 72 selects, as filter taps, the pixel values of a plurality of pixels of the first image located near the position of the pixel of interest, and supplies the selected filter taps to the product-sum operation unit 76.
Based on the code taps from the code tap selection unit 73, the code calculation unit 74 calculates, for example, a quantization value by DR quantization and a difference ratio from the center pixel value, classifies the code taps into a code indicating their feature amount, and supplies the code to the coefficient storage unit 75.
When the code is supplied from the code calculation unit 74 and the phase information (the fractional part of the distortion aberration correction position information) is supplied from the distortion aberration correction position information storage unit 54 shown in FIG. 7, the coefficient storage unit 75 reads out a plurality of tap coefficients corresponding to the supplied code and in the vicinity of the supplied phase information. The read tap coefficients are supplied to the product-sum operation unit 76.
Filter taps (for example, the pixel values of the 13 pixels shown in FIG. 17, described later) are input to the product-sum operators 91 to 106. Furthermore, the 16 tap coefficients shown in FIG. 15 are input to the product-sum operators 91 to 106, respectively. In this embodiment, the number of selected tap coefficients is the same as the number of product-sum operators, but it may differ from the number of product-sum operators.
The product-sum operator 92 performs a product-sum operation using coefficient group 2 (for example, the tap coefficients corresponding to the black dot first from the left and second from the top among the 16 black dots in FIG. 15) and the filter taps (for example, the pixel values of the 13 pixels described above), and outputs pixel value 2.
Similarly, the product-sum operators 93 and 94 perform product-sum operations using coefficient groups 3 and 4 (for example, the tap coefficients corresponding to the black dots first from the left and third and fourth from the top, respectively, among the 16 black dots in FIG. 15) and the filter taps (for example, the pixel values of the 13 pixels), and output pixel values 3 and 4.
As described above, for the pixel value of the pixel of interest, the product-sum operation unit 76 supplies the plurality of tap coefficients (coefficient groups 1 to 16) supplied from the coefficient storage unit 75 to the respective product-sum operators 91 to 106 to obtain a plurality of pixel values 1 to 16, and supplies these pixel values 1 to 16 to the interpolation operation unit 77.
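The bank of product-sum operators described above can be sketched as plain dot products, one per coefficient group (the 3-tap example values are illustrative; the embodiment uses 13 taps and 16 groups):

```python
def product_sum_bank(filter_taps, coefficient_groups):
    """Run one set of filter taps through a bank of product-sum operators.
    Each coefficient group drives one operator, which computes a plain dot
    product of the taps with that group; the result is one candidate pixel
    value per group (pixel values 1..16 in the text).
    """
    return [sum(t * c for t, c in zip(filter_taps, group))
            for group in coefficient_groups]
```

The list of candidate values is then handed to the interpolation operation unit, which blends them according to the fractional phase.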
FIG. 17 is a diagram showing a configuration example of the filter taps selected by the filter tap selection unit 72 of FIG. 14.
Therefore, the first image, which is the input image, is converted into a phase-shifted third image based on the phase information from the distortion aberration correction position information storage unit 54.
FIG. 18 is a diagram showing a configuration example of the code taps selected by the code tap selection unit 73 of FIG. 14. The bold circles represent input pixels serving as code taps. The other circles are the same as in FIG. 17.
FIG. 19 is a block diagram showing a configuration example of the code calculation unit 74.
The code calculation unit 74 includes a quantization calculation unit 81, a center pixel difference ratio detection unit 82, and a conversion table storage unit 83.
For example, in the case of the code taps shown in FIG. 18, the 13 pixels constituting the code tap are subjected to 1-bit DR quantization processing. As a result, a 13-bit DR quantization code representing the feature amount of the input code tap is obtained.
First, the center pixel difference ratio detection unit 82 sets, in the pixel position (horizontal axis) direction, a predetermined range centered on the pixel of interest (center pixel) as range 1, and sets a wider range that is centered on the pixel of interest and encompasses range 1 as range 2.
FIG. 23 is a flowchart explaining an example of the image conversion processing by the image conversion device 70 of FIG. 14.
In step S51, the pixel-of-interest selection unit 71 selects, as the pixel of interest, one of the pixels constituting the output image corresponding to the input image input to the image conversion device 70 that has not yet been selected. The process then proceeds to step S52. For example, the pixel-of-interest selection unit 71 selects, in raster scan order, a pixel that has not yet been selected as the pixel of interest from among the pixels constituting the output image.
The tap coefficients stored in the coefficient storage unit 75 shown in FIG. 14 are obtained by the learning device 20 shown in FIG. 3.
Specifically, the learning image storage unit 21 shown in FIG. 3 stores high-quality images as learning images.
The second aberration correction processing unit 61 of FIG. 7 includes the image conversion device 110 shown in FIG. 24. Using a code classification adaptive filter, the image conversion device 110 performs aberration correction processing, mainly for improving sharpness, on the third image whose distortion aberration has been corrected by the first aberration correction processing unit 60. The image conversion device 110 performs aberration correction not only to improve sharpness but also to improve any image deterioration caused by aberration.
The pixel-of-interest selection unit 111 sequentially selects each pixel constituting the second image, which is the output image of the image conversion device 110, as the pixel of interest, and supplies information representing the selected pixel of interest to a predetermined block.
Like the code tap selection unit 13 of FIG. 1, the code tap selection unit 113 selects, as code taps, the pixel values of a plurality of pixels constituting the third image located near the position of the pixel of interest, and supplies the selected code taps to the code calculation unit 114.
The coefficient storage unit 115 stores, for each code, the tap coefficients obtained by the coefficient interpolation processing unit 56 of FIG. 7, which are read out using position information (block information) obtained by dividing the position information from the pixel-of-interest selection unit 111 as shown in FIG. 28.
The product-sum operation unit 116 calculates a plurality of pixel values 1 to 16 by supplying the plurality of tap coefficients supplied from the coefficient storage unit 115 to the respective product-sum operators 91 to 106, and supplies them to the interpolation operation unit 117.
FIG. 25 is a diagram showing a configuration example of the filter taps selected by the filter tap selection unit 112 of FIG. 24.
The thin circles represent input pixels and also output pixels. Unlike FIG. 17, the output pixels are converted at the same positions as the input pixels.
FIG. 26 is a diagram showing a configuration example of the code taps selected by the code tap selection unit 113 of FIG. 24. The bold circles represent input pixels serving as code taps. The other circles are the same as in FIG. 25.
FIG. 27 is a block diagram showing a configuration example of the code calculation unit 114.
The code calculation unit 114 includes a quantization calculation unit 121 and a conversion table storage unit 122.
For example, in the case of the code taps shown in FIG. 26, the 9 pixels constituting the code tap are subjected to 1-bit DR quantization processing. As a result, a 9-bit DR quantization code representing the feature amount of the input code tap is obtained.
FIG. 29 is a flowchart explaining an example of the image conversion processing by the image conversion device 110 of FIG. 24.
The coefficient storage unit 115 selects and reads out, from the stored tap coefficients, a plurality of neighboring tap coefficients stored at the address corresponding to the code supplied from the code calculation unit 114 and corresponding to the block information supplied from the pixel-of-interest selection unit 111. The product-sum operation unit 116 acquires the plurality of tap coefficients output from the coefficient storage unit 115. The process then proceeds to step S75.
The tap coefficients stored in the coefficient storage unit 115 shown in FIG. 24 are obtained by the learning device 20 shown in FIG. 3.
Specifically, the learning image storage unit 21 shown in FIG. 3 stores high-quality images with high sharpness as learning images.
The student data generation unit 24 reads the learning image from the learning image storage unit 21 and generates student data (student images) with degraded image quality by applying, to the read learning image, optical simulation data based on the design data of the optical system 41 of FIG. 6, distortion aberration correction data, and the like. The student data generation unit 24 supplies the generated student data to the student data storage unit 25.
FIG. 30 is a flowchart explaining an example of the image conversion processing by the image conversion device 50 of FIG. 7.
In step S91, the image conversion device 50 acquires shooting information, such as the zoom value of the imaging lens and the F-number of the aperture, from the optical system 41 of FIG. 6. The process then proceeds to step S92.
Furthermore, as shown in FIG. 12B, the frame storage unit 58 reads the pixel values of four or more pixels in the vicinity of the pixel indicated by the integer position information. The frame storage unit 58 supplies the pixel values to the position information alignment unit 59. The process then proceeds to step S97.
The first aberration correction processing unit 60 may include the image conversion device 150 shown in FIG. 31 instead of the image conversion device 70 shown in FIG. 14.
The pixel values of the RGB color components, for example, are input to the image conversion device 150 as the input image (first image). The image conversion device 150 corrects minute distortion aberration of the input image and obtains an output image in which jaggies and ringing that may occur in the image are suppressed. As the minute distortion aberration correction of the input image, the image conversion device 150 performs image processing for converting the input image into an image with improved distortion aberration, interpolation processing using a sinc function, interpolation processing using a triangular filter (linear interpolation), and the like.
Here, let Δt be the pixel interval of the image signal Va, x(nΔt) be the pixel data at pixel position nΔt of the image signal Va, and t be the pixel position of the image signal Vc. The pixel data x(t) at pixel position t is obtained by the following equation (15), using the pixel values of N (an appropriate finite number of) pixels of the image signal Va located before and after pixel position t.
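Equation (15) is a sinc-weighted (band-limited) interpolation over the available samples; a direct sketch, with N simply taken as the number of supplied samples:

```python
import math

def sinc_interpolate(samples, t, dt=1.0):
    """Band-limited interpolation in the spirit of equation (15): the value
    at position t is a sinc-weighted sum over the N samples x(n*dt).
    Uses the normalized sinc; taking N = len(samples) is a simplification
    of the 'appropriate finite number' in the text.
    """
    def sinc(u):
        return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return sum(x * sinc((t - n * dt) / dt) for n, x in enumerate(samples))
```

At any sample position t = nΔt the kernel reduces to a Kronecker delta, so the interpolation reproduces the original samples (up to floating-point noise).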
The pixel-of-interest selection unit 151 sequentially selects the pixels constituting the output image as the pixel of interest, and supplies information representing the selected pixel of interest to a predetermined block.
Next, a second embodiment of the present invention will be described.
The digital camera 40 according to the first embodiment uses design information of the optical system to capture still images or moving images free of aberration. However, when an interchangeable lens made by another manufacturer is used, the design information of the optical system cannot be obtained, and the digital camera 40 according to the first embodiment cannot be used.
Like the image conversion device 50 shown in FIG. 7, the image conversion device 160 includes a pixel-of-interest selection unit 51, a distortion aberration table storage unit 52, a distortion aberration correction position information generation processing unit 53, a frame storage unit 58, a position information alignment unit 59, and a first aberration correction processing unit 60.
The second aberration correction processing unit 162 includes the image conversion device 170 of FIG. 33. The image conversion device 170 mainly performs aberration correction processing for improving sharpness.
Like the image conversion device 110 of FIG. 24, the image conversion device 170 includes a pixel-of-interest selection unit 111, a code tap selection unit 112, a filter tap selection unit 113, a code calculation unit 114, and a product-sum operation unit 116.
The image conversion device 170 configured as described above can perform image conversion processing in the same manner as the flowchart shown in FIG. 29.
The tap coefficients stored in the coefficient storage unit 171 shown in FIG. 33 are obtained by the learning device 20 shown in FIG. 3.
Specifically, the learning image storage unit 21 shown in FIG. 3 stores high-quality images with high sharpness as learning images.
The student data generation unit 24 reads the learning image from the learning image storage unit 21 and filters the read learning image with a low-pass filter or the like, thereby generating student data (student images) with lower sharpness than the teacher data. The student data generation unit 24 supplies the generated student data to the student data storage unit 25.
FIG. 34 is a flowchart explaining an example of the image conversion processing by the image conversion device 160 of FIG. 32. The processing from step S121 to step S126 is performed in the same manner as the processing from step S91 to step S96 shown in FIG. 30, except for the acquisition of the F-number (step S91) and the generation of position-specific coefficients (step S92). Accordingly, the processes from step S127 onward will be described.
The present invention is not limited to the embodiments described above, and various modifications are possible within the scope of the matters described in the claims.
The present invention is also applicable to cloud computing, in which one function is shared and jointly processed by a plurality of devices via a network.
The present invention is also applicable to imaging devices such as digital cameras, so-called smartphones, surveillance cameras, endoscopes, microscopes, and cinema cameras, as well as to applications for editing images.
Broadcast station cameras, endoscopes, microscopes, and the like often shoot in real time for long periods, and are therefore often divided into a camera head and an image processing apparatus. A system divided into a camera head and an image processing apparatus is called a system camera. The present invention is also applicable to such a system camera.
Images taken with interchangeable-lens digital cameras, such as high-end single-lens digital cameras and professional cameras, are often edited by a signal processing apparatus. In this case, the interchangeable-lens digital camera records data such as captured images and shooting information on a memory card. The signal processing apparatus reads the data such as images and shooting information recorded on the memory card and edits the images. The present invention is also applicable to this signal processing apparatus.
The present invention is applicable to both hardware and software. For example, the image correction unit 45 shown in FIG. 6 may be configured by hardware, or may be configured by a computer (processor) on which a program capable of executing the series of processes described above is installed.
Claims (15)
- an image storage unit that stores a first image affected by aberration of an optical system;
a position information generation unit that generates, each time a pixel of interest is scanned in a predetermined order on a second image from which the influence of the aberration has been removed, position information of the pixel of the first image corresponding to the scanned pixel of interest, based on the position information of the pixel of interest scanned in the predetermined order in order to generate the pixel value of each pixel of the second image, and a distortion aberration table indicating the correspondence between the position information of each pixel of the first image and the position information of each pixel of the second image;
a first aberration correction unit that corrects a phase shift due to distortion aberration for each pixel of the first image read out from the image storage unit, using the fractional part of the position information generated by the position information generation unit; and
a second aberration correction unit that generates the second image by correcting aberration other than distortion aberration in the first image corrected by the first aberration correction unit,
an image processing apparatus comprising the above. - The first aberration correction unit comprises:
a first selection unit that selects a plurality of pixels in a predetermined pattern from the first image read out from the image storage unit, based on the scanned pixel of interest;
a first code calculation unit that calculates a code indicating a feature amount of the plurality of pixels in the predetermined pattern selected by the first selection unit;
a first coefficient storage unit that stores, for each code, tap coefficients for correcting a phase shift due to distortion aberration, and outputs a plurality of tap coefficients based on the code calculated by the code calculation unit and the fractional part of the position information generated by the position information generation unit;
a second selection unit that selects a plurality of pixels in a specific pattern of the first image from the first image read out from the image storage unit, based on the scanned pixel of interest; and
a first pixel value calculation unit that corrects a phase shift due to distortion aberration for each pixel of the first image by calculating the pixel value of the pixel of interest based on the pixel values of the plurality of pixels selected by the second selection unit, the plurality of tap coefficients output from the first coefficient storage unit, and the fractional part of the position information generated by the position information generation unit,
the image processing apparatus according to claim 1, comprising the above. - The second aberration correction unit comprises:
a third selection unit that selects a plurality of pixels in a predetermined pattern from the first image corrected by the first aberration correction unit, based on the scanned pixel of interest;
a second code calculation unit that calculates a code indicating a feature amount of the plurality of pixels in the predetermined pattern selected by the third selection unit;
a second coefficient storage unit that stores, for each code, tap coefficients for correcting aberration other than distortion aberration, and outputs a plurality of tap coefficients based on the code calculated by the code calculation unit;
a fourth selection unit that selects a plurality of pixels in a specific pattern of the first image from the first image corrected by the first aberration correction unit, based on the scanned pixel of interest; and
a second pixel value calculation unit that generates the second image by calculating the pixel value of the pixel of interest based on the pixel values of the plurality of pixels selected by the fourth selection unit and the plurality of tap coefficients output from the second coefficient storage unit, thereby correcting aberration other than distortion aberration for each pixel of the first image,
the image processing apparatus according to claim 1, comprising the above. - The second coefficient storage unit stores a plurality of tap coefficients for improving the sharpness of an image, obtained by learning using optical simulation based on the design information of the optical system, and
the second pixel value calculation unit generates, from the first image, the second image with improved sharpness by calculating the pixel value of the pixel of interest using the plurality of tap coefficients output from the second coefficient storage unit,
in the image processing apparatus according to claim 3. - The first code calculation unit includes:
first quantization means that quantizes the pixel values of the plurality of pixels selected by the first selection unit and outputs a quantization code; and
a first conversion table storage unit that, in a first conversion table indicating the correspondence between a plurality of quantization codes and a plurality of codes for reading a plurality of tap coefficients from the first coefficient storage unit, sets a plurality of codes for reading approximate tap coefficients to the same code, and converts the quantization code calculated by the first quantization calculation unit into the corresponding code based on the first conversion table,
the image processing apparatus according to claim 2, comprising the above. - The second code calculation unit includes:
second quantization means that quantizes the pixel values of the plurality of pixels selected by the third selection unit and outputs a quantization code; and
a second conversion table storage unit that, in a second conversion table indicating the correspondence between a plurality of quantization codes and a plurality of codes for reading a plurality of tap coefficients from the second coefficient storage unit, sets a plurality of codes for reading approximate tap coefficients to the same code, and converts the quantization code calculated by the second quantization calculation unit into the corresponding code based on the second conversion table,
the image processing apparatus according to claim 3, comprising the above. - The first pixel value calculation unit includes:
a plurality of first product-sum operators that obtain a plurality of pixel values by performing a product-sum operation on the pixel values of the plurality of pixels selected by the second selection unit and the plurality of tap coefficients output from the first coefficient storage unit; and
a first interpolation operation unit that calculates the pixel value of the pixel of interest by performing interpolation processing using the plurality of pixel values obtained by the plurality of first product-sum operators,
the image processing apparatus according to claim 2, comprising the above. - The second pixel value calculation unit includes:
a plurality of second product-sum operators that obtain a plurality of pixel values by performing a product-sum operation on the pixel values of the plurality of pixels selected by the fourth selection unit and the plurality of tap coefficients output from the second coefficient storage unit; and
a second interpolation operation unit that calculates the pixel value of the pixel of interest by performing interpolation processing using the plurality of pixel values obtained by the plurality of second product-sum operators,
the image processing apparatus according to claim 3, comprising the above. - The image processing apparatus further comprising a position information storage unit that stores the position information of each pixel of the first image of a predetermined frame generated by the position information generation unit,
wherein the first aberration correction unit corrects, for frames after the predetermined frame, the phase shift due to distortion aberration using the fractional part of the position information of each pixel stored in the position information storage unit,
in the image processing apparatus according to claim 1. - The image processing apparatus further comprising a distortion aberration table storage unit that stores a plurality of the distortion aberration tables prepared for each piece of shooting information,
wherein the position information generation unit reads, from the distortion aberration table storage unit, a plurality of distortion aberration tables corresponding to shooting information approximating the input shooting information, interpolates a distortion aberration table corresponding to the input shooting information using the read distortion aberration tables, and generates the position information of the pixels of the first image using the interpolated distortion aberration table,
in the image processing apparatus according to claim 1. - The second aberration correction unit
further comprises a tap coefficient interpolation unit that reads, from the second coefficient storage unit storing a plurality of tap coefficients prepared for each piece of shooting information, a plurality of tap coefficients corresponding to shooting information approximating the input shooting information, and interpolates a plurality of tap coefficients corresponding to the input shooting information using the read tap coefficients, and
the second pixel value calculation unit calculates the pixel value of the pixel of interest using the plurality of tap coefficients interpolated by the tap coefficient interpolation unit,
in the image processing apparatus according to claim 3. - A position information generation step of generating, each time a pixel of interest is scanned in a predetermined order on a second image, position information of the pixel of a first image corresponding to the scanned pixel of interest, based on a distortion aberration table indicating the correspondence between the position information of each pixel of the first image affected by aberration of an optical system and the position information of each pixel of the second image from which the influence of the aberration has been removed, and the position information of the pixel of interest scanned in the predetermined order in order to generate the pixel value of each pixel of the second image;
a first aberration correction step of correcting a phase shift due to distortion aberration for each pixel of the first image read out from an image storage unit that stores the first image, using the fractional part of the position information generated in the position information generation step; and
a second aberration correction step of generating the second image by correcting aberration other than distortion aberration in the first image corrected in the first aberration correction step,
an image processing method comprising the above steps. - Causing a computer to function as:
an image storage unit that stores a first image affected by aberration of an optical system;
a position information generation unit that generates, each time a pixel of interest is scanned in a predetermined order on a second image from which the influence of the aberration has been removed, position information of the pixel of the first image corresponding to the scanned pixel of interest, based on the position information of the pixel of interest scanned in the predetermined order in order to generate the pixel value of each pixel of the second image, and a distortion aberration table indicating the correspondence between the position information of each pixel of the first image and the position information of each pixel of the second image;
a first aberration correction unit that corrects a phase shift due to distortion aberration for each pixel of the first image read out from the image storage unit, using the fractional part of the position information generated by the position information generation unit; and
a second aberration correction unit that generates the second image by correcting aberration other than distortion aberration in the first image corrected by the first aberration correction unit,
a recording medium on which a program for causing the computer to function as the above is recorded. - Causing a computer to function as:
an image storage unit that stores a first image affected by aberration of an optical system;
a position information generation unit that generates, each time a pixel of interest is scanned in a predetermined order on a second image from which the influence of the aberration has been removed, position information of the pixel of the first image corresponding to the scanned pixel of interest, based on the position information of the pixel of interest scanned in the predetermined order in order to generate the pixel value of each pixel of the second image, and a distortion aberration table indicating the correspondence between the position information of each pixel of the first image and the position information of each pixel of the second image;
a first aberration correction unit that corrects a phase shift due to distortion aberration for each pixel of the first image read out from the image storage unit, using the fractional part of the position information generated by the position information generation unit; and
a second aberration correction unit that generates the second image by correcting aberration other than distortion aberration in the first image corrected by the first aberration correction unit,
a program for causing the computer to function as the above. - An imaging element that generates a first image according to imaging light incident through an optical system;
an image storage unit that stores the first image generated by the imaging element;
a position information generation unit that generates, each time a pixel of interest is scanned in a predetermined order on a second image from which the influence of the aberration has been removed, position information of the pixel of the first image corresponding to the scanned pixel of interest, based on the position information of the pixel of interest scanned in the predetermined order in order to generate the pixel value of each pixel of the second image, and a distortion aberration table indicating the correspondence between the position information of each pixel of the first image and the position information of each pixel of the second image;
a first aberration correction unit that corrects a phase shift due to distortion aberration for each pixel of the first image read out from the image storage unit, using the fractional part of the position information generated by the position information generation unit; and
a second aberration correction unit that generates the second image by correcting aberration other than distortion aberration in the first image corrected by the first aberration correction unit,
an imaging apparatus comprising the above.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/740,568 US10291844B2 (en) | 2016-03-14 | 2016-03-14 | Image processing apparatus, image processing method, recording medium, program and imaging-capturing apparatus |
DE112016006582.5T DE112016006582T5 (de) | 2016-03-14 | 2016-03-14 | Bildverarbeitungsvorrichtung, bildverarbeitungsverfahren,aufzeichnungsmedium, programm und bildaufnahmegerät |
PCT/JP2016/057997 WO2017158690A1 (ja) | 2016-03-14 | 2016-03-14 | 画像処理装置、画像処理方法、記録媒体、プログラム及び撮像装置 |
CN201680038914.4A CN107924558B (zh) | 2016-03-14 | 2016-03-14 | 图像处理装置、图像处理方法、记录介质以及拍摄装置 |
JP2016560837A JP6164564B1 (ja) | 2016-03-14 | 2016-03-14 | 画像処理装置、画像処理方法、記録媒体、プログラム及び撮像装置 |
KR1020187029600A KR102011938B1 (ko) | 2016-03-14 | 2016-03-14 | 화상 처리 장치, 화상 처리 방법, 기록 매체, 프로그램 및 촬상 장치 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/057997 WO2017158690A1 (ja) | 2016-03-14 | 2016-03-14 | 画像処理装置、画像処理方法、記録媒体、プログラム及び撮像装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017158690A1 true WO2017158690A1 (ja) | 2017-09-21 |
Family
ID=59351317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/057997 WO2017158690A1 (ja) | 2016-03-14 | 2016-03-14 | 画像処理装置、画像処理方法、記録媒体、プログラム及び撮像装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10291844B2 (ja) |
JP (1) | JP6164564B1 (ja) |
KR (1) | KR102011938B1 (ja) |
CN (1) | CN107924558B (ja) |
DE (1) | DE112016006582T5 (ja) |
WO (1) | WO2017158690A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021124596A1 (ja) * | 2019-12-17 | 2021-06-24 | リアロップ株式会社 | 画像処理装置、画像処理方法、プログラム、記録媒体及び撮像装置 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018000795A (ja) * | 2016-07-07 | 2018-01-11 | オリンパス株式会社 | 内視鏡プロセッサ |
CN107845583B (zh) * | 2016-09-18 | 2020-12-18 | 中芯国际集成电路制造(上海)有限公司 | 基板表面缺陷检测装置、图像畸变校正方法和装置以及基板表面缺陷检测设备 |
JP7005168B2 (ja) * | 2017-05-02 | 2022-01-21 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
CN107290700B (zh) * | 2017-08-08 | 2020-12-04 | 上海联影医疗科技股份有限公司 | 一种相位校正方法、装置及磁共振系统 |
JP6516806B2 (ja) * | 2017-08-31 | 2019-05-22 | キヤノン株式会社 | 画像表示装置 |
JP6435560B1 (ja) * | 2018-03-10 | 2018-12-12 | リアロップ株式会社 | 画像処理装置、画像処理方法、プログラム及び撮像装置 |
US11170481B2 (en) * | 2018-08-14 | 2021-11-09 | Etron Technology, Inc. | Digital filter for filtering signals |
CN110188225B (zh) * | 2019-04-04 | 2022-05-31 | 吉林大学 | 一种基于排序学习和多元损失的图像检索方法 |
JP7403279B2 (ja) | 2019-10-31 | 2023-12-22 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
CN112819725B (zh) * | 2021-02-05 | 2023-10-03 | 广东电网有限责任公司广州供电局 | 一种径向畸变的快速图像校正方法 |
CN116823681B (zh) * | 2023-08-31 | 2024-01-26 | 尚特杰电力科技有限公司 | 红外图像的畸变矫正方法、装置、系统及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07193790A (ja) * | 1993-12-25 | 1995-07-28 | Sony Corp | 画像情報変換装置 |
JP2005311473A (ja) * | 2004-04-16 | 2005-11-04 | Sharp Corp | 撮像装置および信号処理方法ならびにそのプログラムと記録媒体 |
WO2009078454A1 (ja) * | 2007-12-18 | 2009-06-25 | Sony Corporation | データ処理装置、データ処理方法、及び記憶媒体 |
JP2011123589A (ja) * | 2009-12-09 | 2011-06-23 | Canon Inc | 画像処理方法、画像処理装置、撮像装置および画像処理プログラム |
JP2014087022A (ja) * | 2012-10-26 | 2014-05-12 | Canon Inc | 撮像装置、撮像装置の制御方法、プログラム |
JP2014093714A (ja) * | 2012-11-06 | 2014-05-19 | Canon Inc | 画像処理装置、その制御方法、および制御プログラム |
JP2015198380A (ja) * | 2014-04-02 | 2015-11-09 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理プログラム、および画像処理方法 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6165024A (ja) | 1984-09-04 | 1986-04-03 | Mazda Motor Corp | エンジンのトルク変動制御装置 |
JPH04348343A (ja) | 1991-05-27 | 1992-12-03 | Matsushita Electron Corp | 縮小投影露光装置用レチクル |
JP3035416B2 (ja) | 1992-11-18 | 2000-04-24 | キヤノン株式会社 | 撮像装置及び画像再生装置及び映像システム |
JPH0893790A (ja) | 1994-09-29 | 1996-04-09 | Exedy Corp | ダンパーディスク組立体 |
JP3950188B2 (ja) * | 1996-02-27 | 2007-07-25 | 株式会社リコー | 画像歪み補正用パラメータ決定方法及び撮像装置 |
JP2000004391A (ja) | 1998-06-16 | 2000-01-07 | Fuji Photo Film Co Ltd | 収差補正量設定方法、位置調整装置及び画像処理装置 |
JP4144292B2 (ja) * | 2002-08-20 | 2008-09-03 | ソニー株式会社 | 画像処理装置と画像処理システム及び画像処理方法 |
JP2004241991A (ja) * | 2003-02-05 | 2004-08-26 | Minolta Co Ltd | 撮像装置、画像処理装置及び画像処理プログラム |
US7596286B2 (en) * | 2003-08-06 | 2009-09-29 | Sony Corporation | Image processing apparatus, image processing system, imaging apparatus and image processing method |
JP2005267457A (ja) * | 2004-03-19 | 2005-09-29 | Casio Comput Co Ltd | 画像処理装置、撮影装置、画像処理方法及びプログラム |
US7920200B2 (en) * | 2005-06-07 | 2011-04-05 | Olympus Corporation | Image pickup device with two cylindrical lenses |
JP4348343B2 (ja) | 2006-02-10 | 2009-10-21 | パナソニック株式会社 | 部品実装機 |
JP4487952B2 (ja) * | 2006-02-27 | 2010-06-23 | ソニー株式会社 | カメラ装置及び監視システム |
WO2008139577A1 (ja) * | 2007-05-09 | 2008-11-20 | Fujitsu Microelectronics Limited | 画像処理装置、撮像装置、および画像歪み補正方法 |
JP5010533B2 (ja) * | 2008-05-21 | 2012-08-29 | 株式会社リコー | 撮像装置 |
JP5180687B2 (ja) * | 2008-06-03 | 2013-04-10 | キヤノン株式会社 | 撮像装置及び補正方法 |
CN102474626B (zh) * | 2009-07-21 | 2014-08-20 | 佳能株式会社 | 用于校正色像差的图像处理设备和图像处理方法 |
JP5438579B2 (ja) * | 2010-03-29 | 2014-03-12 | キヤノン株式会社 | 画像処理装置及びその制御方法 |
JP5742179B2 (ja) * | 2010-11-05 | 2015-07-01 | ソニー株式会社 | 撮像装置、画像処理装置、および画像処理方法、並びにプログラム |
JP6041651B2 (ja) * | 2012-12-10 | 2016-12-14 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
CN104704807B (zh) * | 2013-03-28 | 2018-02-02 | 富士胶片株式会社 | 图像处理装置、摄像装置及图像处理方法 |
-
2016
- 2016-03-14 KR KR1020187029600A patent/KR102011938B1/ko active IP Right Grant
- 2016-03-14 CN CN201680038914.4A patent/CN107924558B/zh active Active
- 2016-03-14 JP JP2016560837A patent/JP6164564B1/ja active Active
- 2016-03-14 DE DE112016006582.5T patent/DE112016006582T5/de not_active Withdrawn
- 2016-03-14 US US15/740,568 patent/US10291844B2/en active Active
- 2016-03-14 WO PCT/JP2016/057997 patent/WO2017158690A1/ja active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021124596A1 (ja) * | 2019-12-17 | 2021-06-24 | リアロップ株式会社 | 画像処理装置、画像処理方法、プログラム、記録媒体及び撮像装置 |
JP2021097334A (ja) * | 2019-12-17 | 2021-06-24 | リアロップ株式会社 | 画像処理装置、画像処理方法、プログラム及び撮像装置 |
Also Published As
Publication number | Publication date |
---|---|
KR102011938B1 (ko) | 2019-08-19 |
US10291844B2 (en) | 2019-05-14 |
CN107924558A (zh) | 2018-04-17 |
US20180198977A1 (en) | 2018-07-12 |
DE112016006582T5 (de) | 2018-12-13 |
JPWO2017158690A1 (ja) | 2018-04-05 |
KR20180118790A (ko) | 2018-10-31 |
CN107924558B (zh) | 2019-04-30 |
JP6164564B1 (ja) | 2017-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017158690A1 (ja) | 画像処理装置、画像処理方法、記録媒体、プログラム及び撮像装置 | |
CN102474626B (zh) | 用于校正色像差的图像处理设备和图像处理方法 | |
JP3915563B2 (ja) | 画像処理装置および画像処理プログラム | |
JP2004241991A (ja) | 撮像装置、画像処理装置及び画像処理プログラム | |
JP5983373B2 (ja) | 画像処理装置、情報処理方法及びプログラム | |
JP2015088833A (ja) | 画像処理装置、撮像装置及び画像処理方法 | |
JP5344648B2 (ja) | 画像処理方法、画像処理装置、撮像装置および画像処理プログラム | |
JP4649171B2 (ja) | 倍率色収差補正装置、倍率色収差補正方法及び倍率色収差補正プログラム | |
JP2013055623A (ja) | 画像処理装置、および画像処理方法、情報記録媒体、並びにプログラム | |
JP2013009293A (ja) | 画像処理装置、画像処理方法、プログラム、および記録媒体、並びに学習装置 | |
JP2014042176A (ja) | 画像処理装置および方法、プログラム、並びに、固体撮像装置 | |
CN107979715B (zh) | 摄像装置 | |
JP6694626B1 (ja) | 画像処理装置、画像処理方法、プログラム及び撮像装置 | |
KR101205833B1 (ko) | 디지털 줌 시스템을 위한 영상 처리 장치 및 방법 | |
KR102470242B1 (ko) | 영상 처리 장치, 영상 처리 방법, 및 프로그램 | |
JP6408884B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
JP6435560B1 (ja) | 画像処理装置、画像処理方法、プログラム及び撮像装置 | |
JP5359814B2 (ja) | 画像処理装置 | |
JP4708180B2 (ja) | 画像処理装置 | |
JPWO2015083502A1 (ja) | 画像処理装置、該方法および該プログラム | |
JP5334265B2 (ja) | 倍率色収差・像歪補正装置およびそのプログラム | |
JP2014110507A (ja) | 画像処理装置および画像処理方法 | |
JP2013055622A (ja) | 画像処理装置、および画像処理方法、情報記録媒体、並びにプログラム | |
JP2014086957A (ja) | 画像処理装置及び画像処理方法 | |
CN103957392B (zh) | 图像处理设备和图像处理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2016560837 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187029600 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16894314 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16894314 Country of ref document: EP Kind code of ref document: A1 |