WO2010032720A1 - Image distortion correction method and image processing apparatus - Google Patents
Image distortion correction method and image processing apparatus
- Publication number
- WO2010032720A1 (PCT/JP2009/066081)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- image
- pixel data
- color
- memory
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
- H04N2209/046—Colour interpolation to calculate the missing colour values
Definitions
- the present invention relates to an image distortion correction method and an image processing apparatus in a distortion correction process for an image captured by an image sensor via an optical system.
- an image sensor made up of a CCD, a CMOS, or the like has pixels physically arranged in a Bayer arrangement structure as shown in FIG. 1.
- the image data output from the image sensor as shown in FIG. 2A is stored in the storage device (memory) of the image processing apparatus in a continuous permutation of the plural colors as shown in FIG. 2B (see, for example, Patent Document 1).
- R (Red) data for all pixels is obtained from image data having a Bayer array structure by performing color separation interpolation processing (see Japanese Patent Laid-Open No. 2009-157733, FIGS. 4(a) to 4(d)).
- when the color separation interpolation processing shown in FIGS. 4B to 4D is performed, the data of R1 to R4 (R21, R23, R41, R43) are stored separately as shown in FIG. 2B, so the waiting time due to the access time increases when reading from the storage device (memory). For high-speed access, pixel data of the same color should be stored continuously.
- in view of the above-described problems of the prior art, an object of the present invention is to provide an image distortion correction method and an image processing apparatus capable of improving the image processing speed by improving the memory access speed without adding memory.
- the processing time can be shortened by storing the pixel data for creating the average value in the same storage area.
- in a complementary color array structure there is addition/subtraction processing, and each of these data is also created from the average value of a plurality of data, so these can also be stored in the same storage area; the same processing is therefore possible even in the case of a complementary color system.
- the image distortion correction method uses an image sensor that includes a plurality of pixels, each pixel corresponding to one color, and an image is captured by the image sensor via an optical system.
- the pixel data after the distortion correction is obtained by interpolation processing using pixel data of a plurality of pixels around the pixel before correction stored in the memory.
- the pixel data of the same color is continuously stored for each color in the memory.
- pixel data of the same color is continuously stored in the memory for each color, thereby enabling high-speed access to the memory when performing interpolation processing using pixel data of the same color; thus the memory access speed can be improved without adding memory, and the image processing speed can be improved.
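The per-color continuous storage described above can be sketched as splitting a Bayer RAW frame into contiguous per-color planes. This is an illustrative sketch only, assuming an RGGB 2×2 layout; the block layout of the patent's FIGS. 3A to 3C is not reproduced exactly.

```python
import numpy as np

def split_bayer_planes(raw):
    """Split an RGGB Bayer RAW frame into per-color planes so that pixel
    data of the same color is stored contiguously (illustrative sketch;
    the patent's exact block layout is an assumption here)."""
    r  = raw[0::2, 0::2].copy()   # R: even rows, even columns
    g0 = raw[0::2, 1::2].copy()   # G on R rows ("odd G")
    g1 = raw[1::2, 0::2].copy()   # G on B rows ("even G")
    b  = raw[1::2, 1::2].copy()   # B: odd rows, odd columns
    return r, g0, g1, b

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
r, g0, g1, b = split_bayer_planes(raw)
# each plane is C-contiguous, so same-color reads hit consecutive addresses
assert r.flags["C_CONTIGUOUS"]
```

Because each plane is stored contiguously, an interpolation that needs several pixels of one color reads consecutive addresses instead of striding through the interleaved Bayer permutation of FIG. 2B.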
- in another aspect, the image distortion correction method uses an image sensor that includes a plurality of pixels, each pixel corresponding to one color, and an image is captured by the image sensor via an optical system; pixel data after correction, of the same color as the color before correction, is obtained by interpolation processing with pixel data of a plurality of pixels around the pixel before correction stored in the memory, and pixel data of the same color is continuously stored in the memory for each color.
- pixel data of the same color is continuously stored in the memory for each color, thereby enabling high-speed access to the memory when performing interpolation processing using pixel data of the same color; thus the memory access speed can be improved without adding memory, and the image processing speed can be improved.
- preferably, the interpolation processing comprises a first process and a second process: in the first process, when a pixel arranged at a predetermined position around the pixel before distortion correction has the same color as the pixel after correction, its pixel data is used as it is, and when its color differs from that of the corrected pixel, the pixel arranged at the predetermined position is interpolated with pixel data of a plurality of surrounding pixels of the same color as the color after distortion correction; in the second process, the corrected pixel data is obtained by interpolating from the pixel data of the pixels arranged at the plurality of predetermined positions obtained in the first process, based on the relative positional relationship between the position of the pixel before correction and the pixels arranged at the predetermined positions.
- the size of one block unit of the memory area for continuously storing pixel data for each color is ensured to be larger than the unit of the plurality of pixels used in the interpolation process.
- the size of one block unit of the memory area is an area where pixel data of four pixels can be stored.
- it is preferable that pixel data after distortion correction is also stored continuously in the memory for each color used for the RGB calculation.
- the image processing apparatus includes an image sensor that has a plurality of pixels, each pixel corresponding to one color, and that captures an image via the optical system; an arithmetic device that processes the image captured from the image sensor; and a memory. In the process of correcting the distortion of the image, corrected pixel data of the same color as the color before correction is calculated by interpolation processing using pixel data of a plurality of pixels around the pixel before correction stored in the memory, and pixel data of the same color is continuously stored for each color in the memory.
- pixel data of the same color is continuously stored in the memory for each color, so that high-speed access to the memory becomes possible when performing interpolation processing using pixel data of the same color; it is thus possible to improve the memory access speed without adding memory, and to improve the image processing speed.
- preferably, the arithmetic unit performs the interpolation processing in two stages: a first process that, when a pixel arranged at a predetermined position around the pixel before distortion correction has the same color as the pixel after correction, uses the pixel data of that pixel as it is, and, when the color differs from that of the corrected pixel, interpolates the pixel arranged at the predetermined position with pixel data of a plurality of surrounding pixels having the same color as the corrected color; and a second process that obtains the corrected pixel data by interpolating from the pixel data of the plurality of pixels arranged at the predetermined positions obtained in the first process, based on the relative positional relationship between the position of the pixel before distortion correction and the pixels arranged at the predetermined positions.
- preferably, the size of one block unit of the memory area for continuously storing the pixel data for each color in the memory is larger than the unit of the plurality of pixels used in the interpolation processing. For example, when interpolation processing is performed using pixel data of four pixels around a predetermined pixel, the size of one block unit of the memory area is preferably an area in which pixel data of four pixels or more can be stored.
- it is preferable that pixel data after distortion correction is also stored continuously in the memory for each color used for the RGB calculation.
- when the optical system is a wide-angle optical system, image distortion caused by the wide-angle optical system can be corrected.
- the image forming apparatus includes the above-described image processing apparatus and an image processing unit that performs other image processing on an image whose distortion has been corrected by the image processing apparatus. According to this image forming apparatus, the image after the distortion correction processing described above can be output to the image processing unit, so the distortion correction processing is performed before image processing such as ISP processing. By performing image processing on the image after distortion correction in this way, a more natural image can be obtained.
- an image distortion correction method and an image processing apparatus capable of improving the image processing speed by improving the memory access speed without adding a memory.
- FIGS. 3A to 3C are diagrams schematically showing the memory areas in which the R, G, and B pixel data from the pixel data in FIG. 2A are stored in the present embodiment.
- FIG. 4 is a diagram showing the peripheral pixels used for interpolation calculation when the pixel after interpolation is R (red) in the present embodiment, for the cases of R→R (a), B→R (b), oddG→R (c), and evenG→R (d).
- FIG. 5 is a diagram showing the peripheral pixels used for interpolation calculation when the pixel after interpolation is B (blue), for the cases of B→B (a), R→B (b), oddG→B (c), and evenG→B (d).
- FIGS. 7A and 7B are diagrams for explaining an image before and an image after image distortion correction in the present embodiment. FIG. 8 is a diagram for explaining the calculation of the correction coefficient in the interpolation calculation according to this embodiment.
- FIGS. 9A and 9B are diagrams for explaining distortion correction of the image.
- FIG. 9A is the same as FIG. 7A, and FIG. 9B is the same as FIG. 7B.
- FIG. 9C is a schematic enlarged view of FIG. 9A
- FIG. 9D is a schematic enlarged view of FIG. 9B.
- FIG. 11 is a flowchart for explaining steps S01 to S08 of image distortion correction by the image processing apparatus 10 of FIG. 10.
- FIG. 12 is a block diagram illustrating a schematic configuration of an image forming apparatus according to an embodiment. FIG. 13 is a diagram schematically showing an example of an array structure containing complementary color system pixels in the image sensor in another embodiment.
- FIGS. 14A and 14B are diagrams for explaining image distortion correction in the case of the arrangement structure of FIG. 13.
- FIG. 14A is the same diagram as FIG. 9A, and FIG. 14B is a schematic enlarged view of FIG. 14A showing the rearrangement of the B pixel data.
- FIG. 14C is a schematic enlarged view of FIG. 14A showing the rearrangement of the G and Ye pixel data.
- FIG. 14D is the same diagram as FIG. 9B for the B pixel data, FIG. 14E is the same diagram as FIG. 9B for the G and Ye pixel data, and FIG. 14F is the same diagram as FIG. 9B for the R pixel data.
- FIGS. 14G to 14I are schematic enlarged views of FIGS. 14D to 14F.
- FIG. 15 is a diagram (a) for explaining an image before image distortion correction and a diagram (b) for explaining an image after image distortion correction in this embodiment.
- FIG. 1 described above is a diagram schematically showing a general Bayer arrangement in a RAW image from the image sensor.
- in the following description, "interpolation" means that an output pixel is calculated using peripheral pixels, and "correction" means that the position of a pixel is moved for distortion correction.
- the distortion correction of an image obtained with a wide-angle lens or a fisheye lens is performed by exchanging pixels as shown in FIGS. 7(a) and 7(b). That is, as shown in FIG. 7A, let the pixel coordinates of a certain point in the circular image area before the image distortion correction be (X, Y), and the corresponding pixel coordinates in the rectangular image area after the distortion correction be (X′, Y′); the pixel before correction is replaced with the pixel after correction as (X, Y) → (X′, Y′). Since the inclination angle θ of the straight line from the center (0, 0) to each coordinate is the same before and after correction, letting L be the distance from the center (0, 0) before correction and L′ be the distance from the center (0, 0) after distortion correction corresponding to that point, the pixel before correction is replaced with the pixel after correction as L → L′.
- X′ and Y′ of (X′, Y′) after distortion correction are integers, but X and Y of (X, Y) before correction, calculated from (X′, Y′) after correction, are in most cases not integers and can take real values having a fractional part. The calculation of each coordinate is performed based on a correction LUT (lookup table) determined from the characteristics of the lens used.
- each pixel is two-dimensionally arranged in a rectangular grid; when the pixel coordinates X and Y are both integers, they coincide with the position (center position) of some pixel, whereas when either X or Y has a fractional part, the coordinates do not coincide with any pixel position (center position).
- Distortion correction can be performed by exchanging pixels from L to L ′ on the captured image based on the image distortion correction coefficient.
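The radial replacement described above, mapping the corrected radius L′ back to the source radius L at an unchanged angle θ, can be sketched as follows. The `lut` callable stands in for the correction LUT derived from the lens characteristics; its interface is an assumption for illustration, not the patent's.

```python
import math

def corrected_to_source(xp, yp, lut):
    """Map a post-correction coordinate (X', Y'), taken relative to the
    image center (0, 0), back to the generally fractional pre-correction
    coordinate (X, Y). `lut` maps the corrected radius L' to the source
    radius L (hypothetical interface standing in for the correction LUT)."""
    lp = math.hypot(xp, yp)      # L': distance from center after correction
    if lp == 0.0:
        return 0.0, 0.0          # the center maps to itself
    l = lut(lp)                  # L: distance from center before correction
    s = l / lp                   # angle theta is unchanged, so just rescale
    return xp * s, yp * s

# an identity LUT (no distortion) leaves the coordinate unchanged
x, y = corrected_to_source(3.0, 4.0, lambda lp: lp)
```

Note that the returned (X, Y) is generally fractional, which is exactly why the two-stage interpolation described next is needed.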
- conventionally, the above-described pixel replacement when correcting the distortion of a wide-angle lens or a fish-eye lens has been performed after conversion to an RGB image.
- in a RAW image from an image sensor, the color (R, G, or B) of each pixel is determined by its position as shown in FIG. 1, so random pixel replacement cannot be performed; for this reason, it has been necessary to perform distortion correction after conversion into an RGB image. In the present embodiment, however, the pixel at the target coordinate is generated by interpolation calculation from the peripheral pixels, so that the distortion correction processing can be performed on the RAW image without converting it to an RGB image.
- FIG. 8 is a diagram for explaining the calculation of the correction coefficient in the interpolation calculation according to the present embodiment.
- the color (RGB) is determined by the position of the pixel after distortion correction.
- the pixel data after distortion correction is calculated by interpolation processing based on pixel data around the pixel position (X, Y) before correction corresponding to the pixel position (X ′, Y ′) after correction.
- the interpolation process is performed by two stages of the first process and the second process.
- (1) in the first process, as shown in FIGS. 4 to 6 described later, the pixel data of four pixels in the vicinity of the coordinates (X, Y) of the pixel before correction (pixels 51 to 54 in FIG. 4) are obtained by interpolation processing.
- the pixel data of each of the four pixels is obtained by interpolation from the pixel data of a plurality of peripheral pixels having the same color as the pixel after correction.
- the positions of these four pixels correspond to the positions of the pixels of the image sensor and are arranged at predetermined positions.
- (2) in the second process, the pixel data at the (virtual) pixel at (X, Y) is obtained by interpolation processing from the pixel data obtained in the first process. This will be specifically described below.
- the interpolation processes in the first process and the second process are also referred to as a first interpolation process and a second interpolation process, respectively.
- FIG. 4 is a diagram showing the peripheral pixels used for interpolation calculation when the pixel after distortion correction processing is R (red) in the present embodiment, for the cases where the pixel before interpolation is R, i.e., R→R (a), and likewise B→R (b), oddG→R (c), and evenG→R (d).
- pixel data of a plurality of pixels arranged at predetermined positions around the coordinates (X, Y) before distortion correction are obtained by the interpolation process. For example, among the intersection points of the pixels in FIG. 4, the intersection closest to the coordinates (X, Y) is found, and the pixel data of the four pixels surrounding that intersection are calculated. For example, if the intersection closest to the coordinates (X, Y) is the one surrounded by the pixels 51 to 54, pixel data of these pixels 51 to 54 in the color of the pixel after the distortion correction processing are calculated.
- when the pixel 51 before interpolation in the first interpolation process is R (R→R), that is, when the pixel color is the same before and after interpolation, the pixel data of the R pixel 51 before interpolation is used directly as the pixel data R51 after interpolation.
- when the pixel 53 before interpolation is an odd-numbered G (oddG→R), the interpolated pixel data R53 is obtained using the four pixel data R1, R2, R3, and R4 of the surrounding R pixels.
- a simple averaging process may be performed, or an averaging process weighted according to the distance may be performed.
- similarly, the interpolated pixel data R54 is obtained using the pixel data R1, R2, R3, and R4 of the four pixels around the pixel 54.
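The first-stage interpolation for an R output can be sketched as below. This is a simplified reading of FIG. 4, assuming an RGGB mosaic with R at even row/even column: the R→R case passes the sample through, a B site averages its four diagonal R neighbors, and for the two G cases this sketch averages the two nearest R pixels rather than the four shown in the figure; boundary handling is omitted.

```python
def first_interpolation_r(raw, y, x):
    """First-stage interpolation of an R value at (row y, column x) of an
    assumed RGGB mosaic. R -> R passes the sample through; a B site
    averages its four diagonal R neighbors; the two G cases here use the
    two nearest R pixels (a simplification of FIG. 4, which uses four)."""
    if y % 2 == 0 and x % 2 == 0:
        return float(raw[y][x])                      # R -> R: use as-is
    if y % 2 == 1 and x % 2 == 1:                    # B -> R: 4 diagonal R's
        pts = [(y - 1, x - 1), (y - 1, x + 1), (y + 1, x - 1), (y + 1, x + 1)]
    elif y % 2 == 0:                                 # oddG -> R: left/right R's
        pts = [(y, x - 1), (y, x + 1)]
    else:                                            # evenG -> R: up/down R's
        pts = [(y - 1, x), (y + 1, x)]
    return sum(raw[j][i] for j, i in pts) / len(pts)
```

The B and G interpolations of FIGS. 5 and 6 follow the same pattern with the roles of the color sites exchanged, and a distance-weighted average could replace the plain average as the text allows.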
- in the second interpolation process, based on the pixel data (R51 to R54) of the pixels (51 to 54) arranged at the predetermined positions close to the coordinates (X, Y) before interpolation, obtained by the first interpolation process, the pixel data R after interpolation at the coordinates (X, Y) is obtained from the following equation (Equation 1).
- here, [] represents the Gaussian symbol (also referred to as the floor function): [X] is the largest integer not exceeding X.
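Since (Equation 1) itself is not reproduced in this text, the second-stage interpolation is sketched here as the standard bilinear blend using the floor-function fractional parts. This is consistent with the surrounding description but is an assumption, not guaranteed to match the patent's exact formula.

```python
import math

def second_interpolation(x, y, p00, p10, p01, p11):
    """Second-stage interpolation: bilinear blend of the four first-stage
    values surrounding the fractional source coordinate (X, Y).
    p00, p10, p01, p11 sit at ([X], [Y]), ([X]+1, [Y]), ([X], [Y]+1) and
    ([X]+1, [Y]+1), where [.] is the floor (Gaussian symbol)."""
    dx = x - math.floor(x)   # X - [X]
    dy = y - math.floor(y)   # Y - [Y]
    return (p00 * (1 - dx) * (1 - dy) + p10 * dx * (1 - dy)
            + p01 * (1 - dx) * dy + p11 * dx * dy)
```

When X and Y are integers the fractional parts vanish and the result reduces to the single value p00, matching the pass-through cases above.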
- FIG. 5 is a diagram showing the peripheral pixels used for interpolation calculation when the pixel after interpolation is B (blue) in the present embodiment, for the cases where the pixel before interpolation is B, i.e., B→B (a), and likewise R→B (b), oddG→B (c), and evenG→B (d).
- when the pixel before interpolation is B (B→B), the pixel data of the B pixel 61 before interpolation is used directly as the pixel data after interpolation.
- otherwise, the pixel data B1, B2, B3, and B4 of the four pixels 62a, 62b, 62c, and 62d at the four corners around the pixel 62 are used, and the interpolated pixel data B is obtained by averaging processing or the like.
- FIG. 6 is a diagram showing the peripheral pixels used for interpolation calculation when the pixel after interpolation is G (green) in the present embodiment, for the case where the pixel before interpolation is G, i.e., G→G (a), and the case where the pixel before interpolation is other than G (b).
- when the pixel before interpolation is G (G→G), the pixel data of the G pixel 71 before interpolation is used directly as the pixel data after interpolation.
- when the pixel 72 before interpolation is R or B (other than G→G), the pixel data G after interpolation is obtained by averaging processing or the like using the pixel data G1, G2, G3, and G4.
- in this way, pixel data can be obtained with high accuracy by interpolation calculation from the four pixels around the pixel before replacement (of the same color as the pixel after replacement), so distortion correction processing with little image degradation can be performed.
- therefore, the size of one block unit of the memory area is preferably equal to or larger than an area in which pixel data of four pixels can be stored.
- the image distortion correction and pixel data storage of FIGS. 7A and 7B will be further described with reference to FIG. 9. FIGS. 9A and 9B are diagrams for explaining the image distortion correction of FIGS. 7A and 7B.
- FIG. 9A is the same as FIG. 7A
- FIG. 9C is a schematic enlarged view of FIG. 9A
- FIG. 9D is a schematic enlarged view of FIG. 9B.
- an image photographed by the imaging device through the lens optical system contracts in a circular shape toward the center as shown in FIG. 9A due to the distortion of the lens optical system described above; this is particularly noticeable when the lens optical system is a wide-angle or fish-eye lens.
- when the image of FIG. 9A is processed as shown in FIG. 9B in the image processing step after shooting, the pixel data in the contracted effective image data area C shown in FIG. 9C are rearranged into an area including the invalid image data area D as shown in FIG. 9D, and an image similar to that seen with the naked eye is obtained.
- conventionally, the pixel data has a data structure in which a plurality of colors are stored in a continuous permutation of the Bayer array (see Patent Document 1).
- in the present embodiment, image data is arranged and stored in a memory area of one block for each RGB pixel type as shown in FIGS. 3A to 3C, and interpolation processing is performed using the stored pixel data of each color. Since the pixel data of the four pixels around the pixel before correction can be read from the memory area of one block and subjected to the interpolation processing as shown in FIGS. 4 and 5, the data can be accessed quickly.
- the memory read bus width (bits) is the same as the size of one block of the memory area, and the memory area size is set sufficiently larger than the unit of the plurality of pixels (four in this embodiment) used in the interpolation process. For this reason, pixel data of at least four pixels can be read out from one block of the memory area as shown in FIGS. 3A to 3C in one cycle, so the access time to the memory is shortened and high-speed processing is possible.
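Packing each per-color plane into fixed-size blocks, so that one block read returns several same-color pixels at once, can be sketched as follows. This is illustrative only; real hardware would align the blocks to the actual memory read bus width.

```python
import numpy as np

def pack_blocks(plane, block=4):
    """Pack a per-color plane row-wise into blocks of `block` pixels so
    that the pixels needed by one interpolation can be fetched with a
    single block read (illustrative; block size stands in for the bus
    width, which this sketch does not model)."""
    h, w = plane.shape
    assert w % block == 0, "width must be a multiple of the block size"
    return plane.reshape(h, w // block, block)

plane = np.arange(16, dtype=np.uint16).reshape(2, 8)
blocks = pack_blocks(plane)
# one block read delivers four consecutive same-color pixels
```

With the interleaved layout of FIG. 2B, the same four same-color pixels would be scattered across several words and require multiple accesses.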
- FIG. 10 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the present embodiment.
- the image processing apparatus 10 includes an imaging device 11 into which light of a captured image enters through the wide-angle lens A, a counter 12, a distance calculation unit 13, a distortion correction coefficient storage unit 14, a calculation unit 15, a correction LUT calculation unit 16, a distortion correction processing unit 17, an image buffer memory 19, and a memory control unit 18.
- the wide-angle lens A is composed of a lens optical system composed of a plurality of lenses, and can obtain an image with a wide angle of view.
- the image sensor 11 is composed of an image sensor such as a CCD or CMOS composed of a large number of pixels, and outputs a RAW image of a photographed image according to the Bayer array in FIG.
- the counter 12 detects the vertical synchronization signal VD or the horizontal synchronization signal HD from the image sensor 11 and outputs the coordinates (X ′, Y ′) after distortion correction.
- the distance calculation unit 13 calculates a distance L ′ from the center from the coordinates (X ′, Y ′) after distortion correction as shown in FIG.
- the distortion correction coefficient storage unit 14 includes a ROM, a RAM, and the like, and stores an image distortion correction coefficient corresponding to the lens characteristics of the wide-angle lens A.
- the calculation unit 15 calculates the distance L from the center before distortion correction based on the distance L′ from the center after distortion correction and the image distortion correction coefficient stored in the distortion correction coefficient storage unit 14, and calculates the coordinates (X, Y) before distortion correction from that distance L and the coordinates (X′, Y′) after distortion correction.
- the correction LUT calculation unit 16 calculates a correction LUT (lookup table) in which the distances L and L′ obtained as described above and the coordinates (X, Y) and (X′, Y′) are associated with each other.
- the distortion correction processing unit 17 refers to the correction LUT calculated by the correction LUT calculation unit 16 for the input RAW image data P, and corrects the distortion by exchanging pixels. At this time, the distortion correction processing unit 17 obtains each pixel data after replacement by the interpolation processing shown in FIGS. 4 to 6 and FIG. 8, using the pixel data stored for each color in the image buffer memory 19 described later. In this way, the RAW image data P′ after distortion correction is output from the distortion correction processing unit 17.
- the image buffer memory 19 has memory areas 19a, 19b, and 19c in which the pixel data are stored for each of R, G, and B and from which pixel data in units of four pixels can be read out, so that the color of a predetermined pixel after distortion correction can be interpolated from the 3×3 image data area shown in FIGS. 4 to 6 using the pixel data of the four pixels around the predetermined pixel.
- the image buffer memory 19 temporarily stores the RAW image data captured by the image sensor 11 in the memory areas 19a, 19b, and 19c in units of one block via the cache memory; at this time, the pixel data are arranged and stored for each of R, G, and B in the memory areas 19a to 19c as shown in FIGS. 3(a) to (c).
- the memory control unit 18 controls the input / output of RAW image data between the image buffer memory 19 and the distortion correction processing unit 17.
- the counter 12 detects the vertical synchronization signal VD or the horizontal synchronization signal HD from the electric signal from the image sensor 11 (S01), and the distortion-corrected coordinates (X′, Y′) are output from the counter 12 (S02).
- the output of the coordinates (X ′, Y ′) after distortion correction starts, for example, with the upper left corner of the rectangular area of the image after distortion correction in FIG. 7B as the starting point (0, 0).
- a distance L ′ from the center is calculated by the distance calculation unit 13 from the coordinates (X ′, Y ′) after distortion correction (S03).
- the calculation unit 15 calculates the distance L from the center before distortion correction from the distance L′ from the center after correction (S04).
- the pre-distortion coordinates (X, Y) are calculated from the distance L from the center before distortion correction and the coordinates (X ′, Y ′) after distortion correction by the correction LUT calculation unit 16 (S05).
- the RAW image data P from the image sensor 11 of FIG. 10 is stored for each of R, G, and B in the memory areas 19a to 19c of the image buffer memory 19 as shown in FIG. 3, and the RAW image data stored in the memory areas 19a to 19c are read out as needed under the control of the memory control unit 18.
- the distortion correction processing unit 17 performs the first-stage interpolation processing (first interpolation processing: see FIGS. 4 to 6) based on the read image data: peripheral pixels close to the pre-correction coordinates (X, Y) calculated in S05 are selected, and the pixel data of the selected peripheral pixels in the color of the coordinates (X′, Y′) after distortion correction (R51 to R54 in the example of FIG. 4) are calculated by interpolation processing (S06).
- the distortion correction processing unit 17 then calculates the pixel data of the pre-correction coordinates (X, Y) by the second-stage interpolation processing (second interpolation processing: see FIG. 8), based on the relative positions of the peripheral pixels selected in S06 with respect to the pre-correction coordinates (X, Y) and the pixel data of those peripheral pixels (S07).
- the pixel data of the coordinates (X, Y) before distortion correction calculated in S07 is used as the pixel data of the coordinates (X ′, Y ′) after distortion correction (S08).
- steps S01 to S08 described above are repeated while shifting by one pixel from the starting point (0, 0) at the upper left of the rectangular area of the image after distortion correction in FIG. 7B to, for example, the lower right end point (640, 480), so that distortion correction is performed on all the pixels of the image after distortion correction in FIG. 7B.
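Steps S01 to S08 can be sketched end to end as below. The helper callables stand in for the units of FIG. 10 (correction LUT, first and second interpolation); their interfaces are assumptions for illustration, not the patent's exact signatures.

```python
import math

def distortion_correct(raw_planes, width, height, lut, first_interp, second_interp):
    """End-to-end sketch of steps S01-S08: scan every post-correction pixel
    (X', Y'), map it back through the radial LUT to the fractional source
    coordinate (X, Y), and fill it by two-stage interpolation. All helper
    interfaces are hypothetical stand-ins for the units of FIG. 10."""
    cx, cy = width // 2, height // 2
    out = [[0.0] * width for _ in range(height)]
    for yp in range(height):                          # S01-S02: scan (X', Y')
        for xp in range(width):
            lp = math.hypot(xp - cx, yp - cy)         # S03: L' from center
            s = (lut(lp) / lp) if lp else 0.0         # S04: L before correction
            x = cx + (xp - cx) * s                    # S05: source (X, Y)
            y = cy + (yp - cy) * s
            vals = first_interp(raw_planes, x, y)     # S06: neighbor values
            out[yp][xp] = second_interp(x, y, vals)   # S07-S08: output pixel
    return out
```

Scanning the corrected image and pulling pixels from the source (rather than pushing source pixels outward) guarantees that every output pixel is filled exactly once.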
- by obtaining the pixel data of the pixel at the target coordinate by interpolation calculation from the pixel data of peripheral pixels, distortion correction processing with little image degradation can be performed on the RAW image as it is, without conversion into an RGB image; high-speed processing is therefore possible, and the amount of memory necessary for the pixel replacement can be reduced.
- since each RGB pixel data is read from the memory areas 19a to 19c, which store the pixel data for each color, before the interpolation calculation is performed, the waiting time is shorter than when reading from the storage state shown in FIG. 2B; the access time is shortened and high-speed processing is possible.
- the apparatus can be reduced in power consumption and heat generation.
- the effect of smoothing the gradation of the image can be obtained.
- the ISP processing can be performed after the distortion correction processing, and the image processing by the ISP processing can be performed on the distortion-corrected image.
- FIG. 12 is a block diagram illustrating a schematic configuration of the image forming apparatus according to the present embodiment.
- the image forming apparatus 50 includes the wide-angle lens A, the image processing apparatus 10 of FIG. 10, an ISP processing unit 20, an image display unit 30, and an image memory unit 40, and can be configured as a digital still camera.
- when the light of the captured image enters the image sensor 11 of FIG. 10 through the wide-angle lens A, the image forming apparatus 50 performs distortion correction processing on the RAW image data P from the image sensor 11 as shown in FIGS. 4 to 8 described above.
- The RAW image data P ′ after distortion correction is input to the ISP processing unit 20, and the distortion-corrected RAW image is subjected to image processing such as white balance, color correction, and gamma correction by the ISP processing unit 20.
- the processed image is displayed on the image display unit 30 made of liquid crystal or the like and is stored in the image memory unit 40.
- Since the RAW image obtained through the wide-angle lens A is subjected to distortion correction processing and the distortion-corrected RAW image is then output to the ISP processing unit 20 for ISP processing, the distortion correction can be performed before the ISP processing, and when the pixel color differs before and after the distortion correction, the pixel data can be obtained at high speed by interpolation calculation. Because ISP processing is performed on the distortion-corrected RAW image, a more natural image can be obtained at a relatively high speed.
- The lens placed in front of the image sensor 11 has been described as a wide-angle lens, but the present invention is not limited thereto; a fish-eye lens capable of obtaining a wide-field image, or any other lens that requires distortion correction, may also be used.
- Next, a case will be described in which the color filter of the image sensor does not have a Bayer array structure but includes pixels of color filters of colors other than RGB, such as complementary colors, that are used for the calculation of RGB (such colors are hereinafter simply referred to as complementary colors, and such pixels as complementary color pixels).
- FIG. 13 is a diagram schematically illustrating an example of an arrangement structure including complementary color pixels in the image sensor.
- In the embodiment described with reference to FIGS. 3 to 12, the data of each of R, G, and B are collected and stored in one block of memory area for each RGB pixel type, as shown in FIGS. 3A to 3C; interpolation processing is executed using the stored pixel data of each color, and distortion correction processing is performed based on the interpolated pixel data.
- In the other embodiment, the processing of FIGS. 3 to 12 is the same: the data of Ye and of G and B are each collected and stored in one block of memory area as shown in FIGS. 3A to 3C, interpolation processing is executed using the stored pixel data of each color, and distortion correction processing is performed based on the interpolated pixel data.
- FIGS. 14A to 14I are diagrams for explaining image distortion correction in the case of the arrangement structure of FIG. 13.
- FIG. 14A is the same as FIG. 9A, and FIG. 14C is a schematic enlarged view of FIG. 14A.
- FIG. 14D is the same as FIG. 9B for the B pixel data, and FIG. 14E is the same as FIG. 9B for the G and Ye pixel data.
- FIG. 14G is a schematic enlarged view of FIG. 14E, and FIG. 14I is a schematic enlarged view of FIG. 14F.
- Pixel data after distortion correction is stored in the memory such that pixel data of the colors used for the calculation of an RGB value are stored contiguously, and pixel data of the other colors is stored contiguously for each color. That is, Ye and G, which are used for calculating R, are stored contiguously in the memory, while B, which is a color other than these, is stored contiguously by itself. The reason will be described below.
- Each of the R, G, and B pixel data is obtained by calculation from the complementary color pixel data.
- R pixel data is usually expressed by the following equation (1); it is obtained by subtracting the G (Green) pixel data from the Ye (Yellow) pixel data:
- R (Red) = Ye (Yellow) − G (Green) … (1)
- The pixel data of Ye, which is a complementary color rather than a primary color (RGB), is not processed independently; normally, the R pixel data obtained by the calculation of equation (1) above is used. Therefore, it is desirable that the Ye pixel data after distortion correction processing be stored in the same memory area as the G pixel data, and that pixel data at close coordinates be contiguous with the G pixel data.
- FIG. 14 illustrates the processing in the other embodiment: distortion correction processing is performed on pixel data including the complementary color Ye; at that time, the Ye and G pixel data are stored in the same one-block memory area; the R pixel data is obtained by equation (1) above; distortion correction is performed on the R pixel data as in the figures described above; and the pixel data is stored in a one-block memory area.
- When converting to image data in which one pixel has B, G, and R pixel data, the pixel data of the pixels used for the calculation of RGB can be read in one cycle from one block of memory area, so the access time to the memory is shortened and high-speed processing is possible.
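A minimal sketch of this per-color contiguous ("planar") storage, assuming three one-block areas analogous to the memory areas 19a to 19c in the text; the class and method names are illustrative and not from the source.

```python
# Each color plane is one contiguous block, mirroring memory areas
# 19a-19c, so gathering the B, G, R data for one pixel is a single
# indexed read per plane rather than a strided walk over a mosaic.

class PlanarBuffer:
    def __init__(self, width, height):
        self.width = width
        # One contiguous block of memory per color.
        self.planes = {c: bytearray(width * height) for c in "RGB"}

    def store(self, color, x, y, value):
        self.planes[color][y * self.width + x] = value

    def read_bgr(self, x, y):
        """Return the (B, G, R) data for one pixel; the same offset
        indexes all three contiguous planes."""
        i = y * self.width + x
        return tuple(self.planes[c][i] for c in "BGR")

buf = PlanarBuffer(4, 2)
buf.store("R", 1, 0, 200)
buf.store("G", 1, 0, 100)
buf.store("B", 1, 0, 50)
print(buf.read_bgr(1, 0))   # -> (50, 100, 200)
```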
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Image Processing (AREA)
Abstract
Description
As described above, the color (RGB) of a pixel is determined by its position after distortion correction. The pixel data after distortion correction is calculated by interpolation processing based on the pixel data around the pre-correction pixel position (X, Y) corresponding to the post-correction pixel position (X', Y').
FIG. 4 shows the peripheral pixels used in the interpolation calculation in this embodiment when the pixel after distortion correction processing is R (red): (a) the case R→R, where the pre-interpolation pixel is R; (b) the case B→R; (c) the case oddG→R; and (d) the case evenG→R.
Based on the pixel data (R51 to R54) of the pixels (51 to 54) arranged at predetermined positions close to the pre-interpolation coordinates (X, Y) obtained by the first interpolation process, the interpolated pixel data R at the coordinates (X, Y) is obtained by the second interpolation process from the following equation (Equation 1).
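Equation 1 itself is not reproduced in this text; the sketch below assumes it is the standard bilinear weighting of the four neighboring pixels R51 to R54 by the fractional position of (X, Y) within that 2×2 neighborhood.

```python
# Sketch of the second interpolation process, assuming Equation 1 is a
# standard bilinear weighting (an assumption; the equation is not
# reproduced here).

def bilinear(r51, r52, r53, r54, fx, fy):
    """fx, fy: fractional parts of (X, Y) within the 2x2 neighborhood;
    r51..r54 are the neighbors at (0,0), (1,0), (0,1), (1,1)."""
    top = r51 * (1 - fx) + r52 * fx
    bottom = r53 * (1 - fx) + r54 * fx
    return top * (1 - fy) + bottom * fy

# At fx = fy = 0 the result is exactly r51; at fx = fy = 0.5 it is the
# mean of the four neighbors.
assert bilinear(10, 20, 30, 40, 0.0, 0.0) == 10
assert bilinear(10, 20, 30, 40, 0.5, 0.5) == 25.0
```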
(First interpolation process)
FIG. 5 shows the peripheral pixels used in the interpolation calculation in this embodiment when the interpolated pixel is B (blue): (a) the case B→B, where the pre-interpolation pixel is B; (b) the case R→B; (c) the case oddG→B; and (d) the case evenG→B.
By the second interpolation process, the interpolated pixel data B at the coordinates (X, Y) can be obtained from an equation similar to Equation 1, as in the case of R described above; a detailed description is omitted.
(First interpolation process)
FIG. 6 shows the peripheral pixels used in the interpolation calculation in this embodiment when the interpolated pixel is G (green): (a) the case G→G, where the pre-interpolation pixel is G; and (b) the case non-G→G, where the pre-interpolation pixel is a color other than G.
Next, as another embodiment, a case will be described in which the color filter of the image sensor does not have a Bayer array structure but includes pixels of color filters of colors other than RGB, such as complementary colors, that are used to calculate RGB (such colors are hereinafter simply referred to as complementary colors, and such pixels as complementary color pixels). In this other embodiment, each of the RGB pixel data must ultimately be obtained by calculation from the complementary color pixel data.
1. Yellow and Green → Red
2. Yellow and Red → Green
3. Cyan and Green → Blue
4. Cyan and Blue → Green
5. White and Yellow → Blue
6. White and Cyan → Red
7. White and Magenta → Green
8. Magenta and Red → Blue
9. Magenta and Blue → Red
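The nine combinations above can be sketched as a lookup of subtraction pairs, assuming each pair combines by subtraction in the same form as equation (1) (R = Ye − G); the function name, the table, and the 8-bit clamping are illustrative assumptions, not part of the text.

```python
# Which complementary-system pairs yield each primary, as listed above.
PAIRS = {
    "Red":   [("Yellow", "Green"), ("White", "Cyan"), ("Magenta", "Blue")],
    "Green": [("Yellow", "Red"), ("Cyan", "Blue"), ("White", "Magenta")],
    "Blue":  [("Cyan", "Green"), ("White", "Yellow"), ("Magenta", "Red")],
}

def primary_from(target, minuend_name, minuend, subtrahend_name, subtrahend):
    """Compute a primary as the difference of two complementary-system
    values, clamped to the 8-bit range (clamping is an assumption)."""
    assert (minuend_name, subtrahend_name) in PAIRS[target]
    return max(0, min(255, minuend - subtrahend))

# R from Ye and G, in the form of equation (1):
r = primary_from("Red", "Yellow", 180, "Green", 60)
print(r)  # -> 120
```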
FIG. 13 schematically shows an example of an array structure including complementary color pixels in the image sensor. In the embodiment described with reference to FIGS. 3 to 12, the data of each of R, G, and B are collected and stored in one block of memory area for each RGB pixel type as shown in FIGS. 3(a) to 3(c), interpolation processing is executed using the stored pixel data of each color, and distortion correction processing is performed based on the interpolated pixel data. In the other embodiment shown in FIG. 13 and subsequent figures, the processing of FIGS. 3 to 12 is the same: the data of Ye and of G and B are each collected and stored in one block of memory area as shown in FIGS. 3(a) to 3(c), interpolation processing is executed using the stored pixel data of each color, and distortion correction processing is performed based on the interpolated pixel data.
The pixel data of Ye, which is a complementary color rather than a primary color (RGB), is not processed independently; normally, the R pixel data obtained by the calculation of equation (1) above is used. Therefore, it is desirable that the Ye pixel data after distortion correction processing be stored in the same memory area as the G pixel data, and that pixel data at close coordinates be contiguous with the G pixel data.
11 image sensor
17 distortion correction processing unit
18 memory control unit
19 image buffer memory
19a to 19c memory areas
50 image forming apparatus
A wide-angle lens
Claims (11)
- An image distortion correction method for correcting distortion of an image captured by an image sensor through an optical system, the image sensor having a plurality of pixels each corresponding to one color, wherein, when the color of a pixel differs before and after distortion correction, the pixel data after distortion correction is obtained by interpolation processing using the pixel data of a plurality of pixels around the pre-correction pixel stored in a memory, and pixel data of the same color is stored contiguously in the memory for each color.
- An image distortion correction method for correcting distortion of an image captured by an image sensor through an optical system, the image sensor having a plurality of pixels each corresponding to one color, wherein corrected pixel data of the same color as the pre-correction color is obtained by interpolation processing using the pixel data of a plurality of pixels around the pre-correction pixel stored in a memory, and pixel data of the same color is stored contiguously in the memory for each color.
- The image distortion correction method according to claim 2, wherein the interpolation processing comprises: a first process in which, when a pixel arranged at a predetermined position around the pre-correction pixel has the same color as the corrected pixel, the pixel data of the pixel at that predetermined position is used as it is, and when its color differs from that of the corrected pixel, the pixel at that predetermined position is obtained by interpolation from the pixel data of a plurality of surrounding pixels of the same color as the corrected color; and a second process in which the corrected pixel data is obtained by interpolation from the relative positional relationship between the pre-correction pixel position and the pixels arranged at the predetermined positions, and from the pixel data of the plurality of pixels at the predetermined positions obtained in the first process.
- The image distortion correction method according to any one of claims 1 to 3, wherein the size of one block unit of the memory area that stores pixel data contiguously for each color is secured to be larger than the unit of the plurality of pixels used in the interpolation processing.
- The image distortion correction method according to any one of claims 1 to 4, wherein the plurality of colors include colors used for the calculation of RGB, and the pixel data after distortion correction is stored in the memory such that pixel data of colors used for the calculation of RGB are stored contiguously with one another.
- An image processing apparatus comprising: an optical system; an image sensor that has a plurality of pixels, each corresponding to one color, and captures an image through the optical system; an arithmetic unit that processes the image captured from the image sensor; and a memory, wherein, in the processing of correcting distortion of the image, when the color of a pixel differs before and after distortion correction, the arithmetic unit calculates the pixel data after distortion correction by interpolation processing using the pixel data of a plurality of pixels around the pre-correction pixel stored in the memory, and stores pixel data of the same color contiguously in the memory for each color.
- An image processing apparatus comprising: an optical system; an image sensor that has a plurality of pixels, each corresponding to one color, and captures an image through the optical system; an arithmetic unit that processes the image captured from the image sensor; and a memory, wherein, in the processing of correcting distortion of the image, the arithmetic unit calculates corrected pixel data of the same color as the pre-correction color by interpolation processing using the pixel data of a plurality of pixels around the pre-correction pixel stored in the memory, and stores pixel data of the same color contiguously in the memory for each color.
- The image processing apparatus according to claim 7, wherein the arithmetic unit performs the interpolation processing by: a first process in which, when a pixel arranged at a predetermined position around the pre-correction pixel has the same color as the corrected pixel, the pixel data of the pixel at that predetermined position is used as it is, and when its color differs from that of the corrected pixel, the pixel at that predetermined position is obtained by interpolation from the pixel data of a plurality of surrounding pixels of the same color as the corrected color; and a second process in which the corrected pixel data is obtained by interpolation from the relative positional relationship between the pre-correction pixel position and the pixels arranged at the predetermined positions, and from the pixel data of the plurality of pixels at the predetermined positions obtained in the first process.
- The image processing apparatus according to any one of claims 6 to 8, wherein the size of one block unit of the memory area in the memory that stores pixel data contiguously for each color is secured to be larger than the unit of the plurality of pixels used in the interpolation processing.
- The image processing apparatus according to any one of claims 6 to 9, wherein the plurality of colors include colors used for the calculation of RGB, and the pixel data after distortion correction is stored in the memory such that pixel data of colors used for the calculation of RGB are stored contiguously with one another.
- The image processing apparatus according to any one of claims 6 to 10, wherein the optical system is a wide-angle optical system.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010529757A JP5187602B2 (ja) | 2008-09-19 | 2009-09-15 | Image distortion correcting method and image processing apparatus |
US13/119,303 US20110170776A1 (en) | 2008-09-19 | 2009-09-15 | Image distortion correcting method and image processing apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008240425 | 2008-09-19 | ||
JP2008-240425 | 2008-09-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010032720A1 true WO2010032720A1 (ja) | 2010-03-25 |
Family
ID=42039544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/066081 WO2010032720A1 (ja) | 2009-09-15 | Image distortion correcting method and image processing apparatus |
Country Status (3)
Country | Link |
---|---|
US (1) | US20110170776A1 (ja) |
JP (1) | JP5187602B2 (ja) |
WO (1) | WO2010032720A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012105246A (ja) * | 2010-11-09 | 2012-05-31 | Avisonic Technology Corp | Image correction method and related image correction system |
JP2018205987A (ja) * | 2017-06-01 | 2018-12-27 | 株式会社リコー | Image processing apparatus, image processing method, and program |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014183473A (ja) * | 2013-03-19 | 2014-09-29 | Toshiba Corp | Optical system of electric apparatus |
US9536287B1 (en) * | 2015-07-09 | 2017-01-03 | Intel Corporation | Accelerated lens distortion correction with near-continuous warping optimization |
EP3331236A1 (en) * | 2016-11-30 | 2018-06-06 | Thomson Licensing | Method for rendering a final image from initial images acquired by a camera array, corresponding device, computer program product and computer-readable carrier medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001186533A (ja) * | 1999-12-22 | 2001-07-06 | Olympus Optical Co Ltd | Image processing apparatus |
JP2005086247A (ja) * | 2003-09-04 | 2005-03-31 | Olympus Corp | Imaging apparatus |
JP2008060893A (ja) * | 2006-08-31 | 2008-03-13 | Dainippon Printing Co Ltd | Interpolation calculation apparatus |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7839446B2 (en) * | 2005-08-30 | 2010-11-23 | Olympus Corporation | Image capturing apparatus and image display apparatus including imparting distortion to a captured image |
-
2009
- 2009-09-15 WO PCT/JP2009/066081 patent/WO2010032720A1/ja active Application Filing
- 2009-09-15 US US13/119,303 patent/US20110170776A1/en not_active Abandoned
- 2009-09-15 JP JP2010529757A patent/JP5187602B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001186533A (ja) * | 1999-12-22 | 2001-07-06 | Olympus Optical Co Ltd | Image processing apparatus |
JP2005086247A (ja) * | 2003-09-04 | 2005-03-31 | Olympus Corp | Imaging apparatus |
JP2008060893A (ja) * | 2006-08-31 | 2008-03-13 | Dainippon Printing Co Ltd | Interpolation calculation apparatus |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012105246A (ja) * | 2010-11-09 | 2012-05-31 | Avisonic Technology Corp | Image correction method and related image correction system |
US9153014B2 (en) | 2010-11-09 | 2015-10-06 | Avisonic Technology Corporation | Image correction method and related image correction system thereof |
JP2018205987A (ja) * | 2017-06-01 | 2018-12-27 | 株式会社リコー | Image processing apparatus, image processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
JP5187602B2 (ja) | 2013-04-24 |
JPWO2010032720A1 (ja) | 2012-02-09 |
US20110170776A1 (en) | 2011-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5049155B2 (ja) | Progressive/interlace conversion method, image processing apparatus, and imaging apparatus | |
JP4911628B2 (ja) | Image processing method, image processing apparatus, and imaging apparatus | |
JP5451782B2 (ja) | Image processing apparatus and image processing method | |
US8417031B2 (en) | Image processing apparatus and method | |
JP5008578B2 (ja) | Image processing method, image processing apparatus, and imaging apparatus | |
TWI486062B (zh) | Imaging apparatus | |
JP5078148B2 (ja) | Image processing apparatus and imaging apparatus | |
JP4966894B2 (ja) | Imaging apparatus | |
JP5062846B2 (ja) | Imaging apparatus | |
JP5240453B2 (ja) | Image processing method, image processing apparatus, and imaging apparatus | |
JP2004241991A (ja) | Imaging apparatus, image processing apparatus, and image processing program | |
JP2009218909A (ja) | Imaging apparatus | |
JP5187602B2 (ja) | Image distortion correcting method and image processing apparatus | |
US20090167917A1 (en) | Imaging device | |
JP5398750B2 (ja) | Camera module | |
JP5020792B2 (ja) | Composite image generating apparatus and composite image generating method | |
JP2009157733A (ja) | Image distortion correction method, image distortion correction apparatus, and image forming apparatus | |
JP2009225119A (ja) | Imaging apparatus | |
JP5268321B2 (ja) | Image processing apparatus, image processing method, and image processing program | |
JP5535099B2 (ja) | Camera module and image recording method | |
JP5278421B2 (ja) | Imaging apparatus | |
WO2018193544A1 (ja) | Imaging apparatus and endoscope apparatus | |
JP5333163B2 (ja) | Imaging apparatus | |
JP4517896B2 (ja) | Imaging apparatus and pixel interpolation method thereof | |
JP2012227869A (ja) | Image processing apparatus, image processing method, and digital camera | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09814565 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010529757 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13119303 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09814565 Country of ref document: EP Kind code of ref document: A1 |