WO2005081517A1 - Imaging Device and Imaging Method - Google Patents
Imaging Device and Imaging Method
- Publication number
- WO2005081517A1, PCT/JP2005/002714 (JP2005002714W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- horizontal
- line
- pixel
- correction
- movement amount
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
Definitions
- The present invention relates to an imaging device using a solid-state imaging element, and more particularly to camera shake correction in an imaging device.
- Conventionally, video cameras, surveillance cameras, industrial cameras, and the like have been known as imaging devices.
- Mobile phones, personal digital assistants (PDAs), and the like have also become widespread, and there is strong market demand to provide an imaging function even in these small mobile devices.
- Fig. 1 shows a configuration in a case where camera shake correction is performed in an imaging apparatus using a CCD sensor as an imaging element.
- The imaging device includes a CCD sensor 61 having a larger number of pixels than the output image; an A/D converter 62 that converts the analog signal 67 from the CCD sensor 61 into a digital signal 68; a signal processing unit 63 that generates a YUV output 69 from the digital signal 68; a memory 64 that stores the YUV output 69; and a memory control unit 65 that, taking the horizontal movement amount 73 and the vertical movement amount 72 from the motion detection circuit 66 as input, reads out the YUV output 70 recorded in the memory 64 as the digital output 71.
- The analog signal 67 read from the CCD sensor 61 is converted into a digital signal 68 by the A/D converter 62.
- The signal processing unit 63 generates a YUV output 69 from the digital signal 68 and writes the captured image to the memory 64.
- The memory control unit 65 cuts out an image having the output pixel count from the image in the memory 64 and outputs it as the digital output 71.
- The imaging apparatus repeats this process to capture video. When the sensor moves due to camera shake or the like, an image shifted horizontally and vertically from the previous frame image is captured. This is camera shake.
- Figure 2 shows the correction procedure at this time.
- The motion detection circuit 66 detects a horizontal movement amount 73 and a vertical movement amount 72 every frame period.
- The memory control unit 65 sets, as the horizontal read start position of the output image f2, a position shifted by the horizontal movement amount from the previous frame f1, and as the vertical read start position, a position shifted by the vertical movement amount from the previous frame f1. By reading the output image f2 from this position, camera shake correction is realized.
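The conventional procedure above amounts to cropping a read window out of an oversized sensor frame, with the window shifted by the detected movement amounts. A minimal sketch; the function name, parameter names, and clamping policy are illustrative, not from the patent:

```python
import numpy as np

def stabilized_crop(frame, out_h, out_w, dx, dy, x0, y0):
    """Cut an out_h x out_w window out of an oversized sensor frame.

    (x0, y0) is the previous read start position; (dx, dy) is the
    detected horizontal/vertical movement since the last frame.
    Clamping keeps the window inside the sensor area.
    """
    x = int(np.clip(x0 + dx, 0, frame.shape[1] - out_w))
    y = int(np.clip(y0 + dy, 0, frame.shape[0] - out_h))
    return frame[y:y + out_h, x:x + out_w], (x, y)

# A 12x16 sensor frame, 8x10 output window; camera moved 2 px right, 1 px down
sensor = np.arange(12 * 16).reshape(12, 16)
out, (x, y) = stabilized_crop(sensor, 8, 10, dx=2, dy=1, x0=3, y0=2)
```

Since the read start position moves opposite to the detected shake, the cropped window cancels the frame-to-frame displacement.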
- Patent Document 1 Japanese Patent Application Laid-Open No. 2000-147586
- FIG. 3A and FIG. 3B show the difference between a MOS sensor and a CCD sensor.
- The MOS sensor operates the shutter line by line and reads out the data sequentially for each line (a rolling shutter).
- The CCD sensor operates the shutter for all pixels simultaneously and transfers them to the vertical CCDs (a global shutter).
- An object of the present invention is to provide an imaging device that corrects the intra-frame image distortion generated in a MOS sensor with a small circuit scale.
- To achieve this, an imaging device comprises: a MOS sensor having a light receiving surface composed of a plurality of pixel portions arranged in a plurality of lines; detecting means for detecting a horizontal movement amount of the image corresponding to at least two of the lines read out line by line in each horizontal period from the MOS sensor; determining means for determining, according to the horizontal movement amount, the head position of at least one of the plurality of lines; and horizontal correction means for generating a corrected image in accordance with the determined head position.
- Here, the detection means may detect the horizontal movement amount of the image for every pair of adjacent lines among the plurality of lines. The determining means may determine the head position of at least one of the at least two lines according to the horizontal movement amount, or may determine, according to the horizontal movement amount, the head position of the line read out later of two adjacent lines.
- With this configuration, intra-frame image distortion generated in the MOS sensor, particularly distortion in the horizontal direction, can be corrected.
- Moreover, this correction is achieved with a small circuit scale and a small number of components.
- The detection means may be configured to include an acceleration sensor for detecting acceleration from the motion of the imaging device, and calculation means for calculating the horizontal movement amount from the detected acceleration.
- The horizontal movement amount can thus be detected easily using an existing motion sensor such as an angular velocity or acceleration sensor.
- Here, the acceleration sensor may detect the acceleration every horizontal period, and the calculation means may calculate the horizontal movement amount in one horizontal period.
- The horizontal correction means may include reading means for reading, from the MOS sensor, pixel signals for the number of horizontal pixels starting from the head position determined by the determining means.
- With this configuration, the pixel signals for the number of horizontal pixels required for the image are read from the determined head position, so horizontal correction is performed simultaneously with line readout.
- The determination means may determine the head position down to a sub-pixel position, and the horizontal correction means may further include a horizontal interpolation unit that corrects the sub-pixel position by pixel interpolation on the pixel column of the line read by the reading means.
- With this configuration, in addition to correcting the head position in units of the horizontal pixel pitch, the correction can be performed in sub-pixel units.
- The imaging apparatus may further include a storage unit that stores a frame image from the MOS imaging sensor, with the horizontal correction unit correcting the head position on the frame image stored in the storage unit.
- The detecting means may further detect a vertical movement amount of the image, and the imaging device may further comprise vertical correction means for correcting vertical expansion and contraction distortion of the captured image in accordance with the detected vertical movement amount.
- The vertical correction means may comprise: a line buffer that holds pixel signals of a plurality of lines read from the MOS sensor; a determination unit that determines a correction line position according to the vertical movement amount detected by the detection means; and vertical interpolation means that calculates the pixel signals at the correction line position by pixel interpolation between lines, using the pixel signals of the lines held in the line buffer and the pixel signals read from the MOS sensor.
- The detection means may further detect a displacement amount between two frame images stored in the storage means, and the horizontal correction means and the vertical correction means may perform inter-frame correction according to the displacement amount.
- In this way, intra-frame image distortion correction, which has been a drawback of conventional MOS sensors, can be realized with a small circuit scale and a small number of components.
- FIG. 1 is a diagram showing a configuration in a case where camera shake correction is performed in an imaging apparatus using a CCD sensor as an imaging element.
- FIG. 2 is an explanatory view showing a procedure of camera shake correction in a conventional technique.
- FIG. 3A is an explanatory diagram showing a shutter operation of a MOS sensor.
- FIG. 3B is an explanatory diagram showing a shutter operation of the CCD sensor.
- FIG. 4 is a block diagram showing a configuration of a MOS imaging device according to Embodiment 1 of the present invention.
- FIG. 5A is an explanatory diagram of horizontal correction.
- FIG. 5B is an explanatory diagram of vertical correction.
- FIG. 6 is a diagram showing a positional relationship between a horizontal angular velocity sensor, a vertical angular velocity sensor, and a light receiving surface.
- FIG. 7A is an explanatory diagram showing a method of calculating the horizontal movement amount.
- FIG. 7B is an explanatory diagram showing a method of calculating the amount of vertical movement.
- FIG. 8 is a flowchart showing a process of correcting an image distortion in capturing one frame of image.
- FIG. 9A is an explanatory diagram showing a head position of a pixel to be a head in a line.
- FIG. 9B is an explanatory diagram showing a head position of a pixel to be a head in a line.
- FIG. 10A is an explanatory diagram of a pixel position correction process on a sub-pixel basis.
- FIG. 10B shows an example of a circuit that performs linear interpolation in a correction unit.
- FIG. 11 is a flowchart showing details of a vertical correction process.
- FIG. 12A is an explanatory diagram of a vertical correction process.
- FIG. 12B is an explanatory diagram of a vertical correction process.
- FIG. 13A is an explanatory diagram of a vertical correction process.
- FIG. 13B is an explanatory diagram of a vertical correction process.
- FIG. 14A is an explanatory diagram of a vertical correction process for a black-and-white image.
- FIG. 14B is an explanatory diagram of a vertical correction process for a black-and-white image.
- FIG. 14C is an explanatory diagram of a vertical correction process for a black-and-white image.
- FIG. 15A is an explanatory diagram of a vertical correction process for a color image.
- FIG. 15B is an explanatory diagram of a vertical correction process for a color image.
- FIG. 15C is an explanatory diagram of a vertical correction process for a color image.
- FIG. 16 is a block diagram showing a configuration of an imaging device according to Embodiment 2 of the present invention.
- FIGS. 17(a)-(c) are explanatory diagrams of the intra-frame correction process and the inter-frame correction process.
- FIG. 4 is a block diagram illustrating a configuration of the MOS imaging device according to Embodiment 1 of the present invention.
- This imaging device includes a correction unit 10, a light receiving surface 12, a horizontal drive unit 13, a vertical drive unit 14, an A / D converter 15, a signal processing unit 16, a calculation unit 17, an angular velocity sensor 18, and an angular velocity sensor 19.
- the correction unit 10 performs horizontal correction for correcting horizontal image distortion occurring in one frame and vertical correction for correcting vertical image distortion. The correction of the image distortion will be described with reference to FIGS. 5A and 5B.
- FIG. 5A is an explanatory diagram of horizontal correction.
- The image size of the frame image f10 is smaller than the imaging area m1 of the light receiving surface 12.
- The subject P13 is originally a rectangular parallelepiped; however, because the imaging apparatus moved to the left during imaging, horizontal image distortion has occurred such that the image leans obliquely (see FIG. 3A).
- To cancel this horizontal image distortion, the correction unit 10 and the horizontal drive unit 13 adjust the head position of each line according to the horizontal movement amount, and read out pixel signals for the number of horizontal pixels from the adjusted head position.
- The horizontal drive unit 13 adjusts the head position in pixel units, while the correction unit 10 adjusts it in sub-pixel units, smaller than a pixel, by inter-pixel interpolation.
- As a result, the horizontal image distortion is corrected in the frame image f10b.
- FIG. 5B is an explanatory diagram of the vertical correction.
- The subject P11 shows image distortion stretched in the vertical direction because the imaging device itself moved upward during imaging (see FIG. 3A).
- The correction unit 10 has a line buffer that holds the pixel values of a plurality of lines (for example, about three lines), and performs vertical pixel interpolation so as to cancel the vertical image distortion.
- According to the vertical movement amount, the line positions are corrected using the frame image f20a, which is longer in the vertical direction than the frame image f20.
- The line positions and the number of lines are corrected by pixel interpolation between lines so that the number of lines matches that of the frame f20.
- As a result, the vertical image distortion is corrected in the frame image f20b.
- the light receiving surface 12, the horizontal drive unit 13, and the vertical drive unit 14 constitute a MOS image sensor.
- the light receiving surface 12 has the imaging area ml shown in FIGS. 5A and 5B.
- The horizontal drive unit 13 reads out pixel signals for the number of horizontal pixels from a line of the frame image f10a or f20a and sequentially outputs each pixel signal as the analog signal 20. At this time, the horizontal drive unit 13 adjusts the read start position within each line, in pixel units, according to the horizontal movement amount output from the calculation unit 17.
- The vertical drive unit 14 selects the lines of the frame image f10a or f20a one by one every horizontal period. At that time, the vertical drive unit 14 adjusts the lines to be selected according to the vertical movement amount output from the calculation unit 17.
- The A/D converter 15 converts the analog signal 20, driven by the horizontal drive unit 13 and already horizontally corrected in pixel units, into the digital signal 21 and outputs it to the correction unit 10.
- The signal processing unit 16 generates the YUV output signal 22 from the digital signal 21, which is expressed in RGB.
- The angular velocity sensor 18 is installed on the vertical center line of the light receiving surface 12 as shown in FIG. 6 and detects the horizontal angular velocity of the light receiving surface 12.
- The angular velocity sensor 19 is installed on the horizontal center line of the light receiving surface 12 as shown in FIG. 6 and detects the vertical angular velocity of the light receiving surface 12.
- An acceleration sensor may be used instead of the angular velocity sensors 18 and 19.
- the calculation unit 17 calculates the amount of movement in the horizontal and vertical directions for each horizontal cycle based on the angular velocities output from the angular velocity sensors 18 and 19.
- FIG. 7A is an explanatory diagram showing how the calculation unit 17 calculates the horizontal movement amount. As shown in FIG. 7A, the light receiving surface 12 and the lens 101 are assumed to be separated by the focal length f of the lens 101.
- The calculation unit 17 calculates the rotation angle θx by integrating the angular velocity ωx detected by the angular velocity sensor 18 over one horizontal period, and then calculates the horizontal movement amount of the image on the light receiving surface 12 in one horizontal period as f·tan(θx).
- FIG. 7B is an explanatory diagram showing the vertical movement amount calculation. As in FIG. 7A, the calculation unit 17 calculates the vertical movement amount in one horizontal period as f·tan(θy).
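The relation between the integrated angular velocity and the image shift can be sketched as follows. Only the formula f·tan(θ) comes from the text; the sampling scheme, names, and numbers are assumptions:

```python
import math

def movement_per_line(omega_samples, dt, focal_len_px):
    """Image shift on the sensor during one horizontal period.

    omega_samples: angular-velocity readings (rad/s) taken during the
    horizontal period; dt: sampling interval (s); focal_len_px: lens
    focal length expressed in pixel pitches.  Integrating omega gives
    the rotation angle theta; the shift is then f * tan(theta).
    """
    theta = sum(omega_samples) * dt          # rectangle-rule integration
    return focal_len_px * math.tan(theta)

# e.g. constant 0.01 rad/s over a 100 microsecond horizontal period, f = 1000 px
shift = movement_per_line([0.01] * 10, dt=1e-5, focal_len_px=1000.0)
```

For the small angles involved in one horizontal period, tan(θ) ≈ θ, so the shift is nearly proportional to the integrated angular velocity.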
- FIG. 8 is a flowchart showing a process of correcting an image distortion in capturing an image of one frame.
- Loop 1 (S501-S510) represents the horizontal correction and vertical correction performed while reading out the i-th line (hereinafter, line i).
- First, the calculation unit 17 detects the horizontal movement amount Mhi and the vertical movement amount Mvi in one horizontal period (S502, S503). For the first line (line 1) of the frame image, the horizontal and vertical movement amounts are zero.
- The horizontal movement amount Mhi is expressed in units of the pixel pitch, and the vertical movement amount Mvi in units of the line pitch.
- For example, a horizontal movement amount Mhi of 1.00 means a movement of one pixel pitch, and 0.75 means a movement of 3/4 of a pixel pitch; a vertical movement amount of 0.5 means a movement of 1/2 of a line pitch.
- FIG. 9A is an explanatory diagram of the head positions determined by the horizontal drive unit 13 when the frame image is monochrome.
- The horizontal drive unit 13 sets the head position of the first horizontal line to the fixed position S0.
- The head position S1 of the next line is shifted from S0 by M1, where M1 is the integer part of the horizontal movement amount Mh1 and movement to the left is taken as positive.
- The head positions S2, S3, and so on are determined in the same way for the number of lines to be output.
- This reading method is hereinafter referred to as horizontal shift readout. In horizontal shift readout, horizontal correction is performed in pixel units (pixel-pitch units).
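Horizontal shift readout can be sketched as follows. The split into an integer part (handled by the horizontal drive unit 13) and a fractional part (left for the sub-pixel stage) follows the description; the exact accumulation policy is an interpretation:

```python
def head_positions(s0, horizontal_moves):
    """Per-line read start positions for horizontal shift readout.

    s0: fixed head position of line 1; horizontal_moves: Mh2, Mh3, ...,
    the horizontal movement in pixel pitches (leftward positive) during
    each horizontal period.  Each line's start position shifts by the
    integer part of the movement; the fractional part is returned for
    the sub-pixel interpolation stage.
    """
    positions = [s0]
    frac_parts = [0.0]
    for m in horizontal_moves:
        whole = int(m)                 # integer part: drive-unit shift
        positions.append(positions[-1] + whole)
        frac_parts.append(m - whole)   # fractional part: sub-pixel stage
    return positions, frac_parts

pos, frac = head_positions(16, [1.75, 0.25, -1.5])
```

Note that Python's `int()` truncates toward zero, so a movement of -1.5 splits into a whole shift of -1 and a fraction of -0.5.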
- FIG. 9B is an explanatory diagram of the head positions determined by the horizontal drive unit 13 when the frame image is in color.
- The difference from the monochrome case, where the minimum shift unit is one pixel, is that in the color case the minimum shift unit is two pixels horizontally (one pixel in the YUV signal), because a block of four pixels, two horizontal and two vertical, forms one YUV pixel when the YUV signal is generated at the subsequent stage.
- FIG. 9B shows the RGB case, but the same applies to complementary color filters and other color filters.
- Next, the horizontal drive unit 13 reads, from line i, pixel signals for the number of horizontal pixels of the frame image starting from the determined head position (S505).
- The read pixel signals are held in the line buffer in the correction unit 10 via the A/D converter 15.
- The correction unit 10 then corrects the pixel signals of the one line held in the line buffer (corresponding to one line of the frame image) in sub-pixel units, smaller than the pixel pitch, according to the fractional part of the horizontal movement amount Mhi (S506).
- FIG. 10A is an explanatory diagram of the pixel position correction processing in sub-pixel units. In FIG. 10A, the fractional part of the horizontal movement amount Mhi is denoted α.
- Pixels #1, #2, ... indicate the pixels held in the line buffer, and the corrected pixels are Q1, Q2, ....
- Each corrected pixel is obtained by linear interpolation between adjacent buffered pixels using α; the same applies to the pixels Q2, Q3, and so on. FIG. 10B shows an example of a circuit in the correction unit 10 that performs this linear interpolation to correct the horizontal pixel position in sub-pixel units.
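The sub-pixel correction of S506 is plain two-tap linear interpolation. A sketch; the weight assignment (which neighbour receives α) is an assumption consistent with FIG. 10A:

```python
def subpixel_shift(line, alpha):
    """Shift a line of pixels by the fractional amount alpha (0 <= alpha < 1)
    using two-tap linear interpolation, as the circuit of FIG. 10B does.

    Q[j] = (1 - alpha) * P[j] + alpha * P[j+1].
    The output is one sample shorter than the input.
    """
    return [(1 - alpha) * a + alpha * b for a, b in zip(line, line[1:])]

q = subpixel_shift([10.0, 20.0, 40.0], alpha=0.25)
```

In hardware this is one multiplier-adder pair per output pixel, which is why the correction fits in a small circuit.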
- Next, the correction unit 10 performs vertical correction processing that corrects expansion and contraction in the vertical direction according to the vertical movement amount Mvi (S508). More specifically, the correction unit 10 calculates, by pixel interpolation, the pixel signals at the line position corresponding to the vertical movement amount Mvi, using the pixel signals Qj of line (i-1) or line (i+1) held in the line buffer together with those of line i.
- FIG. 12A is an explanatory diagram of the vertical correction processing.
- In FIG. 12A, the horizontal direction of the diagram corresponds to the vertical direction of the image.
- White circles represent the first pixels Q1 (called original pixels) of lines 1, 2, and so on.
- Black circles show the pixels after interpolation (called interpolated pixels) at the line positions after vertical correction.
- Here, Mvi is -0.25 (the case where the image moved downward by 1/4 pixel after the reading of line 1 and before the reading of line 2).
- Therefore, the line pitch between original lines 1 and 2 is 1, while the pitch between interpolated lines 1 and 2 is 5/4.
- The correction unit 10 determines that the position of interpolated line 2 divides the interval between original lines 2 and 3 in the distance ratio 1/4 to 3/4. The correction unit 10 then linearly interpolates the corresponding pixels of original lines 2 and 3, using the inverse of the distance ratio as the weighting factors, to calculate each pixel value of interpolated line 2. As shown in FIG. 12A, the weighting factors in this case are 3/4 and 1/4. In this way, when the imaging apparatus moves downward, the image is expanded so as to cancel the compressive vertical image distortion.
- FIG. 12B is an explanatory diagram of the case where Mvi is -1/n. In this case, the weighting factors used for linear interpolation between original lines 2 and 3 are 1/n and (1 - 1/n).
- FIG. 13A shows the case where Mvi is +0.25 (the case where the image moved upward by 1/4 pixel after reading line 1 and before reading line 2).
- FIG. 13A differs from FIG. 12A in that the linear interpolation is performed between original lines 1 and 2.
- FIG. 13B is an explanatory diagram of the case where Mvi is +1/n. In this case, the weighting factors are 1/n and (1 - 1/n).
- The correction unit 10 and the vertical drive unit 14 also correct the iteration count of Loop 1: when the number of interpolated lines exceeds the number of original lines, the iteration count is decremented by 1, and when the number of interpolated lines falls below the number of original lines, the iteration count is incremented by 1.
- In this way, the correction unit 10 repeats the horizontal line reading process until the number of lines after interpolation reaches the number of vertical lines required for the frame image, or until the reading of horizontal lines reaches the last line of the imaging area.
- FIG. 11 is a flowchart showing details of the vertical correction processing.
- First, the correction unit 10 calculates the accumulated vertical movement amount up to the current line from the values Mvi supplied by the calculation unit 17 (S801), calculates the position of the interpolated line and its distance ratio to the neighboring original lines (S802), and calculates the inverse of the distance ratio as the weighting factors (S803).
- For example, in the case of FIG. 12A, the position of interpolated line 2 is 5/4, the distance ratio is 1/4 to 3/4, and the weighting factors are 3/4 and 1/4; in the case of FIG. 13A, the position of interpolated line 2 is 3/4, the distance ratio is 3/4 to 1/4, and the weighting factors are 1/4 and 3/4.
- After that, the correction unit 10 generates the interpolated line by pixel interpolation between original lines in Loop 2 (S804-S809). That is, it reads the pixel value Qj from the original line located immediately before the interpolated line position (S805), reads the pixel value Qj from the original line located immediately after it (S806), and calculates the interpolated pixel value by linear interpolation using the weighting factors (S807). In this way, the correction unit 10 corrects vertical image distortion caused by vertical movement of the imaging device.
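The vertical correction of S801-S807 can be sketched as resampling the line positions against the accumulated vertical movement. The sign convention (downward movement negative, as in FIG. 12A) and the boundary clamping are assumptions; the weighting follows the inverse-distance-ratio rule of the text:

```python
def vertical_correct(orig_lines, vertical_moves):
    """Resample line positions to cancel vertical stretch/shrink.

    orig_lines: rows read from the sensor; vertical_moves: Mv2, Mv3, ...
    in line pitches (downward movement negative).  The position of
    interpolated line i in original-line coordinates is i minus the
    accumulated movement; each output row is a linear blend of the two
    original rows bracketing that position.
    """
    out = []
    acc = 0.0
    for i in range(len(orig_lines)):
        if i > 0:
            acc += vertical_moves[i - 1]
        pos = i - acc                              # accumulated correction
        lo = max(0, min(int(pos), len(orig_lines) - 2))
        d = max(0.0, min(pos - lo, 1.0))           # distance to earlier line
        row = [(1 - d) * a + d * b
               for a, b in zip(orig_lines[lo], orig_lines[lo + 1])]
        out.append(row)
    return out

# Mv = -0.25 after line 1: interpolated line 2 sits at position 5/4,
# blended 3/4 : 1/4 between original lines 2 and 3 (cf. FIG. 12A).
lines = [[0.0], [4.0], [8.0], [12.0]]
corrected = vertical_correct(lines, [-0.25, 0.0, 0.0])
```

Because only the two bracketing rows are needed at any time, a line buffer of about three lines suffices, matching the small-circuit claim.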
- FIG. 14A is an explanatory diagram of the vertical correction process for a monochrome image, where the vertical movement amount from the first horizontal line to the second is m1, and from the second to the third is m2.
- FIG. 15A is an explanatory diagram of the vertical correction process for a color image.
- The first and third lines are composed of R and G, and the second and fourth lines of B and G; that is, odd lines are composed of R and G, and even lines of B and G.
- As shown in FIGS. 15B and 15C, the vertical distortion is corrected by performing the vertical correction processing described above between odd lines and between even lines.
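For Bayer-type color data, the correction must stay within rows of the same parity so color planes are not mixed. A sketch of the parity split and re-interleave; the per-parity movement amounts are simplified here, and the single-plane corrector is passed in as a parameter rather than fixed:

```python
def vertical_correct_bayer(rows, moves, interp):
    """Apply vertical correction separately to odd and even Bayer rows.

    In an RGGB-type layout, rows 0, 2, 4, ... carry R/G and rows
    1, 3, 5, ... carry B/G, so interpolation must stay within rows of
    the same parity (FIGS. 15B, 15C).  `interp(rows, moves)` is any
    single-plane vertical corrector.
    """
    even = interp(rows[0::2], moves)
    odd = interp(rows[1::2], moves)
    out = []
    for e, o in zip(even, odd):     # re-interleave the two parities
        out.extend([e, o])
    return out

# identity "corrector" just to show the parity split and re-interleave
rows = [[1], [2], [3], [4]]
result = vertical_correct_bayer(rows, [], lambda r, m: r)
```

Because same-parity lines are two line periods apart, a real corrector would accumulate two horizontal periods' worth of movement per step within each parity group.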
- Although the method of reading out two lines at a time has been described here, any readout method may be used as long as it satisfies the conditions for generating the YUV signal.
- Since the correction unit 10 only needs a line buffer of about three lines, no frame memory for correction is required in the subsequent processing. In other words, the imaging device realizes intra-frame image distortion correction, which was a drawback of the conventional MOS sensor, with a configuration having a small circuit scale and a small number of components.
- The circuit of the signal processing unit can also be reduced, making the device applicable to small portable equipment such as mobile phones and PDAs.
- The pixel values subjected to image distortion correction in the correction unit 10 are converted into a YUV signal by the YUV signal processing unit.
- The YUV signal is output to a signal processing unit (not shown), for example a JPEG circuit.
- Although the correction unit 10 here performs the correction processing on the digital pixel values output from the A/D converter 15, it may instead be configured to perform the correction processing on the analog data at the input side of the A/D converter 15.
- FIG. 16 is a block diagram illustrating a configuration of an imaging device according to Embodiment 2 of the present invention.
- In this imaging apparatus, the same components as those of the imaging apparatus shown in FIG. 4 are denoted by the same reference numerals; description of the common points is omitted, and the differences are mainly described below.
- the light receiving surface 42, the horizontal drive unit 43, and the vertical drive unit 44 may be equivalent to a conventional MOS sensor.
- the memory 47 is a memory that holds one frame image and has a work area for intra-frame correction processing and inter-frame correction processing.
- the frame image output from the signal processing unit 16 has horizontal and vertical image distortions.
- the correction unit 48 performs an intra-frame correction process and an inter-frame correction process on a frame image held in the memory 47.
- As the intra-frame correction processing, the correction unit 48 performs the horizontal correction processing and the vertical correction processing described in Embodiment 1 on the frame image held in the memory 47. That is, of the correction processing shown in FIG. 8, the correction unit 48 executes the per-pixel horizontal correction (horizontal shift readout), the per-sub-pixel horizontal correction, and the vertical correction (FIG. 11) on the frame image in the memory 47. For example, the correction unit 48 determines the head position of each line according to the horizontal movement amount and rearranges the frame image stored in the memory 47 according to the determined head positions.
- In doing so, the correction unit 48 performs horizontal correction in sub-pixel units in addition to horizontal correction in pixel units. Thereafter, the correction unit 48 determines an interpolated line position for each line according to the vertical movement amount, calculates the pixel signals at the correction line positions by interpolating pixels between lines, and stores the result in the memory 47.
- the correction unit 48 performs a camera shake correction between frames as an inter-frame correction process.
- FIG. 17 is an explanatory diagram of the intra-frame correction process and the inter-frame correction process performed by the correction unit 48.
- in FIG. 17, image distortion within a frame and image displacement due to camera shake between frames occur simultaneously. That is, the subject P30 in the image exhibits oblique distortion and elongation distortion caused by movement of the imaging device toward the upper left, and its position is shifted from that in the immediately preceding frame image f10.
- (B) of the figure is an explanatory diagram showing an intra-frame correction process and an inter-frame correction process.
- as the intra-frame correction processing, the correction unit 48 performs the horizontal correction processing (in pixel units and sub-pixel units) and the vertical correction processing shown in FIG. 8. Further, as the inter-frame correction processing, it performs position correction.
- the position correction shifts the frame image so as to cancel the horizontal position shift amount and the vertical position shift amount accumulated in one vertical period.
- as a result, a frame image f2 is obtained in which not only the image distortion within the frame but also the positional displacement between frames has been corrected.
- the correction unit 48 can perform the inter-frame correction simultaneously with the intra-frame correction, without having to perform them separately. That is, by using as the horizontal movement amount a value to which the horizontal displacement amount has been added, and as the vertical movement amount a value to which the vertical displacement amount has been added, the displacement between frames can be corrected at the same time within the intra-frame correction processing.
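The combination described here amounts to adding the single per-frame displacement as a constant offset to every per-line movement amount before the intra-frame correction runs. The helper below is a hypothetical sketch of that bookkeeping; the function name and list representation are assumptions.

```python
def fold_in_frame_displacement(h_move, v_move, h_disp, v_disp):
    """Add the one-per-frame horizontal/vertical displacement amounts to the
    per-line movement amounts, so that a single intra-frame correction pass
    also cancels the position shift between frames (hypothetical helper)."""
    return ([m + h_disp for m in h_move],
            [m + v_disp for m in v_move])

# e.g. per-line movements [1, 2] plus a frame displacement of 3 pixels
print(fold_in_frame_displacement([1, 2], [0, 1], 3, -1))  # → ([4, 5], [-1, 0])
```

The design point is that the intra-frame machinery is reused unchanged: the inter-frame offset simply rides along inside the movement amounts it already consumes.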
- the correction unit 48 can perform the intra-frame correction and the inter-frame correction regardless of the format of the frame image held in the memory 47; the frame image is not limited to the YUV format.
- the frame image held in the memory 47 may be of the RGB format.
- as described above, the pixel signals of all the pixels are read from the sensor and stored in the memory, and the method of reading from the memory is made variable, whereby the positional displacement between frames can be corrected simultaneously with the correction of the image distortion within the frame.
- in the above description, the calculation unit 17 detects the horizontal movement amount for all lines, but it is not necessary to detect it for every line of the pixel unit 12; for example, the following variations are possible.
- for example, the calculation unit 17 may detect the horizontal movement amount for each odd line of the pixel unit 12 in an odd field, and for each even line in an even field.
- alternatively, the calculation unit 17 may detect the horizontal movement amount once for every predetermined number N (two or more) of lines, and the correction unit 10 may correct the head positions of those N lines accordingly.
- further, the calculation unit 17 may detect the horizontal movement amounts of two adjacent lines out of every several lines (for example, every five lines), and the correction unit 10 may correct the head positions of those two lines and, by predicting that the line-to-line change remains constant, correct the head positions of the three lines following them.
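As a sketch of this last variation: once the head positions of two adjacent lines are measured, the next three lines can be extrapolated under the stated constant-change assumption. The function name and the linear extrapolation model are illustrative assumptions.

```python
def extrapolate_head_positions(h0, h1, extra=3):
    """Predict head positions for the `extra` lines that follow two measured
    adjacent lines with head positions h0 and h1, assuming the line-to-line
    change stays constant (hypothetical sketch)."""
    slope = h1 - h0                      # per-line change between the two samples
    return [h1 + slope * (k + 1) for k in range(extra)]

print(extrapolate_head_positions(2, 4))  # → [6, 8, 10]
```

With five-line groups, this cuts the number of movement detections from five per group to two, at the cost of assuming the shake velocity is roughly constant over those lines.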
- further, although the movement is detected using the angular velocity sensors 17 and 18, it may instead be detected by analyzing the frame image.
- the present invention is suitable for an imaging device provided with a MOS sensor having a light receiving surface composed of a plurality of pixel portions arranged in a plurality of lines, for example, video cameras, surveillance cameras, industrial cameras, and small camera-equipped portable devices such as mobile telephones and personal digital assistants (PDAs).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/597,797 US20070160355A1 (en) | 2004-02-25 | 2005-02-21 | Image pick up device and image pick up method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-049574 | 2004-02-25 | ||
JP2004049574A JP2005244440A (ja) | 2004-02-25 | 2004-02-25 | 撮像装置、撮像方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005081517A1 true WO2005081517A1 (ja) | 2005-09-01 |
Family
ID=34879551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/002714 WO2005081517A1 (ja) | 2004-02-25 | 2005-02-21 | 撮像装置、撮像方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20070160355A1 (ja) |
JP (1) | JP2005244440A (ja) |
CN (1) | CN1922868A (ja) |
WO (1) | WO2005081517A1 (ja) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4834406B2 (ja) * | 2006-01-16 | 2011-12-14 | Hoya株式会社 | 撮像装置 |
JP2007264074A (ja) * | 2006-03-27 | 2007-10-11 | Canon Inc | 撮影装置及びその制御方法 |
JP5036482B2 (ja) * | 2007-10-19 | 2012-09-26 | シリコン ヒフェ ベー.フェー. | 画像処理装置及び画像処理方法、画像処理プログラム |
JP4994288B2 (ja) * | 2008-04-02 | 2012-08-08 | 三菱電機株式会社 | 監視カメラシステム |
JP2010268225A (ja) * | 2009-05-14 | 2010-11-25 | Sony Corp | 映像信号処理装置および表示装置 |
JP5487722B2 (ja) * | 2009-05-25 | 2014-05-07 | ソニー株式会社 | 撮像装置と振れ補正方法 |
US8248541B2 (en) * | 2009-07-02 | 2012-08-21 | Microvision, Inc. | Phased locked resonant scanning display projection |
JP5335614B2 (ja) * | 2009-08-25 | 2013-11-06 | 株式会社日本マイクロニクス | 欠陥画素アドレス検出方法並びに検出装置 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0377483A (ja) * | 1989-08-19 | 1991-04-03 | Hitachi Ltd | 画振れ防止カメラ |
JPH0810907B2 (ja) * | 1987-11-02 | 1996-01-31 | 株式会社日立製作所 | 信号処理装置 |
JPH08336076A (ja) * | 1995-06-07 | 1996-12-17 | Sony Corp | 固体撮像装置及びこれを用いたビデオカメラ |
JP2000341577A (ja) * | 1999-05-26 | 2000-12-08 | Fuji Photo Film Co Ltd | 手振れ補正装置およびその補正方法 |
JP2000350101A (ja) * | 1999-03-31 | 2000-12-15 | Toshiba Corp | 固体撮像装置及び画像情報取得装置 |
JP2001358999A (ja) * | 2000-06-12 | 2001-12-26 | Sharp Corp | 画像入力装置 |
JP2004363869A (ja) * | 2003-06-04 | 2004-12-24 | Pentax Corp | 画像歪み補正機能付き撮像装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5012270A (en) * | 1988-03-10 | 1991-04-30 | Canon Kabushiki Kaisha | Image shake detecting device |
US6992700B1 (en) * | 1998-09-08 | 2006-01-31 | Ricoh Company, Ltd. | Apparatus for correction based upon detecting a camera shaking |
US6507365B1 (en) * | 1998-11-30 | 2003-01-14 | Kabushiki Kaisha Toshiba | Solid-state imaging device |
US6351319B1 (en) * | 1998-12-18 | 2002-02-26 | Xerox Corporation | System and apparatus for single subpixel elimination with local error compensation in an high addressable error diffusion process |
US7042507B2 (en) * | 2000-07-05 | 2006-05-09 | Minolta Co., Ltd. | Digital camera, pixel data read-out control apparatus and method, blur-detection apparatus and method |
WO2002037837A1 (en) * | 2000-10-30 | 2002-05-10 | Simon Fraser University | Active pixel sensor with built in self-repair and redundancy |
US6963365B2 (en) * | 2001-02-28 | 2005-11-08 | Hewlett-Packard Development Company, L.P. | System and method for removal of digital image vertical distortion |
US7525526B2 (en) * | 2003-10-28 | 2009-04-28 | Samsung Electronics Co., Ltd. | System and method for performing image reconstruction and subpixel rendering to effect scaling for multi-mode display |
- 2004-02-25 JP JP2004049574A patent/JP2005244440A/ja active Pending
- 2005-02-21 WO PCT/JP2005/002714 patent/WO2005081517A1/ja active Application Filing
- 2005-02-21 CN CNA2005800060016A patent/CN1922868A/zh active Pending
- 2005-02-21 US US10/597,797 patent/US20070160355A1/en not_active Abandoned
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2060110A4 (en) * | 2006-08-01 | 2011-06-29 | Pelco Inc | METHOD AND DEVICE FOR MOTION COMPENSATION IN ONE VIDEO |
CN101496394B (zh) * | 2006-08-01 | 2013-11-20 | 派尔高公司 | 补偿视频运动的方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
US20070160355A1 (en) | 2007-07-12 |
CN1922868A (zh) | 2007-02-28 |
JP2005244440A (ja) | 2005-09-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2007160355 Country of ref document: US Ref document number: 10597797 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580006001.6 Country of ref document: CN |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 10597797 Country of ref document: US |