WO2005122554A1 - Imaging Device - Google Patents
Imaging Device
- Publication number
- WO2005122554A1 (PCT application PCT/JP2005/011004)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- imaging device
- frames
- motion estimation
- reading
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/42—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/44—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
- H04N25/445—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by skipping some contiguous pixels within the read portion of the array
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/44—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
- H04N25/447—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by preserving the colour pattern with or without loss of information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/48—Increasing resolution by shifting the sensor relative to the scene
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
Definitions
- The present invention relates to an imaging apparatus that generates a high-resolution image using an image input unit having a small number of pixels.

Background Technology
- An imaging apparatus is an apparatus for electronically obtaining an image of a subject.
- The imaging apparatus of the present invention is characterized by having: optical imaging means for forming an image of a subject on an imaging device; an imaging device capable of outputting an image signal of a predetermined area; an area setting unit for setting an output area of the imaging device; means for selecting a readout rule of the imaging device according to the size of the area set by the area setting unit; and means for generating a high-resolution image from image signals of a plurality of frames output from the imaging device.
- The invention of (1) corresponds to the embodiment shown in FIG.
- The optical system 101 corresponds to the “optical imaging means for forming an image of a subject on an imaging device”.
- The imager 102 corresponds to the “imaging device capable of outputting an image signal of a predetermined area”.
- The magnification specifying unit 103 corresponds to the “area setting unit for setting the output area from the imaging device”.
- The read control unit 104 corresponds to the “means for selecting the readout rule of the imaging device according to the size of the area set by the area setting unit”.
- The high-resolution image estimating unit 108 corresponds to the “means for generating a high-resolution image from the image signals of a plurality of frames output from the imaging device”.
- The invention of (2) is characterized by further having means for making the readout rule of the imaging device different for each frame.
- The invention of (3) corresponds to the embodiment of FIG.
- The “means for making the readout rule of the imaging device different for each frame” corresponds to the read control unit 104 changing the readout rule in a cycle of two frames, ODD (odd) and EVEN (even). According to this configuration, the image information can be made different for each frame, and the frames can complement one another's missing information.
- The invention of (4) is characterized in that the means for generating a high-resolution image from the image signals of the plurality of frames comprises: means for estimating motion between the plurality of frames; and means for estimating a high-resolution image signal using the image signals of the plurality of frames on which the motion estimation has been performed; and in that it has means for selecting frames with the same readout rule when performing the motion estimation between the plurality of frames.
- The invention of (4) corresponds to the embodiment of FIGS.
- The motion estimation unit 107 corresponds to the “means for estimating motion between a plurality of frames”.
- The high-resolution image estimation calculation unit 108 corresponds to the “means for estimating a high-resolution image signal using image signals of a plurality of frames on which motion estimation has been performed”.
- The motion estimation unit 107 selects frames that use the same readout rule and estimates the motion between them. According to the invention of (4), motion estimation suited to the characteristics of the image signal can be performed.
- The invention of (5) is characterized in that, when performing the motion estimation between the plurality of frames, frames having the same readout rule are selected for the motion estimation, and in that it has means for performing motion estimation calculation between consecutive frames.
- The invention of (5) corresponds to the embodiment of FIGS. As shown in FIG. 9, when performing motion estimation between the plurality of frames, the motion estimation unit 107 selects frames having the same readout rule and performs motion estimation; in addition, motion estimation between consecutive frames is performed. According to the invention of (5), the forms of motion estimation can be diversified.
- The invention of (6) is characterized in that, in the invention of (2) or (3), the readout rule of the imaging device is thinning-out readout in which pixels are read out with some pixels skipped.
- The invention of (6) corresponds to the embodiment of FIG. 2.
- The “thinning-out readout in which pixels are read out with some pixels skipped” corresponds to the skip processing in FIG. 2. By performing such thinning-out reading, the number of clocks can be kept constant even when reading out an area larger than the number of pixels to be output.
- The invention of (7) is characterized in that, in the invention of (6), there is provided means for performing, after the thinning-out reading of the imaging device, correction processing of the distortion caused by the thinning-out reading.
- The invention of (7) corresponds to the embodiment of FIG.
- The distortion correction processing unit 113 corresponds to the “means for performing correction processing of the distortion caused by the thinning-out reading after performing the thinning-out reading of the imaging device”.
- The invention of (8) is characterized in that the distortion correction processing is pixel calculation processing within the same frame.
- The invention of (8) corresponds to the embodiments of FIGS. 11 and 12. “The distortion correction processing is pixel calculation processing within the same frame” corresponds to the correction processing by the linear interpolation parameters k1 and k2 in FIG. 12. According to this configuration, the distortion correction processing is simplified.
- The size of the imaging region is electronically changed without changing the number of clocks in one frame, and super-resolution processing is performed on the captured region.
- FIG. 1 is a configuration diagram of the first embodiment.
- FIG. 2 is an explanatory diagram showing an example of thinning-out reading.
- FIG. 3 is a flowchart of the motion estimation algorithm.
- FIG. 4 is a conceptual diagram showing the estimation of the optimal similarity in the motion estimation.
- FIG. 5 is a flowchart of the high-resolution image estimation.
- FIG. 6 is a configuration diagram showing the configuration of the super-resolution processing.
- FIG. 7 is a conceptual diagram of motion estimation for thinning-out reading.
- FIG. 8 is a conceptual diagram of motion estimation between consecutive frames.
- FIG. 9 is a conceptual diagram of motion estimation between consecutive frames.
- FIG. 10 is a conceptual diagram of estimating the motion of consecutive frames after intra-frame interpolation (distortion correction).
- FIG. 11 is a conceptual diagram of the distortion correction processing.
- FIG. 12 is a configuration diagram showing a filter configuration of the distortion correction processing.
- FIG. 13 is a configuration diagram of an embodiment including a distortion correction processing unit.
- FIG. 1 is a configuration diagram of the first embodiment.
- An optical system 101 forms an optical image on an imager 102.
- The read control unit 104 selects a readout rule for the imager according to the magnification specified by the magnification specification unit 103.
- The readout rule indicates a reading start position on the imager and a rule of thinning-out reading, as described later.
- The imager 102 converts the optical image of the specified area into an electric signal according to the readout rule.
- The read image signals are stored in n image memories 105-1 to 105-n.
- Here, n is the number of images required for performing the super-resolution processing.
- The super-resolution processing is performed by a motion estimation unit 107 and a high-resolution image estimating unit 108 that estimates image data of a high-resolution pixel array.
- The selector 106 selects a reference image for motion estimation and an image whose motion is to be estimated.
- FIG. 2 is an explanatory diagram showing an example of thinning-out reading, which is a readout rule selected by the reading control unit 104.
- In this example, an area that is 4/3 times as large as the number of output pixels in the x direction and 4/3 times as large in the y direction is read.
- The read control unit 104 in FIG. 1 has a function of making the readout rules different for each frame.
- The readout rule is changed in a two-frame cycle of ODD (odd) and EVEN (even).
- In the Bayer array of FIG. 2, the first row is an array of RGRG…, the second row is an array of GBGB…, the third row is RGRG…, and the fourth row is GBGB…, with this pattern repeating.
- Likewise, the first column is an array of RGRG…, the second column is GBGB…, the third column is RGRG…, and the fourth column is GBGB…, repeated in the same way.
- “Read” indicates positions where pixels are read, and “skip” (shown by thin oblique lines) indicates positions that are skipped.
- No read clock is generated at the skip positions.
- The read/skip pattern is changed according to the magnification specified by the magnification designation unit 103. Therefore, by performing such reading, the angle of view of reading can be changed by the reading control function of the imager while keeping the number of clocks of one frame constant.
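As a rough numerical sketch of this readout rule (assuming, following the FIG. 11 example, that positions 3 and 6 in every group of 8 pixels are skipped; the helper name is illustrative), reading an area 4/3 times the output width consumes exactly one read clock per output pixel:

```python
# Toy model of the read/skip pattern: positions 3 and 6 in each group of 8
# are skipped (the G3/R6 pattern of FIG. 11), so 6 of every 8 pixels are
# read and a 4/3x wider area fits in the same number of read clocks.
def read_positions(area_width):
    """Column indices that actually receive a read clock (illustrative)."""
    return [c for c in range(area_width) if c % 8 not in (3, 6)]

output_width = 480                      # pixels to be output per line
area_width = output_width * 4 // 3      # 4/3x wider region on the imager
read_cols = read_positions(area_width)
print(len(read_cols) == output_width)   # True: the clock count stays constant
```

The same idea extends to the vertical direction, where skipped rows keep the line count per frame constant as well.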
- FIG. 3 is a flowchart showing the motion estimation algorithm.
- S1: Read one image serving as the reference for motion estimation.
- S2: Deform the reference image with a plurality of motions.
- S3: Read one target image whose motion relative to the reference image is to be estimated.
- S4: Calculate similarity values between the image sequence obtained by deforming the reference image with the plurality of motions and the target image.
- S5: Create a discrete similarity map from the relationship between the parameters of the deformation motions and the calculated similarity values.
- S6: Obtain the extremum of the similarity map by interpolating the discrete similarity map created in S5 and searching for the extremum.
- The deformation motion at the extremum is the estimated motion.
- Methods for searching for the extremum of the similarity map include parabolic fitting and spline interpolation.
- S7: Determine whether motion estimation has been performed for all target images.
- S8: If motion estimation has not been performed for all target images, increment the frame number of the target image by one, return to S3, read the next image, and continue the processing. When motion estimation has been performed for all target images, the process ends.
- FIG. 4 is a conceptual diagram showing the estimation of the optimal similarity in the motion estimation performed by the motion estimation unit 107 described above.
- For simplicity, the one-dimensional case is shown, but the two-dimensional optimal similarity is estimated by the same method.
- FIG. 4 shows an example in which motion estimation is performed by parabolic fitting using the three points shown as black circles.
- The vertical axis represents the similarity value, and the horizontal axis represents the deformation motion parameter. The smaller the value on the vertical axis, the higher the similarity, and the gray circle at the minimum of the vertical axis is the extremum of the similarity.
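The parabolic fitting through three points can be sketched as follows; a minimal one-dimensional example (the function name and three-sample layout are illustrative assumptions):

```python
def parabolic_subpixel(s_prev, s_min, s_next):
    """Offset of the vertex of the parabola through three equally spaced
    similarity values, where s_min is the discrete minimum. The offset is
    in units of the sample spacing, relative to the center sample."""
    denom = s_prev - 2.0 * s_min + s_next
    if denom == 0.0:
        return 0.0          # no curvature: keep the discrete position
    return 0.5 * (s_prev - s_next) / denom

# Similarity samples of (x - 0.2)^2 at x = -1, 0, 1: the true extremum
# sits 0.2 sample spacings to the right of the center sample.
vals = [(x - 0.2) ** 2 for x in (-1, 0, 1)]
print(round(parabolic_subpixel(*vals), 6))  # 0.2
```

For a quadratic similarity curve the recovered offset is exact; for real similarity maps it is the usual sub-sample approximation.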
- FIG. 5 is a flowchart showing the algorithm of an embodiment of the high-resolution image estimation processing.
- S11: Read the plurality (n) of low-resolution images to be used for high-resolution image estimation.
- S12: Create an initial high-resolution image by taking any one of the plurality of low-resolution images as the target frame and performing interpolation processing. This step may optionally be omitted.
- S13: Establish the positional relationship between the images, using the motion between the target frame and the images of the other frames obtained in advance by some motion estimation method.
- S14: Calculate the point spread function (PSF), considering imaging characteristics such as the optical transfer function (OTF) and the CCD aperture. For the PSF, for example, a Gaussian function is used.
- S15: Based on the information of S13 and S14, minimize the evaluation function f(z), where f(z) has the form shown in equation (1).
- Here, y is a low-resolution image, z is a high-resolution image, and A is an image conversion matrix representing the imaging system, including the motion between images, the PSF, and the like.
- g(z) contains constraints such as the smoothness of the image and color correlation, and λ is its weighting factor. For example, the steepest descent method is used to minimize the evaluation function.
- S16: When f(z) obtained in S15 has been minimized, the processing ends and the high-resolution image z is obtained.
- S17: If f(z) is not yet minimized, update the high-resolution image z and return to S15.
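A minimal numerical sketch of the S15 to S17 loop, assuming equation (1) has the common form f(z) = ||y - Az||^2 + λ·g(z) with a smoothness penalty g(z) = ||Dz||^2; the 1-D operators A (2x averaging) and D (circular first difference), the toy signal, and the step size are all assumptions for illustration:

```python
import numpy as np

# Steepest-descent minimization of f(z) = ||y - A z||^2 + lam * ||D z||^2.
n = 8                                    # high-resolution length
A = np.zeros((n // 2, n))
for i in range(n // 2):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5  # imaging model: average adjacent pairs
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)  # circular difference operator
lam = 0.1                                # weighting factor of the constraint

z_true = np.linspace(0.0, 1.0, n)
y = A @ z_true                           # observed low-resolution signal

f = lambda z: np.sum((y - A @ z) ** 2) + lam * np.sum((D @ z) ** 2)
z = np.repeat(y, 2)                      # initial estimate by interpolation (S12)
f0 = f(z)
step = 0.1
for _ in range(200):                     # S15/S17: update z until f decreases
    grad = -2 * A.T @ (y - A @ z) + 2 * lam * D.T @ D @ z
    z = z - step * grad
print(f(z) < f0)  # True: the evaluation function has decreased
```

In the patent's setting A also encodes the per-frame motion from S13 and the PSF from S14; here both are collapsed into a single fixed averaging operator.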
- FIG. 6 is a configuration diagram showing the configuration of the super-resolution processing when the above algorithm is performed.
- The high-resolution image estimation calculation unit 108 comprises an interpolation enlargement unit 61, a convolution integration unit 62, a PSF data holding unit 63, an image comparison unit 64, a multiplication unit 65, a combination addition unit 66, an accumulation addition unit 67, an update image generation unit 68, an image storage unit 69, an iterative operation determination unit 610, and an iterative determination value holding unit 611.
- Arbitrary reference image data is given to the interpolation enlargement unit 61, which interpolates and enlarges this image.
- As the interpolation enlargement method used here, for example, general bilinear interpolation or bicubic interpolation can be used.
- The interpolated and enlarged image is supplied to the convolution integration unit 62 and convolved with the PSF data supplied from the PSF data holding unit 63.
- The PSF data here is given in consideration of the motion of each frame.
- The interpolated and enlarged image data is simultaneously sent to the image storage unit 69, where it is stored.
- The convolved image data is sent to the image comparison unit 64, where it is compared with the captured image given from the imaging unit at the appropriate coordinate position, based on the motion of each frame obtained by the motion estimation unit 107. The resulting residual is sent to the multiplication unit 65 and multiplied by the value of each pixel of the PSF data provided from the PSF data holding unit 63.
- The result of this calculation is sent to the combination addition unit 66 and placed at the corresponding coordinate position.
- The image data from the multiplication unit 65 are shifted little by little in coordinate position while overlapping, so the overlapped portions are added.
- The image data is then sent to the accumulation addition unit 67.
- The accumulation addition unit 67 accumulates the data transmitted sequentially until processing for the full number of frames is completed, sequentially adding the image data of each frame in accordance with the estimated motion.
- The added image data is sent to the update image generation unit 68.
- The image data stored in the image storage unit 69 is given to the update image generation unit 68 at the same time, and the two sets of image data are weighted and added to generate the updated image data.
- The generated updated image data is provided to the iterative operation determination unit 610, which determines, based on the iteration judgment value given from the iterative determination value holding unit 611, whether or not to repeat the operation. When the operation is to be repeated, the data is sent to the convolution integration unit 62 and the above-described series of processing is repeated; when not, the generated image data is output.
- The image output from the iterative operation determination unit 610 after the above series of processing has a higher resolution than the captured image. Since the PSF data held in the PSF data holding unit 63 must be applied at the appropriate coordinate position during convolution, the motion of each frame calculated by the motion estimation unit 107 is given to it.
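The FIG. 6 pipeline resembles an iterative back-projection loop. A rough 1-D sketch under strong simplifying assumptions (integer circular shifts for motion, a 2x averaging filter for the PSF, unit update weights; the helper names and toy signal are illustrative, not the patent's implementation):

```python
import numpy as np

n = 8
z_true = np.array([0., 1., 4., 9., 16., 9., 4., 1.])

def capture(z, shift):
    """Simulated low-res frame: circular shift (motion), then 2x average
    (PSF plus downsampling). A toy stand-in for the imaging unit."""
    zs = np.roll(z, shift)
    return 0.5 * (zs[0::2] + zs[1::2])

shifts = [0, 1]                              # motion estimated per frame
frames = [capture(z_true, s) for s in shifts]

z = np.repeat(frames[0], 2)                  # interpolation-enlarged start (61)
for _ in range(200):
    update = np.zeros(n)
    for y, s in zip(frames, shifts):
        resid = y - capture(z, s)            # image comparison unit 64
        spread = np.repeat(resid, 2) * 0.5   # multiply residual by PSF weights (65, 66)
        update += np.roll(spread, -s)        # place at the correct coordinates (67)
    z = z + update                           # update image generation (68)
print(max(np.max(np.abs(capture(z, s) - y))
          for y, s in zip(frames, shifts)) < 1e-6)  # True: frames are matched
```

After iterating, re-simulating each low-resolution frame from the estimate reproduces the captured frames, which is the stopping condition the iterative operation determination unit approximates with its judgment value.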
- (1) Motion estimation is performed between frames that use the same readout rule, and interpolation of motion estimation between consecutive frames is not performed.
- (2) Motion estimation is performed between frames that use the same readout rule, and motion estimation interpolation is performed between consecutive frames.
- In these cases, reading is performed with the same readout rule for the ODD and EVEN frames, and motion is estimated between these frames.
- The image signal is obtained by the reading method shown in FIG. 2 and then stored in the image memory, and the motion estimation is performed taking the skipped positions into consideration.
- That is, constraints are added to the motion estimation to handle the cases where the image of the object falls on a skipped position in one of the two frames and where the skipped position appears in the previous frame.
- FIG. 7 is a conceptual diagram of motion estimation for thinning-out reading.
- The estimation of the high-resolution image is performed only with the respective frame sequences of ODD and EVEN.
- FIG. 8 is a conceptual diagram of motion estimation between consecutive frames.
- The motion between consecutive frames may be estimated, for example, by interpolation using an averaging process. For example, the motion a between frame i and frame i+2, and the motion a' between frame i+1 and frame i+3, are each obtained as vectors. A motion of half the size of a is taken as a candidate value for the motion between frames i+1 and i+2; the average of this candidate and half of a' is then calculated, and this averaged value is used as the estimated value of the motion between the consecutive frames.
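The averaging interpolation just described can be sketched as follows, treating motions as 2-D vectors (the function name and tuple representation are illustrative):

```python
# Motion a spans frames i to i+2; motion a' spans frames i+1 to i+3.
# The consecutive-frame motion (i+1 to i+2) is the average of a/2 and a'/2.
def interp_consecutive_motion(a, a_prime):
    half_a = (a[0] / 2.0, a[1] / 2.0)
    half_ap = (a_prime[0] / 2.0, a_prime[1] / 2.0)
    return ((half_a[0] + half_ap[0]) / 2.0,
            (half_a[1] + half_ap[1]) / 2.0)

# Constant motion of (2, 0) per frame gives a = a' = (4, 0) over two frames,
# so the interpolated consecutive motion recovers (2, 0).
print(interp_consecutive_motion((4.0, 0.0), (4.0, 0.0)))  # (2.0, 0.0)
```

When the motion is not constant, the averaging smooths the two half-motion candidates rather than recovering the true per-frame motion exactly.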
- FIG. 9 is a conceptual diagram of motion estimation between consecutive frames. As shown in FIG. 9, in the case of (3), interpolation processing is performed in each frame from the image signal data of consecutive frames, and the data between the frames is made to correspond. Hereinafter, the process of correcting the image distortion caused by the thinning-out reading under the readout rule of the example shown in FIG. 2 will be referred to as distortion correction processing.
- FIG. 10 is a conceptual diagram of performing motion estimation between consecutive frames after the intra-frame interpolation (distortion correction) processing.
- The distortion correction processing is performed for each of the i-th frame and the (i+1)-th frame, and then the motion data interpolation processing unit estimates the motion between the consecutive frames. That is, the example of FIG. 10 shows an embodiment including distortion correction processing.
- FIG. 11 is a conceptual diagram of the distortion correction processing, which will now be described in detail. In the example of FIG. 11, among the eight horizontal pixels R0 to G7 of an RG line of the Bayer array, the two pixels G3 and R6 are skipped during readout, leaving raw image data read at roughly equal spacing in an R-G-R-G array.
- FIG. 11 shows a conceptual diagram of the processing performed on this data.
- The outline of the processing is as follows: (1) calculate estimated values of the missing pixel data from the read pixel data to obtain 8 pixel values; (2) perform reduction processing to generate 6 pixel values from the 8 pixels.
- A two-stage process of interpolation and reduction may be used as shown in FIG. 11(a), or a method using a single interpolation process as shown in FIG. 11(b) may be used.
- The latter can be expressed by a linear transformation as shown in equation (3) below.
- For the reduction processing, linear interpolation as described below or cubic interpolation may be used.
- Equation (2) expresses, in matrix form, the method of filling in the missing pixels by linear interpolation and using linear interpolation for the size change.
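A possible reconstruction of this matrix formulation in code. The exact coefficients of equations (2) and (3) are not reproduced in this text, so the weights below are assumptions: the skipped G3 is averaged from its same-color neighbors G1 and G5, the skipped R6 is linearly extrapolated from R2 and R4, and the 8-to-6 size change uses linear interpolation:

```python
import numpy as np

read_idx = [0, 1, 2, 4, 5, 7]           # positions read; 3 (G3) and 6 (R6) skipped
fill = np.zeros((8, 6))                 # 6 read pixels -> full 8-pixel line
for row, col in zip(read_idx, range(6)):
    fill[row, col] = 1.0
fill[3, 1] = fill[3, 4] = 0.5           # G3 = (G1 + G5) / 2, same-color average
fill[6, 2], fill[6, 3] = -1.0, 2.0      # R6 = 2*R4 - R2, same-color extrapolation

reduce_ = np.zeros((6, 8))              # 8 -> 6 size change, linear interpolation
for j in range(6):
    x = j * 7.0 / 5.0                   # resampling position in the 8-pixel line
    i0 = int(x)
    k2 = x - i0                         # the k1/k2 weights of FIG. 12
    reduce_[j, i0] = 1.0 - k2
    if k2 > 0.0:
        reduce_[j, i0 + 1] = k2

T = reduce_ @ fill                      # single 6x6 linear transform, FIG. 11(b)
ramp = np.array([0., 1., 2., 4., 5., 7.])   # a linear ramp sampled at read_idx
print(np.round(T @ ramp, 2))            # -> ramp values 0, 1.4, 2.8, 4.2, 5.6, 7
```

Combining the two stages into one matrix, as in equation (3), is what allows the single-pass filter implementation described next for FIG. 12.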
- FIG. 12 is a configuration diagram showing a filter configuration of the distortion correction processing. As shown in FIG. 12, the configuration that executes this operation is a pipeline.
- A shift register 901 is a 6-tap FIFO, configured to shift one pixel at a time.
- i0 to i5 denote the shift register stages into which pixel data are stored, and s1 and s2 denote selector signals.
- s1 and s2 each take one of four values: -1, 0, 1, 2. The value range is not limited; it is sufficient that there are four values.
- d1 and d2 are the outputs of the selectors 902 and 903, k1 and k2 are the linear interpolation parameters, and the output value of the adder 904 is k1·d1 + k2·d2.
- Table 1 shows a logic table for when the operation is performed pixel by pixel with the configuration of FIG. 12.
- An interpolation method using the correlation between the RGB channels can also be used.
- In that case, the estimated value of the luminance level of a missing pixel is given by equation (4) or (5).
- FIG. 13 is a configuration diagram showing the configuration of an embodiment including the above-described distortion correction means (distortion correction processing unit 113).
- The image data subjected to the distortion correction processing is held in the image memories 105-1 to 105-n. After that, motion estimation between frames is performed as described above, and an estimated high-resolution image is obtained by the high-resolution image estimation calculation unit 108 having the configuration shown in FIG. 6.

Industrial Applicability
- The size of the imaging region is electronically changed without changing the number of clocks in one frame, and an imaging device that further performs super-resolution processing on the captured region can be provided.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/628,909 US7868923B2 (en) | 2004-06-10 | 2005-06-09 | Imaging system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-172094 | 2004-06-10 | ||
JP2004172094A JP4184319B2 (ja) | 2004-06-10 | 2004-06-10 | Imaging device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005122554A1 true WO2005122554A1 (ja) | 2005-12-22 |
Family
ID=35503495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/011004 WO2005122554A1 (ja) | 2005-06-09 | Imaging device
Country Status (3)
Country | Link |
---|---|
US (1) | US7868923B2 (ja) |
JP (1) | JP4184319B2 (ja) |
WO (1) | WO2005122554A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101483712A (zh) * | 2008-01-10 | 2009-07-15 | 佳能株式会社 | Solid-state imaging device, imaging system, and driving method of solid-state imaging device |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8698924B2 (en) | 2007-03-05 | 2014-04-15 | DigitalOptics Corporation Europe Limited | Tone mapping for low-light video frame enhancement |
US8417055B2 (en) | 2007-03-05 | 2013-04-09 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US8264576B2 (en) | 2007-03-05 | 2012-09-11 | DigitalOptics Corporation Europe Limited | RGBW sensor array |
IES20070229A2 (en) * | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
US9307212B2 (en) | 2007-03-05 | 2016-04-05 | Fotonation Limited | Tone mapping for low-light video frame enhancement |
JP5263753B2 (ja) * | 2007-11-19 | 2013-08-14 | 株式会社ザクティ | Super-resolution processing apparatus and method, and imaging apparatus |
JP4508279B2 (ja) | 2008-07-17 | 2010-07-21 | ソニー株式会社 | Image processing device, image processing method, and program |
US8654205B2 (en) * | 2009-12-17 | 2014-02-18 | Nikon Corporation | Medium storing image processing program and imaging apparatus |
RU2431889C1 (ru) * | 2010-08-06 | 2011-10-20 | Дмитрий Валерьевич Шмунк | Способ суперразрешения изображений и нелинейный цифровой фильтр для его осуществления |
US20120320182A1 (en) * | 2011-06-17 | 2012-12-20 | Richard Hubbard | Electro optical image-magnifying device |
US9304089B2 (en) * | 2013-04-05 | 2016-04-05 | Mitutoyo Corporation | System and method for obtaining images with offset utilized for enhanced edge resolution |
US9570106B2 (en) | 2014-12-02 | 2017-02-14 | Sony Corporation | Sensor configuration switching for adaptation of video capturing frame rate |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04172778A (ja) * | 1990-11-06 | 1992-06-19 | Nippon Telegr & Teleph Corp <Ntt> | High-resolution image synthesis processing method |
JPH04196775A (ja) * | 1990-11-27 | 1992-07-16 | Matsushita Electric Ind Co Ltd | Still image forming device |
JPH07131692A (ja) * | 1993-11-05 | 1995-05-19 | Sharp Corp | Still image capturing device |
JP2000041186A (ja) * | 1998-07-22 | 2000-02-08 | Minolta Co Ltd | Digital camera and control method thereof |
JP2002112096A (ja) * | 2000-09-29 | 2002-04-12 | Sony Corp | Camera device and camera function adjustment method |
JP2002369083A (ja) * | 2001-06-07 | 2002-12-20 | Olympus Optical Co Ltd | Imaging device |
JP2003338988A (ja) * | 2002-05-22 | 2003-11-28 | Olympus Optical Co Ltd | Imaging device |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5657402A (en) * | 1991-11-01 | 1997-08-12 | Massachusetts Institute Of Technology | Method of creating a high resolution still image using a plurality of images and apparatus for practice of the method |
JPH10164326A (ja) * | 1996-11-28 | 1998-06-19 | Minolta Co Ltd | Image capture device |
US6330344B1 (en) * | 1997-02-14 | 2001-12-11 | Sony Corporation | Image processing device and method employing motion detection to generate improved quality image from low resolution image |
JP3695119B2 (ja) * | 1998-03-05 | 2005-09-14 | 株式会社日立製作所 | Image synthesizing device, and recording medium storing a program for realizing an image synthesizing method |
US6906751B1 (en) | 1998-07-22 | 2005-06-14 | Minolta Co., Ltd. | Digital camera and control method thereof |
US6285804B1 (en) * | 1998-12-21 | 2001-09-04 | Sharp Laboratories Of America, Inc. | Resolution improvement from multiple images of a scene containing motion at fractional pixel values |
US8040385B2 (en) | 2002-12-02 | 2011-10-18 | Olympus Corporation | Image pickup apparatus |
US7352919B2 (en) * | 2004-04-28 | 2008-04-01 | Seiko Epson Corporation | Method and system of generating a high-resolution image from a set of low-resolution images |
- 2004
  - 2004-06-10 JP JP2004172094A patent/JP4184319B2/ja not_active Expired - Fee Related
- 2005
  - 2005-06-09 US US11/628,909 patent/US7868923B2/en not_active Expired - Fee Related
  - 2005-06-09 WO PCT/JP2005/011004 patent/WO2005122554A1/ja active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101483712A (zh) * | 2008-01-10 | 2009-07-15 | 佳能株式会社 | Solid-state imaging device, imaging system, and driving method of solid-state imaging device |
CN101483712B (zh) * | 2008-01-10 | 2012-10-24 | 佳能株式会社 | Solid-state imaging device, imaging system, and driving method of solid-state imaging device |
Also Published As
Publication number | Publication date |
---|---|
US20070268388A1 (en) | 2007-11-22 |
US7868923B2 (en) | 2011-01-11 |
JP2005352721A (ja) | 2005-12-22 |
JP4184319B2 (ja) | 2008-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005122554A1 (ja) | Imaging device | |
JP4879261B2 (ja) | Imaging device, resolution enhancement processing method, resolution enhancement processing program, and recording medium | |
US7948543B2 (en) | Imaging apparatus provided with image scaling function and image data thinning-out readout function | |
JP5341010B2 (ja) | Image processing device, imaging device, program, and image processing method | |
JP4555775B2 (ja) | Imaging device | |
US8698906B2 (en) | Image processing device, imaging device, information storage medium, and image processing method | |
US8040385B2 (en) | Image pickup apparatus | |
US20090033792A1 (en) | Image Processing Apparatus And Method, And Electronic Appliance | |
EP2579206A1 (en) | Image processing device, image capturing device, program and image processing method | |
US8085320B1 (en) | Early radial distortion correction | |
JP4361991B2 (ja) | Image processing device | |
JP4445870B2 (ja) | Imaging device | |
JP6274744B2 (ja) | Image processing device and image processing method | |
JP3868446B2 (ja) | Imaging device | |
JP2006325276A (ja) | Imaging device | |
JP2013126123A (ja) | Image processing device, imaging device, and image processing method | |
JP5397250B2 (ja) | Image processing device and image processing method | |
JP2006262382A (ja) | Image processing device | |
JP3935528B2 (ja) | Color image processing device | |
JP2012142676A (ja) | Imaging device and image generation method | |
JP2006054583A (ja) | Imaging device | |
JP2002330283A (ja) | Resolution conversion method and resolution conversion device | |
JP2013125999A (ja) | Image processing device, imaging device, and image processing method | |
JP2024013652A (ja) | Image processing method, image processing device, and program | |
JP2012119852A (ja) | Imaging device and imaging method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11628909 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 11628909 Country of ref document: US |