WO2012011484A1 - Image capture device - Google Patents


Info

Publication number
WO2012011484A1
Authority: WIPO (PCT)
Prior art keywords: value, pixel, image, unit, pixel value
Application number: PCT/JP2011/066413
Other languages: English (en), Japanese (ja)
Inventor: Jiro Fukuda (慈朗 福田)
Original assignee: Olympus Corporation (オリンパス株式会社)
Application filed by Olympus Corporation
Publication of WO2012011484A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/42 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/48 Increasing resolution by shifting the sensor relative to the scene

Definitions

  • the present invention relates to an imaging device and the like.
  • Some modern digital cameras and video cameras can switch between a still image shooting mode and a movie shooting mode. For example, some cameras can shoot a still image at a higher resolution than the moving image when the user presses a button during movie shooting.
  • However, moving image shooting is interrupted when such a high-resolution still image is shot.
  • Conversely, if the moving image is shot at a resolution equivalent to a still image so that shooting is not interrupted, the frame rate of the moving image drops.
  • To address this, the present inventor is considering generating a high-resolution still image from a low-resolution moving image using addition readout. Specifically, during moving image shooting, the pixel values of a plurality of pixels are weighted, added, and read out from the image sensor, and a high-resolution image is restored from the pixel values obtained by the weighted addition.
  • Patent Document 1 discloses a technique for mechanically shifting pixels of an optical system to perform moving image shooting and acquiring a high-definition image from the moving image.
  • Patent Document 2 discloses a technique for performing exposure control according to a live view display gain.
  • According to some aspects of the present invention, an imaging device or the like that enables simple exposure control can be provided.
  • One embodiment of the present invention relates to an imaging apparatus including: an imaging element that captures a subject image; a readout control unit that performs weighted addition of the pixel values of a plurality of pixels of the imaging element and reads out the result as an added pixel value; a coefficient setting unit that sets the weighting coefficients used in the weighted addition; and an exposure control information output unit that outputs exposure control information for performing exposure control of the imaging unit based on the weighting coefficients.
  • a weighting coefficient is set, weighted addition is performed using the weighting coefficient, an added pixel value is read, and exposure control information is output based on the weighting coefficient.
  • In one embodiment, the coefficient setting unit may set a first weighting coefficient in a first imaging mode and a second weighting coefficient in a second imaging mode, and the exposure control information output unit may obtain the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio and output the exposure control information using a photometric evaluation value based on that ratio.
  • In this way, exposure control information for performing exposure control of the imaging unit can be output using a photometric evaluation value based on the weighting coefficient ratio between the first and second imaging modes.
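  • The weighting coefficient ratio described above is simply the ratio of the two modes' coefficient sums. A minimal Python sketch; all names are illustrative, and the coefficient values (1, 1/r, 1/r, 1/r² with r = 2) follow the numeric example used later in this description:

```python
# Sketch of the weighting-coefficient-ratio computation described above.
# The coefficient values follow the r = 2 example later in this description;
# names are hypothetical.

def coefficient_ratio(first_weights, second_weights):
    """Ratio of the sum of first-mode weights to the sum of second-mode weights."""
    return sum(first_weights) / sum(second_weights)

r = 2.0
first_mode = [1.0, 1.0, 1.0, 1.0]       # equal weights (e.g. normal movie mode)
second_mode = [1.0, 1/r, 1/r, 1/r**2]   # weighted addition (e.g. fused movie mode)

ratio = coefficient_ratio(first_mode, second_mode)
print(round(ratio, 2))  # 4 / 2.25, which rounds to 1.78
```

  • With r = 2 the ratio is 4 / 2.25 ≈ 1.78, matching the gain value used later in this description.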
  • In one embodiment, the coefficient setting unit may set the same value as the first weighting coefficient for each pixel to be weighted and added in the first imaging mode, and may set second weighting coefficients different from the first in the second imaging mode. The exposure control information output unit then obtains a photometric evaluation value from the added pixel value in the first imaging mode, and outputs the exposure control information using the obtained photometric evaluation value.
  • That is, in the first imaging mode the photometric evaluation value is obtained from the added pixel value without multiplication by the weighting coefficient ratio, while in the second imaging mode it is obtained from the added pixel value multiplied by the weighting coefficient ratio.
  • In one embodiment, a display control unit may be included that adjusts the luminance of the display image based on the weighting coefficient and performs control to display the adjusted display image.
  • In one embodiment, the coefficient setting unit may set a first weighting coefficient in the first imaging mode and a second weighting coefficient in the second imaging mode, and the display control unit may obtain the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio and adjust the luminance of the display image based on that ratio.
  • the brightness of the display image can be adjusted based on the weighting coefficient ratio between the first imaging mode and the second imaging mode.
  • In one embodiment, the coefficient setting unit may set the same value as the first weighting coefficient for each pixel to be weighted and added in the first imaging mode, and may set second weighting coefficients different from the first in the second imaging mode. The display control unit may then display the display image based on the added pixel value obtained with the first weighting coefficients in the first imaging mode, and based on the added pixel value obtained with the second weighting coefficients and then multiplied by the weighting coefficient ratio in the second imaging mode.
  • That is, in the first imaging mode the display image uses the added pixel value without multiplication by the weighting coefficient ratio, while in the second imaging mode it uses the added pixel value multiplied by that ratio. The display image can thereby be adjusted to the same brightness in the first and second shooting modes.
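  • As a rough illustration of this brightness matching: the adjustment is a plain multiplication by the weighting coefficient ratio in the second mode only. Names, the mode flag, and the pixel values below are made up:

```python
# Sketch of the display-brightness adjustment described above: in the second
# imaging mode the added pixel values are multiplied by the weighting
# coefficient ratio so both modes display equally bright. Illustrative only.

def adjust_display(added_pixel_values, mode, coeff_ratio):
    """Return display pixel values with matched brightness across the two modes."""
    if mode == "second":
        # Second imaging mode: gain up by the weighting coefficient ratio.
        return [v * coeff_ratio for v in added_pixel_values]
    # First imaging mode: display the added pixel values as-is.
    return list(added_pixel_values)

ratio = 16.0 / 9.0  # e.g. 4 / 2.25 for the r = 2 example
print(adjust_display([576.0], "second", ratio))  # approximately [1024.0]
print(adjust_display([1024.0], "first", ratio))  # [1024.0]
```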
  • In one embodiment, the imaging apparatus may include: a storage unit that stores images based on the added pixel values as low-resolution frame images; an estimation calculation unit that estimates the pixel value of each pixel included in a light receiving unit based on a plurality of low-resolution frame images stored in the storage unit; and an image output unit that outputs a high-resolution frame image, of higher resolution than the low-resolution frame images, based on the pixel values estimated by the estimation calculation unit.
  • The readout control unit may set a light receiving unit, which is the unit for obtaining an added pixel value, over a plurality of pixels of the imaging element, perform the weighted addition of the pixel values of the pixels included in the light receiving unit, and read out the added pixel values while sequentially shifting the light receiving unit by pixels so that successive positions overlap. The estimation calculation unit may then estimate the pixel value of each pixel included in the light receiving unit based on the plurality of added pixel values obtained by the sequential shifts.
  • According to this, added pixel values are acquired while the light receiving unit is shifted pixel by pixel with overlap, and low-resolution frame images based on those added pixel values are acquired. Pixel values are then estimated from the plurality of low-resolution frame images, and a high-resolution frame image is output based on them. A high-resolution still image can thereby be obtained from a moving image by simple processing.
  • In one embodiment, the light receiving unit may be set, by the pixel shift, sequentially to a first position and to a second position adjacent to the first position, such that the light receiving unit at the first position and the light receiving unit at the second position overlap. The estimation calculation unit obtains the difference between the added pixel values at the first and second positions, and uses that difference to express a relational expression between a first intermediate pixel value, which is the light receiving value of the first light receiving region obtained by excluding the overlap from the light receiving unit at the first position, and a second intermediate pixel value, which is the light receiving value of the second light receiving region obtained by excluding the overlap from the light receiving unit at the second position. The first and second intermediate pixel values are estimated using this relational expression, and the pixel value of each pixel included in the light receiving unit is obtained using the estimated first intermediate pixel value.
  • In one embodiment, with successive intermediate pixel values including the first and second intermediate pixel values taken as an intermediate pixel value pattern, the estimation calculation unit may express the relational expressions between the intermediate pixel values in the pattern using the added pixel values at the first and second positions. Then, with successive added pixel values including those at the first and second positions taken as an added pixel value pattern, it may determine the intermediate pixel values in the intermediate pixel value pattern so that the similarity between the two patterns is highest.
  • the intermediate pixel value can be estimated based on a plurality of added pixel values acquired by pixel shifting while superimposing the light receiving units.
  • In one embodiment, the estimation calculation unit may obtain an evaluation function representing the error between the intermediate pixel value pattern, expressed by the relational expressions between intermediate pixel values, and the added pixel value pattern, and may determine the intermediate pixel values in the intermediate pixel value pattern so that the value of the evaluation function is minimized. In this way, the intermediate pixel values can be determined so that the similarity between the intermediate pixel value pattern and the added pixel values is highest.
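  • The minimization described above can be sketched as follows, under simplifying assumptions that are not spelled out at this point in the description: each added pixel value is modeled as the sum of two neighboring intermediate values, the relational expressions leave a single unknown initial intermediate value, and the evaluation function compares each intermediate value with half of the corresponding added value. A grid search stands in for the actual minimization:

```python
# Toy sketch of the intermediate-pixel-value estimation described above.
# Assumed model: each added value a[i] = b[i] + b[i+1], so once b[0] is
# chosen, all other intermediate values follow from the relational
# expressions. The evaluation function (an assumption, not the patent's
# exact formula) penalizes squared error between each intermediate value
# and half the corresponding added value.

def chain(b0, a):
    """Derive all intermediate values from b0 via b[i+1] = a[i] - b[i]."""
    b = [b0]
    for ai in a:
        b.append(ai - b[-1])
    return b

def evaluation(b, a):
    """Error between the intermediate-value pattern and the added-value pattern."""
    return sum((b[i] - a[i] / 2.0) ** 2 for i in range(len(a)))

def estimate(a, lo=0.0, hi=255.0, step=0.5):
    """Pick the b0 minimizing the evaluation function by brute-force search."""
    candidates = [lo + step * k for k in range(int((hi - lo) / step) + 1)]
    best_b0 = min(candidates, key=lambda b0: evaluation(chain(b0, a), a))
    return chain(best_b0, a)

# Example: intermediate values [10, 30, 30, 10] give added values [40, 60, 40];
# the search recovers the consistent pattern.
print(estimate([40.0, 60.0, 40.0]))  # [10.0, 30.0, 30.0, 10.0]
```

  • A real implementation would solve the one-dimensional minimization in closed form (it is quadratic in b0) rather than by search; the sketch only shows the structure of the estimation.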
  • FIG. 1 is a comparative example of this embodiment.
  • FIGS. 2A and 2B are explanatory diagrams of the weighted addition method.
  • FIG. 3A is an explanatory diagram of a range of added pixel values.
  • FIG. 3B is an example of a program diagram for exposure control.
  • FIG. 4 is an explanatory diagram of exposure control according to the present embodiment.
  • FIG. 5 is a configuration example of the imaging apparatus according to the present embodiment.
  • FIG. 6 is a flowchart of processing performed by the present embodiment.
  • FIG. 7 is a flowchart of a modification.
  • FIG. 8 is a flowchart of a second modification.
  • FIG. 9 is a detailed explanatory diagram of the shooting mode.
  • FIG. 10 is an explanatory diagram of addition readout control when performing pixel shift.
  • FIG. 11 is an explanatory diagram of addition readout control when pixel shift is performed without weighting.
  • FIG. 12 is an explanatory diagram of addition readout control when performing pixel shift with weighting.
  • FIG. 13 is an explanatory diagram of addition readout control when pixel shift is not performed.
  • FIG. 14A is an explanatory diagram of addition readout control without weighting and without pixel shift.
  • FIG. 14B is an explanatory diagram of addition readout control with weighting and without pixel shift.
  • FIG. 15 is an explanatory diagram of weighting coefficients.
  • FIG. 16A is an explanatory diagram of an added pixel value and an estimated pixel value.
  • FIG. 16B illustrates the intermediate pixel value.
  • FIG. 17 is an explanatory diagram of an intermediate pixel value estimation method.
  • FIG. 18 is an explanatory diagram of an intermediate pixel value estimation method.
  • FIG. 19 is an explanatory diagram of an intermediate pixel value estimation method.
  • FIG. 1 shows a comparative example of this embodiment.
  • When the imaging apparatus starts moving image shooting, the imaging unit 100 captures images and the moving image signal processing circuit 116 processes them to acquire moving image data.
  • When the still image switch 109 is turned on during moving image capture, moving image capture is temporarily stopped, the imaging unit 100 captures an image, and the still image signal processing circuit 117 processes it to acquire still image data.
  • If a 12-megapixel high-pixel-count sensor can be driven at a high speed of 60 fps (frames per second), a 12-megapixel moving image can be captured and any one frame of it can be acquired as a still image.
  • In this case, a high-resolution still image can be acquired without interrupting moving image shooting (i.e., without frames being lost).
  • However, recording a 12-megapixel moving image increases the required storage capacity, resulting in a shorter recording time.
  • Patent Document 1 discloses a shift unit that shifts the incident position of the optical image on the image sensor using a camera shake control signal from an optical camera shake control circuit and a pixel shift control signal from a pixel shift control circuit, and discloses a method of shifting pixels in this way and obtaining a high-resolution still image from the pixel-shifted moving image.
  • In this embodiment, weighted addition is performed when reading out pixel values in order to further improve the reproducibility of high-frequency components.
  • However, the range (signal level) of the pixel values obtained by addition readout differs depending on the mode, which poses challenges for automatic exposure control and for the brightness of the live view display.
  • FIG. 2A and FIG. 2B schematically show a weighted addition method performed by this embodiment.
  • The fused moving image mode is, for example, the still/moving image fusion moving image 1 mode shown in FIG. 9.
  • FIG. 3B shows an example of a program diagram for exposure control.
  • Here, the photometric evaluation value is assumed to be a 4-pixel addition value, and a program diagram for exposure time is described as an example.
  • When the 4-pixel addition value is 1024, the exposure time is controlled to be T1; when the 4-pixel addition value is 576, the exposure time is controlled to be T2.
  • In this situation, a program diagram is required for each mode, and the control becomes complicated.
  • Therefore, in this embodiment, the 4-pixel addition value in the fused moving image mode is multiplied by a gain of 1.78, and exposure control is performed using the gained-up 4-pixel addition value.
  • In the fused moving image mode, a video based on the gained-up 4-pixel addition values is displayed in live view.
  • the exposure control may be performed not only by controlling the exposure time but also by controlling the aperture value.
  • Patent Document 2 discloses a technique that enables a photographer to display an image displayed in a live view with a desired brightness, and also allows an image in a more preferable exposure state to be captured.
  • However, this technique makes no mention of performing exposure control or display control according to a weighting coefficient.
  • FIG. 5 shows a configuration example of the imaging device of the present embodiment that performs gain control and exposure control by increasing the 4-pixel added value according to the weighting coefficient.
  • the imaging apparatus includes an imaging unit 100, an A / D conversion unit 104, a user I / F unit 106, a control unit 113, and an imaging control unit 118.
  • A fusion video is a moving image from which a high-resolution still image can be generated. For example, it is acquired by the pixel shift described later with reference to FIG. 10 and the like, and a still image can be acquired from it by the estimation method described later.
  • the imaging control unit 118 includes an aperture control unit 120 and an imaging element control unit 119.
  • the imaging control unit 118 drives and controls the imaging unit 100.
  • the image sensor control unit 119 includes a read control unit 160 that controls the image sensor 103 and controls reading of pixel values, and an exposure control unit 161 that controls exposure time.
  • the imaging unit 100 is an optical system for performing imaging, and includes an imaging lens 101, an aperture 102, an imaging element 103 such as a CMOS sensor, and a shutter (not shown).
  • The aperture control unit 120 drives the aperture 102 and the shutter, whereby the operations of the aperture 102 and the shutter are performed.
  • the A / D conversion unit 104 converts an analog signal obtained by imaging by the imaging unit 100 into digital data.
  • the system controller 105 controls each part of the imaging apparatus (system).
  • the system controller 105 includes a coefficient setting unit 130 that sets a weighting coefficient used for weighted addition.
  • the user I / F unit 106 includes a mode switch 107 for setting a shooting mode by the user, a moving image switch 108 for instructing start / stop of moving image recording, and a still image switch 109 for instructing still image recording.
  • the user I / F unit 106 includes a touch panel, operation buttons, and the like.
  • the external memory 110 records captured video data and still image data.
  • the display device 111 is, for example, a liquid crystal display device, and performs live view display and display of reproduced moving images and still images.
  • the recording medium 112 is a medium for recording image data.
  • the display device 111 and the recording medium 112 may be incorporated in the imaging device, or may be an external device that can be attached and detached by a USB or the like.
  • the control unit 113 (signal processing system) includes a system controller 105, a compression / decompression circuit 121 (compression / decompression circuit, compression / decompression unit), a recording medium I / F circuit 126 (recording medium I / F unit), and an exposure control information output unit.
  • 140 AE processing system
  • a still image processing unit 141 (signal processing system)
  • a moving image processing unit 142 (signal processing system)
  • the control unit 113 performs processing of a captured image and control of each component.
  • the moving image processing unit 142 processes the moving image data from the A / D conversion unit 104.
  • the moving image processing unit 142 includes an electronic image stabilization circuit 114, a line memory 115, and a moving image signal processing circuit 116.
  • the electronic image stabilization circuit 114 is an image stabilization circuit that electronically corrects camera shake by image processing.
  • the line memory 115 holds image data for one line so that the electronic image stabilization circuit 114 performs a camera shake correction process of less than one pixel.
  • the moving image signal processing circuit 116 performs processing such as luminance signal conversion and color difference signal conversion on the image data from the electronic image stabilization circuit 114.
  • the still image processing unit 141 processes still image data from the A / D conversion unit 104.
  • the still image processing unit 141 includes a still image signal processing circuit 117, a high resolution processing circuit 127 (estimation calculation unit), and a frame memory 128 (storage unit).
  • the high resolution processing circuit 127 performs high resolution processing for resolving a moving image and estimating a still image.
  • the frame memory 128 holds a frame image in order to perform resolution enhancement processing (estimation processing) by the high resolution processing circuit 127.
  • the still image signal processing circuit 117 performs image processing on a still image that has been subjected to high resolution processing and image processing on a still image that has been shot in the normal still image mode. For example, the still image signal processing circuit 117 performs processing such as luminance signal conversion and color difference signal conversion on still image data.
  • the exposure control information output unit 140 outputs AE control information (exposure control information) for the imaging control unit 118 to perform AE control (exposure control, AE: Auto Exposure).
  • The exposure control information output unit 140 includes an AE processing circuit 122 and an AE gain setting circuit 123.
  • the AE processing circuit 122 obtains an AE evaluation value from the digital image data from the A / D conversion unit 104.
  • The system controller 105 controls the imaging control unit 118 based on this AE evaluation value: the aperture control unit 120 sets the aperture 102, and the imaging element control unit 119 sets the accumulation time of the imaging element 103. In this way, the exposure is controlled to be appropriate.
  • the AE gain setting circuit 123 sets an AE gain for adjusting the difference in the image signal range for AE processing in the normal moving image mode and the fused moving image mode.
  • the AE processing circuit 122 obtains an AE evaluation value from the digital image data multiplied by the AE gain.
  • the display control unit 150 performs control to display a display image on the display device 111, and includes a display device control circuit 124 and a display gain setting circuit 125. In the following, the operation of the display control unit 150 when used as a monitor during recording will be described as an example.
  • the digital image data output from the A / D conversion circuit 104 is input to the display device control circuit 124 via the moving image signal processing circuit 116 and the system controller 105.
  • the display device control circuit 124 performs control to send a display image with an appropriate signal level to the display device 111.
  • the display gain setting circuit 125 sets a display gain for adjusting the difference in the range of the display image signal in the normal moving image mode and the fused moving image mode.
  • the display device control circuit 124 performs control to display digital image data multiplied by the display gain.
  • the compression / decompression circuit 121 compresses still image data generated by the still image signal processing circuit 117, compresses moving image data generated by the moving image signal processing circuit 116, and compresses the compressed image data. Perform decompression processing. For example, the compression / decompression circuit 121 compresses image data into a JPEG image or compresses moving image data into an MPEG image.
  • the recording medium I / F circuit 126 controls reading and writing with respect to the recording medium 112.
  • the system controller 105 performs read and write access to the external memory 110.
  • AE Control and Display Control performed by the imaging apparatus will be described in detail.
  • a problem of AE processing and display processing caused by the weighting coefficient described in FIG. 3B and the like will be described with a specific example, and then a flowchart of processing performed by the present embodiment will be described.
  • a calculation example in the case where the pixel value GR11 is calculated from the 4-pixel addition value gr11 described later with reference to FIG. 10 and the AE gain is set using the GR11 will be described.
  • When the weighting coefficients are W1, W2, W3, and W4, they are expressed by the following equation (1): W1 = 1, W2 = 1/r, W3 = 1/r, W4 = 1/r² (1)
  • The 4-pixel addition value gr11 is expressed by the following expression (2), where r is a real number with r ≥ 1 and GR11, GR13, GR31, and GR33 are the pixel values that are weighted, added, and read out:
  • gr11 = W1·GR11 + W2·GR13 + W3·GR31 + W4·GR33 (2)
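  • A numeric check of expression (2), with hypothetical pixel values; the coefficient values are an assumed reading of equation (1) with the r = 2 example used elsewhere in this description:

```python
# Numerical illustration of expression (2): the added pixel value gr11 is
# the weighted sum of four same-color pixel values. r = 2 follows the gain
# example in this description; the GR pixel values below are made up.

r = 2.0
W1, W2, W3, W4 = 1.0, 1/r, 1/r, 1/r**2   # assumed form of equation (1)
GR11, GR13, GR31, GR33 = 100, 80, 60, 40  # hypothetical sensor pixel values

gr11 = W1*GR11 + W2*GR13 + W3*GR31 + W4*GR33
print(gr11)  # 100 + 40 + 30 + 10 = 180.0
```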
  • a value different from the AE gain when weighted addition is not performed is set as the AE gain when weighted addition is performed.
  • the display gain is set according to the weighting coefficient.
  • FIG. 6 shows a flowchart of processing performed by the present embodiment.
  • a mode is selected (step S1).
  • the aperture is controlled (step S2), and the exposure time of the image sensor is controlled (step S3).
  • pixel readout control is set to no weighting and no pixel shift (step S4), and 4-pixel addition readout is performed (step S5).
  • AE processing is performed without multiplying the 4-pixel addition value by the AE gain (step S6), an AE evaluation value is obtained (step S7), and the aperture value and exposure time are set based on the AE evaluation value (steps S2 and S3).
  • the live view display is controlled without applying the display gain (step S8), and the live view image is displayed (step S9).
  • control is performed to record the captured moving image (step S10), and recording is performed on the recording medium (step S11).
  • When the fused video mode is selected in step S1, the aperture is controlled (step S12) and the exposure time of the image sensor is controlled (step S13).
  • Pixel readout control is set to weighted (step S14), and the weighting coefficients are set (step S15).
  • Four pixels are added and read out (step S16), the sum of the weighting coefficients is calculated (step S17), and the AE gain and display gain are set (step S18).
  • The 4-pixel addition value is multiplied by the AE gain (step S19), AE processing is performed using the resulting image (step S20), an AE evaluation value is obtained (step S21), and the aperture value and exposure time are set based on the AE evaluation value (steps S12 and S13). Further, the 4-pixel addition value is multiplied by the display gain (step S22), display control of the live view image is performed (step S23), and the live view image is displayed (step S24). Further, control is performed to record the captured moving image (step S25), and recording is performed on the recording medium (step S26). The recorded moving image is not multiplied by either gain.
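  • The fused-movie-mode branch of this flow can be sketched compactly as follows. Names are illustrative; only the AE path and the live-view path are multiplied by the gain, while the recorded value is left untouched, as the flowchart states:

```python
# Compact sketch of the fused-movie-mode flow of FIG. 6 (steps S14 to S26),
# with illustrative names and an r = 2 coefficient example.

r = 2.0
weights = [1.0, 1/r, 1/r, 1/r**2]   # step S15: weighting coefficients
gain = 4.0 / sum(weights)           # steps S17-S18: AE gain = display gain

def process(addition_value):
    """Return (AE input, display input, recorded value) for one added pixel."""
    ae_input = addition_value * gain       # step S19: feed AE processing
    display_input = addition_value * gain  # step S22: feed live-view display
    recorded = addition_value              # steps S25-S26: recorded without gain
    return ae_input, display_input, recorded

print(process(576.0))  # approximately (1024.0, 1024.0, 576.0)
```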
  • the AE process is, for example, a process of setting an area for performing photometric evaluation on a captured image, or a process of setting an aperture value setting characteristic and an exposure time setting characteristic (program diagram) with respect to the exposure amount.
  • the AE evaluation value is a photometric evaluation value obtained based on the pixel value in the area set in the captured image.
  • FIG. 7 shows a flowchart of a modified example of processing performed by the present embodiment. This modification is an example in which processing independent of the mode is made common. As shown in FIG. 7, when this process is started, the aperture is controlled (step S50), the exposure time of the image sensor is controlled (step S51), and the mode is selected (step S52).
  • When the normal moving image mode is selected, pixel readout control is set (step S53) and 4-pixel addition readout is performed (step S54). Next, AE control (steps S61, S62, S50, S51), display control (steps S64, S65), and recording control (steps S66, S67) are performed.
  • When the fused video mode is selected in step S52, pixel readout control is set (step S55) and the weighting coefficients are set (step S56). Next, four pixels are added and read out (step S57), the sum of the weighting coefficients is calculated (step S58), and the AE gain and display gain are set (step S59). Next, the 4-pixel addition value is multiplied by the AE gain (step S60), and AE control is performed using the resulting image (steps S61, S62, S50, S51). Further, the 4-pixel addition value is multiplied by the display gain (step S63), and display control of the live view image is performed (steps S64 and S65). Further, control for recording the moving image is performed (steps S66 and S67).
  • FIG. 8 shows a flowchart of a second modification of the process performed by this embodiment.
  • This modification is an example in which the weighting coefficient setting process is shared.
  • When this process is started, the aperture is controlled (step S100) and the exposure time of the image sensor and the like is controlled (step S101).
  • Readout control is set to weighted addition, and the presence or absence of pixel shift is set according to the mode (step S102).
  • The weighting coefficients are set to the values shown in the above equation (1) (step S103).
  • the pixel values are weighted (step S104), and four pixels are added and read (step S105).
  • a sum of weighting coefficients is calculated (step S106), and an AE gain and a display gain are set (step S107).
  • In the normal moving image mode the gain is 1, and in the fused moving image mode the gain is the value expressed by the following equation (7): 4 / (1 + 1/r + 1/r + 1/r²) (7)
  • the 4-pixel addition value is multiplied by the AE gain (step S108), and AE control is performed (steps S109, S110, S100, and S101). Further, the 4-pixel addition value is multiplied by the display gain (step S111), and display control is performed (steps S112 and S113). Further, control for recording a moving image (steps S114 and S115) is performed.
  • As described above, the imaging device of this embodiment includes: the imaging element 103 that captures a subject image; the readout control unit 160 that performs weighted addition of the pixel values of a plurality of pixels of the imaging element 103 and reads out the result as an added pixel value; the coefficient setting unit 130 that sets the weighting coefficients used in the weighted addition; and the exposure control information output unit 140 that outputs exposure control information for performing exposure control of the imaging unit 100 based on the weighting coefficients.
  • the exposure control information is, for example, a photometric evaluation value, or information indicating an aperture value and an exposure time obtained from the photometric evaluation value. Exposure control is performed by setting the aperture and exposure time based on these pieces of information.
  • the imaging unit 100 may be configured integrally with the imaging device such as a compact camera.
  • the imaging unit 100 may be configured such that the imaging element 103 is integrated with the imaging device (body), and the interchangeable lens including the diaphragm 102 and the optical system 101 is configured separately.
  • the coefficient setting unit 130 sets the first weighting coefficient in the first imaging mode (for example, the normal moving image mode).
  • the coefficient setting unit 130 sets a second weighting coefficient in the second imaging mode (for example, the fused moving image mode).
  • the exposure control information output unit 140 obtains the ratio of the sum of the first weighting coefficients and the sum of the second weighting coefficients as the weighting coefficient ratio.
  • the exposure control information output unit 140 outputs exposure control information using a photometric evaluation value based on the weighting coefficient ratio.
  • the exposure control information for performing the exposure control of the imaging unit 100 can be output by outputting the exposure control information using the photometric evaluation value based on the weighting coefficient ratio.
  • the exposure control information output unit 140 obtains a photometric evaluation value from the added pixel value (such as gr11 described later in FIG. 10), and
  • a photometric evaluation value is obtained from a pixel value obtained by multiplying the added pixel value by a weighting coefficient ratio.
  • that is, the photometric evaluation value is obtained either from the added pixel value that is not multiplied by the weighting coefficient ratio, or from a pixel value obtained by multiplying the added pixel value by the weighting coefficient ratio.
  • the imaging apparatus includes a display control unit 150 that adjusts the luminance of the display image based on the weighting coefficient and performs control to display the adjusted display image.
  • the display control unit 150 obtains the ratio of the sum of the first weighting coefficients and the sum of the second weighting coefficients as the weighting coefficient ratio. Then, the display control unit 150 adjusts the luminance of the display image based on the weighting coefficient ratio.
  • the brightness of the display image can be adjusted based on the weighting coefficient ratio between the first imaging mode and the second imaging mode.
  • the live view display can be adjusted and displayed with the same brightness in the first shooting mode and the second shooting mode.
  • Normal Movie Mode The shooting mode of the present embodiment will be described in detail with reference to FIG. First, the normal moving image mode will be described. This mode is a mode in which only a moving image is captured without performing still image shooting in the middle, and addition reading is performed without weighting and without pixel shift.
  • the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the mode setting content shown in FIG.
  • the system controller 105 performs various settings according to instructions from the mode switch 107.
  • the pixel addition signal without weighting and without superposition shift is read from the image sensor 103.
  • the addition reading is performed by a method described later with reference to FIG.
  • the image used for AE control and the image used for display are images that do not gain up and are the same as the images recorded on the recording medium 112.
  • the subject image formed on the image sensor 103 is converted into an electrical signal and sequentially read out.
  • the read image is converted into a digital image by the A / D conversion unit 104 and then input to the electronic image stabilization circuit 114.
  • the image subjected to the image stabilization process is processed by the moving image signal processing circuit 116 to generate a luminance color difference signal.
  • the image data from the moving image signal processing circuit 116 is held in the external memory 110.
  • the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as MPEG4 or Motion-JPEG.
  • the converted image data is stored again in the external memory 110 via the system controller 105.
  • the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
  • the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122.
  • An image from the AE processing circuit 122 is output to the system controller 105, and an AE evaluation value is obtained.
  • the aperture control unit 120 controls the aperture 102 using the AE evaluation value, and the image sensor control unit 119 performs accumulation time control of the image sensor 103 using the AE evaluation value. By these controls, AE control is performed so that an appropriate exposure value is obtained.
  • the AE gain setting circuit 123 sets an AE gain (for example, 1) in the normal moving image mode, or does not set an AE gain in the normal moving image mode.
  • the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122. Based on the image from the AE processing circuit 122, the system controller 105 obtains an evaluation value for displaying on the display device 111 with appropriate brightness. The display control unit 150 performs display control based on the evaluation value.
  • the display gain setting circuit 125 sets a display gain (for example, 1) in the normal moving image mode, or does not set a display gain in the normal moving image mode.
  • This mode is a mode for photographing only a still image, and is a mode for reading all pixels without performing addition reading.
  • the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the setting contents shown in FIG. In this mode, the estimation calculation process by the high resolution processing circuit 127 is not performed.
  • the system controller 105 performs various settings according to instructions from the mode switch 107.
  • the image used for AE control and the image used for display are images that do not gain up and are the same as the images recorded on the recording medium 112.
  • the subject image formed on the image sensor 103 is converted into an electrical signal and sequentially read out.
  • the read image is converted into a digital image by the A / D conversion unit 104 and then input to the high resolution processing circuit 127.
  • the high resolution processing circuit 127 is set to OFF (non-operating state).
  • the signal from the high resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate a luminance color difference signal.
  • the image data from the still image signal processing circuit 117 is held in the external memory 110.
  • the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as RAW or JPEG.
  • the converted image data is stored again in the external memory 110 via the system controller 105.
  • the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
  • the AE control is the same control as the AE control in the normal moving image mode.
  • the display control is the same control as the display control in the normal moving image mode because the live view image is acquired and displayed by the same readout control as in the normal moving image mode.
  • This mode is one of the fused moving image modes, in which a fused moving image for acquiring a still image is recorded as a moving image, and a still image is not estimated. In this mode, pixel shift readout is performed. Note that it is possible to estimate and acquire a high-resolution still image from the fused video shot in this mode after the end of shooting.
  • the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the mode setting content shown in FIG.
  • the system controller 105 performs various settings according to instructions from the mode switch 107.
  • a weighted pixel addition signal with a superimposed shift is read out.
  • the addition reading is performed by a method described later with reference to FIG.
  • the weighting of the pixel value is realized by a method of changing the gain for each pixel. Specifically, when each pixel has an A / D conversion circuit, weighting is performed during A / D conversion.
  • the pixel readout circuit may be weighted in an analog manner by giving a gain, or may be weighted by digital processing after A / D conversion.
  • the subject image formed on the image sensor 103 is converted into an electrical signal and sequentially read out.
  • the read image is converted into a digital image by the A / D conversion unit 104 and then input to the electronic image stabilization circuit 114.
  • the image subjected to the image stabilization process is processed by the moving image signal processing circuit 116 to generate a luminance color difference signal.
  • the image data from the moving image signal processing circuit 116 is held in the external memory 110.
  • the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as MPEG4 or Motion-JPEG.
  • the converted image data is stored again in the external memory 110 via the system controller 105.
  • the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
  • the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122.
  • An image from the AE processing circuit 122 is output to the system controller 105, and an AE evaluation value is obtained.
  • the aperture control unit 120 controls the aperture 102 using the AE evaluation value, and the image sensor control unit 119 performs accumulation time control of the image sensor 103 using the AE evaluation value. By these controls, AE control is performed so that an appropriate exposure value is obtained.
  • the AE gain setting circuit 123 sets the AE gain. That is, the AE image in the normal moving image mode is a pixel addition signal without weighting and without a superimposition shift, whereas the AE image in the moving image still image fusion moving image 1 mode is a pixel addition signal with weighting and with a superimposition shift. Therefore, as described above, the range (value) of the signal after addition differs between the weighted addition signal and the unweighted addition signal.
  • the AE gain setting circuit 123 sets the AE gain (for example, 1.78), thereby aligning the pixel addition signal ranges in the normal moving image mode and the moving image still image fusion moving image 1 mode.
  • the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122.
  • a coefficient having the same value as the coefficient input to the AE gain setting circuit 123 set by the coefficient setting unit 130 in the system controller 105 or a coefficient suitable for the display device 111 is input to the display gain setting circuit 125.
  • the display gain setting circuit 125 sets the display gain to match the characteristics of the display device 111 using the same value as the coefficient input to the AE gain setting circuit 123.
  • the display device control circuit 124 performs control to display a moving image on the display device 111 with appropriate brightness by performing display control using the display gain.
  • the display gain setting circuit 125 sets the display gain. That is, as described above, the added signal range differs between the weighted addition signal and the unweighted addition signal.
  • the display gain setting circuit 125 sets the display gain (for example, 1.78), thereby aligning the pixel addition signal ranges in the normal moving image mode and the moving image still image fusion moving image 1 mode.
  • Movie Still Image Fusion Movie 2 Mode Next, the movie still image fusion movie 2 mode shown in FIG. 9 will be described.
  • This mode is one of the fused video modes, in which the fused video is recorded as a video and no still image is estimated. In this mode, pixel shift readout is not performed. Note that it is possible to estimate and acquire a high-resolution still image from the fused video shot in this mode after the end of shooting.
  • description of operations similar to those described in the moving image still image fusion moving image 1 mode will be omitted as appropriate.
  • the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the mode setting content shown in FIG.
  • the system controller 105 performs various settings according to instructions from the mode switch 107.
  • a weighted pixel addition signal without a superimposition shift is read out.
  • the addition reading is performed by a method described later with reference to FIG.
  • the weighted addition is realized in the same manner as the method described in the moving image still image fusion moving image 1 mode.
  • AE control such as AE gain setting and display control such as display gain setting are performed in the same manner as described in the video still image fusion video 1 mode.
  • This mode is one of the fused video modes, in which a fused video is shot by pixel shift readout, and a high-resolution still image is acquired from the fused video.
  • description of operations similar to those described in the moving image still image fusion moving image 1 mode will be omitted as appropriate.
  • high-resolution still image processing is performed by the high-resolution processing circuit 127. That is, the high resolution processing circuit 127 is turned on (operating state) by the control signal from the system controller 105.
  • the image from the A / D conversion unit 104 is temporarily stored in the frame memory 128 and subjected to high resolution processing by the high resolution processing circuit 127.
  • the process of estimating a still image is performed by the method described later with reference to FIGS. Alternatively, the process may be performed by other methods such as a known super-resolution process.
  • the image from the high resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate a luminance color difference signal.
  • the image data from the still image signal processing circuit 117 is held in the external memory 110.
  • the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as RAW or JPEG.
  • the converted image data is stored again in the external memory 110 via the system controller 105.
  • the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
  • AE control such as AE gain setting and display control such as display gain setting are performed in the same manner as described in the video still image fusion video 1 mode.
  • This mode is one of the fused video modes, and is a mode in which a fused video is shot without pixel shift and a high resolution still image is acquired from the fused video.
  • description of operations similar to those described in the moving image still image fusion moving image 2 mode will be omitted as appropriate.
  • high-resolution still image processing is performed by the high-resolution processing circuit 127. That is, the high resolution processing circuit 127 is turned on (operating state) by the control signal from the system controller 105.
  • the image from the A / D conversion unit 104 is temporarily stored in the frame memory 128 and subjected to high resolution processing by the high resolution processing circuit 127.
  • the process of estimating a still image is performed by the method described later with reference to FIGS.
  • the process may be performed by other methods such as a known super-resolution process.
  • a pixel value corresponding to the pixel shift is obtained by performing interpolation processing on the 4-pixel addition value captured in each frame.
  • interpolation processing is not performed, and high resolution processing may be performed by a technique such as edge enhancement.
  • the image from the high resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate a luminance color difference signal.
  • AE control such as AE gain setting and display control such as display gain setting are performed in the same manner as described in the video still image fusion video 1 mode.
  • the term "frame" used in the following description refers to, for example, the timing at which one image is captured by the image sensor, or the timing at which one image is processed in image processing.
  • one image in the image data is also referred to as a frame as appropriate.
  • FIG. 10 shows an explanatory diagram in the case where pixel shift is performed at one pixel pitch in each frame. This read control is performed in the above-described moving image still image fusion moving image 1 mode and moving image still image fusion still image 1 mode.
  • the image sensor is a Bayer array color image sensor
  • the 4-pixel addition value shown in the following equation (8) is read.
  • W1, W2, W3 and W4 are weighting coefficients shown in the above equation (1).
  • GRij and GBij (i and j are natural numbers) represent green pixel values
  • Rij represents a red pixel value
  • Bij represents a blue pixel value.
  • grij and gbij represent green 4-pixel addition values
  • rij represents a red 4-pixel addition value
  • bij represents a blue 4-pixel addition value.
  • gr11 = W1*GR11 + W2*GR13 + W3*GR31 + W4*GR33
  • r12 = W1*R12 + W2*R14 + W3*R32 + W4*R34
  • b21 = W1*B21 + W2*B23 + W3*B41 + W4*B43
  • gb22 = W1*GB22 + W2*GB24 + W3*GB42 + W4*GB44   (8)
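Equation (8) can be exercised numerically as below. This is an illustrative Python sketch, not part of the disclosure: the helper name `add4`, the toy pixel values, and the example weights are assumptions for illustration. Same-color Bayer pixels sit two pixels apart, which is why the column and row offsets are 2.

```python
# Weighted 4-pixel addition of equation (8): four same-color pixels,
# spaced two pixels apart on the Bayer array, combined with weights
# W1..W4. `raw` is indexed raw[row][col], with the 1-based Bayer
# coordinates of the text mapped to 0-based indices (GR11 -> raw[0][0]).

def add4(raw, row, col, w):
    """Weighted sum of the same-color pixels at (row, col), (row, col+2),
    (row+2, col), (row+2, col+2) -- e.g. gr11 from GR11/GR13/GR31/GR33."""
    w1, w2, w3, w4 = w
    return (w1 * raw[row][col]     + w2 * raw[row][col + 2] +
            w3 * raw[row + 2][col] + w4 * raw[row + 2][col + 2])

# Toy 4x4 sensor patch: all GR sites are 10, all GB sites are 40.
raw = [[10, 20, 10, 20],
       [30, 40, 30, 40],
       [10, 20, 10, 20],
       [30, 40, 30, 40]]
w = (1.0, 0.5, 0.5, 0.25)   # example weights with r = 2
gr11 = add4(raw, 0, 0, w)   # 1*10 + 0.5*10 + 0.5*10 + 0.25*10 = 22.5
print(gr11)
```

Because all four GR sites in the toy patch are equal, the result is simply the pixel value times the sum of the weights (10 × 2.25), which is the range difference that the AE gain of expression (7) compensates for.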
  • FIG. 13 is an explanatory diagram showing the addition read control when pixel shift is not performed. This reading control is performed in the above-described normal moving image mode, moving image still image fusion moving image 2 mode, and moving image still image fusion still image 2 mode.
  • r12 = W1*R12 + W2*R14 + W3*R32 + W4*R34
  • r14 = W1*R16 + W2*R18 + W3*R36 + W4*R38
  • r32 = W1*R52 + W2*R54 + W3*R72 + W4*R74
  • r34 = W1*R56 + W2*R58 + W3*R76 + W4*R78   (13)
  • b21 = W1*B21 + W2*B23 + W3*B41 + W4*B43
  • b23 = W1*B25 + W2*B27 + W3*B45 + W4*B47
  • b41 = W1*B61 + W2*B63 + W3*B81 + W4*B83
  • b43 = W1*B65 + W2*B67 + W3*B85 + W4*B87   (14)
  • gb22 = W1*GB22 + W2*GB24 + W3*GB42 + W4*GB44
  • gb24 = W1*GB26 + W2*GB28 + W3*GB46 + W4*GB48
  • gb42 = W1*GB62 + W2*GB64 + W3*GB82 + W4*GB84
  • gb44 = W1*GB66 + W2*GB68 + W3*GB86 + W4*GB88   (15)
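The index pattern of equations (13) to (15) can be checked as below. This is an illustrative Python sketch, not part of the disclosure: `unit_pixels` is a hypothetical helper over the 1-based Bayer coordinates used in the text. It shows that without pixel shift the anchor pixel of each addition unit steps by 4 columns (r12 → r14 uses R12 → R16), so neighbouring units share no source pixels.

```python
# Without pixel shift (equations (13)-(15)) the 4-pixel units tile the
# sensor: the four source pixels of neighbouring sums never overlap.

def unit_pixels(row, col):
    """The four same-color pixel coordinates summed into one value,
    anchored at the top-left pixel (row, col)."""
    return {(row, col), (row, col + 2), (row + 2, col), (row + 2, col + 2)}

r12 = unit_pixels(1, 2)   # R12, R14, R32, R34 (1-based Bayer indices)
r14 = unit_pixels(1, 6)   # R16, R18, R36, R38 -- anchor 4 columns away
print(r12 & r14)          # empty set: no shared pixels without shift
```

For contrast, a unit anchored only two columns away (as in the overlapped, pixel-shifted readout of FIG. 10) would share two of its four pixels with r12, which is exactly the overlap the estimation process exploits.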
  • the light receiving unit (pixel group) used in the following description represents an area on the image sensor including a plurality of pixels to be added and read, and pixel values of a plurality of pixels included in the light receiving unit are weighted and added. Thus, the added pixel value is acquired.
  • a direction along one axis is referred to as a horizontal direction
  • a direction along the other axis is referred to as a vertical direction.
  • the horizontal direction is the horizontal scanning direction in the imaging operation.
  • the direction along one of the two orthogonal axes is referred to as a horizontal direction
  • the direction along the other axis is referred to as a vertical direction.
  • the weighting coefficients for addition readout are c1, c2, c3, and c4.
  • for example, c1 = 1.
  • the weighting coefficients have the ratio relationship shown in the following equation (16) (r is a real number, r > 1).
  • FIG. 16A is an explanatory diagram of light reception units.
  • v ij is an estimated pixel value estimated from the added pixel value, and is a pixel value corresponding to each pixel of the image sensor.
  • the light reception unit is set for every four pixels of v ij, and the 4-pixel addition value a ij is acquired by reading from each light reception unit.
  • adjacent light receiving units have overlapping regions. For example, a00 and a10 overlap at v10 and v11.
  • 4-pixel addition values a 00 , a 10 , a 01 , and a 11 are read in frames fn to fn + 3, respectively.
  • alternatively, only the 4-pixel addition values a00, a20, ... may be read, and the 4-pixel addition values a10, a01, a11 may be obtained by interpolation from the surrounding 4-pixel addition values a00, a20, ...
  • FIG. 16B illustrates an intermediate pixel value (intermediate estimated pixel value).
  • first, the intermediate pixel values bij are estimated by increasing the resolution in the horizontal direction, and then the estimated pixel values vij are obtained by increasing the resolution of the bij in the vertical direction.
  • the intermediate pixel value b ij corresponds to v ij and v i (j + 1) .
  • bij values adjacent to each other in the vertical direction have overlapping regions. For example, b00 and b01 overlap at v01.
  • b ij may be obtained by increasing the resolution in the vertical direction
  • v ij may be obtained by increasing the resolution in the horizontal direction.
  • the weighted pixel addition values are set to a 00 , a 10 , and a 20 in the order of shift.
  • a00 = c1*v00 + c2*v01 + c3*v10 + c4*v11
  • a10 = c1*v10 + c2*v11 + c3*v20 + c4*v21
  • b 00 , b 10 , and b 20 are defined as shown in the following expression (19), and the above expression (17) is substituted.
  • a pattern ⁇ a 00 , a 10 ⁇ based on sampling pixel values detected by weighted superimposition shift sampling is compared with a pattern based on intermediate pixel values ⁇ b 00 , b 10 , b 20 ⁇ . Then, an unknown number b 00 that minimizes the error E is derived and set as the intermediate pixel value b 00 .
  • the evaluation function Ej shown in the following equation (24) is obtained. Then, the similarity between the pattern ⁇ a 00 , a 10 ⁇ and the intermediate estimated pixel value ⁇ b 00 , b 10 , b 20 ⁇ is evaluated using this evaluation function Ej.
  • the process of estimating vij from bij is carried out in the same manner as the above method of estimating bij from aij. That is, a relational expression for v00, v01, v02 is obtained using the difference value of b00, b01 with v00 as an unknown. Next, an error evaluation function of {v00, v01, v02} and {b00, b01} is obtained, the v00 that minimizes the evaluation function is found, and the obtained v00 is substituted into the relational expression to obtain v01 and v02.
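The b00 estimation step can be sketched numerically as below. This is a minimal illustrative Python sketch, not the patent's implementation: it assumes unweighted coefficients (c1 = c2 = c3 = c4 = 1) and one simple quadratic error between the pattern {b00, b10, b20} and the detected pattern {a00, a10}; the patent's evaluation function Ej in equation (24) may differ in detail, and the closed-form minimizer below follows from the assumed function, not from the patent text.

```python
def estimate_b(a00, a10):
    """Estimate the intermediate pixel values {b00, b10, b20} from two
    overlapped 4-pixel sums a00, a10 (unweighted coefficients assumed)."""
    # Relational expressions from the overlap (cf. equation (22)):
    #   b10 = a00 - b00,   b20 = b00 + delta,   delta = a10 - a00.
    delta = a10 - a00
    # Choose b00 minimizing the assumed quadratic error
    #   E(b00) = (b00 - a00/2)^2 + (b10 - a00/2)^2
    #          + (b10 - a10/2)^2 + (b20 - a10/2)^2,
    # whose closed-form minimizer is b00 = (2*a00 - delta) / 4.
    b00 = (2.0 * a00 - delta) / 4.0
    b10 = a00 - b00
    b20 = b00 + delta
    return b00, b10, b20

# Linear ramp: v rows (1,1), (2,2), (3,3) give a00 = 6 and a10 = 10.
# The true intermediate values b00=2, b10=4, b20=6 are recovered exactly.
print(estimate_b(6.0, 10.0))
```

On a flat scene (a00 = a10) the sketch returns three equal intermediate values, and on a linear ramp it recovers the true values exactly, which is the qualitative behaviour the pattern-similarity estimation is meant to achieve.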
  • the light receiving unit is set for each of a plurality of pixels of the image sensor, and the pixel values of the pixels included in the light receiving unit are weighted, added, and read out as an added pixel value (light reception value), whereby a low resolution frame image is acquired.
  • the acquired low-resolution frame image is stored, and the pixel value of each pixel included in the light receiving unit is estimated based on the plurality of stored low-resolution frame images.
  • a high-resolution frame image having a higher resolution than the low-resolution frame image is output.
  • the low-resolution frame image is acquired by reading the added pixel value while sequentially shifting the pixels while superimposing the light receiving units.
  • the pixel value of each pixel included in the light reception unit is estimated based on a plurality of added pixel values obtained by sequentially shifting the light reception unit.
  • the light receiving unit is set for every four pixels.
  • the added pixel values a00, a20, and the like are read by addition, and a low resolution frame image formed by a00, a20, and the like is acquired.
  • subsequently, a low resolution frame image formed by a10, a30, etc., a low resolution frame image formed by a11, a31, etc., and a low resolution frame image formed by a01, a21, etc. are sequentially acquired.
  • the light receiving units for acquiring a 00 , a 10 , a 11 , and a 01 are shifted by one pixel and overlapped by two pixels.
  • the estimated pixel value v ij is estimated by the high resolution processing circuit 127 (estimation calculation unit).
  • the still image signal processing circuit 117 (image output unit) processes vij and outputs a high resolution image corresponding to the resolution of the image sensor.
  • the estimation process can be simplified using the above-described estimation of the intermediate pixel value.
  • the high-resolution still image can be generated at any timing of the low-resolution moving image, the user can easily obtain the high-resolution still image at the decisive moment.
  • by capturing a low-resolution moving image at the time of shooting it is possible to capture at a high frame rate and acquire a high-resolution still image as necessary.
  • the light receiving unit is sequentially set at a first position a00 and then at a second position a10. These light receiving units overlap in a region including v10 and v11. Then, as described above with reference to FIG. 17, the difference value δi0 of the added pixel values obtained from these light receiving units is obtained.
  • the first intermediate pixel value b00 is the light reception value of the first light receiving region v00, v01 obtained by removing the overlapping region v10, v11 from the light receiving unit a00.
  • the second intermediate pixel value b20 is the light reception value of the second light receiving region v20, v21 obtained by removing the overlapping region v10, v11 from the light receiving unit a10. Then, as shown in the above equation (22), a relational expression between b00 and b20 is expressed using the difference value δi0.
  • the first and second intermediate pixel values b 00 and b 20 are estimated using the relational expression, and the pixel value of each pixel of the light receiving unit is obtained using the estimated first intermediate pixel value b 00 .
  • successive intermediate pixel values including the intermediate pixel values b00 and b20 are set as an intermediate pixel value pattern {b00, b10, b20}.
  • the relational expression between the intermediate pixel values is expressed using the added pixel values a00 and a10.
  • successive added pixel values including the added pixel values a00 and a10 are set as an added pixel value pattern {a00, a10}.
  • the similarity between the intermediate pixel value pattern and the added pixel value pattern is evaluated by comparing them, and based on the evaluation result, the intermediate pixel values b00, b10, and b20 are determined so that the similarity becomes the highest.
  • the intermediate pixel value can be estimated based on a plurality of added pixel values acquired by pixel shifting while superimposing the light receiving units.
  • an evaluation function Ej representing the error between the intermediate pixel value pattern {b00, b10, b20}, which is expressed by the relational expression between the intermediate pixel values, and the added pixel value pattern {a00, a10} is obtained.
  • the intermediate pixel values b00, b10, and b20 are determined so that the value of the evaluation function Ej is minimized.
  • the value of the intermediate pixel value can be estimated by expressing the error by the evaluation function and obtaining the intermediate pixel value corresponding to the minimum value of the evaluation function.
  • the initial value of the intermediate pixel estimation can be set with a simple process by obtaining the unknown using the least square method.
  • each part of the exposure control information output unit 140 and the display control unit 150 is configured by hardware.
  • alternatively, a CPU may be configured to perform the processing of each unit, which may be realized as software by the CPU executing a program.
  • the CPU executes, for example, the processing of the flowcharts shown in FIGS.
  • each unit constituting the still image processing unit 141 is configured by hardware.
  • the present invention is not limited to this.
  • alternatively, using a known computer system such as a personal computer, the processing performed by each unit of the still image processing unit 141 may be implemented as software by having the CPU of the computer system execute a program for realizing that processing.
  • imaging unit 101 imaging lens, 102 aperture, 103 imaging device, 104 A / D converter, 105 system controller, 106 User I / F section, 107 Mode switch, 108 Movie switch, 109 still image switch, 110 external memory, 111 display device, 112 recording medium, 113 control unit, 114 electronic image stabilization circuit, 115 line memory, 116 video signal processing circuit, 117 still image signal processing circuit, 118 imaging control unit, 119 Image sensor control unit, 120 Aperture control unit, 121 Compression / decompression circuit, 122 AE processing circuit, 123 AE gain setting circuit, 124 display device control circuit, 125 display gain setting circuit, 126 recording medium I / F circuit, 127 high resolution processing circuit, 128 frame memory, 130 coefficient setting unit, 140 exposure control information output unit, 141 still image processing unit, 142 video processing unit, 150 display control unit, 160 readout control unit, 161 exposure control unit, a ij pixel addition value, b ij intermediate pixel value, ⁇ i 0 difference value, E

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The object of the present invention is to provide an image capture device that allows simple exposure control. An image capture device according to the present invention comprises an image sensor (103) for capturing a subject image, a readout control unit (160) that takes the weighted sum of the pixel values of a plurality of pixels of the image sensor (103) to read out an added pixel value, a coefficient setting unit (130) for setting a weighting coefficient in the weighted addition, and an exposure control information output unit (140) for outputting exposure control information for performing exposure control of an imaging unit (100) based on the weighting coefficient.
PCT/JP2011/066413 2010-07-22 2011-07-20 Dispositif de capture d'image WO2012011484A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-164705 2010-07-22
JP2010164705A JP2012028971A (ja) Imaging device

Publications (1)

Publication Number Publication Date
WO2012011484A1 true WO2012011484A1 (fr) 2012-01-26

Family

ID=45496903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/066413 WO2012011484A1 (fr) 2010-07-22 2011-07-20 Image capture device

Country Status (2)

Country Link
JP (1) JP2012028971A (fr)
WO (1) WO2012011484A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015115224A1 (fr) * 2014-02-03 2015-08-06 Olympus Corporation Solid-state image capture device and image capture system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001359038A (ja) * 2000-06-09 2001-12-26 Olympus Optical Co Ltd Imaging device
JP2007282134A (ja) * 2006-04-11 2007-10-25 Olympus Imaging Corp Imaging device
JP2009124621A (ja) * 2007-11-19 2009-06-04 Sanyo Electric Co Ltd Super-resolution processing device and method, and imaging device
JP2010130289A (ja) * 2008-11-27 2010-06-10 Panasonic Corp Solid-state imaging device, semiconductor integrated circuit, and defective pixel correction method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104885445A (zh) * 2012-12-25 2015-09-02 Sony Corporation Solid-state imaging device, driving method thereof, and electronic apparatus
CN104885445B (zh) * 2012-12-25 2018-08-28 Sony Corporation Solid-state imaging device, driving method thereof, and electronic apparatus

Also Published As

Publication number Publication date
JP2012028971A (ja) 2012-02-09

Similar Documents

Publication Publication Date Title
US10063768B2 (en) Imaging device capable of combining a plurality of image data, and control method for imaging device
JP5652649B2 (ja) Image processing device, image processing method, and image processing program
US7995852B2 (en) Imaging device and imaging method
JP5764740B2 (ja) Imaging device
US8982242B2 (en) Imaging device and imaging method
US9398230B2 (en) Imaging device and imaging method
JP5729237B2 (ja) Image processing device, image processing method, and program
US20130286254A1 (en) Image capturing apparatus, control method, and recording medium
KR101013830B1 (ko) Imaging apparatus and recording medium storing a program
US8687093B2 (en) Image sensing apparatus, control method thereof, and storage medium
JP5780764B2 (ja) Imaging device
JP4639406B2 (ja) Imaging device
US10187594B2 (en) Image pickup apparatus, image pickup method, and non-transitory computer-readable medium storing computer program
KR100819811B1 (ko) Imaging device and imaging method
US8836821B2 (en) Electronic camera
JP2018148512A (ja) Imaging device, method for controlling imaging device, and program
US20080012964A1 (en) Image processing apparatus, image restoration method and program
JP4678061B2 (ja) Image processing device, digital camera equipped with the same, and image processing program
US20180020149A1 (en) Imaging apparatus and image compositing method
WO2012011484A1 (fr) Image capture device
JP2006253887A (ja) Imaging device
JP2013192121A (ja) Imaging device and imaging method
US20180084209A1 (en) Image pickup apparatus, signal processing method, and signal processing program
JP5655589B2 (ja) Imaging device
US20230328339A1 (en) Image capture apparatus and control method thereof, and image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11809654

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11809654

Country of ref document: EP

Kind code of ref document: A1