WO2017126377A1 - Light receiving device, control method, and electronic device - Google Patents
Light receiving device, control method, and electronic device
- Publication number
- WO2017126377A1 (application PCT/JP2017/000568)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- photometric
- light
- pixel
- sensor
- exposure
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/40—Systems for automatic generation of focusing signals using time delay of the reflected waves, e.g. of ultrasonic waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J1/00—Photometry, e.g. photographic exposure meter
- G01J1/42—Photometry, e.g. photographic exposure meter using electric radiation detectors
- G01J1/44—Electric circuits
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4816—Constructional features, e.g. arrangements of optical elements of receivers alone
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4868—Controlling received signal intensity or exposure of sensor
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B7/00—Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
- G03B7/08—Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
- G03B7/091—Digital circuits
- G03B7/093—Digital circuits for control of exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/702—SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
Definitions
- the present technology relates to a light receiving device, a control method, and an electronic device, and more particularly, to a light receiving device, a control method, and an electronic device that can suppress saturation of a sensor that receives light, for example.
- Examples of the sensor that receives light include an image sensor (imaging sensor) that captures an image by receiving light, and a distance measuring sensor that performs distance measurement.
- An example of the distance measuring sensor is a ToF (Time-of-Flight) sensor (see, for example, Non-Patent Document 1).
- the ToF sensor has pixels that receive light and perform photoelectric conversion.
- The ToF sensor emits light toward the subject. The reflected light returned from the subject is received by the pixels, and the distance to the subject is obtained from the time required from the light emission to the reception of the reflected light (ranging is performed).
- When the ToF sensor receives reflected light from the subject, the light received by the pixels of the ToF sensor includes ambient light in addition to the reflected light from the subject.
- In the ToF sensor, by taking the difference between the electrical signal obtained by photoelectric conversion of the reflected light from the subject (which includes ambient light) and the electrical signal obtained by photoelectric conversion of the ambient light alone (hereinafter also referred to as the ambient light removal difference), the component of the electrical signal corresponding to the ambient light can be removed.
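As an illustrative sketch of this subtraction (the function name, array shapes, and numeric values are our own assumptions, not from the publication), the ambient light removal difference can be computed per pixel as:

```python
import numpy as np

def ambient_light_removal_difference(mixed, ambient):
    """Subtract the photoelectric signal of ambient light alone from the
    signal of reflected light plus ambient light; negative results are
    clipped to zero (an assumption for noisy pixels)."""
    return np.maximum(mixed - ambient, 0.0)

# Per-pixel charge values in arbitrary units (illustrative only)
mixed = np.array([120.0, 200.0, 90.0])     # reflected + ambient light
ambient = np.array([100.0, 100.0, 100.0])  # ambient light alone
print(ambient_light_removal_difference(mixed, ambient))
```

The clipping at zero is one possible policy for pixels where noise makes the difference negative; the publication itself only describes taking the difference.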
- However, when the ambient light is strong light such as sunlight (direct sunlight), the pixels of the ToF sensor may be saturated, the ambient light removal difference cannot be obtained, and it becomes difficult to perform accurate distance measurement.
- When the ambient light is strong, the pixels may be saturated even in an image sensor, and the image quality may deteriorate.
- the present technology has been made in view of such a situation, and is intended to suppress saturation of a sensor that receives light.
- The light receiving device of the present technology includes a photometric sensor that performs photometry by receiving light, another sensor whose light receiving surface is divided into a plurality of blocks, and a control unit that performs exposure control, controlling the exposure of the other sensor for each of the blocks according to the photometric result of the photometric sensor.
- The control method of the present technology includes a step of performing, for each block, exposure control that controls the exposure of another sensor whose light receiving surface is divided into a plurality of blocks, according to the photometric result of a photometric sensor that performs photometry by receiving light.
- An electronic device of the present technology includes an optical system that collects light and a light receiving device that receives light. The light receiving device includes a photometric sensor that performs photometry by receiving light that has passed through the optical system, another sensor whose light receiving surface, which receives light that has passed through that optical system or a different optical system, is divided into a plurality of blocks, and a control unit that performs exposure control, controlling the exposure of the other sensor for each of the blocks according to the photometric result of the photometric sensor.
- In the light receiving device, the control method, and the electronic device of the present technology, exposure control that controls the exposure of the other sensor, whose light receiving surface is divided into a plurality of blocks, is performed for each block according to the photometric result of the photometric sensor that performs photometry by receiving light.
- the light receiving device may be an independent device or an internal block constituting one device.
- saturation of a sensor that receives light can be suppressed.
- It is a diagram illustrating a configuration example of the photometric pixel 11.
- It is a diagram explaining the first pixel saturation suppression method for suppressing saturation of the ranging pixel 21 and the imaging pixel 31.
- It is a flowchart illustrating an example of distance measurement processing by the distance measuring sensor 20 and imaging processing by the image sensor 30.
- It is a flowchart explaining an example of exposure control processing in which saturation of the ranging pixel 21 / imaging pixel 31 is suppressed by the first pixel saturation suppression method.
- It is a flowchart explaining another example of exposure control processing in which saturation of the ranging pixel 21 / imaging pixel 31 is suppressed by the first pixel saturation suppression method.
- It is a diagram explaining the second pixel saturation suppression method for suppressing saturation of the ranging pixel 21 and the imaging pixel 31.
- It is a flowchart explaining an example of exposure control processing in which saturation of the light receiving pixel (ranging pixel 21 / imaging pixel 31) is suppressed by the second pixel saturation suppression method.
- It is a flowchart explaining another example of exposure control processing in which saturation of the light receiving pixel is suppressed by the second pixel saturation suppression method.
- FIG. 18 is a block diagram illustrating a configuration example of an embodiment of a computer to which the present technology is applied.
- FIG. 1 is a block diagram illustrating a configuration example of a first embodiment of a light receiving device to which the present technology is applied.
- In FIG. 1, the light receiving device includes a photometric sensor 10, a distance measuring sensor 20, an image sensor 30, a control unit 41, and a processing unit 42.
- the photometric sensor 10 performs photometry by receiving light.
- the photometric sensor 10 has a plurality of photometric pixels 11.
- the photometric pixel 11 is composed of, for example, a PD (Photo-Diode).
- the photometric pixel 11 receives light, performs photoelectric conversion, and outputs an electric signal (for example, charge) corresponding to the amount of received light (exposure amount) to the processing unit 42.
- the distance sensor 20 performs distance measurement by receiving light.
- the distance measuring sensor 20 includes a plurality of distance measuring pixels 21 and a light emitting unit 22.
- the ranging pixel 21 is composed of, for example, a PD.
- the ranging pixel 21 receives the light emitted from the light emitting unit 22, reflected by the subject and returned, performs photoelectric conversion, and outputs an electrical signal corresponding to the amount of received light to the processing unit 42.
- the light emitting unit 22 emits light such as infrared rays, for example, as electromagnetic waves used for distance measurement.
- the image sensor 30 performs imaging by receiving light.
- the image sensor 30 has a plurality of imaging pixels 31.
- the imaging pixel 31 is configured by, for example, a PD.
- the imaging pixel 31 receives light and performs photoelectric conversion, and outputs an electrical signal corresponding to the amount of received light to the processing unit 42.
- the control unit 41 controls the photometric sensor 10, the distance measuring sensor 20, the image sensor 30, and the processing unit 42 according to, for example, the result of processing in the processing unit 42.
- The processing unit 42 performs predetermined processing on the electrical signals output from the photometric sensor 10, the distance measuring sensor 20, and the image sensor 30, and outputs the processing results to the outside or to the control unit 41 as necessary.
- the photometric pixel 11 of the photometric sensor 10 receives light, performs photoelectric conversion, and outputs an electrical signal corresponding to the amount of received light to the processing unit 42.
- From the electrical signal output from the photometric sensor 10, the processing unit 42 measures the light (ambient light) incident on the photometric sensor 10, and hence on the distance measuring sensor 20 and the image sensor 30, and outputs the photometric result to the control unit 41.
- the control unit 41 performs exposure control for controlling the exposure of the distance measuring sensor 20 and the image sensor 30 according to the photometric result from the processing unit 42.
- the light receiving surface on which the ranging pixels 21 of the distance measuring sensor 20 are arranged and the light receiving surface on which the imaging pixels 31 of the image sensor 30 are arranged are divided into a plurality of pixel blocks.
- The exposure control by the control unit 41 is performed for each pixel block of the distance measuring pixels 21 and of the imaging pixels 31.
- The light receiving device of FIG. 1 includes both the distance measuring sensor 20 and the image sensor 30 as sensors other than the photometric sensor 10.
- However, only the distance measuring sensor 20 or only the image sensor 30 may be provided.
- Furthermore, any sensor that receives light, other than the distance measuring sensor 20 and the image sensor 30, can be provided.
- FIG. 2 is a block diagram illustrating a configuration example of a second embodiment of a light receiving device to which the present technology is applied.
- the light receiving device includes a photometric sensor 10, a distance measuring sensor 20, a control unit 41, and a processing unit 42.
- The light receiving device of FIG. 2 is common to the case of FIG. 1 in that it includes the photometric sensor 10, the distance measuring sensor 20, the control unit 41, and the processing unit 42, but differs from FIG. 1 in that it is not provided with the image sensor 30.
- In the light receiving device configured as described above, the photometric sensor 10, the distance measuring sensor 20, the control unit 41, and the processing unit 42 perform the same processing as in FIG. 1.
- FIG. 3 is a block diagram illustrating a configuration example of a third embodiment of a light receiving device to which the present technology is applied.
- the light receiving device includes a photometric sensor 10, an image sensor 30, a control unit 41, and a processing unit 42.
- The light receiving device of FIG. 3 is common to the case of FIG. 1 in that it includes the photometric sensor 10, the image sensor 30, the control unit 41, and the processing unit 42, but differs from FIG. 1 in that it is not provided with the distance measuring sensor 20.
- In the light receiving device of FIG. 3, the photometric sensor 10, the image sensor 30, the control unit 41, and the processing unit 42 perform the same processing as in FIG. 1.
- FIG. 4 is a diagram for explaining an example of distance measurement by the distance measurement sensor 20.
- the distance measuring sensor 20 is, for example, a ToF sensor, and repeatedly emits pulsed irradiation light from the light emitting unit 22. Further, the distance measuring sensor 20 calculates the distance to the subject (measures the distance) by receiving, with the distance measuring pixel 21, the reflected light that is returned from the irradiation light reflected by the subject.
- the period of pulses as irradiation light is also referred to as a cycle.
- The ratio of each of the received light signals N1 and N2 to the sum of N1 and N2 corresponds to the time for the irradiation light to be reflected by the subject and return as reflected light, and consequently to the distance to the subject.
- In the distance measuring sensor 20 that is a ToF sensor (more precisely, in the processing unit 42 that processes the electrical signal it outputs), the distance to the subject is calculated using the received light signals N1 and N2.
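The excerpt does not spell out how N1 and N2 map to distance, but in a common two-window pulsed ToF scheme the delay of the reflected pulse is T0·N2/(N1+N2). Under that assumption (the window split, variable names, and numeric values below are ours, not the publication's), the calculation looks like:

```python
C = 299_792_458.0  # speed of light [m/s]

def tof_distance(n1, n2, t0):
    """Estimate distance from the two received-light signals N1 and N2
    of a pulsed ToF sensor with pulse width t0 [s].

    Assumes the common two-window scheme in which the round-trip delay
    of the reflected pulse is t0 * N2 / (N1 + N2)."""
    delay = t0 * n2 / (n1 + n2)  # round-trip delay of the pulse
    return C * delay / 2         # halve: light travels out and back

# Example: a 30 ns pulse with N1 == N2 implies a 15 ns round trip (~2.25 m)
print(tof_distance(1000, 1000, 30e-9))
```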
- the distance measurement error ⁇ L of the distance measurement sensor 20 that is a ToF sensor is expressed by the following equation.
- ΔL = cT0 / (4√N12) × (1 + (NB + NO + 4NR^2) / N12) ... (1)
- T0 represents the pulse width of the pulse as the irradiation light
- c represents the speed of light
- N12 represents the added value N1 + N2 of the received light signals N1 and N2
- NR represents read noise
- NB represents background shot noise
- NO represents offset charge
- According to equation (1), the accuracy of distance measurement improves as the pulse width T0 of the irradiation light becomes smaller.
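Equation (1) can be evaluated directly; the sketch below (with illustrative signal and noise values of our own choosing) shows the stated dependence on the pulse width T0:

```python
import math

def ranging_error(t0, n1, n2, nb, no, nr, c=299_792_458.0):
    """Distance measurement error per equation (1):
    dL = c*T0 / (4*sqrt(N12)) * (1 + (NB + NO + 4*NR^2) / N12),
    with N12 = N1 + N2."""
    n12 = n1 + n2
    return c * t0 / (4 * math.sqrt(n12)) * (1 + (nb + no + 4 * nr**2) / n12)

# Halving the pulse width T0 halves the error, other terms being equal
e_long = ranging_error(30e-9, 5000, 5000, 100, 50, 5)
e_short = ranging_error(15e-9, 5000, 5000, 100, 50, 5)
print(e_short / e_long)  # 0.5
```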
- A predetermined time longer than one cycle is defined as one frame (1F), and one frame is composed of a predetermined number N of cycles.
- In the distance measuring sensor 20, the exposure results of the ranging pixels 21 for each cycle are added over the N cycles constituting one frame, and the distance to the subject is obtained from the resulting added value.
- the accuracy of distance measurement improves as the number of cycles to be added increases, that is, as the integrated value of one frame of the exposure time of the distance measurement pixel 21 increases.
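The per-frame addition described above can be sketched as follows, accumulating per-cycle (N1, N2) exposure results over the N cycles of one frame (the data layout is our own illustration):

```python
def accumulate_frame(cycles):
    """Add the per-cycle (N1, N2) exposure results of a ranging pixel
    over the N cycles constituting one frame; the added values are
    what the distance computation uses."""
    n1 = sum(c[0] for c in cycles)
    n2 = sum(c[1] for c in cycles)
    return n1, n2

# Four cycles with identical per-cycle signals; more cycles per frame
# means a larger integrated signal and better ranging accuracy.
print(accumulate_frame([(3, 5)] * 4))  # (12, 20)
```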
- a distance measuring sensor other than the ToF sensor can be employed as the distance measuring sensor 20.
- When the ranging pixel 21 is exposed, that is, when the ranging pixel 21 receives the reflected light from the subject, the light received by the ranging pixel 21 includes ambient light (a noise component) in addition to the reflected light from the subject (the signal component).
- FIG. 5 is a plan view showing an outline of a configuration example of the distance measuring sensor 20 that also serves as the photometric sensor 10.
- the distance measuring sensor 20 can also serve as the photometric sensor 10.
- the distance measuring sensor 20 has a rectangular light receiving surface, for example, and the light receiving surface is divided into rectangular pixel blocks in a matrix.
- a plurality of pixels are arranged in a matrix, for example.
- A pixel block could be configured by arranging only (one or more) ranging pixels 21 in a matrix.
- In the distance measuring sensor 20 that also serves as the photometric sensor 10, however, each pixel block includes photometric pixels 11 in addition to the ranging pixels 21.
- For example, a plurality of photometric pixels 11 can be arranged randomly (uniformly) within the pixel block.
- By providing the photometric pixels 11 that perform photometry in addition to the ranging pixels 21 in each pixel block, the distribution of the amount of light energy over the light receiving surface can be measured, and the exposure can be controlled for each pixel block.
- That is, from the electrical signal (photometric result) output by the photometric pixels 11 included in each pixel block, the presence or absence of saturation of (the charge of) the ranging pixels 21 with respect to the amount of light incident on the light receiving surface is determined for each pixel block, and the exposure of the ranging pixels 21 can be controlled for each pixel block according to the determination result.
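A minimal sketch of this per-block exposure control, assuming a simple proportional policy of our own devising (the publication only states that exposure is controlled per block according to the photometric result):

```python
def block_exposure_control(photometric_results, saturation_threshold, base_exposure):
    """For each pixel block, decide from its photometric result whether
    the ranging pixels would saturate, and shorten that block's exposure
    in proportion to the excess light if so.

    `photometric_results` maps block index -> measured light amount;
    all names and the scaling policy are illustrative assumptions."""
    exposures = {}
    for block, light in photometric_results.items():
        if light > saturation_threshold:
            exposures[block] = base_exposure * saturation_threshold / light
        else:
            exposures[block] = base_exposure
    return exposures

# Block 2 receives 4x the threshold light, so its exposure is quartered
print(block_exposure_control({0: 500, 1: 900, 2: 4000}, 1000, 1e-3))
```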
- When the distance measuring sensor 20 does not also serve as the photometric sensor 10, whether or not the ranging pixels 21 of the distance measuring sensor 20 are saturated can still be determined for each pixel block according to the electrical signal output from the photometric sensor 10.
- In this case, the amount of light incident on a pixel block B of the distance measuring sensor 20 is regarded as equal to the amount of light incident on the area of the photometric sensor 10 corresponding to that pixel block B.
- The image sensor 30 can also serve as the photometric sensor 10 in the same manner as the distance measuring sensor 20. That is, in the description of FIG. 5 above, "distance measuring sensor 20" and "ranging pixel 21" can be read as "image sensor 30" and "imaging pixel 31", respectively.
- In FIG. 5, the photometric pixels 11 and the ranging pixels 21 are arranged on one (semiconductor) substrate constituting the distance measuring sensor 20. That is, the photometric pixels 11 and the ranging pixels 21 can be arranged on the same substrate.
- In the distance measuring sensor 20 that also serves as the photometric sensor 10, a laminated structure can also be adopted, in which the photometric pixels 11 are arranged on one substrate constituting the distance measuring sensor 20 and the ranging pixels 21 are arranged on another substrate.
- the distance measuring sensor 20 that also serves as the light measuring sensor 10 can be configured by laminating a substrate on which the light measuring pixels 11 are arranged and a substrate on which the distance measuring pixels 21 are arranged.
- FIG. 6 is a side view showing an outline of another configuration example of the distance measuring sensor 20 that also serves as the photometric sensor 10.
- the distance measuring sensor 20 has a laminated structure in which a substrate 61 on which the photometric pixels 11 are arranged and a substrate 62 on which the ranging pixels 21 are arranged are laminated.
- the photometric pixels 11 and the distance measuring pixels 21 can be arranged on the substrates 61 and 62 in the same pattern as in FIG.
- the substrate 61 on which the photometric pixels 11 are arranged is arranged on the upper side (the side on which light is incident), and the substrate 62 on which the ranging pixels 21 are arranged is arranged on the lower side.
- the portion of the upper substrate 61 that faces the ranging pixels 21 arranged on the lower substrate 62 can be made of a transparent material so that light is transmitted to the lower substrate 62, for example.
- the image sensor 30 also serving as the photometric sensor 10 can be configured in a laminated structure, similar to the distance measuring sensor 20 also serving as the photometric sensor 10 of FIG.
- FIG. 7 is a diagram illustrating a configuration example of the photometric pixel 11.
- In the distance measuring sensor 20 or the image sensor 30 that also serves as the photometric sensor 10, each pixel block has, as the photometric pixels 11, pixels of a plurality of pixel types such as pa, pb, pc, pd, pe, pf, and pg.
- When the photometric sensor 10 is provided separately, each area of its light receiving surface corresponding to a pixel block of the distance measuring sensor 20 or the image sensor 30 likewise has pixels of the plurality of pixel types pa to pg.
- the pixels of the pixel types pa to pg are appropriately shielded from light by a light shielding film, and have different aperture ratios.
- In the pixel type pa, the aperture is not shielded at all, and the aperture ratio is 1 (100%).
- In the pixel types pb to pg, the aperture ratios are, for example, 0.75, 0.66, 0.5, 0.33, 0.25, and 0.1, respectively.
- Hereinafter, the photometric pixels 11 included in a pixel block of the distance measuring sensor 20 or the image sensor 30, and, when the distance measuring sensor 20 or the image sensor 30 does not also serve as the photometric sensor 10, the photometric pixels 11 in the area of the light receiving surface of the photometric sensor 10 corresponding to that pixel block, are both referred to as the photometric pixels 11 corresponding to the pixel block.
- As the photometric pixels 11 corresponding to one pixel block, at least one pixel of each of the pixel types pa to pg can be adopted.
- The number of pixels of each of the pixel types pa to pg adopted as the photometric pixels 11 corresponding to one pixel block may be the same or different.
- As the photometric pixels 11 corresponding to one pixel block, in addition to the pixels of the pixel types pa to pg, a light-shielded pixel whose aperture is completely shielded (a pixel with an aperture ratio of 0) can also be adopted.
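The pixel types and their aperture ratios can be summarized in code; assigning the example ratios to pb..pg in order is our reading of the text:

```python
# Aperture ratios of the photometric pixel types (pa is fully open;
# the values for pb..pg are the examples given in the text, in order)
APERTURE_RATIO = {
    "pa": 1.0, "pb": 0.75, "pc": 0.66, "pd": 0.5,
    "pe": 0.33, "pf": 0.25, "pg": 0.1,
}

def effective_exposure(incident_light, pixel_type):
    """Light reaching a photometric pixel, scaled by its aperture ratio."""
    return incident_light * APERTURE_RATIO[pixel_type]

# The same incident light produces a 10x smaller signal in a pg pixel
# than in a pa pixel, so pg saturates only at much stronger illumination.
print(effective_exposure(1000.0, "pa"))  # 1000.0
print(effective_exposure(1000.0, "pg"))
```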
- FIG. 8 is a diagram for explaining a first pixel saturation suppression method that suppresses saturation of the ranging pixel 21 and the imaging pixel 31.
- Using the photometric pixels 11 corresponding to each pixel block, the amount of light incident on the distance measuring sensor 20 or the image sensor 30 is measured in each frame, and exposure control (drive control) of the ranging pixels 21 or the imaging pixels 31 of the pixel block can be performed so as to suppress saturation of those pixels.
- As methods of improving the distance measurement accuracy of the distance measuring sensor 20, there are, for example, increasing the intensity of the electromagnetic waves (infrared rays or the like) emitted from the light emitting unit 22; increasing the frequency of the sine wave when the emitted electromagnetic wave is a sine wave; and shortening the pulse width when the emitted electromagnetic wave is a pulse.
- So that the frequency of the sine wave emitted by the light emitting unit 22 can be increased and the pulse width of the pulse can be shortened, the ranging pixel 21 of the distance measuring sensor 20 is configured to be capable of high-speed operation, performing short exposures at the nanosecond level.
- In the distance measuring sensor 20 (its ranging pixels 21) and the image sensor 30 (its imaging pixels 31) capable of such high-speed operation, saturation of the ranging pixels 21 and the imaging pixels 31 can be suppressed by the following first and second pixel saturation suppression methods.
- In the first pixel saturation suppression method, exposure control that appropriately sets the exposure time according to the electrical signal (photometric result) output from the photometric pixels 11 corresponding to the pixel block is performed, whereby saturation of the ranging pixels 21 and the imaging pixels 31, that is, overflow of the charge as the electrical signal obtained by photoelectric conversion in those pixels, can be suppressed.
- In the second pixel saturation suppression method, exposure with a short exposure time is repeated within one frame for the ranging pixel 21 or the imaging pixel 31, the electrical signals (exposure results) output from the ranging pixel 21 or the imaging pixel 31 at each exposure are added using storage means such as a memory, and the added value is used as the pixel value of the ranging pixel 21 or the imaging pixel 31.
- Furthermore, in the second pixel saturation suppression method, saturation of the ranging pixels 21 and the imaging pixels 31 within one frame is suppressed by performing exposure control that appropriately sets the addition number (for example, the number of exposure results to be added) for adding the exposure results of the ranging pixel 21 or the imaging pixel 31, according to the electrical signal (photometric result) output from the photometric pixels 11 corresponding to the pixel block.
- According to the first and second pixel saturation suppression methods, the pixel values of the ranging pixel 21 and the imaging pixel 31 can be suppressed from reaching the maximum value (the pixel value corresponding to the maximum amount of charge that the ranging pixel 21 or the imaging pixel 31 can accumulate).
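A sketch of the second method's addition-count control, under the assumption (ours, not stated in the publication) that the addition number is chosen so the added value stays below the pixel's full-well capacity:

```python
def addition_count(per_exposure_signal, full_well, max_additions):
    """Number of short-exposure results to add within one frame so that
    the added value stays below the pixel's full-well capacity.

    `per_exposure_signal` is the expected charge per short exposure,
    estimated from the photometric result; all names are illustrative."""
    if per_exposure_signal <= 0:
        return max_additions
    # Largest count whose summed signal still fits in the pixel
    count = int(full_well // per_exposure_signal)
    return max(1, min(count, max_additions))

# Bright block: each short exposure yields ~3000 e-, full well 10000 e-
print(addition_count(3000, 10_000, 16))  # 3
# Dim block: each exposure yields ~100 e-, so all 16 exposures are added
print(addition_count(100, 10_000, 16))   # 16
```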
- As a result, the dynamic range of an image composed of the pixel values (representing distances) of the ranging pixels 21, or of an image composed of the pixel values of the imaging pixels 31, can be increased.
- both or one of the ranging image 21 and the imaging pixel 31 is also referred to as a light receiving pixel.
- The pixel type represents the pixel types pa to pg described above.
- The aperture ratio represents the ratio of the aperture ratio of each pixel type to an aperture ratio of 100% (the value a/b when the ratio is a:b; here, this equals the aperture ratio of each pixel type).
- The exposure time ratio represents the ratio of the light receiving exposure time, which is the exposure time of the light receiving pixels, to the photometric exposure time, which is the exposure time of the photometric pixel 11. The exposure time ratio matches the aperture ratio.
- In the first pixel saturation suppression method, first, a certain time is set as the photometric exposure time, and exposure is performed for that photometric exposure time. Then, from the exposure for the photometric exposure time, it is determined whether each of the photometric pixels 11 of the pixel types pa to pg is saturated or unsaturated.
- When none of the photometric pixels 11 is saturated, the light receiving exposure time is set using the exposure time ratio of the photometric pixel 11 of the pixel type pa, whose aperture ratio is the maximum among the unsaturated photometric pixels 11.
- That is, the product of the photometric exposure time and the exposure time ratio of the photometric pixel 11 of the pixel type pa is set as the light receiving exposure time. Since the exposure time ratio of the photometric pixel 11 of the pixel type pa is 1, when the light receiving exposure time is set using this exposure time ratio, the light receiving exposure time is set to the same time as the photometric exposure time.
- On the other hand, suppose that the photometric pixels 11 of the pixel types pa to pc are saturated and the remaining photometric pixels 11 of the pixel types pd to pg are unsaturated.
- In this case, the light receiving exposure time is set using the exposure time ratio of the photometric pixel 11 of the pixel type pd, which has the maximum aperture ratio among the unsaturated photometric pixels 11.
- That is, the product of the photometric exposure time and the exposure time ratio of the photometric pixel 11 of the pixel type pd is set as the light receiving exposure time. Since the exposure time ratio of the photometric pixel 11 of the pixel type pd is 0.5, when the light receiving exposure time is set using this exposure time ratio, the light receiving exposure time is set to 0.5 times the photometric exposure time.
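The selection described above can be sketched as follows. This is an illustrative sketch, not part of the patent text: the exposure time ratios other than the values 1 (pixel type pa) and 0.5 (pixel type pd) given in the text are hypothetical example values, and the exposure time ratio is assumed to equal the aperture ratio as stated for FIG. 8.

```python
# Illustrative sketch of the first pixel saturation suppression method.
# Ratios other than pa=1 and pd=0.5 are hypothetical example values.
EXPOSURE_TIME_RATIO = {
    "pa": 1.0, "pb": 0.841, "pc": 0.707, "pd": 0.5,
    "pe": 0.354, "pf": 0.25, "pg": 0.125,
}
# Pixel types ordered from the largest aperture ratio to the smallest.
PIXEL_TYPES = ["pa", "pb", "pc", "pd", "pe", "pf", "pg"]

def light_receiving_exposure_time(photometric_exposure_time, saturated):
    """saturated maps each pixel type to True (saturated) / False (unsaturated).
    Returns the photometric exposure time multiplied by the exposure time
    ratio of the unsaturated photometric pixel whose aperture ratio is the
    maximum, or None if all photometric pixels are saturated."""
    for pixel_type in PIXEL_TYPES:
        if not saturated[pixel_type]:
            return photometric_exposure_time * EXPOSURE_TIME_RATIO[pixel_type]
    return None  # all saturated: the photometric exposure time must be shortened
```

For the example in the text (pa to pc saturated, pd to pg unsaturated), a photometric exposure time of 10 ms yields a light receiving exposure time of 5 ms (0.5 times).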
- FIG. 9 is a flowchart for explaining an example of the distance measurement process using the distance measurement sensor 20 and the imaging process using the image sensor 30.
- In step S11, the control unit 41 sets a predetermined default time as the photometric exposure time, and the process proceeds to step S12.
- In step S12, the control unit 41 controls the photometric sensor 10 to perform exposure for the photometric exposure time, thereby causing the photometric pixels 11 to receive light for the photometric exposure time and perform photometry.
- The photometric result of the photometric pixels 11 is supplied to the control unit 41 via the processing unit 42, and the process proceeds from step S12 to step S13.
- In step S13, the control unit 41 performs exposure control of the ranging pixels 21 / imaging pixels 31 for each pixel block in accordance with the photometric result of the photometric pixels 11, and the process proceeds to step S14.
- By this exposure control, the saturation of the ranging pixels 21 / imaging pixels 31 is suppressed by the first or second pixel saturation suppression method.
- In step S14, the distance measurement sensor 20 / image sensor 30 performs exposure according to the exposure control for each pixel block performed by the control unit 41 in step S13, whereby the ranging pixels 21 / imaging pixels 31 receive light, distance measurement / imaging is performed, and the process ends.
- FIG. 10 is a flowchart for explaining an example of exposure control processing for suppressing the saturation of the ranging pixels 21 / imaging pixels 31 by the first pixel saturation suppression method performed in step S13 of FIG.
- the processing according to the flowchart of FIG. 10 is performed for each pixel block.
- Hereinafter, taking a certain pixel block as a block of interest, the exposure control of the block of interest will be described.
- In step S21, the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pa corresponding to the block of interest is saturated.
- If it is determined in step S21 that the photometric pixel 11 of the pixel type pa is not saturated, the process proceeds to step S22.
- In step S22, the control unit 41 sets the light receiving exposure time using the exposure time ratio (FIG. 8) of the photometric pixel 11 of the pixel type pa, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pa and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S21 that the photometric pixel 11 of the pixel type pa is saturated, the process proceeds to step S23, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pb corresponding to the block of interest is saturated.
- If it is determined in step S23 that the photometric pixel 11 of the pixel type pb is not saturated, the process proceeds to step S24.
- In step S24, the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pb, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pb and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S23 that the photometric pixel 11 of the pixel type pb is saturated, the process proceeds to step S25, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pc corresponding to the block of interest is saturated.
- If it is determined in step S25 that the photometric pixel 11 of the pixel type pc is not saturated, the process proceeds to step S26.
- In step S26, the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pc, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pc and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S25 that the photometric pixel 11 of the pixel type pc is saturated, the process proceeds to step S27, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pd corresponding to the block of interest is saturated.
- If it is determined in step S27 that the photometric pixel 11 of the pixel type pd is not saturated, the process proceeds to step S28.
- In step S28, the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pd, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pd and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S27 that the photometric pixel 11 of the pixel type pd is saturated, the process proceeds to step S29, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pe corresponding to the block of interest is saturated.
- If it is determined in step S29 that the photometric pixel 11 of the pixel type pe is not saturated, the process proceeds to step S30.
- In step S30, the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pe, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pe and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S29 that the photometric pixel 11 of the pixel type pe is saturated, the process proceeds to step S31, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pf corresponding to the block of interest is saturated.
- If it is determined in step S31 that the photometric pixel 11 of the pixel type pf is not saturated, the process proceeds to step S32.
- In step S32, the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pf, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pf and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S31 that the photometric pixel 11 of the pixel type pf is saturated, the process proceeds to step S33, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pg corresponding to the block of interest is saturated.
- If it is determined in step S33 that the photometric pixel 11 of the pixel type pg is not saturated, the process proceeds to step S34.
- In step S34, the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pg, and the process returns. That is, the control unit 41 sets the product of the exposure time ratio of the photometric pixel 11 of the pixel type pg and the current photometric exposure time as the light receiving exposure time.
- If it is determined in step S33 that the photometric pixel 11 of the pixel type pg is saturated, the process proceeds to step S35.
- In step S35, the control unit 41 sets a time shorter than the current photometric exposure time as a new photometric exposure time, controls the photometric sensor 10 to perform exposure for the new photometric exposure time, and receives the photometric result of the photometric pixel 11 obtained by that exposure. Then, the process returns from step S35 to step S21, and the same processing is repeated thereafter.
- In step S14 of FIG. 9, the light receiving pixels (ranging pixels 21 / imaging pixels 31) are exposed for the light receiving exposure time set by the exposure control process of FIG. 10.
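The flow of FIG. 10 can be sketched as the following loop. This is an illustrative sketch under stated assumptions: the `meter` callback, the ratio table (only pa=1 and pd=0.5 are given in the text), and the halving factor used to obtain a "shorter" photometric exposure time in step S35 are all hypothetical; the patent specifies only that a shorter time is set.

```python
# Hypothetical exposure time ratios per pixel type, largest aperture first
# (pa=1 and pd=0.5 are from the text; the other values are illustrative).
RATIOS = [("pa", 1.0), ("pb", 0.841), ("pc", 0.707), ("pd", 0.5),
          ("pe", 0.354), ("pf", 0.25), ("pg", 0.125)]

def exposure_control_fig10(photometric_exposure_time, meter):
    """meter(t) performs photometry with exposure time t and returns a dict
    mapping each pixel type to True if its photometric pixel saturated."""
    while True:
        saturated = meter(photometric_exposure_time)   # steps S12 / S35
        for pixel_type, ratio in RATIOS:               # steps S21..S33
            if not saturated[pixel_type]:
                # steps S22..S34: use the ratio of the unsaturated pixel
                # type with the maximum aperture ratio
                return photometric_exposure_time * ratio
        photometric_exposure_time *= 0.5               # step S35 (factor assumed)
```

The loop terminates once any pixel type is unsaturated at the current photometric exposure time; otherwise the photometric exposure time keeps shrinking, as in steps S35 to S21.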
- Note that the photometric pixels 11 can include a light-shielded pixel in addition to the pixel types pa to pg.
- When the photometric pixels 11 include a light-shielded pixel, the control unit 41 can, for example, output an error message.
- Further, when it is possible to perform light amount reduction control that reduces the amount of light incident on the light receiving pixels, for example by stopping down a diaphragm (not shown), the control unit 41 can perform such light amount reduction control.
- FIG. 11 is a flowchart for explaining another example of the exposure control process for suppressing the saturation of the ranging pixels 21 / imaging pixels 31 by the first pixel saturation suppression method performed in step S13 of FIG.
- the processing according to the flowchart of FIG. 11 is performed for each pixel block.
- Hereinafter, taking a certain pixel block as a block of interest, the exposure control of the block of interest will be described.
- In step S51, the control unit 41 determines whether the photometric pixel 11 of the pixel type pg corresponding to the block of interest is saturated.
- If it is determined in step S51 that the photometric pixel 11 of the pixel type pg is saturated, the process proceeds to step S52.
- In step S52, the control unit 41 sets a time shorter than the current photometric exposure time as a new photometric exposure time, controls the photometric sensor 10 to perform exposure for the new photometric exposure time, and receives the photometric result of the photometric pixel 11 obtained by that exposure. Then, the process returns from step S52 to step S51, and the same processing is repeated thereafter.
- If it is determined in step S51 that the photometric pixel 11 of the pixel type pg is not saturated, the process proceeds to step S53, and the control unit 41 determines whether the photometric pixel 11 of the pixel type pf corresponding to the block of interest is saturated.
- If it is determined in step S53 that the photometric pixel 11 of the pixel type pf is saturated, the process proceeds to step S54, where the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pg, and the process returns.
- If it is determined in step S53 that the photometric pixel 11 of the pixel type pf is not saturated, the process proceeds to step S55, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pe corresponding to the block of interest is saturated.
- If it is determined in step S55 that the photometric pixel 11 of the pixel type pe is saturated, the process proceeds to step S56, where the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pf, and the process returns.
- If it is determined in step S55 that the photometric pixel 11 of the pixel type pe is not saturated, the process proceeds to step S57, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pd corresponding to the block of interest is saturated.
- If it is determined in step S57 that the photometric pixel 11 of the pixel type pd is saturated, the process proceeds to step S58, where the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pe, and the process returns.
- If it is determined in step S57 that the photometric pixel 11 of the pixel type pd is not saturated, the process proceeds to step S59, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pc corresponding to the block of interest is saturated.
- If it is determined in step S59 that the photometric pixel 11 of the pixel type pc is saturated, the process proceeds to step S60, where the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pd, and the process returns.
- If it is determined in step S59 that the photometric pixel 11 of the pixel type pc is not saturated, the process proceeds to step S61, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pb corresponding to the block of interest is saturated.
- If it is determined in step S61 that the photometric pixel 11 of the pixel type pb is saturated, the process proceeds to step S62, where the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pc, and the process returns.
- If it is determined in step S61 that the photometric pixel 11 of the pixel type pb is not saturated, the process proceeds to step S63, and the control unit 41 determines whether or not the photometric pixel 11 of the pixel type pa corresponding to the block of interest is saturated.
- If it is determined in step S63 that the photometric pixel 11 of the pixel type pa is saturated, the process proceeds to step S64, where the control unit 41 sets the light receiving exposure time using the exposure time ratio of the photometric pixel 11 of the pixel type pb, and the process returns.
- If it is determined in step S63 that the photometric pixel 11 of the pixel type pa is not saturated, the process proceeds to step S65.
- In step S65, the control unit 41 sets a time longer than the current photometric exposure time as a new photometric exposure time, controls the photometric sensor 10 to perform exposure for the new photometric exposure time, and receives the photometric result of the photometric pixel 11 obtained by that exposure. Then, the process returns from step S65 to step S51, and the same processing is repeated thereafter.
- In step S14 of FIG. 9, the light receiving pixels (ranging pixels 21 / imaging pixels 31) are exposed for the light receiving exposure time set by the exposure control process of FIG. 11.
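The flow of FIG. 11 differs from that of FIG. 10 in scanning from the smallest aperture (pg) upward and in lengthening the photometric exposure time when even the pixel type pa is unsaturated. A sketch under the same assumptions as before (hypothetical ratio table; halving/doubling factors assumed, since the patent specifies only "shorter" and "longer" times):

```python
# Hypothetical exposure time ratios (pa=1 and pd=0.5 are from the text).
RATIO = {"pa": 1.0, "pb": 0.841, "pc": 0.707, "pd": 0.5,
         "pe": 0.354, "pf": 0.25, "pg": 0.125}
ORDER = ["pa", "pb", "pc", "pd", "pe", "pf", "pg"]  # largest aperture first

def exposure_control_fig11(photometric_exposure_time, meter):
    """meter(t) performs photometry with exposure time t and returns a dict
    mapping each pixel type to True if its photometric pixel saturated."""
    while True:
        saturated = meter(photometric_exposure_time)
        if saturated["pg"]:                        # steps S51 -> S52
            photometric_exposure_time *= 0.5       # shorter time (factor assumed)
            continue
        if not saturated["pa"]:                    # steps S63 -> S65
            photometric_exposure_time *= 2.0       # longer time (factor assumed)
            continue
        # Steps S53..S64: pa is saturated and pg is not, so a boundary exists;
        # use the ratio of the largest-aperture unsaturated pixel type.
        for i in range(len(ORDER) - 1):
            if saturated[ORDER[i]] and not saturated[ORDER[i + 1]]:
                return photometric_exposure_time * RATIO[ORDER[i + 1]]
```

This sketch assumes that saturation is monotonic in the aperture ratio (if a pixel type is saturated, every larger-aperture type is too), which is what the step ordering of FIG. 11 implies.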
- FIG. 12 is a diagram for explaining the second pixel saturation suppression method for suppressing the saturation of the ranging pixels 21 and the imaging pixels 31.
- As described above, the second pixel saturation suppression method repeats exposure with a short exposure time within one frame for the light receiving pixels, adds the exposure results of the light receiving pixels at each exposure, and uses the added value as the pixel value of the light receiving pixels.
- In the second pixel saturation suppression method, the addition number for adding the exposure results of the light receiving pixels is set to an appropriate value, that is, a value that suppresses the saturation of the ranging pixels 21 and the imaging pixels 31.
- FIG. 12 is the same diagram as FIG. 8 except that the addition number is described instead of the exposure time ratio of FIG. 8.
- The addition number in FIG. 12 represents the number of exposure results to be added when exposure with a short exposure time is repeated within one frame and the exposure results of the light receiving pixels at each exposure are added.
- In FIG. 12, N times is set as a reference addition number, and each pixel type is associated, as its addition number, with the product of the aperture ratio of the photometric pixel 11 of that pixel type and the reference addition number N.
- In the second pixel saturation suppression method, as in the first pixel saturation suppression method, first, exposure is performed for the photometric exposure time, and it is determined from that exposure whether each of the photometric pixels 11 of the pixel types pa to pg is saturated or unsaturated.
- When none of the photometric pixels 11 is saturated, the addition number N associated with the pixel type pa, whose aperture ratio is the maximum among the unsaturated photometric pixels 11, is set as the number of exposure results of the light receiving pixels to be added.
- On the other hand, suppose that the photometric pixels 11 of the pixel types pa to pc are saturated and the remaining photometric pixels 11 of the pixel types pd to pg are unsaturated.
- In this case, the addition number 0.5N associated with the pixel type pd, which has the maximum aperture ratio among the unsaturated photometric pixels 11, is set as the number of exposure results of the light receiving pixels to be added.
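The selection of the addition number can be sketched analogously to the first method. This is an illustrative sketch, not part of the patent text: the reference addition number N = 16 and the intermediate aperture ratios are hypothetical example values; the text fixes only that pa (aperture ratio 1) corresponds to N additions and pd (aperture ratio 0.5) to 0.5N.

```python
# Hypothetical aperture ratios (pa=1 and pd=0.5 are from the text).
APERTURE_RATIO = {"pa": 1.0, "pb": 0.841, "pc": 0.707, "pd": 0.5,
                  "pe": 0.354, "pf": 0.25, "pg": 0.125}
PIXEL_TYPES = ["pa", "pb", "pc", "pd", "pe", "pf", "pg"]
N = 16  # reference addition number (example value)

def addition_number(saturated, n_ref=N):
    """FIG. 12: each pixel type is associated with aperture ratio x N
    additions. Returns the addition number of the max-aperture unsaturated
    pixel type, rounded to a whole number of exposure results, or None if
    all photometric pixels are saturated."""
    for pixel_type in PIXEL_TYPES:
        if not saturated[pixel_type]:
            return round(APERTURE_RATIO[pixel_type] * n_ref)
    return None
```

For the example in the text (pa to pc saturated, pd to pg unsaturated) with N = 16, the addition number is 0.5N = 8.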
- FIG. 13 is a flowchart for explaining an example of the exposure control process for suppressing the saturation of the light receiving pixels (ranging pixels 21 / imaging pixels 31) by the second pixel saturation suppression method performed in step S13 of FIG. 9.
- the processing according to the flowchart of FIG. 13 is performed for each pixel block.
- Hereinafter, taking a certain pixel block as a block of interest, the exposure control of the block of interest will be described.
- In step S70, the control unit 41 sets, in accordance with the current photometric exposure time, for example a time proportional to the current photometric exposure time, as the light receiving exposure time for each exposure of the light receiving pixels repeated within one frame, and the process proceeds to step S71.
- In steps S71, S73, S75, S77, S79, S81, and S83, determination processes similar to those in steps S21, S23, S25, S27, S29, S31, and S33 of FIG. 10 are respectively performed.
- If it is determined in step S71 that the photometric pixel 11 of the pixel type pa is not saturated, the process proceeds to step S72.
- In step S72, the control unit 41 sets the addition number (FIG. 12) associated with the pixel type pa as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S73 that the photometric pixel 11 of the pixel type pb is not saturated, the process proceeds to step S74.
- In step S74, the control unit 41 sets the addition number associated with the pixel type pb as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S75 that the photometric pixel 11 of the pixel type pc is not saturated, the process proceeds to step S76.
- In step S76, the control unit 41 sets the addition number associated with the pixel type pc as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S77 that the photometric pixel 11 of the pixel type pd is not saturated, the process proceeds to step S78.
- In step S78, the control unit 41 sets the addition number associated with the pixel type pd as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S79 that the photometric pixel 11 of the pixel type pe is not saturated, the process proceeds to step S80.
- In step S80, the control unit 41 sets the addition number associated with the pixel type pe as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S81 that the photometric pixel 11 of the pixel type pf is not saturated, the process proceeds to step S82.
- In step S82, the control unit 41 sets the addition number associated with the pixel type pf as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S83 that the photometric pixel 11 of the pixel type pg is not saturated, the process proceeds to step S84.
- In step S84, the control unit 41 sets the addition number associated with the pixel type pg as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S83 that the photometric pixel 11 of the pixel type pg is saturated, the process proceeds to step S85.
- In step S85, as in step S35 of FIG. 10, the control unit 41 sets a time shorter than the current photometric exposure time as a new photometric exposure time, controls the photometric sensor 10 to perform exposure for the new photometric exposure time, and receives the photometric result of the photometric pixel 11 obtained by that exposure. Then, the process returns from step S85 to step S70, and the same processing is repeated thereafter.
- In step S14 of FIG. 9, exposure for the light receiving exposure time is repeatedly performed within one frame for the light receiving pixels (ranging pixels 21 / imaging pixels 31). Then, among the exposure results of the light receiving pixels at each exposure, as many exposure results as the addition number are added, and the resulting added value is obtained as the pixel value of the light receiving pixels.
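The per-frame accumulation in step S14 can be sketched as follows. This is an illustrative sketch: the `expose_once` callback and the fixed number of exposures per frame are hypothetical; the patent states only that short exposures are repeated within one frame and that as many exposure results as the addition number are added using storage means such as a memory.

```python
def frame_pixel_value(expose_once, exposures_per_frame, addition_number):
    """Repeat short exposures within one frame and accumulate the first
    addition_number exposure results; the accumulated sum is used as the
    pixel value of the light receiving pixel."""
    accumulator = 0  # storage means such as a memory
    for i in range(exposures_per_frame):
        result = expose_once()        # one short-exposure result
        if i < addition_number:       # add only up to the addition number
            accumulator += result
    return accumulator
```

With a smaller addition number (set when the photometric pixels indicate a bright block), fewer exposure results are summed, so the pixel value stays below the level corresponding to the maximum charge amount.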
- FIG. 14 is a flowchart for explaining another example of the exposure control process for suppressing the saturation of the light receiving pixels by the second pixel saturation suppression method performed in step S13 of FIG.
- Hereinafter, taking a certain pixel block as a block of interest, the exposure control of the block of interest will be described.
- In step S90, the same process as in step S70 of FIG. 13 is performed, and the process proceeds to step S91.
- In steps S91, S93, S95, S97, S99, S101, and S103, determination processes similar to those in steps S51, S53, S55, S57, S59, S61, and S63 of FIG. 11 are respectively performed.
- If it is determined in step S91 that the photometric pixel 11 of the pixel type pg is saturated, the process proceeds to step S92.
- In step S92, as in step S52 of FIG. 11, the control unit 41 sets a time shorter than the current photometric exposure time as a new photometric exposure time, controls the photometric sensor 10 to perform exposure for the new photometric exposure time, and receives the photometric result of the photometric pixel 11 obtained by that exposure. Then, the process returns from step S92 to step S90, and the same processing is repeated thereafter.
- If it is determined in step S93 that the photometric pixel 11 of the pixel type pf is saturated, the process proceeds to step S94, where the control unit 41 sets the addition number associated with the pixel type pg as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S95 that the photometric pixel 11 of the pixel type pe is saturated, the process proceeds to step S96, where the control unit 41 sets the addition number associated with the pixel type pf as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S97 that the photometric pixel 11 of the pixel type pd is saturated, the process proceeds to step S98, where the control unit 41 sets the addition number associated with the pixel type pe as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S99 that the photometric pixel 11 of the pixel type pc is saturated, the process proceeds to step S100, where the control unit 41 sets the addition number associated with the pixel type pd as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S101 that the photometric pixel 11 of the pixel type pb is saturated, the process proceeds to step S102, where the control unit 41 sets the addition number associated with the pixel type pc as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S103 that the photometric pixel 11 of the pixel type pa is saturated, the process proceeds to step S104, where the control unit 41 sets the addition number associated with the pixel type pb as the number of exposure results of the light receiving pixels to be added, and the process returns.
- If it is determined in step S103 that the photometric pixel 11 of the pixel type pa is not saturated, the process proceeds to step S105.
- In step S105, as in step S65 of FIG. 11, the control unit 41 sets a time longer than the current photometric exposure time as a new photometric exposure time, controls the photometric sensor 10 to perform exposure for the new photometric exposure time, and receives the photometric result of the photometric pixel 11 obtained by that exposure. Then, the process returns from step S105 to step S90, and the same processing is repeated thereafter.
- In step S14 of FIG. 9, exposure for the light receiving exposure time is repeatedly performed within one frame for the light receiving pixels (ranging pixels 21 / imaging pixels 31). Then, among the exposure results of the light receiving pixels at each exposure, as many exposure results as the addition number are added, and the resulting added value is obtained as the pixel value of the light receiving pixels.
- FIG. 15 is a side sectional view showing an outline of a first configuration example of a digital camera to which the present technology is applied.
- The distance measurement sensor 20, which is a ToF sensor, can perform high-speed operation capable of exposure for a short time on the nanosecond level, so even when the light receiving surface of the distance measurement sensor 20 is exposed collectively (the entire light receiving surface is exposed at once), the exposure time can be controlled for each pixel block.
- On the other hand, among CMOS (Complementary Metal Oxide Semiconductor) image sensors, there is a first image sensor in which a light receiving surface composed of pixels and a driver circuit (and other peripheral circuits) for driving the pixels are formed on the same substrate.
- In the first image sensor, a drive signal line for driving the pixels arranged in one horizontal row of the light receiving surface is wired for each row (of pixels), and on the substrate on which the light receiving surface is formed, the driver circuit is arranged, for example, at a position adjacent to the left or right of the light receiving surface.
- In the first image sensor, by driving the pixels arranged in one row from the driver circuit via the drive signal line, the signals of the pixels in the columns of that row are read out in parallel row by row.
- The time constant of the first image sensor is governed by the size of the light receiving surface (the area where the pixels are arranged). Therefore, it is difficult to operate the first image sensor at high speed like the distance measurement sensor 20, which is a ToF sensor, and to control the exposure time for each pixel block.
- In the first image sensor, since the driver circuit is arranged at a position adjacent to the light receiving surface, if the drive signal lines wired in each row are divided into short segments, it becomes necessary to provide, at the positions in the light receiving surface where the drive signal lines are divided, driver circuits (or buffers) for driving the pixels connected to the divided drive signal lines. As a result, the size of the light receiving surface, and hence of the image sensor, increases.
- Among CMOS image sensors, there is also a second image sensor having a laminated structure in which a pixel substrate on which the light receiving surface is formed and a circuit board on which the driver circuit is formed are stacked, with the substrates electrically connected to each other.
- In the second image sensor, the connection between the divided drive signal lines of the pixel substrate and the driver circuit of the circuit board can be made, at the positions where the drive signal lines are divided, by short wiring in the direction perpendicular to the light receiving surface.
- Therefore, in the second image sensor, the time constant can be reduced and high-speed operation can be performed without providing, as in the first image sensor, driver circuits that drive the pixels connected to the divided drive signal lines at the positions in the light receiving surface where the drive signal lines are divided.
- Accordingly, in the second image sensor, similarly to the distance measurement sensor 20 which is a ToF sensor, exposure can be performed for a short time, and the exposure time can be controlled for each pixel block.
- As the image sensor 30, the second image sensor as described above can be adopted.
- By adopting the second image sensor as the image sensor 30, it becomes possible to control the exposure time for each pixel block of one frame, and, as described with reference to FIG. 8, the dynamic range of an image composed of the pixel values of the imaging pixels 31 of the image sensor 30 can be increased.
- The digital camera is, for example, a digital camera to which the light receiving device of FIG. is applied.
- the optical system 101 includes a lens, a diaphragm, and the like, and condenses light from the subject on the light receiving surface of the image sensor 30.
- the image sensor 30 is an image sensor that also serves as the photometric sensor 10, for example, as described in FIG. 5, and performs photometry and imaging by receiving light from the optical system 101.
- control unit 41 controls the photometric pixel 11 of each pixel block (FIG. 5) of the image sensor 30 to perform exposure for the photometric exposure time.
- the photometric pixel 11 receives light for the photometric exposure time and performs photometry under the control of the control unit 41.
- the control unit 41 acquires a photometric result obtained by the photometric pixel 11 performing photometry, and performs exposure control of the imaging pixel 31 of the pixel block according to the photometric result of each pixel block.
- the imaging pixels 31 of each pixel block of the image sensor 30 receive light with an exposure according to the exposure control of the control unit 41, perform imaging, and output pixel values corresponding to the amount of received light.
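The per-block exposure control described above can be sketched in code. This is a minimal illustration only, not the patented implementation: the inverse-proportional rule, the saturation level, and the block layout are all assumptions made for the example.

```python
def choose_exposure_times(photometry, base_exposure, saturation_level):
    """For each pixel block, pick a shorter exposure time for brighter blocks.

    photometry: 2D list of per-block photometric values (mean light amount).
    Returns a 2D list of exposure times, one per block.
    """
    times = []
    for row in photometry:
        times.append([
            # Bright blocks get their exposure scaled down so they do not
            # saturate; dark blocks keep the full base exposure.
            base_exposure * min(1.0, saturation_level / value) if value > 0
            else base_exposure
            for value in row
        ])
    return times

# Example: one bright block (800) against an assumed saturation level of 200.
times = choose_exposure_times([[100, 800], [200, 50]], base_exposure=1.0,
                              saturation_level=200)
```

Brighter blocks receive proportionally shorter exposures, which is what suppresses saturation while dark blocks keep the full exposure time.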
- in the exposure control of the control unit 41, it is necessary to specify the photometric result of each pixel block from the photometric result of the photometric sensor 10, because the exposure control of the imaging pixels 31 of a pixel block is performed according to the photometric result of that pixel block.
- as methods of specifying the photometric result of a pixel block, there are a method in which the photometric sensor 10 receives the same light as the light received by the image sensor 30, that is, the photometric sensor 10 and the image sensor 30 receive the same light, and a method of performing the alignment calibration described later when the photometric sensor 10 receives light that is (slightly) different from the light received by the image sensor 30.
- here, both the photometric sensor 10 and the image sensor 30 receive the same light that has passed through the optical system 101.
- hereinafter, an image whose pixel values are the imaging results of the imaging pixels 31 of the image sensor 30 is also referred to as a captured image, and an image whose pixel values are the photometric results of the photometric pixels 11 of the photometric sensor 10 is also referred to as a photometric image.
- for the captured image, the pixel blocks described with reference to FIG. 5 are introduced.
- when the photometric sensor 10 and the image sensor 30 receive the same light that has passed through the optical system 101, the photometric result of a target block of interest among the pixel blocks of the captured image can be specified as the photometric result given by the pixel values of the corresponding area of the photometric image (when the captured image and the photometric image have the same size, the area that coincides with the block of interest).
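Specifying the photometric result of a target block then amounts to reading out the corresponding area of the photometric image. The following sketch assumes the photometric image is a scaled (or identically sized) version of the captured image and that the block's photometric result is the mean of that area; both are illustrative assumptions, not details taken from the patent.

```python
def block_photometry(photometric_image, block_rect, scale_x, scale_y):
    """Average the photometric pixel values inside the area of the
    photometric image that corresponds to a captured-image block.

    block_rect: (x, y, width, height) of the block in captured-image pixels.
    scale_x/scale_y: photometric-image size divided by captured-image size.
    """
    x, y, w, h = block_rect
    # Map the block into photometric-image coordinates
    # (the area coincides with the block when the scale is 1.0).
    px, py = int(x * scale_x), int(y * scale_y)
    pw, ph = max(1, int(w * scale_x)), max(1, int(h * scale_y))
    values = [photometric_image[j][i]
              for j in range(py, py + ph)
              for i in range(px, px + pw)]
    return sum(values) / len(values)

# With equal sizes (scale 1.0) the corresponding area coincides with the block.
image = [[10, 20], [30, 40]]
result = block_photometry(image, (0, 0, 2, 2), 1.0, 1.0)  # averages all four
```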
- cases in which the photometric sensor 10 and the image sensor 30 receive the same light include not only cases in which they receive the same light through the same optical system, such as the optical system 101, but also cases in which the incident light is received after being reflected by an optical component such as a mirror or a prism, refracted, or split, and cases in which the light received by the photometric sensor 10 and the image sensor 30 passes through optical components unique to each of them.
- FIG. 16 is a cross-sectional view schematically showing a second configuration example of a digital camera to which the present technology is applied.
- the digital camera is, for example, a single-lens reflex digital camera to which the light receiving device of FIG. 3 is applied, and includes an image sensor 30, an optical system 111, a movable mirror 112, and an EVF (Electric View Finder) sensor 113.
- the optical system 111 includes a lens, a diaphragm, and the like (not shown), and collects light L from the subject.
- the movable mirror 112 is a flat plate-shaped mirror; when the image sensor 30 is not capturing an image, it assumes an inclined (right-up) posture as shown in FIG. 16A and reflects the light that has passed through the optical system 111 toward the top of the digital camera.
- when the image sensor 30 captures an image, the movable mirror 112 assumes a horizontal posture and allows the light that has passed through the optical system 111 to enter the image sensor 30, as shown in FIG. 16B.
- the EVF sensor 113 receives the light reflected by the movable mirror 112 to capture an EVF image to be displayed by an EVF (not shown).
- in FIG. 16, the EVF sensor 113 is an image sensor that also serves as the photometric sensor 10; it receives the light reflected by the movable mirror 112 to capture an EVF image and also performs photometry.
- the image sensor 30 is an image sensor that does not also serve as the photometric sensor 10, and captures an image by receiving light from the optical system 111 when the release button is operated.
- when the image sensor 30 is not capturing an image, the movable mirror 112 is moved upward as shown in FIG. 16A, and the light that has passed through the optical system 111 is reflected by the movable mirror 112 and enters the EVF sensor 113, which also serves as the photometric sensor 10.
- the control unit 41 controls the EVF sensor 113, which also serves as the photometric sensor 10, so that it is exposed for the photometric exposure time (receives the light reflected by the movable mirror 112), and acquires the photometric result of the photometry performed by the exposure.
- the control unit 41 then performs exposure control of the imaging pixels 31 of each pixel block according to the latest photometric result of that pixel block.
- when the release button is operated, the movable mirror 112 assumes a horizontal posture, and the light that has passed through the optical system 111 enters the image sensor 30.
- the imaging pixels 31 of each pixel block of the image sensor 30 receive the light that has passed through the optical system 111 with an exposure according to the exposure control of the control unit 41, perform imaging, and output pixel values corresponding to the amount of received light.
- FIG. 17 is a cross-sectional view showing an outline of a third configuration example of the digital camera to which the present technology is applied.
- the digital camera includes an image sensor 30, an optical system 111, a movable mirror 112, an EVF sensor 113, and an EVF optical system 121.
- the digital camera of FIG. 17 is common to the case of FIG. 16 in that it includes the image sensor 30, the optical system 111, the movable mirror 112, and the EVF sensor 113, and differs from the case of FIG. 16 in that the EVF optical system 121 is newly provided.
- the EVF optical system 121 is an optical component unique to the EVF sensor 113, such as an optical filter or a lens, and is provided on the light incident side of the EVF sensor 113.
- accordingly, the EVF sensor 113 receives light that has passed through the EVF optical system 121.
- FIG. 18 is a cross-sectional view showing an outline of a fourth configuration example of the digital camera to which the present technology is applied.
- the digital camera includes a photometric sensor 10, an image sensor 30, an optical system 111, a movable half mirror 131, a movable mirror 132, a pentaprism 133, and a viewfinder 134.
- the digital camera of FIG. 18 is common to the case of FIG. 16 in that it includes the image sensor 30 and the optical system 111.
- the digital camera of FIG. 18 differs from the case of FIG. 16 in that it does not have the movable mirror 112 or the EVF sensor 113 and instead has the movable half mirror 131, the movable mirror 132, the pentaprism 133, and the viewfinder 134.
- the movable half mirror 131 is a flat half mirror; when the image sensor 30 is not capturing an image, it assumes a right-up posture as shown in FIG. 18A, reflects part of the light that has passed through the optical system 111 toward the top of the digital camera, and allows the remaining light to pass.
- when the image sensor 30 captures an image, the movable half mirror 131, together with the movable mirror 132, assumes a horizontal posture as shown in FIG. 18B so that the light that has passed through the optical system 111 enters the image sensor 30.
- that is, the movable half mirror 131 is in the horizontal posture as shown in FIG. 18B when a release button (not shown) is operated, and in the right-up posture as shown in FIG. 18A when the release button is not operated.
- the movable mirror 132 is a flat mirror; when the image sensor 30 is not capturing an image, it reflects the light that has passed through the movable half mirror 131 toward the lower part of the digital camera, as shown in FIG. 18A, so that the light enters the photometric sensor 10.
- when the image sensor 30 captures an image, the movable mirror 132, together with the movable half mirror 131, assumes a horizontal posture as shown in FIG. 18B so that the light that has passed through the optical system 111 enters the image sensor 30.
- the pentaprism 133 appropriately reflects the light reflected by the movable half mirror 131 and emits it to the viewfinder 134.
- the light from the pentaprism 133 enters the viewfinder 134; accordingly, the user can check the image to be captured by the image sensor 30 by looking through the viewfinder 134.
- in FIG. 18, the image sensor 30 is an image sensor that does not also serve as the photometric sensor 10, as in the case of FIG. 16.
- part of the light that has passed through the optical system 111 passes through the movable half mirror 131, and the remaining light is reflected by the movable half mirror 131.
- the light reflected by the movable half mirror 131 is further reflected by the pentaprism 133 and enters the viewfinder 134.
- the light that has passed through the movable half mirror 131 is reflected by the movable mirror 132 and enters the photometric sensor 10.
- the control unit 41 controls the photometric sensor 10 so that it is exposed for the photometric exposure time (receives the light reflected by the movable mirror 132), and acquires the photometric result of the photometry performed by the exposure.
- the control unit 41 then performs exposure control of the imaging pixels 31 of each pixel block according to the latest photometric result of that pixel block.
- when the release button is operated, the movable half mirror 131 and the movable mirror 132 assume a horizontal posture, and the light that has passed through the optical system 111 enters the image sensor 30.
- the imaging pixels 31 of each pixel block of the image sensor 30 receive the light that has passed through the optical system 111 with an exposure according to the exposure control of the control unit 41, perform imaging, and output pixel values corresponding to the amount of received light.
- as described above, cases in which the photometric sensor 10 and the image sensor 30 receive the same light include, for example, the case of the digital camera of FIG. 16, in which the photometric sensor 10 (the EVF sensor 113 that also serves as it) receives the light reflected by the movable mirror 112; the case of FIG. 18, in which the photometric sensor 10 receives the light that has passed through the movable half mirror 131 and been reflected by the movable mirror 132; and the case of FIG. 17, in which the light received by the photometric sensor 10 (the EVF sensor 113 that also serves as it) passes through the EVF optical system 121 as an optical component unique to it.
- as described above, the image sensor 30 can also serve as the photometric sensor 10, and the EVF sensor 113 can likewise also serve as the photometric sensor 10; that is, the EVF sensor 113 can also be used as the photometric sensor 10.
- FIG. 19 is a plan view showing an outline of a fifth configuration example of the digital camera to which the present technology is applied.
- the digital camera is a digital camera to which the light receiving device of FIG. 3 is applied, for example, and includes a photometric sensor 10, an image sensor 30, a photometric optical system 151, and an imaging optical system 152.
- the photometric optical system 151 includes a lens, a diaphragm, and the like (not shown), and condenses light from the subject onto the photometric sensor 10.
- the imaging optical system 152 includes a lens, a diaphragm, and the like (not shown), and condenses light from the subject onto the image sensor 30.
- the photometric optical system 151 and the imaging optical system 152 are physically different optical systems.
- the photometric sensor 10 receives light that has passed through the photometric optical system 151 and performs photometry
- the image sensor 30 receives light that has passed through the imaging optical system 152 and performs imaging.
- the control unit 41 controls the photometric sensor 10 so that it is exposed for the photometric exposure time (receives the light that has passed through the photometric optical system 151), and acquires the photometric result of the photometry performed by the exposure.
- the control unit 41 performs exposure control of the imaging pixels 31 of each pixel block according to the photometric result of that pixel block.
- the imaging pixels 31 of each pixel block of the image sensor 30 receive the light that has passed through the imaging optical system 152 with an exposure according to the exposure control of the control unit 41, perform imaging, and output pixel values corresponding to the amount of received light.
- in FIG. 19, the photometric sensor 10 receives light that has passed through the photometric optical system 151, and the image sensor 30 receives light that has passed through the imaging optical system 152, which is physically different from the photometric optical system 151.
- since the photometric optical system 151 and the imaging optical system 152 are physically different, they cannot be arranged at the same position and are therefore arranged at mutually shifted positions.
- as a result, the light received by the photometric sensor 10 and the light received by the image sensor 30 are (slightly) different, so the photometric result given by the pixel values of the area of the photometric image corresponding to a target block (target pixel block) is not necessarily the photometric result of that block of interest.
- therefore, alignment calibration can be performed in which the photometric image obtained by the photometric sensor 10 and the captured image obtained by the image sensor 30 are associated so that the positions at which the same subject appears correspond to each other.
- FIG. 20 is a diagram for explaining the alignment calibration.
- FIG. 20 shows an example of a photometric image obtained by the photometric sensor 10 and a captured image obtained by the image sensor 30.
- in this case, the photometric result given by the pixel values of an area of the photometric image that is shifted from the area corresponding to the target block, by an amount depending on the difference in received light, becomes the photometric result of the block of interest.
- in FIG. 20, the position P of the photometric image at which the subject X appears shifts from, and does not coincide with, the position Q' of the photometric image corresponding to the position Q of the captured image at which the subject X appears.
- that is, when the photometric sensor 10 and the image sensor 30 receive different light and there is a horizontal shift of H pixels between the photometric image and the captured image, the subject X does not appear at the position of the photometric image (hereinafter also referred to as the corresponding position) Q' corresponding to the position Q of the captured image at which the subject X appears; instead, the subject X appears at a position P displaced by H pixels in the horizontal direction from the corresponding position Q'.
- Alignment calibration includes software calibration and mechanical calibration.
- in software calibration, one or both of the captured image and the photometric image are processed so that the positions at which the same subject appears in the photometric image and the captured image coincide; information on the photometric optical system 151 and the imaging optical system 152 (lens information and the like) is used as necessary.
- in mechanical calibration, the positional relationship between the photometric optical system 151 and the photometric sensor 10, the positional relationship between the imaging optical system 152 and the image sensor 30, and the postures of the photometric sensor 10, the image sensor 30, the photometric optical system 151, and the imaging optical system 152 are adjusted so that the positions at which the same subject appears in the photometric image and the captured image coincide.
- as the alignment calibration, one or both of software calibration and mechanical calibration can be performed.
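As an illustration of the software calibration, the photometric image can be shifted so that the same subject lands at the position corresponding to its position in the captured image. A purely horizontal, constant shift H is assumed here for simplicity; an actual calibration may use lens information and a more general mapping.

```python
def shift_photometric_image(photometric_image, h_pixels, fill=0):
    """Shift each row left by h_pixels so that position P aligns with the
    corresponding position Q'; vacated pixels are padded with `fill`."""
    shifted = []
    for row in photometric_image:
        shifted.append(row[h_pixels:] + [fill] * h_pixels)
    return shifted

# The subject appears at column 3 of the photometric image but corresponds to
# column 1 of the captured image: a horizontal shift of H = 2 corrects it.
row = [0, 0, 0, 9, 0]
aligned = shift_photometric_image([row], 2)
```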
- by performing the exposure control of the imaging pixels 31 according to the photometric results in this way, saturation of the imaging pixels 31 can be suppressed, and as a result, as described with reference to FIG. 8, the dynamic range of the captured image formed by the pixel values of the imaging pixels 31 of the image sensor 30 can be increased.
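The reason saturation suppression widens the dynamic range can be shown numerically: several short exposures, each staying below the pixel's clip level, can be added together to recover a signal that a single long exposure would have clipped. The clip level and the noise-free linear pixel model below are simplifying assumptions for illustration only.

```python
def expose(light_rate, time, clip_level):
    """Signal collected by a pixel in one exposure, clipped at saturation."""
    return min(light_rate * time, clip_level)

def added_exposures(light_rate, short_time, additions, clip_level):
    """Sum of `additions` short exposures (an addition-number control)."""
    return sum(expose(light_rate, short_time, clip_level)
               for _ in range(additions))

clip = 100
# One long exposure of a bright pixel saturates at the clip level...
long_result = expose(light_rate=50, time=4, clip_level=clip)
# ...while four short exposures added together keep the full signal.
added_result = added_exposures(light_rate=50, short_time=1,
                               additions=4, clip_level=clip)
```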
- the light receiving device of FIGS. 1 to 3 can be applied not only to a digital camera but also to any electronic apparatus having an imaging function, any electronic apparatus having a distance measuring function, and any other electronic apparatus that receives light and performs processing.
- the series of processes of the control unit 41 described above can be performed by hardware or by software. When the series of processes is performed by software, a program constituting the software is installed in a computer such as a microcomputer.
- FIG. 21 is a block diagram illustrating a configuration example of an embodiment of a computer in which a program for executing the above-described series of processes is installed.
- the program can be recorded in advance in a hard disk 205 or ROM 203 as a recording medium built in the computer.
- the program can be stored (recorded) in the removable recording medium 211.
- a removable recording medium 211 can be provided as so-called package software.
- examples of the removable recording medium 211 include a flexible disk, a CD-ROM (Compact Disc Read Only Memory), a MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a magnetic disc, and a semiconductor memory.
- the program can be installed on the computer from the removable recording medium 211 as described above, or downloaded to the computer via a communication network or a broadcast network and installed on the built-in hard disk 205. That is, the program can be transferred wirelessly from a download site to the computer via an artificial satellite for digital satellite broadcasting, or transferred to the computer by wire via a network such as a LAN (Local Area Network) or the Internet.
- the computer incorporates a CPU (Central Processing Unit) 202, and an input/output interface 210 is connected to the CPU 202 via a bus 201.
- when a command is input via the input/output interface 210, the CPU 202 executes a program stored in the ROM (Read Only Memory) 203 according to the command. Alternatively, the CPU 202 loads a program stored in the hard disk 205 into the RAM (Random Access Memory) 204 and executes it.
- the CPU 202 thereby performs the processing according to the above-described flowcharts or the processing performed by the configurations of the above-described block diagrams. Then, as necessary, the CPU 202 outputs the processing result from the output unit 206 via the input/output interface 210, transmits it from the communication unit 208, or records it on the hard disk 205, for example.
- the input unit 207 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 206 includes an LCD (Liquid Crystal Display), a speaker, and the like.
- the processing performed by the computer according to the program does not necessarily have to be performed in chronological order following the order described in the flowcharts. That is, the processing performed by the computer according to the program also includes processing executed in parallel or individually (for example, parallel processing or object-based processing).
- the program may be processed by one computer (processor), or may be processed in a distributed manner by a plurality of computers. Furthermore, the program may be transferred to a remote computer and executed there.
- in this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
- the present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
- each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
- when a plurality of processes are included in one step, the plurality of processes included in the one step can be executed by one apparatus or shared and executed by a plurality of apparatuses.
- as the waveform of the emitted light, a sine wave, a triangular wave, or the like can be employed in addition to a pulse.
- as the photometric pixels 11, a plurality of pixels having different aperture ratios can be employed, and a plurality of pixels having different amounts of incident light, such as a plurality of pixels having different color filter transmittances, can also be employed.
- note that the present technology can take the following configurations.
- <1> A light receiving device including: a photometric sensor that performs photometry by receiving light; another sensor in which a light receiving surface that receives light is divided into a plurality of blocks; and a control unit that performs, for each of the blocks, exposure control for controlling exposure of the other sensor according to a photometric result of the photometric sensor.
- <2> The light receiving device according to <1>, including, as the other sensor, one or both of a distance measuring sensor that performs distance measurement by receiving light and an image sensor that performs imaging by receiving light.
- <3> The light receiving device according to <1> or <2>, wherein the other sensor also serves as the photometric sensor.
- <4> The light receiving device according to <3>, wherein pixels of the other sensor that receive light and photometric pixels of the photometric sensor that receive light are arranged on one substrate.
- <5> The light receiving device according to <3>, wherein the other sensor is configured by laminating a substrate on which the pixels of the other sensor that receive light are arranged and a substrate on which the photometric pixels of the photometric sensor that receive light are arranged.
- <6> The light receiving device according to any one of <1> to <5>, wherein the photometric sensor has, as photometric pixels that receive light, a plurality of pixels having different aperture ratios.
- <7> The light receiving device according to any one of <1> to <6>, wherein, as the exposure control, the control unit sets an exposure time or an addition number by which exposure results of exposure for a predetermined time are added.
- <8> The light receiving device according to any one of <1> to <7>, wherein the photometric sensor and the other sensor receive light that has passed through a predetermined optical system.
- <9> The light receiving device according to <1>, wherein, when the photometric sensor receives light that has passed through a predetermined optical system and the other sensor receives light that has passed through an optical system different from the predetermined optical system, calibration is performed to associate positions at which the same subject appears between a photometric image obtained by light reception by the photometric sensor and an image obtained by light reception by the other sensor.
- <10> The light receiving device according to <1>, including, as the other sensor, a distance measuring sensor that performs distance measurement by receiving light, and further including a light emitting unit that emits an electromagnetic wave to become the light received by the distance measuring sensor.
- <11> A control method including a step of performing, for each of a plurality of blocks into which a light receiving surface of another sensor that receives light is divided, exposure control for controlling exposure of the other sensor according to a photometric result of a photometric sensor that performs photometry by receiving light.
- <12> An electronic apparatus including: an optical system that condenses light; and a light receiving device that receives light, wherein the light receiving device has: a photometric sensor that performs photometry by receiving light that has passed through the optical system; another sensor in which a light receiving surface that receives light that has passed through the optical system or an optical system different from the optical system is divided into a plurality of blocks; and a control unit that performs, for each of the blocks, exposure control for controlling exposure of the other sensor according to a photometric result of the photometric sensor.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Optics & Photonics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Studio Devices (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Measurement Of Optical Distance (AREA)
- Exposure Control For Cameras (AREA)
- Focusing (AREA)
- Automatic Focus Adjustment (AREA)
Abstract
Description
<1> A light receiving device including: a photometric sensor that performs photometry by receiving light; another sensor in which a light receiving surface that receives light is divided into a plurality of blocks; and a control unit that performs, for each of the blocks, exposure control for controlling exposure of the other sensor according to a photometric result of the photometric sensor.
<2> The light receiving device according to <1>, including, as the other sensor, one or both of a distance measuring sensor that performs distance measurement by receiving light and an image sensor that performs imaging by receiving light.
<3> The light receiving device according to <1> or <2>, wherein the other sensor also serves as the photometric sensor.
<4> The light receiving device according to <3>, wherein pixels of the other sensor that receive light and photometric pixels of the photometric sensor that receive light are arranged on one substrate.
<5> The light receiving device according to <3>, wherein the other sensor is configured by laminating a substrate on which the pixels of the other sensor that receive light are arranged and a substrate on which the photometric pixels of the photometric sensor that receive light are arranged.
<6> The light receiving device according to any one of <1> to <5>, wherein the photometric sensor has, as photometric pixels that receive light, a plurality of pixels having different aperture ratios.
<7> The light receiving device according to any one of <1> to <6>, wherein, as the exposure control, the control unit sets an exposure time or an addition number by which exposure results of exposure for a predetermined time are added.
<8> The light receiving device according to any one of <1> to <7>, wherein the photometric sensor and the other sensor receive light that has passed through a predetermined optical system.
<9> The light receiving device according to <1>, wherein, when the photometric sensor receives light that has passed through a predetermined optical system and the other sensor receives light that has passed through an optical system different from the predetermined optical system, calibration is performed to associate positions at which the same subject appears between a photometric image obtained by light reception by the photometric sensor and an image obtained by light reception by the other sensor.
<10> The light receiving device according to <1>, including, as the other sensor, a distance measuring sensor that performs distance measurement by receiving light, and further including a light emitting unit that emits an electromagnetic wave to become the light received by the distance measuring sensor.
<11> A control method including a step of performing, for each of a plurality of blocks into which a light receiving surface of another sensor that receives light is divided, exposure control for controlling exposure of the other sensor according to a photometric result of a photometric sensor that performs photometry by receiving light.
<12> An electronic apparatus including: an optical system that condenses light; and a light receiving device that receives light, wherein the light receiving device has: a photometric sensor that performs photometry by receiving light that has passed through the optical system; another sensor in which a light receiving surface that receives light that has passed through the optical system or an optical system different from the optical system is divided into a plurality of blocks; and a control unit that performs, for each of the blocks, exposure control for controlling exposure of the other sensor according to a photometric result of the photometric sensor.
Claims (12)
- 1. A light receiving device comprising: a photometric sensor that performs photometry by receiving light; another sensor in which a light receiving surface that receives light is divided into a plurality of blocks; and a control unit that performs, for each of the blocks, exposure control for controlling exposure of the other sensor according to a photometric result of the photometric sensor.
- 2. The light receiving device according to claim 1, comprising, as the other sensor, one or both of a distance measuring sensor that performs distance measurement by receiving light and an image sensor that performs imaging by receiving light.
- 3. The light receiving device according to claim 1, wherein the other sensor also serves as the photometric sensor.
- 4. The light receiving device according to claim 3, wherein pixels of the other sensor that receive light and photometric pixels of the photometric sensor that receive light are arranged on one substrate.
- 5. The light receiving device according to claim 3, wherein the other sensor is configured by laminating a substrate on which the pixels of the other sensor that receive light are arranged and a substrate on which the photometric pixels of the photometric sensor that receive light are arranged.
- 6. The light receiving device according to claim 1, wherein the photometric sensor has, as photometric pixels that receive light, a plurality of pixels having different aperture ratios.
- 7. The light receiving device according to claim 1, wherein, as the exposure control, the control unit sets an exposure time or an addition number by which exposure results of exposure for a predetermined time are added.
- 8. The light receiving device according to claim 1, wherein the photometric sensor and the other sensor receive light that has passed through a predetermined optical system.
- 9. The light receiving device according to claim 1, wherein, when the photometric sensor receives light that has passed through a predetermined optical system and the other sensor receives light that has passed through an optical system different from the predetermined optical system, calibration is performed to associate positions at which the same subject appears between a photometric image obtained by light reception by the photometric sensor and an image obtained by light reception by the other sensor.
- 10. The light receiving device according to claim 1, comprising, as the other sensor, a distance measuring sensor that performs distance measurement by receiving light, and further comprising a light emitting unit that emits an electromagnetic wave to become the light received by the distance measuring sensor.
- 11. A control method comprising a step of performing, for each of a plurality of blocks into which a light receiving surface of another sensor that receives light is divided, exposure control for controlling exposure of the other sensor according to a photometric result of a photometric sensor that performs photometry by receiving light.
- 12. An electronic apparatus comprising: an optical system that condenses light; and a light receiving device that receives light, wherein the light receiving device includes: a photometric sensor that performs photometry by receiving light that has passed through the optical system; another sensor in which a light receiving surface that receives light that has passed through the optical system or an optical system different from the optical system is divided into a plurality of blocks; and a control unit that performs, for each of the blocks, exposure control for controlling exposure of the other sensor according to a photometric result of the photometric sensor.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/578,269 US11134209B2 (en) | 2016-01-22 | 2017-01-11 | Light receiving device, control method, and electronic apparatus |
CN201780001798.3A CN107615010B (zh) | 2016-01-22 | 2017-01-11 | 光接收器件、控制方法和电子设备 |
JP2017562525A JP6780662B2 (ja) | 2016-01-22 | 2017-01-11 | 受光装置、制御方法、及び、電子機器 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-010577 | 2016-01-22 | ||
JP2016010577 | 2016-01-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017126377A1 true WO2017126377A1 (ja) | 2017-07-27 |
Family
ID=59361645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/000568 WO2017126377A1 (ja) | 2016-01-22 | 2017-01-11 | 受光装置、制御方法、及び、電子機器 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11134209B2 (ja) |
JP (1) | JP6780662B2 (ja) |
CN (1) | CN107615010B (ja) |
WO (1) | WO2017126377A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020183533A1 (ja) * | 2019-03-08 | 2020-09-17 | 株式会社ブルックマンテクノロジ | 距離画像センサ及び距離画像撮像装置 |
WO2022196139A1 (ja) * | 2021-03-15 | 2022-09-22 | ソニーセミコンダクタソリューションズ株式会社 | 撮像装置および撮像システム |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3614174B1 (en) | 2018-08-21 | 2021-06-23 | Omron Corporation | Distance measuring device and distance measuring method |
US10957729B2 (en) * | 2019-04-15 | 2021-03-23 | Mediatek Inc. | Image sensor with embedded light-measuring pixels and method of automatic exposure control using the same |
US20230239590A1 (en) * | 2020-09-04 | 2023-07-27 | Qualcomm Incorporated | Sensitivity-biased pixels |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004048561A (ja) * | 2002-07-15 | 2004-02-12 | Fuji Photo Film Co Ltd | 撮像装置及び測光装置 |
JP2005173257A (ja) * | 2003-12-11 | 2005-06-30 | Canon Inc | 測距測光装置及び測距リモコン受信装置 |
JP2010154192A (ja) * | 2008-12-25 | 2010-07-08 | Sony Corp | 撮像装置 |
JP2014025967A (ja) * | 2012-07-24 | 2014-02-06 | Canon Inc | 撮像装置及びカメラシステム |
JP2015089033A (ja) * | 2013-10-31 | 2015-05-07 | リコーイメージング株式会社 | 撮像装置および撮像方法 |
JP2015128131A (ja) * | 2013-11-27 | 2015-07-09 | ソニー株式会社 | 固体撮像素子および電子機器 |
JP2016012101A (ja) * | 2014-06-30 | 2016-01-21 | リコーイメージング株式会社 | コントラストaf機能を備えたカメラ |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007035720A2 (en) * | 2005-09-20 | 2007-03-29 | Deltasphere, Inc. | Methods, systems, and computer program products for acquiring three-dimensional range information |
WO2008090607A1 (ja) * | 2007-01-24 | 2008-07-31 | Pioneer Corporation | エラー検出装置及び方法、並びにコンピュータプログラム |
WO2009139154A1 (ja) | 2008-05-14 | 2009-11-19 | パナソニック株式会社 | 撮像装置及び撮像方法 |
JP5211008B2 (ja) * | 2009-10-07 | 2013-06-12 | 本田技研工業株式会社 | 光電変換素子、受光装置、受光システム及び測距装置 |
EP2477043A1 (en) | 2011-01-12 | 2012-07-18 | Sony Corporation | 3D time-of-flight camera and method |
DE102013208802A1 (de) * | 2012-05-30 | 2013-12-05 | Pmdtechnologies Gmbh | Lichtlaufzeitsensor mit spektralen Filtern |
WO2014171051A1 (ja) * | 2013-04-15 | 2014-10-23 | パナソニック株式会社 | 距離測定装置、及び、距離測定方法 |
KR102105284B1 (ko) * | 2013-06-21 | 2020-04-28 | 삼성전자 주식회사 | 이미지 센서, 이의 제조 방법, 및 상기 이미지 센서를 포함하는 이미지 처리 장치 |
CN104427254B (zh) * | 2013-09-10 | 2019-01-15 | 联想(北京)有限公司 | 感光控制方法及感光控制装置 |
US10136079B2 (en) | 2013-10-31 | 2018-11-20 | Ricoh Imaging Company, Ltd. | Method and apparatus for imaging an object |
KR102488709B1 (ko) * | 2015-09-30 | 2023-01-13 | 가부시키가이샤 니콘 | 촬상 소자 및 촬상 장치 |
DE102015225797B3 (de) * | 2015-12-17 | 2017-05-04 | Robert Bosch Gmbh | Optischer Detektor |
JP6900227B2 (ja) * | 2017-04-10 | 2021-07-07 | キヤノン株式会社 | 撮像装置及び撮像装置の駆動方法 |
-
2017
- 2017-01-11 US US15/578,269 patent/US11134209B2/en active Active
- 2017-01-11 CN CN201780001798.3A patent/CN107615010B/zh active Active
- 2017-01-11 WO PCT/JP2017/000568 patent/WO2017126377A1/ja active Application Filing
- 2017-01-11 JP JP2017562525A patent/JP6780662B2/ja active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004048561A (ja) * | 2002-07-15 | 2004-02-12 | Fuji Photo Film Co Ltd | 撮像装置及び測光装置 |
JP2005173257A (ja) * | 2003-12-11 | 2005-06-30 | Canon Inc | 測距測光装置及び測距リモコン受信装置 |
JP2010154192A (ja) * | 2008-12-25 | 2010-07-08 | Sony Corp | 撮像装置 |
JP2014025967A (ja) * | 2012-07-24 | 2014-02-06 | Canon Inc | 撮像装置及びカメラシステム |
JP2015089033A (ja) * | 2013-10-31 | 2015-05-07 | リコーイメージング株式会社 | 撮像装置および撮像方法 |
JP2015128131A (ja) * | 2013-11-27 | 2015-07-09 | ソニー株式会社 | 固体撮像素子および電子機器 |
JP2016012101A (ja) * | 2014-06-30 | 2016-01-21 | リコーイメージング株式会社 | コントラストaf機能を備えたカメラ |
Non-Patent Citations (2)
Title |
---|
ANONYMOUS: "EOS-1D X Catalog", CANON INC., 2017, Retrieved from the Internet <URL:http://cweb.canon.jp/pdf-catatog/eos/pdf/eos-ldx-1402.pdf> [retrieved on 20170322] * |
WIKIPEDIA, 26 December 2015 (2015-12-26), XP055598771, Retrieved from the Internet <URL:https://ja.wikipedia.org/w/index.php?title=%E3%83%87%E3%82%B8%E3%82%BF%E3%83%AB%E3%82%AB%E3%83%A1%E3%83%A9&oldid=58012137> [retrieved on 20170322] * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020183533A1 (ja) * | 2019-03-08 | 2020-09-17 | Brookman Technology, Inc. | Distance image sensor and distance image capturing device |
JPWO2020183533A1 (ja) * | 2019-03-08 | 2021-11-25 | Brookman Technology, Inc. | Distance image capturing device |
JP7114132B2 (ja) | 2019-03-08 | 2022-08-08 | Brookman Technology, Inc. | Distance image capturing device |
WO2022196139A1 (ja) * | 2021-03-15 | 2022-09-22 | Sony Semiconductor Solutions Corporation | Imaging device and imaging system |
Also Published As
Publication number | Publication date |
---|---|
JPWO2017126377A1 (ja) | 2018-11-15 |
JP6780662B2 (ja) | 2020-11-04 |
US20180313938A1 (en) | 2018-11-01 |
US11134209B2 (en) | 2021-09-28 |
CN107615010B (zh) | 2021-11-16 |
CN107615010A (zh) | 2018-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017126377A1 (ja) | Light receiving device, control method, and electronic apparatus | |
US10419664B2 (en) | Image sensors with phase detection pixels and a variable aperture | |
US9267797B2 (en) | Range-finding device and imaging apparatus | |
KR101334219B1 (ko) | 3차원 적층구조의 이미지센서 | |
US10652513B2 (en) | Display device, display system and three-dimension display method | |
JP2008263352A (ja) | Imaging element, focus detection device, and imaging apparatus |
CN111436209A (zh) | Optical sensing device and terminal |
US10110811B2 (en) | Imaging module and imaging device |
JP6716902B2 (ja) | Electronic apparatus |
CN109155323A (zh) | Imaging element and imaging device |
WO2019078335A1 (ja) | Imaging device and method, and image processing device and method |
JPWO2017126242A1 (ja) | Imaging device and image data generation method |
WO2022124059A1 (ja) | Distance measuring device |
EP3163369B1 (en) | Auto-focus control in a camera to prevent oscillation | |
JP5434816B2 (ja) | Distance measuring device and imaging device |
KR20220140369A (ko) | Image sensing device and operating method thereof |
JP6872028B2 (ja) | Imaging control device, imaging device, imaging control method, and imaging control program |
CN111818267A (zh) | Image sensor, optical module, focusing method, and electronic device |
US10237469B2 (en) | Information processing apparatus, information processing method, and storage medium | |
JP2016021073A (ja) | Focus detection device |
JP2008032602A (ja) | Surface defect inspection apparatus, surface defect inspection method, and circuit board manufacturing method using the inspection method |
WO2022123974A1 (ja) | Distance measuring device |
US20100245654A1 (en) | Image pickup device, image reproducing device, and image pickup method |
JP2010080577A (ja) | Semiconductor device |
WO2022080467A1 (ja) | Imaging element and imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: The EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 17741257 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017562525 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15578269 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | EP: PCT application non-entry in European phase |
Ref document number: 17741257 Country of ref document: EP Kind code of ref document: A1 |