WO2022196288A1 - Solid-state imaging device, imaging device, processing method for solid-state imaging device, processing program for solid-state imaging device, processing method for imaging device, and processing program for imaging device - Google Patents
- Publication number
- WO2022196288A1 (PCT/JP2022/007836)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal level
- photoelectric conversion
- imaging device
- conversion unit
- noise
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/672—Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to fixed-pattern noise, e.g. non-uniformity of response, for non-uniformity detection or correction between adjacent sensors or output registers for reading a single image
- H04N25/671—Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to fixed-pattern noise, e.g. non-uniformity of response, for non-uniformity detection or correction
- H04N25/772—Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components, comprising A/D, V/T, V/F, I/T or I/F converters
Definitions
- the present disclosure relates to a solid-state imaging device, an imaging device, a processing method for the solid-state imaging device, a processing program for the solid-state imaging device, a processing method for the imaging device, and a processing program for the imaging device.
- Various techniques have been proposed to improve the image quality of captured images (see, for example, Patent Document 1).
- WDR: Wide Dynamic Range
- SNR: Signal-to-Noise Ratio
- the present disclosure provides a solid-state imaging device capable of suppressing an SNR drop while ensuring WDR, an imaging device, a processing method for the solid-state imaging device, a processing program for the solid-state imaging device, a processing method for the imaging device, and a processing program for the imaging device.
- a solid-state imaging device includes: a photoelectric conversion unit that generates electric charge according to the amount of received light; a capacitor provided so as to share and store, together with the photoelectric conversion unit, the charge generated by the photoelectric conversion unit; and a calculation unit that subtracts a noise signal level, corresponding to noise generated in the capacitor due to temperature, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- An imaging device includes: a solid-state imaging device having a plurality of pixels, each pixel including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so as to share and accumulate, together with the photoelectric conversion unit, the charge generated by the photoelectric conversion unit; and a calculation unit that subtracts a noise signal level, corresponding to noise generated in the capacitor due to temperature, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- a processing method for a solid-state imaging device, the solid-state imaging device including a plurality of pixels, each pixel including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so that the charge generated in the photoelectric conversion unit can be shared with the photoelectric conversion unit and accumulated, the processing method comprising subtracting a noise signal level, corresponding to noise generated in the capacitor due to temperature, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- a processing program for a solid-state imaging device, the solid-state imaging device including a plurality of pixels, each pixel including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so that the charge generated in the photoelectric conversion unit can be shared with the photoelectric conversion unit and accumulated, the processing program causing subtraction of a noise signal level, corresponding to noise generated in the capacitor, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- a processing method for an imaging device, the imaging device including a solid-state imaging device having a plurality of pixels, each pixel including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so that the charge generated by the photoelectric conversion unit can be shared with the photoelectric conversion unit and accumulated, the processing method comprising subtracting a noise signal level, corresponding to noise generated in the capacitor due to temperature, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- a processing program for an imaging device, the imaging device including a solid-state imaging device having a plurality of pixels, each pixel including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so that the charge generated by the photoelectric conversion unit can be shared with the photoelectric conversion unit and accumulated, the processing program causing subtraction of a noise signal level, corresponding to noise generated in the capacitor due to temperature, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
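In numerical terms, the correction recited in these claims reduces to a per-pixel subtraction. The following is only a rough sketch under assumptions (the function name, the clamping at zero, and the example values are hypothetical, not the patented implementation):

```python
# Hypothetical sketch of the described correction: subtract a
# temperature-dependent noise signal level (noise accumulated in the
# in-pixel capacitor) from the raw pixel signal level, clamping at zero
# so that the corrected level never goes negative.
def correct_pixel_level(pixel_level: float, noise_level: float) -> float:
    return max(pixel_level - noise_level, 0.0)

# Example: a raw level of 812 with an estimated capacitor noise level of 12
corrected = correct_pixel_level(812.0, 12.0)  # 800.0
```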
- A diagram showing an example of a sensing area.
- A block diagram showing a schematic configuration example of a solid-state imaging device.
- A diagram showing an example of a schematic configuration of a pixel.
- Diagrams schematically showing an SNR drop.
- Diagrams showing examples of functional blocks of a signal processing unit and a data storage unit.
- A diagram schematically showing suppression of an SNR drop.
- Diagrams showing examples of a schematic configuration of an imaging device.
- A flowchart showing an example of processing executed in a solid-state imaging device or an imaging device.
- A diagram showing an example of functional blocks of a signal processing unit (or column processing circuit) and a data storage unit.
- A diagram schematically showing an exposure period.
- A flowchart showing an example of processing executed in a solid-state imaging device.
- the disclosed technology is applied to a mobile device control system.
- an example of a mobile device control system, namely a vehicle control system, will be described with reference to FIG. 1.
- FIG. 1 is a diagram showing an example of a schematic configuration of a vehicle control system.
- the vehicle control system 11 is provided in the vehicle 1 and performs processing related to driving support and automatic driving of the vehicle 1 .
- the vehicle control system 11 includes a vehicle control ECU (Electronic Control Unit) 21, a communication unit 22, a map information accumulation unit 23, a GNSS (Global Navigation Satellite System) reception unit 24, an external recognition sensor 25, an in-vehicle sensor 26, a vehicle sensor 27, a recording unit 28, a driving support/automatic driving control unit 29, a driver monitoring system (DMS) 30, a human machine interface (HMI) 31, and a vehicle control unit 32.
- the vehicle control ECU 21, communication unit 22, map information storage unit 23, GNSS reception unit 24, external recognition sensor 25, in-vehicle sensor 26, vehicle sensor 27, recording unit 28, driving support/automatic driving control unit 29, DMS 30, HMI 31, and vehicle control unit 32 are communicably connected to each other via a communication network 41.
- the communication network 41 is composed of, for example, an in-vehicle communication network, a bus, or the like conforming to a digital two-way communication standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), FlexRay (registered trademark), or Ethernet (registered trademark).
- different parts of the communication network 41 may be selectively used depending on the type of data to be communicated; for example, CAN is applied to data related to vehicle control, and Ethernet is applied to large-capacity data.
- each part of the vehicle control system 11 may be directly connected, without going through the communication network 41, using wireless communication intended for relatively short-range communication, such as NFC (Near Field Communication) or Bluetooth (registered trademark).
- the vehicle control ECU 21 is composed of various processors such as a CPU (Central Processing Unit) and an MPU (Micro Processing Unit).
- the vehicle control ECU 21 controls the entire or part of the vehicle control system 11 .
- the communication unit 22 communicates with various devices inside and outside the vehicle, other vehicles, servers, base stations, etc., and transmits and receives various data. At this time, the communication unit 22 can perform communication using a plurality of communication methods.
- the communication unit 22 communicates with a server on an external network (hereinafter referred to as an external server) via a base station or an access point using a wireless communication system such as 5G (fifth-generation mobile communication system), LTE (Long Term Evolution), or DSRC (Dedicated Short Range Communications).
- the external network with which the communication unit 22 communicates is, for example, the Internet, a cloud network, or a provider's own network.
- the communication method for communicating with the external network by the communication unit 22 is not particularly limited as long as it is a wireless communication method that enables digital two-way communication at a predetermined communication speed or higher and at a predetermined distance or longer.
- the communication unit 22 can communicate with terminals existing in the vicinity of the own vehicle using P2P (Peer To Peer) technology.
- Terminals in the vicinity of the own vehicle are, for example, terminals worn by moving objects that move at relatively low speed, such as pedestrians and bicycles, terminals installed at fixed positions such as stores, or MTC (Machine Type Communication) terminals.
- the communication unit 22 can also perform V2X communication.
- V2X communication means communication between the own vehicle and others: for example, vehicle-to-vehicle communication with other vehicles, vehicle-to-infrastructure communication with roadside equipment and the like, vehicle-to-home communication, and vehicle-to-pedestrian communication with a terminal or the like carried by a pedestrian.
- the communication unit 22 can receive, from the outside, a program for updating the software that controls the operation of the vehicle control system 11 (an over-the-air update).
- the communication unit 22 can also receive map information, traffic information, information around the vehicle 1, and the like from the outside.
- the communication unit 22 can transmit information about the vehicle 1, information about the surroundings of the vehicle 1, and the like to the outside.
- the information about the vehicle 1 that the communication unit 22 transmits to the outside includes, for example, data indicating the state of the vehicle 1, recognition results by the recognition unit 73, and the like.
- the communication unit 22 performs communication corresponding to a vehicle emergency call system such as e-call.
- the communication with the inside of the vehicle that can be performed by the communication unit 22 will be described schematically.
- the communication unit 22 can communicate with each device in the vehicle using, for example, wireless communication.
- the communication unit 22 can perform wireless communication with devices in the vehicle using a communication method, such as wireless LAN, Bluetooth, NFC, or WUSB (Wireless USB), that enables digital two-way communication at a communication speed equal to or higher than a predetermined value.
- the communication unit 22 can also communicate with each device in the vehicle using wired communication.
- the communication unit 22 can communicate with each device in the vehicle by wired communication via a cable connected to a connection terminal (not shown).
- the communication unit 22 can communicate with each device in the vehicle through wired communication, such as USB (Universal Serial Bus), HDMI (High-Definition Multimedia Interface) (registered trademark), or MHL (Mobile High-definition Link), that enables digital two-way communication at a predetermined communication speed or higher.
- equipment in the vehicle refers to equipment that is not connected to the communication network 41 in the vehicle, for example.
- in-vehicle devices include mobile devices and wearable devices possessed by passengers such as drivers, information devices that are brought into the vehicle and temporarily installed, and the like.
- the communication unit 22 receives electromagnetic waves transmitted by a road traffic information communication system (VICS (Vehicle Information and Communication System) (registered trademark)) such as radio wave beacons, optical beacons, and FM multiplex broadcasting.
- the map information accumulation unit 23 accumulates one or both of maps acquired from the outside and maps created by the vehicle 1. For example, it accumulates a three-dimensional high-precision map and a global map that is lower in accuracy than the high-precision map but covers a wider area.
- High-precision maps are, for example, dynamic maps, point cloud maps, and vector maps.
- the dynamic map is, for example, a map consisting of four layers of dynamic information, quasi-dynamic information, quasi-static information, and static information, and is provided to the vehicle 1 from an external server or the like.
- a point cloud map is a map composed of a point cloud (point cloud data).
- the vector map refers to a map adapted to ADAS (Advanced Driver Assistance System), in which traffic information such as lanes and the positions of traffic signals is associated with a point cloud map.
- the point cloud map and the vector map may be provided from an external server or the like, or may be created by the vehicle 1, based on sensing results of the radar 52, the LiDAR 53, and the like, as maps for matching with a local map described later, and stored in the map information storage unit 23. Further, when a high-precision map is provided from an external server or the like, map data of, for example, several hundred meters square regarding the planned route on which the vehicle 1 will travel is acquired from the external server or the like in order to reduce the communication capacity.
- the GNSS receiver 24 receives GNSS signals from GNSS satellites and acquires position information of the vehicle 1 .
- the received GNSS signal is supplied to the driving support/automatic driving control unit 29 .
- the GNSS receiver 24 is not limited to the method using the GNSS signal, and may acquire the position information using, for example, a beacon.
- the external recognition sensor 25 includes various sensors used for recognizing situations outside the vehicle 1 and supplies sensor data from each sensor to each part of the vehicle control system 11 .
- the type and number of sensors included in the external recognition sensor 25 are arbitrary.
- the external recognition sensor 25 includes a camera 51, a radar 52, a LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) 53, and an ultrasonic sensor 54.
- the configuration is not limited to this, and the external recognition sensor 25 may be configured to include one or more types of sensors among the camera 51 , radar 52 , LiDAR 53 , and ultrasonic sensor 54 .
- the numbers of cameras 51 , radars 52 , LiDARs 53 , and ultrasonic sensors 54 are not particularly limited as long as they are realistically installable in the vehicle 1 .
- the type of sensor provided in the external recognition sensor 25 is not limited to this example, and the external recognition sensor 25 may be provided with other types of sensors. An example of the sensing area of each sensor included in the external recognition sensor 25 will be described later.
- the shooting method of the camera 51 is not particularly limited as long as it is a shooting method that enables distance measurement.
- the camera 51 may be a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, or any other type of camera as required.
- the camera 51 is not limited to this, and may simply acquire a photographed image regardless of distance measurement.
- the external recognition sensor 25 can include an environment sensor for detecting the environment with respect to the vehicle 1.
- the environment sensor is a sensor for detecting the environment such as weather, climate, brightness, etc., and can include various sensors such as raindrop sensors, fog sensors, sunshine sensors, snow sensors, and illuminance sensors.
- the external recognition sensor 25 includes a microphone used for detecting the sound around the vehicle 1 and the position of the sound source.
- the in-vehicle sensor 26 includes various sensors for detecting information inside the vehicle, and supplies sensor data from each sensor to each part of the vehicle control system 11 .
- the types and number of various sensors included in the in-vehicle sensor 26 are not particularly limited as long as they are realistically installable in the vehicle 1 .
- the in-vehicle sensor 26 can include one or more sensors among cameras, radar, seating sensors, steering wheel sensors, microphones, and biosensors.
- the camera provided in the in-vehicle sensor 26 for example, cameras of various shooting methods capable of distance measurement, such as a ToF camera, a stereo camera, a monocular camera, and an infrared camera, can be used.
- the camera included in the in-vehicle sensor 26 is not limited to this, and may simply acquire a photographed image regardless of distance measurement.
- the biosensors included in the in-vehicle sensor 26 are provided, for example, in seats, steering wheels, etc., and detect various biometric information of passengers such as the driver.
- the vehicle sensor 27 includes various sensors for detecting the state of the vehicle 1, and supplies sensor data from each sensor to each section of the vehicle control system 11.
- the types and number of various sensors included in the vehicle sensor 27 are not particularly limited as long as they can be installed in the vehicle 1 realistically.
- the vehicle sensor 27 includes a speed sensor, an acceleration sensor, an angular velocity sensor (gyro sensor), and an inertial measurement unit (IMU (Inertial Measurement Unit)) integrating them.
- the vehicle sensor 27 includes a steering angle sensor that detects the steering angle of the steering wheel, a yaw rate sensor, an accelerator sensor that detects the amount of operation of the accelerator pedal, and a brake sensor that detects the amount of operation of the brake pedal.
- the vehicle sensor 27 includes, for example, a rotation sensor that detects the rotation speed of the engine or motor, an air pressure sensor that detects tire air pressure, a slip ratio sensor that detects the tire slip ratio, and a wheel speed sensor that detects the rotation speed of the wheels.
- the vehicle sensor 27 includes a battery sensor that detects the remaining battery level and temperature, and an impact sensor that detects external impact.
- the recording unit 28 includes at least one of a nonvolatile storage medium and a volatile storage medium, and stores data and programs.
- as the recording unit 28, for example, an EEPROM (Electrically Erasable Programmable Read Only Memory), a RAM (Random Access Memory), or a magneto-optical storage device can be applied.
- the recording unit 28 records various programs and data used by each unit of the vehicle control system 11 .
- the recording unit 28 includes an EDR (Event Data Recorder) and a DSSAD (Data Storage System for Automated Driving), and records information on the vehicle 1 before and after an event such as an accident, as well as biometric information acquired by the in-vehicle sensor 26.
- the driving support/automatic driving control unit 29 controls driving support and automatic driving of the vehicle 1 .
- the driving support/automatic driving control unit 29 includes an analysis unit 61 , an action planning unit 62 and an operation control unit 63 .
- the analysis unit 61 analyzes the vehicle 1 and its surroundings.
- the analysis unit 61 includes a self-position estimation unit 71 , a sensor fusion unit 72 and a recognition unit 73 .
- the self-position estimation unit 71 estimates the self-position of the vehicle 1 based on the sensor data from the external recognition sensor 25 and the high-precision map accumulated in the map information accumulation unit 23. For example, the self-position estimation unit 71 generates a local map based on sensor data from the external recognition sensor 25, and estimates the self-position of the vehicle 1 by matching the local map and the high-precision map.
- the position of the vehicle 1 is based on, for example, the center of the rear-wheel axle.
- a local map is, for example, a three-dimensional high-precision map created using techniques such as SLAM (Simultaneous Localization and Mapping), an occupancy grid map, or the like.
- the three-dimensional high-precision map is, for example, the point cloud map described above.
- the occupancy grid map is a map that divides the three-dimensional or two-dimensional space around the vehicle 1 into grids (lattice) of a predetermined size and shows the occupancy state of objects in grid units.
- the occupancy state of an object is indicated, for example, by the presence or absence of the object and the existence probability.
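The occupancy-grid idea above can be sketched minimally as follows (the 2-D layout, grid extent, cell size, and the direct probability write are illustrative assumptions, not taken from the embodiment):

```python
# Minimal occupancy-grid sketch: the space around the vehicle is divided
# into fixed-size cells, and each cell holds an occupancy probability.
class OccupancyGrid:
    def __init__(self, size_m: float = 20.0, cell_m: float = 0.5):
        self.cell_m = cell_m
        self.n = int(size_m / cell_m)      # cells per side
        self.origin = size_m / 2.0          # vehicle at the grid centre
        # 0.5 = unknown; toward 1.0 = occupied, toward 0.0 = free
        self.prob = [[0.5] * self.n for _ in range(self.n)]

    def mark(self, x: float, y: float, p: float) -> None:
        """Record an occupancy probability for the cell containing (x, y)."""
        i = int((x + self.origin) / self.cell_m)
        j = int((y + self.origin) / self.cell_m)
        self.prob[i][j] = p

grid = OccupancyGrid()
grid.mark(3.0, -1.5, 0.9)  # obstacle sensed 3 m ahead, 1.5 m to one side
```

A real implementation would typically accumulate log-odds per cell rather than overwrite probabilities, but the grid-indexing step shown is the core of the representation.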
- the local map is also used, for example, by the recognizing unit 73 for detection processing and recognition processing of the situation outside the vehicle 1 .
- the self-position estimation unit 71 may estimate the self-position of the vehicle 1 based on the GNSS signal and sensor data from the vehicle sensor 27.
- the sensor fusion unit 72 combines a plurality of different types of sensor data (for example, image data supplied from the camera 51 and sensor data supplied from the radar 52) to perform sensor fusion processing to obtain new information.
- Methods for combining different types of sensor data include integration, fusion, federation, and the like.
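One common way to combine two such estimates, shown here purely as an illustrative sketch (the embodiment does not specify a fusion formula, and the sensor variances below are assumed values), is inverse-variance weighting:

```python
# Illustrative fusion of two distance estimates of the same object
# (e.g. one from the camera, one from the radar) by inverse-variance
# weighting: the less noisy sensor gets the larger weight.
def fuse(z1: float, var1: float, z2: float, var2: float) -> float:
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

# Camera estimates 10.0 m (variance 4.0); radar estimates 10.8 m (variance 1.0)
d = fuse(10.0, 4.0, 10.8, 1.0)  # close to the radar value
```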
- the recognition unit 73 executes a detection process for detecting the situation outside the vehicle 1 and a recognition process for recognizing the situation outside the vehicle 1 .
- the recognition unit 73 performs detection processing and recognition processing of the situation outside the vehicle 1 based on information from the external recognition sensor 25, information from the self-position estimation unit 71, information from the sensor fusion unit 72, and the like.
- the recognition unit 73 performs detection processing and recognition processing of objects around the vehicle 1 .
- Object detection processing is, for example, processing for detecting the presence or absence, size, shape, position, movement, and the like of an object.
- Object recognition processing is, for example, processing for recognizing an attribute such as the type of an object or identifying a specific object.
- detection processing and recognition processing are not always clearly separated, and may overlap.
- the recognition unit 73 detects objects around the vehicle 1 by performing clustering that classifies point clouds based on sensor data from the LiDAR 53, the radar 52, or the like into clusters of point groups. As a result, the presence/absence, size, shape, and position of objects around the vehicle 1 are detected.
- the recognizing unit 73 detects the movement of objects around the vehicle 1 by performing tracking that follows the movement of the clusters of point groups classified by clustering. As a result, the speed and traveling direction (movement vector) of objects around the vehicle 1 are detected.
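A minimal sketch of this clustering-then-tracking flow is given below. It uses a naive single-linkage clustering (points closer than a threshold belong to the same cluster) and tracks the cluster centroid across frames to obtain a movement vector; the threshold value and the 2-D point data are illustrative assumptions, and a real implementation would use 3-D point clouds and a more robust association method.

```python
import math

def cluster_points(points, eps=1.0):
    """Group 2-D points whose mutual distance is below eps
    (naive single-linkage clustering via union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)

    clusters = {}
    for i, p in enumerate(points):
        clusters.setdefault(find(i), []).append(p)
    return list(clusters.values())

def centroid(cluster):
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

# two well-separated point groups -> two clusters
pts = [(0.0, 0.0), (0.5, 0.2), (10.0, 10.0), (10.3, 9.8)]
clusters = cluster_points(pts, eps=1.0)

# tracking: the movement vector follows the cluster centroid across frames
c1 = centroid([(0.0, 0.0), (0.5, 0.2)])
c2 = centroid([(1.0, 0.0), (1.5, 0.2)])
velocity = (c2[0] - c1[0], c2[1] - c1[1])  # per-frame displacement
```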
- the recognition unit 73 detects or recognizes vehicles, people, bicycles, obstacles, structures, roads, traffic lights, traffic signs, road markings, etc. from the image data supplied from the camera 51 . Also, the types of objects around the vehicle 1 may be recognized by performing recognition processing such as semantic segmentation.
- the recognition unit 73 can perform recognition processing of traffic rules around the vehicle 1 based on the map accumulated in the map information accumulation unit 23, the estimation result of the self-position by the self-position estimation unit 71, and the recognition result of objects around the vehicle 1 by the recognition unit 73. Through this processing, the recognizing unit 73 can recognize the position and state of traffic signals, the content of traffic signs and road markings, the content of traffic restrictions, the lanes in which the vehicle can travel, and the like.
- the recognition unit 73 can perform recognition processing of the environment around the vehicle 1 .
- the surrounding environment to be recognized by the recognition unit 73 includes the weather, temperature, humidity, brightness, road surface conditions, and the like.
- the action plan section 62 creates an action plan for the vehicle 1.
- the action planning unit 62 creates an action plan by performing route planning and route following processing.
- route planning is the process of planning a rough route from the start to the goal. This route planning also includes processing called trajectory planning: trajectory generation (local path planning) that allows the vehicle 1 to proceed safely and smoothly in its vicinity along the planned route.
- route planning may be distinguished as long-term path planning, and trajectory generation as short-term path planning or local path planning. A safety-priority route represents a concept similar to trajectory generation, short-term path planning, or local path planning.
- Route following is the process of planning actions to safely and accurately travel the route planned by route planning within the planned time.
- the action planning unit 62 can, for example, calculate the target speed and the target angular speed of the vehicle 1 based on the result of this route following processing.
- the motion control unit 63 controls the motion of the vehicle 1 in order to implement the action plan created by the action planning unit 62.
- the operation control unit 63 controls a steering control unit 81, a brake control unit 82, and a drive control unit 83 included in the vehicle control unit 32, which will be described later, and performs acceleration/deceleration control and direction control so that the vehicle 1 travels along the trajectory calculated by the trajectory plan.
- the operation control unit 63 performs cooperative control aimed at realizing ADAS functions such as collision avoidance or shock mitigation, follow-up driving, vehicle speed maintenance driving, collision warning of own vehicle, and lane deviation warning of own vehicle.
- the operation control unit 63 performs cooperative control aimed at automatic driving in which the vehicle autonomously travels without depending on the driver's operation.
- the DMS 30 performs driver authentication processing, driver state recognition processing, etc., based on sensor data from the in-vehicle sensor 26 and input data input to the HMI 31, which will be described later.
- the driver's condition to be recognized by the DMS 30 includes, for example, physical condition, wakefulness, concentration, fatigue, gaze direction, drunkenness, driving operation, posture, and the like.
- the DMS 30 may perform authentication processing for passengers other than the driver and processing for recognizing the state of the passenger. Further, for example, the DMS 30 may perform recognition processing of the situation inside the vehicle based on the sensor data from the sensor 26 inside the vehicle. Conditions inside the vehicle to be recognized include temperature, humidity, brightness, smell, and the like, for example.
- the HMI 31 accepts input of various data, instructions, and the like, and presents various data to the driver and other occupants.
- the HMI 31 comprises an input device for human input of data.
- the HMI 31 generates an input signal based on data, instructions, etc. input from an input device, and supplies the input signal to each section of the vehicle control system 11 .
- the HMI 31 includes operators such as a touch panel, buttons, switches, and levers as input devices.
- the HMI 31 is not limited to this, and may further include an input device capable of inputting information by a method other than manual operation using voice, gestures, or the like. Further, the HMI 31 may use, as an input device, a remote control device using infrared rays or radio waves, or an externally connected device such as a mobile device or wearable device corresponding to the operation of the vehicle control system 11 .
- the presentation of data by the HMI 31 will be briefly explained.
- the HMI 31 generates visual information, auditory information, and tactile information for the passenger or outside the vehicle.
- the HMI 31 also performs output control for controlling the output, output content, output timing, output method, and the like of each of the generated information.
- the HMI 31 generates and outputs visual information such as an operation screen, a status display of the vehicle 1, a warning display, an image such as a monitor image showing the situation around the vehicle 1, and information indicated by light.
- the HMI 31 also generates and outputs information indicated by sounds such as voice guidance, warning sounds, warning messages, etc., as auditory information.
- the HMI 31 generates and outputs, as tactile information, information given to the passenger's tactile sense by force, vibration, motion, or the like.
- as an output device from which the HMI 31 outputs visual information, a display device that presents visual information by displaying an image by itself, or a projector device that presents visual information by projecting an image, can be applied.
- the display device may be a device that displays visual information within the passenger's field of view, such as a head-up display, a transmissive display, or a wearable device with an AR (Augmented Reality) function.
- the HMI 31 can also use display devices such as a navigation device, an instrument panel, a CMS (Camera Monitoring System), an electronic mirror, and lamps provided in the vehicle 1 as output devices for outputting visual information.
- Audio speakers, headphones, and earphones can be applied as output devices for the HMI 31 to output auditory information.
- a haptic element using haptic technology can be applied as an output device for the HMI 31 to output tactile information.
- a haptic element is provided at a portion of the vehicle 1 that is in contact with a passenger, such as a steering wheel or a seat.
- the vehicle control unit 32 controls each unit of the vehicle 1.
- the vehicle control section 32 includes a steering control section 81 , a brake control section 82 , a drive control section 83 , a body system control section 84 , a light control section 85 and a horn control section 86 .
- the steering control unit 81 detects and controls the state of the steering system of the vehicle 1 .
- the steering system includes, for example, a steering mechanism including a steering wheel, an electric power steering, and the like.
- the steering control unit 81 includes, for example, a control unit such as an ECU that controls the steering system, an actuator that drives the steering system, and the like.
- the brake control unit 82 detects and controls the state of the brake system of the vehicle 1 .
- the brake system includes, for example, a brake mechanism including a brake pedal, an ABS (Antilock Brake System), a regenerative brake mechanism, and the like.
- the brake control unit 82 includes, for example, a control unit such as an ECU that controls the brake system.
- the drive control unit 83 detects and controls the state of the drive system of the vehicle 1 .
- the drive system includes, for example, an accelerator pedal, a driving force generator for generating driving force such as an internal combustion engine or a driving motor, and a driving force transmission mechanism for transmitting the driving force to the wheels.
- the drive control unit 83 includes, for example, a control unit such as an ECU that controls the drive system.
- the body system control unit 84 detects and controls the state of the body system of the vehicle 1 .
- the body system includes, for example, a keyless entry system, smart key system, power window device, power seat, air conditioner, air bag, seat belt, shift lever, and the like.
- the body system control unit 84 includes, for example, a control unit such as an ECU that controls the body system.
- the light control unit 85 detects and controls the states of various lights of the vehicle 1 .
- Lights to be controlled include, for example, headlights, backlights, fog lights, turn signals, brake lights, projections, bumper displays, and the like.
- the light control unit 85 includes a control unit such as an ECU for controlling lights.
- the horn control unit 86 detects and controls the state of the car horn of the vehicle 1 .
- the horn control unit 86 includes, for example, a control unit such as an ECU that controls the car horn.
- FIG. 2 is a diagram showing an example of a sensing area.
- FIG. 2 shows examples of sensing areas covered by the camera 51, the radar 52, the LiDAR 53, the ultrasonic sensor 54, and the like of the external recognition sensor 25 described above with reference to FIG. 1. FIG. 2 schematically shows the vehicle 1 viewed from above; the left end side is the front end (front) side of the vehicle 1, and the right end side is the rear end (rear) side of the vehicle 1.
- a sensing area 91F and a sensing area 91B show examples of sensing areas of the ultrasonic sensor 54.
- the sensing area 91F covers the periphery of the front end of the vehicle 1 with a plurality of ultrasonic sensors 54 .
- the sensing area 91B covers the periphery of the rear end of the vehicle 1 with a plurality of ultrasonic sensors 54 .
- the sensing results in the sensing area 91F and the sensing area 91B are used, for example, for parking assistance of the vehicle 1 and the like.
- a sensing area 92F to a sensing area 92B show examples of sensing areas of the radar 52 for short or medium range.
- the sensing area 92F covers the front of the vehicle 1 to a position farther than the sensing area 91F.
- the sensing area 92B covers the rear of the vehicle 1 to a position farther than the sensing area 91B.
- the sensing area 92L covers the rear periphery of the left side surface of the vehicle 1 .
- the sensing area 92R covers the rear periphery of the right side surface of the vehicle 1 .
- the sensing results in the sensing area 92F are used, for example, to detect vehicles, pedestrians, etc. existing in front of the vehicle 1.
- the sensing result in the sensing area 92B is used for the rear collision prevention function of the vehicle 1, for example.
- the sensing results in the sensing area 92L and the sensing area 92R are used, for example, for detecting an object in a blind spot on the side of the vehicle 1, or the like.
- a sensing area 93F to a sensing area 93B show examples of sensing areas by the camera 51.
- the sensing area 93F covers the front of the vehicle 1 to a position farther than the sensing area 92F.
- the sensing area 93B covers the rear of the vehicle 1 to a position farther than the sensing area 92B.
- the sensing area 93L covers the periphery of the left side surface of the vehicle 1 .
- the sensing area 93R covers the periphery of the right side surface of the vehicle 1 .
- the sensing results in the sensing area 93F can be used, for example, for the recognition of traffic lights and traffic signs, the lane departure prevention support system, and the automatic headlight control system.
- a sensing result in the sensing area 93B can be used, for example, for parking assistance and a surround view system.
- Sensing results in the sensing area 93L and the sensing area 93R can be used, for example, in a surround view system.
- a sensing area 94 shows an example of the sensing area of the LiDAR 53 .
- the sensing area 94 covers the front of the vehicle 1 to a position farther than the sensing area 93F.
- the sensing area 94 has a narrower lateral range than the sensing area 93F.
- the sensing results in the sensing area 94 are used, for example, to detect objects such as surrounding vehicles.
- a sensing area 95 shows an example of the sensing area of the long-range radar 52 .
- the sensing area 95 covers the front of the vehicle 1 to a position farther than the sensing area 94 .
- the sensing area 95 has a narrower range in the horizontal direction than the sensing area 94 .
- the sensing results in the sensing area 95 are used, for example, for ACC (Adaptive Cruise Control), emergency braking, and collision avoidance.
- the sensing regions of the camera 51, the radar 52, the LiDAR 53, and the ultrasonic sensor 54 included in the external recognition sensor 25 may have various configurations other than those shown in FIG. 2. Specifically, the ultrasonic sensor 54 may also sense the sides of the vehicle 1 , and the LiDAR 53 may sense the rear of the vehicle 1 . Moreover, the installation position of each sensor is not limited to the examples described above. Also, the number of each sensor may be one or more.
- An imaging device such as the camera 51 described above may include a solid-state imaging device such as an image sensor.
- the solid-state imaging device will be described in detail.
- FIG. 3 is a block diagram showing a schematic configuration example of a solid-state imaging device.
- the solid-state imaging device 100 is, for example, an image sensor manufactured by applying or partially using a CMOS process.
- the solid-state imaging device 100 has, for example, a stacked structure in which a semiconductor chip having a pixel array section 101 formed thereon and a semiconductor chip having a peripheral circuit formed thereon are stacked.
- Peripheral circuits may include, for example, a vertical drive circuit 102, a column processing circuit 103, a horizontal drive circuit 104, and a system controller 105.
- the solid-state imaging device 100 further includes a signal processing section 108 and a data storage section 109 .
- the signal processing unit 108 and the data storage unit 109 may be provided on the same semiconductor chip as the peripheral circuit, or may be provided on a separate semiconductor chip.
- the pixel array section 101 has a configuration in which pixels 110, each having a photoelectric conversion portion (photoelectric conversion element) for generating and accumulating electric charges according to the amount of received light, are arranged in a two-dimensional grid in the row and column directions, that is, in a matrix.
- the row direction refers to the arrangement direction of pixels in a pixel row (horizontal direction in the drawing), and the column direction refers to the arrangement direction of pixels in a pixel column (vertical direction in the drawing).
- the pixel drive lines LD are wired along the row direction for each pixel row, and the vertical signal lines VSL are wired along the column direction for each pixel column with respect to the matrix-like pixel array.
- the pixel drive line LD transmits a drive signal for driving when reading a signal from a pixel.
- in FIG. 3, the pixel drive lines LD are each shown as a single wiring, but the number of pixel drive lines per row is not limited to one.
- One end of the pixel drive line LD is connected to an output terminal corresponding to each row of the vertical drive circuit 102 .
- the vertical drive circuit 102 is composed of a shift register, an address decoder, etc., and drives each pixel of the pixel array section 101 simultaneously or in units of rows. That is, the vertical drive circuit 102 constitutes a drive section that controls the operation of each pixel in the pixel array section 101 together with a system control section 105 that controls the vertical drive circuit 102 .
- the vertical drive circuit 102 generally has two scanning systems, a readout scanning system and a sweep scanning system, although their specific configurations are not shown.
- the readout scanning system sequentially selectively scans the pixels 110 of the pixel array section 101 in units of rows in order to read out signals from the pixels.
- a signal read out from the pixel 110 is an analog signal.
- the sweep scanning system performs sweep scanning on a readout row on which readout scanning is to be performed by the readout scanning system, ahead of that readout scanning by the exposure time.
- a so-called electronic shutter operation is performed by sweeping out (resetting) unnecessary charges with this sweep scanning system.
- the electronic shutter operation refers to an operation of discarding the charge of the photoelectric conversion unit and the like and starting new exposure (starting charge accumulation).
- the signal read out by the readout operation by the readout scanning system corresponds to the amount of light received after the immediately preceding readout operation or the electronic shutter operation.
- a period from the readout timing of the previous readout operation or the sweep timing of the electronic shutter operation to the readout timing of the current readout operation is a charge accumulation period (also referred to as an exposure period) in the pixel 110 .
- An example of the length of one frame exposure period (exposure time) is about 16.7 msec (corresponding to 60 fps).
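The 16.7 msec figure follows directly from the frame rate: the exposure period per frame is bounded by the frame interval, 1/fps. A trivial sketch of that relationship:

```python
def max_exposure_ms(fps):
    """Upper bound on the per-frame exposure time (ms) at a given frame rate:
    the exposure period cannot exceed the frame interval 1/fps."""
    return 1000.0 / fps

exposure_ms = max_exposure_ms(60)  # approx. 16.7 ms at 60 fps
```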
- a signal output from each pixel 110 in a pixel row selectively scanned by the vertical drive circuit 102 is input to the column processing circuit 103 via each vertical signal line VSL for each pixel column.
- the column processing circuit 103 performs predetermined signal processing on the signal output from each pixel in the selected row via the vertical signal line VSL for each pixel column of the pixel array unit 101, and temporarily holds the signal after the signal processing (for example, a signal after AD conversion) in a line memory.
- the column processing circuit 103 may perform noise removal processing, such as CDS (Correlated Double Sampling) processing and DDS (Double Data Sampling) processing.
- the CDS processing removes pixel-specific noise such as reset noise and variations in threshold value of amplification transistors in pixels.
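The core of CDS processing can be illustrated in a few lines: the pixel is sampled once at the reset level and once at the signal level, and the difference cancels any offset common to both samples. The numeric levels below are hypothetical digital values, not values from the document.

```python
def cds(reset_level, signal_level):
    """Correlated double sampling: the difference between the signal sample
    and the reset sample cancels pixel-specific offsets (e.g. reset noise,
    amplification-transistor threshold variation) common to both samples."""
    return signal_level - reset_level

# two pixels with different fixed offsets but the same true signal (50)
pixel_a = cds(reset_level=100, signal_level=150)
pixel_b = cds(reset_level=130, signal_level=180)
```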
- the column processing circuit 103 also has an AD (analog-digital) conversion function, for example, and converts (the level of) an analog signal read from the photoelectric conversion unit into a digital signal and outputs the digital signal.
- the horizontal driving circuit 104 is composed of shift registers, address decoders, etc., and sequentially selects readout circuits (hereinafter referred to as pixel circuits) corresponding to the pixel columns of the column processing circuit 103 .
- the system control unit 105 is composed of a timing generator or the like that generates various timing signals, and performs drive control of the vertical drive circuit 102, the column processing circuit 103, the horizontal drive circuit 104, and the like based on the generated timing signals.
- the signal processing unit 108 has at least an arithmetic processing function, and performs various signal processing such as arithmetic processing on signals output from the column processing circuit 103 .
- the data storage unit 109 temporarily stores data required for signal processing in the signal processing unit 108 .
- the data storage unit 109 may be configured including, for example, a non-volatile memory.
- Image data output from the signal processing unit 108 (which may be pixel signals to be described later) is subjected to predetermined processing in, for example, the driving support/automatic driving control unit 29 in the vehicle control system 11 in which the solid-state imaging device 100 is mounted. It may be executed or transmitted to the outside via the communication unit 22 .
- FIG. 4 is a diagram showing an example of a schematic configuration of a pixel.
- the illustrated pixel 110 includes a photoelectric conversion unit SP, a capacitor C, a transfer transistor FCG, a floating diffusion FD, a reset transistor RST, an amplification transistor AMP, and a selection transistor SEL.
- the photoelectric conversion unit SP generates and accumulates electric charges according to the amount of received light.
- the photoelectric conversion unit SP is a photodiode whose anode is connected to the ground GND and whose cathode is connected to the transfer transistor FCG.
- the capacitor C is a capacitor (floating capacitor) electrically connected to the photoelectric conversion unit SP so that the charge generated in the photoelectric conversion unit SP can be shared with the photoelectric conversion unit SP and accumulated.
- the capacitor C is directly connected to the photoelectric conversion part SP.
- the transfer transistor FCG is connected between the photoelectric conversion unit SP and capacitor C and the floating diffusion FD, and transfers the charge accumulated in the photoelectric conversion unit SP and capacitor C to the floating diffusion FD. Charge transfer is controlled by a signal supplied from the vertical drive circuit 102 (FIG. 3) to the gate of the transfer transistor FCG.
- the floating diffusion FD accumulates charges transferred from the photoelectric conversion unit SP and the capacitor C via the transfer transistor FCG.
- a floating diffusion FD is, for example, a floating diffusion region formed in a semiconductor substrate.
- a voltage corresponding to the charges accumulated in the floating diffusion FD is applied to the gate of the amplification transistor AMP.
- the reset transistor RST is connected between the floating diffusion FD and the power supply VDD, and resets the floating diffusion FD.
- the photoelectric conversion unit SP and capacitor C are also reset via the transfer transistor FCG and reset transistor RST.
- the resetting of the floating diffusion FD is controlled by a signal supplied from the vertical drive circuit 102 (FIG. 3) to the gate of the reset transistor RST.
- the resetting of the photoelectric conversion unit SP and the capacitor C is controlled by a signal supplied from the vertical drive circuit 102 (FIG. 3) to the gate of the transfer transistor FCG and a signal supplied to the gate of the reset transistor RST.
- the amplification transistor AMP outputs a voltage level corresponding to the charge accumulated in the floating diffusion FD.
- the selection transistor SEL is connected between the amplification transistor AMP and the vertical signal line VSL, and causes the output voltage (signal) of the amplification transistor AMP to appear on the vertical signal line VSL. Appearance of the signal on the vertical signal line VSL is controlled by a signal supplied from the vertical drive circuit 102 (FIG. 3) to the gate of the select transistor SEL.
- a signal appearing on the vertical signal line VSL is input to the column processing circuit 103 as described above with reference to FIG.
- noise may be generated in the capacitor C due to temperature (temperature rise), and this noise reduces the SNR (Signal-to-Noise Ratio).
- a signal level corresponding to charges generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C is called a "pixel signal level”.
- a signal level corresponding to noise generated in the capacitor C due to temperature (due to temperature rise) is referred to as a "noise signal level.”
- the SNR is the ratio (difference) between the pixel signal level and the noise signal level. As described above, as the temperature rises, the noise signal level increases, resulting in a decrease in SNR, that is, an SNR drop.
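The temperature dependence of the SNR described above can be sketched numerically. Expressing the SNR in decibels from the two levels (the specific signal and noise values below are illustrative assumptions):

```python
import math

def snr_db(pixel_signal_level, noise_signal_level):
    """SNR in decibels from the pixel signal level and noise signal level."""
    return 20.0 * math.log10(pixel_signal_level / noise_signal_level)

# same pixel signal, but more capacitor noise at the higher temperature T2
snr_t1 = snr_db(1000.0, 10.0)  # lower temperature T1
snr_t2 = snr_db(1000.0, 40.0)  # higher temperature T2 -> SNR drop
```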
- FIG. 5 schematically shows the pixel image signal level (pixel image data) based on the pixel signal level and the noise image signal level (noise image data) based on the noise signal level when the black level is used as a reference.
- the level of the pixel image signal is the level obtained by adding the level of the noise image signal to the level of the original pixel image signal (the noise is superimposed on the original signal).
- FIG. 6 schematically shows the relationship between temperature and SNR. The SNR decreases as the temperature increases; for example, the SNR at temperature T2 is lower than the SNR at temperature T1, which is lower than temperature T2.
- FIG. 7 schematically shows captured images at temperatures T1 and T2.
- FIG. 8 schematically shows the relationship between illuminance and SNR. Especially in the low-illuminance region, the SNR at temperature T2, which is higher than temperature T1, becomes low, and the effect of the SNR drop becomes apparent.
- in the first technique, the SNR drop is suppressed by subtracting the noise signal level from the pixel signal level.
- the noise signal level of each of the plurality of pixels 110 can be treated as fixed pattern noise FPN (Fixed Pattern Noise).
- FPN Fixed Pattern Noise
- the fixed pattern noise FPN is determined at the design stage, the manufacturing stage, etc. of the solid-state imaging device 100, and thus can be acquired as data in advance.
- the noise signal level has a correlation with the exposure time. For example, the longer the exposure time, the higher the noise signal level. Also, the noise signal level has a correlation with the temperature of the solid-state imaging device 100 , more specifically with the temperature of the pixel array section 101 . For example, as temperature increases, the level of fixed pattern noise FPN increases.
- the noise signal level is calculated based on the actual exposure time and temperature and on the fixed pattern noise FPN data acquired in advance, and the calculated noise signal level is subtracted from the pixel signal level.
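One plausible parameterization of "noise level grows with exposure time and temperature" is sketched below. The linear dependence on exposure time matches the correlation stated above; the exponential temperature factor (doubling every ~7 °C, as is typical for dark-current-like noise) and the reference temperature are assumptions for illustration, not values from the document.

```python
def noise_signal_level(fpn_base, exposure_s, temp_c,
                       ref_temp_c=25.0, doubling_c=7.0):
    """Scale a per-pixel FPN reference value by exposure time and an
    exponential temperature factor (doubling every `doubling_c` deg C).
    ref_temp_c and doubling_c are illustrative assumptions."""
    temp_factor = 2.0 ** ((temp_c - ref_temp_c) / doubling_c)
    return fpn_base * exposure_s * temp_factor

level_t1 = noise_signal_level(fpn_base=100.0, exposure_s=1/60, temp_c=25.0)
level_t2 = noise_signal_level(fpn_base=100.0, exposure_s=1/60, temp_c=39.0)
```

With a 14 °C rise and a 7 °C doubling interval, the modeled noise level quadruples, which is the kind of temperature sensitivity the SNR-drop discussion above describes.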
- the subtraction process is performed inside the solid-state imaging device 100 .
- the subtraction process is realized by cooperation of the signal processing unit 108 and the data storage unit 109, etc., which will be described below, for example.
- FIG. 9 is a diagram showing an example of functional blocks of the signal processing unit and the data storage unit.
- the data storage unit 109 and the signal processing unit 108 will be explained in order.
- the data storage unit 109 stores in advance FPN data 109a and a program 109b as information used for subtraction processing.
- the FPN data 109a is data of the fixed pattern noise FPN; it is acquired in advance before shipment of the solid-state imaging device 100 and stored in the data storage unit 109.
- the program 109b is a program for causing the computer, more specifically the signal processing section 108, to perform subtraction processing.
- the signal processing unit 108 includes an acquisition unit 108a and a calculation unit 108b.
- the acquisition unit 108a acquires the exposure time and temperature.
- the exposure time may be, for example, a predetermined time, or may be ascertained by the system control unit 105 and acquired by the acquisition unit 108a.
- the temperature is detected by, for example, a temperature sensor (not shown) and acquired by the acquisition unit 108a.
- the temperature may be the temperature of the solid-state imaging device 100, the temperature of the pixel array section 101, or the like.
- the calculation unit 108b calculates the noise signal level using the exposure time and temperature acquired by the acquisition unit 108a and the FPN data 109a.
- the noise signal level can be calculated, for example, by using a given algorithm that calculates the noise signal level based on the exposure time, the temperature, and the FPN data 109a, or by referring to table data.
- the calculation unit 108b subtracts the calculated noise signal level from the pixel signal level.
- since the FPN data 109a is data indicating the noise signal level of each of the plurality of pixels 110, it can also be treated as noise image data (a noise image signal level) based on the noise signal level.
- the calculation unit 108b calculates noise image data based on the noise signal level using the exposure time, temperature, and FPN data 109a.
- the calculation unit 108b subtracts the calculated noise image data from the pixel image data (pixel image signal level) based on the pixel signal level.
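The per-pixel subtraction of the noise image data from the pixel image data might look like the following sketch. The 2×2 image and noise values are hypothetical, and clamping at the black level (0) is an assumption so the subtraction never produces negative pixel values.

```python
def subtract_noise_image(pixel_image, noise_image):
    """Per-pixel subtraction of the calculated noise image from the
    captured pixel image, clamped at 0 (black level)."""
    return [
        [max(0, p - n) for p, n in zip(p_row, n_row)]
        for p_row, n_row in zip(pixel_image, noise_image)
    ]

pixel_image = [[120, 130], [110, 115]]  # hypothetical captured values
noise_image = [[20, 20], [20, 20]]      # hypothetical FPN-derived values
clean = subtract_noise_image(pixel_image, noise_image)
```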
- the SNR drop can be suppressed by the above subtraction processing.
- FIG. 10 is a diagram schematically showing SNR drop suppression.
- in the captured image after the subtraction processing, the noise appearing in the image is reduced and the image quality is improved as compared with the captured image at the temperature T2 shown in FIG. 7 described above.
- when the solid-state imaging device 100 is mounted on the external recognition sensor 25 of the vehicle 1 as a component of the camera 51, the recognition accuracy of the external recognition sensor 25 can be improved.
- the subtraction process may be executed outside the solid-state imaging device 100.
- an imaging device including the solid-state imaging device 100 (for example, the camera 51 in FIG. 1) may have the subtraction processing function. Some examples of such imaging devices are described with reference to FIGS. 11 and 12.
- FIGS. 11 and 12 are diagrams showing examples of a schematic configuration of an imaging device.
- the imaging device exemplified in FIG. 11 is the camera 51 previously described with reference to FIG. 1.
- the camera 51 includes a solid-state imaging device 100, a processing section 51a, and a storage section 51b.
- the processing unit 51a includes an acquisition unit 108a and a calculation unit 108b.
- the storage unit 51b stores in advance the FPN data 109a and the program 109b. Acquisition unit 108a, calculation unit 108b, FPN data 109a, and program 109b are as described above, so description thereof will not be repeated.
- the program 109b here is a program for causing a computer, more specifically, a processor (not shown) installed in the camera 51, etc., to execute the processing of the processing unit 51a.
- the functions of the processing unit 51a described above may be provided outside the camera 51.
- the information (the FPN data 109a and the program 109b) stored in the storage unit 51b described above may be stored outside the camera 51.
- in the imaging device exemplified in FIG. 12, the vehicle control ECU 21 previously described with reference to FIG. 1 includes the acquisition section 108a and the calculation section 108b.
- the recording unit 28 preliminarily stores the FPN data 109a and the program 109b. That is, the camera 51, the vehicle control ECU 21, and the recording unit 28 cooperate to form an imaging device.
- the program 109b here is a program for causing the computer, more specifically the vehicle control ECU 21, to execute the processing of the acquisition unit 108a and the calculation unit 108b.
- FIG. 13 is a flow chart showing an example of processing executed in the solid-state imaging device or the imaging device (processing method of the solid-state imaging device, processing method of the imaging device). The details of each process are as described above, so the description will not be repeated.
- in step S1, the exposure time and temperature are acquired.
- the acquisition unit 108a acquires the exposure time and the detected temperature.
- in step S2, the noise signal level is calculated.
- the calculation unit 108b calculates the noise signal level using the exposure time and temperature obtained in step S1 and the FPN data 109a.
- in step S3, the noise signal level is subtracted from the pixel signal level.
- the calculation unit 108b subtracts the noise signal level calculated in step S2 from the pixel signal level.
- the subtraction process may be a process of subtracting noise image data from pixel image data.
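- The flow of steps S1 to S3 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function names and the FPN model (a per-pixel dark-signal rate scaled linearly by exposure time and by a temperature factor that doubles every few degrees) are assumptions for the sketch.

```python
import numpy as np

def noise_signal_level(fpn_data, exposure_time_s, temp_c,
                       ref_temp_c=25.0, doubling_c=8.0):
    """Step S2 (sketch): estimate the per-pixel noise signal level.

    fpn_data: assumed per-pixel fixed-pattern-noise rate (level per second)
              measured at ref_temp_c; the dark signal is assumed to double
              every `doubling_c` degrees Celsius.
    """
    temp_factor = 2.0 ** ((temp_c - ref_temp_c) / doubling_c)
    return fpn_data * exposure_time_s * temp_factor

def subtract_noise(pixel_levels, fpn_data, exposure_time_s, temp_c):
    """Step S3 (sketch): subtract the calculated noise level, clamping at zero."""
    noise = noise_signal_level(fpn_data, exposure_time_s, temp_c)
    return np.clip(pixel_levels - noise, 0.0, None)
```

The same arrays can represent whole noise/pixel images, in which case the subtraction is the image-data-level process mentioned above.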
- the SNR drop is suppressed by the first technique as described above.
- in the second technique, the noise signal level corresponding to the noise actually generated in the capacitor C (that is, a noise signal level based on the generated noise itself) is subtracted from the pixel signal level.
- FIG. 14 is a diagram showing an example of a schematic configuration of a pixel.
- the illustrated pixel 110A differs from the pixel 110 (FIG. 4) in that it further includes a switch SW and a reset transistor RST2.
- the switch SW is a switch transistor connected between the photoelectric conversion unit SP and the capacitor C. The ON and OFF states (conducting and non-conducting states) of the switch SW are controlled by a signal supplied to the gate of the switch SW from the vertical drive circuit 102 (FIG. 3).
- the reset transistor RST2 is connected between the photoelectric conversion unit SP and the power supply VDD, and resets the photoelectric conversion unit SP. Reset of the photoelectric conversion unit SP is controlled by a signal supplied from the vertical drive circuit 102 (FIG. 3) to the gate of the reset transistor RST2.
- the exposure period includes a storage exposure period and a non-storage exposure period.
- during the storage exposure period, the switch SW is controlled to be ON or OFF. During the non-storage exposure period, the switch SW is controlled to be OFF.
- when the switch SW is controlled to be ON during the storage exposure period, the exposure during that period yields a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in both the photoelectric conversion unit SP and the capacitor C.
- when the switch SW is controlled to be OFF during the storage exposure period, the exposure during that period yields a signal level corresponding to the charge that was generated in the photoelectric conversion unit SP, overflowed from it, and accumulated in the capacitor C. Such a signal level can also serve as a pixel signal level.
- exposure during the non-storage exposure period yields a noise signal level corresponding to the noise generated in the capacitor C due to temperature. The obtained noise signal level is subtracted from the obtained pixel signal level.
- FIG. 15 is a diagram showing an example of functional blocks of a signal processing unit (or column processing circuit) and a data storage unit.
- the signal processing unit 108A includes a calculation unit 108Ab as a functional block related to subtraction processing.
- the calculation unit 108Ab subtracts the noise signal level obtained by exposure during the non-storage exposure period from the pixel signal level obtained by exposure during the storage exposure period.
- the calculation unit 108Ab may subtract the noise signal level after AD conversion from the pixel signal level after AD conversion.
- the calculation unit 108Ab may subtract the noise image data based on the noise signal level from the pixel image data based on the pixel signal level.
- the lengths of the storage exposure period and the non-storage exposure period may differ.
- for example, the non-storage exposure period may be shorter than the storage exposure period; the storage and non-storage exposure periods may be, for example, about 11 msec and about 5.5 msec, respectively.
- in that case, the calculation unit 108Ab may calculate the noise signal level for the storage exposure period and subtract the calculated noise signal level from the pixel signal level.
- specifically, the calculation unit 108Ab may multiply the noise signal level obtained by exposure during the non-storage exposure period by (length of the storage exposure period)/(length of the non-storage exposure period), and subtract the multiplied noise signal level from the pixel signal level.
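- The period-ratio scaling described above can be sketched as follows; the function names and the use of the example period lengths (about 11 msec and about 5.5 msec) as defaults are illustrative assumptions.

```python
def scale_noise_to_storage_period(noise_level, storage_ms, non_storage_ms):
    """Scale a noise level measured over the shorter exposure period to the
    length of the longer one, i.e. multiply by the ratio of period lengths."""
    return noise_level * (storage_ms / non_storage_ms)

def subtract_scaled_noise(pixel_level, noise_level,
                          storage_ms=11.0, non_storage_ms=5.5):
    """Subtract the scaled noise level from the pixel signal level,
    clamping the result at zero."""
    scaled = scale_noise_to_storage_period(noise_level, storage_ms, non_storage_ms)
    return max(pixel_level - scaled, 0.0)
```

With the example lengths, a noise level of 10 measured over 5.5 msec is scaled to 20 before subtraction.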
- the subtraction process may be performed only on some of the pixels 110, for example, the pixels 110 in a low-illuminance area. This is because, as described above with reference to FIG. 8, the effect of the SNR drop becomes conspicuous especially in the low-illuminance region. Note that in a pixel 110 in the high-illuminance region, part of the charge of the photoelectric conversion unit SP can leak to the capacitor C even when the switch SW is OFF, which may make it difficult to obtain an appropriate noise signal level.
- the program 109Ab of the data storage unit 109A is a program for causing the computer, more specifically the signal processing unit 108A, to execute the processing of the calculation unit 108Ab.
- the column processing circuit 103A may have the function of the arithmetic unit 108Ab.
- the exposure and pixel-signal-level readout for the storage exposure period and the exposure and noise-signal-level readout for the non-storage exposure period may be performed under DOL (Digital Over Lap) driving.
- for example, the column processing circuit 103A may AD-convert the pixel signal level and the noise signal level for each pixel row, and subtract the AD-converted noise signal level from the AD-converted pixel signal level for each pixel row.
- alternatively, the column processing circuit 103A may hold the AD-converted pixel signal level in a line memory and subtract the AD-converted noise signal level from the pixel signal level stored in the line memory. This makes it possible to save the memory capacity required for holding the pixel signal level.
- the process of multiplying the noise signal level obtained by exposure during the non-storage exposure period by (length of the storage exposure period)/(length of the non-storage exposure period) can also be performed in the column processing circuit 103A. At this time, only the lower bits of the noise signal level may be AD-converted in order to speed up the AD conversion process, because the noise signal level is lower than the pixel signal level in most cases. Simplified processing using bit shifting also enables high-speed, small-area operation.
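- When the period ratio happens to be a power of two (as with the example lengths of about 11 msec and about 5.5 msec, a ratio of 2), the multiplication can be reduced to a bit shift on the AD-converted integer code. A minimal sketch under that assumption (names are illustrative):

```python
def scale_noise_by_shift(noise_code, ratio_log2=1):
    """Multiply an AD-converted noise code by 2**ratio_log2 using a left
    shift instead of a multiplier, as in the simplified bit-shift processing."""
    return noise_code << ratio_log2

def subtract_noise_code(pixel_code, noise_code, ratio_log2=1):
    """Subtract the shifted noise code from the pixel code, clamping at zero."""
    return max(pixel_code - scale_noise_by_shift(noise_code, ratio_log2), 0)
```

A shift needs far less circuit area than a general multiplier, which is the motivation for the small-area, high-speed operation mentioned above.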
- FIG. 16 is a diagram schematically showing the exposure period.
- in FIG. 16, the exposure period includes a storage exposure period and a non-storage exposure period in this order. It is assumed that the photoelectric conversion unit SP and the capacitor C are initially reset.
- in one example, the switch SW is controlled to be ON during the storage exposure period.
- in another example, the switch SW is controlled to be OFF during the storage exposure period.
- at time t11, the storage exposure period starts.
- the reset transistor RST2 is controlled to be OFF.
- the switch SW is controlled to be ON.
- the photoelectric conversion unit SP generates electric charge corresponding to the amount of received light. The generated charge is accumulated in the photoelectric conversion unit SP and the capacitor C.
- when the switch SW is controlled to be OFF, the charge generated in the photoelectric conversion unit SP that overflows from the photoelectric conversion unit SP is accumulated in the capacitor C.
- at time t12, the storage exposure period ends.
- a pixel signal level obtained by the exposure during the storage exposure period is AD-converted.
- the non-storage exposure period then starts.
- the reset transistor RST2 is controlled to be ON.
- the switch SW is controlled to be OFF. Noise is generated in the capacitor C due to temperature.
- at time t13, the non-storage exposure period ends.
- a noise signal level obtained by the exposure during the non-storage exposure period is AD-converted. After that, the next exposure period starts.
- FIG. 17 is a flow chart showing an example of processing executed in the solid-state imaging device (processing method of the solid-state imaging device). Since the specific processing is as described above, detailed description will not be repeated here.
- in step S11, the pixel signal level is obtained by exposure during the storage exposure period.
- in step S12, the noise signal level is obtained by exposure during the non-storage exposure period.
- in step S13, the noise signal level is subtracted from the pixel signal level.
- the calculation unit 108Ab subtracts the noise signal level obtained in step S12 from the pixel signal level obtained in step S11.
- the subtraction process may be a process of subtracting noise image data from pixel image data.
- the SNR drop is also suppressed by the second technique as described above.
- the reset transistor RST2 may be used to release electric charge.
- the configuration of the pixel 110A and the first method described above may be combined.
- the FPN data 109a can be obtained by turning on the reset transistor RST2 and turning off the switch SW.
- the subtraction process using the FPN data 109a is as described above.
- since the subtraction process subtracts the fixed pattern noise (FPN), the noise is appropriately reduced and the SNR drop is suppressed even when motion-compensated temporal filtering (MCTF) is performed on the frame image data.
- the third technique subtracts the noise signal level from the pixel signal level in a simpler manner than the second technique in some respects.
- FIG. 18 is a diagram showing an example of a schematic configuration of a pixel.
- the illustrated pixel 110B differs from the pixel 110A (FIG. 14) in that it does not include the reset transistor RST2.
- the exposure period includes the first period and the second period in this order.
- during the first period, the switch SW is controlled to be OFF; during the second period, the switch SW is controlled to be ON. Exposure during the first period yields a noise signal level corresponding to the noise generated in the capacitor C due to temperature. Exposure over the total period of the first period and the second period yields a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C. The obtained noise signal level is subtracted from the obtained pixel signal level.
- FIG. 19 is a diagram showing an example of functional blocks of a signal processing unit (or column processing circuit) and a data storage unit.
- the signal processing unit 108B includes a calculation unit 108Bb as a functional block related to the subtraction processing. Based on the noise signal level obtained by the exposure in the first period, the calculation unit 108Bb calculates the noise signal level for the total period of the first period and the second period, and subtracts the calculated noise signal level from the pixel signal level obtained by the exposure over that total period. Specifically, the calculation unit 108Bb multiplies the noise signal level obtained by the exposure in the first period by (length of the total period)/(length of the first period), and subtracts the multiplied noise signal level from the pixel signal level.
- the calculation unit 108Bb may subtract the AD-converted noise signal level from the AD-converted pixel signal level. Further, the calculation unit 108Bb may subtract noise image data based on the noise signal level from pixel image data based on the pixel signal level.
- the data storage unit 109B stores a program 109Bb as information on noise cancellation processing.
- the program 109Bb is a program for causing the computer, more specifically the signal processing section 108B, to execute the processing of the calculation section 108Bb.
- the column processing circuit 103B may have the function of the arithmetic section 108Bb.
- FIG. 20 is a diagram schematically showing exposure periods.
- in FIG. 20, the solid lines schematically indicate the charge amounts of the photoelectric conversion unit SP and the capacitor C.
- it is assumed that the photoelectric conversion unit SP and the capacitor C are initially reset.
- at time t21, the first period starts.
- the switch SW is controlled to be OFF.
- the photoelectric conversion unit SP generates electric charge corresponding to the amount of received light. The generated charge is accumulated in the photoelectric conversion unit SP. Meanwhile, noise occurs in the capacitor C due to temperature.
- at time t22, a noise signal level obtained by the exposure in the first period is AD-converted.
- the second period then starts.
- the switch SW is controlled to be ON.
- the charge generated and accumulated in the photoelectric conversion unit SP up to this point is now also accumulated in the capacitor C.
- the photoelectric conversion unit SP continues to generate charge. The generated charge is accumulated in the photoelectric conversion unit SP and the capacitor C.
- the length of the first period (time t21 to time t22) is set so as to end before the charge accumulation in the photoelectric conversion unit SP is saturated.
- the length of such a first period may be a predetermined length based on design data, experimental data, or the like, for example.
- the length of the first period may be fixed, or may be dynamically set according to, for example, the illuminance at the time of exposure.
- at time t23, the second period ends.
- a pixel signal level obtained by the exposure over the total period of the first period and the second period is AD-converted.
- the next exposure period starts thereafter.
- the calculation unit 108Bb multiplies the noise signal level obtained by the exposure in the first period by (length of the total period)/(length of the first period) to calculate the noise signal level for the total period. The calculated noise signal level corresponds to the level that would be reached at time t23 if the noise signal level in the first period (time t21 to time t22) were extended to time t23, as indicated by the dashed line in FIG. 20.
- the calculation unit 108Bb subtracts the calculated noise signal level from the pixel signal level.
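- The extrapolation indicated by the dashed line in FIG. 20 amounts to multiplying the measured noise level by the ratio of the total period to the first period. A minimal sketch (function names and period values are assumptions):

```python
def extrapolate_noise(noise_first, first_len_ms, total_len_ms):
    """Extend the noise level measured over the first period (t21-t22)
    to the total period (t21-t23) by the ratio of the period lengths."""
    return noise_first * (total_len_ms / first_len_ms)

def third_technique_subtract(pixel_level, noise_first, first_len_ms, total_len_ms):
    """Subtract the extrapolated noise level from the pixel signal level
    obtained over the total period, clamping the result at zero."""
    noise_total = extrapolate_noise(noise_first, first_len_ms, total_len_ms)
    return max(pixel_level - noise_total, 0.0)
```

This assumes the capacitor noise grows roughly linearly with time at a given temperature, which is the premise of the dashed-line extrapolation.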
- FIG. 21 is a flow chart showing an example of processing executed in the solid-state imaging device (processing method of the solid-state imaging device). Since the specific processing is as described above, detailed description will not be repeated here.
- in step S21, the noise signal level is obtained by the exposure in the first period.
- in step S22, the pixel signal level is obtained by the exposure over the total period of the first period and the second period.
- in step S23, the noise signal level converted to the total period is calculated.
- the calculation unit 108Bb multiplies the noise signal level obtained in step S21 by (length of the total period)/(length of the first period).
- in step S24, the noise signal level is subtracted from the pixel signal level.
- the calculation unit 108Bb subtracts the noise signal level calculated in step S23 from the pixel signal level obtained in step S22.
- the subtraction process may be a process of subtracting noise image data from pixel image data.
- the SNR drop is also suppressed by the third method as described above.
- in the third method, a single reading of the reset level (time t21 in FIG. 20) is sufficient.
- the process is simplified because there is no need to read the reset level twice (time t11 and time t12 in FIG. 16) as in the second method.
- in the examples described so far, one pixel 110 includes one photoelectric conversion unit SP.
- however, one pixel 110 may include a plurality of photoelectric conversion units SP. In that case, part of the pixel circuit may be shared among the plurality of photoelectric conversion units SP.
- FIG. 22 is a diagram showing an example of a schematic configuration of a pixel.
- the photoelectric conversion unit SP described so far is illustrated as a photoelectric conversion unit SP2.
- the illustrated pixel 110C differs from the pixel 110B (FIG. 18) in that it further includes a photoelectric conversion unit SP1, a transfer transistor TGL, and a transfer transistor FDG.
- the photoelectric conversion unit SP1 generates and accumulates electric charges according to the amount of received light.
- the photoelectric conversion unit SP1 is also a photodiode like the photoelectric conversion unit SP2.
- the anode of the photoelectric conversion part SP1 is connected to the ground GND.
- the cathode is connected to the transfer transistor TGL.
- a capacitor is not connected to the photoelectric conversion unit SP1.
- the photoelectric conversion unit SP1 has saturation characteristics different from those of the photoelectric conversion unit SP2 and the capacitor C.
- the transfer transistor TGL is connected between the photoelectric conversion unit SP1 and the floating diffusion FD, and transfers the charges accumulated in the photoelectric conversion unit SP1 to the floating diffusion FD. Charge transfer is controlled by a signal supplied from the vertical drive circuit 102 (FIG. 3) to the gate of the transfer transistor TGL.
- the transfer transistor FDG is connected between the transfer transistor FCG and the floating diffusion FD, and cooperates with the transfer transistor FCG to transfer the charges accumulated in the photoelectric conversion part SP2 and the capacitor C to the floating diffusion FD. Charge transfer is controlled by signals supplied from the vertical drive circuit 102 (FIG. 3) to the gates of the transfer transistors FCG and FDG.
- the charge accumulated in the photoelectric conversion unit SP2 and/or the capacitor C and the charge accumulated in the photoelectric conversion unit SP1 are transferred to the floating diffusion FD at different timings, and the corresponding signals are read out.
- the exposure times of the photoelectric conversion unit SP2 and the photoelectric conversion unit SP1 may be set individually. For example, by reducing the capacitance of the photoelectric conversion unit SP1 or shortening its exposure period, its sensitivity at low illuminance can be made higher than that of the photoelectric conversion unit SP2 and the capacitor C. By combining the signals obtained from the photoelectric conversion units SP2 and SP1, which have different saturation characteristics, an even wider dynamic range (WDR) can be achieved.
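- One common way to combine signals from photoelectric conversion units with different saturation characteristics is to use the high-sensitivity signal until it approaches saturation and then switch to the low-sensitivity signal scaled by the sensitivity ratio. The threshold and ratio values below are illustrative assumptions, not values from this disclosure:

```python
def merge_wdr(sp1_level, sp2_level, sp1_saturation=1000.0, sensitivity_ratio=8.0):
    """Merge SP1 (high sensitivity, saturates early) with SP2 plus capacitor C
    (low sensitivity, wide range) into one wide-dynamic-range value.

    sp1_saturation and sensitivity_ratio are hypothetical parameters:
    the level at which SP1 saturates, and how much more sensitive SP1 is
    than the SP2/capacitor path.
    """
    if sp1_level < sp1_saturation:
        # Low illuminance: SP1 is not saturated, use it directly.
        return sp1_level
    # High illuminance: SP1 saturated, fall back to SP2 scaled to SP1's scale.
    return sp2_level * sensitivity_ratio
```

Real sensors typically blend the two signals smoothly near the threshold to avoid visible seams; a hard switch is used here only to keep the sketch short.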
- the previously described pixel 110 (FIG. 4) and pixel 110A (FIG. 14) may also be modified to include the photoelectric conversion unit SP1. Also, one pixel 110 may include three or more photoelectric conversion units with different saturation characteristics.
- the camera 51 of the vehicle 1 has been described as an application example of the solid-state imaging device 100 .
- the solid-state imaging device 100 may be used for various other purposes. Examples include mobile objects such as robots, IoT devices, and the like.
- the solid-state imaging device 100 includes a plurality of pixels 110 and the like, and a calculation unit 108b and the like.
- each of the plurality of pixels 110 and the like includes a photoelectric conversion unit SP that generates electric charge according to the amount of received light, and a capacitor C provided so as to store, in a shared manner with the photoelectric conversion unit SP, the charge generated in the photoelectric conversion unit SP.
- the calculation unit 108b and the like subtract, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C, a noise signal level corresponding to the noise generated in the capacitor C due to temperature (step S3, step S13, step S24).
- the charge generated in the photoelectric conversion unit SP is accumulated not only in the photoelectric conversion unit SP but also in the capacitor C, so that the charge saturation can be suppressed and the WDR can be ensured. Moreover, since the noise signal level corresponding to the noise generated in the capacitor C due to the temperature is subtracted from the pixel signal level, it is possible to suppress the SNR drop. Therefore, it becomes possible to suppress the SNR drop while ensuring the WDR.
- the solid-state imaging device 100 may include a storage unit (data storage unit 109), and the calculation unit 108b may calculate the noise signal level using the exposure time, the temperature, and the fixed pattern noise data (FPN data 109a) stored in the storage unit (step S2), and subtract the calculated noise signal level from the pixel signal level (step S3). For example, the SNR drop can be suppressed by such a subtraction process (first technique).
- the fixed pattern noise data (FPN data 109a) may be noise image data based on the noise signal level of each of the plurality of pixels 110. In that case, the calculation unit 108b uses the exposure time, the temperature, and the fixed pattern noise data (FPN data 109a) stored in the storage unit (data storage unit 109) to calculate noise image data based on the noise signal level, and may subtract the calculated noise image data from the pixel image data based on the pixel signal level. In this way, subtraction processing can also be performed at the image data level.
- each of the plurality of pixels 110A includes a switch SW connected between the photoelectric conversion unit SP and the capacitor C. The exposure period of the plurality of pixels 110A includes, in this order, a storage exposure period (time t11 to time t12) in which the switch SW is controlled to the conducting state (ON) or the non-conducting state (OFF), and a non-storage exposure period (time t12 to time t13) in which the switch SW is controlled to the non-conducting state (OFF). The calculation unit 108Ab subtracts the noise signal level obtained by exposure during the non-storage exposure period (time t12 to time t13) from the pixel signal level obtained by exposure during the storage exposure period (time t11 to time t12) (steps S11 to S13); for example, the calculation unit 108Ab may subtract the AD-converted noise signal level from the AD-converted pixel signal level. Such a subtraction process (second method) can also suppress the SNR drop.
- the calculation unit 108Ab may calculate, based on the noise signal level obtained by the exposure during the non-storage exposure period (time t12 to time t13), the noise signal level for the storage exposure period (time t11 to time t12), and subtract the calculated noise signal level from the pixel signal level obtained by the exposure during the storage exposure period (time t11 to time t12).
- this makes an appropriate subtraction process possible even when the length of the storage exposure period (time t11 to time t12) and the length of the non-storage exposure period (time t12 to time t13) differ.
- the calculation unit 108Ab may subtract noise image data based on the noise signal level from pixel image data based on the pixel signal level. In this way, subtraction processing can also be performed at the image data level.
- the plurality of pixels 110A may be arranged in an array, the pixel signal level and the noise signal level may be AD-converted for each pixel row, and the calculation unit 108Ab may subtract the AD-converted noise signal level from the AD-converted pixel signal level for each pixel row.
- Such DOL driving can save the memory capacity required for holding the pixel signal level.
- each of the plurality of pixels 110B includes a switch SW connected between the photoelectric conversion unit SP and the capacitor C. The exposure period of the plurality of pixels 110B includes, in this order, a first period (time t21 to time t22) during which the switch SW is controlled to the non-conducting state (OFF) and a second period (time t22 to time t23) during which the switch SW is controlled to the conducting state (ON).
- based on the noise signal level obtained by the exposure in the first period (time t21 to time t22), the calculation unit 108Bb calculates the noise signal level for the total period (time t21 to time t23) of the first period and the second period (time t22 to time t23), and may subtract the calculated noise signal level from the pixel signal level obtained by the exposure during the total period (time t21 to time t23). Such a subtraction process (third technique) can also suppress the SNR drop.
- An imaging device (camera 51, etc.) described with reference to FIGS. 11 and 12 is also one of the embodiments.
- Such an imaging device includes the solid-state imaging device 100, which includes a plurality of pixels 110 each including a photoelectric conversion unit SP that generates electric charge according to the amount of received light and a capacitor C provided so as to store, in a shared manner with the photoelectric conversion unit SP, the charge generated in the photoelectric conversion unit SP, and a calculation unit 108b that subtracts, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C, a noise signal level corresponding to the noise generated in the capacitor C due to temperature. Even with such an imaging device (camera 51, etc.), it is possible to suppress the SNR drop while ensuring the WDR, as described above.
- the imaging device may include a storage unit (storage unit 51b, etc.) that stores in advance fixed pattern noise data (FPN data 109a) indicating the noise signal level of each of the plurality of pixels 110, and the calculation unit 108b may calculate the noise signal level using the exposure time, the temperature, and the fixed pattern noise data (FPN data 109a) stored in the storage unit (storage unit 51b, etc.), and subtract the calculated noise signal level from the pixel signal level. For example, the SNR drop can be suppressed by such a subtraction process (first method).
- the fixed pattern noise data (FPN data 109a) may be noise image data based on the noise signal level of each of the plurality of pixels 110. In that case, the calculation unit 108b uses the exposure time, the temperature, and the fixed pattern noise data (FPN data 109a) stored in the storage unit (storage unit 51b, etc.) to calculate noise image data based on the noise signal level, and may subtract the calculated noise image data from the pixel image data based on the pixel signal level. In this way, subtraction processing can also be performed at the image data level.
- the processing method of the solid-state imaging device 100 described with reference to FIGS. 3, 4, 13, 14, 17, 18, 21 and 22 is also one of the embodiments.
- in this processing method, the solid-state imaging device 100 includes a plurality of pixels 110 each including a photoelectric conversion unit SP that generates electric charge corresponding to the amount of received light and a capacitor C provided so as to store, in a shared manner with the photoelectric conversion unit SP, the charge generated in the photoelectric conversion unit SP, and the method includes subtracting, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C, a noise signal level corresponding to the noise generated in the capacitor C due to temperature (steps S3, S13, S24). Even with such a processing method of the solid-state imaging device 100, it is possible to suppress the SNR drop while ensuring the WDR, as described above.
- the processing programs (program 109b, etc.) of the solid-state imaging device 100 are also among the embodiments.
- in such a processing program (program 109b, etc.), the solid-state imaging device 100 includes a plurality of pixels 110 each including a photoelectric conversion unit SP that generates electric charge corresponding to the amount of received light and a capacitor C provided so as to store, in a shared manner with the photoelectric conversion unit SP, the charge generated in the photoelectric conversion unit SP, and the program causes a computer to subtract, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C, a noise signal level corresponding to the noise generated in the capacitor C due to temperature (steps S3, S13 and S24).
- in the processing method of the imaging device (camera 51, etc.), the imaging device includes the solid-state imaging device 100 including a plurality of pixels 110 each including a photoelectric conversion unit SP that generates electric charge according to the amount of received light and a capacitor C provided so as to store, in a shared manner with the photoelectric conversion unit SP, the charge generated in the photoelectric conversion unit SP, and the method includes subtracting, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C, a noise signal level corresponding to the noise generated in the capacitor C due to temperature (step S3). Even with such a processing method of the imaging device (camera 51, etc.), it is possible to suppress the SNR drop while ensuring the WDR, as described above.
- in the processing program (program 109b, etc.) of the imaging device (camera 51, etc.), the imaging device includes the solid-state imaging device 100 including a plurality of pixels 110 each including a photoelectric conversion unit SP that generates electric charge according to the amount of received light and a capacitor C provided so as to store, in a shared manner with the photoelectric conversion unit SP, the charge generated in the photoelectric conversion unit SP, and the processing program (program 109b, etc.) causes a computer to subtract, from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C, a noise signal level corresponding to the noise generated in the capacitor C due to temperature.
- such a processing program (program 109b, etc.) of the imaging device (camera 51, etc.) also makes it possible to suppress the SNR drop while ensuring the WDR, as described above.
- the present technology can also take the following configuration.
- (2) a storage unit storing in advance fixed pattern noise data indicating the noise signal level of each of the plurality of pixels;
- the calculation unit calculates the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise signal level from the pixel signal level. The solid-state imaging device according to (1).
- (3) The fixed pattern noise data is noise image data based on the noise signal level of each of the plurality of pixels,
- the calculation unit calculates noise image data based on the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise image data from the pixel image data based on the pixel signal level. The solid-state imaging device according to (2).
- (4) each of the plurality of pixels includes a switch connected between the photoelectric conversion unit and the capacitor;
- the exposure period of the plurality of pixels includes a storage exposure period in which the switch is controlled to be in a conducting state or a non-conducting state, and a non-storage exposure period in which the switch is controlled to be in a non-conducting state;
- the computing unit subtracts the noise signal level obtained by exposure during the non-storage exposure period from the pixel signal level obtained by exposure during the storage exposure period, and
- the computing unit subtracts the noise signal level after AD conversion from the pixel signal level after AD conversion.
- (5) The computing unit calculates the noise signal level for the storage exposure period based on the noise signal level obtained by the exposure during the non-storage exposure period, and subtracts the calculated noise signal level from the pixel signal level. The solid-state imaging device according to (4).
- (6) The computing unit subtracts noise image data based on the noise signal level from pixel image data based on the pixel signal level. The solid-state imaging device according to (4) or (5).
- (7) The plurality of pixels are arranged in an array, the pixel signal level and the noise signal level are AD-converted for each pixel row, and the computing unit subtracts the noise signal level after AD conversion from the pixel signal level after AD conversion for each pixel row.
- (8) Each of the plurality of pixels includes a switch connected between the photoelectric conversion unit and the capacitor; the exposure period of the plurality of pixels includes, in this order, a first period in which the switch is controlled to the non-conducting state and a second period in which the switch is controlled to the conducting state; and the calculation unit calculates the noise signal level for the total of the first period and the second period based on the noise signal level obtained by exposure during the first period, and subtracts the calculated noise signal level from the pixel signal level obtained by exposure over the total period. The solid-state imaging device according to (1).
- (9) An imaging device comprising: a solid-state imaging device including a plurality of pixels, each including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit; and a calculation unit that subtracts a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- (10) The imaging device further comprises a storage unit storing in advance fixed pattern noise data indicating the noise signal level of each of the plurality of pixels, and the calculation unit calculates the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise signal level from the pixel signal level. The imaging device according to (9).
- (11) The fixed pattern noise data is noise image data based on the noise signal level of each of the plurality of pixels, and the calculation unit calculates noise image data based on the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise image data from pixel image data based on the pixel signal level. The imaging device according to (10).
- (12) A processing method for a solid-state imaging device, wherein the solid-state imaging device includes a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, and the processing method includes subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- (13) A processing program for a solid-state imaging device, wherein the solid-state imaging device includes a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, and the processing program causes a computer to execute subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- (14) A processing method for an imaging device, wherein the imaging device includes a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, and the processing method includes subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- (15) A processing program for an imaging device, wherein the imaging device includes a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, and the processing program causes a computer to execute subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- vehicle control ECU
- 28 recording unit (storage unit)
- 51 camera (imaging device)
- 51a processing unit
- 51b storage unit
- 100 solid-state imaging device
- 101 pixel array unit
- 102 vertical drive circuit
- 103 column processing circuit
- 104 horizontal drive circuit
- 105 system control unit
- 108 signal processing unit
- 108a acquisition unit
- 108b calculation unit
- 109 data storage unit (storage unit)
- 109a FPN data
- 109b program
- SP photoelectric conversion unit
- SW switch
- C capacitor
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
1. Embodiment
1.1 Configuration example of the vehicle control system
1.2 Configuration example of the solid-state imaging device
1.3 SNR drop
1.4 First method
1.5 Second method
1.6 Third method
2. Modifications
3. Examples of effects
In one embodiment, the disclosed technology is applied to a mobile device control system. An example of the mobile device control system is a vehicle control system, which will be described with reference to FIG. 1.
FIG. 1 is a diagram illustrating an example of a schematic configuration of the vehicle control system. The vehicle control system 11 is provided in a vehicle 1 and performs processing related to travel assistance and automated driving of the vehicle 1.
An imaging device such as the camera 51 described above may include a solid-state imaging device such as an image sensor. The solid-state imaging device will now be described in detail.
According to the pixel 110 configured as described above, for example, the charge generated in the photoelectric conversion unit SP is accumulated not only in the photoelectric conversion unit SP but also in the capacitor C. The storage capacity increases accordingly, making it easier to secure WRD. However, noise arises in the capacitor C due to dark current and the like. The noise is caused in particular by temperature, and its level increases as the temperature rises.
The noise level in the image appears in a pattern unique to the solid-state imaging device 100. That is, the noise signal level of each of the plurality of pixels 110 can be treated as fixed pattern noise (FPN). Since the fixed pattern noise FPN is determined at the design stage, the manufacturing stage, and so on of the solid-state imaging device 100, it can be acquired as data in advance.
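The first method, which corrects using pre-stored FPN data together with the exposure time and the temperature, can be sketched as follows. This is an illustrative Python model, not the patent's implementation: the temperature-scaling function, its reference temperature and doubling interval, and all numeric values are assumptions introduced for the example; only the overall structure (per-pixel FPN data scaled by exposure time and temperature, then subtracted) follows the description.

```python
import numpy as np

def temperature_factor(temp_c, ref_temp_c=60.0, doubling_c=8.0):
    """Dark-current-like noise roughly doubles every several degrees C.
    The reference temperature and doubling interval here are illustrative
    assumptions, not values from the patent."""
    return 2.0 ** ((temp_c - ref_temp_c) / doubling_c)

def correct_with_fpn(pixel_image, fpn_data, exposure_s, temp_c):
    """Estimate per-pixel capacitor noise from stored FPN data, the exposure
    time, and the temperature, then subtract it from the pixel image."""
    noise_image = fpn_data * exposure_s * temperature_factor(temp_c)
    # Clamp at zero so the subtraction never produces negative pixel values.
    return np.clip(pixel_image - noise_image, 0, None)

# Hypothetical FPN data: per-pixel noise rate measured in advance
# (units assumed to be DN per second at the reference temperature).
fpn = np.array([[2.0, 4.0], [8.0, 2.0]])
raw = np.array([[100.0, 120.0], [140.0, 90.0]])
corrected = correct_with_fpn(raw, fpn, exposure_s=0.5, temp_c=60.0)
```

At the reference temperature the factor is 1, so each pixel loses half its per-second FPN rate for the 0.5 s exposure.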
In the second method, a noise signal level corresponding to the noise actually generated in the capacitor C (a noise signal level based on the generated noise) is subtracted from the pixel signal level.
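A minimal sketch of this second method, assuming the pixel frame (light-accumulating exposure) and the noise frame (non-light-accumulating exposure) are already AD-converted digital numbers; the array values and dtypes are illustrative, not from the patent:

```python
import numpy as np

def subtract_measured_noise(pixel_dn, noise_dn):
    """Subtract the AD-converted noise frame from the AD-converted pixel
    frame, widening to a signed type first and clamping at zero so that
    noise larger than the signal cannot wrap around an unsigned dtype."""
    diff = pixel_dn.astype(np.int32) - noise_dn.astype(np.int32)
    return np.clip(diff, 0, None)

pixel_frame = np.array([[210, 180], [90, 255]], dtype=np.uint16)
noise_frame = np.array([[10, 20], [5, 30]], dtype=np.uint16)
clean = subtract_measured_noise(pixel_frame, noise_frame)
```

Performing the subtraction row by row on AD-converted values, as in aspect (7), would apply the same operation to one pixel row at a time.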
In the third method, the noise signal level is subtracted from the pixel signal level by a method that is simpler than the second method in some respects.
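The third method can be sketched under the simplifying assumption that the capacitor noise grows roughly linearly with time, so the noise measured during the first period (switch non-conducting) can be scaled up to cover the total of the first and second periods; the function names and numbers below are illustrative, not from the patent:

```python
def scale_noise_to_total(noise_first, t1, t2):
    """Estimate the noise over the whole exposure (t1 + t2) from the noise
    obtained during the first period t1 alone, assuming linear growth of
    the capacitor noise with time (a simplifying assumption)."""
    return noise_first * (t1 + t2) / t1

def correct_third_method(pixel_total, noise_first, t1, t2):
    """Subtract the scaled noise estimate from the pixel signal level
    obtained by exposure over the total period."""
    return pixel_total - scale_noise_to_total(noise_first, t1, t2)

# Hypothetical numbers: 4 DN of noise accumulated in the first 10 ms,
# total exposure of 30 ms, pixel signal level of 100 DN.
corrected = correct_third_method(pixel_total=100.0, noise_first=4.0,
                                 t1=0.010, t2=0.020)
```

This avoids a second full-length noise exposure: one short noise measurement is extrapolated instead.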
Although one embodiment of the present disclosure has been described above, the disclosed technology is not limited to the above embodiment. Several modifications will be described.
The embodiment described above is specified, for example, as follows. As described with reference to FIGS. 3, 4, 9, 12 to 15, 17 to 19, 21, 22, and so on, the solid-state imaging device 100 includes a plurality of pixels 110 and a calculation unit 108b. Each of the plurality of pixels 110 includes a photoelectric conversion unit SP that generates electric charge according to the amount of received light, and a capacitor C provided so as to store the charge generated in the photoelectric conversion unit SP in a manner shared with the photoelectric conversion unit SP. The calculation unit 108b subtracts a noise signal level corresponding to noise generated in the capacitor C due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit SP and accumulated in the photoelectric conversion unit SP and the capacitor C (steps S3, S13, S24).
(1)
A solid-state imaging device comprising:
a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit; and
a calculation unit that subtracts a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
(2)
The solid-state imaging device according to (1), further comprising a storage unit storing in advance fixed pattern noise data indicating the noise signal level of each of the plurality of pixels,
wherein the calculation unit calculates the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise signal level from the pixel signal level.
(3)
The solid-state imaging device according to (2), wherein the fixed pattern noise data is noise image data based on the noise signal level of each of the plurality of pixels, and
the calculation unit calculates noise image data based on the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise image data from pixel image data based on the pixel signal level.
(4)
The solid-state imaging device according to (1), wherein each of the plurality of pixels includes a switch connected between the photoelectric conversion unit and the capacitor,
the exposure period of the plurality of pixels includes a light-accumulating exposure period in which the switch is controlled to the conducting state or the non-conducting state, and a non-light-accumulating exposure period in which the switch is controlled to the non-conducting state,
the calculation unit subtracts the noise signal level obtained by exposure during the non-light-accumulating exposure period from the pixel signal level obtained by exposure during the light-accumulating exposure period, and
the calculation unit subtracts the noise signal level after AD conversion from the pixel signal level after AD conversion.
(5)
The solid-state imaging device according to (4), wherein the calculation unit calculates the noise signal level for the light-accumulating exposure period based on the noise signal level obtained by exposure during the non-light-accumulating exposure period, and subtracts the calculated noise signal level from the pixel signal level obtained by exposure during the light-accumulating exposure period.
(6)
The solid-state imaging device according to (4) or (5), wherein the calculation unit subtracts noise image data based on the noise signal level from pixel image data based on the pixel signal level.
(7)
The solid-state imaging device according to (4) or (5), wherein the plurality of pixels are arranged in an array,
the pixel signal level and the noise signal level are AD-converted for each pixel row, and
the calculation unit subtracts, for each pixel row, the noise signal level after AD conversion from the pixel signal level after AD conversion.
(8)
The solid-state imaging device according to (1), wherein each of the plurality of pixels includes a switch connected between the photoelectric conversion unit and the capacitor,
the exposure period of the plurality of pixels includes, in this order, a first period in which the switch is controlled to the non-conducting state and a second period in which the switch is controlled to the conducting state, and
the calculation unit calculates the noise signal level for the total of the first period and the second period based on the noise signal level obtained by exposure during the first period, and subtracts the calculated noise signal level from the pixel signal level obtained by exposure over the total period.
(9)
An imaging device comprising:
a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit; and
a calculation unit that subtracts a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
(10)
The imaging device according to (9), further comprising a storage unit storing in advance fixed pattern noise data indicating the noise signal level of each of the plurality of pixels,
wherein the calculation unit calculates the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise signal level from the pixel signal level.
(11)
The imaging device according to (10), wherein the fixed pattern noise data is noise image data based on the noise signal level of each of the plurality of pixels, and
the calculation unit calculates noise image data based on the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise image data from pixel image data based on the pixel signal level.
(12)
A processing method for a solid-state imaging device,
the solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit,
the processing method including subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
(13)
A processing program for a solid-state imaging device,
the solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit,
the processing program causing a computer to execute subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
(14)
A processing method for an imaging device,
the imaging device including a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit,
the processing method including subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
(15)
A processing program for an imaging device,
the imaging device including a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit,
the processing program causing a computer to execute subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
28 recording unit (storage unit)
51 camera (imaging device)
51a processing unit
51b storage unit
100 solid-state imaging device
101 pixel array unit
102 vertical drive circuit
103 column processing circuit
104 horizontal drive circuit
105 system control unit
108 signal processing unit
108a acquisition unit
108b calculation unit
109 data storage unit (storage unit)
109a FPN data
109b program
SP photoelectric conversion unit
SW switch
C capacitor
Claims (15)
- A solid-state imaging device comprising: a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit; and a calculation unit that subtracts a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- The solid-state imaging device according to claim 1, further comprising a storage unit storing in advance fixed pattern noise data indicating the noise signal level of each of the plurality of pixels, wherein the calculation unit calculates the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise signal level from the pixel signal level.
- The solid-state imaging device according to claim 2, wherein the fixed pattern noise data is noise image data based on the noise signal level of each of the plurality of pixels, and the calculation unit calculates noise image data based on the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise image data from pixel image data based on the pixel signal level.
- The solid-state imaging device according to claim 1, wherein each of the plurality of pixels includes a switch connected between the photoelectric conversion unit and the capacitor, the exposure period of the plurality of pixels includes a light-accumulating exposure period in which the switch is controlled to the conducting state or the non-conducting state, and a non-light-accumulating exposure period in which the switch is controlled to the non-conducting state, the calculation unit subtracts the noise signal level obtained by exposure during the non-light-accumulating exposure period from the pixel signal level obtained by exposure during the light-accumulating exposure period, and the calculation unit subtracts the noise signal level after AD conversion from the pixel signal level after AD conversion.
- The solid-state imaging device according to claim 4, wherein the calculation unit calculates the noise signal level for the light-accumulating exposure period based on the noise signal level obtained by exposure during the non-light-accumulating exposure period, and subtracts the calculated noise signal level from the pixel signal level obtained by exposure during the light-accumulating exposure period.
- The solid-state imaging device according to claim 4, wherein the calculation unit subtracts noise image data based on the noise signal level from pixel image data based on the pixel signal level.
- The solid-state imaging device according to claim 4, wherein the plurality of pixels are arranged in an array, the pixel signal level and the noise signal level are AD-converted for each pixel row, and the calculation unit subtracts, for each pixel row, the noise signal level after AD conversion from the pixel signal level after AD conversion.
- The solid-state imaging device according to claim 1, wherein each of the plurality of pixels includes a switch connected between the photoelectric conversion unit and the capacitor, the exposure period of the plurality of pixels includes, in this order, a first period in which the switch is controlled to the non-conducting state and a second period in which the switch is controlled to the conducting state, and the calculation unit calculates the noise signal level for the total of the first period and the second period based on the noise signal level obtained by exposure during the first period, and subtracts the calculated noise signal level from the pixel signal level obtained by exposure over the total period.
- An imaging device comprising: a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit; and a calculation unit that subtracts a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- The imaging device according to claim 9, further comprising a storage unit storing in advance fixed pattern noise data indicating the noise signal level of each of the plurality of pixels, wherein the calculation unit calculates the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise signal level from the pixel signal level.
- The imaging device according to claim 10, wherein the fixed pattern noise data is noise image data based on the noise signal level of each of the plurality of pixels, and the calculation unit calculates noise image data based on the noise signal level using the exposure time, the temperature, and the fixed pattern noise data stored in the storage unit, and subtracts the calculated noise image data from pixel image data based on the pixel signal level.
- A processing method for a solid-state imaging device, the solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, the processing method including subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- A processing program for a solid-state imaging device, the solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, the processing program causing a computer to execute subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- A processing method for an imaging device, the imaging device including a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, the processing method including subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
- A processing program for an imaging device, the imaging device including a solid-state imaging device including a plurality of pixels each including a photoelectric conversion unit that generates electric charge according to the amount of received light, and a capacitor provided so as to store the charge generated by the photoelectric conversion unit in a manner shared with the photoelectric conversion unit, the processing program causing a computer to execute subtracting a noise signal level corresponding to noise generated in the capacitor due to temperature from a pixel signal level corresponding to the charge generated in the photoelectric conversion unit and accumulated in the photoelectric conversion unit and the capacitor.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/549,464 US20240179429A1 (en) | 2021-03-16 | 2022-02-25 | Solid-state imaging device, imaging device, processing method in solid-state imaging device, processing program in solid-state imaging device, processing method in imaging device, and processing program in imaging device |
JP2023506913A JPWO2022196288A1 (ja) | 2021-03-16 | 2022-02-25 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021042001 | 2021-03-16 | ||
JP2021-042001 | 2021-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022196288A1 true WO2022196288A1 (ja) | 2022-09-22 |
Family
ID=83321421
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/007836 WO2022196288A1 (ja) | 2022-02-25 | Solid-state imaging device, imaging device, processing method for solid-state imaging device, processing program for solid-state imaging device, processing method for imaging device, and processing program for imaging device
Country Status (3)
Country | Link |
---|---|
US (1) | US20240179429A1 (ja) |
JP (1) | JPWO2022196288A1 (ja) |
WO (1) | WO2022196288A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001094882A (ja) * | 1999-09-24 | 2001-04-06 | Casio Comput Co Ltd | 撮像装置およびその信号処理方法 |
JP2002077737A (ja) * | 2000-06-14 | 2002-03-15 | Nec Corp | イメージセンサ |
JP2005328493A (ja) * | 2004-04-12 | 2005-11-24 | Shigetoshi Sugawa | 固体撮像装置、光センサおよび固体撮像装置の動作方法 |
-
2022
- 2022-02-25 JP JP2023506913A patent/JPWO2022196288A1/ja active Pending
- 2022-02-25 US US18/549,464 patent/US20240179429A1/en active Pending
- 2022-02-25 WO PCT/JP2022/007836 patent/WO2022196288A1/ja active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001094882A (ja) * | 1999-09-24 | 2001-04-06 | Casio Comput Co Ltd | 撮像装置およびその信号処理方法 |
JP2002077737A (ja) * | 2000-06-14 | 2002-03-15 | Nec Corp | イメージセンサ |
JP2005328493A (ja) * | 2004-04-12 | 2005-11-24 | Shigetoshi Sugawa | 固体撮像装置、光センサおよび固体撮像装置の動作方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022196288A1 (ja) | 2022-09-22 |
US20240179429A1 (en) | 2024-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102613792B1 (ko) | Imaging device, image processing device, and image processing method | |
US20230370709A1 (en) | Imaging device, information processing device, imaging system, and imaging method | |
WO2022153896A1 (ja) | Imaging device, image processing method, and image processing program | |
TW202147825A (zh) | Imaging device and imaging method | |
WO2024024148A1 (ja) | In-vehicle monitoring device, information processing device, and in-vehicle monitoring system | |
WO2022196288A1 (ja) | Solid-state imaging device, imaging device, processing method for solid-state imaging device, processing program for solid-state imaging device, processing method for imaging device, and processing program for imaging device | |
EP4322516A1 (en) | Information processing device and information processing method | |
WO2024106196A1 (ja) | Solid-state imaging device and electronic apparatus | |
US20240064431A1 (en) | Solid-state imaging device, method of controlling solid-state imaging device, and control program for solid-state imaging device | |
WO2024224856A1 (ja) | Weighted addition circuit and solid-state imaging device | |
WO2024214440A1 (ja) | Photodetection device and photodetection system | |
WO2024185361A1 (ja) | Solid-state imaging device | |
WO2024106132A1 (ja) | Solid-state imaging device and information processing system | |
WO2024009739A1 (ja) | Optical ranging sensor and optical ranging system | |
WO2024101128A1 (en) | Image and distance sensing system, image and distance sensing control device, and image and distance sensing method | |
JP2024158002A (ja) | Weighted addition circuit and solid-state imaging device | |
WO2023276223A1 (ja) | Distance measuring device, distance measuring method, and control device | |
WO2024181041A1 (ja) | Distance measuring device and distance measuring method | |
WO2022075039A1 (ja) | Information processing device, information processing system, and information processing method | |
WO2024062842A1 (ja) | Solid-state imaging device | |
US20240241227A1 (en) | Distance measuring device and distance measuring method | |
WO2022254839A1 (ja) | Photodetection device and ranging system | |
WO2023149089A1 (ja) | Learning device, learning method, and learning program | |
WO2023053498A1 (ja) | Information processing device, information processing method, recording medium, and in-vehicle system | |
WO2023063145A1 (ja) | Information processing device, information processing method, and information processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22771044 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023506913 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18549464 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22771044 Country of ref document: EP Kind code of ref document: A1 |