WO2023208372A1 - Camera system and method for determining depth information of an area - Google Patents
Camera system and method for determining depth information of an area
- Publication number
- WO2023208372A1 (PCT/EP2022/061563)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- laser
- dots
- different patterns
- area
- light source
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2513—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/46—Indirect determination of position data
Definitions
- the disclosure relates to a camera system and method for determining depth information of an area.
- the disclosure is in the field of camera systems for determining depth information of an area.
- Such camera systems may be referred to as depth camera systems.
- Depth information of an area may comprise or be three dimensional (3D) information of the area.
- a camera system for determining depth information of an area may be used in consumer electronic devices (e.g. smartphones, tablets, augmented reality and virtual reality (ARVR) devices), in automotive (e.g. autonomous driving and advanced driver assistance systems (ADAS)), as well as in robots (e.g. industrial robots, medical robots and domestic robots).
- An active depth camera system employing light sources may be referred to as a light detection and ranging (Lidar) system.
- a Lidar system may employ a scanning mechanism (e.g. using prisms, mirrors, microelectromechanical systems (MEMS), optical phased arrays (OPA), etc.) for sweeping the XY-dimensions (i.e. lighting the XY-dimensions) in a sequential order, either in a one-dimensional (1D) or a two-dimensional (2D) manner.
- Such a scanning mechanism is usually slow.
- a Lidar system employing such a scanning mechanism may achieve a refresh rate of about 30 Hz.
- the receiver and the data processing unit of a Lidar system may also be a bottleneck for lower latency of the Lidar system.
- a Lidar system employing the above outlined scanning mechanism (e.g. with a 30 Hz refresh rate) may have the duty of processing more than 1 million point-cloud points per second, with a throughput limit in data processing and transmission. There may be a lot of redundancy in the acquired data.
- Lidar systems may employ indirect or direct time of flight (TOF) working principles and scanning mechanisms (e.g. using prisms, mirrors, OPA, MEMS, etc.).
- such a Lidar system may use a more aggressive scanning speed (e.g. by employing electrical scanning and/or multi-beam scanning) and a planar two-dimensional (2D) receiver.
- the planar 2D receiver may be for example a single-photon avalanche diode (SPAD) sensor array based receiver or a CMOS image sensor based indirect time of flight receiver (CIS based iTOF receiver).
- CMOS image sensor stands for “complementary metal-oxide-semiconductor image sensor” and may be abbreviated by “CIS”.
- Such Lidar systems according to the aforementioned alternative may be referred to as flash Lidar systems.
- such Lidar systems, i.e. flash Lidar systems, may only improve the system latency in a gradual way, because the latency is constrained not only by the scanning mechanism, but also by the huge data acquisition and processing burden at the receiver.
- the present disclosure aims to improve a camera system for determining depth information of an area.
- In particular, an object may be to provide a camera system for determining depth information of an area, the camera system comprising a light source (i.e. a Lidar system), wherein the camera system is improved with regard to lower latency.
- An object may be to provide a camera system (e.g. Lidar system) for determining depth information of an area that is improved with regard to lower latency, while maintaining at least one of the following: a performance of detecting ranges and accuracy, FOVs, robustness and costs.
- a first aspect of the disclosure provides a camera system for determining depth information of an area.
- the camera system comprises a laser light source for lighting the area, and one or more event camera sensors for detecting reflected laser light caused by lighting the area.
- the laser light source is configured to generate a main pattern of laser dots.
- the laser light source is configured to generate in subsequent time frames different patterns of laser dots for lighting the area, wherein the laser dots of each of the different patterns are a subset of the laser dots of the main pattern.
- the one or more event camera sensors are configured to detect reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
- the first aspect proposes to light an area, of which depth information is to be determined, with different sparse laser dot patterns in subsequent time frames and to use one or more event camera sensors for detecting the reflected laser light caused by lighting the area with the different sparse laser dot patterns.
- the different laser dot patterns are sparse compared to the main pattern of laser dots, which the laser light source is configured to generate.
- the terms “pattern of laser dots” and “laser dot pattern” may be used as synonyms.
- since the different patterns of laser dots comprise only a subset of the laser dots of the main pattern, the amount of data acquired by the one or more event camera sensors at each time frame is smaller compared to the amount of data acquired when lighting the area with the main pattern. This reduces the processing burden of acquired data at each time frame. Thus, less time is required for processing the acquired data (even if the scanning speed is increased) compared to a scenario in which the area is lighted using the main pattern of laser dots. Namely, at any time frame only a subset of the laser dots of the main pattern may be used by the camera system for lighting the area.
- as a result, less laser light is reflected from the area being lighted by the respective subset of laser dots.
- the detection result of the one or more event camera sensors in a time frame may be a detected light intensity change caused by the reflected laser light in the time frame. Therefore, less data needs to be processed at a time, which reduces the burden on processing of the data by the camera system.
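To make the sub-pattern mechanism concrete, the following is a minimal illustrative sketch (not the patent's method) of partitioning a main pattern, assumed here to be a regular grid of dots, into disjoint pseudo-random sub-patterns, one per time frame; the function name, grid layout and frame count are assumptions for illustration only.

```python
# Sketch: split a main dot grid into n disjoint pseudo-random sub-patterns,
# one per time frame. Grid layout and all names are illustrative assumptions.
import numpy as np

def make_sub_patterns(grid_w, grid_h, n_frames, seed=0):
    """Split the main pattern (grid_w x grid_h dots) into n_frames subsets."""
    rng = np.random.default_rng(seed)
    dot_ids = rng.permutation(grid_w * grid_h)   # pseudo-random dot order
    subsets = np.array_split(dot_ids, n_frames)  # disjoint, near-equal sizes
    patterns = []
    for ids in subsets:
        mask = np.zeros(grid_w * grid_h, dtype=bool)
        mask[ids] = True                         # dots lit in this time frame
        patterns.append(mask.reshape(grid_h, grid_w))
    return patterns

patterns = make_sub_patterns(grid_w=100, grid_h=100, n_frames=20)
# Each frame lights only 5% of the 10,000 main-pattern dots (within the
# 0.1%-25% range mentioned further below), and the union over all 20 frames
# recovers the full main pattern, i.e. the full resolution.
assert sum(int(p.sum()) for p in patterns) == 100 * 100
```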
- An event camera sensor is a type of CMOS image sensor (CIS).
- the terms “camera sensor” and “image sensor” may be used as synonyms and, thus, the terms “event camera sensor” and “event image sensor” may be used as synonyms.
- the pixels of an event camera sensor are configured to only respond to a change of photon intensity and the event camera sensor is configured to only provide the outputs from pixels responding to such a change, in an asynchronous manner.
- the one or more event camera sensors may each be configured to detect only a change of photon intensity (e.g. laser light intensity) and output the change of photon intensity as a detection result.
- an event camera sensor may also be referred to as a dynamic vision sensor (DVS). Therefore, an event camera sensor has much lower output latency compared to a camera sensor detecting the intensity level of received or detected photons, i.e. compared to a camera sensor that outputs the light intensity detected by every pixel of the camera sensor irrespective of whether the pixel has detected an intensity change or not.
- the one or more event camera sensors may have a latency of 1 microsecond (1 µs).
- the output data (i.e. the detection result) of an event camera sensor is more efficient compared to a camera sensor detecting intensity level of received or detected photons.
- since the one or more event camera sensors detect only intensity changes, the data acquired as the detection result of the one or more event camera sensors comprise only useful information of the area (e.g. changes in the area causing an intensity change in one or more pixels of the event camera sensor) and, optionally, noise.
- This enables much faster data processing (e.g. by using a spiking neural network processing unit). Namely, at each time frame less data needs to be processed compared to the scenario of using a camera sensor detecting intensity level of received or detected photons.
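As an illustration of this event-generation principle, the sketch below models events as threshold crossings of the log intensity between two sampling instants; a real event camera sensor operates asynchronously per pixel, so this frame-pair model and its threshold value are simplifying assumptions.

```python
# Simplified event model: a pixel yields an event only when its log intensity
# changes by more than a contrast threshold between two sampling instants.
import numpy as np

def events_from_frames(prev, curr, threshold=0.2):
    """Return (ys, xs, polarity) for pixels whose log intensity changed."""
    eps = 1e-6                                     # avoid log(0)
    delta = np.log(curr + eps) - np.log(prev + eps)
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return ys, xs, polarity

# Only the few pixels hit by newly reflected laser dots produce any output;
# unchanged pixels contribute no data, which keeps the per-frame load low.
```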
- a spiking neural network processing unit is a special type of deep neural network learning processor, which is configured for asynchronous input (such as an input from an event camera sensor).
- a spiking neural network processing unit may achieve better computing efficiency compared to a conventional neural network processing unit (e.g. a convolutional neural network processing unit).
- the present disclosure is not limited to a spiking neural network processing unit or any other type of processing unit.
- the number of laser dots of the main pattern may be determined by a desired resolution of the camera system. The greater the desired resolution the greater may be the number of laser dots of the main pattern.
- the resolution of depth information determination providable by a respective pattern of the different patterns of laser dots is lower than the resolution of depth information determination providable by the main pattern of laser dots.
- this is compensated by generating the different patterns of laser dots in the subsequent time frames. That is, after each time frame the resolution increases because in each time frame additional laser dots compared to previous time frames may be generated by the laser light source for lighting the area.
- the resolution achievable when lighting the area with the different patterns of laser dots in the subsequent time frames equals the resolution achievable when lighting and detecting the area with the main pattern of laser dots.
- the laser light source may be configured to generate in the subsequent time frames the main pattern of laser dots for lighting the area
- the one or more event camera sensors may be configured to detect in the subsequent time frames reflected laser light caused by different patterns of laser dots of the main pattern, wherein the laser dots of each of the different patterns are a subset of the laser dots of the main pattern.
- the one or more event camera sensors may be configured to detect and output, at each time frame of the subsequent time frames, a subset of data of the data corresponding to the reflected light caused by lighting the area with the main pattern of laser dots. This may result in a lower data processing amount.
- Lighting the area with the main pattern of laser dots in the subsequent time frames and detecting in the subsequent time frames reflected light of different patterns of laser dots being subsets of the main pattern of laser dots is equivalent to lighting the area with the different patterns of laser dots being subsets of the main pattern of laser dots and detecting in the subsequent time frames the reflected laser light caused by lighting the area with the different patterns of laser dots.
- processing data acquired by the camera system (when the area is lighted with the different patterns of laser dots in the subsequent time frames and the one or more event camera sensors detecting the respective reflected laser light) is faster compared to processing data acquired by the camera system when the area is lighted and detected once with the main pattern of laser dots.
- although this information may be of lower resolution compared to information, e.g. depth information, generated based on lighting the area with the main pattern of laser dots, this resolution may already be sufficient for a basic evaluation of the area. This may be an advantage when using the camera system in a device configured to move in the area for detecting obstacles. Thus, this allows determining information, e.g. depth information, on the area similar to the human eye.
- for example, when an obstacle appears in the peripheral field of view of a person, the detection of the obstacle is sufficient for the human brain to start reacting to the obstacle.
- the person may start giving way to the obstacle and turning the head in order to get a better view on the obstacle.
- the camera system of the first aspect allows earlier detecting an obstacle in the area and, thus, reacting to the obstacle.
- patterns of the different patterns of laser dots for lighting the area in subsequent time frames after a time frame of detecting the obstacle may be focused on an area of interest of the area, in which the obstacle is positioned.
- a maximum number of laser dots of the different patterns of laser dots may be set by or may be equal to a throughput of the one or more event camera sensors.
- the number of laser dots of the different patterns of laser dots may be different to each other.
- At least two patterns of the different patterns of laser dots may comprise a different number of laser dots.
- at least two patterns of the different patterns of laser dots may comprise the same number of laser dots.
- the laser light source may be configured to light the area by generating the main pattern of laser dots or the different patterns of laser dots.
- the laser dots of the main pattern may be spatially distributed for lighting the area. That is, they do not light the same position or location in the area when the main pattern of laser dots lights the area. Accordingly, the laser dots of the different patterns of laser dots may be spatially distributed for lighting the area.
- the laser light source may be configured to light the area in the subsequent time frames by generating the different patterns of laser dots in the subsequent time frames.
- the laser dots of the different patterns may be spatially distributed for lighting the area.
- the term “dot” may be used for referring to a “laser dot”. Since the laser dots of a respective pattern of laser dots are a subset of the laser dots of the main pattern, the laser dots are spatially arranged or spatially distributed according to the main pattern. That is, the laser dots are arranged in the respective pattern of laser dots at the same positions as they would be arranged in the main pattern.
- the main pattern of laser dots may be referred to as “pattern of laser dots” and the different patterns of laser dots may be referred to as “sub-patterns of laser dots”.
- the laser dots of the main pattern may be generated in a plane.
- the plane may be referred to as “projection plane”.
- the laser dots of the different patterns of laser dots may be arranged in a plane.
- the different patterns of laser dots may be emitted or projected by the laser light source to the area that is to be detected or captured by the camera system.
- the laser light source may be configured to light the area to be detected by the camera system with the different patterns of laser dots.
- the term “illuminate” may be used as a synonym for the term “light”.
- the area may be in the field of view (FOV) of the camera system (e.g. field of view of the one or more event camera sensors).
- the term “scene” or “environment” may be used as a synonym for the term “area”.
- Each pattern of the different patterns of laser dots may light the area, such as entities (e.g. objects, persons, vehicles etc.) that are present in the area.
- the term “detecting area” may be used as a synonym for the term “area to be detected”.
- the one or more event camera sensors may be configured to detect or capture the reflections from the area, which are caused by the laser dots of the different patterns projected or emitted to the area for lighting the area.
- the camera system may be configured to determine, e.g. compute, depth information of the area by processing the detection result of the one or more event camera sensors.
- the laser light source and the one or more event camera sensors may be positioned or arranged at different positions.
- a number of the laser dots of one or more of the different patterns of laser dots equals a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern.
- a number of the laser dots of the different patterns of laser dots may equal a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern.
- the different patterns of laser dots are sparser compared to the main pattern of laser dots. This reduces the amount of acquired data providable by the one or more event camera sensors as detection result in a time frame, when the one or more event camera sensors detect the reflected laser light caused by lighting the area with a respective pattern of the different patterns of laser dots in the time frame.
- the time between two consecutive time frames of the subsequent time frames is in a range between 100 nanoseconds and 10 milliseconds.
- the laser light source may be configured to generate the different patterns of laser dots with a time frame frequency in a range between 100 Hz and 10 MHz. Therefore, the laser light source may be configured to emit or project the different patterns of laser dots to the area (detecting scene) in a fast sweeping manner. This allows reducing the latency of the camera system for determining the depth information of the area.
- the time frame frequency and, thus, the scanning speed of the camera system may be set by or may be equal to the response time of the one or more event camera sensors for capturing a light intensity change or photon intensity change (e.g. the up-pulse and/or down-pulse photon intensity change).
- the fast sweeping of the different patterns of laser dots means that the laser light source may be configured to emit or project the laser dots of the respective pattern of the different patterns of laser dots and the one or more event camera sensors may be configured to detect reflected laser light caused by the laser dots at a very short time interval.
- the time interval may be at a micro-second scale, which may correspond to the response time capability of an event camera sensor.
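As a quick numerical check, the time between frames stated above and the corresponding time frame frequency are reciprocals of each other:

```python
# Reciprocal relation between frame spacing and time frame frequency.
dt_slow, dt_fast = 10e-3, 100e-9  # 10 ms and 100 ns, the bounds stated above
print(1 / dt_slow)                # 100.0      -> 100 Hz
print(1 / dt_fast)                # 10000000.0 -> 10 MHz
```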
- the sparsity of the different patterns of laser dots means the following. Even though the camera system is to determine or acquire depth information (e.g. 3D information) in a large field of view (FOV) of the area, at each time frame, the laser light source may be configured to emit or project only a fraction of the laser dots of the main pattern. Optionally, the laser light source may be configured to emit or project the fraction of the laser dots of the main pattern into a fraction of the area.
- the laser light source may be configured to generate the different patterns of laser dots such that the different patterns are designed in a way that they traverse the whole FOV in a short time period. This allows the camera system to capture the depth information of the area and, thus, 3D scenes. Further, this allows, by probability, capturing movement of one or more entities (e.g. objects, persons, vehicles etc.) in the area with a very short latency.
- the different patterns of laser dots may be referred to as dot patterns across different bursts.
- the different patterns of laser dots in the subsequent time frames may be considered as a sparse distribution of laser dots in the time-spatial domain.
- the laser light source is configured to repeat one or more of the different patterns of laser dots in two or more consecutive time frames of the subsequent time frames.
- the laser dots of the one or more different patterns (being repeated) may be more dense in time domain.
- the laser light source is configured to generate the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are uniformly distributed in time domain and/or spatial domain.
- Distribution of laser dots of a pattern of laser dots in spatial domain may be understood as following. If laser dots of a first pattern of the different patterns of laser dots and laser dots of a second pattern of the different patterns of laser dots are distributed in the spatial domain in the same way, then this may mean that the first pattern and second pattern have the same spatial density, i.e. their density in the XY domain is the same.
- Distribution of laser dots of a pattern of laser dots in time domain may be understood as following. If laser dots of a first pattern of the different patterns of laser dots and laser dots of a second pattern of the different patterns of laser dots are distributed in the time domain in the same way, then this may mean that the laser dots of the first pattern and the laser dots of the second pattern may be generated the same number of times.
- the laser light source may be configured to generate the different patterns of laser dots such that a dot density of one or more patterns of the different patterns of laser dots is smaller than 15% in variation compared to the dot density of one or more other patterns of the different patterns of laser dots. That is, the laser light source may be configured to generate the different patterns of laser dots such that the dot density of one or more patterns of the different patterns of laser dots varies by less than 15% from the dot density of one or more other patterns of the different patterns of laser dots.
- the terms “deviation” and “deviate” may be used as synonyms for the terms “variation” and “vary”, respectively.
- the laser light source is configured to generate the different patterns of laser dots such that the dot densities of a plurality of patterns of the different patterns of laser dots vary to each other by less than 15%.
- the dot density of each pattern of the plurality of patterns may vary by less than 15% from the other pattern or patterns of the plurality of patterns.
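A straightforward way to verify such a density constraint is sketched below; it assumes boolean dot masks like those in the earlier sketch, and the comparison of each pattern's dot count against the mean is an illustrative choice (only the 15% bound is taken from the text).

```python
# Sketch: check that every sub-pattern's dot count stays within a relative
# variation bound of the mean count over all sub-patterns.
def density_variation_ok(patterns, max_rel_variation=0.15):
    counts = [int(p.sum()) for p in patterns]  # dots per pattern
    mean = sum(counts) / len(counts)
    return all(abs(c - mean) / mean < max_rel_variation for c in counts)
```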
- the laser light source is configured to generate the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are pseudo randomly distributed in time domain and/or spatial domain.
- the laser light source is configured to generate one or more patterns of the different patterns of laser dots such that the laser dots of the one or more patterns are, in an area of interest of the area, more dense in time domain and/or spatial domain.
- the laser light source is configured to generate the one or more patterns of the different patterns of laser dots such that the one or more patterns comprise, in the area of interest of the area, a dot density that is at least 20% greater than in the rest of the area.
- a pattern of laser dots comprising laser dots that are denser, in spatial domain, in the area of interest may mean that the laser dots are spatially arranged such that more laser dots of the pattern light the area of interest compared to another pattern comprising laser dots that are equally distributed in the spatial domain.
- a pattern of laser dots comprising laser dots that are denser, in time domain, in the area of interest may mean that the laser dots for lighting the area of interest of the area are more often repeated compared to laser dots of the pattern for lighting the rest of the area.
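One illustrative way to densify the laser dots in an area of interest in the spatial domain is weighted sampling, as sketched below; the rectangle parameterization, the weight value and all names are assumptions, not the patent's method.

```python
# Sketch: sample a sub-pattern whose dots fall more densely inside an ROI.
import numpy as np

def roi_biased_pattern(grid_w, grid_h, n_dots, roi, roi_weight=3.0, seed=0):
    """roi = (x0, y0, x1, y1) in grid coordinates; roi_weight > 1 densifies it."""
    rng = np.random.default_rng(seed)
    weights = np.ones((grid_h, grid_w))
    x0, y0, x1, y1 = roi
    weights[y0:y1, x0:x1] = roi_weight           # ROI dots sampled 3x as often
    probs = weights.ravel() / weights.sum()
    ids = rng.choice(grid_w * grid_h, size=n_dots, replace=False, p=probs)
    mask = np.zeros(grid_w * grid_h, dtype=bool)
    mask[ids] = True
    return mask.reshape(grid_h, grid_w)

pattern = roi_biased_pattern(100, 100, n_dots=500, roi=(40, 40, 60, 60))
```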
- the laser light source comprises a laser array comprising two or more laser elements.
- the laser array may be configured to selectively turn on and off the laser elements for generating the different patterns of laser dots.
- the laser array may be configured to modulate its laser light emission by selectively turning on and off the laser elements.
- the laser array may be configured to modulate its laser light emission for generating the different patterns of laser dots.
- the laser array may be configured to generate the different patterns of laser dots by selectively turning on and off the laser elements.
- the phrase “selectively turning on and off the laser elements” means that the laser elements may be turned on and off independently of each other. That is, each laser element may be individually turned on and off.
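At the driver level, this individual addressability could look like the minimal sketch below; the `LaserArrayDriver` class and its methods are hypothetical and only illustrate applying a per-frame on/off mask to the laser elements.

```python
# Hypothetical driver sketch: apply a boolean on/off mask to a laser array.
class LaserArrayDriver:
    def __init__(self, rows, cols):
        self.state = [[False] * cols for _ in range(rows)]

    def set_element(self, row, col, on):
        self.state[row][col] = on  # each laser element is individually addressable

    def apply_pattern(self, mask):
        """mask: 2D boolean iterable; True = laser element on for this frame."""
        for r, row in enumerate(mask):
            for c, on in enumerate(row):
                self.set_element(r, c, bool(on))
```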
- the laser array may be a vertical-cavity surface emitting laser array (VCSEL array), an edge emitting laser array (EEL array) or a photonic crystal surface emitting laser array (PCSEL array).
- the laser array may be any other laser array type.
- the laser array may be a pulsed laser array.
- the terms “switch off” and “switch on” may be used as synonyms for the terms “turn off” and “turn on”, respectively.
- the laser array is one of the following: a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; and a photonic crystal surface emitting laser (PCSEL) array.
- a laser element may be or may comprise a laser diode.
- the two or more laser elements of a VCSEL array may be referred to as one or more VCSEL elements.
- a VCSEL element may be or may comprise one or more VCSEL diodes.
- the two or more laser elements of a PCSEL array may be referred to as one or more PCSEL elements.
- a PCSEL element may be or may comprise one or more PCSEL diodes.
- the laser light source comprises a rotation prism.
- the laser array may be configured to emit one or more laser dots to the rotation prism, and the rotation prism may be configured to generate, using the one or more emitted laser dots, the different patterns of laser dots by rotating accordingly.
- the laser array and the rotation prism may be configured to simultaneously perform the respective function for generating the different patterns of laser dots. That is, the laser array and the rotation prism may be configured such that the laser array modulates its laser light emission while the rotation prism generates, using the modulated laser light emission of the laser array (i.e. one or more emitted laser dots), the different patterns of laser dots by rotating accordingly.
- the laser light source comprises a diffractive optical element.
- the laser array may be configured to emit one or more laser dots to the diffractive optical element, and the diffractive optical element may be configured to split up the one or more emitted laser dots for generating the different patterns of laser dots.
- the diffractive optical element may be configured to generate the different patterns of laser dots by splitting up the one or more emitted laser dots.
- the laser light source comprises a prism array.
- the prism array may be configured to deflect laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
- the prism array may be configured to generate the different patterns of laser dots by deflecting laser dots generated by the diffractive optical element.
- the laser light source comprises one or more spatial light modulators.
- the one or more spatial light modulators may be configured to deflect laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
- the laser light source may comprise the edge emitting laser array. That is, the laser array of the laser light source may be the edge emitting laser array.
- the one or more spatial light modulators may be configured to generate the different patterns of laser dots by deflecting laser dots generated by the diffractive optical element.
- the laser light source comprises a liquid crystal on silicon (LCOS) light modulator with a plurality of LCOS pixels.
- the laser array may be configured to emit one or more laser dots to the LCOS light modulator, and the LCOS light modulator may be configured to selectively change the phase of the LCOS pixels for generating the different patterns of laser dots.
- the laser array may be configured to generate the different patterns of laser dots by selectively turning on and off the laser elements.
- the LCOS light modulator may be configured to generate the different patterns of laser dots by selectively changing the phase of its LCOS pixels.
- the LCOS light modulator may be configured to change the phase of its LCOS pixels such that the phase of the LCOS pixels may be changed independently of each other. That is, the phase of each LCOS pixel may be individually changed.
- the laser light source comprises a micro electronic mechanical systems (MEMS) mirror array.
- the laser array may be configured to emit one or more laser dots to the MEMS mirror array, and the MEMS mirror array may be configured to deflect the one or more laser dots for generating the different patterns of laser dots.
- a MEMS mirror array may be used instead of a prism array or LCOS light modulator for generating the different patterns of laser dots.
- the camera system comprises a processing unit configured to determine the depth information of the area by processing a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
- the two or more time frames of the subsequent time frames may be two or more consecutive time frames.
- the processing unit may be configured to determine the depth information of the area by processing a detection result of the one or more event camera sensors in each of the subsequent time frames.
- the processing unit may comprise or may be at least one of a controller, microcontroller, processor, microprocessor, application specific integrated circuit (ASIC) and field programmable gate array (FPGA).
- the processing unit may comprise or be any other known processing means type.
- the processing unit may be a deep neural network learning processing unit, e.g. a spiking neural network processing unit.
- the processing unit may be a convolutional neural network processing unit or any other type of processing unit.
- the processing unit is configured to determine an area of interest of the area by processing a detection result of the one or more event camera sensors of one or more time frames of the subsequent time frames.
- the processing unit may be configured to inform the laser light source on the area of interest.
- the processing unit may be configured to inform a control unit for controlling the laser light source on the area of interest.
- the processing unit may be or may be part of the control unit for controlling the laser light source.
- the control unit may be configured to control the laser light source using information on the area of interest.
- the control unit may be configured to control the laser light source to generate one or more patterns of the different patterns of laser dots such that the laser dots of the one or more patterns are, in the area of interest, more dense in time domain and/or spatial domain.
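As an illustration of how an area of interest could be derived from a frame's detection result, the sketch below simply takes the bounding box of the event coordinates plus a margin; this heuristic is an assumption for illustration, not the patent's algorithm.

```python
# Sketch: estimate an ROI rectangle from one frame's event coordinates.
def roi_from_events(ys, xs, margin=5):
    """ys, xs: pixel coordinates of the events of one time frame."""
    if len(xs) == 0:
        return None                               # nothing changed this frame
    x0 = max(int(min(xs)) - margin, 0)
    y0 = max(int(min(ys)) - margin, 0)
    x1 = int(max(xs)) + margin
    y1 = int(max(ys)) + margin
    return (x0, y0, x1, y1)                       # handed to the control unit
```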
- the control unit may comprise or may be at least one of a controller, microcontroller, processor, microprocessor, application specific integrated circuit (ASIC) and field programmable gate array (FPGA).
- the control unit may comprise or be any other known control means type.
- the processing unit may be configured to perform a sliding window type processing of the detection result of the one or more event camera sensors (i.e. the output of the one or more event camera sensors).
- the processing unit may be configured to perform a stream-based processing (e.g. signal processing) of the output (i.e. detection result) of the one or more event camera sensors.
- the processing unit may be configured to perform a three-dimensional (3D) reconstruction of the area to be detected by the camera system using the processing result of processing the detection result of the one or more event camera sensors. That is, the processing unit may be configured to perform a 3D reconstruction of the area using the determined depth information of the area.
- the processing unit may be configured to produce or generate a depth map (depth information map) of the area by processing the output of the one or more event camera sensors.
- the processing unit may be configured to detect and optionally track one or more objects of the area (e.g. moving in the area) using the processing result of processing the detection result of the one or more event camera sensors.
- the processing unit is configured to determine the depth information of the area by combining a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
- the two or more time frames of the subsequent time frames may be two or more consecutive time frames.
- the processing unit is configured to determine the depth information of the area by combining a detection result of the one or more event camera sensors of each of the subsequent time frames.
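Combining detection results over time frames can be pictured as progressively filling a sparse depth map, as in this illustrative sketch (array layout and names are assumptions):

```python
# Sketch: accumulate per-frame sparse depth samples into one depth map.
import numpy as np

def accumulate_depth(shape, frame_samples):
    """frame_samples: iterable of (ys, xs, depths) tuples, one per time frame."""
    depth_map = np.full(shape, np.nan)   # NaN = not yet sampled
    for ys, xs, depths in frame_samples:
        depth_map[ys, xs] = depths       # later frames overwrite older samples
    return depth_map
```

Each time frame contributes samples only at its sub-pattern's dot positions, so the map approaches the main pattern's full resolution as frames accumulate.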
- the one or more event camera sensors are one or more quantum dots (QD) thin film integrated CMOS image sensors configured for a wavelength range of short wavelength infrared (SWIR).
- CMOS image sensor stands for “complementary metal-oxide-semiconductor image sensor” and may be abbreviated by “CIS”.
- the wavelength range of SWIR may be equal to or greater than 1400 nm or 1550 nm.
- the laser light source may be configured to emit laser light of a corresponding wavelength range.
- the laser light source may be configured to emit laser light of SWIR.
- the laser light source may be configured to emit laser light having a wavelength equal to or greater than 1400 nm or 1550 nm.
- the SWIR allows for better eye safety.
- the QD thin film integrated CIS is a low-cost technology for providing a SWIR sensor.
- QD thin film integrated CIS may achieve very high quantum efficiency (e.g. > 60%), while inheriting the advantages of CIS in terms of scalability (e.g. large pixel resolutions, small pixel pitch, and very low costs).
- the camera system of the first aspect does not need a high speed of processing the acquired data, as is the case for determining depth information using time of flight (TOF) processing (e.g. using single-photon avalanche diodes, SPAD) or indirect time of flight (iTOF) processing (e.g. using frequency modulated continuous wave, FMCW).
- the QD thin film integrated CIS, which may be too slow for TOF and iTOF processing, may be used as the one or more event camera sensors of the camera system of the first aspect. That is, QD thin film integrated CIS may maintain a sufficient speed for determining the depth information using the different patterns of laser dots and the one or more event camera sensors.
- some or all of the implementation forms and optional features of the first aspect, as described above, may be combined with each other.
- a second aspect of the disclosure provides a method for determining depth information of an area.
- the method comprises generating, by a laser light source, in subsequent time frames different patterns of laser dots for lighting the area.
- the laser light source is configured to generate a main pattern of laser dots and the laser dots of each of the different patterns are a subset of the laser dots of the main pattern.
- the method comprises detecting, by one or more event camera sensors, reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
- the above description of the camera system according to the first aspect is correspondingly valid for the method of the second aspect.
- the laser light source generating the different patterns of laser dots may be the laser light source of the camera system according to the first aspect.
- the one or more event camera sensors detecting the reflected light may be the one or more camera sensors of the camera system according to the first aspect.
- the method according to the second aspect of the disclosure may be performed by the camera system according to the first aspect of the disclosure.
- the description of the method of the second aspect may be correspondingly valid for the camera system according to the first aspect.
- a number of the laser dots of one or more of the different patterns of laser dots equals a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern.
- the time between two consecutive time frames of the subsequent time frames is in a range between 100 nanoseconds and 10 milliseconds.
- the method comprises repeating, by the laser light source, one or more of the different patterns of laser dots in two or more consecutive time frames of the subsequent time frames.
- the method comprises generating, by the laser light source, the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are uniformly distributed in time domain and/or spatial domain.
- In an implementation form of the second aspect, the method comprises generating, by the laser light source, the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are pseudo randomly distributed in time domain and/or spatial domain.
- the method comprises generating, by the laser light source, one or more patterns of the different patterns of laser dots such that the laser dots of the one or more patterns are, in an area of interest of the area, more dense in time domain and/or spatial domain.
- the laser light source comprises a laser array comprising two or more laser elements.
- the method may comprise selectively turning on and off the laser elements of the laser array for generating the different patterns of laser dots.
- the laser array is one of the following: a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; and a photonic crystal surface emitting laser (PCSEL) array.
- the laser light source comprises a rotation prism.
- the method may comprise emitting, by the laser array, one or more laser dots to the rotation prism.
- the method may further comprise generating, by the rotation prism, using the one or more emitted laser dots the different patterns of laser dots by rotating accordingly.
- the laser light source comprises a diffractive optical element.
- the method may comprise emitting, by the laser array, one or more laser dots to the diffractive optical element.
- the method may further comprise splitting up, by the diffractive optical element, the one or more emitted laser dots for generating the different patterns of laser dots.
- the laser light source comprises a prism array.
- the method may comprise deflecting, by the prism array, laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
- the laser light source comprises one or more spatial light modulators.
- the method may comprise deflecting, by the one or more spatial light modulators, laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
- the laser light source comprises a liquid crystal on silicon (LCOS) light modulator with a plurality of LCOS pixels.
- the method may comprise emitting, by the laser array, one or more laser dots to the LCOS light modulator.
- the method may further comprise selectively changing the phase of the LCOS pixels of the LCOS light modulator for generating the different patterns of laser dots.
- the laser light source comprises a micro electronic mechanical systems (MEMS) mirror array.
- the method may comprise emitting, by the laser array, one or more laser dots to the MEMS mirror array.
- the method may further comprise deflecting, by the MEMS mirror array, the one or more laser dots for generating the different patterns of laser dots.
- the method comprises determining, by a processing unit, the depth information of the area by processing a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
- the method comprises determining, by the processing unit, an area of interest of the area by processing a detection result of the one or more event camera sensors of one or more time frames of the subsequent time frames.
- the method comprises determining, by the processing unit, the depth information of the area by combining a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
- the one or more event camera sensors are one or more quantum dots (QD) thin film integrated CMOS image sensors configured for a wavelength range of short wavelength infrared (SWIR).
- the method of the second aspect and its implementation forms and optional features achieve the same advantages as the camera system of the first aspect and its respective implementation forms and respective optional features.
- Figures 1 and 2 each show a block diagram of an example of a camera system according to an embodiment of the present disclosure for determining depth information of an area
- Figure 3 shows an example of a laser light source output of the camera system of Figures 1 and 2 and corresponding received laser light
- Figures 4 and 5 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure
- Figure 6 schematically shows two examples of different patterns of laser dots that may be generated by a laser light source of a camera system according to an embodiment of the present disclosure
- Figures 7 to 9 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure.
- Figure 10 shows a flow diagram of an example of a method according to an embodiment of the present disclosure for determining depth information of an area.
- Figures 1 and 2 each show a block diagram of an example of a camera system according to an embodiment of the present disclosure for determining depth information of an area.
- the camera systems of Figures 1 and 2 are examples of the camera system according to the first aspect of the present disclosure. Therefore, the above description of the camera system according to the first aspect of the disclosure is correspondingly valid for the camera systems of Figures 1 and 2.
- the camera system 1 of Figure 1 is a camera system for determining depth information of an area 4.
- the area 4 is represented by a dashed rectangle in which a person is walking. This is only by way of example. That is, the area 4 may be any scene or environment. For example, the area 4 may be a part of a street, a building etc. In the area 4 there may be immobile entities (e.g. traffic light, parking vehicle, building, furniture etc.) and/or mobile entities (e.g. driving vehicle, walking person etc.).
- the camera system 1 comprises a laser light source 2 for lighting the area 4, and one or more event camera sensors 3 for detecting reflected laser light 6 caused by lighting the area 4.
- the laser light source 2 is represented by a rectangle.
- the one or more event camera sensors are also represented by a rectangle.
- the laser light source 2 may comprise or may be implemented by at least one of multibeam rotating prisms, mirrors, and spatial light modulators (SLM) with a laser array (electrically controlled laser array) such as a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; or a photonic crystal surface emitting laser (PCSEL) array. Any other known laser array type may be used. Examples of implementation forms of the laser light source 2 are schematically shown in Figures 4, 5, 7 (A) and (B), 8 and 9. Thus, the laser light source 2 is described in more detail in the following with regard to the aforementioned Figures.
- the one or more event camera sensors 3 may be for example one or more quantum dots (QD) thin film integrated CMOS image sensors.
- the one or more QD thin film integrated CMOS image sensors may be configured for a wavelength range of short wavelength infrared (SWIR).
- SWIR provides the advantage of eye-safety compared to near-infrared (NIR).
- the one or more event camera sensors may be implemented by any other one or more known event camera sensor types.
- the one or more event camera sensors may be or may comprise one or more GaAs based sensors and/or one or more Ge-Si based sensors. Using QD thin film integrated CMOS image sensors is less costly compared to using GaAs based sensors and/or Ge-Si based sensors.
- the one or more event camera sensors 3 may be implemented based on thin film photodiode integrated CMOS image sensor technology, e.g. CQD (colloidal quantum dots).
- the quantum dots (QD) CMOS image sensors may be combined with event camera sensor photo diode and pixel circuitry design for implementing the one or more event camera sensors 3.
- QD CMOS image sensors may achieve very high spatial resolution (e.g. ~3 µm pixel pitch).
- Such sensors have a high quantum efficiency (QE)
- such sensors may be industry scalable, and have very low costs (the costs are as low as for CMOS image sensors).
- At the same time such sensors may achieve high resolution (e.g. > 1 Megapixels).
- the laser light source 2 is configured to generate a main pattern of laser dots (not shown in Figure 1).
- An example of a main pattern of laser dots is shown in Figures 6 (A) and 6 (B).
- the laser light source 2 is configured to generate in subsequent time frames different patterns 7 of laser dots for lighting the area 4.
- the laser dots 7a of each of the different patterns 7 are a subset of the laser dots of the main pattern.
- the laser light source 2 may be configured to light the area 4 in subsequent time frames by generating the different patterns 7 of laser dots in the subsequent time frames and projecting or emitting the different patterns 7 of laser dots to the area 4 in the subsequent time frames.
- the laser light source 2 is configured to emit laser light 5. That is, the laser light source 2 may be configured to generate the different patterns 7 of laser dots by emitting laser light 5.
- the different patterns 7 of laser dots are represented by oval planes, in which the laser dots 7a are exemplarily distributed.
- the one or more event camera sensors 3 are configured to detect reflected laser light 6 caused by lighting the area 4 with the different patterns 7 of laser dots in the subsequent time frames.
- the one or more event camera sensors 3 may be configured to acquire or receive reflected laser dots 6 corresponding to the laser dots 7a of a respective pattern of the different patterns 7 of laser dots that are reflected at the area 4 (e.g. from entities of the area 4, such as objects, persons, vehicles etc.).
- the laser light source 2 and the one or more event camera sensors 3 may be positioned or arranged at different positions. That is, the laser light source 2 and the one or more event camera sensors 3 may be positioned at a geometric distance from each other. This is indicated in Figure 1.
- the laser light source 2 and the one or more event camera sensors 3 may be arranged according to stereoscopic geometry for acquiring the depth information of the area 4.
- the time between two consecutive time frames of the subsequent time frames may be in a range between 100 nanoseconds and 10 milliseconds. That is, the laser light source 2 may be configured to generate the different patterns 7 of laser dots with a time frame frequency in a range between 100 Hz and 10 MHz. Therefore, the laser light source 2 may be configured to emit or project the different patterns 7 of laser dots to the area 4 (detecting scene) in a fast sweeping manner. This allows reducing the latency (e.g. to an ultra low latency) of the camera system 1 for determining the depth information of the area 4.
- the camera system 1 of Figure 2 corresponds to the camera system 1 of Figure 1, wherein the camera system 1 of Figure 2 comprises an additional feature. Therefore, the description of Figure 1 is also valid for the camera system 1 of Figure 2. In the following, mainly the additional feature of the camera system 1 of Figure 2 is described.
- the camera system 1 comprises (in addition to the laser light source 2 and the one or more event camera sensors 3) a processing unit 8 configured to determine the depth information of the area 4 by processing a detection result 3a of the one or more event camera sensors 3 of two or more time frames of the subsequent time frames.
- the processing unit 8 may be configured to process the detection result 3a of the one or more event camera sensors 3 in order to reconstruct the area 4 or the scene to be detected by the camera system 1 in three dimensions (3D).
- the detection result 3a of the one or more event camera sensors 3 in a time frame may comprise or may be a change of light intensity (laser light intensity) due to the reflected laser light 6 received by the one or more event camera sensors in the time frame.
- the reflected laser light 6 is caused by the respective pattern of the different patterns 7 of laser dots generated and emitted, by the laser light source 2, to the area 4 in the time frame.
- the processing unit 8 is represented by a rectangle.
- the acquired reflected laser dots may serve as landmarks.
- the processing unit 8 may be configured to process the detection result of the one or more event camera sensors 3 (i.e. a light intensity change due to the reflected laser light 6 or reflected laser dots detected by the one or more event camera sensors 3). For this, the processing unit 8 may be configured to compare an acquired XY position of reflected laser dots and corresponding time stamps of the output or detection result of the one or more event camera sensors 3 with the XY position and time stamps of the corresponding laser dots of the pattern of the different patterns 7 of laser dots that caused the reflected laser dots. The processing unit 8 may be configured to perform said comparison using one or more structured light depth sensing principle algorithms.
- the processing unit 8 may be configured to perform these one or more sensing and processing algorithms. This allows the processing unit 8 to reconstruct and update information (e.g. depth information) of the area 4 as three dimensional (3D) scenes in point clouds. In addition or alternatively, this allows the processing unit 8 to perform detection, recognition, and/or movement tracking tasks with regard to objects of the area 4. That is, the processing unit 8 may be configured to reconstruct and update information (e.g. depth information) of the area 4 as three dimensional (3D) scenes in point clouds. The processing unit 8 may be configured to perform detection, recognition, and/or movement tracking tasks with regard to objects of the area 4.
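In the simplest rectified configuration, the comparison described above reduces, per matched dot, to the triangulation z = f · B / d, where f is the focal length in pixels, B the baseline between the laser light source 2 and the event camera sensor 3, and d the disparity between expected and detected dot position. The sketch below shows only this final step, with dot matching assumed already done; the function and the example numbers are illustrative assumptions, not the patent's algorithm.

```python
# Sketch: convert matched dot disparities into depths via z = f * B / d.
import numpy as np

def depth_from_dots(emitted_x, detected_x, focal_px, baseline_m):
    """emitted_x, detected_x: matched x coordinates (pixels) of the same dots."""
    disparity = emitted_x - detected_x             # pixels
    disparity = np.where(disparity == 0, np.nan, disparity)
    return focal_px * baseline_m / disparity       # metres, one depth per dot

# Example: with a 600 px focal length and a 5 cm baseline, a 10 px disparity
# corresponds to 600 * 0.05 / 10 = 3 m.
print(depth_from_dots(np.array([110.0]), np.array([100.0]), 600.0, 0.05))
```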
- the processing unit 8 may comprise or may be at least one of a controller, microcontroller, processor, microprocessor, application specific integrated circuit (ASIC) and field programmable gate array (FPGA). In addition or alternatively, the processing unit 8 may comprise or be any other known processing means type.
- the processing unit 8 may be configured to determine the depth information of the area 4 by combining a detection result of the one or more event camera sensors 3 of two or more time frames of the subsequent time frames.
- the processing unit 8 may be configured to determine an area of interest of the area 4 by processing a detection result 3a of the one or more event camera sensors 3 of one or more time frames of the subsequent time frames.
- Figure 3 shows an example of a laser light source output of the camera system of Figures 1 and 2 and corresponding received laser light.
- the top graph of Figure 3 exemplarily shows three patterns of the different patterns 7 of laser dots that may be generated in subsequent time frames TF1, TF2, TF3 by the laser light source 2 of the camera system 1. That is, the top graph of Figure 3 shows the laser light source output over time.
- the laser light source 2 may be configured to generate different patterns 7 of laser dots in n subsequent time frames TF1, TF2, TF3, ..., TFn-1, TFn.
- the number n depends on the desired resolution of the camera system and the number (e.g. minimum number) of laser dots 7a of the different patterns 7 of laser dots. The greater the desired resolution, the greater may be the number n of time frames.
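As a rough illustration of this relation, under the assumption that the sub-patterns are chosen to be non-overlapping (the function name is hypothetical):

```python
import math

def frames_for_full_resolution(main_pattern_dots, dots_per_pattern):
    # Lower bound on n: the sparse sub-patterns must jointly cover every
    # dot of the main pattern to reach the full resolution.
    return math.ceil(main_pattern_dots / dots_per_pattern)

# e.g. an 80-dot main pattern swept with 15-dot sub-patterns:
# frames_for_full_resolution(80, 15) -> 6
```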
- the bottom graph of Figure 3 exemplarily shows the received laser light 9 (i.e. the received laser dots) caused by the respective pattern of the different patterns 7 of laser dots lighting the area 4.
- the bottom graph of Figure 3 shows for the time frames TF1, TF2 and TF3 the received reflected laser dots that are reflected in a respective time frame as a result of lighting the area 4 in the respective time frame with the respective pattern of the different patterns 7 of laser dots (shown in the top graph of Figure 3).
- the number of the laser dots 7a of the different patterns 7 of laser dots may be equal to a number of laser dots in a range between 0.1% and 25% of the number of laser dots of the main pattern (not shown in Figure 3).
- the different patterns 7 of laser dots are sparser compared to the main pattern of laser dots. This reduces the amount of acquired data providable by the one or more event camera sensors 3 as detection result at a time frame, when the one or more event camera sensors 3 detect the reflected laser light caused by lighting the area 4 with a respective pattern of the different patterns 7 of laser dots in the time frame.
- the number of laser dots 7a of two or more patterns of the different patterns 7 of laser dots may be different to each other.
- the time Δt between two consecutive time frames of the subsequent time frames TF1, TF2, ..., TFn-1, TFn may be in a range between 100 nanoseconds and 10 milliseconds.
- the laser light source 2 may be configured to generate the different patterns 7 of laser dots with a time frame frequency in a range between 100 Hz and 10 MHz. Therefore, the laser light source 2 may be configured to emit or project the different patterns 7 of laser dots to the area 4 (detecting scene) in a fast sweeping manner. This allows reducing the latency of the camera system 1 for determining the depth information of the area 4.
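The stated period and frequency figures are simply reciprocal, as the following small helper illustrates (the function name is an assumption for illustration):

```python
def time_frame_frequency_hz(frame_period_s):
    """Reciprocal relation between inter-frame period and frame frequency:
    100 ns -> 10 MHz, 10 ms -> 100 Hz."""
    return 1.0 / frame_period_s
```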
- the distribution of the laser dots 7a of the different patterns 7 of laser dots in time domain and/or spatial domain may be pseudo random or uniform.
- at least one of the different patterns 7 of laser dots may comprise laser dots 7a that are uniformly distributed in time domain and/or spatial domain.
- the rest of the different patterns 7 of laser dots may comprise laser dots 7a that are pseudo randomly (may be referred to as randomly) distributed in time domain and/or spatial domain.
- a pseudo random distribution provides the camera system 1 robustness to reject noise and patterned interference by probability. In case non-uniform sensing or detection of the area 4 is desired (e.g. denser sensing of an area of interest), the different patterns 7 of laser dots may be designed in a non-uniform manner in time domain and/or spatial domain.
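One conceivable way to obtain such pseudo random sparse patterns is a seeded pseudo random subset selection per time frame; the following sketch is illustrative only, and all names and parameter values in it are assumptions:

```python
import random

def sparse_patterns(main_pattern, fraction=0.1, n_frames=10, seed=42):
    """Yield one pseudo random subset of the main pattern per time frame.
    Seeding the PRNG makes the dot sequence reproducible on the receiver
    side, which is what lets noise and patterned interference be rejected
    by probability."""
    rng = random.Random(seed)
    k = max(1, int(fraction * len(main_pattern)))
    for _ in range(n_frames):
        yield rng.sample(main_pattern, k)

main = [(x, y) for x in range(10) for y in range(8)]  # 80-dot main pattern
for frame, dots in enumerate(sparse_patterns(main)):
    pass  # drive the laser light source with `dots` in time frame `frame`
```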
- the camera system 1, e.g. the laser light source 2, may be configured to adapt the different patterns 7 of laser dots during the operation of the camera system 1.
- the camera system 1 may be configured to repeat one or more patterns of the different patterns 7 of laser dots at a same XY position (i.e. same spatial distribution). This allows improving the signal to noise ratio (SNR) by accumulation. In other words, this allows extending the range of the camera system 1, e.g. of the one or more event camera sensors, by increasing the receiving SNR.
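For uncorrelated noise, averaging k repetitions of the same pattern improves the SNR roughly with the square root of k; a minimal sketch of this rule of thumb follows (the exact gain depends on the noise statistics and is not specified by the disclosure):

```python
import math

def accumulated_snr(single_frame_snr, repetitions):
    # For uncorrelated noise, averaging k repetitions of the same pattern
    # at the same XY position improves the receiving SNR by about sqrt(k).
    return single_frame_snr * math.sqrt(repetitions)

# accumulated_snr(2.0, 4) -> 4.0, i.e. repeating a pattern four times
# roughly doubles the SNR and thereby extends the usable range.
```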
- the laser light source 2 may be configured to generate one or more patterns of the different patterns 7 of laser dots such that the laser dots 7a of the one or more patterns are, in an area of interest of the area 4, more dense in time domain and/or spatial domain (not shown in Figure 3).
- the camera system 1, e.g. the laser light source 2, may be configured to adapt the area of interest during the operation of the camera system 1. For example, in case the area of interest corresponds to a vehicle moving in the area 4, the area of interest may be moved to the right side when the vehicle plans to move or moves to the right side. As a result, the camera system 1 (e.g. the laser light source 2) may be configured to adapt the different patterns 7 of laser dots according to the adapted area of interest.
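A pattern that is denser in an area of interest could, for example, be obtained by weighting the selection probability of main-pattern dots inside the area of interest; the following sketch and all its names and values are assumptions for illustration:

```python
import random

def roi_weighted_pattern(main_pattern, roi, k=12, roi_weight=5.0, seed=0):
    """Pick k dots of the main pattern, sampling dots inside the area of
    interest `roi` = (x0, y0, x1, y1) roi_weight times more often."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = roi
    weights = [roi_weight if x0 <= x <= x1 and y0 <= y <= y1 else 1.0
               for x, y in main_pattern]
    chosen = set()
    while len(chosen) < min(k, len(main_pattern)):
        # random.choices samples with replacement, so deduplicate and
        # keep drawing until k distinct dots are collected.
        chosen.update(rng.choices(range(len(main_pattern)),
                                  weights=weights, k=k))
    return [main_pattern[i] for i in sorted(chosen)[:k]]
```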
- the one or more event camera sensors 3 may be configured to detect a duration and/or shape of the laser dots 7a of the different patterns 7 of laser dots with a corresponding time and polarity resolution.
- Figures 4 and 5 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure.
- the laser light sources of Figures 4 and 5 are examples of the laser light source 2 of the camera systems 1 of Figures 1 and 2.
- the laser light sources of Figures 4 and 5 are examples of the laser light source of the camera system according to the first aspect of the present disclosure. Therefore, the description of the camera system according to the first aspect of the disclosure is correspondingly valid for the laser light sources of Figures 4 and 5.
- the laser light source 2 may comprise a laser array 41 or 51 comprising two or more laser elements.
- the laser array 41 or 51 may be configured to selectively turn on and off the laser elements for generating the different patterns of laser dots (not shown in Figure 4).
- the laser array 41 or 51 may be a pulsed laser array.
- the laser array 41 or 51 may be a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; or a photonic crystal surface emitting laser (PCSEL) array.
- the laser array 41 or 51 may be any other known laser array type.
- the laser light source 2 may further comprise a rotation prism 42.
- the laser array 41 may be configured to emit laser light 5 (e.g. one or more laser dots) to the rotation prism 42.
- the rotation prism 42 may be configured to generate, using the emitted laser light 5 (e.g. the one or more emitted laser dots), the different patterns 7 of laser dots by rotating accordingly.
- the rotation prism 42 may be a fast rotation prism.
- the rotation prism 42 may rotate and/or tilt according to a certain angle such that one or more laser dots emitted or projected by the laser array 41 are moving with different trajectories in a projection plane between consecutive time frames of the subsequent time frames (not shown in Figure 4). This allows generating the different patterns 7 of laser dots for the subsequent time frames, with which the area 4 is lighted or illuminated in the subsequent time frames.
- the laser light source 2 may be configured to turn off in a time frame two or more laser dots of a respective pattern of the different patterns 7 of laser dots, when the two or more laser dots are neighboring to each other. This allows avoiding ambiguity and self-interference.
- the laser light source 2 comprises a diffractive optical element 53.
- the laser array 51 may be configured to emit one or more laser dots to the diffractive optical element 53.
- the diffractive optical element 53 may be configured to split up the one or more emitted laser dots for generating the different patterns of laser dots.
- the laser array 51 may be arranged on a substrate or submount 50 for electrical connection. This is only by way of example and does not limit the present disclosure.
- the laser array 51 may be arranged on the submount 50 using one or more solder bumps (not shown in Figure 5).
- the laser array 51 may emit four laser dots, wherein the diffractive optical element 53 may split up each of the four laser dots into five laser dots.
- the diffractive optical element 53 may be configured to split up an optical beam in multiple beams (e.g. five beams) to generate a pattern of the different patterns of laser dots.
- the laser light source 2, e.g. the laser array 51 and the diffractive optical element 53, may generate a pattern of the different patterns of laser dots that comprises 20 laser dots.
- the aforementioned number of laser dots emittable by the laser array 51 and the number of laser dots into which each emitted laser dot may be split up by the diffractive optical element 53 are only by way of example and do not limit the present disclosure.
- the number of laser dots of the different patterns of laser dots may be hundreds or thousands depending on the optical system design.
- Figures 6 (A) and 6 (B) show an example of a laser array 51 and two different patterns 7 of laser dots that may be generated by the laser light source 2 of Figure 5 in case the diffractive optical element 53 is configured to split-up each emitted laser dot into five laser dots. This is only by way of example and does not limit the present disclosure.
- Figure 6 schematically shows two examples of different patterns of laser dots that may be generated by a laser light source of a camera system according to an embodiment of the present disclosure.
- the laser array 51 is a VCSEL laser array comprising sixteen VCSEL elements (i.e. sixteen laser elements).
- the following description is correspondingly valid, in case the laser array 51 is a different laser array type (e.g. EEL array or PCSEL array).
- each of Figures 6 (A) and 6 (B) shows a top view of the VCSEL array, wherein it is assumed that three VCSEL elements of the sixteen VCSEL elements are turned on (i.e. emit laser light and, thus, a laser dot) for generating a pattern of the different patterns 7 of laser dots.
- the number of VCSEL elements being turned on for generating a pattern of laser dots is only by way of example and does not limit the present disclosure.
- the sixteen VCSEL elements 51a of the VCSEL array 51 are represented by circles, wherein the three VCSEL elements each emitting laser light and, thus, a laser dot are represented by a circle that is filled with a pattern.
- the main pattern 10 of laser dots that may be generated by the laser array 51 and, thus, the laser light source 2 is shown.
- the laser dots 10a of the main pattern 10 are represented by circles, which are either white or black.
- the main pattern 10 may be generated by the VCSEL array 51 and the diffractive optical element 53 when all sixteen VCSEL elements are turned on and, thus, each VCSEL element emits a laser dot.
- the diffractive optical element 53 may split up each laser dot into five laser dots.
- the number of laser dots 10a of the main pattern 10 of laser dots may be eighty laser dots.
- the laser dots of the pattern of laser dots that may be generated by the VCSEL array 51 and the diffractive optical element 53, when the three VCSEL elements shown on the left of Figure 6 (A) or 6 (B) are turned on and, thus, each emit a laser dot, are represented by black circles in the middle and on the right of Figures 6 (A) and 6 (B), respectively.
- the laser dots 7a of the respective pattern of the different patterns 7 of laser dots are a subset of the laser dots 10a of the main pattern 10 of laser dots that may be generated by the laser light source 2. Since different VCSEL elements are turned on in the example of Figure 6 (A) compared to the example of Figure 6 (B), the pattern 7 of laser dots generated in the example of Figure 6 (A) differs from the pattern 7 of laser dots generated in the example of Figure 6 (B).
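The relationship between turned-on laser elements, the diffractive split and the resulting sub-pattern can be illustrated as follows; the coordinates and the emitter pitch are hypothetical, and only the 16-element array and the five-way split are taken from the example above:

```python
# Hypothetical dot coordinates: a 4x4 VCSEL grid whose emitted dots are
# each split five ways in X by the diffractive optical element 53.
emitters = [(x, y) for y in range(4) for x in range(4)]   # 16 VCSEL elements
doe_offsets = [(-2, 0), (-1, 0), (0, 0), (1, 0), (2, 0)]  # five-way split

def pattern(on_elements):
    """Dots produced when only the elements with the given indices are on.
    All 16 elements on -> the 80-dot main pattern 10; any subset of
    elements -> a sub-pattern whose dots are a subset of those 80 dots."""
    return [(ex * 6 + dx, ey + dy)        # 6-unit emitter pitch (assumed)
            for i in on_elements
            for ex, ey in (emitters[i],)
            for dx, dy in doe_offsets]

main_pattern = pattern(range(16))         # 80 dots
sub_pattern = pattern([0, 5, 10])         # 15 dots, three elements on
assert set(sub_pattern) <= set(main_pattern)
```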
- the VCSEL elements may be selectively turned on.
- the laser light source 2 may optionally comprise a lens array 52 (e.g. micro lens array) for collimating the laser light (i.e. one or more laser dots) that may be emitted by the laser array 51.
- Figures 7 to 9 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure.
- the laser light sources of Figures 7 to 9 are examples of the laser light source 2 of the camera systems 1 of Figures 1 and 2.
- the laser light sources of Figures 7 to 9 are examples of the laser light source of the camera system according to the first aspect of the present disclosure. Therefore, the description of the camera system according to the first aspect of the disclosure is correspondingly valid for the laser light sources of Figures 7 to 9.
- the laser light source 2 may comprise a laser array 71, 81 or 91 comprising two or more laser elements.
- the laser array 71, 81 or 91 may be configured to selectively turn on and off the laser elements for generating the different patterns 7 of laser dots.
- the laser array 71, 81 or 91 may be a pulsed laser array.
- the laser array 71, 81 or 91 may be a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; or a photonic crystal surface emitting laser (PCSEL) array.
- the laser array 71, 81 or 91 may be any other known laser array type.
- the laser array 71, 81 or 91 may be arranged on a substrate or submount 70, 80 or 90 for electrical connection. This is only by way of example and does not limit the present disclosure.
- the laser array 71, 81 or 91 may be arranged on the submount 70, 80 or 90 using one or more solder bumps (not shown in Figures 7 to 9).
- the laser array 71, 81 or 91 is an edge emitting laser array (EEL array).
- the following description is correspondingly valid, in case the laser array 71, 81 or 91 is a different laser array type (e.g. VCSEL array or PCSEL array).
- the active stripe of each edge emitting laser diode element of the EEL array 71, 81 or 91 is exemplarily shown and labeled by the reference sign 71a, 81a or 91a.
- the laser light source 2 may further comprise a diffractive optical element 73.
- the laser array 71 may be configured to emit one or more laser dots to the diffractive optical element 73.
- the diffractive optical element 73 may be configured to split up the one or more emitted laser dots for generating the different patterns 7 of laser dots.
- the laser light source 2 may comprise a prism array 74.
- the prism array 74 may be configured to deflect laser dots generated by the diffractive optical element 73 for generating the different patterns 7 of laser dots.
- the diffractive optical element 73 may split up an optical beam in multiple beams (e.g. five beams) to generate a pattern of the different patterns of laser dots.
- An additional prism array 74 may be inserted in the optical path to deflect the multiple light beams (generated by the diffractive optical element 73).
- the prism array 74 may be configured to deflect laser dots generated by the diffractive optical element 73 in a vertical direction (e.g. vertically downwards). This allows covering a wider field of view (FOV) by the camera system 1.
- the function of the optional prism array 74 may be achieved by the diffractive optical element 73. That is, the diffractive optical element 73 may be configured to achieve the function of the prism array 74 (in this case the prism array 74 may be not part of the laser light source 2).
- Using the prism array 74 may be advantageous for the assembly process of assembling the laser light source 2 and, thus, the camera system 1.
- one or more laser elements (e.g. multiple laser elements) of the laser array 71 may be selectively turned on at the same time, i.e. in a time frame, for generating the pattern 7 of laser dots for that time frame.
- when at least one other laser element is turned on and/or at least one of the turned-on laser elements is turned off, another pattern 7 of laser dots is generated.
- the laser array 71 may emit four laser dots, wherein the diffractive optical element 73 may split up each of the four laser dots into five laser dots.
- the laser light source 2, e.g. the laser array 71 and the diffractive optical element 73, may generate a pattern of the different patterns 7 of laser dots that comprises 20 laser dots.
- the aforementioned number of laser dots emittable by the laser array 71 and the number of laser dots into which each emitted laser dot may be split up by the diffractive optical element 73 are only by way of example and do not limit the present disclosure.
- the laser light source 2 may optionally comprise a lens array 72 (e.g. micro lens array) for collimating the laser light (i.e. one or more laser dots) that may be emitted by the laser array 71.
- in Figure 7 (B), a pattern 7 of laser dots (e.g. five laser dots) that may be generated by one laser element of the laser array 71 and the diffractive optical element 73 is shown (only by way of example).
- the laser light source 2 of Figure 8 differs from the laser light source 2 of Figure 7 in that the laser light source 2 of Figure 8 comprises one or more spatial light modulators 84 instead of the prism array 74.
- the above description of Figure 7 is correspondingly valid for the laser light source 2 of Figure 8 and in the following mainly the difference of the laser light source 2 of Figure 8 with regard to the laser light source 2 of Figure 7 is described.
- Figure 8 shows a side view of the laser light source 2.
- the laser light source 2 comprises one or more spatial light modulators 84.
- the one or more spatial light modulators 84 may be configured to deflect laser dots generated by the diffractive optical element 83 for generating the different patterns 7 of laser dots.
- An optional lens array of the laser light source for collimation is labeled by the reference sign 82.
- in case the laser light source 2 comprises two spatial light modulators 84, one spatial light modulator of the two spatial light modulators 84 may be configured to deflect a beam and, thus, laser dots in a horizontal direction and the other spatial light modulator of the two spatial light modulators 84 may be configured to deflect a beam and, thus, laser dots in a vertical direction.
- in case the laser light source 2 comprises a single spatial light modulator 84, the spatial light modulator may be configured to deflect a beam and, thus, laser dots in a horizontal direction and a vertical direction.
- the laser array 81 is turned off during switching of the one or more spatial light modulators 84.
- the laser light source 2 of Figure 9 differs from the laser light source 2 of Figure 7 in that the laser light source 2 of Figure 9 comprises a liquid crystal on silicon (LCOS) light modulator 93 with a plurality of LCOS pixels 93a instead of the diffractive optical element 73 and the prism array 74.
- Figure 9 shows a side view of the laser light source 2.
- the laser light source 2 comprises a liquid crystal on silicon (LCOS) light modulator 93 with a plurality of LCOS pixels 93a.
- the laser array 91 may be configured to emit one or more laser dots to the LCOS light modulator 93.
- the LCOS light modulator 93 may be configured to selectively change the phase of the LCOS pixels 93a for generating the different patterns 7 of laser dots.
- An optional lens array of the laser light source 2 for collimation is labeled by the reference sign 92.
- each LCOS pixel 93a of the LCOS light modulator 93 may be individually adjusted to control the phase of the light beam and, thus, one or more laser dots generated by the laser array 91.
- the LCOS light modulator 93 may be configured to generate a diffraction pattern to split an incoming beam and, thus, a laser dot into multiple output beams or laser dots.
- the LCOS light modulator 93 may be configured to adjust at the same time the direction of propagation, i.e. the direction of the multiple beams or laser dots.
- the LCOS light modulator 93 may be configured to change the light pattern shape and/or direction of propagation by changing the phase of the LCOS pixels 93a.
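As a hedged illustration of phase-based steering (the disclosure does not specify the phase profile), a linear phase ramp wrapped to 2π, i.e. a blazed grating, deflects the beam; the wavelength, pixel pitch and angle below are illustrative assumptions:

```python
import math

def blazed_phase_ramp(n_pixels, pixel_pitch_m, wavelength_m, steer_deg):
    """Per-pixel phase in radians, wrapped to [0, 2*pi), of a blazed
    grating that deflects the incoming beam by steer_deg. Re-writing this
    ramp between time frames moves the dots; other phase profiles (e.g. a
    binary grating) instead split the beam into several diffraction
    orders, i.e. several laser dots."""
    k = 2.0 * math.pi / wavelength_m
    s = math.sin(math.radians(steer_deg))
    return [(k * i * pixel_pitch_m * s) % (2.0 * math.pi)
            for i in range(n_pixels)]

# e.g. a 940 nm laser, 8 um LCOS pixels, 1 degree of deflection:
ramp = blazed_phase_ramp(256, 8e-6, 940e-9, 1.0)
```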
- the laser array 91 is turned off during changing of the phase of the LCOS pixels 93a of the LCOS light modulator 93. This allows the one or more camera sensors 3 of the camera system 1 to see each pattern of the different patterns 7 of laser dots at a fixed position in a corresponding time frame of the subsequent time frames (not moving continuously).
- the laser light source 2 may be configured to emit the different patterns 7 of laser dots in the subsequent time frames to the area 4.
- the different patterns 7 of laser dots may move over an object arranged in the area 4. This allows covering multiple positions on the object by the different patterns 7 of laser dots in the subsequent time frames.
- depth information of the object may be detected by the camera system 1 comprising the laser light source 2.
- the sparsity of the different patterns of laser dots, which may be generated by the laser light source of the camera system, provides the following advantages.
- the camera system may project or emit only a fraction or subset of the laser dots of the main pattern and, thus, acquire and process a fraction of reflected laser dots compared to a scenario in which the area is lighted by the main pattern of laser dots. This is much easier to accomplish both for the transmitter side (i.e. the laser light source) and the receiver side (i.e. the one or more event camera sensors) of the camera system.
- the one or more event camera sensors are advantageous, because an event camera sensor may output only the pixels that received a photon intensity change induced by the laser dots of a pattern emitted by the laser light source. This saves a lot of redundancy compared to camera sensors (e.g. conventional CMOS image sensors) outputting the intensity level of received or detected photons for each pixel. In other words, the one or more event camera sensors allow reducing an output and processing redundancy. This allows improving the system speed of the camera system.
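The magnitude of this redundancy saving can be illustrated with a back-of-the-envelope comparison; the byte sizes per event and per pixel are assumptions for the sketch:

```python
def event_to_frame_data_ratio(width, height, n_events,
                              bytes_per_event=8, bytes_per_pixel=2):
    # Only pixels hit by reflected laser dots fire, so with sparse dot
    # patterns n_events stays tiny compared to width * height.
    return (n_events * bytes_per_event) / (width * height * bytes_per_pixel)

# e.g. 20 reflected dots on a 1280x720 sensor:
# event_to_frame_data_ratio(1280, 720, 20) -> ~8.7e-05 (over 10000x less)
```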
- the camera system of the present disclosure allows configuring the different patterns of laser dots (used for lighting the area) in terms of time and/or spatial domain randomness, uniformity and repetition. This allows the camera system to be adapted to various flexibility requirements in terms of robustness, area of interest and range extension. Further, the camera system of the present disclosure allows low-cost QD thin film integrated CMOS image sensor technology to be used for implementing the one or more event camera sensors.
- the laser light source may be implemented using basic optic elements.
- the camera system is easy to integrate and may be implemented with low costs.
- Figure 10 shows a flow diagram of an example of a method according to an embodiment of the present disclosure for determining depth information of an area.
- the method of Figure 10 is an example of the method according to the second aspect of the disclosure.
- the description of the method according to the second aspect is correspondingly valid for the method of Figure 10.
- the method of Figure 10 may be performed by the camera system of Figures 1 and 2.
- the above description of Figures 1 to 9 is correspondingly valid for the method of Figure 10.
- the method of Figure 10 is a method for determining depth information of an area.
- the method comprises generating, by a laser light source, in subsequent time frames different patterns of laser dots for lighting the area.
- the laser light source is configured to generate a main pattern of laser dots and the laser dots of each of the different patterns are a subset of the laser dots of the main pattern.
- the method comprises detecting, by one or more event camera sensors, reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
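A minimal control-loop sketch of these two method steps, together with the processing described above, is given below; all object and method names are hypothetical placeholders, not part of the disclosure:

```python
def determine_depth(laser_light_source, event_camera, processing_unit,
                    n_frames):
    """Minimal control loop for the two method steps: per time frame,
    light the area with one sparse pattern of the main pattern and fold
    the detected intensity changes into the accumulated point cloud."""
    point_cloud = []
    for _ in range(n_frames):
        pattern = laser_light_source.next_pattern()  # subset of main pattern
        laser_light_source.emit(pattern)
        events = event_camera.read_events()          # intensity changes only
        point_cloud += processing_unit.triangulate(events, pattern)
    return point_cloud
```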
Abstract
The present disclosure relates to a camera system and method for determining depth information of an area. The camera system comprises a laser light source for lighting the area, and one or more event camera sensors for detecting reflected laser light caused by lighting the area. The laser light source is configured to generate a main pattern of laser dots. The laser light source is configured to generate in subsequent time frames different patterns of laser dots for lighting the area, wherein the laser dots of each of the different patterns are a subset of the laser dots of the main pattern. The one or more event camera sensors are configured to detect reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
Description
CAMERA SYSTEM AND METHOD FOR DETERMINING DEPTH INFORMATION OF AN AREA
TECHNICAL FIELD
The disclosure relates to a camera system and method for determining depth information of an area.
BACKGROUND
The disclosure is in the field of camera systems for determining depth information of an area. Such camera systems may be referred to as depth camera systems. Depth information of an area may comprise or be three dimensional (3D) information of the area.
SUMMARY
The following considerations are made by the inventors:
A camera system for determining depth information of an area may be used in consumer electronic devices (e.g. smartphones, tablets, augmented reality and virtual reality (ARVR) devices), in automotive applications (e.g. autonomous driving and advanced driver assistance systems (ADAS)), as well as in robots (e.g. industrial robots, medical robots and domestic robots). An active depth camera system employing light sources may be referred to as a light detection and ranging (Lidar) system.
There are still many technical challenges for the development of a depth camera system employing light sources (Lidar system), in particular for serving mission critical applications, where low latency, large sensing area (also referred to as field of view (FOV)), and robustness are needed. In addition, costs and eye safety aspects are also vital with regard to a practical usage of a Lidar system.
For addressing large FOV requirements, a Lidar system may employ a scanning mechanism (e.g. using prisms, mirrors, microelectromechanical systems (MEMS), optical phased arrays (OPA), etc.) for sweeping the XY-dimensions (i.e. lighting the XY-dimensions) in a sequential order, either in a one dimensional (1D) or a two dimensional (2D) manner. Such a scanning mechanism is usually slow. For example, a Lidar system employing such a scanning mechanism may achieve a refresh rate of about 30 Hz.
Moreover, the receiver and the data processing unit of a Lidar system may also be a bottleneck for lower latency of the Lidar system. A Lidar system employing the above outlined scanning mechanism (e.g. with 30 Hz refresh rate) may have the duty of processing e.g. more than 1 million point clouds per
second with a throughput limit in data processing and transmission. There may be a lot of redundancy in the acquired data.
According to an alternative, Lidar systems may employ indirect or direct time of flight (TOF) working principles and scanning mechanisms (e.g. using prisms, mirrors, OPA, MEMS, etc.). For achieving lower latency with regard to determining depth information, such a Lidar system may use a more aggressive scanning speed (e.g. by employing electrical scanning and/or multi-beam scanning) and a planar two dimensional (2D) receiver. The planar 2D receiver may be for example a single-photon avalanche diode (SPAD) sensor array based receiver or a CMOS image sensor based indirect time of flight receiver (CIS based iTOF receiver). The term “CMOS image sensor” stands for “complementary metal-oxide-semiconductor image sensor” and may be abbreviated by “CIS”. Such Lidar systems according to the aforementioned alternative may be referred to as flash Lidar systems. Such Lidar systems (i.e. flash Lidar systems) may only improve the system latency in a gradual way, because the latency is constrained not only by the scanning mechanism, but also by the huge data acquisition and processing burden at the receiver.
In view of the above, the present disclosure aims to improve a camera system for determining depth information of an area. In particular, it may be an object to provide a camera system for determining depth information of an area, the camera system comprising a light source (i.e. a Lidar system), wherein the camera system is improved with regard to lower latency. An object may be to provide a camera system (e.g. Lidar system) for determining depth information of an area that is improved with regard to lower latency, while maintaining at least one of the following: a performance of detecting ranges and accuracy, FOVs, robustness and costs. For example, it is an object to provide a camera system (e.g. Lidar system) for determining depth information of an area, wherein the camera system achieves a low (e.g. ultra low) latency while maintaining at least one of the following: a performance of detecting ranges and accuracy, FOVs, robustness and costs.
The objective is achieved by the subject-matter of the enclosed independent claims. Advantageous implementations are further defined in the dependent claims.
A first aspect of the disclosure provides a camera system for determining depth information of an area. The camera system comprises a laser light source for lighting the area, and one or more event camera sensors for detecting reflected laser light caused by lighting the area. The laser light source is configured to generate a main pattern of laser dots. The laser light source is configured to generate in subsequent time frames different patterns of laser dots for lighting the area, wherein the laser dots of each of the different patterns are a subset of the laser dots of the main pattern. The one or more event camera sensors
are configured to detect reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
In other words, the first aspect proposes to light an area, of which depth information is to be determined, with different sparse laser dot patterns in subsequent time frames and to use one or more event camera sensors for detecting the reflected laser light caused by lighting the area with the different sparse laser dot patterns. The different laser dot patterns are sparse compared to a main pattern of laser dots that could be generated by the laser light source, because the laser light source is configured to generate the main pattern of laser dots. The phrases “pattern of laser dots” and “laser dot pattern” may be used as synonyms.
Since the different patterns of laser dots comprise only a subset of the laser dots of the main pattern, the amount of data acquired by the one or more event camera sensors at each time frame is smaller or less compared to the amount of data when lighting the area with the main pattern. This reduces the processing burden of acquired data at each time frame compared to lighting the area with the main pattern of laser dots. Thus, less time is required for processing the acquired data (e.g. even in case the scanning speed is increased) compared to a scenario in which the area is lighted using the main pattern of laser dots. Namely, at any time frame a subset of the laser dots of the main pattern may be used by the camera system for lighting the area. As a result, a reduced amount or less laser light is reflected from the area being lighted by the respective subset of laser dots. This results in a reduced amount of data acquired by the one or more event camera sensors as a detection result of detecting the reflected laser light. The detection result of the one or more event camera sensors in a time frame may be a detected light intensity change caused by the reflected laser light in the time frame. Therefore, less data needs to be processed at a time, which reduces the burden on processing of the data by the camera system.
An event camera sensor is a type of CMOS image sensor (CIS). The terms “camera sensor” and “image sensor” may be used as synonyms and, thus, the terms “event camera sensor” and “event image sensor” may be used as synonyms. Instead of outputting the intensity level of received or detected photons for each pixel, the pixels of an event camera sensor are configured to only respond to a change of photon intensity, and the event camera sensor is configured to only provide the outputs from pixels responding to such a change, in an asynchronous manner. In other words, the one or more event camera sensors may each be configured to detect only a change of photon intensity (e.g. laser light intensity) and output the change of photon intensity as a detection result. The term “light intensity” may be used as a synonym for the term “photon intensity”. An event camera sensor may be referred to as a dynamic vision sensor (DVS). Therefore, an event camera sensor is much lower in output latency compared to a camera sensor detecting the intensity level of received or detected photons, i.e. compared to a camera sensor that outputs the light intensity detected by every pixel of the camera sensor irrespective of whether the
pixel has detected an intensity change or not. For example, the one or more event camera sensors may have a latency of 1 microsecond (1 µs).
Moreover, the output data (i.e. the detection result) of an event camera sensor is more efficient compared to a camera sensor detecting the intensity level of received or detected photons. Since the one or more event camera sensors detect only intensity changes, the data acquired as detection result of the one or more event camera sensors only comprises useful information of the area (e.g. changes in the area causing an intensity change in one or more pixels of the event camera sensor) and optionally noise. This enables much faster data processing (e.g. by using a spiking neural network processing unit). Namely, at each time frame less data needs to be processed compared to the scenario of using a camera sensor detecting the intensity level of received or detected photons. A spiking neural network processing unit is a special type of deep neural network learning processor, which is configured for asynchronous input (such as an input from an event camera sensor). A spiking neural network processing unit may achieve better computing efficiency compared to a conventional neural network processing unit (e.g. a convolutional neural network processing unit). The present disclosure is not limited to a spiking neural network processing unit or any other type of processing unit.
The number of laser dots of the main pattern may be determined by a desired resolution of the camera system. The greater the desired resolution, the greater may be the number of laser dots of the main pattern. In each time frame, the resolution of depth information determination providable by a respective pattern of the different patterns of laser dots is smaller or less compared to the resolution of depth information determination providable by the main pattern of laser dots. However, this is compensated by generating the different patterns of laser dots in the subsequent time frames. That is, after each time frame the resolution increases because in each time frame additional laser dots compared to previous time frames may be generated by the laser light source for lighting the area. After a number of time frames, the number depending on the number of laser dots of the different patterns of laser dots, the resolution achievable when lighting the area with the different patterns of laser dots in the subsequent time frames equals the resolution achievable when lighting and detecting the area with the main pattern of laser dots.
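The accumulation of resolution over the subsequent time frames can be illustrated as the union of the per-frame dot subsets (a sketch; the subset sizes in the comment are illustrative):

```python
def dots_covered(per_frame_subsets):
    # Resolution grows with the union of the dots used so far; with
    # non-overlapping 15-dot subsets of an 80-dot main pattern the
    # coverage after frames 1..6 is 15, 30, 45, 60, 75, 80 dots.
    seen = set()
    for subset in per_frame_subsets:
        seen.update(subset)
    return len(seen)
```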
In addition or alternatively, the laser light source may be configured to generate in the subsequent time frames the main pattern of laser dots for lighting the area, and the one or more event camera sensors may be configured to detect in the subsequent time frames reflected laser light caused by different patterns of laser dots of the main pattern, wherein the laser dots of each of the different patterns are a subset of the laser dots of the main pattern. Thus, the one or more event camera sensors may be configured to detect and output, at each time frame of the subsequent time frames, a subset of data of the data corresponding to the reflected light caused by lighting the area with the main pattern of laser
dots. This may result in a lower data processing amount. Lighting the area with the main pattern of laser dots in the subsequent time frames and detecting in the subsequent time frames reflected light of different patterns of laser dots being subsets of the main pattern of laser dots is equivalent to lighting the area with the different patterns of laser dots being subsets of the main pattern of laser dots and detecting in the subsequent time frames the reflected laser light caused by lighting the area with the different patterns of laser dots.
As outlined already above, processing data acquired by the camera system (when the area is lighted with the different patterns of laser dots in the subsequent time frames and the one or more event camera sensors detect the respective reflected laser light) is faster compared to processing data acquired by the camera system when the area is lighted and detected once with the main pattern of laser dots. As a result, information, e.g. depth information, about the area may already be available earlier when using the different patterns of laser dots for lighting the area compared to the scenario of using the main pattern of laser dots. Although this information may be of lower resolution compared to information, e.g. depth information, generated based on a lighting of the area with the main pattern of laser dots, the resolution of this information may already be sufficient for a basic evaluation of the area. This may be an advantage when using the camera system in a device configured to move in the area for detecting obstacles. Thus, this allows determining information, e.g. depth information, on the area similar to the human eye.
For example, when a person is in a room and an obstacle is detected by an eye of the person in a peripheral area of the field of view of the eye, where the resolution is lower and the detection of obstacles is blurred, the detection of the obstacle is sufficient for the human brain to start reacting to the obstacle. For example, the person may start giving way to the obstacle and turning the head in order to get a better view on the obstacle. In the same way, the camera system of the first aspect allows earlier detecting an obstacle in the area and, thus, reacting to the obstacle. For example, patterns of the different patterns of laser dots for lighting the area in subsequent time frames after a time frame of detecting the obstacle may be focused on an area of interest of the area, in which the obstacle is positioned.
A maximum number of laser dots of the different patterns of laser dots may be set by or may be equal to a throughput of the one or more event camera sensors. The number of laser dots of the different patterns of laser dots may be different to each other. At least two patterns of the different patterns of laser dots may comprise a different number of laser dots. Optionally, at least two patterns of the different patterns of laser dots may comprise the same number of laser dots.
The laser light source may be configured to light the area by generating the main pattern of laser dots or the different patterns of laser dots. The laser dots of the main pattern may be spatially distributed for lighting the area. That is, they do not light the same position or location in the area when the main pattern
of laser dots lights the area. Accordingly, the laser dots of the different patterns of laser dots may be spatially distributed for lighting the area. The laser light source may be configured to light the area in the subsequent time frames by generating the different patterns of laser dots in the subsequent time frames. The laser dots of the different patterns may be spatially distributed for lighting the area.
The term “dot” may be used for referring to a “laser dot”. Since the laser dots of a respective pattern of laser dots are a subset of the laser dots of the main pattern, the laser dots are spatially arranged or spatially distributed according to the main pattern. That is, the laser dots are arranged in the respective pattern of laser dots at the same position as they would be arranged in the main pattern. The main pattern of laser dots may be referred to as “pattern of laser dots” and the different patterns of laser dots may be referred to as “sub-patterns of laser dots”.
The laser dots of the main pattern may be generated in a plane. The plane may be referred to as “projection plane”. Thus, the laser dots of the different patterns of laser dots may be arranged in a plane. The different patterns of laser dots may be emitted or projected by the laser light source to the area that is to be detected or captured by the camera system. In other words, the laser light source may be configured to light the area to be detected by the camera system with the different patterns of laser dots. The term “illuminate” may be used as a synonym for the term “light”. The area may be in the field of view (FOV) of the camera system (e.g. field of view of the one or more event camera sensors). The term “scene” or “environment” may be used as a synonym for the term “area”. Each pattern of the different patterns of laser dots may light the area, such as entities (e.g. objects, persons, vehicles etc.) that are present in the area. The term “detecting area” may be used as a synonym for the term “area to be detected”.
The one or more event camera sensors may be configured to detect or capture the reflections from the area, which are caused by the laser dots of the different patterns projected or emitted to the area for lighting the area. The camera system may be configured to determine, e.g. compute, depth information of the area by processing the detection result of the one or more event camera sensors. The laser light source and the one or more event camera sensors may be positioned or arranged at different positions.
In an implementation form of the first aspect, a number of the laser dots of one or more of the different patterns of laser dots is equal to a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern.
A number of the laser dots of the different patterns of laser dots may be equal to a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern. In other words, the different patterns of laser dots are sparser compared to the main pattern of laser dots. This reduces the amount of acquired data providable by the one or more event camera sensors as detection result in a time frame,
when the one or more event camera sensors detect the reflected laser light caused by lighting the area with a respective pattern of the different patterns of laser dots in the time frame.
In an implementation form of the first aspect, the time between two consecutive time frames of the subsequent time frames is in a range between 100 nanoseconds and 10 milliseconds.
In other words, the laser light source may be configured to generate the different patterns of laser dots with a time frame frequency in a range between 100 Hz and 10 MHz. Therefore, the laser light source may be configured to emit or project the different patterns of laser dots to the area (detecting scene) in a fast sweeping manner. This allows reducing the latency of the camera system for determining the depth information of the area. The time frame frequency and, thus, the scanning speed of the camera system may be set by or may be equal to the response time of the one or more event camera sensors for capturing a light intensity change or photon intensity change (e.g. the up-pulse and/or down-pulse photon intensity change).
The fast sweeping of the different patterns of laser dots means that the laser light source may be configured to emit or project the laser dots of the respective pattern of the different patterns of laser dots and the one or more event camera sensors may be configured to detect reflected laser light caused by the laser dots at a very short time interval. For example, the time interval may be at a micro-second scale, which may correspond to the response time capability of an event camera sensor. The sparsity of the different patterns of laser dots (compared to the main pattern of laser dots) means the following. Even though the camera system is to determine or acquire depth information (e.g. 3D information) in a large field of view (FOV) of the area, at each time frame, the laser light source may be configured to emit or project only a fraction of the laser dots of the main pattern. Optionally, the laser light source may be configured to emit or project the fraction of the laser dots of the main pattern into a fraction of the area.
The laser light source may be configured to generate the different patterns of laser dots such that the different patterns are designed in a way that they traverse the whole FOV in a short time period. This allows the camera system to capture the depth information of the area and, thus, 3D scenes. Further, this allows, by probability, capturing movement of one or more entities (e.g. objects, persons, vehicles etc.) in the area with a very short latency.
The different patterns of laser dots may be referred to as dot patterns across different bursts. The different patterns of laser dots in the subsequent time frames may be considered as a sparse distribution of laser dots in the time-spatial domain.
In an implementation form of the first aspect, the laser light source is configured to repeat one or more of the different patterns of laser dots in two or more consecutive time frames of the subsequent time frames.
In other words, the laser dots of the one or more different patterns (being repeated) may be more dense in time domain.
In an implementation form of the first aspect, the laser light source is configured to generate the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are uniformly distributed in time domain and/or spatial domain.
Distribution of laser dots of a pattern of laser dots in spatial domain may be understood as following. If laser dots of a first pattern of the different patterns of laser dots and laser dots of a second pattern of the different patterns of laser dots are distributed in the spatial domain in the same way, then this may mean that the first pattern and second pattern have the same spatial density, i.e. their density in the XY domain is the same.
Distribution of laser dots of a pattern of laser dots in time domain may be understood as following. If laser dots of a first pattern of the different patterns of laser dots and laser dots of a second pattern of the different patterns of laser dots are distributed in the time domain in the same way, then this may mean that the laser dots of the first pattern and the laser dots of the second pattern may be generated the same number of times.
For achieving such a uniform distribution in time domain and/or spatial domain, the laser light source may be configured to generate the different patterns of laser dots such that a dot density of one or more patterns of the different patterns of laser dots is smaller than 15% in variation compared to the dot density of one or more other patterns of the different patterns of laser dots. That is, the laser light source may be configured to generate the different patterns of laser dots such that the dot density of one or more patterns of the different patterns of laser dots varies by less than 15% from the dot density of one or more other patterns of the different patterns of laser dots. The terms “deviation” and “deviate” may be used as synonyms for the terms “variation” and “vary”, respectively. For example, the laser light source is configured to generate the different patterns of laser dots such that the dot densities of a plurality of patterns of the different patterns of laser dots vary to each other by less than 15%. In other words, the dot density of each pattern of the plurality of patterns may vary by less than 15% from the other pattern or patterns of the plurality of patterns.
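One possible reading of this uniformity criterion, comparing each pattern's dot density against the mean density over all patterns, is sketched below; the comparison baseline is an assumption, since the disclosure compares against one or more other patterns:

```python
def density_variation_ok(patterns, area, max_variation=0.15):
    """Check that every pattern's dot density deviates by less than 15%
    from the mean dot density over all patterns (one possible reading of
    the criterion; `area` is the lighted XY area in consistent units)."""
    densities = [len(p) / area for p in patterns]
    mean = sum(densities) / len(densities)
    return all(abs(d - mean) / mean < max_variation for d in densities)
```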
In an implementation form of the first aspect, the laser light source is configured to generate the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are pseudo randomly distributed in time domain and/or spatial domain.
This has the advantage of making the camera system robust against noise and interference. This allows rejecting or preventing, by probability, noise and interference by exploiting the diversity of the different patterns of laser dots in time domain and/or spatial domain.
In an implementation form of the first aspect, the laser light source is configured to generate one or more patterns of the different patterns of laser dots such that the laser dots of the one or more patterns are, in an area of interest of the area, more dense in time domain and/or spatial domain.
For example, the laser light source is configured to generate the one or more patterns of the different patterns of laser dots such that the one or more patterns comprise, in the area of interest of the area, a dot density that is at least 20% greater compared to the rest of the area.
A pattern of laser dots comprising laser dots that are denser, in spatial domain, in the area of interest may mean that the laser dots are spatially arranged such that more laser dots of the pattern light the area of interest compared to another pattern comprising laser dots that are equally distributed in the spatial domain. A pattern of laser dots comprising laser dots that are denser, in time domain, in the area of interest may mean that the laser dots for lighting the area of interest of the area are more often repeated compared to laser dots of the pattern for lighting the rest of the area.
In an implementation form of the first aspect, the laser light source comprises a laser array comprising two or more laser elements. The laser array may be configured to selectively turn on and off the laser elements for generating the different patterns of laser dots.
In other words, the laser array may be configured to modulate its laser light emission by selectively turning on and off the laser elements. The laser array may be configured to modulate its laser light emission for generating the different patterns of laser dots. The laser array may be configured to generate the different patterns of laser dots by selectively turning on and off the laser elements.
The phrase “selectively turning on and off the laser elements” means that the laser elements may be turned on and off independently of each other. That is, each laser element may be individually turned on and off. For example, the laser array may be a vertical-cavity surface emitting laser array (VCSEL array), an edge emitting laser array (EEL array) or a photonic crystal surface emitting laser array (PCSEL array). The laser array may be any other laser array type. The laser array may be a pulsed laser
array. The terms “switch off’ and “switch on” may be used as synonyms for the terms “turn off’ and “turn on”, respectively.
In an implementation form of the first aspect, the laser array is one of the following: a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; and a photonic crystal surface emitting laser (PCSEL) array.
A laser element may be or may comprise a laser diode. The two or more laser elements of a VCSEL array may be referred to as one or more VCSEL elements. A VCSEL element may be or may comprise one or more VCSEL diodes. The two or more laser elements of a PCSEL array may be referred to as one or more PCSEL elements. A PCSEL element may be or may comprise one or more PCSEL diodes.
In an implementation form of the first aspect, the laser light source comprises a rotation prism. The laser array may be configured to emit one or more laser dots to the rotation prism, and the rotation prism may be configured to generate, using the one or more emitted laser dots, the different patterns of laser dots by rotating accordingly.
The laser array and the rotation prism may be configured to simultaneously perform the respective function for generating the different patterns of laser dots. That is, the laser array and the rotation prism may be configured such that the laser array modulates its laser light emission while the rotation prism generates, using the modulated laser light emission of the laser array (i.e. one or more emitted laser dots), the different patterns of laser dots by rotating accordingly.
In an implementation form of the first aspect, the laser light source comprises a diffractive optical element. The laser array may be configured to emit one or more laser dots to the diffractive optical element, and the diffractive optical element may be configured to split up the one or more emitted laser dots for generating the different patterns of laser dots.
The diffractive optical element may be configured to generate the different patterns of laser dots by splitting up the one or more emitted laser dots.
In an implementation form of the first aspect, the laser light source comprises a prism array. The prism array may be configured to deflect laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
The prism array may be configured to generate the different patterns of laser dots by deflecting laser dots generated by the diffractive optical element.
In an implementation form of the first aspect, the laser light source comprises one or more spatial light modulators. The one or more spatial light modulators may be configured to deflect laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
In this case, the laser light source may comprise the edge emitting laser array. That is, the laser array of the laser light source may be the edge emitting laser array. The one or more spatial light modulators may be configured to generate the different patterns of laser dots by deflecting laser dots generated by the diffractive optical element.
In an implementation form of the first aspect, the laser light source comprises a liquid crystal on silicon (LCOS) light modulator with a plurality of LCOS pixels. The laser array may be configured to emit one or more laser dots to the LCOS light modulator, and the LCOS light modulator may be configured to selectively change the phase of the LCOS pixels for generating the different patterns of laser dots.
The laser array may be configured to generate the different patterns of laser dots by selectively turning on and off the laser elements. The LCOS light modulator may be configured to generate the different patterns of laser dots by selectively changing the phase of its LCOS pixels. In other words, the LCOS light modulator may be configured to change the phase of its LCOS pixels such that the phase of the LCOS pixels may be changed independently of each other. That is, the phase of each LCOS pixel may be individually changed.
In an implementation form of the first aspect, the laser light source comprises a micro-electro-mechanical systems (MEMS) mirror array. The laser array may be configured to emit one or more laser dots to the MEMS mirror array, and the MEMS mirror array may be configured to deflect the one or more laser dots for generating the different patterns of laser dots.
In other words, optionally a MEMS mirror array may be used instead of a prism array or LCOS light modulator for generating the different patterns of laser dots.
In an implementation form of the first aspect, the camera system comprises a processing unit configured to determine the depth information of the area by processing a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
The two or more time frames of the subsequent time frames may be two or more consecutive time frames. Optionally, the processing unit may be configured to determine the depth information of the
area by processing a detection result of the one or more event camera sensors in each of the subsequent time frames.
The processing unit may comprise or may be at least one of a controller, microcontroller, processor, microprocessor, application specific integrated circuit (ASIC) and field programmable gate array (FPGA). In addition or alternatively, the processing unit may comprise or be any other known processing means type.
The processing unit may be a deep learning neural network processing unit, e.g. a spiking neural network processing unit. Alternatively, the processing unit may be a convolutional neural network processing unit or any other type of processing unit.
In an implementation form of the first aspect, the processing unit is configured to determine an area of interest of the area by processing a detection result of the one or more event camera sensors of one or more time frames of the subsequent time frames.
The processing unit may be configured to inform the laser light source on the area of interest. The processing unit may be configured to inform a control unit for controlling the laser light source on the area of interest. The processing unit may be or may be part of the control unit for controlling the laser light source. The control unit may be configured to control the laser light source using information on the area of interest. The control unit may be configured to control the laser light source to generate one or more patterns of the different patterns of laser dots such that the laser dots of the one or more patterns are, in the area of interest, more dense in time domain and/or spatial domain.
The control unit may comprise or may be at least one of a controller, microcontroller, processor, microprocessor, application specific integrated circuit (ASIC) and field programmable gate array (FPGA). In addition or alternatively, the control unit may comprise or be any other known control means type.
The processing unit may be configured to perform a sliding window type processing of the detection result of the one or more event camera sensors (i.e. the output of the one or more event camera sensors). The processing unit may be configured to perform a stream-based processing (e.g. signal processing) of the output (i.e. detection result) of the one or more event camera sensors. The processing unit may be configured to perform a three-dimensional (3D) reconstruction of the area to be detected by the camera system using the processing result of processing the detection result of the one or more event camera sensors. That is, the processing unit may be configured to perform a 3D reconstruction of the area using determined depth information of the area. For example, the processing unit may be configured to
produce or generate a depth map (depth information map) of the area by processing the output of the one or more event camera sensors. In addition or alternatively, the processing unit may be configured to detect and optionally track one or more objects of the area (e.g. moving in the area) using the processing result of processing the detection result of the one or more event camera sensors.
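By way of illustration only (not part of the disclosure itself), the sliding window type processing mentioned above may be sketched as follows in Python; the event tuple layout (x, y, timestamp, polarity) and the window length of four time frames are assumptions made for this sketch.

```python
from collections import deque

# Illustrative sketch only: sliding window, stream-based processing of the
# event camera sensor output. The event tuple layout and the window length
# are assumptions for this sketch.

class SlidingWindowProcessor:
    def __init__(self, window_frames=4):
        # Keeps the detection results of the last `window_frames` time frames;
        # the oldest frame is dropped automatically when a new one arrives.
        self.window = deque(maxlen=window_frames)

    def on_time_frame(self, frame_events):
        """frame_events: iterable of (x, y, timestamp, polarity) events
        detected by the event camera sensor in one time frame."""
        self.window.append(list(frame_events))
        return self.merged_events()

    def merged_events(self):
        # Combine the detection results of the buffered time frames. A real
        # system would triangulate each event against the emitted pattern
        # (see the triangulation sketch further below) and update a 3D point
        # cloud or depth map of the area.
        return [event for frame in self.window for event in frame]
```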
In an implementation form of the first aspect, the processing unit is configured to determine the depth information of the area by combining a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
The two or more time frames of the subsequent time frames may be two or more consecutive time frames. Optionally, the processing unit is configured to determine the depth information of the area by combining a detection result of the one or more event camera sensors of each of the subsequent time frames.
In an implementation form of the first aspect, the one or more event camera sensors are one or more quantum dots (QD) thin film integrated CMOS image sensors configured for a wavelength range of short wavelength infrared (SWIR).
The term “CMOS image sensor” stands for “complementary metal-oxide-semiconductor image sensor”. The term “CMOS image sensor” may be abbreviated by “CIS”. For example, the wavelength range of SWIR may be equal to or greater than 1400 nm or 1550 nm. The laser light source may be configured to emit laser light of a corresponding wavelength range. The laser light source may be configured to emit laser light of SWIR. For example, the laser light source may be configured to emit laser light having a wavelength equal to or greater than 1400 nm or 1550 nm. The SWIR allows for better eye safety. The QD thin film integrated CIS is a low-cost technology for providing a SWIR sensor. QD thin film integrated CIS may achieve a very high quantum efficiency (e.g. > 60%), while inheriting the advantages of CIS in terms of scalability (e.g. large pixel resolutions, small pixel pitch, and very low costs). For the reasons outlined above, for achieving a low latency, the camera system of the first aspect does not need a high speed of processing the acquired data, as is the case for determining depth information using time of flight (TOF) processing (e.g. using single photon avalanche diodes, SPAD) or indirect time of flight (iTOF) processing (e.g. using frequency modulated continuous wave, FMCW). Therefore, QD thin film integrated CIS, which are too slow for TOF and iTOF processing, may be used as the one or more event camera sensors of the camera system of the first aspect. That is, QD thin film integrated CIS may maintain a sufficient speed for determining the depth information using the different patterns of laser dots and the one or more event camera sensors.
In order to achieve the camera system according to the first aspect of the disclosure, some or all of the implementation forms and optional features of the first aspect, as described above, may be combined with each other.
A second aspect of the disclosure provides a method for determining depth information of an area. The method comprises generating, by a laser light source, in subsequent time frames different patterns of laser dots for lighting the area. The laser light source is configured to generate a main pattern of laser dots and the laser dots of each of the different patterns are a subset of the laser dots of the main pattern. Further, the method comprises detecting, by one or more event camera sensors, reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
The above description of the camera system according to the first aspect is correspondingly valid for the method of the second aspect. The laser light source generating the different patterns of laser dots may be the laser light source of the camera system according to the first aspect. The one or more event camera sensors detecting the reflected light may be the one or more camera sensors of the camera system according to the first aspect. The method according to the second aspect of the disclosure may be performed by the camera system according to the first aspect of the disclosure. The description of the method of the second aspect may be correspondingly valid for the camera system according to the first aspect.
In an implementation form of the second aspect, a number of the laser dots of one or more of the different patterns of laser dots equals a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern.
In an implementation form of the second aspect, the time between two consecutive time frames of the subsequent time frames is in a range between 100 nanoseconds and 10 milliseconds.
In an implementation form of the second aspect, the method comprises repeating, by the laser light source, one or more of the different patterns of laser dots in two or more consecutive time frames of the subsequent time frames.
In an implementation form of the second aspect, the method comprises generating, by the laser light source, the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are uniformly distributed in time domain and/or spatial domain.
In an implementation form of the second aspect, the method comprises generating, by the laser light source, the different patterns of laser dots such that the laser dots of one or more patterns of the different patterns of laser dots are pseudo randomly distributed in time domain and/or spatial domain.
In an implementation form of the second aspect, the method comprises generating, by the laser light source, one or more patterns of the different patterns of laser dots such that the laser dots of the one or more patterns are, in an area of interest of the area, more dense in time domain and/or spatial domain.
In an implementation form of the second aspect, the laser light source comprises a laser array comprising two or more laser elements. The method may comprise selectively turning on and off the laser elements of the laser array for generating the different patterns of laser dots.
In an implementation form of the second aspect, the laser array is one of the following: a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; and a photonic crystal surface emitting laser (PCSEL) array.
In an implementation form of the second aspect, the laser light source comprises a rotation prism. The method may comprise emitting, by the laser array, one or more laser dots to the rotation prism. The method may further comprise generating, by the rotation prism, using the one or more emitted laser dots the different patterns of laser dots by rotating accordingly.
In an implementation form of the second aspect, the laser light source comprises a diffractive optical element. The method may comprise emitting, by the laser array, one or more laser dots to the diffractive optical element. The method may further comprise splitting up, by the diffractive optical element, the one or more emitted laser dots for generating the different patterns of laser dots.
In an implementation form of the second aspect, the laser light source comprises a prism array. The method may comprise deflecting, by the prism array, laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
In an implementation form of the second aspect, the laser light source comprises one or more spatial light modulators. The method may comprise deflecting, by the one or more spatial light modulators, laser dots generated by the diffractive optical element for generating the different patterns of laser dots.
In an implementation form of the second aspect, the laser light source comprises a liquid crystal on silicon (LCOS) light modulator with a plurality of LCOS pixels. The method may comprise emitting, by the laser array, one or more laser dots to the LCOS light modulator. The method may further comprise
selectively changing the phase of the LCOS pixels of the LCOS light modulator for generating the different patterns of laser dots.
In an implementation form of the second aspect, the laser light source comprises a micro-electro-mechanical systems (MEMS) mirror array. The method may comprise emitting, by the laser array, one or more laser dots to the MEMS mirror array. The method may further comprise deflecting, by the MEMS mirror array, the one or more laser dots for generating the different patterns of laser dots.
In an implementation form of the second aspect, the method comprises determining, by a processing unit, the depth information of the area by processing a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
In an implementation form of the second aspect, the method comprises determining, by the processing unit, an area of interest of the area by processing a detection result of the one or more event camera sensors of one or more time frames of the subsequent time frames.
In an implementation form of the second aspect, the method comprises determining, by the processing unit, the depth information of the area by combining a detection result of the one or more event camera sensors of two or more time frames of the subsequent time frames.
In an implementation form of the second aspect, the one or more event camera sensors are one or more quantum dots (QD) thin film integrated CMOS image sensors configured for a wavelength range of short wavelength infrared (SWIR).
The method of the second aspect and its implementation forms and optional features achieve the same advantages as the camera system of the first aspect and its respective implementation forms and respective optional features.
In order to achieve the method according to the second aspect of the disclosure, some or all of the implementation forms and optional features of the second aspect, as described above, may be combined with each other.
It has to be noted that all devices, elements, units and means described in the present application could be implemented in software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application as well as the functionalities described to be performed by the various entities are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following
description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.
BRIEF DESCRIPTION OF DRAWINGS
The above described aspects and implementation forms of the present disclosure will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which
Figures 1 and 2 each show a block diagram of an example of a camera system according to an embodiment of the present disclosure for determining depth information of an area;
Figure 3 shows an example of a laser light source output of the camera system of Figures 1 and
2 and corresponding received laser light;
Figures 4 and 5 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure;
Figure 6 schematically shows two examples of different patterns of laser dots that may be generated by a laser light source of a camera system according to an embodiment of the present disclosure;
Figures 7 to 9 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure; and
Figure 10 shows a flow diagram of an example of a method according to an embodiment of the present disclosure for determining depth information of an area.
In the Figures, corresponding elements may be labeled with the same reference sign. The size of the elements of the Figures is not to scale.
DETAILED DESCRIPTION OF EMBODIMENTS
Figures 1 and 2 each show a block diagram of an example of a camera system according to an embodiment of the present disclosure for determining depth information of an area. The camera systems of Figures 1 and 2 are examples of the camera system according to the first aspect of the present disclosure. Therefore, the above description of the camera system according to the first aspect of the disclosure is correspondingly valid for the camera systems of Figures 1 and 2.
The camera system 1 of Figure 1 is a camera system for determining depth information of an area 4. In Figure 1, the area 4 is represented by a dashed rectangle in which a person is walking. This is only by way of example. That is, the area 4 may be any scene or environment. For example, the area 4 may be a part of a street, a building etc. In the area 4 there may be immobile entities (e.g. traffic light, parking
vehicle, building, furniture etc.) and/or mobile entities (e.g. driving vehicle, walking person etc.). The camera system 1 comprises a laser light source 2 for lighting the area 4, and one or more event camera sensors 3 for detecting reflected laser light 6 caused by lighting the area 4. In Figure 1, the laser light source 2 is represented by a rectangle. The one or more event camera sensors are also represented by a rectangle.
For example, the laser light source 2 may comprise or may be implemented by at least one of multibeam rotating prisms, mirrors, and spatial light modulators (SLM) with a laser array (electrically controlled laser array) such as a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; or a photonic crystal surface emitting laser (PCSEL) array. Any other known laser array type may be used. Examples of implementation forms of the laser light source 2 are schematically shown in Figures 4, 5, 7 (A) and (B), 8 and 9. Thus, the laser light source 2 is described in more detail in the following with regard to the aforementioned Figures. The one or more event camera sensors 3 may be for example one or more quantum dots (QD) thin film integrated CMOS image sensors. The one or more QD thin film integrated CMOS image sensors may be configured for a wavelength range of short wavelength infrared (SWIR). The SWIR provides the advantage of eye-safety compared to near-infrared (NIR). This is only by way of example. In addition or alternatively, the one or more event camera sensors may be implemented by any other one or more known event camera sensor types. For example, the one or more event camera sensors may be or may comprise one or more GaAs based sensors and/or one or more Ge-Si based sensors. Using QD thin film integrated CMOS image sensors is less costly compared to using GaAs based sensors and/or Ge-Si based sensors.
For example, the one or more event camera sensors 3 may be implemented based on thin film photodiode integrated CMOS image sensor technology, e.g. CQD (colloidal quantum dots). The quantum dots (QD) CMOS image sensors may be combined with event camera sensor photodiode and pixel circuitry design for implementing the one or more event camera sensors 3. Such QD CMOS image sensors may achieve a very high spatial resolution (e.g. < 3 µm pixel pitch). Further, such sensors have a high quantum efficiency (QE), may be industry scalable, and have very low costs (the costs are as low as for CMOS image sensors). At the same time such sensors may achieve a high resolution (e.g. > 1 megapixel).
The laser light source 2 is configured to generate a main pattern of laser dots (not shown in Figure 1). An example of a main pattern of laser dots is shown in Figures 6 (A) and 6 (B). As indicated in Figure 1, the laser light source 2 is configured to generate in subsequent time frames different patterns 7 of laser dots for lighting the area 4. The laser dots 7a of each of the different patterns 7 are a subset of the laser dots of the main pattern. Thus, the laser light source 2 may be configured to light the area 4 in subsequent time frames by generating the different patterns 7 of laser dots in the subsequent time frames and projecting or emitting the different patterns 7 of laser dots to the area 4 in the subsequent time
frames. That is, in each time frame of the subsequent time frames a respective pattern of the different patterns 7 of laser dots may be generated and used for lighting the area 4. For this, the laser light source 2 is configured to emit laser light 5. That is, the laser light source 2 may be configured to generate the different patterns 7 of laser dots by emitting laser light 5. In Figure 1, the different patterns 7 of laser dots are represented by oval planes, in which the laser dots 7a are exemplarily distributed. As indicated in Figure 1, the one or more event camera sensors 3 are configured to detect reflected laser light 6 caused by lighting the area 4 with the different patterns 7 of laser dots in the subsequent time frames. That is, in a respective time frame the one or more event camera sensors 3 may be configured to acquire or receive reflected laser dots 6 corresponding to the laser dots 7a of a respective pattern of the different patterns 7 of laser dots that are reflected at the area 4 (e.g. from entities of the area 4, such as objects, persons, vehicles etc.).
As indicated in Figure 1, the laser light source 2 and the one or more event camera sensors 3 may be positioned or arranged at different positions. That is, the laser light source 2 and the one or more event camera sensors 3 may be positioned at a geometric distance from each other. This is indicated in Figure
1 by the dotted line between the rectangle representing the laser light source 2 and the rectangle representing the one or more event camera sensors 3. The distance serves as a baseline for determining the depth information. The laser light source 2 and the one or more event camera sensors 3 may be arranged according to stereoscopic geometry for acquiring the depth information of the area 4.
The time between two consecutive time frames of the subsequent time frames may be in a range between 100 nanoseconds and 10 milliseconds. That is, the laser light source 2 may be configured to generate the different patterns 7 of laser dots with a time frame frequency in a range between 100 Hz and 10 MHz. Therefore, the laser light source 2 may be configured to emit or project the different patterns 7 of laser dots to the area 4 (detected scene) in a fast sweeping manner. This allows reducing the latency (e.g. to an ultra-low latency) of the camera system 1 for determining the depth information of the area 4.
For more details on the camera system 1 for determining depth information of the area 4 reference is made to the description of the camera system of the first aspect and the following description of Figures
2 to 10.
The camera system 1 of Figure 2 corresponds to the camera system 1 of Figure 1, wherein the camera system 1 of Figure 2 comprises an additional feature. Therefore, the description of Figure 1 is also valid for the camera system 1 of Figure 2. In the following, mainly the additional feature of the camera system 1 of Figure 2 is described.
As indicated in Figure 2, the camera system 1 comprises (in addition to the laser light source 2 and the one or more event camera sensors 3) a processing unit 8 configured to determine the depth information of the area 4 by processing a detection result 3a of the one or more event camera sensors 3 of two or more time frames of the subsequent time frames. In other words, the processing unit 8 may be configured to process the detection result 3a of the one or more event camera sensors 3 in order to reconstruct the area 4 or the scene to be detected by the camera system 1 in three dimensions (3D). The detection result 3a of the one or more event camera sensors 3 in a time frame may comprise or may be a change of light intensity (laser light intensity) due to the reflected laser light 6 received by the one or more event camera sensors in the time frame. The reflected laser light 6 is caused by the respective pattern of the different patterns 7 of laser dots generated and emitted, by the laser light source 2, to the area 4 in the time frame. In Figure 2, the processing unit 8 is represented by a rectangle.
The acquired reflected laser dots (i.e. the reflected laser light 6 detected by the one or more event camera sensors 3) may serve as landmarks. The processing unit 8 may be configured to process the detection result of the one or more event camera sensors 3 (i.e. a light intensity change due to the reflected laser light 6 or reflected laser dots detected by the one or more event camera sensors 3). For this, the processing unit 8 may be configured to compare an acquired XY position of reflected laser dots and corresponding time stamps of the output or detection result of the one or more event camera sensors 3 with the XY position and time stamps of the corresponding laser dots of the pattern of the different patterns 7 of laser dots that caused the reflected laser dots. The processing unit 8 may be configured to perform said comparison using one or more structured light depth sensing principle algorithms. The processing unit 8 may be configured to perform these one or more sensing and processing algorithms. This allows the processing unit 8 to reconstruct and update information (e.g. depth information) of the area 4 as three dimensional (3D) scenes in point clouds. In addition or alternatively, this allows the processing unit 8 to perform detection, recognition, and/or movement tracking tasks with regard to objects of the area 4. That is, the processing unit 8 may be configured to reconstruct and update information (e.g. depth information) of the area 4 as three dimensional (3D) scenes in point clouds. The processing unit 8 may be configured to perform detection, recognition, and/or movement tracking tasks with regard to objects of the area 4.
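By way of illustration only, the structured light depth sensing principle mentioned above may be sketched as follows; the rectified geometry, the focal length of 600 pixels and the baseline of 0.05 m are assumed values for this sketch, not values stated in the disclosure.

```python
# Illustrative sketch only: depth of a reflected laser dot from the disparity
# between its acquired x position and the x position of the corresponding
# emitted dot, assuming a rectified setup with focal length f (in pixels)
# and baseline b (in metres): z = f * b / d.

def triangulate(x_acquired, x_emitted, f_px=600.0, baseline_m=0.05):
    d = abs(x_acquired - x_emitted)  # disparity in pixels
    if d == 0:
        return float("inf")          # no disparity: dot effectively at infinity
    return f_px * baseline_m / d     # depth in metres

# Example: a disparity of 12 pixels yields 600 * 0.05 / 12 = 2.5 m.
assert abs(triangulate(112.0, 100.0) - 2.5) < 1e-9
```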
The processing unit 8 may comprise or may be at least one of a controller, microcontroller, processor, microprocessor, application specific integrated circuit (ASIC) and field programmable gate array (FPGA). In addition or alternatively, the processing unit 8 may comprise or be any other known processing means type. The processing unit 8 may be configured to determine the depth information of the area 4 by combining a detection result of the one or more event camera sensors 3 of two or more time frames of the subsequent time frames. The processing unit 8 may be configured to determine an
area of interest of the area 4 by processing a detection result 3a of the one or more event camera sensors 3 of one or more time frames of the subsequent time frames.
For more details on the camera system 1 for determining depth information of the area 4 reference is made to the description of the camera system of the first aspect and the following description of Figures 3 to 10.
Figure 3 shows an example of a laser light source output of the camera system of Figures 1 and 2 and corresponding received laser light. In the following reference is made to the features of the camera system 1 of Figures 1 and 2.
The top graph of Figure 3 exemplarily shows three patterns of the different patterns 7 of laser dots that may be generated in subsequent time frames TF1, TF2, TF3 by the laser light source 2 of the camera system 1. That is, the top graph of Figure 3 shows the laser light source output over time. As indicated in Figure 3, the laser light source 2 may be configured to generate different patterns 7 of laser dots in n subsequent time frames TF1, TF2, TF3, ..., TFn-1, TFn. The number n depends on the desired resolution of the camera system and the number (e.g. minimum number) of laser dots 7a of the different patterns 7 of laser dots. The greater the desired resolution, the greater the number n of time frames may be.
The bottom graph of Figure 3 exemplarily shows the received laser light 9 (i.e. received laser dots) that are caused by the respective pattern 7 of laser dots lighting the area 4. For example, the bottom graph of Figure 3 shows for the time frames TFi, TF2 and TF3 the received reflected laser dots that are reflected in a respective time frame as a result of lighting the area 4 in the respective time frame with the respective pattern of the different patterns 7 of laser dots (shown in the top graph of Figure 3).
The number of the laser dots 7a of the different patterns 7 of laser dots may equal a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern (not shown in Figure 3). In other words, the different patterns 7 of laser dots are sparser compared to the main pattern of laser dots. This reduces the amount of acquired data providable by the one or more event camera sensors 3 as a detection result in a time frame, when the one or more event camera sensors 3 detect the reflected laser light caused by lighting the area 4 with a respective pattern of the different patterns 7 of laser dots in the time frame. The number of laser dots 7a of two or more patterns of the different patterns 7 of laser dots may be different to each other.
The time Δt between two consecutive time frames of the subsequent time frames TF1, TF2, ..., TFn-1, TFn may be in a range between 100 nanoseconds and 10 milliseconds. In other words, the laser light source 2 may be configured to generate the different patterns 7 of laser dots with a time frame frequency in a range between 100 Hz and 10 MHz. Therefore, the laser light source 2 may be configured to emit or project the different patterns 7 of laser dots to the area 4 (detected scene) in a fast sweeping manner. This allows reducing the latency of the camera system 1 for determining the depth information of the area 4.
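A short sanity check of the stated timing range (assuming ideal, jitter-free frame spacing) is given below.

```python
# Sanity check of the stated timing range: a frame spacing between 100 ns
# and 10 ms corresponds to a time frame frequency between 10 MHz and 100 Hz.

def frame_frequency_hz(dt_seconds):
    return 1.0 / dt_seconds

assert round(frame_frequency_hz(10e-3)) == 100          # 10 ms  -> 100 Hz
assert round(frame_frequency_hz(100e-9)) == 10_000_000  # 100 ns -> 10 MHz
```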
The distribution of the laser dots 7a of the different patterns 7 of laser dots in time domain and/or spatial domain (i.e. the temporal distribution and/or spatial distribution) may be pseudo random or uniform. Optionally, at least one of the different patterns 7 of laser dots may comprise laser dots 7a that are uniformly distributed in time domain and/or spatial domain. The rest of the different patterns 7 of laser dots may comprise laser dots 7a that are pseudo randomly (may be referred to as randomly) distributed in time domain and/or spatial domain. A pseudo random distribution provides the camera system 1 with robustness to reject noise and patterned interference by probability. In case non-uniform sensing or detection of the area 4 is desired (e.g. only a narrow front field of view of the camera system 1 and, thus, of the one or more event camera sensors 3 is desired with high resolution sensing), the different patterns 7 of laser dots may be designed in a non-uniform manner in time domain and/or spatial domain. The camera system 1, e.g. the laser light source 2, may be configured to adapt the different patterns 7 of laser dots during the operation of the camera system 1.
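By way of illustration only, a pseudo random selection of a sparse subset of the main pattern per time frame may be sketched as follows; the 80-dot main pattern and the 10% sparsity are assumed values within the disclosed range of 0.1% to 25%.

```python
import random

# Illustrative sketch only: each time frame activates a pseudo random subset
# of the dots of the main pattern. The 80-dot main pattern and the 10%
# sparsity are assumed values within the disclosed 0.1% to 25% range.

MAIN_PATTERN = [(x, y) for x in range(10) for y in range(8)]  # 80 dots

def sparse_pattern(frame_index, fraction=0.10, seed=42):
    rng = random.Random(seed + frame_index)  # reproducible per time frame
    k = max(1, int(len(MAIN_PATTERN) * fraction))
    return rng.sample(MAIN_PATTERN, k)       # subset of the main pattern

patterns = [sparse_pattern(i) for i in range(4)]  # four subsequent time frames
assert all(set(p) <= set(MAIN_PATTERN) for p in patterns)
```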
In case a range extension is desired for the camera system 1, the camera system 1 (e.g. the laser light source 2) may be configured to repeat one or more patterns of the different patterns 7 of laser dots at a same XY position (i.e. same spatial distribution). This allows increasing the signal-to-noise ratio (SNR) by accumulation. In other words, this allows extending the range of the camera system 1, e.g. of the one or more event camera sensors 3, by increasing the receiving SNR.
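Under the common assumption of uncorrelated noise (an assumption of this illustration, not a statement of the disclosure), accumulating n repetitions of the same pattern raises the receiving SNR by roughly a factor of √n:

```python
import math

# Illustrative only: with uncorrelated noise, accumulating n repetitions of
# the same pattern improves the receiving SNR by roughly sqrt(n).

def snr_gain_db(n_repeats):
    return 20 * math.log10(math.sqrt(n_repeats))

assert round(snr_gain_db(4), 1) == 6.0  # four repetitions -> about 6 dB gain
```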
The laser light source 2 may be configured to generate one or more patterns of the different patterns 7 of laser dots such that the laser dots 7a of the one or more patterns are, in an area of interest of the area 4, more dense in time domain and/or spatial domain (not shown in Figure 3). The camera system 1 (e.g. the laser light source 2) may be configured to adapt the area of interest during the operation of the camera system 1. For example, in case the area of interest corresponds to a vehicle moving in the area 4, the area of interest may be moved to the right side when the vehicle plans to move or moves to the right side. As a result, the camera system 1 (e.g. the laser light source 2) may be configured to adapt the different patterns 7 of laser dots according to the adapted area of interest.
The one or more event camera sensors 3 may be configured to detect a duration and/or shape of the laser dots 7a of the different patterns 7 of laser dots with a corresponding time and polarity resolution.
Figures 4 and 5 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure.
The laser light sources of Figures 4 and 5 are examples of the laser light source 2 of the camera systems 1 of Figures 1 and 2. Thus, the laser light sources of Figures 4 and 5 are examples of the laser light source of the camera system according to the first aspect of the present disclosure. Therefore, the description of the camera system according to the first aspect of the disclosure is correspondingly valid for the laser light sources of Figures 4 and 5.
As indicated in Figures 4 and 5, the laser light source 2 may comprise a laser array 41 or 51 comprising two or more laser elements. The laser array 41 or 51 may be configured to selectively turn on and off the laser elements for generating the different patterns of laser dots (not shown in Figure 4). The laser array 41 or 51 may be a pulsed laser array. The laser array 41 or 51 may be a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; or a photonic crystal surface emitting laser (PCSEL) array. In addition or alternatively, the laser array 41 or 51 may be any other known laser array type.
According to the example of Figure 4, the laser light source 2 may further comprise a rotation prism 42. The laser array 41 may be configured to emit laser light 5 (e.g. one or more laser dots) to the rotation prism 42. The rotation prism 42 may be configured to generate, using the emitted laser light 5 (e.g. the one or more emitted laser dots), the different patterns 7 of laser dots by rotating accordingly. The rotation prism 42 may be a fast rotation prism. The rotation prism 42 may rotate and/or tilt according to a certain angle such that one or more laser dots emitted or projected by the laser array 41 are moving with different trajectories in a projection plane between consecutive time frames of the subsequent time frames (not shown in Figure 4). This allows generating the different patterns 7 of laser dots for the subsequent time frames, with which the area 4 is lighted or illuminated in the subsequent time frames.
Optionally, the laser light source 2 may be configured to turn off in a time frame two or more laser dots of a respective pattern of the different patterns 7 of laser dots, when the two or more laser dots are neighboring to each other. This allows avoiding ambiguity and self-interference.
According to the example of Figure 5, the laser light source 2 comprises a diffractive optical element 53. The laser array 51 may be configured to emit one or more laser dots to the diffractive optical element 53. The diffractive optical element 53 may be configured to split up the one or more emitted laser dots for generating the different patterns of laser dots.
As indicated in Figure 5, the laser array 51 may be arranged on a substrate or submount 50 for electrical connection. This is only by way of example and does not limit the present disclosure. For example, the laser array 51 may be arranged on the submount 50 using one or more solder bumps (not shown in Figure 5).
For example, as shown in Figure 5, the laser array 51 may emit four laser dots, wherein the diffractive optical element 53 may split up each of the four laser dots into five laser dots. In other words, the diffractive optical element 53 may be configured to split up an optical beam into multiple beams (e.g. five beams) to generate a pattern of the different patterns of laser dots. Thus, the laser light source 2, e.g. the laser array 51 and the diffractive optical element 53, may generate a pattern of the different patterns of laser dots that comprises 20 laser dots. The aforementioned number of laser dots emittable by the laser array 51 and the number of laser dots into which each emitted laser dot may be split up by the diffractive optical element 53 are only by way of example and do not limit the present disclosure. The number of laser dots of the different patterns of laser dots may be hundreds or thousands depending on the optical system design.
Figures 6 (A) and 6 (B) show an example of a laser array 51 and two different patterns 7 of laser dots that may be generated by the laser light source 2 of Figure 5 in case the diffractive optical element 53 is configured to split up each emitted laser dot into five laser dots. This is only by way of example and does not limit the present disclosure.
Figure 6 schematically shows two examples of different patterns of laser dots that may be generated by a laser light source of a camera system according to an embodiment of the present disclosure. For the examples of Figure 6, it is assumed (only by way of example) that the laser array 51 is a VCSEL array comprising sixteen VCSEL elements (i.e. sixteen laser elements). The following description is correspondingly valid in case the laser array 51 is a different laser array type (e.g. EEL array or PCSEL array).
The left side of Figures 6 (A) and 6 (B) shows a top view of the VCSEL array, wherein it is assumed that three VCSEL elements of the sixteen VCSEL elements are turned on (i.e. emit laser light and, thus, a laser dot) for generating a pattern of the different patterns 7 of laser dots. The number of VCSEL elements being turned on for generating a pattern of laser dots is only by way of example and does not limit the present disclosure. In Figures 6 (A) and (B) the sixteen VCSEL elements 51a of the VCSEL array 51 are represented by circles, wherein the three VCSEL elements each emitting laser light and, thus, a laser dot are represented by a circle that is filled with a pattern. In the middle of Figures 6 (A) and 6 (B) the main pattern 10 of laser dots that may be generated by the laser array 51 and, thus, the laser light source 2 is shown. The laser dots 10a of the main pattern 10 are represented by circles, which
are either white or black. The main pattern 10 may be generated by the VCSEL array 51 and the diffractive optical element 53 when all sixteen VCSEL elements are turned on and, thus, each VCSEL element emits a laser dot. According to the example of Figures 6 (A) and 6 (B), assuming by way of example that the diffractive optical element 53 may split up each laser dot into five laser dots, the number of laser dots 10a of the main pattern 10 of laser dots may be eighty laser dots. The laser dots of the pattern of laser dots that may be generated by the VCSEL array 51 and the diffractive optical element 53, when the three VCSEL elements shown on the left of Figure 6 (A) or 6 (B) are turned on and, thus, each emit a laser dot, are represented by black circles in the middle and on the right of Figure 6 (A) and 6 (B), respectively.
As can be seen based on Figures 6 (A) and 6 (B), the laser dots 7a of the respective pattern of different patterns 7 of laser dots (that may be generated by the laser light source 2, i.e. the VCSEL array 51 and the diffractive optical element 53) are a subset of the laser dots 10a of the main pattern 10 of laser dots that may be generated by the laser light source 2. Since different VCSEL elements are turned on in the example of Figure 6 (A) compared to the example of Figure 6 (B), the pattern 7 of laser dots generated in the example of Figure 6 (A) differs from the pattern 7 of laser dots generated in the example of Figure 6 (B). The spatial distribution of the laser dots 7a in the pattern 7 of laser dots of the example of Figure
6 (A) (shown on the right side) is different to the spatial distribution of the laser dots 7a in the pattern 7 of laser dots of the example of Figure 6 (B). In other words, the positions of the laser dots of the pattern
7 of the example of Figure 6 (A) are different to the positions of the laser dots of the pattern 7 of the example of Figure 6 (B). As shown in the examples of Figures 6 (A) and 6 (B) the VCSEL elements may be selectively turned on.
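The dot counts of the example of Figure 6 may be verified as follows; the sixteen VCSEL elements, the five-fold split of the diffractive optical element 53 and the three turned-on elements are the example assumptions stated above.

```python
# Worked dot-count check for the example of Figure 6: sixteen VCSEL elements,
# a diffractive optical element splitting each emitted dot into five dots,
# and three elements turned on per time frame (all values as assumed above).

N_ELEMENTS = 16   # VCSEL elements of the laser array 51
DOE_SPLIT = 5     # dots per emitted dot after the diffractive optical element 53
ELEMENTS_ON = 3   # elements turned on for one pattern

main_pattern_dots = N_ELEMENTS * DOE_SPLIT  # 80 dots in the main pattern 10
pattern_dots = ELEMENTS_ON * DOE_SPLIT      # 15 dots in the generated pattern 7

assert main_pattern_dots == 80
assert pattern_dots == 15
```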
As shown in Figure 5, the laser light source 2 may optionally comprise a lens array 52 (e.g. micro lens array) for collimating the laser light (i.e. one or more laser dots) that may be emitted by the laser array 51.
Figures 7 to 9 each schematically show an example of a laser light source of a camera system according to an embodiment of the present disclosure. The laser light sources of Figures 7 to 9 are examples of the laser light source 2 of the camera systems 1 of Figures 1 and 2. Thus, the laser light sources of Figures 7 to 9 are examples of the laser light source of the camera system according to the first aspect of the present disclosure. Therefore, the description of the camera system according to the first aspect of the disclosure is correspondingly valid for the laser light sources of Figures 7 to 9.
As it is the case for the examples of Figures 4 and 5, the laser light source 2 may comprise a laser array 71, 81 or 91 comprising two or more laser elements. The laser array 71, 81 or 91 may be configured to selectively turn on and off the laser elements for generating the different patterns 7 of laser dots. The
laser array 71, 81 or 91 may be a pulsed laser array. The laser array 71, 81 or 91 may be a vertical-cavity surface emitting laser (VCSEL) array; an edge emitting laser (EEL) array; or a photonic crystal surface emitting laser (PCSEL) array. In addition or alternatively, the laser array 71, 81 or 91 may be any other known laser array type.
As indicated in Figures 7 to 9, the laser array 71, 81 or 91 may be arranged on a substrate or submount 70, 80 or 90 for electrical connection. This is only by way of example and does not limit the present disclosure. For example, the laser array 71, 81 or 91 may be arranged on the submount 70, 80 or 90 using one or more solder bumps (not shown in Figures 7 to 9).
For the examples of Figures 7 to 9 it is assumed (only by way of example) that the laser array 71, 81 or 91 is an edge emitting laser array (EEL array). The following description is correspondingly valid in case the laser array 71, 81 or 91 is a different laser array type (e.g. VCSEL array or PCSEL array). In Figures 7 to 9 the active stripe of each edge emitting laser diode element of the EEL array 71, 81 or 91 is exemplarily shown and labeled by the reference sign 71a, 81a or 91a.
Figure 7 (A) shows a top view of the laser light source 2 and Figure 7 (B) shows a side view of the laser light source 2. According to the example of Figure 7, the laser light source 2 may further comprise a diffractive optical element 73. The laser array 71 may be configured to emit one or more laser dots to the diffractive optical element 73. The diffractive optical element 73 may be configured to split up the one or more emitted laser dots for generating the different patterns 7 of laser dots. Further, the laser light source 2 may comprise a prism array 74. The prism array 74 may be configured to deflect laser dots generated by the diffractive optical element 73 for generating the different patterns 7 of laser dots. In other words, the diffractive optical element 73 may split up an optical beam into multiple beams (e.g. five beams) to generate a pattern of the different patterns of laser dots. An additional prism array 74 may be inserted in the optical path to deflect the multiple light beams (generated by the diffractive optical element 73).
For example, the prism array 74 may be configured to deflect laser dots generated by the diffractive optical element 73 in a vertical direction (e.g. vertically downwards). This allows covering a wider field of view (FOV) by the camera system 1. The function of the optional prism array 74 may be achieved by the diffractive optical element 73. That is, the diffractive optical element 73 may be configured to achieve the function of the prism array 74 (in this case the prism array 74 may not be part of the laser light source 2). Using the prism array 74 may be advantageous for the assembly process of assembling the laser light source 2 and, thus, the camera system 1. During operation, one or more laser elements (e.g. multiple laser elements) of the laser array 71 may be selectively turned on at the same time, i.e. at a time frame, for generating the pattern 7 of laser dots for a time frame. When at least one other laser
element is turned on and/or at least one of the turned on laser elements is turned off, another pattern 7 of laser dots is generated.
For example, as shown in Figure 7, the laser array 71 may emit four laser dots, wherein the diffractive optical element 73 may split up each of the four laser dots into five laser dots. Thus, the laser light source 2, e.g. the laser array 71 and the diffractive optical element 73, may generate a pattern of the different patterns 7 of laser dots that comprises 20 laser dots. The aforementioned number of laser dots emittable by the laser array 71 and the number of laser dots into which each emitted laser dot may be split up by the diffractive optical element 73 are only by way of example and do not limit the present disclosure.
As shown in Figure 7, the laser light source 2 may optionally comprise a lens array 72 (e.g. micro lens array) for collimating the laser light (i.e. one or more laser dots) that may be emitted by the laser array 71. In Figure 7 (B) a pattern 7 of laser dots (e.g. five laser dots) that may be generated by one laser element of the laser array 71 and the diffractive optical element 73 is shown (only by way of example).
The laser light source 2 of Figure 8 differs from the laser light source 2 of Figure 7 in that the laser light source 2 of Figure 8 comprises one or more spatial light modulators 84 instead of the prism array 74. The above description of Figure 7 is correspondingly valid for the laser light source 2 of Figure 8 and in the following mainly the difference of the laser light source 2 of Figure 8 with regard to the laser light source 2 of Figure 7 is described. Figure 8 shows a side view of the laser light source 2.
According to the example of Figure 8, the laser light source 2 comprises one or more spatial light modulators 84. The one or more spatial light modulators 84 may be configured to deflect laser dots generated by the diffractive optical element 83 for generating the different patterns 7 of laser dots. An optional lens array of the laser light source for collimation is labeled by the reference sign 82.
For example, in case the laser light source 2 comprises two spatial light modulators 84, one spatial light modulator of the two spatial light modulators 84 may be configured to deflect a beam and, thus, laser dots in a horizontal direction and the other spatial light modulator of the two spatial light modulators 84 may be configured to deflect a beam and, thus, laser dots in a vertical direction. In case the laser light source 2 comprises one spatial light modulator 84, the spatial light modulator may be configured to deflect a beam and, thus, laser dots in a horizontal direction and a vertical direction. Optionally, the laser array 81 is turned off during switching of the one or more spatial light modulators 84. This allows the one or more event camera sensors 3 of the camera system 1 to see each pattern of the different patterns 7 of laser dots at a fixed position in a corresponding time frame of the subsequent time frames (not moving continuously).
The laser light source 2 of Figure 9 differs from the laser light source 2 of Figure 7 in that the laser light source 2 of Figure 9 comprises a liquid crystal on silicon (LCOS) light modulator 93 with a plurality of LCOS pixels 93a instead of the diffractive optical element 73 and the prism array 74. The above description of Figure 7 is correspondingly valid for the laser light source 2 of Figure 9 and in the following mainly the difference of the laser light source 2 of Figure 9 with regard to the laser light source 2 of Figure 7 is described. Figure 9 shows a side view of the laser light source 2.
According to the example of Figure 9, the laser light source 2 comprises a liquid crystal on silicon (LCOS) light modulator 93 with a plurality of LCOS pixels 93a. The laser array 91 may be configured to emit one or more laser dots to the LCOS light modulator 93. The LCOS light modulator 93 may be configured to selectively change the phase of the LCOS pixels 93a for generating the different patterns 7 of laser dots. An optional lens array of the laser light source 2 for collimation is labeled by the reference sign 92.
For example, each LCOS pixel 93a of the LCOS light modulator 93 may be individually adjusted to control the phase of the light beam and, thus, one or more laser dots generated by the laser array 91. The LCOS light modulator 93 may be configured to generate a diffraction pattern to split an incoming beam and, thus, a laser dot into multiple output beams or laser dots. The LCOS light modulator 93 may be configured to adjust at the same time the direction of propagation, i.e. the direction of the multiple beams or laser dots. The LCOS light modulator 93 may be configured to change the light pattern shape and/or direction of propagation by changing the phase of the LCOS pixels 93a.
Optionally, the laser array 91 is turned off during changing of the phase of the LCOS pixels 93a of the LCOS light modulator 93. This allows the one or more camera sensors 3 of the camera system 1 to see each pattern of the different patterns 7 of laser dots at a fixed position in a corresponding time frame of the subsequent time frames (not moving continuously).
With regard to Figures 4, 5 and 7 to 9, the laser light source 2 may be configured to emit the different patterns 7 of laser dots in the subsequent time frames to the area 4. Thus, the different patterns 7 of laser dots may move over an object arranged in the area 4. This allows covering multiple positions on the object by the different patterns 7 of laser dots in the subsequent time frames. Thus, depth information of the object may be detected by the camera system 1 comprising the laser light source 2.
With respect to the camera systems described above with regard to Figures 1 to 9 the sparsity of the different patterns of laser dots, which may be generated by the laser light source of the camera system, provides the following advantages. At each time frame, the camera system may project or emit only a fraction or subset of the laser dots of the main pattern and, thus, acquire and process a fraction of
reflected laser dots compared to a scenario in which the area is lighted by the main pattern of laser dots. This is much easier to accomplish both for the transmitter side (i.e. the laser light source) and the receiver side (i.e. the one or more event camera sensors) of the camera system. The one or more event camera sensors are advantageous, because an event camera sensor may output only the pixels that received a photon intensity change induced by the laser dots of a pattern emitted by the laser light source. This saves a lot of redundancy compared to camera sensors (e.g. conventional CMOS image sensors) outputting the intensity level of received or detected photons for each pixel. In other words, the one or more event camera sensors allow reducing an output and processing redundancy. This allows improving the system speed of the camera system.
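By way of illustration only, the reduction of output redundancy may be quantified as follows; the sensor resolution and the number of dots per pattern are assumed values, and the one-event-per-dot output is an idealization.

```python
# Illustrative only: comparison of the per-frame output volume of a
# conventional frame-based sensor (one intensity value per pixel) with an
# idealized event camera output (one event per reflected laser dot). The
# resolution and dot count are assumed values.

WIDTH, HEIGHT = 1280, 1024
DOTS_PER_PATTERN = 15  # sparse pattern, as in the Figure 6 example

frame_based_values = WIDTH * HEIGHT    # ~1.3 million values per frame
event_based_values = DOTS_PER_PATTERN  # idealized: one event per dot

print(frame_based_values // event_based_values)  # redundancy factor ~87,000x
```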
Moreover, the camera system of the present disclosure allows a configurability of the different patterns of laser dots (used for lighting the area) in terms of time and/or spatial domain randomness, uniformity and repetition. This allows the camera system to be adaptable to various flexibility requirements in terms of robustness, area of interest and range extension. Further, the camera system of the present disclosure allows low-cost QD thin film integrated CMOS image sensor technology to be used for implementing the one or more event camera sensors.
As described with respect to Figures 4, 5 and 7 to 9, the laser light source may be implemented using basic optic elements. Thus, the camera system is easy to integrate and may be implemented with low costs.
Figure 10 shows a flow diagram of an example of a method according to an embodiment of the present disclosure for determining depth information of an area. The method of Figure 10 is an example of the method according to the second aspect of the disclosure. The description of the method according to the second aspect is correspondingly valid for the method of Figure 10. The method of Figure 10 may be performed by the camera system of Figures 1 and 2. The above description of Figures 1 to 9 is correspondingly valid for the method of Figure 10.
The method of Figure 10 is a method for determining depth information of an area. In a first step 101 the method comprises generating, by a laser light source, in subsequent time frames different patterns of laser dots for lighting the area. The laser light source is configured to generate a main pattern of laser dots and the laser dots of each of the different patterns are a subset of the laser dots of the main pattern. In a further step 102 following the first step 101, the method comprises detecting, by one or more event camera sensors, reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
For further details on the method of Figure 10, reference is made to the description of the method of the second aspect of the disclosure.
The present disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by those persons skilled in the art when practicing the claimed subject-matter, from a study of the drawings, this disclosure and the independent claims. In the claims as well as in the description the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.
Claims
1. A camera system (1) for determining depth information of an area (4), wherein the camera system (1) comprises a laser light source (2) for lighting the area (4), and one or more event camera sensors (3) for detecting reflected laser light (6) caused by lighting the area (4); wherein the laser light source (2) is configured to generate a main pattern of laser dots, the laser light source (2) is configured to generate in subsequent time frames different patterns (7) of laser dots for lighting the area (4), wherein the laser dots (7a) of each of the different patterns (7) are a subset of the laser dots of the main pattern, and the one or more event camera sensors (3) are configured to detect reflected laser light (6) caused by lighting the area (4) with the different patterns (7) of laser dots in the subsequent time frames.
2. The camera system (1) according to claim 1, wherein a number of the laser dots (7a) of one or more of the different patterns (7) of laser dots equals a number of laser dots in a range between 0.1% and 25% of a number of laser dots of the main pattern.
3. The camera system (1) according to claim 1 or 2, wherein the time between two consecutive time frames of the subsequent time frames is in a range between 100 nanoseconds and 10 milliseconds.
4. The camera system (1) according to any one of the previous claims, wherein the laser light source (2) is configured to repeat one or more of the different patterns (7) of laser dots in two or more consecutive time frames of the subsequent time frames.
5. The camera system (1) according to any one of the previous claims, wherein the laser light source (2) is configured to generate the different patterns (7) of laser dots such that the laser dots (7a) of one or more patterns of the different patterns (7) of laser dots are uniformly distributed in time domain and/or spatial domain.
6. The camera system (1) according to any one of the previous claims, wherein the laser light source (2) is configured to generate the different patterns (7) of laser dots such that the laser dots (7a) of one or more patterns of the different patterns (7) of laser dots are pseudo randomly distributed in time domain and/or spatial domain.
7. The camera system (1) according to any one of the previous claims, wherein the laser light source (2) is configured to generate one or more patterns of the different patterns (7) of laser dots such that the laser dots (7a) of the one or more patterns are, in an area of interest of the area (4), more dense in time domain and/or spatial domain.
8. The camera system (1) according to any one of the previous claims, wherein the laser light source (2) comprises a laser array (41, 51, 71, 81, 91) comprising two or more laser elements; wherein the laser array (41, 51, 71, 81, 91) is configured to selectively turn on and off the laser elements for generating the different patterns (7) of laser dots.
9. The camera system (1) according to claim 8, wherein the laser array is one of the following: a vertical-cavity surface emitting laser, VCSEL, array (51); an edge emitting laser, EEL, array (71, 81, 91); and a photonic crystal surface emitting laser, PCSEL, array.
10. The camera system (1) according to claim 8 or 9, wherein the laser light source (2) comprises a rotation prism (42); wherein the laser array (41) is configured to emit one or more laser dots to the rotation prism (42), and the rotation prism (42) is configured to generate, using the one or more emitted laser dots, the different patterns (7) of laser dots by rotating accordingly.
11. The camera system (1) according to claim 8 or 9, wherein the laser light source (2) comprises a diffractive optical element (53, 73, 83); wherein the laser array (51, 71, 81) is configured to emit one or more laser dots to the diffractive optical element (53, 73, 83), and the diffractive optical element (53, 73, 83) is configured to split up the one or more emitted laser dots for generating the different patterns (7) of laser dots.
12. The camera system (1) according to claim 11, wherein the laser light source (2) comprises a prism array (74); wherein the prism array (74) is configured to deflect laser dots generated by the diffractive optical element (73) for generating the different patterns (7) of laser dots.
13. The camera system (1) according to claim 11, wherein the laser light source (2) comprises one or more spatial light modulators (84); wherein
the one or more spatial light modulators (84) are configured to deflect laser dots generated by the diffractive optical element (83) for generating the different patterns (7) of laser dots.
14. The camera system (1) according to claim 8 or 9, wherein the laser light source (2) comprises a liquid crystal on silicon, LCOS, light modulator (93) with a plurality of LCOS pixels (93a); wherein the laser array (91) is configured to emit one or more laser dots to the LCOS light modulator (93), and the LCOS light modulator (93) is configured to selectively change the phase of the LCOS pixels (93a) for generating the different patterns (7) of laser dots.
15. The camera system (1) according to claim 8 or 9, wherein the laser light source (2) comprises a micro electronic mechanical systems, MEMS, mirror array; wherein the laser array is configured to emit one or more laser dots to the MEMS mirror array, and the MEMS mirror array is configured to deflect the one or more laser dots for generating the different patterns (7) of laser dots.
16. The camera system (1) according to any one of the previous claims, wherein the camera system (1) comprises a processing unit (8) configured to determine the depth information of the area (4) by processing a detection result (3a) of the one or more event camera sensors (3) of two or more time frames of the subsequent time frames.
17. The camera system (1) according to claim 16, wherein the processing unit (8) is configured to determine an area of interest of the area (4) by processing a detection result (3a) of the one or more event camera sensors (3) of one or more time frames of the subsequent time frames.
18. The camera system (1) according to claim 16 or 17, wherein the processing unit (8) is configured to determine the depth information of the area (4) by combining a detection result (3a) of the one or more event camera sensors (3) of two or more time frames of the subsequent time frames.
19. The camera system (1) according to any one of the previous claims, wherein the one or more event camera sensors (3) are one or more quantum dots, QD, thin film integrated CMOS image sensors configured for a wavelength range of short wavelength infrared, SWIR.
20. A method for determining depth information of an area, wherein the method comprises generating (101), by a laser light source, in subsequent time frames different patterns of laser dots for lighting the area, wherein the laser light source is configured to generate a main pattern of laser dots and the laser dots of each of the different patterns are a subset of the laser dots of the main pattern, and detecting (102), by one or more event camera sensors, reflected laser light caused by lighting the area with the different patterns of laser dots in the subsequent time frames.
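As a further illustrative sketch (assumptions, not the application's algorithm): the combination of detection results over two or more time frames described in claims 16 to 18 can be pictured as merging sparse per-frame dot measurements into one depth map, with depth per dot obtained from standard structured-light triangulation, depth = f * B / disparity. The focal length, baseline, and the dictionary-based detection format below are all hypothetical.

```python
# Hypothetical calibration values for the example only.
FOCAL_PX = 600.0    # assumed sensor focal length in pixels
BASELINE_M = 0.05   # assumed emitter-to-sensor baseline in metres

def triangulate(disparity_px: float) -> float:
    """Standard structured-light triangulation for a single dot."""
    return FOCAL_PX * BASELINE_M / max(disparity_px, 1e-6)

def combine_detections(frames: list[dict[int, float]]) -> dict[int, float]:
    """Merge per-frame {dot_id: disparity_px} detections into one depth map.

    Later frames overwrite earlier measurements of the same dot, so after
    enough frames every dot of the main pattern has a recent depth value.
    """
    depth_by_dot: dict[int, float] = {}
    for detections in frames:
        for dot_id, disparity in detections.items():
            depth_by_dot[dot_id] = triangulate(disparity)
    return depth_by_dot

# Three time frames, each detecting a different (overlapping) dot subset.
frames = [{0: 12.0, 5: 8.5}, {1: 10.2, 5: 8.6}, {2: 9.9}]
depth = combine_detections(frames)
# depth is approximately {0: 2.50, 5: 3.49, 1: 2.94, 2: 3.03} in metres
```

On this reading, repeating or overlapping patterns across frames (claim 4) simply refreshes the depth estimate of a dot that is seen more than once.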
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280090030.9A CN118647901A (en) | 2022-04-29 | 2022-04-29 | Camera system and method for determining depth information of an area |
PCT/EP2022/061563 WO2023208372A1 (en) | 2022-04-29 | 2022-04-29 | Camera system and method for determining depth information of an area |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2022/061563 WO2023208372A1 (en) | 2022-04-29 | 2022-04-29 | Camera system and method for determining depth information of an area |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023208372A1 (en) | 2023-11-02 |
Family
ID=83188505
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/061563 WO2023208372A1 (en) | Camera system and method for determining depth information of an area | 2022-04-29 | 2022-04-29 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN118647901A (en) |
WO (1) | WO2023208372A1 (en) |
- 2022-04-29: filed as CN202280090030.9A in China, published as CN118647901A (status: active, pending)
- 2022-04-29: filed as PCT/EP2022/061563 under the PCT, published as WO2023208372A1 (status: unknown)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190355138A1 (en) * | 2018-05-21 | 2019-11-21 | Facebook Technologies, Llc | Dynamic structured light for depth sensing systems |
US20220043153A1 (en) * | 2020-08-05 | 2022-02-10 | Envisics Ltd | Light Detection and Ranging |
Also Published As
Publication number | Publication date |
---|---|
CN118647901A (en) | 2024-09-13 |
Similar Documents
Publication | Title |
---|---|
JP7347585B2 (en) | Distance measuring device |
US10649072B2 | LiDAR device based on scanning mirrors array and multi-frequency laser modulation |
US10018724B2 | System and method for scanning a surface and computer program implementing the method |
CN113454421A | Solid state electronically scanned laser array with high and low side switches for increased channel |
US9285477B1 | 3D depth point cloud from timing flight of 2D scanned light beam pulses |
CN111722241B | Multi-line scanning distance measuring system, method and electronic equipment |
JP7569334B2 | Synchronized image capture for electronically scanned LIDAR systems |
GB2579689A | Improved 3D sensing |
JP2021532368A | Distributed modular solid-state lidar system |
US11754682B2 | LIDAR system with spatial beam combining |
US10679370B2 | Energy optimized imaging system with 360 degree field-of-view |
US20200284882A1 | Lidar sensors and methods for the same |
JP2023541098A | LIDAR system with variable resolution multi-beam scanning |
US20220026574A1 | Patterned illumination for three dimensional imaging |
US20230393245A1 | Integrated long-range narrow-fov and short-range wide-fov solid-state flash lidar system |
US20230176219A1 | Lidar and ambience signal fusion in lidar receiver |
US11156716B1 | Hybrid LADAR with co-planar scanning and imaging field-of-view |
US11796643B2 | Adaptive LIDAR scanning methods |
WO2023208372A1 (en) | Camera system and method for determining depth information of an area |
US20230072058A1 | Omni-view peripheral scanning system with integrated mems spiral scanner |
JP2023552698A | Imaging system |
US20210333405A1 | Lidar projection apparatus |
US11460551B2 | Virtual array method for 3D robotic vision |
US20230266450A1 | System and Method for Solid-State LiDAR with Adaptive Blooming Correction |
KR20240152320A | Solid-state LiDAR system and method with adaptive blooming correction |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22726466; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2022726466; Country of ref document: EP; Effective date: 20240828 |