CN116745638B - System, method and computer program product for generating depth image based on short wave infrared detection information


Info

Publication number
CN116745638B
CN116745638B (application CN202180086480.6A)
Authority
CN
China
Prior art keywords
photosite
detection value
frame
detection
different
Prior art date
Legal status
Active
Application number
CN202180086480.6A
Other languages
Chinese (zh)
Other versions
CN116745638A (en)
Inventor
阿里尔·达南
丹·库兹明
埃利奥·德克尔
希莱尔·希莱尔
罗尼·多布尔斯基
乌拉罕·巴卡尔
乌利尔·利维
俄梅尔·卡帕奇
纳达夫·梅拉穆德
Current Assignee
Trieye Ltd
Original Assignee
Trieye Ltd
Priority date
Filing date
Publication date
Application filed by Trieye Ltd filed Critical Trieye Ltd
Priority to CN202311779387.0A (published as CN117768801A)
Priority to CN202311781352.0A (published as CN117768794A)
Priority claimed from PCT/IB2021/062314 (published as WO2022137217A1)
Publication of CN116745638A
Application granted
Publication of CN116745638B
Legal status: Active

Abstract

A depth sensor includes: a focal plane array having photosites (PSs) pointing in different directions, each PS operable to detect light arriving from an instantaneous field of view (IFOV) of that PS; a readout set of readout circuits (ROCs), each ROC coupled by a switch to a readout group of PSs and operable to output an electrical signal indicative of the amount of light impinging on the PSs of the readout group while the readout group is connected to the respective ROC via at least one switch; a controller operable to change the states of the switches so that different ROCs of the readout set are coupled to the readout group at different times and are thereby exposed to reflections arriving from different distances; and a processor operable to obtain from the readout set electrical signals indicative of the detection levels of reflected light collected from the IFOVs of the readout group and to determine depth information of an object.

Description

System, method and computer program product for generating depth image based on short wave infrared detection information
Cross Reference to Related Applications
This application is related to and claims priority from U.S. provisional patent application Ser. No. 63/130,646, filed on December 26, 2020, and U.S. provisional patent application Ser. No. 63/194,977, filed on May 29, 2021, both of which are incorporated herein by reference in their entireties.
Technical Field
The present disclosure relates to photonic systems, methods, and computer program products. More particularly, the present invention relates to electro-optic devices and lasers used in Infrared (IR) photonics.
Background
A photo-detection device such as a photodetector array (also referred to as a "photosensor array") includes a multitude of photosites (PSs), each photosite including one or more photodiodes for detecting impinging light and a capacitor for storing the charge provided by the photodiodes. The capacitance may be implemented as a dedicated capacitor and/or as parasitic capacitance of the photodiode, transistors, and/or other components of the PS. Hereinafter, and for simplicity, the term "photo-detection device" is often replaced with the abbreviation "PDD", the term "photodetector array" with the abbreviation "PDA", and the term "photodiode" with the abbreviation "PD".
The term "photosite" refers to a single sensor element in an array of multiple sensors, and is also referred to as a "sensor cell", a "sensel" (a combination of "sensor" and "element"), a "sensor element", a "photosensor element", a "photodetector element", and so on. Hereinafter, "photosite" is generally replaced by the abbreviation "PS". Each PS may include one or more PDs (e.g., if a color filter array is implemented, multiple PDs detecting light in different portions of the spectrum may collectively be regarded as a single PS). In addition to the PD(s), the PS may further include some circuitry or additional components.
Dark current (also referred to herein as "DC") is a well known phenomenon that when referring to PDs, pertains to the current flowing through the PD even if no photons enter the device. DC in many PDs may be caused by random generation of electrons and holes in a depletion region of the PD.
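As a rough illustration of why DC matters, the number of spurious electrons a PD's dark current deposits on the PS capacitor grows linearly with integration time. The function name and the simple linear model below are illustrative assumptions, not taken from the disclosure:

```python
Q_E = 1.602176634e-19  # elementary charge, in coulombs

def dark_electrons(dark_current_a: float, integration_s: float) -> float:
    """Electrons thermally generated by a PD's dark current over one
    integration period; they accumulate on the PS capacitor exactly like
    photo-generated charge and therefore bias the detection value."""
    return dark_current_a * integration_s / Q_E
```

Even a dark current of a few picoamperes contributes on the order of tens of thousands of electrons per millisecond of integration.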
In some cases, it is desirable to provide photosites with photodiodes featuring a relatively high DC while implementing capacitors of limited size. In some cases, it is desirable to provide the PS with PDs featuring a relatively high DC while reducing the effect of dark current on the output detection signal. In PSs characterized by high DC accumulation, there is accordingly a need to overcome the detrimental effects of DC on electro-optical systems. Hereinafter, and for simplicity, the term "electro-optical" may be replaced by the abbreviation "EO".
Short Wave Infrared (SWIR) imaging enables a range of applications that are difficult to perform using visible light imaging. These include electronic board inspection, solar cell inspection, product inspection, gated imaging, identification and classification, surveillance, anti-counterfeiting, process quality control, and more. Many existing indium gallium arsenide (InGaAs) based SWIR imaging systems are expensive to manufacture and are currently subject to limited manufacturing capabilities.
Accordingly, it would be beneficial to be able to provide a SWIR imaging system that uses more cost-effective photoreceptors based on PDs that are more easily integrated into the surrounding electronics.
Disclosure of Invention
In various examples, a method of generating an image from an array of photodetectors is disclosed, comprising: obtaining a plurality of detection values for different photosites measured during a first frame duration from the photodetector array, the photodetector array comprising a plurality of replicated PS, the plurality of detection values comprising: a first detection value of a first PS indicative of an amount of light impinging on the first PS from a field of view (FOV) during the first frame duration; a second detection value of a second PS indicative of an amount of light impinging on the second PS from the FOV during the first frame duration; a third detection value of a third PS indicative of an amount of light impinging on the third PS from the FOV during the first frame duration; a fourth detection value of each fourth PS of the at least one fourth PS measured while the respective fourth PS is shielded from ambient illumination; and a fifth detection value for each of the at least one fifth PS measured while the respective fifth PS is shielded from ambient illumination; determining a first PS output value based on subtracting an average of the at least one fourth detection value from the first detection value; determining a second PS output value based on subtracting an average of the at least one fifth detection value from the second detection value; determining a third PS output value based on subtracting an average of the at least one fourth detection value from the third detection value; a first frame image is generated based at least on the first PS output value, the second PS output value, and the third PS output value.
In various examples, an electro-optic (EO) system operable to generate an image is disclosed, the EO system comprising: a PDA comprising a plurality of Photosites (PSs), each PS operable to output a detection value indicative of the amount of light impinging on the corresponding PS during a detection duration and a DC level generated by the PS during the detection duration; a shield for shielding a subgroup of the plurality of PS from ambient illumination at least during a first frame duration; and a processor operable to: obtaining a plurality of detection values for a plurality of different PS of said PDA measured during said first frame duration, said plurality of obtained detection values comprising: a first detection value of a first PS indicative of an amount of light impinging on the first PS from a FOV during the first frame duration; a second detection value of a second PS indicative of an amount of light impinging on the second PS from the FOV during the first frame duration; a third detection value of a third PS indicative of an amount of light impinging on the third PS from the FOV during the first frame duration; a fourth detection value of each fourth PS of the at least one fourth PS measured while the respective fourth PS is shielded from ambient illumination; and a fifth detection value for each of the at least one fifth PS measured while the respective fifth PS is shielded from ambient illumination; the processor is further operable to: determining a first PS output value based on subtracting an average of the at least one fourth detection value from the first detection value; determining a second PS output value based on subtracting an average of the at least one fifth detection value from the second detection value; determining a third PS output value based on subtracting an average of the at least one fourth detection value from the third detection value; and generating a first frame image based at least on the first PS output value, the second PS output value, and the third PS output value.
In various examples, a non-transitory computer-readable medium is disclosed that includes a plurality of instructions stored thereon that when executed on a processor perform the steps of: obtaining a plurality of detection values of different PS measured during a first frame duration from a PDA, said PDA comprising a plurality of copied PS, said plurality of detection values comprising: a first detection value of a first PS indicative of an amount of light impinging on the first PS from a FOV during the first frame duration; a second detection value of a second PS indicative of an amount of light impinging on the second PS from the FOV during the first frame duration; a third detection value of a third PS indicative of an amount of light impinging on the third PS from the FOV during the first frame duration; a fourth detection value of each fourth PS of the at least one fourth PS measured while the respective fourth PS is shielded from ambient illumination; and a fifth detection value for each of the at least one fifth PS measured while the respective fifth PS is shielded from ambient illumination; determining a first PS output value based on subtracting an average of the at least one fourth detection value from the first detection value; determining a second PS output value based on subtracting an average of the at least one fifth detection value from the second detection value; determining a third PS output value based on subtracting an average of the at least one fourth detection value from the third detection value; and generating a first frame image based at least on the first PS output value, the second PS output value, and the third PS output value.
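The subtraction described in the three examples above can be sketched as follows. This is a minimal illustration which assumes, hypothetically, that each shielded reference PS shares a column (or another grouping) with the active PSs it corrects, so the DC estimate is averaged per column; the function name and array layout are not taken from the disclosure:

```python
import numpy as np

def dc_corrected_frame(active: np.ndarray, shielded: np.ndarray) -> np.ndarray:
    """Subtract, per column, the mean detection value of shielded
    (light-blocked) photosites from the active photosites.

    active:   (rows, cols) detection values of PSs exposed to the FOV
              during the frame duration
    shielded: (ref_rows, cols) detection values of PSs shielded from
              ambient illumination during the same frame
    """
    dc_estimate = shielded.mean(axis=0)          # per-column DC estimate
    return active - dc_estimate[np.newaxis, :]   # broadcast over rows
```

Because the shielded PSs see no ambient light, their average reading estimates the DC level accumulated during the same frame, which is then removed from every active PS in the same group.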
In various examples, a method of generating a depth image of a scene based on detection of a Short Wave Infrared (SWIR) electro-optic imaging system (SEI system) is disclosed, the method comprising: obtaining a plurality of detection signals of the SEI system, wherein each detection signal of the plurality of detection signals is indicative of an amount of light captured by at least one FPA of the SEI system from a particular direction within a FOV of the SEI system within a respective detection frame, the at least one FPA comprising a plurality of individual PS, each PS comprising a germanium element (Ge element) that converts a plurality of impinging photons into a detected charge, wherein for each of a plurality of directions within a FOV, a plurality of different detection signals are indicative of a plurality of reflected SWIR illumination levels from different distance ranges along the direction; and processing the plurality of detection signals to determine a 3D detection map, the 3D detection map comprising a plurality of 3D positions in the FOV at which a plurality of objects are detected, wherein the processing comprises: compensating for a plurality of Dark Current (DC) levels accumulated during the collecting of the plurality of detection signals caused by the germanium element, and wherein the compensating comprises: different degrees of dark current compensation are applied to the plurality of detection signals detected by different PS of the at least one FPA.
In various examples, a system for generating a depth image of a scene based on detection of an SEI system is disclosed, the system comprising at least one processor configured to: obtaining a plurality of detection signals of the SEI system, wherein each detection signal of the plurality of detection signals is indicative of an amount of light captured by at least one Focal Plane Array (FPA) of the SEI system from a particular direction within a FOV of the SEI system within a respective detection time frame, the at least one FPA comprising a plurality of individual PS, each PS comprising a germanium element that converts a plurality of impinging photons into a detection charge, wherein for each of a plurality of directions within a FOV a plurality of different detection signals are indicative of a plurality of reflected SWIR illumination levels from different distance ranges along the direction; and processing the plurality of detection signals to determine a three-dimensional (3D) detection map, the 3D detection map comprising a plurality of 3D locations in the field of view in which a plurality of objects are detected, wherein the processing comprises: compensating for a plurality of DC levels accumulated during the collecting of the plurality of detection signals caused by the germanium element, and wherein the compensating comprises: different degrees of DC compensation are applied to the plurality of detection signals detected by different PS of the at least one FPA.
In various examples, a non-transitory computer-readable medium is disclosed that is based on detection of an SEI system to generate a depth image of a scene, comprising a plurality of instructions stored thereon that when executed on a processor perform the steps of: obtaining a plurality of detection signals of the SEI system, each detection signal indicating an amount of light captured by at least one FPA of the SEI system from a particular direction within a FOV of the SEI system within a respective detection time frame, the at least one FPA comprising a plurality of individual PS, each PS comprising germanium elements that convert a plurality of impinging photons into detected charges, wherein for each of a plurality of directions within a FOV a plurality of different detection signals indicate a plurality of reflected SWIR illumination levels from different distance ranges along the direction; and processing the plurality of detection signals to determine a three-dimensional (3D) detection map, the 3D detection map comprising a plurality of 3D positions in the FOV at which a plurality of objects are detected, wherein the processing comprises: compensating for a plurality of dark current levels accumulated during the collecting of the plurality of detection signals caused by the germanium element, and wherein the compensating comprises: different degrees of DC compensation are applied to the plurality of detection signals detected by different PS of the at least one FPA.
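The per-PS compensation described above can be sketched as follows, assuming (hypothetically, since the disclosure does not fix a model) that a dark-current rate is known per PS from calibration and that the accumulated DC scales linearly with each gated frame's accumulation time:

```python
import numpy as np

def compensate_dark_current(signals: np.ndarray,
                            dc_rates: np.ndarray,
                            integration_times: np.ndarray) -> np.ndarray:
    """Apply a different degree of DC compensation to each photosite.

    signals:           (n_ps, n_gates) raw detection signals
    dc_rates:          (n_ps,) per-PS dark-current rate (counts/second),
                       e.g. from calibration of each Ge photodiode
    integration_times: (n_gates,) accumulation time of each gated frame
    """
    # per-PS, per-gate accumulated dark charge
    dc = np.outer(dc_rates, integration_times)
    return signals - dc
```

Since each germanium PS has its own dark-current rate, each row of the signal matrix is corrected by a different amount, matching the claim's "different degrees of dark current compensation" per PS.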
In various examples, a sensor operable to detect depth information of an object is disclosed, the sensor comprising: an FPA having a plurality of PSs, each PS operable to detect light arriving from an IFOV of the PS, wherein different PSs point in different directions within a field of view of the sensor; a readout set of readout circuits (ROCs), each ROC coupled by a plurality of switches to a readout group of PSs of the FPA and operable to output an electrical signal indicative of an amount of light impinging on the PSs of the readout group when the readout group is connected to the respective ROC via at least one of the plurality of switches; a controller operable to change the switch states of the plurality of switches such that different ROCs of the readout set are coupled to the readout group at different times, exposing different ROCs to reflections of illumination light from objects at different distances from the sensor; and a processor operable to obtain from the readout set the plurality of electrical signals, which indicate detection levels of reflected light collected from the IFOVs of the photosites of the readout group, and to determine, based on the plurality of electrical signals, depth information indicative of a distance of the object from the sensor.
In various examples, a method of detecting depth information of an object is disclosed, comprising: during a first duration: connecting a first ROC of a sensor with a plurality of PS of a readout group consisting of a plurality of photosites of an FPA while maintaining a second ROC and a third ROC of the sensor disconnected from the plurality of PS of the readout group, and obtaining a first electrical signal from the first ROC indicative of a first amount of illumination pulses reflected from the object during the first duration to commonly impinge on the plurality of PS of the readout group; during a second duration: connecting the plurality of PS of the readout group to the second ROC while keeping the first ROC and the third ROC disconnected from the plurality of PS of the readout group, and obtaining a second electrical signal from the second ROC, the second electrical signal being indicative of a second amount of illumination pulses reflected from the object to commonly impinge on the plurality of PS of the readout group during the second duration; during a third duration: connecting the plurality of PS of the readout group to the third ROC while keeping the first ROC and the second ROC disconnected from the plurality of PS of the readout group, and obtaining a third electrical signal from the third ROC, the third electrical signal being indicative of a third amount of illumination pulses reflected from the object to commonly impinge on the plurality of PS of the readout group during the third duration; and determining a distance of the object from a sensor based at least on the first electrical signal, the second electrical signal, and the third electrical signal, the sensor comprising the FPA.
In various examples, a switchable optical sensor is disclosed, comprising: an FPA having a plurality of PSs, each PS operable to detect light arriving from an IFOV of the PS, wherein different PSs point in different directions within a field of view of the sensor; a readout set of ROCs, each ROC coupled by a plurality of switches to a readout group of PSs of the FPA and operable to output an electrical signal indicative of an amount of light impinging on the photosites of the readout group when the PSs of the readout group are connected to the respective ROC via at least one of the plurality of switches; a controller operable to change the switch states of the plurality of switches such that different ROCs of the readout set are coupled to the readout group at different times, exposing different ROCs to reflections of illumination light from objects located at different distances from the sensor; and a processor configured to obtain from the readout set the plurality of electrical signals, which indicate detection levels of reflected light collected from the IFOVs of the PSs of the readout group, and to generate a 2D model of a plurality of objects in the FOV based on processing the plurality of electrical signals.
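One simple way to turn such time-gated readouts into a distance, sketched here under the assumption (not stated in the disclosure) that a signal-weighted centroid of the gate windows approximates the round-trip time of flight:

```python
from typing import Sequence

C = 299_792_458.0  # speed of light, m/s

def estimate_distance(gate_signals: Sequence[float],
                      gate_centers_s: Sequence[float]) -> float:
    """Estimate object distance from signals accumulated by ROCs that
    were each connected to the readout group during a different time
    slice of the pulse's time of flight.

    gate_signals:   light amounts read out by each ROC of the readout set
    gate_centers_s: center time of each ROC's connection window,
                    measured from pulse emission (seconds)
    """
    total = sum(gate_signals)
    if total == 0:
        raise ValueError("no reflected signal detected")
    # signal-weighted mean round-trip time across the gates
    tof = sum(s * t for s, t in zip(gate_signals, gate_centers_s)) / total
    return C * tof / 2.0  # round trip -> one-way distance
```

A reflection that straddles two adjacent gates splits its energy between the corresponding ROCs, so the centroid resolves distances finer than a single gate width.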
In various examples, a method of correcting saturation detection results in an FPA is disclosed, the method comprising: obtaining a first electrical signal from a first ROC indicative of amounts of illumination pulses reflected from an object within a FOV of the FPA to commonly impinge on a plurality of photosites of a readout group during a first duration within a time of flight (TOF) of the respective illumination pulses, wherein during the first duration a second ROC, a third ROC, and a fourth ROC are disconnected from the PSs of the readout group; obtaining a second electrical signal from the second ROC indicative of the amount of illumination pulses reflected from the object to commonly impinge on the photosites of the readout group during a second duration within the TOF of the respective illumination pulses, wherein during the second duration the first, third, and fourth ROCs are disconnected from the photosites of the readout group; obtaining a third electrical signal from the third ROC indicative of the amount of illumination pulses reflected from the object to commonly impinge on the photosites of the readout group during a third duration within the TOF of the respective illumination pulses, wherein during the third duration the first, second, and fourth ROCs are disconnected from the PSs of the readout group; obtaining a fourth electrical signal from the fourth ROC indicative of the amount of illumination pulses reflected from the object to commonly impinge on the photosites of the readout group during a fourth duration within the TOF of the respective illumination pulses, wherein during the fourth duration the first ROC, the second ROC, and the third ROC are disconnected from the PSs of the readout group; searching, based on similarity criteria, for a tuple that matches the group of electrical signals within a plurality of tuples of a pre-existing set of distance-associated detection levels; identifying that an electrical signal in the group of electrical signals is saturated; and determining a corrected detection level corresponding to the saturated electrical signal based on at least one electrical signal in the matching tuple and the group of electrical signals.
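The tuple-matching correction can be sketched as follows. Since the claim leaves the similarity criteria unspecified, the least-squares mismatch over unsaturated entries and the energy-based scaling used here are illustrative assumptions:

```python
from typing import List, Sequence

def correct_saturated(measured: Sequence[float],
                      calibration_tuples: Sequence[Sequence[float]],
                      saturation_level: float) -> List[float]:
    """Replace saturated entries of a measured detection tuple using a
    pre-existing set of distance-associated detection-level tuples.

    measured:           detection levels from the ROCs of one readout group
    calibration_tuples: reference tuples, one per known distance
    saturation_level:   values at/above this are considered saturated
    """
    ok = [i for i, v in enumerate(measured) if v < saturation_level]
    if not ok:
        raise ValueError("all readouts saturated; cannot correct")

    def mismatch(ref):
        # scale each candidate to the measured (unsaturated) energy,
        # then compare entry by entry over the unsaturated indices
        scale = sum(measured[i] for i in ok) / max(sum(ref[i] for i in ok), 1e-12)
        err = sum((measured[i] - scale * ref[i]) ** 2 for i in ok)
        return err, scale

    best = min(calibration_tuples, key=lambda r: mismatch(r)[0])
    _, scale = mismatch(best)
    # keep reliable readings; reconstruct saturated ones from the match
    return [measured[i] if i in ok else scale * best[i]
            for i in range(len(measured))]
```

The unsaturated gates pin down which distance-associated tuple the detection belongs to, and the matching tuple then supplies a plausible value for the clipped gate.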
In various examples, a method of identifying a material of an object based on detection of an SEI system is disclosed, the method comprising: obtaining a plurality of detection signals indicative of amounts of light collected from an IFOV within a FOV of the SEI system, captured by at least one PS of the SEI system at different times, each detection signal indicative of SWIR illumination levels reflected from a different distance within the IFOV; processing the plurality of detection signals to determine a distance to an object within the FOV; determining a first reflectivity of the object in a first SWIR range based on an illumination intensity emitted by the SEI system toward the object in the first SWIR range, detection levels of illumination light reflected from the object in the first SWIR range, and the distance; and determining material composition information based on the first reflectivity, the material composition information indicating at least one material from which the object is fabricated.
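A toy sketch of the reflectivity step: assuming, hypothetically, a 1/d² range loss with all system factors (aperture, pulse width, PS responsivity) folded into a single constant k, the band reflectivity follows from the emitted intensity, the detected level, and the recovered distance, and a material can then be picked by nearest known band reflectivity. The constant k, the function names, and the reference table are all illustrative, not from the disclosure:

```python
def estimate_reflectivity(detected: float, emitted: float,
                          distance_m: float, k: float = 1.0) -> float:
    """Invert a simple link budget: detected = k * emitted * R / d**2,
    solved for the reflectivity R in the illuminated SWIR band."""
    return detected * distance_m ** 2 / (k * emitted)

def classify_material(reflectivity: float, reference: dict) -> str:
    """Pick the reference material whose known band reflectivity is
    closest to the estimated one."""
    return min(reference, key=lambda m: abs(reference[m] - reflectivity))
```

Because the distance comes from the depth-detection steps above, the range loss can be divided out, leaving a reflectivity value that is comparable across objects at different distances.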
Brief description of the drawings
Reference is made to the accompanying drawings listed after this paragraph to describe non-limiting examples of embodiments disclosed herein. The same structures, elements or components shown in more than one figure may be labeled with the same number in all figures in which they appear. The drawings and description are intended to illustrate and explain the embodiments disclosed herein and should not be taken as limiting in any way. All figures illustrate various example apparatus or flow diagrams in accordance with the presently disclosed subject matter. In the drawings:
FIGS. 1A, 1B and 1C are schematic block diagrams illustrating an active SWIR imaging system;
FIG. 2 is an exemplary graph illustrating the relative magnitudes of noise power after different durations of integration times in a SWIR imaging system;
FIGS. 3A, 3B and 3C each illustrate a flow chart of a method of operation of an active SWIR imaging system, in accordance with some embodiments;
FIGS. 4A, 4B and 4C each illustrate a flow chart of an exemplary method of operation of an active SWIR imaging system;
FIG. 5 is a flow chart illustrating a method for generating SWIR images of objects in a FOV of an EO system;
FIG. 6 is a schematic functional block diagram illustrating an example of a SWIR optical system;
FIGS. 7A, 7B and 7C are schematic functional block diagrams illustrating examples of P-QS lasers;
FIGS. 8 and 9 are schematic functional diagrams illustrating a SWIR optical system;
FIG. 10 is a schematic functional block diagram illustrating an example of a SWIR optical system;
FIGS. 11A, 11B and 11C each show a flow chart illustrating an example of a method for fabricating components of a P-QS laser;
FIG. 12A schematically illustrates a PS including a PD controlled by a voltage controlled current source;
FIG. 12B schematically illustrates a PS including a PD controlled by a voltage controlled current source in a "3T" configuration;
FIGS. 13A and 13B illustrate a photo-detection device (PDD) including a PS and circuitry operable to reduce DC effects;
FIG. 13C illustrates a PDD including a plurality of PS and circuitry operable to reduce DC effects;
FIG. 14 illustrates an exemplary PD IV curve and possible operating voltages for a PDD;
FIG. 15 shows a control voltage generation circuit connected to a plurality of reference PSs;
FIGS. 16A and 16B illustrate a PDD comprising an array of PS's and a reference circuit based on PD's;
FIGS. 17 and 18 illustrate PDDs, each including a PS and circuitry operable to reduce DC effects;
FIG. 19 is a schematic diagram illustrating a PDD including optics, a processor and a plurality of additional components;
FIG. 20 is a flow chart illustrating a method for compensating DC in a photodetector;
FIG. 21 is a flow chart illustrating a method for compensating DC in a photodetector;
FIG. 22 is a flow chart illustrating a method for testing a photodetector;
FIG. 23 illustrates an EO system of some embodiments;
FIG. 24 is a flowchart illustrating an example of a method of generating image information based on data of a photodetector array (PDA);
FIGS. 25 and 26 each show a flow chart illustrating a method for generating a model for PDA operation at different frame exposure times (referred to herein as "FETs");
FIG. 27 is a flow chart illustrating an example of a method to generate multiple images based on different subsets of multiple PS at different operating conditions;
FIGS. 28A and 28B are diagrams illustrating an EO system and exemplary objects;
FIG. 29 is a flowchart illustrating a method of generating image information based on data of a PDA;
FIG. 30 is a diagram illustrating three views of a PDA;
FIG. 31 is a diagram illustrating a method of generating an image by a PDA;
FIG. 32 is a diagram illustrating a mapping between different active PS of a PDA to PS of a PS reference group;
FIG. 33 is a diagram illustrating a method for determining a matching model between PS's of a PDA;
FIG. 34 is a graph illustrating exemplary analog detection signals for a plurality of PS at four different temperatures;
FIG. 35 illustrates a PDA in which each PS is classified as either one of six families or defective;
FIG. 36 illustrates a method of generating a depth image of a scene based on detection by a SWIR-based EO imaging system;
FIG. 37 is a graph illustrating detection signals reflected from objects located at different distances;
FIGS. 38A-38C illustrate a number of sensors;
FIG. 39 is a diagram illustrating various detection timing diagrams;
FIGS. 40A-40C illustrate a number of sensors;
FIGS. 41A and 41B are diagrams illustrating a number of sensors;
FIG. 42 illustrates a FOV of a sensor;
FIGS. 43A and 43B illustrate a number of sensors;
FIG. 44 illustrates a focal plane array in which a depth detection switching mode is implemented simultaneously (or concurrently) with an image detection switching mode;
FIGS. 45A and 45B illustrate switching mechanisms where the same readout circuitry at different times within a single TOF is connected to a readout group;
FIG. 46 illustrates a method of detecting depth information of an object;
FIG. 47 illustrates a method of correcting saturation detection results in an FPA;
FIG. 48 is a graph illustrating correction of saturation detection results based on temporally different detection signals;
FIG. 49 illustrates a method of identifying materials of objects based on detection by a SWIR-based EO imaging system.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example: the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Furthermore, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
Detailed Description
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
In the drawings and description set forth, like reference numerals designate those components that are common to different embodiments or configurations.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing," "computing," "calculating," "determining," "generating," "setting," "configuring," "selecting," "defining," or the like, include the action and/or processes of a computer, that manipulates and/or transforms data into other data represented as physical quantities, such as electronic quantities, and/or data representing the physical quantities.
The terms "computer", "processor" and "controller" should be interpreted broadly to cover any kind of electronic device having data processing capabilities, including, by way of non-limiting example: a personal computer, a server, a computing system, a communication device, a processor (e.g., a Digital Signal Processor (DSP), a microcontroller, a Field Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), etc.), any other electronic computing device, or any combination thereof.
Operations according to the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general purpose computer specially configured for the desired purposes by a computer program stored in a computer readable storage medium.
As used herein, the phrase "for example," "such as," "for instance," and variations thereof describe many non-limiting embodiments of the presently disclosed subject matter. Reference in the specification to "one case," "some cases," "other cases," or variations thereof means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the presently disclosed subject matter. Thus, the appearances of the phrases "one case," "some cases," "other cases," or variations thereof are not necessarily intended to refer to the same embodiment(s).
It is appreciated that certain features of the presently disclosed subject matter, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the presently disclosed subject matter that are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination.
In various embodiments of the presently disclosed subject matter, one or more of the stages or steps illustrated in the figures may be performed in a different order and/or one or more groups of stages may be performed concurrently, or vice versa. The figures are presented to illustrate a general schematic of a system architecture in accordance with an embodiment of the presently disclosed subject matter. Each module in the figures may be comprised of any combination of software, hardware, and/or firmware that performs the functions defined and explained herein. The modules in the figures may be centralized in one location or distributed across more than one location.
Any reference in the specification to a method shall be construed as applying to a system capable of performing the method and shall be construed as applying to a non-transitory computer-readable medium that stores instructions that, when executed by a computer, cause the method to be performed.
Any reference in the specification to a system should be construed as applying to a method capable of being performed by the system and to a non-transitory computer readable medium storing instructions that are executable by the system.
Any reference in the specification to a non-transitory computer-readable medium or similar terms should be construed as applying to a system capable of executing the instructions stored in the non-transitory computer-readable medium, and as applying to a method that may be performed by a computer that reads the instructions stored in the non-transitory computer-readable medium.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, the actual instrumentation and equipment of the preferred embodiments of the method and system of the present invention could implement several selected steps by hardware, by software on any operating system of any firmware, or by a combination thereof. For example: as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as software instructions being executed by a computer using any suitable operating system. In any event, selected steps of the methods and systems of the invention can be described as being performed by a data processor, such as a computing platform for executing instructions.
Fig. 1A, 1B, and 1C are various schematic block diagrams illustrating various active SWIR imaging systems 100, 100', and 100", respectively, in accordance with various examples of the presently disclosed subject matter.
As used herein, an "active" imaging system is operable to detect light reaching the system from its field of view (FOV) using an imaging receiver comprising a plurality of PDs, and to process the plurality of detection signals to provide one or more images of the FOV or a portion thereof. The term "image" means a digital representation of a scene detected by the imaging system that stores a color value for each element (pixel) in the image, each pixel color representing light reaching the imaging system from a different location of the field of view (e.g., a 0.02° by 0.02° portion of the FOV, depending on the receiver optics). Note that the imaging system may additionally be operable to generate other representations of objects or light in the FOV, such as a depth map, a three-dimensional (3D) model, or a polygonal mesh, but the term "image" means a two-dimensional (2D) image without depth data.
The system 100 includes an Illumination Source (IS) 102 operable to emit a plurality of pulses of radiation in the SWIR band toward one or more targets 104, causing radiation to be reflected from the target back in the direction of the system 100. In fig. 1A, the outgoing illumination is labeled 106, and the illumination reflected toward the system 100 is labeled 108. Portions of the emitted radiation may also be reflected, deflected, or absorbed by the target in other directions. The term "target" means any object in the FOV of the imaging sensor, whether solid, liquid, flexible, or rigid. Some non-limiting examples of such objects include vehicles, roads, humans, animals, plants, buildings, electronic equipment, clouds, microscopic samples, items under manufacture, and the like. Any suitable type of illumination source 102 may be used, such as one or more lasers, one or more light emitting diodes (LEDs), one or more flash lamps, any combination of the above, and the like. As discussed in more detail below, the illumination source 102 may optionally include one or more active QS lasers, or one or more P-QS lasers.
The system 100 further comprises at least one imaging receiver (or simply "receiver") 110, said imaging receiver 110 comprising a plurality of germanium (Ge) PDs operable for detecting said reflected SWIR radiation. The receiver generates an electrical signal for each of the plurality of germanium PDs that is representative of the amount of impinging SWIR light in its detectable spectral range. The amount includes an amount of SWIR radiation pulses reflected from the target, and may further include: additional SWIR light (e.g., arriving from the sun or from an external source).
The term "germanium PD (Ge PD)" relates to any PD in which photo-induced electron excitation (later detectable as photocurrent) occurs within germanium, within a germanium alloy (such as SiGe), or at the interface between the germanium (or germanium alloy) and another material (such as silicon or SiGe). In particular, the term "germanium PD" relates to both pure germanium PDs and germanium-silicon PDs. When a germanium PD comprising germanium and silicon is used, various concentrations of germanium may be used. For example: the relative portion of germanium in the germanium PD (whether alloyed with silicon or adjacent thereto) may be in the range of 5% to 99%. For example: the relative portion of germanium in the plurality of germanium PDs may be between 15% and 40%. Note that materials other than silicon may also be part of the germanium PD, such as aluminum, nickel, a silicide, or any other suitable material. In some implementations of the invention, the plurality of germanium PDs may be pure germanium PDs (including greater than 99.0% germanium).
Note that the receiver may be implemented as a PDA fabricated on a single wafer. Any of the PD arrays discussed throughout the present disclosure may be used as the receiver 110. The germanium PDs may be arranged in any suitable arrangement, such as a rectangular matrix (straight rows and columns of germanium PDs), cellular tiling, and even irregular configurations. Preferably, the number of germanium PDs in the receiver allows for the generation of high resolution images. For example: the number of PDs may be on the order of 1 megapixel, 10 megapixel, or more.
In some embodiments, the receiver 110 has the following specifications:
a. HFOV (horizontal FOV) [meters (m)]: 60
b. WD (working distance) [m]: 150
c. Pixel size [microns (μm)]: 10
d. Resolution (on target) [millimeters (mm)]: 58
e. Pixel count [H]: 1,050
f. Pixel count [V]: 1,112
g. Aspect ratio: 3:1
h. Viewing angle [rad]: 0.4
i. Target reflectivity [%]: 10%
j. Collection (assuming a target reflectivity of 100% and assuming Lambertian reflectance, the ratio of photons collected to photons emitted): 3×10⁻⁹
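As a sanity check, the example numbers above are mutually consistent: the horizontal FOV follows from the working distance and viewing angle, and the on-target resolution from dividing the FOV among the horizontal pixels. A minimal sketch (the constants are just the values quoted above, not an authoritative datasheet calculation):

```python
# Consistency check of the example receiver specification above (illustrative).

WD_M = 150.0          # b. working distance [m]
VIEW_ANGLE_RAD = 0.4  # h. viewing angle [rad]
PIXELS_H = 1050       # e. horizontal pixel count

# a. Horizontal FOV at the working distance (small-angle approximation).
hfov_m = WD_M * VIEW_ANGLE_RAD           # 150 m x 0.4 rad = 60 m

# d. On-target resolution: FOV width divided among the horizontal pixels.
resolution_mm = hfov_m / PIXELS_H * 1e3  # ~57 mm, close to the listed 58 mm
```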
In addition to the impinging SWIR light as described above, the electrical signal generated by each of the plurality of germanium PDs also represents:
a. Readout noise, which is random and has an amplitude that is independent (or substantially independent) of the integration time. Examples of such noise include Johnson-Nyquist noise (also known as thermal noise or kTC noise). In addition to the statistical component, the readout process may introduce a DC component into the signal, but the term "readout noise" relates to the random component of the signal introduced by the readout process.
b. DC noise (dark current noise), which is random and accumulates over the integration time (i.e., it depends on the integration time). In addition to the statistical component, the dark current introduces a DC component into the signal (which may or may not be eliminated, such as discussed with respect to figs. 12A-22), but the term "DC noise (dark current noise)" pertains to the random component of the signal that is accumulated by the DC over the integration time.
Some germanium PDs, particularly those combining germanium with another material (such as silicon), are characterized by a relatively high level of DC. For example: the DC of the plurality of germanium PDs may be greater than 50 μA/cm² (relative to a surface area of the PD) or even larger (e.g., greater than 100 μA/cm², greater than 200 μA/cm², or greater than 500 μA/cm²). Depending on the surface area of the PD, such levels of DC may translate to 50 picoamps (pA) per germanium PD or higher (e.g., greater than 100 pA per germanium PD, greater than 200 pA per germanium PD, greater than 500 pA per germanium PD, or greater than 2 nA per germanium PD). Note that PDs of different sizes may be used, such as about 10 μm², about 50 μm², about 100 μm², or about 500 μm². It is noted that when the plurality of germanium PDs are subjected to different levels of non-zero bias, the plurality of germanium PDs may generate dark currents of different magnitudes (causing a dark current on each of the plurality of germanium PDs of, for example, greater than 50 picoamps).
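The per-photosite figures quoted above follow directly from the dark-current density once a photosite area is fixed. A short sketch, assuming (hypothetically) a square 10 μm photosite matching the pixel size in the example receiver specification; with that assumption, 50 μA/cm² works out to the ~50 pA per germanium PD cited in the text:

```python
# Converting dark-current density into a per-photosite dark current.

PIXEL_SIDE_UM = 10.0          # assumed square photosite side [um] (hypothetical)
DC_DENSITY_A_PER_CM2 = 50e-6  # dark-current density: 50 uA/cm^2

area_cm2 = (PIXEL_SIDE_UM * 1e-4) ** 2        # 10 um = 1e-3 cm -> 1e-6 cm^2
dc_per_pd_a = DC_DENSITY_A_PER_CM2 * area_cm2  # amps per photosite
dc_per_pd_pa = dc_per_pd_a / 1e-12             # -> 50 pA per photosite
```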
The system 100 also includes a controller 112 and an image processor 114, the controller 112 controlling the operation of the receiver 110 (and optionally also the Illumination Source (IS) 102 and/or other components). In particular, the controller 112 is configured to control the activation of the receiver 110 for a relatively short integration time, thereby limiting the impact of accumulated DC noise on signal quality. For example: the controller 112 may be operable to control the activation of the receiver 110 during an integration time during which the accumulated DC noise does not exceed the readout noise, which is independent of the integration time.
Referring now to fig. 2, fig. 2 is an exemplary graph illustrating the relative magnitudes of noise power after integration times of different durations according to examples of the inventive subject matter. For a given laser pulse energy, the signal-to-noise ratio (SNR) is primarily determined by the noise level, which includes the DC noise (the noise of the dark current) and thermal noise (also known as kTC noise). As shown in the exemplary graph of fig. 2, depending on the integration time of the germanium-based receiver 110, either the DC noise or the thermal noise dominates the SNR of the electrical signal of the PD. When the controller 112 limits the activation time of the germanium photodetector to a relatively short period (within the range designated "A" in fig. 2), not too many electrons from the DC noise are collected, so the SNR is improved and is mainly affected by thermal noise. For a longer receiver integration time, the noise originating from the DC of the germanium photodetector exceeds the thermal noise and dominates the SNR of the receiver, thereby degrading receiver performance. Note that the graph of fig. 2 is merely illustrative, and that the accumulated dark current noise typically increases as the square root of time (alternatively, the y-axis may be considered to be plotted on a matching nonlinear scale). Also, at zero integration time (in which case the accumulated DC noise is zero), the two curves do not intersect each other.
Returning to the system 100, note that the controller 112 may control the activation of the receiver 110 for a shorter integration time (e.g., integration time during which the accumulated DC noise does not exceed half of the readout noise or one quarter of the readout noise). Note that limiting the integration time to very low levels, unless specifically required, limits the number of photo-induced signals that can be detected and can degrade the SNR with respect to thermal noise. Note that thermal noise levels in multiple readout circuits that are suitable for reading multiple noisy signals (relatively high signal levels need to be collected) introduce non-negligible readout noise, which can severely degrade the SNR.
In some implementations, the controller 112 may apply a slightly longer integration time (e.g., an integration time during which the accumulated DC noise does not exceed twice the readout noise, or 1.5 times the readout noise).
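Under a simple shot-noise model (an assumption of this sketch, not a formula stated in the text), the integration-time thresholds discussed above can be estimated: a dark current I accumulates I·t/q electrons over a gate of length t, and the statistical fluctuation of that count is its square root, so the gate is kept short enough that this fluctuation stays below the readout noise:

```python
# Longest gate for which dark-current shot noise stays below readout noise.

Q_E = 1.602e-19  # electron charge [C]

def max_integration_time_s(dark_current_a: float, readout_noise_e: float) -> float:
    # Solve sqrt(I * t / q) == readout_noise_e for t.
    return readout_noise_e ** 2 * Q_E / dark_current_a

# Hypothetical operating point: 50 pA dark current per PD, 100 e- rms readout
# noise. Gates shorter than this keep the receiver readout-noise limited,
# as in region "A" of fig. 2.
t_max = max_integration_time_s(50e-12, 100.0)  # ~32 us
```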
Example embodiments disclosed herein relate to systems and methods for high SNR active SWIR imaging using receivers that include germanium-based PDs. The main advantage of germanium receiver technology over indium gallium arsenide (InGaAs) technology is compatibility with CMOS process flows, allowing the receiver to be fabricated as part of a CMOS production line. For example: by growing Ge epitaxial layers on a silicon (Si) substrate, as in Si photonics, germanium PDs may be integrated into a CMOS process flow. Thus, germanium PDs are also more cost effective than equivalent InGaAs PDs.
To utilize germanium PDs, an exemplary system disclosed herein is adapted to overcome the relatively high DC limitation of germanium diodes, which is typically in the range of about 50 μA/cm². The DC problem can be overcome by using active imaging that combines short acquisition times with high power laser pulses.
The use of germanium PDs, particularly but not limited to those fabricated using CMOS process flows, is a much cheaper solution for uncooled SWIR imaging than indium gallium arsenide (InGaAs) technology. Unlike many prior art imaging systems, the active imaging system 100 includes a pulsed illumination source with a short illumination duration (e.g., less than 1 μs, such as 1 to 1000 ns) and high peak power. Such pulsed light sources have drawbacks (e.g., non-uniform illumination, and more complex readout circuits that may introduce higher levels of readout noise), as do the shorter integration times (e.g., a large range of distances cannot be captured in a single acquisition cycle). In the following description, several approaches are discussed for overcoming these shortcomings to provide efficient imaging systems.
Reference is now made to figs. 1B and 1C, which schematically illustrate two other SWIR imaging systems, numbered 100' and 100″, in accordance with some embodiments. Like the system 100, the system 100' includes an active illumination source 102A and the receiver 110. In some embodiments, the imaging systems 100, 100', and 100″ also include the controller 112 and the image processor 114. In some embodiments, the processing of the output of the receiver 110 may be performed by the image processor 114, and additionally or alternatively by an external image processor (not shown). The imaging systems 100' and 100″ may be variations of the imaging system 100. Any of the components or functions discussed with respect to the system 100 may be implemented in any of the systems 100' and 100″, and vice versa.
The controller 112 is a computing device. In some embodiments, many of the functions of the controller 112 are provided within the illumination source 102 and the receiver 110, and the controller 112 is not required as a separate component. In some embodiments, control of the imaging systems 100' and 100″ is effected by the controller 112, the illumination source 102, and the receiver 110 acting together. Additionally or alternatively, in some embodiments, control of the imaging systems 100' and 100″ may be performed by an external controller such as a vehicle Electronic Control Unit (ECU) 120, which may belong to a vehicle in which the imaging system is installed.
The illumination source 102 is configured to emit a pulse of light 106 in the Infrared (IR) region of the electromagnetic spectrum. More specifically, the light pulses 106 include wavelengths in a range of about 1.3 μm to 3.0 μm in the SWIR spectral band.
In some embodiments, such as shown in fig. 1B, the illumination source (now labeled 102A) is an active Q-switched laser (or "active QS" laser) that includes a gain medium 122, a pump 124, mirrors (not shown), and an active QS element 126A. In some embodiments, the QS element 126A is a modulator. After electronic or optical pumping of the gain medium 122 by the pump 124, a pulse of light is released by active triggering of the QS element 126A.
In some embodiments, such as that shown in fig. 1C, the illumination source 102P is a P-QS laser that includes the gain medium 122, the pump 124, mirrors (not shown), and an SA 126P. The SA 126P allows the laser cavity to store light energy (from the gain medium 122 pumped by the pump 124) until a saturation level is reached in the SA 126P, whereupon a "passive QS" light pulse is released. To detect the release of the passive QS pulse, a QS pulse photodetector 128 is coupled to the illumination source 102P. In some embodiments, the QS pulse photodetector 128 is a germanium PD. The signal from the QS pulse photodetector 128 is used to trigger the reception process in the receiver 110, so that the receiver 110 will be activated after a period of time appropriate for the distance of the target 104 to be imaged. The time period is derived as described further below with reference to figs. 3B, 3C, 4B, and 4C.
In some embodiments, the laser pulse duration from illumination source 102 is in the range from 100ps to 1 microsecond. In some embodiments, the laser pulse energy is in the range from 10 microjoules to 100 millijoules. In some embodiments, the laser pulse period is on the order of 100 microseconds. In some embodiments, the laser pulse period is in the range from 1 microsecond to 100 milliseconds.
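The pulse parameters above are linked by elementary relations (peak power is energy over duration; average power is energy over period). An illustrative operating point chosen inside the stated ranges (the specific values are assumptions, not taken from the text):

```python
# Relating pulse energy, duration, and period for a pulsed SWIR source.

PULSE_ENERGY_J = 30e-6    # 30 uJ, inside the stated 10 uJ .. 100 mJ range
PULSE_WIDTH_S = 100e-9    # 100 ns, inside the stated 100 ps .. 1 us range
PULSE_PERIOD_S = 100e-6   # on the order of the stated ~100 us period

peak_power_w = PULSE_ENERGY_J / PULSE_WIDTH_S   # 300 W peak
avg_power_w = PULSE_ENERGY_J / PULSE_PERIOD_S   # 0.3 W average
duty_cycle = PULSE_WIDTH_S / PULSE_PERIOD_S     # 1e-3
```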
The gain medium 122 is provided in a crystalline form or alternatively in a ceramic form. A number of non-limiting examples of materials that may be used for the gain medium 122 include: Nd:YAG, Nd:YVO4, Nd:YLF, Nd:glass, Nd:GdVO4, Nd:GGG, Nd:KGW, Nd:KYW, Nd:YALO, Nd:YAP, Nd:LSB, Nd:S-FAP, Nd,Cr:GSGG, Nd,Cr:YSGG, Nd:YSAG, Nd:Y2O3, Nd:Sc2O3, Er:glass, Er:YAG, and so on. In some embodiments, the doping level of the gain medium may be varied based on the need for a particular gain. A number of non-limiting examples of SAs 126P include: Co2+:MgAl2O4, Co2+:spinel, Co2+:ZnSe and other cobalt-doped crystals, V3+:YAG, doped glasses, quantum dots, semiconductor SA mirrors (SESAMs), Cr4+:YAG SAs, and the like. Many additional ways in which the P-QS laser 102P may be implemented are discussed with reference to figs. 6-11, and any of the variations discussed with respect to laser 600 may also be applicable to the illumination source 102P.
With respect to illumination source 102, it is noted that pulsed lasers with sufficient power and short enough pulses are more difficult to obtain and more expensive than non-pulsed illumination, especially when eye-safe SWIR radiation based on solar absorption is required.
The receiver 110 may include: one or more germanium PDs 118 and receiver optics 116. In some embodiments, the receiver 110 includes a 2D array of germanium PDs 118. The receiver 110 is selected such that it is sensitive to infrared radiation, including at least the wavelength emitted by the illumination source 102, such that the receiver can form an image of the illuminated target 104 from the reflected radiation 108.
The optics 116 of the receiver may include: one or more optical elements, such as mirrors or lenses, are arranged to collect, concentrate, and optionally filter the reflected electromagnetic radiation 228 and focus the electromagnetic radiation onto a focal plane of the receiver 110.
The receiver 110 generates a plurality of electrical signals in response to electromagnetic radiation detected by the one or more germanium PDs 118, representing an image of the illuminated scene. The plurality of signals detected by the receiver 110 may be transmitted to the internal image processor 114 or an external image processor (not shown) for processing into a SWIR image of the target 104. In some embodiments, the receiver 110 is activated multiple times to create multiple "time slices," each covering a particular range of distances. In some embodiments, the image processor 114 combines the slices to create a single image with a greater visual depth, such as proposed by Gruber, Tobias, et al., "Gated2Depth: Real-time dense lidar from gated images," arXiv preprint arXiv:1902.04997 (2019), which is incorporated herein by reference in its entirety.
In the automotive field, the targets 104 of the image within the field of view (FOV) of the receiver 110 generated by the multiple imaging systems 100' or 100″ may be processed to provide various driver assistance and safety functions, such as: forward Collision Warning (FCW), lane Departure Warning (LDW), traffic Sign Recognition (TSR), and detection of related entities such as pedestrians or oncoming vehicles. The generated image may also be displayed to the driver, for example projected on a head-up display (HUD) on the vehicle windshield. Additionally or alternatively, multiple imaging systems 100' or 100″ may interface to a vehicle ECU 120 to provide images or video to enable autopilot under low light levels or poor visibility conditions.
In many active imaging scenarios, a light source, such as a laser, is used in combination with an array of light receivers. Since the germanium PD operates in the SWIR band, high power light pulses are possible without exceeding human eye safety regulations. For implementation in an automotive scene, a typical pulse length is about 100 nanoseconds (ns), although longer pulse durations of up to about 1 microsecond are contemplated in some embodiments. A peak pulse power of about 300 kilowatts (kW) is allowed in view of the safety of the human eye, but current laser diodes cannot practically achieve this level. Thus, in the present system, the high power pulses are generated by a QS laser. In some embodiments, the laser is a P-QS laser to further reduce cost. In some embodiments, the laser is an active QS laser.
As used herein, the term "target" means any imaged entity, object, region, or scene. Non-limiting examples of targets in automotive applications include vehicles, pedestrians, physical obstacles, or other objects.
In some embodiments, an active imaging system includes: an illumination source for emitting a pulse of radiation toward a target, thereby causing radiation to be reflected from the target, wherein the illumination source comprises a QS laser; and a receiver comprising one or more germanium PDs for receiving said reflected radiation. In some embodiments, the illumination source operates in the SWIR spectral band.
In some embodiments, the QS laser is an active QS laser. In some embodiments, the QS laser is a P-QS laser. In some embodiments, the P-QS laser includes an SA. In some embodiments, the SA is selected from the group consisting of: Co2+:MgAl2O4, Co2+:spinel, Co2+:ZnSe and other cobalt-doped crystals, V3+:YAG, doped glass, quantum dots, semiconductor SA mirrors (SESAMs), and Cr4+:YAG SAs.
In some embodiments, the system further comprises a QS pulse photodetector for detecting a radiation pulse emitted by the P-QS laser. In some embodiments, the receiver is configured to be activated after a time sufficient for the radiation pulse to travel to a target and return to the receiver. In some embodiments, the receiver is activated for an integration time during which the DC power of the germanium PD does not exceed the kTC noise power of the germanium PD.
In some embodiments, the receiver generates a plurality of electrical signals in response to the reflected radiation received by a plurality of germanium PDs, wherein the plurality of electrical signals represent an image of the target illuminated by the pulses of radiation. In some embodiments, the plurality of electrical signals are processed by one of an internal image processor or an external image processor into an image of the target. In some embodiments, the image of the target is processed to provide one or more of forward collision warning, lane departure warning, traffic sign identification, and detection of pedestrians or oncoming vehicles.
In further embodiments, a method for active imaging includes the steps of: releasing a pulse of light through an illumination source, the illumination source comprising an active QS laser; and activating a receiver for a limited period of time after sufficient time for the light pulse to travel to a target and return to the QS laser, the receiver including one or more germanium PDs to receive a reflected light pulse (reflected light pulse) reflected from the target. In some embodiments, the illumination source operates in the Short Wave Infrared (SWIR) spectral band. In some embodiments, the limited period of time is equal to an integration time during which the DC power of the germanium PD does not exceed a kTC noise power of the germanium PD.
In some embodiments, the receiver generates the plurality of electrical signals in response to the reflected light pulses received by the plurality of germanium PDs, wherein the plurality of electrical signals represent an image of the target illuminated by the light pulses. In some embodiments, the plurality of electrical signals are processed by one of an internal image processor or an external image processor into an image of the target. In some embodiments, the image of the target is processed to provide one or more of forward collision warning, lane departure warning, traffic sign identification, and detection of pedestrians or oncoming vehicles.
In further embodiments, a method for active imaging includes the steps of: pumping a P-QS laser, said P-QS laser comprising a SA to cause release of a light pulse when said SA is saturated; detecting said release of said light pulses by a QS pulse photodetector; based on the detected light pulse release, after a time sufficient for the light pulse to travel to a target and return to the QS laser, a receiver is activated for a limited period of time, the receiver including one or more germanium PDs to receive the reflected light pulse. In some embodiments, the QS laser operates in the Short Wave Infrared (SWIR) spectral band.
In some embodiments, the SA is selected from the group consisting of Co2+:MgAl2O4, Co2+:spinel, Co2+:ZnSe, other cobalt-doped crystals, V3+:YAG, doped glass, quantum dots, semiconductor SA mirrors (SESAMs), and Cr4+:YAG SAs. In some embodiments, the limited period of time is equal to an integration time during which the DC power of the germanium PD does not exceed the kTC noise power of the germanium PD.
In some embodiments, the receiver generates a plurality of electrical signals in response to the reflected light pulses received by the plurality of germanium PDs, wherein the plurality of electrical signals represent an image of the target illuminated by the light pulses. In some embodiments, the plurality of electrical signals are processed by one of an internal image processor or an external image processor into an image of the target. In some embodiments, the image of the target is processed to provide one or more of forward collision warning, lane departure warning, traffic sign identification, and detection of pedestrians or oncoming vehicles.
Various exemplary embodiments relate to a system and method for high SNR active SWIR imaging using multiple germanium-based PDs. In some embodiments, the imaging system is a gated imaging system (gated imaging system). In some embodiments, the pulsed illumination source is an active or P-QS laser.
Referring now to figs. 3A, 3B, and 3C, a flow chart and a plurality of schematic diagrams of a method of operation of an active SWIR imaging system of some embodiments are shown, respectively. The process 300 illustrated in fig. 3A is based on the system 100' as described with reference to fig. 1B. In step 302, the pump 124 of the illumination source 102A is activated to pump the gain medium 122. In step 304, the active QS element 126A releases a light pulse in the direction of a target 104, the target 104 being located at a distance D. In step 306, at time T, the light pulse impinges on the target 104 and generates reflected radiation that returns toward the system 100' and the receiver 110. In step 308, after waiting a time T2, the receiver 110 is activated to receive the reflected radiation. The return propagation delay T2 consists of the time of flight of the pulse from the illumination source 102A to the target 104 plus the time of flight of the light signal reflected from the target 104. Thus, T2 is known for a target 104 at a distance D from the illumination source 102A and the receiver 110. The activation period ΔT of the receiver 110 is determined based on the required depth of field (DoV), which is given by DoV = c×ΔT/2, where c is the speed of light. A typical ΔT of 100 ns provides a depth of field of 15 meters. In step 310, the reflected radiation is received by the receiver 110 for a period of ΔT. The received data from the receiver 110 is processed by the image processor 114 (or an external image processor) to generate a received image. Process 300 may be repeated N times in each frame, where a frame is defined as the set of data transmitted from the receiver 110 to the image processor 114 (or an external image processor). In some embodiments, N is between 1 and 10,000.
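The timing relations used in process 300 can be sketched as follows: the gate-open delay is the round-trip time of flight 2D/c, and the gate width ΔT sets the imaged depth slice. The values below reproduce the example in the text (target at 150 m; a 100 ns gate gives a 15 m depth of field):

```python
# Gating arithmetic for range-gated active imaging.

C_M_PER_S = 3.0e8  # speed of light [m/s]

def gate_delay_s(target_distance_m: float) -> float:
    # Round-trip propagation delay T2 for a target at distance D.
    return 2.0 * target_distance_m / C_M_PER_S

def depth_of_field_m(gate_width_s: float) -> float:
    # Depth slice covered by one gate of width dT: DoV = c * dT / 2.
    return C_M_PER_S * gate_width_s / 2.0

t2 = gate_delay_s(150.0)        # 1 us gate-open delay
dov = depth_of_field_m(100e-9)  # 15 m depth of field
```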
Referring now to figs. 4A, 4B, and 4C, a flowchart and a plurality of schematic diagrams, respectively, of an exemplary method of operation of an active SWIR imaging system of some embodiments are shown. The process 400 illustrated in fig. 4A is based on the system 100″ as described with reference to fig. 1C. In step 402, the pump 124 of the illumination source 102P is activated to pump the gain medium 122 and saturate the SA 126P. In step 404, after reaching a saturation level, the SA 126P releases a pulse of light in the direction of a target 430, the target 430 being located at a distance D. In step 406, the QS pulse photodetector 128 detects the released light pulse. In step 408, at time T, the light pulse impinges on the target 430 and generates reflected radiation that returns toward the system 100″ and the receiver 110. In step 410, after waiting a time T2 from the released light pulse detected by the QS pulse photodetector 128, the receiver 110 is activated to receive the reflected radiation. The return propagation delay T2 includes the time of flight of the pulse from the illumination source 102P to the target 430 plus the time of flight of the optical signal reflected from the target 430. Thus, T2 is known for a target 430 at a distance D from the illumination source 102P and the receiver 110. The activation period ΔT is determined according to the required depth of field (DoV). In step 412, the receiver 110 receives the reflected radiation for a period of ΔT. The received data from the receiver 110 is processed by the image processor 114 (or by an external image processor) to generate a received image. Process 400 may be repeated N times in each frame. In some embodiments, N is between 1 and 10,000.
Referring to all of the imaging systems 100, 100', and 100", it is noted that any of those imaging systems may include: a readout circuit for reading out an accumulation of charge collected by each germanium PD after the integration time to provide the detection signal of the respective PD. Thus, unlike in LIDARs or other depth sensors, the readout process may be performed after the elapse of the integration time, and thus after the signal has been irreversibly summed over a large range of distances.
Referring to all of the imaging systems 100, 100' and 100", optionally, the receiver 110 outputs a set of detection signals representative of the charge accumulated by each of the plurality of germanium PDs over the integration time, wherein the set of detection signals is representative of an image of the target illuminated by at least one SWIR radiation pulse.
Referring to all of the imaging systems 100, 100' and 100", the imaging systems may optionally have at least one Diffractive Optical Element (DOE) operable to improve the illumination uniformity of the light of the pulsed illumination source prior to emitting light towards the target. As described above, a high peak power pulsed light source 102 may emit an insufficiently uniform illumination distribution over different portions of the FOV. The DOE (not illustrated) may improve uniformity of the illumination to generate a plurality of high quality images of the FOV. Note that a comparable degree of illumination uniformity is not typically required in LIDAR systems and other depth sensors, so they may not include DOE elements, for reasons of cost, system complexity, system volume, and so on. For example: in many LIDAR systems, it does not matter whether certain areas in the FOV receive a higher illumination density than other portions of the FOV, as long as the entire FOV receives enough illumination (above a threshold that allows detection of objects at a minimum required distance). The DOE of the system 100, if implemented, may be used, for example, to reduce speckle effects. Note that any of imaging systems 100, 100', and 100" may further include: other types of optics for directing light from the light source 102 to the FOV, such as lenses, mirrors, prisms, waveguides, and the like.
Referring to all of the imaging systems 100, 100' and 100", the controller 112 may optionally be operable to activate the receiver 110 to sequentially acquire a series of gated images, each gated image representing the germanium PD detection signals over a different range of distances, and an image processor operable to combine the series of images into a single two-dimensional image. For example: a first image may be acquired from light reflected between 0 and 50 meters (m) from the imaging sensor, a second image from light reflected between 50 and 100 meters, and a third image from light reflected between 100 and 125 meters, and the image processor 114 may combine the plurality of 2D images into a single 2D image. In this way, each range of distances is acquired while the accumulated DC noise is still smaller than the readout noise introduced by the readout circuitry, at the cost of using more light pulses and more computations. The color value (e.g., gray value) of each pixel of the final image may be determined based on a function of that pixel's values in the plurality of gated images (e.g., a maximum value, or a weighted average of all values).
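The per-pixel combination described above can be sketched in a few lines. This is a minimal illustration (not the patent's implementation) using the maximum as the combining function; a weighted average would work the same way:

```python
from typing import List

def fuse_gated_images(images: List[List[List[int]]]) -> List[List[int]]:
    """Per-pixel maximum over a stack of equally sized 2D gated images."""
    rows, cols = len(images[0]), len(images[0][0])
    return [[max(img[r][c] for img in images) for c in range(cols)]
            for r in range(rows)]

# Three 2x2 gated images, e.g. covering 0-50 m, 50-100 m and 100-125 m:
g1 = [[10, 0], [0, 5]]
g2 = [[3, 40], [0, 0]]
g3 = [[0, 0], [22, 1]]
print(fuse_gated_images([g1, g2, g3]))  # [[10, 40], [22, 5]]
```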
All of imaging systems 100, 100', and 100", which may be uncooled germanium-based SWIR imaging systems, are operable to detect a 1 m target having a 20% SWIR reflectivity (in the relevant spectral range) at a distance exceeding 50 meters (m).
Referring to all of the imaging systems 100, 100', and 100", the pulsed illumination source 102 may be a QS laser operable to emit eye-safe laser pulses having a pulse energy between 10 millijoules (mJ) and 100 mJ. Although not required, the illumination wavelength may be selected to match a solar absorption band (e.g., the illumination wavelength may be between 1.3 micrometers (μm) and 1.4 μm).
Referring to all of the imaging systems 100, 100' and 100", the output signal of each germanium PD used for image generation may be a single scalar per PD. Referring to all of the imaging systems 100, 100' and 100", each PD may output a cumulative signal representing a wide range of distances. For example: some, most, or all of the germanium PDs of receiver 110 may each output a detection signal that jointly represents light reflected to the respective PD from 20 m, from 40 m, and from 60 m.
Another distinguishing feature of many imaging systems 100, 100' and 100" compared to many known systems is that the pulsed illumination is not used to freeze the rapid movement of objects in the scene (i.e., unlike photographic flash illumination), and is also used for static scenes. Another distinguishing feature of many imaging systems 100, 100' and 100" compared to many prior art systems is that the gating of the image is primarily intended to avoid internal noise in the system, rather than external noise such as sunlight, which is the nuisance addressed by some prior art techniques.
Note that any of the components, features, modes of operation, system architecture, and internal relationships discussed above with respect to the systems 100, 100', and 100 "may be implemented in any EO system discussed below, if necessary, such as the systems 700, 1300', 1600', 1700, 1800, 1900, 2300, and 3600.
Fig. 5 is a flow chart illustrating a method 500 to generate SWIR images of objects in a FOV of an EO system in accordance with examples of the inventive subject matter. Referring to the examples set forth with respect to the previous figures, the method 500 may be performed by any of the imaging systems 100, 100', and 100″. Note that method 500 may also be implemented by any of the active imaging systems described below (such as systems 700, 1300', 1600', 1700, 1800, 1900, 2300, and 3600).
The method 500 begins with a step (or "stage") 510 of emitting at least one illumination pulse toward the FOV, thereby causing SWIR radiation to be reflected from at least one target. Hereinafter, "step" and "stage" may be used interchangeably. Optionally, the one or more pulses may be high peak power pulses. For example: multiple illumination pulses may be used to achieve an overall higher level of illumination than is achievable with a single pulse. Referring to the examples of the figures, step 510 may optionally be performed by controller 112.
A step 520 includes triggering initiation of continuous signal acquisition by an imaging receiver comprising a plurality of germanium PDs (in the sense discussed above with respect to receiver 110), the receiver 110 being operable to detect the reflected SWIR radiation. The continuous signal acquisition of step 520 means that the charge is continuously and irreversibly collected (i.e., it is impossible to learn what level of charge is collected at any intermediate time), and not in small increments. The triggering of step 520 may be performed prior to step 510 (e.g., if the detection array requires an acceleration time), concurrently with step 510, or after step 510 ends (e.g., beginning detection at a non-zero distance from the system). Referring to the example of the drawings, step 520 may optionally be performed by controller 112.
Step 530 begins after triggering step 520 and includes collecting, as a result of the triggering, for each of the plurality of germanium PDs at least: charge resulting from impingement of the reflected SWIR radiation on the respective germanium PD, DC noise greater than 50 μA/cm² whose collected charge is related to the integration time, and readout noise that is independent of the integration time. Referring to the examples of the drawings, step 530 may optionally be performed by receiver 110.
Step 540 includes: triggering the stopping of the collection of the charge while the amount of charge collected due to DC noise is still lower than the charge equivalent of the integration-time-independent readout noise. The integration time is the duration from the start of the collecting of step 530 to the stopping of step 540. Referring to the example of the drawings, step 540 may optionally be performed by controller 112.
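The stop criterion of step 540 bounds the integration time: collection must end while the dark-current (DC) charge is still below the charge equivalent of the readout noise. A rough sketch of that bound, where the photodiode area and the 300-electron readout noise are assumed example values (only the 50 μA/cm² figure comes from the text):

```python
def max_integration_time(dark_current_a_per_cm2: float,
                         pd_area_cm2: float,
                         readout_noise_electrons: float) -> float:
    """Longest integration time (s) for which the charge accumulated from DC
    (dark current) noise stays below the charge equivalent of the readout
    noise -- i.e., the stop criterion of step 540."""
    e = 1.602176634e-19  # elementary charge, coulombs
    readout_charge_c = readout_noise_electrons * e
    dc_current_a = dark_current_a_per_cm2 * pd_area_cm2
    return readout_charge_c / dc_current_a

# 50 uA/cm^2 dark current, a 10 um x 10 um photodiode, and an assumed
# readout noise of 300 electrons:
print(max_integration_time(50e-6, (10e-4) ** 2, 300.0))  # just under 1 us
```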
A step 560 is performed after step 540 ends, and step 560 includes generating an image of the FOV based on the charge levels collected by each of the plurality of germanium PDs. As described above with respect to imaging systems 100, 100' and 100", the image generated in step 560 is a 2D image without depth information. Referring to the example of the drawings, step 560 may optionally be performed by imaging processor 114.
Optionally, the stopping of the collection as a result of step 540 may be followed by optional step 550 of reading a signal related to the amount of charge collected by each germanium PD of said plurality of germanium PDs by a readout circuit, amplifying said read signal, and providing said amplified signal (optionally after further processing) to an image processor, said image processor performing said generating of said image as in step 560. Referring to the example of the drawings, step 550 may optionally be performed by the readout circuitry (not illustrated above, but which may be equivalent to any readout circuitry discussed below, such as readout circuitry 1610, 2318, and 3630). Note that step 550 is optional, as other suitable methods of reading out the detection results from the plurality of germanium PDs may be implemented.
Optionally, the signal output by each of the plurality of germanium PDs is a single scalar that jointly represents the amount of light reflected from 20 meters, the amount of light reflected from 40 meters, and the amount of light reflected from 60 meters.
Optionally, the generating of step 560 may include: generating the image based on a single scalar magnitude read for each of the plurality of germanium PDs. Optionally, the emitting of step 510 may include: increasing the illumination uniformity of the pulsed laser illumination (of one or more lasers) by passing it through at least one Diffractive Optical Element (DOE) and emitting the resulting light to the FOV. Optionally, the DC noise is greater than 50 picoamps (pA) per germanium PD. Optionally, the plurality of germanium PDs are a plurality of silicon-germanium PDs (Si-Ge PDs), each comprising silicon and germanium. Optionally, the emitting is performed by at least one active QS laser. Optionally, the emitting is performed by at least one P-QS laser. Optionally, the collecting is performed while the receiver operates at a temperature above 30 ℃, and the image of the FOV is processed to detect multiple vehicles and pedestrians at ranges between 50 meters and 150 meters. Optionally, the emitting includes emitting a plurality of illumination pulses having a pulse energy between 10 millijoules and 100 millijoules into an unprotected eye of a person at a distance less than 1 meter without damaging the eye.
As previously described with respect to the active imaging systems 100, 100' and 100", several gated images may be combined into a single image. Optionally, the method 500 may include: repeating the sequence of emitting, triggering, collecting, and stopping a plurality of times, triggering the acquisition in each sequence at a different delay from the light emission. In each sequence, the method 500 may include: reading from the receiver, for each germanium PD of the plurality of germanium PDs, a detection value corresponding to a different distance range of more than 2 meters (such as 2.1 meters, 5 meters, 10 meters, 25 meters, 50 meters, or 100 meters). In such a case, the generating of the image in step 560 includes generating a single two-dimensional image based on the plurality of detection values read from the different germanium PDs in the different sequences. Note that since only a few images are taken, the multiple gated images are not sparse (i.e., there are many pixel detection values in all or most gated images). It is also noted that the plurality of gated images may have overlapping distance ranges. For example: a first image may represent distances in the range of 0 to 60 meters, a second image may represent distances in the range of 50 to 100 meters, and a third image may represent distances in the range of 90 to 120 meters. Figs. 6-11C illustrate SWIR EO systems, P-QS lasers which can be used in such systems, and methods for the operation and fabrication of such lasers.
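The repeated sequences above differ only in the delay between light emission and receiver activation. Under the simplifying assumption of sharp gate edges, the delay and gate width for each (possibly overlapping) distance range follow from round-trip times; a short sketch using the example ranges from the text (function name is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_for_range(d_near_m: float, d_far_m: float) -> tuple:
    """Return (trigger_delay_s, gate_width_s) so the receiver captures only
    light reflected from distances between d_near_m and d_far_m."""
    t_open = 2.0 * d_near_m / C   # round trip to the near edge
    t_close = 2.0 * d_far_m / C   # round trip to the far edge
    return t_open, t_close - t_open

# The overlapping example ranges from the text:
for near, far in [(0.0, 60.0), (50.0, 100.0), (90.0, 120.0)]:
    delay, width = gate_for_range(near, far)
    print(f"{near:5.0f}-{far:<5.0f} m: open after {delay * 1e9:6.1f} ns, "
          f"width {width * 1e9:6.1f} ns")
```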
Fig. 10 is a schematic functional block diagram illustrating an example of a SWIR optical system 700 according to examples of the inventive subject matter. The system 700 includes at least the P-QS laser 600, but may also include additional components, such as those shown in fig. 10:
a. a sensor 702 is operable to sense reflected light from the FOV of the system 700, particularly reflected illumination of the laser 600 reflected from external objects 910. Referring to other examples, the sensor 702 may be implemented as an imaging receiver, PDA, or PDD as discussed in the present disclosure, such as components 110, 1300', 1600', 1700, 1800, 1900, 2302, and 3610.
b. A processor 710 is operable to process the plurality of sensing results of the sensor 702. The output of the processing may be an image of the FOV, a depth model of the FOV, spectral analysis of one or more portions of the FOV, information of identified objects in the FOV, light statistics on the FOV, or any other type of output. Referring to other examples, processor 710 may be implemented as any of the processors discussed in the present disclosure, such as processors 114, 1908, 2304, and 3620.
c. A controller 712 is operable to control the activities of the laser 600 and/or the processor 710. For example: the controller 712 may include: timing, synchronization, and other operating parameters of the processor 710 and/or laser 600 are controlled. Referring to numerous other examples, controller 712 may be implemented as any of numerous other controllers discussed in this disclosure, such as controllers 112, 1338, 2314, and 3640.
Optionally, the system 700 may include: a SWIR PDA 706 that is sensitive to the wavelength of the laser light. Thus, the SWIR optical system may be used as an active SWIR camera, a SWIR time of flight (ToF) sensor, a SWIR light detection and ranging (LIDAR) sensor, etc. The ToF sensor may be sensitive to the wavelength of the laser light. Alternatively, the PDA may be a CMOS-based PDA sensitive to the SWIR frequencies emitted by the laser 600, such as a CMOS-based PDA designed and manufactured by TriEye Ltd. of Tel Aviv, Israel.
Optionally, the system 700 may include: a processor 710 for processing the detection data from the SWIR PDA (or any other photosensitive sensor of system 700). For example: the processor may process the detection information to provide a SWIR image of a field of view (FOV) of the system 700, to detect objects in the FOV, and the like. Optionally, the SWIR optical system may include: a time of flight (ToF) SWIR sensor sensitive to the wavelength of the laser light, and a controller operable to synchronize operation of the ToF SWIR sensor and the P-QS SWIR laser to detect a distance of at least one object in a FOV of the SWIR optical system. Optionally, the system 700 may include: a controller 712, the controller 712 being operable to control one or more aspects of the operation of the laser 600 or of other components of the system, such as a PDA (e.g., a focal plane array, FPA). For example: some parameters of the laser may be controlled by the controller, including timing, duration, intensity, focus, and the like. The controller may, although not necessarily, control the operation of the laser based on the detection results of the PDA (either directly, or based on the processing of the processor). Alternatively, the controller may be operable to control the laser pump or another type of light source to affect the activation parameters of the laser. Alternatively, the controller may be operable to dynamically vary the pulse repetition rate. Optionally, the controller may be operable to control dynamic modification of the light shaping optics, for example: to improve a signal-to-noise ratio (SNR) in specific regions of the FOV. Alternatively, the controller may be operable to control the illumination module to dynamically vary the pulse energy and/or duration (e.g., in the same ways possible in many other P-QS lasers, such as varying the focus of the pump laser, etc.).
Further and optionally, the system 700 may include: temperature control (e.g., passive temperature control, active temperature control) for controlling a temperature of the laser or one or more components thereof (e.g., the pump diode) as a whole. Such temperature control may include: such as a thermoelectric cooler (TEC), a fan, a heat sink, resistive heater under pump diodes, and so on.
Further and optionally, the system 700 may include: another laser used to bleach at least one of GM 602 and SA 604. Optionally, the system 700 may include: an internal photosensitive detector (such as one or more PDs, e.g., PD 226 as described above) operable to measure the time at which a pulse is generated by laser 600. In such a case, the controller 712 may be operable to send a trigger signal, based on the timing information obtained from the internal photosensitive detector, to the PDA 706 (or other type of camera or sensor 702), which detects reflections of the laser light from objects in the FOV of the system 700.
The main industry requiring large quantities of lasers in the above spectral range (1.3 to 1.5 μm) is the electronics industry, for optical data storage, which has driven the cost of such diode lasers down to dollars per device, or even lower. However, these lasers are not suitable for other industries, such as the automotive industry, which require lasers with fairly high peak power and beam brightness and which will be used in harsh environmental conditions.
Note that there is no scientific consensus about the wavelength range that is considered to be part of the SWIR spectrum. For the purposes of the present invention, however, the SWIR spectrum comprises electromagnetic radiation having wavelengths greater than those of the visible spectrum and comprising at least a spectral range between 1300 and 1500 nm.
Although not limited to this use, one or more P-QS lasers 600 may be used as the illumination source 102 of any of the imaging systems 100, 100', and 100″. The laser 600 may be used in any other EO system in the SWIR range where pulsed illumination is required, such as lidars, spectrometers, communication systems, and the like. It is noted that the proposed lasers 600 and methods for fabricating such lasers allow for high volume manufacturing of lasers operating in the SWIR spectral range at relatively low production costs.
The P-QS laser 600 includes at least a crystalline gain medium 602 (hereinafter also referred to as "GM"), a crystalline SA 604, and an optical cavity 606 in which the crystalline materials are positioned so as to allow light to propagate in the gain medium 602, enhancing the generation of a laser beam 612 (such as shown in fig. 8). The optical cavity, also known by the terms "optical resonator" and "resonant cavity", includes a high reflectivity mirror 608 (also referred to as a "high reflector") and an output coupler 610. Discussed below are unique and novel combinations of several different types of crystalline materials, fabricated using a variety of techniques, allowing for the mass production of lasers of the SWIR spectral range at reasonable cost. For the sake of brevity of this disclosure, general details known in the art regarding P-QS lasers are not provided herein, but are readily available from a wide variety of sources. As known in the art, the saturable absorber of the laser acts as the Q-switch of the laser. The term "crystalline material" broadly includes any material in single crystal form or in polycrystalline form.
The dimensions of the connected crystal gain medium and crystal SA may depend on the purpose of designing a particular P-QS laser 600. In a non-limiting example, a combined length of the SA and the GM is between 5 and 15 millimeters. In a non-limiting example, the combined length of the SA and the GM is between 2 and 40 millimeters. In a non-limiting example, a diameter of the combination of the SA and the GM (e.g., if a cylinder, or confined to an imaginary such cylinder) is between 2 and 5 millimeters. In a non-limiting example, a diameter of the combination of SA and GM is between 0.5 and 10 millimeters.
The P-QS laser 600 includes a gain medium crystalline material (GMC) rigidly connected to a SA crystalline material (SAC). The rigid coupling may be achieved in any manner known in the art, such as using adhesives, diffusion bonding, composite crystal bonding, growing one on top of the other, and the like. However, as described below, rigidly connected crystalline materials in ceramic form can be achieved using simple and inexpensive methods. Note that the GMC and SAC materials may be rigidly connected to each other directly, but may alternatively be rigidly connected to each other via an intermediate object (e.g., another crystal). In some embodiments, both the gain medium and the SA may be implemented on a monolithic crystalline material (a single piece of crystalline material), either by doping different portions of the monolithic crystalline material with different dopants (such as those discussed below with respect to the SAC material and the GMC), or by co-doping the same volume of the monolithic crystalline material with two dopants (such as co-doping with Nd³⁺ and V³⁺). Alternatively, the gain medium may be grown on a single crystal saturable absorber substrate (e.g., using liquid phase epitaxy, LPE). Note that, in contrast to the separate GMC and SAC crystalline materials widely discussed in the following disclosure, a monolithic ceramic crystalline material doped with two dopants may also be used in any of the implementations.
Fig. 7A, 7B and 7C are schematic functional block diagrams illustrating examples of P-QS lasers 600 according to the presently disclosed subject matter. In fig. 7A, the two dopants are implemented on two portions of a common crystalline material 614 (acting as both GM and SA), while in fig. 7B, the two dopants are implemented together in a common volume of common crystalline material 614 (in the illustrated case, the entirety of the common crystal). Alternatively, the GM and the SA may be implemented on a single piece of crystalline material doped with neodymium and at least one other material. Alternatively (e.g., as shown in fig. 7C), either or both of the output coupler 610 and the high reflectivity mirror 608 may be glued directly to one of the crystalline materials (e.g., the GM or the SA, or a crystal combining the two).
At least one of the SAC and the GMC is a ceramic crystalline material, i.e., a relevant crystalline material in ceramic form (e.g., a polycrystalline form), such as doped yttrium aluminum garnet (YAG) or a vanadium-doped material. Having one (and especially two) of the crystalline materials in ceramic form allows for higher volume and lower cost production. For example: instead of growing individual single crystal materials in a slow and limited process, polycrystalline materials may be manufactured by powder sintering (i.e., compacting and possibly heating a powder to form a solid mass), low temperature sintering, vacuum sintering, and the like. One of the crystalline materials (SAC or GMC) may be sintered on top of the other, eliminating complex and expensive process steps such as polishing, diffusion bonding, or surface activated bonding. Optionally, at least one of the GMC and the SAC is polycrystalline. Optionally, both the GMC and the SAC are polycrystalline.
Various combinations of crystalline materials may be used for the GMC and the SAC; such combinations may include:
a. The GMC is a ceramic neodymium-doped yttrium aluminum garnet (Nd:YAG) and the SAC is (a) a ceramic trivalent-vanadium-doped yttrium aluminum garnet (V³⁺:YAG) or (b) a ceramic cobalt-doped crystalline material. Alternatively, the ceramic cobalt-doped crystalline material may be a ceramic divalent-cobalt-doped crystalline material. In these alternatives, both the Nd:YAG and the SAC selected from the above group are in ceramic form. A cobalt-doped crystalline material is a crystalline material doped with cobalt; examples include cobalt-doped spinel (Co²⁺:MgAl₂O₄), cobalt-doped zinc selenide (Co²⁺:ZnSe), and cobalt-doped YAG (Co²⁺:YAG). Although this is not necessarily so, in this alternative the high reflectivity mirror and the output coupler may optionally be rigidly connected to the gain medium and the SA, such that the P-QS laser is a monolithic microchip P-QS laser (e.g., as shown in figs. 8 and 10).
b. The GMC is a ceramic neodymium-doped yttrium aluminum garnet (Nd:YAG) and the SAC is a non-ceramic SAC selected from a group of doped crystalline materials consisting of: (a) trivalent-vanadium-doped yttrium aluminum garnet (V³⁺:YAG) and (b) cobalt-doped crystalline materials. Alternatively, the cobalt-doped crystalline material may be a divalent-cobalt-doped crystalline material. In such a case, the high reflectivity mirror 608 and output coupler 610 are rigidly connected to the gain medium and the SA, such that the P-QS laser 600 is a monolithic microchip P-QS laser.
c. The GMC is a ceramic neodymium-doped rare earth crystalline material and the SAC is a ceramic crystalline material selected from a group of doped crystalline materials consisting of: (a) trivalent-vanadium-doped yttrium aluminum garnet (V³⁺:YAG) and (b) cobalt-doped crystalline materials. Alternatively, the cobalt-doped crystalline material may be a divalent-cobalt-doped crystalline material. Although not required, in this option the high reflectivity mirror 608 and output coupler 610 may optionally be rigidly connected to the gain medium and the SA, such that the P-QS laser 600 is a monolithic microchip P-QS laser.
Note that in any implementation, a doped crystalline material may be doped with more than one dopant. For example: the SAC may be doped with the main dopant disclosed above and with at least one other doping material (e.g., at a significantly lower level). A neodymium-doped rare earth crystalline material is a crystalline material whose unit cell contains a rare earth element (one of the group of 17 chemical elements consisting of the 15 lanthanides plus scandium and yttrium), and which is doped with neodymium (such as triply ionized neodymium, Nd³⁺) replacing the rare earth element in a portion of the unit cells. Several non-limiting examples of neodymium-doped rare earth crystalline materials that may be used in the present invention are:
a. Neodymium-doped YAG (Nd:YAG, discussed above), neodymium-doped potassium yttrium tungstate (Nd:KYW), neodymium-doped lithium yttrium fluoride (Nd:YLF), and neodymium-doped yttrium orthovanadate (Nd:YVO₄); the rare earth element in all of these is yttrium, Y;
b. Neodymium-doped gadolinium orthovanadate (Nd:GdVO₄), neodymium-doped gadolinium gallium garnet (Nd:GGG), and neodymium-doped potassium gadolinium tungstate (Nd:KGW), all of which have the rare earth element gadolinium, Gd;
c. neodymium-doped scandium lanthanum borate (Nd: LSB), wherein the rare earth element is scandium;
d. other neodymium-doped rare earth crystalline materials may be used, wherein the rare earth may be yttrium, gadolinium, scandium, or any other rare earth.
The following discussion applies to any optional combination of GMCs and SACs.
Optionally, the GMC is directly rigidly connected to the SAC. Alternatively, the GMC and the SAC may be indirectly connected (e.g., each of the SAC and GMC are connected via a group of one or more intermediate crystalline materials and/or via one or more other solid materials transparent to the relevant wavelengths). Optionally, one or both of the SAC and the GMC are transparent to the relevant wavelengths.
Alternatively, the SAC may be a cobalt-doped spinel (Co²⁺:MgAl₂O₄). Alternatively, the SAC may be cobalt-doped YAG (Co:YAG); this may enable co-doping of cobalt and neodymium (Nd) on the same YAG. Alternatively, the SAC may be cobalt-doped zinc selenide (Co²⁺:ZnSe). Alternatively, the SAC may be a ceramic cobalt-doped crystalline material.
Optionally, an initial transmittance (T0) of the SA is between 75% and 90%. Optionally, the initial transmittance of the SA is between 78% and 82%.
The wavelengths emitted by the laser depend on the materials used in its construction, especially on the materials and dopants of the GMC and the SAC. Some examples of output wavelengths include wavelengths in the 1,300 nm to 1,500 nm range. Some more specific examples include 1.32 μm or about 1.32 μm (e.g., 1.32 μm ± 3 nm), 1.34 μm or about 1.34 μm (e.g., 1.34 μm ± 3 nm), and 1.44 μm or about 1.44 μm (e.g., 1.44 μm ± 3 nm). A corresponding imager that is sensitive to one or more of these wavelength ranges may be included in SWIR optical system 700 (e.g., as shown in fig. 10).
Fig. 8 and 9 are various schematic functional diagrams to illustrate SWIR optical systems 700 according to examples of the presently disclosed subject matter. As exemplified in these figures, laser 600 may include, in addition to those components discussed above: numerous additional components, such as (but not limited to): a. a light source such as a flash 616 or a laser diode 618, the laser diode 618 acting as a pump for the laser. Referring to the previous examples, the light source may be used as the pump 124.
b. Focusing optics 620, such as a lens, are used to focus light from the light source, such as 618, onto the optical axis of the laser 600.
c. A diffuser or other optics 622 is used to manipulate the laser beam 612 after the laser beam 612 exits the optical cavity 606.
Alternatively, SWIR optical system 700 may include: optics 708 to spread the laser over a wider FOV to improve eye safety issues in the FOV. Alternatively, SWIR optical system 700 may include: optics 704 to collect reflected laser light from the FOV and direct it onto the sensor 702, for example: onto a photodetector array (PDA) 706, see fig. 10. Optionally, the P-QS laser 600 is a Diode Pumped Solid State Laser (DPSSL).
Optionally, the P-QS laser 600 includes at least one diode pump light source 872 and optics 620 for focusing the light of the diode pump light source into the optical resonator (optical cavity). Optionally, the light source is located on the optical axis (as an end pump). Alternatively, the light source may be rigidly connected to the high reflectivity mirror 608 or SA 604 such that the light source is part of a monolithic microchip P-QS laser. Optionally, the light source of the laser may include: one or more Vertical Cavity Surface Emitting Laser (VCSEL) arrays. Optionally, the P-QS laser 600 includes at least one VCSEL array and optics for focusing the light of the VCSEL array into the optical resonator. The wavelength emitted by the light source (e.g., the laser pump) may depend on the crystalline materials and/or dopants used in the laser. Some exemplary pump wavelengths include: 808 nm or about 808 nm, 869 nm or about 869 nm, and wavelengths slightly above 900 nm.
The power of the laser may depend on the use for which it is designed. For example, the laser output power may be between 1W and 5W, between 5W and 15W, between 15W and 50W, between 50W and 200W, or above 200W.
The QS laser 600 is a pulsed laser and may have different repetition rates, pulse energies, and pulse durations, depending on the application for which it is designed. For example, the repetition rate of the laser may be between 10 Hz and 50 Hz, or between 50 Hz and 150 Hz. The pulse energy of the laser may be, for example, between 0.1 mJ and 1 mJ, between 1 mJ and 2 mJ, between 2 mJ and 5 mJ, or higher than 5 mJ. The pulse duration of the laser may be, for example, between 10 ns and 100 ns, between 0.1 μs and 100 μs, or between 100 μs and 1 ms. The size of the laser may also vary, for example depending on the size of its components. For example, the laser may have dimensions X1 by X2 by X3, where each dimension (X1, X2, and X3) is between 10 mm and 100 mm, between 20 mm and 200 mm, and so on. The output coupling mirror may be flat, curved, or slightly curved.
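The pulse parameters above are linked by simple arithmetic: average power is pulse energy times repetition rate, and peak power is roughly pulse energy divided by pulse duration (assuming a rectangular pulse). A minimal sketch; the example values are assumptions for illustration, not parameters of laser 600:

```python
# Illustrative only: relates pulse energy, repetition rate, and pulse
# duration to average and peak power for a pulsed laser. The numeric
# values below are assumptions, not parameters of the disclosed laser.

def average_power_w(pulse_energy_j, repetition_rate_hz):
    """Average power = energy per pulse x pulses per second."""
    return pulse_energy_j * repetition_rate_hz

def peak_power_w(pulse_energy_j, pulse_duration_s):
    """Peak power ~ energy per pulse / pulse duration (rectangular pulse)."""
    return pulse_energy_j / pulse_duration_s

# Example: 2 mJ pulses at 100 Hz with 50 ns duration (assumed values)
e, f, tau = 2e-3, 100.0, 50e-9
print(average_power_w(e, f))   # 0.2 W average
print(peak_power_w(e, tau))    # 40 kW peak
```

Note that even modest average powers can correspond to very high peak powers at short pulse durations, which is characteristic of Q-switched operation.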
Optionally, in addition to the gain medium and the SA, the laser 600 may further include undoped YAG for preventing heat accumulation in the absorption region of the gain medium. The undoped YAG may optionally be shaped as a cylinder (e.g., a concentric cylinder) surrounding the gain medium and the SA.
Fig. 11A is a flow chart illustrating an example of a method 1100 in accordance with the presently disclosed subject matter. Method 1100 is a method for fabricating components for a P-QS laser, such as, but not limited to, P-QS laser 600 described above. Referring to the examples set forth with respect to the previous figures, the P-QS laser may be laser 600. Note that any of the variations discussed with respect to laser 600 or with respect to a component thereof may also be implemented with respect to the components of the P-QS laser fabricated in method 1100 or with respect to a corresponding component thereof, and vice versa.
Method 1100 begins with step 1102 of inserting at least one first powder into a first mold; the at least one first powder is subsequently processed in method 1100 to yield a first crystalline material. The first crystalline material serves as the GM or the SA of the P-QS laser. In some implementations, the gain medium of the laser is fabricated first (e.g., by sintering) and the SA is then fabricated on top of the previously fabricated GM (e.g., by sintering). In other implementations, the SA of the laser is fabricated first and the GM is then fabricated on top of the previously fabricated SA. In still other implementations, the SA and the GM are fabricated independently of each other and are then coupled to form a single rigid body. The coupling may be done as part of heating, as part of sintering, or later.
Step 1104 of method 1100 includes inserting at least one second powder into a second mold, the at least one second powder being different from the at least one first powder. The at least one second powder is later processed in method 1100 to yield a second crystalline material. The second crystalline material serves as the GM or the SA of the P-QS laser (so that one of the SA and the GM is made of the first crystalline material, and the other is made of the second crystalline material).
The second mold may be different from the first mold. Alternatively, the second mold may be the same mold as the first mold; in such a case, the at least one second powder may be inserted, for example, on top of the at least one first powder (or on top of the first green body, if already made), beside it, around it, and so on. If the same mold is used, the insertion of the at least one second powder may be performed before the at least one first powder is processed into a first green body, after it has been processed into the first green body, or at some point during that processing.
The first powder and/or the second powder may comprise crushed YAG (or any other of the aforementioned materials, such as spinel, MgAl2O4, or ZnSe) and doping materials (e.g., Nd3+, V3+, Co). Alternatively, the first powder and/or the second powder may comprise the raw materials used to make YAG (or any other such material, e.g., spinel, MgAl2O4, or ZnSe) and doping materials (e.g., Nd3+, V3+, Co).
Step 1106 is performed after step 1102 and includes compacting the at least one first powder in the first mold to yield a first green body. Step 1108 is performed after step 1104 and includes compacting the at least one second powder in the second mold to yield a second green body. If the at least one first powder and the at least one second powder are inserted into the same mold in steps 1102 and 1104, the compaction of the powders in steps 1106 and 1108 may be performed simultaneously (e.g., by pressing the at least one second powder, which in turn compresses the at least one first powder against the mold), but this is not required. For example, step 1104 (and thus also step 1108) may optionally be performed after the compaction of step 1106.
Step 1110 includes heating the first green body to yield a first crystalline material. Step 1112 includes heating the second green body to yield a second crystalline material. In various embodiments, the heating of the first green body may be performed before, simultaneously with, partially simultaneously with, or after each of steps 1108 and 1112.
Optionally, the heating of the first green body at step 1110 is performed prior to the compaction of the at least one second powder at step 1108 (and possibly also prior to its insertion at step 1104). The first green body and the second green body may be heated separately (e.g., at different times, at different temperatures, for different durations). The first green body and the second green body may be heated together (e.g., in the same oven), and may or may not be connected to each other during the heating. The first green body and the second green body may be subjected to different heating regimes, which may share a common part of the heating while differing in other parts. For example, one or both of the first and second green bodies may be heated separately from the other green body, and the two green bodies may then be heated together (e.g., after coupling, though not necessarily). Optionally, heating the first green body and heating the second green body comprises heating the first green body and the second green body simultaneously in a single oven. Note that, optionally, the coupling of step 1114 is a result of heating both green bodies simultaneously in a single oven. It is also noted that, optionally, the coupling of step 1114 is accomplished by co-sintering the two green bodies after they are physically connected to each other.
Step 1116 includes coupling the second crystalline material to the first crystalline material. The coupling may be performed in any manner known in the art, several non-limiting examples of which are discussed above with respect to the P-QS laser 600. It is noted that the coupling may have several sub-steps, some of which may be interleaved with different ones of steps 1106, 1108, 1110, and 1112 in different ways in different embodiments. The coupling yields a single rigid crystalline body that includes the GM and the SA.
Note that method 1100 may include additional steps used in the manufacture of crystals, in particular in the manufacture of ceramic or non-ceramic polycrystalline materials bonded to one another. A few non-limiting examples include powder preparation, binder burn-out, densification, annealing, polishing (if required, as described below), and the like.
The GM of the P-QS laser in method 1100 (which may be the first crystalline material or the second crystalline material, as described above) is a neodymium-doped crystalline material. The SA of the P-QS laser in method 1100 (which may be the first crystalline material or the second crystalline material, as described above) is selected from a group of crystalline materials consisting of: (a) a neodymium-doped crystalline material, and (b) a doped crystalline material selected from the group consisting of trivalent vanadium doped yttrium aluminum garnet (V3+:YAG) and cobalt-doped crystalline materials. At least one of the GM and the SA is a ceramic crystalline material. Optionally, both the GM and the SA are ceramic crystalline materials. Optionally, at least one of the GM and the SA is a polycrystalline material. Optionally, both the GM and the SA are polycrystalline materials.
Although additional manufacturing steps may be performed between the different stages of method 1100, in at least some implementations polishing of the first material prior to the bonding of the second material during sintering is not required.
Regarding the combinations of crystalline materials from which the GMC and the SAC may be made in method 1100, such combinations may include:
a. The GMC is a ceramic neodymium-doped yttrium aluminum garnet (Nd:YAG), and the SAC is (a) a ceramic trivalent vanadium doped yttrium aluminum garnet (V3+:YAG), or (b) a ceramic cobalt-doped crystalline material. In this alternative, both the Nd:YAG and the SAC selected from the above group are in ceramic form. A cobalt-doped crystalline material is a crystalline material doped with cobalt; examples include cobalt-doped spinel (Co:spinel, or Co2+:MgAl2O4) and cobalt-doped zinc selenide (Co2+:ZnSe). The high-reflectivity mirror and the output coupler in this option may optionally, though not necessarily, be rigidly connected to the GM and the SA, so that the P-QS laser is a monolithic microchip P-QS laser.
b. The GMC is a ceramic neodymium-doped yttrium aluminum garnet (Nd:YAG), and the SAC is a non-ceramic SAC selected from a group of doped crystalline materials consisting of: (a) trivalent vanadium doped yttrium aluminum garnet (V3+:YAG) and (b) cobalt-doped crystalline materials. In such a case, the high-reflectivity mirror and the output coupler are rigidly connected to the GM and the SA, such that the P-QS laser is a monolithic microchip P-QS laser.
c. The GMC is a ceramic neodymium-doped rare-earth-element crystalline material, and the SAC is selected from a group of doped crystalline materials consisting of: (a) trivalent vanadium doped yttrium aluminum garnet (V3+:YAG) and (b) cobalt-doped crystalline materials. The high-reflectivity mirror and the output coupler in this option may optionally, though not necessarily, be rigidly connected to the GM and the SA, so that the P-QS laser is a monolithic microchip P-QS laser.
Referring generally to method 1100, it is noted that one or both of the SAC and the GMC (and optionally one or more intermediate connecting crystalline materials, if any) are transparent to the relevant wavelengths (e.g., SWIR radiation).
Fig. 11B and 11C include several conceptual timelines for performing the method 1100 according to examples of the presently disclosed subject matter. To simplify the drawing, it is assumed that the SA is a result of the processing of at least one first powder and the gain medium is a result of the processing of at least one second powder. As mentioned above, the roles may be interchanged.
Fig. 12A schematically illustrates an example of a PS, numbered 1200, including a photodetector (e.g., a photodiode, PD) 1202 controlled by a voltage controlled current source (VCCS) 1204. Note that the VCCS 1204 may alternatively be external to the PS 1200 (e.g., if a single VCCS 1204 provides current to multiple PS). The VCCS 1204 is a dependent current source delivering a current proportional to a control voltage (labeled VCTRL in the figure). The PS and PDDs disclosed herein may include any suitable type of VCCS. Other ("additional") components of PS 1200 (not shown) are collectively represented by a generic block 1206. A PS such as PS 1200 and a photodetector such as photodetector 1202, when used for sensing, may also be referred to hereinafter as an "active" or "non-reference" PS/photodetector (as opposed to the PS and photodetectors used to determine the input of the control voltage to the current source).
Fig. 12B schematically illustrates another example of a PS, numbered 1200', which is an instance of PS 1200. In PS 1200', the additional components 1206 take the form of a "3T" (three transistor) structure. Any other suitable circuitry may be used as the additional components 1206.
The current source 1204 may be used to provide a current of the same magnitude as, but opposite direction to, the dark current generated by the PD 1202, thereby eliminating (or at least reducing) the DC. This is particularly useful if the PD 1202 is characterized by a high DC. In this way, the DC-induced portion of the charge flowing from the PD to a capacitance (which, as described above, may be provided by one or more capacitors, by the parasitic capacitance of the PS, or by a combination thereof) may be cancelled out. In particular, the current source 1204 providing a current substantially equal in magnitude to the DC means that the provided current does not cancel the actual electrical signal generated by the PD 1202 in response to detected light impinging on the PD 1202.
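The cancellation described above can be sketched numerically: if the compensation current equals the dark current, only the photo-generated charge accumulates on the capacitance. A minimal model, with assumed current and timing values (not device parameters from the disclosure):

```python
# Illustrative model of dark-current cancellation at the photosite.
# All numeric values are assumptions for the sketch, not device data.

def integrated_charge(i_photo_a, i_dark_a, i_vccs_a, t_exposure_s):
    """Net charge on the PS capacitance after one exposure.
    The VCCS current opposes the PD current, so only the
    uncompensated part accumulates."""
    return (i_photo_a + i_dark_a - i_vccs_a) * t_exposure_s

i_photo, i_dark, t = 10e-12, 50e-12, 10e-3  # 10 pA signal, 50 pA DC, 10 ms

q_uncompensated = integrated_charge(i_photo, i_dark, 0.0, t)
q_compensated = integrated_charge(i_photo, i_dark, i_dark, t)

print(q_uncompensated)  # charge dominated by the dark current
print(q_compensated)    # only the photo-generated charge remains
```

With the assumed values, the dark current contributes five times more charge than the signal unless it is compensated, which is why a small integration capacitance would otherwise saturate quickly.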
Fig. 13A illustrates a PDD 1300 according to examples of the presently disclosed subject matter. The PDD 1300 includes circuitry that can controllably match the current drawn by the current source 1204 to the DC generated by the PD 1202, even if the generated DC is not constant over time. Note that the level of the DC generated by the PD 1202 may depend on various parameters, such as the operating temperature and the bias voltage supplied to the PD (which may also change from time to time).
Reducing the effects of DC within PS 1200 itself by PDD 1300 (rather than at a later stage of signal processing, whether analog or digital) enables a relatively small capacitance to be used without saturating the capacitance or degrading the linearity of its response to the collected charge.
The PDD 1300 includes a PS 1200 for detecting impinging light, and a reference PS 1310 whose output is used by additional circuitry (discussed below) to reduce or eliminate the effects of DC in the PS 1200. Like PS 1200 (and 1200'), the reference PS 1310 includes a PD 1302, a VCCS 1304, and optionally other circuitry ("additional components"), generally designated 1306. In some examples, the reference PS 1310 of the PDD 1300 may be identical to the PS 1200 of the PDD 1300. Alternatively, any one or more components of PS 1310 may be identical to the corresponding components of PS 1200. For example, PD 1302 may be substantially identical to PD 1202. For example, VCCS 1304 may be identical to VCCS 1204. Alternatively, any one or more components of PS 1310 (e.g., PD, current source, additional circuitry) may differ from those of PS 1200. Note that substantially identical components of PS 1200 and PS 1310 (e.g., PD, current source, additional circuitry) may nevertheless be operated under different operating conditions. For example, different biases may be supplied to the PDs 1202 and 1302. For example, the additional components 1206 and 1306 may be operated using different parameters, or selectively connected/disconnected, even when their structures are substantially identical. For simplicity and clarity, the components of PS 1310 are numbered 1302 (for the PD), 1304 (for the VCCS), and 1306 (for the additional circuitry), but this numbering is not meant to imply that these components differ from components 1202, 1204, and 1206.
In some examples, the additional circuitry 1306 of the reference PS 1310 may be omitted or disconnected so as not to affect the determination of the DC. The PD 1202 may operate under any of the following conditions: reverse bias, forward bias, zero bias, or optionally alternating between any two or three of the foregoing (e.g., controlled by a controller such as controller 1338 discussed below). The PD 1302 may likewise operate under reverse bias, forward bias, zero bias, or optionally alternate between any two or three of the foregoing (e.g., controlled by a controller such as controller 1338). The PDs 1202 and 1302 may operate at substantially the same bias voltage (e.g., about -5V, about 0V, about +0.7V), although this is not required (e.g., when testing the PDD 1300, as discussed in more detail below). Optionally, a single PS of PDD 1300 may sometimes operate as a PS 1200 (detecting light from a FOV of the PDD 1300) and sometimes as a PS 1310 (whose detection signal output is used to determine a control voltage for a VCCS of another PS 1200 of the PDD). Optionally, the roles of the "active" PS used to detect the impinging light and the reference PS may be exchanged. The PDD 1300 also includes control-voltage generating circuitry 1340, which includes at least an amplifier 1318 and electrical connections to the PSs of the PDD 1300. Amplifier 1318 has at least two inputs: a first input 1320 and a second input 1322. The first input 1320 of amplifier 1318 is supplied with a first input voltage (VFI), which may be directly controlled by a controller (implemented on the PDD 1300, on an external system, or a combination thereof), or derived from other voltages in the system (which in turn may be controlled by the controller). The second input 1322 of amplifier 1318 is connected to the cathode of the PD 1302 (of the reference PS 1310).
In a first example of use, the PD 1202 is held at an operating bias between a first voltage (also referred to as the "anode voltage", labeled VA) and a second voltage (also referred to as the "cathode voltage", labeled VC). The anode voltage may be controlled directly by the controller (implemented on the PDD 1300, on an external system, or a combination thereof), or derived from other voltages in the system (which in turn may be controlled by the controller). The cathode voltage may likewise be controlled directly by the controller or derived from other voltages in the system. The anode voltage VA and the cathode voltage VC may or may not remain constant over time. For example, the anode voltage VA may be provided by a constant source (e.g., from an external controller via a pad). Depending on the implementation, the cathode voltage VC may be substantially constant or time-varying. For example, when a 3T structure is used for PS 1200, VC changes over time, e.g., as a result of the operation of the additional components 1206 and/or the current from PD 1202. VC may optionally be determined/controlled/affected by the additional components 1206 (rather than by the reference circuit).
VCCS 1204 is used to feed a current to the cathode terminal of PD 1202 to cancel the dark current generated by PD 1202. Note that at other times VCCS 1204 may feed other currents to achieve other purposes (e.g., for calibrating or testing PDD 1300). The level of the current generated by VCCS 1204 is controlled in response to an output voltage of amplifier 1318. The control voltage for controlling VCCS 1204, labeled VCTRL, may be the same as the output voltage of amplifier 1318 (as shown). Alternatively, VCTRL may be derived from the output voltage of amplifier 1318 (e.g., due to a resistance or impedance between the output of amplifier 1318 and VCCS 1204).
To counteract (or at least reduce) the effect of the DC of PD 1202 on the output signal of PS 1200, the PDD 1300 may subject PD 1302 to substantially the same bias voltage to which PD 1202 is subjected. For example, when PD 1302 is substantially identical to PD 1202, PD 1302 and PD 1202 may be subjected to the same bias. One way to supply the same bias to both PDs (1202 and 1302) is to supply the voltage VA to the anode of PD 1302 (where the supplied voltage is denoted VRPA, RPA standing for "reference PD anode") and to supply the voltage VC to the cathode of PD 1302 (where the supplied voltage is denoted VRPC, RPC standing for "reference PD cathode"). Another way to supply the same bias is to supply VRPA = VA + ΔV to the anode of PD 1302 and VRPC = VC + ΔV to the cathode of PD 1302. Optionally, the anode voltage VA, the reference anode voltage VRPA, or both may be provided by an external power source, e.g., via a printed circuit board (PCB) connected to the PDD 1300.
As described above, the first input 1320 of amplifier 1318 is supplied with the first input voltage VFI, and the second input 1322 of amplifier 1318 is connected to the cathode of PD 1302. The operation of amplifier 1318 reduces the voltage difference between its two inputs (1320 and 1322), causing the voltage on the second input 1322 to tend toward the voltage supplied to the first input (VFI). Referring now to fig. 13B (whose circuit is identical to that of fig. 13A), the DC on PD 1302 (hereinafter labeled DCreference) is represented by arrow 1352. As long as PD 1302 is kept in the dark, the current on PD 1302 is equal to its dark current. The PDD 1300 (or any system component connected to or adjacent to it) may block light from reaching PD 1302 so that it remains in darkness. The blocking may be achieved by physical barriers (such as opaque barriers), by optics (such as deflecting lenses), by electronic shuttering, and so on. In the following description, it is assumed that all of the current on PD 1302 is DC generated by PD 1302. Alternatively, if PD 1302 is subject to light (such as a low level of known stray light in the system), a current source may be implemented to offset the known stray-light signal, or the first input voltage VFI may be modified to compensate (at least in part) for the stray illumination. The barrier, optics, or other dedicated components aimed at keeping light away from PD 1302 may be implemented at the wafer level (on the same wafer from which PDD 1300 is fabricated), may be attached to that wafer (e.g., using an adhesive), may be rigidly attached to a housing in which the wafer is mounted, and the like.
Assuming VFI is constant (or slowly varying), the output of VCCS 1304 (represented by arrow 1354) must be equal in magnitude to the DC of PD 1302 (DCreference); that is, VCCS 1304 supplies the charge carriers consumed by the DC of PD 1302, allowing the voltage to remain at VFI. Since the output of VCCS 1304 is controlled by VCTRL in response to the output of amplifier 1318, amplifier 1318 settles at the output at which VCTRL causes VCCS 1304 to produce a current equal in magnitude to the dark current on PD 1302.
If PD 1202 is substantially the same as PD 1302 and VCCS 1204 is substantially the same as VCCS 1304, then the output of amplifier 1318 will also cause VCCS 1204 to provide the same level of current (DCreference) to the cathode of PD 1202. In such a case, for the output of VCCS 1204 to cancel the DC generated by PD 1202 (hereinafter denoted DCactivePD), both PD 1202 and PD 1302 must generate similar levels of DC. In order to subject both PDs (1202 and 1302) to the same bias voltage (which causes both PDs to generate substantially the same level of DC, because both PDs are maintained under substantially the same conditions, such as temperature), the voltage provided to the first input of amplifier 1318 is determined in response to the anode and cathode voltages of PD 1202 and the anode voltage of PD 1302. For example, if VA is equal to VRPA, then a VFI equal to VC may be provided to the first input 1320. Note that VC may change over time and is not necessarily determined by a controller (e.g., VC may be determined as a result of the additional components 1206). If PD 1202 differs from PD 1302 and/or VCCS 1204 differs from VCCS 1304, the output of amplifier 1318 may be modified by matching electrical components (not shown) between amplifier 1318 and VCCS 1204 to provide the relevant control voltage to VCCS 1204 (e.g., if the DC on PD 1202 is known to be linearly related to the DC on PD 1302, the output of amplifier 1318 may be modified according to that linear correlation). Another way to supply the same bias is to supply VRPA = VA + ΔV to the anode of PD 1302 and VRPC = VC + ΔV to the cathode of PD 1302.
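The feedback behavior described above can be illustrated with a toy simulation, assuming a simple linear VCCS and a high-gain proportional amplifier (both are modeling assumptions for this sketch, not the disclosed circuit): the amplifier drives VCTRL until the VCCS current through the dark reference PD balances its dark current and the cathode settles near VFI.

```python
# Toy model of the control loop of figs. 13A/13B: a high-gain amplifier
# drives VCTRL so that the VCCS current through the (dark) reference PD
# balances the PD's dark current, holding the cathode near VFI.
# All component models and numeric values are assumptions.

DC_REFERENCE = 50e-12  # assumed reference-PD dark current [A]
GM_VCCS = 100e-12      # assumed VCCS transconductance [A/V]
C_NODE = 10e-15        # assumed cathode node capacitance [F]
V_FI = 1.0             # voltage supplied to the amplifier's first input [V]
A_GAIN = 1e4           # assumed amplifier open-loop gain
dt = 5e-9              # simulation time step [s]

v_cathode = 0.0
for _ in range(2000):
    v_ctrl = A_GAIN * (V_FI - v_cathode)  # amplifier output -> VCTRL
    i_vccs = GM_VCCS * v_ctrl             # VCCS injects current
    # dark current discharges the cathode node; the VCCS recharges it
    v_cathode += (i_vccs - DC_REFERENCE) * dt / C_NODE

print(round(i_vccs / DC_REFERENCE, 3))  # -> 1.0 (VCCS matches the DC)
print(round(v_cathode, 3))              # -> 1.0 (cathode held near VFI)
```

The settled VCTRL, applied to a substantially identical VCCS 1204, then injects the same current into the active PD, which is the cancellation mechanism described in the text.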
Fig. 13C illustrates a PDD 1300' including PS 1200 according to examples of the presently disclosed subject matter. The PDD 1300' includes all of the components of the PDD 1300, as well as many additional PS 1200. The different PS 1200 of the PDD 1300' are substantially identical to each other (e.g., all are part of a two-dimensional PDA), so that the PDs 1202 of the different PS 1200 generate similar DCs. Thus, the same control voltage VCTRL, supplied to all VCCS 1204 of the different PS 1200 of the PDD 1300', causes these VCCS 1204 to cancel (or at least reduce) the effects of the DC generated by the respective PDs 1202. Any of the options discussed above with respect to PDD 1300 may be applied to PDD 1300', mutatis mutandis.
In some cases (e.g., if VC is not constant and/or is unknown), a first input voltage VFI may be provided (e.g., by a controller) that is selected to induce on PD 1302 a dark current similar to that of PD 1202.
Referring now to fig. 14, an exemplary PD I-V curve 1400 is shown in accordance with examples of the presently disclosed subject matter. For ease of illustration, curve 1400 represents the I-V curves of both PD 1302 and PD 1202, which for the purpose of this illustration are assumed to be substantially identical and subject to the same anode voltage (i.e., for this illustration VA = VRPA). The I-V curve 1400 is relatively flat between voltages 1402 and 1404, meaning that different biases between 1402 and 1404 supplied to the respective PD will yield similar levels of DC. If VC varies over a cathode voltage range which, given a known VA, keeps the bias voltage on PD 1202 between voltages 1402 and 1404, then supplying a VRPC that keeps the bias voltage on PD 1302 between voltages 1402 and 1404 will cause VCCS 1204 to output a current sufficiently similar to DCactivePD, even though PD 1202 and PD 1302 are subject to different biases. In such a case, VRPC may be within the cathode voltage range (as illustrated by equivalent voltage 1414) or outside it (while still keeping the bias voltage on PD 1302 between 1402 and 1404), as demonstrated by equivalent voltage 1412. Modifications to other configurations, such as those discussed above, may be implemented mutatis mutandis. Note that different biases may also be supplied to PDs 1202 and 1302 for other reasons. For example, different biases may be supplied as part of the testing or calibration of the PDA.
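The role of the flat region can be sketched with a piecewise dark-current model (shape and numbers are assumptions, not measured data for curve 1400): any two biases inside the flat span yield the same DC, so the reference-based compensation remains accurate despite a bias mismatch between the two PDs.

```python
# Illustrative piecewise model of PD dark current vs. bias, with a flat
# region analogous to the span between voltages 1402 and 1404.
# The shape and all numeric values are assumptions, not device data.

def dark_current(bias_v, flat_lo=-5.0, flat_hi=-0.5, i_flat=50e-12):
    """Flat DC inside [flat_lo, flat_hi]; rises linearly outside it."""
    if flat_lo <= bias_v <= flat_hi:
        return i_flat
    if bias_v < flat_lo:  # deeper reverse bias: DC increases
        return i_flat * (1 + 0.5 * (flat_lo - bias_v))
    return i_flat * (1 + 2.0 * (bias_v - flat_hi))

# Two different biases inside the flat region give the same DC, so the
# reference PD compensates the active PD well despite the bias mismatch.
print(dark_current(-1.0) == dark_current(-4.0))  # True
print(dark_current(-6.0) > dark_current(-4.0))   # True (outside flat span)
```

This is why, as the text notes, VRPC need not track VC exactly: it only has to keep the reference PD's bias within the flat span.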
In practice, the different PDs (or other components) of the different PS of a single PDD are not made exactly identical, and the operation of these PS is likewise not exactly identical. In a PD array, the PDs may differ somewhat from one another, and their DCs may differ (e.g., due to manufacturing variations, temperature variations, etc.).
Fig. 15 illustrates a control voltage generation circuit 1340 according to examples of the invention, the control voltage generation circuit 1340 being connected to a plurality of reference photosites 1310 (the overall circuit generally designated 1500). The circuit of fig. 15 (also referred to as reference circuit 1500) may be used to determine a control voltage (labeled VCTRL) for one or more VCCS 1204 of corresponding one or more PS 1200 of the PDDs 1300, 1300', and any PDD variation discussed in this disclosure. In particular, the reference circuit 1500 can be used to determine a control voltage based on data collected from multiple reference PS 1310 that vary to some extent (e.g., as a result of manufacturing inaccuracies, varying operating conditions, etc.), for counteracting (or limiting) the effects of DC in one or more PS 1200 of a PDD. As described above, the dark currents of the plurality of PDs may differ from one another even if the PDs are similar. Note that in some PD technologies, PDs intended to be identical may feature dark currents that differ by a factor of x1.5, x2, x4, or even more. The averaging mechanism discussed herein allows even such significant differences (e.g., resulting from manufacturing) to be compensated for. Amplifier 1318 is connected to multiple reference PS 1310 in order to average the dark currents of several PS 1310; such PS 1310 are kept in the dark, for example using any of the mechanisms discussed above. The control-voltage inputs of the different VCCS 1304 of the various PS 1310 are shorted together, such that all VCCS 1304 receive substantially the same control voltage. The cathodes of the different reference PDs 1302 are likewise shorted to a common network.
Thus, while the currents in the different reference PS 1310 differ slightly from one another (since the reference PS 1310 themselves differ slightly), the averaged control voltage supplied to the one or more PS 1200 of each PDD (which may also differ slightly from one another and from the reference PS 1310) is sufficiently accurate to offset the effect of dark current on the different PS 1200 in a sufficiently uniform manner. Optionally, the output voltage of a single amplifier 1318 is supplied to all PS 1200 and all reference PS 1310. Optionally, the PDs selected for the PDD have a flat I-V response (as described above, e.g., with respect to fig. 14), so that the average control voltage discussed with respect to reference circuit 1500 cancels the DC in the different PS 1200 to a very good degree. Non-limiting examples of PDDs that include multiple reference PS 1310, whose averaged output is used to modify the output signals of multiple active PS 1200 (e.g., to reduce the effects of DC on the output signals), are provided in figs. 16A and 16B. Different configurations, geometries, and numerical ratios may be implemented between the multiple reference PS 1310 and the multiple active PS 1200 of a single PDD. For example, in a rectangular PDA comprising PS arranged in rows and columns, the PS of an entire row (e.g., 1,000 PS), or the PS of several rows or columns, may be used as reference PS 1310 (and optionally kept in the dark), while the rest of the array receives the control signal based on averaging the outputs of those reference-PS rows. This method of generating the control current substantially reduces the effect of dark current by eliminating the average dark current, leaving only the PS-to-PS variation.
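The averaging can be sketched as follows, with assumed mismatch statistics (not measured data): compensating every active PS with the mean reference dark current removes the average DC and leaves only the PS-to-PS deviation.

```python
# Illustrative sketch of the averaging in reference circuit 1500:
# the control derived from many reference PS compensates the *mean*
# dark current; the residual per-PS error is only the PS-to-PS
# mismatch. All statistics below are assumptions for the sketch.
import random

random.seed(0)
MEAN_DC, SPREAD = 50e-12, 10e-12  # assumed mean dark current and mismatch
reference_dc = [random.gauss(MEAN_DC, SPREAD) for _ in range(1000)]
active_dc = [random.gauss(MEAN_DC, SPREAD) for _ in range(1000)]

i_comp = sum(reference_dc) / len(reference_dc)  # averaged compensation

residual = [abs(dc - i_comp) for dc in active_dc]
avg_residual = sum(residual) / len(residual)
avg_uncompensated = sum(active_dc) / len(active_dc)

# the mean residual error is much smaller than the uncompensated DC
print(avg_residual < avg_uncompensated)  # True
```

With the assumed spread, the mean residual is on the order of the mismatch (about 20% of the DC here) rather than the full dark current, matching the text's point that only PS-to-PS variation remains.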
Figs. 16A and 16B illustrate PDDs including an array of PS and PD-based reference circuits according to examples of the presently disclosed subject matter. PDD 1600 (shown in fig. 16A) and PDD 1600' (shown in fig. 16B, a variation of PDD 1600) include all of the components of PDD 1300, as well as a plurality of additional PS 1200 and PS 1310. Optionally, the different PS of PDD 1600 (and of PDD 1600', respectively) are substantially identical to each other. Any of the options discussed above with respect to PDDs 1300 and 1300' and with respect to circuit 1500 may be applied to PDDs 1600 and 1600', mutatis mutandis.
Fig. 16A shows a photodetector device 1600, comprising: a photosensitive region 1602 (which is exposed to external light during operation of the photodetector device 1600), provided with a plurality (an array) of PS 1200; a region 1604, provided with a plurality of reference PS 1310 which are kept in darkness (at least during reference current measurements, optionally at all times); and a control voltage generation circuit 1340, which further comprises a controller 1338. The controller 1338 may control the operation of the amplifier 1318, the voltage supplied to the amplifier 1318, and/or the operation of the reference PS 1310. Optionally, the controller 1338 may also control operations of the PS 1200 and/or of other components of PDD 1600. The controller 1338 can control both the active PS 1200 and the reference PS 1310 to operate under the same operating conditions (e.g., bias voltage, exposure time, managed readout regime). Note that any of the functions of controller 1338 may be implemented by an external controller (e.g., implemented on another processor of an EO system in which the PDD is installed, or by an auxiliary system such as a controller of an autonomous car in which the PDD is installed). Alternatively, the controller 1338 may be implemented as one or more processors fabricated on the same wafer as the other components of the PDD 1600 (e.g., the PS 1200 and 1310, amplifier 1318). Alternatively, the controller 1338 may be implemented as one or more processors located on a PCB connected to such a wafer. Other suitable controllers may also be implemented as controller 1338.
Fig. 16B illustrates a photodetector device 1600' according to examples of the presently disclosed subject matter. Photodetector device 1600' is similar to device 1600, but with the components arranged in a different geometry and without showing internal details of the different PS. Also illustrated is readout circuitry 1610, which is used to read the detection signals from the PS 1200 and provide them for further processing (e.g., to reduce noise, for image processing), for storage, or for any other use. For example, the readout circuitry 1610 may serialize in time the readout values of different PS 1200 (possibly after some processing by one or more processors of the PDD, not shown) before providing them for further processing, storage, or any other action. Alternatively, the readout circuitry 1610 may be implemented as one or more units fabricated on the same wafer as other components of the PDD 1600' (e.g., the PS 1200 and 1310, amplifier 1318). Alternatively, the readout circuitry 1610 may be implemented as one or more units on a PCB connected to such a wafer. Other suitable readout circuitry may also be implemented as readout circuitry 1610. Note that readout circuitry such as readout circuitry 1610 may be implemented in any PDD discussed in this disclosure (e.g., PDDs 1300, 1700, 1800, and 1900). Some analog signal processing may be performed in the PDD (e.g., by one or more processors of the readout circuitry 1610 or of the corresponding PDD) prior to an optional digitization of the signal, including, for example: modifying gain (amplification), modifying offset, and binning (combining output signals from two or more PS). The digitization of the readout data may be implemented on the PDD or external to it.
Optionally, the PDD 1600 (or any other PDD disclosed in the present disclosure) may include a sampling circuit for sampling the output voltage of amplifier 1318 and/or the control voltage VCTRL (if different), and for maintaining that voltage level for at least a specified minimum period of time. Such a sampling circuit may be located anywhere between the output of amplifier 1318 and one or more of the at least one VCCS 1204 (e.g., at location 1620). Any suitable sampling circuit may be used; for example, in some cases the circuit may include one or more sample-and-hold switches. Optionally, the sampling circuit may be used only at certain times, while at other times a direct real-time readout of the control voltage is performed. For example, the use of a sampling circuit may be useful when the magnitudes of dark currents in the system vary slowly, or when the reference PS 1310 can be shaded for only a portion of the time.
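The sample-and-hold option above can be sketched in a few lines. The class and variable names are hypothetical, chosen only for illustration: the control voltage is sampled while the reference PS are darkened, and the last sampled level is held for the VCCS inputs at all other times.

```python
# Minimal sketch (illustrative names) of the sample-and-hold option: sample
# the control voltage only while the reference PS are darkened, hold it
# otherwise.
class ControlVoltageSampleHold:
    def __init__(self):
        self.held = None  # last sampled control-voltage level

    def update(self, live_vctrl, reference_darkened):
        # Sample while the reference PS are kept dark; otherwise keep
        # supplying the previously held level.
        if reference_darkened:
            self.held = live_vctrl
        return self.held

sh = ControlVoltageSampleHold()
assert sh.update(1.20, reference_darkened=True) == 1.20   # sample
assert sh.update(9.99, reference_darkened=False) == 1.20  # hold
assert sh.update(1.25, reference_darkened=True) == 1.25   # resample
```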
Fig. 17 and 18 illustrate further PDDs according to examples of the presently disclosed subject matter. In the PDDs described above (e.g., 1300', 1600'), a voltage-controlled current source is used for the active PS 1200 and for the reference PS 1310. A current source is one example of a voltage-controlled current circuit that may be used in the disclosed PDDs. Another type of voltage-controlled current circuit that may be used is a voltage-controlled current sink, whose drawn current is controlled in magnitude by the control voltage supplied to it. For example, a current sink may be used where the bias across the PDs (1202, 1302) is opposite in direction to the bias exemplified above. More generally, wherever a voltage-controlled current source (1204, 1304) is discussed above, this component may be replaced by a voltage-controlled current sink (labeled 1704 and 1714, respectively). Note that the use of a current sink instead of a current source may require the use of different types of components or circuits in other portions of the corresponding PDD. For example, an amplifier 1318 used with VCCSs 1204 and 1304 differs in power, size, and so on from an amplifier 1718 used with voltage-controlled current sinks 1704 and 1714. To distinguish PS that include voltage-controlled current sinks instead of VCCSs, reference numerals 1200' and 1310' are used, corresponding to PS 1200 and 1310 as discussed above.
In fig. 17, a PDD 1700 includes voltage controlled current circuits that are voltage controlled current sinks (in both PS 1200 'and PS 1310'), and a suitable amplifier 1718 is used in place of amplifier 1318. All the variants discussed above in relation to current sources are equally applicable to current sinks.
In fig. 18, a PDD 1800 includes two types of voltage-controlled current circuits, namely voltage-controlled current sources 1204 and 1314 and voltage-controlled current sinks 1704 and 1714, together with matched amplifiers 1318 and 1718. This may allow, for example, operating the PDs of PDD 1800 in forward or in reverse bias. At least one switch (or other selection mechanism) may be used to select which reference circuit is activated/deactivated: the one based on the VCCSs or the one based on the voltage-controlled current sinks. Such a selection mechanism may be implemented, for example, to prevent the two feedback regulators from operating against each other (e.g., if operating at a near-zero bias on the PD). Any of the options, explanations, or variations discussed above with respect to any of the previously discussed PDDs (e.g., 1300', 1600') may be applied to PDDs 1700 and 1800, mutatis mutandis. In particular, PDDs 1700 and 1800 may include PS 1200' and/or reference PS 1310' similar to those discussed above (e.g., with respect to figs. 15, 16A, and 16B).
Note that in any of the PDDs discussed above, one or more of the PS (e.g., a PS of a photodetection array) may optionally be controllable to selectively act as a reference PS 1310 (e.g., at some times) or as a conventional PS 1200 (e.g., at other times). Such a PS may include the circuitry required to operate in both roles. This may be useful, for example, if the same PDD is used in different types of electro-optic systems. For example, one system may require an averaging accuracy obtained with between 1,000 and 4,000 reference PS 1310, while another system may require a lower accuracy, which may be achieved by averaging between 1 and 1200 reference PS 1310. In another example, as described above, an averaging of control voltages based on some (or even all) PS may be performed when the entire PDA is darkened and stored in a sample-and-hold circuit, and all PS may then be used to detect FOV data in one or more subsequent frames using the determined control voltages.
Note that in the discussion above, it is assumed for simplicity that the anode side of all PDs on each PDA is connected to a known (and possibly controlled) voltage, and that the detection-signal readout, the connection of the VCCSs, and any additional circuits are implemented on the cathode side. Note that, alternatively, the PDs 1202 and 1302 may be connected in the opposite manner (with the readout on the anode side, and so on).
With reference to all of the PDDs discussed above (e.g., 1300, 1600, 1700, 1800), it is noted that the PS, the readout circuitry, the reference circuitry, and other such components (and any additional components that may be needed) may be implemented on a single wafer or on more than one wafer, on one or more PCBs connected to the PS, or on another suitable type of circuit, and so on.
Fig. 19 illustrates a PDD 1900 in accordance with examples of the presently disclosed subject matter. PDD 1900 may implement any combination of features from one or more of the PDDs described above, and may further include additional components. For example, PDD 1900 may include any one or more of the following:
a. At least one light source 1902, operable to emit light onto the FOV of PDD 1900. Some of the light emitted by light source 1902 is reflected from objects in the FOV, is detected by the PS 1200 in the photosensitive region 1602 (which is exposed to external light during operation of the photodetector device 1900), and is used to generate an image or other model of the objects. Any suitable type of light source (e.g., pulsed, continuous, modulated, LED, laser) may be used. Optionally, the operation of the light source 1902 may be controlled by a controller, such as the controller 1338.
b. A physical barrier 1904, used to keep the region 1604 of the detector array in the dark. The physical barrier 1904 may be part of the detector array or external to it. The physical barrier 1904 may be fixed or movable (e.g., a moving shutter). Note that other types of darkening mechanisms may also be used. Optionally, the physical barrier 1904 (or other darkening mechanism) may darken different portions of the detection array at different times. Optionally, the operation of barrier 1904, if movable, may be controlled by a controller, such as controller 1338.
c. Ignored photosites 1906. Note that not all PS of the PDA need be used for detection (PS 1200) or as reference (PS 1310). For example: some PS may be located in an area that is not fully darkened and not fully lit, and thus ignored in the generation of the image (or other types of output generated in response to the plurality of detection signals of the plurality of PS 1200). Alternatively, the PDD 1900 may ignore different PS at different times.
d. At least one processor 1908, for processing the detection signals output by the PS 1200. Such processing may include, for example, signal processing, image processing, spectral analysis, and so on. Optionally, the results of the processing by processor 1908 may be used to modify the operation of controller 1338 (or of another controller). Optionally, the controller 1338 and the processor 1908 may be implemented as a single processing unit. Optionally, the processing results of the processor 1908 may be provided to any one or more of the following: a tangible memory module 1910; an external system (e.g., a remote server, or a vehicle computer of a vehicle in which the PDD 1900 is installed), such as via a communication module 1912; a display 1914 for displaying images or other types of results (e.g., graphics, textual results of a spectrometer); another type of output interface (e.g., a speaker, not shown); and so on. Note that, optionally, signals from the reference PS 1310 may also be processed by the processor 1908, for example to evaluate a condition (e.g., operability, temperature) of the PDD 1900.
e. A memory module 1910, for storing at least one of: the detection signals output by the active PS or by the readout circuitry 1610 (e.g., if different), and detection information generated by the processor 1908 by processing the detection signals.
f. A power source 1916 (e.g., a battery, an alternating current (AC) power adapter, a direct current (DC) power adapter). The power source may provide power to the PS, the amplifier, or any other component of the PDD.
g. A hard shell 1918 (or any other type of structural support).
h. Optics 1920 for directing light of the light source 1902 (if implemented) to the FOV and/or for directing light from the FOV to the plurality of active PS 1200. Such an optical device may include: such as lenses, mirrors (fixed or movable), prisms, filters, and the like.
As described above, the PDDs described above may be used to adapt the control voltage that determines the current level provided by the at least one first voltage-controlled current circuit (VCCC) 1204, to account for differences in the operating conditions of the PDD that change the levels of DC generated by the at least one PD 1202. For example, for a PDD comprising a plurality of PS 1200 and a plurality of reference PS 1310: when the PDD operates at a first temperature, the control voltage generation circuit 1340, in response to the DCs of the reference PDs 1302, provides a control voltage to the voltage-controlled current circuit for providing a current at a first level, to reduce the effect of the DC of the active PD 1202 on the output of the active PS 1200; and when the PDD operates at a second temperature (higher than the first temperature), the control voltage generation circuit 1340, in response to the dark currents of the reference PDs 1302, provides a control voltage to the voltage-controlled current circuit for providing a current at a second level, to reduce the effect of the DC of the active PDs 1202 on the output of the active PS 1200, such that the second level is greater in magnitude than the first level.
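As a hedged illustration of why the second-level current must be larger in magnitude, the sketch below uses a common rule-of-thumb model in which photodiode dark current roughly doubles for every ~8 ℃ of temperature rise. The specific baseline value, temperatures, and doubling constant are assumptions for illustration, not values from the specification.

```python
# Illustrative model (assumed constants): dark current grows steeply with
# temperature, so the reference loop commands a larger cancelling current
# at the second, higher temperature.
def dark_current(temp_c, dc_25c=10.0, doubling_deg=8.0):
    # Rule-of-thumb model: dark current doubles every ~8 degC.
    return dc_25c * 2 ** ((temp_c - 25.0) / doubling_deg)

i_cancel_t1 = dark_current(25.0)  # first (lower) temperature
i_cancel_t2 = dark_current(65.0)  # second (higher) temperature

# The second-level cancelling current is larger in magnitude, as stated.
assert i_cancel_t2 > i_cancel_t1
print(i_cancel_t1, i_cancel_t2)  # 10.0 320.0
```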
Fig. 20 is a flow chart of a method 2000 for compensating for DC in a photodetector according to examples of the inventive subject matter. The method 2000 is performed in a PDD, the PDD comprising at least: (a) A plurality of active PS, each active PS comprising at least one active PD; (b) at least one reference PS comprising a reference PD; (c) At least one first VCCC connected to one or more active PDs; (d) At least one reference VCCC connected to one or more reference PDs; and (e) a control voltage generation circuit connected to the active VCCC and the reference VCCC. For example: the method 2000 may be performed in any of a plurality of PDDs 1300', 1600', 1700, and 1800 (the latter two including a plurality of active PS in a plurality of implementations). Note that the method 2000 may include: any of the actions or functions discussed above with respect to any of the various aforementioned PDDs are performed.
The method 2000 includes at least stages 2010 and 2020. Stage 2010 includes: based on a level of DC in the at least one reference PD, generating a control voltage that, when provided to the at least one reference VCCC, causes the at least one reference VCCC to generate a current that reduces the effect of the DC of the reference PD on the output of the reference PS. Stage 2020 includes providing the control voltage to the at least one first VCCC, thereby causing the at least one first VCCC to generate a current that reduces the effect of the DC of the active PDs on the outputs of the active PS. VCCC stands for "voltage-controlled current circuit", and it can be implemented as a voltage-controlled current source or as a voltage-controlled current sink.
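Stages 2010 and 2020 can be sketched with a simplified linear model. The function names, the transconductance-like constant, and all numeric values are assumptions for illustration; the actual circuit is an analog feedback loop, not arithmetic.

```python
# Minimal sketch (illustrative names and values) of stages 2010 and 2020:
# derive one control voltage from the reference PD's dark current, then
# reuse it to cancel dark current in every active PS.
def stage_2010_generate_control(reference_dc_na, gain_na_per_v=100.0):
    # The feedback loop settles when the VCCC current equals the reference
    # DC; the corresponding control voltage is the loop's output.
    return reference_dc_na / gain_na_per_v

def stage_2020_apply(control_v, active_dc_na, photo_na, gain_na_per_v=100.0):
    # Active PS output: photocurrent plus dark current, minus the VCCC
    # cancelling current commanded by the shared control voltage.
    return photo_na + active_dc_na - control_v * gain_na_per_v

vctrl = stage_2010_generate_control(reference_dc_na=100.0)
out = stage_2020_apply(vctrl, active_dc_na=101.5, photo_na=3.0)
# The residual is the PS-to-PS mismatch (1.5 nA), not the full 101.5 nA DC.
print(out)  # 4.5
```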
Optionally, stage 2010 is implemented using an amplifier that is part of the control voltage generation circuit. In such a case, stage 2010 includes supplying a first input voltage to a first input of the amplifier while a second input of the amplifier is electrically connected between the reference PD and the reference voltage-controlled current circuit. The amplifier may be used to continuously reduce the difference between the voltage at its second input and the first input voltage, thereby generating the control voltage. Optionally, both the first VCCC(s) and the reference VCCC(s) are connected to the output of the amplifier.
Where the PDD includes a plurality of different reference PDs generating different levels of DC, stage 2010 may include: a single control voltage is generated based on an average of the different DCs of the plurality of reference PDs.
The method 2000 may include: light from a FOV of the PDD is prevented from reaching the plurality of reference PDs (e.g., using a physical barrier or turning optics).
The method 2000 may include: sampling a plurality of outputs of the plurality of active PS after reducing effects of the DC, and generating an image based on the plurality of sampled outputs.
Fig. 21 is a flow chart illustrating a method 1020 for compensating for DC in a PDD according to examples of the presently disclosed subject matter. Method 1020 has two groups of stages that are performed in different temperature regimes: when the PDD operates at a first temperature (T1), a first group of stages (2110-2116) is performed, and when the PDD operates at a second temperature (T2), a second group of stages (2120-2126) is performed. The first temperature and the second temperature may differ to different extents in different implementations or in different executions of the method. For example, the temperature difference may be at least 5 ℃, at least 10 ℃, at least 20 ℃, at least 40 ℃, at least 100 ℃, and so on. In particular, the method 1020 may be effective even at smaller temperature differences (e.g., less than 1 ℃). Note that each of the first temperature and the second temperature may be implemented as a temperature range (e.g., spanning 0.1 ℃, 1 ℃, 5 ℃, or more). Any temperature in the second temperature range is higher than any temperature in the first temperature range (e.g., according to the aforementioned ranges). Method 1020 may optionally be performed in any of the PDDs discussed above (1300, 1600, etc.). Note that method 1020 may include performing any of the actions or functions discussed above with respect to any of the aforementioned PDDs, and the PDD of method 1020 may include any combination of one or more of the components discussed above with respect to any one or more of the aforementioned PDDs.
With reference to the stages that are performed when the PDD operates at the first temperature (which may be a first temperature range): Stage 2110 includes determining a first control voltage based on the DC of at least one reference PD of the PDD. Stage 2112 includes providing the first control voltage to a first VCCC coupled to at least one active PD of an active PS of the PDD, causing the first VCCC to impose a first DC-countering current in the active PS. Stage 2114 includes generating, by the active PD, a first detection current in response to: (a) light from an object in the field of view of the PDD striking the active PD, and (b) DC generated by the active PD. Stage 2116 includes outputting, by the active PS, a first detection signal responsive to the first detection current and the first DC-countering current, the first detection signal being smaller in magnitude than the first detection current, thereby compensating for the effect of DC on the first detection signal. The method 1020 may further include an optional stage 2118 of generating at least one first image of the FOV of the PDD based on first detection signals from multiple PS (and optionally all PS) of the PDD. Stage 2118 may be performed when the PDD is at the first temperature or at a later time.
With reference to the stages that are performed when the PDD operates at the second temperature (which may be a second temperature range): Stage 2120 includes determining a second control voltage based on the DC of at least one reference PD of the PDD. Stage 2122 includes providing the second control voltage to the first VCCC, causing the first VCCC to impose a second DC-countering current in the active PS. Stage 2124 includes generating, by the active PD, a second detection current in response to: (a) light from the object striking the active PD, and (b) DC generated by the active PD. Stage 2126 includes outputting, by the active PS, a second detection signal having a magnitude smaller than the second detection current, in response to the second detection current and the second DC-countering current, thereby compensating for the effect of DC on the second detection signal. The second DC-countering current has a magnitude that is greater than the magnitude of the first DC-countering current, possibly by any ratio greater than one. For example, the ratio may be a factor of at least two, or significantly higher, such as one, two, three, or more orders of magnitude. The method 1020 may further include an optional stage 2128 of generating at least one second image of the FOV of the PDD based on second detection signals from multiple PS (and optionally all PS) of the PDD. Stage 2128 may be performed when the PDD is at the second temperature or at a later time.
Optionally, a first level of radiation (L1) impinging on the active PD from the object at a first time (t1) is substantially equal to a second level of radiation (L2) impinging on the active PD from the object at a second time (t2), wherein the amplitude of the second detection signal is substantially equal to the amplitude of the first detection signal. It is noted that the PDD of the present invention may optionally be used to detect signal levels that are significantly lower than the levels of DC of its PDs at certain operating temperatures (e.g., lower in amplitude by one, two, or more orders of magnitude). Thus, the method 1020 may be used to emit output signals of similar levels at two different temperatures, where the DCs are two or more orders of magnitude greater than the detection signals and are significantly different from each other (e.g., by a factor of ×2, ×10, or more).
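The point can be made numerically with illustrative values: even when the DC dwarfs the detection signal and differs greatly between the two temperatures, applying each temperature's own cancelling current yields equal output amplitudes for equal radiation levels. The numbers below are assumptions, not from the specification.

```python
# Illustrative-only numbers: a 1 nA detection signal buried under DCs that
# differ strongly between two temperatures.
signal_na = 1.0                 # radiation level equal at t1 and t2
dc_t1, dc_t2 = 100.0, 3200.0    # DCs two+ orders of magnitude above signal

out_t1 = signal_na + dc_t1 - dc_t1   # first DC-countering current applied
out_t2 = signal_na + dc_t2 - dc_t2   # second, larger countering current

# Equal radiation at both temperatures yields equal output amplitudes.
assert out_t1 == out_t2 == signal_na
```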
Optionally, the determining of the first control voltage and the determining of the second control voltage are performed by a control voltage generation circuit comprising at least one amplifier having an input electrically connected between the reference PD and a reference voltage control current circuit coupled to the reference PD.
Optionally, the method 1020 may further include: a first input voltage is supplied to the other input of the amplifier, the level of the first input voltage being determined to correspond to a bias voltage on the active PD. Optionally, the method 1020 may include: the first input voltage is supplied such that a bias voltage on the reference PD is substantially the same as a bias voltage on the active PD. Optionally, the method 1020 may include: determining the first control voltage and the second control voltage based on different DCs of a plurality of reference PDs of the PDD when the plurality of active PDs have a plurality of different DCs, wherein the providing of the first control voltage includes providing a same first control voltage to a plurality of first voltage control current circuits, each first voltage control current circuit being coupled to at least one active PD of the plurality of active PDs having a different DC, wherein the providing of the second control voltage includes providing a same second control voltage to the plurality of first voltage control current circuits.
Optionally, different active PDs generate different levels of dark current simultaneously, and different reference PDs generate different levels of DC simultaneously, and the control voltage generation circuit provides a same control voltage to the different active PDs based on an average of the different DCs of the reference PDs. Optionally, the method 1020 may include: directing light from the field of view to the active PS of the PDD using dedicated optics; and preventing light from the field of view from reaching the reference PDs of the PDD.
Fig. 22 is a flow chart illustrating a method 2200 for testing a PDD according to examples of the presently disclosed subject matter. For example, the test may be performed in any of the aforementioned PDDs. That is, the same circuitry and architecture described above, which can be used to reduce the effect of DC, can also be used for the additional purpose of testing the detection paths of different PS in real time. Optionally, the test may be performed while the PDD is in an operational mode (i.e., not in a test mode). In some implementations, some PS may be tested while being exposed to ambient light from the FOV, even while other PS of the same PDD capture an actual image of the FOV (with or without compensation for DC). It is noted, however, that method 2200 may alternatively be implemented in other types of PDDs. It is also noted that method 2200 may alternatively be implemented using circuits or architectures similar to those discussed above with respect to the aforementioned PDDs, but where the PDs are not characterized by high DC and DC reduction is not needed or performed. The method 2200 is described as applied to a single PS, but it may be applied to some or all PS of a PDD.
Stage 2210 of method 2200 includes: providing a first voltage to a first input of an amplifier of a control voltage generation circuit, wherein the second input of the amplifier is connected to a reference PD and to a second current circuit that supplies a current whose level is governed by the output voltage of the amplifier, causing the amplifier to generate a first control voltage for a first current circuit of a PS of the PDD. Referring to the examples set forth with respect to the previous figures, the amplifier may be the amplifier 1318 or the amplifier 1718, and the PS may be the PS 1310 or the PS 1310'. Examples of first voltages that may be provided to the first input are discussed below.
Stage 2220 of method 2200 includes reading a first output signal of the PS generated by the PS in response to the current generated by the first current circuit and the current generated by a PD of the PS.
Stage 2230 of method 2200 includes providing to the first input of the amplifier a second voltage, different from the first voltage, causing the amplifier to generate a second control voltage for the first current circuit. Examples of such second voltages are discussed below.
Stage 2240 of method 2200 includes reading a second output signal of the PS generated by the PS in response to the current generated by the first current circuit and the current generated by a PD of the PS.
Stage 2250 of method 2200 includes determining a defect state of a detection path of the PDD based on the first output signal and the second output signal, the detection path including the PS and readout circuitry associated with the PS. Examples of what types of defects may be detected when using multiple different combinations of first and second voltages are discussed below.
A first example includes using at least one of the first voltage and the second voltage to attempt to saturate the PS (e.g., by the VCCS providing a very high current to the capacitance of the PS, independent of the actual detection level). Failure to saturate the PS (e.g., receiving a detection signal that is not white, but rather fully black or a half-tone) indicates a problem with the associated PS or with other components in its readout path (e.g., PS amplifier, sampler, analog-to-digital converter). In such a case, the first voltage, for example, causes the amplifier to generate a control voltage that causes the first current circuit to saturate the PS. In such a case, at stage 2250, the determination of the defect state may include determining that the detection path of the PS is malfunctioning in response to determining that the first output signal is not saturated. In such a case, the second voltage may be a voltage that does not cause the PS to saturate (e.g., one that causes the VCCS to emit no excess current, only compensating for the DC, so that no additional charge is collected by the capacitor). Testing whether a PS detection path can be saturated can be done in real time.
When attempting to saturate one or more PS in order to test the PDD, method 2200 may include: reading the first output signal at a time when the PS is exposed to ambient light during a first detection frame of the PDD, wherein the determination of the fault condition is performed in response to a saturated output signal having been read, after a previous determination that the detection path was operational, in a second detection frame earlier than the first frame. For example, in an ongoing operation of the PDD (e.g., while capturing video), if the saturation attempt fails, the detection path may be determined to have become defective or unusable after the earlier time in the same operation at which it was found operational. The test may be performed on a testing frame that is not part of the video, or on individual PS whose saturated outputs are ignored (e.g., the pixel values corresponding to those PS may be interpolated from adjacent pixels of the frame on which the test is performed, treating those PS as unavailable for the span of that frame).
A second example includes using at least one of the first voltage and the second voltage to attempt to consume the signal of the PS (e.g., by the VCCS drawing a very high reverse current from the capacitance of the PS), independent of the actual detection level. Failure to consume the PS signal (e.g., receiving a detection signal that is not black, but rather fully white or a half-tone) indicates a problem with the associated PS or with other components in its readout path. In such a case, the second voltage, for example, causes the amplifier to generate a second control voltage that causes the first current circuit to consume the detection signal due to light from the FOV impinging on the PS. In such a case, at stage 2250, the determination of the defect state may include determining that the detection path is malfunctioning in response to determining that the second output signal is not consumed. In such a case, the first voltage may be a voltage that does not cause the PS signal to be consumed (e.g., one that causes the VCCS to emit no excess current, only compensating for the DC). Testing whether a PS detection path can be consumed can be implemented in real time (e.g., without darkening individual PS).
When attempting to consume one or more PS to test the PDD, method 2200 may include reading the second output signal in a third detection frame of the PDD when the PS is exposed to ambient light, wherein the determination of the fault condition is performed in response to reading a consumed output signal in a fourth detection frame earlier than the third frame after a previous determination that the detection path is operational.
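The two test examples above (forced saturation and forced consumption) can be combined into a simple defect-state decision for stage 2250. The function name, the 8-bit readout scale, and the threshold logic are all hypothetical illustration, not the patented implementation:

```python
# Hypothetical defect-state logic: force-saturate, then force-consume, a PS
# via its control voltage, and flag the detection path if either extreme is
# not observed at the readout. 8-bit readout scale is an assumption.
FULL_SCALE = 255  # assumed full-white readout code

def detection_path_state(out_when_saturated, out_when_consumed):
    # Forced saturation must read full white; forced consumption must
    # read full black; anything else implicates the PS or its readout path.
    if out_when_saturated < FULL_SCALE:
        return "defective: failed to saturate"
    if out_when_consumed > 0:
        return "defective: failed to consume"
    return "operational"

assert detection_path_state(255, 0) == "operational"
assert detection_path_state(128, 0).startswith("defective")  # half-tone, not white
assert detection_path_state(255, 64).startswith("defective")  # not fully black
```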
Yet another example of using method 2200 to test a PS by means of multiple control voltages includes supplying more than two voltages. For example: three or more different voltages may be provided to the first input of the amplifier at different times, such as in different frames. In such a case, stage 2250 may include: determining the defect state of the detection path of the PDD based on the first output signal, the second output signal, and at least one other output signal corresponding to the third or further voltages supplied to the first input of the amplifier. For example: three, four, or more different voltages may be supplied to the first input of the amplifier at different times (e.g., monotonically, where each voltage is greater than the previous voltage), and the output signals of the same PS corresponding to the different voltages may be tested for correspondence to the supplied voltages (e.g., that the output signals also increase monotonically in amplitude).
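The monotonic multi-voltage test described above can be sketched in code. This is a minimal illustration only; the function name, the signal values, and the strict-monotonicity criterion are assumptions for the sketch, not requirements of the method:

```python
def detection_path_state(output_signals):
    # Output signals of one PS, read at monotonically increasing test
    # voltages (e.g., one voltage per frame). The path is treated as
    # operational only if the outputs also increase monotonically.
    pairs = zip(output_signals, output_signals[1:])
    return "operational" if all(a < b for a, b in pairs) else "faulty"

# Three voltages supplied at different frames to the amplifier of one PS:
assert detection_path_state([0.10, 0.45, 0.80]) == "operational"
assert detection_path_state([0.10, 0.10, 0.10]) == "faulty"  # stuck path
```

A non-strict or tolerance-based comparison could equally be used, depending on the noise characteristics of the readout path.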
An example of using method 2200 to test a portion (or even all) of the PDD includes reading at least two output signals from each of a plurality of PS of the PDD in response to at least two different voltages provided to the amplifier of the respective PS, determining an operational state for at least one first detection path based on the at least two output signals output by at least one PS associated with the respective first detection path, and determining a fault state for at least one second detection path based on the at least two output signals output by at least one other PS associated with the respective second detection path.
Optionally, method 2200 may be performed in conjunction with specified test targets (e.g., black targets, white targets), when the PDD is shielded from ambient light, and/or when specified illumination (e.g., of a known magnitude, dedicated illumination, and the like) is used, but none of this is required.
Alternatively, stage 2250 may be replaced with determining an operational state of the detection path. For example: this can be used to calibrate a plurality of different PS of the PDD to the same level. For example: when the PDD is darkened and there is no dedicated target or dedicated illumination, the same voltage may be supplied to the VCCSs of different PS. The different output signals of the different PS may be compared to each other (at one or more different voltages supplied to the first input of the amplifier). Based on the comparison, correction values may be assigned to the detection paths of different PS so that they will provide a similar output signal for similar illumination levels (as moderated by the currents drained by the VCCSs of the different PS). For example: it may be decided that the output of PS A should be multiplied by 1.1 so as to output a calibrated output signal matching that of PS B. For example: it may be decided that an increment signal ΔS should be added to the output of PS C so as to output a calibrated output signal matching that of PS D. Nonlinear correction may also be implemented.
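By way of a hedged illustration, the two-point calibration described above (a multiplicative gain for PS A, an additive increment ΔS for PS C) might be derived as follows; the function, the linear model, and the numeric values are assumptions for the sketch, not part of the disclosure:

```python
def calibrate_ps(outputs_by_ps, reference_ps):
    # outputs_by_ps maps each PS to its (low, high) output signals read
    # at two test voltages shared by all PS. A per-PS gain and offset are
    # fitted so that every detection path matches the reference PS.
    ref_lo, ref_hi = outputs_by_ps[reference_ps]
    corrections = {}
    for ps, (lo, hi) in outputs_by_ps.items():
        gain = (ref_hi - ref_lo) / (hi - lo)   # e.g., "multiply PS A by 1.1"
        offset = ref_lo - gain * lo            # e.g., "add an increment ΔS"
        corrections[ps] = (gain, offset)
    return corrections

# Same voltages supplied to the VCCSs of different PS; outputs compared:
corr = calibrate_ps({"A": (0.0, 1.0), "B": (0.0, 1.1)}, reference_ps="B")
# For PS A the fitted gain is 1.1 with zero offset, i.e., its output is
# multiplied by 1.1 to match PS B for the same illumination level.
```

Nonlinear correction (also mentioned above) would replace the linear fit with more than two calibration points per PS.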
Fig. 23 illustrates an electro-optic (EO) system 2300 in accordance with examples of the presently disclosed subject matter. The EO system 2300 includes at least one PDA 2302 and at least one processor 2304, the at least one processor 2304 being operable to process a plurality of detection signals from a plurality of PS 2306 of the PDA. EO system 2300 may be any type of EO system that uses a PDA for detection, such as a camera, spectrometer, LIDAR, and the like.
The at least one processor 2304 is operable and configured to process a plurality of detection signals output by a plurality of PS 2306 of the at least one PDA 2302. Such processing may include: for example: signal processing, image processing, spectral analysis, and the like. Optionally, the processing results of the processor 2304 may be provided to any one or more of the following: a tangible memory module 2308 (for storage and later retrieval), an external system (e.g., a remote server, or a vehicle computer of a vehicle in which the EO system 2300 is installed), such as via a communication module 2310, a display 2312 for displaying an image or another type of result (e.g., graphical or textual results of a spectrometer), another type of output interface (e.g., a speaker, not shown), and so forth.
EO system 2300 may include: a controller 2314, which controls a plurality of operating parameters of the EO system 2300, such as of the PDA 2302 and of an optional light source 2316. In particular, the controller 2314 may be configured to set (or otherwise change) the frame exposure times used by the EO system 2300 to acquire different frames. Optionally, the processing results of the photodetection signals by the processor 2304 may be used to modify the operation of the controller 2314. Optionally, the controller 2314 and the processor 2304 may be implemented as a single processing unit.
EO system 2300 may include: at least one light source 2316 operable to emit light onto the FOV of the EO system 2300. Some of the light from the light source 2316 is reflected from objects in the FOV and is detected by the PS 2306 (at least by those PS located in a photosensitive area exposed to external light during the FETs of the EO system 2300). Light arriving from objects in the FOV (whether light of the light source 2316 reflected by the objects, reflected light of other light sources, or light emitted by the objects) is detected and used to generate an image or another model (such as a three-dimensional depth map) of the objects. Any suitable type of light source may be used (e.g., pulsed, continuous, modulated, LED, laser). Optionally, the operation of the light source 2316 may be controlled by a controller (e.g., the controller 2314).
EO system 2300 may include: a readout circuit 2318 for reading out a plurality of electrical detection signals from the plurality of different PS 2306. Optionally, the readout circuitry 2318 may process the electrical detection signals before providing them to the processor 2304. Such pre-processing may include: for example: amplification, sampling, weighting, noise reduction, correction, digitizing, thresholding, level adjustment, dark current (DC) compensation, and the like.
Further, the EO system 2300 may include: a plurality of additional components, such as (but not limited to) one or more of the following optional components:
a. A memory module 2308 for storing at least one of: the detection signals output by the PS 2306 or by the readout circuitry 2318 (e.g., if different), and detection information generated by the processor 2304 by processing those detection signals.
b. A power source 2320 such as a battery, an AC power adapter, a DC power adapter, and the like. The power source 2320 may provide power to the PDA, the readout circuitry 2318, or any other component of the EO system 2300.
c. A hard shell 2322 (or any other type of structural support).
d. Optics 2324 for directing light from light source 2316 (if implemented) to the FOV and/or for directing light from the FOV to PDA 2302. Such optics may include: for example: lenses, mirrors (fixed or movable), prisms, filters, and the like.
Optionally, the PDA 2302 may be characterized by a relatively high DC (e.g., as a result of its PD type and characteristics). Due to the high level of DC, the capacitances of the various PS 2306 which collect detection charges may become saturated (partially or fully) by the DC, leaving little or no dynamic range for detecting ambient light (arriving from the FOV). Even if the readout circuitry 2318 or the processor 2304 (or any other component of the system 2300) subtracts the DC levels from the detection signal (e.g., normalizes the detection data), the dynamic range remaining for detection may be lacking, meaning that the resulting detection signal of the individual PS 2306 is too saturated to be used for meaningful detection of ambient light levels. Since the DC of the PD of each PS 2306 is accumulated in a capacitance (whether an actual capacitor, the capacitance of other components of the PS, or parasitic or residual capacitance) over the duration of the Frame Exposure Time (FET), different PS 2306 with different capacitances may become unusable at different FETs.
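The dynamic-range argument above can be illustrated with a toy model; the well capacity, DC rate, and FET values below are invented numbers for the sketch, not values from this disclosure:

```python
def remaining_dynamic_range(well_capacity_e, dc_rate_e_per_ms, fet_ms):
    # Fraction of a PS charge-storage capacity (in electrons) left for
    # photo-charge after dark current accumulates over the FET.
    # Illustrative linear-accumulation model only.
    dc_charge = dc_rate_e_per_ms * fet_ms
    return max(0.0, 1.0 - dc_charge / well_capacity_e)

# A PS whose capacitance the DC fills within the FET has no range left:
assert remaining_dynamic_range(100_000, 50, 1.0) > 0.9       # short FET
assert remaining_dynamic_range(100_000, 50, 2_000.0) == 0.0  # long FET
```

This is why, as stated above, different PS with different capacitances may become unusable at different FETs even though the DC rate is the same.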
Fig. 24 illustrates an example of a method 2400 for generating image information based on data of a PDA in accordance with the presently disclosed subject matter. Referring to the examples set forth with respect to the previous figures, the method 2400 may be performed by the EO system 2300 (e.g., by the processor 2304, the controller 2314, etc.). In such a case, the PDA of method 2400 can optionally be PDA 2302. Other relevant components discussed in method 2400 may be the corresponding components of EO system 2300. Method 2400 includes changing the FET over which the PDA collects charges from its PDs. Such collected charge may be due to photoelectric responses to light impinging on the PDs, as well as to intrinsic sources within the detection system, such as the DC of the PDs. The impinging light may arrive from a FOV, such as that of a camera or other EO system in which the PDA is mounted. The FET may be controlled electronically, mechanically, by controlling the flash illumination duration, and the like, or by any combination thereof.
Note that the FET may be an integral FET that is a sum of a plurality of different durations in which the PDA collects charges due to electro-optical activity in the PS of the PDA. An integral FET is used where the charges collected over the different durations are added to provide a single output signal. Such an integral FET may be used, for example, with pulsed illumination, or with active illumination in which the collection is suspended for a short period of time (e.g., to avoid saturation by a bright reflection in the FOV). Note that alternatively, a single continuous FET may be used in some frames, while an integral FET is used in other frames.
Stage 2402 of method 2400 includes receiving first frame information. The first frame information includes, for each PS of a plurality of PS of a PDA, a first frame detection level indicating an intensity of light detected by the respective PS in a first FET. The receiving of the first frame information may include receiving readout signals from all of the PS of the PDA, but this is not required. For example: some PS may be defective and fail to provide a signal. For example: a region of interest (ROI) may be defined for the frame, indicating that data is collected from only a portion of the frame, and so on.
The frame information may be provided in any format, such as a detection level (or several levels) for each PS (e.g., between 0 and 1024, or three RGB values each between 0 and 255, and so on), as a scalar, a vector, or any other format. Optionally, the frame information (for the first frame or later frames) may indicate the detection signals in an indirect manner (e.g., information about the detection level of a given PS may be given relative to the level of an adjacent PS or relative to the level of the same PS in a previous frame). The frame information may further include: additional information (e.g., sequence number, time stamp, operating conditions), some of which may be used in subsequent stages of method 2400. The first frame information (and the frame information for later frames received in later stages of method 2400) may be received directly from the PDA or from one or more intermediate units (such as an intermediate processor, a memory unit, a data aggregator, and the like). The first frame information (and the frame information for later frames received in later stages of method 2400) may include: the raw data acquired by each PS, but may also include pre-processed data (e.g., weighted, denoised, corrected, digitized, thresholded, level-adjusted, and the like).
Stage 2404 includes identifying, based on the first FET, at least two groups among the plurality of PS of the PDA:
a. an available PS group (referred to as a "first available PS group (first group of usable PSs)") for the first frame includes at least a first PS, a second PS, and a third PS of the plurality of PS of the PDA.
b. An unavailable PS group (referred to as a "first unavailable PS group (first group of unusable PSs)") for the first frame includes at least a fourth PS of the plurality of PS of the PDA.
The identifying of stage 2404 may be implemented in different ways and may optionally include identifying (explicitly or implicitly) that each of the plurality of PS belongs to one of the at least two groups. Optionally, each PS of the PDA (or each PS of a predetermined subset thereof, such as all PS of an ROI) may be assigned, with respect to the first frame, to one of the two groups: the first available PS group or the first unavailable PS group. However, this is not necessarily required, and some PS may not be assigned for some frames, or may be assigned to other groups (e.g., when the availability of those PS is determined based on parameters other than the FET of the respective first frame, such as based on collected data). Optionally, the identifying of stage 2404 may include: deciding which PS belong to one of the two groups and automatically treating the remaining PS of the PDA (or a predetermined subset thereof, e.g., an ROI) as belonging to the other group.
Note that the identification of stage 2404 (and of stages 2412 and 2420) need not reflect the actual availability status of the various PS (although in some implementations these actual availability statuses are indeed reflected). For example: one PS included in the first unavailable PS group may actually be usable under the conditions of the first frame, and another PS included in the first available PS group may actually be unusable under the conditions of the first frame. The identification of stage 2404 is an estimation or assessment of the availability of the PS of the PDA, rather than a test of individual PS. It is also noted that the availability of the PS may be estimated in stage 2404 based on other factors as well. For example: a pre-existing list of defective PS can be used to exclude such PS from being considered available.
The identification of stage 2404 (and of stages 2412 and 2420) may include: identifying at least one of the unavailable PS groups (and/or at least one of the available PS groups) based on an integral FET, the integral FET comprising a sum of durations in which the sampled PS of the PDD are sensitive to light, and not comprising the intermediate times between those durations, in which the sampled PS of the PDD are insensitive to light.
The identification of the available and unavailable PS groups (in stages 2404, 2412, and/or 2420) may be based in part on an assessment of temperature. Optionally, method 2400 may include: processing one or more frames (particularly previous frames or the current frame) to determine a temperature assessment (e.g., by estimating DC levels in a dark frame or in a plurality of darkened PS that do not image the FOV). Method 2400 may then include: using the temperature assessment to identify an available PS group and an unavailable PS group for a later frame, which affects the generation of the respective image. The temperature assessment may be used to assess how quickly the DC will saturate the dynamic range of a given PS over the duration of the relevant FET. Optionally, the temperature assessment may be used as a parameter of an availability model of the PS (such as one generated in method 2500).
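As a hedged sketch of how a temperature assessment might feed the availability estimate, the model below assumes that DC doubles every few degrees Celsius (a common rule of thumb for photodiode dark current); the exponential form and all constants are assumptions, not part of the disclosure:

```python
def dc_rate_at(temp_c, dc_ref_e_per_ms=50.0, temp_ref_c=25.0, doubling_c=7.0):
    # Hypothetical temperature model for the availability assessment:
    # dark current assumed to double every `doubling_c` degrees above a
    # reference temperature at which the rate dc_ref_e_per_ms was measured.
    return dc_ref_e_per_ms * 2.0 ** ((temp_c - temp_ref_c) / doubling_c)

# Hotter device -> faster DC accumulation -> PS saturate at shorter FETs:
assert dc_rate_at(25.0) == 50.0
assert abs(dc_rate_at(32.0) - 100.0) < 1e-9  # one doubling step
```

The estimated rate could then be fed into the per-PS dynamic-range assessment used to split the PS into available and unavailable groups for a given FET.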
The execution timing of stage 2404 may be changed relative to the execution timing of stage 2402. For example: stage 2404 may optionally be performed before, concurrently with, partially concurrently with, or after stage 2402 is performed. Referring to the example of the drawings, stage 2404 may optionally be performed by the processor 2304 and/or the controller 2314. Examples of methods for performing the identification of stage 2404 are discussed with respect to method 1100.
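One possible (purely illustrative) implementation of the stage 2404 identification estimates, per PS, the dynamic range remaining after DC accumulation over the FET. The capacities, DC rate, and headroom threshold are assumed values, chosen here so that the example reproduces the first/second/third PS-group scenario of method 2400:

```python
def identify_groups(ps_capacities_e, dc_rate_e_per_ms, fet_ms, min_headroom=0.2):
    # Split the PS into an available group and an unavailable group for a
    # given FET, based on the fraction of each PS charge-storage capacity
    # left after DC accumulation (illustrative model and threshold).
    usable, unusable = set(), set()
    for ps, capacity in ps_capacities_e.items():
        headroom = 1.0 - (dc_rate_e_per_ms * fet_ms) / capacity
        (usable if headroom >= min_headroom else unusable).add(ps)
    return usable, unusable

# PS with smaller capacitance drop out as the FET grows (assumed numbers):
caps = {"PS1": 200_000, "PS2": 120_000, "PS3": 80_000, "PS4": 30_000}
assert identify_groups(caps, 50, 600.0) == ({"PS1", "PS2", "PS3"}, {"PS4"})
assert identify_groups(caps, 50, 2_000.0) == ({"PS1"}, {"PS2", "PS3", "PS4"})
assert identify_groups(caps, 50, 1_400.0) == ({"PS1", "PS2"}, {"PS3", "PS4"})
```

The three assertions mirror the first (shortest), second (longest), and third (intermediate) FETs of method 2400, respectively.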
Stage 2406 includes: generating a first image based on the plurality of first frame detection levels of the first available PS group while disregarding the plurality of first frame detection levels of the first unavailable PS group. The generation of the first image may be accomplished using any suitable method, and may optionally be based on additional information (e.g., data received from an active illumination unit if one is used, or from additional sensors such as humidity sensors). With reference to the examples set forth with respect to the previous figures, it is noted that stage 2406 may optionally be implemented by processor 2304. Note that the generating may include: processing the signals at different stages (e.g., weighting, noise reduction, correction, digitizing, thresholding, level adjustment, and the like).
Regarding the first unavailable PS group, it is noted that since the detection data of those PS are ignored in the generation of the first image, a plurality of replacement values may be calculated in any suitable way (if needed). Such replacement values may be calculated, for example: based on the first frame detection levels of neighboring PS, or based on earlier detection levels from earlier frames of the same PS (e.g., if it was available in a previous frame) or of one or more neighboring PS (e.g., based on a kinematic analysis of the scene). For example: a Wiener filter, local mean algorithms, non-local means algorithms, and the like may be used. Referring to the generation of images based on the PDA data, optionally, the generation of any one or more of such images (such as the first image, the second image, and the third image) may include: calculating a replacement value for at least one pixel associated with a PS that is identified as unavailable for the respective image, based on a detection level of at least one other neighboring PS identified as available for the respective image. In the case of a non-binary availability assessment (where the identification of stages 2404, 2412, and/or 2420 includes identifying at least one PS as belonging to a third group of partly available PS), the detection signal of each PS identified as partly available can be combined or averaged with detection signals of neighboring PS and/or with other readings of the same PS at other times when it was available (or partly available).
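A replacement value for a pixel of an unavailable PS might, for instance, be computed as a local mean over available neighbors, which is one of the options mentioned above (Wiener filtering and non-local means being alternatives); the function name and the 3×3 neighborhood are illustrative assumptions:

```python
def replacement_value(frame, mask, r, c):
    # Local mean over the 3x3 neighborhood of PS (r, c), using only the
    # detection levels of neighbors flagged as available in `mask`.
    neighbors = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            in_bounds = 0 <= rr < len(frame) and 0 <= cc < len(frame[0])
            if (dr, dc) != (0, 0) and in_bounds and mask[rr][cc]:
                neighbors.append(frame[rr][cc])
    return sum(neighbors) / len(neighbors) if neighbors else 0.0

frame = [[10, 10, 10], [10, 99, 10], [10, 10, 10]]  # 99: unavailable PS
mask  = [[True] * 3, [True, False, True], [True] * 3]  # per-PS availability
assert replacement_value(frame, mask, 1, 1) == 10.0
```

In the non-binary case, the same scheme could weight each neighbor by its degree of availability instead of using a hard mask.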
Optionally, the generating of the first image (and, later, of the second image and the third image) may further include: disregarding the outputs of PS, or of detection paths, determined to be defective, nonfunctional, or unavailable for any other reason. An example of an additional method for detecting defects of PS and/or of the related detection paths is discussed with respect to method 2200, which may be combined with method 2400. The output of method 2200 may be used in the generating stages 2406, 2414, and 2422. In such a case, method 2200 may be performed periodically, providing outputs used for the generation of the images, or may be specifically triggered for the generation of the images of method 2400.
Optionally, the generating of the first image (and, later, of the second and third images) may include: calculating a replacement value for at least one pixel associated with a PS that is identified as unavailable for the respective image, based on a detection level measured by that PS at another time, when the PS was determined to be available. Such information may be used together with, or independently of, the information of neighboring PS. Using detection levels of a PS from other times may include: for example: considering detection information from previous frames (e.g., for still scenes), or using another snapshot from a series of image acquisitions used to generate a composite image, such as a High Dynamic Range Image (HDRI) or a multi-wavelength composite image (where several shots are taken using different spectral filters and then combined into a single image).
Note that in the first image (and in any other frame generated based on the detection data of the PDA), a single pixel may be based on the detection data from a single PS or from a combination of PS; likewise, the information from a single PS may be used to determine the pixel color of one or more pixels of the image. For example: a FOV of Θ by Φ degrees can be covered with X by Y PS and converted to M by N pixels in the image. A pixel value of one of the M×N pixels may be calculated as a sum over one or more PS: Pixel-Value(i, j) = Σ(a_{p,s} · DL_{p,s}), where DL_{p,s} is the detection level of PS (p, s) in the frame, and a_{p,s} is an averaging coefficient for the specific pixel (i, j).
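The averaging formula above can be sketched as follows; the coefficient and detection-level values are arbitrary examples, and the function name is an assumption:

```python
def pixel_value(detection_levels, coefficients):
    # Pixel-Value(i, j) = sum over PS (p, s) of a_{p,s} * DL_{p,s}:
    # one image pixel is a weighted combination of the detection levels
    # of one or more PS, with per-pixel averaging coefficients.
    return sum(coefficients[ps] * dl for ps, dl in detection_levels.items())

# Pixel (i, j) drawing on two PS with equal averaging coefficients:
dl = {(3, 7): 100.0, (3, 8): 120.0}
a  = {(3, 7): 0.5, (3, 8): 0.5}
assert pixel_value(dl, a) == 110.0
```

The same mapping covers the one-PS-to-many-pixels case: a single PS simply appears, with its own coefficient, in the sums of several pixels.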
After stage 2406, the first image may then be provided to an external system (e.g., a screen monitor, a memory unit, a communication system, an image processing computer). The first image may then be processed using one or more image processing algorithms. After stage 2406, the first image may also be processed in other ways as desired.
Stages 2402 through 2406 may be repeated several times for a number of frames captured by the photodetector sensor, whether or not they are consecutive frames. Note that in some implementations, the first image may be generated based on the detection levels of several frames, such as if a High Dynamic Range (HDR) imaging technique is implemented. In other implementations, the first image is generated from the first frame detection levels of a single frame. Multiple instances of stages 2402 and 2406 may follow a single instance of stage 2404 (e.g., if the same FET is used for several frames).
Stage 2408 is performed after receiving the first frame information and includes: determining a second FET, the second FET being longer than the first FET. The determining of the second FET includes determining a duration (e.g., in milliseconds, fractions thereof, or multiples thereof) of the exposure of the associated PDs. Stage 2408 may further include: determining additional timing parameters, such as the start time of the exposure, but this is not required. The second FET, which is longer than the first FET, may be selected for any reason, including, for example, any one or more of the following: the overall light intensity in the FOV, the light intensity in portions of the FOV, use of bracketing techniques, use of high dynamic range photography techniques, aperture variation, and the like. The second FET may be longer than the first FET by any ratio, whether a relatively low value (e.g., ×1.1, ×1.5), a value of several times (e.g., ×2, ×5), or a higher value (e.g., ×20, ×100, ×5,000). Referring to the examples of the drawings, stage 2408 may optionally be performed by the controller 2314 and/or the processor 2304. Alternatively, an external system (e.g., a control system of a vehicle in which EO system 2300 is installed) may determine the FET or influence the setting of the FET by EO system 2300.
Note that optionally, at least one of stages 2408 and 2416 may be replaced by co-determining a new FET (the second FET and/or the third FET, respectively) together with an external entity. Such an external entity may be, for example, an external controller, an external processor, or an external system. Note that at least one of stages 2408 and 2416 may optionally be replaced by receiving an indication of a new FET (the second FET and/or the third FET, respectively) from an external entity. The indication of the FET may be explicit (e.g., a duration in milliseconds) or implicit (e.g., an indication of a change in aperture opening and/or of an Exposure Value (EV) corresponding to the FET, or an indication of flash duration). Note that optionally, at least one of stage 2408 and stage 2416 may be implemented by receiving from an external entity an indication of the expected DC (or at least of the portion of the DC that is transferred to the capacitance of the PS), such as if DC mitigation strategies are implemented.
Stage 2410 includes receiving second frame information. The second frame information includes a second frame detection level for each of the plurality of PS of the PDA, the second frame detection level indicating an intensity of light detected by the corresponding PS in the second FET. Note that the second frame (where the detection data for the second frame information is collected) may directly follow the first frame, but this is not essential. The FETs of any one of one or more intermediate frames (if any) between the first frame and the second frame may be equal to the first FET, the second FET, or any other FET (longer or shorter). Referring to the example of the drawings, stage 2410 may optionally be performed by the processor 2304 (e.g., via the readout circuitry 2318).
Stage 2412 includes identifying, based on the second FET, at least two groups among the plurality of PS of the PDA:
a. an available PS group for the second frame (referred to as a "second available PS group (second group of usable PSs)") includes the first PS.
b. An unavailable PS group (referred to as a "second unavailable PS group (second group of unusable PSs)") for the second frame includes the second PS, the third PS, and the fourth PS.
That is, since the FET of the second frame is longer, the second PS and the third PS, identified in stage 2404 as belonging to the first available PS group (i.e., the available PS group for the first frame, as described above), are identified in stage 2412 as belonging to the second unavailable PS group (i.e., the unavailable PS group for the second frame, as described above). The identification of stage 2412 may be implemented in different ways, such as any one or more of those discussed above with respect to stage 2404. PS considered available for the shorter FET may be considered unavailable for the longer FET in stage 2412 for various reasons. For example: if such a PS has a charge storage capacity (e.g., capacitance) lower than the average charge storage capacity of the PS in the PDA, its charge storage capacity may be considered insufficient for both the detection signal and the DC accumulated over the longer integration time. If the DC level is maintained (e.g., the temperature and the bias voltage on the PD are unchanged), any PS identified as unavailable for the first FET, due to its inability to maintain sufficient dynamic range, will also be identified as unavailable for the longer second FET.
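The observation that, at a maintained DC level, unavailability is monotonic in the FET can be phrased as a simple sanity check; this sketch assumes a binary availability assessment, with groups represented as Python sets:

```python
def consistent_with_longer_fet(unusable_short_fet, unusable_long_fet):
    # If the DC level is maintained, every PS unusable at a shorter FET
    # must also be unusable at any longer FET: the first unavailable
    # group must be a subset of the second unavailable group.
    return unusable_short_fet <= unusable_long_fet

# The first/second frame scenario of method 2400 satisfies the check:
assert consistent_with_longer_fet({"PS4"}, {"PS2", "PS3", "PS4"})
assert not consistent_with_longer_fet({"PS4"}, {"PS2", "PS3"})
```

A violation of the check would suggest that the DC level did not stay constant between the frames (e.g., a temperature or bias change), or that the availability model itself drifted.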
Stage 2412 is performed after stage 2408 (because it is based on the output of stage 2408). The execution timing of stage 2412 may be changed relative to the execution timing of stage 2410. For example: stage 2412 may optionally be performed before, concurrently with, partially concurrently with, or after performing stage 2410. Referring to the example of the drawings, stage 2412 may optionally be performed by the processor 2304. Examples of methods for performing the identification of stage 2412 are discussed with respect to method 2500.
Stage 2414 includes: generating a second image based on the plurality of second frame detection levels of the second available PS group while disregarding the plurality of second frame detection levels of the second unavailable PS group. Importantly, stage 2414 includes generating the second image while ignoring the outputs (the detection levels) of at least two PS whose outputs were used to generate the first image: these at least two PS (i.e., at least the second PS and the third PS) were identified as available based on the FET of the first frame and were used for generating the first image. The generation of the second image may be implemented using any suitable method, including any of the methods, techniques, and variations discussed above with respect to the generation of the first image. Regarding the second unavailable PS group, it is noted that since the detection data of those PS are ignored in the generation of the second image, a plurality of replacement values may be calculated in any suitable way (if needed). After stage 2414, the second image may be provided to an external system (e.g., a screen monitor, a memory unit, a communication system, an image processing computer), may be processed using one or more image processing algorithms, or may be processed in other ways as desired.
Stages 2410 through 2414 may be repeated several times for a number of frames captured by the photodetector sensor, whether or not they are consecutive frames. Note that in some implementations, for example: if a High Dynamic Range (HDR) imaging technique is implemented, the second image may be generated based on the detection levels of several frames. In other implementations, the second image is generated from the second frame detection levels of a single frame. Multiple instances of stages 2410 and 2414 may follow a single instance of stage 2412 (e.g., if the same second FET is used for several frames).
Stage 2416 is performed after receiving the second frame information and includes: determining a third FET, the third FET being longer than the first FET and shorter than the second FET. The determining of the third FET includes: determining a duration (e.g., in milliseconds, fractions, or multiples thereof) of the exposure of the associated PDs. Stage 2416 may further comprise: determining additional timing parameters, such as the start time of the exposure, but this is not required. The third FET may be selected for any reason, such as the reasons discussed above with respect to the determination of the second FET in stage 2408. The third FET may be longer than the first FET by any ratio, whether a relatively low value (e.g., ×1.1, ×1.5), a value of several times (e.g., ×2, ×5), or any higher value (e.g., ×20, ×100, ×5,000). The third FET may be shorter than the second FET by any ratio, whether a relatively low value (e.g., ×1.1, ×1.5), several times (e.g., ×2, ×5), or any higher value (e.g., ×20, ×100, ×5,000). Referring to the examples of the drawings, stage 2416 may optionally be performed by the controller 2314 and/or the processor 2304. Alternatively, an external system may determine the FET or influence the setting of the FET by EO system 2300.
Stage 2418 of method 2400 includes receiving third frame information. The third frame information includes a third frame detection level for each of the plurality of PS of the PDA, the third frame detection level indicating an intensity of light detected by the corresponding PS in the third FET. Note that the third frame (in which the detection data for the third frame information is collected) may directly follow the second frame, but this is not essential. The FET of any one of the one or more intermediate frames (if any) between the second frame and the third frame may be equal to the second FET, the third FET, or any other FET (longer or shorter). Referring to the example of the drawings, this stage may optionally be performed by the processor 2304 (e.g., via the readout circuitry 2318).
Stage 2420 includes identifying, based on the third FET, at least two types of PS from among the plurality of PS of the PDA:
a. An available PS group for the third frame, referred to as the "third available PS group", which includes the first PS and the second PS.
b. An unavailable PS group for the third frame, referred to as the "third unavailable PS group", which includes the third PS and the fourth PS.
That is, since the third FET is longer than the FET of the first frame, the second PS, which was identified in stage 2404 as belonging to the first available PS group (i.e., the available PS group for the first frame as described above), is identified in stage 2420 as belonging to the third unavailable PS group (i.e., the unavailable PS group for the third frame as described above). Since the third FET is shorter than the FET of the second frame, the third PS, which was identified in stage 2412 as belonging to the second unavailable PS group (i.e., the unavailable PS group for the second frame as described above), is identified in stage 2420 as belonging to the third available PS group (i.e., the available PS group for the third frame as described above).
The identification of stage 2420 may be implemented in different ways, such as any one or more of those discussed above with respect to stage 2404. For various reasons, such as those discussed above with respect to stage 2412, PS considered available for a shorter FET may be considered unavailable for a longer FET in stage 2420. Likewise, for various reasons, PS deemed unavailable for a longer FET may be deemed available for a shorter FET in stage 2420. For example, if such PS have a charge storage capacity (e.g., capacitance) greater than that of some PS of the second unavailable PS group, the charge storage capacities of those PS may be sufficient for the detection signal and the accumulated DC over an integration time shorter than the second FET.
Stage 2420 is performed after stage 2416 (because it is based on the third FET determined in stage 2416). The execution timing of stage 2420 may vary relative to the execution timing of stage 2418; for example, stage 2420 may optionally be performed before, concurrently with, partially concurrently with, or after stage 2418. Referring to the example of the drawings, stage 2420 may optionally be performed by the processor 2304 and/or the controller 2314. Examples of methods for performing the identification of stage 2420 are discussed with respect to method 1100.
Stage 2422 includes generating a third image based on the third frame detection levels of the third available PS group, while disregarding the third frame detection levels of the third unavailable PS group. Importantly, stage 2422 includes generating the third image while ignoring the outputs (detection levels) of at least one PS that were used for the generation of the first image (e.g., the second PS), while utilizing the outputs of at least one PS that were ignored in the generation of the second image (e.g., the third PS). The generation of the third image may be implemented using any suitable method, including any of the methods, techniques, and variations discussed above with respect to the generation of the first image. With respect to the third unavailable PS group, it is noted that since the detection data of these PS is ignored in the generation of the third image, replacement values may be calculated in any suitable way (if needed). After stage 2422, the third image may be provided to an external system (e.g., a screen monitor, a memory unit, a communication system, an image processing computer). After stage 2422, the third image may be processed using one or more image processing algorithms, or processed in other ways as desired.
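The image generation of stage 2422 (and, analogously, of the first and second images) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name and the choice of averaging the available 4-neighbours as the replacement-value rule are assumptions (the text only says replacement values "may be calculated in any suitable way").

```python
import numpy as np

def generate_image(frame_levels, available_mask):
    """Build an image from per-PS detection levels, ignoring unavailable PS.

    frame_levels   -- 2-D array of detection levels, one per PS
    available_mask -- boolean array, True where the PS is available for this FET
    Unavailable PS receive a replacement value: the mean detection level of
    their available 4-neighbours (one simple choice among many allowed).
    """
    frame_levels = np.asarray(frame_levels, dtype=float)
    image = np.where(available_mask, frame_levels, np.nan)
    out = image.copy()
    rows, cols = image.shape
    for r, c in zip(*np.where(~available_mask)):
        neigh = [image[rr, cc]
                 for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                 if 0 <= rr < rows and 0 <= cc < cols
                 and not np.isnan(image[rr, cc])]
        out[r, c] = np.mean(neigh) if neigh else 0.0
    return out
```

The same routine serves all three frames; only the availability mask changes between FETs.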
Optionally, the generation of one or more images (e.g., the first image, the second image, the third image) in method 2400 may be preceded by a stage of evaluating the DC accumulation of at least one PS of the respective image, based at least on the respective FET, on electrical measurements taken in darkness or in near-darkness, and so on. For example, such measurements may include measuring the DC (or another indicative measurement) on a reference PS kept in the dark. The generating of the respective images may include subtracting an amplitude related to the DC assessment of the PS from the detection signals of one or more PS, to give a more accurate representation of the FOV of the PDA. Optionally, this DC-compensation stage is performed only for the available PS of the respective image.
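One way to realize the optional DC-compensation stage is sketched below. This is a minimal sketch under assumed names; the text only requires subtracting an amplitude related to a DC assessment (here, the mean level of reference PS kept in the dark) from the detection signals, optionally for available PS only.

```python
import numpy as np

def compensate_dark_current(levels, dark_reference_levels, available_mask):
    """Subtract an estimate of accumulated DC from per-PS detection levels.

    levels                -- detection levels of the PS of the PDA
    dark_reference_levels -- levels of reference PS kept in the dark over the
                             same FET; their mean serves as the DC estimate
    available_mask        -- True where the PS is available for this frame;
                             compensation is applied only to available PS
    """
    dc_estimate = float(np.mean(dark_reference_levels))
    corrected = np.clip(levels - dc_estimate, 0.0, None)
    return np.where(available_mask, corrected, levels)
```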
In a PDA featuring a relatively high DC (e.g., as a result of the type and nature of its PDs), the capacitance of an individual PS in which detected charge is collected may become saturated (partially or fully) due to DC, leaving little dynamic range for detecting ambient light (arriving from the FOV of the system). Even when means for subtracting DC levels from the detection signals are implemented, such as to normalize the detection data, the lack of dynamic range for detection means that the resulting signal is either fully saturated or insufficient to meaningfully detect ambient light levels. Since DC from the PD accumulates in the capacitance (whether parasitic capacitance, or the capacitance of an actual capacitor or other components of the PS) over the FET, the method uses the FET to determine whether a PS is available for the corresponding FET, i.e., whether sufficient dynamic range remains in the capacitance after the charge of the DC (or at least a relevant portion thereof) is collected over the entire FET. The identification of an unavailable PS group for a frame may include identifying, given the FET of the respective frame, PS whose dynamic range is below an acceptable threshold (or is otherwise expected to fail a dynamic-range sufficiency criterion). Likewise, the identification of an available PS group for a frame may include identifying, given the FET of the respective frame, PS whose dynamic range is above an acceptable threshold (or is otherwise expected to meet a dynamic-range sufficiency criterion). The two acceptable thresholds described above may be the same threshold or different thresholds (e.g., if PS whose dynamic ranges fall between those thresholds are treated differently, such as being identified as belonging to only part of the available PS groups of the associated frames).
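The dynamic-range sufficiency criterion described above can be expressed compactly. The following sketch assumes a constant per-PS DC accumulation rate and a single acceptance threshold; all names and units are illustrative, not from the patent.

```python
def classify_ps(fet, dc_rate, capacity, min_dynamic_range):
    """Classify one PS as usable for a given frame exposure time (FET).

    fet               -- frame exposure time (s)
    dc_rate           -- dark-current charge accumulation rate of this PS (e-/s)
    capacity          -- charge storage capacity of the PS (e-), e.g. full well
    min_dynamic_range -- acceptable threshold on the capacity left for the
                         ambient-light signal (e-)

    The dynamic range remaining for ambient light is the capacity minus the
    charge that DC alone accumulates over the whole FET.
    """
    remaining = capacity - dc_rate * fet
    return remaining >= min_dynamic_range
```

For example, a PS with a 5,000 e- well and 10^6 e-/s of dark current retains 4,000 e- of range over a 1 ms FET but is driven past saturation over a 10 ms FET.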
Referring generally to method 2400, note that additional instances of stages 2416, 2418, 2420 and 2422 may be repeated for additional FETs (e.g., a fourth FET, and so on). Such FETs may be longer than, shorter than, or equal to any previously used FET. It is also noted that, optionally, the first FET, the second FET, and the third FET are consecutive FETs (i.e., the PDA does not use other FETs between the first FET and the third FET). Alternatively, other FETs may be used between the first FET and the third FET.
Note that even if the Exposure Value (EV) remains the same, different available and unavailable PS groups may be determined for different FETs in method 2400. For example, consider a case in which the first FET is extended by a factor q to provide the second FET, but the F-number is increased by a factor of √q, so that the total illumination received by the PDA is substantially the same. In such a case, even though the EV remains constant, the second unavailable PS group will include PS not included in the first unavailable PS group, because the DC accumulation increases by a factor of q.
A non-transitory computer readable medium is provided for generating image information based on data of a PDA, the non-transitory computer readable medium comprising instructions stored thereon that, when executed on a processor, perform the steps of: receiving first frame information, the first frame information including a first frame detection level for each of a plurality of PS of the PDA, the first frame detection level indicating a light intensity detected by the respective PS during a first FET; identifying, from among the plurality of PS of the PDA and based on the first FET: a first available PS group comprising a first PS, a second PS and a third PS, and a first unavailable PS group comprising a fourth PS; generating a first image based on the first frame detection levels of the first available PS group while disregarding the first frame detection levels of the first unavailable PS group; determining, after receiving the first frame information, a second FET that is longer than the first FET; receiving second frame information, the second frame information including a second frame detection level for each of the plurality of PS of the PDA, the second frame detection level indicating a light intensity detected by the respective PS during the second FET; identifying, based on the second FET: a second available PS group comprising the first PS, and a second unavailable PS group comprising the second PS, the third PS, and the fourth PS; generating a second image based on the second frame detection levels of the second available PS group while disregarding the second frame detection levels of the second unavailable PS group; determining, after receiving the second frame information, a third FET that is longer than the first FET and shorter than the second FET; receiving third frame information, the third frame information including a third frame detection level for each of the plurality of PS of the PDA, the third frame detection level indicating a light intensity detected by the respective PS during the third FET; identifying, based on the third FET: a third available PS group comprising the first PS and the second PS, and a third unavailable PS group comprising the third PS and the fourth PS; and generating a third image based on the third frame detection levels of the third available PS group while disregarding the third frame detection levels of the third unavailable PS group.
The non-transitory computer readable medium of the previous paragraph may include additional instructions stored thereon that, when executed on a processor, perform any of the other steps or variations discussed above with respect to method 2400.
FIG. 25 is a flow chart illustrating a method 2500 for generating models of PDA operation at different FETs, according to examples of the presently disclosed subject matter. Identifying which PS belong to an available PS group for a given FET (and possibly additional parameters such as temperature, bias voltage across the PDs, capacitance of the PS, etc.) may be based on a model of the behavior of each PS at different FETs. Such modeling may be part of method 2400, or may be performed separately prior to it. Stages 2502, 2504, and 2506 of method 2500 are performed for each PS of a plurality of PS of a PDA (e.g., PDA 1602), and possibly for all PS of the PDA.
Stage 2502 includes determining the availability of the respective PS for each FET of a plurality of different FETs. The determination of the availability may be performed in different ways. For example, a detection signal of the PS may be compared to an expected value (e.g., complete darkness if the illumination level is known, or a known higher illumination level), to an average value of other PS, to detection levels of other PS (e.g., if all PS are imaging a color-uniform target), or to detection results at other FETs (e.g., determining whether the detection level at a duration T, e.g., 200 ns, is about twice the detection level at T/2, e.g., 100 ns), and so on. The determined availability may be a binary value (e.g., available or unavailable), a non-binary value (e.g., a scalar evaluating the availability level or indicative of it), a set of values (e.g., a vector), or any other suitable format. Optionally, the same frame FETs are used for all of the plurality of PS, but this is not required. For example, in a non-binary availability assessment, an intermediate value between fully unavailable and fully available may indicate that the PS is only partially available, and that the detection signal of the respective PS should be combined or averaged with detection signals of neighboring PS and/or with other readings of the same PS.
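One of the comparisons mentioned above (detection at T versus T/2) can serve as a simple non-binary availability score. The following is an assumed illustration; the tolerance value and the scoring rule are not from the text.

```python
def availability_by_linearity(level_T, level_half_T, tolerance=0.2):
    """Assess availability of a PS by comparing detections at FETs T and T/2.

    For a PS with usable dynamic range, the detection level at T should be
    roughly twice the level at T/2; a large deviation suggests saturation or
    DC domination. Returns a non-binary availability score in [0, 1]
    (1.0 = perfectly linear), one of the formats the text allows.
    """
    if level_half_T <= 0:
        return 0.0
    ratio = level_T / (2.0 * level_half_T)
    return max(0.0, 1.0 - abs(1.0 - ratio) / tolerance)
```

A score near 1 marks the PS as fully available, 0 as unavailable, and intermediate values as partially available (e.g., to be averaged with neighbours, as the stage describes).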
Method 2500 may include an optional stage 2504 of measuring charge accumulation capacity and/or saturation parameters of the respective PS. The charge capacity may be measured in any suitable manner, such as using a calibration machine in the plant where the photodetectors are manufactured. The charge used for the measurement may come from the PD, from other sources in the PS (e.g., current sources), from other sources in the PDA, or from an external power source. Stage 2504 may be omitted, for example, in cases where the capacitance differences between the different PS are negligible or are simply ignored.
Stage 2506 includes creating an availability prediction model (usability prediction model) for the respective PS, which provides an estimate of the availability of the PS when operating at FETs not included in the plurality of FETs for which availability was actually determined in stage 2502. Those different FETs may fall within the duration span of the FETs of stage 2502, or be longer or shorter than it. The created availability prediction model may provide different types of availability indications, such as a binary value (e.g., available or unavailable), a non-binary value (e.g., a scalar evaluating availability or indicative of it), a set of values (e.g., a vector), or any other suitable format. The type of availability indicated by the model may be the same type of availability determined in stage 2502, or a different one. For example, stage 2502 may include evaluating the DC collected at different FETs, and stage 2506 may include determining a time threshold indicating the maximum allowed FET for which the PS is considered available. Optionally, the availability model may take into account the charge accumulation capacity of each PS.
Any suitable means may be used to create the availability prediction model. For example, for different FETs, different DC values may be measured or evaluated for the PD, and a regression analysis may then be performed to determine a function (polynomial, exponential, etc.) by which the DC at other FETs may be estimated.
Optional stage 2508 includes compiling an availability model for at least a portion of the PDA, including at least the PS of the previous stages. For example, stage 2508 may include generating one or more matrices or other types of maps that store, in their cells, the model parameters of the respective PS. For example, if stage 2506 includes creating a DC linear-regression function for each PS (p,s), given by DC(p,s) = A(p,s)·τ + B(p,s) (where τ is the FET, and A(p,s) and B(p,s) are the coefficients of the linear regression), then a matrix A may be generated to store the different A(p,s) values, and a matrix B may be generated to store the different B(p,s) values. If desired, a third matrix C may be used to store the different capacitance values C(p,s) (or different saturation values S(p,s)) of the different PS.
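The matrix compilation of stage 2508 can be sketched as a per-PS least-squares fit. The sketch below is illustrative; it assumes DC measurements are already available per PS for each calibration FET, and the function name is an assumption.

```python
import numpy as np

def fit_dc_model(fets, dc_measurements):
    """Fit DC(p,s) = A[p,s]*tau + B[p,s] per photosite by linear regression.

    fets            -- 1-D array of the FETs (tau values) used for calibration
    dc_measurements -- array of shape (len(fets), rows, cols): measured DC
                       per PS at each calibration FET
    Returns matrices A and B holding the per-PS linear coefficients.
    """
    fets = np.asarray(fets, dtype=float)
    n, rows, cols = dc_measurements.shape
    y = dc_measurements.reshape(n, -1)                # one column per PS
    X = np.stack([fets, np.ones_like(fets)], axis=1)  # design matrix [tau, 1]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)    # least squares per PS
    A = coeffs[0].reshape(rows, cols)
    B = coeffs[1].reshape(rows, cols)
    return A, B
```

A third matrix of capacitance or saturation values would be filled directly from the stage 2504 measurements rather than fitted.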
Stage 2506 (or stage 2508, if implemented) may be followed by an optional stage 2510, which includes deciding on the availability of the plurality of PS for a FET that is not one of the FETs of stage 2502, based on the results of stage 2506 (or stage 2508, if implemented). For example, stage 2510 may include creating a mask (e.g., a matrix) of unavailable PS across the different PS of the PDA.
Referring fully to method 2500, stage 2502 may include determining the DC of each PS of the PDA at four different FETs (e.g., 33 ns, 330 ns, 600 ns, and 2000 ns). Stage 2504 may include determining a saturation value for each PS, and stage 2506 may include creating a polynomial regression of the DC accumulation of each PS over time. Stage 2508 in this example may include generating a matrix storing, in each cell, the FET at which (according to the regression analysis) the DC of the PS will saturate that PS. Stage 2510 may include receiving a new FET and, by determining for each cell of the matrix whether the new FET is below or above the stored value, generating a binary matrix that stores a first value (e.g., "0") for each unavailable PS (where the new FET is above the stored value) and a second value (e.g., "1") for each available PS (where the new FET is below the stored value).
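In this example, the binary-mask generation of stage 2510 reduces to a single comparison against the stored matrix. A minimal sketch, assuming the stage 2508 matrix stores, per PS, the FET at which the regressed DC saturates the PS:

```python
import numpy as np

def availability_mask(saturation_fet_matrix, new_fet):
    """Build the binary availability matrix of stage 2510.

    saturation_fet_matrix -- per-PS matrix storing the FET at which, per the
                             regression of its DC, the PS saturates
    new_fet               -- the FET for which availability is requested
    Returns 1 where the PS is available (new_fet below the stored value) and
    0 where it is unavailable, matching the example in the text.
    """
    return (new_fet < saturation_fet_matrix).astype(np.uint8)
```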
Any stage of method 2500 may be performed during the manufacture of the PDA (e.g., during factory calibration), during operation of the system (e.g., after an EO system including the PDA is installed in its designated location, such as a vehicle, monitoring system, etc.), or at any other suitable time between or after these times. Different phases may be performed at different times.
Referring fully to method 2400, note that the modeling of the effect of DC on different PS at different FETs may be extended to measurements taken under different operating conditions, such as when the PDA is subjected to different temperatures, or when different bias voltages are supplied to the PDs.
Optionally, the determining of a FET (e.g., the second FET, the third FET) as part of method 2400 may include maximizing the respective FET while maintaining the number of unavailable PS for the respective frame below a predetermined threshold. For example, to maximize the collection of signals, method 2400 may include setting a FET near a threshold associated with a predetermined number of unavailable PS (e.g., requiring at least 99% of the PS of the PDA to be available, i.e., allowing up to 1% of the PS to be unavailable). Note that in some cases the maximization may not yield the exact maximum duration, but a duration close to it (e.g., a high fraction of the mathematical maximum duration). For example, the maximum frame duration out of a set of discrete predefined time spans may be selected.
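The constrained maximization over discrete candidate FETs can be sketched as below, reusing a per-PS saturation-FET table like the one of stage 2508. All names and the 1% default are illustrative assumptions.

```python
import numpy as np

def choose_max_fet(candidate_fets, saturation_fet_matrix,
                   max_unusable_fraction=0.01):
    """Pick the longest candidate FET keeping unavailable PS below a bound.

    candidate_fets        -- discrete predefined FETs to choose from
    saturation_fet_matrix -- per-PS FET at which the PS becomes unusable
    Keeps the fraction of unavailable PS at or below max_unusable_fraction
    (e.g. 1%, i.e. at least 99% of the PS available). Returns None if no
    candidate qualifies.
    """
    total = saturation_fet_matrix.size
    best = None
    for fet in sorted(candidate_fets):
        unusable = np.count_nonzero(saturation_fet_matrix <= fet)
        if unusable / total <= max_unusable_fraction:
            best = fet  # longest qualifying FET seen so far
    return best
```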
For example, determining a FET as part of method 2400 may include determining a FET that is longer than other possible FETs, such that a higher number of PS are considered unusable than with those other possible FETs, but image quality in the remaining PS is improved. This may be useful, for example, under relatively dark conditions. Note that, optionally, the determination of the FET (e.g., by attempting to maximize it) may take into account the spatial distribution of the PS considered unavailable at different FETs. For example, knowing that in certain areas of the PDA a high percentage of the PS would be considered unusable above a certain FET may result in determining a FET below that threshold, especially if those areas cover a significant portion of the FOV (such as the center of the FOV, or the locations of pedestrians or vehicles identified in a previous frame).
Method 2400 may include creating a single image based on detection levels of two or more frames detected at different FETs, where different unavailable PS groups apply for the different FETs. For example, three FETs may be used: ×1, ×10 and ×100. The color determined for each pixel of the image may be based on the detection levels of one or more PS (e.g., at FETs in which the PS is available, unsaturated, and detects a non-negligible signal) or on the detection levels of neighboring PS (e.g., if no usable detection signal is provided even though the corresponding PS is determined to be available, such as when its signal is negligible). Method 2400 may include determining FETs for combining different exposures into a single image (e.g., using high dynamic range imaging techniques, HDR). The determination of such FETs may be based on modeling of the availability of different PS at different FETs, such as the model generated in method 2500. Method 2400 may further include deciding to capture a single image in two or more separate detection instances (where the detection signals are read separately in each instance and then summed), each detection instance providing enough available PS. For example, instead of using a 2 ms FET for a single acquisition of a scene, method 2400 may include deciding to capture the scene twice (e.g., with two FETs of 1 ms each, or with one FET of 1.5 ms and one FET of 0.5 ms) so that the number of available PS in each exposure exceeds a predetermined threshold.
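The per-pixel combination of exposures at different FETs can be sketched in the HDR spirit described above. This is a simplified sketch, not the patent's method: it assumes relative exposure factors (e.g., ×1, ×10, ×100), averages the normalised available detections per pixel, and leaves pixels with no available detection at zero (a fuller scheme would fall back to neighbouring PS, as the text allows).

```python
import numpy as np

def combine_exposures(frames, masks, fets):
    """Merge frames taken at different FETs into one image (simple HDR sketch).

    frames -- list of detection-level arrays, one per FET
    masks  -- matching boolean availability masks (True = PS available,
              unsaturated, non-negligible signal)
    fets   -- relative exposure factors, e.g. [1, 10, 100]
    """
    acc = np.zeros_like(frames[0], dtype=float)
    count = np.zeros_like(frames[0], dtype=float)
    for frame, mask, fet in zip(frames, masks, fets):
        normalised = frame / fet  # bring all FETs to a common exposure scale
        acc += np.where(mask, normalised, 0.0)
        count += mask
    return np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
```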
Optionally, method 2400 may include determining at least one FET based on an availability model of the different PS at different FETs (such as one generated in method 2500) and on saturation data of at least one previous frame captured by the PDA. The saturation data includes information about the PS saturated in the at least one FET of the at least one previous frame (e.g., how many PS, in which portions of the PDA) and/or information about the PS that were nearly saturated in the at least one FET of the at least one previous frame. The saturation data may relate to an immediately preceding frame (or frames), so that it indicates the saturation behavior of the currently imaged scene.
Method 2400 may further include modeling the availability of the PS of the PDA at different FETs (e.g., by implementing method 2500 or any other suitable modeling method). Given an availability model of the PS of the PDA at different FETs (whether generated as part of method 2400 or not), method 2400 may include: (a) determining at least one of the second FET and the third FET based on the results of the modeling; and/or (b) identifying at least one of the unavailable PS groups based on the results of the modeling.
Optionally, in deciding any one or more FETs, method 2400 may include determining a FET that balances between extending the FET due to the darkness of the FOV scene and shortening the FET to limit the number of PS rendered unusable, a number that increases with longer FETs (e.g., based on the model of method 2500). For example, at a constant temperature and PD bias (such that the DC accumulation rate remains constant), stage 2408 may include deciding on a longer FET because the scene becomes darker (at the cost of a larger number of unavailable PS), and stage 2416 may include deciding on a shorter FET because the scene brightens again (thereby reducing the number of unusable PS). This is particularly important in darker images, where the unavailability of PS caused by DC accumulation (which depends on temperature and operating conditions rather than on illumination levels) limits the lengthening of the FET that would otherwise be applied if DC accumulation did not significantly limit the dynamic range of individual PS. In another example, during a time span in which the scene illumination remains constant, stage 2408 may include deciding on a longer FET enabled by a temperature drop (which reduces the DC, and thus the percentage of PS unavailable at each FET), and stage 2416 may include deciding on a shorter FET because the temperature of the PDA has risen again.
Fig. 26 is a graphical representation of the execution of method 2400 for three frames of the same scene taken at different FETs, according to examples of the presently disclosed subject matter. The example scene includes four concentric rectangles, each rectangle darker than the one surrounding it. The different diagrams of Fig. 26 correspond to stages of method 2400 and are numbered with the equivalent reference numerals followed by an apostrophe; for example, diagram 2406' matches an execution of stage 2406, and so on. Each rectangle in the lower nine diagrams represents a single PS, or (in the lower three diagrams) a pixel directly mapped to such a PS. In all diagrams, the positions of the PS relative to the PDD remain unchanged.
As is common in many types of PDAs, the PDA from which frame information is received may include a number of bad, defective, or otherwise misbehaving PS (also referred to as bad, defective, or misbehaving pixels). The term "misbehaving PS" relates broadly to a PS that deviates from its expected response, including but not limited to: stuck, dead, hot, lit, warm, defective and flashing PS. Misbehaving PS may occur as single PS or as clusters of PS. Non-limiting examples of defects that may cause a PS to misbehave include: faulty PS bump-bond connectivity, addressing faults in the multiplexer, vignetting, severe sensitivity deficiency of some PS, non-linearity, poor signal linearity, low full well, poor mean-variance linearity, excessive noise and high DC. One or more of the PS identified as unavailable in method 2400 may be permanently failed PS, or PS misbehaving due to conditions not associated with the FET, such as high temperature. Such a PS may be identified as unusable at all FETs of method 2400 (e.g., PS 8012.5). Note, however, that some functional (not "misbehaving") PS may also be considered unusable at all FETs of method 2400, due to limited capability combined with sufficiently long FETs (such as PS 8012.4). Optionally, method 2400 may include determining the availability of one or more PS of the PDA based on parameters other than the FET, such as temperature, electrical parameters, or ambient light level.
Note that in such a case, a PS rendered unusable for FET-related reasons (due to its capacitance limitations) cannot generally be considered usable on the basis of other considerations (e.g., temperature).
In the example shown:
a. It is possible that, under all conditions, PS 8012.5 outputs no signal, regardless of the amount of light impinging on it in any of the three FETs (T1, T2, T3).
b. It is possible that, under all conditions, PS 8012.4 outputs a saturated signal, regardless of the amount of light impinging on it in any of the three FETs (T1, T2, T3).
c. PS 8012.3 outputs a usable signal at the shortest FET (T1), but outputs an unusable (saturated) signal at the longer FETs (T2 and T3).
d. PS 8012.2 outputs a usable signal at the shorter FETs (T1 and T3), but outputs an unusable (saturated) signal at the longest FET (T2).
Note that other types of defects and erroneous outputs may also occur. For example, such errors may include outputting a highly nonlinear signal response, always outputting too strong a signal, always outputting too weak a signal, outputting a random or semi-random output, and so forth. Also, many PS (such as the first PS 8012.1) are usable at all FETs used in the detection.
Returning to Fig. 23, note that system 2300 may alternatively be an EO system with dynamic PS availability evaluation capability. That is, EO system 2300 may alternately designate different PS as available or unavailable based on the FET (and possibly other operating parameters), and utilize the detection signals of a PS only at times when that PS is determined (e.g., by an availability model) to be available.
In such a case, EO system 2300 includes:
a. PDA 2302, comprising a plurality of PS 2306, each PS 2306 operable to output detection signals in different frames. The detection signal output by the respective PS 2306 for a frame is indicative of the amount of light impinging on the respective PS in the respective frame (and possibly also of the DC of the PD of the respective PS).
b. An availability filtering module (e.g., implemented as part of the processor 2304, or separately from it). The availability filtering module is operable to determine that a PS 2306 is unavailable based on a first FET (the determination may differ between different PS 2306), and to determine later that the same PS 2306 is available based on a second FET that is shorter than the first FET. That is, a PS 2306 that is unavailable at one point (and whose output is ignored in generating one or more images) may become available again later (e.g., if the FET becomes shorter), and its outputs may be useful in generating subsequent images.
c. The processor 2304, operable to generate images based on the frame detection levels of the plurality of PS 2306. Among other configurations, the processor 2304 is configured to: (a) exclude, when generating a first image based on first frame detection levels, a first detection signal of a filtered PS that was determined by the availability filtering module to be unavailable for the first image, and (b) include, when generating a second image based on second frame detection levels captured by the PDA after the first frame detection levels, a second detection signal of the filtered PS that was determined by the availability filtering module to be available for the second image.
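The availability filtering module's behavior can be sketched as a small class. This is an illustrative stand-in, not the patent's design: it assumes a per-PS maximum-usable-FET table (a simplification of a fuller availability model), so a PS excluded for a long FET becomes available again for a shorter one, as described above.

```python
class AvailabilityFilter:
    """Sketch of an availability filtering module for an EO system.

    max_fet_per_ps maps a PS identifier to the longest FET at which its
    detection signal is still considered usable.
    """

    def __init__(self, max_fet_per_ps):
        self.max_fet_per_ps = max_fet_per_ps

    def is_available(self, ps_id, fet):
        """A PS is available whenever the frame's FET is short enough."""
        return fet <= self.max_fet_per_ps[ps_id]

    def filter_frame(self, detections, fet):
        """Keep only the detections of PS available at this FET."""
        return {ps: level for ps, level in detections.items()
                if self.is_available(ps, fet)}
```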
Alternatively, the controller 2314 may determine different FETs for different frames based on different illumination levels of multiple objects in the FOV of the EO system.
Alternatively, the controller 2314 may be configured to determine a plurality of FETs for the EO system (e.g., as discussed with respect to method 2400) by maximizing the plurality of FETs while maintaining the number of unusable PS for each frame below a predetermined threshold.
Alternatively, the EO system 2300 may include: at least one obscured PD that is shielded from ambient illumination (e.g., by a physical barrier or using deflecting optics); and a dedicated circuit operable to output an electrical parameter indicative of the DC level based on the signal level of the at least one obscured PD. The processor 2304 may be configured to generate images based on the electrical parameter, on the respective FETs, and on the detection signals of the PDA, to compensate for the different degrees of DC accumulation in different frames.
Alternatively, the processor 2304 may be used to calculate a replacement value for at least one pixel of the first image associated with the filtered PS, based on a detection level of the filtered PS measured at a time when that PS was identified as available. Alternatively, the processor 2304 may be configured to calculate replacement values for PS whose detection signals are excluded from the generation of the images, based on the detection levels of neighboring PS. Alternatively, the processor 2304 may be operable to calculate a replacement value for at least one pixel of the first image associated with the filtered PS based on first frame detection levels of neighboring PS.
Optionally, the processor 2304 (or the availability filter module, if not part of the processor) may be operable to determine a degree of availability for the plurality of PS based on a FET that comprises a sum of the durations in which the PDD samples the plurality of PS while they are sensitive to light, excluding the intermediate times between those durations, in which the plurality of PS are not sensitive to light.
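The idea that the FET relevant to availability is the sum of the light-sensitive sampling durations, excluding the intermediate gaps, can be sketched as follows; the interval representation is an assumption for illustration only.

```python
# Hedged sketch: a FET composed of several light-sensitive sampling windows,
# where the gaps between windows (in which the PS are not sensitive to light)
# are excluded from the effective exposure.

def effective_exposure(sampling_windows):
    """Sum the durations of the sampling windows, ignoring the gaps."""
    return sum(end - start for start, end in sampling_windows)

# Three 2 ms windows spanning 10 ms of wall-clock time: the FET relevant to
# availability is 6 ms, not 10 ms.
windows = [(0.000, 0.002), (0.004, 0.006), (0.008, 0.010)]
fet_s = effective_exposure(windows)
```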
Optionally, the processor 2304 may use an availability model generated by the method 2500 to determine when to include and exclude detection signals for different PS captured at different FETs. Alternatively, EO system 2300 may be operable to perform method 2500. Alternatively, the EO system 2300 may be configured to participate in the execution of the method 2500 in conjunction with an external system (such as a factory calibration machine used in the manufacture of the EO system 2300).
Fig. 27 is a flow chart illustrating an example of a method 3500 in accordance with the presently disclosed subject matter. Method 3500 is used to generate a plurality of images based on different subsets of the plurality of PS under different operating conditions. With reference to the examples set forth with respect to the previous figures, method 3500 may be performed by processor 1604, wherein the PDA of method 3500 may optionally be PDA 1602. Method 3500 includes at least a plurality of stages 3510, 3520, 3530, and 3540 that are repeated as a sequence for different frames captured by a PDA. The sequence may be performed entirely for each frame in a stream, but need not be, as discussed in more detail below.
The sequence begins at stage 3510, which includes receiving frame information from the PDA, the frame information indicating a plurality of detection signals for the frame provided by a plurality of PS of the PDA. The frame information may include a detection level (or several detection levels) for each PS (e.g., a value between 0 and 1024, three RGB values each between 0 and 255, and the like), or any other format. The frame information may indicate the detection signals in an indirect manner (e.g., the level of a neighboring PS, or the level of the same PS in a previous frame, may be used to give information regarding the detection level of a given PS). The frame information may also include additional information (e.g., sequence numbers, time stamps, operating conditions), some of which may be used in subsequent stages of method 3500. The frame information received from the PDA may include data from bad, defective, or otherwise misbehaving PS.
Stage 3520 includes receiving operating condition data, the operating condition data indicating a plurality of operating conditions of the PDA during the frame duration. The operating conditions may be received from different types of entities, such as any one or more of the following: the PDA, a controller of the PDA, the at least one processor executing method 3500, one or more sensors, one or more controllers of the at least one processor executing method 3500, and the like. Non-limiting examples of the operating conditions that may be indicated in stage 3520 include: FETs of the PDA (e.g., set by electronic or mechanical shutters, flash illumination durations, and the like), amplification gain of the PDA or connected circuitry, bias voltages supplied to various PDs of the PDA, ambient light levels, dedicated illumination levels, image processing modes of downstream image processors, filtering applied to the light (e.g., spectral filtering, polarization), and the like.
Stage 3530 includes determining a defective PS group based on the operating condition data that includes at least one of the plurality of PS and excludes a plurality of other PS. When stage 3530 is performed for different frames based on different operating condition data received for the frames in different corresponding instances of stage 3520, different defective PS groups are selected for different frames having operating conditions that are different from each other. However, the same set of defective pixels may be selected for two frames having different operating conditions (e.g., when the difference in operating conditions is relatively small).
Note that the decision is based on the operating condition data, not on the evaluation of the plurality of PS itself, and thus the defectivity of the various PS included in the different groups is an estimation of their condition, not a statement of their actual operability condition. Thus, a PS included in the defective PS group in stage 3530 is not necessarily defective or inoperable under the plurality of operating conditions indicated in the operating condition data. The decision of stage 3530 is intended to match the actual state of the PDA as accurately as possible.
Stage 3540 includes processing the frame information to provide an image representing the frame. The processing is based on the detection signals of a plurality of PS of the PDA, excluding those PS in the defective PS group. That is, detection signals from PS of the PDA are used to generate an image representative of the FOV (or of another scene, or of one or more objects from which light reaches the PDA), while avoiding the inclusion of any detection signals from PS in the defective PS group (which is dynamically determined based on the operating condition data relevant to the frame, as previously described). Stage 3540 may optionally include calculating a plurality of replacement values to compensate for the ignored detection signals. Such calculations may include, for example, determining a replacement value for a defective PS based on the detection signals of a plurality of adjacent PS, or determining a replacement value for a pixel of the image based on the values of a plurality of adjacent pixels of the image. Any of the techniques discussed above with respect to image generation in method 2400 may also be used for image generation in stage 3540.
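The replacement-value calculation described for stage 3540 can be sketched roughly as follows; the grid representation and the helper name are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of the replacement-value calculation in stage 3540: a pixel
# whose PS is in the defective group receives the average of its available
# 4-neighbors.

def replacement_value(frame, defective, row, col):
    """Average the detection levels of in-bounds, non-defective 4-neighbors."""
    neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    values = [frame[r][c] for r, c in neighbors
              if 0 <= r < len(frame) and 0 <= c < len(frame[0])
              and (r, c) not in defective]
    return sum(values) / len(values) if values else 0.0

frame = [[10, 10, 10],
         [10, 99, 10],  # 99: reading of a PS deemed defective for this frame
         [10, 10, 10]]
value = replacement_value(frame, defective={(1, 1)}, row=1, col=1)
```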
An example of performing the method for two frames (a first frame and a second frame) may include: for example:
a. First frame information is received from the PDA, indicating a plurality of first detection signals provided by a plurality of PS, including at least a first PS, a second PS, and a third PS, and associated with a first frame duration. A frame duration is the span of time over which light is accumulated by the PDA for a single image frame or a frame of a video. Different frame durations may be mutually exclusive, but may optionally partially overlap in some embodiments.
b. First operating condition data is received, the first operating condition data being indicative of an operating condition of the PDA during the first frame duration.
c. Determining a first defective PS group including the third PS but excluding the first PS and the second PS based at least on the first operating condition data. The deciding may include: directly determining the first defective PS group, or determining other data that suggests which pixels are considered defective (e.g., determining a complement of non-defective pixels, assigning a defect level to each pixel, and then setting a threshold or other decision criteria).
d. Processing the first frame information based on the first defective PS group to provide a first image such that the processing is based at least on the first detection signals of the first PS and the second PS (optionally after a previous preprocessing, such as digitizing, thresholding, level adjustment, etc.), and ignoring information related to the detection signals of the third PS.
e. Second frame information is received from the PDA, the second frame information indicating a plurality of second detection signals provided by the plurality of PS. The second frame information relates to a second frame duration different from the first frame duration.
f. Second operating condition data is received, the second operating condition data being indicative of a plurality of operating conditions of the PDA during the second frame duration, the second operating condition data being different from the first operating condition data. Note that the second operating condition data may be received from the same source as the first operating condition data, but this is not required.
g. Determining data of a second defective PS group, including the second PS and the third PS but not including the first PS, based on the plurality of second operating conditions. The deciding may include: directly determining the second defective PS group, or determining other data that suggests which pixels are considered defective (e.g., determining a complement of non-defective pixels, assigning a defect level to each pixel, and then setting a threshold or other decision criteria).
h. Processing the second frame information based on the second defective PS group to provide a second image such that the processing of the second image information is based at least on the plurality of second detection signals of the first PS and ignoring information related to the plurality of detection signals of the second PS and the third PS.
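One hedged way to picture steps a through h is a defect-level model in which each PS is assigned a defect level as a function of the operating conditions and then thresholded (the indirect path mentioned in steps c and g). Here the defect level is modeled simply as FET multiplied by an assumed per-PS DC sensitivity; the sensitivities and threshold are illustrative assumptions.

```python
# Hedged sketch of steps a-h: a longer FET yields a larger defective PS group
# because more PS cross the assumed defect-level threshold.

DC_SENSITIVITY = {"PS1": 0.1, "PS2": 0.5, "PS3": 2.0}  # illustrative values

def defective_group(fet_ms, threshold=1.0):
    """PS whose estimated defect level reaches the threshold at this FET."""
    return {ps for ps, s in DC_SENSITIVITY.items() if s * fet_ms >= threshold}

# Shorter-FET first frame: only the third PS is excluded.
first_group = defective_group(fet_ms=0.5)
# Longer-FET second frame: the second and third PS are excluded.
second_group = defective_group(fet_ms=4.0)
```

This mirrors the example above: the first image is generated without the third PS, and the second image without the second and third PS, while the first PS contributes to both.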
Fig. 28A illustrates a system 3600 and exemplary target objects 3902 and 3904 in accordance with examples of the presently disclosed subject matter. EO system 3600 includes at least a processor 3620 operable to process a plurality of detection signals from at least one PDA (which may be part of the same system, but need not be) to generate a plurality of images representing a plurality of objects in a FOV of system 3600. The system 3600 may be implemented by system 2300, in which case like reference numerals apply (e.g., PDA 3610 may be PDA 2302, controller 3640 may be controller 2314, etc.), but this is not required. For brevity, not all of the description provided above with respect to system 2300 is repeated, and it is noted that any combination of one or more components of system 2300 may be implemented, mutatis mutandis, in system 3600, and vice versa. The system 3600 may be a processing system (e.g., a computer, a graphics processing unit) or an EO system further comprising the PDA 3610 and optics. In the latter case, the system 3600 may be any type of EO system that uses a PDA for detection, such as a camera, a spectrometer, a LIDAR, and the like. Optionally, the system 3600 may include one or more illumination sources 3650 (e.g., lasers, LEDs) for illuminating objects in the FOV (e.g., during at least the first FET and the second FET). Optionally, the system 3600 may include a controller 3640, which may determine different FETs for different frames based on different illumination levels of objects in the FOV of the EO system. Optionally, those different FETs may include the first FET and/or the second FET.
Two exemplary targets are shown in fig. 28A: a dark-colored car 3902 (whose body panels have low reflectivity) bearing a highly reflective plate, and a black rectangular panel 3904 with a white patch on it. Note that the system 3600 is not limited to generating images of low-reflectivity objects with high-reflectivity patches; however, the way the system 3600 generates images of such targets is of particular interest.
Processor 3620 is configured to receive, from a PDA (e.g., PDA 3610, if implemented), a plurality of detection results for an object that includes a high reflectivity surface surrounded on all sides by low reflectivity surfaces (e.g., targets 3902 and 3904). The detection results include: (a) first frame information of the object, detected by the PDA in a first FET, and (b) second frame information of the object, detected by the PDA in a second FET longer than the first FET. The first frame information and the second frame information indicate a plurality of detection signals output by different PS of the PDA, which in turn indicate the light intensities of different portions of the object as detected by the PDA. Some PS detect light from the low reflectivity portions of the object, while at least one other PS detects light from the high reflectivity surface.
Based on the different FETs, processor 3620 processes the first frame information and the second frame information in different ways. Fig. 28B illustrates exemplary first and second images of targets 3902 and 3904 according to examples of the presently disclosed subject matter. When processing the first frame information, processor 3620 processes it based on the first FET and generates a first image comprising a bright area representing the high reflectivity surface, surrounded by a dark background representing the low reflectivity surfaces. This is illustrated in fig. 28B as first images 3912 and 3914 (corresponding to objects 3902 and 3904 of fig. 28A). When processor 3620 processes the second frame information based on the second FET (which is longer than the first FET), it generates a second image that includes a dark background without a bright area. This is illustrated in fig. 28B as second images 3922 and 3924 (corresponding to objects 3902 and 3904 of fig. 28A).
That is, even though more light from the highly reflective surface reaches the respective PS of the photodetector during the second frame, the image output is not brighter (or saturated) but darker. Processor 3620 can determine the darker color of the pixels representing the high reflectivity surface in the second image by using information from neighboring PS (which have lower intensity signals, as they capture the lower reflectivity surfaces of the object), after determining that the signals from the relevant PS are not available at that longer second FET. Optionally, processor 3620 may be configured to discard detected light signals corresponding to the high reflectivity surface when generating the second image, based on the second FET (and optionally also based on availability modeling of individual PS, such as discussed with respect to method 2500), and to calculate a dark color for at least one corresponding pixel of the second image in response to the detected light intensities of adjacent low reflectivity surfaces of the object, as extracted from adjacent PS. Alternatively, the decision by processor 3620 to discard the information of the respective PS is not based on the detected signal level, but rather on the sensitivity of the respective PS to DC (e.g., limited capacitance). Optionally, when processing the second frame information, processor 3620 may identify, based on the second FET, at least one PS detecting light from the high reflectivity surface as unavailable for the second frame, e.g., similar to the identification stages of method 2400.
Note that the high reflectivity surface may be smaller than the low reflectivity surface and may be surrounded on all sides by the low reflectivity surface, but this is not required. The dimensions (e.g., angular dimensions) of the high reflectivity surface may correspond to a single PS or to less than one PS, but may also correspond to several PS. The difference between the high and low reflectivity levels may vary. For example: the reflectivity of the low reflectivity surface may be between 0% and 15%, while the reflectivity of the high reflectivity surface may be between 80% and 100%. In another example, the low reflectivity surface may have a reflectivity between 50% and 55%, while the high reflectivity surface may have a reflectivity between 65% and 70%. For example: the minimum reflectivity of the high reflectivity surface may be ×2, ×3, ×5, ×10, or ×100 the maximum reflectivity of the low reflectivity surface. Optionally, the high reflectivity surface has a reflectivity greater than 95% in the spectral range that the plurality of PS can detect (e.g., a white surface), and the low reflectivity surface has a reflectivity of less than 5% in that spectral range (e.g., a black surface). Note that, as described above, a FET may correspond to a fragmented time span (e.g., to several illumination pulses) or to a single continuous time span.
Note that alternatively, the amounts of multiple optical signal levels reaching the relevant PS from the high reflectivity surface in the first FET and in the second FET may be similar. This may be accomplished by filtering the incoming light, and changing an f-number of the detection optics 3670 accordingly (e.g., increasing the FET by a factor q and increasing the f-number by a factor q). Optionally, a first Exposure Value (EV) of the PDA in retrieving the first frame information differs from the second EV of the PDA in retrieving the second frame information by less than 1%. Optionally, the difference in FETs is the only major difference between the operating conditions between the first and second frames.
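As a general optics note (not the patent's procedure): under the standard photographic definition EV = log2(N²/t), with f-number N and exposure time t, lengthening the FET by a factor q while raising the f-number by a factor of √q leaves the EV unchanged. The sketch below, with illustrative numbers, checks that relationship.

```python
# Hedged sketch: verifying that two (f-number, FET) configurations share the
# same exposure value EV = log2(N^2 / t). Numbers are illustrative.
import math

def exposure_value(f_number, exposure_s):
    return math.log2(f_number ** 2 / exposure_s)

q = 4.0
ev_first = exposure_value(f_number=2.0, exposure_s=0.001)
# FET lengthened by q, f-number raised by sqrt(q): EV is preserved.
ev_second = exposure_value(f_number=2.0 * math.sqrt(q), exposure_s=0.001 * q)
```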
The temperature of the PDA may be evaluated, as discussed above, to calibrate the availability model to different levels of DC. Optionally, the processor 3620 may be further configured to: (a) process the detection signals of light reflected from the object to determine a first temperature estimate of the photodetection array when capturing the first frame information and a second temperature estimate of the photodetection array when capturing the second frame information, and (b) determine to discard the detection results corresponding to the high reflectivity surface based on the second FET and the second temperature estimate.
Fig. 29 is a flow chart illustrating a method 3700 for generating image information based on data of a PDA, according to examples of the presently disclosed subject matter. Referring to the examples set forth with respect to the previous figures, it is noted that method 3700 may optionally be performed by system 3600. Any of the variations discussed above with respect to system 3600 may be applied to method 3700, mutatis mutandis. In particular, method 3700 (and at least stages 3710, 3720, 3730, and 3740 thereof) may be performed by processor 3620.
Stage 3710 includes receiving, from the PDA, first frame information of a black target that includes a white area, the first frame information indicating light intensities of different portions of the target detected by the PDA in a first FET. Note that the white area may be replaced by another bright (or otherwise highly reflective) area; for example, any area with a reflectivity higher than 50% may be used instead. Note that the black target may be replaced by another dark (or otherwise low-reflectivity) target; for example, any target with a reflectivity below 10% may be used instead.
Stage 3720 includes processing the first frame information based on the first FET to provide a first image including a bright region surrounded by a dark background. Alternatively, any of the image generation processes discussed above with respect to any of stages 2406, 2414, and 2422 of method 2400 may be used to implement stage 3720.
Stage 3730 includes receiving, from the PDA, second frame information of a black target including a white area, the second frame information being indicative of a plurality of light intensities of different portions of the target detected by the PDA in a second FET longer than the first FET.
Stage 3740 includes processing the second frame information based on the second FET to provide a second image including a dark background without a bright region. Optionally, stage 3740 may be implemented using any of the image generation processes discussed above with respect to any of stages 2406, 2414, and 2422 of method 2400, as well as the preceding stages of identifying the groups of available and unavailable PS.
Regarding the execution order of method 3700, stage 3720 is performed after stage 3710, and stage 3740 is performed after stage 3730. Otherwise, any suitable order of stages may be used. Method 3700 may optionally further include capturing the first frame information and/or the second frame information by the PDA.
Optionally, after receiving the first frame information and before receiving the second frame information, the second FET may be determined, the second FET being longer than the first FET. Optionally, the processing of the second frame information may include: discarding detected light intensity information of the white region based on the second FET; and determining a dark color for at least one corresponding pixel of the second image in response to the light intensities of adjacent regions indicated by the second frame information. Optionally, the processing of the second frame information may include: identifying, based on the second FET, at least one PS that detects light from the white region as unavailable for the second frame. Optionally, a first Exposure Value (EV) of the PDA when capturing the first frame information may differ from a second EV of the PDA when capturing the second frame information by less than 1%.
Optionally, during the first frame exposure time, the accumulation of DC on a given PS leaves sufficient available dynamic range for that PS, while during the second frame exposure time the accumulation of DC on that PS leaves insufficient dynamic range. In such a case, the PS corresponding to the high reflectivity region cannot be used for image generation in the second image, and a replacement color value may be calculated to replace the lost detection level.
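A minimal sketch of this dynamic-range argument follows; the full-well capacity, headroom requirement, and DC rate are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: DC grows linearly with the FET, and a PS is unusable for a
# frame when the headroom left in its capacitance (full-well capacity minus
# accumulated DC) is insufficient for the signal.

FULL_WELL_E = 10000.0      # full-well capacity, electrons (assumed)
MIN_HEADROOM_E = 2000.0    # minimum usable signal range, electrons (assumed)

def ps_usable(dc_rate_e_per_s, fet_s):
    headroom = FULL_WELL_E - dc_rate_e_per_s * fet_s
    return headroom >= MIN_HEADROOM_E

usable_first = ps_usable(dc_rate_e_per_s=500_000.0, fet_s=0.01)   # headroom left
usable_second = ps_usable(dc_rate_e_per_s=500_000.0, fet_s=0.02)  # DC fills the well
```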
A non-transitory computer readable medium for generating image information based on data of a PDA is provided, including instructions stored thereon that, when executed on a processor, perform the steps of: (a) receiving, from a PDA, first frame information of a black target including a white area, the first frame information indicating light intensities of different portions of the target detected by the PDA in a first FET; (b) processing the first frame information based on the first FET to provide a first image including a bright region surrounded by a dark background; (c) receiving, from the PDA, second frame information of the black target including the white area, the second frame information indicating light intensities of different portions of the target detected by the PDA in a second FET longer than the first FET; and (d) processing the second frame information based on the second FET to provide a second image including a dark background without a bright area.
The non-transitory computer readable medium of the previous paragraph may include: other instructions stored thereon that, when executed on a processor, perform any of the other steps or variations discussed above with respect to method 3700.
In the above disclosure, systems, methods, and computer code products are described, as well as ways to use them to photoelectrically capture and generate high quality images. In particular, such systems, methods, and computer code products may be utilized to generate a plurality of high quality SWIR images (or other SWIR sensing data) in the presence of high DC of the PDs. Such PDs may be germanium PDs, but this need not be the case. Some ways of using such systems, methods, and computer program products in a coordinated manner are discussed above, and many others are possible and considered to be part of the inventive subject matter of this disclosure. Any of the systems discussed above may incorporate any one or more of the components of any one or more of the other systems discussed above, to achieve higher quality results, to achieve similar results in a more efficient or cost-effective manner, or for any other reason. Likewise, any of the methods discussed above may incorporate any one or more of the stages of any one or more of the other methods discussed above, to achieve higher quality results, to achieve similar results in a more efficient or cost-effective manner, or for any other reason.
In the following paragraphs, some non-limiting examples of such combinations are provided to demonstrate some possible synergy.
For example: the imaging systems 100, 100' and 100″, whose integration times are set short enough to overcome the excessive effects of DC noise, may implement PDDs such as PDDs 1300, 1300', 1600', 1700, 1800 in the receiver 110 to reduce the time-invariant (direct current, DC) portion of the dark noise. In this way, the capacitances of the plurality of PS are not swamped by the time-invariant portion of the DC (which is not accumulated in the detection signal), and the DC noise does not mask the detection signal. Implementing any of the PDDs 1300, 1300', 1600', 1700, 1800 in any of the imaging systems 100, 100' and 100″ can be used to extend the frame exposure time to a significant extent (because the time-invariant portion of the DC is not accumulated in the capacitance) while still detecting a meaningful signal.
For example: the imaging systems 100, 100' and 100″, with the integration time set short enough to overcome the excessive effects of DC noise, may implement any one or more of the methods 2400, 2500 and 3500 to determine the number of PS available at that frame exposure time, and possibly to reduce the frame exposure time (which corresponds to the integration time) until a sufficient number of PS are available. Also, the expected ratio between the readout noise and the expected cumulative DC noise level for a given FET, together with the expected availability of different PS at that FET, may be used by the controller to set a balance between the quality of the detected signal, the number of available pixels, and the illumination level required from the light source (e.g., laser 600). The availability models at different FETs may also be used to determine the distance ranges of the gated images generated by imaging systems 100, 100', and 100″, when applicable. Further incorporating any of the PDDs 1300, 1300', 1600', 1700, 1800 as the sensor of such an imaging system would add the benefits discussed in the previous paragraph.
For example: any one or more of methods 2400, 2500, and 3500 may be implemented by system 1900 (or by any EO system including any of the PDDs 1300, 1300', 1600', 1700, 1800). The reduction of the effects of DC accumulation, as discussed with respect to system 1900 (or any of the aforementioned PDDs), allows longer FETs to be utilized. Any of these methods may be implemented to facilitate longer FETs, because deciding which PS are temporarily unavailable at a relatively long FET enables the system 1900 (or another EO system with one of the aforementioned PDDs) to ignore those PS and optionally replace their detection outputs with data of neighboring PS.
Fig. 30 shows three diagrams illustrating an exemplary PDA 4110 according to the presently disclosed subject matter. Referring to diagram A, PDA 4110 includes a plurality of PS 4120. The plurality of PS may be sensitive to SWIR light, to IR, to portions of the visible spectrum, or to any other portion of the electromagnetic spectrum. In the illustrated example, the plurality of PS 4120 are arranged in columns and rows, but any other geometric arrangement of PS may be implemented. Each PS in the illustrated example is identified by a letter corresponding to its column and a number corresponding to its row.
Some types of PS feature relatively high DC, as discussed in more detail above. Optionally, PS 4120 can be characterized by a relatively high dark current (e.g., according to any of the criteria and examples provided above). One way to reduce the effect of DC on an image captured by a PDA is to subtract, from the detection value of a PS, a reference detection value measured by that same PS while it is shielded from ambient illumination. Alternatively, an average of several detection values taken in the shielded state may be subtracted from the actual detection value, to reduce the noise of the DC measurements. Referring to fig. 31, another method of reducing the effect of DC on images captured by a PDA is discussed, which includes using detection data of PS that are shielded from ambient illumination to reduce the effect of DC accumulation in PS that detect light arriving from the FOV of the PDA (or of an EO system in which the PDA is installed). Note that the two methods may be combined in a single PDA at the same time (i.e., to correct the same light measurement), at different times, or in any other suitable manner.
Diagram B illustrates an arrangement in which the PS in a single rectangular area 4114 of the PDA 4110 (comprising three columns of PS, H through J, in the illustrated example) are shielded from ambient light by a physical block 4130 (thus measuring only, or primarily, the DC generated by the PS themselves). Diagram C illustrates an arrangement in which the PS in several areas 4114 of the PDA 4110 are shielded from ambient light by a physical block 4130. One or more areas 4114 may be located in one portion of the PDA 4110 (e.g., adjacent to an edge; in a corner), or in remote areas of the PDA (e.g., on opposite edges; in the middle and in a corner). Each area 4114 may include one or more PS 4120 (e.g., tens, hundreds, thousands, or tens of thousands of PS 4120 may be included in each shielded area 4114). The physical block 4130 may comprise a single continuous block or a plurality of individual block elements (not shown).
Fig. 31 is a diagram illustrating a method of generating an image by a PDA, numbered 4500, in accordance with an example of the presently disclosed subject matter. Referring to the example of the figures, method 4500 may alternatively be performed by a processor, such as any of the processors discussed above with respect to the previous figures.
Stage 4510 of method 4500 comprises obtaining from a PDA a plurality of detection values of different PS measured during a first frame duration, said PDA comprising a plurality of copied PS (a multitude of duplicated PSs), said plurality of detection values comprising:
a. a first detection value for a first PS (e.g., PS C2 in diagram 4610 of fig. 32), indicative of an amount of light impinging on the first PS from a FOV during the first frame duration;
b. a second detection value for a second PS (e.g., PS F1 in diagram 4610), indicative of an amount of light impinging on the second PS from the FOV during the first frame duration;
c. a third detection value for a third PS (e.g., PS G9 in diagram 4610), indicative of an amount of light impinging on the third PS from the FOV during the first frame duration;
d. a fourth detection value for each of at least one fourth PS (e.g., PS I4 in diagram 4610), measured while the corresponding fourth PS is shielded from ambient illumination. The fourth detection value of each respective fourth PS, also called a "first dark measurement", is a detection value measured while the PS is shielded. The fourth detection value(s) may be measured during the first frame duration, but in some implementations this is not the case;
e. a fifth detection value for each of at least one fifth PS (e.g., PS J2 in diagram 4610), measured while the corresponding fifth PS is shielded from ambient illumination. The fifth detection value of each respective fifth PS is also referred to as a "second dark measurement". The fifth detection value(s) may be measured during the first frame duration, but in some implementations this is not the case.
The term "multiple replicated photosites (duplicated photosites)" relates to PS that are similar to one another, with minor differences due to manufacturing inaccuracies. Each replicated PS may be made from a substantially identical set of photolithographic masks and subjected to a substantially identical set of fabrication processes (e.g., simultaneously with each other). The detection value may be: an electrical signal output by the corresponding PS; an electrical signal obtained by processing (e.g., amplifying) the output electrical signal; a digital value corresponding to any such electrical signal (e.g., obtained using an analog-to-digital converter, ADC); or a digital signal obtained by processing the output signal (e.g., correcting for dynamic range differences, internal gain, and the like). Optionally, a majority of the PS of the PDA, including the first PS, the second PS, the third PS, the fourth PS, and the fifth PS, are copies of each other. Note that not only the plurality of PS themselves may be copies of each other (in the sense discussed above), but optionally also other parts of the system associated with the plurality of PS, such as the entire detection channel or readout channel (e.g., including amplifiers, filters, ADC modules, and the like).
Stages 4520, 4530 and 4540 comprise determining output values for the different "active" PSs (the first PS, the second PS and the third PS), including compensating for DC effects with the help of measurements by the associated reference PSs (the at least one fourth PS and the at least one fifth PS).
Stage 4520 comprises determining a first PS output value (i.e., an output value associated with the first PS) based on subtracting an average of the at least one fourth detection value from the first detection value. Stage 4530 comprises determining a second PS output value based on subtracting an average of the at least one fifth detection value from the second detection value. Stage 4540 comprises determining a third PS output value based on subtracting an average of the at least one fourth detection value from the third detection value. In case only one fourth PS (or only one fifth PS) is used, the average is simply equal to the fourth detection value (or the fifth detection value, respectively). Optionally, a single fourth detection value may be multiplied by a known factor prior to the subtraction. If more than one fourth PS (or more than one fifth PS) is used, the corresponding detection values may be averaged using any suitable type of average, such as an arithmetic mean, median, weighted mean, truncated mean, mid-range, or winsorized mean.
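By way of non-limiting illustration, the subtraction of stages 4520 through 4540 can be sketched as follows. This is a minimal Python sketch; the function names `dark_average` and `ps_output` and all numeric values are hypothetical, and the averaging options mirror the averages listed above:

```python
import statistics

def dark_average(dark_values, method="mean", trim=0.25):
    """Combine one or more reference-PS dark measurements into a single
    value to subtract. With a single dark value, the average equals it."""
    vals = sorted(dark_values)
    if len(vals) == 1:
        return vals[0]
    if method == "mean":
        return statistics.mean(vals)
    if method == "median":
        return statistics.median(vals)
    if method == "midrange":
        return (vals[0] + vals[-1]) / 2
    if method == "truncated":
        # drop a fraction of the samples from each tail, then average
        k = int(len(vals) * trim)
        core = vals[k:len(vals) - k] or vals
        return statistics.mean(core)
    raise ValueError(method)

def ps_output(detection_value, dark_values, method="mean"):
    """Stages 4520/4530/4540: active-PS output = detection minus dark average."""
    return detection_value - dark_average(dark_values, method)
```

With a single reference PS, `ps_output(100, [10])` simply returns 90; with several reference PSs, the choice of average trades robustness to outlier dark values against noise averaging.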
Method 4500 continues with stage 4550 of generating a first frame image based on at least the first PS output value, the second PS output value, and the third PS output value. The generation of the first frame image may further be based on additional output values of other PSs.
In addition to determining output values corresponding to the first PS, the second PS, and the third PS, method 4500 may include determining output values corresponding to other PSs of the PDA. Optionally, for all (or most) of the PSs of the PDA that detect light from the FOV and whose detection values are intended for generating the first frame image, method 4500 may comprise determining a corresponding output value for each of those PSs, based on subtracting from the detection value of the respective PS an average of the detection values of at least one corresponding reference PS (out of the plurality of replicated PSs). The reference PSs are a group of PSs that are shielded from light when the aforementioned detection values are measured. Accordingly, the following assumes that method 4500 includes, prior to stage 4550, determining an output value for each PS of a plurality of active PSs whose detection values are provided based on optical measurements of the corresponding PSs during the first frame duration. The determination of each of these output values comprises subtracting, from the detection value of the respective active PS, an average of at least one reference detection value (or "dark measurement") of at least one reference PS that was shielded when the measurement was performed.
Method 4500 can then be repeated for additional frames (consecutive or nonconsecutive) to continuously generate images of the FOV, wherein DC effects on the images are eliminated (or reduced) based on measurements of the shielded group of reference PSs (which are therefore DC measurements). The mapping between the active PSs and the reference PSs used for generating different images of different frames may be kept constant for a long time, even indefinitely (e.g., if decided during manufacture).
Fig. 32 is a diagram illustrating a mapping between different active PSs 4120 of a PDA 4110 and PSs 4120 of a reference group (columns H, I, J in the illustrated example), in accordance with an example of the presently disclosed subject matter. Diagram 4610 illustrates the connections between the aforementioned PSs of method 4500, while diagram 4620 illustrates a complete mapping in which each active PS is mapped to one or more PSs of the reference group. In the illustrated example, each active PS is mapped to a single reference PS, whose address is marked in italics on the respective active PS. It can be seen that, since the group of active PSs is larger than the group of reference PSs, there are multiple sets of active PSs that all map to a single reference PS (or, in another example, to multiple reference PSs of the same set). For example, there are two active PSs mapped to reference PS J6, three active PSs mapped to reference PS I4, and four active PSs mapped to reference PS J5.
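Such a mapping can be represented, for example, as a simple look-up table. The sketch below is illustrative only; the PS addresses follow the column-letter/row-number scheme of fig. 32, but the specific pairings and function names are hypothetical:

```python
# Hypothetical LUT: each active PS address maps to the reference PS whose
# dark measurement is subtracted from its detection value.
mapping_lut = {
    "A1": "J6", "B1": "J6",                         # two active PSs share J6
    "A2": "I4", "B2": "I4", "C2": "I4",             # three share I4
    "A3": "J5", "B3": "J5", "C3": "J5", "D3": "J5",  # four share J5
}

def reference_for(active_ps):
    """Return the reference PS matched to a given active PS."""
    return mapping_lut[active_ps]

def compensate(detections, dark):
    """Subtract each active PS's mapped dark measurement from its detection."""
    return {ps: v - dark[mapping_lut[ps]] for ps, v in detections.items()}
```

Because the active group is much larger than the reference group, many LUT entries point at the same reference PS, exactly as in the figure.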
Optionally, the plurality of reference values subtracted from the plurality of detection values of the plurality of active PS (e.g., the first PS, the second PS, and the third PS) are determined based on measurements of the plurality of respective reference PS (e.g., the fourth PS and the fifth PS) performed during the first frame duration. However, this is not necessarily so, and such measurements may optionally be performed before (or even after) the first frame. For example, dark measurements of the plurality of reference PS may be performed only once every few frames (e.g., 5, 10, 100), and the same DC measurement may be used to subtract from several different frames in which different detection values are obtained for the plurality of active PS at different times. Optionally, the fourth detection value and the fifth detection value may be measured during the first frame duration.
The plurality of reference detection values may be, although not necessarily, based on a plurality of measurements performed while the plurality of active PS are exposed to light (simultaneously, partially simultaneously or within the same frame duration). Optionally, the fourth detection value and the fifth detection value are measured when the first PS, the second PS, and the third PS are exposed to ambient light.
The association of an active PS with a reference PS can be performed in different ways and at different times. Optionally, such association (or "mapping") may be performed once and kept constant. In other examples, the mapping process is repeated at different times (e.g., during routine calibration sessions, when a defective PS is detected, etc.). The mapping process may be based on measuring the detection values of the active PSs and of the reference PSs while all of these PSs are shielded from ambient light (so that only, or primarily, dark current is generated and measured; additional signals, such as readout noise, may also be present). Optionally, such shielded measurements are performed under different operating conditions (e.g., different temperatures, different exposure times, etc.), and for each active PS a set of one (or optionally more than one) reference PS is selected based on similarity of DC behavior across the different conditions.
As previously mentioned, optionally, the match between the active PS and the reference PS remains constant. Optionally, the method 4500 may further include: (a) Determining a plurality of first PS output values for a plurality of additional frames based on subtracting an average of at least one fourth detection value of the respective frame from the first detection value of the respective frame; (b) Determining a plurality of second PS output values for the plurality of additional frames based on subtracting an average of at least one fifth detected value of the respective frame from the second detected value of the respective frame; (c) Determining a plurality of third PS output values for the plurality of additional frames based on subtracting an average of the at least one fourth detection value of the respective frame from the third detection value of the respective frame; and (d) generating a frame image for each additional frame of the plurality of additional frames based at least on the first PS output value, the second PS output value, and the third PS output value of the respective frame.
As previously mentioned, optionally, the match between the active PSs and the reference PSs is kept constant over a wide range of temperatures. Optionally, each additional frame of the previous paragraph is captured by the PDA at a different temperature, wherein a first temperature of the plurality of different temperatures is at least 20 °C higher than a second temperature of the plurality of different temperatures. Optionally, the fourth PS has a DC thermal behavior (i.e., a DC level at different temperatures) equivalent to the DC thermal behavior of the first PS and the third PS, while the fifth PS has a DC thermal behavior equivalent to that of the second PS but different from that of the fourth PS. In the presence of more than one fourth PS and/or more than one fifth PS, optionally the at least one fourth PS has a combined DC thermal behavior (based on the selected averaging) equivalent to the DC thermal behavior of the first PS and the third PS, and the at least one fifth PS has a combined DC thermal behavior equivalent to that of the second PS (but different from the combined DC thermal behavior of the at least one fourth PS). Two DC behaviors are considered equivalent if they are similar at the different temperatures (e.g., among all the reference-PS plots, the plot of DC measurements of the selected reference PS is the closest, using least mean squares or another metric, to that of the tested active PS). Note that, optionally, the DC measurements of the reference PS may be multiplied by a scalar (or otherwise manipulated in unison) prior to comparison with the DC measurements of the active PS at the different temperatures.
Optionally, prior to making said determination of the output values, a matching model is obtained, the matching model comprising, for each PS of a majority of the PSs exposed to FOV illumination during the first frame, at least one matching reference PS that matches the respective PS based on detection values previously measured while shielded from ambient illumination. The matches can be saved in any suitable format (e.g., one or more look-up tables, LUTs). Optionally, the matching model remains constant for at least one week, wherein a plurality of different frame images are generated based on the matching model on different days of that week. An example of a matching model is provided in diagram 4620.
The relative number of PSs between the active-PS group and the reference-PS group may vary, as may the absolute number of PSs in the reference group. Optionally, PDA 4110 may include 1,000 (10³) or more active PSs (e.g., 1K to 10K, 10K to 100K, 100K to 1M, 1M to 10M, 10M to 100M, >100M, or any combination thereof), each matched PS being associated with a shielded PS selected from a shielded-PS group (e.g., the "reference PSs"), the shielded-PS group including less than 1% of the number of active PSs. For example, if there are 2M active PSs, 20K or fewer PSs shielded from ambient light may serve as the reference group. In other cases, different ratios may be implemented between the numbers of active PSs and reference PSs (e.g., <0.1%, 0.1 to 0.5%, 0.5 to 1%, 1 to 2%, 2 to 5%, 5 to 20%, or any combination thereof). Active PSs to which one or more reference PSs are matched (e.g., by a matching model such as a LUT) are also referred to as "matched PSs". Optionally, the matching model includes matches for at least 10K matched PSs, each of the matched PSs being associated with a shielded PS selected from a shielded-PS group including less than 1% of the number of matched PSs. Any other ratio range may be used.
With reference to the matching model, optionally, the matching model matches each of the plurality of matched PSs to exactly one reference PS.
Method 4500 may be implemented in any suitable EO system, where some PS may be shielded from ambient illumination during at least a portion of the time that the EO system detects ambient light using other PS. An EO system operable to generate an image is disclosed, comprising:
a. a PDA comprising a plurality of PSs, each PS being operable to output a detection value indicative of the amount of light impinging on the respective PS during a detection duration and of a DC level generated by that PS during the detection duration. Note that the detection values may optionally be somewhat processed in the transition between the PSs and the processor, for example as discussed elsewhere in the present disclosure, or as known in the art.
b. A shield for shielding a subgroup of the plurality of PS from ambient illumination at least during a first frame duration. Such a shield may block light, reflect light, diffract light or manipulate it in any other suitable way to prevent light from reaching the shielded PS when operated. The shielding may be fixed and/or selective (e.g., movable).
c. a processor operable (e.g., by design, configuration, and the like) to:
d. obtaining a plurality of detection values for a plurality of different PS of said PDA measured during said first frame duration, said plurality of obtained detection values comprising: (a) A first detection value of a first PS indicative of an amount of light impinging on the first PS from a FOV during the first frame duration; (b) A second detection value of a second PS indicative of an amount of light impinging on the second PS from the FOV during the first frame duration; (c) A third detection value of a third PS indicative of an amount of light impinging on the third PS from the FOV during the first frame duration; (d) A fourth detection value of each fourth PS of the at least one fourth PS measured while the respective fourth PS is shielded from ambient illumination; and (e) a fifth detection value for each of the at least one fifth PS measured while the respective fifth PS is shielded from ambient illumination;
e. determining a first PS output value based on subtracting an average of the at least one fourth detection value from the first detection value;
f. determining a second PS output value based on subtracting an average of the at least one fifth detection value from the second detection value;
g. determining a third PS output value based on subtracting an average of the at least one fourth detection value from the third detection value; and
h. A first frame image is generated based at least on the first PS output value, the second PS output value, and the third PS output value.
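The processor steps above (obtaining detection values, the three subtractions, and frame generation) can be sketched end-to-end as follows. This is a hypothetical Python sketch: the PS names, the `matching` structure, and the function name are assumptions, and a real system would operate on 2-D sensor arrays rather than dictionaries:

```python
def generate_frame(active_detections, reference_detections, matching):
    """For each active PS, subtract the mean of its matched reference PSs'
    dark values, then assemble the frame image keyed by PS address."""
    frame = {}
    for ps, value in active_detections.items():
        refs = matching[ps]  # one or more reference PSs matched to this PS
        dark = sum(reference_detections[r] for r in refs) / len(refs)
        frame[ps] = value - dark
    return frame
```

For example, with the first and third PSs matched to a fourth (reference) PS and the second PS matched to a fifth, the call `generate_frame({"P1": 100, "P2": 80, "P3": 90}, {"R4": 10, "R5": 20}, {"P1": ["R4"], "P2": ["R5"], "P3": ["R4"]})` yields the three DC-compensated output values from which the frame image is built.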
Regarding the determination of the output values, note that additional calculations may be implemented in addition to the subtraction, and that the subtracted values may optionally be preprocessed (e.g., to correct the linearity of the detection level of the corresponding PS) prior to the subtraction.
Optionally, a majority of the PSs of the PDA are replicas of one another, the majority including the first PS, the second PS, the third PS, the fourth PS, and the fifth PS. Optionally, the fourth detection value and the fifth detection value are measured during the first frame duration. Optionally, the fourth detection value and the fifth detection value are measured while the first PS, the second PS, and the third PS are exposed to ambient light. Optionally, the at least one fourth PS has a DC thermal behavior equivalent to that of the first PS and the third PS, and the at least one fifth PS has a different DC thermal behavior, equivalent to that of the second PS.
Optionally, the processor is further operable to: (a) Determining a plurality of first PS output values for a plurality of additional frames based on subtracting an average of the at least one fourth detection value for a respective frame from the first detection value for the respective frame; (b) Determining a plurality of second PS output values for the plurality of additional frames based on subtracting an average of the at least one fifth detected value of the respective frame from the second detected value of the respective frame; (c) Determining a plurality of third PS output values for the plurality of additional frames based on subtracting an average of the at least one fourth detection value of the respective frame from the third detection value of the respective frame; and (d) generating a frame image for each additional frame of the plurality of additional frames based at least on the first PS output value, the second PS output value, and the third PS output value of the respective frame. In this case, each of the plurality of additional frames is taken at a different temperature by the PDA, wherein a first temperature of the plurality of different temperatures is at least 20 ℃ higher than a second temperature of the plurality of different temperatures.
Optionally, the EO system may further comprise a memory module for storing a matching model, the matching model comprising, for each of a majority of the photosites exposed to FOV illumination during the first frame, at least one matching reference PS that matches the respective PS based on detection values previously measured while shielded from ambient illumination, wherein the processor is operable to determine the first PS output value, the second PS output value, and the third PS output value further in response to the matching model. Optionally, the matching model remains constant for at least one week, and a plurality of different frame images are generated based on the matching model on different days of that week. Optionally, the matching model includes matches for at least 10,000 matched photosites, each matched photosite being associated with a shielded PS selected from a shielded-PS group comprising less than 1% of the number of matched PSs.
According to method 4500, a non-transitory computer readable medium having machine-readable instructions may be implemented to generate an image based on detection by a photodetector. For example, a non-transitory computer readable medium is disclosed, comprising instructions stored thereon that, when executed on a processor, perform the steps of:
a. obtaining from the PDA a plurality of detection values of different PSs measured during a first frame duration, the PDA comprising a plurality of replicated PSs, the plurality of detection values comprising: (a) a first detection value of a first PS indicative of an amount of light impinging on the first PS from a FOV during the first frame duration; (b) a second detection value of a second PS indicative of an amount of light impinging on the second PS from the FOV during the first frame duration; (c) a third detection value of a third PS indicative of an amount of light impinging on the third PS from the FOV during the first frame duration; (d) a fourth detection value of each fourth PS of the at least one fourth PS, measured while the respective fourth PS is shielded from ambient illumination; and (e) a fifth detection value of each fifth PS of the at least one fifth PS, measured while the respective fifth PS is shielded from ambient illumination;
b. Determining a first PS output value based on subtracting an average of the at least one fourth detection value from the first detection value;
c. determining a second PS output value based on subtracting an average of the at least one fifth detection value from the second detection value;
d. determining a third PS output value based on subtracting an average of the at least one fourth detection value from the third detection value; and
e. A first frame image is generated based at least on the first PS output value, the second PS output value, and the third PS output value.
Any of the variations discussed above may be implemented, mutatis mutandis, by way of appropriate instructions in a non-transitory computer-readable medium, and they are not repeated for the sake of brevity.
Fig. 33 illustrates a method, numbered 4600, for determining a matching model between PSs of a PDA, according to an example of the presently disclosed subject matter. The matching model matches to different PSs reference PSs whose DC behavior is similar. The matching model may be used to reduce the impact of DC on runtime measurements, such as discussed above with respect to method 4500, or for any other use. For example, the determination of the matching model may be used to detect defective PSs (e.g., a PS for which no good match is found may be considered defective).
Stage 4610 includes obtaining a plurality of detection signals for each of a plurality of PSs of the PDA at a plurality of different temperatures. For example, the plurality of PSs may include the first, second, third, fourth, and fifth PSs of method 4500, as well as other PSs. In different implementations, the plurality of PSs may include all PSs of the PDA, all operational PSs of the PDA, at least 90% of the PSs of the PDA, or another subset of the PSs of the PDA. The detection signals of different PSs may be measured at common discrete temperatures (e.g., T1, T2, T3, T4, etc.). In other implementations, the detection signals of each PS (or each PS group) may be obtained at different temperatures (e.g., for PS A5, detection signals may be obtained at temperatures T(A5,1), T(A5,2), T(A5,3) and T(A5,4), while for PS B5, detection signals may be obtained at temperatures T(B5,1), T(B5,2) and T(B5,3)). Although not necessarily so, detection signals may be obtained for different PSs at different numbers of temperatures. Optionally, the lowest temperature at which the detection signals are obtained for the plurality of PSs is below the freezing temperature of water (0 °C), while the highest temperature is above the boiling temperature of water (100 °C). Other temperature ranges may also be implemented. As just a few examples, the temperature span over which the detection signals are measured for each PS of the plurality of PSs may include any one or more of the following temperature ranges: 0 °C to 70 °C, −20 °C to 70 °C, −40 °C to 20 °C, −25 °C to 150 °C, 100 °C to 250 °C, −70 °C to −40 °C. Signal measurements of different PSs may also be made over a range of other operating conditions (e.g., exposure time, photodiode bias, etc.).
Stage 4620 of method 4600 includes identifying, for each PS in an active-PS group of the PDA, at least one PS in a reference-PS group of the PDA whose DC behavior at the different temperatures (and possibly also over the span of another operating parameter, as described above) is similar. The selection of the matching reference PS may be done based on least mean squares or any other algorithm for finding PSs of similar behavior.
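One possible realization of the least-mean-squares selection of stage 4620, including the optional scalar manipulation of the reference curve mentioned above, is sketched below. The names and data layout are hypothetical; each curve is a list of DC measurements taken at the same sequence of temperatures:

```python
def lms_error(active_curve, ref_curve, allow_scale=True):
    """Mean-squared distance between DC-vs-temperature curves. Optionally
    scales the reference curve by the least-squares-optimal scalar first,
    as the disclosure permits (multiplying DC measurements by a scalar)."""
    if allow_scale:
        num = sum(a * r for a, r in zip(active_curve, ref_curve))
        den = sum(r * r for r in ref_curve) or 1.0
        s = num / den
    else:
        s = 1.0
    return sum((a - s * r) ** 2
               for a, r in zip(active_curve, ref_curve)) / len(active_curve)

def best_reference(active_curve, reference_curves):
    """Stage 4620 sketch: pick the reference PS whose DC curve is closest."""
    return min(reference_curves,
               key=lambda r: lms_error(active_curve, reference_curves[r]))
```

For instance, an active PS whose dark current is exactly twice that of reference R1 at every temperature matches R1 perfectly once the scalar is applied.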
Method 4600 may optionally be implemented by any of the suitable systems discussed in this disclosure. Note that method 4600 may alternatively be implemented during manufacture by, or in conjunction with, an external system. For the sake of brevity, a non-transitory computer-readable medium including instructions for performing method 4600 (or any other method disclosed in this disclosure) is disclosed by reference only.
Fig. 34 is a diagram 4700 illustrating exemplary simulated detection signals of 50 PSs across four different temperatures, according to an example of the presently disclosed subject matter. In the illustrated example, four groups are distinguishable among the 50 PSs by their DC response to temperature. These groups are referred to as 4710, 4720, 4730, and 4740. In the example provided, a reference group of about 10 reference PSs is likely to be sufficient to match the entire PDA. Of course, the actual number of PS prototypes may be greater than 4.
Returning to method 4600, stage 4630 includes creating a matching model that associates each PS in the active-PS group with one or more PSs of the PDA possessing the similar DC behavior identified in stage 4620. Note that stage 4630 may include selecting, for one or more PSs, a match that is less good than the best match of stage 4620, because additional parameters may be considered. For example, in addition to a good match result in stage 4620, stage 4630 may also consider geometric proximity (e.g., to favor selection of nearby reference PSs). Geometric proximity can be used, for example, to compensate for different parts of the PDA being at different temperatures during runtime operation.
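The proximity consideration of stage 4630 can be sketched, for example, as a combined cost in which the DC-curve mismatch of stage 4620 is augmented by a weighted geometric distance. The weighting factor `alpha` and the candidate format are assumptions for illustration:

```python
def score(dc_error, distance, alpha=0.1):
    """Combined cost: DC-curve mismatch plus a geometric-proximity penalty,
    so a nearby reference PS can win over a marginally better distant one."""
    return dc_error + alpha * distance

def choose_reference(candidates, alpha=0.1):
    """candidates: iterable of (ref_name, dc_error, distance_in_ps_units)."""
    return min(candidates, key=lambda c: score(c[1], c[2], alpha))[0]
```

Setting `alpha` to zero recovers the pure stage-4620 best match; larger values trade matching accuracy for spatial locality.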
Method 4600 may further include, in addition to or instead of stage 4630, a stage 4640 of identifying at least one defective PS of the PDA. For example, stage 4640 may include determining that a PS of the active-PS group is defective if no matching PS meeting a match-sufficiency criterion is found for the respective active PS.
Method 4600 may also include additional stages, such as deciding which PSs of the PDA are used as reference PSs and which are used as active PSs. For example, if method 4600 is performed during fabrication, a light barrier that shields the reference PSs at least during DC measurement times may be positioned based on the quality of the match between the active PSs and the reference group. For example, as shown with reference to diagram B of fig. 30, such a stage may include determining whether barrier 4130 should cover two, three, or four columns (e.g., I to J, H to J, or G to J) of PDA 4110. In diagram C of fig. 30, such a stage can be used to decide whether a smaller reference area on the side is needed.
Another stage of method 4600 may include determining a backup matching reference PS for each PS in the active-PS group, for example in case the originally matched reference PS becomes defective during the life cycle of the PDA.
Another use of a matching model obtainable by method 4600 is for determining temperature differences between different portions of the PDA. For example, based on the matching results of method 4600 (particularly stage 4620), PS groups with similar DC behavior can be identified in different portions of the PDA. For example, fig. 35 illustrates a PDA 4800 (which may be PDA 4100 in some examples) in which each PS is classified into one of six families ("A", "B", "C", "D", "E" and "F"), or as defective (labeled XX) if no family match can be identified. As shown, in the illustrated example the PSs of the different families are dispersed throughout the PDA. A dark frame can be captured (for PSs in different parts of the PDA, and optionally for the entire PDA), and the DC measurements of PSs belonging to a single family can be compared to one another. Since the DC behavior of the PSs of that family is known, differences in the measured values may be attributed to temperature differences between the different parts of the PDA. Of course, information from different families may be combined to create a temperature map of the PDA during runtime operation. Note that the matching used for temperature-difference detection may differ from the matching used to reduce DC effects (e.g., fewer prototypes/families may be used). The determined temperature differences may be used to calibrate different parts of images generated from later-measured detection signals. Note that temperature-difference detection based on such a matching model may be implemented on the same system as the DC compensation, or independently of it.
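A temperature map of the kind described can be estimated, for example, as sketched below. This is a hypothetical sketch that assumes a locally linear DC-versus-temperature slope for each family around a reference temperature; all names and values are assumptions:

```python
def estimate_region_temps(dark_frame, regions, expected_dc, dc_per_degc,
                          t_ref=25.0):
    """For each PDA region, average the dark measurements of PSs belonging
    to a known family and convert the deviation from the family's expected
    DC at t_ref into a temperature offset (locally linear DC-vs-T model)."""
    temps = {}
    for region, members in regions.items():
        avg_dc = sum(dark_frame[ps] for ps in members) / len(members)
        temps[region] = t_ref + (avg_dc - expected_dc) / dc_per_degc
    return temps
```

Repeating this per family and merging the results would yield a temperature map across the PDA, as the text describes.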
A variant of method 4600 may include matching the PSs into PS groups, each group including at least a minimum number of PSs (e.g., three, five, ten, thirty), with or without distinguishing between reference PSs and active PSs. Once such matching is achieved, data indicative of the DC behavior of the PSs of each group, and/or operating parameters for reducing the effect of DC on the PSs of the corresponding group, may be stored per group rather than per individual PS. Thus, many parameters (e.g., expected DC levels at different temperatures, gains used at different temperatures, and/or other operating parameters) may be saved once per group. This greatly reduces the memory required for storing the data used to reduce DC effects, whether or not some PSs are shielded. For example, for a 2-megapixel PDA, the DC parameters may be saved for only 100, 200, or 1,000 groups (as examples), while a LUT associates each of the 2M PSs with its group.
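The memory saving can be illustrated with a rough byte count. The entry sizes below (a 2-byte LUT entry per PS, 4-byte parameters) are assumptions for illustration only:

```python
def storage_bytes(num_ps, num_groups, params_per_group,
                  bytes_per_param=4, bytes_per_lut_entry=2):
    """Compare group-based storage (one LUT entry per PS plus one parameter
    table per group) against a full per-PS parameter table."""
    grouped = (num_ps * bytes_per_lut_entry
               + num_groups * params_per_group * bytes_per_param)
    per_ps = num_ps * params_per_group * bytes_per_param
    return grouped, per_ps
```

For a 2M-PS PDA with 1,000 groups and 8 parameters per group, the grouped scheme needs about 4 MB (dominated by the LUT) versus 64 MB for per-PS storage, an order-of-magnitude reduction consistent with the text.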
Note that methods 4500 and 4600 may be used with any of the methods described above as well as any of the PDA, PDD, and EO systems described above. For example, the plurality of obscured PS of PDD 1900 may be used as the plurality of reference PS of methods 4500 and 4600.
Referring to the preceding figures, methods 4500 and 4600, and any combination of two or more of their stages, may be performed by any of the processors discussed above with respect to the preceding figures.
Some of the foregoing method stages may also be implemented in a computer program running on a computer system, the computer program comprising at least code portions for performing the steps of the relevant method when run on a programmable apparatus (such as a computer system), or for enabling a programmable apparatus to perform the functions of a device or system according to the invention.
Fig. 36 illustrates a method, numbered 5500, for generating a depth image of a scene based on detection by a short-wave infrared (SWIR) electro-optic imaging system (also referred to as a SWIR EO system or "SEI system"), in accordance with an example of the presently disclosed subject matter. The SEI system may be any of the systems discussed above, or any other suitable EO system for SWIR (e.g., a sensor, a camera, a lidar, and the like). Method 5500 may be executed by one or more processors of the SEI system, by one or more processors external to the SEI system, or by a combination of both.
In method 5500, stage 5510 includes obtaining a plurality of detection signals of the SEI system, each detection signal indicating an amount of light captured by at least one FPA of the SEI system from a particular direction within a FOV of the SEI system within a respective detection time frame (i.e., the respective detection signal is captured during that detection time frame, e.g., measured from a trigger of illumination by an associated light source such as a laser). The at least one FPA includes a plurality of individual PSs, each PS including a germanium (Ge) element in which impinging photons are converted to a detected charge. Note that method 5500 can be used for any type of PS featuring a high DC, even if the PS does not include Ge but includes other elements.
For each of a plurality of directions within the FOV, different detection signals (of the aforementioned plurality of detection signals) indicate reflected SWIR illumination levels from different ranges of distances along that direction. An example is provided in diagram 5710 of fig. 37, illustrating the timing of three different detection signals arriving from the same direction within the FOV. The y-axis (ordinate) represents the level of response of the detection system to reflected photons arriving from the relevant direction. The reflected illumination originates from one or more light sources (e.g., lasers or LEDs), which are optionally controlled by the same processor that controls the FPA, and is reflected from a portion of the FOV (e.g., corresponding to the spatial volume detectable by a single PS). Note that different detection signals may be associated with similar but not fully overlapping portions of the FOV (e.g., if the sensor, the scene, or intermediate optics between the two move over time). Detection signals of the same PS may be reflected from slightly different angles within the FOV in the different detection time windows associated with the different detection signals.
Referring to the example of fig. 37, note that graph 5710 does not show the detection level of each signal, but rather the response of the plurality of detection signals to photons reflected from a perfect reflector at different times from the start of light emission. Illustration 5720 shows three objects located at different distances from the SEI system. Note that in many cases, only one object is detected at a time in each direction, namely the object closest to the SEI system. However, in some cases, more than one object may be detected (e.g., if the foreground object is partially transparent, or does not block light from the entire PS). Graph 5730 illustrates the levels of the three return signals in a direction in which one of the objects is present. In this example, a person is in the near field, a dog is in the middle range, and a tree is in the far field (the choice of objects is arbitrary, and typically only the light reflected from a portion of each object is detected by a single PS). The levels of the three different detection signals (corresponding to different detection timing windows, and therefore to different ranges from the SEI system) for light returned from an object at distance D1 are represented by the human symbol. Likewise, the levels of the detection signals corresponding to light reflected from objects at distances D2 and D3 are represented by the dog and tree symbols, respectively. As shown in illustration 5740, reflections from an object located at a given distance may be converted into a tuple (or any other representation of the data, such as any suitable form of direction-dependent data structure (DADS)) that represents the relative levels of the signals detected at the different time windows. In the illustrated example, each number in the tuple represents a detected signal level in one detection window.
The indications of the multiple detection levels in the tuple may be corrected for distance from the sensor (as the reflected light from an object decreases with distance), but this is not necessarily the case. Although three partially overlapping time windows are used in the illustrated example, any number of time windows may be used. The number of time windows may be the same for different regions of the FOV, but this is not necessarily the case.
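To make the tuple representation concrete, the following minimal Python sketch builds a DADS-style tuple of relative detection levels from raw gated-window signals. The helper name `detection_tuple`, the optional distance correction, and all numeric values are illustrative assumptions, not taken from the patent:

```python
def detection_tuple(window_signals, distance_correction=None):
    """Build a tuple of relative detection levels for one direction.

    window_signals: raw detection levels, one per gated time window.
    distance_correction: optional per-window gain compensating for the
    falloff of reflected light with distance (applied only if given,
    since the text notes such correction is not mandatory).
    """
    if distance_correction is not None:
        window_signals = [s * g for s, g in zip(window_signals, distance_correction)]
    peak = max(window_signals) or 1.0  # avoid division by zero when all windows are dark
    # Express each window's level relative to the strongest response.
    return tuple(round(s / peak, 2) for s in window_signals)

# Hypothetical readings for an object whose return falls mostly in the middle window:
print(detection_tuple([120.0, 480.0, 60.0]))  # -> (0.25, 1.0, 0.12)
```

A processor could then match such a tuple against per-direction calibration data (the transfer functions mentioned below) to estimate the object's distance.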
Stage 5520 includes processing the plurality of detection signals to determine a 3D detection map, the 3D detection map including a plurality of 3D positions in the FOV at which objects are detected. The processing comprises: compensating for dark current (DC) levels, caused by the Ge element, that accumulate during the collection of the plurality of detection signals, the compensating comprising: applying different degrees of DC compensation to a plurality of detection signals detected by different PS of the at least one FPA.
In addition to or instead of compensating for the accumulated DC, the processing may include: compensating for high integration noise levels and/or readout noise levels introduced during reading of the plurality of detection signals. The compensation may include applying different degrees of noise level compensation to a plurality of detection signals detected by different PS of the at least one FPA.
Compensation for DC accumulation, readout noise, and/or integration noise can be implemented in any suitable way, for example using any combination of one or more of: software, hardware, and firmware. In particular, compensation for DC accumulation may be achieved using any one or more of the above systems, methods, and computer program products, or any combination of any portion thereof. Some non-limiting examples of systems, methods, and computer program products that may be used to compensate for DC and to apply different degrees of DC compensation to detection signals detected by different PS of the at least one FPA are discussed above with reference to fig. 12A-35.
In some implementations, compensation may be performed during acquisition of multiple detection signals (e.g., at the hardware level of the sensor), and processing may be performed on detection signals that have been compensated for DC accumulation (e.g., using the systems and methods discussed with respect to fig. 12A-22).
The compensating of stage 5520 may optionally include: subtracting a first DC offset from a first detection signal detected by a first PS, the first detection signal corresponding to a first detection range, and subtracting a second DC offset, different from the first DC offset, from a second detection signal detected by the first PS, the second detection signal corresponding to a second detection range that is further from the SEI system than the first detection range.
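As a rough illustration of this per-PS, per-range compensation, the Python sketch below assumes each PS has a known dark-current rate and that a farther detection range uses a longer integration gate, so a larger offset is subtracted. The dictionary, function name, and all numbers are hypothetical:

```python
DARK_CURRENT_RATE = {  # hypothetical dark charge accumulated per microsecond, per PS
    "PS1": 3.0,
    "PS2": 4.5,  # different PS of the FPA may need different degrees of compensation
}

def compensate(ps_id, raw_signal, integration_time_us):
    """Subtract a DC offset proportional to this PS's dark rate and to the
    integration time of the gate associated with the detection range."""
    dc_offset = DARK_CURRENT_RATE[ps_id] * integration_time_us
    return max(raw_signal - dc_offset, 0.0)  # clamp: signal cannot go negative

# First detection range (short gate) vs. second, farther range (longer gate):
near = compensate("PS1", 250.0, integration_time_us=2.0)   # offset 6.0  -> 244.0
far = compensate("PS1", 250.0, integration_time_us=10.0)   # offset 30.0 -> 220.0
```

The same raw reading thus yields different compensated values for the two ranges, matching the idea of subtracting distinct first and second DC offsets from signals of the same PS.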
Optionally, the method 5500 may include coordination of active illumination (e.g., by at least one light source of the SEI system) with acquisition of the plurality of detection signals. Optionally, the method 5500 may include: (a) triggering the emission of a first illumination (e.g., by a laser or LED) in coordination with initiating the exposure of a first gated image, wherein a plurality of first detection signals are detected for different ones of the plurality of directions; (b) triggering the emission of a second illumination (e.g., laser, LED) in coordination with initiating the exposure of a second gated image, wherein a plurality of second detection signals are detected for the different directions; and (c) triggering the emission of a third illumination (e.g., laser, LED) in coordination with initiating the exposure of a third gated image, wherein a plurality of third detection signals are detected for the different directions. In this case, the processing of stage 5520 may optionally include: determining a presence of a first object in a first 3D position in a first direction of the different directions based on at least one detection signal from each of the first, second, and third images, and determining a presence of a second object in a second 3D position in a second direction of the different directions based on at least one detection signal from each of the first, second, and third images, wherein the first object is at least twice as far from the SEI system as the second object.
Optionally, applying the different degrees of DC compensation to the plurality of detection signals detected by different PS of the at least one FPA comprises: a plurality of detected DC levels of different reference PS are used that are shielded from light from the FOV.
Optionally, the compensating may include: different degrees of DC compensation are applied to multiple detection signals detected simultaneously by different PS of the at least one FPA.
With respect to integration noise and readout noise, it is noted that compensation for such noise, by the at least one processor executing method 5500, may take into account the number of illumination pulses used to illuminate portions of the FOV during acquisition of the corresponding detection signals. Different numbers of illumination pulses may result in significant nonlinearities of the plurality of detection signals, which are optionally corrected as part of the processing before determining the distances/3D positions of different objects in the FOV.
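One possible (assumed) form of such a correction is sketched below: the detection signal is first linearized with an assumed soft-saturation model and then scaled by the number of illumination pulses, so that frames acquired with different pulse counts become comparable. The soft-knee model, function name, and constants are illustrative, not taken from the patent:

```python
import math

def normalize_by_pulses(raw_signal, n_pulses, saturation_level=1000.0):
    """Undo an assumed soft-saturation nonlinearity, then scale to a
    per-pulse signal so frames with different pulse counts are comparable."""
    # Invert s = sat * (1 - exp(-x / sat)), a common soft-knee model
    # (an assumption here, not the patent's nonlinearity).
    linearized = -saturation_level * math.log(1.0 - raw_signal / saturation_level)
    return linearized / n_pulses
```

For example, a raw reading of `1000 * (1 - exp(-0.5))` acquired with 5 pulses normalizes to a per-pulse signal of 100.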
With reference to using a DADS to determine distances/3D locations of different objects in the FOV, note that different transfer functions from DADS (e.g., tuples) to distance may be used for different directions within the FOV, e.g., to compensate for non-uniformities of the detection channels across the FOV (e.g., of the sensor and/or of the detection optics), non-uniformities of illumination (e.g., when using multiple light sources, due to light source non-uniformities, or due to optics non-uniformities), and the like.
Different detection signals from the same direction within the FOV correspond to different detection windows, which may span the same distance range or different distance ranges. For example, a detection window may correspond to a distance range of about 50m (e.g., between 80m from the SEI system and 130m from the SEI system). In various examples, some or all of the detection windows used to determine a distance/3D position of an object in the FOV may span distance ranges between 0.1m and 10m, between 5m and 25m, between 20m and 50m, between 50m and 100m, between 100m and 250m, and so on. The distance ranges associated with different detection signals may overlap. For example, a first detection window may detect return light from objects between 0m and 50m from the SEI system, a second window may correspond to objects between 25m and 75m away, and a third window may correspond to objects between 50m and 150m away.
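The Python sketch below illustrates how overlapping windows can narrow down an object's distance: intersecting the ranges of the windows that show a significant return yields a candidate distance band. The window limits are the example values from the preceding paragraph; the threshold and function name are assumptions:

```python
WINDOWS = [(0.0, 50.0), (25.0, 75.0), (50.0, 150.0)]  # meters; ranges may overlap

def candidate_band(signals, threshold=0.1):
    """Intersect the distance ranges of every window whose signal exceeds
    the threshold; returns (min_m, max_m) or None if nothing was detected."""
    active = [WINDOWS[i] for i, s in enumerate(signals) if s > threshold]
    if not active:
        return None
    lo = max(w[0] for w in active)
    hi = min(w[1] for w in active)
    return (lo, hi) if lo <= hi else None

# Strong return in the first two windows only -> object in the 25-50m overlap:
print(candidate_band([0.8, 0.6, 0.02]))  # -> (25.0, 50.0)
```

In practice the relative signal levels within the band (the tuple/DADS values discussed above) would refine the estimate further; this sketch shows only the coarse window-intersection step.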
Method 5500 may be performed by one or more processors, such as, but not limited to, a processor of any of the above systems. A system for generating a depth image of a scene based on detection of an SEI system is disclosed, the system comprising at least one processor configured to: obtaining a plurality of detection signals of the SEI system, wherein each detection signal of the plurality of detection signals is indicative of an amount of light captured by at least one FPA of the SEI system from a particular direction within a FOV of the SEI system within a respective detection frame, the at least one FPA comprising a plurality of individual PS, each PS comprising a germanium element that converts a plurality of impinging photons into a detected charge, wherein for each of a plurality of directions within a FOV a plurality of different detection signals are indicative of a plurality of reflected SWIR illumination levels from different distance ranges along the direction; and processing the plurality of detection signals to determine a 3D detection map, the 3D detection map comprising a plurality of 3D locations in the field of view where a plurality of objects are detected, wherein the processing comprises: compensating for a plurality of DC levels accumulated during the collecting of the plurality of detection signals caused by the germanium element, wherein the compensating comprises: different degrees of DC compensation are applied to the plurality of detection signals detected by different PS of the at least one FPA.
Optionally, the compensating may include: subtracting a first DC offset from a first detection signal detected by a first PS, the first detection signal corresponding to a first detection range; and subtracting a second DC compensation offset from a second detection signal detected by the first PS, the second DC compensation offset being different from the first DC compensation offset, the second detection signal corresponding to a second detection range, the second detection range being further from the SEI system than the first detection range.
Optionally, the at least one processor may be further configured to: (a) trigger emission of a first illumination in coordination with initiating exposure of a first gated image, wherein a plurality of first detection signals are detected for different ones of the plurality of directions; (b) trigger emission of a second illumination in coordination with initiating exposure of a second gated image, wherein a plurality of second detection signals are detected for the different directions; and (c) trigger emission of a third illumination in coordination with initiating exposure of a third gated image, wherein a plurality of third detection signals are detected for the different directions. In this case, as part of determining the 3D detection map, the at least one processor is further configured to: (a) determine the presence of a first object in a first 3D position in a first direction of the different directions based on at least one detection signal from each of the first, second, and third images, and (b) determine the presence of a second object in a second 3D position in a second direction of the different directions based on at least one detection signal from each of the first, second, and third images, wherein the first object is at least twice as far from the SEI system as the second object.
Optionally, applying the different degrees of DC compensation to the plurality of detection signals detected by different PS of the at least one FPA comprises: a plurality of detected DC levels of different reference PS are used that are shielded from light from the FOV.
Optionally, the compensating may include: different degrees of DC compensation are applied to multiple detection signals detected simultaneously by different PS of the at least one FPA.
Optionally, one or more (and possibly all) of the at least one processor may be part of the SEI system.
With reference to the preceding figures, the method 5500 and any combination of two or more stages thereof may be performed by any of the processors discussed above with respect to the preceding figures. With reference to the preceding figures, method 4600 and any combination of two or more stages thereof may be performed by any of the processors discussed above with respect to the preceding figures.
It is noted that while method 5500 and the related systems are discussed with respect to generating a depth image of a scene based on detection of a SWIR-based EO imaging system (SEI system), similar methods and systems based on detection of an EO imaging system characterized by high DC or other noise and signal interference may likewise be used to generate a depth image of a scene, even when operating in other portions of the electromagnetic spectrum.
A non-transitory computer-readable medium carrying machine-readable instructions may be provided for generating a depth image of a scene based on detection of an SEI system according to the method 5500. For example, a non-transitory computer-readable medium is disclosed, comprising instructions stored thereon that, when executed on a processor, perform the steps of:
a. obtaining a plurality of detection signals of the SEI system, each detection signal indicating an amount of light captured by at least one FPA of the SEI system from a particular direction within a FOV of the SEI system within a respective detection time frame, the at least one FPA comprising a plurality of individual PS, each PS comprising a germanium element that converts a plurality of impinging photons into detected charges, wherein for each of a plurality of directions within a FOV a plurality of different detection signals indicate a plurality of reflected SWIR illumination levels from different distance ranges along the direction; and
b. Processing the plurality of detection signals to determine a 3D detection map, the 3D detection map comprising a plurality of 3D positions in the FOV at which a plurality of objects are detected, wherein the processing comprises: compensating for a plurality of DC levels accumulated during the collecting of the plurality of detection signals caused by the germanium element, and wherein the compensating comprises: different degrees of DC compensation are applied to the plurality of detection signals detected by different PS of the at least one FPA.
The non-transitory computer-readable medium of the previous paragraph may include additional instructions stored thereon that, when executed on a processor, perform any other step or variation discussed above with respect to method 5500.
Fig. 38A-38C illustrate a sensor 5200 in accordance with examples of the presently disclosed subject matter. The sensor 5200 is operable to detect depth information of an object in its FOV. Note that the sensor 5200 may be a variation (under any terminology) of any of the sensors discussed above, with the adaptations discussed below (including the controller 5250 and its functions, and the associated switches). For the sake of brevity, many of the details, options, and variations discussed above with respect to the different sensors are not repeated and may be implemented with the necessary modifications in the sensor 5200.
The sensor 5200 includes an FPA 5290, which FPA 5290 in turn includes a plurality of PS 5212, each PS being operable to detect light arriving from an IFOV of the PS. Different PS 5212 are directed in different directions within a FOV 5390 of the sensor 5200. For example, referring to FOV 5390 illustrated in fig. 42, a first PS 5212 (a) can be directed to a first IFOV 5312 (a), a second PS 5212 (b) can be directed to a second IFOV 5312 (b), and a third PS 5212 (c) can be directed to a third IFOV 5312 (c). The portion of the FOV 5390 that can be commonly detected by the plurality of PS of a readout group (collectively 5210, including PS 5212 (a), 5212 (b), and 5212 (c)) is labeled 5310. Note that any type of PS 5212 may be implemented, including for example a single photodiode or a plurality of photodiodes. The different PS 5212 of a single readout group 5210 (and optionally even of the entire FPA 5290) may be essentially replicas of each other, but this is not necessarily the case, and different types of PS 5212 may optionally be implemented in a single FPA 5290, or even in a single readout group 5210. Different PS 5212 of a single readout group 5210, and optionally even different PS 5212 of the entire FPA 5290, may be sensitive to the same portion of the electromagnetic spectrum or to different portions thereof. Any one or more of the types of PS discussed elsewhere in this disclosure (e.g., above) may be implemented as PS 5212.
Note that, optionally, all PS 5212 of a single readout group 5210 are physically adjacent to each other (i.e., each PS 5212 of a readout group 5210 is physically adjacent to at least one other PS 5212 of the readout group 5210), so as to create at least one continuous path through adjacent PS 5212 between any two PS 5212 of the readout group 5210. However, non-contiguous readout groups may also be implemented (e.g., if some PS 5212 of the FPA 5290 are defective, if some PS 5212 of the FPA 5290 are unused (e.g., to save power), or for any other reason). If the FPA 5290 includes multiple readout groups 5210, the multiple readout groups 5210 may include the same number of PS 5212 (but not necessarily so), may include the same types of PS 5212 (but not necessarily so), or may be arranged in the same geometric configuration (e.g., in a 1x3 array, as shown in the examples of figs. 40A, 40B, and 40C).
The sensor 5200 includes at least one readout set 5240, the readout set 5240 including a plurality of readout circuits 5242. Note that any suitable type of readout circuitry, various of which are known in the art, may be implemented as readout circuit 5242 (or as readout circuitry for the other systems discussed in this disclosure). Examples include, but are not limited to, readout circuits that include (or alternatively consist of): a capacitor, an integrator, or a capacitive transimpedance amplifier. Each of the plurality of readout circuits 5242 in a single readout set 5240 is connected by a plurality of switches 5232 (collectively 5230) to the plurality of PS 5212 of the same readout group 5210 of the FPA 5290. A readout circuit 5242 reads signals from one or more PS 5212 connected to it and outputs data (e.g., in analog or digital form) indicating the light level to which the respective one or more PS 5212 are subjected. The output data may be provided to a processor, communicated to another system, stored in a memory module, or used in any other manner. The different readout circuits 5242 of a single readout set are connected to the various PS 5212 of the respective readout group 5210 and are operable to output an electrical signal indicative of the amount of light impinging on the plurality of PS 5212 of the readout group 5210 when the readout group 5210 is connected to the respective readout circuit 5242 via at least one of the plurality of switches 5230. Note that the plurality of switches 5232 can be implemented in any suitable switching technology, such as any combination of one or more transistors. The plurality of switches 5232 may be implemented as part of the FPA 5290, but need not be so. For example, some or all of the plurality of switches 5232 may be included in a readout die that is electrically (and optionally physically) connected to the FPA 5290. The readout circuits 5242 can likewise be implemented as part of the FPA 5290, but need not be.
For example, some or all of the plurality of readout circuits 5242 can be included in a readout wafer that is electrically (and optionally also physically) connected to the FPA 5290.
Further, the sensor 5200 includes at least one controller 5250, the controller 5250 being configured and operable to change a plurality of switch states of the plurality of switches 5230 such that different readout circuits 5242 of a readout set 5240 are connected to a readout group 5210 (i.e., to the plurality of PS 5212 of the readout group) at different times, exposing the different readout circuits 5242 to reflections of illumination light from objects at different distances from the sensor 5200. The illumination light may be emitted by a light source 5260 included in the sensor 5200 or in any EO system in which the sensor 5200 is implemented (e.g., a camera, a telescope, a spectrometer). The illumination light may also be emitted by another light source associated with the sensor 5200 (whether controlled by the sensor 5200 or by a controller common to both), or by any other light source.
The sensor 5200 further comprises a processor 5220, the processor 5220 being configured to obtain from the readout set 5240 the plurality of electrical signals indicative of the detection levels of the reflected light collected from the plurality of IFOVs of the plurality of PS 5212, and to determine from them depth information indicative of a distance of the object from the sensor 5200. For example, such an object may be a tower 5382 in the background of FOV 5390, or a tree 5384 in the foreground of FOV 5390. For example, the processor 5220 may implement the method 5500, or any of the techniques described above (e.g., with respect to figs. 36 and 37).
Fig. 38A, 38B and 38C illustrate the same sensor 5200 in different switch states of the readout set 5240, which is connected to a readout group 5210 comprising, in the illustrated example, three PS 5212 (a), 5212 (b), and 5212 (c). In fig. 38A, no readout circuit 5242 is connected to any PS 5212, in which case no readout is possible. In fig. 38B, a single readout circuit 5242 (a) is connected to all three PS 5212, enabling a signal indicative of the light impinging on all three PS 5212 to be read by that single readout circuit 5242. For example, at different times during a sampling frame, all of the PS 5212 can be connected sequentially to one readout circuit 5242 at a time, so that light is collected by all of the PS 5212 of the readout group 5210 at all times but is measured by different readout circuits 5242 at different times. An example of this is provided in graph 5410 of fig. 39. With reference to the term "frame", also discussed above, note that frames of different durations may be implemented. For example, at a frame rate of 60 Frames Per Second (FPS), each frame corresponds to 1/60 of a second. However, within one or more consecutive time spans (which may be separated in time by a duration of 1/60 of a second, in the given example), the frame detection duration may be significantly shorter (e.g., 1 to 100 microseconds).
In fig. 38C, a proper subgroup of the plurality of readout circuits, including readout circuits 5242 (b) and 5242 (c) in the illustrated example, is connected to all PS 5212 of readout group 5210, enabling signals indicative of light impinging on all three PS 5212 to be read by the plurality of readout circuits 5242. Connecting two readout circuits 5242 to a readout group 5210 is illustrated in graphs 5420 and 5430 of fig. 39. More than two readout circuits 5242 can optionally be connected to a readout group 5210, as required by an implementation. One example of an implementation in which multiple readout circuits 5242 are connected to a single readout group 5210 is the transition time between two different detection time windows of different detection signals (e.g., as discussed above with respect to figs. 36 and 37).
In other examples, only one readout circuit 5242 is connected to the PS 5212 at certain times, while at other times more than one readout circuit 5242 is connected in parallel to the PS 5212. Examples of this are provided in graphs 5420 and 5430 of fig. 39. In yet another example, different subsets of the plurality of readout circuits 5242 can be connected in parallel to the PS 5212 of the readout group 5210 at different times. Regarding all of these options, note that optionally there may be idle times during which no readout circuit 5242 is connected to any PS 5212 of the readout group 5210. Examples of this are provided in graphs 5440 and 5450 of fig. 39. Graph 5460 of fig. 39 illustrates a case in which different combinations of connections are implemented within a single frame: a single readout circuit 5242, multiple readout circuits 5242, and no readout circuit 5242 are connected to the readout group 5210 at different times during a detection duration of the sensor.
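A per-frame switching schedule of the kind graphs 5410-5460 of fig. 39 depict can be sketched as a simple table mapping time slices of the frame detection duration to the set of connected readout circuits. The times, ROC names, and helper below are illustrative, not values from the patent:

```python
schedule = [
    (0.0, 2.0, {"ROC1"}),          # single ROC connected (as in graph 5410)
    (2.0, 2.5, {"ROC1", "ROC2"}),  # overlap: two ROCs in parallel (graphs 5420/5430)
    (2.5, 4.5, {"ROC2"}),
    (4.5, 5.0, set()),             # idle time, no ROC connected (graphs 5440/5450)
    (5.0, 6.0, {"ROC3"}),
]

def connected_rocs(t_us):
    """Which ROCs are connected to the readout group at time t (microseconds)."""
    for start, end, rocs in schedule:
        if start <= t_us < end:
            return rocs
    return set()  # outside the frame detection duration

print(sorted(connected_rocs(2.2)))  # -> ['ROC1', 'ROC2']
```

Mixing single-ROC slices, parallel-ROC slices, and idle slices in one schedule corresponds to the combined case of graph 5460.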
Fig. 40A-40C illustrate an example sensor 5200 according to the presently disclosed subject matter. Optionally, the switching network 5230 includes switchable circuitry that enables individual readout circuits 5242 to be connected to individual PS 5212 at certain times and to multiple PS 5212 simultaneously at other times. In the illustrated example, in fig. 40A all three readout circuits are disconnected from the plurality of PS, and in fig. 40B readout circuit 5242 (ROC1) is connected to all three PS 5212 (a), 5212 (b), and 5212 (c), while the other two readout circuits are disconnected. In fig. 40C, readout circuits 5242 (ROC1), 5242 (ROC2), and 5242 (ROC3) are each connected to a single PS 5212. Note that the detection operating parameters (e.g., photodiode bias, amplification gain, etc.) may differ between the two detection states, for example to handle the different amounts of light collected by different numbers of PS 5212.
In addition, the sensor 5200 can also operate in other detection modes, providing a detection output that does not include depth information. For example, in some detection modes, the sensor 5200 can operate as a camera providing a 2D image, wherein different detection values indicate the amount of light reflected from a portion of the FOV over one (or more) detection durations. Note that such detection modes may involve active illumination of the FOV, but do not necessarily do so. In this mode, each ROC 5242 can be connected to a single PS 5212 or to an entire readout group 5210, and a mixture of both can be used (with some ROCs 5242 connected to individual PS 5212 while other ROCs 5242 are connected to readout groups 5210).
In many of the examples above, all PS 5212 of the readout group 5210 are connected to one or more ROCs 5242 during the entire measurement time. However, this is not necessarily so, and any other suitable switching scheme for connecting the PS 5212 of the readout group to the ROC(s) 5242 may be implemented. Optionally, in some such switching schemes implemented by the sensor 5200 (e.g., by the switching network 5230), the number of active ROCs 5242 of a readout set 5240 during a detection period (e.g., one frame duration, i.e., ROCs 5242 connected to PS 5212 and receiving detection data therefrom during part or all of the detection period) may be greater than the average number of active PS 5212 during the respective detection period. For example, a plurality of different ROCs 5242 (e.g., 2, 3, 4, 5, or 10 ROCs 5242) may be used to obtain and output a plurality of electrical signals indicative of different amounts of light detected by a single PS 5212 during different detection durations of a detection frame. For example, N different ROCs 5242 (e.g., 2, 3, 4, 5, or 10 ROCs 5242) may be used to obtain and output a plurality of electrical signals indicative of different amounts of light detected by M PS 5212 at a time (1 ≤ M ≤ N) during different detection durations of one detection frame. Alternatively, only a proper subset of the plurality of PS 5212 of a single readout group 5210 may be connected, during a single frame detection duration, to different combinations of ROCs that collectively include all of the ROCs 5242 of the readout set 5240 connected to the respective readout group 5210, wherein the number of ROCs 5242 in the readout set 5240 is greater than the number of PS 5212 in the subset. For example, such a switching scheme may be used to prevent saturation of a ROC 5242, or of components (e.g., capacitors) associated with the respective ROC 5242, during a detection period.
For example, if a processor of the sensor 5200 or of another EO system connected to the sensor 5200 determines that some or all of the ROCs 5242 were saturated during a previous detection duration (e.g., frame), it may divide the frame detection duration (e.g., 6 microseconds (μsec)) among multiple ROCs 5242 (e.g., three ROCs 5242) such that each of the respective ROCs 5242 is connected to a single PS (or to another suitable subset of a saturated readout group 5210) during a different part of the frame detection duration (e.g., each ROC 5242 is connected to a single PS 5212 during one third of the time, 2 microseconds). The switching scheme discussed in this paragraph may be implemented by the sensor 5200, which implements depth detection in any of the manners discussed above with respect to the sensor 5200.
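The anti-saturation division described above can be sketched as follows: the frame detection duration is split evenly among several ROCs so that each integrates for only a fraction of it. The helper name is an assumption; the numbers (6 μsec frame, three ROCs) are the example values from the text:

```python
def split_frame(frame_us, roc_ids):
    """Return (roc_id, start_us, end_us) time slices evenly covering the
    frame detection duration, one slice per readout circuit."""
    slice_us = frame_us / len(roc_ids)
    return [(roc, i * slice_us, (i + 1) * slice_us) for i, roc in enumerate(roc_ids)]

print(split_frame(6.0, ["ROC1", "ROC2", "ROC3"]))
# -> [('ROC1', 0.0, 2.0), ('ROC2', 2.0, 4.0), ('ROC3', 4.0, 6.0)]
```

Each ROC thus accumulates only a third of the frame's charge, keeping it (and any associated capacitor) below saturation.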
Fig. 41A and 41B illustrate a sensor 5200 having a plurality of readout sets 5240, each readout set 5240 being associated with a respective readout group 5210, according to examples of the presently disclosed subject matter. Note that while in the examples illustrated in figs. 41A and 41B each readout set 5240 is associated with a respective readout group 5210 (i.e., all PS 5212 of a single readout group 5210 are connectable to provide detection data only to the plurality of ROCs 5242 of a single readout set 5240, and vice versa), this is not necessarily so, and some (or all) PS 5212 may optionally be associated with more than one readout set 5240. Note that reference numerals 5210 and 5240 are not used in figs. 41A and 41B to avoid cluttering the drawings. As shown in fig. 41A, each readout circuit 5242 can optionally be associated with, and located physically proximate to, a respective PS 5212. As shown in fig. 41B, the PS 5212 may alternatively be co-located as a unified FPA 5290, and the plurality of readout circuits 5242 may be co-located on another portion of the sensor 5200 (labeled 5292), either on the same wafer (as shown), on another die made from another wafer (not shown), or in any other suitable arrangement.
Fig. 43A and 43B are diagrams illustrating a sensor 5200 according to other examples of the presently disclosed subject matter. As shown in fig. 43A and 43B, the sensor 5200 may optionally include optics 5280 for directing light from the FOV to the various PS 5212. Such optics may include, for example, lenses, mirrors (fixed or movable), prisms, filters, and the like. As shown in fig. 43B, the sensor 5200 may also include an active light source 5260, such as a laser or LED, controlled by the controller 5250, the processor 5220, or any other suitable control module. In this case, the sensor 5200 may include optics 5282 for directing the light of the light source 5260 (if implemented) to the FOV. Such optics 5282 may include, for example, lenses, mirrors (fixed or movable), prisms, filters, and the like. Alternatively, the sensor 5200 may be associated with an external light source (not shown), in which case the external light source may be controlled by the sensor 5200, or by an external control module exchanging illumination timing information with the sensor 5200. Although not illustrated, the sensor 5200 may include any other desired components, such as (but not limited to): (a) a memory module (e.g., for storing detection signals output by the active PS 5212 or by the readout circuits 5242, or detection information generated by the processor 5220 by processing the detection signals), (b) a power source (e.g., a battery, an AC power adapter, or a DC power adapter, powering the PS, the amplifiers, or any other component of the sensor 5200), and (c) a hard shell (or any other type of structural support).
Sensor 5200 is an example of a depth sensor operable to detect depth information of an object, comprising:
a. A focal plane array (e.g., 5290) comprising a plurality of PS (e.g., 5212), each PS being operable to detect light arriving from an IFOV (e.g., 5312) of the respective PS. The plurality of PS are arranged such that different PS point in different directions within a FOV of the sensor. The plurality of PS may be divided into a plurality of fixed readout groups (as discussed extensively above), but more complex relationships between the PS and the ROCs may also be implemented (e.g., if the FPA includes additional PS beyond the plurality of PS, optionally orders of magnitude more PS).
b. A plurality of readout circuits (e.g., ROCs 5242) of a readout set (e.g., 5240), each readout circuit connected by a plurality of switches (e.g., 5232) to a plurality of PS of a readout group (e.g., 5210) of the FPA and operable to output an electrical signal indicative of an amount of light impinging on the plurality of PS of the readout group when the readout group is connected to the respective ROC via at least one of the plurality of switches.
c. A controller (e.g., 5250) is operable to change a plurality of switch states of the plurality of switches such that different ROCs of the readout set are connected to the readout group at different times for exposing different ROCs to reflection of illumination light from a plurality of objects positioned at different distances from the sensor.
d. A processor (e.g., 5220) is operable to obtain the plurality of electrical signals from the readout set, to indicate a plurality of detection levels of reflected light collected from the plurality of IFOVs of a plurality of PS of the readout group, and to determine depth information for the object, to indicate a distance of the object from the sensor based on the plurality of electrical signals.
All of the variations, features, components, capabilities, characteristics, etc. discussed above with respect to sensor 5200, as well as any operable combination thereof, may be implemented in the depth sensor described in the preceding paragraphs. Likewise, all variations, features, components, capabilities, characteristics, etc. discussed in the following paragraphs with respect to the above-described depth sensor may be implemented, mutatis mutandis, in the sensor 5200.
Optionally, the depth sensor may include multiple ROCs of multiple readout sets connected to multiple PS of multiple readout groups of the FPA (e.g., as illustrated in fig. 41A and 41B). Although not necessarily so, all of the readout sets may be substantially similar to each other, e.g., having the same number of readout circuits and optionally also the same shape, size, etc. Although not necessarily so, all readout groups may be substantially similar to each other, e.g., having the same number of PS and optionally also the same shape, size, etc. In such a case, the controller of the depth sensor may be selectively operable in different switching modes of the plurality of switches, including at least:
a. A depth detection switching mode, in which different ROCs of each of the plurality of readout sets are connected to the respective readout groups at different times, and the outputs of the ROCs are used to determine the depths of objects in the FOV; and
b. An image detection switching mode in which a different ROC of each of the plurality of readout sets is coupled to at most one PS, and a plurality of outputs of the plurality of ROCs are used to generate a two-dimensional (2D) image of the plurality of objects in the FOV.
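The distinction between the two switching modes can be sketched as a mapping from ROCs to the PS they may be connected to. This is an illustrative sketch (function and identifier names are assumptions, not from the patent): in depth mode every ROC of the set is connectable to the whole readout group (each during its own time slot, handled elsewhere by the controller), while in image mode each ROC is paired with at most one PS.

```python
# Illustrative sketch of the two switching modes described above.
def switch_schedule(mode: str, ps_ids: list, roc_ids: list):
    """Return, per ROC, the list of PS it may be connected to in this mode."""
    if mode == "depth":
        # Every ROC sees the whole readout group; the controller connects
        # each ROC during a different time slot (not modeled here).
        return {roc: list(ps_ids) for roc in roc_ids}
    if mode == "image":
        # One-to-one pairing: each ROC is coupled to at most one PS,
        # enabling a full-resolution 2D readout.
        return {roc: [ps] for roc, ps in zip(roc_ids, ps_ids)}
    raise ValueError(f"unknown mode: {mode}")

image_map = switch_schedule("image", ["ps0", "ps1", "ps2"], ["roc0", "roc1", "roc2"])
depth_map = switch_schedule("depth", ["ps0", "ps1", "ps2"], ["roc0", "roc1", "roc2"])
```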
That is, the depth sensor (e.g., sensor 5200) may optionally be operable as an image sensor (e.g., a camera), optionally with a higher resolution (e.g., more data points when compared to a 3D depth model). Different detection switching modes may be implemented by the controller at different times, but alternatively different switching modes may be implemented simultaneously in different parts of the sensor. Fig. 44 is a diagram illustrating an example FPA 5290 in which a depth detection switching mode is implemented concurrently with an image detection switching mode, according to the presently disclosed subject matter. Most of the PS 5212 of the FPA 5290 are operated in the image detection switching mode by the controller 5250, while some PS 5212 of the readout groups 5210A, 5210B, 5210C and 5210D are operated in the depth detection switching mode. The depth data determined by the processor 5220 for each readout group may optionally also be used for adjacent PS 5212. Optionally, the controller of the depth sensor described above (e.g., controller 5250) is operable to control portions of the FPA in the depth detection switching mode and simultaneously control other portions of the FPA in the image detection switching mode.
Optionally, the controller of the depth sensor is operable to change a plurality of switch states of the plurality of switches such that a first electrical signal output by a first ROC of the readout set is indicative of a plurality of detection levels of reflected light collected from the plurality of IFOVs of the plurality of PS of the readout group during at least two time spans separated by a third time span during which a second ROC is connected to and disconnected from the plurality of PS of the readout group, wherein the processor is configured to resolve a plurality of depth ambiguities (depth ambiguities) of the first electrical signal based on a second electrical signal indicative of a plurality of detection levels of reflected light collected by the second ROC from the plurality of IFOVs of the plurality of PS of the readout group at least during the third time span.
Fig. 45A and 45B are diagrams illustrating switching mechanisms in which the same ROC is connected to the readout group at different times within the time of flight of an illumination pulse, according to examples of the presently disclosed subject matter. Curves 5766, 5768, and 5770 represent the responsiveness of a first ROC, a second ROC, and a third ROC to light impinging on the PS of the readout group at different times during the time of flight, which correspond to different distances from the FPA. It will be apparent to a person of ordinary skill in the art that the abscissa may span different ranges in different implementations (e.g., 0 m to 5 m, 0 m to 100 m, 100 m to 1 km, 0 light-seconds to 1 light-second, etc.), and thus any distance range may be matched, depending on the particular implementation. As can be seen from curve 5766, the first ROC is connected to the readout group during two different time spans and is disconnected from the readout group between those spans, during which the second ROC and the third ROC are connected to it. There are many combinations of distances for which the first ROC measures the same signal level; for example, two objects 5762 and 5764 located at distances D4 and D5 would return signals of the same level (labeled S1) as measured by the first ROC. Note that different objects may have different reflectivities. However, the signals measured by the second ROC and the third ROC (labeled S2 and S3, respectively) may be used to resolve the ambiguity, because the tuples of signal levels measured by the ROCs (three in the illustrated example) differ markedly between the two cases, e.g., tuple 5772 (corresponding to object 5762 located at distance D4 from the FPA) and tuple 5774 (corresponding to object 5764 located at distance D5 from the FPA).
Note that one or more ROCs may each be connected by the switches to the readout group during more than one non-contiguous time span, and suitable switching mechanisms may be designed to resolve ambiguity in such cases as well. Note that such switching mechanisms may be implemented as part of methods 5500, 5800, 5900, and 9100, for example. The use of discontinuous measurement durations, as illustrated by curve 5766, may be implemented, for example, to improve the accuracy of distance estimation for objects in the FOV.
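The ambiguity resolution of fig. 45A and 45B can be illustrated numerically. In this sketch the candidate response tuples and the measured values are hypothetical numbers standing in for tuples 5772 and 5774 (they are not taken from the figures): S1 alone is consistent with both distances D4 and D5, but the full (S1, S2, S3) tuple is not, so a nearest-tuple comparison picks out the correct distance.

```python
# Hypothetical normalized (S1, S2, S3) response tuples for two candidate
# distances; S1 is identical, so the first ROC alone cannot distinguish them.
candidates = {
    "D4": (0.6, 0.9, 0.1),   # stands in for tuple 5772 (object 5762)
    "D5": (0.6, 0.2, 0.8),   # stands in for tuple 5774 (object 5764)
}
measured = (0.6, 0.25, 0.75)  # hypothetical measured (S1, S2, S3)

def best_match(measured, candidates):
    """Pick the candidate distance whose tuple has the least squared error."""
    def sq_err(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda k: sq_err(candidates[k], measured))

resolved = best_match(measured, candidates)
```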
Optionally, the controller of the depth sensor is operable to trigger activation of a light source that emits light to the FOV and synchronize changes in the plurality of switch states of the plurality of switches with timing of the triggering. The light source may be part of the depth sensor (e.g., 5260) or external thereto, such as discussed above with respect to sensor 5200.
Alternatively, the plurality of PS of the depth sensor may be implemented on a first wafer, and the readout circuits of the readout set implemented on a second wafer different from the first wafer. The two wafers may be bonded to each other, or electrically connected in any other suitable manner. The switches that selectively connect the PS to the ROCs, and the controller, may be implemented on the first wafer, on the second wafer, or partly on each. Note that alternatively, the second wafer may be a stand-alone product, which enables an FPA designed for an image detector to be utilized as a depth sensor. For example, a sensor control integrated circuit is disclosed, comprising:
a. a plurality of ROCs of at least one readout set;
b. Electrical contacts for the PS of an external FPA (implemented on another wafer), to which the integrated circuit may or may not be permanently connected.
c. For each of the at least one readout set, a corresponding plurality of switches for selectively connecting different ROCs of the respective readout set, via associated electrical contacts, to the PS of a corresponding readout group (e.g., if more than one PS is included), wherein different ROCs of the respective readout set are connected by a controller to the associated electrical contacts at different times (and disconnected at other times). The plurality of ROCs may be similar to the ROCs 5242.
d. A controller operable to change a plurality of switch states of the plurality of switches such that different ROCs of the readout set are connected to the associated electrical contacts (and, in operation, to the respective readout groups via those contacts) at different times, to expose different ROCs to reflections of illumination light from objects at different distances from the sensor; and
e. A processor is operable to obtain the plurality of electrical signals from the readout set, to indicate a plurality of detection levels of reflected light collected from the plurality of IFOVs of a plurality of PS of the readout group, and to determine depth information for the object, to indicate a distance of the object from the sensor based on the plurality of electrical signals.
Any of the variations of the ROCs, switches, controllers, or processors discussed above with respect to sensor 5200 may also be applied, mutatis mutandis, to the sensor control integrated circuit of the preceding paragraphs.
Reading a combined signal from multiple PS can significantly reduce the resolution of the depth sensor (e.g., sensor 5200), depending on the manner in which the PS are divided into readout groups. For example, if each readout group includes PS in a 1x4 arrangement, a 2560Wx1440H sensor may only provide a 2560Wx360H resolution output. Optionally, to modify the angular resolution of the sensor in at least one axis, an aspheric lens may be included to refract light from the FOV before it reaches the PS. In particular, a cylindrical lens may be implemented to vary the angular resolution of the depth sensor along only one axis. Returning to the numerical example, if the native angular resolution of the 2560Wx1440H sensor is 0.01° on each axis (X and Y), providing a FOV of 25.6°x14.4°, a cylindrical lens may be included to reduce the vertical angular span of the FOV (e.g., by a factor of two, from 14.4° to 7.2°), thereby increasing the angular resolution of the depth sensor along that axis (e.g., from 0.04° to 0.02°). Generally, if the PS of the readout group are arranged along a first axis (e.g., a 1x4 arrangement, or the 2x3 configuration described above), the depth sensor may optionally comprise (or be connected to) an aspheric lens that focuses incident light from the FOV onto the FPA along the first axis to a greater extent than it focuses the incident light along a second axis perpendicular to the first axis. Note that any other suitable optical component (e.g., an aspherical mirror, a nonlinear prism) may be used in addition to (or instead of) the aspherical lens.
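The arithmetic behind the numerical example above can be checked directly. This sketch (function names are illustrative) reproduces the figures from the text: grouping the 1440 vertical PS into groups of four leaves 360 depth rows at 0.04° each, and halving the vertical FOV with the cylindrical lens restores 0.02° per row.

```python
# Arithmetic of the resolution example above (names are illustrative).
def grouped_rows(native_rows: int, group_size: int) -> int:
    """Vertical resolution after combining PS into readout groups."""
    return native_rows // group_size

def angular_resolution_deg(fov_deg: float, n_rows: int) -> float:
    """Angular extent covered by each output row."""
    return fov_deg / n_rows

rows = grouped_rows(1440, 4)                 # 360 depth rows from 1440 PS rows
coarse = angular_resolution_deg(14.4, rows)  # 0.04 deg per row, native FOV
fine = angular_resolution_deg(7.2, rows)     # 0.02 deg per row, halved FOV
```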
Note that although this is not necessarily the case, the depth sensor (e.g., sensor 5200) can implement any of the forms of the method 5500 described above.
With reference to the foregoing depth sensor, note that a switchable sensor having a similar architecture but a different processor may use the foregoing switching scheme for purposes other than depth detection. For example, such a switching scheme may be selectively implemented to evaluate the intensity of detected light that saturates one or more PS of the FPA, as discussed in more detail below. More generally, a switchable optical sensor is disclosed, comprising:
a. a Focal Plane Array (FPA) comprising a plurality of PS, each PS being operable to detect light arriving from an instantaneous field of view (IFOV) of the PS, wherein different PS are directed in different directions within a field of view of the sensor;
b. a plurality of ROCs of a readout set, each ROC connected by a plurality of switches to a plurality of PS of a readout group of the FPA and operable to output an electrical signal indicative of an amount of light impinging on the plurality of PS of the readout group when the readout group is connected to the respective ROC via at least one of the plurality of switches;
c. A controller operable to change a plurality of switch states of the plurality of switches such that different ROCs of the readout set are connected to the readout group at different times, to expose different ROCs to reflections of illumination light from objects located at different distances from the sensor; and
d. A processor configured to obtain the plurality of electrical signals from the readout set, to indicate a plurality of detection levels of reflected light collected from the plurality of IFOVs of the plurality of PS of the readout group, and to generate a 2D model of a plurality of objects in the FOV based on processing the plurality of electrical signals.
All variations, features, components, capabilities, characteristics, etc. discussed above with respect to sensor 5200 or the foregoing depth sensor, as well as any operable combination thereof, may be implemented, mutatis mutandis, in the switchable optical sensor described in the preceding paragraphs.
Fig. 46 illustrates a method 5800 for detecting depth information of an object in accordance with an example of the presently disclosed subject matter. With reference to the above example, the method 5800 may optionally be performed by the sensor 5200 and/or by the depth sensor described after the description of the sensor 5200.
Stage 5820 of method 5800 includes: connecting, during a first duration, a plurality of PS of a readout group, consisting of multiple PS of an FPA, to a first ROC of a sensor. The connection of stage 5820 is performed while keeping a second ROC and a third ROC disconnected from the PS of the readout group. Although not necessarily so, the PS may include a germanium (Ge) component that generates charge carriers as a result of light impinging on it (and the current thus generated may be measured to generate an output signal of the corresponding PS). Although not necessarily so, the PS may be sensitive to SWIR light (while being either sensitive or insensitive to visible light, whether natively or through the use of suitable filters).
Stage 5830 includes obtaining a first electrical signal from the first ROC indicative of an amount of illumination pulses reflected from the object to impinge together on a plurality of PS of the readout group during the first duration.
Stage 5840 includes connecting a plurality of PS of the readout group to the second ROC during a second duration while keeping the first ROC and the third ROC disconnected from the plurality of PS of the readout group. The second duration is different from the first duration and optionally does not overlap with it at all. Alternatively, the second duration may partially overlap with the first duration.
Stage 5850 includes obtaining a second electrical signal from a second ROC indicative of an amount of illumination pulses reflected from the object to impinge together on a plurality of PS of the readout group during the second duration.
Stage 5860 includes connecting the plurality of PS of the readout group to the third ROC during a third duration while keeping the first ROC and the second ROC disconnected from the plurality of PS of the readout group. The third duration is different from the first duration and the second duration and optionally does not overlap either of the two at all. Alternatively, the third duration may partially overlap with the first duration and/or the second duration.
Stage 5870 includes obtaining a third electrical signal from the third ROC indicative of an amount of illumination pulses reflected from the object to impinge collectively on a plurality of PS of the readout group during the third duration.
Note that various orders may be selected for stages 5820 to 5870, and the order shown is only one example. For example, optionally, all three electrical signals may be obtained (in stages 5830, 5850, and 5870) after all of the respective readout circuits have collected charge (i.e., after all of stages 5820, 5840, and 5860 are complete). Other orders may also be used.
Stage 5880 is performed after stages 5830, 5850, and 5870 and includes determining a distance of the object from a sensor that includes the FPA, based at least on the first electrical signal, the second electrical signal, and the third electrical signal. For example, the determination of the distance in stage 5880 may be performed based on the relative magnitudes of the obtained electrical signals. Optionally, method 5500 or portions thereof may be used in stage 5880.
Optionally, the first duration, the second duration, and the third duration may all occur during the ToF of a single illumination pulse or a group of pulses. Optionally, the method 5800 may begin with a stage 5810 of emitting a light pulse toward the FOV, in which case stages 5820, 5840, and 5860 may be timed based on the timing of the emission of the light pulse. Optionally, the method 5800 may include emitting an illumination pulse, wherein the first duration, the second duration, and the third duration occur within the time-of-flight duration of a round trip of the pulse over a detection distance (after being reflected from an object in the FOV). Different detection distances may be selected for different uses of the method 5800 (or of the aforementioned depth sensor). For example, the detection distance may be between 1 m to 10 m, 10 m to 50 m, 50 m to 500 m, 500 m to 10 km, 10 km to one light-second, or any other applicable range. In this case, the reflection of the pulse affects the level of at least two of the first electrical signal, the second electrical signal, and the third electrical signal.
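One possible way to turn the three gated signals of method 5800 into a distance (this is a hedged sketch of the general idea, not necessarily the patent's method 5500; function names and gate timings are illustrative) is a signal-weighted centroid of the gate-center times, converted to range via c/2 for the round trip:

```python
# Illustrative distance estimate from three gated signals (assumed approach).
C = 299_792_458.0  # speed of light, m/s

def estimate_distance(signals, gate_centers_s):
    """signals: (S1, S2, S3) from the three ROCs; gate_centers_s: the
    mid-times of the three connection durations, seconds after pulse emission.
    Returns an estimated object distance in meters, or None if no signal."""
    total = sum(signals)
    if total == 0:
        return None  # no detectable reflection in any gate
    # Signal-weighted mean round-trip time, then halve for the one-way range.
    tof = sum(s * t for s, t in zip(signals, gate_centers_s)) / total
    return C * tof / 2.0

# A pulse whose reflected energy splits evenly between gates centered at
# 600 ns and 800 ns implies a ~700 ns round trip, i.e. roughly 105 m.
d = estimate_distance((0.0, 1.0, 1.0), (400e-9, 600e-9, 800e-9))
```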
As discussed above with respect to the depth sensor (e.g., sensor 5200), alternatively, the same sensor used to determine depth may be used (e.g., simultaneously or at different times) to generate image information. Optionally, the method 5800 may include the following stages:
a. during a simultaneous detection duration: (a) Connecting the first ROC to a first PS of a plurality of PS of the readout group while the first PS is disconnected from the second ROC and the third ROC; (b) Connecting the second ROC to a second PS of the plurality of PS of the readout group while the second PS is disconnected from the first ROC and the third ROC; and (c) connecting the third ROC to a third PS of the plurality of PS of the readout group while the third PS is disconnected from the first ROC and the second ROC.
b. Obtaining a fourth electrical signal from the first ROC, the fourth electrical signal being indicative of an amount of illumination pulses reflected from the object to impinge on the first PS during the simultaneous detection duration; obtaining a fifth electrical signal from the second ROC, the fifth electrical signal being indicative of an amount of illumination pulses reflected from the object to impinge on the second PS during the simultaneous detection duration; and obtaining a sixth electrical signal from the third ROC, the sixth electrical signal being indicative of the amount of illumination pulses reflected from the object to impinge on the third PS during the simultaneous detection duration.
c. A 2D image of the FOV is generated for the simultaneous detection duration, with a color of a first pixel of the 2D image based on the fourth electrical signal, a color of a second pixel of the 2D image based on the fifth electrical signal, and a color of a third pixel of the 2D image based on the sixth electrical signal. Although not necessarily so, the color determined for each pixel may depend on only one of the fourth, fifth, and sixth electrical signals.
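The per-pixel mapping in the stages above can be sketched as follows. This is an illustrative sketch only (the 8-bit gray-level scaling and names are assumptions, not from the patent): in the simultaneous detection duration each ROC serves exactly one PS, so each pixel's level depends on a single electrical signal.

```python
# Sketch of the image-detection readout: one electrical signal per pixel.
def signals_to_pixels(signals, full_scale):
    """Map per-PS electrical signals to 8-bit gray levels, clipping at 255."""
    return [min(255, round(255 * s / full_scale)) for s in signals]

# Three PS of one readout group, read simultaneously through three ROCs,
# yield three independent pixels of the 2D image.
pixels = signals_to_pixels([0.2, 0.5, 1.0], full_scale=1.0)
```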
Optionally, the simultaneous detection duration (i.e., the duration during which simultaneous detection is performed for different PS of the aforementioned readout group) is later than the determination of the distance of the object. In another option, the simultaneous detection duration coincides with a determination of a distance to an object in the FOV based on a plurality of electrical signals of a readout set that includes ROCs other than the first ROC, the second ROC, and the third ROC, wherein during different times within the simultaneous detection duration the method comprises: changing a plurality of switch states of a plurality of switches connecting those ROCs to the PS of the respective readout group, such that different ROCs of the readout set are connected to the readout group at different times, to expose different ROCs to reflections of illumination light from objects located at different distances from the sensor.
Optionally, the third duration is later than the second duration, the second duration is later than the first duration, and the first electrical signal indicates an amount of illumination pulses reflected from the object to impinge together on a plurality of PS of the readout group during the first duration and a fourth duration that is later than the third duration, wherein the determining of the distance of the object comprises: resolving a range ambiguity of a first electrical signal based on at least one of the second electrical signal and the third electrical signal. Additional details regarding fig. 45A and 45B are provided above.
All variations, features, components, capabilities, characteristics, etc. discussed above with respect to sensor 5200, as well as any operable combination thereof, may be implemented in comparison to method 5800.
Fig. 47 illustrates a method, numbered 5900, for correcting saturated detection results in an FPA, in accordance with an example of the presently disclosed subject matter. Correction of saturated detection results may be implemented in the sensor 5200 or in any sensor that utilizes gated imaging with different detection windows, whether to assess distances of objects in the FOV or for any other purpose.
Stage 5910 of method 5900 includes obtaining a first electrical signal from a first ROC indicative of illumination pulse amounts reflected from an object within a FOV of the FPA to commonly impinge on a plurality of PS of a readout group during a first duration within a TOF of the respective illumination pulses, wherein during the first duration, a second ROC, a third ROC, and a fourth ROC are disconnected from the plurality of PS of the readout group.
Stage 5920 includes obtaining a second electrical signal from the second ROC indicative of illumination pulse amounts reflected from the object to commonly impinge on a plurality of PS of the readout group during a second duration within the TOF of the respective illumination pulses, wherein during the second duration the first ROC, the third ROC, and the fourth ROC are disconnected from the plurality of PS of the readout group.
Stage 5930 includes obtaining a third electrical signal from the third ROC indicative of illumination pulse amounts reflected from the object to commonly impinge on the plurality of PS of the readout group during a third duration within the TOF of the respective illumination pulses, wherein during the third duration the first, second, and fourth ROCs are disconnected from the plurality of PS of the readout group.
Stage 5940 includes obtaining a fourth electrical signal from the fourth ROC indicative of illumination pulse amounts reflected from the object to commonly impinge on the plurality of PS of the readout group during a fourth duration within the TOF of the respective illumination pulses, wherein during the fourth duration the first ROC, the second ROC, and the third ROC are disconnected from the plurality of PS of the readout group.
Stage 5950 includes: finding, based on a similarity criterion, a matching tuple within a pre-existing collection of distance-associated detection-level tuples. For example, a memory module may store a plurality of tuples corresponding to relative magnitudes of detection signals under different detection schedules (e.g., different detection durations). For example, tuple T1 may indicate the relative magnitudes of detection signals collected from an object at 1 m from the FPA, tuple T54 may indicate the relative magnitudes of detection signals collected from an object at 72 m from the FPA, and tuple T429 may indicate the relative magnitudes of detection signals collected from an object at 187 m from the FPA. Such a collection of tuples (or any other representation of the data, such as any suitable form of direction-dependent data structure (DADS)) may be referred to as a "dictionary" or "reference database", and may include any number of reference tuples (e.g., about 10, about 100, about 1,000, about 10,000). Any suitable similarity criterion (e.g., least mean square, minimum average error) may be used. Stage 5950 may include comparing detected amplitudes, corresponding to the amplitudes of an electrical signal group consisting of the first electrical signal, the second electrical signal, the third electrical signal, and the fourth electrical signal, with different tuples of the reference database, and selecting the matching tuple based on a result of the comparison.
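A sketch of the dictionary lookup of stage 5950 follows. All names, distances, and tuple values are illustrative assumptions (not from the patent); since object reflectivity scales all four signals together, each tuple is normalized before the least-mean-square comparison, so the match depends only on relative magnitudes as the text describes.

```python
# Hypothetical sketch of stage 5950: match measured amplitudes against a
# "dictionary" of distance-associated detection-level tuples.
def normalize(t):
    """Scale a tuple so its maximum is 1, removing the reflectivity factor."""
    m = max(t)
    return tuple(x / m for x in t)

def find_matching_tuple(measured, dictionary):
    """dictionary: {distance_m: (a1, a2, a3, a4)} reference tuples.
    Returns the distance whose normalized tuple best matches (least squares)."""
    nm = normalize(measured)
    def err(ref):
        return sum((a - b) ** 2 for a, b in zip(nm, normalize(ref)))
    return min(dictionary, key=lambda d: err(dictionary[d]))

# Tiny illustrative dictionary echoing the T1/T54/T429 example distances.
dictionary = {
    1:   (0.9, 1.0, 0.1, 0.0),
    72:  (0.1, 0.8, 1.0, 0.3),
    187: (0.0, 0.1, 0.9, 1.0),
}
# A measured tuple that is (roughly) a scaled copy of the 72 m reference.
match = find_matching_tuple((0.05, 0.42, 0.5, 0.15), dictionary)
```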
Stage 5960 includes identifying that an electrical signal in the group of electrical signals is saturated (i.e., at least one PS of the plurality of PS is saturated during the measurement that caused the saturated signal). Different ways may be used to identify a saturated signal. Such ways include, for example, identifying saturation based on a plurality of differences between the matching tuple and the plurality of obtained signals, identifying saturation based on the magnitudes of the plurality of obtained signals and reference information indicative of a plurality of saturation levels, and the like.
Stage 5970 includes determining a corrected detection level based on the matching tuple and at least one electrical signal in the group of electrical signals, the corrected detection level corresponding to the saturated electrical signal.
Optionally, stage 5970 may include determining the corrected detection level based on a comparison of detected amplitudes, corresponding to the amplitudes of the electrical signal group consisting of the first electrical signal, the second electrical signal, the third electrical signal, and the fourth electrical signal, with the matching tuple.
Fig. 48 illustrates correcting multiple saturation detection results based on multiple detection signals that differ in time in accordance with an example of the presently disclosed subject matter.
Diagram 5602 illustrates a group of four electrical signals (labeled S1, S2, S3, and S4). Diagram 5604 illustrates the matching tuple, which matches the detection results of diagram 5602, scaled to fit those detection results. A comparison of the detection results of diagram 5602 with the scaled matching tuple of diagram 5604 is shown in diagram 5606, where the difference resulting from the saturation cut-off is labeled 5610. Diagram 5608 schematically illustrates the correction of the detection result to more accurately reflect the amount of light arriving from the object before it was capped due to saturation (of a capacitor, amplifier, or any other component or components of the detection path of the sensor).
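The scale-and-replace correction of diagrams 5602 to 5608 can be sketched numerically. All values, the saturation threshold, and the least-squares scaling are illustrative assumptions (the patent does not specify this exact procedure): the matching reference tuple is scaled to fit only the unsaturated signals, and the clipped signal is then replaced by the scaled reference value.

```python
# Sketch of the saturation correction illustrated in fig. 48 (values assumed).
SAT_LEVEL = 1.0  # full-scale reading at which a signal is considered clipped

def correct_saturation(measured, reference):
    """Scale `reference` to fit the unsaturated entries of `measured`
    (least squares), then replace any clipped entry by its scaled value."""
    ok = [i for i, s in enumerate(measured) if s < SAT_LEVEL]
    scale = (sum(measured[i] * reference[i] for i in ok)
             / sum(reference[i] ** 2 for i in ok))
    return [scale * r if m >= SAT_LEVEL else m
            for m, r in zip(measured, reference)]

# S2 was clipped at 1.0; the scaled reference tuple restores its true level.
corrected = correct_saturation([0.2, 1.0, 0.6, 0.1], [0.1, 0.75, 0.3, 0.05])
```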
Note that while four detection signals are discussed with respect to fig. 48 and method 5900, a larger or smaller number of detection signals may be used. It should also be noted that even though method 5900 discusses detection signals obtained by multiple ROCs, it may also be applied if the detection signals corresponding to the different detection ranges are collected otherwise (e.g., by a standard gated imaging sensor). It is also noted that the timing duration corresponding to each electrical signal is not necessarily continuous (e.g., similar to the discussion regarding fig. 45A and 45B).
Referring to method 5900, optionally method 5900 may further comprise determining a distance of the object from the FPA based on the matching tuple. Optionally, the method 5900 may further include: a 3D model of the FOV is generated, including a 3D point in the 3D model, indicating a direction of an instantaneous FOV of the readout group, a distance of the object, and a color determined based on the correction detection level.
Optionally, the first electrical signal, the second electrical signal, the third electrical signal, and the fourth electrical signal all represent the amount of light reflected from the object by the same illumination pulse.
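By way of a non-limiting illustration, the correction depicted in Fig. 48 can be sketched as follows. The function name, the normalized matching tuple, and the saturation level below are illustrative assumptions and not part of the disclosure:

```python
def correct_saturated(signals, match_tuple, sat_level):
    """Correct saturated detection values by scaling a matching
    reference tuple to the unsaturated signals (cf. Fig. 48).

    signals     -- measured amplitudes (e.g., S1..S4), capped at sat_level
    match_tuple -- relative amplitudes of the matching tuple
    sat_level   -- amplitude at which the detection path saturates
    """
    # Estimate the scale factor from the unsaturated samples only.
    pairs = [(s, m) for s, m in zip(signals, match_tuple) if s < sat_level]
    scale = sum(s for s, _ in pairs) / sum(m for _, m in pairs)
    # Replace each saturated sample with its scaled reference value.
    return [s if s < sat_level else scale * m
            for s, m in zip(signals, match_tuple)]

# Hypothetical example: the third sample is capped by saturation.
corrected = correct_saturated([40.0, 80.0, 100.0, 30.0],
                              [0.4, 0.8, 1.5, 0.3], 100.0)
```

In this sketch a sample equal to `sat_level` is treated as saturated, and its scaled reference value stands in for the clipped amplitude, in the spirit of graph 5608.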
Fig. 49 illustrates a method, numbered 9100, for identifying materials of objects based on detection of an SEI system, according to an example of the presently disclosed subject matter. Referring to the example set forth with respect to the previous figures, method 9100 may be performed by sensor 5200, by a system including sensor 5200, or by any other system described above that is capable of gated imaging.
Stage 9110 of method 9100 includes: obtaining a plurality of detection signals indicative of amounts of light collected from an instantaneous FOV within a FOV of the SEI system, captured by at least one PS of the SEI system at different times, each detection signal of the plurality of detection signals being indicative of SWIR illumination levels reflected from a different range of distances within the instantaneous FOV.
Stage 9120 includes processing the plurality of detection signals to determine a distance to an object within the FOV. Various methods of determining distance are discussed above. For example, stage 9120 may include any combination of one or more stages of method 5800, method 5900, and method 5500.
Stage 9130 includes determining a first reflectivity of illumination of a first SWIR range by the object based on an illumination intensity emitted by the SEI toward the object in the first SWIR range, a plurality of detection levels of illumination light reflected from the object in the first SWIR range, and the distance.
Note that the illumination of stage 9130 may include one or more of the pulses used for determining the distance in stage 9120, but this is not required, and other illumination (even by another light source, in some cases) may be used. Optionally, the illumination of stage 9130 is emitted by the same light source whose light is used to determine the distance. Determining the first reflectivity requires information about the emission intensity and the propagation distance, since the intensity of the reflected signal depends on the propagation distance (e.g., ∝ 1/R²). The emission amplitude, calibrated for distance (e.g., transmission_intensity/R²), is compared with the detected amplitude to indicate the reflectivity with which the object reflects the light.
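As a hedged sketch of this calibration (the names `detected`, `tx_intensity`, and the system constant `k` are illustrative; a real system would also fold in optics, atmospheric losses, and detector gain):

```python
def estimate_reflectivity(detected, tx_intensity, distance_m, k=1.0):
    """Estimate object reflectivity from a detected amplitude.

    Reflected intensity falls off roughly as 1/R^2, so the expected
    return from an ideal reflector is k * tx_intensity / R^2; the
    ratio of the detected amplitude to this expectation indicates how
    strongly the object reflects the emitted light.
    """
    expected = k * tx_intensity / (distance_m ** 2)
    return detected / expected

# A hypothetical target at 50 m returning 0.2 units of a 1000-unit emission
r = estimate_reflectivity(detected=0.2, tx_intensity=1000.0, distance_m=50.0)
```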
Stage 9140 includes: determining material composition information based on the first reflectivity, the material composition information indicating at least one material from which the object is made. Note that different materials differ significantly in reflectivity in the SWIR portion of the electromagnetic spectrum. In general, information additional to that discussed above with respect to method 9100 may also be used to determine the material composition.
For example, method 9100 can further comprise: determining a second reflectivity of the object for illumination in a second SWIR range based on the intensity of illumination emitted by the SEI toward the object in the second SWIR range, detection levels of illumination light reflected from the object in the second SWIR range, and the distance; wherein the determining comprises: determining the material composition information based on the first reflectivity and the second reflectivity. Reflectivity levels of the object in three or more different portions of the SWIR part of the electromagnetic spectrum may also be used, each reflectivity level being calculated based on the distance determined in stage 9120.
Optionally, the method 9100 can include determining a plurality of reflectivities associated with a plurality of different polarizations for the object, the plurality of reflectivities determined based on the determined distance and the detected reflected illumination through the different polarized filters.
Optionally, the determining of the first reflectivity includes: compensating for a plurality of DC levels accumulated during the collecting of the plurality of detection signals caused by germanium (Ge) within the at least one PS, wherein the compensating comprises: different degrees of DC compensation are applied to the plurality of detection signals detected by different PS of the at least one FPA.
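A minimal sketch of such per-photosite DC compensation, assuming each exposed photosite is paired with a set of shielded reference photosites whose dark-current behavior matches its own (the dictionary layout and photosite identifiers are illustrative assumptions):

```python
def compensate_dark_current(detections, references):
    """Subtract the mean dark level of matched shielded photosites
    from each exposed photosite's detection value.

    detections -- {photosite_id: raw detection value for the frame}
    references -- {photosite_id: dark values measured in the same frame
                   by the shielded photosites matched to that photosite}
    """
    return {ps: value - sum(references[ps]) / len(references[ps])
            for ps, value in detections.items()}

# Two photosites with different matched dark-current references
out = compensate_dark_current({"ps1": 110.0, "ps2": 95.0},
                              {"ps1": [10.0, 12.0], "ps2": [4.0, 6.0]})
```

Applying different reference sets to different photosites mirrors the different degrees of DC compensation mentioned above.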
The material composition determined using method 9100 can be used for many different purposes. For example, if method 9100 is implemented by an EO system of a vehicle (e.g., a truck, an autonomous vehicle), it may be used to distinguish liquid water from ice (particularly so-called "black ice"). The distinction between these two materials can lead to very different driving decisions (e.g., speed, tire pressure). Optionally, method 9100 can include determining a first instance of material composition for a first object, the first object comprising liquid water; and determining a second instance of material composition for a second object, the second object comprising ice.
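A toy classifier along these lines can be sketched as follows. The band choices, the ratio test, and the threshold are purely illustrative assumptions, not values from the disclosure:

```python
def classify_surface(refl_band1, refl_band2, ratio_threshold=0.6):
    """Toy discriminator between liquid water and ice based on the
    ratio of distance-corrected reflectivities in two SWIR bands.

    Liquid water and ice absorb differently across SWIR wavelengths,
    so the band ratio can separate them; the threshold used here is an
    arbitrary placeholder, not a calibrated value.
    """
    ratio = refl_band1 / refl_band2
    return "liquid water" if ratio < ratio_threshold else "ice"
```

In practice the two reflectivities would each be computed from the distance determined in stage 9120, as described above.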
Some of the foregoing method stages may also be implemented in a computer program for execution on a computer system, including at least portions of code for performing the steps of the related method when run on a programmable apparatus, such as a computer system, or to enable a programmable apparatus to perform the functions of a device or system of the present disclosure. Such a method may also be implemented in a computer program for running on a computer system, comprising at least code portions for causing a computer to perform the steps of a method of the present disclosure.
A computer program is a list of instructions, such as a particular application program and/or an operating system. The computer program may for example comprise one or more of the following: a subroutine, a function, a procedure, a method, an implementation, an executable application, an applet, a servlet, a source code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
The computer program may be stored internally on a non-transitory computer readable medium. All or some of the computer program may be provided on computer readable media permanently, removably, or remotely coupled to an information processing system. The computer readable medium may include, for example and without limitation, any of the following: magnetic storage media including disk and tape storage media; optical storage media such as optical disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile storage media including semiconductor-based memory units such as flash memory, EEPROM, EPROM, and ROM; ferromagnetic digital memories; MRAM; and volatile storage media including registers, buffers or caches, main memory, RAM, etc.
A computer process typically includes an executing (running) program or a portion of a program, current program values and state information, and is used by the operating system to manage the execution of the process. An Operating System (OS) is software that manages the sharing of resources of a computer and provides programmers with an interface that is used to access the resources. An operating system processes system data and user input and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
The computer system may, for example, include at least one processing unit, associated memory, and a plurality of input/output (I/O) devices. When executing the computer program, the computer system processes information of the computer program and generates result output information via a number of I/O devices.
The connections discussed herein may be any type of connection suitable for transmitting signals from or to various nodes, units or devices, e.g., via intervening devices. Thus, unless otherwise indicated or stated, the connections may be, for example, direct connections or indirect connections. The connections may be illustrated or described with reference to a single connection, multiple connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example: separate unidirectional connections may be used instead of bidirectional connections and vice versa. Also, multiple connections may be replaced with a single connection that transmits multiple signals serially or in a time multiplexed (time multiplexed) manner. Likewise, individual connections carrying multiple signals may be separated into various different connections carrying subsets of these signals. Thus, there are many options for transmitting signals.
Alternatively, the illustrated examples may be implemented as circuits on a single integrated circuit or within the same device. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner. Alternatively, suitable portions of the methods may be implemented as soft or code representations of physical circuitry or may be converted to logical representations of physical circuitry, such as in a hardware description language of any appropriate type.
Other modifications, variations, and alternatives are also possible. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. It will be understood that the above-described embodiments are cited by way of example only, and that various features thereof and combinations of these features may be altered and modified. While various embodiments have been shown and described, it should be understood that there is no intent to limit the invention by such disclosure, but rather, is intended to cover all modifications and alternate constructions falling within the scope of the invention, as defined in the appended claims.
In the claims or specification of the present invention, adjectives such as "substantially" and "about" modify a condition or relational characteristic of one or more features of an embodiment, unless otherwise indicated, are understood to mean that the condition or characteristic is defined to be within an acceptable tolerance range for operation of the embodiment for a contemplated application. It should be understood that where the claims or specification refer to "a" or "an" element, such reference should not be construed as having only one of the element present.
All patent applications, white papers, and other publicly available data published by the assignee of the present invention and/or TriEye Ltd. of Israel are incorporated herein by reference in their entirety. The references mentioned herein are not admitted to be prior art.

Claims (19)

1. A method of generating an image from an array of photodetectors, characterized by: comprising the following steps:
obtaining a plurality of detection values for different photosites measured during a first frame duration from the photodetector array, the photodetector array including a plurality of replicated photosites, the plurality of detection values comprising:
A first detection value for a first photosite indicative of an amount of light impinging on the first photosite from a field of view during the first frame duration;
a second detection value for a second photosite indicative of an amount of light impinging on the second photosite from the field of view during the first frame duration;
a third detection value for a third photosite indicative of an amount of light impinging on the third photosite from the field of view during the first frame duration;
a fourth detection value for each of the at least one fourth photosite, measured while the respective fourth photosite is shielded from ambient illumination; a kind of electronic device with high-pressure air-conditioning system
A fifth detection value for each of the at least one fifth photosite, measured while the corresponding fifth photosite is obscured from ambient illumination;
determining a first photosite output value based on subtracting an average of the at least one fourth detection value from the first detection value;
determining a second photosite output value based on subtracting an average of the at least one fifth detection value from the second detection value;
Determining a third photosite output value based on subtracting an average of the at least one fourth detection value from the third detection value; and
A first frame image is generated based at least on the first photosite output value, the second photosite output value, and the third photosite output value.
2. The method according to claim 1, characterized in that: most photosites of the photodetector array, including the first photosite, the second photosite, the third photosite, the fourth photosite, and the fifth photosite, are copies of one another.
3. The method according to claim 1, characterized in that: the fourth detection value and the fifth detection value are measured during the first frame duration.
4. The method according to claim 1, characterized in that: the fourth detection value and the fifth detection value are measured when the first photosite, the second photosite, and the third photosite are exposed to ambient light.
5. The method according to claim 1, characterized in that: the at least one fourth photosite has a dark current thermal behavior equivalent to that of the first photosite and the third photosite, and wherein the at least one fifth photosite has a different dark current thermal behavior, equivalent to that of the second photosite.
6. The method according to claim 1, characterized in that: further comprises:
determining a plurality of first photosite output values for a plurality of additional frames based on subtracting an average of the at least one fourth detection value for a respective frame from the first detection value for the respective frame;
subtracting an average of the at least one fifth detection value of the respective frame from the second detection value of the respective frame to determine a plurality of second photosite output values for the plurality of additional frames;
subtracting an average of the at least one fourth detection value of the respective frame from the third detection value of the respective frame to determine a plurality of third photosite output values for the plurality of additional frames; a kind of electronic device with high-pressure air-conditioning system
Generating a frame image for each additional frame of the plurality of additional frames based at least on the first photosite output value, the second photosite output value, and the third photosite output value of the respective frame,
wherein each additional frame of the plurality of additional frames is captured at a different temperature of the photodetector array, wherein a first temperature of the plurality of different temperatures is at least 20 °C higher than a second temperature of the plurality of different temperatures.
7. The method according to claim 1, characterized in that: prior to making the determination of the plurality of output values, a matching model is obtained that includes, for each of a majority of photosites exposed to field of view illumination during the first frame, at least one matching reference photosite that matches the corresponding photosite based on a detected value of the corresponding photosite that was previously measured while the corresponding photosite was obscured from ambient illumination.
8. The method according to claim 7, wherein: the matching model remains constant for at least one week, and wherein a plurality of different frame images are generated based on the matching model on different days of the week.
9. An electro-optic system operable to generate a plurality of images, characterized by: comprising the following steps:
a photodetector array comprising a plurality of photosites, each photosite operable to output a detection value indicative of an amount of light impinging on the respective photosite during a detection duration and a level of dark current generated by the photosite during the detection duration;
A shutter for shielding a subgroup of the plurality of photosites from ambient illumination for at least a first frame duration; and
A processor operable to:
obtaining a plurality of detection values for a plurality of different photosites of the photodetector array measured during the first frame duration, the plurality of obtained detection values comprising:
a first detection value for a first photosite indicative of an amount of light impinging on the first photosite from a field of view during the first frame duration;
a second detection value for a second photosite indicative of an amount of light impinging on the second photosite from the field of view during the first frame duration;
a third detection value for a third photosite indicative of an amount of light impinging on the third photosite from the field of view during the first frame duration;
a fourth detection value for each of the at least one fourth photosite, measured while the respective fourth photosite is shielded from ambient illumination; a kind of electronic device with high-pressure air-conditioning system
A fifth detection value for each of the at least one fifth photosite, measured while the corresponding fifth photosite is obscured from ambient illumination;
Determining a first photosite output value based on subtracting an average of the at least one fourth detection value from the first detection value;
determining a second photosite output value based on subtracting an average of the at least one fifth detection value from the second detection value;
determining a third photosite output value based on subtracting an average of the at least one fourth detection value from the third detection value; a kind of electronic device with high-pressure air-conditioning system
A first frame image is generated based at least on the first photosite output value, the second photosite output value, and the third photosite output value.
10. An electro-optic system as claimed in claim 9, wherein: the majority of photosites of the photodetector array are copies of each other, including the first photosite, the second photosite, the third photosite, the fourth photosite, and the fifth photosite.
11. An electro-optic system as claimed in claim 9, wherein: the fourth detection value and the fifth detection value are measured during the first frame duration.
12. An electro-optic system as claimed in claim 9, wherein: the fourth detection value and the fifth detection value are measured when the first photosite, the second photosite, and the third photosite are exposed to ambient light.
13. An electro-optic system as claimed in claim 9, wherein: the at least one fourth photosite has a dark current thermal behavior equivalent to that of the first photosite and the third photosite, and wherein the at least one fifth photosite has a different dark current thermal behavior, equivalent to that of the second photosite.
14. An electro-optic system as claimed in claim 13, wherein: the processor is further operable to:
determining a plurality of first photosite output values for a plurality of additional frames based on subtracting an average of the at least one fourth detection value for a respective frame from the first detection value for the respective frame;
subtracting an average of the at least one fifth detection value of the respective frame from the second detection value of the respective frame to determine a plurality of second photosite output values for the plurality of additional frames;
subtracting an average of the at least one fourth detection value of the respective frame from the third detection value of the respective frame to determine a plurality of third photosite output values for the plurality of additional frames; a kind of electronic device with high-pressure air-conditioning system
Generating a frame image for each additional frame of the plurality of additional frames based at least on the first photosite output value, the second photosite output value, and the third photosite output value of the respective frame,
wherein each additional frame of the plurality of additional frames is captured at a different temperature of the photodetector array, wherein a first temperature of the plurality of different temperatures is at least 20 °C higher than a second temperature of the plurality of different temperatures.
15. An electro-optic system as claimed in claim 9, wherein: also included is a memory module for storing a matching model including, for each of a majority of photosites exposed to field of view illumination during the first frame, at least one matching reference photosite that matches the corresponding photosite based on a detected value of the corresponding photosite previously measured while the corresponding photosite was obscured from ambient illumination, and wherein the processor is operable to determine the first photosite output value, the second photosite output value, and the third photosite output value in further response to the matching model.
16. An electro-optic system as claimed in claim 15, wherein: the matching model remains constant for at least one week, and wherein a plurality of different frame images are generated based on the matching model on different days of the week.
17. An electro-optic system as claimed in claim 15, wherein: the matching model includes matches for at least 10,000 matched photosites, each matched photosite of the plurality of matched photosites being associated with a selected photosite of a group of shielded photosites whose number is less than 1% of the number of matched photosites.
18. A non-transitory computer-readable medium for generating an image based on detection by a photodetector array, having a plurality of instructions stored thereon, characterized by: when the plurality of instructions are executed on a processor, the following steps are performed:
obtaining a plurality of detection values for different photosites measured during a first frame duration from a photodetector array, the photodetector array comprising a plurality of replicated photosites, the plurality of detection values comprising:
A first detection value for a first photosite indicative of an amount of light impinging on the first photosite from a field of view during the first frame duration;
a second detection value for a second photosite indicative of an amount of light impinging on the second photosite from the field of view during the first frame duration;
a third detection value for a third photosite indicative of an amount of light impinging on the third photosite from the field of view during the first frame duration;
a fourth detection value for each of the at least one fourth photosite, measured while the respective fourth photosite is shielded from ambient illumination; a kind of electronic device with high-pressure air-conditioning system
A fifth detection value for each of the at least one fifth photosite, measured while the corresponding fifth photosite is obscured from ambient illumination;
determining a first photosite output value based on subtracting an average of the at least one fourth detection value from the first detection value;
determining a second photosite output value based on subtracting an average of the at least one fifth detection value from the second detection value;
Determining a third photosite output value based on subtracting an average of the at least one fourth detection value from the third detection value; and
A first frame image is generated based at least on the first photosite output value, the second photosite output value, and the third photosite output value.
19. The non-transitory computer-readable medium of claim 18, wherein: the medium further comprises a plurality of instructions for:
determining a plurality of first photosite output values for a plurality of additional frames based on subtracting an average of the at least one fourth detection value of a respective frame from the first detection value of the respective frame,
determining a plurality of second photosite output values for the plurality of additional frames based on subtracting an average of the at least one fifth detection value of the respective frame from the second detection value of the respective frame,
determining a plurality of third photosite output values for the plurality of additional frames based on subtracting an average of the at least one fourth detection value of the respective frame from the third detection value of the respective frame, and
Generating a frame image for each additional frame of the plurality of additional frames based at least on the first photosite output value, the second photosite output value, and the third photosite output value of the respective frame,
Wherein each additional frame of the plurality of additional frames is captured at a different temperature of the photodetector array, wherein a first temperature of the plurality of different temperatures is at least 20 °C higher than a second temperature of the plurality of different temperatures.
CN202180086480.6A 2020-12-26 2021-12-25 System, method and computer program product for generating depth image based on short wave infrared detection information Active CN116745638B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311779387.0A CN117768801A (en) 2020-12-26 2021-12-25 Method and system for generating scene depth image and method for identifying object material
CN202311781352.0A CN117768794A (en) 2020-12-26 2021-12-25 Sensor, saturation detection result correction method and object depth information detection method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/130,646 2020-12-26
US202163194977P 2021-05-29 2021-05-29
US63/194,977 2021-05-29
PCT/IB2021/062314 WO2022137217A1 (en) 2020-12-26 2021-12-25 Systems, methods and computer program products for generating depth images based on short-wave infrared detection information

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202311781352.0A Division CN117768794A (en) 2020-12-26 2021-12-25 Sensor, saturation detection result correction method and object depth information detection method
CN202311779387.0A Division CN117768801A (en) 2020-12-26 2021-12-25 Method and system for generating scene depth image and method for identifying object material

Publications (2)

Publication Number Publication Date
CN116745638A CN116745638A (en) 2023-09-12
CN116745638B true CN116745638B (en) 2024-01-12

Family

ID=87543558

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202180079305.4A Pending CN116583959A (en) 2020-11-27 2021-11-27 Method and system for infrared sensing
CN202311779387.0A Pending CN117768801A (en) 2020-12-26 2021-12-25 Method and system for generating scene depth image and method for identifying object material
CN202180086480.6A Active CN116745638B (en) 2020-12-26 2021-12-25 System, method and computer program product for generating depth image based on short wave infrared detection information
CN202311781352.0A Pending CN117768794A (en) 2020-12-26 2021-12-25 Sensor, saturation detection result correction method and object depth information detection method

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202180079305.4A Pending CN116583959A (en) 2020-11-27 2021-11-27 Method and system for infrared sensing
CN202311779387.0A Pending CN117768801A (en) 2020-12-26 2021-12-25 Method and system for generating scene depth image and method for identifying object material

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311781352.0A Pending CN117768794A (en) 2020-12-26 2021-12-25 Sensor, saturation detection result correction method and object depth information detection method

Country Status (1)

Country Link
CN (4) CN116583959A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105916462A (en) * 2013-11-21 2016-08-31 埃尔比特系统公司 A medical optical tracking system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4161910B2 (en) * 2004-01-28 2008-10-08 株式会社デンソー Distance image data generation device, generation method, and program
US8823934B2 (en) * 2009-03-27 2014-09-02 Brightex Bio-Photonics Llc Methods and systems for imaging and modeling skin using polarized lighting
KR101565969B1 (en) * 2009-09-01 2015-11-05 삼성전자주식회사 Method and device for estimating depth information and signal processing apparatus having the device
JP2012249237A (en) * 2011-05-31 2012-12-13 Kyocera Document Solutions Inc Image reading apparatus and image forming apparatus provided with the same
US9897688B2 (en) * 2013-11-30 2018-02-20 Bae Systems Information And Electronic Systems Integration Inc. Laser detection and image fusion system and method
CN114942454A (en) * 2019-03-08 2022-08-26 欧司朗股份有限公司 Optical package for a LIDAR sensor system and LIDAR sensor system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105916462A (en) * 2013-11-21 2016-08-31 埃尔比特系统公司 A medical optical tracking system

Also Published As

Publication number Publication date
CN116583959A (en) 2023-08-11
CN117768801A (en) 2024-03-26
CN117768794A (en) 2024-03-26
CN116745638A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
US11810990B2 (en) Electro-optical systems, methods and computer program products for image generation
US11665447B2 (en) Systems and methods for compensating for dark current in a photodetecting device
US11606515B2 (en) Methods and systems for active SWIR imaging using germanium receivers
CN116745638B (en) System, method and computer program product for generating depth image based on short wave infrared detection information
TWI805152B (en) Method, electrooptical system, and non-transitory computer-readable medium for image generation
CN114679531B (en) System for generating image, method for generating image information, and computer-readable medium
KR102604175B1 (en) Systems, methods, and computer program products for generating depth images based on shortwave infrared detection information
TWI795903B (en) Photonics systems and methods
US11811194B2 (en) Passive Q-switched lasers and methods for operation and manufacture thereof
US20240014630A1 (en) Passive q-switched lasers and methods for operation and manufacture thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Ariel Danan

Inventor after: Kuzmin Dan

Inventor after: Ellio Dekker

Inventor after: Shearer. Shearer

Inventor after: Ronnie Dobrsky

Inventor after: Ulahan Bakar

Inventor after: Uriel Levy

Inventor after: Omel Kapac

Inventor after: Nadaf melamud

Inventor before: Ariel Danan

Inventor before: Aaron Gan

Inventor before: Kuzmin Dan

Inventor before: Ellio Dekker

Inventor before: Shearer. Shearer

Inventor before: Ronnie Dobrsky

Inventor before: Ulahan Bakar

Inventor before: Uriel Levy

Inventor before: Omel Kapac

Inventor before: Nadaf melamud

GR01 Patent grant