CN110268282B - Providing a dynamic field of view of received light from a dynamic location - Google Patents
- Publication number
- CN110268282B (application CN201880008180.4A)
- Authority
- CN
- China
- Prior art keywords
- pixels
- detector
- detector pixels
- photodetector
- light beam
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4816—Constructional features, e.g. arrangements of optical elements of receivers alone
- G01S7/4817—Constructional features, e.g. arrangements of optical elements relating to scanning
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
- G01S7/487—Extracting wanted echo signals, e.g. pulse detection
- G01S7/4876—Extracting wanted echo signals, e.g. pulse detection by removing unwanted signals
- G01S7/491—Details of non-pulse systems
- G01S7/4912—Receivers
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
A system and method for providing a dynamic composite field of view in a scanning lidar system, for example, to improve the signal-to-noise ratio of detected light. The dynamic composite field of view may include a subset of the available detector pixels and thus may reduce noise introduced by sources that scale with detector area, such as dark current and gain peaking caused by the capacitance of the photodetector.
Description
Priority Claim
This patent claims priority from provisional patent application serial No. 62/499,716, filed January 24, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to systems and methods for improving the accuracy of systems that receive light from a mobile source, such as in a scanning laser ranging system or a free space optical communication system.
Disclosure of Invention
Some systems, such as lidar systems, may scan a beam of light at a target area and detect the scanned beam reflected or scattered by the target area. The inventors have realized, among other things, that providing a dynamic field of view as the light beam is swept across the target area may, for example, improve the signal-to-noise ratio of the detected light. Other features of the present disclosure are provided in the appended claims, and may optionally be combined with each other in any permutation or combination, unless explicitly indicated otherwise elsewhere in this document.
In one aspect, the disclosure may be characterized as a method of dynamically adjusting a composite field of view in a lidar system having a photosensitive detector. The method may comprise selecting a first set of detector pixels, for example for detecting a portion of a light beam emitted toward the target area. The method may further comprise adjusting the angle of the light beam emitted toward the target area. The method may further comprise then selecting a second set of detector pixels, for example for detecting a portion of the light beam having the adjusted angle. The method may further comprise removing at least one detector pixel from the first set of detector pixels and adding at least one detector pixel, e.g. to form the second set of detector pixels. The detected light beam may cover an area corresponding to M pixels, the first set of detector pixels and the second set of detector pixels may each include M+1 pixels, and the photosensitive detector may include N pixels, where N is greater than M+1. The method may further comprise scanning the light beam over the target area in a pattern and recording the position of the detected light beam using the total number N of detector pixels. The method may further comprise selecting the first and second sets of detector pixels using the recorded positions of the detected light beam. The method may further comprise summing pixels in the first set of detector pixels prior to processing and summing pixels in the second set of detector pixels prior to processing. The method may further comprise summing pixels in the first set of detector pixels after processing and summing pixels in the second set of detector pixels after processing.
The method may further comprise detecting a position of the light beam using the selected first set of detector pixels, and selecting the second set of detector pixels when the center of the detected light beam position is located at a boundary between two pixels of the first set of detector pixels. The method may further include scanning the light beam over the target area and using the total number N of detector pixels to determine the angle at which each light beam passes through a boundary between two detector pixels. The method may further comprise scanning the light beam over the target area and selecting a new set of M+1 detector pixels each time the angle of the light beam corresponds to one of the determined angles.
In one aspect, the disclosure may be characterized as a system for dynamically adjusting a composite field of view in a lidar system. The system may include an emitter configured to emit a light beam at a first angle and then at a second angle toward a target area. The system may also include a photodetector including a plurality of pixels. The system may further include control circuitry configured to select a first set of detector pixels to receive a portion of the light beam from the target area at the first angle and to select a second set of detector pixels to receive a portion of the light beam from the target area at the second angle. The control circuit may be configured to remove at least one detector pixel from the first set of detector pixels and add at least one detector pixel, e.g. to form the second set of detector pixels. The received portion of the light beam may cover an area corresponding to M detector pixels, the first set of detector pixels and the second set of detector pixels may each include M+1 detector pixels, and the photosensitive detector may include N detector pixels, where N may be greater than M+1. The emitter may be configured to scan the light beam over the target area in a pattern and use the total number N of detector pixels, for example to record the position of the detected light beam. The control circuit may be configured to select the first and second sets of detector pixels using the recorded positions of the detected light beam. The system may further include a summing circuit that may sum pixels in the first set of detector pixels prior to processing and sum pixels in the second set of detector pixels prior to processing. The system may further include a summing circuit to sum pixels in the first set of detector pixels after processing and to sum pixels in the second set of detector pixels after processing.
In one aspect, the disclosure may be characterized as a method of dynamically adjusting a composite field of view in a lidar system. The method may include emitting a light beam toward a target area. The method may also include receiving a response beam from the target area onto a first set of pixels corresponding to a first composite field of view. The method may further include adjusting the angle of the emitted light beam and, based on the adjusted angle, removing at least one pixel from the first set of pixels and adding at least one pixel to form a second set of pixels corresponding to a second composite field of view. The method may further include then emitting the light beam at the adjusted angle toward the target area and receiving a response beam from the target area onto the second set of pixels corresponding to the second composite field of view. The method may further include sequentially scanning the light beam across the target area and determining at least one angle of the light beam at which the received portion of the emitted light beam is aligned with a boundary between at least two pixels. The at least one angle of the light beam may be determined when the center of the received portion of the emitted light beam is aligned with the boundary between the at least two pixels.
Drawings
The present disclosure will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 shows a diagram of a scanning lidar system.
Fig. 2 shows a diagram of a method for dynamically adjusting the composite FOV.
Fig. 3 shows a diagram of a method for dynamically adjusting the composite FOV.
Fig. 4 shows a diagram of an electrical system for dynamically adjusting a composite FOV.
Fig. 5 illustrates a method of operation of a scanning lidar system.
Fig. 6 shows signal paths in a scanned lidar system.
Detailed Description
Fig. 1 shows an example of a portion of a lidar system 100. Lidar system 100 may include control circuitry 104, illuminator 105, scanning element 106, optical system 116, photosensitive detector 120, and detection circuitry 124. The control circuit 104 may be connected to the illuminator 105, the scanning element 106, and the detection circuit 124. The photosensitive detector 120 may be connected to the detection circuit 124. During operation, the control circuit 104 may provide instructions to the illuminator 105 and the scanning element 106, such as causing the illuminator 105 to emit a beam of light toward the scanning element 106 and causing the scanning element 106 to direct the beam toward the target area 112. In one example, the illuminator 105 may comprise a laser and the scanning element may comprise a vector scanner, such as an electro-optic waveguide. The electro-optic waveguide may adjust the angle of the light beam based on instructions received from the control circuit 104. The target area 112 may correspond to the field of view of the optical system 116. The electro-optic waveguide may scan the light beam over the target area 112 in a series of scan segments 114. The optical system 116 may receive at least a portion of the light beam from the target area 112 and may image the scan segments 114 onto a photosensitive detector 120 (e.g., a CCD). The detection circuit 124 may receive and process the images of the scan segments from the photosensitive detector 120 to form a frame. In an example, the control circuit 104 may select a region of interest that is a subset of the field of view of the optical system and instruct the electro-optic waveguide to scan the region of interest. In an example, the detection circuit 124 may include circuitry for digitizing the received image. In one example, lidar system 100 may be installed in an automobile to facilitate autonomous driving of the automobile.
The field of view of the optical system 116 may be associated with the photosensitive detector 120, e.g., the optical system 116 images light onto the photosensitive detector 120. The photosensitive detector 120 may be divided into an array of detector pixels 121, and the field of view (FOV) of the optical system may be divided into an array of pixel FOVs 123, where each pixel FOV of the optical system corresponds to a pixel of the photosensitive detector 120.
In one example, one or more signals (e.g., measured charges or currents) from a set of detector pixels 121 may be summed together to form a composite FOV (e.g., the sum of the detector pixel FOVs). The composite FOV may be a subset smaller than the full FOV, and it can be dynamically adjusted by changing which pixels are summed. The summation may be accomplished in various ways. For example, the summation may be performed digitally, such as after some processing of each pixel in the active composite FOV. The summation may also be performed in the analog domain, e.g., by directly summing the photocurrents generated by light incident on the photodetector, or by converting each photocurrent to a voltage (e.g., with a transimpedance amplifier) and then summing the voltages. Summing may also be performed later in the signal path using a summing amplifier, such as in the signal path 600 shown in fig. 6, which may include a photodetector (PD) 610 providing a signal to a transimpedance amplifier (TIA) 620, the TIA 620 providing a signal to a summing circuit 640 or second amplification stage, and then to an analog-to-digital converter (ADC) 650. First and second multiplexers 630 may handle even and odd detector pixels of the photosensitive detector, respectively. In examples where detector pixels are consecutively numbered, even detector pixels may refer to even-numbered detector pixels and odd detector pixels to odd-numbered detector pixels.
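The digital form of this summation can be sketched as follows. This is a hypothetical illustration, not an implementation from the patent; the pixel values and function name are invented for the example.

```python
def composite_signal(pixel_signals, active_pixels):
    """Sum per-pixel signals over the active subset forming the composite FOV."""
    return sum(pixel_signals[i] for i in active_pixels)

# A 5-pixel detector where the returned spot straddles pixels 1 and 2;
# summing pixels 0-2 captures the whole spot while excluding pixels 3-4,
# whose dark current and background light would only add noise.
signals = [0.1, 3.2, 2.8, 0.1, 0.1]
total = composite_signal(signals, range(3))
```

The same selection logic applies whether the sum is taken digitally, as photocurrents, or at a summing amplifier; only where in the signal chain it happens changes.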
In examples where the detected image of the object of interest (e.g., the reflected or scattered portion of the beam illuminating the target area) may be smaller than the complete FOV, reducing the FOV as described herein may have a number of advantages. For example, by using a reduced FOV to shrink the effective area of the photodetector, noise introduced by noise sources that scale with photodetector area may be reduced. Examples of noise or other artifacts that may be reduced by reducing the active area of the photodetector include dark current in the photodetector and gain peaking in the op-amp circuit, such as peaking caused by the capacitance of the photodetector. In addition, the effect of background light may be reduced, and any parasitic light signals that are not present in the active composite FOV but are present in the overall system FOV may be reduced (and in some examples eliminated).
Alternatively or additionally, the signals from the individual pixels may be processed entirely independently, and summation or one or more other techniques may then be performed in post-processing. This has the disadvantage of duplicating most of the complete signal chain for each individual pixel (e.g., readout electronics for each individual pixel rather than for as few as one composite pixel). Such duplication may be costly in hardware (heat, power consumption, and physical space) as well as in software and data processing, and in many cases may not be feasible.
A problem that may arise from dividing the photodetector into subsets of smaller-area pixels is that the signal of interest may fall between the FOVs of two (or more) pixels. In such an example, the observed signal may be reduced when reading a single pixel, because much of the light may hit inactive pixels. By dynamically adjusting the composite FOV, this problem can be reduced while still maintaining the benefit of a reduced FOV. In one example, techniques for summing pixels may include changing the composite FOV, such as to track a moving object (e.g., as a beam of light is scanned over the object) while fully capturing light from the object. The composite FOV may be dynamically adjusted, for example, based on one or more of the position of the object, the size of the object, and a calibration of the FOV of each pixel. In an example, the composite FOV may be automatically adjusted when the object crosses a boundary between at least two pixels.
Fig. 2 illustrates a method for dynamically adjusting a composite FOV in a lidar system, such as lidar system 100 shown in fig. 1. In the illustrated method, which has been simplified to a small number (e.g., fewer than 10) of detector pixels 121 for clarity, the photosensitive detector may include five detector pixels 121, the target 204 may have a dimension in the scan direction that is greater than one detector pixel but less than two detector pixels, and three detector pixels at a time may be summed to form the composite FOV. In fig. 2, the detector pixels that are summed are shown unshaded. In the illustrated method, as the light beam scans the target area, the target (e.g., the reflected or scattered portion of the light beam illuminating the target area) may move from left to right (from a low pixel index to a high pixel index) across the five detector pixels of the optical detector. The composite FOV may be dynamically adjusted as the target moves from left to right so that the detector pixels forming the composite FOV capture the entire target. In one example, the pixel with the lowest index may be removed from the sum of detector pixels forming the composite FOV, and the next pixel in the direction of target motion (e.g., with a higher index) may be added to the sum, such as when the target is centered on or otherwise spans the boundary between two detector pixels in the composite FOV. In one example, the photosensitive detector may include N detector pixels, the target may be captured by M detector pixels, and the composite FOV may be formed by summing M+1 detector pixels. In the example shown in fig. 2, the target moves in a linear pattern, but the technique of dynamically adjusting the composite FOV may be applied to any predetermined pattern in one or two dimensions.
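The sliding-window pixel switch of fig. 2 can be sketched as follows. This is an illustrative Python model under the assumption of a one-dimensional pixel array; the patent does not prescribe an implementation, and the function name is invented.

```python
def pixel_switch(active_set, direction=1):
    """Drop the trailing pixel and add the next pixel in the direction of
    target motion, keeping M+1 pixels in the composite FOV."""
    s = sorted(active_set)
    if direction > 0:
        return s[1:] + [s[-1] + 1]
    return [s[0] - 1] + s[:-1]

# Target moving left to right (low to high index) on a 5-pixel detector,
# with M+1 = 3 pixels summed at a time:
fov = [0, 1, 2]
fov = pixel_switch(fov)  # now [1, 2, 3]
fov = pixel_switch(fov)  # now [2, 3, 4]
```

Each call corresponds to one boundary crossing of the target image; the active set always keeps one spare pixel so the M-pixel target stays fully inside the composite FOV.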
Additionally or alternatively, techniques of summing pixels to form a composite FOV may be applied in one or two dimensions.
In a scanning lidar system such as that shown in fig. 1, a target may be imaged by light pulses emitted by a laser and reflected or scattered from the target area back to the receiving optical system. Because the target position corresponds to the angle at which each light pulse is emitted from the laser, the trajectory of the target image on the detector may be determined from the laser's scan pattern.
In examples where the target may be brighter than any background signal, such as in an active lidar system as shown in fig. 1 or in a free-space optical communication system, the position of the target may be determined from the respective signals collected by each detector pixel in the composite FOV. In examples where the target image spans only two detector pixels, the difference in signal intensity between the two detector pixels may be used to determine the target position relative to them. By adjusting which pixels are summed, and thus the composite FOV, as the signal intensity becomes balanced or divided between the two detector pixels, pixel switching can be handled automatically in real time.
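The two-pixel intensity comparison just described can be sketched as follows. This is a hypothetical illustration assuming the spot spans exactly two adjacent pixels; the switching threshold is invented for the example.

```python
def boundary_offset(sig_low, sig_high):
    """Normalized target position relative to the boundary between two
    adjacent pixels: 0.0 when the two signal intensities are balanced,
    i.e., when the spot is centered on the boundary."""
    return (sig_high - sig_low) / (sig_high + sig_low)

def should_switch(sig_low, sig_high, tol=0.05):
    # Trigger a pixel switch when the spot is (nearly) centered on the boundary.
    return abs(boundary_offset(sig_low, sig_high)) <= tol
```

Comparing only the two pixels at the leading edge of the composite FOV is enough to decide when to slide the pixel window, without processing the full detector array.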
In an example in which the target changes direction, the change in the direction of the target may be detected based on the signal intensity of each pixel. In such examples, the motion of the target may be slower than the update rate of the detector pixel signal, and the lidar system may track the motion of the target. The position of the target may be determined by image processing data from a photosensitive detector or from another sensor or a set of sensors, such as a camera, an Inertial Measurement Unit (IMU) or GPS.
Fig. 3 shows an example where the size of the target beam may be smaller than a single detector pixel. In the example shown in fig. 3, two detector pixels at a time may be summed to form the composite FOV. For such an example, where two adjacent detector pixels are summed, fig. 4 shows an example of an electronic system 400 for performing the summation. The electronic system 400 may be included in a detection circuit, such as the detection circuit 124. In the example shown in fig. 4, odd pixels may be electrically connected to inputs of a first multiplexer (MUX) 404 and even pixels may be electrically connected to inputs of a second multiplexer 408. The outputs of the first multiplexer 404 and the second multiplexer 408 may be connected to a summing amplifier 412 to sum any combination of adjacent pixels (e.g., pixel 1 and pixel 2, or pixel 2 and pixel 3). In an example where K detector pixels are summed at a time to form the composite FOV, the detector pixels may be connected to the inputs of K multiplexers, and the outputs of the respective multiplexers may be summed to form the composite FOV.
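The even/odd multiplexer arrangement can be modeled as follows. Because any two adjacent pixels always comprise one even-indexed and one odd-indexed pixel, one MUX per parity suffices to route any adjacent pair to the summing amplifier. This is an illustrative sketch; the signal values are invented.

```python
def mux_sum_adjacent(pixel_signals, first_index):
    """Sum two adjacent pixels via the even/odd MUX pair: one MUX selects
    the even-indexed pixel of the pair, the other the odd-indexed pixel."""
    even_mux = pixel_signals[first_index if first_index % 2 == 0 else first_index + 1]
    odd_mux = pixel_signals[first_index if first_index % 2 == 1 else first_index + 1]
    return even_mux + odd_mux

signals = [0.0, 1.5, 2.5, 0.2]
paired = mux_sum_adjacent(signals, 1)  # sums pixels 1 and 2
```

A single-MUX design could not do this, since both pixels of a pair can never share one multiplexer input bank.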
In examples where the composite FOV is smaller than the full FOV, the composite FOV may be matched to a dynamic region of interest. Matching the dynamic region of interest to the composite FOV can reduce noise and make the system less susceptible to spurious signals outside the current composite FOV of the photodetector. For example, in a scanned lidar system, direct sunlight or other strong light sources may blind or saturate some detector pixels, but if those blinded or saturated pixels are not active detector pixels, the system output may not be obscured or saturated by them. In examples where noise from a non-blinding light source is incident on inactive detector pixels outside the active composite FOV, added noise from that source may likewise be avoided. By contrast, in examples where the composite FOV includes only a single detector pixel, the received signal may be reduced when the target moves outside the FOV of that pixel and is split between multiple pixels.
Fig. 5 illustrates a method of operation of a scanning lidar system, such as lidar system 100 illustrated in fig. 1. A light beam may be emitted toward a first target (step 510). The reflected or scattered light beam may then be received from the first target onto a first set of pixels corresponding to a first composite field of view (step 520). The angle of the beam emitted from the laser may be adjusted and, based on the adjusted angle, at least one pixel may be removed from the first set of pixels and at least one pixel may be added to form a second set of pixels (step 530). Removing at least one pixel and adding at least one pixel may be referred to as pixel switching. The light beam may then be transmitted toward a second target, and the reflected light beam may be received from the second target onto the second set of pixels corresponding to a second field of view (step 540). The light beam may be scanned over the target area, and at least one angle may be determined at which the received portion of the emitted light beam is aligned with a boundary between at least two pixels (steps 550 and 560). In an example, the angle is determined when the center of the received portion of the emitted beam is aligned with the boundary between the at least two pixels.
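Steps 550 and 560 amount to building a calibration table that maps beam angles to pixel sets. A hypothetical sketch, with invented boundary angles and set size, might look like this:

```python
import bisect

def select_fov(angle, boundary_angles, set_size=3):
    """Select the composite-FOV pixel set for a given beam angle, using
    calibrated angles at which the returned spot crosses a pixel boundary.
    boundary_angles must be sorted in ascending order."""
    k = bisect.bisect_right(boundary_angles, angle)
    return list(range(k, k + set_size))

# With calibrated boundary crossings at 10 and 20 degrees:
low = select_fov(5.0, [10.0, 20.0])    # pixels [0, 1, 2]
mid = select_fov(15.0, [10.0, 20.0])   # pixels [1, 2, 3]
```

Once the table is built from a calibration scan over all N pixels, each subsequent emission angle selects its pixel set by a simple lookup rather than by re-detecting the spot position.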
In examples where a pixel switch can be associated with the angle of the beam emitted by an illuminator (e.g., illuminator 105 shown in fig. 1), the pixel switch may be used as a calibration angle marker, for example to provide an indication of a change in the beam steering portion (e.g., scanning element 106) of the scanning lidar system. Pixel switching may be used to recalibrate the beam steering portion and to compensate for drift in the beam steering portion, for example due to aging, misalignment, mechanical shock, temperature drift, laser wavelength drift, or any other factor that may affect the beam steering element. Although some examples herein have been described in the context of a lidar system, the present disclosure is equally applicable to passive receiving systems such as free-space optical communication systems.
Claims (10)
1. A method of dynamically adjusting a composite field of view in an optical detection system having a photosensitive detector, the method comprising:
scanning a first light beam in a pattern over a target area and recording a position of the detected first light beam using a full count of N detector pixels;
selecting a first set of detector pixels for detecting a portion of a second beam of light emitted towards the target area;
adjusting an angle of the second light beam emitted toward the target area; and
selecting a second set of detector pixels for detecting a portion of the second light beam having the adjusted angle, wherein selecting the first set of detector pixels and selecting the second set of detector pixels includes using the recorded position of the detected first light beam, and the count of the first set of detector pixels and the count of the second set of detector pixels are each less than the count N.
2. The method of claim 1, comprising removing at least one detector pixel from the first set of detector pixels, and adding at least one detector pixel to form the second set of detector pixels.
3. The method according to claim 1 or 2, comprising summing pixels in the first set of detector pixels prior to digitizing and summing pixels in the second set of detector pixels prior to another digitizing.
4. The method of claim 1, comprising detecting a position of the second light beam using the first set of detector pixels, and selecting the second set of detector pixels when a center of the detected position of the second light beam is located at a boundary between two pixels.
5. The method according to claim 1, comprising:
scanning the first light beam over the target area and using all N detector pixels to determine a respective angle for each light beam that crosses a boundary between two detector pixels; and
selecting a new set of M+1 detector pixels each time the angle of the light beam corresponds to a respective one of the determined angles.
6. The method of claim 3, wherein summing pixels in the first set of detector pixels prior to digitizing comprises summing respective currents from the first set of detector pixels; and
wherein summing pixels in the second set of detector pixels prior to another digitizing includes summing respective currents from the second set of detector pixels.
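Claims 3 and 6 describe summing the selected pixels in the analog domain, before digitization. A toy model (hypothetical; the quantization step and current values are invented for illustration) shows the effect: one conversion per group rather than one per pixel:

```python
def digitize(current, lsb=0.1):
    """Toy ADC: quantize an analog value to integer counts."""
    return round(current / lsb)

def read_group(pixel_currents, group):
    """Sum the photocurrents of the selected detector pixels in the
    analog domain, then perform a single digitization of the total."""
    total = sum(pixel_currents[i] for i in group)  # analog-domain sum
    return digitize(total)  # one conversion for the whole group

currents = [0.0, 0.0, 0.3, 0.9, 0.2, 0.0]  # photocurrent per pixel (arbitrary units)
value = read_group(currents, group=[2, 3, 4])  # one ADC conversion covers 3 pixels
```

Summing before the converter means the receive chain needs only a single ADC channel per selected group, at the cost of losing per-pixel resolution within the group.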
7. A system for dynamically adjusting a composite field of view in an optical detection system, the system comprising:
an emitter configured to emit a second light beam toward a target area, first at a first angle and then at a second angle;
a photodetector including a plurality of pixels; and
control circuitry configured to:
record detected positions of a first light beam scanned in a pattern over the target area, using a count N of the plurality of photodetector pixels; and
select a first group of the plurality of photodetector pixels to receive a portion of the second light beam from the target area corresponding to the first angle, and select a second group of the plurality of photodetector pixels to receive a portion of the second light beam from the target area corresponding to the second angle, wherein selecting the first group and selecting the second group include using the recorded positions of the detected first light beam, and wherein a count of the first group of the plurality of photodetector pixels and a count of the second group of the plurality of photodetector pixels are each less than the count N.
8. The system of claim 7, wherein the control circuit is configured to subtract at least one photodetector pixel from the first set of photodetector pixels and add at least one photodetector pixel to the first set of photodetector pixels to form the second set of photodetector pixels.
9. The system according to claim 7 or 8, comprising summing circuitry configured to sum pixels in the first set of photodetector pixels prior to digitizing and to sum pixels in the second set of photodetector pixels prior to another digitizing.
10. The system of claim 9, wherein the summing circuitry is configured to sum respective currents from the first set of photodetector pixels prior to digitizing and is configured to sum respective currents from the second set of photodetector pixels prior to another digitizing.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762449716P | 2017-01-24 | 2017-01-24 | |
US62/449,716 | 2017-01-24 | ||
PCT/US2018/015027 WO2018140480A1 (en) | 2017-01-24 | 2018-01-24 | Providing dynamic field of view for light received from a dynamic position |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110268282A (en) | 2019-09-20 |
CN110268282B (en) | 2023-11-14 |
Family
ID=61189541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880008180.4A Active CN110268282B (en) | 2017-01-24 | 2018-01-24 | Providing a dynamic field of view of received light from a dynamic location |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN110268282B (en) |
DE (1) | DE112018000284B4 (en) |
WO (1) | WO2018140480A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12019189B2 (en) | 2018-01-24 | 2024-06-25 | Analog Devices, Inc. | Providing dynamic field of view for light received from a dynamic position |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102669876B1 (en) * | 2019-06-27 | 2024-05-29 | 삼성전자주식회사 | Radar data processing device and method to adjust local resolving power |
JP7428492B2 (en) * | 2019-08-26 | 2024-02-06 | 株式会社ミツトヨ | Inspection method and correction method |
CN111077510B (en) * | 2019-12-16 | 2022-12-02 | 上海禾赛科技有限公司 | Laser radar's receiving terminal and laser radar |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101238393A (en) * | 2005-06-02 | 2008-08-06 | 思拉视象有限公司 | Scanning method and apparatus |
WO2014198629A1 (en) * | 2013-06-13 | 2014-12-18 | Basf Se | Detector for optically detecting at least one object |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3832101B2 (en) * | 1998-08-05 | 2006-10-11 | 株式会社デンソー | Distance measuring device |
US7532311B2 (en) | 2005-04-06 | 2009-05-12 | Lockheed Martin Coherent Technologies, Inc. | Efficient lidar with flexible target interrogation pattern |
GB2464465A (en) * | 2008-10-14 | 2010-04-21 | Insensys Ltd | Method of operating a spectrometer detector array |
US20120170029A1 (en) * | 2009-09-22 | 2012-07-05 | ISC8 Inc. | LIDAR System Comprising Large Area Micro-Channel Plate Focal Plane Array |
JP5267592B2 (en) | 2010-04-09 | 2013-08-21 | 株式会社デンソー | Object recognition device |
ES2512965B2 (en) | 2013-02-13 | 2015-11-24 | Universitat Politècnica De Catalunya | System and method to scan a surface and computer program that implements the method |
DE112015001704T5 (en) | 2014-04-07 | 2016-12-29 | Samsung Electronics Co., Ltd. | Image sensor with high resolution, frame rate and low power consumption |
2018
- 2018-01-24 WO PCT/US2018/015027 patent/WO2018140480A1/en active Application Filing
- 2018-01-24 CN CN201880008180.4A patent/CN110268282B/en active Active
- 2018-01-24 DE DE112018000284.5T patent/DE112018000284B4/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101238393A (en) * | 2005-06-02 | 2008-08-06 | 思拉视象有限公司 | Scanning method and apparatus |
WO2014198629A1 (en) * | 2013-06-13 | 2014-12-18 | Basf Se | Detector for optically detecting at least one object |
CN105452894A (en) * | 2013-06-13 | 2016-03-30 | 巴斯夫欧洲公司 | Detector for optically detecting at least one object |
Also Published As
Publication number | Publication date |
---|---|
DE112018000284B4 (en) | 2023-04-06 |
CN110268282A (en) | 2019-09-20 |
DE112018000284T5 (en) | 2019-09-26 |
US20190369216A1 (en) | 2019-12-05 |
WO2018140480A1 (en) | 2018-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6899005B2 (en) | Photodetection ranging sensor | |
JP4405154B2 (en) | Imaging system and method for acquiring an image of an object | |
CN111727381B (en) | Multi-pulse LiDAR system for multi-dimensional sensing of objects | |
US11408983B2 (en) | Lidar 2D receiver array architecture | |
CN109425324B (en) | Total station or theodolite with scanning function and receiver and settable receiving range | |
KR101975081B1 (en) | Method and apparatus for high speed acquisition of moving images using pulsed illumination | |
CN110268282B (en) | Providing a dynamic field of view of received light from a dynamic location | |
US10838048B2 (en) | Apparatus and method for selective disabling of LiDAR detector array elements | |
US20050275906A1 (en) | Automatic speed optimization | |
EP3602110B1 (en) | Time of flight distance measurement system and method | |
KR20200033068A (en) | Lidar system | |
US20210055419A1 (en) | Depth sensor with interlaced sampling structure | |
CN114761824A (en) | Time-of-flight sensing method | |
US12019189B2 (en) | Providing dynamic field of view for light received from a dynamic position | |
US20210072359A1 (en) | Photo detection device, electronic device, and photo detection method | |
CN111788495B (en) | Light detection device, light detection method, and laser radar device | |
GB2475077A (en) | Laser spot location detector | |
CN112351270B (en) | Method and device for determining faults and sensor system | |
GB2461042A (en) | Determination of imaged object centroid using CCD charge binning | |
JPH09257417A (en) | Light-receiving apparatus | |
GB2052907A (en) | Thermal object coordinate determination | |
CN116481488A (en) | Laser triangulation ranging device and method based on single photon avalanche diode | |
CN110869801A (en) | Laser scanner for a laser radar system and method for operating a laser scanner | |
JP2021025810A (en) | Distance image sensor, and distance image measurement device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||