WO2011055117A1 - Detector - Google Patents
- Publication number: WO2011055117A1 (PCT/GB2010/002035)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
- G01S3/78—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
- G01S3/782—Systems for determining direction or deviation from predetermined direction
- G01S3/783—Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems
- G01S3/784—Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems using a mosaic of detectors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/71—Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
- H04N25/72—Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors using frame transfer [FT]
Definitions
- Imaging devices are not very sensitive to the near infrared wavelengths, especially 1064 nm from Nd:YAG laser illumination. Therefore, provided that the imaging device is thinned sufficiently, most of the infrared photons will be transmitted but the visible photons will be detected and will form the image of the scene.
- The stacked arrangement of Figure 5 can be used to simultaneously image the scene and detect the location and the temporal characteristics of the laser pulse.
- The left-hand arrow indicates the ray path of illumination from the scene, and the arrows to the right the ray paths of the transmitted radiation.
- A filter 9 (typically allowing wavelengths in the range 1060 to 1070 nm to pass) is placed between the visible-image imaging device 8 and the laser spot sensors (image areas 1, 5) to ensure that only the wavelengths of interest are detected by the sensors.
- A typical thickness for the imaging device 8 is from 6 to 20 μm.
Abstract
A detector for determining the location of a laser spot reflected from a scene, comprising a first sensor (image area 1), such as a full-frame CCD sensor, arranged to image the scene, having pixels arranged in rows and columns, and means (2) to read-out the data, a second sensor (image area 5), such as a full-frame CCD sensor, arranged to image the scene, having pixels arranged in rows and columns, and means (6) to read-out the data, wherein the columns of the first and second sensors extend at different, preferably orthogonal, orientations relative to the imaged scene, and at least one of the sensors is continuously clocked, providing an accurate column-wise position of the spot and temporal spacing of successive spots, while the other sensor, imaging the spot relative to a differently-oriented column, provides an accurate positioning along the column of the continuously clocked sensor.
Description
DETECTOR
This invention relates to detectors for determining the location of a pulsed laser spot reflected from a scene.
There is a requirement to be able to image a scene and to locate a reflected pulsed infrared laser spot within the scene. Once located, the temporal spacing of the laser pulses needs to be measured. Typical pulse widths can be microseconds or fractions thereof, and the spacing between the pulses can range from milliseconds to seconds. Detectors using a silicon photodiode device organised as a quadrant detector have been used, but these provide only crude positional information. It has also been proposed (US 6 288 383) to use a CCD image sensor of the frame transfer type, but a disadvantage is that if a pulse is present during the frame transfer period, it will lead to an incorrect spot location and thus produce both a location error and temporal spacing error. It is also known (JP 07016744A, US 5 528 294), in the more general application of imaging a scene without any requirement to image a pulsed laser spot, to reduce the effects of any smear by providing two CCD image sensors to image the scene simultaneously with the columns of the sensors running perpendicular to each other relative to the image, and then comparing pixels of the two frames to replace the smeared region from one sensor with the corresponding portion of the normal image from the other.
The invention provides a detector for determining the location of a pulsed laser spot reflected from a scene, comprising a first sensor arranged to image the scene, having
pixels arranged in rows and columns, and a first read-out arrangement to read-out the data, a second sensor arranged to image the scene, having pixels arranged in rows and columns, and a second read-out arrangement to read-out the data, wherein the columns of the first and second sensors extend at different orientations relative to the respective images, and wherein at least one of the sensors includes a clocking arrangement for continuously clocking the rows of data to the respective read-out means, and a device to determine the location of the spot from the respective columns in which it is detected.
With such an arrangement, location errors and thus temporal errors are avoided. The sensor which is continuously clocked, that is, such that there is no integration period in which a frame is integrated, provides an accurate column-wise position of the spot and temporal spacing of successive spots, while the other sensor, imaging the spot relative to a differently-oriented column, provides an accurate positioning along the column of the continuously clocked sensor.
The invention also provides a method of determining the location of a pulsed laser spot reflected from a scene, comprising imaging the scene on a first sensor and on a second sensor, the first sensor having pixels arranged in rows and columns, and a first read-out arrangement to read-out the data, the second sensor having pixels arranged in rows and columns, and a second read-out arrangement to read-out the data, wherein the columns of the first and second sensors extend at different orientations relative to the respective images, and wherein at least one of the sensors is continuously clocked to clock the
rows of data continuously to the respective read-out means, and determining the location of the spot from the respective columns in which it is detected.
The sensors may be CCD sensors, or EMCCD sensors, or Time Delay and Integrate (TDI) CMOS sensors. The read-out means may be read-out registers.
Other preferred features of the invention are defined in the subsidiary claims.
Ways of carrying out the invention will now be described in greater detail, by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows in schematic form a part of the first embodiment of the invention;
Figure 2 shows in schematic form the first embodiment of the invention;
Figure 3 shows in schematic form the second embodiment of the invention;
Figure 4 is a side view, in schematic form, of the second embodiment of the invention; and
Figure 5 shows in schematic form the third embodiment of the invention.
Like reference numerals have been given to like parts throughout all the drawings.
Referring to Figure 1, the first embodiment of the invention includes a full-frame CCD sensor with no conventional image storage region. The scene, which may contain a pulsed laser spot, is imaged onto the image area 1. An optical filter (not shown) is provided so that only the laser wavelengths are incident on the device. The device has a plurality (typically, 100 to 1000) of rows and columns. The direction of clocking data down the columns is indicated by the arrows. Each row in turn is clocked into the readout register 2 in a parallel fashion and then serially read out of the device through an output amplifier 3 (which can consist of a single stage or multiple stages). This sequence is continually repeated.
Should a static scene be focused onto this arrangement, the resultant image will be smeared in the vertical direction, as each row will be clocked through the illumination. However, a well-defined bright spot will be observed from the pulsed laser illumination. If the laser pulse width is significantly less than the time it takes to transfer one row and read it out of the sensor, the generated signal will be largely confined to a single row. The column or columns in which the bright spot appears give the positional information in one direction, and the row or rows in which it is read out give the temporal information. Noting when the next spot appears gives the temporal spacing of
the laser pulses. The minimum resolvable temporal spacing will be twice the time it takes to transfer and read out one row of data, and a temporal spacing of 1 is achievable, depending on the spatial resolution required.
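As a concrete illustration of this read-out scheme, the following sketch models the serial data stream of a continuously clocked sensor. It is not code from the patent; the function name, array sizes, and detection threshold are all assumptions chosen for illustration.

```python
# Illustrative model (not code from the patent) of how the serial data
# stream of a continuously clocked sensor encodes both the spot's
# column (spatial position) and the row at which its charge was read
# out (temporal position). Names, sizes and threshold are assumptions.

def locate_pulses(stream, n_cols, row_time_us, threshold=100):
    """Scan a flattened serial read-out stream for signal charge.

    `stream` holds pixel values in read-out order: row 0's n_cols
    pixels, then row 1's, and so on. Each hit yields the column in
    which the charge appeared and the time at which its row was read.
    """
    hits = []
    for i, value in enumerate(stream):
        if value > threshold:
            hits.append((i % n_cols, (i // n_cols) * row_time_us))
    return hits

# A 256-column sensor at 5 us per row (the text's typical figure),
# with one pulse landing in column 70 of the 40th row read out.
stream = [0] * (256 * 100)
stream[40 * 256 + 70] = 500
hits = locate_pulses(stream, n_cols=256, row_time_us=5)  # [(70, 200)]
```

The temporal spacing of successive pulses then follows from the difference between the recovered times of two hits in the same column region.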
Referring to Figure 2, an optical beam splitter arrangement 4 splits incoming radiation from the scene (the incoming ray path being shown by the large-headed arrow, the split paths by the angled-headed arrows) to be focussed simultaneously onto two such CCD sensors in order to obtain two-dimensional positional information for the spot, as well as temporal information. The sensor shown in Figure 1 provides temporal information and spatial information in the x-direction. A second sensor having image area 5 provides temporal information and spatial information in the y-direction. In the second sensor, the columns are aligned orthogonally relative to the image compared to the first. The first and second sensors (image areas 1, 5) are shown as co-planar in the drawing, but would in reality each be normal to the optical axis of each split beam from the beam splitter. Each image area receives an identical image from the beam splitter, the columns running from top to bottom for the first sensor and side-to-side for the second sensor.
The second sensor has a plurality (typically, 100 to 1000) of rows and columns (not shown), the arrows showing the direction of clocking data down the columns. As for the first sensor, each row in turn is clocked into the readout register 6 (clocking arrangement 10 provides the respective clocking voltages for both the sensors) in a parallel fashion and then serially read out of the device through an output amplifier 7
(which can consist of a single stage or multiple stages). This sequence is continually repeated. Device 11 receives the data streams from the amplifiers 3 and 7, and from these determines the location of a spot and its pulse spacing.
As an example in the case where each sensor is a 1000 x 1000 array, consider a laser spot incident on the 100th row from the top of the first sensor, and the 300th column along that row. The spatial position along the 100th row (the "x" co-ordinate) can be ascertained from the output of amplifier 3, by virtue of signal charge in the data stream from the output register 2 corresponding to the 300th column. The rate of repetition of the signal charge in the output indicates the frequency of the spot. The output of amplifier 7 will indicate that the spot is incident on the 100th column of the second sensor from signal charge in the data stream from the output register 6. This will locate the spot at the 100th row for the first sensor (the "y" co-ordinate), because the scenes are imaged on the sensors in register. Thus, the "x" and "y" co-ordinates of the spot are determined. The spot frequency can equally be determined from the output 7.
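The worked example reduces to a trivial mapping once the sensors are in register. The in-register assumption is taken from the text; the function below is otherwise illustrative.

```python
def spot_location(col_sensor1, col_sensor2):
    """Combine the column read-outs of two orthogonally mounted sensors.

    Because the scenes are imaged on the sensors in register and the
    columns are orthogonal, the first sensor's column gives the "x"
    co-ordinate directly, while the second sensor's column corresponds
    to the first sensor's row, i.e. the "y" co-ordinate.
    """
    return (col_sensor1, col_sensor2)

# The worked example: charge in column 300 of the first sensor and in
# column 100 of the second locates the spot at x = 300, y = 100.
xy = spot_location(300, 100)
```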
Of course, the above example assumes that the spot is imaged in a steady position on the sensors, but this may not in fact be the case. It is quite possible that there will be slight relative movement between the spot and the pair of sensors. For example, the same spot could be imaged at the 100th row and 300th column of the first sensor, and then at the 102nd row and 302nd column. There will in fact be an area over which it will be assumed that a subsequent spot received originates from the same pulsed laser.
However, this will cause a slight error in the value for the temporal pulse spacing, because the pulse in the 102nd row will appear in a different position in the data stream from amplifier 3, by virtue of having been clocked down two rows fewer.
The positional information from the second sensor (image area 5) is therefore used to correct the temporal pulse spacing, in the present case by compensating for the reduced clocking time to the output register.
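This correction can be sketched as follows. The sign convention (rows numbered from the top, read-out register at the bottom) is an assumption consistent with the description above.

```python
def corrected_spacing_us(observed_us, row_prev, row_curr, row_time_us):
    """Correct an observed pulse spacing for spot movement down the array.

    With rows numbered from the top and the read-out register at the
    bottom, a pulse arriving two rows further down is clocked out two
    row-times sooner, so the observed spacing is short by that amount.
    The sign convention here is an assumption.
    """
    return observed_us + (row_curr - row_prev) * row_time_us

# The text's example: the spot drifts from row 100 to row 102. At 5 us
# per row, a true 20 ms (20000 us) spacing is observed 10 us short.
spacing = corrected_spacing_us(19990, row_prev=100, row_curr=102, row_time_us=5)  # 20000
```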
It is not necessary for both CCD sensors to be run continuously, that is, for the read-out registers 2 and 6 to receive charge from the image pixels continually. The CCD which provides the temporal information must be clocked continuously, but the other CCD could be operated in a conventional integration mode, where there is an integration period for integrating a frame during which no data is transferred to the read-out register, followed by a read-out period in which the data is read out rapidly.
A typical clocking rate for the continuously clocked CCD having 256 columns is 5 microseconds for the time to transfer and readout one row of data, but times within the range of 1 microsecond to 200 microseconds would be suitable. If the other sensor was operated in a conventional integration mode, typical times for the time to integrate a frame would be 20 milliseconds, but times within the ranges 10 milliseconds to 100 milliseconds would be acceptable.
Variations may of course be made without departing from the scope of the invention. The sensors may be clocked at different rates, and binning may be employed, so that different numbers of rows are combined within the readout register. For example, the first sensor (image area 1) may be clocked out quickly, giving high temporal resolution, but the second sensor (image area 5) may be clocked out slowly to increase the effective integration time and thus sensitivity. While the sensors are arranged so that the columns are orientated orthogonally relative to the imaged scene, the invention extends to the situation where the columns are inclined relative to each other at less than 90 degrees. Equally, one or both CCD sensors could have multiple read-out registers in place of the one shown, each being arranged to read out the charges from different groups of columns. For example, one or both CCD sensors could have two, or four, read-out registers, each for reading out the charges for half, or a quarter, respectively, of the total number of columns. This increases the speed of read-out, so that the clocking rate of data to the registers can be increased, and with it, the temporal resolution between successive pulses.
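The effect of multiple read-out registers on the row time, and hence on the temporal resolution, can be sketched as below. The pixel rate is an assumed figure, chosen so that 256 columns through a single register reproduce the typical 5 microsecond row time quoted above; the parallel row transfer is neglected.

```python
def row_readout_time_us(n_cols, pixel_rate_mpix_s, n_registers=1):
    """Approximate time to serially read one row of data.

    Splitting the columns across several read-out registers shortens
    the serial read-out per register proportionally. The parallel row
    transfer is neglected, and the pixel rate is an assumed figure.
    """
    return (n_cols / n_registers) / pixel_rate_mpix_s  # us when the rate is in Mpix/s

one_register = row_readout_time_us(256, 51.2)        # ~5 us per row
four_registers = row_readout_time_us(256, 51.2, 4)   # ~1.25 us per row
# The minimum resolvable pulse spacing is about twice the row time.
```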
This temporal resolution may be improved further if the laser pulse width is of the same order as the time it takes to transfer and readout one row of data by sampling the shape of the signal pulse to establish when in the clocking sequence it was generated for example.
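One simple way to exploit the pulse shape is a crude linear interpolation of the charge split between two adjacent rows. This is an assumed model for illustration, not the patent's prescribed method.

```python
def sub_row_fraction(charge_this_row, charge_next_row):
    """Estimate when within a row period a short pulse arrived.

    If the pulse width is comparable to the row transfer-and-read-out
    time, the generated charge can straddle two adjacent rows. Under a
    crude linear model (an assumption, not the patent's method), the
    share of charge landing in the later row indicates how much of the
    pulse fell after the row boundary.
    """
    return charge_next_row / (charge_this_row + charge_next_row)

# A pulse whose charge splits 3:1 between one row and the next left a
# quarter of its energy after the row boundary.
fraction = sub_row_fraction(300, 100)  # 0.25
```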
The output register could include a region with charge multiplication, that is, the device could employ electron multiplying CCDs (EMCCDs). Silicon CCDs may be used,
which would be suitable for infrared radiation from lasers operating at a typical wavelength of 1064 nanometres. Indeed, any other sensors with semiconductor substrates may be employed, for example, CMOS TDI (time delay and integration) sensors may be used.
In a second embodiment of the invention shown in Figures 3 and 4, the two orthogonally arranged sensors (image areas 1, 5) are placed on top of each other. The arrows at the extreme left of the drawings indicate the ray path from the scene and, in Figure 4, the arrow between the sensors indicates the transmitted illumination. The underlying sensor (image area 1) indicates the spot position in the x-direction as well as temporal information, and the topmost sensor (image area 5) indicates the spot position in the y-direction as well as providing temporal information. The arrangement of Figures 3, 4 eliminates the need for the beam splitter arrangement.
The number of photons interacting with the silicon is dependent on the attenuation coefficient, which is wavelength dependent, and the silicon thickness. For a typical 1064 nm wavelength, the absorption length of light in the silicon is approximately 0.8 mm so only a fraction of the incident photons will be absorbed in a typical detector thickness. Those photons that do not interact will be transmitted through the silicon.
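The absorbed and transmitted fractions follow the Beer-Lambert law. In the sketch below, the 0.8 mm absorption length at 1064 nm is taken from the text above, while the 0.3 mm detector thickness is an illustrative assumption.

```python
import math

# Sketch of the absorbed/transmitted split described above (Beer-Lambert law).

def absorbed_fraction(thickness_mm: float, absorption_length_mm: float) -> float:
    """Fraction of incident photons absorbed in a layer of given thickness."""
    return 1.0 - math.exp(-thickness_mm / absorption_length_mm)

# 0.8 mm absorption length (1064 nm in silicon, per the text);
# 0.3 mm thickness is an assumed example.
f_abs = absorbed_fraction(thickness_mm=0.3, absorption_length_mm=0.8)
f_transmitted = 1.0 - f_abs   # photons passed on to a sensor stacked below
```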
The two sensors may be spaced apart, or they may be secured together using a suitable epoxy, for example. Care needs to be taken to minimise reflections by using appropriate anti-reflection coatings for the wavelengths being detected. Mounting the sensors close together is advantageous, as it allows a larger lens aperture (and hence higher overall sensitivity) while maintaining adequate focus in both detectors.
The sensors could be front illuminated (illumination is incident on the electrode side of the device) or back illuminated (illumination is incident on the substrate side of the device).
The thickness of the sensors can be chosen to optimise the signals. Thick silicon in the range 50 μm to 300 μm is preferred for efficient detection of optical signals in the near-infrared region of the spectrum. However, a trade-off is required between efficient detection and the transmission of sufficient photons for detection by the lower sensor. One may wish for the signals generated within each sensor to be approximately the same; in this case the top sensor may be chosen to be thinner than the underlying sensor.
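The equal-signal condition just mentioned has a simple closed form under the Beer-Lambert model: the top sensor absorbs 1 − exp(−t1/L) and the lower sensor absorbs exp(−t1/L)·(1 − exp(−t2/L)); setting these equal gives t1 = L·ln(2 − exp(−t2/L)). The helper below is a sketch, with the 0.8 mm absorption length from the text and the lower-sensor thickness an assumption.

```python
import math

# Sketch: top-sensor thickness making both stacked sensors absorb
# the same signal (Beer-Lambert model).

def equal_signal_top_thickness(t2_mm: float, L_mm: float = 0.8) -> float:
    """Solve 1 - exp(-t1/L) = exp(-t1/L) * (1 - exp(-t2/L)) for t1."""
    return L_mm * math.log(2.0 - math.exp(-t2_mm / L_mm))

# For an assumed 0.3 mm lower sensor, the top sensor comes out thinner,
# as the text suggests.
t1 = equal_signal_top_thickness(t2_mm=0.3)
```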
Instead of utilising two separate orthogonal sensors, the orthogonal electrode arrangement could be manufactured on either side of a single semiconductor die.
In order to obtain video imagery of the scene, a beam splitter and filter arrangement can be used to focus the illumination from the scene (visible wavelengths, for example) onto a conventional imaging device, for example a CMOS, CCD or EMCCD sensor, while directing the pulsed laser wavelengths onto the detector arrangements described above. Alternatively, infrared radiation may be used to obtain a thermal image of the scene instead of a visible one, for example using a cadmium mercury telluride detector, while detecting the pulsed laser wavelengths as described. Alternatively, the more compact arrangement described below may be used.
Conventional imaging devices are not very sensitive to near-infrared wavelengths, especially the 1064 nm of Nd:YAG laser illumination. Therefore, provided that the imaging device is thinned sufficiently, most of the infrared photons will be transmitted while the visible photons will be detected and will form the image of the scene. Thus the stacked arrangement of Figure 5 can be used to simultaneously image the scene and detect the location and the temporal characteristics of the laser pulse. The left-hand arrow indicates the ray path of illumination from the scene, and the arrows to the right the ray paths of the transmitted radiation. A filter 9 (typically passing wavelengths in the range 1060 nm to 1070 nm) is placed between the visible imaging device 8 and the laser spot sensors (image areas 1, 5) to ensure that only the wavelengths of interest are detected by the sensors. A typical thickness for the imaging device 8 is from 6 to 20 μm.
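A rough numerical sketch of why a thinned device passes the laser light while capturing the visible image: the 0.8 mm absorption length at 1064 nm follows the text, while the 10 μm device thickness and the roughly 1.5 μm absorption length assumed for green light are illustrative values only.

```python
import math

# Sketch: transmission through a thinned silicon imaging device is
# strongly wavelength dependent (Beer-Lambert law).

def transmitted_fraction(thickness_mm: float, absorption_length_mm: float) -> float:
    """Fraction of photons passing straight through a silicon layer."""
    return math.exp(-thickness_mm / absorption_length_mm)

# Assumed 10 um device; 0.8 mm absorption length at 1064 nm (from the text)
# versus an assumed ~1.5 um absorption length for visible (green) light.
ir_through = transmitted_fraction(0.010, 0.8)       # nearly all IR passes on
vis_through = transmitted_fraction(0.010, 0.0015)   # visible almost fully absorbed
```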
Claims
1. A detector for determining the location of a pulsed laser spot reflected from a scene, comprising a first sensor arranged to image the scene, having pixels arranged in rows and columns, and a first read-out arrangement to read-out the data, a second sensor arranged to image the scene, having pixels arranged in rows and columns, and a second read-out arrangement to read-out the data, wherein the columns of the first and second sensors extend at different orientations relative to the respective images, and wherein at least one of the sensors includes a clocking arrangement for continuously clocking the rows of data to the respective read-out means, and a device to determine the location of the spot from the respective columns in which it is detected.
2. A detector as claimed in claim 1, wherein the device is also operative to detect the pulse spacing from the data read-out from a sensor having a clocking arrangement for continuously clocking the rows of data to the respective read-out means.
3. A detector as claimed in claim 1 or claim 2, including a visible imaging device.
4. A detector as claimed in claim 3, including an optical filter arranged to receive radiation which has passed through the visible imaging device and to pass wavelengths corresponding to laser illumination to the sensors but not visible wavelengths.
5. A detector as claimed in claim 4, in which the visible imaging device includes a semiconductor substrate whose thickness lies within the region of from 5 to 100 μm.
6. A detector as claimed in any one of claims 1 to 5, including a beam splitter to produce two images of the scene on the sensors which are spatially separated.
7. A detector as claimed in any one of claims 1 to 5, in which the sensors are superimposed on each other.
8. A detector as claimed in any one of claims 1 to 7, in which the sensors are arranged so that the columns of their respective arrays extend at orthogonal orientations relative to the respective images.
9. A detector as claimed in any one of claims 1 to 8, in which the sensors include semiconductor substrates.
10. A detector as claimed in claim 9, in which the sensors are CCD sensors.
11. A detector as claimed in claim 10, in which the CCD sensors are full-frame CCDs.
12. A detector as claimed in claim 10 or claim 11, in which the CCDs are EMCCDs.
13. A detector as claimed in any one of claims 10 to 12, including means to correct pulse spacing information from the read-out means for relative movement of the spot and sensors using the location of the spot determined by the sensors.
14. A detector as claimed in any one of claims 10 to 13, in which the clocking arrangement is operative to transfer each row of data to the read-out means and read it out in a time within the range of from 1 microsecond to 200 microseconds.
15. A detector as claimed in claim 14, in which clocking arrangement is operative to transfer each row of data to the read-out means and read it out in a time within the range of from 1 microsecond to 50 microseconds.
16. A detector as claimed in claim 9, in which the sensors are TDI CMOS sensors.
17. A method of determining the location of a pulsed laser spot reflected from a scene, comprising imaging the scene on a first sensor and on a second sensor, the first sensor having pixels arranged in rows and columns, and a first read-out arrangement to read-out the data, the second sensor having pixels arranged in rows and columns, and a second read-out arrangement to read-out the data, wherein the columns of the first and second sensors extend at different orientations relative to the respective images, and wherein at least one of the sensors is continuously clocked to clock the rows of data continuously to the respective read-out means, and determining the location of the spot from the respective columns in which it is detected.
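A minimal sketch (not the claimed implementation) of the localisation step in claim 17: each sensor's read-out collapses the scene onto its columns, so the brightest column on each of the two orthogonally oriented sensors yields one coordinate of the spot. The function and the sample data are illustrative assumptions.

```python
# Sketch: locate the laser spot from per-column signals of two
# orthogonally oriented sensors (one gives x, the other y).

def locate_spot(cols_sensor1: list[float], cols_sensor2: list[float]) -> tuple[int, int]:
    """Return (x, y) as the indices of the brightest column on each sensor."""
    x = max(range(len(cols_sensor1)), key=cols_sensor1.__getitem__)
    y = max(range(len(cols_sensor2)), key=cols_sensor2.__getitem__)
    return x, y

# A spot falling on column 2 of the first sensor and column 1 of the second.
xy = locate_spot([0.1, 0.2, 5.0, 0.3], [0.2, 4.0, 0.1])  # (2, 1)
```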
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0919381.4A GB2475077B (en) | 2009-11-04 | 2009-11-04 | A Detector for Determining the Location of a Pulsed Laser Spot |
GB0919381.4 | 2009-11-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011055117A1 true WO2011055117A1 (en) | 2011-05-12 |
Family
ID=41501929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2010/002035 WO2011055117A1 (en) | 2009-11-04 | 2010-11-04 | Detector |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2475077B (en) |
WO (1) | WO2011055117A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4481539A (en) * | 1981-03-10 | 1984-11-06 | Rca Corporation | Error correction arrangement for imagers |
JPH0716744A (en) | 1993-06-17 | 1995-01-20 | Nippon Steel Corp | Method and device for photographing arc welding |
US5528294A (en) | 1992-08-31 | 1996-06-18 | Samsung Electronics Co., Ltd. | Method for eradicating smear in a charge-coupled device camera |
US6288383B1 (en) | 1999-10-25 | 2001-09-11 | Rafael-Armament Development Authority Ltd. | Laser spot locating device and system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3108702B2 (en) * | 1992-06-08 | 2000-11-13 | 旭光学工業株式会社 | Imaging device |
DE19613394C1 (en) * | 1996-04-03 | 1997-10-02 | Siemens Ag | Image acquisition system and method for image acquisition |
2009-11-04: GB GB0919381.4A patent/GB2475077B/en not_active Expired - Fee Related
2010-11-04: WO PCT/GB2010/002035 patent/WO2011055117A1/en active Application Filing
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8736924B2 (en) | 2011-09-28 | 2014-05-27 | Truesense Imaging, Inc. | Time-delay-and-integrate image sensors having variable integration times |
US8964088B2 (en) | 2011-09-28 | 2015-02-24 | Semiconductor Components Industries, Llc | Time-delay-and-integrate image sensors having variable intergration times |
US9049353B2 (en) | 2011-09-28 | 2015-06-02 | Semiconductor Components Industries, Llc | Time-delay-and-integrate image sensors having variable integration times |
US9503606B2 (en) | 2011-09-28 | 2016-11-22 | Semiconductor Components Industries, Llc | Time-delay-and-integrate image sensors having variable integration times |
WO2013079900A1 (en) | 2011-12-01 | 2013-06-06 | E2V Technologies (Uk) Limited | Detector |
GB2497410A (en) * | 2011-12-01 | 2013-06-12 | E2V Tech Uk Ltd | Detector comprising a CCD sensor |
CN104247403A (en) * | 2011-12-01 | 2014-12-24 | E2V技术(英国)有限公司 | Detector |
GB2497410B (en) * | 2011-12-01 | 2015-09-30 | E2V Tech Uk Ltd | Detector |
US9380238B2 (en) | 2011-12-01 | 2016-06-28 | E2V Technologies (Uk) Limited | Detector for determining the location of a laser spot |
CN104247403B (en) * | 2011-12-01 | 2019-05-17 | E2V技术(英国)有限公司 | Detector |
Also Published As
Publication number | Publication date |
---|---|
GB0919381D0 (en) | 2009-12-23 |
GB2475077B (en) | 2015-07-22 |
GB2475077A (en) | 2011-05-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10779837 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 10779837 Country of ref document: EP Kind code of ref document: A1 |