WO2005076037A1 - Gated imaging - Google Patents

Gated imaging Download PDF

Info

Publication number
WO2005076037A1
WO2005076037A1 PCT/IL2005/000085 IL2005000085W WO2005076037A1 WO 2005076037 A1 WO2005076037 A1 WO 2005076037A1 IL 2005000085 W IL2005000085 W IL 2005000085W WO 2005076037 A1 WO2005076037 A1 WO 2005076037A1
Authority
WO
WIPO (PCT)
Prior art keywords
pulse
sensor
energy
time
range
Prior art date
Application number
PCT/IL2005/000085
Other languages
French (fr)
Inventor
Ofer David
Shamir Inbar
Original Assignee
Elbit Systems Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from IL16022004A external-priority patent/IL160220A0/en
Priority claimed from IL16509004A external-priority patent/IL165090A0/en
Application filed by Elbit Systems Ltd. filed Critical Elbit Systems Ltd.
Priority to CA2554955A priority Critical patent/CA2554955C/en
Publication of WO2005076037A1 publication Critical patent/WO2005076037A1/en
Priority to IL177078A priority patent/IL177078A0/en
Priority to US11/496,031 priority patent/US8194126B2/en

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S17/18Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves wherein range gates are used
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4868Controlling received signal intensity or exposure of sensor

Definitions

  • the disclosed technique relates to optical observation systems in general, and to a method and system for imaging using the principle of gated imaging with active illumination, in particular.
  • Target detection and identification using an imaging system that includes a camera is known in the art.
  • a camera often requires a high level of sensitivity to light for use in poor visibility conditions.
  • a long focal lens is commonly employed to achieve high optical magnification.
  • the low intensity of light reflected from a target, received by a camera used in an imaging system results in low quality image resolution.
  • such a camera cannot produce an image with an adequate signal-to-noise ratio to exploit the total resolution capability of the camera, and to discern fine details of an imaged target for identification purposes. Therefore, when imaging during night or in poor visibility conditions, such cameras require an auxiliary light source to illuminate a target and thereby improve the quality of the captured image.
  • the auxiliary light source may be a laser device capable of producing a light beam that is parallel to the line-of-sight (LOS) of the camera, and that illuminates the field-of-view (FOV) of the camera or a part thereof.
  • LOS line-of-sight
  • FOV field-of-view
  • television systems in general, use a similar illumination method for adequate imaging.
  • long focal lenses in general, have a limited light collecting capability due to their high / number.
  • a high f number reduces the capability of a lens to collect enough photons to generate an adequate image, as compared to lenses with small / numbers.
  • An inherent problem in optical observation systems is the effect inclement weather conditions, such as humidity, haze, fog, mist, smoke and rain, have on the image produced.
  • Particles or substances in the atmosphere may be associated with certain weather conditions. For example, haze results from aerosols in the air. These atmospheric particles or substances may obstruct the area between an observation system and a target to be observed. A similar case may result when an observation system operates in media other than air. For example, in underwater observations, the scattering of water particles, or of air particles above the water, may obstruct the area between an observation system and a target to be observed.
  • the interference of particles or substances in the medium between a system and a target can cause backscatter of the light beam. This is especially true when an auxiliary light source is used to illuminate a target at night, particularly if the illuminating source is located near the camera.
  • the backscatter of the light beam results in blinding of a camera used in an observation system, especially if the camera has a high level of sensitivity, like an Intensified CCD (ICCD).
  • ICCD Intensified CCD
  • the blinding of the camera reduces the contrast of an imaged target relative to the background. This blinding of the camera is referred to as self-blinding because it is partly caused by the observation system itself.
  • contrast reduction significantly lowers the achievable range of imaging and target, or object, detection and identification, with respect to the attainable detection and identification range in daylight conditions.
  • the imaging sensor of a camera may need to be synchronized with respect to the time that the reflected light from the light illuminated target is due to be received by photodetectors located on the observation system.
  • a laser generates short light pulses at a certain frequency.
  • the imaging sensor of the camera is activated at the same frequency, but with a time delay that is related to the frequency.
  • the light beam generated by the laser impinges on the target, and illuminates the target and the surrounding area.
  • the receiving assembly of the imaging sensor of the camera is deactivated. A small part of the light is reflected from the target back towards the camera, which is activated as this reflected light reaches the camera.
  • the camera switches from an "off” state to an “on” state in a synchronized manner with the time required for the pulse to travel to the target and return. After the light reflected from the target has been received and stored, the camera reverts to an "off” state, and the system awaits transmission of the following light pulse. This cycle is repeated at a rate established in accordance with the range from the camera to the target, the speed of light in the observation medium, and the inherent limitations of the laser device and the camera. This technique is known as gated imaging with active illumination to minimize backscatter.
  • US Patent 5,408,541 to Sewell entitled “Method and System for Recognizing Targets at Long Ranges”, is directed to a method and system for recognizing targets at ranges near or equal to ranges at which they are initially detected.
  • a detect sensor such as a radar system or a thermal imaging sensor, detects a target relative to a sensor platform.
  • the detect sensor determines a set of range parameters, such as target coordinates from the sensor platform to the target.
  • the detect sensor transfers the set of range parameters to a laser-aided image recognition sensor (LAIRS).
  • LAIRS uses the set of range parameters to orient the system to the angular location of the target.
  • a laser source illuminates the area associated with the range parameters with an imaging laser pulse to generate reflected energy from the target.
  • a gated television sensor receives the reflected energy from the illuminated target, and highly magnifies and images the reflected energy. The image is then recognized by either using an automatic target recognition system, displaying the image for operator recognition, or both.
  • Sewell requires a preliminary range measurement. Before the laser source illuminates the target, the laser source directs a low power measurement laser pulse toward the target to measure the range between the system and the target. The range sets a gating signal for the gated television sensor. The gated television sensor is gated to turn on only when energy is reflected from the target. It is also noted that the measuring line to the target of the laser ranger must be parallel, in a very accurate manner, to the LOS of the observation system.
  • an imaging system having a transmission source, the transmission source providing at least one energy pulse.
  • the system includes a sensor for receiving pulse reflections of the at least one energy pulse reflected from objects within a depth of a field to be imaged, the depth of field having a minimal range (RMIN)-
  • RMIN minimal range
  • the sensor is enabled to gate detection of the pulse reflections, with a gate timing which is controlled such that the sensor starts to receive the pulse reflections after a delay timing substantially given by the time it takes the at least one energy pulse to reach the minimal range and complete reflecting back to the sensor from the minimal range.
  • the at least one energy pulse and the gate timing are controlled for creating a sensitivity as a function of range for the system, such that an amount of received energy of the pulse reflections, reflected from objects located beyond the minimal range, progressively increases with the range along the depth of a field to be imaged. According to one embodiment, this is provided through synchronization between the timing of the at least one energy pulse and the timing of the gate detection.
  • the at least one energy pulse and the gate timing may be controlled for creating a sensitivity as a function of range for the system, such that an amount of received energy of the pulse reflections, reflected from objects located beyond the minimal range, progressively increases with the range along the depth of a field to be imaged.
  • the amount of received energy of the pulse reflections may increase progressively until an optimal range (R 0 ), be maintained detectable, be substantially constant, or decrease gradually until a maximal range (RMAX), or is directly proportional to the ranges of the objects to be imaged.
  • the at least one energy pulse defines a substantial pulse width (T L ASER) commencing at a start time (T 0 ), and the delay timing is substantially given by the time elapsing from the start time (T 0 ) until twice the minimal range (RM I N) divided by the speed at which the at least one energy pulse travels (v), in addition to the pulse width (T AS E R ) according to the formula: * LASER
  • the at least one energy pulse defines a substantial pulse width (T A SER), a pulse pattern, a pulse shape, and a pulse energy
  • the sensor is enabled to gate detection of the pulse reflections, with a gating time span the sensor is activated (T O N), a duration of time the sensor is deactivated (T 0F F).
  • a synchronization timing of the gating with respect to the at least one energy pulse and at least one of the delay timing, the pulse width, the pulse shape, the pulse pattern, the pulse energy, the gating time span the sensor is activated (T O N), the duration of time the sensor is deactivated (T 0FF ), and the synchronization timing, is determined according to at least one of the depth of a field to be imaged, specific environmental conditions the system is used in, a speed the system is moving at if the system is mounted on a moving platform, and specific characteristics of different objects expected to be found in the depth of field.
  • the pulse width, the duration of time the sensor is deactivated, and the gating time span the sensor is activated may define a cycle time, wherein the at least one energy pulse is provided for a duration of the pulse width, the opening of the sensor is delayed for a duration of the duration of time the sensor is deactivated, and the pulse reflections are received for a duration of the gating time span the sensor is activated.
  • the determination according to at least one of the depth of a field, the specific environmental conditions, the speed the system is moving at if the system is mounted on a moving platform, and the specific characteristics of different objects expected to be found in the depth of field, is preferably a dynamic determination, such as varying in an increasing or decreasing manner over time.
  • the pulse width and the gating time span are limited to reduce the sensitivity of the system to ambient light sources.
  • the pulse width is shortened progressively, the delay timing is lengthened progressively, with the cycle time not changing.
  • the gating time span is shortened progressively, the delay timing is lengthened progressively, with the cycle time not changing.
  • the pulse width and the gating time span are shortened progressively, the delay timing is lengthened progressively, with the cycle time not changing.
  • the gating of the sensor is utilized to create a sensitivity as a function of range for the system by changing a parameter such as changing the shape of the at least one energy pulse, changing the pattern of the at least one energy pulse, changing the energy of the at least one energy pulse, changing a gating time span the sensor is activated (T 0N ), changing a duration of time the sensor is deactivated (T OFF ), changing a pulse width (T S ER ) of the at least one energy pulse, changing the delay timing, and changing a synchronization timing between the gating and the timing of providing the at least one energy pulse.
  • a parameter such as changing the shape of the at least one energy pulse, changing the pattern of the at least one energy pulse, changing the energy of the at least one energy pulse, changing a gating time span the sensor is activated (T 0N ), changing a duration of time the sensor is deactivated (T OFF ), changing a pulse width (T S ER ) of the at least one energy pulse, changing the delay timing
  • the changing of a parameter may be utilized according to at least one of: the depth of field, specific environmental conditions the system is used in, a speed the system is moving at if the system is mounted on a moving platform, and characteristics of different objects expected to be found in the depth of field.
  • a controller for controlling the synchronization is provided, preferably wherein at least one repetition of the cycle time forms part of an individual video frame, and a number of the repetitions forms an exposure number per video frame. Furthermore, preferably, a control mechanism for dynamically controlling and varying the exposure number is also provided. Mutual blinding between the system and a similar system passing one another is optionally eliminated by statistical solutions such as lowering the exposure number, a random or pre-defined change in the timing of the cycle time during the course of the individual video frame, and a change in frequency of the exposure number.
  • Mutual blinding between the system and a similar system passing one another may also be eliminated by synchronic solutions such as establishing a communication channel between the system and the similar system, letting each of the system and the similar system go into listening modes from time to time in which the at least one energy pulse is not emitted for a listening period. After the listening period, any of the system and the similar system resumes emitting the at least one energy pulse if no pulses were collected during the listening period, and after which period the system and the similar system wait until an end of a cyclic sequence before resuming emitting the at least one energy pulse if pulses were collected during the listening period. Furthermore, having the systems change a pulse start transmission time in the individual video frames.
  • the exposure number may be varied by the control mechanism according to a level of ambient light.
  • An image intensifier may be applied, in which case the exposure number may be varied by the control mechanism according to a level of current consumed by the image intensifier.
  • the control mechanism may include image processing means for locating areas in the sensor in a state of saturation, and image processing means for processing a variable number of exposures.
  • image processing means may be utilized to take at least two video frames, one with a high exposure number, the other with a low exposure number, where the exposure numbers of the at least two video frames are determined by the control mechanism.
  • the at least two video frames are combined to form a single video frame by combining dark areas from frames with a high exposure number and saturated areas from frames with a low exposure number.
  • a pulse width (T LAS ER) o the at least one energy pulse is substantially defined in 2x Ro "MIN accordance with the following equation: v J , where v is the speed at which the at least one energy pulse travels.
  • the at least one energy pulse may include several pulses wherein the sensor receives several pulses of the at least one energy pulse reflected from at least one object during the gating time span the sensor is activated.
  • the sensor may be enabled to gate detection of the pulsed reflections, with a gating time span the sensor is activated (T 0N ), and a duration of time the sensor is deactivated (T 0FF ), which are substantially
  • the senor is enabled to gate detection of the pulse reflections in accordance with a Long Pulsed Gated Imaging (LPGI) timing technique.
  • the sensor may also be enabled to gate detection of the pulse reflections with a gating time span the sensor is activated (T 0N ), and a duration of time the sensor is deactivated (TQ FF ), which are substantially
  • v is the speed at which the at least one energy pulse travels.
  • the at least one energy pulse may be in the form of electromagnetic energy or mechanical energy.
  • the sensor may be a Complementary Metal Oxide Semiconductor (CMOS), a Charge Coupled Device (CCD), a Gated
  • CMOS Complementary Metal Oxide Semiconductor
  • CCD Charge Coupled Device
  • Gated CMOS
  • GICID Intensifier Charge Injection Device
  • GICCD Gated Intensified CCD
  • GIAPS Gated Intensified Active Pixel Sensor
  • the sensor may further include an external shutter, at least one photodetector, and may also be enabled to autogate.
  • a display apparatus for displaying images constructed from the light received in the sensor may also be used, for example, a Head Up Display (HUD) apparatus, an LCD display apparatus, a planar optic apparatus, and a holographic based flat optic apparatus.
  • HUD Head Up Display
  • a storage unit for storing images constructed from the pulse reflections received in the sensor may be provided, as well as a transmission device for transmitting images constructed from the pulse reflections received in the sensor.
  • the system may be mounted on a moving platform, and stabilized.
  • Stabilization may preferably include stabilization using a gimbals, stabilization using feedback from a gyroscope to a gimbals, stabilization using image processing techniques, based on a spatial correlation between consecutively generated images of the object to be imaged, and stabilization based on sensed vibrations of the sensor.
  • the system includes at least one ambient light sensor.
  • a pulse detector for detection of pulses emitting from a similar system approaching may be provided, an image-processing unit may be added, a narrow band pass filter may be functionally connected to the sensor, and a spatial modulator shutter, or a spatial light modulator, may be provided.
  • an optical fiber for transmitting the at least one energy pulse towards the objects to be imaged may be added.
  • a polarizer for filtering out incoming energy which does not conform to the polarization of the pulse reflection, emitted from the transmission source providing the at least one polarized energy pulse, may be provided.
  • the sensitivity of the system relates to a gain and responsiveness of the sensor in proportion to an amount of energy received by the sensor, wherein the gain received by the sensor as a function of range R is defined by the follow convolution formula:
  • a value for radiant intensity may be obtained by multiplying the convolution formula by a geometrical propagation attenuation function.
  • the transmission device may be a laser generator, an array of diodes, an array of LEDs, and a visible light source.
  • an imaging method including emitting at least one energy pulse to a target area, receiving at least one reflection of the at least one energy pulse reflected from objects within a depth of a field to be imaged, the depth of field having a minimal range (RMI N ), the receiving includes gating detection of the at least one reflection such that the at least one energy pulse is detected after a delay timing substantially given by the time it takes the at least one energy pulse to reach the minimal range and complete reflecting back, and progressively increasing the received energy of the at least one reflection reflected from objects located beyond the minimal range along the depth of a field to be imaged, by controlling the at least one energy pulse and the timing of the gating.
  • the procedure of increasing includes increasing the received energy of the at least one reflection reflected from objects located beyond the minimal range along the depth of a field to be imaged up to an optimal range (R 0 ). Furthermore, the received energy of the at least one reflection reflected from objects located beyond the optimal range is maintained detectable along the depth of a field to be imaged up to a maximal range (R MAX )- This may be achieved by maintaining the received energy, of the at least one reflection reflected from objects located beyond the optimal range along the depth of a field to be imaged up to the maximal range, substantially constant, by gradually decreasing the received energy, or by increasing the received energy of the at least one reflection in direct proportion to the ranges of the objects within the depth of field to be imaged.
  • R MAX maximal range
  • the at least one energy pulse defines a substantial pulse width (T LAS ER) commencing at a start time (T 0 ), and the delay timing is substantially given by the time elapsing from the start time (T 0 ) until twice the minimal range divided by the speed at which the at least one energy pulse travels (v), in addition to the pulse width (TL ASER ):
  • the at least one energy pulse defines a substantial pulse width (T LASER ), a pulse pattern, a pulse shape, and a pulse energy.
  • the procedure of gating includes a gating time span a sensor utilized for the receiving is activated (T 0N ), a duration of time the sensor is deactivated (T 0 F F ), and a synchronization timing of the gating with respect to the at least one energy pulse.
  • At least one of the delay timing, the pulse width, the pulse shape, the pulse pattern, the pulse energy, the gating time span the sensor is activated (T 0N ), the duration of time the sensor is deactivated (T 0FF ), and the synchronization timing is determined according to at least one of the depth of a field, specific environmental conditions the method is used in, a moving speed of a moving platform if the sensor is mounted on the moving platform, and specific characteristics of different objects expected to be found in the depth of field.
  • the method further includes the procedure of autogating.
  • the procedure of controlling includes progressively changing at least one parameter such as changing a pattern of the at least one energy pulse, changing a shape of the at least one energy pulse, changing the energy of the at least one energy pulse, changing a gating time span a sensor utilized for the receiving is activated (T 0N ), changing a duration of time the sensor is deactivated (T 0FF ), changing an energy pulse width (T LASER ) of the at least one energy pulse, changing the delay timing, and changing a synchronization timing between the gating and the emitting.
  • at least one parameter such as changing a pattern of the at least one energy pulse, changing a shape of the at least one energy pulse, changing the energy of the at least one energy pulse, changing a gating time span a sensor utilized for the receiving is activated (T 0N ), changing a duration of time the sensor is deactivated (T 0FF ), changing an energy pulse width (T LASER ) of the at least one energy pulse, changing the delay timing, and changing a synchronization timing
  • the procedure of controlling may also include changing the at least one parameter according to at least one of the depth of field, the specific environmental conditions the method is used in, the moving speed of the moving platform if the sensor is mounted on the moving platform, and characteristics of different objects expected to be found in the depth of field.
  • the procedure of controlling may further include the sub-procedures of providing the at least one energy pulse for a duration of the pulse width, delaying the opening of the sensor for a duration of the time the sensor is deactivated (T 0FF ), and receiving energy pulses reflected from objects for a duration of the gating time span the sensor is activated (T ON )-
  • the pulse width, the duration of the time the sensor is deactivated (T 0FF ) and the gating time span the sensor is activated (T 0N ) may define a cycle time.
  • at least one repetition of the cycle time may form part of an individual video frame, and a number of repetitions may form an exposure number for the video frame.
  • the method may further include the procedure of eliminating mutual blinding between a system using the method and a similar system using the method, passing one another, by statistical solutions such as lowering the exposure number, a random or pre-defined change in the timing of the cycle time during the course of an individual video frame, and a change in the frequency of the exposure number.
  • the method may further include the procedure of eliminating mutual blinding between a system using the method and a similar system using the method passing one another, by synchronic solutions such as establishing a communication channel between the system and the similar system, letting each of the system and the similar system go into listening modes from time to time in which the at least one energy pulse is not emitted for a listening period.
  • any of the system and the similar system resume emitting the at least one energy pulse if no pulses were collected during the listening period, and after which period the system and the similar system wait until an end of a cyclic sequence before resuming emitting the at least one energy pulse if pulses were collected during the listening period. Furthermore, having the systems change a pulse start transmission time in the individual video frames.
  • the exposure number may be dynamically varied by a control mechanism, such as by adjusting the exposure number according to a level of ambient light, or adjusting the exposure number by the control mechanism according to a level of current consumed by an image intensifier utilized for intensifying the detection of the at least one reflection.
  • the method also includes image processing by locating areas in the sensor in a state of saturation by the control mechanism. The image processing may be applied for a variable number of exposures by the control mechanism.
  • the image processing can include taking at least two video frames, one with a high exposure number, the other with a low exposure number, by image processing of a variable number of exposures, determining exposure numbers for the at least two video frames, and combining frames to form a single video frame by combining dark areas from frames with a high exposure number and saturated areas from frames with a low exposure number.
  • the pulse width and the gating time span the sensor is activated (T 0 N) are limited to eliminate or reduce the sensitivity of the sensor to ambient light sources.
  • the procedure of increasing is dynamic, such as by varying the sensitivity of the sensor in a manner varying over time such as in an increasing, a decreasing, a partially increasing and a partially decreasing manner over time.
  • the procedure of controlling may include shortening the pulse width progressively and lengthening the delay timing progressively, while retaining a cycle time of the gating unchanged, shortening the gating time span progressively and lengthening the delay timing progressively, while retaining a cycle time of the gating unchanged, or shortening the pulse width and the gating time span progressively, lengthening the delay timing progressively, while retaining a cycle time of the gating unchanged.
  • the method includes the procedure of calculating the energy pulse width (T LAS ER), substantially defined in accordance with the
  • the procedure of receiving may include receiving several pulses of the at least one energy pulse reflected from objects during a gating time span a sensor utilized for the receiving is activated (T 0N )-
  • the gating may also include a duration of time the sensor is deactivated (T 0FF )
  • the controlling may include controlling the gating time span the sensor is activated T 0N and a duration of time the sensor is deactivated T 0 FF, substantially defined in accordance with the following equations:
  • the gating may include gating in accordance with a Long Pulsed Gated Imaging (LPGI) timing technique, such as when a gating time span a sensor utilized for the receiving is activated (T 0N ), and a duration of time the sensor is deactivated (T 0FF ), are substantially
  • LPGI Long Pulsed Gated Imaging
  • the procedure of emitting may include emitting at least one energy pulse in the form of electromagnetic energy or mechanical energy, and generating the at least one energy pulse by an emitter such as a laser generator, an array of diodes, an array of LEDs, or a visible light source.
  • an emitter such as a laser generator, an array of diodes, an array of LEDs, or a visible light source.
  • the gating may include gating by a sensor, such as a Complementary Metal Oxide Semiconductor (CMOS), a Charge Coupled Device (CCD), a Gated Intensifier Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), and a Gated Intensified Active Pixel Sensor (GIAPS), and gating with a CCD sensor that includes an external shutter.
  • CMOS Complementary Metal Oxide Semiconductor
  • CCD Charge Coupled Device
  • GICID Gated Intensifier Charge Injection Device
  • GICCD Gated Intensified CCD
  • GIAPS Gated Intensified Active Pixel Sensor
  • the method further includes the procedure of intensifying the detection of the at least one reflection, by intensifying the at least one reflection with a gated image intensifier or with a sensor with shutter capabilities.
  • the method also includes displaying at least one image constructed from the received at least one reflection.
  • the displaying may be on a display apparatus, for example, a Head Up Display (HUD), an LCD display, a planar optic display, and a holographic based flat optic display.
  • HUD Head Up Display
  • LCD liquid crystal display
  • planar optic display a planar optic display
  • holographic based flat optic display a holographic based flat optic display.
  • the method also includes storing or transmitting at least one image constructed from the received at least one reflection.
  • the method may also include determining the level of ambient light in the target area, determining if other energy pulses are present in the target area, filtering received energy pulse reflections using a narrow band pass filter, and overcoming glare from other energy pulses by locally darkening the entrance of an image intensifier utilized for the intensifying by using apparatuses such as a spatial modulator shutter, a spatial light modulator, or a liquid crystal display.
  • the procedure of emitting includes emitting at least one polarized electromagnetic pulse
  • the procedure of receiving includes filtering received energy according to a polarization conforming to an expected polarization of the at least one reflection.
  • Figure 1 is a schematic illustration of the operation of a system, constructed and operative in accordance with an embodiment of the disclosed technique
  • Figure 2A is a schematic illustration of a laser pulse propagating through space
  • Figure 2B is a schematic illustration of a laser pulse propagating towards, and reflecting from, an object
  • Figure 3 is a graph depicting gated imaging of both a laser and a sensor as a function of time
  • Figure 4 is a typical sensitivity graph, normalized to 1 , depicting sensitivity of a gated sensor as a function of the range between the sensor and a target;
  • Figure 5 is a graph depicting timing adjustments relating to the pulse width of a laser beam, as a function of time
  • Figure 6 is a graph depicting the observation capability of a system with the timing technique depicted in Figure 5, as a function of range;
  • Figure 7 is a graph depicting a specific instant in time in relation to the scenario depicted in Figure 6, as a function of range;
  • Figure 8 is a graph depicting a specific instant in time after the specific time instant depicted in Figure 7, as a function of range;
  • Figure 9 is a sensitivity graph as a function of range, normalized to 1 , depicting the sensitivity of a gated sensor, in accordance with the timing technique depicted in Figure 5;
  • Figure 10 is a sensitivity graph as a function of range, normalized to 1 , depicting the sensitivity of a gated sensor, in accordance with a long pulse gated imaging timing technique;
  • Figure 11 is a graph depicting the radiant intensity captured by a sensor from reflections from a target and from backscatter, as a function of the range between the sensor and the target, for both a gated and a non- gated sensor, during a simulation;
  • Figure 12 is an intensity graph as a function of time, normalized to 1 , depicting adjustment of the intensity shape or pattern of a laser pulse;
  • Figure 13 is an intensity graph as a function of range, normalized to 1 , depicting the advancement of the intensity shaped or patterned laser pulse depicted in Figure 12;
  • Figure 14 is a sensitivity graph as a function of range, normalized to 1 , depicting the sensitivity of a gated sensor, in accordance with the laser shaping technique depicted in Figure 12;
  • Figure 15 is a graph depicting the sequence of pulse cycles and the collection of photons over an individual field, as a function of time
  • Figure 16 is a graph depicting a timing technique where a laser pulse width is changed dynamically over the course of obtaining an individual frame, as a function of time;
  • Figure 17 is a graph depicting a timing technique where a duration that a sensor unit is activated is changed dynamically over the course of obtaining an individual frame, as a function of time;
  • Figure 18 is a graph depicting a timing technique where both a laser pulse width and a duration that a sensor unit is activated are changed dynamically over the course of obtaining an individual frame, as a function of time;
  • Figure 19 is a graph depicting timing adjustments during the process of obtaining an individual video field, where a total of 6666 exposures are performed, as a function of time;
  • Figure 20 is a graph depicting timing adjustments during the process of obtaining an individual video field, where a total of 100 exposures are performed, as a function of time;
  • Figure 21 is a pair of graphs depicting timing adjustments during the process of obtaining an individual video field, both as a function of time, where the number of exposures in a field is controlled based on an image processing technique;
  • Figure 22 is a schematic illustration of the two image frames acquired in Figure 21 , and the combination of the two frames;
  • Figure 23 which is a pair of graphs depicting a synchronization technique for overcoming mutual blinding, both as a function of time;
  • Figure 24 is a block diagram of a method for target detection and identification, accompanied by an illustration of a conceptual operation scenario, operative in accordance with another embodiment of the disclosed technique;
  • Figure 25 is a schematic illustration of a system, constructed and operative in accordance with another embodiment of the disclosed technique.
  • Figure 26 is a schematic illustration of a system, constructed and operative in accordance with a further embodiment of the disclosed technique.
  • the disclosed technique provides methods and systems for target or object detection, identification and imaging, using optical observation techniques based on the gated imaging principle with active illumination.
  • the disclosed technique is applicable to any kind of imaging in any range scale, including short ranges on the order of hundreds of meters, and also extremely short ranges, such as ranges on the order of centimeters, millimeters and even smaller units of measurement, for industrial and laboratorial applications.
  • target or object refer to any object in general, and although the disclosed technique described herein is with reference to "detection and identification", it is equally applicable to any kind of image acquisition for any purpose, such as picturing, filming, acquiring visual information, and the like.
  • any suitable pulsed emission of electromagnetic energy radiation may be used, including light in the visible and non-visible spectrum, UV, near and far IR, radar, microwave, RF, gamma or other photon radiation, and the like.
  • other pulsed sources of energy may be used, including mechanical energy such as acoustic waves, ultrasound, and the like.
  • the disclosed technique provides for manipulation of the sensitivity and image gain of a gated sensor, as a function of the imaged depth of field, by changing the width of the transmitted laser pulses, by changing the state of the sensor in a manner relating to the distance to the target, by adjusting the number of exposures in a gating cycle, by synchronization of the sensor to the pulse timing, and by other factors.
  • Transmitted or emitted pulses, pulsed energy, pulsed beam and the like refer to at least one pulse, or to a beam of pulses emitted in series.
  • the disclosed technique allows for dynamic imaging and information gathering in real-time.
  • the optical observation system is mounted on a moving platform, for example, a vehicle, such as a military aircraft.
  • polarized light or electromagnetic radiation is employed, thereby providing for filtering out excessive ambient light of undesired reflections from background objects.
  • the transmitting source emits a polarized pulse which reflects from the target objects as a polarized pulse reflection.
  • a polarization filter or polarizer allows into a reflections sensor only incoming energy that conforms to the pulse reflections expected polarization. Most objects would reflect the original polarization of the emitted pulse but some reflective objects may alter such polarization.
  • FIG. 1 is a schematic illustration of the operation of a system, generally referenced 100, constructed and operative in accordance with an embodiment of the disclosed technique.
  • System 100 includes a laser device 102, and a sensor unit 104.
  • Laser device 102 generates a laser beam 106 in the form of a single pulse or a series of continuous pulses.
  • Laser device 102 emits laser beam 106 toward a target 108.
  • Laser beam 106 illuminates target 108.
  • Sensor unit 104 may be a camera, or any other sensor or light collecting apparatus.
  • Sensor unit 104 receives reflected laser beam 110 reflected from target 108.
  • Sensor unit 104 includes at least one photodetector (not shown) for processing and converting received reflected light 110 into an image 112 of the target.
  • Sensor unit 104 may also include an array of photodetectors.
  • Sensor unit 104 may be in one of two states. During an "on" state, sensor unit 104 receives incoming light, whereas during an "off” state sensor unit 104 does not receive incoming light. In particular, the shutter (not shown) of sensor unit 104 is open during the "on" state and closed during the "off” state.
  • Image 112 may be presented on a display, such as a video or television display.
  • the display may be a Head-Up Display (HUD), a Liquid Crystal Display (LCD), a display implemented with a planar optic apparatus, a holographic based flat optic display, and the like.
  • Image 112 may also be stored on a storage unit (not shown), or transmitted by a transmission unit (not shown) to another location for processing.
  • a controller (not shown) controls and synchronizes the operation of sensor unit 104. It is noted that sensor unit 104 can also be enabled to autogate.
  • autogating refers to the automatic opening and closing of the sensor shutter according to the intensity of light received. Autogating is prevalently applied for purposes such as blocking exposure of the sensor unit to excessive light, and has no direct connection to active transmission of pulses, their timing, or their gating.
  • Atmospheric conditions and substances such as humidity, haze, fog, mist, smoke, rain, airborne particles, and the like, represented by zone 114, exist in the surrounding area of system 100.
  • Backscatter from the area in the immediate proximity to system 100 has a more significant influence on system 100 than backscatter from a further distanced area.
  • an interfering particle relatively close to system 100 will reflect back a larger portion of beam 106 than a similar particle located relatively further away from system 100.
  • the area proximate to system 100 from which the avoidance of backscattered light is desirable can be defined with an approximate range designated as R I N-
  • the target is not expected to be located within range R MI N, therefore the removal of the influences of atmospheric conditions or other interfering substances in this range from the captured image is desirable.
  • Such atmospheric conditions and substances can also be present beyond R M
  • Sensor unit 104 is deactivated for the duration of time that laser beam 106 has completely propagated a distance R MIN toward target 108 including the return path to sensor unit 104 from distance R M)N .
  • Range R M N is the minimum range for which sensor unit 104 is deactivated.
  • the distance between system 100 and target 108 is designated range R MAX - It is noted that target 108 does not need to be located at a distance of range R MAX , and can be located anywhere between range R IN and range R MAX - Range R MAX represents the maximal range in which target 108 is expected to be found, as the exact location of target 108 is not known when system 100 is initially used.
  • the disclosed technique provides for the manipulation of the sensitivity and image gain of a gated sensor, it is useful to illustrate how a laser pulse propagates through space. It is also useful to illustrate how a laser pulse propagates towards, and reflects from, an object.
  • Laser pulse 116 emanates from laser device 115, and travels in the direction of arrow 121. Arrow 121 points towards an increase in range.
  • Laser pulse 116 can be considered a train of small packets of energy 117, with each packet "connected" to the next, much like box cars of a real train are connected to one another.
  • the first packet of energy of laser pulse 116 is referred to as head packet 118.
  • the last packet of energy of laser pulse 116 is referred to as tail packet 119.
  • Head packet 118 is coloured black and tail packet 119 is coloured gray for purposes of clarity only.
  • laser pulse 116 is made up of small packets of energy 117, and each packet of energy, at a particular instant, is located at a particular point in space, then laser pulse 116 can be described as having a specified length, spanning from the location where tail packet 119 is located up to the location where head packet 118 is located.
  • the front part of laser pulse 116, where head packet 118 is located, can therefore be referred to as the "head” of the laser pulse
  • the back part of laser pulse 116, where tail packet 119 is located can therefore be referred to as the "tail” of the laser pulse.
  • the middle part of laser pulse 116 namely, the packets located between the head and the tail of the laser pulse, can be referred to as the "body" of the laser pulse.
  • the length of laser pulse 116 can also be described temporally, in terms of how much time laser device 115 is activated, in order to generate enough packets of energy, and to let the packets of energy propagate through space, to cover the range extended by laser pulse 116.
  • FIG. 2B is a schematic illustration of a laser pulse propagating towards, and reflecting from, an object.
  • laser pulses are emitted from laser device 122, and are received by sensor unit 124.
  • Figure 2B illustrates a particular instant in time when various laser pulses, emitted from laser device 122 at different times, are either, propagating towards object 125, reflecting from object 125, or passing object 125.
  • the head of a laser pulse has been coloured black
  • the tail of a laser pulse has been coloured gray.
  • laser pulse 123A is still being generated, as only the head, and part of the body, of laser pulse 123A, has been generated.
  • the tail, and the rest of the body, of laser pulse 123A has not yet been generated.
  • Laser pulse 123A propagates in the direction of arrow 127A towards object 125.
  • Laser pulse 123B is a full, or complete, laser pulse, which propagates in the direction of arrow 127B towards object 125. It is noted that laser pulse 123B has a head, tail and body.
  • Laser pulse 123C has already partially impinged on object 125, as the head, as well as part of the body, of laser pulse 123C, has already impinged on object 125, and has begun to reflect back towards sensor unit 124, in the direction of arrow 127C.
  • the tail, as well as part of the body, of laser pulse 123C has not yet impinged on object 125, and is therefore still propagating away from laser device 122. It is noted that, regarding laser pulse 123C, the head portion of the laser pulse is propagating in a direction of decreased range, back towards sensor unit 124, while, simultaneously, the tail portion of the laser pulse is propagating in a direction of increased range, away from laser device 122.
  • Laser pulse 123D is a full, or complete, laser pulse, which has completely impinged upon and reflected from object 125.
  • Laser pulse 123D propagates in the direction of arrow 127D towards sensor unit 124.
  • Laser pulse 123E has already been partially received by sensor unit 124, as only the tail, and part of the body, of laser pulse 123E is depicted in Figure 2B.
  • the head, and the rest of the body, of laser pulse 123E has already been received by sensor unit 124.
  • Laser pulse 123F is a full, or complete, laser pulse which did not reflect from object 125, and propagates in the direction of arrow 127F.
  • the head, as well as part of the body, of laser pulse 123F has already passed object 125.
  • laser pulse 123F has not yet passed object 125. It is noted that laser pulse 123F was emitted at the same time laser pulse 123C was emitted. It is furthermore noted that not all the laser pulses emitted from laser device 122 will reflect from the same object or location, for example laser pulse 123C as compared to laser pulse 123F. In general, laser device 122 will emit many laser pulses, in order to illuminate an area, as it is not known in advance which objects in the path of the laser pulses will reflect the laser pulses back towards a sensor unit, and how many reflections will be received from the various ranges the laser pulses propagate through.
  • FIG. 3 is a graph, generally designated 120, depicting gated imaging as a function of time, of both a laser and a sensor.
  • a laser pulse is transmitted at time t 0 .
  • the duration of the laser pulse, or the pulse width of the laser beam (in other words, the time the laser is on), is designated T LAS ER, and extends between time t 0 and time ti.
  • T LAS ER The duration of the laser pulse, or the pulse width of the laser beam (in other words, the time the laser is on), is designated T LAS ER, and extends between time t 0 and time ti.
  • time ⁇ and time t 5 there is no transmission of a laser pulse, depicted in Figure 3 by arrows demarcating a laser off time. It is noted that the description herein refers to a square pulse for the sake of simplicity and clarity.
  • the description herein is equally applicable to a general pulse shape or pattern, in which threshold values define the effective beginning, duration and end of the pulse, rendering its analysis analogous.
  • the sensor unit is initially in the "off” state for as long as the laser pulse is emitted, between time t 0 and time ⁇ (T LASER )-
  • the sensor unit is further maintained in the "off” state between time ⁇ and time t , or during time span ⁇ t M i N -
  • the sensor unit remains in the "off” state so as not to receive reflections of the entire laser pulse (including the end portion of the pulse) from objects located within a range R MIN from the system.
  • T O F F the time the sensor unit is in an "off" state, extends from time t 0 to time t 2 .
  • the sensor unit is activated and begins receiving reflections.
  • from the system are received from photons at the rear end of the transmitted pulses which have impinged on these objects.
  • the front portion of the transmitted pulses is not detected for these objects located immediately after range RM IN -
  • the sensor unit first receives reflections from the entire width of the pulses.
  • time span between time t 2 and time t 3 is equal to T LASER -
  • the sensor unit remains in the "on" state until time t 5 .
  • T 0N the time the sensor unit is in an "on” state, extends from time t 2 to time t 5 .
  • the sensor unit still receives the full reflection of the pulses from objects located up to a range designated R ⁇ Reflections from objects beyond this range reflect progressively less portions of the laser pulse.
  • the tail portion of the reflected pulse is cut off to a greater extent, as the sensor shifts from its "on” state to its “off” state, the further away such objects are located beyond R 0 up to a maximal range designated R M AX- RMA X is the range beyond which no reflections are received at all, due to the deactivation of the sensor to its "off” state.
  • R MAX a maximal range designated R M AX- RMA X
  • the sensor unit receives reflections only from photons at the very front end of pulses whose tails are just about to pass range Ri.
  • the time span between time t 4 and time t 5 is equal to T LASE R- Time span ⁇ t A ⁇ corresponds to the time it takes a laser pulse, once it has been fully transmitted, to reach objects located at R MAX -
  • Figure 4 is a typical sensitivity graph, generally designated 130, depicting the sensitivity of the sensor unit, referred to in Figure 1 , as a function of the range between the sensor unit and a target area.
  • the vertical axis represents the relative sensitivity of the sensor unit, and has been normalized to 1.
  • the horizontal axis represents the range between the sensor unit and a target.
  • the term "sensitivity”, referred to in this context, relates to the gain or responsiveness of the sensor unit in proportion to the number of reflected photons actually reaching the sensor unit when it is active, and not to any variation in the performance of the sensor, per se.
  • Variation in the performance of the sensor has no relation to the range from which light is reflected, if the attenuation of light, due to geometrical and atmospheric considerations, is ignored.
  • the attenuation of light due to geometrical and atmospheric considerations is ignored herein for the sake of simplicity. Accordingly, the amount of received energy of the pulse reflections, reflected from objects located beyond a minimal range R MIN , progressively increases with the range along the depth of a field to be imaged.
  • Range R M N is the range up to which the full reflections from a target at this range will impinge upon sensor unit 104, referred to in Figure 1 , in a deactivated state.
  • range R MIN corresponds to the time duration between time t 0 and time t 2 .
  • Range R 0 is the range from which full reflections first arrive at sensor unit 104 while it is activated. The reflections are the consequence of the whole span of the pulse width passing in its entirety over a target located at range R 0 from sensor unit 104.
  • the distance between range R MIN and range R 0 corresponds to the time duration between time t 2 and time t 3 .
  • Range R ⁇ is the range up to which full reflections from objects can still be obtained.
  • Range R MAX is the range for which reflections, or any portion thereof, can still be obtained, i.e. the maximum range for which sensor sensitivity is high enough for detection.
  • the distance between range R-i and range R MAX corresponds to the time duration between time t 4 and time t 5 . It is noted that reflections from objects located beyond R MA X may also be received by sensor unit 104, if such targets are highly reflective. Incoming radiation from objects located at any distance, including distances beyond R M AX, tor example, stars, may also be received by sensor unit 104, if such objects emit radiation at a wavelength detectable by sensor unit 104.
  • the sensitivity of the sensor unit gradually increases to a maximum level of sensitivity.
  • This region includes reflected light mainly from atmospheric sources that cause interference and self-blinding in the sensor unit, therefore a high sensitivity is undesirable in this region.
  • the sensor unit initially encounters the photons of a reflected light beam at the very front end of the transmitted laser pulse, then the photons in the middle of the pulse and finally the photons at the very end of the pulse.
  • the sensor In the region ranging from range R IN up to range R 0 , the sensor doesn't detect most of the front portion of the pulses reflected from objects just beyond R IN , because of the timing of the "on" state of the sensor.
  • the sensor In the region ranging from range R I N up to range R 0 , the sensor incrementally detects more and more of the pulse as it reflects from objects found in further ranges. This incremental detection continues until all of the pulse is received for objects located at R 0 .
• the duration of the incline in graph 130 is equivalent to the width of the laser pulse, T_LASER.
• the sensor unit remains at maximum sensitivity between range R_0 and range R_1. This is the region where targets are most likely to be located, so a high sensitivity is desirable.
• the sensitivity of the sensor unit progressively decreases to a negligible level beyond range R_1.
• the sensor unit first fails to detect the photons at the very end of the laser pulse; then, for further ranges, the photons in the middle of the pulse are also not detected; and finally, for objects located at R_MAX, the photons at the front end of the pulse are not detected, until no photons are received at all.
• the duration of the decline in graph 130 is equivalent to the width of the laser pulse, T_LASER. It is noted that the sensitivity depicted in Figure 4 enables sensor unit 104, and in general, system 100, referred to in Figure 1, to obtain a level of received light energy, in system 100, which is directly proportional to the ranges of targets.
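• To make the sensitivity-versus-range behavior of Figure 4 concrete, the following is a minimal sketch (added here for illustration; all timing values are assumptions, not values from the original text) that computes the fraction of a rectangular laser pulse whose reflection from a given range falls inside a rectangular sensor gate:

```python
# Illustrative sketch: relative sensitivity of a gated sensor versus target
# range, for a rectangular pulse and a rectangular gate (attenuation ignored).
C = 3.0e8           # speed of light in the medium [m/s] (vacuum value assumed)
T_LASER = 2.0e-6    # pulse width [s] (assumed)
T_OFF = 2.0e-5      # gate-opening delay after the end of the pulse [s] (assumed)
T_ON = 4.0e-5       # gate duration [s] (assumed)

def sensitivity(r: float) -> float:
    """Fraction of the pulse energy reflected from range r (in meters) that
    arrives while the gate is open; 1.0 means the full pulse is received."""
    refl_start = 2.0 * r / C             # arrival of the pulse head
    refl_end = refl_start + T_LASER      # arrival of the pulse tail
    gate_open = T_LASER + T_OFF          # gate opens T_OFF after the pulse ends
    gate_close = gate_open + T_ON
    overlap = min(refl_end, gate_close) - max(refl_start, gate_open)
    return max(0.0, overlap) / T_LASER

for r_km in (3.0, 3.15, 3.3, 6.0, 9.0, 9.15, 9.3):
    print(f"{r_km:5.2f} km -> relative sensitivity {sensitivity(r_km * 1e3):.2f}")
```

• With these assumed values, the printout reproduces the shape of graph 130: zero sensitivity up to about 3 km (R_MIN), a linear incline whose duration corresponds to T_LASER, a flat maximum, and a symmetric decline toward about 9.3 km (R_MAX).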
• Figure 5 is a graph, generally designated 140, depicting timing adjustments relating to the pulse width of the laser beam.
  • the technique relates to the time sensor unit 104, referred to in Figure 1 , is activated with respect to the pulse width of laser beam 106, referred to in Figure 1.
• the vertical axis represents the status of a device, such as a laser or a sensor unit, where '1' represents a status of a device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
• Time T_OFF is the time during which sensor unit 104 is deactivated, immediately after transmitting laser pulse 106.
• Time T_OFF may be determined in accordance with the range from which reflections are not desired (R_MIN), thereby preventing reflections from atmospheric conditions and substances, and the self-blinding effect.
• T_OFF may be determined as twice this range divided by the speed of light in the medium it is traveling in (v), as this is the time span it takes the last photon of the laser pulse to reach the farthest point in the range R_MIN and reflect back to the sensor.
• T_OFF = 2 × R_MIN / v
• Time T_ON is the time during which sensor unit 104 is activated and receives reflections from a remote target 108, referred to in Figure 1.
• Time T_ON may be determined in accordance with the entire distance that the last photon of a pulse propagates, up to R_0 and back to the sensor unit. Since the sensor unit is activated at time 2 × R_MIN / v after laser pulse 106 has been fully emitted, the last photon of the laser pulse is already at a distance of 2 × R_MIN from the sensor unit. The last photon will propagate a further distance of R_0 − (2 × R_MIN) until target 108, and a further distance R_0 back to the sensor, summing up to 2 × (R_0 − R_MIN). The time it takes the last photon to traverse this remaining distance is T_ON.
• T_ON can be calculated using the equation: T_ON = 2 × (R_0 − R_MIN) / v
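• As a worked example (added for illustration; the ranges are assumptions), the two relations above can be evaluated directly:

```python
# Illustrative sketch: gate timing from T_OFF = 2*R_MIN/v and
# T_ON = 2*(R_0 - R_MIN)/v, as derived above.
V = 3.0e8                      # speed of light in the medium [m/s] (vacuum assumed)

def gate_timing(r_min_m: float, r_0_m: float) -> tuple[float, float]:
    """Return (T_OFF, T_ON) in seconds for ranges given in meters."""
    t_off = 2.0 * r_min_m / V            # round trip over the blanked-out range
    t_on = 2.0 * (r_0_m - r_min_m) / V   # remaining round trip of the last photon
    return t_off, t_on

t_off, t_on = gate_timing(3_000.0, 10_000.0)   # assumed R_MIN = 3 km, R_0 = 10 km
print(f"T_OFF = {t_off * 1e6:.1f} us, T_ON = {t_on * 1e6:.1f} us")
# -> T_OFF = 20.0 us, T_ON = 46.7 us
```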
  • Figure 6 is a graph, generally designated 150, depicting the observation capability of a system with the timing technique depicted in Figure 5.
• the vertical axis represents the status of the laser beam, where '1' represents a status of the laser beam being on, and '0' represents a status of the laser beam being off.
  • the horizontal axis represents distance.
• Sensor unit 104, referred to in Figure 1, is "blind" up to range R_MIN. In particular, there are no received reflections, generated by laser pulse 106, referred to in Figure 1, from objects located in the region immediately beyond system 100, referred to in Figure 1, up to range R_MIN.
• the range in which sensor unit 104 is "blind" is demarcated by arrows in Figure 6 as R_OFF. This blinding is due to the fact that laser pulse 106 propagates throughout path R_MIN while system 100 is blind to reflections generated by laser pulse 106 colliding with any object throughout this range, sensor unit 104 having been deactivated during this period.
• R_MIN is the minimum range for which reflections, in their entirety, may encounter sensor unit 104 in the "off" state.
• Element 152 is an object to be detected, located somewhat beyond range R_MIN. Element 154 is an object to be detected, located further away, slightly before range R_0. To understand how sensitivity as a function of range is achieved, it is helpful to examine how reflections are received from objects located at the range between R_MIN and R_0.
• Figure 7 is a graph, generally designated 160, depicting a specific instant in time in relation to the scenario depicted in Figure 6.
  • graph 160 depicts the specific instant at which laser pulse 162 has just completed passing element 152 and continues advancing.
• the vertical axis represents the status of the laser beam, where '1' represents a status of the laser beam being on, and '0' represents a status of the laser beam being off.
  • the horizontal axis represents distance.
• Reflections from element 152 may be received the moment sensor unit 104, referred to in Figure 1, is activated, even before the entire pulse width of laser pulse 162 has passed element 154. Therefore, plenty of time is provided for sensor unit 104 to receive reflections that can be intensified from the farther object 154, but only a limited intensifying time is provided for reflections from the closer element 152.
• Sensor unit 104 may be activated just a short time before the last portion of the energy of pulse 162 is reflected from element 152, provided that laser beam 106, referred to in Figure 1, remains on element 152. This portion is proportional to the small distance between R_MIN and element 152.
• the total energy received by sensor unit 104 as a consequence of reflections from element 152 is relative to the duration of time during which laser pulse 162 fully passes element 152 and still manages to reflect to the sensor unit while the sensor unit is in the "on" state.
• Figure 8 is a graph, generally designated 170, depicting a specific instant in time after the instant depicted in Figure 7.
  • graph 170 depicts the specific instant at which laser pulse 162 has just completed passing element 154 and continues advancing.
  • the vertical axis represents the status of the laser beam, where '1' represents a status of the laser beam being on, and '0' represents a status of the laser beam being off.
  • the horizontal axis represents distance.
• reflections from element 154 may be received by sensor unit 104, referred to in Figure 1, as long as laser beam 106 remains incident on element 154.
• Reflections are no longer received from element 152, as laser pulse 162 has already passed element 152 and any reflections from element 152 have already passed sensor unit 104 in their entirety. Consequently, the reflection intensity absorbed from element 154, located near range R_0, may be substantially greater than the reflection intensity absorbed from element 152. This difference in absorbed reflection intensity is because the received reflection intensity is determined according to the period during which sensor unit 104 is activated while the element is reflecting thereto. This means that laser pulse 162 may remain incident on element 154 for a longer time than on element 152, during a period that sensor unit 104 is activated and receiving reflections.
• Figure 9 is a sensitivity graph, generally designated 180, in accordance with the timing technique depicted in Figure 5, depicting the sensitivity of a gated sensor.
  • the vertical axis represents relative sensitivity
  • the horizontal axis represents distance.
  • the vertical axis has been normalized to 1.
• During time T_OFF, referred to in Figure 5, sensor unit 104, referred to in Figure 1, does not receive any reflections. Time T_OFF corresponds to range R_MIN. At range R_MIN, sensor unit 104 is activated. Between ranges R_MIN and R_0, the sensitivity of sensor unit 104 increases, because increasingly more portions of laser pulse 106, referred to in Figure 1, reflected from objects located between R_MIN and R_0, are received by sensor unit 104.
• Sensor unit 104 receives pulse reflections, in their entirety, from objects located between R_0 and R_1. Between ranges R_1 and R_MAX, the sensitivity of sensor unit 104 decreases, because increasingly fewer portions of laser pulse 106, reflected from objects located between R_1 and R_MAX, are received by sensor unit 104. At range R_MAX, sensor unit 104 is deactivated, and no portions of laser pulse 106 are received in sensor unit 104.
• Time T_ON corresponds to the distance between ranges R_MIN and R_MAX.
• graph 180 may not be ideal, because laser pulse 106 may also illuminate elements, especially highly reflective elements, located beyond range R_MAX, as laser pulse 106 gradually dissipates. Furthermore, graph 180 may not be ideal because the sensitivity remains constant between the first optimum range R_0 and the last optimum range R_1, even though further attenuation exists within the range span R_1 − R_0. It is possible to reduce the sensitivity of system 100, referred to in Figure 1, for receiving reflections originating from beyond range R_0 by other techniques.
  • Such techniques include changing the form or shape of the pulses of laser beam 106, changing the pattern of the pulses of laser beam 106, changing the energy of the pulses of laser beam 106, changing the time that sensor unit 104 is activated, and changing the width of laser pulse 106.
  • Figure 10 is a sensitivity graph, generally designated 184, in accordance with a Long Pulse Gated Imaging (LPGI) timing technique, depicting the sensitivity of a gated sensor.
  • the vertical axis represents relative sensitivity, while the horizontal axis represents distance.
  • the vertical axis has been normalized to 1.
• the pulse width of the laser beam, T_LASER, is set equal to the difference between the time required for the laser beam to traverse the path from the system to the minimal target distance and back (2 × R_MIN / v) and the time at which the last photon reflects back from a target located at range R_1, referred to in Figure 9.
• This time is also equivalent to the duration of time for which a sensor unit is activated, T_ON.
• T_LASER and T_ON are given by the relation: T_LASER = T_ON = 2 × (R_1 − R_MIN) / v, where v is the speed of light in the medium it is traveling in.
• the LPGI timing technique is particularly suited for cases where a large dynamic range, for example from 3 to 30 kilometers, needs to be imaged.
• If R_MIN is equal to 3 km, R_1 is equal to 30 km, and the speed of light is equal to c, the speed of light in a vacuum, then T_ON and T_LASER will substantially equal: T_ON = T_LASER = 2 × (30 km − 3 km) / c ≈ 180 microseconds.
• T_ON will be equal in duration to T_LASER. To eliminate backscattered light without loss of contrast, while maintaining a high quality image of a target and the background, it is sufficient to keep the sensor unit in the "off" state until the reflected beam has traversed approximately 6 km (3 km each way, to and from range R_MIN). It is noted that it may be desirable to lengthen time T_OFF by the pulse width of the laser beam, T_LASER, to ensure that no backscattered reflections from the area up to R_MIN are received by the sensor unit. Therefore, the actual time T_OFF is given by the following equation: T_OFF = 2 × R_MIN / v + T_LASER.
• T_LASER is given by the following equation: T_LASER = 2 × (R_1 − R_MIN) / v.
• Substituting, T_OFF may be simplified to: T_OFF = 2 × R_1 / v.
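• The LPGI timing for the 3 km to 30 km example can be checked numerically (an added illustration under the stated assumptions, with R_1 taken as 30 km):

```python
# Illustrative sketch: LPGI timing using T_LASER = T_ON = 2*(R_1 - R_MIN)/v
# and the simplified T_OFF = 2*R_1/v derived above.
C = 3.0e8          # speed of light in vacuum [m/s]
R_MIN = 3_000.0    # range up to which reflections are unwanted [m]
R_1 = 30_000.0     # farthest range with full reflections [m] (assumed)

t_laser = 2.0 * (R_1 - R_MIN) / C   # pulse width, equal in duration to T_ON
t_off = 2.0 * R_1 / C               # gate delay, already lengthened by T_LASER

print(f"T_LASER = T_ON = {t_laser * 1e6:.0f} us")   # -> 180 us
print(f"T_OFF          = {t_off * 1e6:.0f} us")     # -> 200 us
```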
  • Figure 11 is a graph, generally designated 185, depicting the radiant intensity captured by a sensor unit from reflections from a target and from backscatter, as a function of the range between the sensor unit and the target, for both gated and non- gated imaging, during a simulation.
  • Graph 185 is based on a simulation of a typical airborne system in the conditions specified at the bottom of Figure 11.
  • the vertical axis represents radiant intensity logarithmically, in units of lumens per square meter.
  • the horizontal axis represents range, in units of kilometers.
• Curve 186 represents the radiant intensity captured by the sensor unit from the residual light intensity dispersed as light reflects from the target, for a system operating in an LPGI mode.
• Curve 187 represents the radiant intensity captured by the sensor unit from backscatter, as the laser beam reflects off atmospheric substances, for a system operating in an LPGI mode.
• Similar curves, 188 and 189, correspondingly, are provided in graph 185 for a system operating in a non-gated mode. It is noted that the form of radiant intensity curve 188, from light reflected from the target in a non-gated mode, is given, in general, by the inverse square law of light attenuation in vacuum, i.e. an intensity proportional to 1/r², where r is the range.
• LPGI operation improves the contrast of the illuminated target against the backscatter light intensity for any range between 3 km and 25 km.
• a system operating in an LPGI mode does not require knowledge of the exact range to a target.
• system 100 does not require knowledge of the exact range R_0 between system 100 and target 108 (Figure 1).
• a rough estimation of range R_0 is sufficient in order to calculate the required pulse width of laser beam 106 (Figure 1).
• Such an estimation can extend, in the example of Figure 11, between 3 and 25 km, which is a particularly broad span, requiring only a very rough estimation in comparison to the precise range determination required for modes other than the LPGI operation mode.
• the radiant intensity of the reflection of light from a target changes by less than a factor of ten over the 4 km to 20 km range.
• In the non-gated mode, by contrast, the radiant intensity of the reflection of light from the target varies by a factor of one hundred over the same range.
• the relative "flatness" of curve 186 is the result of the gradual increase of the sensitivity gain of a gated sensor unit, in proportion to the increase of range, represented in the graph of Figure 10 between R_MIN and R_0.
• This gradual or progressive increase of sensitivity is "multiplied" by the attenuation of reflected light in the inverse square relation (1/r²), in vacuum, proportional to the increase in the range, represented by curve 188 in Figure 11, resulting in the relatively "flat" curve 186.
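• As a rough illustration of this cancellation (added here for clarity; the linear-gain model is an idealization, not a formula from the original text), suppose the gating gain grows linearly over the incline of Figure 10 while the return falls off as the inverse square of the range:

$$ S(r) \;\propto\; \frac{r - R_{MIN}}{R_0 - R_{MIN}} \cdot \frac{1}{r^2} $$

• With R_MIN = 3 km, this expression changes by only a factor of about 1.5 between 4 km and 20 km, whereas 1/r² alone changes by a factor of 25. The simulated curves of Figure 11 additionally include atmospheric attenuation, so the exact factors differ, but the cancellation mechanism is the same.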
  • the range of 4km to 20km between a sensor unit and a target is typical for many applications, particularly military targeting.
  • a system operating in an LPGI mode provides effective observation (i.e. identification and detection) of targets over a versatile depth of field.
  • “Depth of field” refers to the ranges of view confined to certain limits.
  • L(t) is a Boolean function representing the existing reflection of a laser pulse, irrespective of the on/off state of the sensor.
  • C(t) is a Boolean function representing the ability of the sensor to receive incoming light according to the on/off state of the sensor.
• T_LASER, T_ON and T_OFF are as they were defined above (where T_OFF is the time a sensor unit is deactivated immediately following the completion of transmission of a laser pulse), and v represents the speed at which the laser pulse travels in the medium in which it is traveling.
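• The convolution formula referenced here can be written, consistent with the definitions of L(t) and C(t) above (a reconstruction offered for clarity, as an assumption rather than a verbatim quotation of the original), as:

$$ S(R) \;=\; \frac{1}{T_{LASER}} \int_{-\infty}^{\infty} L\!\left(t - \frac{2R}{v}\right)\, C(t)\; dt $$

• Here S(R) is the relative sensitivity at range R: the factor L(t − 2R/v) is nonzero exactly while reflections from range R arrive at the sensor, C(t) is nonzero while the gate is open, and the integral measures their overlap.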
• the integral is divided by T_LASER to normalize the result.
• the values for radiant intensity (e.g. the curves in graph 185) may be obtained by multiplying the above convolution formula by a rough geometrical attenuation factor, such as the inverse square relation (1/r²).
• Figures 12-14 depict a technique according to which laser pulse 106 (Figure 1) is generated with a specific shape (of each pulse) or pattern (of pulses in a beam). The pulse energy of the pulses may also be varied for similar purposes.
• If laser pulse 106 is shaped or patterned such that the intensity is higher at the end of the pulse than at the beginning, then more light from laser pulse 106, reflected from element 152, may be received in sensor unit 104 than light reflected from element 154. This would be true if the gating of sensor unit 104 is synchronized to start receiving reflections when the head of a reflected pulse from R_MIN reaches sensor unit 104, and to stop receiving reflections when the tail of the reflected pulse from R_MIN reaches sensor unit 104.
• Conversely, the shape or pattern of the emitted pulses, their energy, and the synchronization of sensor unit 104 may be selected such that pulses reflected from objects located at greater distances will be received in sensor unit 104 with higher intensity (while a similar shape or pattern may be retained).
• In that case, the pulse reflections will correspondingly cause greater energy to be received as the reflecting objects are farther distanced, thus partially, fully, or even excessively compensating for the energy attenuation that increases with the reflection distance.
  • Figure 12 is a graph, generally designated 190, depicting an adjustment of the shape or pattern (or energy) of a laser pulse 192.
  • the vertical axis represents the relative intensity of a laser pulse, and the horizontal axis represents time. The vertical axis has been normalized to 1.
• Time T_CON is the duration of time during which system 100 transmits laser pulse 192 at maximum intensity.
• Time T_WAVE is the duration of time during which the intensity of transmitted laser pulse 192 decays in a shaped or patterned manner (or by its controlled energy).
• T_LASER is the total duration of time that laser pulse 192 is transmitted, and equals T_CON + T_WAVE. Time T_OFF-LASER is the duration of time during which laser device 102 (Figure 1) is in an "off" state, i.e. laser device 102 does not transmit anything.
• Time T_OFF is the duration of time during which sensor unit 104 does not receive anything, due to its deactivation.
• Time T_ON is the duration of time during which sensor unit 104 is in the "on" state and receives reflections.
• the times depicted in Figure 12 are not proportionate; for example, T_OFF-LASER may be much greater than T_LASER.
  • Figure 13 is a graph, generally designated 200, depicting the advancement of the shaped or patterned laser pulse depicted in Figure 12.
  • the vertical axis represents the relative intensity of a laser pulse impinging upon an element, and the horizontal axis represents distance. The vertical axis has been normalized to 1.
• Graph 200 depicts a specific instant in time, in particular the moment that laser pulse 192 impinges on an element within the range R_MIN. Reflections from the element within range R_MIN will require an additional amount of time in order to reach sensor unit 104 (Figure 1). Sensor unit 104 will begin to collect photons, after an additional amount of time, in accordance with the shape or pattern of laser pulse 192.
• Photons in range R_MIN exited at the end of laser pulse 192 and were able to reach range R_A when sensor unit 104 is activated. Photons in range R_WAVE (between distances R_A and R_B) exited at the beginning of the intensity decline of laser pulse 192. Photons in range R_CON (between distances R_B and R_C) exited laser device 102 (Figure 1) with maximum intensity, at the beginning of the transmission of laser pulse 192.
• range R_MIN depends on time T_OFF, which corresponds to the duration of time from the instant the end of laser pulse 192 reaches R_MIN to the instant at which the end of laser pulse 192 reflects from R_MIN and reaches sensor unit 104.
• relations such as 1/r² define a laser pulse shape that drops in intensity dramatically. This drop in intensity may be advantageous in terms of minimizing reflections from an element at close range (ignoring the attenuation of laser pulse 192).
• Figure 14 is a sensitivity graph, generally designated 210, in accordance with the laser shaping technique depicted in Figure 12, depicting the sensitivity of a gated sensor.
• the vertical axis represents relative sensitivity or gain, and the horizontal axis represents distance. The vertical axis has been normalized to 1. It is helpful to compare graph 210 with graph 180 of Figure 9, where the technique of laser shaping was not applied. Accordingly, range R_MIN is the range from which reflections generated by the shaped or patterned pulse are not received by sensor unit 104 (Figure 1).
• Range R_WAVE is the range from which the reflections generated by the shaped or patterned pulse begin to be received and intensified.
• the curve along R_WAVE results from the shape or pattern of the decay of laser pulse 192 (Figure 12).
• the gradient along R_CON results from the increasing amount of the maximum intensity portion of laser pulse 192, corresponding to T_CON (Figure 12), detected on sensor unit 104 (Figure 1).
• the gradient along R_CON also results from the different passing times between a laser pulse and elements in its path, as described with reference to Figures 5-7.
• Range R_0 to R_1 is the range for which the intensity of laser pulse 192 is steady.
• Range R_1 to R_MAX is the range for which the intensity of laser pulse 192 decreases at a constant rate. Beyond R_MAX is the range from which reflections are no longer detected on sensor unit 104.
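• To illustrate how such a shaped pulse produces the gain profile of graph 210, the following added sketch (all pulse and gate parameters are assumptions, not values from the original text) integrates a pulse with a maximum-intensity plateau T_CON followed by a linear decay T_WAVE against a rectangular gate:

```python
# Illustrative sketch: received energy versus range for a shaped pulse
# (maximum intensity over T_CON, then a linear decay over T_WAVE) and a
# rectangular sensor gate; attenuation of the pulse itself is ignored.
C = 3.0e8            # speed of light [m/s] (vacuum assumed)
T_CON = 10.0e-6      # flat, maximum-intensity part of the pulse [s] (assumed)
T_WAVE = 10.0e-6     # decaying part of the pulse [s] (assumed)
T_LASER = T_CON + T_WAVE
T_OFF = 20.0e-6      # gate delay after the end of the pulse [s] (assumed)
T_ON = 60.0e-6       # gate duration [s] (assumed)

def pulse_intensity(t: float) -> float:
    """Emitted intensity at time t after the start of the pulse."""
    if 0.0 <= t < T_CON:
        return 1.0                               # plateau at maximum intensity
    if T_CON <= t <= T_LASER:
        return 1.0 - (t - T_CON) / T_WAVE        # linear decay over T_WAVE
    return 0.0

def received_energy(r: float, steps: int = 2000) -> float:
    """Integrate the portion of the pulse whose round trip to range r
    (in meters) arrives while the gate is open."""
    gate_open = T_LASER + T_OFF
    gate_close = gate_open + T_ON
    total, dt = 0.0, T_LASER / steps
    for i in range(steps):
        t_emit = (i + 0.5) * dt
        t_arrive = t_emit + 2.0 * r / C          # round-trip delay to range r
        if gate_open <= t_arrive <= gate_close:
            total += pulse_intensity(t_emit) * dt
    return total / T_LASER

for r_km in (4, 5, 6, 8, 10, 12):
    print(f"{r_km:3d} km -> relative energy {received_energy(r_km * 1e3):.2f}")
```

• Because only the decayed tail of the pulse arrives within the gate for reflections from near ranges, nearby elements contribute little energy, while elements near the optimum range return the full plateau, mirroring the gradients along R_WAVE and R_CON in graph 210.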
• Figures 15-18 depict techniques for timing adjustments during the process of obtaining an individual video frame of received reflections from a target. These techniques illustrate the ability to change the duration of time sensor unit 104 (Figure 1) is activated, and/or the width of laser pulse 106 (Figure 1), in order to achieve maximum sensitivity of system 100 (Figure 1) at the optimum range R_0. It is appreciated that limiting the number of transmitted laser pulses, without compromising image quality, reduces the sensitivity of the system to extraneous ambient light sources.
  • the array of photodetectors of sensor unit 104 may be a standard video sensor, such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) type of sensor.
  • the CCD type sensor may include an external shutter.
• Such sensors typically operate at a constant frequency of approximately 50-60 Hz. This means that each second the array of photodetectors captures 25-30 frames. To demonstrate the technique, it is assumed that the array of photodetectors operates at 50 Hz. The duration of an individual frame is then 20 ms.
• the width of laser pulse 106, in addition to the duration of time that sensor unit 104 is set to the "on" state, must add up to 3 µs. It is noted that the effect of T_OFF is not considered for the purposes of this simplified example.
• This frequency of operation requires a cycle time of 3 µs, with no time gaps (i.e. waiting times) between the end of laser pulse 106 and the opening of sensor unit 104. It is then possible to transmit up to 6666 pulses, and to collect up to 6666 reflected pulses, in the course of an individual field, i.e. a video frame.
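• the cycle budget above follows directly from the frame duration, as this small added check (using the figures stated in this example) shows:

```python
# Illustrative check: cycles available in one 50 Hz frame with 3 us cycles.
FRAME_RATE_HZ = 50
CYCLE_TIME_S = 3e-6                        # T_L + T_P per cycle, no gaps

frame_duration_s = 1.0 / FRAME_RATE_HZ     # 20 ms per frame at 50 Hz
cycles_per_frame = int(frame_duration_s / CYCLE_TIME_S)
print(f"{cycles_per_frame} cycles per frame")   # -> 6666
```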
  • Figure 15 is a graph, generally designated 220, depicting the sequence of pulse cycles (L) and collection of reflected photons (P) over an individual field.
  • the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
  • a cycle is defined as the time period required for one laser pulse to be transmitted and one reflected photon, or bundle of reflected photons, to be received.
• a cycle is therefore defined as T_L + T_P.
• T_L defines the amount of time the laser device is on.
• T_P defines the amount of time the sensor device is on. It is assumed that the lower the number of cycles required for obtaining a quality image, the greater the ability of a system to reduce the effects of ambient light sources, since a higher number of cycles increases a system's potential exposure to ambient light sources.
  • Figure 16 is a graph, generally designated 230, depicting a timing technique where a laser pulse width is changed dynamically over the course of obtaining an individual frame.
  • the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
• the total width of each cycle remains constant, although the width of laser pulse 106 (T_L) becomes narrower as time progresses, with the gap between T_L and T_P growing accordingly.
• By the last cycle, the width of laser pulse 106 is very short in comparison to its width in the first cycle, while the waiting time for the array of photodetectors to open (T_OFF) is very long in comparison to T_OFF in the first cycle.
• the rate at which the waiting time before sensor unit 104 is activated is increased is equal to the rate at which the width of laser pulse 106 is narrowed.
• In this manner, the range R_MIN, from which no reflections are received by system 100, may be increased. Thus system 100 receives more reflections from the remote range than from the near range, and a desired sensitivity as a function of range is achieved.
  • Figure 17 is a graph, generally designated 240, depicting a timing technique where the duration that a sensor unit is activated is changed dynamically over the course of obtaining an individual frame.
• the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
• the horizontal axis represents time. Similar to the technique shown in graph 230 of Figure 16, the total width of each cycle remains constant, although the duration that sensor unit 104 is set to the "on" state (T_P) becomes narrower as time progresses, with the gap between T_L and T_P growing accordingly.
• By the last cycle, the duration of T_P is very short in comparison to its duration in the first cycle, while the waiting time for the array of photodetectors to open (T_OFF) is very long in comparison to T_OFF in the first cycle.
• the rate at which the waiting time before sensor unit 104 is activated is increased is equal to the rate at which the duration that sensor unit 104 is set "on" is narrowed.
• In this manner, the range R_MIN, from which no reflections are received by system 100, may be increased. Thus system 100 receives more reflections from the remote range than from the near range, and a desired sensitivity as a function of range is achieved.
• Figure 18 is a graph, generally designated 250, depicting a timing technique where both a laser pulse width and the duration that a sensor unit is activated are changed dynamically over the course of obtaining an individual frame.
• the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
• the horizontal axis represents time.
• When both the laser pulse width and the duration that the sensor unit is activated are narrowed at a given rate, T_OFF is increased at twice this rate.
• the time in which sensor unit 104 is activated, and is thereby susceptible to the effect of ambient light sources, is shortened, thus exploiting the energy spent and received by system 100 to a maximum.
• system 100 receives more reflections from the remote range than from the near range, and a desired sensitivity as a function of range is achieved.
• This is provided at the "expense" of narrowing the depth of field, which means having R_MIN approach R_0. Having R_MIN approach R_0 is desirable when a target range is known more accurately. Narrowing of the depth of field can also be compensated for by enhancing the pulse intensity.
  • Timing adjustments during the process of obtaining an individual video frame may be employed in order to achieve a desired sensitivity as a function of range. This may involve dynamically changing the width of a laser pulse, the duration of time a sensor unit is set to the "on" state, or both. It is appreciated that the aforementioned techniques of timing adjustments may be integrated and combined with the aforementioned technique of changing the shape, pattern or energy of a laser pulse, as discussed with reference to Figures 12-14.
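• A compact way to express these three variants (a sketch added for illustration; the cycle time, base widths and linear narrowing profile are assumptions) is a single schedule generator in which the laser width, the sensor "on" time, or both shrink from cycle to cycle while the total cycle time stays fixed:

```python
# Illustrative sketch: per-cycle schedule for one frame in which T_L, T_P, or
# both are narrowed linearly while the total cycle time remains constant
# (cf. Figures 16-18); the waiting gap T_OFF grows accordingly.
CYCLE_TIME = 3e-6        # constant total duration of each cycle [s] (assumed)
BASE_WIDTH = 1e-6        # initial T_L and T_P [s] (assumed)

def frame_schedule(n_cycles: int, narrow_laser: bool, narrow_sensor: bool):
    """Yield (t_l, t_off, t_p) for each cycle of a frame."""
    for i in range(n_cycles):
        shrink = 1.0 - i / n_cycles          # 1.0 at the first cycle, ~0 at the last
        t_l = BASE_WIDTH * (shrink if narrow_laser else 1.0)
        t_p = BASE_WIDTH * (shrink if narrow_sensor else 1.0)
        t_off = CYCLE_TIME - t_l - t_p       # the gap absorbs the narrowing
        yield t_l, t_off, t_p

# Figure 18 variant: both widths narrowed, so T_OFF grows at twice the rate.
for cycle, (t_l, t_off, t_p) in enumerate(frame_schedule(5, True, True)):
    print(f"cycle {cycle}: T_L = {t_l * 1e6:.2f} us, "
          f"T_OFF = {t_off * 1e6:.2f} us, T_P = {t_p * 1e6:.2f} us")
```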
  • FIGS 19-21 depict techniques for adjusting the number of cycles, or exposures, during the process of obtaining an individual video frame. These techniques serve to eliminate blooming, or self-blinding, arising from high intensity ambient light sources. Additionally, or alternatively, implementation of different image processing techniques may be utilized for this purpose. In particular, the rate of laser pulse transmissions (L) and collection of reflected photons (P) may be changed dynamically, thereby reducing the number of exposures.
  • Figure 19 is a graph, generally designated 260, depicting timing adjustments during the process of obtaining an individual video field, where a total of 6666 exposures are performed.
• the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
• Figure 20 is a graph, generally designated 270, depicting timing adjustments during the process of obtaining an individual video field, where a total of 100 exposures are performed.
  • the vertical axis represents the status of a device, where '1 ' represents a status of the device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
  • the number of exposures in a field should be dynamically controlled.
  • the number of exposures in a field may be controlled in accordance with several factors. For example, one factor may be the level of ambient light (information which may be received as an input to system 100 from an additional sensor which detects ambient light). Another factor may be the level of current consumed by sensor unit 104 (information which may be obtained via a power supply).
  • Figure 21 is a pair of graphs, generally designated 280 and 290, depicting timing adjustments during the process of obtaining an individual video field, where the number of exposures in a field is controlled by a technique based on image processing.
  • the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
  • This technique involves sensor unit 104 acquiring two frames. In one frame a large number of exposures are obtained, and in the other frame, a small number of exposures are obtained.
  • sensor unit 104 includes at least one photodetector, or an array of photodetectors, that operates faster than standard CCD or CMOS sensors.
  • sensor unit 104 operates at a frequency of 100 Hz.
• the corresponding duration of each frame is then 10 ms.
• Graphs 280 and 290 depict sensor unit 104 acquiring two such frames.
• In graph 280, system 100 (Figure 1) performs the larger number of exposures.
• In graph 290, system 100 performs 50 exposures.
  • the number of exposures in a field may be controlled in accordance with several factors, such as the level of ambient light, the saturation state of the photodetectors, image processing constraints, and the like.
  • the two frames may be combined into a single frame.
• Figure 22 is a schematic illustration of the two image frames acquired in Figure 21, and of the combination of the two frames. For simplicity, it is assumed that the size of an image frame is 4 pixels. In the first frame 292, which originates from a large number of exposures, the upper pixels become saturated, while the lower pixels retain a reasonable level of gray. In the second frame 294, which originates from a smaller number of exposures, the upper left pixel does not become saturated, whereas the lower pixels are dark areas. In the combined image, the pixels from first frame 292 are combined with the pixels from second frame 294.
• the combined image frame 296 has fewer saturated pixels (only the upper right pixel) and fewer dark-area pixels (only the lower right pixel). The overall image quality is thereby increased.
• the combination of the saturated upper left pixel in first frame 292 and the non-saturated upper left pixel in second frame 294 generates a non-saturated upper left pixel in combined frame 296.
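• A per-pixel combination of this kind can be sketched as follows (an added illustration; the saturation level, exposure ratio and sample pixel values are assumptions, not data from the original figures):

```python
# Illustrative sketch: combine a long-exposure frame and a short-exposure
# frame per pixel, keeping the long-exposure value unless it is saturated
# (cf. frames 292 and 294 combining into frame 296).
SATURATED = 255        # assumed 8-bit saturation level
GAIN = 4               # assumed exposure ratio between the two frames

def combine(long_exp: list[list[int]], short_exp: list[list[int]]) -> list[list[int]]:
    combined = []
    for row_l, row_s in zip(long_exp, short_exp):
        combined.append([
            # Saturated in the long exposure: substitute the short-exposure
            # pixel, rescaled by the exposure ratio and clipped to 8 bits.
            min(SATURATED, s * GAIN) if l >= SATURATED else l
            for l, s in zip(row_l, row_s)
        ])
    return combined

frame_292 = [[255, 255], [90, 10]]    # long exposure: upper pixels saturated
frame_294 = [[60, 255], [20, 2]]      # short exposure: upper-left recoverable
print(combine(frame_292, frame_294))  # -> [[240, 255], [90, 10]]
```

• As in the figure, the combined frame keeps the well-exposed lower-left pixel, recovers the upper-left pixel from the short exposure, and is left with only one saturated pixel (upper right) and one dark pixel (lower right).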
  • the blinding effect includes blinding caused by the operation of a similar system in the vicinity of system 100, herein known as mutual blinding.
  • System 100 may overcome mutual blinding by applying statistical techniques or synchronization techniques.
  • Statistical techniques may include reducing the number of exposures in the course of acquiring an individual video field and possibly compensating by using a greater laser intensity or a higher intensification level from sensor unit 104 ( Figure 1).
  • the techniques may also include a random or predefined change in the timing of cycles throughout a single frame, changing the exposure frequency, or any combination of these techniques.
  • Synchronization techniques may include establishing a communication channel between the two systems, for example, in the RF range. Such a channel would enable the two systems to communicate with each other.
  • Another possible synchronization technique for overcoming mutual blinding is automatic synchronization.
  • Figure 23 is a pair of graphs, generally designated 310 and 320, depicting a synchronization technique for overcoming mutual blinding.
• the vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off.
  • the horizontal axis represents time.
  • one system enters a "listening period" every so often. When the system is in the listening period, the system refrains from transmitting laser pulses and collecting reflections.
• As depicted in section 312 of graphs 310 and 320, the first system may resume activity at the end of its listening period.
• Alternatively, the first system waits until the end of the cyclic sequence of the second system before resuming activity.
  • system #1 performs a cyclic sequence of 50 exposures before entering a listening period.
  • system #2 performs a cyclic sequence only when system #1 is in a listening period. In this manner, synchronization between system #1 and system #2 ensures that no pulses transmitted by one system are received by the other system, resulting in interference and mutual blinding. It is noted that in the synchronization technique, 50% of the possible exposure time in a frame is allotted to each system.
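• the alternation can be sketched as a simple slot assignment (an added illustration; the number of alternation slots per frame is an assumption):

```python
# Illustrative sketch: two nearby systems time-sharing a frame so that each
# transmits only while the other listens, as in the 50-exposure example above.
EXPOSURES_PER_BURST = 50
BURSTS_PER_FRAME = 4        # assumed number of alternation slots per frame

def schedule(system_id: int) -> list[str]:
    """Return the activity of system #1 or #2 over the slots of one frame."""
    slots = []
    for burst in range(BURSTS_PER_FRAME):
        if burst % 2 == (system_id - 1):   # even slots: #1, odd slots: #2
            slots.append(f"transmit {EXPOSURES_PER_BURST} exposures")
        else:
            slots.append("listen")
    return slots

print("system #1:", schedule(1))
print("system #2:", schedule(2))
```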
  • a night vision imaging system mounted on a vehicle may include an interface with the vehicle's computer system (automotive BUS).
  • Two pulse detectors are mounted in the vehicle in which system 100 ( Figure 1) is installed.
  • One pulse detector is installed in the front section of the vehicle, and the other pulse detector is installed in the rear section of the vehicle.
  • the pulse detectors detect if other systems similar to system 100 are operating in vehicles approaching the vehicle of system 100 from the front or from the rear. Since a vehicle approaching from the rear is not likely to cause interference with system 100, synchronization may not be implemented in such a case.
• An alternative synchronization technique for overcoming mutual blinding involves "sharing". For example, part of the listening period of a frame may be dedicated to detecting pulses transmitted by other systems. If no pulse is detected from another system, system 100 may randomly decide when laser device 102 (Figure 1) may begin transmitting laser pulses within the same frame span. If a pulse from another system is detected, however, system 100 initiates transmission of laser pulses, at a random time, only after the approaching pulse has ended. Alternatively, each system may randomly change its pulse transmission start timing in each frame. It is appreciated that these synchronization techniques for overcoming mutual blinding allow a system to synchronize with other systems that operate at different rates. Another possible synchronization technique for overcoming mutual blinding involves a synchronizing pulse, transmitted by one system at a given time, while the other system adapts itself in accordance with the received synchronizing pulse.
• Figure 24 is a block diagram of a method for target detection and identification, accompanied by an illustration of a conceptual operation scenario, generally designated 350, operative in accordance with another embodiment of the disclosed technique.
• In this scenario, an attack helicopter 352, equipped with an observation system in accordance with an embodiment of the disclosed technique, such as system 100, is involved in an anti-tank operation at night.
  • the helicopter crew detects a hot spot 354 at a 15 km range using a FLIR (Forward Looking Infrared) device.
• In the second stage 370, when the helicopter is distanced only 14-15 km from hot spot 354, the surveillance and observation system is activated.
  • the identification stage of image 366 is completed by reviewing its recorded details, such as by comparison 364 with other images of potential targets 368 stored in a data bank (not shown).
  • the hot spot is identified as a legitimate target, namely, an enemy tank.
  • the helicopter crew activates a weapons system, for example a missile homing on the thermal radiation emitted by hot spot 354.
  • the activated weapon destroys the target from a relatively distant range, for example from a range of 8-9 km.
• Figure 25 is a schematic illustration of a system, generally referenced 400, constructed and operative in accordance with another embodiment of the disclosed technique.
• System 400 is stabilized by gimbals, and the optical axis of an illuminating laser beam is coupled with the optical axis of an observing section.
  • System 400 includes an external section 402 and an observation section 404.
  • External section 402 includes a laser device 406 and an electronic controller 408.
  • Observation section 404 includes at least one photodetector, or an array of photodetectors, 410, an optical coupling means 412, a coupling lens assembly 414, and an optical assembly 416.
  • Laser device 406 is optically coupled with optical coupling means 412.
  • Electronic controller 408 is coupled with array of photodetectors 410.
  • Array of photodetectors 410 is further coupled with optical coupling means 412.
  • Coupling lens assembly 414 is coupled with optical coupling means 412 and with optical assembly 416.
  • Optical coupling means 412 includes a collimating lens 426, a first mirror 428 and an integrating lens assembly 430.
  • Integrating lens assembly 430 includes a second mirror 432.
  • First mirror 428 is optically coupled with collimating lens 426 and with integrating lens assembly 430.
  • Optical assembly 416 includes an array of objective lenses 442.
  • Gimbals 420 stabilizes observation section 404. Stabilization is required when system 400 is positioned on a continuously moving or vibrating platform, whether airborne, terrestrial or nautical, such as an airplane, helicopter, sea craft, land vehicle, and the like.
  • Observation section 404 may also be stabilized by using feedback from a gyroscope to gimbals 420, by stabilization using image processing techniques, based on the spatial correlation between consecutively generated images, by stabilization based on sensed vibration, or in any combination of the above.
  • External section 402 does not require specialized stabilization and may therefore be packaged separately and located separately from observation section 404.
  • the stabilization may be based on detection of vibrations of the sensor means that influence the image as it is captured.
  • Such sensor means in Figure 25 may include observation section 404, photodetector(s) 410, optical coupling means 412, coupling lens assembly 414, and optical assembly 416, and their rigid packaging.
  • Laser device 406 transmits a pulsed laser beam 422 toward a target.
  • Laser device 406 may be a Diode Laser Array (DLA).
  • the transmitted laser beam 422 propagates through optical fiber 424.
  • Optical fibers are used in system 400 to transmit laser beam 422 because they enable the laser beam spot size to be reduced to the required field-of-view (FOV).
  • Optical fibers also allow for easy packaging.
  • optical fibers transmit laser light such that no speckle pattern is produced when the laser light falls on a surface (laser devices, in general, produce speckle patterns when laser light falls on a surface).
• Laser device 406 is separate from observation section 404. Since laser device 406 may be inherently heavy, this separation facilitates packaging and results in decreased weight in observation section 404.
• Transmitted laser beam 422 propagates through optical fiber 424 toward collimating lens 426.
  • Collimating lens 426 collimates transmitted laser beam 422.
  • the collimated laser beam is conveyed toward first mirror 428.
• First mirror 428 diverts the direction of the collimated laser beam and converges the beam onto integrating lens assembly 430.
  • Converged beam 434 reaches second mirror 432.
  • Second mirror 432 is typically very small.
  • Second mirror 432 couples the optical axis 436 of converged beam 434 with the optical axis 438 of observation section 404.
• Optical axis 438 is common to array of photodetectors 410 and to optical assembly 416. Second mirror 432 conveys converged beam 434 toward coupling lens assembly 414.
• Coupling lens assembly 414 conveys the beam toward array of objective lenses 442.
  • Array of objective lenses 442 collimates the beam once more and transmits the collimated laser beam 440 toward a target (not shown).
  • Beam 440 illuminates the target, and the reflections of light impinging on the surface of the target return to optical assembly 416.
  • Optical assembly 416 routes the reflected beam 450 toward array of photodetectors 410 via coupling lens assembly 414.
  • Array of photodetectors 410 processes reflected beam 450 and converts reflected beam 450 into an image displayable on a television.
  • Array of photodetectors 410 may be a CCD (Charge Coupled Device) type sensor.
  • the CCD sensor is coupled by relay lenses to a gated image intensifier, as is known in the art.
  • the CCD type sensor may include external shutters.
  • array of photodetectors 410 may be a Gated Intensified Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), a Gated Image Intensifier, a Gated Intensified Active Pixel Sensor (GIAPS), and the like.
• Advanced processing may include, for example, comparing the image with a set of images in a databank of known identified targets (see step 380 with reference to Figure 24).
  • the generated displayable television image may be subjected to additional processing.
  • Such processing may include accumulating image frames via a frame grabber (not shown), integration to increase the quantity of light and to improve contrast, electronic stabilization provided by image processing techniques based on the spatial correlation between consecutively generated images, and the like.
  • Controller 408 controls the timing of array of photodetectors 410, and receives the displayable television image via suitable wiring 444.
• Controller 408 may include an electronics card. Controller 408 controls the timing of array of photodetectors 410 in synchronization with the laser pulses provided by laser device 406. The timing is such that array of photodetectors 410 will be closed during the time period that the laser beam traverses a distance adjacent to system 400 en route to the target (distance R_MIN, with reference to Figure 1). Switching the sensor unit to the "off" state immediately after transmitting the laser beam ensures that unwanted reflections from atmospheric substances and particles, and backscatter, are not captured by array of photodetectors 410, and that the self-blinding phenomenon is avoided.
• Figure 26 is a schematic illustration of a system, generally referenced 500, constructed and operative in accordance with another embodiment of the disclosed technique.
  • the optical axis of an illuminating laser beam in system 500 is essentially parallel with the optical axis of its array of photodetectors.
  • System 500 includes an electronics box 502, an observation module 504, a power supply 506, a narrow field collimator 508, a display 510, a video recorder 512, and a support unit 514.
  • Electronics box 502 includes a laser device 516, a laser cooler 518, a controller 520, a service panel for technicians 522, an image processing unit 524 and a PC (Personal Computer) card 526.
  • Observation module 504 includes an optical assembly 528, a filter 530, an optical multiplier 532, an array of photodetectors, or at least one photodetector, 534, and an electronics card 536.
  • a spatial modulator shutter (not shown) may be located in front of array of photodetectors 534.
• Narrow field collimator 508 is installed on observation module 504.
• Power supply 506 is coupled with electronics box 502 via a connector 538.
  • Electronics box 502 is coupled with observation module 504 via a cable 540.
  • Electronics box 502 is optically coupled with narrow field collimator 508 via an optical fiber 542.
  • Video recorder 512 is coupled with electronics box 502 and with display 510.
  • Laser device 516 transmits a pulsed laser beam 544 toward a target (not shown).
  • Laser device 516 may be a DLA.
  • Laser cooler 518 provides cooling for laser device 516.
  • the transmitted laser beam 544 propagates through optical fiber 542 toward narrow field collimator 508.
  • Collimator 508 collimates laser beam 544 so that the optical axis 546 of laser beam 544 is essentially parallel with the optical axis 548 of array of photodetectors 534, and conveys collimated laser beam 544 toward the target.
  • Optical assembly 528 includes an array of narrow field objective lenses (not shown) packaged above support unit 514.
  • Optical assembly 528 conveys reflected beam 550 to filter 530.
  • Filter 530 performs spectral and spatial filtering on reflected beam 550.
  • Filter 530 may locally darken an entrance of array of photodetectors 534 to overcome glare occurring in system 500.
  • Image processing unit 524 provides control and feedback to filter 530.
  • Filter 530 may be an adaptive Spatial Light Modulator (SLM), a spectral frequency filter, a polarization filter, a light polarizer, a narrow band pass filter, or any other mode selective filter.
  • SLM Spatial Light Modulator
• An SLM filter may be made up of a transmissive Liquid Crystal Display (LCD), a Micro-Electro-Mechanical System (MEMS), or other similar devices.
  • Filter 530 may also be a plurality of filters. The characteristics of filter 530 suit the energy characteristics of reflected beam 550.
• Via image processing unit 524, filter 530 may be programmed to eliminate background radiation surrounding the target which is not within the spectral range of laser device 516. Residual saturation remaining on the eventual image as a result of ambient light sources in the field of view, for example artificial illumination, vehicle headlights, and the like, may be reduced by a factor of approximately 1000 (i.e. to roughly 1/1000th) through adaptive SLM filtering.
  • filter 530 conveys reflected beam 550 to optical multiplier 532.
• Optical multiplier 532 enlarges reflected beam 550. It is noted that filter 530, or optical multiplier 532, or both, may be installed directly on the output end of optical assembly 528. Optical multiplier 532 may also be installed directly on the input end of array of photodetectors 534.
  • Array of photodetectors 534 receives reflected beam 550, and processes and converts reflected beam 550 to image data.
  • Array of photodetectors 534 may be a CCD sensor.
  • the CCD sensor may include external shutters.
  • Array of photodetectors 534 transfers the image data to electronics card 536 via cable 552.
  • Electronics card 536 transfers the image data to electronics box 502 via cable 540.
  • Cable 540 may be any type of wired or wireless communication link.
• Controller 520 synchronizes the timing of array of photodetectors 534. Controller 520 ensures that array of photodetectors 534 is closed (i.e. the sensor unit is deactivated) while transmitted laser beam 544 traverses the range in the immediate vicinity of the system (i.e. distance R_MIN, as discussed above).
  • PC card 526 enables a user to interface with electronics box 502.
  • PC card 526 is embedded with image processing capabilities.
  • PC card 526 allows the image received from array of photodetectors 534 to be analyzed and processed. For example, such processing may include comparing the image to pictures of identified targets stored in a data bank, local processing of specific regions of the image, operation of the SLM function, and the like.
  • the generated image may be presented on display 510 or recorded by video recorder 512.
  • the image may be transferred to a remote location by an external communication link (not shown) such as a wireless transmission channel.
  • Power supply 506 supplies power to the components of electronics box 502 via connector 538.
  • Power supply 506 may be a battery, a generator, or any other suitable power source.
  • an input voltage from power supply 506 allows laser device 516 to operate.
  • Support unit 514 supports observation module 504, as well as narrow field collimator 508 installed above observation module 504.
  • Support unit 514 provides for height and rotational adjustments.
  • Support unit 514 may include a tripod (not shown), support legs 554 for fine adjustments, and an integral stabilization system (not shown) including, for example, viscous shock absorbers.
• DLA laser device 516 allows optical fibers to be used to convey transmitted laser beam 544. This facilitates packaging of laser device 516, which is typically heavy. Laser device 516 is also located separately from observation module 504, resulting in less weight on observation module 504. It is further noted that DLA laser device 516 generates a beam of laser energy having substantially high power for extended periods. Since the generated beam has a high pulse frequency and a relatively low intensity, it may be routed via optical fibers, which have limited tolerance for high intensity power, particularly at its peak. It is further noted that a DLA laser device generates radiation in the near infrared spectral region, which is invisible to the human eye. However, this wavelength is also very close to the visible spectrum. Image intensifiers are very sensitive to this wavelength and provide good image contrast. Thus, an image intensifier used in conjunction with a DLA laser device can provide high image quality, even for targets at long ranges.
  • a DLA laser device generates non-coherent radiation. Therefore, the generated beam has very uniform radiation and results in an image of higher quality than if a coherent laser beam is used. It is further noted that a DLA laser device can operate in a "snapshot" observation mode. Snapshot observation involves transmitting a series of quick flash bursts, which diminishes the duration of time that the laser device is active. This reduces the exposure of a system and the risk of being detected by a foreign element.
  • a DLA laser device enables switching of the array of photodetectors, allowing the array of photodetectors to operate on very short time spans with respect to the long damped vibrations of the gimbals. Such vibrations may cause blurring in the generated image.
  • a DLA laser device is highly efficient in converting power to light. A DLA laser device delivers more light and less heat than other types of laser devices. Accordingly, the laser in the disclosed technique is transmitted through relatively wide optics and at relatively low intensities, so that the safety range is only a few meters from the laser. In contrast, in systems with laser range finders or laser designators, the safety range may reach tens of kilometers.
  • a DLA laser device is suitable for applications where laser transmission through a water medium is required, for example when performing sea surveillance from an airborne system, when performing underwater observation, or other nautical applications.
  • a laser beam in the blue-green range of the visible spectrum provides optimal performance.
• The pulse emitting means, or transmitters, and the sensor described hereinabove are located in the same place, which simplifies the simultaneous control of the pulse and the sensor gating, and their timing or synchronization.
• This co-location of the pulse emitting means and the sensor typifies cases in which the path between the sensor and the observed object is obscured.
  • the disclosed technique is not limited to such a configuration.
  • the pulse emitter and the sensor may well be situated in two different locations, as long as their control, timing or synchronization are appropriately maintained for creating a sensitivity as a function of the range such that an amount of received energy of pulse reflections, reflected from objects located beyond a minimal range, progressively increases with the range along said depth of a field to be imaged.

Abstract

An imaging system, including a transmission source providing pulse(s), and a gated sensor for receiving pulse reflections from objects located beyond a minimal range. The pulse and the gate timing are controlled for creating a sensitivity as a function of range, such that the amount of the energy received progressively increases with the range. Also an imaging method, including emitting pulse(s) to a target area, receiving reflections of pulses reflected from objects located beyond a minimal range, the receiving includes gating detection of the reflections, and progressively increasing the received energy of the reflections, by controlling the pulses and the timing of the gating.

Description

GATED IMAGING
FIELD OF THE DISCLOSED TECHNIQUE
The disclosed technique relates to optical observation systems in general, and to a method and system for imaging using the principle of gated imaging with active illumination, in particular.
BACKGROUND OF THE DISCLOSED TECHNIQUE
Target detection and identification using an imaging system that includes a camera is known in the art. Such a camera often requires a high level of sensitivity to light for use in poor visibility conditions. Also, a long focal lens is commonly employed to achieve high optical magnification. In conditions of poor visibility, for example at night, the low intensity of light reflected from a target, received by a camera used in an imaging system, results in low quality image resolution. In a case of low quality image resolution, such a camera cannot produce an image with an adequate signal-to-noise ratio to exploit the total resolution capability of the camera, and to discern fine details of an imaged target for identification purposes. Therefore, when imaging during night or in poor visibility conditions, such cameras require an auxiliary light source to illuminate a target and thereby improve the quality of the captured image. The auxiliary light source may be a laser device capable of producing a light beam that is parallel to the line-of-sight (LOS) of the camera, and that illuminates the field-of-view (FOV) of the camera or a part thereof. It is noted that television systems, in general, use a similar illumination method for adequate imaging. Also, long focal lenses, in general, have a limited light collecting capability due to their high f-number. A high f-number reduces the capability of a lens to collect enough photons to generate an adequate image, as compared to lenses with small f-numbers. An inherent problem in optical observation systems is the effect inclement weather conditions, such as humidity, haze, fog, mist, smoke and rain, have on the image produced. Particles or substances in the atmosphere may be associated with certain weather conditions. For example, haze results from aerosols in the air. These atmospheric particles or substances may obstruct the area between an observation system and a target to be observed. A similar case may result when an observation system operates in media other than air. For example, in underwater observations, the scattering of water particles, or of air particles above the water, may obstruct the area between an observation system and a target to be observed. In an observation system integrated with a laser device for target illumination, the interference of particles or substances in the medium between a system and a target can cause backscatter of the light beam. This is especially true when an auxiliary light source is used to illuminate a target at night, particularly if the illuminating source is located near the camera. The backscatter of the light beam results in blinding of a camera used in an observation system, especially if the camera has a high level of sensitivity, like an Intensified CCD (ICCD). The blinding of the camera reduces the contrast of an imaged target relative to the background. This blinding of the camera is referred to as self-blinding because it is partly caused by the observation system itself. During night conditions, contrast reduction significantly lowers the achievable range of imaging and target, or object, detection and identification, with respect to the attainable detection and identification range in daylight conditions.
In order to reduce the influence of particles or substances between an observation system and a target, and, at night, in order to achieve longer identification ranges, the imaging sensor of a camera may need to be synchronized with respect to the time at which the light reflected from the illuminated target is due to be received by photodetectors located on the observation system. In particular, a laser generates short light pulses at a certain frequency. The imaging sensor of the camera is activated at the same frequency, but with a time delay that is related to the frequency. The light beam generated by the laser impinges on the target, and illuminates the target and the surrounding area. When the light beam is emitted toward the target, the receiving assembly of the imaging sensor of the camera is deactivated. A small part of the light is reflected from the target back towards the camera, which is activated as this reflected light reaches the camera.
Light which reflects off of particles or substances relatively close to the camera, in comparison to the longer distance between the camera and the target, will reach the receiving assembly of the camera while the camera is still deactivated. This light will therefore not be received by the camera and will not affect the contrast of the image. However, reflected light from the target and its nearby surroundings will reach the camera after the camera has been switched to an "on" state, and so light reflected towards the camera from the target will be fully collected by the camera.
The camera switches from an "off" state to an "on" state in a synchronized manner with the time required for the pulse to travel to the target and return. After the light reflected from the target has been received and stored, the camera reverts to an "off" state, and the system awaits transmission of the following light pulse. This cycle is repeated at a rate established in accordance with the range from the camera to the target, the speed of light in the observation medium, and the inherent limitations of the laser device and the camera. This technique is known as gated imaging with active illumination to minimize backscatter.
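By way of illustration only, the following Python sketch, which is not part of the original disclosure and whose names and values are assumptions, computes the round-trip travel time that governs when the camera must be switched on after each pulse:

```python
# Minimal sketch (not from the patent text): the round-trip time that the
# gating cycle must respect. The function name and values are illustrative.
C_AIR = 3.0e8  # approximate speed of light in air, m/s

def round_trip_time(range_m: float, v: float = C_AIR) -> float:
    """Time for a pulse to reach an object at range_m and return."""
    return 2.0 * range_m / v

# A target at 3 km implies roughly a 20-microsecond round trip, so the
# camera must remain "off" for about that long after each pulse.
print(round_trip_time(3000.0))  # 2e-05 s
```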
US Patent 5,408,541 to Sewell entitled "Method and System for Recognizing Targets at Long Ranges", is directed to a method and system for recognizing targets at ranges near or equal to ranges at which they are initially detected. A detect sensor, such as a radar system or a thermal imaging sensor, detects a target relative to a sensor platform. The detect sensor determines a set of range parameters, such as target coordinates from the sensor platform to the target. The detect sensor transfers the set of range parameters to a laser-aided image recognition sensor (LAIRS). The LAIRS uses the set of range parameters to orient the system to the angular location of the target. A laser source illuminates the area associated with the range parameters with an imaging laser pulse to generate reflected energy from the target. A gated television sensor receives the reflected energy from the illuminated target, and highly magnifies and images the reflected energy. The image is then recognized by either using an automatic target recognition system, displaying the image for operator recognition, or both.
It is noted that Sewell requires a preliminary range measurement. Before the laser source illuminates the target, the laser source directs a low power measurement laser pulse toward the target to measure the range between the system and the target. The range sets a gating signal for the gated television sensor. The gated television sensor is gated to turn on only when energy is reflected from the target. It is also noted that the measuring line from the laser ranger to the target must be very accurately parallel to the LOS of the observation system.
It is an object of the disclosed technique to provide a novel system and method for gated imaging using active illumination that does not require a preliminary range measurement. It is a further object of the disclosed technique to provide for target identification in the FOV of a camera from a minimal range.
SUMMARY OF THE DISCLOSED TECHNIQUE
In accordance with the disclosed technique, there is thus provided an imaging system having a transmission source, the transmission source providing at least one energy pulse. The system includes a sensor for receiving pulse reflections of the at least one energy pulse reflected from objects within a depth of a field to be imaged, the depth of field having a minimal range (R_MIN). The sensor is enabled to gate detection of the pulse reflections, with a gate timing which is controlled such that the sensor starts to receive the pulse reflections after a delay timing substantially given by the time it takes the at least one energy pulse to reach the minimal range and complete reflecting back to the sensor from the minimal range. The at least one energy pulse and the gate timing are controlled for creating a sensitivity as a function of range for the system, such that an amount of received energy of the pulse reflections, reflected from objects located beyond the minimal range, progressively increases with the range along the depth of a field to be imaged. According to one embodiment, this is provided through synchronization between the timing of the at least one energy pulse and the timing of the gate detection. The amount of received energy of the pulse reflections may increase progressively until an optimal range (R_0), be maintained detectable, remain substantially constant, decrease gradually until a maximal range (R_MAX), or be directly proportional to the ranges of the objects to be imaged.
According to another aspect of the disclosed technique, the at least one energy pulse defines a substantial pulse width (T_LASER) commencing at a start time (T_0), and the delay timing is substantially given by the time elapsing from the start time (T_0) until twice the minimal range (R_MIN) divided by the speed at which the at least one energy pulse travels (v), in addition to the pulse width (T_LASER), according to the formula:

delay = 2 × R_MIN / v + T_LASER
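As a rough illustration of this delay-timing formula, the following Python sketch, whose variable names and example values are assumptions and not taken from the disclosure, computes the delay from the pulse start until the gate opens:

```python
# Illustrative sketch of the delay-timing formula above (names are mine,
# not the patent's); v defaults to the speed of light in air.
def gate_delay(r_min_m: float, t_laser_s: float, v: float = 3.0e8) -> float:
    """Delay from the pulse start time T0 until the sensor gate opens."""
    return 2.0 * r_min_m / v + t_laser_s

# Example: R_MIN = 150 m and a 1-microsecond pulse give a 2-microsecond delay.
print(gate_delay(150.0, 1.0e-6))  # 2e-06 s
```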
According to another embodiment, the at least one energy pulse defines a substantial pulse width (T_LASER), a pulse pattern, a pulse shape, and a pulse energy. The sensor is enabled to gate detection of the pulse reflections, with a gating time span during which the sensor is activated (T_ON), a duration of time the sensor is deactivated (T_OFF), and a synchronization timing of the gating with respect to the at least one energy pulse. At least one of the delay timing, the pulse width, the pulse shape, the pulse pattern, the pulse energy, the gating time span the sensor is activated (T_ON), the duration of time the sensor is deactivated (T_OFF), and the synchronization timing, is determined according to at least one of the depth of a field to be imaged, specific environmental conditions the system is used in, the speed at which the system is moving if the system is mounted on a moving platform, and specific characteristics of different objects expected to be found in the depth of field. The pulse width, the duration of time the sensor is deactivated, and the gating time span the sensor is activated may define a cycle time, wherein the at least one energy pulse is provided for the duration of the pulse width, the opening of the sensor is delayed for the duration of time the sensor is deactivated, and the pulse reflections are received for the duration of the gating time span the sensor is activated. The determination according to at least one of the depth of a field, the specific environmental conditions, the speed the system is moving at if the system is mounted on a moving platform, and the specific characteristics of different objects expected to be found in the depth of field, is preferably a dynamic determination, such as varying in an increasing or decreasing manner over time. Optionally, the pulse width and the gating time span are limited to reduce the sensitivity of the system to ambient light sources. For example, the pulse width is shortened progressively and the delay timing is lengthened progressively, with the cycle time not changing. Alternatively, the gating time span is shortened progressively and the delay timing is lengthened progressively, with the cycle time not changing. In addition, the pulse width and the gating time span may both be shortened progressively, and the delay timing lengthened progressively, with the cycle time not changing. According to another aspect, the gating of the sensor is utilized to create a sensitivity as a function of range for the system by changing a parameter, such as the shape of the at least one energy pulse, the pattern of the at least one energy pulse, the energy of the at least one energy pulse, the gating time span the sensor is activated (T_ON), the duration of time the sensor is deactivated (T_OFF), the pulse width (T_LASER) of the at least one energy pulse, the delay timing, or the synchronization timing between the gating and the timing of providing the at least one energy pulse. The changing of a parameter may be utilized according to at least one of: the depth of field, specific environmental conditions the system is used in, the speed the system is moving at if the system is mounted on a moving platform, and characteristics of different objects expected to be found in the depth of field.
Optionally, a controller for controlling the synchronization is provided, preferably wherein at least one repetition of the cycle time forms part of an individual video frame, and a number of the repetitions forms an exposure number per video frame. Furthermore, preferably, a control mechanism for dynamically controlling and varying the exposure number is also provided. Mutual blinding between the system and a similar system passing one another is optionally eliminated by statistical solutions, such as lowering the exposure number, a random or pre-defined change in the timing of the cycle time during the course of the individual video frame, and a change in frequency of the exposure number. Mutual blinding between the system and a similar system passing one another may also be eliminated by synchronic solutions, such as establishing a communication channel between the system and the similar system, or letting each of the system and the similar system go into listening modes from time to time, in which the at least one energy pulse is not emitted for a listening period. After the listening period, any of the system and the similar system resumes emitting the at least one energy pulse if no pulses were collected during the listening period, and waits until an end of a cyclic sequence before resuming emitting the at least one energy pulse if pulses were collected during the listening period. Furthermore, the systems may change a pulse start transmission time in the individual video frames.
The exposure number may be varied by the control mechanism according to a level of ambient light. An image intensifier may be applied, in which case the exposure number may be varied by the control mechanism according to a level of current consumed by the image intensifier. The control mechanism may include image processing means for locating areas in the sensor in a state of saturation, and image processing means for processing a variable number of exposures. Such image processing means may be utilized to take at least two video frames, one with a high exposure number, the other with a low exposure number, where the exposure numbers of the at least two video frames are determined by the control mechanism. The at least two video frames are combined to form a single video frame by combining dark areas from frames with a high exposure number and saturated areas from frames with a low exposure number.
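As a minimal sketch of the two-frame combination just described, the following Python code, whose NumPy usage, function names and saturation threshold are assumptions rather than part of the disclosure, merges a high-exposure frame and a low-exposure frame:

```python
# Illustrative sketch of the two-frame combination described above: dark
# regions come from the high-exposure frame and saturated regions from the
# low-exposure frame. The NumPy usage and threshold are my assumptions.
import numpy as np

def combine_frames(high_exp: np.ndarray, low_exp: np.ndarray,
                   saturation_level: int = 250) -> np.ndarray:
    """Merge two 8-bit frames of the same scene into one frame."""
    combined = high_exp.copy()
    saturated = high_exp >= saturation_level      # blown-out pixels
    combined[saturated] = low_exp[saturated]      # take them from the low frame
    return combined
```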
According to a further feature of the invention, a pulse width (T_LASER) of the at least one energy pulse is substantially defined in accordance with the following equation: T_LASER = 2 × (R_0 − R_MIN) / v, where v is the speed at which the at least one energy pulse travels.
The at least one energy pulse may include several pulses wherein the sensor receives several pulses of the at least one energy pulse reflected from at least one object during the gating time span the sensor is activated.
The sensor may be enabled to gate detection of the pulse reflections, with a gating time span the sensor is activated (T_ON), and a duration of time the sensor is deactivated (T_OFF), which are substantially defined in accordance with the following equations: T_ON = 2 × (R_0 − R_MIN) / v and T_OFF = 2 × R_MIN / v + T_LASER, where T_LASER is the pulse width of the at least one energy pulse, and v is the speed at which the at least one energy pulse travels.
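A hedged numerical sketch of these gating equations follows; the names and the example ranges are illustrative assumptions only:

```python
# Sketch of the gating equations above; all names and values are illustrative.
# Note that with these formulas T_ON equals the pulse width T_LASER.
def gating_times(r_min_m: float, r0_m: float, v: float = 3.0e8):
    t_laser = 2.0 * (r0_m - r_min_m) / v      # pulse width
    t_on = 2.0 * (r0_m - r_min_m) / v         # sensor "on" span
    t_off = 2.0 * r_min_m / v + t_laser       # sensor "off" span
    return t_laser, t_on, t_off

# R_MIN = 150 m, R_0 = 3000 m: pulse and gate of 19 us, and 20 us "off" time.
print(gating_times(150.0, 3000.0))  # (1.9e-05, 1.9e-05, 2e-05)
```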
Optionally, the sensor is enabled to gate detection of the pulse reflections in accordance with a Long Pulsed Gated Imaging (LPGI) timing technique. The sensor may also be enabled to gate detection of the pulse reflections with a gating time span the sensor is activated (T_ON), and a duration of time the sensor is deactivated (T_OFF), which are substantially defined in accordance with the following equations: T_ON = 2 × (R_MAX − R_MIN) / v and T_OFF = 2 × R_MAX / v, where v is the speed at which the at least one energy pulse travels.
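The following short Python sketch evaluates these LPGI-style equations; the function name and example values are assumptions, not part of the disclosure:

```python
# Sketch of the LPGI-style gating equations above (illustrative values).
def lpgi_times(r_min_m: float, r_max_m: float, v: float = 3.0e8):
    t_on = 2.0 * (r_max_m - r_min_m) / v   # gate spans the whole depth of field
    t_off = 2.0 * r_max_m / v              # full round trip to the maximal range
    return t_on, t_off

print(lpgi_times(150.0, 6000.0))  # (3.9e-05, 4e-05) s
```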
The at least one energy pulse may be in the form of electromagnetic energy or mechanical energy.
The sensor may be a Complementary Metal Oxide Semiconductor (CMOS), a Charge Coupled Device (CCD), a Gated Intensifier Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), a Gated Intensified Active Pixel Sensor (GIAPS), or a Gated Image Intensifier.
The sensor may further include an external shutter, at least one photodetector, and may also be enabled to autogate. A display apparatus for displaying images constructed from the light received in the sensor may also be used, for example, a Head Up Display (HUD) apparatus, an LCD display apparatus, a planar optic apparatus, and a holographic based flat optic apparatus.
A storage unit for storing images constructed from the pulse reflections received in the sensor may be provided, as well as a transmission device for transmitting images constructed from the pulse reflections received in the sensor.
The system may be mounted on a moving platform, and stabilized. Stabilization may preferably include stabilization using a gimbal, stabilization using feedback from a gyroscope to a gimbal, stabilization using image processing techniques based on a spatial correlation between consecutively generated images of the object to be imaged, and stabilization based on sensed vibrations of the sensor. Optionally, the system includes at least one ambient light sensor. Furthermore, a pulse detector for detection of pulses emitted from an approaching similar system may be provided, an image-processing unit may be added, a narrow band pass filter may be functionally connected to the sensor, and a spatial modulator shutter, or a spatial light modulator, may be provided. Optionally, an optical fiber for transmitting the at least one energy pulse towards the objects to be imaged may be added. Furthermore, a polarizer may be provided for filtering out incoming energy which does not conform to the polarization of the pulse reflections of the at least one polarized energy pulse emitted from the transmission source. Preferably, the sensitivity of the system relates to a gain and responsiveness of the sensor in proportion to an amount of energy received by the sensor, wherein the gain received by the sensor as a function of range R is defined by the following convolution formula:
Gain(R) = (1 / T_LASER) × ∫ L(t) · C(t + 2R/v) dt

wherein L(t) defines a Boolean function representing an on/off status of the transmission source, irrespective of the state of the sensor, where L(t) = 1 if the transmission source is on and L(t) = 0 if the transmission source is off; C(t) defines a Boolean function representing the ability of the sensor to receive incoming pulse reflections according to the state of the sensor, where C(t) = 1 if the sensor is in an activated state and C(t) = 0 if the sensor is in a deactivated state; and v is the speed at which the at least one energy pulse travels. A value for radiant intensity may be obtained by multiplying the convolution formula by a geometrical propagation attenuation function.
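The convolution can be evaluated numerically, as in the following sketch; the discretization, time step and timing values are assumptions chosen purely for illustration:

```python
# Numerical sketch of the gain-versus-range convolution above. L(t) and C(t)
# are the Boolean laser and gate functions from the text; the time step and
# timing values below are my assumptions, chosen so that R_MIN = 150 m,
# R_0 = 300 m and R_MAX = 450 m.
import numpy as np

V = 3.0e8          # pulse speed, m/s
DT = 1.0e-9        # 1 ns integration step
T_LASER = 1.0e-6   # pulse width
T_OFF = 2.0e-6     # gate opens 2 us after pulse start
T_ON = 1.0e-6      # gate stays open for 1 us

t = np.arange(0.0, T_LASER, DT)   # emission times of the pulse's packets
L = np.ones_like(t)               # L(t) = 1 while the laser is on

def gain(r_m: float) -> float:
    """Fraction of the pulse whose reflection from range r_m meets an open gate."""
    arrival = t + 2.0 * r_m / V                        # return time of each packet
    C = (arrival >= T_OFF) & (arrival < T_OFF + T_ON)  # gate open on arrival?
    return float(np.sum(L * C) * DT / T_LASER)

# Sweeping the range reproduces the rise, plateau and fall of Figure 4.
for r in (100.0, 225.0, 300.0, 375.0, 450.0):
    print(r, round(gain(r), 3))
```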
The transmission source may be a laser generator, an array of diodes, an array of LEDs, or a visible light source.
According to a further aspect of the invention, there is also provided an imaging method, including emitting at least one energy pulse to a target area, and receiving at least one reflection of the at least one energy pulse reflected from objects within a depth of a field to be imaged, the depth of field having a minimal range (R_MIN). The receiving includes gating detection of the at least one reflection such that the at least one reflection is detected after a delay timing substantially given by the time it takes the at least one energy pulse to reach the minimal range and complete reflecting back, and progressively increasing the received energy of the at least one reflection reflected from objects located beyond the minimal range along the depth of a field to be imaged, by controlling the at least one energy pulse and the timing of the gating.
Preferably, the procedure of increasing includes increasing the received energy of the at least one reflection reflected from objects located beyond the minimal range along the depth of a field to be imaged, up to an optimal range (R_0). Furthermore, the received energy of the at least one reflection reflected from objects located beyond the optimal range is maintained detectable along the depth of a field to be imaged, up to a maximal range (R_MAX). This may be achieved by maintaining the received energy, of the at least one reflection reflected from objects located beyond the optimal range along the depth of a field to be imaged up to the maximal range, substantially constant, by gradually decreasing the received energy, or by increasing the received energy of the at least one reflection in direct proportion to the ranges of the objects within the depth of field to be imaged.
Optionally, the at least one energy pulse defines a substantial pulse width (T_LASER) commencing at a start time (T_0), and the delay timing is substantially given by the time elapsing from the start time (T_0) until twice the minimal range divided by the speed at which the at least one energy pulse travels (v), in addition to the pulse width (T_LASER):

delay = 2 × R_MIN / v + T_LASER
Furthermore, the at least one energy pulse defines a substantial pulse width (T_LASER), a pulse pattern, a pulse shape, and a pulse energy. The procedure of gating includes a gating time span during which a sensor utilized for the receiving is activated (T_ON), a duration of time the sensor is deactivated (T_OFF), and a synchronization timing of the gating with respect to the at least one energy pulse. At least one of the delay timing, the pulse width, the pulse shape, the pulse pattern, the pulse energy, the gating time span the sensor is activated (T_ON), the duration of time the sensor is deactivated (T_OFF), and the synchronization timing is determined according to at least one of the depth of a field, specific environmental conditions the method is used in, a moving speed of a moving platform if the sensor is mounted on the moving platform, and specific characteristics of different objects expected to be found in the depth of field. Optionally, the method further includes the procedure of autogating.
Preferably, the procedure of controlling includes progressively changing at least one parameter, such as changing a pattern of the at least one energy pulse, changing a shape of the at least one energy pulse, changing the energy of the at least one energy pulse, changing a gating time span a sensor utilized for the receiving is activated (T_ON), changing a duration of time the sensor is deactivated (T_OFF), changing an energy pulse width (T_LASER) of the at least one energy pulse, changing the delay timing, and changing a synchronization timing between the gating and the emitting. The procedure of controlling may also include changing the at least one parameter according to at least one of the depth of field, the specific environmental conditions the method is used in, the moving speed of the moving platform if the sensor is mounted on the moving platform, and characteristics of different objects expected to be found in the depth of field. The procedure of controlling may further include the sub-procedures of providing the at least one energy pulse for a duration of the pulse width, delaying the opening of the sensor for the duration of time the sensor is deactivated (T_OFF), and receiving energy pulses reflected from objects for the duration of the gating time span the sensor is activated (T_ON). The pulse width, the duration of time the sensor is deactivated (T_OFF) and the gating time span the sensor is activated (T_ON) may define a cycle time. Optionally, at least one repetition of the cycle time may form part of an individual video frame, and a number of repetitions may form an exposure number for the video frame.
The method may further include the procedure of eliminating mutual blinding between a system using the method and a similar system using the method, passing one another, by statistical solutions such as lowering the exposure number, a random or pre-defined change in the timing of the cycle time during the course of an individual video frame, and a change in the frequency of the exposure number. Alternatively, the method may further include the procedure of eliminating mutual blinding between a system using the method and a similar system using the method passing one another, by synchronic solutions such as establishing a communication channel between the system and the similar system, or letting each of the system and the similar system go into listening modes from time to time, in which the at least one energy pulse is not emitted for a listening period. After the listening period, any of the system and the similar system resumes emitting the at least one energy pulse if no pulses were collected during the listening period, and waits until an end of a cyclic sequence before resuming emitting the at least one energy pulse if pulses were collected during the listening period. Furthermore, the systems may change a pulse start transmission time in the individual video frames.
The exposure number may be dynamically varied by a control mechanism, such as by adjusting the exposure number according to a level of ambient light, or adjusting the exposure number according to a level of current consumed by an image intensifier utilized for intensifying the detection of the at least one reflection. Optionally, the method also includes image processing by locating areas in the sensor in a state of saturation by the control mechanism. The image processing may be applied for a variable number of exposures by the control mechanism. The image processing can include taking at least two video frames, one with a high exposure number, the other with a low exposure number, by image processing of a variable number of exposures, determining exposure numbers for the at least two video frames, and combining frames to form a single video frame by combining dark areas from frames with a high exposure number and saturated areas from frames with a low exposure number. Preferably, the pulse width and the gating time span the sensor is activated (T_ON) are limited to eliminate or reduce the sensitivity of the sensor to ambient light sources. Preferably, the procedure of increasing is dynamic, such as by varying the sensitivity of the sensor in a manner varying over time, such as in an increasing, a decreasing, a partially increasing or a partially decreasing manner over time. The procedure of controlling may include shortening the pulse width progressively and lengthening the delay timing progressively, while retaining a cycle time of the gating unchanged; shortening the gating time span progressively and lengthening the delay timing progressively, while retaining a cycle time of the gating unchanged; or shortening the pulse width and the gating time span progressively and lengthening the delay timing progressively, while retaining a cycle time of the gating unchanged.
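The following Python sketch illustrates one such progressive schedule; the step sizes, counts and function name are assumptions for illustration only:

```python
# Illustrative sketch of the dynamic timing described above: the pulse width
# shrinks and the gate delay grows from exposure to exposure while the cycle
# time stays fixed. Step sizes and counts are assumptions.
def dynamic_schedule(cycle_s: float, t_laser0_s: float, delay0_s: float,
                     step_s: float, exposures: int):
    schedule = []
    t_laser, delay = t_laser0_s, delay0_s
    for _ in range(exposures):
        schedule.append((t_laser, delay, cycle_s))
        t_laser = max(t_laser - step_s, 0.0)   # progressively shorter pulse
        delay += step_s                        # progressively longer delay
    return schedule

for exposure in dynamic_schedule(50e-6, 1.0e-6, 2.0e-6, 0.1e-6, 5):
    print(exposure)
```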
Preferably, the method includes the procedure of calculating the energy pulse width (T_LASER), substantially defined in accordance with the following equation: T_LASER = 2 × (R_0 − R_MIN) / v, where v is the speed the at least one energy pulse travels at.
The procedure of receiving may include receiving several pulses of the at least one energy pulse reflected from objects during a gating time span a sensor utilized for the receiving is activated (T_ON). The gating may also include a duration of time the sensor is deactivated (T_OFF), and the controlling may include controlling the gating time span the sensor is activated (T_ON) and the duration of time the sensor is deactivated (T_OFF), substantially defined in accordance with the following equations: T_ON = 2 × (R_0 − R_MIN) / v and T_OFF = 2 × R_MIN / v + T_LASER, where R_0 is an optimal range.
Optionally, the gating may include gating in accordance with a Long Pulsed Gated Imaging (LPGI) timing technique, such as when a gating time span a sensor utilized for the receiving is activated (T_ON), and a duration of time the sensor is deactivated (T_OFF), are substantially defined in accordance with the following equations: T_ON = 2 × (R_MAX − R_MIN) / v and T_OFF = 2 × R_MAX / v, where v is the speed the at least one energy pulse travels at.
The procedure of emitting may include emitting at least one energy pulse in the form of electromagnetic energy or mechanical energy, and generating the at least one energy pulse by an emitter such as a laser generator, an array of diodes, an array of LEDs, or a visible light source.
The gating may include gating by a sensor, such as a Complementary Metal Oxide Semiconductor (CMOS), a Charge Coupled Device (CCD), a Gated Intensifier Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), or a Gated Intensified Active Pixel Sensor (GIAPS), and gating with a CCD sensor that includes an external shutter.
Preferably, the method further includes the procedure of intensifying the detection of the at least one reflection, by intensifying the at least one reflection with a gated image intensifier or with a sensor with shutter capabilities.
Optionally, the method also includes displaying at least one image constructed from the received at least one reflection. The displaying may be on a display apparatus, for example, a Head Up Display (HUD), an LCD display, a planar optic display, and a holographic based flat optic display.
Furthermore, the method may also include storing or transmitting at least one image constructed from the received at least one reflection.
The method may also include determining the level of ambient light in the target area, determining if other energy pulses are present in the target area, filtering received energy pulse reflections using a narrow band pass filter, and overcoming glare from other energy pulses by locally darkening the entrance of an image intensifier utilized for the intensifying by using apparatuses such as a spatial modulator shutter, a spatial light modulator, or a liquid crystal display. Optionally, the procedure of emitting includes emitting at least one polarized electromagnetic pulse, and the procedure of receiving includes filtering received energy according to a polarization conforming to an expected polarization of the at least one reflection.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which: Figure 1 is a schematic illustration of the operation of a system, constructed and operative in accordance with an embodiment of the disclosed technique;
Figure 2A is a schematic illustration of a laser pulse propagating through space; Figure 2B is a schematic illustration of a laser pulse propagating towards, and reflecting from, an object;
Figure 3 is a graph depicting gated imaging of both a laser and a sensor as a function of time;
Figure 4 is a typical sensitivity graph, normalized to 1, depicting sensitivity of a gated sensor as a function of the range between the sensor and a target;
Figure 5 is a graph depicting timing adjustments relating to the pulse width of a laser beam, as a function of time;
Figure 6 is a graph depicting the observation capability of a system with the timing technique depicted in Figure 5, as a function of range;
Figure 7 is a graph depicting a specific instant in time in relation to the scenario depicted in Figure 6, as a function of range;
Figure 8 is a graph depicting a specific instant in time after the specific time instant depicted in Figure 7, as a function of range;
Figure 9 is a sensitivity graph as a function of range, normalized to 1, depicting the sensitivity of a gated sensor, in accordance with the timing technique depicted in Figure 5;
Figure 10 is a sensitivity graph as a function of range, normalized to 1, depicting the sensitivity of a gated sensor, in accordance with a long pulse gated imaging timing technique; Figure 11 is a graph depicting the radiant intensity captured by a sensor from reflections from a target and from backscatter, as a function of the range between the sensor and the target, for both a gated and a non-gated sensor, during a simulation; Figure 12 is an intensity graph as a function of time, normalized to 1, depicting adjustment of the intensity shape or pattern of a laser pulse; Figure 13 is an intensity graph as a function of range, normalized to 1, depicting the advancement of the intensity shaped or patterned laser pulse depicted in Figure 12; Figure 14 is a sensitivity graph as a function of range, normalized to 1, depicting the sensitivity of a gated sensor, in accordance with the laser shaping technique depicted in Figure 12;
Figure 15 is a graph depicting the sequence of pulse cycles and the collection of photons over an individual field, as a function of time; Figure 16 is a graph depicting a timing technique where a laser pulse width is changed dynamically over the course of obtaining an individual frame, as a function of time;
Figure 17 is a graph depicting a timing technique where a duration that a sensor unit is activated is changed dynamically over the course of obtaining an individual frame, as a function of time;
Figure 18 is a graph depicting a timing technique where both a laser pulse width and a duration that a sensor unit is activated are changed dynamically over the course of obtaining an individual frame, as a function of time; Figure 19 is a graph depicting timing adjustments during the process of obtaining an individual video field, where a total of 6666 exposures are performed, as a function of time;
Figure 20 is a graph depicting timing adjustments during the process of obtaining an individual video field, where a total of 100 exposures are performed, as a function of time; Figure 21 is a pair of graphs depicting timing adjustments during the process of obtaining an individual video field, both as a function of time, where the number of exposures in a field is controlled based on an image processing technique; Figure 22 is a schematic illustration of the two image frames acquired in Figure 21, and the combination of the two frames;
Figure 23 is a pair of graphs depicting a synchronization technique for overcoming mutual blinding, both as a function of time;
Figure 24 is a block diagram of a method for target detection and identification, accompanied by an illustration of a conceptual operation scenario, operative in accordance with another embodiment of the disclosed technique;
Figure 25 is a schematic illustration of a system, constructed and operative in accordance with another embodiment of the disclosed technique; and
Figure 26 is a schematic illustration of a system, constructed and operative in accordance with a further embodiment of the disclosed technique.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The disclosed technique provides methods and systems for target or object detection, identification and imaging, using optical observation techniques based on the gated imaging principle with active illumination. The disclosed technique is applicable to any kind of imaging in any range scale, including short ranges on the order of hundreds of meters, and also extremely short ranges, such as ranges on the order of centimeters, millimeters and even smaller units of measurement, for industrial and laboratory applications. Accordingly, the terms "target" and "object" refer to any object in general, and although the disclosed technique is described herein with reference to "detection and identification", it is equally applicable to any kind of image acquisition for any purpose, such as picturing, filming, acquiring visual information, and the like. The disclosed technique is described herein with reference to laser light pulses; however, any suitable pulsed emission of electromagnetic energy radiation (photons of any known wavelength) may be used, including light in the visible and non-visible spectrum, UV, near and far IR, radar, microwave, RF, gamma or other photon radiation, and the like. Other pulsed sources of energy may also be used, including mechanical energy such as acoustic waves, ultrasound, and the like.
Accordingly, the disclosed technique provides for manipulation of the sensitivity and image gain of a gated sensor, as a function of the imaged depth of field, by changing the width of the transmitted laser pulses, by changing the state of the sensor in a manner relating to the distance to the target, by adjusting the number of exposures in a gating cycle, by synchronization of the sensor to the pulse timing, and by other factors. Transmitted or emitted pulses, pulsed energy, pulsed beam and the like refer to at least one pulse, or to a beam of pulses emitted in series. The disclosed technique allows for dynamic imaging and information gathering in real-time. According to one embodiment, the optical observation system is mounted on a moving platform, for example, a vehicle such as a military aircraft. The system then provides for the detection and identification of potential military targets in combat situations. According to another aspect of the disclosed technique, polarized light or electromagnetic radiation is employed, thereby providing for filtering out excessive ambient light and undesired reflections from background objects. Accordingly, the transmitting source emits a polarized pulse which reflects from the target objects as a polarized pulse reflection. A polarization filter, or polarizer, allows into the reflection sensor only incoming energy that conforms to the expected polarization of the pulse reflections. Most objects reflect the original polarization of the emitted pulse, but some reflective objects may alter this polarization.
Reference is now made to Figure 1, which is a schematic illustration of the operation of a system, generally referenced 100, constructed and operative in accordance with an embodiment of the disclosed technique.
System 100 includes a laser device 102, and a sensor unit 104. Laser device 102 generates a laser beam 106 in the form of a single pulse or a series of continuous pulses. Laser device 102 emits laser beam 106 toward a target 108. Laser beam 106 illuminates target 108. Sensor unit 104 may be a camera, or any other sensor or light collecting apparatus.
Sensor unit 104 receives reflected laser beam 110 reflected from target 108. Sensor unit 104 includes at least one photodetector (not shown) for processing and converting received reflected light 110 into an image 112 of the target. Sensor unit 104 may also include an array of photodetectors. Sensor unit 104 may be in one of two states. During an "on" state, sensor unit 104 receives incoming light, whereas during an "off" state sensor unit 104 does not receive incoming light. In particular, the shutter (not shown) of sensor unit 104 is open during the "on" state and closed during the "off" state. The term "activated" is used herein to refer to the sensor being in the "on" state, whereas the term "deactivated" is used herein to refer to the sensor being in the "off" state. Image 112 may be presented on a display, such as a video or television display. The display may be a Head-Up Display (HUD), a Liquid Crystal Display (LCD), a display implemented with a planar optic apparatus, a holographic based flat optic display, and the like. Image 112 may also be stored on a storage unit (not shown), or transmitted by a transmission unit (not shown) to another location for processing. A controller (not shown) controls and synchronizes the operation of sensor unit 104. It is noted that sensor unit 104 can also be enabled to autogate. The term autogating refers to the automatic opening and closing of the sensor shutter according to the intensity of light received. Autogating is prevalently applied for purposes such as blocking exposure of the sensor unit to excessive light, and has no direct connection to active transmission of pulses, their timing, or their gating.
Atmospheric conditions and substances, such as humidity, haze, fog, mist, smoke, rain, airborne particles, and the like, represented by zone 114, exist in the surrounding area of system 100. Backscatter from the area in the immediate proximity to system 100 has a more significant influence on system 100 than backscatter from a more distant area. For example, an interfering particle relatively close to system 100 will reflect back a larger portion of beam 106 than a similar particle located relatively further away from system 100. Accordingly, the area proximate to system 100 from which the avoidance of backscattered light is desirable can be defined with an approximate range designated as R_MIN. The target is not expected to be located within range R_MIN, therefore the removal of the influences of atmospheric conditions or other interfering substances in this range from the captured image is desirable. Such atmospheric conditions and substances can also be present beyond R_MIN, but their removal is both problematic and of lesser significance. These atmospheric conditions and substances interfere with laser beam 106 on its way to illuminating target 108, and with laser beam 110 reflected from target 108. Sensor unit 104 is deactivated for the duration of time it takes laser beam 106 to completely propagate a distance R_MIN toward target 108, including the return path to sensor unit 104 from distance R_MIN. Range R_MIN is the minimum range for which sensor unit 104 is deactivated. The distance between system 100 and target 108 is designated range R_MAX. It is noted that target 108 does not need to be located at a distance of range R_MAX, and can be located anywhere between range R_MIN and range R_MAX. Range R_MAX represents the maximal range in which target 108 is expected to be found, as the exact location of target 108 is not known when system 100 is initially used. In order to clearly explain how the disclosed technique provides for the manipulation of the sensitivity and image gain of a gated sensor, it is useful to illustrate how a laser pulse propagates through space, and how a laser pulse propagates towards, and reflects from, an object. Reference is now made to Figure 2A, which is a schematic illustration of a laser pulse propagating through space. For the sake of simplicity, only classical mechanics considerations are applied in the following description. Laser pulse 116 emanates from laser device 115, and travels in the direction of arrow 121. Arrow 121 points towards an increase in range. Laser pulse 116 can be considered a train of small packets of energy 117, with each packet "connected" to the next, much like box cars of a real train are connected to one another. The first packet of energy of laser pulse 116 is referred to as head packet 118. The last packet of energy of laser pulse 116 is referred to as tail packet 119. Head packet 118 is coloured black and tail packet 119 is coloured gray for purposes of clarity only. Since laser pulse 116 is made up of small packets of energy 117, and each packet of energy, at a particular instant, is located at a particular point in space, laser pulse 116 can be described as having a specified length, spanning from the location of tail packet 119 up to the location of head packet 118.
The front part of laser pulse 116, where head packet 118 is located, can therefore be referred to as the "head" of the laser pulse, and the back part of laser pulse 116, where tail packet 119 is located, can therefore be referred to as the "tail" of the laser pulse. Furthermore, the middle part of laser pulse 116, namely, the packets located between the head and the tail of the laser pulse, can be referred to as the "body" of the laser pulse. These definitions of the head and tail of a laser pulse will be herein referred to as such. The length of laser pulse 116 can also be described temporally, in terms of how much time laser device 115 is activated, in order to generate enough packets of energy, and to let the packets of energy propagate through space, to cover the range extended by laser pulse 116.
Reference is now made to Figure 2B, which is a schematic illustration of a laser pulse propagating towards, and reflecting from, an object. In Figure 2B, laser pulses are emitted from laser device 122, and are received by sensor unit 124. Figure 2B illustrates a particular instant in time when various laser pulses, emitted from laser device 122 at different times, are either, propagating towards object 125, reflecting from object 125, or passing object 125. Laser pulses propagating towards, and reflecting from, object 125, follow trajectory 126 and its direction. For the purposes of clarity only, the head of a laser pulse has been coloured black, and the tail of a laser pulse has been coloured gray.
At the particular instant in time illustrated in Figure 2B, laser pulse 123A is still being generated, as only the head, and part of the body, of laser pulse 123A, has been generated. The tail, and the rest of the body, of laser pulse 123A has not yet been generated. Laser pulse 123A propagates in the direction of arrow 127A towards object 125. Laser pulse 123B is a full, or complete, laser pulse, which propagates in the direction of arrow 127B towards object 125. It is noted that laser pulse 123B has a head, tail and body. Laser pulse 123C has already partially impinged on object 125, as the head, as well as part of the body, of laser pulse 123C, has already impinged on object 125, and has begun to reflect back towards sensor unit 124, in the direction of arrow 127C. The tail, as well as part of the body, of laser pulse 123C, has not yet impinged on object 125, and is therefore still propagating away from laser device 122. It is noted that, regarding laser pulse 123C, the head portion of the laser pulse is propagating in a direction of decreased range, back towards sensor unit 124, while, simultaneously, the tail portion of the laser pulse is propagating in a direction of increased range, away from laser device 122. Laser pulse 123D is a full, or complete, laser pulse, which has completely impinged upon and reflected from object 125. Laser pulse 123D propagates in the direction of arrow 127D towards sensor unit 124. Laser pulse 123E has already been partially received by sensor unit 124, as only the tail, and part of the body, of laser pulse 123E is depicted in Figure 2B. The head, and the rest of the body, of laser pulse 123E, has already been received by sensor unit 124. Laser pulse 123F is a full, or complete, laser pulse which did not reflect from object 125, and propagates in the direction of arrow 127F. The head, as well as part of the body, of laser pulse 123F, has already passed object 125. The tail, as well as part of the body, of laser pulse 123F, has not yet passed object 125. It is noted that laser pulse 123F was emitted at the same time laser pulse 123C was emitted. It is furthermore noted that not all the laser pulses emitted from laser device 122 will reflect from the same object or location, for example laser pulse 123C as compared to laser pulse 123F. In general, laser device 122 will emit many laser pulses, in order to illuminate an area, as it is not known in advance which objects in the path of the laser pulses will reflect the laser pulses back towards a sensor unit, and how many reflections will be received from the various ranges the laser pulses propagate through.
Reference is now made to Figure 3, which is a graph, generally designated 120, depicting gated imaging as a function of time, of both a laser and a sensor. In graph 120, a laser pulse is transmitted at time t0. The duration of the laser pulse, or the pulse width of the laser beam (in other words, the time the laser is on), is designated T_LASER, and extends between time t0 and time t1. Between time t1 and time t5, there is no transmission of a laser pulse, depicted in Figure 3 by arrows demarcating a laser off time. It is noted that the description herein refers to a square pulse for the sake of simplicity and clarity. It is further noted that the description herein is equally applicable to a general pulse shape or pattern, in which threshold values define the effective beginning, duration and end of the pulse, rendering its analysis analogous. The sensor unit is initially in the "off" state for as long as the laser pulse is emitted, between time t0 and time t1 (T_LASER). The sensor unit is further maintained in the "off" state between time t1 and time t2, or during time span Δt_MIN. The sensor unit remains in the "off" state so as not to receive reflections of the entire laser pulse (including the end portion of the pulse) from objects located within a range R_MIN from the system. As depicted in Figure 3, T_OFF, the time the sensor unit is in an "off" state, extends from time t0 to time t2. At time t2, the sensor unit is activated and begins receiving reflections. The reflections from objects located immediately beyond range R_MIN from the system are received from photons at the rear end of the transmitted pulses which have impinged on these objects. The front portion of the transmitted pulses is not detected for these objects located immediately beyond range R_MIN. At time t3, the sensor unit first receives reflections from the entire width of the pulses. The range of the objects, for which the entire width of the pulse is first received, is designated R_0. Thus the time span between time t2 and time t3 is equal to T_LASER. The sensor unit remains in the "on" state until time t5. As depicted in Figure 3, T_ON, the time the sensor unit is in an "on" state, extends from time t2 to time t5. At time t4, the sensor unit still receives the full reflection of the pulses from objects located up to a range designated R_1. Reflections from objects beyond this range reflect progressively smaller portions of the laser pulse. The tail portion of the reflected pulse is cut off to a greater extent, as the sensor shifts from its "on" state to its "off" state, the further away such objects are located beyond R_1, up to a maximal range designated R_MAX. R_MAX is the range beyond which no reflections are received at all, due to the deactivation of the sensor to its "off" state. At time t5, corresponding to receiving reflections from objects at R_MAX, the sensor unit receives reflections only from photons at the very front end of pulses whose tails are just about to pass range R_1. Thus the time span between time t4 and time t5 is equal to T_LASER. Time span Δt_MAX corresponds to the time it takes a laser pulse, once it has been fully transmitted, to reach objects located at R_MAX.
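The relationship between these timing parameters and the resulting ranges can be sketched as follows in Python; the variable names and numerical values are assumptions for illustration, with v taken as the speed of light in air:

```python
# Sketch mapping the Figure 3 timing to the ranges it defines (my variable
# names; v is assumed to be the speed of light in air, 3e8 m/s).
V = 3.0e8

def ranges_from_timing(t_laser: float, t_off: float, t_on: float):
    r_min = (t_off - t_laser) * V / 2.0        # full pulse returns while gate is shut
    r_0 = t_off * V / 2.0                      # first full-width reflection received
    r_1 = (t_off + t_on - t_laser) * V / 2.0   # last full-width reflection received
    r_max = (t_off + t_on) * V / 2.0           # last partial reflection received
    return r_min, r_0, r_1, r_max

# A 1 us pulse, 2 us off time and 2 us gate give 150, 300, 450 and 600 m.
print(ranges_from_timing(1.0e-6, 2.0e-6, 2.0e-6))
```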
Reference is now made to Figure 4, which is a typical sensitivity graph, generally designated 130, depicting the sensitivity of the sensor unit, referred to in Figure 1, as a function of the range between the sensor unit and a target area. The vertical axis represents the relative sensitivity of the sensor unit, and has been normalized to 1. The horizontal axis represents the range between the sensor unit and a target. The term "sensitivity", referred to in this context, relates to the gain or responsiveness of the sensor unit in proportion to the number of reflected photons actually reaching the sensor unit when it is active, and not to any variation in the performance of the sensor, per se. Variation in the performance of the sensor has no relation to the range from which light is reflected, if the attenuation of light, due to geometrical and atmospheric considerations, is ignored. The attenuation of light due to geometrical and atmospheric considerations is ignored herein for the sake of simplicity. Accordingly, the amount of received energy of the pulse reflections, reflected from objects located beyond a minimal range R_MIN, progressively increases with the range along the depth of a field to be imaged.
Range R_MIN is the range up to which the full reflections from a target at this range will impinge upon sensor unit 104, referred to in Figure 1, in a deactivated state. With reference to Figure 3, range R_MIN corresponds to the time duration between time t0 and time t2. Range R_0 is the range from which full reflections first arrive at sensor unit 104 while it is activated. The reflections are the consequence of the whole span of the pulse width passing in its entirety over a target located at range R_0 from sensor unit 104. With reference to Figure 3, the distance between range R_MIN and range R_0 corresponds to the time duration between time t2 and time t3. Range R_1 is the range up to which full reflections from objects can still be obtained. With reference to Figure 3, the distance between range R_0 and range R_1 corresponds to the time duration between time t3 and time t4. Range R_MAX is the range up to which reflections, or any portion thereof, can still be obtained, i.e., the maximum range for which sensor sensitivity is high enough for detection. With reference to Figure 3, the distance between range R_1 and range R_MAX corresponds to the time duration between time t4 and time t5. It is noted that reflections from objects located beyond R_MAX may also be received by sensor unit 104, if such targets are highly reflective. Incoming radiation from objects located at any distance, including distances beyond R_MAX, for example, stars, may also be received by sensor unit 104, if such objects emit radiation at a wavelength detectable by sensor unit 104.
In graph 130, in the region ranging from range R_MIN up to range R_0, the sensitivity of the sensor unit gradually increases to a maximum level of sensitivity. This region includes reflected light mainly from atmospheric sources that cause interference and self-blinding in the sensor unit, therefore a high sensitivity is undesirable in this region. In general, the sensor unit initially encounters the photons of a reflected light beam at the very front end of the transmitted laser pulse, then the photons in the middle of the pulse, and finally the photons at the very end of the pulse. In the region ranging from range R_MIN up to range R_0, the sensor does not detect most of the front portion of the pulses reflected from objects just beyond R_MIN, because of the timing of the "on" state of the sensor. In the region ranging from range R_MIN up to range R_0, the sensor incrementally detects more and more of the pulse as it reflects from objects found at further ranges. This incremental detection continues until all of the pulse is received for objects located at R_0. Thus, the duration of the incline in graph 130 is equivalent to the width of the laser pulse, T_LASER.
The sensor unit remains at maximum sensitivity between range R_0 and range R_1. This is the region where targets are most likely to be located, so a high sensitivity is desirable. The sensitivity of the sensor unit gradually decreases to a negligible level beyond range R_1. In particular, for objects located immediately beyond range R_1, the sensor unit begins to miss the photons at the very end of the laser pulse; then, for further ranges, the photons in the middle of the pulse are also not detected; and finally, for objects located at R_MAX, the photons at the front end of the pulse are not detected, until no photons are received at all. The duration of the decline in graph 130 is equivalent to the width of the laser pulse, T_LASER. It is noted that the sensitivity depicted in Figure 4 enables sensor unit 104, and in general, system 100, referred to in Figure 1, to obtain a level of received light energy which is directly proportional to the ranges of targets.
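The trapezoidal shape of this sensitivity curve can be written in closed form, as in the following Python sketch; the function name and the range values are illustrative assumptions only:

```python
# Closed-form sketch of the trapezoidal sensitivity of Figure 4 (illustrative;
# geometrical and atmospheric attenuation are ignored, as in the text).
def sensitivity(r: float, r_min: float, r_0: float, r_1: float,
                r_max: float) -> float:
    if r <= r_min or r >= r_max:
        return 0.0                              # gate shut or pulse fully missed
    if r < r_0:
        return (r - r_min) / (r_0 - r_min)      # incline: growing pulse portion
    if r <= r_1:
        return 1.0                              # plateau: full pulse received
    return (r_max - r) / (r_max - r_1)          # decline: tail cut off by gate

for r in (100, 225, 375, 525, 650):
    print(r, sensitivity(r, 150, 300, 450, 600))
```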
A particular sensitivity as a function of range may be obtained by system 100, referred to in Figure 1, by the application of several techniques, either individually or in various combinations. These techniques will now be discussed. Reference is now made to Figure 5, which is a graph, generally designated 140, depicting timing adjustments relating to the pulse width of the laser beam. The technique relates to the time sensor unit 104, referred to in Figure 1, is activated with respect to the pulse width of laser beam 106, referred to in Figure 1. The vertical axis represents the status of a device, such as a laser or a sensor unit, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time.
Time T_OFF is the time during which sensor unit 104 is deactivated, immediately after transmitting laser pulse 106. Time T_OFF may be determined in accordance with the range from which reflections are not desired (R_MIN), thereby preventing reflections from atmospheric conditions and substances, and the self-blinding effect. In particular, T_OFF may be determined as twice this range divided by the speed of light in the medium it is traveling in (v), as this is the time span it takes the last photon of the laser pulse to reach the farthest point in the range R_MIN and reflect back to the sensor. It may be desirable to lengthen the duration of time the sensor unit is deactivated by the duration of the pulse width of the laser beam, to ensure that no backscattered reflections from the area up to R_MIN are received in sensor unit 104. Therefore, T_OFF can be calculated using the following equation:
T_OFF = 2 × R_MIN / v + T_LASER    (1)
Time T_ON is the time during which sensor unit 104 is activated and receives reflections from a remote target 108, referred to in Figure 1. Time T_ON may be determined in accordance with the entire distance traveled by the last photon of a pulse that propagates up to R_0 and back to the sensor unit. Since the sensor unit is activated at time 2×R_MIN/v after laser pulse 106 has been fully emitted, the last photon of the laser pulse is already distanced 2×R_MIN from the sensor unit. The last photon will propagate a further distance of R_0−(2×R_MIN) until target 108, and a further distance R_0 back to the sensor, summing up to 2×(R_0−R_MIN). The time it takes to scan this range can be calculated by dividing the range by the speed of light in the medium it is traveling in. Therefore, T_ON can be calculated using the equation:
T_ON = 2 × (R_0 − R_MIN) / v    (2)
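A worked numerical sketch of equations (1) and (2) follows; the R_MIN, R_0 and pulse-width values are illustrative assumptions, and v is taken as the speed of light in air:

```python
# Worked sketch of equations (1) and (2) above; the R_MIN, R_0 and pulse-width
# values are illustrative, and v is taken as the speed of light in air.
V = 3.0e8

def t_off(r_min_m: float, t_laser_s: float) -> float:
    return 2.0 * r_min_m / V + t_laser_s        # equation (1)

def t_on(r_min_m: float, r0_m: float) -> float:
    return 2.0 * (r0_m - r_min_m) / V           # equation (2)

T_LASER = 1.0e-6
print(t_off(150.0, T_LASER))   # 2e-06 s: sensor stays off for 2 us
print(t_on(150.0, 3000.0))     # 1.9e-05 s: sensor then stays on for 19 us
```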
It is noted that the aforementioned calculations serve to substantially define the time variables. The final values for these variables can be further refined or customized in accordance with certain factors related to system 100, referred to in Figure 1. Such refinements or customizations will be elaborated upon hereafter, and may include, for example, accounting for specific environmental conditions, the speed of a moving platform (if system 100 is mounted on the moving platform, such as a vehicle), the specific characteristics of targets expected to be located at certain ranges, changing the form of laser pulse 106, and the like.
Reference is now made to Figure 6, which is a graph, generally designated 150, depicting the observation capability of a system with the timing technique depicted in Figure 5. The vertical axis represents the status of the laser beam, where '1' represents a status of the laser beam being on, and '0' represents a status of the laser beam being off. The horizontal axis represents distance. Sensor unit 104, referred to in Figure 1, is "blind" up to range RMIN. In particular, there are no received reflections, generated by laser pulse 106, referred to in Figure 1, from objects located in the region immediately beyond system 100, referred to in Figure 1, up to range RMIN. The range in which sensor unit 104 is "blind" is demarcated by arrows in Figure 6 as ROFF. This blindness is due to the fact that laser pulse 106 propagates throughout path RMIN while sensor unit 104 is deactivated, so that system 100 is blind to reflections generated by laser pulse 106 colliding with any object throughout this range. Thus, RMIN is the greatest range from which reflections, in their entirety, encounter sensor unit 104 in the "off" state.
Element 152 is an object to be detected, located somewhat beyond range RMIN. Element 154 is an object to be detected, located further away, slightly before range R0. To understand how sensitivity as a function of range is achieved, it is helpful to examine how reflections are received from objects located at the range between RMIN and R0.
Reference is now made to Figure 7, which is a graph, generally designated 160, depicting a specific instant in time in relation to the scenario depicted in Figure 6. In particular, graph 160 depicts the specific instant at which laser pulse 162 has just completed passing element 152 and continues advancing. The vertical axis represents the status of the laser beam, where '1' represents a status of the laser beam being on, and '0' represents a status of the laser beam being off. The horizontal axis represents distance.
Reflections from element 152 may be received the moment sensor unit 104, referred to in Figure 1, is activated, even before the entire pulse width of laser pulse 162 has passed element 154. Therefore, ample time is provided for sensor unit 104 to receive and intensify reflections from element 154, but only a limited intensifying time is provided for reflections from the closer element 152.
Sensor unit 104 may be activated just a short time before the last portion of pulse energy 162 is reflected from element 152, provided that laser beam 106, referred to in Figure 1, remains on element 152. This portion is proportional to the small distance between RMIN and element 152, and is represented by hatched element 156. Sensor unit 104 is activated only when the tail portion, hatched element 156, of a part of laser pulse 162, reflects from element 152. Immediately afterwards, energy is also reflected continuously from element 154, which is being passed by the advancing laser pulse 162, in proportion to the greater distance between RMIN and element 154.
Consequently, the total energy received by sensor unit 104 as a consequence of reflections from element 152 is proportional to the duration of time during which laser pulse 162 passes element 152 and its reflections reach the sensor unit while the sensor unit is in the "on" state.
Reference is now made to Figure 8, which is a graph, generally designated 170, depicting a specific instant in time after the instant depicted in Figure 7. In particular, graph 170 depicts the specific instant at which laser pulse 162 has just completed passing element 154 and continues advancing. The vertical axis represents the status of the laser beam, where '1' represents a status of the laser beam being on, and '0' represents a status of the laser beam being off. The horizontal axis represents distance. At this instant, reflections from element 154 may be received by sensor unit 104, referred to in Figure 1, as long as laser beam 106 remains incident on element 154. Reflections are no longer received from element 152, as laser pulse 162 has already passed element 152 and any reflections from element 152 have already passed sensor unit 104 in their entirety. Consequently, the reflection intensity absorbed from element 154, located near range R0, may be substantially greater than the reflection intensity absorbed from element 152. This difference in absorbed reflection intensity arises because the received reflection intensity is determined according to the period during which sensor unit 104 is activated while the element is reflecting thereto. This means that laser pulse 162 may remain incident on element 154 for a longer time than on element 152, during a period in which sensor unit 104 is activated and receiving reflections. When sensor unit 104 is activated, the head and tail portions, hatched element 158, of a part of laser pulse 162, reflect from element 154. In such a case, sensor unit 104 receives more energy from an object near the optimal range R0 than from an object closer to system 100, referred to in Figure 1, for example, an object located slightly beyond range RMIN.

Reference is now made to Figure 9, which is a sensitivity graph, generally designated 180, in accordance with the timing technique depicted in Figure 5, depicting the sensitivity of a gated sensor. The vertical axis represents relative sensitivity, and the horizontal axis represents distance. The vertical axis has been normalized to 1. During time TOFF, referred to in Figure 5, sensor unit 104, referred to in Figure 1, does not receive any reflections. Time TOFF corresponds to range RMIN. At range RMIN, sensor unit 104 is activated. Between ranges RMIN and R0, the sensitivity of sensor unit 104 increases because progressively larger portions of laser pulse 106, referred to in Figure 1, reflected from objects located between RMIN and R0, are received by sensor unit 104. Between ranges R0 and R1, sensor unit 104 receives pulse reflections, in their entirety, from objects located between R0 and R1. Between ranges R1 and RMAX, the sensitivity of sensor unit 104 decreases because progressively smaller portions of laser pulse 106, reflected from objects located between R1 and RMAX, are received by sensor unit 104. At range RMAX, sensor unit 104 is deactivated, and no portions of laser pulse 106 are received in sensor unit 104. Time TON, referred to in Figure 5, corresponds to the distance between ranges RMIN and RMAX.
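The trapezoidal profile of graph 180 can be expressed as a piecewise-linear function of range. The following Python fragment is an illustrative sketch, not part of the original disclosure; it simply encodes the four regions described above:

```python
def gated_sensitivity(r, r_min, r0, r1, r_max):
    """Normalized trapezoidal sensitivity of Figure 9 (illustrative)."""
    if r <= r_min or r >= r_max:
        return 0.0                            # sensor gated off for this range
    if r < r0:
        return (r - r_min) / (r0 - r_min)     # rising edge: growing pulse overlap
    if r <= r1:
        return 1.0                            # entire pulse received
    return (r_max - r) / (r_max - r1)         # falling edge over T_LASER
```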
It is noted that graph 180 may not be ideal, because laser pulse 106 may also illuminate elements, especially highly reflective elements, located beyond range RMAX, as laser pulse 106 gradually dissipates. Furthermore, graph 180 may not be ideal because the sensitivity remains constant between the first optimum range R0 and the last optimum range R1, even though further attenuation exists within the range span R1 − R0. It is possible to reduce the sensitivity of system 100, referred to in Figure 1, for receiving reflections originating from beyond range R0 by other techniques. Such techniques include changing the form or shape of the pulses of laser beam 106, changing the pattern of the pulses of laser beam 106, changing the energy of the pulses of laser beam 106, changing the time that sensor unit 104 is activated, and changing the width of laser pulse 106. These techniques are now discussed.
Reference is now made to Figure 10, which is a sensitivity graph, generally designated 184, in accordance with a Long Pulse Gated Imaging (LPGI) timing technique, depicting the sensitivity of a gated sensor. The vertical axis represents relative sensitivity, while the horizontal axis represents distance. The vertical axis has been normalized to 1.
In the LPGI timing technique, the pulse width of the laser beam, TLASER, is set equal to the difference between the time required for the laser beam to traverse the path from the system to the minimal target distance and back (2 × RMIN/v) and the time at which the last photon reflects back from a target located at range R1, referred to in Figure 9. This time is also equivalent to the duration of time for which a sensor unit is activated, TON. Thus, both TLASER and TON are given by the relation:
TLASER = TON = 2 × (R1 − RMIN)/v    (3)
where v is the speed of light in the medium in which it is traveling. It is noted that LPGI may be considered a particular example of the timing techniques depicted in graphs 140, 150, 160, 170 and 180, in which R0 = R1. The LPGI timing technique is particularly suited for cases where a large dynamic range, for example from 3 to 30 kilometers, needs to be imaged.
By way of an example, if a target is located at a distance of 25 km away from system 100 (Figure 1), meaning R1 is equal to 25 km, and if RMIN is equal to 3 km, and the speed of light is equal to c, the speed of light in a vacuum, then TON and TLASER will substantially equal: 2 × (25 km − 3 km)/c = 146.7 μsec. From the instant that laser beam 106, referred to in Figure 1, is transmitted, the sensor unit operates in an LPGI mode, meaning TON will be equal in duration to TLASER. To eliminate backscattered light without loss of contrast, while maintaining a high quality image of a target and the background, it is sufficient to switch the sensor unit to the "off" state while the reflected beam traverses approximately 6 km (3 km each way, to and from range RMIN). It is noted that it may be desirable to lengthen time TOFF by the pulse width of the laser beam, TLASER, to ensure that no backscattered reflections from the area up to RMIN are received by the sensor unit. Therefore, the actual time TOFF is given by the following equation:
TOFF = (2 × RMIN)/v + TLASER    (4)

In the particular case of LPGI, TLASER is given by the following equation:
TLASER = 2 × (R1 − RMIN)/v    (5)
Using equations 4 and 5, TOFF may be simplified to:

TOFF = (2 × R1)/v    (6)
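The LPGI timing values of the preceding example can be checked numerically. This Python sketch is illustrative only; it evaluates equations (3), (5) and (6) for the RMIN = 3 km, R1 = 25 km case discussed above:

```python
C = 3.0e8                     # speed of light in vacuum [m/s]
r_min, r1 = 3e3, 25e3         # ranges from the example [m]

t_laser = 2.0 * (r1 - r_min) / C   # equation (5): ~146.7e-6 s
t_on = t_laser                     # in LPGI mode, T_ON = T_LASER (equation (3))
t_off = 2.0 * r1 / C               # equation (6): ~166.7e-6 s
print(t_laser * 1e6, t_off * 1e6)  # ~146.7 and ~166.7 microseconds
```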
Reference is now made to Figure 11, which is a graph, generally designated 185, depicting the radiant intensity captured by a sensor unit from reflections from a target and from backscatter, as a function of the range between the sensor unit and the target, for both gated and non-gated imaging, during a simulation. Graph 185 is based on a simulation of a typical airborne system under the conditions specified at the bottom of Figure 11. The vertical axis represents radiant intensity logarithmically, in units of lumens per square meter. The horizontal axis represents range, in units of kilometers.
Curve 186 represents the radiant intensity captured by the sensor unit from the residual light intensity dispersed as light reflects from the target, for a system operating in an LPGI mode. Curve 187 represents the radiant intensity captured by the sensor unit from backscatter as the laser beam deflects off of atmospheric substances, for a system operating in an LPGI mode. Similar curves, 188 and 189, correspondingly, are provided in graph 185 for a system operating in a non-gated mode. It is noted that the form of radiant intensity curve 188 from light reflected from the target in a non-gated mode is given, in general, by the inverse square law of light attenuation, in vacuum, as:
1/r²    (7)
where r is the distance between the source and the target. This law is governed by the geometric propagation of a light beam from a source to a target, and accounts for energy attenuation over distance. The light beams propagate through a mainly homogenous medium, and through an atmosphere with an aerosol density profile typical of an elevation above sea level. It is further noted that for curves 186 and 187, both operating in an LPGI mode, no radiant intensity is detected up until 3 km, which in Figure 11 represents RMIN, the minimal range, referred to in Figure 1. It is noted that in an LPGI mode, the radiant intensity from backscatter (curve 187) is negligible relative to the effective radiant intensity of the reflection of light impinging on the surface of a target (curve 186). This is the case for the entire range between 3 km and 25 km. On the other hand, when the system is operating in a non-gated mode, the intensity from backscatter (curve 189) is even higher than the effective intensity of the reflected light from the surface of the target (curve 188). This is the case from the range of 2 km and over. Over equivalent ranges, the backscatter intensity relative to the target intensity is lower in a gated mode than in a non-gated mode by up to several orders of magnitude.
Therefore, it is appreciated that LPGI operation improves the contrast of the illuminated target against the backscatter light intensity for any range between 3 km and 25 km. Thus, a system operating in an LPGI mode does not require knowledge of the exact range to a target. With reference to Figure 1, system 100 does not require knowledge of the exact range R0 between system 100 and target 108 (Figure 1). A rough estimate of range R0 is sufficient in order to calculate the required pulse width of laser beam 106 (Figure 1). Such an estimate can extend, in the example of Figure 11, between 3 and 25 km, which is particularly broad in comparison to the precise range determination required for modes other than the LPGI operation mode.
It is further noted that in an LPGI mode, the radiant intensity of the reflection of light from a target changes by less than a factor of ten over the 4 km to 20 km range. In contrast, in a non-gated mode, the radiant intensity of the reflection of light from the target varies by a factor of one hundred over the same range. The relative "flatness" of curve 186 is the result of the gradual increase of sensitivity gain of a gated sensor unit, in proportion to the increase of range, represented in the graph of Figure 10 between RMIN and R0. This gradual increase of sensitivity is "multiplied" by the attenuation of reflected light in the inverse relation (1/r²), in vacuum, proportional to the increase in the range, represented by curve 188 in Figure 11, resulting in the relatively "flat" curve 186. The range of 4 km to 20 km between a sensor unit and a target is typical for many applications, particularly military targeting.
It is therefore appreciated that a system operating in an LPGI mode provides effective observation (i.e. identification and detection) of targets over a versatile depth of field. "Depth of field" refers to the ranges of view confined to certain limits. With reference to Figure 1 , system 100 will produce a high quality image for both targets located relatively near system 100 (beyond RMIN) and targets located relatively far away from system 100 (close to R0).
The property of versatile depth of field responsiveness is highly relevant in the context of a television or video image having a relatively low inherent intra-scene dynamic range. This property prevents self-blinding and overexposure of nearby objects in the image, which occur when using auxiliary illumination without the gated imaging feature. Self-blinding and overexposure are prevented because no reflected light is observed up to the minimal range (for example, referring to Figure 10, if RMIN = 3 km). Self-blinding and overexposure are also prevented because the difference in observed intensities between nearby and faraway objects is relatively small (a substantially flat curve for the gated target radiant intensity in graph 185).
The total irradiance Ir(R) received by a sensor unit as a function of range R is given by the following convolution formula:

Ir(R) = (1/TLASER) ∫ L(t) × C(t + 2R/v) dt
where L(t) is a Boolean function representing the existing reflection of a laser pulse, irrespective of the on/off state of the sensor. L(t) = 1 if the laser is "on" (i.e. the laser pulse exists over that range) and L(t) = 0 if the laser is "off" (i.e. the laser pulse does not exist over that range). C(t) is a Boolean function representing the ability of the sensor to receive incoming light according to the on/off state of the sensor. C(t) = 1 if the sensor unit is in the "on" or activated state, and C(t) = 0 if the sensor unit is in the "off" or deactivated state. TLASER, TON and TOFF are as defined above (where TOFF is the time a sensor unit has been deactivated immediately following the completion of transmission of a laser pulse). v represents the speed at which the laser pulse travels in the medium in which it is traveling.
The integral is divided by TLASER to normalize the result. The values for radiant intensity (e.g. the curves in graph 185) may be obtained by multiplying the above convolution formula by a rough geometrical propagation attenuation function of a laser pulse, such as the 1/r² relation mentioned above, or by a more accurate attenuation function that takes into account atmospheric absorption and scattering interferences, such as (1/r²)e^(−2γr), where γ is the atmospheric attenuation constant. The γ in the power of the natural exponent 'e' is multiplied by 2 because the attenuation function of the laser pulse takes into account the distance covered by the laser pulse to and from a target.
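As an illustrative numeric sketch (not part of the original disclosure), the overlap integral above may be approximated by discretizing the photon emission times. The timing convention for TOFF (measured from the completion of the pulse transmission) follows the definition given above; the function and parameter names are assumptions:

```python
import numpy as np

def received_irradiance(ranges_m, t_laser, t_on, t_off, v=3.0e8, n=4000):
    """Discretized overlap of the pulse L(t) with the delayed gate C(t + 2R/v),
    normalized by T_LASER. t_off is measured from the completion of the
    pulse transmission, following the definition in the text."""
    t = np.linspace(0.0, t_laser, n, endpoint=False)  # photon emission times
    dt = t_laser / n
    gate_open = t_laser + t_off                       # sensor switches on
    gate_close = gate_open + t_on                     # sensor switches off
    out = []
    for r in ranges_m:
        arrival = t + 2.0 * r / v                     # round-trip arrival times
        inside = (arrival >= gate_open) & (arrival < gate_close)
        out.append(inside.sum() * dt / t_laser)
    return np.array(out)

# Multiplying by an attenuation term, e.g. (1/r**2) * np.exp(-2 * gamma * r),
# approximates the radiant-intensity curves of graph 185.
```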
Reference is now made to Figures 12-14. These graphs depict a technique according to which laser pulse 106 (Figure 1) is generated with a specific shape (of each pulse) or pattern (of pulses in a beam). The pulse energy of the pulses may also be varied for similar purposes. These techniques illustrate the ability to change (preferably progressively or gradually) the gradient shape of laser pulse 106 in order to achieve maximum sensitivity of system 100 (Figure 1) at optimal range R0. In accordance with graphs 150, 160 and 170, if a shaped or patterned laser pulse is generated, a substantially small number of photons of laser pulse 106 reflected from element 152 (located near system 100) and a substantially large number of photons of laser pulse 106 reflected from element 154 (located remote from system 100) may be received by sensor unit 104 (Figure 1). For example, if the form (i.e. shape or pattern) of laser pulse 106 is selected such that the intensity is higher at the end of the pulse than at the beginning, then more light from laser pulse 106, reflected from element 152, may be received in sensor unit 104 than light reflected from element 154. This would be true if the gating of sensor unit 104 is synchronized to start receiving reflections when the head of a reflected pulse from RMIN reaches sensor unit 104, and to stop receiving reflections when the tail of the reflected pulse from RMIN reaches sensor unit 104. For analogous purposes, the energy of emitted pulses, and the synchronization of sensor unit 104, may be selected such that pulses reflected from objects located at greater distances will be received in sensor unit 104 with higher intensity (while a similar shape or pattern may be retained). The pulse reflections will correspondingly cause greater energy to be received as the reflecting objects are farther away, thus partially, fully, or even excessively compensating for the energy attenuation that increases with the reflection distance.
Figure 12 is a graph, generally designated 190, depicting an adjustment of the shape or pattern (or energy) of a laser pulse 192. The vertical axis represents the relative intensity of a laser pulse, and the horizontal axis represents time. The vertical axis has been normalized to 1. Time TCON is the duration of time during which system 100 transmits laser pulse 192 at maximum intensity. Time TWAVE is the duration of time during which the intensity of transmitted laser pulse 192 decays in a shaped or patterned manner (or by its controlled energy). TLASER is the total duration of time that laser pulse 192 is transmitted, and equals TCON + TWAVE. Time TOFF LASER is the duration of time during which laser device 102 (Figure 1) is in an "off" state, i.e. laser device 102 does not transmit anything. Time TOFF is the duration of time during which sensor unit 104 does not receive anything due to its deactivation. Time TON is the duration of time during which sensor unit 104 is in the "on" state and receives reflections. The times depicted in Figure 12 are not drawn to scale; for example, TOFF LASER may be much greater than TLASER.
Figure 13 is a graph, generally designated 200, depicting the advancement of the shaped or patterned laser pulse depicted in Figure 12. The vertical axis represents the relative intensity of a laser pulse impinging upon an element, and the horizontal axis represents distance. The vertical axis has been normalized to 1. Graph 200 depicts a specific instant in time, in particular the moment that laser pulse 192 impinges on an element within the range RMIN. Reflections from the element within range RMIN will require an additional amount of time in order to reach sensor unit 104 (Figure 1). Sensor unit 104 will begin to collect photons, after an additional amount of time, in accordance with the shape or pattern of laser pulse 192. Photons in range RMIN exited at the end of laser pulse 192 and were able to reach range RA by the time sensor unit 104 is activated. Photons in range RWAVE (between distances RA and RB) exited at the beginning of the intensity decline of laser pulse 192. Photons in range RCON (between distances RB and RC) exited laser device 102 (Figure 1) with maximum intensity at the beginning of the transmission of laser pulse 192.
It is appreciated that range RMIN depends on time TOFF, which corresponds to the duration of time from the instant the end of laser pulse 192 is emitted to the instant at which the end of laser pulse 192 reflects from RMIN and reaches sensor unit 104. This is the instant at which sensor unit 104 is activated (TOFF = 2 × RMIN/v). Photons exiting at the end of laser pulse 192, which reach sensor unit 104 after a period of time shorter than TOFF, do not arrive while sensor unit 104 is activated. Therefore, photons reflected off objects located within range RMIN from system 100 (Figure 1) will not be received by sensor unit 104 and thus, such objects will not be detected by system 100. It is further noted that relations such as 1/r² define a laser pulse shape that drops in intensity dramatically. This drop in intensity may be advantageous in terms of minimizing reflections from an element at close range (ignoring the attenuation of laser pulse 192).
Figure 14 is a sensitivity graph, generally designated 210, in accordance with the laser shaping technique depicted in Figure 12, depicting the sensitivity of a gated sensor. The vertical axis represents relative sensitivity or gain, and the horizontal axis represents distance. The vertical axis has been normalized to 1. It is helpful to compare graph 210 with graph 180 of Figure 9, where the technique of laser shaping was not applied. Accordingly, range RMIN is the range from which reflections generated by the shaped or patterned pulse are not received by sensor unit 104 (Figure 1). Range RWAVE is the range from which the reflections generated by the shaped or patterned pulse begin to be received and intensified. The curve along RWAVE results from the shape or pattern of the decay of laser pulse 192 (Figure 12). The gradient along RCON results from the increasing amount of the maximum intensity portion of laser pulse 192, corresponding to TCON (Figure 12), detected by sensor unit 104 (Figure 1). The gradient along RCON also results from the different passing times between a laser pulse and elements in its path, as described with reference to Figures 5-7. Range R0 to R1 is the range over which the received intensity of laser pulse 192 is steady. Range R1 to RMAX is the range over which the received intensity of laser pulse 192 decreases at a constant rate. Beyond RMAX is the range from which reflections are no longer detected by sensor unit 104. It is therefore possible to further reduce the sensitivity of system 100 (Figure 1) at close ranges, and prevent reflections from atmospheric substances in the area substantially near system 100, by generating shaped or patterned laser pulses, in conjunction with a pulse width based timing for switching sensor unit 104 to the "on" state (as discussed with reference to Figures 7-9). It is appreciated that system sensitivity as a function of range is further improved by the implementation of a shaped or patterned laser pulse (or by analogously varying the energy of the pulse), in addition to the results achieved by the gating technique per se.
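To illustrate the shaped-pulse technique numerically, the Boolean L(t) of the overlap integral may be replaced with a weighted intensity profile. The sketch below is illustrative only; the flat-then-linear-decay shape is one possible choice (the text also contemplates, for example, a 1/r²-like decay), and all names are assumptions:

```python
import numpy as np

def shaped_sensitivity(ranges_m, t_con, t_wave, t_off, t_on, v=3.0e8, n=4000):
    """Overlap integral with a shaped pulse: maximum intensity during T_CON,
    then a linear decay over T_WAVE (one possible, assumed shape)."""
    assert t_wave > 0.0
    t_laser = t_con + t_wave
    t = np.linspace(0.0, t_laser, n, endpoint=False)  # photon emission times
    dt = t_laser / n
    # Intensity profile of Figure 12: maximum during T_CON, decaying over T_WAVE.
    intensity = np.where(t < t_con, 1.0, 1.0 - (t - t_con) / t_wave)
    gate_open = t_laser + t_off
    gate_close = gate_open + t_on
    out = []
    for r in ranges_m:
        arrival = t + 2.0 * r / v                     # round-trip arrival times
        inside = (arrival >= gate_open) & (arrival < gate_close)
        out.append((intensity * inside).sum() * dt / t_laser)
    return np.array(out)
```

Because near objects return only the late-emitted, low-intensity tail of the pulse into the open gate, the computed sensitivity at close range is further suppressed, as in Figure 14.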
Reference is now made to Figures 15-18. These graphs depict techniques for timing adjustments during the process of obtaining an individual video frame of received reflections from a target. These techniques illustrate the ability to change the duration of time sensor unit 104 (Figure 1) is activated and/or the width of laser pulse 106 (Figure 1) in order to achieve maximum sensitivity of system 100 (Figure 1) at the optimum range R0. It is appreciated that limiting the number of transmitted laser pulses, without compromising image quality, reduces the sensitivity of the system to extraneous ambient light sources.
It is assumed that a video frame based system is utilized. The array of photodetectors of sensor unit 104 may be a standard video sensor, such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) type of sensor. The CCD type sensor may include an external shutter. Such sensors typically operate at a constant field frequency of approximately 50-60 Hz. This means that each second the array of photodetectors captures 25-30 frames. To demonstrate the technique, it is assumed that the array of photodetectors operates at 50 Hz. The duration of an individual field is then 20 ms. Assuming the range of interest for system 100 is 300 m, the width of laser pulse 106, in addition to the duration of time that sensor unit 104 is set to the "on" state, must add up to 3 μs. It is noted that the effect of TOFF is not considered for the purposes of this simplified example. This frequency of operation requires a cycle time of 3 μs with no time gaps (i.e. waiting times) between the end of laser pulse 106 and the opening of sensor unit 104. It is then possible to transmit up to 6666 pulses and to collect up to 6666 reflected pulses in the course of an individual field, i.e. a video frame.

Figure 15 is a graph, generally designated 220, depicting the sequence of pulse cycles (L) and collection of reflected photons (P) over an individual field. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time. A cycle is defined as the time period required for one laser pulse to be transmitted and one reflected photon, or bundle of reflected photons, to be received. A cycle is therefore defined as TL + TP, where TL is the amount of time the laser device is on, and TP is the amount of time the sensor device is on. It is assumed that the lower the number of cycles required for obtaining a quality image, the greater the ability of a system to reduce the effects of ambient light sources, since a higher number of cycles increases a system's potential exposure to ambient light sources.
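The cycle arithmetic of this example can be summarized in a few lines of Python (illustrative only; the figures are those of the example above):

```python
field_rate_hz = 50                  # field rate from the example
field_s = 1.0 / field_rate_hz       # 20 ms per field
cycle_s = 3e-6                      # T_L + T_P per cycle, from the example
max_cycles = int(field_s / cycle_s)
print(max_cycles)                   # 6666 pulse/collection cycles per field
```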
Figure 16 is a graph, generally designated 230, depicting a timing technique where the laser pulse width is changed dynamically over the course of obtaining an individual frame. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time. The total width of each cycle remains constant, although the width of laser pulse 106 (TL) becomes narrower as time progresses, with the gap between TL and TP growing accordingly. By the final cycle, the width of laser pulse 106 is very short in comparison to its width in the first cycle, while the waiting time for the array of photodetectors to open (TOFF) is very long in comparison to TOFF in the first cycle. The rate at which the waiting time, before sensor unit 104 is activated, is increased, is equal to the rate at which the width of laser pulse 106 is narrowed. Thus, the range RMIN, from which no reflections are received by system 100, may be increased. In this manner, system 100 receives more reflections from the remote range than from the near range, and a desired sensitivity as a function of range is achieved.

Figure 17 is a graph, generally designated 240, depicting a timing technique where the duration that a sensor unit is activated is changed dynamically over the course of obtaining an individual frame. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time. Similar to the technique shown in graph 230 of Figure 16, the total width of each cycle remains constant, although the duration that sensor unit 104 is set to the "on" state (TP) becomes narrower as time progresses, with the gap between TL and TP growing accordingly. By the final cycle, the duration of TP is very short in comparison to its duration in the first cycle, while the waiting time for the array of photodetectors to open (TOFF) is very long in comparison to TOFF in the first cycle. The rate at which the waiting time, before sensor unit 104 is activated, is increased, is equal to the rate at which the duration that sensor unit 104 is set "on" is narrowed. Thus, the range RMIN, from which no reflections are received by system 100, may be increased. In this manner, system 100 receives more reflections from the remote range than from the near range, and a desired sensitivity as a function of range is achieved.

Figure 18 is a graph, generally designated 250, depicting a timing technique where both the laser pulse width and the duration that a sensor unit is activated are changed dynamically over the course of obtaining an individual frame. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time.
It is noted that in graph 230 of Figure 16, since time TP remains constant, system 100 will remain sensitive to the effects of ambient light sources for a longer period than the system of graph 240 of Figure 17. Since TP remains constant, reflected energy not emitted by laser source 102 (Figure 1) may be received by sensor unit 104. It is further noted that in graph 240 of Figure 17, since time TL remains constant, system 100 will also remain sensitive to the effects of ambient light sources. Since time TL remains constant, part of the reflected energy may not return to sensor unit 104 while it is activated, thereby expending laser energy over a longer time period than the system of graph 230 in Figure 16.
Similar to the techniques shown in graph 230 and graph 240 (Figures 16 and 17), the total width of each cycle remains constant. Both the width of laser pulse 106 (TL) and the duration that sensor unit 104 is set to the "on" state (TP) become narrower as time progresses, with the gap between TL and TP growing accordingly. By the final cycle, the duration of TL and the duration of TP are each very short in comparison to the first cycle, while the waiting time for the array of photodetectors to open, TOFF, is very long in comparison to the first cycle. The rate at which the waiting time before sensor unit 104 is activated is increased is equal to the sum of the rates at which TL and TP are each narrowed. For example, if TL and TP are narrowed at the same rate, then TOFF is increased at twice this rate. In this technique, the time in which sensor unit 104 is activated, and thereby susceptible to the effect of ambient light sources, is shortened, thus exploiting the energy spent and received by system 100 to a maximum. In this manner, system 100 receives more reflections from the remote range than from the near range, and a desired sensitivity as a function of range is achieved. This is provided at the "expense" of narrowing the depth of field, which means having RMIN approach R0. Having RMIN approach R0 is desirable when the target range is known more accurately. Narrowing the depth of field can also be compensated for by enhancing the pulse intensity.
Timing adjustments during the process of obtaining an individual video frame may be employed in order to achieve a desired sensitivity as a function of range. This may involve dynamically changing the width of a laser pulse, the duration of time a sensor unit is set to the "on" state, or both. It is appreciated that the aforementioned techniques of timing adjustments may be integrated and combined with the aforementioned technique of changing the shape, pattern or energy of a laser pulse, as discussed with reference to Figures 12-14.
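As an illustrative sketch of these frame-timing adjustments (not part of the original disclosure; the linear 90% narrowing profile and all names are assumptions), the per-cycle widths of Figures 16-18 might be scheduled as follows:

```python
def cycle_schedule(n_cycles, cycle_s, t_l0, t_p0, shrink_l=True, shrink_p=True):
    """Per-cycle laser width (T_L), waiting time (T_OFF) and gate width (T_P)
    for the techniques of Figures 16-18, keeping the total cycle width fixed."""
    schedule = []
    for i in range(n_cycles):
        f = i / max(n_cycles - 1, 1)               # progress through the field, 0 -> 1
        t_l = t_l0 * (1.0 - 0.9 * f) if shrink_l else t_l0
        t_p = t_p0 * (1.0 - 0.9 * f) if shrink_p else t_p0
        t_off = cycle_s - t_l - t_p                # the gap grows as the widths narrow
        schedule.append((t_l, t_off, t_p))
    return schedule

# Figure 18 case: both T_L and T_P narrow, so T_OFF grows at twice the rate.
sched = cycle_schedule(6666, 3e-6, 1.5e-6, 1.5e-6)
```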
Reference is now made to Figures 19-21. These graphs depict techniques for adjusting the number of cycles, or exposures, during the process of obtaining an individual video frame. These techniques serve to eliminate blooming, or self-blinding, arising from high intensity ambient light sources. Additionally, or alternatively, implementation of different image processing techniques may be utilized for this purpose. In particular, the rate of laser pulse transmissions (L) and collection of reflected photons (P) may be changed dynamically, thereby reducing the number of exposures.
It is recalled that in the example discussed earlier with reference to Figures 15-18, it is possible to transmit up to 6666 pulses and to collect up to 6666 reflected photons in the course of an individual field. This means that a maximum number of 6666 cycles or exposures can be performed over a single field. However, it is also possible to perform fewer exposures.

Figure 19 is a graph, generally designated 260, depicting timing adjustments during the process of obtaining an individual video field, where a total of 6666 exposures are performed. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time. Figure 20 is a graph, generally designated 270, depicting timing adjustments during the process of obtaining an individual video field, where a total of 100 exposures are performed. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time.
Reducing the number of exposures in a field might cause fewer photons to be collected at sensor unit 104 (Figure 1), and thereby cause darkening in the generated image, so that low reflection areas may not be visible. Therefore, the number of exposures in a field should be dynamically controlled. The number of exposures in a field may be controlled in accordance with several factors. For example, one factor may be the level of ambient light (information which may be received as an input to system 100 from an additional sensor which detects ambient light). Another factor may be the level of current consumed by sensor unit 104 (information which may be obtained via a power supply).
Figure 21 is a pair of graphs, generally designated 280 and 290, depicting timing adjustments during the process of obtaining an individual video field, where the number of exposures in a field is controlled by a technique based on image processing. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time. This technique involves sensor unit 104 acquiring two frames. In one frame a large number of exposures is obtained, and in the other frame a small number of exposures is obtained. In this embodiment of the disclosed technique, sensor unit 104 includes at least one photodetector, or an array of photodetectors, that operates faster than standard CCD or CMOS sensors. It is assumed that sensor unit 104 operates at a frequency of 100 Hz. The corresponding duration of each frame is then 10 ms. Graphs 280 and 290 depict sensor unit 104 acquiring two frames. In the first frame, graph 280, system 100 (Figure 1) performs 1000 exposures, and in the second frame, graph 290, system 100 performs 50 exposures. As mentioned earlier, the number of exposures in a field may be controlled in accordance with several factors, such as the level of ambient light, the saturation state of the photodetectors, image processing constraints, and the like. After sensor unit 104 acquires the two frames, with a particular number of exposures in each, the two frames may be combined into a single frame. Dark areas may be taken from the frame having the larger number of exposures, and saturated areas may be taken from the frame having the smaller number of exposures.

Figure 22 is a schematic illustration of the two image frames acquired in Figure 21, and the combination of the two frames. For simplicity, it is assumed that the size of an image frame is 4 pixels. In the first frame 292, which originates from a large number of exposures, the upper pixels become saturated, while the lower pixels retain a reasonable level of gray. In the second frame 294, which originates from a smaller number of exposures, the upper left pixel does not become saturated, whereas the lower pixels are dark areas. In the combined image, the pixels from first frame 292 are combined with the pixels from second frame 294. The combined image frame 296 has fewer saturated pixels (only the upper right pixel), and fewer dark area pixels (only the lower right pixel). The overall image quality is thereby increased. The combination of the saturated upper left pixel in first frame 292 and the non-saturated upper left pixel in second frame 294 generates a non-saturated upper left pixel in combined frame 296.
The technique depicted in Figures 21 and 22, of two frame exposures followed by a frame combination, enlarges the dynamic range of system 100 and provides a high quality image even in a saturated environment. It is appreciated that such image processing may be implemented via other techniques. For example, by using an even faster sensor, it is possible to combine a larger number of frames.
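A minimal sketch of the two-frame combination of Figures 21 and 22, assuming 8-bit pixel values and an assumed saturation threshold (the names, threshold and example values are illustrative, not part of the original disclosure):

```python
import numpy as np

def combine_frames(many_exposures, few_exposures, sat_level=255):
    """Keep pixels from the many-exposure frame, except where they saturate,
    in which case substitute the corresponding few-exposure pixels."""
    many = np.asarray(many_exposures)
    few = np.asarray(few_exposures)
    return np.where(many >= sat_level, few, many)

# 4-pixel example in the spirit of Figure 22 (values are illustrative):
frame_292 = np.array([[255, 255], [120, 10]])   # many exposures: top row saturated
frame_294 = np.array([[180, 255], [15, 2]])     # few exposures: bottom row dark
print(combine_frames(frame_292, frame_294))     # [[180 255] [120 10]]
```

As in Figure 22, only the upper right pixel (saturated in both frames) remains saturated, and only the lower right pixel remains dark.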
It is recalled that the blinding effect includes blinding caused by the operation of a similar system in the vicinity of system 100, herein known as mutual blinding. System 100 may overcome mutual blinding by applying statistical techniques or synchronization techniques. Statistical techniques may include reducing the number of exposures in the course of acquiring an individual video field and possibly compensating by using a greater laser intensity or a higher intensification level from sensor unit 104 (Figure 1). The techniques may also include a random or predefined change in the timing of cycles throughout a single frame, changing the exposure frequency, or any combination of these techniques. Synchronization techniques may include establishing a communication channel between the two systems, for example, in the RF range. Such a channel would enable the two systems to communicate with each other. Another possible synchronization technique for overcoming mutual blinding is automatic synchronization.
Reference is now made to Figure 23, which is a pair of graphs, generally designated 300 and 310, depicting a synchronization technique for overcoming mutual blinding. The vertical axis represents the status of a device, where '1' represents a status of the device being on, and '0' represents a status of the device being off. The horizontal axis represents time. In the synchronization technique depicted in Figure 23, one system periodically enters a "listening period". When the system is in the listening period, the system refrains from transmitting laser pulses and collecting reflections. In the event that a second system does not transmit any pulses while the first system is in its listening period, section 312 of graphs 300 and 310, the first system may resume activity at the end of its listening period. In the event that the second system transmits pulses while the first system is in its listening period, section 314 of graphs 300 and 310, the first system waits until the end of the cyclic sequence of the second system before resuming activity. In graph 300, system #1 performs a cyclic sequence of 50 exposures before entering a listening period. In graph 310, system #2 performs a cyclic sequence only when system #1 is in a listening period. In this manner, synchronization between system #1 and system #2 ensures that no pulses transmitted by one system are received by the other system, thereby preventing interference and mutual blinding. It is noted that in this synchronization technique, 50% of the possible exposure time in a frame is allotted to each system.
The synchronization technique depicted in Figure 23 may be applied, for example, in a night vision imaging system mounted on a moving platform (such as a vehicle), according to one embodiment of the disclosed technique. For the sake of illustrative purposes, reference will be made to a vehicle, as an example which is applicable to any moving platform. In this embodiment of the disclosed technique, a night vision imaging system mounted on a vehicle may include an interface with the vehicle's computer system (automotive BUS). Two pulse detectors are mounted in the vehicle in which system 100 (Figure 1) is installed. One pulse detector is installed in the front section of the vehicle, and the other pulse detector is installed in the rear section of the vehicle. The pulse detectors detect if other systems similar to system 100 are operating in vehicles approaching the vehicle of system 100 from the front or from the rear. Since a vehicle approaching from the rear is not likely to cause interference with system 100, synchronization may not be implemented in such a case.
An alternative synchronization technique for overcoming mutual blinding involves "sharing". For example, part of the listening period of a frame may be dedicated to detecting pulses transmitted by other systems. If no pulse is detected from another system, system 100 may randomly decide when laser device 102 (Figure 1) may begin transmitting laser pulses within the same frame span. If a pulse from another system is detected, however, system 100 initiates transmission of laser pulses at a random time only after the approaching pulse train has ended. Alternatively, each system may randomly change its pulse transmission start timing in each frame. It is appreciated that these synchronization techniques for overcoming mutual blinding allow a system to synchronize with other systems that operate at different rates. Another possible synchronization technique for overcoming mutual blinding involves a synchronizing pulse transmitted by one system at a given time, while the other system adapts itself in accordance with the received synchronizing pulse.
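The "sharing" scheme might be outlined as follows; this is an illustrative sketch only, and the function name, parameters, and the uniform random draw are all assumptions:

```python
import random

def next_burst_start(frame_span_s, foreign_burst_end_s=None):
    """Decide when to begin transmitting within the current frame span.
    If a foreign pulse train was detected during the listening period,
    start only after it ends; otherwise pick any random time in the frame."""
    earliest = foreign_burst_end_s or 0.0
    if earliest >= frame_span_s:
        return None   # no room left in this frame; try again next frame
    return random.uniform(earliest, frame_span_s)
```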
Reference is now made to Figure 24, which is a block diagram of a method for target detection and identification, accompanied by an illustration of a conceptual operation scenario, generally designated 350, operative in accordance with another embodiment of the disclosed technique. In conceptual operation scenario 350, an attack helicopter 352, equipped with an observation system in accordance with an embodiment of the disclosed technique, such as system 100, is involved in an anti-tank operation at night. In the first stage 360, the helicopter crew detects a hot spot 354 at a 15 km range using a FLIR (Forward Looking Infrared) device. In the second stage 370, when the helicopter is distanced only 14-15 km from hot spot 354, the surveillance and observation system is activated. Only during the course of this stage are laser energy beams emitted in direction 356 and reflected from the target in direction 358. The gated system is operated for only a few seconds. This is sufficient for storing in the system enough images from which an image 366 of target 362 is generated and stored. The radiating laser beam may expose helicopter 352; limiting this exposure to a few seconds helps to protect helicopter 352 from counter-detection. In the third stage 380, the helicopter crew reverts to a passive operation mode, i.e. a relatively safer operation mode, while advancing towards the identified target 362. When the helicopter arrives at a distance of 12-14 km from target 362, the identification stage of image 366 is completed by reviewing its recorded details, such as by comparison 364 with other images of potential targets 368 stored in a data bank (not shown). In the example shown in Figure 24, the hot spot is identified as a legitimate target, namely, an enemy tank. In the final stage 390, the helicopter crew activates a weapons system, for example a missile homing on the thermal radiation emitted by hot spot 354. The activated weapon destroys the target from a relatively distant range, for example from a range of 8-9 km.
It is noted that in the course of the operation, described by sequence 360 to 390, the system in helicopter 352 had no need to measure the exact range to target 362. Such a measurement would have necessitated operating the laser for an extended period of time, thereby increasing the likelihood of exposure and detection of helicopter 352 by an enemy.
Reference is now made to Figure 25, which is a schematic illustration of a system, generally referenced 400, constructed and operative in accordance with another embodiment of the disclosed technique. System 400 is stabilized by a gimbals, and the optical axis of an illuminating laser beam is coupled with the optical axis of an observing section.
System 400 includes an external section 402 and an observation section 404. External section 402 includes a laser device 406 and an electronic controller 408. Observation section 404 includes at least one photodetector, or an array of photodetectors, 410, an optical coupling means 412, a coupling lens assembly 414, and an optical assembly 416. Laser device 406 is optically coupled with optical coupling means 412. Electronic controller 408 is coupled with array of photodetectors 410. Array of photodetectors 410 is further coupled with optical coupling means 412. Coupling lens assembly 414 is coupled with optical coupling means 412 and with optical assembly 416. Optical coupling means 412 includes a collimating lens 426, a first mirror 428 and an integrating lens assembly 430. Integrating lens assembly 430 includes a second mirror 432. First mirror 428 is optically coupled with collimating lens 426 and with integrating lens assembly 430. Optical assembly 416 includes an array of objective lenses 442. Gimbals 420 stabilizes observation section 404. Stabilization is required when system 400 is positioned on a continuously moving or vibrating platform, whether airborne, terrestrial or nautical, such as an airplane, helicopter, sea craft, land vehicle, and the like. Observation section 404 may also be stabilized by using feedback from a gyroscope to gimbals 420, by stabilization using image processing techniques, based on the spatial correlation between consecutively generated images, by stabilization based on sensed vibration, or in any combination of the above. External section 402 does not require specialized stabilization and may therefore be packaged separately and located separately from observation section 404. The stabilization may be based on detection of vibrations of the sensor means that influence the image as it is captured. Such sensor means in Figure 25 may include observation section 404, photodetector(s) 410, optical coupling means 412, coupling lens assembly 414, and optical assembly 416, and their rigid packaging.
Laser device 406 transmits a pulsed laser beam 422 toward a target. Laser device 406 may be a Diode Laser Array (DLA). The transmitted laser beam 422 propagates through optical fiber 424. Optical fibers are used in system 400 to transmit laser beam 422 because they enable the laser beam spot size to be reduced to the required field-of-view (FOV). Optical fibers also allow for easy packaging. Furthermore, optical fibers transmit laser light such that no speckle pattern is produced when the laser light falls on a surface (laser devices, in general, produce speckle patterns when laser light falls on a surface). Laser device 406 is separate from observation section 404. Since laser device 406 may be inherently heavy, this separation facilitates packaging and results in decreased weight in observation section 404.
Transmitted laser beam 422 propagates through optical fiber 424 toward collimating lens 426. Collimating lens 426 collimates transmitted laser beam 422. The collimated laser beam is conveyed toward first mirror 428. First mirror 428 diverts the direction of the collimated laser beam and converges the beam onto integrating lens assembly 430. Converged beam 434 reaches second mirror 432. Second mirror 432 is typically very small. Second mirror 432 couples the optical axis 436 of converged beam 434 with the optical axis 438 of observation section 404. Optical axis 438 is common to array of photodetectors 410 and to optical assembly 416. Second mirror 432 conveys converged beam 434 toward coupling lens assembly 414. Coupling lens assembly 414 conveys the beam toward array of objective lenses 442. Array of objective lenses 442 collimates the beam once more and transmits the collimated laser beam 440 toward a target (not shown). Beam 440 illuminates the target, and the reflections of light impinging on the surface of the target return to optical assembly 416. Optical assembly 416 routes the reflected beam 450 toward array of photodetectors 410 via coupling lens assembly 414.
Array of photodetectors 410 processes reflected beam 450 and converts reflected beam 450 into an image displayable on a television. Array of photodetectors 410 may be a CCD (Charge Coupled Device) type sensor. In this case, the CCD sensor is coupled by relay lenses to a gated image intensifier, as is known in the art. The CCD type sensor may include external shutters. Alternatively, array of photodetectors 410 may be a Gated Intensified Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), a Gated Image Intensifier, a Gated Intensified Active Pixel Sensor (GIAPS), and the like. It is noted that such sensor types enable advanced processing of the displayable television image, such as enlarging the image, identifying features, and the like. Advanced processing may include, for example, comparing the image with a set of images in a databank of known identified targets (see stage 380 with reference to Figure 24). The generated displayable television image may be subjected to additional processing. Such processing may include accumulating image frames via a frame grabber (not shown), integration to increase the quantity of light and to improve contrast, electronic stabilization provided by image processing techniques based on the spatial correlation between consecutively generated images, and the like.
Controller 408 controls the timing of array of photodetectors 410, and receives the displayable television image via suitable wiring 444.
Controller 408 may include an electronics card. Controller 408 controls the timing of array of photodetectors 410 in synchronization with the laser pulses provided by laser device 406. The timing is such that array of photodetectors 410 will be closed during the time period that the laser beam traverses the distance adjacent to system 400 en route to the target (distance RMIN, with reference to Figure 1). Switching the sensor unit to the "off" state immediately after transmitting the laser beam ensures that unwanted reflections from atmospheric substances and particles, and backscatter, are not captured by array of photodetectors 410, and that the self-blinding phenomenon is avoided.
Reference is now made to Figure 26, which is a schematic illustration of a system, generally referenced 500, constructed and operative in accordance with another embodiment of the disclosed technique. The optical axis of an illuminating laser beam in system 500 is essentially parallel with the optical axis of its array of photodetectors.
System 500 includes an electronics box 502, an observation module 504, a power supply 506, a narrow field collimator 508, a display 510, a video recorder 512, and a support unit 514. Electronics box 502 includes a laser device 516, a laser cooler 518, a controller 520, a service panel for technicians 522, an image processing unit 524 and a PC (Personal Computer) card 526. Observation module 504 includes an optical assembly 528, a filter 530, an optical multiplier 532, an array of photodetectors, or at least one photodetector, 534, and an electronics card 536. A spatial modulator shutter (not shown) may be located in front of array of photodetectors 534. A narrow field collimator 508 is installed on observation module 504. A power supply 506 is coupled with electronics box 502 via a connector 538. Electronics box 502 is coupled with observation module 504 via a cable 540. Electronics box 502 is optically coupled with narrow field collimator 508 via an optical fiber 542. Video recorder 512 is coupled with electronics box 502 and with display 510.
Laser device 516 transmits a pulsed laser beam 544 toward a target (not shown). Laser device 516 may be a DLA. Laser cooler 518 provides cooling for laser device 516. The transmitted laser beam 544 propagates through optical fiber 542 toward narrow field collimator 508. Collimator 508 collimates laser beam 544 so that the optical axis 546 of laser beam 544 is essentially parallel with the optical axis 548 of array of photodetectors 534, and conveys collimated laser beam 544 toward the target.
The reflected beam 550 from the target reaches optical assembly 528. Optical assembly 528 includes an array of narrow field objective lenses (not shown) packaged above support unit 514. Optical assembly 528 conveys reflected beam 550 to filter 530. Filter 530 performs spectral and spatial filtering on reflected beam 550. Filter 530 may locally darken an entrance of array of photodetectors 534 to overcome glare occurring in system 500. Image processing unit 524 provides control and feedback to filter 530. Filter 530 may be an adaptive Spatial Light Modulator (SLM), a spectral frequency filter, a polarization filter, a light polarizer, a narrow band pass filter, or any other mode selective filter. An SLM filter may be made up of a transmissive Liquid Crystal Display (LCD), a Micro-Electro-Mechanical System (MEMS), or other similar devices. Filter 530 may also be a plurality of filters. The characteristics of filter 530 suit the energy characteristics of reflected beam 550. Using feedback from image processing unit 524, filter 530 may be programmed to eliminate background radiation surrounding the target which is not within the spectral range of laser device 516. Residual saturation remaining on the eventual image as a result of ambient light sources in the field of view, for example artificial illumination, vehicle headlights, and the like, may be reduced to approximately one thousandth through adaptive SLM filtering. It is noted that other light sources in the field of view undergo additional filtering by the LPGI technique and by optic enlargement performed before reflected beam 550 is converted to a displayable image. This additional filtering is meant to facilitate separation between background light and the illuminated target, and to prevent blinding due to the presence of intense light sources in the immediate vicinity of system 500.

Filter 530 conveys reflected beam 550 to optical multiplier 532. Optical multiplier 532 enlarges reflected beam 550. It is noted that filter 530, or optical multiplier 532, or both, may be installed directly on the output end of optical assembly 528. Optical multiplier 532 may also be installed directly on the input end of array of photodetectors 534. Array of photodetectors 534 receives reflected beam 550, and processes and converts reflected beam 550 to image data. Array of photodetectors 534 may be a CCD sensor. The CCD sensor may include external shutters. Array of photodetectors 534 transfers the image data to electronics card 536 via cable 552. Electronics card 536 transfers the image data to electronics box 502 via cable 540. Cable 540 may be any type of wired or wireless communication link. Controller 520 synchronizes the timing of array of photodetectors 534. Controller 520 ensures that array of photodetectors 534 is closed (i.e. the sensor unit is deactivated) while transmitted laser beam 544 traverses the range in the immediate vicinity of system 500 (i.e. range RMIN, referring to Figure 1), for both the forward and return paths.

PC card 526 enables a user to interface with electronics box 502. PC card 526 is embedded with image processing capabilities. PC card 526 allows the image received from array of photodetectors 534 to be analyzed and processed. For example, such processing may include comparing the image to pictures of identified targets stored in a data bank, local processing of specific regions of the image, operation of the SLM function, and the like. The generated image may be presented on display 510 or recorded by video recorder 512.
The image may be transferred to a remote location by an external communication link (not shown) such as a wireless transmission channel.
Power supply 506 supplies power to the components of electronics box 502 via connector 538. Power supply 506 may be a battery, a generator, or any other suitable power source. For example, an input voltage from power supply 506 allows laser device 516 to operate. Support unit 514 supports observation module 504, as well as narrow field collimator 508 installed above observation module 504. Support unit 514 provides for height and rotational adjustments. Support unit 514 may include a tripod (not shown), support legs 554 for fine adjustments, and an integral stabilization system (not shown) including, for example, viscous shock absorbers.
It is noted that DLA laser device 516 allows optical fibers to be used to convey transmitted laser beam 544. This facilitates packaging of laser device 516, which is typically heavy. Laser device 516 is also located separately from observation module 504, reducing the weight carried by observation module 504. It is further noted that DLA laser device 516 generates a beam of laser energy having substantially high power for extended periods. Since the generated beam has a high pulse repetition frequency and a relatively low intensity, it may be routed via optical fibers, which tolerate only limited intensities, particularly at the pulse peak. It is further noted that a DLA laser device generates radiation in the near infrared spectral region, which is invisible to the human eye yet very close to the visible spectrum. Image intensifiers are very sensitive to this wavelength and provide good image contrast. Thus an image intensifier used in conjunction with a DLA laser device can provide high image quality even for targets at long ranges.
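This peak-intensity constraint on fiber delivery can be made concrete with a short check. The following Python sketch compares a pulse's peak intensity against an assumed fiber damage threshold; the threshold and the example numbers are placeholders chosen only to illustrate the contrast between a long, low-peak DLA pulse and a short, high-peak pulse.

```python
import math

def fiber_peak_intensity_ok(pulse_energy_j, pulse_width_s, core_diameter_m,
                            damage_threshold_w_per_m2=1e13):
    """Peak power ~ pulse energy / pulse width; intensity = power / core area.
    The damage threshold is an illustrative placeholder, not a datasheet value."""
    peak_power_w = pulse_energy_j / pulse_width_s
    core_area_m2 = math.pi * (core_diameter_m / 2.0) ** 2
    return peak_power_w / core_area_m2 < damage_threshold_w_per_m2

# A long, low-energy DLA-like pulse fits easily; a nanosecond pulse may not.
print(fiber_peak_intensity_ok(1e-3, 100e-6, 400e-6))   # True
print(fiber_peak_intensity_ok(50e-3, 10e-9, 400e-6))   # False
```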
It is further noted that a DLA laser device generates non-coherent radiation. Therefore, the generated beam is highly uniform and yields an image of higher quality than a coherent laser beam would. It is further noted that a DLA laser device can operate in a "snapshot" observation mode. Snapshot observation involves transmitting a series of quick flash bursts, which shortens the time that the laser device is active. This reduces the exposure of the system and the risk of detection by a foreign element.
It is further noted that in an embodiment of the disclosed technique where the system is stabilized on a gimbals, such as system 400 (Figure 25), a DLA laser device enables switching of the array of photodetectors, allowing the array of photodetectors to operate on very short time spans with respect to the long damped vibrations of the gimbals. Such vibrations may cause blurring in the generated image. It is further noted that a DLA laser device is highly efficient in converting power to light. A DLA laser device delivers more light and less heat than other types of laser devices. Accordingly, the laser in the disclosed technique is transmitted through relatively wide optics and at relatively low intensities, so that the safety range is only a few meters from the laser. In contrast, in systems with laser range finders or laser designators, the safety range may reach tens of kilometers.
It is further noted that a DLA laser device is suitable for applications where laser transmission through a water medium is required, for example when performing sea surveillance from an airborne system, when performing underwater observation, or other nautical applications. For such applications, a laser beam in the blue-green range of the visible spectrum provides optimal performance.
It is noted that the pulse emitting means (or transmitter) and the sensor described hereinabove are located in the same place, which simplifies the simultaneous control of the pulse and the sensor gating, and their timing or synchronization. This co-location of the pulse emitting means and the sensor is typical of cases in which the path between the sensor and the observed object is obscured. However, the disclosed technique is not limited to such a configuration. The pulse emitter and the sensor may well be situated in two different locations, as long as their control, timing or synchronization are appropriately maintained for creating a sensitivity as a function of the range such that an amount of received energy of pulse reflections, reflected from objects located beyond a minimal range, progressively increases with the range along said depth of a field to be imaged. The relevant ranges (including the minimal, optimal and maximal ranges, and the depth of a field) are then described with respect to the sensor, rather than the emitter. This may be achieved by various technologies, including communication between the emitter and the sensor, with a controller, using an emitter signal bearing timing or other information picked up by the sensor, atomic clocks, a common clock, and the like. The pulse would then merely "reflect" at some angle rather than "reflect back" at 180 degrees from the observed objects toward the sensor. Thus any use of terms such as "back" in this context herein should be read as also referring to reflections at any angle from the objects to the sensor. Calculations illustrating the "to and fro" path towards the target and back to the sensor merely demonstrate one situation, and can be easily and analogously altered to calculations for other paths.
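For such a separated configuration, the round-trip term 2R/v simply generalizes to the sum of the emitter-to-object and object-to-sensor path lengths. The following Python sketch of the gate-opening delay for a bistatic geometry assumes shared timing between the two units; the coordinates and the function name are illustrative.

```python
import math

def bistatic_gate_delay(emitter_pos, sensor_pos, object_pos,
                        pulse_width_s, v=3e8):
    """Gate-opening delay when the emitter and sensor are in different places:
    the 'to and fro' 2*R/v term becomes (d_out + d_back)/v."""
    d_out = math.dist(emitter_pos, object_pos)    # emitter -> object
    d_back = math.dist(object_pos, sensor_pos)    # object -> sensor
    return (d_out + d_back) / v + pulse_width_s

# Emitter and sensor 200 m apart, nearest object of interest ~3 km away.
delay_s = bistatic_gate_delay((0, 0), (200, 0), (1500, 2600), 100e-6)
```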
It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather the scope of the disclosed technique is defined only by the claims, which follow.

Claims

1. An imaging system comprising: a transmission source, said transmission source providing at least one energy pulse; a sensor for receiving pulse reflections of said at least one energy pulse reflected from objects within a depth of a field to be imaged, said depth of field having a minimal range (RMIN), said sensor enabled to gate detection of said pulse reflections, with a gate timing controlled such that said sensor starts to receive said pulse reflections after a delay timing substantially given by the time it takes said at least one energy pulse to reach said minimal range and complete reflecting to said sensor from said minimal range, wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system, such that an amount of received energy of said pulse reflections, reflected from objects located beyond said minimal range, progressively increases with the range along said depth of a field to be imaged.
2. The system according to claim 1 , wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system, such that an amount of received energy of said pulse reflections, reflected from objects located beyond said minimal range, progressively increases with the range along said depth of a field to be imaged until an optimal range (R0).
3. The system according to claim 2, wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system, such that an amount of received energy of said pulse reflections, reflected from objects located beyond said optimal range, is maintained detectable until a maximal range (RMAX).
4. The system according to claim 3, wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system, such that said amount of received energy of said pulse reflections, reflected from objects located beyond said optimal range, is maintained substantially constant until said maximal range.
5. The system according to claim 3, wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system, such that said amount of received energy of said pulse reflections, reflected from objects located beyond said optimal range, gradually decreases until said maximal range.
6. The system according to claim 1 , wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system, such that an amount of received energy of said pulse reflections is directly proportional to the ranges of said objects to be imaged.
7. The system according to claim 1, wherein said at least one energy pulse defines a substantial pulse width (TLASER) commencing at a start time (T0); said delay timing is substantially given by the time elapsing from said start time (T0) until twice said minimal range (RMIN) divided by the speed at which said at least one energy pulse travels (v), in addition to said pulse width (TLASER): $\frac{2 R_{MIN}}{v} + T_{LASER}$.
8. The system according to claim 1 , wherein said at least one energy pulse and said gate timing are controlled for creating a sensitivity as a function of range for said system through synchronization between the timing of said at least one energy pulse and the timing of said gate detection.
9. The system according to claim 1, wherein said at least one energy pulse defines a substantial pulse width (TLASER), a pulse pattern, a pulse shape, and a pulse energy; said sensor is enabled to gate detection of said pulse reflections, with a gating time span said sensor is activated (TON), a duration of time said sensor is deactivated (TOFF), and a synchronization timing of the gating with respect to said at least one energy pulse; and wherein at least one of said delay timing, said pulse width, said pulse shape, said pulse pattern, said pulse energy, said gating time span said sensor is activated (TON), said duration of time said sensor is deactivated (TOFF), and said synchronization timing, is determined according to at least one of said depth of a field to be imaged, specific environmental conditions said system is used in, a speed said system is moving at if said system is mounted on a moving platform, and specific characteristics of different objects expected to be found in said depth of field.
10. The system according to claim 9, wherein said pulse width, said duration of time said sensor is deactivated, and said gating time span said sensor is activated define a cycle time, wherein said at least one energy pulse is provided for a duration of said pulse width, the opening of said sensor is delayed for a duration of said duration of time said sensor is deactivated, and said pulse reflections are received for a duration of said gating time span said sensor is activated.
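The timing relations of claims 7, 10 and 32 can be evaluated numerically. A minimal Python sketch follows, assuming (consistently with claims 7 and 32) that TOFF is counted from the pulse start time T0 and therefore already spans TLASER, so that one cycle spans TOFF + TON; the range and pulse-width values are illustrative.

```python
def gate_timing(r_min_m, r_0_m, t_laser_s, v=3e8):
    """Sketch of the claim 7/32 timing:
    T_OFF = 2*R_MIN/v + T_LASER  (delay from pulse start, claim 7)
    T_ON  = 2*(R_0 - R_MIN)/v    (gate span, claim 32)
    One cycle spans T_OFF + T_ON, since T_OFF already covers T_LASER."""
    t_off = 2 * r_min_m / v + t_laser_s
    t_on = 2 * (r_0_m - r_min_m) / v
    return t_off, t_on, t_off + t_on

# Illustrative numbers: R_MIN = 150 m, R_0 = 3000 m, 2 us pulse.
t_off, t_on, cycle = gate_timing(150.0, 3000.0, 2e-6)
# t_off = 3.0 us, t_on = 19.0 us, cycle = 22.0 us
```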
11. The system according to claim 9, wherein said determination according to at least one of said depth of a field, said specific environmental conditions, said speed said system is moving at if said system is mounted on a moving platform, and said specific characteristics of different objects expected to be found in said depth of field, is a dynamic determination.
12. The system according to claim 11 , wherein said dynamic determination varies in an increasing or decreasing manner over time.
13. The system according to claim 11 , wherein said pulse width and said gating time span are limited to reduce the sensitivity of said system to ambient light sources.
14. The system according to claim 13, wherein said pulse width is shortened progressively, said delay timing is lengthened progressively, and said cycle time does not change.
15. The system according to claim 13, wherein said gating time span is shortened progressively, said delay timing is lengthened progressively, and said cycle time does not change.
16. The system according to claim 13, wherein said pulse width and said gating time span are shortened progressively, said delay timing is lengthened progressively, and said cycle time does not change.
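The progressive narrowing of claims 14 to 16 can be sketched as a schedule: per step the pulse width and the gate span shrink while the delay grows by the same amount, leaving the cycle time unchanged. The step count and step size below are illustrative assumptions.

```python
def narrowing_schedule(t_laser_s, t_off_s, t_on_s, steps, shrink_s):
    """Claim 16 sketch: shorten T_LASER and T_ON progressively, lengthen
    T_OFF progressively, keeping the cycle time (T_OFF + T_ON) constant."""
    schedule = []
    for k in range(steps):
        d = k * shrink_s
        assert t_laser_s - d > 0 and t_on_s - d > 0, "pulse/gate collapsed"
        schedule.append({"t_laser": t_laser_s - d,
                         "t_off": t_off_s + d,
                         "t_on": t_on_s - d})
    return schedule

# Four steps of 0.5 us; every entry keeps t_off + t_on = 22 us.
for step in narrowing_schedule(2e-6, 3e-6, 19e-6, steps=4, shrink_s=0.5e-6):
    assert abs(step["t_off"] + step["t_on"] - 22e-6) < 1e-12
```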
17. The system according to claim 1 , wherein the gating of said sensor is utilized to create a sensitivity as a function of range for said system by changing a parameter selected from the group consisting of changing the shape of said at least one energy pulse, changing the pattern of said at least one energy pulse, changing the energy of said at least one energy pulse, changing a gating time span said sensor is activated (T0N), changing a duration of time said sensor is deactivated (T0FF), changing a pulse width (TLASER) of said at least one energy pulse, changing said delay timing, and changing a synchronization timing between said gating and the timing of providing said at least one energy pulse.
18. The system according to claim 17, wherein said changing of a parameter is utilized according to at least one of: said depth of field, specific environmental conditions said system is used in, a speed said system is moving at if said system is mounted on a moving platform, and characteristics of different objects expected to be found in said depth of field.
19. The system as in any of claims 8 to 18, further comprising a controller for controlling said synchronization.
20. The system according to claim 19, wherein at least one repetition of said cycle time forms part of an individual video frame, and a number of said repetitions forms an exposure number per said video frame.
21. The system according to claim 20, further comprising a control mechanism for dynamically controlling and varying said exposure number.
22. The system according to claim 19, wherein mutual blinding between said system and a similar system passing one another is eliminated by statistical solutions selected from the group consisting of: lowering said exposure number; a random or pre-defined change in said timing of said cycle time during the course of said individual video frame; and a change in frequency of said exposure number.
23. The system according to claim 19, wherein mutual blinding between said system and a similar system passing one another is eliminated by synchronic solutions selected from the group consisting of: establishing a communication channel between said system and said similar system; letting each of said system and said similar system go into listening modes from time to time in which said at least one energy pulse is not emitted for a listening period, after which period any of said system and said similar system resumes emitting said at least one energy pulse if no pulses were collected during said listening period, and after which period said system and said similar system wait until an end of a cyclic sequence before resuming emitting said at least one energy pulse if pulses were collected during said listening period; and having said systems change a pulse start transmission time in said individual video frames.
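One of the statistical solutions of claim 22, a random change of the cycle timing within each video frame, might be sketched as follows; the frame rate, cycle length, and the naive overlap repair are illustrative assumptions.

```python
import random

def jittered_cycle_starts(frame_period_s, cycle_s, exposures, seed=None):
    """Claim 22 sketch: place each gating cycle at a random offset inside the
    video frame, so two passing systems rarely emit into each other's open
    gates for consecutive frames."""
    rng = random.Random(seed)
    starts = sorted(rng.uniform(0.0, frame_period_s - cycle_s)
                    for _ in range(exposures))
    # Naively push apart any cycles that would overlap their predecessor.
    for i in range(1, len(starts)):
        if starts[i] < starts[i - 1] + cycle_s:
            starts[i] = starts[i - 1] + cycle_s
    return starts

starts = jittered_cycle_starts(1 / 30, 25e-6, exposures=100, seed=1)
```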
24. The system according to claim 21, wherein said exposure number is varied by said control mechanism according to a level of ambient light.

25. The system according to claim 24, further comprising an image intensifier.
26. The system according to claim 25, wherein said exposure number is varied by said control mechanism according to a level of current consumed by said image intensifier.
27. The system according to claim 21 , wherein said control mechanism comprises image processing means for locating areas in said sensor in a state of saturation.
28. The system according to claim 21 , wherein said control mechanism comprises image processing means for processing a variable number of exposures.
29. The system according to claim 28, wherein said image processing means is utilized to take at least two video frames, one with a high exposure number, the other with a low exposure number; said exposure numbers of said at least two video frames are determined by said control mechanism; and said at least two video frames are combined to form a single video frame by combining dark areas from frames with a high exposure number and saturated areas from frames with a low exposure number.
30. The system according to claim 2, wherein a pulse width (TLASER) of said at least one energy pulse is substantially defined in accordance with the following equation: $T_{LASER} = 2 \times \frac{R_0 - R_{MIN}}{v}$, where v is the speed at which said at least one energy pulse travels.
31. The system according to claim 1 , wherein said at least one energy pulse comprises several pulses and wherein said sensor receives several pulses of said at least one energy pulse reflected from at least one object during said gating time span said sensor is activated.
32. The system according to claim 2, wherein said sensor is enabled to gate detection of said pulse reflections, with a gating time span said sensor is activated (TON), and a duration of time said sensor is deactivated (TOFF), which are substantially defined in accordance with the following equations: $T_{ON} = 2 \times \frac{R_0 - R_{MIN}}{v}$ and $T_{OFF} = \frac{2 R_{MIN}}{v} + T_{LASER}$, where TLASER is the pulse width of said at least one energy pulse, and v is the speed at which said at least one energy pulse travels.
33. The system according to claim 3, wherein said sensor is enabled to gate detection of said pulse reflections in accordance with a Long Pulsed Gated Imaging (LPGI) timing technique.
34. The system according to claim 33, wherein said sensor is enabled to gate detection of said pulse reflections with a gating time span said sensor is activated (TON), and a duration of time said sensor is deactivated (TOFF), which are substantially defined in accordance with the following equations: $T_{ON} = 2 \times \frac{R_{MAX} - R_{MIN}}{v}$ and $T_{OFF} = \frac{2 R_{MIN}}{v}$, where v is the speed at which said at least one energy pulse travels.
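A numeric check of the LPGI gate timing as reconstructed above (mirroring method claim 93); the ranges are illustrative.

```python
def lpgi_timing(r_min_m, r_max_m, v=3e8):
    """LPGI sketch: the gate stays shut while the pulse clears the near zone,
    then stays open across the whole depth of field.
    T_ON = 2*(R_MAX - R_MIN)/v, T_OFF = 2*R_MIN/v."""
    return 2 * (r_max_m - r_min_m) / v, 2 * r_min_m / v

t_on, t_off = lpgi_timing(150.0, 4500.0)   # 29.0 us open, 1.0 us shut
```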
35. The system according to claim 1 , wherein said at least one energy pulse is selected from the group consisting of electromagnetic energy and mechanical energy.
36. The system according to claim 1 , wherein said sensor is selected from the group consisting of: a Complementary Metal Oxide Semiconductor (CMOS), a Charge Coupled Device (CCD), a Gated Intensifier Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), a Gated Intensified Active Pixel Sensor (GIAPS), and a Gated Image Intensifier.
37. The system according to claim 1 , wherein said sensor comprises an external shutter.
38. The system according to claim 1 , wherein said sensor comprises at least one photodetector.
39. The system according to claim 1 , wherein said sensor is enabled to autogate.
40. The system according to claim 1, further comprising a display apparatus for displaying images constructed from said pulse reflections received in said sensor.
41. The system according to claim 40, wherein said display apparatus comprises a Head Up Display (HUD) apparatus.
42. The system according to claim 40, wherein said display apparatus comprises an LCD display apparatus.
43. The system according to claim 40, wherein said display apparatus comprises a planar optic apparatus.
44. The system according to claim 40, wherein said display apparatus comprises a holographic based flat optic apparatus.
45. The system according to claim 1 , further comprising a storage unit for storing images constructed from said pulse reflections received in said sensor.
46. The system according to claim 1 , further comprising a transmission device for transmitting images constructed from said pulse reflections received in said sensor.
47. The system according to claim 1 , wherein said system is mounted on a moving platform.
48. The system according to claim 1 , wherein said system is stabilized.
49. The system according to claim 48, wherein said stabilization is selected from the group consisting of: stabilization using a gimbals, stabilization using feedback from a gyroscope to a gimbals, stabilization using image processing techniques, based on a spatial correlation between consecutively generated images of said object to be imaged, and stabilization based on sensed vibrations of said sensor.
50. The system according to claim 1 , further comprising at least one ambient light sensor.
51. The system according to claim 1, further comprising a pulse detector for detecting pulses emitted from an approaching similar system.
52. The system according to claim 1 , further comprising an image- processing unit.
53. The system according to claim 1 , further comprising a narrow band pass filter functionally connected to said sensor.
54. The system according to claim 1 , further comprising a spatial modulator shutter.
55. The system according to claim 1, further comprising a spatial light modulator.
56. The system according to claim 1 , further comprising an optical fiber for transmitting said at least one energy pulse towards said objects to be imaged.
57. The system according to claim 1, further comprising a polarizer for filtering out incoming energy which does not conform to the polarization of said pulse reflections.
58. The system according to claim 57, wherein said transmission source provides at least one polarized energy pulse.
59. The system according to claim 10, wherein said sensitivity of said system relates to a gain and responsiveness of said sensor in proportion to an amount of energy received by said sensor, wherein said gain received by said sensor as a function of range R is defined by the following convolution formula: $G(R) = \int L(t)\, C\!\left(t + \frac{2R}{v}\right) dt$, wherein L(t) defines a Boolean function representing an on/off status of said transmission source, irrespective of a state of said sensor, wherein L(t) = 1 if said transmission source is on and L(t) = 0 if said transmission source is off, wherein C(t) defines a Boolean function representing an ability of said sensor to receive incoming pulse reflections according to a state of said sensor, wherein C(t) = 1 if said sensor is in an activated state and C(t) = 0 if said sensor is in a deactivated state, and where v is the speed at which said at least one energy pulse travels.
60. The system according to claim 59, wherein a value for radiant intensity is obtained by multiplying said convolution formula by a geometrical propagation attenuation function.
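The claim 59 convolution and the claim 60 attenuation factor admit a direct numerical evaluation. The Python sketch below integrates L(t)·C(t + 2R/v) over one cycle and optionally applies a simple 1/R² geometric factor; the timing values, step size, and attenuation model are illustrative assumptions.

```python
import numpy as np

def sensitivity_vs_range(t_laser_s, t_off_s, t_on_s, ranges_m,
                         v=3e8, dt=1e-8, attenuate=True):
    """Numerical form of the claim 59 convolution (a sketch):
        G(R) = integral of L(t) * C(t + 2R/v) dt
    L(t) = 1 while the pulse is emitted; C(t) = 1 while the gate is open
    (here the gate opens at t_off_s, measured from the pulse start)."""
    t = np.arange(0.0, t_off_s + t_on_s, dt)
    L = (t < t_laser_s).astype(float)                      # pulse on/off
    gains = []
    for R in ranges_m:
        arrival = t + 2.0 * R / v                          # round-trip delay
        C = ((arrival >= t_off_s) &
             (arrival < t_off_s + t_on_s)).astype(float)   # gate open?
        g = float(np.sum(L * C) * dt)
        if attenuate:                                      # claim 60 factor
            g /= max(float(R), 1.0) ** 2
        gains.append(g)
    return np.array(gains)

# Zero gain below R_MIN, then a ramp: the growing pulse/gate overlap offsets
# the 1/R^2 attenuation along the depth of field.
gains = sensitivity_vs_range(2e-6, 3e-6, 19e-6, np.arange(0, 4000, 50))
```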
61. The system according to claim 1, wherein said transmission source is selected from the group consisting of a laser generator, an array of diodes, an array of LEDs, and a visible light source.
62. An imaging method, the method comprising the procedures of: emitting at least one energy pulse to a target area; receiving at least one reflection of said at least one energy pulse reflected from objects within a depth of a field to be imaged, said depth of field having a minimal range (RMIN), said receiving comprises gating detection of said at least one reflection such that said at least one energy pulse is detected after a delay timing substantially given by the time it takes said at least one energy pulse to reach said minimal range and complete reflecting; and progressively increasing the received energy of said at least one reflection reflected from objects located beyond said minimal range along said depth of a field to be imaged, by controlling said at least one energy pulse and the timing of said gating.
63. The method according to claim 62, wherein said procedure of increasing comprises increasing the received energy of said at least one reflection reflected from objects located beyond said minimal range along said depth of a field to be imaged up to an optimal range (Ro).
64. The method according to claim 63, further comprising the procedure of maintaining detectable the received energy of said at least one reflection reflected from objects located beyond said optimal range along said depth of a field to be imaged up to a maximal range (RMAX).
65. The method according to claim 64, wherein said procedure of maintaining comprises maintaining substantially constant said received energy of said at least one reflection reflected from objects located beyond said optimal range along said depth of a field to be imaged up to said maximal range.
66. The method according to claim 64, wherein said procedure of maintaining comprises gradually decreasing said received energy of said at least one reflection reflected from objects located beyond said optimal range along said depth of a field to be imaged up to said maximal range.
67. The method according to claim 62, wherein said procedure of increasing comprises increasing the received energy of said at least one reflection in direct proportion to the ranges of said objects within said depth of field to be imaged.
68. The method according to claim 62, wherein said at least one energy pulse defines a substantial pulse width (TLASER) commencing at a start time (T0); and said delay timing is substantially given by the time elapsing from said start time (T0) until twice said minimal range divided by the speed at which said at least one energy pulse travels (v), in addition to said pulse width (TLASER): $\frac{2 R_{MIN}}{v} + T_{LASER}$.
69. The method according to claim 62, wherein said at least one energy pulse defines a substantial pulse width (TLASER), a pulse pattern, a pulse shape, and a pulse energy; said procedure of gating comprises a gating time span a sensor utilized for said receiving is activated (TON), a duration of time said sensor is deactivated (TOFF), and a synchronization timing of said gating with respect to said at least one energy pulse; and wherein at least one of said delay timing, said pulse width, said pulse shape, said pulse pattern, said pulse energy, said gating time span said sensor is activated (TON), said duration of time said sensor is deactivated (T0FF), and said synchronization timing is determined according to at least one of said depth of a field, specific environmental conditions said method is used in, a moving speed of a moving platform if said sensor is mounted on said moving platform, and specific characteristics of different objects expected to be found in said depth of field.
70. The method according to claim 62, further comprising the procedure of autogating.
71. The method according to claim 62, wherein said procedure of controlling comprises progressively changing at least one parameter selected from the group consisting of changing a pattern of said at least one energy pulse, changing a shape of said at least one energy pulse, changing the energy of said at least one energy pulse, changing a gating time span a sensor utilized for said receiving is activated (TON), changing a duration of time said sensor is deactivated (T0FF), changing an energy pulse width (TLASER) of said at least one energy pulse, changing said delay timing, and changing a synchronization timing between said gating and said emitting.
72. The method according to claim 71 , wherein said procedure of controlling comprises changing said at least one parameter according to at least one of said depth of field, said specific environmental conditions said method is used in, said moving speed of said moving platform if said sensor is mounted on said moving platform, and characteristics of different objects expected to be found in said depth of field.
73. The method according to claim 71 , wherein said procedure of controlling comprises the sub-procedures of providing said at least one energy pulse for a duration of said pulse width, delaying the opening of said sensor for a duration of said time said sensor is deactivated (T0FF), and receiving energy pulses reflected from objects for a duration of said gating time span said sensor is activated (T0N), and wherein said pulse width, said duration of said time said sensor is deactivated (T0FF) and said gating time span said sensor is activated (TON) define a cycle time.
74. The method according to claim 73, wherein at least one repetition of said cycle time forms part of an individual video frame, and a number of said repetitions forms an exposure number for said video frame.
75. The method according to claim 74, further comprising the procedure of eliminating mutual blinding between a system using said method and a similar system using said method, passing one another, by statistical solutions selected from the group consisting of lowering said exposure number, a random or pre-defined change in said timing of said cycle time during the course of said individual video frame, and a change in the frequency of said exposure number.
76. The method according to claim 74, further comprising the procedure of eliminating mutual blinding between a system using said method and a similar system using said method passing one another, by synchronic solutions selected from the group consisting of: establishing a communication channel between said system and said similar system; letting each of said system and said similar system go into listening modes from time to time in which said at least one energy pulse is not emitted for a listening period, after which period any of said system and said similar system resume emitting said at least one energy pulse if no pulses were collected during said listening period, and after which period said system and said similar system wait until an end of a cyclic sequence before resuming emitting said at least one energy pulse if pulses were collected during said listening period; and having said systems change a pulse start transmission time in said individual video frames.
77. The method according to claim 74, further comprising the procedure of dynamically varying said exposure number by a control mechanism.
78. The method according to claim 77, wherein said procedure of dynamically varying comprises adjusting said exposure number according to a level of ambient light by said control mechanism.
79. The method according to claim 77, further comprising the procedure of intensifying said detection of said at least one reflection, and wherein said procedure of dynamically varying comprises adjusting said exposure number by said control mechanism according to a level of current consumed by an image intensifier utilized for said intensifying.
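The control mechanism of claims 77 to 79 can be sketched as a simple feedback rule: back the exposure number off quickly when ambient light or intensifier current runs high, and creep it back up otherwise. All thresholds, limits, and step sizes below are illustrative assumptions.

```python
def adjust_exposure_number(current_n, ambient_level, intensifier_current_a,
                           n_min=1, n_max=200,
                           ambient_high=0.7, current_limit_a=1e-3):
    """Claims 77-79 sketch: vary the exposure number per frame according to
    ambient light (claim 78) and image-intensifier current (claim 79)."""
    if ambient_level > ambient_high or intensifier_current_a > current_limit_a:
        return max(n_min, current_n // 2)   # halve: back off quickly
    return min(n_max, current_n + 1)        # increment: recover slowly

n = 64
for ambient in (0.2, 0.9, 0.9, 0.3):        # a toy sequence of readings
    n = adjust_exposure_number(n, ambient, 0.5e-3)
```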
80. The method according to claim 77, further comprising the procedure of image processing by locating areas in said sensor in a state of saturation by said control mechanism.
81. The method according to claim 77, further comprising the procedure of image processing for a variable number of exposures by said control mechanism.
82. The method according to claim 81 , wherein said procedure of image processing comprises: taking at least two video frames, one with a high exposure number, the other with a low exposure number, by image processing of a variable number of exposures; determining exposure numbers for said at least two video frames; and combining frames to form a single video frame by combining dark areas from frames with a high exposure number and saturated areas from frames with a low exposure number.
83. The method according to claim 71 , wherein said procedure of increasing is dynamic.
84. The method according to claim 71 , wherein said pulse width and said gating time span said sensor is activated (T0N) are limited to eliminate or reduce the sensitivity of said sensor to ambient light sources.
85. The method according to claim 83, wherein said procedure of increasing comprises varying the sensitivity of said sensor in a manner varying over time selected from the group consisting of an increasing, a decreasing, and a partially increasing and a partially decreasing, manner over time.
86. The method according to claim 85, wherein said procedure of controlling comprises shortening said pulse width progressively and lengthening said delay timing progressively, while retaining a cycle time of said gating unchanged.
87. The method according to claim 85, wherein said procedure of controlling comprises shortening said gating time span progressively and lengthening said delay timing progressively, while retaining a cycle time of said gating unchanged.
88. The method according to claim 85, wherein said procedure of controlling comprises shortening said pulse width and said gating time span progressively, lengthening said delay timing progressively, while retaining a cycle time of said gating unchanged.
89. The method according to claim 63, comprising calculating said energy pulse width (TLASER), substantially defined in accordance with the following equation: $T_{LASER} = 2 \times \frac{R_0 - R_{MIN}}{v}$, where v is the speed said at least one energy pulse travels at.
90. The method according to claim 62, wherein said procedure of receiving comprises receiving several pulses of said at least one energy pulse reflected from objects during a gating time span a sensor utilized for said receiving is activated (TON).
91. The method according to claim 68, wherein said procedure of receiving comprises receiving several pulses of said at least one energy pulse reflected from objects during a gating time span a sensor utilized for said receiving is activated (TON); said procedure of gating comprises a duration of time said sensor is deactivated (TOFF); and said procedure of controlling comprises controlling said gating time span said sensor is activated (TON) and said duration of time said sensor is deactivated (TOFF), substantially defined in accordance with the following equations: $T_{ON} = 2 \times \frac{R_0 - R_{MIN}}{v}$ and $T_{OFF} = \frac{2 R_{MIN}}{v} + T_{LASER}$, where R0 is an optimal range.
92. The method according to claim 62, wherein said procedure of gating comprises gating in accordance with a Long Pulsed Gated Imaging (LPGI) timing technique.
93. The method according to claim 92, wherein a gating time span a sensor utilized for said receiving is activated (TON), and a duration of time said sensor is deactivated (TOFF), are substantially defined in accordance with the following equations: $T_{ON} = 2 \times \frac{R_{MAX} - R_{MIN}}{v}$ and $T_{OFF} = \frac{2 R_{MIN}}{v}$, where v is the speed said at least one energy pulse travels at, and where RMAX is a maximal range.
94. The method according to claim 62, wherein said procedure of emitting comprises emitting at least one energy pulse selected from the group consisting of: electromagnetic energy and mechanical energy.
95. The method according to claim 62, wherein said procedure of emitting comprises generating said at least one energy pulse by an emitter selected from the group consisting of a laser generator, an array of diodes, an array of LEDs, and a visible light source.
96. The method according to claim 62, wherein said procedure of gating comprises gating by a sensor selected from the group consisting of: a Complementary Metal Oxide Semiconductor (CMOS), a Charge Coupled Device (CCD), a Gated Intensifier Charge Injection Device (GICID), a Gated Intensified CCD (GICCD), and a Gated Intensified Active Pixel Sensor (GIAPS).
97. The method according to claim 96, wherein said procedure of gating comprises gating with a CCD sensor that includes an external shutter.
98. The method according to claim 62, further comprising the procedure of intensifying said detection of said at least one reflection.
99. The method according to claim 98, wherein said procedure of intensifying comprises intensifying with a device selected from the group consisting of: a gated image intensifier and a sensor with shutter capabilities.
100. The method according to claim 62, further comprising the procedure of displaying at least one image constructed from said received at least one reflection.
101. The method according to claim 100, wherein said procedure of displaying comprises displaying on a display apparatus selected from the group consisting of a Head Up Display (HUD), an LCD display, a planar optic display, and a holographic based flat optic display.
102. The method according to claim 62, further comprising the procedure of storing at least one image constructed from said received at least one reflection on a storage unit for storing images.
106. The method according to claim 62, further comprising the procedure of transmitting at least one image constructed from said received at least one reflection.
107. The method according to claim 62, further comprising the procedure of determining the level of ambient light in said target area.
108. The method according to claim 62, further comprising the procedure of determining if other energy pulses are present in said target area.
109. The method according to claim 62, further comprising the procedure of filtering received energy pulse reflections using a narrow band pass filter.
110. The method according to claim 98, further comprising the procedure of overcoming glare from other energy pulses by locally darkening the entrance of an image intensifier utilized for said intensifying by using an apparatus selected from the group consisting of a spatial modulator shutter, a spatial light modulator, and a liquid crystal display.
111. The method according to claim 62, wherein said procedure of emitting comprises emitting at least one polarized electromagnetic pulse, and said procedure of receiving comprises filtering received energy according to a polarization conforming to an expected polarization of said at least one reflection.
PCT/IL2005/000085 2004-02-04 2005-01-24 Gated imaging WO2005076037A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA2554955A CA2554955C (en) 2004-02-04 2005-01-24 Gated imaging
IL177078A IL177078A0 (en) 2004-02-04 2006-07-25 Gated imaging
US11/496,031 US8194126B2 (en) 2004-02-04 2006-07-27 Gated imaging

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IL160220 2004-02-04
IL16022004A IL160220A0 (en) 2004-02-04 2004-02-04 Laser gated imaging
IL16509004A IL165090A0 (en) 2004-11-08 2004-11-08 Gated imaging
IL165090 2004-11-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/496,031 Continuation US8194126B2 (en) 2004-02-04 2006-07-27 Gated imaging

Publications (1)

Publication Number Publication Date
WO2005076037A1 true WO2005076037A1 (en) 2005-08-18

Family

ID=34839946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2005/000085 WO2005076037A1 (en) 2004-02-04 2005-01-24 Gated imaging

Country Status (4)

Country Link
US (1) US8194126B2 (en)
CA (1) CA2554955C (en)
IL (1) IL177078A0 (en)
WO (1) WO2005076037A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100365436C (en) * 2006-04-26 2008-01-30 浙江大学 Regionalized lighting detection method
WO2010084493A1 (en) * 2009-01-26 2010-07-29 Elbit Systems Ltd. Optical pixel and image sensor
EP2322953A1 (en) * 2008-07-30 2011-05-18 National University Corporation Shizuoka University Distance image sensor and method for generating image signal by time-of-flight method
US7990451B2 (en) 2006-11-20 2011-08-02 Ben Gurion University Of The Negev Research And Development Authority Optical pixel and image sensor
WO2011107987A1 (en) * 2010-03-02 2011-09-09 Elbit Systems Ltd. Image gated camera for detecting objects in a marine environment
US8194126B2 (en) 2004-02-04 2012-06-05 Elbit Systems Ltd. Gated imaging
EP2767924A3 (en) * 2013-02-15 2015-04-08 Hella KGaA Hueck & Co. A method and device for recognising pulsing light sources
WO2016136410A1 (en) * 2015-02-23 2016-09-01 Mitsubishi Electric Corporation System and method for determining depth image representing distances to points of scene
WO2017009848A1 (en) * 2015-07-14 2017-01-19 Brightway Vision Ltd. Gated structured imaging
US10031229B1 (en) 2014-12-15 2018-07-24 Rockwell Collins, Inc. Object designator system and method
CN108431629A (en) * 2015-12-21 2018-08-21 株式会社小糸制作所 Vehicle image acquiring device, control device, include vehicle image acquiring device or control device vehicle and vehicle image acquiring method
US20190056498A1 (en) * 2016-03-01 2019-02-21 Brightway Vision Ltd. Gated imaging apparatus, system and method

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7655895B2 (en) * 1992-05-05 2010-02-02 Automotive Technologies International, Inc. Vehicle-mounted monitoring arrangement and method using light-regulation
AU2003247148A1 (en) * 2002-08-05 2004-02-23 Elbit Systems Ltd. Vehicle mounted night vision imaging system and method
WO2007011522A2 (en) * 2005-07-14 2007-01-25 Gm Global Technology Operations, Inc. Remote perspective vehicle environment observation system
US7507940B2 (en) * 2006-01-20 2009-03-24 Her Majesty The Queen As Represented By The Minister Of National Defence Of Her Majesty's Canadian Government Laser underwater camera image enhancer
JP2008228282A (en) * 2007-02-13 2008-09-25 Matsushita Electric Ind Co Ltd Image processing device
EP2763398B1 (en) 2007-12-21 2018-10-31 Photonis Netherlands B.V. Use of an image sensor array in laser range gated imaging
US20090201380A1 (en) * 2008-02-12 2009-08-13 Decisive Analytics Corporation Method and apparatus for streamlined wireless data transfer
US9313376B1 (en) * 2009-04-01 2016-04-12 Microsoft Technology Licensing, Llc Dynamic depth power equalization
GB2496083B (en) * 2010-08-29 2016-01-06 Goldwing Design & Construction Pty Ltd Method and apparatus for a metal detection system
DE102011010334B4 (en) * 2011-02-04 2014-08-28 Eads Deutschland Gmbh Camera system and method for observing objects at a great distance, in particular for monitoring target objects at night, mist, dust or rain
WO2013028649A1 (en) * 2011-08-23 2013-02-28 Bae Systems Information And Electronic Systems Integration Inc. Fiber optically coupled laser rangefinder for use in a gimbal system
GB2494908B (en) * 2011-09-26 2014-04-30 Elbit Systems Ltd Image gating using an array of reflective elements
US9723233B2 (en) 2012-04-18 2017-08-01 Brightway Vision Ltd. Controllable gated sensor
EP2856207B1 (en) * 2012-05-29 2020-11-11 Brightway Vision Ltd. Gated imaging using an adaptive depth of field
US10390004B2 (en) * 2012-07-09 2019-08-20 Brightway Vision Ltd. Stereo gated imaging system and method
FR2995699B1 (en) * 2012-09-20 2015-06-26 Mbda France INFRARED IMAGING ECARTOMETER AND AUTOMATIC TARGET TRACKING AND TRACKING SYSTEM
US10354448B1 (en) 2013-03-15 2019-07-16 Lockheed Martin Corporation Detection of optical components in a scene
IL227265A0 (en) 2013-06-30 2013-12-31 Brightway Vision Ltd Smart camera flash
IL233356A (en) * 2014-06-24 2015-10-29 Brightway Vision Ltd Gated sensor based imaging system with minimized delay time between sensor exposures
IL233692A (en) * 2014-07-17 2017-04-30 Elbit Systems Electro-Optics Elop Ltd System and method for analyzing quality criteria of a radiation spot
CN105991935B (en) * 2015-02-15 2019-11-05 比亚迪股份有限公司 Exposure-control device and exposal control method
CN105991934B (en) * 2015-02-15 2019-11-08 比亚迪股份有限公司 Imaging system
CN105991933B (en) * 2015-02-15 2019-11-08 比亚迪股份有限公司 Imaging sensor
US9945936B2 (en) * 2015-05-27 2018-04-17 Microsoft Technology Licensing, Llc Reduction in camera to camera interference in depth measurements using spread spectrum
CN105391948A (en) * 2015-11-05 2016-03-09 浙江宇视科技有限公司 Front-end equipment having night-vision fog-penetrating function and control method thereof
US11204425B2 (en) 2015-12-21 2021-12-21 Koito Manufacturing Co., Ltd. Image acquisition device for vehicles and vehicle provided with same
WO2017110413A1 (en) 2015-12-21 2017-06-29 株式会社小糸製作所 Image acquisition device for vehicles, control device, vehicle provided with image acquisition device for vehicles and control device, and image acquisition method for vehicles
CN108431630A (en) * 2015-12-21 2018-08-21 株式会社小糸制作所 Vehicle image acquiring device, control device, include vehicle image acquiring device or control device vehicle and vehicle image acquiring method
RU2645122C2 (en) * 2016-02-17 2018-02-15 Наталия Михайловна Волкова Active-pulsed television night vision device
US11397250B2 (en) 2016-06-27 2022-07-26 Sony Corporation Distance measurement device and distance measurement method
CN106375657A (en) * 2016-08-23 2017-02-01 江苏北方湖光光电有限公司 System for realizing extended depth of field under narrow pulse gating of image intensifier
IL247944B (en) * 2016-09-20 2018-03-29 Grauer Yoav Pulsed light illuminator having a configurable setup
CN108259744B (en) * 2018-01-24 2020-06-23 北京图森智途科技有限公司 Image acquisition control method and device, image acquisition system and TOF camera
US10775486B2 (en) 2018-02-15 2020-09-15 Velodyne Lidar, Inc. Systems and methods for mitigating avalanche photodiode (APD) blinding
US11435229B2 (en) * 2018-10-24 2022-09-06 SA Photonics, Inc. Hyperspectral imaging systems
KR20200058948A (en) * 2018-11-20 2020-05-28 삼성전자주식회사 Spectrum measurement apparatus, method for correcting light source temperature change of spectrum, apparatus and method for estimating analyte concentration
US10855896B1 (en) 2018-12-13 2020-12-01 Facebook Technologies, Llc Depth determination using time-of-flight and camera assembly with augmented pixels
US10791286B2 (en) 2018-12-13 2020-09-29 Facebook Technologies, Llc Differentiated imaging using camera assembly with augmented pixels
US10791282B2 (en) * 2018-12-13 2020-09-29 Fenwick & West LLP High dynamic range camera assembly with augmented pixels
US10520437B1 (en) 2019-05-24 2019-12-31 Raytheon Company High sensitivity sensor utilizing ultra-fast laser pulse excitation and time delayed detector
JP7257275B2 (en) * 2019-07-05 2023-04-13 株式会社日立エルジーデータストレージ 3D distance measuring device
US11092545B2 (en) * 2019-07-18 2021-08-17 The United States Of America, As Represented By The Secretary Of The Navy Laser diode turret radiation source for optical spectrometry
US10902623B1 (en) 2019-11-19 2021-01-26 Facebook Technologies, Llc Three-dimensional imaging with spatial and temporal coding for depth camera assembly
US11194160B1 (en) 2020-01-21 2021-12-07 Facebook Technologies, Llc High frame rate reconstruction with N-tap camera sensor
IL279407A (en) * 2020-12-13 2022-07-01 Qualcomm Inc Multimode radar system


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5414439A (en) * 1994-06-09 1995-05-09 Delco Electronics Corporation Head up display with night vision enhancement
EP0867747A3 (en) * 1997-03-25 1999-03-03 Sony Corporation Reflective display device
US6861809B2 (en) * 1998-09-18 2005-03-01 Gentex Corporation Headlamp control to prevent glare
US20020180866A1 (en) * 2001-05-29 2002-12-05 Monroe David A. Modular sensor array
CA2554955C (en) * 2004-02-04 2010-09-14 Elbit Systems Ltd. Gated imaging

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3947119A (en) * 1974-02-04 1976-03-30 Ball Brothers Research Corporation Active sensor automatic range sweep technique
US3941999A (en) * 1975-04-01 1976-03-02 The United States Of America As Represented By The Secretary Of The Army Automatic focus pulse gated system
US4151415A (en) * 1977-10-31 1979-04-24 Varo, Inc. Active imaging system using variable gate width time programmed dwell
US4708473A (en) * 1984-02-08 1987-11-24 Dornier Gmbh Acquisition of range images
US4915498A (en) * 1988-04-19 1990-04-10 Malek Joseph H Range imaging sensor
EP0353200A2 (en) * 1988-06-27 1990-01-31 FIAT AUTO S.p.A. Method and device for instrument-assisted vision in poor visibility, particularly for driving in fog
US5408541A (en) * 1993-01-05 1995-04-18 Lockheed Corporation Method and system for recognizing targets at long ranges
EP0750202A1 (en) * 1996-03-01 1996-12-27 Yalestown Corporation N.V. Method of observing objects under low levels of illumination and a device for carrying out the said method
WO2004013654A1 (en) * 2002-08-05 2004-02-12 Elbit Systems Ltd. Vehicle mounted night vision imaging system and method
WO2004072678A1 (en) * 2003-02-16 2004-08-26 Elbit Systems Ltd. Laser gated camera imaging system and method

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194126B2 (en) 2004-02-04 2012-06-05 Elbit Systems Ltd. Gated imaging
CN100365436C (en) * 2006-04-26 2008-01-30 浙江大学 Regionalized lighting detection method
US7990451B2 (en) 2006-11-20 2011-08-02 Ben Gurion University Of The Negev Research And Development Authority Optical pixel and image sensor
EP2322953A1 (en) * 2008-07-30 2011-05-18 National University Corporation Shizuoka University Distance image sensor and method for generating image signal by time-of-flight method
EP2322953A4 (en) * 2008-07-30 2012-01-25 Univ Shizuoka Nat Univ Corp Distance image sensor and method for generating image signal by time-of-flight method
US8537218B2 (en) 2008-07-30 2013-09-17 National University Corporation Shizuoka University Distance image sensor and method for generating image signal by time-of-flight method
WO2010084493A1 (en) * 2009-01-26 2010-07-29 Elbit Systems Ltd. Optical pixel and image sensor
US9513367B2 (en) 2010-03-02 2016-12-06 Elbit Systems Ltd. Image gated camera for detecting objects in a marine environment
WO2011107987A1 (en) * 2010-03-02 2011-09-09 Elbit Systems Ltd. Image gated camera for detecting objects in a marine environment
EP2767924A3 (en) * 2013-02-15 2015-04-08 Hella KGaA Hueck & Co. A method and device for recognising pulsing light sources
US10031229B1 (en) 2014-12-15 2018-07-24 Rockwell Collins, Inc. Object designator system and method
WO2016136410A1 (en) * 2015-02-23 2016-09-01 Mitsubishi Electric Corporation System and method for determining depth image representing distances to points of scene
US9897698B2 (en) 2015-02-23 2018-02-20 Mitsubishi Electric Research Laboratories, Inc. Intensity-based depth sensing system and method
WO2017009848A1 (en) * 2015-07-14 2017-01-19 Brightway Vision Ltd. Gated structured imaging
CN108431629A (en) * 2015-12-21 2018-08-21 株式会社小糸制作所 Vehicle image acquiring device, control device, include vehicle image acquiring device or control device vehicle and vehicle image acquiring method
US20190056498A1 (en) * 2016-03-01 2019-02-21 Brightway Vision Ltd. Gated imaging apparatus, system and method

Also Published As

Publication number Publication date
US20070058038A1 (en) 2007-03-15
CA2554955C (en) 2010-09-14
US8194126B2 (en) 2012-06-05
IL177078A0 (en) 2006-12-10
CA2554955A1 (en) 2005-08-18

Similar Documents

Publication Publication Date Title
US8194126B2 (en) Gated imaging
EP1595162B1 (en) Laser gated camera imaging system and method
JP7086001B2 (en) Adaptive optical raider receiver
EP3161521B1 (en) Gated sensor based imaging system with minimized delay time between sensor exposures
US8994819B2 (en) Integrated optical detection system
US20140009611A1 (en) Camera System and Method for Observing Objects at Great Distances, in Particular for Monitoring Target Objects at Night, in Mist, Dust or Rain
WO2006090356A1 (en) Add-on laser gated imaging device for associating with an optical assembly
KR20190028356A (en) Range - Gate Depth Camera Assembly
CN111880194B (en) Non-field-of-view imaging apparatus and method
EP2824418A1 (en) Surround sensing system
US20200241141A1 (en) Full waveform multi-pulse optical rangefinder instrument
Busck et al. High accuracy 3D laser radar
CN106646500A (en) Self-adaptive closed loop adjustment laser range finding method and device
Kapustin et al. Active pulse television measuring systems for ensuring navigation of transport means in heavy weather conditions
EP1714168B1 (en) Airborne laser image capturing system and method
US20130329055A1 (en) Camera System for Recording and Tracking Remote Moving Objects
JP2010522410A (en) A system for artificial contrast amplification in image visualization.
Redman et al. Anti-ship missile tracking with a chirped AM ladar-update: design, model predictions, and experimental results
David et al. Advance in active night vision for filling the gap in remote sensing
JP2021027437A (en) Spatial optical communication device and spatial optical communication method
JP2001083248A (en) Monitoring apparatus
Steinvall Potential of preemptive DIRCM systems
JPH10104526A (en) Astronomical observation device
KR20230136214A (en) active imaging system
Gilligan Range gated underwater viewing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DPEN Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 177078

Country of ref document: IL

WWE Wipo information: entry into national phase

Ref document number: 11496031

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2554955

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2841/CHENP/2006

Country of ref document: IN

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Ref document number: DE

122 Ep: pct application non-entry in european phase
WWP Wipo information: published in national office

Ref document number: 11496031

Country of ref document: US