IL303684A - Improved system, method, and computer program product, which may use a single detector, for finding/tracking targets - Google Patents
Info
- Publication number
- IL303684A
- Authority
- IL
- Israel
- Prior art keywords
- beams
- plural
- scan
- computer
- typically
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/26—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves wherein the transmitted pulses use a frequency-modulated or phase-modulated carrier wave, e.g. for pulse compression of received signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4814—Constructional features, e.g. arrangements of optical elements of transmitters alone
- G01S7/4815—Constructional features, e.g. arrangements of optical elements of transmitters alone using multiple transmitters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4816—Constructional features, e.g. arrangements of optical elements of receivers alone
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Description
Improved System, Method, And Computer Program Product, Which May Use A Single Detector, For Finding/Tracking Targets
FIELD OF THIS DISCLOSURE
The present invention relates generally to object detection, and more particularly to LIDARs.
BACKGROUND FOR THIS DISCLOSURE
The problem of scanning, detecting, and tracking an object with an active (transmitting) system was treated intensively during World War II, with the invention of radar. The field is very diverse, spanning many frequency ranges, beam profiles, scanning methods, detection algorithms, antenna designs, etc. In recent decades the field has been further extended by the invention of LIDAR, which added the VIS (visible) and NIR (near infra-red) portions of the electromagnetic spectrum. Efforts are being made to harness LIDAR technology for the transportation industry, mostly for the autonomous-vehicle enterprise. Attempts to apply LIDAR technology to the problem of scanning, detecting, and tracking drones (and, more generally, small objects) are known; typically, mature technologies such as radar (RF transmitters), acoustic transmitters, and passive VIS, NIR, SWIR, MWIR, and LWIR detectors (cameras) are employed, and mixed combinations may also be tested. US Patent 10,489,925 describes measuring the distance of an object with overlapping beams. US Patent 8,120,761 describes measuring the position of an object. US Patent 6,335,700 to Ashihara describes a multi-beam radar system. US Patents 9,915,728 and 9,885,777 B2 describe state-of-the-art scanning radar systems. Korean patent document KR102050756, and "V-RBNN Based Small Drone Detection in Augmented Datasets for 3D LADAR System" by Byeong Hak Kim, describe a micro-target detection device. Co-owned US patent document US11236970B2 describes a method for 1-dimensional scanning, inter alia.
Tianxiang Zheng et al., "Frequency-multiplexing photon-counting multi-beam LIDAR" (https://opg.optica.org/prj/fulltext.cfm?uri=prj-7-12-1381&id=423084), Photonics Research Vol. 7, Issue 12, pp. 1381-1385 (2019), https://doi.org/10.1364/PRJ.7.001381, describes a multi-beam photon-counting LIDAR system in which only one single-pixel single-photon detector is employed to simultaneously detect multi-beam echoes. In the Zheng frequency-multiplexing multi-beam LIDAR, each beam comes from an independent laser source with a different repetition rate and an independent phase; thus, photon counts from different beams can be discriminated. Published US patent document 2017/0219695 describes multiple-pulse LIDAR measurements in which each LIDAR measurement beam illuminates a location in a three-dimensional environment with a sequence of multiple pulses of illumination light; light reflected from the location is detected by a photosensitive detector of the LIDAR system. Published US patent document US2020041618 describes a matrix light source and detector device for solid-state LIDAR. Japanese patent document JP2005214851 describes an object detector. Published US patent document US2007181786 describes a device for monitoring spatial areas. US patent US11181364 describes object detection for a motor vehicle. US patents US6819407 and US11513221 and published US patent document US2022091261 describe distance measuring. US patent US6327029 describes a range-finding device. US patent US5760886 describes a scanning-type distance measurement device. Published US patent document US11340340 describes a state-of-the-art LIDAR system.
The disclosures of all publications and patent documents mentioned in the specification, and of the publications and patent documents cited therein directly or indirectly, are hereby incorporated by reference, other than subject matter disclaimers or disavowals.
If the incorporated material is inconsistent with the express disclosure herein, the interpretation is that the express disclosure herein describes certain embodiments, whereas the incorporated material describes other embodiments. Definition/s within the incorporated material may be regarded as one possible definition for the term/s in question.
SUMMARY OF CERTAIN EMBODIMENTS
Certain embodiments of the present invention seek to provide a system which scans a Field of Regard (FOR) for target objects (aka targets), the system comprising a multi-beam illuminator which may be controlled by a controller, e.g. as shown in Fig. , and which generates a multi-beam comprising plural beams, differentially (e.g., uniquely) coded according to a coding scheme, which respectively illuminate plural zones within the FOR; and/or a detector which knows the coding scheme and, accordingly, detects which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and/or a scanner which controls the multi-beam to scan the FOR, thereby to yield a high-throughput scanning & detection system, useful for large-FOR applications, which determines which target objects are present in each of the plural zones.
Certain embodiments of the present invention seek to provide circuitry typically comprising at least one processor in communication with at least one memory, with instructions stored in such memory executed by the processor to provide functionalities which are described herein in detail. Any functionality described herein may be firmware-implemented or processor-implemented, as appropriate.
At least the following embodiments may be provided:
Embodiment 1.
A system which scans a Field of Regard (FOR) for target objects (aka targets), the system comprising a multi-beam illuminator which generates a multi-beam comprising plural beams, differentially (e.g., uniquely) coded according to a coding scheme, which respectively illuminate plural zones within the FOR; and/or a detector which knows the coding scheme and, accordingly, detects which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and/or a scanner which controls the multi-beam to scan the FOR, to yield a high-throughput scanning & detection system, useful for, inter alia, large-FOR applications, which determines which target objects are present in each of the plural zones. Target objects may, for example, include drones. The plural beams may or may not overlap. The detector may be small, e.g., including only a single pixel or only a few pixels, or only a single cell or multiple cells as in a Silicon Photomultiplier (SiPM) or Multi-Pixel Photon Counter (MPPC). The scanner may, for example, comprise a single-axis scanner or a double-axis scanner. The illuminator may, for example, comprise a one-dimensional array of light sources which are deployed along a first dimension of an environment and which scan along a second dimension of the environment. It is appreciated that more than one one-dimensional array of light sources may be provided. Any suitable controller, e.g., hardware processor/s, may be used to control the multi-beam and/or detect which zone targets belong to, depending on the known coding scheme.
Embodiment 2. The system of any preceding embodiment/s wherein at least one pair of the plural beams overlap, and scan, typically continuously, in at least one dimension. Typically, several, most, or all pairs of adjacent beams each create an overlapping pair of light spots on the environment.
The resolution yielded by this method exceeds the resolution that would result if the same number of beams were used with no overlap between beams.
Embodiment 3. The system of any preceding embodiment/s wherein the multi-beam illuminator has a profile, and wherein there is at least one dynamic change of the multi-beam profile in real time.
Embodiment 4. The system of any preceding embodiment/s wherein the plural beams scan in jumps.
Embodiment 5. The system of any preceding embodiment/s wherein the plural beams scan continuously in 1D.
Embodiment 6. The system of any preceding embodiment/s wherein the plural beams scan continuously in 2D.
Embodiment 7. The system of any preceding embodiment/s wherein the plural beams have an angular scan velocity which varies over time.
Embodiment 8. The system of any preceding embodiment/s wherein the plural beams are encoded using a random sequence.
Embodiment 9. The system of any preceding embodiment/s wherein the plural beams are encoded using a comb with a time shift.
Embodiment 10. The system of any preceding embodiment/s wherein the plural beams are encoded using a comb with varying time periods.
Embodiment 11. The system of any preceding embodiment/s wherein the plural beams are encoded using a comb with varying wavelengths.
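As a hedged illustration of the comb encodings of Embodiments 9-10 (the period and shift values below are assumptions chosen for the sketch, not parameters taken from this disclosure): all beams may pulse at a common repetition period T, with beam k's comb delayed by k·dt, so that a single detector can recover the beam index from an echo's arrival time modulo T, once the time of flight is accounted for.

```python
# Sketch of "comb with a time shift" encoding (Embodiment 9).
# Assumed parameters, for illustration only:
T = 1000.0      # common repetition period (arbitrary time units)
DT = 100.0      # per-beam time shift; requires N_BEAMS * DT <= T
N_BEAMS = 4

def pulse_times(beam, n_pulses):
    """Emission times of beam `beam`: a comb of period T, shifted by beam*DT."""
    return [i * T + beam * DT for i in range(n_pulses)]

def beam_from_arrival(t_arrival, time_of_flight):
    """Subtract the (separately estimated) time of flight; the residue
    modulo T identifies which beam's comb the echo belongs to."""
    phase = (t_arrival - time_of_flight) % T
    return round(phase / DT) % N_BEAMS

# Every echo is attributed to the beam that emitted it.
tof = 42.0  # echo delay from the target
for beam in range(N_BEAMS):
    for t in pulse_times(beam, 3):
        assert beam_from_arrival(t + tof, tof) == beam
```

Embodiment 10's variant (combs with varying time periods) would instead give each beam its own repetition period, in the spirit of the frequency-multiplexing scheme of Zheng et al. cited in the background.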
Embodiment 12. The system of any preceding embodiment/s wherein dynamic angular resolution change occurs at least during a scan phase.
Embodiment 13. The system of any preceding embodiment/s wherein dynamic angular resolution change occurs at least during a tracking phase.
Embodiment 14. The system of any preceding embodiment/s wherein range resolution changes dynamically, using codes, e.g., using the coding scheme.
Embodiment 15. The system of any preceding embodiment/s wherein the detector comprises a single lens with a large point spread function (PSF).
Embodiment 16. The system of any preceding embodiment/s wherein the detector defines an array of pixels or cells.
Embodiment 17. A system according to any preceding embodiment/s wherein, when performing multi-stage object detection including at least first and second stages, a controller which controls the multi-beam illuminator switches from one beam modulation scheme to another in real time, including using a first modulation scheme in the first stage and a second modulation scheme in the second stage. For example, the first stage of detection may include locking onto an object, whereas the second stage may include tracking the same object.
Embodiment 18. The system of any preceding embodiment/s wherein the detector comprises at least one solid-state photodetector.
Embodiment 19. The system of any preceding embodiment/s wherein the detector comprises at least one single-photon avalanche diode (SPAD).
Embodiment 20. The system of any preceding embodiment/s wherein the detector comprises a 1D array of single-photon avalanche diodes (SPADs).
Embodiment 21. The system of any preceding embodiment/s wherein the detector comprises a 2D array of single-photon avalanche diodes (SPADs).
Embodiment 22. The system of any preceding embodiment/s wherein the detector comprises at least one Silicon Photomultiplier (SiPM).
Embodiment 23.
The system of any preceding embodiment/s wherein the detector comprises at least one Multi-Pixel Photon Counter (MPPC).
Embodiment 24. A method which scans a Field of Regard (FOR) for target objects (aka targets), the method comprising: generating a multi-beam comprising plural beams which are differentially (e.g., uniquely) coded according to a known coding scheme, and which respectively illuminate plural zones within the FOR; and/or, according to the known coding scheme, detecting which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and/or controlling the multi-beam to scan the FOR, thereby to yield high-throughput scanning & detection, useful for large-FOR applications.
Embodiment 25. A computer program product, comprising a non-transitory tangible computer-readable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method which scans a Field of Regard (FOR) for target objects (aka targets), the method comprising: generating a multi-beam comprising plural beams which are differentially (e.g., uniquely) coded according to a known coding scheme, and which respectively illuminate plural zones within the FOR; and/or, typically according to the known coding scheme, detecting which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and/or controlling the multi-beam to scan the FOR, to yield high-throughput scanning & detection, useful for large-FOR applications, inter alia.
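The coded-illumination and zone-detection steps recited above can be sketched in a few lines of Python. This is a hedged illustration only: bipolar pseudo-random codes and inner-product correlation are assumed here as one possible coding scheme, not the scheme this disclosure mandates. Each beam carries its own code, and a single detector attributes an echo to a zone by correlating the received pulse train against every beam's code.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BEAMS, CODE_LEN = 4, 64

# One unique bipolar pseudo-random code per beam -- the "coding scheme".
codes = rng.choice([-1.0, 1.0], size=(N_BEAMS, CODE_LEN))

def classify_zone(echo, codes):
    """Return the index of the beam (hence the FOR zone) whose code
    correlates best with the detected pulse train; a single detector
    suffices because the codes, not separate pixels, carry zone identity."""
    return int(np.argmax(codes @ echo))

# A target in zone 2 returns beam 2's code, corrupted by detector noise.
echo = codes[2] + rng.normal(0.0, 0.3, CODE_LEN)
assert classify_zone(echo, codes) == 2
```

With 64-chip codes, the self-correlation (about 64) dominates the cross-correlations (about ±8), which is what lets a single-pixel detector separate the plural zones.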
It is appreciated that any reference herein to, or recitation of, an operation being performed, e.g., if the operation is performed at least partly in software, is intended to include both an embodiment where the operation is performed in its entirety by a server A, and also any type of "outsourcing" or "cloud" embodiment in which the operation, or portions thereof, is or are performed by a remote processor P (or several such), which may be deployed off-shore or "on a cloud", and an output of the operation is then communicated to, e.g. over a suitable computer network, and used by, server A. Analogously, the remote processor P may not, itself, perform all of the operations, and, instead, may receive output/s of portion/s of the operation from yet another processor/s P', which may be deployed off-shore relative to P, or "on a cloud", and so forth. Also provided, excluding signals, is a computer program comprising computer program code means for performing any of the methods shown and described herein when the program is run on at least one computer; and a computer program product, comprising a typically non-transitory computer-usable or -readable medium, e.g. a non-transitory computer-usable or -readable storage medium, typically tangible, having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement any or all of the methods shown and described herein. The operations in accordance with the teachings herein may be performed by at least one computer specially constructed for the desired purposes, or a general-purpose computer specially configured for the desired purpose by at least one computer program stored in a typically non-transitory computer-readable storage medium.
The term "non-transitory" is used herein to exclude transitory, propagating signals or waves, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application. Any suitable processor/s, display and input means may be used to process, display e.g. on a computer screen or other computer output device, store, and accept information such as information used by or generated by any of the methods and apparatus shown and described herein; the above processor/s, display and input means including computer programs, in accordance with all or any subset of the embodiments of the present invention. Any or all functionalities of the invention shown and described herein, such as but not limited to operations within flowcharts, may be performed by any one or more of at least one conventional personal computer processor, workstation, or other programmable device or computer or electronic computing device or processor, either general-purpose or specifically constructed, used for processing; a computer display screen and/or printer and/or speaker for displaying; machine-readable memory such as flash drives, optical disks, CDROMs, DVDs, BluRays, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting. Modules illustrated and described herein may include any one or combination or plurality of a server, a data processor, a memory/computer storage, a communication interface (wireless (e.g., BLE) or wired (e.g., USB)), and a computer program stored in memory/computer storage. The term "process" as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g., electronic, phenomena which may occur or reside e.g., within registers and /or memories of at least one computer or processor. 
Use of nouns in singular form is not intended to be limiting; thus, the term processor is intended to include a plurality of processing units which may be distributed or remote, the term server is intended to include plural typically interconnected modules running on plural respective servers, and so forth. The above devices may communicate via any conventional wired or wireless digital communication means, e.g., via a wired or cellular telephone network, or a computer network such as the Internet.
The apparatus of the present invention may include, according to certain embodiments of the invention, machine-readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements all or any subset of the apparatus, methods, features, and functionalities of the invention shown and described herein. Alternatively, or in addition, the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above, which may be written in any conventional programming language, and optionally a machine for executing the program, such as but not limited to a general-purpose computer, which may optionally be configured or activated in accordance with the teachings of the present invention. Any of the teachings incorporated herein may, wherever suitable, operate on signals representative of physical objects or substances. The embodiments referred to above, and other embodiments, are described in detail in the next section. Any trademark occurring in the text or drawings is the property of its owner and occurs herein merely to explain or illustrate one example of how an embodiment of the invention may be implemented. Unless stated otherwise, terms such as "processing", "computing", "estimating", "selecting", "ranking", "grading", "calculating", "determining", "generating", "reassessing", "classifying", "producing", "stereo-matching", "registering", "detecting", "associating", "superimposing", "obtaining", "providing", "accessing", "setting" or the like, refer to the action and/or processes of at least one computer/s or computing system/s, or processor/s or similar electronic computing device/s or circuitry, that manipulate and/or transform data which may be represented as physical, such as electronic, quantities e.g.
within the computing system's registers and/or memories, and/or may be provided on-the-fly, into other data which may be similarly represented as physical quantities within the computing system's memories, registers, or other such information storage, transmission or display devices, or may be provided to external factors e.g. via a suitable data network. The term "computer" should be broadly construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, personal computers, servers, embedded cores, computing systems, communication devices, processors (e.g. digital signal processor (DSP), microcontrollers, field programmable gate array (FPGA), application specific integrated circuit (ASIC), etc.) and other electronic computing devices. Any reference to a computer, controller, or processor is intended to include one or more hardware devices e.g., chips, which may be co-located or remote from one another. Any controller or processor may, for example, comprise at least one CPU, DSP, FPGA or ASIC, suitably configured in accordance with the logic and functionalities described herein. Any feature or logic or functionality described herein may be implemented by processor/s or controller/s configured as per the described feature or logic or functionality, even if the processor/s or controller/s are not specifically illustrated for simplicity. The controller or processor may be implemented in hardware, e.g., using one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs), or may comprise a microprocessor that runs suitable software, or a combination of hardware and software elements. The present invention may be described, merely for clarity, in terms of terminology specific to, or references to, particular programming languages, operating systems, browsers, system versions, individual products, protocols and the like. 
It will be appreciated that this terminology or such reference/s is intended to convey general principles of operation clearly and briefly, by way of example, and is not intended to limit the scope of the invention solely to a particular programming language, operating system, browser, system version, or individual product or protocol. Nonetheless, the disclosure of the standard or other professional literature defining the programming language, operating system, browser, system version, or individual product or protocol in question, is incorporated by reference herein in its entirety. Elements separately listed herein need not be distinct components, and alternatively may be the same structure. A statement that an element or feature may exist is intended to include (a) embodiments in which the element or feature exists; (b) embodiments in which the element or feature does not exist; and (c) embodiments in which the element or feature exist selectably, e.g., a user may configure or select whether the element or feature does or does not exist. Any suitable input device, such as but not limited to a sensor, may be used to generate or otherwise provide information received by the apparatus and methods shown and described herein. Any suitable output device or display may be used to display or output information generated by the apparatus and methods shown and described herein. Any suitable processor/s may be employed to compute or generate or route, or otherwise manipulate or process information as described herein and/or to perform functionalities described herein and/or to implement any engine, interface or other system illustrated or described herein. Any suitable computerized data storage e.g., computer memory, may be used to store information received by or generated by the systems shown and described herein. Functionalities shown and described herein may be divided between a server computer and a plurality of client computers. 
These or any other computerized components shown and described herein may communicate between themselves via a suitable computer network. The system shown and described herein may include user interface/s, e.g. as described herein, which may, for example, include all or any subset of an interactive voice response interface, automated response tool, speech-to-text transcription system, automated digital or electronic interface having interactive visual components, web portal, visual interface loaded as web page/s or screen/s from server/s via communication network/s to a web browser or other application downloaded onto a user's device, automated speech-to-text conversion tool, including a front-end interface portion thereof and back-end logic interacting therewith. Thus, the term user interface or "UI" as used herein includes also the underlying logic which controls the data presented to the user, e.g., by the system display, and receives and processes and/or provides to other modules herein data entered by a user, e.g., using her or his workstation/device.
BRIEF DESCRIPTION OF THE DRAWINGS
Example embodiments are illustrated in the various drawings. Specifically: Figs. 1 – 18a, 21a – 21b and 22 are diagrams illustrating embodiments of the invention. Figs. 18b, 18c, 19 – 20, 23 – 24, 25a – 25c, 26a – 26c, and 27 are graphs useful in understanding embodiments of the invention. Methods and systems included in the scope of the present invention may include any subset or all of the elements shown in the specifically illustrated implementations by way of example, in any suitable arrangement, e.g., as shown.
Computational, functional, or logical components described and illustrated herein can be implemented in various forms, for example, as hardware circuits, such as but not limited to custom VLSI circuits or gate arrays or programmable hardware devices, such as but not limited to FPGAs, or as software program code stored on at least one tangible or intangible computer readable medium and executable by at least one processor, or any suitable combination thereof. A specific functional component may be formed by one particular sequence of software code, or by a plurality of such, which collectively act or behave or act as described herein with reference to the functional component in question. For example, the component may be distributed over several code sequences, such as but not limited to objects, procedures, functions, routines, and programs, and may originate from several computer files which typically operate synergistically. Each functionality or method herein may be implemented in software (e.g., for execution on suitable processing hardware such as a microprocessor or digital signal processor), firmware, hardware (using any conventional hardware technology such as Integrated Circuit technology), or any combination thereof. Functionality or operations stipulated as being software-implemented may alternatively be wholly or fully implemented by an equivalent hardware or firmware module, and vice-versa. Firmware implementing functionality described herein, if provided, may be held in any suitable memory device, and a suitable processing unit (aka processor) may be configured for executing firmware code. Alternatively, certain embodiments described herein may be implemented partly or exclusively in hardware, in which case all or any subset of the variables, parameters, and computations described herein may be in hardware. Any module or functionality described herein may comprise a suitably configured hardware component or circuitry. 
Alternatively or in addition, modules or functionality described herein may be performed by a general-purpose computer, or more generally by a suitable microprocessor, configured in accordance with the methods shown and described herein, or any suitable subset, in any suitable order, of the operations included in such methods, or in accordance with methods known in the art. Any logical functionality described herein may be implemented as a real-time application, if and as appropriate, and may employ any suitable architectural option, such as but not limited to FPGA, ASIC, or DSP, or any suitable combination thereof. Any hardware component mentioned herein may in fact include either one or more hardware devices, e.g., chips, which may be co-located or remote from one another. Any method described herein is intended to include within the scope of the embodiments of the present invention also any software or computer program performing all or any subset of the method's operations, including a mobile application, platform, or operating system, e.g., as stored in a medium, as well as combining the computer program with a hardware device to perform all or any subset of the operations of the method. Data can be stored on one or more tangible or intangible computer-readable media stored at one or more different locations, different network nodes, or different storage devices at a single node or location. It is appreciated that any computer data storage technology, including any type of storage or memory, and any type of computer components and recording media that retain digital data used for computing for an interval of time, and any type of information retention technology, may be used to store the various data provided and employed herein.
Suitable computer data storage or information retention apparatus may include apparatus which is primary, secondary, tertiary, or off-line; which is of any type or level or amount or category of volatility, differentiation, mutability, accessibility, addressability, capacity, performance, and energy use; and which is based on any suitable technologies such as semiconductor, magnetic, optical, paper, and others.

DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

According to certain embodiments, light detection and ranging (aka LIDAR), typically alone (e.g., without augmentation by other radar systems), is used to solve the problem of how to scan for and/or detect and/or track objects such as drones, which typically appear against a much larger background whose visual characteristics, e.g., color, typically differ from the visual characteristics, e.g., color, of the object. Certain embodiments seek to provide a system to scan, detect, and track small objects, such as drones, over a (typically relatively featureless or simple or uniform) background, such as sky. Certain embodiments employ a multi-beam illuminator with a small single detector (such as a Single Photon Avalanche Photodiode – SPAD – or multicell SPAD). The multi-beam is typically elongate in cross-section, e.g., oval, typically with a relatively large beam cross-section aspect ratio, e.g., 5 or less, or 10, or 15 or more, such that a 1D scan beam or 1D line beam is a good approximation for many embodiments and may be assumed. It is appreciated, however, that any suitable illuminator beam widths, along the scanning axis as well as in the perpendicular direction, may be used, and may depend on all or any subset of the following parameters: illuminator power, total scan time, pulse frequency, background level. From the detection system point of view, the total Field of Regard (FOR) is typically a 2D area. This allows the entire FOR to be covered with the 1D line beam and a single-axis scanning system. 
In some embodiments this scanning is done continuously, making the scanning mechanism a relatively simple system. In some embodiments the multi-beams overlap, a feature which improves detection resolution. In some embodiments scanning beam features such as, say, the flux and/or angular resolution and/or range accuracy and/or direction may, electronically, e.g., simply and rapidly, be changed. Embodiments herein may be provided alone or in any suitable combination to yield a high-throughput scanning & detection system, useful for large FOV applications. Certain embodiments seek to provide a system for finding and/or tracking targets using a single detector. Certain embodiments seek to provide a system for finding moving or stationary targets in an area (Field of Regard, FOR) of typically uniform visual appearance (typically the targets’ dimensions are at least 2 orders of magnitude, and possibly 3 or 5 or more orders of magnitude, smaller than the area’s dimensions), by providing transmitting beams, aka search beams, which overlap. Typically, all, most, some, or at least one pair of adjacent search beams (e.g., from among a linear array of light sources generating search beams) overlap. The overlap increases accuracy of measuring target direction by determining whether the target is in only one of the beams, or is in an overlap area between the beams. In the latter case, the target is known to be in the overlap area intermediate the two beams, whereas in the former case, the target is known to be outside the area of overlap between the two beams. Certain "lion in the desert" embodiments seek to provide convenient binary search, e.g., by (e.g. 
in each iteration) partitioning the scanning zone into two halves, scanning with a first ("low") resolution in each half to identify where the targeted object ("lion") is located, and, after detection, repeating the process, with a second resolution, higher than the first resolution, only in that half, and then partitioning again, and so on.
Certain embodiments perform a linear scan (1D scan) with a "line cross section" beam, which typically includes overlapping beams. Certain embodiments are configured for controlling the top-to-bottom (say) angular scan profile (angular velocity vs. time), and/or each beam’s power, to yield a sophisticated, power-efficient algorithm. A system for finding targets or designated objects in an environment, e.g., a sector of sky, may require a high scan rate if a real-time representation of targets’ locations in the environment is desired. It would thus be convenient for each beam generated by the system to illuminate as large an area as possible, so that a given scan speed results in as many images as possible of each portion of the sector. If each beam illuminates a smaller area, the system may need to change the direction of the beams frequently or at a high rate. However, there is a tradeoff: since the direction of the target is determined by the direction of the beam that illuminated the target, a large illumination area per beam yields low accuracy for measuring the direction to the target. Transmitter systems may transmit plural beams, rather than a single beam, to illuminate a sector, with each beam extending along a different known direction, and each beam being coded uniquely. It is appreciated that any suitable scheme may be employed to code beams uniquely, including each beam having its own color, which is but one example among many of how to achieve unique coding, as described in further detail herein. According to certain embodiments, the transmitter is configured to transmit plural beams, including at least one pair of overlapping adjacent beams. Typically, each beam B from among the plural beams overlaps with the beams adjacent to B. 
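By way of illustration only (the function and its 1-based zone numbering are assumptions, not from the source), the overlap logic may be sketched as follows, assuming the target is smaller than each distinguishable zone and only one target is present at a time:

```python
def zone_from_beams(hit_beams, n_beams):
    """Map the set of beams whose coded return was detected to one of the
    2*N - 1 distinguishable zones along the perpendicular axis.

    Odd-numbered zones (1, 3, 5, ...) are covered by a single beam;
    even-numbered zones are the overlap areas between adjacent beams.
    The numbering convention is an illustrative assumption."""
    hits = sorted(hit_beams)
    if not hits or hits[-1] > n_beams:
        raise ValueError("unknown beam in returns")
    if len(hits) == 1:                          # return from one beam only
        return 2 * hits[0] - 1
    if len(hits) == 2 and hits[1] == hits[0] + 1:
        return 2 * hits[0]                      # return from an overlap area
    raise ValueError("inconsistent returns (target larger than a zone?)")

# With 3 beams (5 zones): a return from beams 1 and 2 places the target in
# their overlap (zone 2); a return from beam 3 alone places it in zone 5.
```

With 4 beams the same rule yields the 7 zones discussed later in connection with Fig. 9.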
The overlap improves the accuracy of target direction measurement, by differentiating between targets or objects illuminated by only one of the beams, as opposed to targets or objects which are illuminated by two of the beams, from which the system deduces that these targets are located in the overlap area between beams. Embodiments may be characterized by all or any subset of the following:
a. Use of fewer beams which, due to the overlap as described, does not compromise direction measurement accuracy
b. Continuous one-dimensional scanning
c. Scanner need move only along one dimension – thus only one degree of freedom is required
d. Continuous or discrete jumping movement of the scanner
e. Simplified optics of the receiver (e.g., single-pixel)
f. Modulating the broadcast in real time, according to the measurements obtained as described herein, which may be used to provide all or any subset of the following:
f1. Flexibility and effective resource management
f2. Improved SNR in the object/target detection phase, relative, e.g., to non-overlapping beams
f3. Improved direction measurement resolution in the object/target tracking phase, relative, e.g., to non-overlapping beams
f4. Energy conservation, e.g., illuminating only the target, after discovery thereof, or illuminating only part of the FOR by switching on/off each beam separately, or dynamically changing each beam’s illumination power
f5. Range measurement with higher accuracy
f6. Better target identification/modulation/characterization, in 3D
It is appreciated that the overlap embodiments described herein enable tasks which involve detecting drones against a sky background in real time, or, more generally, tasks which involve detection of a (moving or stationary) target which is visually distinct from its environment, to be accomplished using (e.g., only) 
LIDAR, which facilitates development of systems which are smaller and/or lower-power compared to drone detection systems based on radar, cameras in the visible or thermal field, or sound waves. Typically, the target is distinct from the environment because the target reflects extra light that, once added to a visually uniform environment’s light, generates a distinguishable feature in the detector, relative to the uniform intensity elsewhere. For instance, given a blue point light source against a uniform blue environment, the scene appears uniformly blue; however, when the point light source turns on, the extra blue light from the source appears as an intensity peak over the uniform intensity level of the blue environment. This is the mechanism that allows the target to be distinguished. Color (for example) may be used to reduce some of the scene intensity – e.g., if an environment is blue and red, and a point light source is only blue, adding a blue filter reduces only the intensity of the background. Another way to reduce scene intensity is to make the point light source blink at a known frequency, then sample the scene within the blinking-on time windows, thereby lowering the effective background contribution.
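The extra-light mechanism can be illustrated numerically; all values below are illustrative assumptions, not from the source: a small target adds reflected light on top of an otherwise uniform background level, producing a peak the detector can threshold.

```python
import numpy as np

background_level = 10.0
scene = np.full(100, background_level)   # visually uniform environment
scene[42] += 3.0                         # target reflects extra light

# The target shows up as the only sample exceeding the uniform level.
peaks = np.flatnonzero(scene > background_level + 1.0)
```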
The target is typically much smaller than the environment; e.g., given target dimensions of less than a meter, the environment dimensions may be several kilometers, several dozen kilometers, or several hundred kilometers. It is appreciated that the overlap embodiments described herein yield detection and tracking at a high scanning rate, while measuring range and speed, and facilitate building a three-dimensional image of the environment and/or of an individual target in real time. All references herein to "real time" are also intended to include near-real time. It is appreciated that a wide variety of objects may be found using embodiments herein, such as but not limited to drones, vehicles, and projectiles. Also, a wide variety of actuators may be controlled by detection and/or tracking outputs generated by the system herein; e.g., if the object being found is a projectile, the object may be tracked long enough to estimate its point of launch, and then a suitable weapon may be controlled to fire at the point of launch. Alternatively, or in addition, a projectile may be tracked long enough to estimate a geographical area in which the object will fall to the ground, and air-raid warnings may be activated accordingly in that geographical area. Alternatively, or in addition, a projectile may be intercepted based on tracking outputs generated by the system herein. Certain embodiments are now described in detail, with reference to Figs. 1 – 8b. Figure 1 illustrates a scenario in which the search area (henceforth Field of Regard (FOR)) is denoted by a square. There are (by way of example) two specific targets denoted by circles: target I near the center, and target II toward the bottom of the FOR. The background is not shown since it is assumed to be uniform (it is assumed that visual differences within the background are small, relative to visual differences between background and target). 
Plural search beams (of which three are shown for simplicity) are denoted by ellipses; the scan is performed, in the example, from left to right, as denoted by an arrow. The illuminator beam comprises plural (e.g., three, in the illustrated embodiment) beams and is referred to below as the "composite" beam. The scanning direction is typically the direction of motion of the composite beam, e.g., horizontal in the embodiment of Fig. 1. The line-of-sight (LOS) direction is perpendicular to the plane of Fig. 1 in the illustrated embodiment. The vertical direction in Fig. 1, or more generally the direction perpendicular to the scan direction and to the LOS direction, is termed herein the "perpendicular" direction. The narrow beam axis is along the scanning direction (e.g., horizontal). The broad beam axis is along the vertical direction. Beam overlap between adjacent beams occurs, as shown, e.g., along the broad axis, and optionally along the scanning axis as well (e.g., during the tracking phase, as described elsewhere herein with reference to Fig. 21b by way of example). Typically, the target size is smaller than the narrow beam axis and/or is smaller than the overlap length along the perpendicular direction. It is appreciated that in the illustrated embodiment, the FOR is not scanned using two-dimensional motion of a single beam (typically of circular cross-section). Instead, scanning occurs along only one axis, and the FOR is scanned using plural (e.g., 3) beams which together cover the perpendicular axis. The beams may be oval rather than circular in cross-section. The beams are typically arranged along a line, e.g., in a linear array, and certain adjacent beams, or all adjacent beams, overlap. 
Typically, the scanning speed is adapted to the overall mission time and/or illumination intensity and/or receiver sensitivity, etc., and continuous scanning is provided, rather than discrete "jumps", to simplify the design of the typically one-dimensional scanning system. For example, perhaps it is desired to scan a 30X30 degree FOR. A 30X1 degree illumination beam (typically including plural overlapping beams) may be provided, and perhaps the total allowed scan time (Tscann) is 1 sec. Thus, within 1 sec, the illumination beam is continuously rotated by 30 degrees in one direction, e.g., the scanning direction. If a constant angular scanning velocity is assumed, the beam is typically rotated by 30 degrees per sec. If, somewhere in the FOR, there is a single point-size object, then at some point along the 1 sec scan period, the illumination beam may "sweep" this object for 1/30 sec, because the beam angular width along the narrow axis (scanning direction) is 1 deg. The detected signal (assuming no noise) would thus have a ramp-up and ramp-down profile of about 1/30 sec duration. As explained elsewhere herein, to identify each zone, each beam may be encoded, say with a unique sequence of pulses. Any suitable method may be employed to differentially, e.g., uniquely, encode each beam and, accordingly, to later decode each beam so encoded. For example, some sequences may include more pulses than others, and/or each sequence may have a unique pulse length, thus some have short pulses and some long, and/or some sequences may include pulses whose durations are, say, SHORT SHORT SHORT LONG, whereas other sequences may include pulses whose durations are, say, LONG LONG LONG SHORT, and/or in each pulse sequence, the timing of each pulse may be randomly selected, and, therefore, statistically, each such sequence may be assumed to be unique. There are many ways to achieve this uniqueness (including but not limited to uniqueness of wavelength, amplitude modulation, polarization). 
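The timing in this example can be checked with a few lines (the numbers are the ones assumed in the text):

```python
for_width_deg = 30.0    # FOR extent along the scan direction
scan_time_s = 1.0       # total allowed scan time, Tscann
beam_width_deg = 1.0    # beam width along the narrow (scanning) axis

angular_velocity_deg_s = for_width_deg / scan_time_s      # 30 deg/s
sweep_time_s = beam_width_deg / angular_velocity_deg_s    # 1/30 s
# A point-size object is thus swept for 1/30 s, which is the duration of
# the ramp-up/ramp-down profile of the (noiseless) detected signal.
```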
Three example uniqueness schemes are now described which encode beams with discrete pulse sequences. For simplicity, in each example 2 sequences are to be distinguished; however, more generally, it may be desired to have n > 2 unique sequences. In the 1st example, as shown in Fig. 23, the 2 sequences have the same pulse repetition time – each pulse is applied every 100 time units. Each sequence is an infinite pulse comb. The difference between the 2 sequences is the initial time – while the first sequence begins at time zero, the 2nd sequence begins 29 (say) time units later. If each transmitted pulse in the first sequence returns from the target before the next pulse of the second sequence is transmitted, the total sequence of received pulses may be conveniently (perhaps after noise-filtering) split or partitioned or decoded back into the 2 original sequences, e.g., pulses [1,3,5,7…] belong to sequence 1, and pulses [2,4,6,8…] belong to sequence 2. Another example is depicted in Fig. 24, in which the 2 sequences start at the same time, but their pulse repetition times are uniquely different – the 1st sequence’s repetition time is 73 time units, versus 51 time units for the second sequence. Given that the two repetition times have no common multiple up to 1000 (the length of the time axis in the illustrated example), the series of total received pulses may be reconstructed as a linear combination of the 2 sequences, for decoding purposes. One way to demonstrate this is by convolving the first and second sequences, as shown in Fig. 25, which includes graphs a, b, and c. In graph a, sequences 1 and 2 were convolved with each other, yielding low values, whereas auto-convolution of each sequence, as shown in graphs b and c, yields a clear signal, with a peak value once the sequence fully coincides with itself. The 1st sequence has 14 pulses in the illustrated example, while the 2nd sequence includes 20. 
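The Fig. 24 scheme can be reproduced numerically with discrete pulse combs; `numpy.correlate` stands in for the convolution of Fig. 25 (correlation and convolution differ only by a time reversal here):

```python
import numpy as np

def comb(period, length=1000):
    """Pulse comb over `length` time units: one pulse every `period` units."""
    s = np.zeros(length)
    s[::period] = 1.0
    return s

seq1, seq2 = comb(73), comb(51)   # the two repetition times of Fig. 24
# seq1 has 14 pulses and seq2 has 20, matching the counts in the text.

auto_peak = np.correlate(seq1, seq1, mode="full").max()   # 14.0
cross_peak = np.correlate(seq1, seq2, mode="full").max()  # 1.0
# The auto peak equals the pulse count, while at any relative shift the
# two combs coincide at most once (73 and 51 share no common multiple
# below 1000), which is what makes the mixed signal decodable.
```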
It is appreciated that for this type of sequence (a pulse comb with constant pulse repetition time), the auto-convolution has a broad ramp-up and ramp-down (see the triangle shapes in Fig. 25’s graphs b and c). The height of the triangle is related to the detection capability, whereas the horizontal base of the triangle is related to the arrival time accuracy. It is appreciated that (e.g., for optimization relative to the example of Fig. 25a, b, and c) a set of sequences may be generated in which the auto-convolution peak value remains high, whereas the auto-convolution shape is very narrow. This yields a better result, since detection capability is maintained, whereas range accuracy is improved. Such sequences are depicted in Fig. 27, in which each sequence is generated with a set of pulses with different and random pulse repetition times. As shown in Fig. 26, which includes convolution graphs a, b, and c for this embodiment, in graph a the convolution between the 2 sequences is poor, yet the auto-convolution, shown in graphs b and c, is clear and sharp. Each sequence may have, say, 100 pulses, or any other number of pulses which provides a desired degree of certainty or level of confidence for detection of the correct sequence among all other sequences (according to the number of beams). It is appreciated that any suitable known considerations may be employed to determine how many pulses to provide in a given sequence, e.g., depending on signal and/or noise and/or sequence period and/or background features and/or target features. In order to have 100 pulses during a 1/30 sec time, the average pulse frequency of each sequence is f = 30*100 = 3 KHz which, therefore, is a rough or preliminary estimation of the pulse frequency in this example. Typically, the target is small relative to the area of overlap and relative to the beam width along the scanning direction, e.g., the shorter dimension of the oval as described elsewhere herein, and (e.g., 
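The sharper behavior of the random-timing sequences of Figs. 26 – 27 can likewise be sketched; the sequence length and time window below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sequence(n_pulses=100, length=4000):
    """Pulse sequence with random (hence differing) pulse repetition times."""
    s = np.zeros(length)
    s[rng.choice(length, size=n_pulses, replace=False)] = 1.0
    return s

a, b = random_sequence(), random_sequence()
auto = np.correlate(a, a, mode="full")
cross = np.correlate(a, b, mode="full")
# The auto-correlation peaks at the full pulse count (100) in a single
# sharp sample, rather than the broad triangle of a constant-period comb,
# so detection capability is kept while arrival-time accuracy improves;
# the cross-correlation stays near the random-coincidence level.
```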
given that there are no objects in the background that are large relative to the targets or objectives to be detected, e.g., trees or buildings in the background of a scene in which cars or humans or animals/birds are to be detected) there are therefore two types of return signal: either the signal returns from the target entirely from one beam, as is the case for target II (of Fig. 1) toward the bottom of the FOR, which contains information only from the bottom beam of the three beams, or the returned signal contains information from two beams, as is the case for target I toward the center of the FOR, which contains information from the two top beams. Therefore, once the system knows how to differentiate between the types of returns, the resolution in the perpendicular direction increases from three regions (if there are 3 beams) to five regions (twice the number of beams, minus 1). For differentiation, note that if the top, middle, and bottom beams are blue, red, and green respectively, return signals may, depending on their position along the vertical axis, be either blue, a mixture of blue and red, red, a mixture of red and green, or green, thereby improving the resolution, by using the overlap between beams, for small targets being detected against a uniform background. Any suitable methods may be used for differentiating between the various (e.g., two) types of returning beams. Fig. 2 shows another example system with (by way of example) six beams, as opposed to only three beams in Fig. 1. As shown, each beam typically shines in a different direction, resulting in continuous or full coverage of the search/illumination area. In Fig. 2 there is no beam overlap; the perpendicular FOR direction is tiled with beams. The transmitter is shown at the top, and the receiver (at the bottom left) may include but a single lens and a single detector (single cell) for all the beams, to achieve a simple, compact, and cost-effective receiver. Fig. 
3 is an intensity profile for each of the 6 beams in Fig. 2, respectively; for simplicity, an identical and square-shaped intensity profile is assumed for each beam; however, this is not intended to be limiting; see Figs. 18a, b, and c. Fig. 4 graphs modulation as a function of time, for flashes in each of two typically adjacent beams, providing a schedule for firing beams according to certain embodiments. If beams 1 (top row) and 2 (bottom row) transmit flashes simultaneously, as shown in the top graph, these beams typically cannot be separated upon reception, because simultaneously transmitted flashes will typically be received simultaneously. If, however, the flashes are intermittent or flash modulation alternates, as shown in the bottom graph, each received flash may be matched to the corresponding transmitted flash, which allows the system to determine whether the area the flash was reflected from was illuminated by beam 1 or by beam 2. Thus, according to certain embodiments, an alternating (rather than simultaneous, as in the top graph) schedule for certain light sources is employed, to enable the system to distinguish the beams those sources generated. In practice, the beam intensity profile typically changes gradually over the area rather than sharply, yielding trapezoidal shapes (e.g., as in Fig. 5) rather than the square profiles shown in Fig. 3. Given two beams, three directional zones are obtained, rather than two, e.g., right, left, or center, rather than just right or left, where the center is the area of overlap between beams. According to certain embodiments, a suitable beam overlap may be provided, and each illumination unit may later be tested and characterized during the manufacturing stage, to achieve the desired overlap.
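A minimal sketch of the alternating schedule of Fig. 4, assuming (as an illustration, not from the source) fixed time slots with beam 1 firing in even slots and beam 2 in odd slots:

```python
def source_beam(slot_index):
    """Attribute a received flash to its transmitting beam by arrival slot."""
    return 1 if slot_index % 2 == 0 else 2

# Hypothetical detection slots; with the alternating schedule each flash
# is unambiguously matched to beam 1 or beam 2, which simultaneous
# transmission would not allow.
received = [0, 3, 4, 7]
attributed = [source_beam(s) for s in received]   # [1, 2, 1, 2]
```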
An assumption that the target is much smaller (say by 2 or 3 or 4 or more orders of magnitude) than the FOR typically makes object localization clear, and typically becomes more valid as detection distance increases. For instance, suppose a drone of size, e.g., diameter, 0.5 meters is to be detected. If the scanning range is 2 km, the drone angular size is 0.5/2000 rad = 0.015 degrees. Typically, every distinguishable zone in the multi-beam profile should be larger than 0.015 degrees, to ensure that the system resolution capability matches the objects to be detected. More generally, the size of zones in the multi-beam profile may be made large enough to ensure a good match between objects to be detected and the resolution capability (or resolution of the direction of detection) of the detection system. An advantage of certain embodiments is that a single physical system may be provided, including, say, a linear array of light sources to cover one dimension of an area to be monitored, and a detector, e.g., a single-pixel detector per illuminator beam/composite beam, and various modulation schemes may be used, and may even be varied in real time, to suit different detection tasks which the system may be asked to perform; in each modulation scheme, all or some of the light sources may be modulated dynamically, either in groups or individually. For example, one modulation scheme may be used to identify objects, and another scheme may be used to track objects, once identified. Overlap between beams may be provided in one modulation scheme, but not another. Or, the extent of overlap between beams may vary from one scheme to another. Figs. 6a – 8b show how, by appropriate modulation of the beams, beam properties (aka beam characteristics) may be changed according to certain embodiments, e.g., intensity and/or direction of illumination and/or resolution of the direction of detection. 
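The angular-size figure in the drone example above can be verified directly:

```python
import math

drone_diameter_m = 0.5
scan_range_m = 2000.0

angular_size_rad = drone_diameter_m / scan_range_m   # 2.5e-4 rad
angular_size_deg = math.degrees(angular_size_rad)    # ~0.014 deg
# Every distinguishable zone of the multi-beam profile should therefore
# exceed roughly 0.015 degrees for this target size and range.
```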
These beam properties may be changed at any time, thus adapting to the needs of the task, e.g., in real time or near-real time. The change, being purely electronic (frequency of pulses/flashes and/or coordination of pulses/flashes between different beams, e.g., simultaneous vs. alternating, and/or intensity of pulses/flashes and/or turning beams off, etc.), may be effected in real time or near-real time. "Pulse" and "flash" are used herein interchangeably. Fig. 6a shows a first modulation scheme in which all beams are simultaneous. This may be used for high-resolution searches in the object detection stage. Fig. 6b shows a second modulation scheme which may be used for low-resolution searches in the object detection stage. As shown, given a linear array of beams, the first half of the array is turned on simultaneously, alternating with the second half of the array: half of the beams are on, then are turned off while the other half of the beams are turned on, then the first half of the beams are turned on again and the second half are turned off, and so forth. Note that the resolution in Fig. 6b is lower than in Fig. 6a: in Fig. 6a the resolution corresponds to the width of one of the 8 columns, whereas in Fig. 6b the resolution corresponds to the width of one of the 2 columns. According to certain embodiments, a suitable profile for the search beam (e.g., more or less power in certain areas, different resolution at different stages) may be selected (or even varied in real time) using any suitable technology, e.g., as described elsewhere herein. Figs. 7a, 7b, and 7c show a three-stage modulation scheme suitable for binary searches. The first stage is the same as the scheme of Fig. 6b. 
In the second stage, after the object is found to be, say, in the area illuminated by the second half of the linear array, the same scheme is used, but only within the second half of the array, whereas the beams in the first half of the linear array, where the object is known not to be, remain off. Thus, if, for example, the second half of the array includes beams 5 to 8, then, as shown in Fig. 7b, the second stage of the binary search modulation scheme involves turning on beams 5 and 6 simultaneously, but alternately with beams 7 and 8, such that beams 5 and 6 go off when beams 7 and 8 go on, and vice versa. In the third stage (assuming the number of beams to be 2 to the power of 3, as in the illustrated example) shown in Fig. 7c, only the beams which illuminate the area in which the object is known to be are turned on, half at a time, alternately; in the illustrated example this means that the two beams in the quarter of the linear array in which the object is known to be are alternated. It is appreciated that if, say, there are 16 beams in the array, the scheme may have four stages, and, more generally, if there are 2^n beams in the array, the scheme may have n stages; in the last stage, typically, alternation is between two sets of beams, each including but a single beam. Fig. 8a is a fourth modulation scheme suited for high-resolution tracking. In this scheme, as shown, all beams alternate. All or some pairs of adjacent beams may overlap. In the actual illustrations, only Fig. 8a shows (angular) overlap, whereas Figs. 6a – 6b, Figs. 7a – 7c, and Fig. 8b are shown without overlap. Fig. 8b is a fifth modulation scheme suited for generating a 2D map or depiction of the area being scanned, typically in real time or near-real time. As shown, a 2-dimensional array of light sources may be used. 
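The staged scheme of Figs. 7a – 7c can be sketched as a halving schedule (the function name and the simulated "object in beam 6" case are illustrative assumptions):

```python
def binary_search_stages(n_beams, target_beam):
    """Return the alternating half-sets used at each stage of the Figs.
    7a-7c scheme, plus the beam finally isolated. Expects n_beams == 2**n;
    with 8 beams there are 3 stages, with 16 beams 4 stages, etc."""
    assert n_beams > 1 and n_beams & (n_beams - 1) == 0
    candidates = list(range(1, n_beams + 1))
    stages = []
    while len(candidates) > 1:
        half = len(candidates) // 2
        first, second = candidates[:half], candidates[half:]
        stages.append((first, second))            # the two alternating sets
        candidates = first if target_beam in first else second
    return stages, candidates[0]

# 8 beams, object in beam 6: stage 1 alternates beams 1-4 with 5-8,
# stage 2 alternates 5-6 with 7-8, stage 3 alternates 5 with 6.
stages, found = binary_search_stages(8, 6)
```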
According to one embodiment, the beams (if the array is 3 x 3 in size, as in the illustrated embodiment) are turned on and off alternately, e.g., sequentially: a first light source is turned on, then a second light source is turned on and the first light source is turned off, then a third source is turned on while the second source is turned off, and so forth. According to another embodiment, the resolution required in some areas, illuminated by a given subset of the beams such as, say, beams 2, 5, and 6, is lower, whereas the resolution required in other areas, illuminated by the remaining beams, namely beams 1, 3, 4, and 7 – 9, is higher. In this case, alternation may occur between the individual beams which do not belong to the subset, and between all beams within the subset. For example, beam 1 might be turned on and then off, followed by the same scheme for the remaining beams not in the subset, namely beams 3, 4, and 7 – 9. Then, all beams in the subset may simultaneously be turned on and then off, and then beam 1 might be turned on again, and so forth. It is appreciated that several subsets of beams for simultaneous operation, typically of different sizes, may be defined. Also, the activation (simultaneously) of the beams in the subset may occur more than once in the scheme, whereas each individual beam is activated only once, or vice versa: the activation (simultaneously) of the beams in the subset may occur only once in the scheme, whereas certain (or all) individual beams are activated alternately, but more than once. Embodiments for a Scan, Detect & Track System according to the present invention, which are typically particularly suited for small targets over a plain background, are now described in detail. Embodiments may be provided in any suitable combination. A "plain" background is intended to include any background characterized in that the features in the background are larger than the size of the target. 
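One flashing cycle of the mixed-resolution variant described above (beam numbers follow the 3 x 3 example of Fig. 8b; the scheduling helper itself is an illustrative assumption) might look like:

```python
def mixed_resolution_cycle(all_beams, coarse_subset):
    """One activation cycle: beams outside `coarse_subset` flash one at a
    time (higher resolution), then the whole subset flashes at once as a
    single coarse zone (lower resolution)."""
    cycle = [{b} for b in all_beams if b not in coarse_subset]
    cycle.append(set(coarse_subset))
    return cycle

# 3 x 3 array, lower resolution needed where beams 2, 5, and 6 shine:
cycle = mixed_resolution_cycle(range(1, 10), {2, 5, 6})
# cycle == [{1}, {3}, {4}, {7}, {8}, {9}, {2, 5, 6}]
```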
For example, a background whose features are an order of magnitude larger than the target, or 2 orders of magnitude larger, or 3 orders of magnitude larger, or more, may be considered plain. Background may be considered as low-frequency, time-varying features at the detector output signal, whereas the target signal may comprise a time peak with high frequencies, e.g., frequencies which are an order of magnitude, or more, higher relative to the background features.

Transmitter – overlapping beams

Fig. 9 depicts an example transmitter. Many variations are possible; in the example, assume four fiber-coupled lasers generate four overlapping beams. The fibers' exits are arranged along a line in a lens focal plane, thus generating four far-field beams with partial overlap. The result is seven distinguishable zones from the four beams, as indicated by numerals 1 – 7; or, more generally, N such beams will yield 2N-1 zones. There are many ways to generate the partially overlapping beams, e.g., as shown in Fig. 10a, or with mirrors, e.g., as shown in Fig. 10b.

Transmitter – encoding the beams

To distinguish between the beams, each beam should be uniquely encoded, e.g., by using beams which each have at least one unique property or characteristic, such as differently colored beams and/or beams which are each transmitted using a different pulse sequence. For instance, each beam may have a specific wavelength, and the detection unit may chromatically separate the beams (chromatic beam splitter, dispersion effects, etc.). Alternatively, or in addition, each beam may be transmitted with a unique pulse sequence. This is illustrated in Fig. 9 by the vertical sets of four arrows between the controller and the lasers (a-d). Each arrow represents a controller pulse that eventually generates a light pulse. The vertical axis denotes time. In practice there may be many more than four pulses in each sequence. 
The detector, in turn, is typically fast enough to detect each such short pulse, and to reconstruct the sequence of each laser from the mixed sequence of the entire signal. Assuming, for simplicity, that there is only one target at a time, and that the target is smaller than each distinguishable zone, the mixed sequence would be, at most or in the worst case, a linear combination of two sequences. An advantage of this method is the ability to use a "single pixel" fast detector in the receiver, such as a Single Photon Avalanche Diode (SPAD) or Multi-Pixel Photon Counter (MPPC). In practice, this "unique pulse sequence" method may be reduced to generating orthogonal digital sequences. In Fig. 9, non-limiting examples of pulse sequences are shown. As shown, a & b are two comb sequences with the same frequency. By time-shifting them, they may be distinguished at the receiver. Sequences c & d are both random sequences, and thus have a high degree of orthogonality. Sequences like a & b, but with non-rational frequencies, may also be used, and yield low orthogonality.

Receiver

According to certain embodiments, a very small single detector may be used at the receiver module. The optics should concentrate or focus the light coming from the Field of View (FOV) onto the detector, e.g., as shown in Fig. 11. Any suitable concentrated light spot shape which does not fall outside the detector may be employed. Thus, the optical system Point Spread Function (PSF) need not be small, which simplifies the design and/or size and/or cost. The receiver may, for example, be made of a single lens, e.g., as shown in Fig. 11. The term "detector" as used herein may comprise any light-sensitive device that converts incoming light into electric signals. 
Any suitable type of detector may be used herein, such as but not limited to: a 2D pixel array wherein, as in a camera, each pixel is a separate light-sensitive sub-unit of the detector; a 1D pixel array; a multi-pixel photon counter (MPPC); or a device with only one light-sensitive unit, termed loosely "single pixel" herein, such as a Single Photon Avalanche Diode (SPAD). In multi-pixel photon counters (MPPCs) there are typically plural light-sensitive sub-units (like pixels in a camera) that read the signal simultaneously, but the total output is a single result, e.g., a sum of all the sub-units' signals. In such detectors the sub-units may be termed "cells", whereas sub-units of camera detectors may be termed "pixels". Resolution Providing overlap, e.g., using an overlapped encoded multi-beam illuminator, is useful to improve resolution. For example, gaining seven distinguishable spots or zones, using only four beams, along one dimension, means the resolution has improved by a factor of 7/4. Yet, in some scenarios this may not be the case. If the objects are larger than the distinguishable spots, the mixed sequence of the entire signal would be more complicated and may make reconstructing the scene difficult or impossible, e.g., as illustrated in Fig. 12a, in which the trapezoid and circle are each larger than each area or zone. In contrast, in Fig. 12b, the objects are both smaller than each of the zones, and thus it is easier for the system to determine that one object exists in zone 2, and another in zone 7, although it may be difficult or impossible to know each object's shape. In some applications (in the car industry, for instance) the sensed background may contain many undesirable large objects (buildings, trees, etc.) at a similar distance to the objects of interest (cars, pedestrians, etc.), which may render the overlapping multi-beam embodiment herein less desirable. 
In contrast, tasks which require sensing objects which are few in number and/or small, over a typically distant, typically flat or uniform background (e.g., drone detection against the sky), may greatly benefit from embodiments herein. Scan Method – covering the entire FOR Having described above the method of overlapped encoded multi-beam illuminators, example scanning methods for scanning the Field of Regard (FOR), which is the area in which the target is to be sought, are now described. In Fig. 13, the FOR is denoted as a square, using a dashed line. The square may be tiled with circles – the beam cross-section on the FOR plane. Instead of using many beams, a single beam may be used, and the FOR may then be scanned using a 2D scanning mechanism. One way to do this is to "jump" the beam, then hold the beam in position until sufficient acquisition time elapses. This calls for a 2D mechanism that rapidly moves the beam, and then, in a very short time, fixes or stabilizes the beam direction. The smaller the beam diameter, the more jumps are needed, which may require a more complex mechanism. An alternative embodiment would be to move the beam continuously over the entire FOR, with a 2D mechanism which may be simpler than the "jump" embodiment. However, when using continuous scanning with a multiple-pixel detector (camera), the scanning movement typically generates a movement of the target image along the detector, which typically worsens performance, e.g., as shown in Fig. 13. According to other embodiments, an oval (composite) beam profile/cross-section may be used for scanning, e.g., as shown in Fig. 14, yielding flux on the FOR plane which is higher than in the embodiment of Fig. 13. Advantageously, the higher flux yields the same signal with a shorter exposure time, which in turn reduces the background level, which may be highly desirable. 
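The relation between beam diameter and the number of "jumps" needed to cover the FOR can be illustrated with a simple sketch. The FOR dimensions and beam diameters below are illustrative assumptions (a rectangular grid of dwell positions, ignoring the overlap wasted by tiling a rectangle with circles):

```python
import math

def jump_count(for_w_deg, for_h_deg, beam_diam_deg):
    """Number of dwell positions needed to tile a rectangular FOR with a
    round beam on a simple grid (circle-packing overlap ignored)."""
    return (math.ceil(for_w_deg / beam_diam_deg)
            * math.ceil(for_h_deg / beam_diam_deg))

# Halving the beam diameter roughly quadruples the number of jumps,
# which is why a smaller beam demands a more complex jump mechanism.
print(jump_count(20, 20, 2.0))  # 100
print(jump_count(20, 20, 1.0))  # 400
```

The quadratic growth in dwell positions, with a fixed acquisition time per dwell, is the cost that the continuous-scan and multi-beam embodiments described above trade away.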
The tradeoff is typically that more "jumps" with shorter acquisition times are needed, which makes the mechanism difficult to design, build, and maintain. One compromise would be to use more beams, with a 1D scan mechanism, e.g., as shown in Fig. 15a or in co-owned US patent US11236970B2. It is appreciated, in view of the above, that in the embodiment of Fig. 15b, seven zones, rather than only four, are operational. For an oval beam profile, whether an object is small or large is typically determined by two parameters – A & B, e.g., as shown in Fig. 15b. An object which is "small" along the scan direction is smaller than B, the beam width along the scan direction, in a plane perpendicular to the Line of Sight (LOS) and coinciding with or passing through the object. An object which is "small" along the direction perpendicular to the scan direction is smaller than A, which is the beam's distinguishable zone length, in a plane perpendicular to the Line of Sight (LOS) and coinciding with or passing through the object. Receiver – decoding the beams Decoding may be performed as shown in Fig. 16. As shown, in the example, two objects – represented as a square and a circle – exist in a FOR which is scanned with a multi-beam illuminator (four beams in the illustrated example), e.g., from left to right. When the square object is illuminated, only the unique pulse sequence "a" appears in the total received signal, and hence the object is in the seventh zone (shown as the top zone in Fig. 16). Its shape – square – typically cannot be determined. Subsequently, the circle object is illuminated. Now, the total received signal is composed of pulse sequences "c" and "d", indicating the object is in zone 2 (the overlap between pulse sequences "c" and "d"). Dynamic change of scan profile The ability to dynamically change the multi-beam profile, by proper decoding of the beams, may save energy and may improve the scan phase. For instance, if the FOR is a circle as shown (shaded) on the left in Fig. 
17, the multi-beam profile may change its shape as a function of time. At the top (assuming top-to-bottom scan movement), illuminate with two beams (two typically overlapping black ellipses). Later, turn on four (typically overlapping) beams; after that, at the point where the circle is broadest, use six beams, then later four, and finally two. If the scanning area is an ellipse as shown shaded in Fig. 17 on the right, turn on only two beams at a time. This method (in which a profile changes depending on the FOR, and even depending on the FOR's dynamically changing size as the beam scans) may save illumination power and/or increase laser flux on target. By appropriate modulation of the beams, beam properties (aka beam characteristics) may be changed, e.g., as noted. For example: Variable Dynamic Scan Resolution To achieve variable dynamic scan resolution, consider for example a two-oval beam profile (cross-section), e.g., as in Fig. 18a; in the illustrated example there are three zones – left, center, & right. The intensity profile along the dashed horizontal line is depicted in the graph of Fig. 18b; the intensity profile or cross-section is perpendicular to the scan direction. The graph's horizontal direction is perpendicular to the scan direction. Its unit may be either a unit of length or an angular unit (degrees or radians); for simplicity, the graph is presented in arbitrary units. This yields, in the illustrated example, three distinguishable zones: [-80 -20], [-20 20], and [20 80]. This type of beam profile has variable resolution, e.g., a first resolution of ±30 at the edges, and a higher or better resolution of ±20 at the center. Practically, the above profile was made of eight similar beams, e.g., as shown in Fig. 18c. This was obtained by using two different "unique pulse sequences" – a first sequence for beams 1-4, and another pulse sequence for beams 5-8, in the illustrated example. 
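The variable-resolution zone map just described (Figs. 18a-b) can be sketched directly from its zone boundaries. This is a minimal illustration in the graph's arbitrary units; the "resolution" of a zone is taken here as its half-width, i.e., the worst-case position error when a target is only known to lie somewhere in that zone:

```python
# Zone boundaries of the two-oval profile of Fig. 18, in the graph's
# arbitrary units: wide edge zones, narrow center zone.
ZONES = [(-80, -20), (-20, 20), (20, 80)]  # left, center, right

def zone_index(x):
    """Index of the zone containing position x (None if outside)."""
    for i, (lo, hi) in enumerate(ZONES):
        if lo <= x < hi:
            return i
    return None

def half_width(i):
    """Worst-case position error for a target localized to zone i."""
    lo, hi = ZONES[i]
    return (hi - lo) / 2

print(half_width(0))  # 30.0 -> ±30 resolution at the edges
print(half_width(1))  # 20.0 -> ±20 (better) resolution at the center
```

The same bookkeeping applies to any overlapped-beam profile: the zone boundaries come from where beam footprints begin and end, and the per-zone half-width is the local resolution.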
A "lion in the desert" embodiment, characterized by dynamic change of scan resolution, is now described with reference to Fig. 19. The ability to dynamically, and by electronic means (e.g., a controller), change the beam profile, enables resolution and power to be conveniently varied, during the detection/tracking phases. At the beginning of the detection stage (e.g., as shown in Fig.18c), the eight beams may be split into two halves. At this point the resolution is low/ poor – left or right, although, if, at this stage, the target has been detected at the center, high resolution results right from the beginning. Suppose, without loss of generality, that the target was detected in the left half; in this case, the beam is reshaped to illuminate the left half alone, by operating only beams 1-4, whereas beams 5-8 are off. In contrast, if the target were to have been detected in the right half, beams 5 – 8 would be on, and beams 1 – off. Unlike the previous case, beams 1-2 now typically get the same "unique pulse sequence", and beams 3-4 get another "unique pulse sequence", as shown in Fig. 19. In the next iteration the options may include all or any subset of: Detection on the left – use beam 1 with a "unique pulse sequence", and beam with another "unique pulse sequence". Detection on the right – use beam 3 with a "unique pulse sequence", and beam with another "unique pulse sequence". Detection on the center – use beam 2 with a "unique pulse sequence", and beam 3 with another "unique pulse sequence".
An embodiment characterized by high resolution during the tracking phase is now described with reference to Fig. 20. Generally, the resolution depends on each beam's profile; in general, a steep, monotonic profile yields better resolution. In Fig. 20, two profiles are depicted using dotted and dashed lines respectively. Each profile is the sum of two identical Gaussians; the dashed and dotted profiles differ in the offsets of their Gaussian centers, as depicted in Fig. 20. In the ±5 region (along the horizontal axis) both profiles are similar, yet the dotted profile may give a better resolution; namely, the estimate of where the target is, along the horizontal axis, may be more accurate (with a smaller error), e.g., as shown in Fig. 20. Thus, the multi-beam illuminator may have two more accurate beams (at the center, for instance). In this embodiment, at the end of the "Lion in the Desert" stage, the beam may be steered to place the object between the two more accurate beams, and the tracking phase may be more accurate. For example, in Fig. 21a, the two lightly shaded beams in the middle are more accurate than the two more heavily shaded beams on the right, and also more accurate than the two heavily shaded beams on the left. Once the system has locked onto a target, only the target may be illuminated, which conserves power. For example, referring to Fig. 22, it is appreciated that if different (e.g., non-identical or unique) beams are used, and the setup is then steered according to on-going measurements, the detection stage may be performed with variable low resolution, and the tracking stage may then be performed with a higher resolution. For instance, at the beginning of the detection stage, 6 (say) beams may be split into two halves – yellow and red, e.g., as shown shaded lightly and heavily, respectively, in Fig. 22, top row. 
The resolution is poor, and indicates only left or right, although, if, at this stage, the target happens to be detected at the center, between red (shown heavily shaded) and yellow (shown lightly shaded), high resolution is gained right at the start. Now assume, without loss of generality, that the target has been identified in the red half. Then the system may decode the two right beams to red and yellow, e.g., as shown shaded lightly and heavily, respectively, in the middle row of Fig. 22. Once the target is re-detected, the resolution is twice that of the previous stage, albeit typically still low. At this stage the system may steer the setup to locate the target between the two central beams, which are narrower than the others. This yields a better resolution for the tracking stage, e.g., as shown in Fig. 22, bottom row, where, again, the two central beams are red and yellow, e.g., shaded heavily and lightly, respectively, in the actual drawing. Azimuth & Elevation during tracking phase According to another embodiment, information perpendicular to the scanning direction may be obtained during the tracking phase. The azimuth direction (horizontal in Fig. 21b) may be measured, e.g., by directing plural beams, all having the same vertical direction, toward different horizontal directions. During the scanning phase the beam elevation (vertical in Fig. 21b) changes, and may be measured at any given time, e.g., by knowing the multi-beam elevation vs. time, up to the beam's vertical angular width. For better resolution, plural (e.g., 2 in the illustrated embodiment) overlapping beams in the vertical axis may be provided as well (e.g., the 2 lightly shaded circles above and below the vertical beam in Fig. 21b). The object may be positioned among the 4 lightly shaded beams in the illustrated embodiment (e.g., by steering the illuminator toward the object at the end of the detect phase, and continuously illuminating the object during the track phase). 
Then, accurate measurements in both directions (elevation – up/down & azimuth – right/left) may be obtained, e.g., with 4 coded sequences. Efficient power scan profile: By controlling the top-to-bottom angular scan profile (velocity vs. time), and/or each beam's power, a sophisticated, power-efficient algorithm results (and may be applied). Dynamic change of "unique pulse sequence" Typically, though not necessarily, during the detection stage it is better to use a long unique pulse sequence like a & b in Figs. 9 and/or 16. Yet this type of sequence yields a less accurate measurement of the object's distance. Once the object has been detected, other types of unique pulse sequence, e.g., random sequences, which may be shorter, may be used. Such sequences may improve the accuracy of the object distance measurement, e.g., as described elsewhere herein with reference to Figs. 23 and 24 by way of example. Conveniently, the sequence change may be effected electronically in a controller.
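The zone decoding described earlier with reference to Fig. 16 amounts to a simple lookup from the set of detected codes to a zone number. The sketch below infers the numbering from the text's two examples (code "a" alone maps to zone 7 at the top; codes "c" and "d" together map to zone 2), so the bottom-to-top ordering of beams d, c, b, a is an assumption consistent with, but not explicitly stated by, the text:

```python
# Zone lookup implied by Figs. 9 and 16: beams d..a from bottom to
# top, zones numbered 1 (bottom) through 7 (top). Odd zones are lit by
# a single beam; even zones are overlaps of two adjacent beams.

BEAMS = ["d", "c", "b", "a"]  # assumed bottom-to-top order

def zone_of(present):
    """Map the set of decoded beam codes to a zone number 1..7."""
    for i, beam in enumerate(BEAMS):
        if present == {beam}:
            return 2 * i + 1                  # zone lit by one beam
        if i < len(BEAMS) - 1 and present == {beam, BEAMS[i + 1]}:
            return 2 * i + 2                  # overlap zone
    raise ValueError("ambiguous or empty code set: %r" % (present,))

print(zone_of({"a"}))       # 7  (the square object of Fig. 16)
print(zone_of({"c", "d"}))  # 2  (the circle object of Fig. 16)
```

Any other code set (empty, or non-adjacent beams) is rejected, which matches the one-small-target-per-zone assumption under which the mixed signal is at worst a two-code combination.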
It is appreciated that providing a set of light sources, then effecting dynamic changes in the beam transmission profile, may or may not involve providing overlap between some or all adjacent light sources. Also, overlap between light sources may be provided in a system which does not effect or carry out dynamic changes in the beam transmission profile. Thus, the feature of dynamic changes in beam transmission profiles is orthogonal to, or independent of, the feature of providing overlap between adjacent beams to increase object detection resolution without increasing a given number of light sources covering an area to be scanned. It is appreciated that applicability of embodiments herein is not limited to the optical electromagnetic spectrum, and may also extend to an RF, acoustic, or other active scanning system. According to certain embodiments, the system may be calibrated by deploying objects at known locations and measuring certain outputs. Then, when objects at unknown locations are encountered in real time, their locations may be determined by interpolating between the outputs measured when the objects at known locations were deployed, thereby determining which real-world locations correspond to (e.g., are illuminated by) the various light sources respectively. It is appreciated that once this is known, the intersection between the real-world locations or zones illuminated by each of two adjacent overlapping beams is known as well, and this intersection is then known to be the real-world location of objects illuminated by both beams. However, calibration is only one possible solution for determining which zones or real-world locations (which may be expressed either in meters or in angles) correspond to (or are illuminated by) the various light sources, and any other solution described herein or known in the art may alternatively be employed. 
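The calibration-by-interpolation idea above can be sketched minimally. Everything below is an illustrative assumption (the output quantity, the angle values, and the choice of piecewise-linear interpolation), intended only to show the shape of the computation: measure outputs at known angles once, then invert the mapping at run time:

```python
# Sketch of calibration followed by run-time interpolation: objects are
# deployed at known angles and a system output is recorded for each;
# an unknown object's angle is then estimated by linear interpolation
# between the two nearest calibration points.

def interpolate_position(calib, output):
    """calib: list of (measured_output, known_angle_deg), sorted by
    measured_output. Returns the interpolated angle for `output`."""
    for (o0, a0), (o1, a1) in zip(calib, calib[1:]):
        if o0 <= output <= o1:
            t = (output - o0) / (o1 - o0)
            return a0 + t * (a1 - a0)
    raise ValueError("output outside calibrated range")

# Illustrative calibration table: three objects deployed at known angles.
calib = [(0.0, -10.0), (0.5, 0.0), (1.0, 10.0)]
print(interpolate_position(calib, 0.25))  # -5.0
print(interpolate_position(calib, 0.75))  # 5.0
```

A denser calibration table, or a higher-order fit, would serve the same role; the essential point is that the zone-to-location mapping is measured rather than derived.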
It is appreciated that use of pulsed light sources is but one possible solution; ensuring unique identification of each of plural zones may alternatively be achieved by coding the zone in any suitable manner, such as but not limited to use of color and/or pulse. Alternatively, a sinusoidal wave may be used, in which case encoding by frequency (e.g., of the light sources' beams) may be used to achieve unique identification of zones. Alternatively, chirp technology may be employed, or analog-type encoding, which is common in radar, but may require more transmitting power. Digital encoding is typically particularly suitable for small objects, e.g., drones, for which the received signal is weak.
Regarding use of colors, it is appreciated that, given N light sources, fewer than N different colors may be used, e.g., n < N colors, which repeat, may be employed. For example, given a linear array of light sources and given n = 3, the sources, going from left to right, may be red, blue, yellow, red, blue, yellow, etc. In this embodiment, beams which are adjacent are uniquely coded, whereas beams which are not adjacent are not (necessarily) uniquely coded. An advantage of certain embodiments is that ensuring that the detector's FOV fits in shape, and/or is well aligned with, the illuminator solid angle yields a desirable reduction of the background level, given that the larger the detector FOV, the higher the background level – which is undesirable. Other than drone detection, the systems and methods herein may be used to detect a wide variety of moving (or stationary) objects, for a broad variety of applications such as seekers for precision-guided munitions, port security, rigs, convoys, port depth mapping, air defense systems, and situational awareness systems. For example, it may be desired to detect birds in a given area such as an airport, e.g., given that birds are pests for airports, due to collisions of birds with aircraft, aka bird strikes, which may even cause an aircraft to crash. Detection of individual birds may be desirable; or, e.g., to identify migration patterns, detection of flocks of birds (which may optionally be detected as a single object) may be desired. It is appreciated that embodiments described and illustrated herein may be combined, unless physically impossible or specifically indicated herein to be undesirable. 
It is appreciated that terminology such as "mandatory", "require", "need" and "must" refer to implementation choices made within the context of a particular implementation or application described herein for clarity, and are not intended to be limiting, since, in an alternative implementation, the same elements might be defined as not mandatory and not required, or might even be eliminated altogether. Components described herein as software may, alternatively, be implemented wholly or partly in hardware and/or firmware, if desired, using conventional techniques, and vice-versa. Each module or component or processor may be centralized in a single physical location or physical device, or distributed over several physical locations or physical devices.
Included in the scope of the present disclosure, inter alia, are electromagnetic signals in accordance with the description herein. These may carry computer-readable instructions for performing any or all of the operations of any of the methods shown and described herein, in any suitable order, including simultaneous performance of suitable groups of operations, as appropriate. Included in the scope of the present disclosure, inter alia, are machine-readable instructions for performing any or all of the operations of any of the methods shown and described herein, in any suitable order; program storage devices readable by machine, tangibly embodying a program of instructions executable by the machine to perform any or all of the operations of any of the methods shown and described herein, in any suitable order i.e. not necessarily as shown, including performing various operations in parallel or concurrently, rather than sequentially as shown; a computer program product comprising a computer useable medium having computer readable program code, such as executable code, having embodied therein, and/or including computer readable program code for performing, any or all of the operations of any of the methods shown and described herein, in any suitable order; any technical effects brought about by any or all of the operations of any of the methods shown and described herein, when performed in any suitable order; any suitable apparatus or device or combination of such, programmed to perform, alone or in combination, any or all of the operations of any of the methods shown and described herein, in any suitable order; electronic devices each including at least one processor and/or cooperating input device and/or output device and operative to perform, e.g., in software, any operations shown and described herein; information storage devices or physical records, such as disks or hard drives, causing at least one computer or other device to be configured so as to carry out 
any or all of the operations of any of the methods shown and described herein, in any suitable order; at least one program pre-stored e.g. in memory or on an information network such as the Internet, before or after being downloaded, which embodies any or all of the operations of any of the methods shown and described herein, in any suitable order, and the method of uploading or downloading such, and a system including server/s and/or client/s for using such; at least one processor configured to perform any combination of the described operations, or to execute any combination of the described modules; and hardware which performs any or all of the operations of any of the methods shown and described herein, in any suitable order, either alone or in conjunction with software.
Any computer-readable or machine-readable media described herein is intended to include non-transitory computer- or machine-readable media. Any computations or other forms of analysis described herein may be performed by a suitable computerized method. Any operation or functionality described herein may be wholly or partially computer-implemented e.g., by one or more processors. The invention shown and described herein may include (a) using a computerized method to identify a solution to any of the problems or for any of the objectives described herein, the solution optionally including at least one of a decision, an action, a product, a service, or any other information described herein that impacts, in a positive manner, a problem or objectives described herein; and (b) outputting the solution. The system may, if desired, be implemented as a network, e.g., a web-based system employing software, computers, routers, and telecommunications equipment, as appropriate. Any suitable deployment may be employed to provide functionalities e.g., software functionalities shown and described herein. For example, a server may store certain applications, for download to clients, which are executed at the client side, the server side serving only as a storehouse. Any or all functionalities e.g., software functionalities shown and described herein, may be deployed in a cloud environment. Clients e.g., mobile communication devices such as smartphones, may be operatively associated with, but external to the cloud. The scope of the present invention is not limited to structures and functions specifically described herein, and is also intended to include devices which have the capacity to yield a structure, or perform a function, described herein, such that even though users of the device may not use the capacity, they are, if they so desire, able to modify the device to obtain the structure or function. 
Any "if -then" logic described herein is intended to include embodiments in which a processor is programmed to repeatedly determine whether condition x, which is sometimes true and sometimes false, is currently true or false, and to perform y each time x is determined to be true, thereby to yield a processor which performs y at least once, typically on an "if and only if" basis e.g., triggered only by determinations that x is true, and never by determinations that x is false. Any determination of a state or condition described herein, and/or other data generated herein, may be harnessed for any suitable technical effect. For example, the determination may be transmitted or fed to any suitable hardware, firmware, or software module, which is known or which is described herein to have capabilities to perform a technical operation responsive to the state or condition. The technical operation may, for example, comprise changing the state or condition, or may more generally cause any outcome which is technically advantageous, given the state or condition or data, and/or may prevent at least one outcome which is disadvantageous, given the state or condition or data. Alternatively, or in addition, an alert may be provided to an appropriate human operator or to an appropriate external system. Features of the present invention, including operations, which are described in the context of separate embodiments, may also be provided in combination in a single embodiment. For example, a system embodiment is intended to include a corresponding process embodiment, and vice versa. Also, each system embodiment is intended to include a server-centered "view" or client centered "view", or "view" from any other node of the system, of the entire functionality of the system, computer-readable medium, apparatus, including only those functionalities performed at that server or client or node. 
Features may also be combined with features known in the art, and particularly, although not limited to, those described in the Background section, or in publications mentioned therein. Conversely, features of the invention, including operations, which are described for brevity in the context of a single embodiment or in a certain order, may be provided separately or in any suitable sub-combination, including with features known in the art (particularly although not limited to those described in the Background section or in publications mentioned therein) or in a different order. "e.g." is used herein in the sense of a specific example which is not intended to be limiting. Each method may comprise all or any subset of the operations illustrated or described, suitably ordered e.g., as illustrated or described herein. Devices, apparatus, or systems shown coupled in any of the drawings may in fact be integrated into a single platform in certain embodiments, or may be coupled via any appropriate wired or wireless coupling, such as but not limited to optical fiber, Ethernet, Wireless LAN, HomePNA, power line communication, cell phone, Smart Phone (e.g. iPhone), Tablet, Laptop, PDA, Blackberry GPRS, Satellite including GPS, or other mobile delivery. It is appreciated that in the description and drawings shown and described herein, functionalities described or illustrated as systems and sub-units thereof can also be provided as methods and operations therewithin, and functionalities described or illustrated as methods and operations therewithin can also be provided as systems and sub-units thereof. The scale used to illustrate various elements in the drawings is merely exemplary and/or appropriate for clarity of presentation, and is not intended to be limiting. Any suitable communication may be employed between separate units herein e.g., wired data communication and/or in short-range radio communication with sensors such as cameras e.g., via WiFi, Bluetooth, or Zigbee. 
It is appreciated that implementation via a cellular app as described herein is but an example, and, instead, embodiments of the present invention may be implemented, say, as a smartphone SDK, as a hardware component, as an STK application, or as suitable combinations of any of the above. Any processing functionality illustrated (or described herein) may be executed by any device having a processor, such as but not limited to a mobile telephone, set-top-box, TV, remote desktop computer, game console, tablet, mobile e.g. laptop or other computer terminal, or embedded remote unit, which may either be networked itself (may itself be a node in a conventional communication network e.g.), or may be conventionally tethered to a networked device (to a device which is a node in a conventional communication network or is tethered directly or indirectly/ultimately to such a node). Any operation or characteristic described herein may be performed by another actor outside the scope of the patent application, and the description is intended to include apparatus, whether hardware, firmware, or software, which is configured to perform, enable, or facilitate that operation or to enable, facilitate, or provide that characteristic. The terms processor or controller or module or logic, as used herein, are intended to include hardware such as computer microprocessors or hardware processors, which, typically, have digital memory and processing capacity, such as those available from, say Intel and Advanced Micro Devices (AMD). Any operation or functionality or computation or logic described herein may be implemented entirely or in any part on any suitable circuitry, including any such computer microprocessor/s as well as in firmware or in hardware, or any combination thereof. It is appreciated that elements illustrated in more than one drawing, and/or elements in the written description, may still be combined into a single embodiment, except if otherwise specifically clarified herein. 
Any of the systems shown and described herein may be used to implement, or may be combined with, any of the operations or methods shown and described herein. It is appreciated that any features, properties, logic, modules, blocks, operations, or functionalities described herein, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment, except where the specification or general knowledge specifically indicates that certain teachings are mutually contradictory, and cannot be combined. Conversely, any modules, blocks, operations, or functionalities, described herein, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination, including with features known in the art. Each element e.g., operation described herein, may have all characteristics and attributes described or illustrated herein, or, according to other embodiments, may have any subset of the characteristics or attributes described herein. References herein to "said (or the) element x" having certain (e.g., functional or relational) limitations/characteristics, are not intended to imply that a single instance of element x is necessarily characterized by all the limitations/characteristics. Instead, "said (or the) element x" having certain (e.g., functional or relational) limitations/characteristics is intended to include both (a) an embodiment in which a single instance of element x is characterized by all of the limitations/characteristics and (b) embodiments in which plural instances of element x are provided, and each of the limitations/characteristics is satisfied by at least one instance of element x, but no single instance of element x satisfies all limitations/characteristics. 
For example, each time L limitations/characteristics are ascribed to "said" or "the" element X in the specification or claims (e.g. to "said processor" or "the processor"), this is intended to include an embodiment in which L instances of element X are provided, which respectively satisfy the L limitations/characteristics, each of the L instances of element X satisfying an individual one of the L limitations/characteristics. The plural instances of element X need not be identical. For example, if element X is a hardware processor, there may be different instances of X, each programmed for different functions and/or having different hardware configurations (e.g., there may be 3 instances of X: two Intel processors of different models, and one AMD processor).
Claims (25)
1. A system which scans a Field of Regard (FOR) for target objects (aka targets), the system comprising: a multi-beam illuminator which generates a multi-beam comprising plural beams, differentially (e.g., uniquely) coded according to a coding scheme, which respectively illuminate plural zones within the FOR; a detector which knows the coding scheme and, accordingly, detects which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and a scanner which controls the multi-beam to scan the FOR, thereby to yield a high-throughput scanning and detection system, useful for large-FOR applications, which determines which target objects are present in each of the plural zones.
2. The system of claim 1 wherein at least one pair of said plural beams overlaps and scans, typically continuously, in at least one dimension.
3. The system of claim 1 wherein the multi-beam illuminator has a profile and wherein there is at least one dynamic change of the multi-beam profile in real time.
4. The system of claim 1 wherein the plural beams scan in jumps.
5. The system of claim 1 wherein the plural beams scan continuously in 1D.
6. The system of claim 1 wherein the plural beams scan continuously in 2D.
7. The system of claim 1 wherein the plural beams have an angular scan velocity which varies over time.
8. The system of claim 1 wherein the plural beams are encoded using a random sequence.
9. The system of claim 1 wherein the plural beams are encoded using a comb with a time shift.
10. The system of claim 1 wherein the plural beams are encoded using a comb with varying time periods.
11. The system of claim 1 wherein the plural beams are encoded using a comb with varying wavelengths.
12. The system of claim 1 wherein dynamic angular resolution change occurs at least during a scan phase.
13. The system of claim 1 wherein dynamic angular resolution change occurs at least during a tracking phase.
14. The system of claim 1 wherein range resolution changes dynamically, using codes, e.g., using said coding scheme.
15. The system of claim 1 wherein the detector comprises a single lens with a large point spread function (PSF).
16. The system of claim 1 wherein the detector defines an array of pixels or cells.
17. A system according to claim 1 wherein, when performing multi-stage object detection including at least first and second stages, a controller which controls the multi-beam illuminator switches from one beam modulation scheme to another in real time, including using a first modulation scheme in the first stage and a second modulation scheme in the second stage.
18. The system of claim 1 wherein the detector comprises at least one solid-state photodetector.
19. The system of claim 18 wherein the detector comprises at least one single-photon avalanche diode (SPAD).
20. The system of claim 19 wherein the detector comprises a 1D array of single-photon avalanche diodes (SPADs).
21. The system of claim 19 wherein the detector comprises a 2D array of single-photon avalanche diodes (SPADs).
22. The system of claim 1 wherein the detector comprises at least one Silicon Photomultiplier (SiPM).
23. The system of claim 1 wherein the detector comprises at least one Multi-Pixel Photon Counter (MPPC).
24. A method which scans a Field of Regard (FOR) for target objects (aka targets), the method comprising: generating a multi-beam comprising plural beams which are differentially (e.g., uniquely) coded according to a known coding scheme, and which respectively illuminate plural zones within the FOR; according to the known coding scheme, detecting which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and controlling the multi-beam to scan the FOR, thereby to yield high-throughput scanning and detection, useful for large-FOR applications.
25. A computer program product, comprising a non-transitory tangible computer-readable medium having computer-readable program code embodied therein, said computer-readable program code adapted to be executed to implement a method which scans a Field of Regard (FOR) for target objects (aka targets), the method comprising: generating a multi-beam comprising plural beams which are differentially (e.g., uniquely) coded according to a known coding scheme, and which respectively illuminate plural zones within the FOR; according to the known coding scheme, detecting which of the plural zones within the FOR the target objects illuminated by the multi-beam belong to; and controlling the multi-beam to scan the FOR, thereby to yield high-throughput scanning and detection, useful for large-FOR applications.

For the Applicants, REINHOLD COHN AND PARTNERS
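As an illustration only, and not as a limiting implementation of the claims, the zone-identification principle of claims 1 and 8 (plural beams tagged with distinct pseudo-random codes, read out by a single detector that knows the codes) can be sketched as follows. All function names and parameter values below are hypothetical and chosen purely for the sketch:

```python
import random

def make_codes(num_beams, length, seed=7):
    """Assign each beam/zone a distinct pseudo-random on/off pulse code
    (cf. the random-sequence encoding of claim 8)."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(length)] for _ in range(num_beams)]

def single_detector_signal(codes, target_zones):
    """A single detector sums the light returned from every zone that
    contains a reflecting target (cf. the single detector of claim 1)."""
    length = len(codes[0])
    return [sum(codes[z][t] for z in target_zones) for t in range(length)]

def decode_zones(codes, signal):
    """Correlate the summed return against each beam's known code.

    Mapping each known code to +/-1 makes the cross-correlation between
    two different random codes average to roughly zero, so only zones
    that actually returned light produce a large positive score."""
    length = len(codes[0])
    return [z for z, code in enumerate(codes)
            if sum((2 * c - 1) * s for c, s in zip(code, signal)) > length // 4]

codes = make_codes(num_beams=4, length=512)
signal = single_detector_signal(codes, target_zones=[1, 3])
print(decode_zones(codes, signal))  # zones 1 and 3 are recovered
```

The comb-based encodings of claims 9-11 would replace the random codes above with periodic pulse trains differing in time shift, period, or wavelength; the single-detector correlation step stays conceptually the same.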
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IL303684A IL303684A (en) | 2023-06-13 | 2023-06-13 | Improved system, method, and computer program product, which may use a single detector, for finding/tracking targets |
| PCT/IL2024/050521 WO2024257083A1 (en) | 2023-06-13 | 2024-05-28 | Improved system, method, and computer program product, which may use a single detector, for finding/tracking targets |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| IL303684A (en) | 2025-01-01 |
Family
ID=93851532
Country Status (2)
| Country | Link |
|---|---|
| IL (1) | IL303684A (en) |
| WO (1) | WO2024257083A1 (en) |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3460520B1 (en) * | 2017-09-25 | 2023-07-19 | Hexagon Technology Center GmbH | Multi-beam laser scanner |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024257083A1 (en) | 2024-12-19 |