IL146076A - Method and system for improved object acquisition and identification - Google Patents

Method and system for improved object acquisition and identification

Info

Publication number
IL146076A
Authority
IL
Israel
Prior art keywords
information
lighting
target
environmental information
digital picture
Prior art date
Application number
IL146076A
Other languages
Hebrew (he)
Other versions
IL146076A0 (en)
Inventor
Ofer Solomon
Original Assignee
Rafael Armament Dev Authority
Ofer Solomon
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rafael Armament Dev Authority, Ofer Solomon filed Critical Rafael Armament Dev Authority
Priority to IL146076A priority Critical patent/IL146076A/en
Publication of IL146076A0 publication Critical patent/IL146076A0/en
Publication of IL146076A publication Critical patent/IL146076A/en

Landscapes

  • Image Analysis (AREA)

Description

METHOD AND SYSTEM FOR IMPROVED OBJECT ACQUISITION AND IDENTIFICATION

FIELD AND BACKGROUND OF THE INVENTION

The present invention relates to image processing systems and, more particularly, to a method of, and system for, improved autonomous or semi-autonomous object acquisition and identification.
The oldest and best-known image processing system used by humans is our own visual system. The high-resolution eye sensor, coupled to the brain via the optic nerve, remains the ultimate objective in sensor and signal-processor design.
Despite the elegant design of the eye-brain system, it has serious shortcomings. For example, the eye is sensitive to only a limited range of wavelengths, lacks capability at extended range, sees poorly at night and in transmission-attenuated atmospheres, and is rather easily deceived.
In response to these limitations, humans have developed devices that view the environment far beyond the reach of the human sensing system and enable the accomplishment of increasingly complex tasks. However, the wealth of additional data often overwhelms our ability to quickly process all the information and make decisions based thereon.
In medical imaging, a wide range of modalities has improved diagnostic capabilities. Also, our ability to acquire imagery has increased: An echo planar image can be generated in 100 msec, and four computerized tomography scan images can be generated in a second. Medical cost containment prevents an increase in the number of physicians, but the increasing quantity of images and image types increases the need for more radiologists. The human need for processing help is also found in automatic fingerprint and face recognition, manufacturing controls and inventory screening, and robotics.
In the military, sensors have been developed for viewing the battlefield, even at night and during obscuring weather, allowing 24-hour, "all-weather" performance. Image intensifiers, thermal imaging, high-resolution television, and lasers are prime examples of the technologies employed by the military. The data from these sensors pour in along with demands on the soldier to make rapid decisions. The soldier, like the radiologist, is overloaded with information from a vast array of sensors while responding to demands of life-threatening dimensions. The soldier needs to efficiently use all sensor information and requires image processing to aid the decision-making process.
These requirements are the origin of the concept of Automatic Target Recognition (ATR) in the military and Guided (or computer-aided) Diagnostics in the medical community. ATR is a generic term used to describe various automated and semi-automated functions carried out on image and sensor data to perform operations ranging from the simple (cueing a human observer to a potential target) to the complex (autonomous object acquisition and identification). ATR is the machine function of detecting, classifying, recognizing, and/or identifying an object without the need of human intervention.
In the military, the most sophisticated example of ATR is the fire-and- forget, lock-on-after-launch missile. Here, an ATR is supposed to recognize the candidate targets in the scene after it has been launched, select the target of choice, track the target during the flight, make final aim point selection, and conduct terminal guidance to the target.
Various performance criteria for detecting and identifying a target from background clutter and system noise are in common usage in the military community (see L. M. Biberman, Perception of Displayed Information, New York: Plenum Press, 1973, pp. 183-187; J. A. Ratches, "Static Performance Model for Thermal Imaging Systems", Optical Engineering, Vol. 15, No. 6, 1976, pp. 525-536), including:
1) Probability of detection is the probability of correctly discriminating an object in the image from background and system noise.
2) Probability of classification is the probability of correctly determining the class of a detected target. In the case of Army tactical target acquisition, this means, by way of example, telling whether the target is tracked or wheeled.
3) Probability of recognition is the probability of correctly determining the class membership of the target. Again, for Army tactical targets: is the tracked vehicle a tank, an armored personnel carrier, or a self-propelled gun?
4) Probability of identification is the probability of correctly determining the exact identity of the target, e.g., for automobiles, is it a Ford, Chevrolet, or Plymouth?
5) False alarm probability is the probability of an error in detection, classification, recognition, and/or identification. The units of false-alarm rate are false alarms per square degree in object space.
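By way of a worked illustration (not part of the original patent text), these five criteria can be estimated from scored trial counts. The following minimal Python sketch does exactly that; every variable name and number in it is invented for the example, including the searched field of view measured in square degrees.

```python
# Minimal sketch: estimating the five ATR performance criteria from
# scored trial counts. All names and numbers are illustrative.

def atr_metrics(n_targets, n_detected, n_classified, n_recognized,
                n_identified, n_false_alarms, fov_square_degrees):
    """Return the standard ATR probabilities and false-alarm rate.

    n_targets          -- ground-truth targets presented
    n_detected         -- targets correctly discriminated from clutter/noise
    n_classified       -- detections with correct class (tracked vs. wheeled)
    n_recognized       -- correct class membership (tank vs. APC vs. SP gun)
    n_identified       -- exact identity correct
    n_false_alarms     -- spurious declarations
    fov_square_degrees -- searched field of view, in square degrees
    """
    p_det = n_detected / n_targets
    p_cls = n_classified / n_detected if n_detected else 0.0
    p_rec = n_recognized / n_detected if n_detected else 0.0
    p_id = n_identified / n_detected if n_detected else 0.0
    far = n_false_alarms / fov_square_degrees  # false alarms per sq. degree
    return p_det, p_cls, p_rec, p_id, far

# Example: 200 targets, 160 detected, 10 false alarms over a 25 sq-deg search.
print(atr_metrics(200, 160, 140, 120, 90, 10, 25.0))
```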
It has been recognized by many experts in the field that the current performance of ATR technology is still disappointingly poor. In summarizing 10 years of research conducted by several U.S. Army laboratories in the field of ATR, Ratches, et al. ("Aided and Automatic Target Recognition Based Upon Sensory Inputs from Image Forming Systems", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 9, September 1997) conclude that acceptable autonomous operation of ATR in military applications is still an unattainable goal:

"As seen in this survey of the state-of-the-art of ATR technology as measured by Army Laboratories, the present state-of-the-art of ATR systems is still far from imitating the performance of the eye-brain combination except for a few select ATR implementations. For some low to medium clutter scenarios in which the number of target possibilities is not very great, the target acquisition performance by the ATR is good enough to be useful to the military. The useful ATR systems are realizable in hardware that host algorithm implementations that exist and have been demonstrated against militarily relevant scenes. However, current ATR algorithms cannot be accurately modeled; there is little correlation between known image metrics and observed ATR performance."

Ratches, et al., conclude that: "experimental results justify the pursuit of sensor fusion by the military in order to realize significant improvements in performance. Improvements imply maintaining probabilities of detection, classification, recognition, and identification of larger classes of targets while reducing false alarms in higher clutter levels. This means moving from one set of ROCs [Receiver-Operator-Curves] to another higher performance set by introducing new, independent information. The biological examples of using several senses to make decisions about scene content support the concept. However, the lack of theoretical guidelines from image science does not suggest how to proceed down this path."

It is also suggested by Ratches, et al., that "what are truly needed are new ideas on the information content of a scene and how to take advantage of it. It does not appear that a great increase in processor computing power is required."

There is therefore a recognized need for, and it would be highly advantageous to have, a method of, and a system for, improved autonomous or semi-autonomous object acquisition and identification, in which the probabilities of detection, recognition, and identification are improved, and in which the false alarm probability in detecting, recognizing, and identifying a target is appreciably reduced.
SUMMARY OF THE INVENTION

The present invention is a system for and method of autonomous object acquisition and identification. The method includes the steps of: (a) detecting optical information from a field of view; (b) detecting environmental information from the field of view; (c) providing the environmental information to a comparison module having an image analyzer and a target model base builder, and (d) processing the optical information within the comparison module, using the environmental information, so as to compensate for optically-obscuring conditions.
According to another aspect of the present invention, there is provided a system for improved object acquisition and identification, the system including: (a) a detector for capturing an optical stimulus from a field of view; (b) a module including: (i) an image analyzer for processing data obtained from the detector, and (ii) a target model base builder for building target models, the module being designed and configured to: process environmental information pertaining to the field of view, and compensate for optically-obscuring conditions using the environmental information.
According to further features in the described preferred embodiments, the environmental information is lighting information.
According to further features in the described preferred embodiments, the lighting information includes measured incident light intensity.
According to further features in the described preferred embodiments, the lighting information includes a camera shutter speed.
According to further features in the described preferred embodiments, the lighting information includes a camera f-number.
According to further features in the described preferred embodiments, the lighting information includes lighting direction.
According to further features in the described preferred embodiments, the lighting information includes light softness.
According to further features in the described preferred embodiments, the lighting information includes overcast intensity.
According to further features in the described preferred embodiments, the lighting information includes lighting dynamics.
According to further features in the described preferred embodiments, the lighting dynamics includes overcast dynamics.
According to further features in the described preferred embodiments, the method further includes the step of: (e) producing a compensated digital picture.
According to further features in the described preferred embodiments, the processing of the optical stimulus includes: (i) producing a raw digital picture from the optical stimulus, and (ii) processing the raw digital picture using the environmental information so as to compensate for the optically- obscuring conditions.
According to further features in the described preferred embodiments, the method further includes the step of: (e) producing a compensated digital picture.
According to further features in the described preferred embodiments, the environmental information is provided to the image analyzer within the comparison module.
According to further features in the described preferred embodiments, the environmental information is provided to the target model base builder within the comparison module.
According to further features in the described preferred embodiments, the processing includes: (i) producing a digital picture from the optical stimulus, and (ii) producing a compensated target model using the environmental information.
According to further features in the described preferred embodiments, the processing further includes: (iii) comparing the digital picture with the compensated target model.
According to further features in the described preferred embodiments, the method further includes the step of: (f) comparing input data from the target model base builder with the compensated digital picture.
According to further features in the described preferred embodiments, the method further includes the step of: (g) determining a match probability based on similarity criteria between the input data from the target model base builder and the compensated digital picture.
According to further features in the described preferred embodiments, the providing of the environmental information is performed manually.
According to further features in the described preferred embodiments, the determining of a match probability is performed manually.
According to further features in the described preferred embodiments, the method further includes the step of: (e) situating a detector in an immediate vicinity of the target field, to obtain the environmental information.
According to further features in the described preferred embodiments, a single detector is utilized to perform step (a).
According to further features in the described preferred embodiments, at least one of steps (a)-(d) is performed at least twice, so as to track a target that is initially disposed within the field of view.
According to further features in the described preferred embodiments, steps (a)-(d) are performed at least twice.
According to further features in the described preferred embodiments, the system further includes: (c) an environmental information detector for detecting the environmental information.
According to further features in the described preferred embodiments, the environmental information is lighting information.
According to further features in the described preferred embodiments, the image analyzer is designed and configured to receive the lighting information, to produce a compensated digital picture by utilization of the lighting information, and to compare the compensated digital picture with a target model from the target model base builder.
According to further features in the described preferred embodiments, the system further includes: (d) an output unit for outputting the compensated digital picture.
According to further features in the described preferred embodiments, the target model base builder is designed and configured to receive the lighting information and to produce a compensated digital model base picture by utilization of the lighting information.
According to further features in the described preferred embodiments, the image analyzer is designed and configured to compare the compensated digital model base picture with optical data obtained from the optical stimulus.
The present invention successfully addresses the shortcomings of the existing technologies by providing a system for and method of autonomous object acquisition and identification that is more sophisticated, accurate, and reliable than the art known heretofore.
BRIEF DESCRIPTION OF THE DRAWINGS

The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
In the drawings:
Figure 1 is a schematic illustration of a prior art ATR system;
Figure 2 is a typical schematic logical flow diagram of the prior art system of Figure 1;
Figure 3 is a schematic logical flow diagram of a method according to one aspect of the present invention;
Figure 4 is a schematic logical flow diagram of a method according to a second aspect of the present invention; and
Figure 5 is a schematic illustration of a system according to one aspect of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention is a system for and method of autonomous object acquisition and identification.
The principles and operation of the system and method according to the present invention may be better understood with reference to the drawings and the accompanying description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawing. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
Referring now to the drawings, Figure 1 is a schematic illustration of a prior art ATR system. A detector 10 detects and captures an optical stimulus from a field containing a potential target. Detector 10 then communicates the optical information to a video unit 12, wherein a digital picture is produced. The digital picture is then input to an analyzer 14. Additional input is provided to analyzer 14 from a target model base builder 16. This input typically includes digital representations of various target types and scenarios. Typically, the target types within target model base builder 16 include various non-target objects (e.g., passenger vehicle, tractor, etc.) that are not "hits", but serve as a basis for positive identification of non-hits.
Analyzer 14 compares the digital picture received from video unit 12 with digital representations of the target types and scenarios within target model base builder 16. In the event that the statistical correlation between the digital picture and digital representations from target model base builder 16 exceeds a pre-determined level, analyzer 14 determines (and outputs) that a hit has been made.
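For illustration only: one classical way such a correlation test can be realized is normalized cross-correlation between the digital picture and each stored template, with a hit declared when the best score exceeds a pre-determined threshold. The sketch below assumes NumPy; the names `patch` and `templates` and the 0.8 threshold are assumptions for the example, not taken from the patent.

```python
# Minimal sketch of an analyzer's match test: normalized cross-correlation
# between a picture patch and stored target templates, declaring a "hit"
# above a pre-determined threshold. Threshold and names are illustrative.
import numpy as np

def normalized_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def is_hit(patch, templates, threshold=0.8):
    # A hit is declared when the best template correlation exceeds the threshold.
    scores = [normalized_correlation(patch, t) for t in templates]
    return max(scores) > threshold, scores

# Usage: compare a 32x32 picture patch against two templates, one similar.
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
templates = [patch + 0.05 * rng.random((32, 32)), rng.random((32, 32))]
print(is_hit(patch, templates))
```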
A logical flow diagram of such a prior art method is provided in Figure 2. A detector detects 20 an optical stimulus from a field containing a potential target, and communicates 22 the optical information to a video unit. A digital picture is produced 24 and subsequently undergoes at least one analyzing step 26.
Additional input for the analysis is transferred from a target model base builder to the analyzer. Various target types and scenarios are provided 28 to the target model base builder, which builds a model base (or feature base) 30 for these respective target types and scenarios and inputs the information to analyzing step 26. As used herein in the specification and in the claims section that follows, the term "model base" is used in a general sense to include a feature base.
Preferably, the processing performed to produce the library is performed off-line (i.e., not in real time) as much as possible, in order to reduce real-time processing requirements during an actual event. Analyzing step 26 includes comparing the digital picture produced in step 24 with the digital representation produced in step 30.
Optionally, analyzing step 26 is provided 32 with detector parameters such as detector distance from target, angle, and field of view.
In analyzing step 26, if the statistical correlation between the digital picture of the target and digital representations from target model base builder 16 exceeds a pre-determined level, a hit or match is output.
Although various deficiencies associated with such prior-art systems have been delineated above, one additional, particularly problematic deficiency is the lack of a compensation mechanism. In the human eye-brain system, by way of example, the brain often knows or presupposes how a target object is supposed to look. In the event of various optically-obscuring conditions, including poor or uneven lighting, glare, glint, overlighting, air humidity, wind, rain, distance, etc., the brain downplays or otherwise processes at least a portion of the optical information relayed from the eye, such that the person "sees" the target object in a more complete fashion, as if the optically-obscuring conditions are damped or even non-existent. Thus, an inexperienced photographer may capture an image that appears to be perfect, yet, upon examination of the photograph, a glaring unevenness in the lighting/shading is revealed. In this case, the processed photograph exactly reflects the image detected by the detecting element of the camera. As a result of optically-obscuring conditions, however, it may be difficult or even impossible to identify, from the processed photograph, the target object and/or elements thereof, even though the eye-brain system of the photographer had "seen" the target object and elements thereof clearly and in good detail.
The present invention teaches a system and method of compensating for optically-obscuring conditions, so as to better detect and identify target objects. Moreover, the present invention actually utilizes the optical data from such optically-obscuring conditions to effect the compensation.
As used herein in the specification and in the claims section that follows, the term "optically-obscuring conditions" and the like refer to a wide variety of conditions that influence the appearance of a target image. These conditions may include, but are not limited to, poor or uneven lighting, glare, glint, overlighting, shadows or shadowing, air humidity, wind, and rain. The term "optically-obscuring conditions" is meant to encompass optically-changing conditions, in which the appearance of a target image changes as a function of lighting conditions. These changes are often correlated with the time of day at which the target image is captured. The term "optically-obscuring conditions" is also used in a more general sense to include electromagnetic wave frequencies outside the range of visible light, e.g., infra-red light. In the case of infra-red light, the optically-obscuring conditions are actually thermally-obscuring conditions, which fall within the general definition of optically-obscuring conditions as used herein.
As used herein in the specification and in the claims section that follows, the term "environmental information" refers to data on environmental factors that influence the appearance of a target or field of view. The term "environmental information" typically refers to radiometric information, including lighting information of various kinds, as described hereinabove regarding optically-obscuring conditions. The term is also used in a more general sense to include other kinds of environmental information, including meteorological information such as air humidity, wind, and rain information.
A logical flow diagram of a method according to one aspect of the present invention is provided in Figure 3. A detector detects 20 an optical stimulus from a field containing a potential target, and communicates 22 the optical information to a video unit. A digital picture is produced 24. This picture is often far from ideal, in that it incorporates optical data generated under optically-obscuring conditions.
In this aspect of the improved method of the present invention, information on such optically-obscuring conditions is input 34 to comparison module 35, which includes a compensating step 36, an analyzing step 26, and a model base building step 30. In Figure 3, the information is input 34 directly to compensating step 36. This information may include, but is not limited to, exposure metering information such as measured incident light, shutter speed, etc. Often, such information is readily available, but the tremendous potential inherent in this information is not utilized to improve the genuine information content of a target object or scene. In compensating step 36, the digital picture produced in step 24 is processed to damp, substantially remove, or otherwise overcome incidental optical data resulting from optically-obscuring conditions. It must be emphasized that the input 34 of information on optically-obscuring conditions can be received from an automatic source, such as a light detector, a semi-automatic source, or a manual source. For example, lighting conditions in the vicinity of the target object can be input by an operator who has estimated or measured the lighting direction and intensity. On relatively clear days, the lighting direction can also be obtained by inputting the time of day at which the image was captured and the approximate coordinates of the target position.
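As a hedged illustration of how such readily available information might be put to use (the patent does not prescribe a specific algorithm), the sketch below derives two lighting inputs of the kind described above: a gain that normalizes a raw frame to a reference exposure from the shutter speed and f-number, and a textbook clear-sky estimate of sun elevation from the time of day and approximate latitude. All parameter values are invented for the example.

```python
# Minimal sketch (assumptions, not the patent's algorithm) of two readily
# available lighting inputs: (1) normalizing a raw picture using exposure
# metering data (shutter speed, f-number), and (2) estimating sun direction
# from time of day and approximate target coordinates on a clear day.
import math
import numpy as np

def exposure_value(f_number: float, shutter_s: float) -> float:
    # Standard photographic exposure value: EV = log2(N^2 / t).
    return math.log2(f_number ** 2 / shutter_s)

def normalize_exposure(image: np.ndarray, f_number: float, shutter_s: float,
                       reference_ev: float = 12.0) -> np.ndarray:
    # Scale linear pixel values as if the frame were captured at the
    # (arbitrarily chosen) reference EV, so frames become comparable.
    gain = 2.0 ** (exposure_value(f_number, shutter_s) - reference_ev)
    return np.clip(image.astype(float) * gain, 0.0, 1.0)

def sun_elevation_deg(latitude_deg: float, day_of_year: int,
                      solar_hour: float) -> float:
    # Textbook approximation: solar declination and hour angle give the
    # sun's elevation; adequate for coarse shading prediction on clear days.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)))

# Usage: frame at f/4, 1/250 s; sun elevation at 32 N, day 170, 15:00 solar time.
frame = np.random.default_rng(1).random((8, 8))
print(normalize_exposure(frame, 4.0, 1 / 250).mean())
print(round(sun_elevation_deg(32.0, 170, 15.0), 1))
```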
In a preferred embodiment, the information input 34 to compensating step 36 is lighting information. The lighting information may include, but is not limited to, a light intensity scale in which dim lighting is near one end of the intensity scale, and bright lighting is near the other end of the scale. The lighting information preferably includes lighting direction or directions, which is of prime importance in determining the shading or partial shading of the target object. In addition, the intensity of the lighting and directionality thereof can introduce fictitious effects into a digital picture. With the requisite input on lighting conditions provided to compensating step 36, the fictitious effects can be removed from the digital picture, producing thereby an essentially "true", unobscured digital picture. This is also referred to herein as a "compensated" digital picture.
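One classical technique for removing the slowly-varying shading that such directional lighting introduces, offered here as an illustrative assumption rather than the patent's own compensating step, is to estimate the illumination field with a heavy blur and divide it out, leaving an approximately "compensated" picture. The sketch assumes NumPy and SciPy; the sigma value is arbitrary.

```python
# Minimal sketch (illustrative assumption): removing slowly-varying shading,
# e.g., uneven lighting from a known direction, by estimating the
# illumination field with a heavy Gaussian blur and dividing it out.
import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_illumination(image: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    img = image.astype(float) + 1e-6            # avoid division by zero
    illumination = gaussian_filter(img, sigma)  # low-frequency lighting estimate
    reflectance = img / illumination            # shading divided out
    reflectance -= reflectance.min()            # rescale to [0, 1]
    return reflectance / (reflectance.max() + 1e-6)

# Usage: a synthetic frame with a left-to-right lighting gradient.
rng = np.random.default_rng(2)
gradient = np.linspace(0.2, 1.0, 128)[None, :] * np.ones((128, 1))
frame = gradient * rng.random((128, 128))
compensated = flatten_illumination(frame)
print(compensated.shape, round(compensated.min(), 3), round(compensated.max(), 3))
```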
The compensated digital picture is subsequently input to analyzing step 26, along with the conventional input provided by a target model base builder. As described above in the treatment of prior-art methods, various target types and scenarios are provided 28 to the target model base builder, which builds 30 a model base for these respective target types and scenarios and inputs the information to analyzing step 26. Preferably, the processing performed to produce the library is performed off-line (i.e., not in real time) as much as possible, in order to reduce real-time processing requirements during an actual event. Analyzing step 26 includes comparing the unobscured digital picture produced in compensating step 36 with the digital representations produced in step 30.
Preferably, analyzing step 26 is provided 32 with image detector parameters such as detector distance from target, angle, and field of view.
In analyzing step 26, if the statistical correlation between the digital picture of the target and digital representations from target model base builder 16 exceeds a pre-determined level, a hit or match is output.
A logical flow diagram of a method according to another aspect of the present invention is provided in Figure 4. In this aspect of the improved method of the present invention, information on optically-obscuring conditions is input 34 to a comparison module 37, which includes a model base building step 38 and an analyzing step 26. As described above, various target types and scenarios are provided 28 to the target model base builder, which builds 38 a model base for these respective target types and scenarios. In addition, the target model base builder is input 34 with information on optically-obscuring conditions and builds 38 a modified digital model of one or more target types and scenarios. At least one modified digital picture is then input to analyzing step 26.
Preferably, the processing performed to produce the modified library is performed off-line (i.e., not in real time) as much as possible, in order to reduce real-time processing requirements during an actual event.
By way of example: 100 different lighting combinations are used to generate an expanded library having 100 different digital pictures for each target type and/or scenario. The image received from the video unit is input to the analyzer. The lighting number (e.g., 17, which represents bright lighting coming from due east) is input to the library or to the analyzer, such that the pictures can be compared for detection and identification purposes in a virtually instantaneous manner.
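A minimal sketch of such a lighting-indexed library follows; the dictionary layout, the `render_template` stub, and the target-type names are all assumptions for the example, and matching is done here by mean squared error rather than any particular correlation measure.

```python
# Minimal sketch (illustrative): an expanded model library keyed by a
# "lighting number", so that at run time only templates rendered under the
# current lighting condition need to be compared. Layout and names assumed.
import numpy as np

N_LIGHTING_CONDITIONS = 100  # e.g., combinations of direction and intensity

def render_template(target_type: str, lighting_number: int) -> np.ndarray:
    # Stand-in for off-line rendering of a target model under one lighting
    # condition; here just a deterministic pseudo-image.
    rng = np.random.default_rng(hash((target_type, lighting_number)) % 2**32)
    return rng.random((32, 32))

# Built off-line: library[lighting_number][target_type] -> template image.
library = {
    k: {t: render_template(t, k) for t in ("tank", "apc", "truck")}
    for k in range(N_LIGHTING_CONDITIONS)
}

def best_match(image: np.ndarray, lighting_number: int) -> str:
    candidates = library[lighting_number]
    # Nearest template under the current lighting, by mean squared error.
    return min(candidates, key=lambda t: ((image - candidates[t]) ** 2).mean())

# Usage: lighting number 17 (say, bright light from due east).
print(best_match(render_template("apc", 17), 17))
```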
It can be appreciated that many modifications to the above-described method will be apparent to one skilled in the art.
Figure 5 is a schematic illustration of a system 100 according to one aspect of the present invention. As in prior-art systems, a detector 10 detects and registers an optical stimulus from a field containing a potential target. Detector 10 then communicates the optical information to a video unit 12, wherein a digital picture is produced. The digital picture is then input to an analyzer 14. Additional input is provided to analyzer 14 from a target model base builder 16, as described in greater detail hereinabove.
Detector 10 optionally includes a radiometer 44 in a single, integral unit. In this case, detector 10 is designed and configured to communicate, in addition to the image data, detected environmental information, and more preferably, detected lighting information, to analyzer 14. Analyzer 14 is designed and configured to process the digital picture according to the detected lighting information received and to compensate for erroneous visual data produced by optically-obscuring conditions, such that genuine visual information content can be extracted from a target scene, and a modified, substantially unobscured digital picture can be produced.
Optionally, radiometer 44 is physically separate from detector 10, such that each respective task is performed by a separate unit. All other considerations being equal, it is generally preferred that radiometer 44 be disposed so as to detect the lighting information (or other object-obscuring information) in a manner that closely approximates the lighting in the vicinity of the target. In many instances, however, the lighting in the vicinity of the target can be satisfactorily approximated by the lighting at a location substantially remote from the target, e.g., at the location of the airplane, helicopter, or ground position from which the optical information on the target is captured. Also, it is not always practically possible to situate radiometer 44 in the vicinity of the target, such that a remotely-situated radiometer 44 enables the production of a less accurate, yet useful, modified digital picture.
It should be emphasized that detector 10 and radiometer 44 can be selected from a wide variety of devices known in the art, and that any requisite adaptation to a system according to the present invention can be performed by one with ordinary skill in the art.
Analyzer 14 then compares the unobscured digital picture received from video unit 12 with digital representations of the target types and scenarios within target model base builder 16. In the event that the statistical correlation between the digital picture and digital representations from target model base builder 16 exceeds a pre-determined level, analyzer 14 determines (and outputs) that a hit has been made.
The system of the present invention can also operate in a semi-autonomous fashion. For example, the above-described, unobscured digital picture can (in addition to or instead of the autonomous comparison with information from target model base builder 16) be displayed or printed to enable target matching to be performed by an operator.
In addition, a model from target model base builder 16 can be adapted on the basis of measured or estimated lighting conditions. After undergoing the lighting correction, the model can then be displayed. The operator can then utilize this displayed model as a more accurate source of comparison with the captured image. Consequently, the accuracy of detection, classification, identification, etc., of a target is greatly enhanced.
The lighting information detected from the field of view may include one or more of the following: measured incident light intensity, shutter speed and f-number (e.g., of a camera), light direction, light "softness", overcast intensity, and lighting dynamics. The light direction is often of special importance.
As used herein in the specification and in the claims section that follows, the term "light softness" and the like refer to the diffusivity of the light, i.e., the extent to which the lighting is projected from all directions.
As used herein in the specification and in the claims section that follows, the term "overcast intensity" refers to the extent to which the lighting is affected by an overcast condition. Such a condition directly influences other lighting parameters, including, but not limited to, light softness and measured incident light intensity.

As used herein in the specification and in the claims section that follows, the term "lighting dynamics" refers to the rate at which one or more of the various lighting parameters are subject to change. The term "lighting dynamics" includes "overcast dynamics", an important parameter that expresses the rate at which the overcast intensity is subject to change. By way of example, a very high value of the overcast dynamics may provide a strong indication against using various lighting conditions in the processing and comparing of a target image with library images. In such a case, the lighting information may actually result in a less accurate comparison, because the lighting information is radically different from the lighting conditions at the instant that the target image was captured.
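To make the overcast-dynamics caveat concrete, the following sketch gates the use of lighting information on the measured rate of change of overcast intensity; the field names and the 0.2-per-minute cut-off are invented for illustration and are not prescribed by the patent.

```python
# Minimal sketch (illustrative assumption): gating the use of lighting
# information on "overcast dynamics". If overcast intensity is changing too
# fast, stale lighting measurements may hurt rather than help, so fall back
# to uncompensated matching. Thresholds and field names are invented.
from dataclasses import dataclass

@dataclass
class LightingInfo:
    incident_intensity: float   # measured incident light, normalized
    direction_deg: float        # azimuth of the dominant light source
    softness: float             # 0 = hard point source, 1 = fully diffuse
    overcast_intensity: float   # 0 = clear, 1 = fully overcast
    overcast_dynamics: float    # |d(overcast_intensity)/dt|, per minute

MAX_OVERCAST_DYNAMICS = 0.2  # assumed cut-off, per minute

def should_use_lighting(info: LightingInfo) -> bool:
    # Rapidly changing overcast means the measurement may no longer describe
    # conditions at the instant the target image was captured.
    return info.overcast_dynamics <= MAX_OVERCAST_DYNAMICS

print(should_use_lighting(LightingInfo(0.7, 90.0, 0.3, 0.5, 0.05)))  # True
print(should_use_lighting(LightingInfo(0.7, 90.0, 0.3, 0.5, 0.6)))   # False
```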
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.

Claims (38)

WHAT IS CLAIMED IS:

1. A method of improved object acquisition and identification, the method comprising the steps of: (a) detecting optical information from a field of view; (b) detecting environmental information from said field of view; (c) providing said optical information and said environmental information to a comparison module having an image analyzer and a target model base builder, and (d) processing said optical information within said comparison module, using said environmental information, so as to compensate for optically-obscuring conditions.

2. The method of claim 1, wherein said environmental information is lighting information.

3. The method of claim 2, wherein said lighting information includes measured incident light intensity.

4. The method of claim 2, wherein said lighting information includes camera shutter speed.

5. The method of claim 2, wherein said lighting information includes camera f-number.

6. The method of claim 2, wherein said lighting information includes light direction.

7. The method of claim 2, wherein said lighting information includes light softness.

8. The method of claim 2, wherein said lighting information includes overcast intensity.

9. The method of claim 2, wherein said lighting information includes lighting dynamics.

10. The method of claim 9, wherein said lighting dynamics includes overcast dynamics.

11. The method of claim 1, further comprising the step of: (e) producing a compensated digital picture.

12. The method of claims 1 and 2, wherein said processing of said optical stimulus includes: (i) producing a raw digital picture from said optical stimulus, and (ii) processing said raw digital picture using said environmental information so as to compensate for said optically-obscuring conditions.

13. The method of claim 12, further comprising the step of: (e) producing a compensated digital picture.

14. The method of claims 1 and 2, wherein said environmental information is provided to said image analyzer within said comparison module.

15. The method of claims 1 and 2, wherein said environmental information is provided to said target model base builder within said comparison module.

16. The method of claim 15, wherein said processing includes: (i) producing a digital picture from said optical stimulus, and (ii) producing a compensated target model using said environmental information.

17. The method of claim 16, wherein said processing further includes: (iii) comparing said digital picture with said compensated target model.

18. The method of claim 11, further comprising the step of: (f) comparing input data from said target model base builder with said compensated digital picture.

19. The method of claim 18, further comprising the step of: (g) determining a match probability based on similarity criteria between said input data from said target model base builder and said compensated digital picture.

20. The method of claims 1 and 2, wherein said providing of said environmental information is performed manually.

21. The method of claim 19, wherein said determining a match probability is performed manually.

22. The method of claims 1 and 2, further comprising the step of: (e) situating a detector in an immediate vicinity of said field of view, to obtain said environmental information.

23. The method of claims 1 and 2, wherein a single detector is utilized to perform step (a).

24. The method of claims 1, 2, 12 and 17, wherein at least one of steps (a)-(d) is performed at least at two distinct time frames, so as to track a target that is initially disposed within said field of view.

25. The method of claim 24, wherein steps (a)-(d) are performed at least at two distinct time frames.

26. A system for improved object acquisition and identification, the system comprising: (a) a detector for capturing an optical stimulus from a target field; (b) a module including: (i) an image analyzer for processing data obtained from said detector, and (ii) a target model base builder for building target models, said module being designed and configured to: process environmental information pertaining to said target field, and compensate for optically-obscuring conditions using said environmental information.

27. The system of claim 26, further comprising: (c) an environmental-information detector for detecting said environmental information.

28. The system of claim 27, wherein said environmental information is lighting information.

29. The system of claim 28, wherein said image analyzer is designed and configured to receive said lighting information, to produce a compensated digital picture by utilization of said lighting information, and to compare said compensated digital picture with a target model from said target model base builder.

30. The system of claim 28, further comprising: (d) an output unit for outputting said compensated digital picture.

31. The system of claim 28, wherein said target model base builder is designed and configured to receive said lighting information and to produce a compensated digital model base picture by utilization of said lighting information.

32. The system of claim 31, wherein said image analyzer is designed and configured to compare said compensated digital model base picture with optical data obtained from said optical stimulus.

33. The method of claim 2, wherein said lighting information includes shadow information.

34. The method of claim 1, wherein said environmental information is radiometric information.

35. The method of claim 34, wherein said radiometric information is infra-red information.

36. The method of claim 1, wherein said environmental information is air humidity.

37. The method of claim 1, wherein said environmental information is wind information.

38. The method of claim 1, wherein said environmental information is rain information.
IL146076A 2001-10-19 2001-10-19 Method and system for improved object acquisition and identification IL146076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
IL146076A IL146076A (en) 2001-10-19 2001-10-19 Method and system for improved object acquisition and identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL146076A IL146076A (en) 2001-10-19 2001-10-19 Method and system for improved object acquisition and identification

Publications (2)

Publication Number Publication Date
IL146076A0 IL146076A0 (en) 2002-12-01
IL146076A true IL146076A (en) 2006-08-20

Family

ID=28053215

Family Applications (1)

Application Number Title Priority Date Filing Date
IL146076A IL146076A (en) 2001-10-19 2001-10-19 Method and system for improved object acquisition and identification

Country Status (1)

Country Link
IL (1) IL146076A (en)

Also Published As

Publication number Publication date
IL146076A0 (en) 2002-12-01

Similar Documents

Publication Publication Date Title
US6900729B2 (en) Thermal signature intensity alarmer
CN107016690B (en) Unmanned aerial vehicle intrusion detection and identification system and method based on vision
Ratches et al. Aided and automatic target recognition based upon sensory inputs from image forming systems
US8761445B2 (en) Method and system for detection and tracking employing multi-view multi-spectral imaging
CN100353749C (en) Monitoring device composed of united video camera
US20060242186A1 (en) Thermal signature intensity alarmer system and method for processing thermal signature
US10949677B2 (en) Method and system for detecting concealed objects using handheld thermal imager
CN108731587A (en) A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
US20100002910A1 (en) Method and Apparatus for Developing Synthetic Three-Dimensional Models from Imagery
EP2711730A1 (en) Monitoring of people and objects
US20210256244A1 (en) Method for authentication or identification of an individual
US20150235102A1 (en) Detecting a target in a scene
Mahajan et al. Detection of concealed weapons using image processing techniques: A review
CN112257617B (en) Multi-modal target recognition method and system
Hadi et al. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
IL146076A (en) Method and system for improved object acquisition and identification
Huang et al. Occlusion handling of visual tracking by fusing multiple visual clues
Amador-Salgado et al. Knife detection using indoor surveillance camera
GB2506688A (en) Detection of a target in a scene using hyperspectral imaging
KR102302907B1 (en) Stereo awareness apparatus, and method for generating disparity map in the stereo awareness apparatus
Manen Thermal-Inertial Localization and 3D-Mapping to Increase Situational Awareness in Smoke-Filled Environments
Pérez-Jácome et al. Target detection from coregistered visual-thermal-range images
Hasim et al. ROBUST HUMAN DETECTION WITH OCCLUSION HANDLING BY FUSION OF THERMAL AND DEPTH IMAGES FROM MOBILE ROBOT
BOONE et al. Signal processing for missile guidance- Prospects for the future
Aluvalu Motion Detection and Alert System

Legal Events

Date Code Title Description
FF Patent granted
KB Patent renewed
KB Patent renewed
MM9K Patent not in force due to non-payment of renewal fees