IL279342A - Method and systems for enhancing depth perception of a non-visible spectrum image of a scene - Google Patents

Method and systems for enhancing depth perception of a non-visible spectrum image of a scene

Info

Publication number
IL279342A
Authority
IL
Israel
Prior art keywords
visible spectrum
spectrum image
image
depth
data
Prior art date
Application number
IL279342A
Other languages
Hebrew (he)
Inventor
Ophir Yoav
Original Assignee
Elbit Systems Ltd
Ophir Yoav
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elbit Systems Ltd, Ophir Yoav filed Critical Elbit Systems Ltd
Priority to IL279342A priority Critical patent/IL279342A/en
Priority to EP21902874.3A priority patent/EP4260553B1/en
Priority to PCT/IL2021/051467 priority patent/WO2022123570A1/en
Publication of IL279342A publication Critical patent/IL279342A/en
Priority to US18/331,203 priority patent/US12530791B2/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the three-dimensional [3D] impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/22Measuring arrangements characterised by the use of optical techniques for measuring depth
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/40Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00Two-dimensional [2D] image generation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00Three-dimensional [3D] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00Three-dimensional [3D] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three-dimensional [3D] modelling for computer graphics
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • H04N13/268Image signal generators with monoscopic-to-stereoscopic image conversion based on depth image-based rendering [DIBR]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Optics & Photonics (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Description

SHJ733 - AIR282IL -1-

METHODS AND SYSTEMS FOR ENHANCING DEPTH PERCEPTION OF A NON-VISIBLE SPECTRUM IMAGE OF A SCENE

TECHNICAL FIELD

The invention relates to methods and systems for enhancing a depth perception of a non-visible spectrum image of a scene.
BACKGROUND

Non-visible spectrum sensors (e.g., infrared sensors) can capture images outside the visible spectrum, i.e., non-visible spectrum images, in a wide variety of applications. However, non-visible spectrum images have poor depth perception. That is, a non-visible spectrum image poorly conveys three-dimensional (3D) features that are present within the scene displayed in the image. Thus, there is a need in the art for new methods and systems for enhancing a depth perception of a non-visible spectrum image of a scene.

References considered to be relevant as background to the presently disclosed subject matter are listed below. Acknowledgement of the references herein is not to be inferred as meaning that these are in any way relevant to the patentability of the presently disclosed subject matter.

U.S. Patent No. 6,157,733 ("Swain"), published on December 5, 2000, discloses one or more monocular cues being extracted from an original image and combined to enhance depth effect. An original image is acquired and segmented into one or more objects. The objects are identified as being either in the foreground or the background, and an object of interest is identified. One or more depth cues are then extracted from the original image, including shading, brightness, blur and occlusion. The depth cues may be in the form of one or more intermediate images having an improved depth effect. The depth cues are then combined or applied to create an image with enhanced depth effect.

U.S. Patent Application Publication No. 2015/0208054 ("Michot"), published on July 23, 2015, discloses a method of generating a depth cue for three dimensional video content.
The method comprises the steps of (a) detecting three dimensional video content that will appear in observer space when displayed; (b) identifying a reference projection parameter; (c) estimating a location of a shadow that would be generated by the detected content as a consequence of a light source emitting light according to the reference projection parameter; and (d) projecting light content imitating a shadow to the estimated location to coincide with display of the three dimensional video content. Also disclosed are a computer program product for carrying out a method of generating a depth cue for three dimensional video content and an apparatus for generating a depth cue for three dimensional video content.
GENERAL DESCRIPTION

In accordance with a first aspect of the presently disclosed subject matter, there is provided a method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising: capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor; obtaining three-dimensional (3D) data of one or more regions within the scene independently of the non-visible spectrum image, the one or more regions comprising part or all of the scene; generating one or more depth cues based on the 3D data; applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and displaying the enhanced depth perception image.

In some cases, the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are generated based on the a priori data and an actual position and orientation of the non-visible spectrum sensor at the capture time.

In some cases, at least some of the 3D data is obtained from a depth sensor that is distinct from the non-visible spectrum sensor, based on one or more readings by the depth sensor, and wherein one or more of the depth cues are generated based on the readings.

In some cases, the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are generated prior to the capture time, based on the a priori data and an expected position and orientation of the non-visible spectrum sensor at the capture time.

In some cases, the method further comprises: recording the non-visible spectrum image to provide a recording of the non-visible spectrum image; wherein at least one
of the depth cues is applied to the non-visible spectrum image as recorded.
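The first-aspect steps (capture, obtain independent 3D data, generate cues, apply, display) can be sketched as a simple processing loop. This is an illustrative assumption only: the function name `enhance_depth_perception` and the cue-generator callable interface are hypothetical, not part of the disclosure.

```python
import numpy as np

def enhance_depth_perception(nvs_image: np.ndarray,
                             depth_map: np.ndarray,
                             cue_generators) -> np.ndarray:
    """Apply depth cues derived from independently obtained 3D data to a
    non-visible spectrum (e.g., infrared) image, one cue generator at a time."""
    enhanced = nvs_image.astype(np.float32).copy()
    for generate_cue in cue_generators:
        # Each generator maps (image, depth map) -> an additive cue layer.
        cue = generate_cue(enhanced, depth_map)
        enhanced = np.clip(enhanced + cue, 0.0, 255.0)
    return enhanced

# Usage with a trivial, hypothetical cue generator that brightens uniformly:
image = np.full((4, 4), 100.0)
depth = np.zeros((4, 4))
flat_cue = lambda im, d: np.full_like(im, 10.0)
result = enhance_depth_perception(image, depth, [flat_cue])
```

The cue generators would, in a real system, produce shadow, haze, or perspective layers from the 3D data; here the generator is deliberately trivial.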
In some cases, at least some of the 3D data is based on readings that are obtained by an active 3D scanner from at least one scan of one or more of the regions within the scene, and wherein one or more of the depth cues are generated based on the at least some of the 3D data.

In some cases, the active 3D scanner is a Light Detection and Ranging (LiDAR).

In some cases, the depth cues include one or more of the following: (a) one or more shadows; (b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size; (c) haze; or (d) perspective.

In some cases, at least some of the shadows are generated by one or more virtual light sources.

In some cases, the method further comprises: selecting one or more selected light sources of the virtual light sources.

In some cases, the method further comprises: for at least one selected light source of the selected light sources, defining one or more parameters of the at least one selected light source, the one or more parameters including a position and an orientation of the at least one selected light source.

In some cases, for at least one selected light source of the selected light sources, one or more parameters of the at least one selected light source are defined by a user, the one or more parameters including a position and an orientation of the at least one selected light source.

In some cases, one or more selected light sources of the virtual light sources are selected by a user, and the user defines one or more parameters of the selected light sources, the one or more parameters including a position and an orientation of each selected light source of the selected light sources.

In some cases, at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.

In some cases, for each virtual object of the virtual objects that is applied to the non-visible spectrum image, an actual size of the respective virtual object is based on the known size of a physical object to which the respective virtual object corresponds.

In some cases, the virtual objects in the enhanced depth perception image are distinguishable from real objects in the enhanced depth perception image.

In some cases, the haze is applied to the non-visible spectrum image by altering one or more local characteristics of the non-visible spectrum image.
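One plausible way to realize the shadow cue generated by a virtual light source is to treat the independently obtained depth data as a height field and ray-march toward a distant virtual light. The sketch below is a hedged illustration under that assumption; `cast_shadows` and its `light_dir`/`light_slope` parameters are hypothetical names, not the claimed implementation.

```python
import numpy as np

def cast_shadows(height: np.ndarray,
                 light_dir=(0, 1),
                 light_slope: float = 1.0) -> np.ndarray:
    """Return a boolean shadow mask for a 2D height field lit by a distant
    virtual light source arriving along grid direction `light_dir`; a pixel is
    shadowed if any sample upstream toward the light rises above the light ray."""
    rows, cols = height.shape
    shadow = np.zeros(height.shape, dtype=bool)
    dy, dx = light_dir
    for r in range(rows):
        for c in range(cols):
            h = height[r, c]
            y, x, step = r, c, 0
            while True:
                y, x, step = y - dy, x - dx, step + 1
                if not (0 <= y < rows and 0 <= x < cols):
                    break
                # A taller upstream sample occludes this pixel.
                if height[y, x] > h + step * light_slope:
                    shadow[r, c] = True
                    break
    return shadow

# Usage: a single tall column on flat ground casts a shadow away from the light.
terrain = np.zeros((5, 5))
terrain[2, 2] = 10.0
mask = cast_shadows(terrain, light_dir=(0, 1), light_slope=1.0)
```

The resulting mask could then darken the corresponding pixels of the non-visible spectrum image to supply the monocular shadow cue.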
In some cases, the local characteristics include one or more of the following: (a) a Modulation Transfer Function (MTF) of the non-visible spectrum image, (b) one or more histogram distributions of the non-visible spectrum image, or (c) a change in a hue of the non-visible spectrum image.

In some cases, the perspective is applied to the non-visible spectrum image by including one or more contour lines in the enhanced depth perception image.

In accordance with a second aspect of the presently disclosed subject matter, there is provided a method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising: capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor, the non-visible spectrum image including one or more objects; classifying one or more of the objects without deriving three-dimensional (3D) data from the non-visible spectrum image, giving rise to one or more classified objects; generating one or more depth cues based on one or more parameters associated with the classified objects; applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and displaying the enhanced depth perception image.

In some cases, the depth cues include one or more of the following: (a) one or more shadows; (b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size; (c) haze; or (d) perspective.

In some cases, at least some of the shadows are generated by one or more virtual light sources.

In some cases, at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.

In accordance with a third aspect of the presently disclosed subject matter, there is provided a system for enhancing a depth perception of
a non-visible spectrum image of a scene, the system comprising: at least one non-visible spectrum sensor configured to capture the non-visible spectrum image at a capture time; and a processing circuitry configured to: obtain three-dimensional (3D) data of one or more regions within the scene independently of the non-visible spectrum image, the one or more regions comprising part or all of the scene; generate one or more depth cues based on the 3D data; apply the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and display the enhanced depth perception image.
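The haze cue described earlier, applied by altering local characteristics such as the image's histogram distributions, can be illustrated with a distance-dependent blend toward a uniform haze luminance. This Beer-Lambert-style attenuation is an assumption of one plausible realization; the `extinction` and `haze_level` parameters are hypothetical.

```python
import numpy as np

def apply_haze(image: np.ndarray, depth: np.ndarray,
               extinction: float = 0.05,
               haze_level: float = 180.0) -> np.ndarray:
    """Blend each pixel toward a uniform haze luminance with distance,
    compressing local contrast (and hence the local histogram) in far regions."""
    transmission = np.exp(-extinction * depth)  # attenuation along the line of sight
    hazed = image * transmission + haze_level * (1.0 - transmission)
    return np.clip(hazed, 0.0, 255.0)

# Usage: two pixels of equal intensity, one near and one far.
image = np.array([[100.0, 100.0]])
depth = np.array([[0.0, 50.0]])
out = apply_haze(image, depth)  # the far pixel drifts toward haze_level
```

Far regions end up visibly washed out relative to near regions, which is exactly the monocular haze cue the description relies on.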
In some cases, the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are applied based on the a priori data and an actual position and orientation of the non-visible spectrum sensor at the capture time.

In some cases, at least some of the 3D data is obtained from a depth sensor that is distinct from the non-visible spectrum sensor, based on one or more readings by the depth sensor, and wherein one or more of the depth cues are generated based on the readings.

In some cases, the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are generated prior to the capture time, based on the a priori data and an expected position and orientation of the non-visible spectrum sensor at the capture time.

In some cases, the processing circuitry is further configured to: record the non-visible spectrum image to provide a recording of the non-visible spectrum image; wherein at least one of the depth cues is applied to the non-visible spectrum image as recorded.

In some cases, at least some of the 3D data is based on readings that are obtained by an active 3D scanner from at least one scan of one or more of the regions within the scene, and one or more of the depth cues are generated based on the at least some of the 3D data.

In some cases, the active 3D scanner is a Light Detection and Ranging (LiDAR).

In some cases, the depth cues include one or more of the following: (a) one or more shadows; (b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size; (c) haze; or (d) perspective.

In some cases, at least some of the shadows are generated by one or more virtual light sources.

In some
cases, the processing circuitry is further configured to: select one or more selected light sources of the virtual light sources.

In some cases, the processing circuitry is further configured to: for at least one selected light source of the selected light sources, define one or more parameters of the at least one selected light source, the one or more parameters including a position and an orientation of the at least one selected light source.

In some cases, for at least one selected light source of the selected light sources, one or more parameters of the at least one selected light source are defined by a user of the system, the one or more parameters including a position and an orientation of the at least one selected light source.

In some cases, one or more selected light sources of the virtual light sources are selected by a user of the system, and the user defines one or more parameters of the selected light sources, the one or more parameters including a position and an orientation of each selected light source of the selected light sources.

In some cases, at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.

In some cases, for each virtual object of the virtual objects that is applied to the non-visible spectrum image, an actual size of the respective virtual object is based on the known size of a physical object to which the respective virtual object corresponds.

In some cases, the virtual objects in the enhanced depth perception image are distinguishable from real objects in the enhanced depth perception image.

In some cases, the haze is applied to the non-visible spectrum image by altering one or more local characteristics of the non-visible spectrum image.

In some cases, the local characteristics include one or more of the following: (a) a Modulation Transfer Function (MTF) of the non-visible spectrum image, (b) one or more
histogram distributions of the non-visible spectrum image, or (c) a change in a hue of the non-visible spectrum image.

In some cases, the perspective is applied to the non-visible spectrum image by including one or more contour lines in the enhanced depth perception image.

In accordance with a fourth aspect of the presently disclosed subject matter, there is provided a system for enhancing a depth perception of a non-visible spectrum image of a scene, the system comprising: at least one non-visible spectrum sensor configured to capture the non-visible spectrum image at a capture time, the non-visible spectrum image including one or more objects; and a processing circuitry configured to: classify one or more of the objects without deriving three-dimensional (3D) data from the non-visible spectrum image, giving rise to one or more classified objects; generate one or more depth cues based on one or more parameters of the classified objects; apply the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and display the enhanced depth perception image.
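The perspective cue applied by including contour lines can be illustrated by marking pixels that lie near evenly spaced iso-depth levels of the independently obtained depth data. This is a hedged sketch under that assumption; `depth_contour_mask` and its `interval`/`thickness` parameters are hypothetical.

```python
import numpy as np

def depth_contour_mask(depth: np.ndarray,
                       interval: float = 10.0,
                       thickness: float = 0.5) -> np.ndarray:
    """Mark pixels lying within `thickness` of an iso-depth level spaced
    `interval` apart; overlaying these lines gives a simple perspective cue."""
    residual = depth % interval                     # distance above the lower level
    near = np.minimum(residual, interval - residual)  # distance to the nearest level
    return near < thickness

# Usage: a row of depths crossing the 0 and 10 iso-levels.
depth = np.array([[0.0, 5.0, 10.0, 10.2]])
mask = depth_contour_mask(depth, interval=10.0, thickness=0.5)
```

Pixels flagged by the mask could be darkened or brightened in the enhanced depth perception image to trace the scene's depth contours.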
In some cases, the depth cues include one or more of the following: (a) one or more shadows; (b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size; (c) haze; or (d) perspective.

In some cases, at least some of the shadows are generated by one or more virtual light sources.

In some cases, at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.

In accordance with a fifth aspect of the presently disclosed subject matter, there is provided a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code executable by processing circuitry of a computer to perform a method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising: capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor; obtaining three-dimensional (3D) data of one or more regions within the scene independently of the non-visible spectrum image, the one or more regions comprising part or all of the scene; generating one or more depth cues based on the 3D data; applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and displaying the enhanced depth perception image.

In accordance with a sixth aspect of the presently disclosed subject matter, there is provided a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code executable by processing circuitry of a computer to perform a method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising: capturing the non-visible spectrum image at a capture time, by at
least one non-visible spectrum sensor, the non-visible spectrum image including one or more objects; classifying one or more of the objects without deriving three-dimensional (3D) data from the non-visible spectrum image, giving rise to one or more classified objects; generating one or more depth cues based on one or more parameters associated with the classified objects; applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and displaying the enhanced depth perception image.
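For the classification-based aspect, one parameter of a classified object that supports a depth cue is its known physical size: under a pinhole-camera model, the distance to an object of known size follows from its apparent extent in pixels. The helper below is an illustrative assumption about one plausible use of such parameters, not the disclosed method.

```python
def estimate_distance(real_size_m: float,
                      pixel_extent: float,
                      focal_length_px: float) -> float:
    """Pinhole-camera estimate: an object of known physical size `real_size_m`
    that spans `pixel_extent` pixels lies at distance f * S / s."""
    return focal_length_px * real_size_m / pixel_extent

# Usage: a 2 m object spanning 100 px with a 1000 px focal length sits ~20 m away.
distance = estimate_distance(2.0, 100.0, 1000.0)
```

Such a distance estimate could then drive the size, shadow length, or haze level of the cue applied around the classified object.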
BRIEF DESCRIPTION OF THE DRAWINGS

In order to understand the presently disclosed subject matter and to see how it may be carried out in practice, the subject matter will now be described, by way of non-limiting examples only, with reference to the accompanying drawings, in which:

Fig. 1 is a block diagram schematically illustrating an example of a system for enhancing a depth perception of a non-visible spectrum image of a scene, in accordance with the presently disclosed subject matter;

Fig. 2 is a flowchart illustrating a first example of a sequence of operations for enhancing a depth perception of a non-visible spectrum image of a scene, in accordance with the presently disclosed subject matter;

Fig. 3 is a flowchart illustrating a second example of a sequence of operations for enhancing a depth perception of a non-visible spectrum image of a scene, in accordance with the presently disclosed subject matter;

Fig. 4 is a schematic diagram illustrating a schematic optical instrument for displaying an enhanced depth perception image of the scene, in accordance with the presently disclosed subject matter;

Fig. 5 is a schematic diagram illustrating an exploded view of an enhanced eyepiece of the schematic optical instrument, in accordance with the presently disclosed subject matter;

Fig. 6 is a schematic diagram illustrating another perspective of the schematic optical instrument, in accordance with the presently disclosed subject matter; and

Fig. 7 is an optical diagram illustrating optical components of the enhanced eyepiece, in accordance with the presently disclosed subject matter.

Claims (48)

CLAIMS:

1. A method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising:
capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor;
obtaining three-dimensional (3D) data of one or more regions within the scene independently of the non-visible spectrum image, the one or more regions comprising part or all of the scene;
generating one or more depth cues based on the 3D data;
applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and
displaying the enhanced depth perception image.
2. The method of claim 1, wherein the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are generated based on the a priori data and an actual position and orientation of the non-visible spectrum sensor at the capture time.
3. The method of claim 1, wherein at least some of the 3D data is obtained from a depth sensor that is distinct from the non-visible spectrum sensor, based on one or more readings by the depth sensor, and wherein one or more of the depth cues are generated based on the readings.
4. The method of claim 1, wherein the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are generated prior to the capture time, based on the a priori data and an expected position and orientation of the non-visible spectrum sensor at the capture time.
5. The method of claim 1, further comprising:
recording the non-visible spectrum image to provide a recording of the non-visible spectrum image;
wherein at least one of the depth cues is applied to the non-visible spectrum image as recorded.
6. The method of claim 1, wherein at least some of the 3D data is based on readings that are obtained by an active 3D scanner from at least one scan of one or more of the regions within the scene, and wherein one or more of the depth cues are generated based on the at least some of the 3D data.
7. The method of claim 6, wherein the active 3D scanner is a Light Detection and Ranging (LiDAR).
8. The method of claim 1, wherein the depth cues include one or more of the following:
(a) one or more shadows;
(b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size;
(c) haze; or
(d) perspective.
9. The method of claim 8, wherein at least some of the shadows are generated by one or more virtual light sources.
10. The method of claim 9, further comprising:
selecting one or more selected light sources of the virtual light sources.
11. The method of claim 10, further comprising:
for at least one selected light source of the selected light sources, defining one or more parameters of the at least one selected light source, the one or more parameters including a position and an orientation of the at least one selected light source.
12. The method of claim 10, wherein, for at least one selected light source of the selected light sources, one or more parameters of the at least one selected light source are defined by a user, the one or more parameters including a position and an orientation of the at least one selected light source.
13. The method of claim 9, wherein one or more selected light sources of the virtual light sources are selected by a user, and wherein the user defines one or more parameters of the selected light sources, the one or more parameters including a position and an orientation of each selected light source of the selected light sources.
14. The method of claim 8, wherein at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.
15. The method of claim 8, wherein, for each virtual object of the virtual objects that is applied to the non-visible spectrum image, an actual size of the respective virtual object is based on the known size of a physical object to which the respective virtual object corresponds.
16. The method of claim 8, wherein the virtual objects in the enhanced depth perception image are distinguishable from real objects in the enhanced depth perception image.
17. The method of claim 8, wherein the haze is applied to the non-visible spectrum image by altering one or more local characteristics of the non-visible spectrum image.
18. The method of claim 17, wherein the local characteristics include one or more of the following: (a) a Modulation Transfer Function (MTF) of the non-visible spectrum image, (b) one or more histogram distributions of the non-visible spectrum image, or (c) a change in a hue of the non-visible spectrum image.
19. The method of claim 8, wherein the perspective is applied to the non-visible spectrum image by including one or more contour lines in the enhanced depth perception image.
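The contour lines of claim 19 could, for instance, be iso-depth lines: pixels where the depth crosses a fixed interval boundary are marked and then drawn over the image. A hedged sketch, assuming a dense depth map is available and using an illustrative contour spacing:

```python
def contour_mask(depth, interval=50.0):
    # Mark pixels lying on iso-depth contour lines: a pixel is on a contour
    # when the quantized depth band changes between it and its right or
    # lower neighbour. `interval` is the assumed spacing between contours.
    rows, cols = len(depth), len(depth[0])
    band = [[int(depth[r][c] // interval) for c in range(cols)] for r in range(rows)]
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and band[r][c] != band[r][c + 1]:
                mask[r][c] = True
            if r + 1 < rows and band[r][c] != band[r + 1][c]:
                mask[r][c] = True
    return mask

# The depth jumps from 10 to 60 between the second and third columns,
# so the contour falls on the second column.
depth = [[10.0, 10.0, 60.0],
         [10.0, 10.0, 60.0]]
mask = contour_mask(depth)
```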
20. A method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising:
capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor, the non-visible spectrum image including one or more objects;
classifying one or more of the objects without deriving three-dimensional (3D) data from the non-visible spectrum image, giving rise to one or more classified objects;
generating one or more depth cues based on one or more parameters associated with the classified objects;
applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and
displaying the enhanced depth perception image.
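The classification-based path of claim 20 derives no 3D data from the image itself; instead, a classified object's known physical size and its apparent size in pixels yield a range estimate via the pinhole model, from which depth cues can be generated. A sketch under stated assumptions: the class table, function name, and focal length are hypothetical illustrations.

```python
# Assumed physical heights (metres) for a few illustrative object classes.
KNOWN_HEIGHTS_M = {"person": 1.7, "car": 1.5, "door": 2.0}

def estimate_distance(class_name, apparent_height_px, focal_length_px):
    # Pinhole-model range estimate: distance d = f * H_real / h_pixels.
    # Uses only the classified object's known size and its apparent size
    # in the image -- no 3D data is derived from the image itself.
    real_height = KNOWN_HEIGHTS_M[class_name]
    return focal_length_px * real_height / apparent_height_px

# A person imaged 170 px tall with an assumed 1000 px focal length
# is estimated to be about 10 m away.
d = estimate_distance("person", 170.0, 1000.0)
```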
21. The method of claim 20, wherein the depth cues include one or more of the following:
(a) one or more shadows;
(b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size;
(c) haze; or
(d) perspective.
22. The method of claim 21, wherein at least some of the shadows are generated by one or more virtual light sources.
23. The method of claim 21, wherein at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.
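Shadow generation from a light source of known position and orientation (claims 22, 23) reduces to an occlusion test along the light direction. A minimal 1-D sketch over a terrain profile, assuming the light comes from the left at a fixed slope; real renderers would use full 2-D shadow mapping, and all names and values here are illustrative:

```python
def shadowed(profile, slope):
    # profile: terrain heights sampled along the light direction
    # (light on the index-0 side). A cell is in shadow if the ray
    # from any earlier, taller cell passes above it.
    n = len(profile)
    shadow = [False] * n
    for i in range(n):
        for j in range(i):
            # Height of the light ray grazing cell j when it reaches cell i.
            ray_height = profile[j] - slope * (i - j)
            if ray_height > profile[i]:
                shadow[i] = True
                break
    return shadow

# A tall obstacle at index 1 shadows the flat ground behind it.
flags = shadowed([0.0, 5.0, 0.0, 0.0, 0.0], slope=1.0)
```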
24. A system for enhancing a depth perception of a non-visible spectrum image of a scene, the system comprising:
at least one non-visible spectrum sensor configured to capture the non-visible spectrum image at a capture time; and
a processing circuitry configured to:
obtain three-dimensional (3D) data of one or more regions within the scene independently of the non-visible spectrum image, the one or more regions comprising part or all of the scene;
generate one or more depth cues based on the 3D data;
apply the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and
display the enhanced depth perception image.
25. The system of claim 24, wherein the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are applied based on the a priori data and an actual position and orientation of the non-visible spectrum sensor at the capture time.
26. The system of claim 24, wherein at least some of the 3D data is obtained from a depth sensor that is distinct from the non-visible spectrum sensor, based on one or more readings by the depth sensor, and wherein one or more of the depth cues are generated based on the readings.
27. The system of claim 24, wherein the 3D data includes a priori data associated with coordinates of a fixed coordinate system established in space, the a priori data being available prior to the capture time, and wherein one or more of the depth cues are generated prior to the capture time, based on the a priori data and an expected position and orientation of the non- visible spectrum sensor at the capture time.
28. The system of claim 24, wherein the processing circuitry is further configured to:
record the non-visible spectrum image to provide a recording of the non-visible spectrum image;
wherein at least one of the depth cues is applied to the non-visible spectrum image as recorded.
29. The system of claim 24, wherein at least some of the 3D data is based on readings that are obtained by an active 3D scanner from at least one scan of one or more of the regions within the scene, and wherein one or more of the depth cues are generated based on the at least some of the 3D data.
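Using active-scanner readings (claim 29) typically means projecting the scanner's 3D returns into the image plane to obtain a sparse depth map from which cues are generated. A hedged sketch assuming points are already expressed in the camera frame, with pinhole projection and the nearest return winning each pixel; frame conventions and parameter names are illustrative.

```python
def points_to_depth(points, f, width, height):
    # Project scanner returns (x, y, z), camera frame, to a sparse depth
    # map. Pinhole projection with focal length `f` (pixels) and principal
    # point at the image centre; nearest return wins per pixel.
    depth = [[None] * width for _ in range(height)]
    cx, cy = width / 2.0, height / 2.0
    for x, y, z in points:
        if z <= 0:  # behind the sensor
            continue
        u = int(round(f * x / z + cx))
        v = int(round(f * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            if depth[v][u] is None or z < depth[v][u]:
                depth[v][u] = z
    return depth

# Two returns along the optical axis land on the centre pixel;
# the nearer one (z = 4) wins.
dm = points_to_depth([(0.0, 0.0, 4.0), (0.0, 0.0, 9.0)], f=100.0, width=8, height=8)
```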
30. The system of claim 29, wherein the active 3D scanner is a Light Detection and Ranging (LiDAR) scanner.
31. The system of claim 24, wherein the depth cues include one or more of the following:
(a) one or more shadows;
(b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size;
(c) haze; or
(d) perspective.
32. The system of claim 31, wherein at least some of the shadows are generated by one or more virtual light sources.
33. The system of claim 32, wherein the processing circuitry is further configured to: select one or more selected light sources of the virtual light sources.
34. The system of claim 33, wherein the processing circuitry is further configured to: for at least one selected light source of the selected light sources, define one or more parameters of the at least one selected light source, the one or more parameters including a position and an orientation of the at least one selected light source.
35. The system of claim 33, wherein, for at least one selected light source of the selected light sources, one or more parameters of the at least one selected light source are defined by a user of the system, the one or more parameters including a position and an orientation of the at least one selected light source.
36. The system of claim 32, wherein one or more selected light sources of the virtual light sources are selected by a user of the system, and wherein the user defines one or more parameters of the selected light sources, the one or more parameters including a position and an orientation of each selected light source of the selected light sources.
37. The system of claim 31, wherein at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.
38. The system of claim 31, wherein, for each virtual object of the virtual objects that is applied to the non-visible spectrum image, an actual size of the respective virtual object is based on the known size of a physical object to which the respective virtual object corresponds.
39. The system of claim 31, wherein the virtual objects in the enhanced depth perception image are distinguishable from real objects in the enhanced depth perception image.
40. The system of claim 31, wherein the haze is applied to the non-visible spectrum image by altering one or more local characteristics of the non-visible spectrum image.
41. The system of claim 40, wherein the local characteristics include one or more of the following: (a) a Modulation Transfer Function (MTF) of the non-visible spectrum image, (b) one or more histogram distributions of the non-visible spectrum image, or (c) a change in a hue of the non-visible spectrum image.
42. The system of claim 31, wherein the perspective is applied to the non-visible spectrum image by including one or more contour lines in the enhanced depth perception image.
43. A system for enhancing a depth perception of a non-visible spectrum image of a scene, the system comprising:
at least one non-visible spectrum sensor configured to capture the non-visible spectrum image at a capture time, the non-visible spectrum image including one or more objects; and
a processing circuitry configured to:
classify one or more of the objects without deriving three-dimensional (3D) data from the non-visible spectrum image, giving rise to one or more classified objects;
generate one or more depth cues based on one or more parameters of the classified objects;
apply the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and
display the enhanced depth perception image.
44. The system of claim 43, wherein the depth cues include one or more of the following:
(a) one or more shadows;
(b) one or more virtual objects that are based on a corresponding one or more physical objects that are of a known size;
(c) haze; or
(d) perspective.
45. The system of claim 44, wherein at least some of the shadows are generated by one or more virtual light sources.
46. The system of claim 44, wherein at least some of the shadows are generated based on a known position and orientation of an existing light source that illuminates the scene at the capture time.
47. A non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code executable by processing circuitry of a computer to perform a method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising:
capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor;
obtaining three-dimensional (3D) data of one or more regions within the scene independently of the non-visible spectrum image, the one or more regions comprising part or all of the scene;
generating one or more depth cues based on the 3D data;
applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and
displaying the enhanced depth perception image.
48. A non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code executable by processing circuitry of a computer to perform a method for enhancing a depth perception of a non-visible spectrum image of a scene, the method comprising:
capturing the non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor, the non-visible spectrum image including one or more objects;
classifying one or more of the objects without deriving three-dimensional (3D) data from the non-visible spectrum image, giving rise to one or more classified objects;
generating one or more depth cues based on one or more parameters associated with the classified objects;
applying the depth cues to the non-visible spectrum image to generate an enhanced depth perception image having an enhanced depth perception relative to the non-visible spectrum image; and
displaying the enhanced depth perception image.

For the Applicants,
Shalev, Jencmen & Co.
Dan Davidson
Advocate, Patent Attorney
IL279342A 2020-12-09 2020-12-09 Method and systems for enhancing depth perception of a non-visible spectrum image of a scene IL279342A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
IL279342A IL279342A (en) 2020-12-09 2020-12-09 Method and systems for enhancing depth perception of a non-visible spectrum image of a scene
EP21902874.3A EP4260553B1 (en) 2020-12-09 2021-12-09 Method, system and computer program for enhancing depth perception of a non-visible spectrum image of a scene
PCT/IL2021/051467 WO2022123570A1 (en) 2020-12-09 2021-12-09 Methods and systems for enhancing depth perception of a non-visible spectrum image of a scene
US18/331,203 US12530791B2 (en) 2020-12-09 2023-06-08 Methods and systems for enhancing depth perception of a non-visible spectrum image of a scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL279342A IL279342A (en) 2020-12-09 2020-12-09 Method and systems for enhancing depth perception of a non-visible spectrum image of a scene

Publications (1)

Publication Number Publication Date
IL279342A true IL279342A (en) 2022-07-01

Family

ID=82608791

Family Applications (1)

Application Number Title Priority Date Filing Date
IL279342A IL279342A (en) 2020-12-09 2020-12-09 Method and systems for enhancing depth perception of a non-visible spectrum image of a scene

Country Status (1)

Country Link
IL (1) IL279342A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030014212A1 (en) * 2001-07-12 2003-01-16 Ralston Stuart E. Augmented vision system using wireless communications
EP2031559A1 (en) * 2007-08-29 2009-03-04 ETH Zürich Augmented visualization in two-dimensional images
US20150208054A1 (en) * 2012-10-01 2015-07-23 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for generating a depth cue
US9995936B1 (en) * 2016-04-29 2018-06-12 Lockheed Martin Corporation Augmented reality systems having a virtual image overlaying an infrared portion of a live scene
US20190012828A1 (en) * 2017-07-07 2019-01-10 Electronics And Telecommunications Research Institute Virtual content-mixing method for augmented reality and apparatus for the same
US20190360810A1 (en) * 2015-11-13 2019-11-28 FLIR Belgium BVBA Video sensor fusion and model based virtual and augmented reality systems and methods
CN111145122A (en) * 2019-12-27 2020-05-12 常州工学院 Method for removing haze from single image


Similar Documents

Publication Publication Date Title
US11948282B2 (en) Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data
US11115633B2 (en) Method and system for projector calibration
US10701332B2 (en) Image processing apparatus, image processing method, image processing system, and storage medium
Mori et al. A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects
US20250166323A1 (en) Live in-camera overlays
KR101370356B1 (en) Stereoscopic image display method and apparatus, method for generating 3D image data from a 2D image data input and an apparatus for generating 3D image data from a 2D image data input
US10762649B2 (en) Methods and systems for providing selective disparity refinement
US10659750B2 (en) Method and system for presenting at least part of an image of a real object in a view of a real environment, and method and system for selecting a subset of a plurality of images
US20130335535A1 (en) Digital 3d camera using periodic illumination
KR101618776B1 (en) Method for Enhancing 3-Dimensional Depth Image
US9990738B2 (en) Image processing method and apparatus for determining depth within an image
JP2010510569A (en) System and method of object model fitting and registration for transforming from 2D to 3D
US20150379720A1 (en) Methods for converting two-dimensional images into three-dimensional images
JP2010510573A (en) System and method for synthesizing a three-dimensional image
JP2010522469A (en) System and method for region classification of 2D images for 2D-TO-3D conversion
CN105611267B (en) Merging of real world and virtual world images based on depth and chrominance information
JPWO2020075252A1 (en) Information processing equipment, programs and information processing methods
CN112073640B (en) Panoramic information acquisition pose acquisition method, device and system
US12530791B2 (en) Methods and systems for enhancing depth perception of a non-visible spectrum image of a scene
IL279342A (en) Method and systems for enhancing depth perception of a non-visible spectrum image of a scene
KR101849696B1 (en) Method and apparatus for obtaining informaiton of lighting and material in image modeling system
TWI768231B (en) Information processing device, recording medium, program product, and information processing method
IL315453A (en) Methods and systems for increasing the perception of depth in images in the invisible spectrum of a scene
JP2022059879A (en) Image processing system, image processing method, and program
EP3062291A1 (en) An image processing method and apparatus for determining depth within an image