IL298239B2 - Imaging system and method - Google Patents

Imaging system and method

Info

Publication number
IL298239B2
IL298239B2 IL298239A IL29823922A
Authority
IL
Israel
Prior art keywords
light
radiation
polarizer
polarization
aperture
Prior art date
Application number
IL298239A
Other languages
Hebrew (he)
Other versions
IL298239A (en)
IL298239B1 (en)
Inventor
Luria Gilad
Miklatzky Efraim
Original Assignee
Scenera Tech Ltd
Luria Gilad
Miklatzky Efraim
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Scenera Tech Ltd, Luria Gilad, Miklatzky Efraim filed Critical Scenera Tech Ltd
Priority to IL298239A priority Critical patent/IL298239B2/en
Publication of IL298239A publication Critical patent/IL298239A/en
Priority to PCT/IL2023/051175 priority patent/WO2024105664A1/en
Priority to EP23810179.4A priority patent/EP4551902B1/en
Priority to IL320027A priority patent/IL320027A/en
Priority to JP2025526239A priority patent/JP2025540911A/en
Priority to KR1020257019015A priority patent/KR20250107890A/en
Priority to CN202380071981.6A priority patent/CN120112769A/en
Publication of IL298239B1 publication Critical patent/IL298239B1/en
Publication of IL298239B2 publication Critical patent/IL298239B2/en
Priority to US19/081,428 priority patent/US12429325B2/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 9/00 Measuring optical phase difference; Determining degree of coherence; Measuring optical wavelength
    • G01J 9/02 Measuring optical phase difference; Determining degree of coherence; Measuring optical wavelength by interferometric methods
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 9/00 Measuring instruments characterised by the use of optical techniques
    • G01B 9/02 Interferometers
    • G01B 9/02015 Interferometers characterised by the beam path configuration
    • G01B 9/02032 Interferometers characterised by the beam path configuration generating a spatial carrier frequency, e.g. by creating lateral or angular offset between reference and object beam
    • G01B 9/02041 Interferometers characterised by particular imaging or detection techniques
    • G01B 9/02083 Interferometers characterised by particular signal processing and presentation
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G01S 17/10 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/483 Details of pulse systems
    • G01S 7/486 Receivers
    • G01S 7/4865 Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01B 2290/00 Aspects of interferometers not specifically covered by any group under G01B9/02
    • G01B 2290/70 Using polarization in the interferometer

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Description

IMAGING SYSTEM AND METHOD

TECHNOLOGICAL FIELD

The present invention is generally in the field of object imaging, and particularly relates to optical imaging of surface contours of objects.
BACKGROUND

This section intends to provide background information concerning the present application, which is not necessarily prior art. Noncontact optical measurement techniques use electromagnetic (EM) wave (e.g., light/radiation) signals to acquire three-dimensional (3D) surface profile (contour) information of an inspected object. Many 3D imaging technologies have recently been developed to provide full surface profile image data of inspected objects. Typical 3D shape measurement methods are based on stereo vision, light field/plenoptics, structured light, time-of-flight (TOF), digital fringe projection (DFP), and interferometric imaging techniques. 3D imaging techniques have many commercial applications, including, inter alia, metrology/3D object modelling, virtual/augmented reality, remote sensing, medical diagnostics, biometrics and suchlike. It is desirable to provide 3D imaging implementations that are fast, portable, compact, efficient and consume low power. Structured-light imaging techniques can be used to measure precise 3D shapes at sub-millimeter resolutions. However, structured-light 3D imaging tends to be complicated due to the need for time-varying irradiation in different axial directions and focusing of the patterned light, resulting in relatively long 3D measurement durations. These techniques are also limited in spatial resolution by the grid of points for which the image can be calculated. TOF range-imaging (e.g., LIDAR) techniques can be used to measure distances to various points on the surface of an inspected object, by measuring the time required for light to travel between the object and the light source and constructing a "distance map" image of the object based on the measured distances. 
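The round-trip principle behind TOF ranging can be sketched in a few lines (an illustrative computation only, not part of the disclosed system; the example round-trip time is made up):

```python
# Time-of-flight ranging: distance from the round-trip travel time of a pulse.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Distance to a surface point from the measured round-trip time (seconds)."""
    return C * round_trip_s / 2.0  # the pulse travels to the object and back

print(tof_distance(10e-9))  # a 10 ns round trip -> ~1.499 m
```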
However, TOF is not ideal for 3D modeling, as it requires measuring round-trip times of a plurality of different beams to image surface areas of the inspected objects. Both structured light and TOF techniques require a unique illumination source, and the images obtained by these techniques mainly provide information on the distance of the object but are typically unable to determine visible-range grayscale/color levels of the imagery data. In order to create a grayscale 3D image, let alone a color 3D image, the information arising from these techniques has to be fused with information from a standard 2D color sensor. Stereo vision requires a plurality of cameras located at different locations to obtain accurate 3D object information. Plenoptic imaging requires complex algorithms and dedicated computer hardware for 3D image reconstruction, and spatial resolution is often reduced. Holograms are constructed by recording interference patterns of coherent reference and object reflection beams. The inspected object is typically irradiated with laser light, and interference patterns between reference beams and beams reflected from the object are recorded. Holographic interferometry aims to obtain phase information from a single image, which typically requires complex and relatively sophisticated optical imaging equipment. US Patent Publication No. 
2018/164438 discloses a method for providing distance information of a scene with a time-of-flight camera, comprising the steps of emitting a modulated light pulse towards the scene, receiving reflections of the modulated light pulse from the scene, evaluating time-of-flight information for the received reflections of the modulated light pulse, and deriving distance information from the time-of-flight information for the received reflections, whereby a spread spectrum signal is applied to a base frequency of the modulation of the light pulse, and the time-of-flight information is evaluated under consideration of the spread spectrum signal applied to the base frequency of the modulation of the light pulse. Also disclosed is a time-of-flight camera for providing distance information from a scene, whereby the time-of-flight camera performs the above method. The holographic interferometer disclosed in US Patent Publication No. 2020/1417 comprises at least one imaging device capturing an interference pattern created by at least two light beams, and at least one aperture located in an optical path of at least one light beam of the at least two light beams, wherein the at least one aperture is located away from an axis of the at least one light beam, thus transmitting a subset of the at least one light beam collected at an angle range. US Patent Publication No. 2017/201727 discloses a light-field imaging system and a method for generating light-field image data. The system comprises an imaging lens unit, a detector array, and a polychromatic patterned filter located in an optical path of collected light, at an intermediate plane between the lens unit and the detector array. 
The method comprises: acquiring image data of a region of interest by passing input light coming from said region of interest through said imaging lens unit and said polychromatic patterned filter to be detected by said detector array to generate corresponding image data; and processing said image data to determine light components passing through different regions of said polychromatic patterned filter corresponding to different colors and different parts of the region of interest to provide light-field image data of said region of interest. US Patent Publication No. 2005/0007603 discloses a method of wavefront analysis including applying a transform to the wavefront, applying a plurality of different phase changes to the transformed wavefront, and obtaining a plurality of intensity maps, wherein the plurality of different phase changes are applied to a region of the transformed wavefront corresponding to a shape of the light source. US Patent Publication No. 2020/278257 discloses an optical detection system for detecting data on the optical mutual coherence function of an input field. The system comprises an encoder having similar unit cells, and an array of sensor cells located at a distance downstream of said unit cells with respect to a general direction of propagation of input light. The array defines a plurality of sub-array unit cells, each sub-array corresponding to a unit cell of the encoder, and each sub-array comprising a predetermined number M of sensor elements. The encoder applies predetermined modulation to input light collected by the system, such that each unit cell of said encoder directs a portion of the collected input light incident thereon onto the sub-array unit cell corresponding therewith and one or more neighboring sub-array unit cells within a predetermined proximity region. The number M is determined in accordance with a predetermined number of sub-array unit cells within the proximity region.
GENERAL DESCRIPTION

The 3D imaging techniques commonly used nowadays typically require dedicated illumination source(s), and oftentimes expensive equipment and/or substantial computational resources, for construction of a 3D model of the imaged object(s). Thus, 3D imaging implementations available heretofore are typically not optimal for use with regular off-the-shelf imaging devices (e.g., digital cameras, smart devices, tablets, etc.) that are so common and readily available nowadays for average households and businesses. There is thus a need for fast, portable, compact, efficient and low-power 3D imaging techniques that can be used to implement reliable 3D imaging with regular portable, or non-portable, devices e.g., equipped with optical imagers/cameras and processing means. The present disclosure provides 3D imaging techniques that can be used to construct relatively simple and inexpensive 3D imaging systems. In a broad aspect, the present application discloses techniques usable for determining the optical path difference (OPD) between different rays of light/radiation (or alternatively, wavefronts) emitted from object(s) imaged by an optical system. The OPD is determined by analyzing measured energy/intensity distribution(s) of components of the light/radiation received from the object via at least two partial apertures with different (e.g., linear) polarization orientations, and determining a phase shift between the light/radiation components received from the at least two partial apertures based on the measured energy/intensity distribution(s) and known properties of the at least two partial apertures. 
This is achieved in some embodiments by passing the light/radiation rays from the object through an aperture assembly having at least two partial apertures having predetermined geometrical and/or optical properties (e.g., distance(s) between the partial apertures, polarization orientation, etc.), and configured for causing passage of the polarized light/radiation through the at least two partial apertures towards a detector assembly with predefined polarization orientations, with or without phase difference therebetween. An optical assembly e.g., comprising at least one imaging lens, and having pre-determined optical characteristics (e.g., focal length, focal plane location, etc.) is used in some implementations to collect the light/radiation from the aperture assembly and direct it to the detector assembly. The detector assembly is configured in some embodiments to separate from the light/radiation received from the aperture assembly, two or more components having two or more different polarization orientations, and direct these light/radiation components onto a sensor device (e.g., imager) of the detector assembly for measuring their intensities and generating measurement data/signals indicative thereof. In alternative embodiments the optical system is configured to apply time dependent polarization, or time dependent OPD/phase shifts, to the light/radiation propagating towards a sensor device of the detector assembly, for thereby measuring time dependent intensity/energy by said sensor device. Processing means are used to process and analyze the measurement data/signals generated by the sensor device and determine the OPD based thereon, and on the known design parameters of the optical system e.g., polarization orientations of the partial apertures, distance between said partial apertures, optical system focal length, etc. The OPD can then be used to determine the distance between different points on the imaged object and the imaging device. 
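As an illustration of the kind of computation involved (a generic polarization phase-shifting formula, not the patent's specific algorithm), suppose the two orthogonally polarized aperture components pass through a quarter-wave retarder and are then analyzed at four polarizer orientations, so that the measured intensity follows the model I(θ) = A + B·cos(Δφ − 2θ). The wrapped phase, and from it the OPD modulo one wavelength, can be recovered as:

```python
import math

def phase_from_four_angles(i0, i45, i90, i135):
    """Wrapped phase difference between the two aperture beams, from intensities
    measured behind analyzers at 0/45/90/135 degrees, assuming the model
    I(theta) = A + B*cos(dphi - 2*theta)."""
    return math.atan2(i45 - i135, i0 - i90)

def opd_from_phase(dphi, wavelength):
    """Optical path difference (modulo one wavelength) from the wrapped phase."""
    return (dphi % (2 * math.pi)) * wavelength / (2 * math.pi)

# Synthetic self-check: build intensities for a known phase and recover it.
A, B, true_dphi = 1.0, 0.5, 0.8
ivals = [A + B * math.cos(true_dphi - 2 * t)
         for t in (0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4)]
print(phase_from_four_angles(*ivals))  # ≈ 0.8
```

Note the 2π ambiguity: only the OPD modulo one wavelength is recovered this way, which is why known geometric parameters of the system are also needed to resolve absolute distances.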
The unpolarized light/radiation propagating from the imaged object is passed in some embodiments through at least one front linear polarizer element configured to apply a predefined polarization orientation (e.g., 45°) to the light/radiation received from the imaged object, and direct the linearly polarized light passing therethrough towards the aperture assembly. Optionally, but in some applications preferably, the detector assembly comprises one or more bandpass filters configured to enable passage of only certain wavelength ranges of the light/radiation towards the sensor device of the detector assembly. In some embodiments the bandpass filter is positioned in another location, i.e., not in the detector assembly, e.g., in the optical assembly. Alternatively, or additionally, in possible applications the imaged object is illuminated/irradiated by narrow-band light/radiation, and in such applications the one or more bandpass filters are not necessarily required in the detector assembly, or elsewhere in the system. The detector assembly comprises in possible applications an array of polarizer elements having a defined allocation of two or more different polarization orientations, configured to apply two or more different polarization orientations to the light/radiation propagating towards the sensor device of the detector assembly. Each one of the polarizer elements of the array can be configured to direct the polarized light/radiation emerging therefrom onto respective one or more sensing elements (pixels) of the sensing device. The efficiency of the detector assembly can be improved by using a microlens array anterior to the array of polarizer elements. 
The bandpass filter located in the detector assembly (or elsewhere in the system) comprises, in some embodiments, a defined spatial distribution of bandpass filter elements arranged such that the wavelength range (e.g., "Red", "Green", "Blue", …) of each bandpass filter element is at least partially different from the wavelength ranges of the bandpass filter elements horizontally and vertically adjacent thereto. This way, the bandpass filter can be configured to direct light/radiation of certain wavelength ranges and certain polarization orientations received from polarizer elements of the array of polarizer elements onto respective sensor elements of the sensor device. Optionally, the optical system comprises a retarder element (e.g., λ/4) e.g., configured to apply a constant phase shift between the differently polarized light/radiation from the aperture assembly before reaching the array of polarizer elements. Alternatively, the detector assembly comprises an array of retardation elements having a defined arrangement of one, two, or more different retardation elements spatially alternatingly distributed therein with null-retardation elements (i.e., not causing a phase shift) to form a rectangular alternating grid configured for passing (e.g., circularly) polarized light/radiation onto some of the polarizer elements of the array of polarizer elements having certain one or more polarization orientations. The present application also provides techniques for determining OPD between different rays of light/radiation (wavefronts) emitted from object(s) imaged by an optical system utilizing an aperture assembly having two or more partial apertures configured to controllably apply, at respective different time instances, two or more predefined phase differences between the light/radiation passing through at least two of its partial apertures at each time instance. 
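One common realization of such a polarizer-element array (used in commercial polarization image sensors; the patent does not mandate this specific layout) is a repeating 2×2 super-pixel of 0°/45°/90°/135° micro-polarizers. A minimal sketch of splitting a raw frame into the four per-angle images, assuming a hypothetical [90, 45; 135, 0] super-pixel convention:

```python
import numpy as np

def split_polarization_mosaic(raw):
    """Split a raw frame from a 2x2 micro-polarizer mosaic into four
    quarter-resolution images, one per analyzer angle.
    Assumed (hypothetical) layout per super-pixel:
        [ 90  45 ]
        [135   0 ]
    """
    return {
        90:  raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        135: raw[1::2, 0::2],
        0:   raw[1::2, 1::2],
    }

raw = np.arange(16).reshape(4, 4)  # toy 4x4 "sensor" frame
imgs = split_polarization_mosaic(raw)
print(imgs[0].shape)  # (2, 2)
```

A color-plus-polarization device would interleave this pattern with a bandpass (e.g., Bayer-like) mosaic in the manner the paragraph above describes.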
A sensor device can be used to measure the intensity of the light/radiation received from the aperture assembly for each of the two or more different predefined phase differences thereby applied at each time instance and generate measurement data indicative thereof. One or more processors are used in some embodiments to process and analyze the measurement data generated by the sensor device for each one of the two or more predefined phase differences applied by the aperture assembly and determine the OPD based thereon. In some embodiments the one or more processors are configured to determine the OPD from measurement data generated by the sensor device for at least two, or three, or more, different phase differences applied to the light/radiation by the aperture assembly at different time instances. The determined OPD can then be used to determine a distance between the sensor device and the object. An optical assembly (e.g., comprising one or more imaging lenses) is placed in some embodiments between the aperture assembly and the sensor device. Optionally, but in some embodiments preferably, the aperture assembly comprises at least two concentric partial apertures. The one or more processors can be configured to generate control signals for causing the aperture assembly to apply the one or more different predefined phase differences to the light/radiation passing through at least two of its partial apertures at each time instance. The aperture assembly can be configured to apply different predefined phase differences to the light/radiation passing through at least two of its partial apertures by at least one of the following techniques: changing the thickness of a medium placed in at least one of the partial apertures, changing the refraction index of a medium placed in at least one of the partial apertures, and/or changing the curvature of a lens placed in at least one of the partial apertures. 
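For the time-sequential case, a standard phase-shifting formula illustrates how the OPD-related phase can be recovered from intensities measured under three known applied phase offsets (a generic interferometry sketch, not the patent's specific processing):

```python
import math

def three_step_phase(m0, m1, m2):
    """Per-pixel wrapped phase from three intensity measurements taken with
    applied phase offsets of 0, 2*pi/3 and 4*pi/3, assuming the model
    I_k = A + B*cos(phi + 2*pi*k/3)."""
    return math.atan2(math.sqrt(3.0) * (m2 - m1), 2.0 * m0 - m1 - m2)

# Synthetic self-check with a known phase.
A, B, phi = 2.0, 1.0, 1.1
meas = [A + B * math.cos(phi + 2 * math.pi * k / 3) for k in range(3)]
print(three_step_phase(*meas))  # ≈ 1.1
```

Three measurements suffice because each pixel has three unknowns (A, B and phi); more phase steps allow averaging out noise.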
Alternatively, or additionally, a controllable variable polarizer is used in the detector assembly, or in its vicinity, to controllably change at different time instances the polarization of the light/radiation from the at least two partial apertures of the aperture assembly. The intensity/energy of the different polarization orientations of the light/radiation passed through the controllable variable polarizer at the different time instances is measured by the detector assembly, and a phase difference between the light components received from the partial apertures of the aperture assembly is determined based thereon. The determined phase difference is then used to determine the OPD and a distance of the imaged object. According to one aspect there is provided an imaging device comprising a detector assembly having sensor elements configured to measure the intensity of light/radiation thereby received and generate measurement data/signals indicative thereof, an aperture assembly having at least two partial apertures configured to divide light/radiation received from an object and passed through the aperture assembly into at least two portions having different polarization orientations, a polarizer arrangement located between the detector assembly and the aperture assembly, said polarizer arrangement configured to spatially affect at least two different polarization orientations to light/radiation passing therethrough, and a processor configured to process the measurement data/signals from the sensor elements associated with the spatially affected at least two different polarization orientations of said at least two portions of light/radiation from said aperture assembly, and determine based thereon a distance of the object from the imaging device. An optical assembly can be used to direct light/radiation from the aperture assembly to the detector assembly. 
The polarizer arrangement is configured in some embodiments to define a spatial distribution of at least two different polarization orientations. The processor can be accordingly configured to process the measurement data/signals from respective at least two sensor elements of the detector assembly associated with the spatially affected at least two different polarization orientations of the at least two portions of light/radiation from said aperture assembly. Optionally, but in some embodiments preferably, the polarizer arrangement is embedded in the detector assembly. Optionally, the aperture assembly is configured to apply a predefined phase difference between the at least two portions of the light/radiation. The processor can be accordingly configured and operable to determine the distance of the object based on the predefined phase difference and the measurement data/signals. The device comprises in some embodiments at least one polarizer configured to apply a predefined polarization orientation to the light/radiation received by the aperture assembly. Optionally, the device comprises a light/radiation source configured to illuminate the object with light/radiation having a defined polarization orientation. Alternatively, or additionally, the device comprises a light/radiation source configured to illuminate the object with light/radiation having a defined band of wavelengths. The device comprises in some embodiments one or more bandpass filters (e.g., embedded in the detector assembly), each configured to limit passage of light/radiation therethrough towards the sensor elements of the detector assembly to a different range of wavelengths. The one or more bandpass filters can form a defined spatial distribution of bandpass filter elements. In some embodiments the one or more bandpass filters are arranged such that there are no spatially overlapping portions between them. 
The one or more bandpass filter elements form in some embodiments a spatial distribution of at least one bandpass element of a predefined wavelength range and at least one passthrough non-filtering element allowing complete passage of the light/radiation therethrough. The polarizer arrangement comprises in possible embodiments a plurality of polarizer elements configured to form a defined spatial distribution of polarizer elements arranged such that the polarization orientation of at least some of the polarizer elements is different from the polarization orientations of polarizer elements adjacently located thereto. The one or more bandpass filters can be configured to direct light/radiation of certain wavelength ranges onto respective polarizer elements of the polarizer arrangement having certain polarization orientations. Alternatively, the polarizer elements can be configured to direct light/radiation of certain polarization orientations onto respective one or more bandpass filters having certain wavelength ranges. In some embodiments a microlens array is located anterior to the sensor elements of the detector assembly. The device may comprise a retarder element configured to affect a desired polarization to the light/radiation. The device comprises in some embodiments an array of retardation elements arranged in the detector assembly and having a defined arrangement of different retardation elements spatially alternatingly distributed therein. The array of retardation elements can be configured to form a rectangular alternating grid of retardation elements to thereby pass polarized light/radiation onto some of the polarizer elements of the array of polarizer elements having certain one or more polarization orientations. The aperture assembly is configured in some embodiments to controllably apply two or more transitory predefined different phase changes to light/radiation passing through at least one of its partial apertures. 
The processor can be accordingly configured to process and analyze the measurement data generated by the sensor device for the two or more transitory predefined different phase changes controllably applied by the aperture assembly and determine a distance between the sensor device and the object based thereon. Optionally, the processor is configured to use measurement data generated by the sensor device for at least three differently defined phase changes applied to the light/radiation passing through at least one of the apertures of the aperture assembly. The aperture assembly can be configured to apply phase changes to the light/radiation passing through at least one of its apertures by at least one of the following techniques: changing thickness of a medium placed in at least one of the apertures; changing refraction index of a medium placed in at least one of the apertures; changing curvature of a lens forming at least one of the apertures.
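The phase changes produced by the listed mechanisms follow directly from the change in optical path length n·t through the partial aperture; a minimal sketch with made-up numbers (illustrative of the physics, not of the patented device):

```python
import math

def phase_shift_thickness(delta_t_m, n, wavelength_m):
    """Phase shift from changing the thickness of a medium (index n, in air)
    by delta_t: the optical path length changes by (n - 1) * delta_t."""
    return 2 * math.pi * (n - 1.0) * delta_t_m / wavelength_m

def phase_shift_index(delta_n, t_m, wavelength_m):
    """Phase shift from changing the refractive index by delta_n over a fixed
    thickness t: the optical path length changes by delta_n * t."""
    return 2 * math.pi * delta_n * t_m / wavelength_m

# Example (made-up values): a 550 nm thickness change of glass (n = 1.5)
# at a 550 nm wavelength gives a half-wave (pi) phase shift.
print(phase_shift_thickness(550e-9, 1.5, 550e-9))  # ≈ pi
```

Changing the curvature of a lens placed in an aperture acts the same way, by varying the glass thickness traversed across the aperture.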
The polarizer arrangement is configured in possible embodiments to controllably change over time the polarization orientations of light/radiation components transferred therethrough. The processor can be accordingly configured to process and analyze the measurement data generated by the sensor device for two or more different polarization orientations controllably affected by the polarizer arrangement and determine a distance between the sensor device and the object based thereon. Optionally, the processor is configured to further determine one or more colors and/or grey-scale levels from the measurement data for each sensor element of the sensor device. A relative rotation of polarizations of some of the polarization elements is used in possible embodiments to calibrate the device. In one application, a combined rotation of some of the polarization elements is used to manipulate the energy distribution measured by the detector assembly, to thereby improve sensitivity. In another aspect there is provided an imaging method comprising: dividing light/radiation from an object into at least two portions having different polarization orientations; spatially affecting at least two different polarization orientations to the at least two light/radiation portions; measuring intensity of the spatially polarized light/radiation and generating measurement data/signals indicative thereof; and processing the measurement data/signals and determining based thereon a distance of the object. The method comprises in some embodiments affecting a spatial distribution of at least two different polarization orientations to the at least two light/radiation portions, and processing the measurement data/signals associated with the at least two different polarization orientations. The dividing of the light/radiation into at least two portions can be applied to light/radiation components having a predefined polarization orientation. 
The method can comprise radiating the object with light/radiation having a defined polarization orientation. Alternatively, or additionally, the method comprises illuminating the object with light/radiation having a defined band of wavelengths. The method comprises in possible embodiments filtering the light/radiation by one or more bandpass filters having different wavelength ranges. For example, the method can include forming a defined spatial distribution of the filtering of the light/radiation by the one or more bandpass filters. The method comprises in possible embodiments forming a defined distribution of polarization orientations by the spatially affecting of the at least two different polarization orientations to the at least two light/radiation portions. Bandpass filters of the one or more bandpass filters can be associated with polarization orientations of the defined distribution of polarization orientations. Alternatively, polarization orientations of the defined distribution of polarization orientations can be associated with bandpass filters of the one or more bandpass filters. The method can further comprise affecting a phase shift to the light/radiation so as to affect a desired polarization orientation thereto. The method can comprise forming a defined spatial distribution of different phase shifts to the at least two light/radiation portions. In some embodiments the method comprises controllably applying two or more transitory predefined different phase changes to the at least two light/radiation portions, processing and analyzing the measurement data generated for the two or more transitory predefined different phase changes controllably applied by the aperture assembly, and determining the distance of the object based thereon. 
The applying of the phase changes to the light/radiation portions can comprise at least one of the following: changing the thickness of a medium through which at least one of the light/radiation portions is passed; changing the refraction index of a medium through which at least one of the light/radiation portions is passed; or changing the curvature of a lens through which at least one of the light/radiation portions is passed. The method can comprise controllably changing in time polarization orientations of light/radiation components of the at least two light/radiation portions, and processing and analyzing the measurement data generated for the controllably affected two or more different polarization orientations and determining the distance of the object based thereon. The method may further comprise determining one or more colors and/or grey-scale levels from the measurement data. A calibration step can include affecting a relative rotation of polarization orientation to one or more components of the light/radiation from the object. Optionally, a combined rotation of polarization orientations to components of the light/radiation from the object is used to cause a desired energy distribution in the measurement data.
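By way of a non-limiting numerical illustration, the phase change contributed by the thickness-change and refraction-index-change mechanisms listed above follows directly from the change in optical path length. The function names and the example values below are hypothetical and only sketch the relationship; they are not part of the claimed subject matter:

```python
import math

def phase_shift_thickness(delta_t_m: float, n: float, wavelength_m: float) -> float:
    """Phase change (radians) from inserting an extra thickness delta_t of a
    medium with refractive index n into one light path (relative to air, n = 1)."""
    return 2.0 * math.pi * (n - 1.0) * delta_t_m / wavelength_m

def phase_shift_index(t_m: float, delta_n: float, wavelength_m: float) -> float:
    """Phase change (radians) from changing the refractive index of a
    fixed-thickness medium by delta_n (e.g., a liquid-crystal cell)."""
    return 2.0 * math.pi * delta_n * t_m / wavelength_m

# Example: a quarter-wave (pi/2) phase step at 550 nm using a glass plate
# (n = 1.5) requires an extra thickness of lambda / (4 * (n - 1)) = 275 nm.
dt = 550e-9 / (4 * (1.5 - 1.0))
assert abs(phase_shift_thickness(dt, 1.5, 550e-9) - math.pi / 2) < 1e-12
```

A lens curvature change produces an analogous, though spatially varying, path-length change across the aperture.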
BRIEF DESCRIPTION OF THE DRAWINGS In order to understand the invention and to see how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings. Features shown in the drawings are meant to be illustrative of only some embodiments of the invention, unless otherwise implicitly indicated. In the drawings, like reference numerals are used to indicate corresponding parts, in which: Figs. 1A to 1D schematically illustrate an imaging system according to some possible embodiments, wherein Fig. 1A shows a possible implementation of the imaging system, Fig. 1B shows a front polarizer device as used in some embodiments, Fig. 1C shows an aperture assembly as used in some embodiments, and Fig. 1D shows a detector assembly as used in some embodiments; Figs. 2A to 2J show aperture assembly configurations according to some possible embodiments, wherein Fig. 2A shows an aperture assembly having at least one concentric partial aperture associated with a first polarization orientation and at least one non-concentric partial aperture associated with a second polarization orientation, Fig. 2B shows an aperture assembly having at least one non-concentric partial aperture associated with a first polarization orientation and at least one other non-concentric partial aperture associated with a second polarization orientation, Fig. 2C shows an aperture assembly having at least one concentric partial aperture associated with a first polarization orientation and a plurality of non-concentric partial apertures associated with a second polarization orientation, Fig. 2D shows an aperture assembly having at least two non-concentric partial apertures associated with different polarization orientations located at opposing sides of the aperture assembly, Fig. 2E shows an aperture assembly having two half-circle-shaped partial apertures associated with different polarization orientations, Figs. 
2F and 2G show aperture assemblies having at least two concentric partial apertures configured to apply a predefined phase difference to light/radiation passed therethrough, Figs. 2H and 2I show polarizer and aperture assembly arrangements as used in possible embodiments, and Fig. 2J shows a possible setup of the front polarizer and aperture assembly; Figs. 3A to 3E show detector assembly arrangements according to some possible embodiments, wherein Fig. 3A shows a detector assembly utilizing a microlens array, Fig. 3B shows a detector assembly arrangement utilizing a microlens array and a phase retarder, Fig. 3C shows a detector assembly arrangement utilizing a microlens array and an array of phase retarders, Fig. 3D shows a detector assembly arrangement utilizing an array of polarizer elements and a bandpass filter, and Fig. 3E shows a possible arrangement of phase retardation elements within the detector assembly; Figs. 4A to 4C schematically illustrate imaging techniques according to possible embodiments, wherein Fig. 4A shows light/radiation propagation in an imaging setup comprising a polarizer, aperture assembly, optical assembly, and a detector assembly, Fig. 4B shows light/radiation propagation in an imaging setup comprising an aperture assembly and a detector assembly, and Fig. 4C illustrates light/radiation propagation through the aperture assembly used in possible embodiments; Figs. 5A to 5H schematically illustrate an imaging system according to other possible embodiments, wherein Fig. 5A shows a possible implementation of the imaging system, Fig. 5B shows a possible configuration of an aperture assembly used in the imaging system, Fig. 5C shows a possible configuration of a bandpass filter of the imaging system, Fig. 5D shows a possible detector assembly of the imaging system, Fig. 5E shows a possible front polarizer configuration of the imaging system, Fig. 5F shows a possible intermediate polarizer of the imaging system, Fig. 
5G shows a detector assembly wherein groups of polarizer cells of polarization orientations are associated with a certain bandpass filter within a distribution of bandpass filters, and Fig. 5H shows a detector assembly wherein cells of bandpass filters are associated with a certain polarization orientation; Figs. 6A to 6E schematically illustrate an imaging system according to yet other possible embodiments, wherein Fig. 6A illustrates a possible implementation of the imaging system, Fig. 6B shows a front polarizer device as used in some embodiments, Fig. 6C shows an aperture assembly as used in some embodiments, Fig. 6D shows a detector assembly as used in some embodiments, and Fig. 6E demonstrates wavefront propagation in possible embodiments; Figs. 7A to 7C schematically illustrate a controllable phase shifting device according to possible embodiments, wherein Fig. 7A demonstrates variable phase shifting by controllable thickness change, Fig. 7B demonstrates variable phase shifting by controllable refraction index change, and Fig. 7C demonstrates variable phase shifting by controllable lens power/curvature change; Figs. 8A to 8D schematically illustrate an imaging system arrangement according to yet other possible embodiments, wherein Fig. 8A shows a possible implementation of the imaging system, Fig. 8B shows a front polarizer device as used in some embodiments, Fig. 8C shows an aperture assembly as used in some embodiments, and Fig. 8D shows a detector assembly as used in some embodiments; and Figs. 9A and 9B are flowcharts of distance measurement procedures according to some possible embodiments.
DETAILED DESCRIPTION OF EMBODIMENTS One or more specific and/or alternative embodiments of the present disclosure will be described below with reference to the drawings, which are to be considered in all aspects as illustrative only and not restrictive in any manner. It shall be apparent to one skilled in the art that these embodiments may be practiced without some of the specific details set forth herein. In an effort to provide a concise description of these embodiments, not all features or details of an actual implementation are described at length in the specification. Elements illustrated in the drawings are not necessarily to scale, or in correct proportional relationships, which are not critical; emphasis instead is placed upon clearly illustrating the principles of the invention such that persons skilled in the art will be able to make and use optical path difference (OPD)/distance detection setups for determination of three-dimensional coordinates of objects, once they understand the principles of the subject matter disclosed herein. This invention may be provided in other specific forms and embodiments without departing from the essential characteristics described herein. The following disclosure provides techniques for determining distances of object points (also referred to herein as point-sources) based on light/radiation emitted from said point(s) and having said light/radiation pass, following a linear polarizer device, through two or more partial apertures having predetermined geometrical and optical properties (e.g., distances, polarization orientation, surface area, or suchlike) and configured for passing light/radiation therethrough with predefined polarization orientation differences (e.g., 90°) among the portions of the light/radiation passed through the two or more partial apertures, so as to provide a distribution of the intensity/energy of these light/radiation portions for measurement by an optical detector assembly. 
An array (e.g., rectangular grid) of polarizer elements configured with a plurality of predetermined spatially distributed polarization orientations is used in some embodiments to affect a spatial intensity/energy distribution of the light/radiation portions measured at the detector assembly. Processing means can be used to process measurement data/signals generated by the optical detector assembly responsive to the light/radiation portions thereby received e.g., from the array of polarizer elements, determine an intensity/energy distribution of the received light/radiation portions, compute, based on the determined intensity/energy distribution, phase differences between the light/radiation portions passed through the two or more partial apertures, and determine the OPDs of light/radiation passed through the two or more partial apertures based on the determined phase differences and the geometrical/optical properties of the partial apertures. In some embodiments two or more partial apertures are used to controllably apply a plurality of predefined phase differences between light/radiation portions received from the object and passed through the two or more partial apertures at different time instances, and the phase differences between the light/radiation portions passed through the two or more partial apertures are determined by solving an equation system constructed utilizing parameters determined from a corresponding plurality of measurement data/signals acquired by the detector assembly for the different phase differences applied at the different time instances. OPDs between the light/radiation portions passed through the two or more partial apertures can then be determined from the determined phase differences. 
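One conventional way to solve such an equation system, sketched here as a non-limiting illustration, is a least-squares fit of the phase-stepping model I_k = A + B·cos(delta + psi_k), where psi_k are the known applied phase shifts and delta is the unknown phase difference to be recovered. The function name and the synthetic values are hypothetical:

```python
import numpy as np

def recover_phase(intensities, applied_shifts):
    """Least-squares recovery of the unknown phase difference delta from
    intensity samples I_k = A + B*cos(delta + psi_k), where psi_k are the
    known, controllably applied phase shifts."""
    psi = np.asarray(applied_shifts, dtype=float)
    I = np.asarray(intensities, dtype=float)
    # cos(delta + psi) = cos(delta)cos(psi) - sin(delta)sin(psi), so the
    # unknowns [A, B*cos(delta), B*sin(delta)] enter the model linearly.
    M = np.column_stack([np.ones_like(psi), np.cos(psi), -np.sin(psi)])
    a0, a1, a2 = np.linalg.lstsq(M, I, rcond=None)[0]
    return np.arctan2(a2, a1)  # delta, wrapped to (-pi, pi]

# Synthetic check: delta = 0.7 rad, three applied phase shifts.
delta = 0.7
psi = [0.0, np.pi / 2, np.pi]
I = [2.0 + 1.5 * np.cos(delta + p) for p in psi]
assert abs(recover_phase(I, psi) - delta) < 1e-9
```

Three or more distinct applied shifts suffice to determine the three unknowns; additional measurements over-determine the system and reduce noise sensitivity.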
In some embodiments a controllable variable polarizer is used in the optical system for passage therethrough of different portions of light/radiation of different predetermined polarization orientations emerging from points of the object in the field of view of the optical system, at different time instances. The energies/intensities of the different portions of the light/radiation at the different time instances and different predetermined polarization orientations are measured by the detector assembly, and respective phase differences between the light/radiation portions passed through the two or more partial apertures can be then determined by solving an equation system constructed utilizing parameters determined from a corresponding plurality of measurement data/signals acquired by the sensor device of the detector assembly for the different polarization orientations applied at the different time instances. OPDs between the light/radiation portions passed through the two or more partial apertures can then be determined from the determined phase differences. The OPDs determined in the various embodiments disclosed herein can be used to determine distances between points on the object in the field of view of the optical system and the optical system itself. For an overview of several example features, process stages, and principles of the invention, the examples of distance determination setups illustrated schematically and diagrammatically in the figures are intended for 3D imaging purposes. These detection setups are shown as one example implementation that demonstrates a number of features, processes, and principles used for construction of a 3D model/image of one or more surface areas of an imaged object, but they are also useful for other applications and can be made in different variations. 
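In the schemes described above, a recovered phase difference maps to an OPD through the wavelength. The following minimal sketch (function name hypothetical) makes that last conversion step explicit, including the well-known caveat that a single-wavelength phase measurement yields the OPD only modulo one wavelength:

```python
import math

def opd_from_phase(delta_rad: float, wavelength_m: float) -> float:
    """Optical path difference implied by a measured phase difference.
    A single-wavelength measurement yields the OPD only modulo one
    wavelength; absolute ranging needs phase unwrapping or measurements
    at multiple wavelengths."""
    return (delta_rad / (2.0 * math.pi)) * wavelength_m

# A half-cycle (pi) phase difference at 550 nm corresponds to 275 nm of OPD.
assert abs(opd_from_phase(math.pi, 550e-9) - 275e-9) < 1e-15
```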
Therefore, this description will proceed with reference to the shown examples, but with the understanding that the invention recited in the claims below can also be implemented in myriad other ways, once the principles are understood from the descriptions, explanations, and drawings herein. All such variations, as well as any other modifications apparent to one of ordinary skill in the art and useful in 3D imaging applications, may be suitably employed, and are intended to fall within the scope of this disclosure. Reference is now made to Fig. 1A, which is a simplified pictorial illustration of an optical system 100 in accordance with a possible embodiment useful for determining three-dimensional coordinates of at least some portion of object(s) 130 in the field of view (FOV) of said optical system 100. The optical system 100 comprises a front polarizer 101, an aperture assembly 102 which is optionally, and in some embodiments preferably, located at the aperture plane of the optical system 100, an optical assembly 103, an intermediate (e.g., patterned) polarizer 105, a bandpass filter 104, and a sensor device 106 located at an image plane 106f of the optical system 100 with respect to the object 130. The sensor device 106 (e.g., CCD or CMOS imager) is coupled to a processor unit 107 configured and operable to analyze measurement data/signals (also referred to herein as image data/signals) acquired by the sensor device 106 and indicative of intensity/energy of the incident light/radiation received therein. The processor unit 107 comprises one or more processing units (CPU and/or GPU) and memories 107m configured and operable to store program code and other data usable for operating the optical system 100, and for processing and analyzing the measurement data/signals acquired by the sensor device 106. 
The processor unit 107 may be used to present to user(s) of the optical system 100 information associated with the acquired measurement data/signals, and/or the model/image thereby constructed (whether three-dimensional or not), through a display device/system (not shown). The optical assembly 103 (e.g., imaging lens) may comprise several optical elements, and it is designed and constructed in some embodiments such that the image thereby formed on the sensor device 106 is substantially diffraction limited throughout the relevant wavelength range for which the optical system 100 is being used. For example, but without limiting, the wavelength range of the optical system 100 can be defined within a narrowband wavelength range e.g., of up to a few nanometers. Alternatively, in possible embodiments the wavelength range of the optical system 100 can be defined within a broadband range e.g., the visible wavelength range, which is typically defined as the 400nm to 700nm wavelength range. Properties of the bandpass filter 104 can be determined based on the required wavelength range of the optical system 100, and other parameters arising from the physical requirements of the optical system 100 e.g., the coherence length of the light/radiation required for the optical system 100, which is discussed in detail hereinbelow. Optionally, but in some embodiments preferably, the sensor device 106 is a monochrome sensor. The optical system 100 is designed in such embodiments for operation with short wavelength range light/radiation (i.e., wavelength range < 30% of the mid-wavelength of said range), and in such cases the bandpass filter 104 can be located in various positions along the optical path (130t) of the optical system 100. For example, in possible embodiments the bandpass filter 104 is positioned before the optical assembly 103 with respect to the direction of propagation of light/radiation 130t in the system 100. 
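The dependence of the coherence length on the filter bandwidth mentioned above can be illustrated with the standard estimate L_c ≈ λ²/Δλ. This is only an order-of-magnitude sketch (the function name and sample values are illustrative, not taken from the disclosure):

```python
def coherence_length(center_wavelength_m: float, bandwidth_m: float) -> float:
    """Approximate coherence length L_c ~ lambda^2 / delta_lambda for a
    source or bandpass filter of given center wavelength and spectral
    bandwidth (both in meters)."""
    return center_wavelength_m ** 2 / bandwidth_m

# A 10 nm bandpass filter centered at 550 nm gives roughly 30 micrometers
# of coherence length, while an unfiltered visible band (~300 nm wide)
# gives only about 1 micrometer.
assert abs(coherence_length(550e-9, 10e-9) - 30.25e-6) < 1e-9
```

This is why the filter bandwidth must be chosen so that the coherence length comfortably exceeds the OPDs the system is expected to measure.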
Alternatively, the bandpass filter 104 can be implemented as part of the coatings of the imaging lens 103 or of other surfaces of the optical assembly 103. In some embodiments, wherein the object 130 imaged by the optical system 100 via the sensor device 106 is illuminated by a narrow band (optional, designated in a dashed-line box) light/radiation source 133, the bandpass filter 104 may be redundant/omitted. As illustrated in Fig. 1A, the light/radiation 130t traversing through the front polarizer 101 reaches the aperture assembly 102, which is located in some embodiments at the aperture plane of the optical assembly (e.g., of imaging lens) 103, with a predefined linear polarization orientation defined by the front polarizer 101 e.g., 45° according to the embodiment exemplified in Fig. 1B, and/or by the optional light/radiation source 133. As exemplified in Fig. 1C, the aperture assembly 102 comprises in some embodiments a central/circular partial aperture 108, and a ring/annular partial aperture 109 concentric with the central circular partial aperture 108. The central circular partial aperture 108 and the ring/annular partial aperture 109 are arranged and configured such that the polarization of the light/radiation 130t emerging from the aperture assembly 102 after traversing through its ring/annular partial aperture 109 is substantially perpendicular to the polarization of the light/radiation emerging from the aperture assembly 102 after traversing through its central/circular partial aperture 108. Optionally, but in some embodiments preferably, the aperture assembly 102 is configured such that the amount of energy of the light/radiation 130t emerging from the ring/annular partial aperture 109 is similar/substantially equal to the energy amount of the light/radiation 130t emerging from the central/circular partial aperture 108. The above-described attributes can be achieved, for example, as exemplified in Fig. 
1C, by configuring the ring/annular partial aperture 109 and the central circular partial aperture 108 with similar/substantially equal surface areas, and such that their polarization directions (e.g., by utilizing suitable polarizer elements in each partial aperture) are substantially perpendicular one with respect to the other. For example, and without limiting, the direction of polarization affected at the ring/annular partial aperture 109 can be substantially perpendicular to the polarization direction affected by the central circular partial aperture 108, while the direction of the polarizations affected by each one of the partial apertures 108,109 is about 45° with respect to the polarization direction affected by the front polarizer 101 (the arrowed lines shown in Figs. 1B and 1C represent the orientation of polarized light/radiation emerging from the front polarizer 101 and from the partial apertures of the aperture assembly 102, respectively). In possible embodiments wherein polarized illumination is applied by the (optional) light/radiation source 133, the polarization directions of the polarizers used at the central circular partial aperture 108 and at the ring/annular partial aperture 109 can be configured to be 45° relative to the direction of polarization of the polarized illumination produced by the (optional) light/radiation source 133 i.e., the front polarizer 101 can be redundant/omitted in such configurations. As illustrated in Fig. 1D, in some embodiments the intermediate polarizer 105 is a type of patterned polarizer comprising an array of small-sized polarizer elements 105e, the size of which is comparable to/about the size of the pixels 106p of the sensor device 106. It is preferable that the size of each polarizer element 105e of said array (105) is substantially equal to the size of a single pixel 106p. 
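The equal-surface-area condition on the central and annular partial apertures described above reduces to simple geometry: the annulus area π(r_out² − r_in²) must equal the central disc area π·r_c². A minimal sketch of that relation follows, with hypothetical function name and dimensions (the disclosure does not specify any particular radii):

```python
import math

def annulus_outer_radius(r_central: float, r_inner: float) -> float:
    """Outer radius of an annular partial aperture whose area equals that of
    a central circular partial aperture of radius r_central, given the
    annulus inner radius r_inner (r_inner >= r_central assumed)."""
    # pi*r_c^2 == pi*(r_out^2 - r_in^2)  =>  r_out = sqrt(r_c^2 + r_in^2)
    return math.sqrt(r_central ** 2 + r_inner ** 2)

# Example: a central aperture of radius 3 mm and an annulus starting at
# 4 mm must extend to 5 mm for the two partial apertures to have equal
# areas, and hence pass substantially equal energy for uniform pupil fill.
assert abs(annulus_outer_radius(3.0, 4.0) - 5.0) < 1e-12
```

Note that equal areas imply equal passed energies only for approximately uniform illumination of the pupil; a non-uniform pupil fill would call for correspondingly adjusted radii.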
Optionally, the size of each polarizer element 105e of the intermediate polarizer array (105) equals to/is about the size of several/a group of pixels 106p of the sensor device 106. Optionally, but in some embodiments preferably, all different polarizer elements 105e of said intermediate polarizer array are formed in a similar/same size. In possible embodiments the intermediate polarizer 105 includes sets/groups 105s of several (e.g., two, four or other, preferably, but not limited to, even number of) local polarizer elements 105e having a respective number of different polarization directions (denoted by arrowed lines). In the specific and non-limiting example shown in Fig. 1D the intermediate polarizer 105 comprises rectangular/square-shaped sets/groups 105s of four local polarizer elements 105e having the following four different polarization directions: 0° horizontal polarization (HP), 90° vertical polarization (VP), 45° diagonal (DP) and 135°, also noted alternately as -45°, (anti-)diagonal (ADP) (all relative to the horizontal plane). As shown in Figs. 1A and 1D, the intermediate (e.g., patterned) polarizer 105 can be aligned relative to the sensor device 106, such that each polarizer element 105e thereof (the orientation of which is either in the HP, VP, DP or ADP direction) is aligned with a single pixel 106p of the sensor device 106. Accordingly, the four (4) different polarizer elements 105e of each polarizers' set/group 105s of the intermediate (e.g., patterned) polarizer 105 are adjacent to each other, though their specific order does not affect the performance of the optical system 100. 
It is noted that the polarizer elements 105e of the polarizers' sets/groups 105s may be arranged differently e.g., in different locations along the width and/or length of the intermediate (e.g., patterned) polarizer 105, and/or with a different number of polarizer elements in each polarizers' set/group 105s, and/or different polarization orientations (e.g., 20°, 100°, 170°, -50°), or order, and/or including some null polarizing elements (i.e., elements through which all of the polarization orientations of the incoming light/radiation are substantially passed). While in the specific example of Fig. 1A the sensor device 106 is located at the image plane 106f of the optical system 100, the intermediate (e.g., patterned) polarizer 105 is located at the vicinity of the image plane 106f i.e., at, or in the vicinity of, the sensor device 106. The vicinity range can be such that the distance between the intermediate (e.g., patterned) polarizer 105 and the sensor device 106 is below the focal depth of the optical assembly (e.g., imaging lens) 103. It is thus appreciated that though the intermediate (e.g., patterned) polarizer 105 and the sensor device 106 are presented as separate components, in some embodiments they are formed as a single assembly, denoted in the figures as detector assembly 110. The detector assembly 110 (e.g., a "monochrome assembly", which can be similar to, or implemented by, the "monochrome sensor assembly" IMX250MZR manufactured by Sony Semiconductor Solutions Incorporated) is configured in some embodiments to include a microlens array, as exemplified in Fig. 3A, wherein a microlens array 111 is located anterior to the intermediate (e.g., patterned) polarizer 105 and is configured to improve the efficiency of the detector assembly 110 to collect as much energy as possible in each sensor element/pixel 106p of the sensor device 106. 
Alternatively, and in some embodiments preferably, the detector assembly 110 may also include a quarter-wavelength (λ/4 waveplate) retarder, as exemplified in Fig. 3B, wherein the quarter-wavelength retarder waveplate 111a is positioned anterior to the intermediate polarizer 105 e.g., between the intermediate polarizer 105 and the microlens array 111, for applying a constant phase shift to the differently polarized light/radiation portions reaching the intermediate polarizer 105. Optionally, but in some embodiments preferably, a local variable intermediate retarder waveplate, such as illustrated in Fig. 3C, is positioned anterior to the intermediate polarizer 105. For example, in the non-limiting example of Fig. 3C, the intermediate (e.g., patterned) retarder 111b is located in the vicinity of the sensor device 106 e.g., between the microlens array 111 and the intermediate polarizer 105. In some embodiments the intermediate retarder 111b includes an array of retardation elements 111s e.g., comprising two or more different retardation elements alternatingly distributed over adjacent cells/groups 111c thereof. In the intermediate retarder 111b shown in Fig. 3C, a rectangular alternating grid is formed using λ/4 retardation elements 111s spatially alternated therein with null (i.e., zero retardation, allowing passage of light/radiation therethrough without causing a phase shift) retardation elements 111n. This way, the λ/4 retardation elements 111s can be arranged to direct the phase-shifted light/radiation therefrom onto polarizer elements 105e of the intermediate polarizer 105 having certain one or more specific polarization orientations, and the null retardation elements 111n can be arranged to direct non-phase-shifted light/radiation onto polarizer elements 105e of other polarization orientations of the intermediate polarizer 105. Fig. 3E shows an arrangement of retardation elements 111e of the retarder 111b according to some possible embodiments. 
The retardation elements 111e can be aligned with/overlap cells/groups 111c of polarization elements 105e of the intermediate polarizer 105, such that the same retardation/phase shift is applied to the light/radiation reaching the polarization elements 105e of the cell/group 111c. This can be achieved using a respective retardation element 111e of the same retardation/phase shift for each polarization element 105e within the cell/group 111c, or by using a smaller number of (e.g., a single) retardation elements 111e of the same retardation/phase shift aligned with/overlapping the polarization elements 105e of the cell/group 111c. In this non-limiting example the retardation elements 111e of the retarder 111b comprise retardation elements (e.g., λ/4) 111s and null (zero) retardation elements 111n arranged to form a rectangular/square alternating grid of retardations/phase shifts. Particularly, each retardation/phase shift of the rectangular/square alternating grid is aligned with/overlaps a respective cell/group 111c of all different polarization orientations of the polarizer elements 105e of the intermediate polarizer 105, such that the retardation/phase shift between adjacently located cells/groups 111c of retardation elements 111e differs by a predefined amount e.g., λ/4. Such spatial distribution of retardation elements 111e can be similarly achieved utilizing pairs of retardation elements 111n and 111s having different retardations/phase shifts e.g., (λ/4, 0), (λ/4, λ/2), (λ/2, 3λ/4), etc. In possible embodiments the retarder waveplate 111a, e.g., a λ/4 waveplate, shown in Fig. 3B, can be placed anterior to the detector assembly 110 along the optical path (130t) of the optical system 100, and not necessarily as part of the detector assembly 110. 
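The role of the λ/4 retarder can be illustrated with a short Jones-calculus sketch. Under the simplifying assumptions of two equal-amplitude, orthogonally polarized wavefront portions with relative phase delta, an ideal analyzer, and a quarter-wave retarder with its fast axis along x (all of which are illustrative modeling choices, not requirements of the disclosure), the retarder moves the 45° analyzer channel from sampling cos(delta) to sampling the quadrature term sin(delta):

```python
import numpy as np

def channel_intensity(delta, theta, quarter_wave=False):
    """Normalized intensity behind an ideal linear polarizer at angle theta,
    for equal-amplitude x- and y-polarized wavefront portions with relative
    phase delta; optionally with a quarter-wave retarder (fast axis along x)
    inserted in front of the polarizer."""
    if quarter_wave:
        delta = delta + np.pi / 2  # QWP adds 90 deg between the components
    ex, ey = 1 / np.sqrt(2), np.exp(1j * delta) / np.sqrt(2)
    amp = np.cos(theta) * ex + np.sin(theta) * ey  # projection on analyzer
    return float(np.abs(amp) ** 2)

# Without the retarder the 45 deg channel samples (1 + cos(delta))/2; with
# it, the same channel samples (1 - sin(delta))/2 -- the quadrature
# component needed to resolve the sign of the phase difference.
d = 0.3
assert abs(channel_intensity(d, np.pi / 4) - (1 + np.cos(d)) / 2) < 1e-12
assert abs(channel_intensity(d, np.pi / 4, True) - (1 - np.sin(d)) / 2) < 1e-12
```

This is why adding the retarder changes the sensitivity of the measured power/intensity to small phase changes, as noted for the waveplate 111a.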
In such possible embodiments, the retarder waveplate 111a affects the polarization of the whole image, which, though it does not affect the relative difference between the original horizontal and vertical polarizations of the light/radiation reaching the sensor device 106, may affect the sensitivity of the optical system 100 to changes in the polarization reaching the intermediate polarizer 105, and thus affect the power/intensity measured at the sensor device 106. A similar effect can be achieved by applying phase retardation (e.g., by applying an extra thickness) to some of the partial apertures of the aperture assembly 102 e.g., partial aperture 109, as long as it maintains the general features of the aperture assembly 102, i.e., having at least two partial apertures configured such that the light/radiation portions (partial wavefronts) emerging through the partial apertures have substantially similar/equal energy amounts and substantially perpendicular polarization orientations. Fig. 3D shows a detector assembly 110 according to some embodiments utilizing an intermediate polarizer 105 comprising a plurality of polarizer elements 105e and a bandpass filter 104 located between the intermediate polarizer 105 and the sensor device 106. Each polarizer element 105e of the intermediate polarizer 105 is configured for passage of light/radiation therethrough at a predefined polarization orientation, which is different from the polarization orientation of the light/radiation that passes through polarizer elements 105e locally adjacent thereto. In embodiments wherein the intermediate polarizer 105 has three, or less, different polarization orientations, the polarizer elements 105e are distributed such that the polarization orientation of each polarizer element 105e is different from the polarization orientations of the polarizer elements 105e horizontally and vertically adjacently located thereto. 
This way, a spatial distribution of polarization orientations of the polarizer elements 105e is obtained by the intermediate polarizer 105. Each polarizer element 105e of the intermediate polarizer 105 can be aligned with/overlap respective one or more sensor elements 106p of the sensor device 106, such that a distribution of light/radiation intensities/energies can be measured by the sensor device 106 responsive to the passage of the light/radiation through the polarizer elements 105e of the intermediate polarizer 105. In this specific and non-limiting example the polarizer elements 105e of the intermediate polarizer 105 are configured to provide a spatial distribution of light/radiation in the HP, VP, DP and ADP polarization orientations, but other predefined polarization orientations can be similarly used. The bandpass filter 104 is configured for passage of light/radiation therethrough only within a predefined wavelength range (e.g., in the 475 to 625 nanometers range) towards the sensor device 106. In possible embodiments, each local polarizer element 105e of the intermediate (e.g., patterned) polarizer 105 is arranged so as to cover a plurality of pixels of the sensor device 106. In such possible embodiments each local polarizer element 105e within a specific polarization set/group 105s of the intermediate polarizer 105 is configured for covering the same number of pixels 106p of the sensor device 106, though in possible embodiments different polarization sets/groups 105s within the intermediate polarizer 105 may be configured to cover different numbers of pixels 106p of the sensor device 106. 
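On the processing side, a raw frame from such a polarization-mosaic detector is conventionally split into one sub-image per polarizer orientation before the intensity distribution is analyzed. The sketch below assumes a hypothetical 2×2 super-pixel layout of [[0°, 45°], [135°, 90°]]; the actual layout varies by sensor and is not prescribed by the disclosure:

```python
import numpy as np

def split_polarization_channels(frame):
    """Split a raw frame from a 2x2 polarization-mosaic sensor into its four
    per-orientation sub-images via stride slicing. Assumed (hypothetical)
    mosaic layout within each 2x2 super-pixel: [[0, 45], [135, 90]] degrees."""
    return {
        0:   frame[0::2, 0::2],
        45:  frame[0::2, 1::2],
        135: frame[1::2, 0::2],
        90:  frame[1::2, 1::2],
    }

# Tiny 2x2 frame == a single super-pixel.
raw = np.array([[10, 20],
                [30, 40]])
ch = split_polarization_channels(raw)
assert ch[0][0, 0] == 10 and ch[45][0, 0] == 20
assert ch[135][0, 0] == 30 and ch[90][0, 0] == 40
```

Each resulting sub-image has half the resolution of the raw frame in each dimension; interpolation can restore a common sampling grid if per-pixel phase estimates are needed.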
In possible embodiments the intermediate (e.g., patterned) polarizer 105 is configured to include a different number of polarization orientations of its polarizer elements 105e, whether as little as two or three, or more than four (4), while the polarization orientation of each polarizer element 105e is aligned with a single pixel 106p, or with a plurality of pixels 106p, of the sensor device 106. In yet other possible embodiments the orientations of the different polarizer elements 105e of the intermediate (e.g., patterned) polarizer 105 are not necessarily as demonstrated in Fig. 1A, and different polarization orientations may be similarly applicable. In possible embodiments the intermediate (e.g., patterned) polarizer 105 is located away from the sensor device 106 along the optical path (130t), while being located at the image plane 106f. In such possible embodiments a relay lens (not shown) can be used to re-image the image plane 106f onto the sensor device 106, which accordingly will be located further away from the optical assembly (e.g., imaging lens) 103. Reference is now made to Fig. 4A, schematically illustrating propagation of light/radiation in the optical system 100 used to image the object 130 onto its sensor device (106) included as part of the detector assembly 110. In this non-limiting example, the object 130 is located at the focal plane 130f of the optical system 100, such that the light/radiation received from the imaged object 130 by the optical system 100 is focused onto the sensor device (106) of the detector assembly 110. For the sake of simplicity: (1) the object 130 is illustrated as an object having a linear front surface area located at the focal plane 130f; (2) the bandpass filter (104) of the detector assembly 110, though not shown in Fig. 
4A, may be positioned in any suitable location along the optical path (130t) of the optical system 100; and (3) the intermediate (e.g., patterned) polarizer (105) and the sensor device (106), though not explicitly shown in Fig. 4A, are embedded in the detector assembly 110. The wavefront 131f emitted from a point-source p0 on the object 130 is divided into two (or more) separate wavefront portions 132a,132c after it passes through the two (or more) partial apertures 108,109 of the aperture assembly 102. Optionally, but in some embodiments preferably, the two (or more) separate wavefront portions 132a,132c that pass through the aperture assembly 102 are of equal energy, and they are perpendicularly polarized one with respect to the other e.g., the direction of polarization of the central wavefront portion 132c, which emerges following traversal through the central circular partial aperture 108, is substantially perpendicular to the direction of polarization of the annular wavefront portion 132a, which emerges following traversal through the ring/annular partial aperture 109. As mentioned hereinabove, since the optical system 100 is designed in some embodiments to be diffraction limited, both the annular and the central wavefront portions, 132a and 132c respectively, reach the sensor device 106 having substantially the same phase. As exemplified in Fig. 1A, in some embodiments the light/radiation passing through the central partial aperture 108 of the aperture assembly 102 is linearly-horizontally polarized, and the light/radiation passing through its ring/annular partial aperture 109 is linearly-vertically polarized. 
Reviewing the polarization evolution of the wavefront 131f emerging from the point p0, it turns into a 45° oriented linearly polarized light/radiation following passage through the front polarizer 101, and it is then divided into the central wavefront portion 132c having a linear horizontal polarization, and into the annular wavefront portion 132a having a linear vertical polarization. As exemplified herein, in some embodiments, both the central wavefront portion 132c and the annular wavefront portion 132a have substantially the same amount of energy, and their directions of polarization are perpendicular one with respect to the other. Once both wavefront portions 132c,132a reach the intermediate (e.g., patterned) polarizer 105, which in some embodiments is part of the detector assembly 110 i.e., it is located at, or near, the image plane of the optical system 100, they substantially have the same phase. Thus, once reaching the image plane 160f, the combined polarization of the two wavefront portions 132c,132a results in a 45° linearly polarized light/radiation. Accordingly, due to the different polarizer elements 105e of the intermediate (e.g., patterned) polarizer 105, as demonstrated in Fig. 1D, the pixels 106p of the sensor device 106 corresponding to the 45° linear (DP) polarizer elements 105e of the polarizer 105 will receive/measure maximal light/radiation energy (normalized to 100%), the pixels 106p corresponding to the horizontal (HP) and vertical (VP) polarizations will typically receive/measure half of the maximal light/radiation energy (normalized to 50%), and the pixels 106p corresponding to the -45° (ADP) polarization will receive/measure minimal, or even zero, light/radiation energy. 
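The 100%/50%/0% distribution described above follows from Malus-type projection of the combined 45° linearly polarized field onto each polarizer-element orientation; a minimal numeric sketch (unit total intensity assumed, helper name hypothetical):

```python
import math

# Project a linearly polarized field E = (ex, ey) onto a linear polarizer
# oriented at angle `delta` (radians from the x/HP axis) and return the
# transmitted intensity (|amplitude along the polarizer axis|^2).
def transmitted_intensity(ex, ey, delta):
    amp = ex * math.cos(delta) + ey * math.sin(delta)
    return amp ** 2

# 45-degree linearly polarized light of unit total intensity:
# equal, in-phase HP and VP components.
ex = ey = 1 / math.sqrt(2)

for name, delta_deg in [("DP (+45)", 45), ("HP (0)", 0), ("VP (90)", 90), ("ADP (-45)", -45)]:
    i = transmitted_intensity(ex, ey, math.radians(delta_deg))
    print(f"{name}: {i:.2f}")   # DP -> 1.00, HP -> 0.50, VP -> 0.50, ADP -> 0.00
```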
For the purpose of the following description, it is assumed that the phase difference applied to the light/radiation by the optical assembly (e.g., imaging lens) 103 after its passage through the aperture assembly 102 i.e., the phase difference between the central wavefront portion 132c and the ring/annular wavefront portion 132a arising from the optical system 100, is considered to be negligibly small. Thus, the central portion of the wavefront 131f (denoted as 131c i.e., which becomes the central wavefront portion 132c) and the annular portion of the wavefront 131f (denoted as 131a i.e., which becomes the ring/annular wavefront portion 132a) reach the central partial aperture 108 and the ring/annular partial aperture 109 (respectively) of the aperture assembly 102, having substantially the same phase. This assumption is not required for the purpose of this embodiment but is rather used for simplicity of the discussion, and shall be referred to with reference to Figs. 4B and 4C hereinbelow. Fig. 4B schematically illustrates the wavefronts 131f and 135 respectively emerging from the off-axis point-source p1 located on the focal plane 130f, and from the off-axis point-source p2 located away from the focal plane 130f i.e., further away from the aperture assembly 102, which are on the same axis connecting the central point of the aperture plane of the optical system 100 and point-source p1. For the sake of simplicity, most elements of the optical system 100 are not shown in Fig. 4B, wherein only the aperture assembly 102 and the detector assembly 110 are shown. As seen, the wavefront 131f emerging from the off-axis point-source p1 has a smaller radius when arriving at the aperture assembly 102 relative to the radius of the wavefront 135 emerging from the point-source p2. 
Under the simplicity assumption provided hereinabove, the central portion 131c of the wavefront 131f, and the ring/annular portion 131a of the wavefront 131f, reach, respectively, the central partial aperture 108 and the ring/annular partial aperture 109 of the aperture assembly 102, with the same phase. Since the wavefront 135 reaches the aperture assembly 102 having a larger radius than that of the wavefront 131f when reaching the aperture assembly 102, and since the wavefront 131f reaches the aperture assembly 102 having the same phase for all partial wavefronts (as per the assumption detailed hereinabove), it is evident that the central portion 135c of the wavefront 135 reaching the central partial aperture 108 has a different phase than that of the ring/annular portion 135a of the wavefront 135 reaching the ring/annular partial aperture 109. Reviewing the polarization evolution of the wavefront 135 emerging from the off-axis point p2, it turns into a 45° oriented linearly polarized light/radiation following passage through the front polarizer (101, shown in Fig. 1A), and it is then divided by the aperture assembly 102 into the central wavefront portion 136c and the ring/annular wavefront portion 136a. Both the central wavefront portion 136c and the ring/annular wavefront portion 136a preferably have the same energy and, typically, polarization orientations that are perpendicular one with respect to the other. Since the central and the ring/annular wavefront portions, 136c and 136a, were formed from the central portion 135c and the ring/annular portion 135a, respectively, of the wavefront 135, there is a phase shift/difference between them. 
Once both of the central and the ring/annular wavefront portions, 136c and 136a, reach the detector assembly 110 located at, or near, the image plane 160f of the optical system 100, due to the phase shift/difference between the two partial wavefronts 136c and 136a, the combined polarization of these partial wavefronts at the detector assembly 110 is not necessarily linear. For example, in case the phase difference between the central wavefront portion 136c and the ring/annular wavefront portion 136a is 90° when reaching the intermediate (e.g., patterned) polarizer (105, shown in Fig. 1A), located at, or in the vicinity of, the image plane 160f, the combined polarization of the two wavefront portions obtained is typically circular. Thus, due to the different local polarizer elements (105e) of the intermediate (e.g., patterned) polarizer 105, as demonstrated in Fig. 1A, the measurement data/signals acquired by all four pixels (106p) corresponding to the horizontal (HP), vertical (VP), 45° (DP) and -45° (ADP) polarizer elements (105e) of each polarizers' set (105s) will each exhibit the same amount of energy. For any other phase difference between the two wavefront portions 136c,136a, a different energy distribution between the measurement data/signals acquired by the various pixels (106p) corresponding to the four polarizer elements (105e) of each polarizers' set (105s) will be obtained. Accordingly, the distribution of the energy of the light/radiation received/measured by two or more pixels 106p of the sensor device 106 from respective two or more polarizer elements 105e of the intermediate polarizer 105 having different polarization orientations, is indicative of the phase difference between the wavefront portions (e.g., 136c,136a) that have passed through the partial apertures (e.g., 108,109) of the aperture assembly 102. 
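The dependence of the pixel-energy distribution on the phase shift can be sketched with a simple Jones-vector model (an assumed model for illustration, not code from the patent): the central portion contributes the HP component, the annular portion the VP component, with relative phase phi between them.

```python
import cmath
import math

# Assumed model: equal-energy HP (central) and VP (annular) components
# with relative phase `phi`; each polarizer element transmits the field
# projection onto its axis, and the pixel measures |amplitude|^2.
def pixel_intensities(phi):
    ex = 1 / math.sqrt(2)                    # HP component (central portion)
    ey = cmath.exp(1j * phi) / math.sqrt(2)  # VP component (annular portion)
    out = {}
    for name, deg in [("HP", 0), ("VP", 90), ("DP", 45), ("ADP", -45)]:
        d = math.radians(deg)
        amp = ex * math.cos(d) + ey * math.sin(d)
        out[name] = abs(amp) ** 2
    return out

print(pixel_intensities(math.pi / 2))  # all four ~0.5: circular polarization
print(pixel_intensities(0.0))          # DP=1, ADP=0: linear 45 degrees
```

At phi = 90° all four orientations measure the same energy (circular polarization), matching the equal-energy case described above; any other phi yields a distinct distribution.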
The phase difference between the two (or more) wavefront portions that passed through the partial apertures of the aperture assembly 102 can be determined based on the distribution of the energy measured by the two or more pixels 106p of the sensor device 106 from respective two or more polarizer elements 105e of the intermediate polarizer 105. Based on the determined phase difference and the known system parameters, such as polarization orientation, geometrical dimensions, location of, and distance(s) between, the partial apertures of the aperture assembly, system focal length, etc., the distance between the off-axis point p2 and the optical system 100 can be computed, as will be explained hereinbelow in detail. Similar to Fig. 4B, Fig. 4C schematically illustrates light/radiation beams 140 and 141 respectively emerging from the off-axis point-source P1 located on the focal plane 130f, and from the off-axis point-source P2 located away from the focal plane 130f i.e., further away from the optical assembly (e.g., imaging lens) 103. For the purpose of simplicity, the following adjustments have been made in Fig. 4C: (1) most optical elements of the optical system 100 are not shown, such that only the aperture assembly 102 and the detector assembly 110 are presented explicitly; (2) the ring/annular partial aperture 109 has been narrowed down to a thin ring at the outer perimeter of the aperture assembly 102; and (3) the central/circular partial aperture 108 has been narrowed down to a small circle around the center of the aperture assembly 102. As an alternative to Fig. 4B, Fig. 4C illustrates the wavefronts as rays/beams rather than arch-shaped wavefronts. 
As explained in detail hereinabove, the light/radiation beams 140 emerging from the off-axis point-source P1, which are illustrated as central light/radiation beams 140c traversing through the central partial aperture 108 and ring/annular light/radiation beams 140a traversing through the ring/annular partial aperture 109, reach the sensor device (106) embedded in the detector assembly 110 while all portions of the light/radiation beams are in the same phase (due to the diffraction limited properties of the optical system 100). As explained in detail hereinabove with reference to Fig. 4B, the light/radiation beams 141 emerging from the off-axis point-source P2, comprised of the central light/radiation beams portion 141c and the annular light/radiation beams portion 141a passing through the circular/central partial aperture 108 and the ring/annular partial aperture 109, respectively, reach the detector assembly 110 as central light/radiation beams 142c and annular/ring light/radiation beams 142a respectively, with a phase difference between them. As explained hereinabove with reference to Fig. 4B, it is assumed for the sake of simplicity that the phase difference arises primarily from the different optical paths of the annular/ring light/radiation beams 141a and of the central light/radiation beams 141c traveling from the off-axis point-source P2 to the aperture assembly 102. 
As will be explained hereinbelow in detail, the phase shift between the central and ring/annular light/radiation beams portions 142c,142a is directly related to the optical path difference (OPD) caused due to the passage of the light/radiation portions through the different partial apertures of the aperture assembly 102, which is directly correlated to the off-axis location of the ring/annular partial aperture 109 and the on-axis location of the central/circular partial aperture 108, and which is used in some embodiments to determine the horizontal distance between the off-axis point-source P2 and the aperture assembly 102, denoted as Z1 in Fig. 4C. The distances R1u and R0u of the remote off-axis point-source P2 and of the off-axis point-source P1 located on the focal plane 130f, respectively, from the ring/annular partial aperture 109 of the aperture assembly 102, can be determined as follows: R1u² = Z1² + d² − 2·Z1·d·cos(90° − θ) = Z1² + d² − 2·Z1·d·sin(θ); and R0u² = Z0² + d² − 2·Z0·d·cos(90° − θ) = Z0² + d² − 2·Z0·d·sin(θ), where θ is the angle between the central rays R1C,R0C and the focal axis 142, and d is the distance between the average radius of the ring/annular partial aperture 109 and the average radius of the central partial aperture 108, where in this specific and non-limiting example, as illustrated and exemplified in Fig. 4C, the distance d can be approximated by the radius of the ring/annular partial aperture 109. Accordingly, the optical path difference (OPD) between the two rays R1u and R0u is: OPD(u) = R1u − R0u. For very small values of θ, and for the sake of simplicity, sin(θ) ≈ θ, and thus the distances R1u and R0u can be determined with relatively good accuracy as follows: R1u = Z1·√(1 + d²/Z1² − 2·d·θ/Z1); R0u = Z0·√(1 + d²/Z0² − 2·d·θ/Z0).
Based on the approximation √(1 + x) ≈ 1 + x/2, and for the sake of simplicity, the distances R1u and R0u can be rewritten as follows: R1u ≈ Z1·(1 + d²/(2·Z1²) − d·θ/Z1) = Z1 + d²/(2·Z1) − d·θ; R0u ≈ Z0·(1 + d²/(2·Z0²) − d·θ/Z0) = Z0 + d²/(2·Z0) − d·θ, and the full OPD between the light/radiation beams 142c,142a vs the beams 141c,141a can therefore be approximated as follows: OPD = (R1u − Z1) − (R0u − Z0) = d²/(2·Z1) − d²/(2·Z0), OPD = (d²/2)·(Z0 − Z1)/(Z0·Z1) ≈ (d²/2)·∆Z/Z0² = (d²·∆Z)/(2·Z0²).
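As a numeric sanity check of the approximation chain above (the values of d, Z0, Z1 and θ below are illustrative assumptions, not taken from the patent), the exact law-of-cosines OPD and the small-angle result agree closely, and the d·θ terms indeed cancel in the difference:

```python
import math

# Illustrative parameters: d = 5 mm aperture offset, Z0 = 1.0 m (focal
# plane), Z1 = 1.1 m (defocused point), theta = 0.5 degrees off-axis.
d, z0, z1 = 5e-3, 1.0, 1.1
theta = math.radians(0.5)

def r_u(z):
    # Exact law-of-cosines distance from the ring/annular aperture.
    return math.sqrt(z**2 + d**2 - 2 * z * d * math.sin(theta))

opd_exact = (r_u(z1) - z1) - (r_u(z0) - z0)
opd_approx = (d**2 / 2) * (1 / z1 - 1 / z0)   # = (d^2/2)(Z0-Z1)/(Z0*Z1)

# Further setting Z0*Z1 ~ Z0^2 gives OPD ~ d^2 * dZ / (2 * Z0^2), as in
# the text, with dZ = Z0 - Z1.
print(opd_exact, opd_approx)   # both ~ -1.14e-6 m
```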
Calculation of the phase shift/difference φ (in radians) between the portions of the light/radiation passed through the partial apertures of the aperture assembly 102, corresponding to the above OPD expression, can be carried out as follows: φ = 2π·OPD/λ, and thus: φ = (π·d²/(λ·Z0²))·∆Z (1) wherein ∆Z = Z0 − Z1, and λ is the wavelength for which the OPD is calculated. In embodiments wherein the optical system 100 is designed to operate in some wavelength ranges, the average phase shift can be calculated by defining λ as the central wavelength of the wavelength range used. As explained hereinabove with reference to Fig. 4B, assuming the optical path following the aperture assembly 102 introduces negligible phase difference between the central light/radiation beams portion 142c and the ring/annular light/radiation beams portion 142a, which are the light/radiation beams portions obtained following the passage of the light/radiation beams 141 originating from the point-source P2 located away from the focal plane 130f through the aperture assembly 102, the phase difference between these light/radiation beams 142c,142a is determined by the phase shift φ of equation (1) hereinabove. It is therefore clear that there is a unique correlation (up to the ambiguity resulting from a full π phase shift i.e., 180°) between (1) the phase shift/difference between the annular and central light/radiation beams portions, 142a and 142c (denoted as φ), and (2) the energy distribution on/measured by the pixels 106p of the sensor device 106 that correspond to different polarization elements 105e of the intermediate polarizer 105. This correlation can be expressed by the following equation: I(δ, φ) = (I0/2)·(1 + sin(2δ)·cos(φ)) (2) where I0 is the combined energy of the light/radiation reaching the partial apertures of the aperture assembly 102, δ is (e.g., as exemplified in the polarization setup of Fig. 
1A) the orientation of the respective polarization element 105e of the intermediate polarizer 105, and φ is the phase shift between the two beams portions of the light/radiation passing through the partial apertures of the aperture assembly 102. Thus, by measuring the intensity/energy I(δ, φ) of the light/radiation on each pixel 106p (e.g., two or more different pixels) of the sensor device 106 corresponding to the different polarization orientations (e.g., HP, VP, DP, ADP), the phase shift φ can be determined from equation (2), leading to the determination of ∆Z from equation (1), and thus allowing determination of the distance to the point P2. In possible embodiments the at least two intensity/energy I(δ, φ) measurements obtained from the pixels 106p for determining the phase shift φ (and therefrom the distance) include intensity/energy measurements corresponding to one of the DP and ADP polarization orientations of the light/radiation received from the intermediate polarizer 105. It is noted that in different configurations of the optical system 100, equation (2) may be more complex and include additional parameters. Such configurations are, for instance, ones in which the polarization orientations of the partial apertures (108,109) are not perpendicular, or the energies traversing through said apertures are not equal. In such configurations equation (2) may include parameters such as the specific polarization orientation of each one of the partial apertures (108,109), or their relative surface areas. It is also noted that simultaneous rotation of the polarizer 101 and the aperture assembly 102 may lead to a different energy distribution measured by the pixels 106p corresponding to the polarization of each of the polarization elements 105e. Such rotation may be used to optimize the sensitivity of the system. As noted above, the wavelength range of the light/radiation should be limited in some embodiments to approximately 30% of the center wavelength. 
In case a wider wavelength range is applied, it is probable that the phase shift/difference φ calculation shall become more difficult to interpret, since each specific wavelength has its own phase shift/difference φ, and these are multiplexed at the sensor device 106 as different polarization orientations. It is noted that there may be an ambiguity in calculating the phase shift/difference φ, due to the cyclical properties of the wave i.e., the OPD at polarization orientation δ yields similar/same results to the OPD at polarization orientation π+δ. The maximal distance for which there is no ambiguity, denoted as ∆Z(max), can be calculated for OPD = λ/2 (half the wavelength of the light/radiation emerging from the object). By applying the approximation ∆Z ≪ Z0, the following expression is obtained from equation (1): π ≈ (π·d²/(λ·Z0²))·∆Z(max), yielding: ∆Z(max) ≈ (Z0²/d²)·λ (3) wherein ∆Z(max) provides the non-ambiguous range based on the system parameters: λ, d and Z0. As explained in detail hereinabove with reference to Figs. 4A, 4B and 4C, it is demonstrated that there is a correlation between: (1) the distance between a point in the field of view of the optical system 100 and the optical system itself (e.g., aperture assembly 102); (2) the phase of the different portions of the wavefront that reach the detector assembly 110; and (3) the energy distribution of adjacent pixels (106p) corresponding to different local polarizer elements (105e) of the intermediate polarizer 105 embedded in the detector assembly 110. Thus, the measured/determined energy distribution of the light/radiation reaching the detector assembly 110 is indicative of the distance between the point-source of the light/radiation and the detector assembly 110 (i.e., sensor device 106). This information, with or without the additional grey level of the pixels (106p), can be used to determine three-dimensional coordinates of the location of the point-source in the field of view of the optical system 100.
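The preceding derivation can be exercised end-to-end in a short sketch (all numeric values below, including the wavelength, d, Z0 and the pixel readings, are illustrative assumptions): recover cos φ from the DP/ADP pixel energies via the intensity relation I(δ, φ) = (I0/2)(1 + sin(2δ)cos(φ)), invert the phase/defocus relation φ = π·d²·∆Z/(λ·Z0²) for ∆Z, and evaluate the non-ambiguous range (Z0²/d²)·λ.

```python
import math

lam, d, z0 = 550e-9, 5e-3, 1.0   # wavelength, aperture offset, focal distance

def phase_from_pixels(i_dp, i_adp):
    # At delta = +45 and -45 degrees:
    #   I_DP = (I0/2)(1 + cos phi),  I_ADP = (I0/2)(1 - cos phi)
    cos_phi = (i_dp - i_adp) / (i_dp + i_adp)
    return math.acos(cos_phi)            # ambiguous up to sign of phi

def dz_from_phase(phi):
    # Inverting phi = pi * d^2 * dZ / (lam * z0^2)
    return phi * lam * z0**2 / (math.pi * d**2)

phi = phase_from_pixels(0.75, 0.25)      # cos phi = 0.5 -> phi = 60 degrees
dz = dz_from_phase(phi)
dz_max = (z0**2 / d**2) * lam            # non-ambiguous defocus range
print(round(math.degrees(phi), 1), dz, dz_max)   # 60.0, ~7.3e-3 m, 0.022 m
```

With these assumed parameters the unambiguous range is about 22 mm of defocus, and the example pixel pair maps to roughly 7.3 mm.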
Reference is now made to Figs. 2A to 2J, schematically illustrating configurations of the partial apertures of the aperture assembly 102 described hereinabove according to possible embodiments, which are denoted herein as 102-1, 102-2, … . The arrowed lines shown in Figs. 2A to 2J denote the polarization orientation of each partial aperture. Typically, the aperture assembly 102 comprises at least two (2) partial apertures and, in combination with the front polarizer 101, an example of which is illustrated in Fig. 1B, they are constructed such that light/radiation passing through the front polarizer 101, and thereafter emerging following passage through the partial apertures of the aperture assembly 102, is divided into at least two partial wavefronts that are preferably energetically equally divided between two polarization orientations that are optionally, but in some embodiments preferably, perpendicular one with respect to the other (e.g., the energy is divided such that the light/radiation having the vertical polarization has substantially the same energy as that of the light/radiation having the horizontal polarization). As described hereinabove, this can be achieved, as exemplified herein with reference to Figs. 1A, 1B and 1C, by using polarizers, whose polarization orientations are perpendicular one with respect to the other, in each of two equal-surface partial apertures of the aperture assembly (102), with both said polarizations oriented at 45° relative to the polarization of the front polarizer 101. Fig. 2A shows an alternative embodiment to the aperture assembly 102 configuration described hereinabove with reference to Fig. 1A. The aperture assembly 102-1 shown in Fig. 2A comprises an axial/concentric partial aperture 108-1 of HP orientation, and a non-axial (i.e., non-concentric) partial aperture 109-1 of VP orientation. 
As another alternative embodiment of the aperture assembly 102, the aperture assembly 102-2 shown in Fig. 2B comprises two non-symmetrical partial apertures, 108-2 and 109-2, of the VP and HP orientations, respectively. As yet another alternative to the aperture assembly 102, the aperture assembly 102-3 shown in Fig. 2C comprises an axial/concentric partial aperture 108-3 of the VP orientation, and a plurality of non-axial partial apertures 109-3 (four such non-concentric apertures are exemplified in Fig. 2C) of the HP orientation. As yet another alternative embodiment of the aperture assembly 102, the aperture assembly 102-4 shown in Fig. 2D comprises symmetrical partial apertures, A4 and B4, of the VP and HP orientations. As yet another alternative embodiment of the aperture assembly 102, the aperture assembly 102-5 shown in Fig. 2E comprises two symmetrical half-circle-shaped partial apertures, A5 and B5, of the VP and HP orientations, respectively. It is noted that the symmetrical configuration, either the one illustrated in Fig. 2D or the one illustrated in Fig. 2E, may not be optimal for extraction of three-dimensional information for on-axis objects, though it may be useful for off-axis objects. Reference is now made to Fig. 2F, schematically illustrating yet another alternative embodiment of the aperture assembly 102, denoted as aperture assembly 102-6, which comprises a circular concentric partial aperture A6 and a ring/annular partial aperture B6. Other than using polarizers as part of the construction of the aperture assembly 102 (as described with reference to e.g., Fig. 
1A), obtaining equal power portions of the light/radiation following passage through the aperture assembly 102-6, with polarization orientations that are perpendicular one with respect to the other, can be achieved using a half-wave waveplate/retarder (i.e., using a λ/2 waveplate instead of a polarization orienting element) located at the circular concentric partial aperture A6 (it can be equally done by placing the λ/2 waveplate/retarder at the ring/annular partial aperture B6). The λ/2 waveplate/retarder should be oriented at 45° relative to the incoming polarization following the polarizer 101, in order to obtain a perpendicular polarization orientation between the two partial apertures A6 and B6. In yet further alternative embodiments of the aperture assembly 102, denoted as aperture assembly 102-7 in Fig. 2G, a half waveplate/retarder (i.e., a λ/2 waveplate used instead of a polarization orienting element) is used in one of the partial apertures, illustrated herein as the circular concentric partial aperture A7, and another half waveplate/retarder (i.e., a λ/2 waveplate) is used in the other partial aperture, illustrated herein as the ring/annular partial aperture B7, in order to obtain equal power portions of the light/radiation following passage through the aperture assembly 102-7 with polarization orientations that are perpendicular one with respect to the other. The half waveplates are preferably oriented at 22.5° and -22.5° relative to the polarization orientation of the incoming light/radiation following the polarizer 101. The use of such waveplates/retarders as described with reference to Figs. 2F and 2G, or other alternatives, can be similarly extended to the alternative embodiments of the aperture assembly 102 schematically illustrated in Figs. 2A to 2E. 
It is noted that additional alternative implementations of waveplates may also be applicable so as to apply perpendicular polarization orientations between partial apertures of the aperture assembly 102 and its respective alternatives. It is noted that the alternative embodiments of the aperture assembly 102 that comprise the retarder waveplates can be configured for producing two wavefront portions whose polarization orientations are substantially perpendicularly oriented one with respect to the other. It is also noted that in embodiments wherein the optical system 100 is used for a broadband range, such as the visible light/radiation wavelength range, the half-wave waveplate (or the combination of a quarter waveplate and a three-quarters waveplate) is preferably designed for a mid-range wavelength, i.e., in the vicinity of 550 nanometers for the visible wavelength range. In such embodiments it may be useful to use an additional polarizer (not shown) following each of the waveplates, and in the vicinity of the aperture plane, to thereby verify that all of the wavelength range emerges having a well-defined pre-determined polarization. Such embodiments, which include the waveplate(s) and an additional polarizer, though they may seem redundant, may be applied to save light/radiation energy. Reference is now made to Fig. 2H schematically illustrating yet another configuration of the combination of the front polarizer 101, denoted in Fig. 2H as polarizer 101-1, and of the aperture assembly 102, denoted in Fig. 2H as aperture assembly 102-8, which comprises a circular concentric partial aperture A8 and a ring/annular partial aperture B8. In this configuration, the surface area of the circular concentric partial aperture A8 and that of the ring/annular partial aperture B8 are not similar/equal. 
This is compensated by orienting the polarizations of the light/radiation propagating from the aperture assembly 102-8 and the front polarizer 101-1 such that the polarization of the light/radiation following its passage through the front polarizer 101-1 is not oriented at 45° between the two perpendicular polarizations generated by the circular aperture A8 and the ring aperture B8, but rather is inclined more towards the polarization direction of the partial aperture having the smaller surface area, such that the multiplication of the surface area of each partial aperture by the relative polarization orientation provides substantially similar/equal results. By way of example, in Fig. 2H, if the area ratio between the circular concentric partial aperture A8 and the concentric ring/annular partial aperture B8 is 2:3, then the polarization of the front polarizer 101-1 is determined to be oriented at 34° relative to the polarization orientation of the circular aperture A8, such that the amount of energy of the light/radiation obtained from the partial apertures A8 and B8 will be substantially the same/equal. Reference is now made to Fig. 2I schematically illustrating yet another configuration of the combination of the front polarizer 101, denoted in Fig. 2I as front polarizer 101-2, and of the aperture assembly 102, denoted in Fig. 2I as aperture assembly 102-9, which comprises the circular concentric partial aperture A9 and the ring/annular concentric partial aperture B9. In this configuration, the direction of polarization applied by the front polarizer 101-2 can be controlled and varied, for example, by control data/signals 107c generated by the processor unit 107, e.g., by using a liquid crystal device controlled by an external polarization controller (not shown). 
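The 34° figure in the 2:3 example above is reproduced if the stated balance rule is read as tan(α) = S_A/S_B, with α measured from the polarization direction of the smaller aperture A8 — a hedged reading, since the text does not spell out the exact condition:

```python
import math

# Assumed balance condition (one reading of the 2:3 example above, not
# stated explicitly in the text): tan(alpha) = S_A / S_B, where alpha is
# measured from the polarization direction of the smaller aperture A8.
s_a, s_b = 2, 3                       # relative surface areas of A8 and B8
alpha = math.degrees(math.atan(s_a / s_b))
print(round(alpha))                   # 34
```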
In addition, in possible embodiments, the diameter of at least one partial aperture of the aperture assembly 102-9 can be (e.g., electrically) controlled and varied by an external aperture controller 127 (which may be part of the processor unit 107), e.g., by introducing a controlled mechanical shutter device 120 between the front polarizer 101-2 and the aperture assembly 102-9, which can be controllably opened or closed. Implementing changes to the total size of the aperture assembly 102-9 exposed to the light/radiation from the imaged object (e.g., by using the controlled shutter device 120) respectively changes the proportional relative surface areas of the ring/annular concentric partial aperture B9 and of the circular concentric partial aperture A9. In order to maintain a substantially similar energy distribution between the two partial apertures, it is possible to change the orientation of the front polarizer device 101-2 using the control data/signals 107c, similar to the description hereinabove in reference to Fig. 2H. By using such a configuration, the optical system 100 can be useful for determining three-dimensional coordinates of at least some portion of the objects (130) in its field of view, while changing the F-Number of the optical system 100, which changes the resolution of the system in all of the three-dimensional (3D) axes. Accordingly, in some embodiments the setup of Fig. 2I is calibrated before use to adjust the control data/signals 107c generated by the processor unit 107, and/or the control signals generated by the aperture controller 127, to provide the desired energetic equilibrium between the light/radiation wavefronts obtained from the partial apertures A9 and B9 of the aperture assembly 102-9. It is noted that while no electrical control is available in the configurations presented in some embodiments, such as in the configuration depicted in Fig. 1, control of the relative polarization orientations of the polarizer 101 vs. 
the polarization orientations of the partial apertures of the aperture assembly 102 (and the various configurations thereof), e.g., by manually rotating the polarizer 101, can enable calibration of the optical system 100 performance and compensate for variations in the orientation of the polarization of the partial apertures 108,109, or their surface areas. Reference is now made to Fig. 2J schematically illustrating yet another configuration of the combination of the front polarizer 101 and of the aperture assembly 102, denoted as aperture assembly 102-10 in Fig. 2J. The aperture assembly 102-10 comprises a first partial aperture 180, a second partial aperture 181 and a common partial aperture 182 formed by an intersection of the partial apertures 180 and 181. The first partial aperture 180 and the second partial aperture 181 have similar/equal surface areas, and they are constructed such that the orientations of polarization of the light/radiation passed therethrough are perpendicular one with respect to the other, and also such that the orientation of polarization of each one of the first and second partial apertures 180,181 is oriented at 45° with respect to the orientation of the polarization of the front polarizer 101. As illustrated in Fig. 2J, the orientation of the polarization of the front polarizer 101 is 45° (DP with respect to the 'x'-axis), the orientation of the polarization of the first partial aperture 180 is vertical (VP), the orientation of the polarization of the second partial aperture 181 is horizontal (HP), and the orientation of the polarization of the common partial aperture 182 is 45° (DP) i.e., the same/similar to the polarization orientation of the front polarizer 101. Alternatively, but in some embodiments preferably, the common partial aperture 182 has no polarization at all (which guarantees that the same polarization of light/radiation obtained from the front polarizer 101 is maintained after passage through the common partial aperture 182). 
This construction/configuration provides that the light/radiation obtained following passage through the front polarizer 101 and the aperture assembly 102-10 is typically divided into two wavefront portions having same/similar energy, and perpendicular polarization orientation one with respect to the other. Particularly, the energy of the vertically polarized (VP) light/radiation is comprised of the energy of the light/radiation traversing through the first partial aperture 180 plus half of the energy of the light/radiation traversing through the common partial aperture 182, while the energy of the horizontally polarized (HP) light/radiation is comprised of the energy of the light/radiation traversing through the second partial aperture 181 plus half of the energy of the light/radiation traversing through the common partial aperture 182. It is noted that more complex structures/configurations of the aperture assembly 102-10 can be utilized e.g., based on the construction/configuration illustrated in Fig. 2H. It is further noted that alternative configurations of the aperture assembly 102 can be applicable such that the polarization of the portion of light/radiation emerging from each of the partial apertures of the aperture assembly 102, is not necessarily linear, as long as said polarizations are differently oriented (e.g., oppositely oriented circular polarizations). Reference is now made to Fig. 5A schematically illustrating an optical system 200 in accordance with some possible embodiments, for generation of colored, three-dimensional coordinates of imaged object(s) 230 in the field of view (FOV) of the optical system 200. The optical system 200 comprises a front polarizer 201, an aperture assembly 202 posterior to the front polarizer 201, an optical assembly (e.g., imaging lens) 203 and a detector assembly 210. The detector assembly 210 comprises in some embodiments an intermediate polarizer 205, bandpass filter 204 and a sensor device 206. 
In this specific and non-limiting example, the bandpass filter 204 is located between the intermediate polarizer 205 and the sensor device 206, but it may be similarly placed in other locations within detector assembly 210, while being anterior to sensor device 206. In addition, though the optical assembly 203 is shown in Fig. 5A (and also in Figs. 1A and 4A) between the aperture assembly and the detector assembly, in other embodiments it may be differently located along the optical path (e.g., 230t or 130t, in Figs. 5A and 1A) of the light/radiation from the objects 230,130 (respectively), or alternatively, components thereof may be distributed along the optical path (230t,130t) in any suitable form per specific implementation requirements. The sensor device 206 (e.g., CCD or CMOS imager), located at the image plane 206f (with respect to object 230), is coupled to a processor unit 207 having one or more processing units (CPU/GPU) and memories 207m configured and operable to store and execute software code for operating the optical system 200, analyze the measurement data/signals (also referred to herein as image data) acquired by the sensor device 206 and/or produce a 3D model/image of at least some portion of the imaged object 230. Optionally, the processor unit 207 is also configured and operable to present to the user information/image(s) associated with the measurement data/signals acquired by the sensor device 206, and/or with a model/image associated with the imaged object thereby constructed, whether three dimensional or not, through a display system/device (not shown). The optical assembly (e.g., imaging lens) 203 comprises in some embodiments several optical elements. Optionally, but in some embodiments preferably, the optical assembly 203 is configured such that the image generated thereby on the sensor device 206 is diffraction limited throughout the relevant wavelength range for which the optical system 200 is to be used. 
The optical system 200 can be specifically designed for operation in the visible wavelength range, typically defined within the 400 to 700nm wavelength range. Optionally, but in some embodiments preferably, the bandpass filter 204 is configured such that the wavelength band passed therethrough is no wider than 30% of the mid-wavelength of the range for which the optical system 200 is designed. Thus, in the visible range, where the mid-wavelength is approximately 550nm, the wavelength band of the bandpass filter should be less than approximately 150nm, for the optical system 200 to be operable for determination of three-dimensional coordinates of point-sources in its field of view, exemplified by object 230. Thus, in order to cover the entire visible wavelength range (400-700nm), at least two different bandpass filter elements are needed in the bandpass filter 204 for the "colored" operation of the optical system 200 e.g., as exemplified in Figs. 5A-5H wherein the bandpass filter 204 comprises three (3) different bandpass filter elements. The bandpass filter 204, which is located in some embodiments as close as possible to the image plane 206f of the optical assembly (e.g., imaging lens) 203 i.e., as close as possible to the sensor device 206, can comprise several bandpass filter elements, such as shown in Fig. 5C, used to filter therethrough portions of the visible light/radiation range. In some embodiments the bandpass filter 204 is comprised of the following three (3) different bandpass filter elements: "B" - configured to limit passage of light/radiation therethrough to the 400 to 500 nm wavelength range; "G" - configured to limit passage of light/radiation therethrough to the 500 to 600 nm wavelength range; and "R" - configured to limit passage of light/radiation therethrough to the 600 to 700nm wavelength range. 
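The bandwidth rule described above can be illustrated with a short numerical sketch (the 30% fraction and the 400-700nm range are taken from the description; the function names and the use of Python are illustrative assumptions only):

```python
import math

# Sketch of the bandpass-width rule: the band passed by the filter should
# span no more than ~30% of the system's mid-wavelength.
def max_band_nm(mid_wavelength_nm, fraction=0.30):
    """Widest permissible bandpass width for a given mid-wavelength."""
    return fraction * mid_wavelength_nm

def min_filter_elements(full_range_nm, mid_wavelength_nm):
    """Smallest number of band elements needed to tile the full range."""
    return math.ceil(full_range_nm / max_band_nm(mid_wavelength_nm))

# Visible range: 400-700nm (300nm wide), mid-wavelength ~550nm.
band = max_band_nm(550.0)                 # ~165nm upper bound on band width
elements = min_filter_elements(300.0, 550.0)
```

With these numbers at least two band elements are required, consistent with the text; the three-element "B"/"G"/"R" division of Fig. 5C satisfies the same constraint with margin.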
The bandpass filter element "B" (of the 400 to 500nm wavelength range) is sometimes referred to herein as a "Blue" filter, the bandpass filter element "G" (of the 500 to 600nm wavelength range) is sometimes referred to herein as a "Green" filter, and the bandpass filter element "R" (of the 600 to 700nm wavelength range) is sometimes referred to herein as a "Red" filter. The three bandpass filter elements are also collectively referred to as "colors" hereinafter. It is noted that the wavelength range of each one of the "B", "G" and "R" bandpass filter elements may vary, while the concept of dividing the wavelength range into three (3) different bandpass filter elements is maintained. Such configuration of the bandpass filter 204 is formed in some embodiments directly on an outer surface of the sensor device 206 e.g., implemented as a Bayer pattern, which is typically constructed as illustrated in Fig. 5C. Similar to the bandpass filter 204, the intermediate (e.g., patterned) polarizer 205 is also located in some embodiments as close as possible to the image plane 206f defined by the optical assembly 203 i.e., as close as possible to the sensor device 206. Therefore, though the intermediate (e.g., patterned) polarizer 205, the bandpass filter 204, and the sensor device 206, are presented in Fig. 5A as separate components, similar to the preceding figures, they can be embedded into a single light/radiation detector, denoted as the detector assembly 210. The detector assembly 210 is configured in some embodiments to also include a microlens array, as illustrated in Fig. 5D, wherein the microlens array 211 is located anterior to the intermediate (e.g., patterned) polarizer 205 i.e., between the optical assembly 203 and the intermediate polarizer 205. The detector assembly 210 may be a type of "color sensor", similar to, or implemented by, the IMX250MYR sensor manufactured by Sony Semiconductor Solutions Incorporated.
The light/radiation traversing through the front polarizer 201 reaches the aperture assembly 202, which is located in some embodiments at the aperture plane of the optical assembly (e.g., imaging lens) 203, having a predefined linear polarization (e.g., 45°) orientation affected by the front polarizer 201 e.g., as illustrated in Fig. 5E, where an arrowed line denotes the orientation of polarization of the light/radiation passing through front polarizer 201. With reference to Fig. 5B, the aperture assembly 202 comprises in some embodiments a circular concentric partial aperture 208 (e.g., having HP orientation), and a side partial aperture 209 (e.g., having VP orientation). Optionally, but in some embodiments preferably, the concentric and the side partial apertures 208,209 are configured such that the orientation of the polarization of the light/radiation emerging from the aperture assembly 202 after traversing through the side partial aperture 209 is substantially perpendicular to the orientation of the polarization of the light/radiation emerging from aperture assembly 202 after traversing through the concentric partial aperture 208. Optionally, but in some embodiments preferably, the aperture assembly 202 is also configured such that the amount of energy of the light/radiation emerging from the side partial aperture 209 is similar/equal to the amount of energy of the light/radiation emerging from the concentric partial aperture 208. The above-described attributes can be achieved, for example, as demonstrated in Fig. 
5B, by configuring the side partial aperture 209 and the concentric partial aperture 208 such that they have similar/equal surface areas, and by using in these partial apertures two different polarizer elements whose orientations of polarization are perpendicular one with respect to the other i.e., one polarizer element is placed at the side partial aperture 209 and the other polarizer element is located at the concentric partial aperture 208, while the orientation of polarization of each one of said polarizer elements is 45° relative to the polarization direction of the front polarizer 201. The arrowed lines shown in Fig. 5B represent the orientation of the polarization of the light/radiation passing through the concentric and side partial apertures 208 and 209. It is noted that the front polarizer 201 and the aperture assembly 202 may be alternatively configured similar, or substantially identical, to the aperture assembly configurations described hereinabove with reference to either Figs. 1A to 1E or Figs. 2A to 2J. As illustrated in Fig. 5F, the intermediate (e.g., patterned) polarizer 205 can be configured to include an array of polarizer elements (denoted by short arrowed lines) 205e e.g., in the following four different polarization orientations: a 0° horizontal polarization (HP), a 90° vertical polarization (VP), a 45° diagonal polarization (DP) and a 135° (alternately referred to as −45°) anti-diagonal polarization (ADP) (all relative to the horizontal plane). The detector assembly 210 may comprise different configurations of the bandpass filter 204 and patterned polarizer 205. Two such configurations are exemplified hereinbelow with reference to Figs. 5G and 5H. A possible embodiment of the detector assembly 210 is schematically illustrated in Fig. 
5G, wherein groups of polarization orientations 205g of the array of polarizer elements (205e) of the intermediate (e.g., patterned) polarizer 205 are aligned relative to the bandpass elements 204e of the bandpass filter 204 and the sensor elements (pixels) 206p of the sensor device 206, such that each group covers/overlaps a group of at least four (4) pixels 206p thereof associated with one or more bandpass elements of the bandpass filter 204 having the same "color" i.e., the same bandpass filtering wavelength range. Fig. 5H shows another possible configuration of the detector assembly 210 wherein groups of one or more polarizer elements 205e of the intermediate (e.g., patterned) polarizer 205-1 having the same polarization orientation are aligned relative to/with a group of bandpass elements (204g) comprising a combination of the bandpass elements of the bandpass filter 204 and a group of the sensor elements (pixels) 206p of the sensor device 206. In such embodiments, each group of bandpass elements 204g covers/overlaps a group 206g of at least four (4) sensor elements (pixels) 206p of the sensor device 206. As exemplified in Fig. 5H, each group of bandpass elements 204g has one "R" bandpass element, one "B" bandpass element, and two "G" bandpass elements, arranged such that the "G" bandpass elements within the group 204g are each located diagonally with respect to the other bandpass elements. It is however noted that other combinations of the bandpass filter elements are similarly applicable e.g., using in each group of bandpass elements one "G" bandpass element, one "B" bandpass element, and two "R" bandpass elements. 
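The alignment described above can be sketched as a small data-layout example (a minimal sketch: the 2×2 grouping, the orientation values and the "G" color choice are assumptions modeled on Figs. 5F and 5G, not a definitive sensor layout):

```python
# One 2x2 detector cell in the style of Fig. 5G: four pixels share the same
# "color" bandpass element but sit under four different polarizer
# orientations (in degrees).
POLS = [[0, 45], [90, 135]]

def detector_cell(color):
    """Return a 2x2 cell of (color, polarizer_orientation) pixels."""
    return [[(color, POLS[r][c]) for c in range(2)] for r in range(2)]

cell = detector_cell("G")
colors = {pixel[0] for row in cell for pixel in row}
angles = sorted(pixel[1] for row in cell for pixel in row)
```

Comparing energies across the four orientations of such a same-color cell is what makes the per-"color" distance analysis possible.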
It is noted in this respect that in possible embodiments each group 205g of polarization orientations of the intermediate (e.g., patterned) polarizer 205 is configured to cover/overlap more than four (4) pixels 206p of the sensor device 206 (of the same "color" or different "colors") e.g., while having each group 205g of polarization orientations covering/overlapping the same number of pixels 206p. Alternatively, in possible embodiments the intermediate (e.g., patterned) polarizer 205 can be configured such that each group 205g of its polarization orientations covers/overlaps fewer than four (4) bandpass filter elements of the bandpass filter 204, though it may not contribute to optimal performance of the optical system 200. It is noted that the intermediate (e.g., patterned) polarizer 205-1 may be configured to include groups 205g of polarizer elements (205e) having only two (2) different polarization orientations, while each group 205g of polarization orientations is aligned with a group 206g of four (4) or more adjacently located pixels 206p of the sensor device 206, each of which is associated with a different "color". The orientation of polarization of the different polarizer elements (205e) of the intermediate (e.g., patterned) polarizer 205-1 is however not necessarily as demonstrated in Figs. 5F to 5G, and different polarization orientations may be similarly applicable, while it is preferable that the different polarizations are substantially different from each other. In reference to Figs. 5A to 5H, similar to the description detailed hereinabove with reference to Figs. 1A to 1E, the energy distribution of adjacently located pixels 206p associated with the "same color" of the bandpass filter 204 and corresponding to different local polarizer elements 205e is indicative of the distance between a point-source of the imaged object 230 at the field of view of the optical system 200, and the system itself. 
Combined with grey level (energy) measured by the pixels 206p, three dimensional coordinates of the point-source, as well as its "color", can be derived. It is noted that in order to determine the distance based on the description and equations (1) and (2) detailed hereinabove, the calculation must be based on the energy distribution of measurement data/signals acquired from adjacently located sensor elements (pixels) 206p associated with the same "color" elements of the bandpass filter 204, that correspond to different polarization elements of the intermediate polarizer 205-1, rather than adjacent sensor elements (pixels) 206p that are not associated with the "same color" elements. As described hereinabove and expressed in equation (3), the ambiguity range of the distance between an object and the optical system used to provide three dimensional coordinates of said object is dependent, in some embodiments, among other parameters, on the mid-range wavelength of the wavelength range in which the system is operating. Thus, since the system described hereinabove with reference to Figs. 5A to 5H operates in several wavelength ranges simultaneously (i.e., "Red", "Green" and "Blue"), each of them corresponds to its own ambiguity range. The different ambiguity ranges can therefore assist in increasing the ambiguity range of the entire system as it is built on three (3) separate ranges, each of which is independently analyzed to determine the distance, but simultaneously analyzed to determine the ambiguity range. It is noted that the bandpass filter 204 may comprise more (or fewer) filter elements in each cell/group 204g. It is particularly noted that the bandpass filter 204 may include three (3, e.g., "R", "G", and "B") color filters and an additional narrow band filter either within the wavelength range covered by the partial "color" filters, or outside of said wavelength range, where in such case an external illumination source (not shown) may be required. 
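The way several "color" bands can extend the overall unambiguous range may be sketched numerically as follows (a hedged illustration only: the per-band ambiguity periods, units and brute-force search are arbitrary assumptions standing in for equation (3), which is not reproduced here):

```python
# Each band yields a distance estimate that repeats with its own ambiguity
# period (arbitrary units here); requiring consistency across all bands
# singles out the true distance over a much longer combined range.
def wrapped(z, period):
    """Distance estimate as observed within one band's ambiguity period."""
    return z % period

def consistent_z(observations, z_max, step=0.001):
    """Brute-force search for the smallest z reproducing every band's
    wrapped estimate (tolerance well below the grid step)."""
    z = 0.0
    while z < z_max:
        if all(abs((z % p) - e) < 1e-6 for e, p in observations):
            return z
        z += step
    return None

periods = [0.45, 0.55, 0.65]   # assumed "B", "G", "R" ambiguity periods
true_z = 1.234                 # beyond each individual period
obs = [(wrapped(true_z, p), p) for p in periods]
recovered = consistent_z(obs, 5.0)
```

No single band could distinguish 1.234 from its wrapped aliases, but the three bands jointly can, which is the sense in which the separate ranges increase the system's ambiguity range.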
In such embodiments the three (3, e.g., "R", "G", and "B") color filters of each cell/group 204g of the bandpass filter 204 can be used to determine colors of the imaged point-sources, and the 4th color filter of each cell/group 204g of bandpass filters can be used to determine the distance of the point-source according to any of the embodiments disclosed herein. Reference is now made to Fig. 6A which is a simplified pictorial illustration of an optical system 600 in accordance with a possible embodiment, useful for determining three-dimensional coordinates of at least some portion of object(s) 630 in the field of view (FOV) of said optical system 600. The optical system 600 comprises a front polarizer 601, an aperture assembly 602, which is located, optionally but in some embodiments preferably, at the aperture plane of optical system 600 and configured to manipulate a wavefront (e.g., light/radiation) emerging from object 630 towards sensor device 606, an optical assembly 603, a bandpass filter 604, an intermediate linear polarizer 605, and a sensor device 606 located at the image plane 606f of the optical assembly 603 with respect to the object 630. The sensor device 606 (e.g., CCD or CMOS imager) is coupled to a processor unit 607 configured and operable to analyze measurement data/signals (also referred to herein as image data/signals) generated by the sensor device 606. The processor unit 607 comprises one or more processing units (CPU/GPU) and memories 607m configured and operable to store program code and other data usable for operating the optical system 600, and for processing and analyzing the measurement data/signals generated by the sensor device 606. The processor unit 607 may be used to present to user(s) of the system 600 information associated with the acquired measurement data/signals, and/or the model/image thereby constructed (whether three-dimensional or not), through a display device/system (not shown). 
The optical assembly 603 (e.g., imaging lens) may comprise several optical elements, and it is designed and constructed in some embodiments such that the image thereby formed on the sensor device 606 is diffraction limited throughout the relevant wavelength range for which optical system 600 is being used. For example, but without limiting, the wavelength range of the optical system 600 can be defined within a narrowband wavelength range e.g., of up to a few nanometers. Alternatively, in possible embodiments the wavelength range of the optical system 600 can be defined within a broadband range e.g., the visible wavelength range, which is typically defined as the 400nm to 700nm wavelength range. Properties of the bandpass filter 604 can be determined based on the required wavelength range of optical system 600, and other parameters arising from the physical requirements of the optical system 600 e.g., the coherence length of the light/radiation required for the optical system 600, which is discussed in detail hereinbelow. Optionally, but in some embodiments preferably, the sensor device 606 is a monochrome sensor configuring the optical system 600 for operation with short wavelength range light/radiation (i.e., wavelength range < 30% of the mid-wavelength of said range). In such embodiments the bandpass filter 604 can be located in various positions along the optical path (630t) of the optical system 600. For example, in possible embodiments the bandpass filter 604 is positioned before/anterior to the optical assembly 603 with respect to the direction of propagation of light/radiation 630t in the system 600, or it can be implemented as part of the coatings of the imaging lens (603) or of other surfaces of the optical assembly 603. 
In some embodiments, wherein the object 630 imaged by the optical system 600 via the sensor device 606 is illuminated by a narrow band (optional, designated in a dashed-line box) light/radiation source 633, the bandpass filter 604 may be redundant/omitted. It is noted that the bandpass filter 604 may also be configured to include several bandpass filters e.g., as illustrated in Fig. 5C, in which case the bandpass filter 604 should be located in the vicinity of image plane 606f. As illustrated in Fig. 6A, the light/radiation 630t traversing through the front polarizer 601 reaches the aperture assembly 602, which is located in some embodiments at the aperture plane of the optical assembly (e.g., of imaging lens) 603, with a predefined linear polarization defined by the front polarizer 601 e.g., 45° according to the embodiment exemplified in Fig. 6B. As exemplified in Fig. 6C, the aperture assembly 602 comprises in some embodiments a central circular partial aperture 608, and a ring/annular partial aperture 609 concentric with the central circular partial aperture 608. The central circular partial aperture 608 and the ring/annular partial aperture 609 are arranged and configured such that the polarization of the light/radiation 630t emerging from the aperture assembly 602 after traversing through its ring/annular partial aperture 609 is substantially perpendicular to the polarization of the light/radiation emerging from the aperture assembly 602 after traversing through its circular partial aperture 608. Optionally, but in some embodiments preferably, the aperture assembly 602 is configured such that the amount of energy of the light/radiation 630t emerging from the ring/annular partial aperture 609 is similar/substantially equal to the amount of energy of the light/radiation 630t emerging from the circular partial aperture 608. The above-described attributes can be achieved, for example, as exemplified in Fig. 
6C, by configuring the ring/annular partial aperture 609 and the central circular partial aperture 608 with similar/substantially equal surface areas, and such that their polarization directions (e.g., by utilizing a suitable polarizer element in each partial aperture) are substantially perpendicular one with respect to the other. For example, and without limiting, the direction of polarization affected at the ring/annular partial aperture 609 can be substantially perpendicular to the polarization direction affected by the central circular partial aperture 608, while the direction of the polarizations affected by each one of the apertures 608,609 is about 45° with respect to the polarization direction affected by the front polarizer 601 (the arrowed lines shown in Figs. 6B and 6C represent the orientation of polarized light/radiation emerging from front polarizer 601 and from the aperture assembly 602, respectively). In possible embodiments wherein polarized illumination is applied by the (optional) light/radiation source 633, the polarization directions of the polarizers used at the central circular partial aperture 608 and at the ring/annular partial aperture 609 can be configured to be 45° relative to the direction of polarization of the polarized illumination produced by the (optional) light/radiation source 633 i.e., the front polarizer 601 can be redundant/omitted in such configurations. As illustrated in Fig. 6D, in some embodiments the intermediate polarizer 605 is a linear polarizer, having a polarization orientation of 45° i.e., similar to the polarization orientation of the front polarizer 601. It is noted that the intermediate polarizer 605 may be located at various locations along the optical path (630t) of the system 600, while it is located posterior to the aperture assembly 602 and anterior to sensor device 606. 
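The equal energy split described above follows directly from Malus's law, as the following sketch shows (ideal lossless linear polarizers are assumed; the angle values follow the example of Figs. 6B and 6C):

```python
import math

# Light leaving the front polarizer at 45 degrees meets the two sub-aperture
# polarizers, each oriented 45 degrees away from it (and 90 degrees from each
# other), so each partial aperture transmits cos^2(45 deg) = 1/2 of the energy.
def malus_fraction(incoming_deg, analyzer_deg):
    """Transmitted energy fraction through an ideal linear polarizer."""
    return math.cos(math.radians(analyzer_deg - incoming_deg)) ** 2

front = 45.0
central = malus_fraction(front, 0.0)    # e.g., central partial aperture 608
annular = malus_fraction(front, 90.0)   # e.g., ring/annular partial aperture 609
```

Both branches receive half the energy, while emerging with mutually perpendicular polarizations, which is exactly the balance the aperture assembly 602 is configured to provide.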
An intermediate retarder, e.g., a rectangular grid/array of retarder elements 111s (e.g., λ/4 retardation) and 111n (e.g., null-retardation) such as illustrated in Fig. 3C, may also be incorporated in some embodiments into the optical system 600. Reference is now made to Fig. 6E which is similar to Fig. 4C discussed in detail hereinabove. Fig. 6E schematically illustrates wavefronts 640 and 641 respectively emerging from the off-axis point-source P1 located on the focal plane 630f, and from the off-axis point-source P2 located away from focal plane 630f i.e., further away from optical assembly (e.g., imaging lens) 603. For the sake of simplicity, the following adjustments have been made in Fig. 6E: (1) most optical elements of the optical system 600 are not shown, such that only aperture assembly 602 and the detector assembly 610 are presented explicitly, wherein the detector assembly 610 comprises the sensor device 606, and may comprise also the intermediate linear polarizer 605, the bandpass filter 604 and a microlens array (similar to the array exemplified in Fig. 1E); (2) the ring/annular aperture 609 has been narrowed down to a thin ring at the outer perimeter of the aperture assembly 602; and (3) the central aperture 608 has been narrowed down to a small circle around the center of the aperture assembly 602. Fig. 6E schematically illustrates the wavefront traversing through optical system 600 as light/radiation rays/beams rather than arched-shaped wavefronts. In some embodiments, as noted, the aperture assembly 602 is configured to manipulate wavefront emerging from point objects in the FOV of the optical system 600 traversing towards detector device 610. 
Such manipulation is manifested by applying a predetermined phase shift between light/radiation beams portions 642a traversing through the ring/annular partial aperture 609 and the light/radiation beams portions 642c traversing through the central partial aperture 608 e.g., responsive to control signals 607c generated by the processor 607. Techniques of applying such phase shift are discussed in detail hereinbelow with reference to Figs. 7A to 7C. Optionally, but in some embodiments preferably, three or more different (but two can suffice in some cases) phase shifts β0, β1 and β2, between the two wavefronts/beams portions 642a and 642c are applied in the time domain (i.e., at different time instances). As a specific and non-limiting example, in possible embodiments, the affected phase shifts are 0 (zero) radians for the phase shift β0, π/2 radians for the phase shift β1, and π radians for the phase shift β2. Due to the construction of the optical system 600, the energy measured by the sensor device 606 is dependent on the phase shift between the light/radiation beams portion 642c and the wavefront/beams portion 642a. For example, in case the phase shift is zero when the wavefront/beams portions converge onto the sensor assembly 610, the polarization orientation of the combined wavefront is substantially 45° (as also discussed with reference to Figs. 1A to 4C). Since the polarization orientation of the intermediate polarizer 605 is 45°, all the energy is transmitted through it, and maximal energy is measured by the sensor device 606 at the relevant pixel 606p. If the phase difference between the wavefront/beams portions 642c, 642a is π/2, the combined polarization of the wavefront/beams portions is circular, and the energy measured by sensor device 606 at the relevant pixel 606p is about 50% of the maximal light/radiation energy from each partial aperture. 
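The dependence of the measured energy on the phase shift can be verified with a small Jones-vector sketch (assumptions: equal-amplitude perpendicular wavefront portions and an ideal 45° analyzer; this is an illustration, not the patented implementation itself):

```python
import cmath

# Two equal-amplitude perpendicular components (HP and VP) with a relative
# phase `delta` are projected onto a 45-degree linear polarizer; the
# transmitted fraction of the total energy comes out as (1 + cos(delta)) / 2.
def transmitted_fraction(delta):
    ex = 1.0                       # horizontally polarized portion
    ey = cmath.exp(1j * delta)     # vertically polarized portion, shifted
    along_45 = abs(ex + ey) ** 2 / 2.0        # intensity on the 45 deg axis
    return along_45 / (abs(ex) ** 2 + abs(ey) ** 2)

f_zero = transmitted_fraction(0.0)              # in-phase: full transmission
f_quarter = transmitted_fraction(cmath.pi / 2)  # circular: about 50%
f_half = transmitted_fraction(cmath.pi)         # anti-diagonal: extinction
```

The three sampled values reproduce the behaviour stated in the text: maximal energy at zero phase difference, roughly half for the circular case, and extinction when the combined polarization is anti-diagonal.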
Accordingly, different phase shifts lead to a different polarization at the sensor assembly 610, thus leading to different energies/intensities measured by the sensor device 606. Preferably, but not necessarily, the affected β0, β1, β2,… phase shifts are evenly distributed within the interval [0, π] (or [-π/2, π/2]) radians. In the configuration exemplified in Figs. 6A to 6D, the outcome results in that: • at each phase shift value (βi, where i≥0 is an indexing integer) the two wavefront/beams portions (642c,642a) converging from the partial apertures (608,609) result in a different polarization when reaching the intermediate polarizer 605; • when the two wavefront/beams portions (642c, 642a) are at a 0 (zero) phase difference a maximal intensity power is obtained/measured at the respective pixel 606p at the image plane 606f (in this specific example where the polarization orientation of intermediate polarizer 605 is similar to that of the front polarizer 601); and • when the two wavefront portions (642a, 642c) are at a π phase difference the resulting polarization at the sensor assembly is -45° (ADP) resulting in a minimum intensity power being obtained/measured at respective pixel 606p at the image plane 606f. Similar to the description hereinabove with reference to Fig. 4C, the phase shift between the wavefronts/beams portions 640 and 641 emerging from point-sources P1 and P2 respectively, up until the aperture assembly 602 can be expressed as follows: θ = (2π/λ)∙Δ  (1) (using the same annotation as in the hereinabove description) It is therefore straightforward to conclude that once a deliberate phase shift βi is applied between wavefront/beams portions 642c and 642a while traversing through the aperture assembly 602, the total phase shift between the wavefronts/beams portions once they emerge from the aperture assembly 602 is: θ + βi. 
It is further straightforward to see, based on equation (2) derived hereinabove, that the energy obtained on the respective pixels 606p of wavefront emerging from a point in the FOV of optical system 600 can be determined as follows: Ei(θ, βi) = (E0/2)∙(1 + cos(θ + βi))  (4) where E0 is the combined energies/intensities of the light/radiation reaching the partial apertures of aperture assembly 602, and other annotations are as detailed hereinabove. Thus, by measuring Ei e.g., light/radiation intensity measured by the sensor elements (pixels) 606p as received from a specific point-source in the FOV of optical system 600 (exemplified by point P2) at different points in time corresponding to different predefined βi, the phase shift θ can be determined by solving an equation system constructed based on equation (4), and thus Δ can also be determined based on equation (1), and therefore the distance Z1 to point P2 can also be determined based on the focal length of the optical system 600, and other optical characteristics thereof.
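The solution of the equation system mentioned above can be sketched as a standard three-step phase-shifting computation (the shift values βi = 0, π/2, π are one assumed choice consistent with the description, which only requires three or more different shifts; the simulated energies stand in for real pixel measurements):

```python
import math

BETAS = (0.0, math.pi / 2.0, math.pi)   # assumed deliberate phase shifts

def sample_energies(e0, theta):
    """Simulated pixel readings Ei = (E0/2) * (1 + cos(theta + beta_i))."""
    return [0.5 * e0 * (1.0 + math.cos(theta + b)) for b in BETAS]

def recover_theta(i0, i1, i2):
    """Invert the three-sample system: i0 - i2 is proportional to
    E0*cos(theta), and i0 + i2 - 2*i1 to E0*sin(theta)."""
    return math.atan2(i0 + i2 - 2.0 * i1, i0 - i2)

theta_true = 0.7                        # arbitrary test phase (radians)
i0, i1, i2 = sample_energies(2.0, theta_true)
theta_hat = recover_theta(i0, i1, i2)
```

Once θ is recovered this way, the optical path difference follows from equation (1), and the distance Z1 from the focal geometry of the system, as stated in the text.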
Fig. 7A schematically illustrates an implementation of the aperture assembly 602, wherein the phase between the wavefront portions (642c,642a) traversing through the central partial aperture 608 and the ring/annular partial aperture 609 is controllably changed by altering the effective thickness T0, T1 and T2, to affect the predetermined phase differences β0, β1 and β2 e.g., β0: 0 (zero) radians; β1: π/2 radians; and β2: π radians, by changing the thickness of the central partial aperture 608 and/or of the outer partial aperture 609 of the aperture assembly 602. The thickness change can be applied e.g., by mechanical pressure on a relatively soft or partially liquid material which may comprise either partial aperture 608, 609 or both, or by using several structures having different thicknesses in each partial aperture 608 or 609, and alternating between them. Fig. 7B schematically illustrates an implementation of the aperture assembly 602 wherein the phase difference between the wavefront/beams portions (642c and 642a) traversing through the central partial aperture 608 and the ring/annular partial aperture 609 is controllably changed by altering the index of refraction of the material n0, n1 and n2, to affect the predetermined phase differences β0, β1 and β2 e.g., β0 = 0 (zero) radians; β1 = π/2 radians; and β2 = π radians, by controllably changing the index of refraction of the central portion 608 and/or of the outer portion 609 of the aperture assembly 602. The refractive index change can be applied, e.g., by using electro-optical crystal material for either one of the partial apertures 608, 609, or for both of them, and applying electrical voltage to the applicable partial aperture. Fig. 
7C schematically illustrates an implementation of the aperture assembly 602, wherein the phase between the wavefront/beam portions (642c and 642a) traversing through the central partial aperture 608 and the ring/annular partial aperture 609 is controllably changed by altering the lens power/curvature c0, c1 and c2 of tunable optics (such as the EL-3-10, manufactured by Optotune Switzerland AG) to affect the predetermined phase differences β0, β1 and β2 e.g., β0 = 0 (zero) radians; β1 = radians; and β2 = radians, by controllably changing the curvature of the central partial aperture 608 and/or of the outer ring/annular partial aperture 609 of the aperture assembly 602. The curvature ci can be changed in possible embodiments by using "soft" material optics and applying electrically controlled mechanical pressure. Optionally, the phase difference between the wavefront/beam portions (642c and 642a) traversing through the central partial aperture 608 and the ring/annular partial aperture 609 is controllably changed by configuring the aperture assembly 602 to combine two or more of the implementations schematically illustrated in Figs. 7A to 7C e.g., by altering the thickness T0, T1 and T2 of the central partial aperture 608 and/or of the outer ring/annular partial aperture 609, and/or the index of refraction of the material n0, n1 and n2 of the central partial aperture 608 and/or of the outer ring/annular partial aperture 609 of the aperture assembly 602, and/or by altering the lens power/curvature c0, c1 and c2 of the central partial aperture 608 and/or of the outer ring/annular partial aperture 609 of the aperture assembly 602, e.g., to affect the phase differences, e.g., β0 = 0 (zero) radians; β1 = radians; and β2 = radians. It is noted that while the phase changes/differences applied between the central and ring partial apertures, 608 and 609 respectively, can be controlled as illustrated in Figs. 
7A-7C, the structure and polarization orientation of the partial apertures can be implemented also using any of the embodiments shown in Figs. 2A to 2J. Reference is now made to Fig. 8A which is a simplified pictorial illustration of an optical system 800 in accordance with a possible embodiment useful for determining three-dimensional coordinates of at least some portion of object(s) 830 in the field of view (FOV) of said optical system 800. The optical system 800 comprises a front polarizer 801, an aperture assembly 802 which is located, optionally but in some embodiments preferably, at the aperture plane of the optical system 800, an optical assembly 803, a bandpass filter 804, a controllable variable polarizer 805, and a sensor device 806 located at the image plane 806f of the optical assembly 803 with respect to the object 830. The sensor device 806 (e.g., CCD or CMOS imager) is coupled to a processor unit 807 configured and operable to analyze measurement data/signals (also referred to herein as image data/signals) acquired by the sensor device 806. In some embodiments, wherein the object 830 imaged by the detector assembly 810 of the optical system 800 is illuminated by a narrow band (optional, designated in a dashed-line box) light/radiation source 833, the bandpass filter 804 may be redundant/omitted. The processor unit 807 comprises one or more processing units (CPU/GPU) and memories 807m configured and operable to store program code and other data usable for operating the optical system 800, and for processing and analyzing the measurement data/signals generated by the sensor device 806. The processor unit 807 may be used to present to user(s) of the system information associated with the acquired measurement data/signals, and/or the model/image thereby constructed (whether three-dimensional or not), through a display device/system (not shown). 
The optical assembly 803 (e.g., an imaging lens) may comprise several optical elements, and it is designed and constructed in some embodiments such that the image thereby formed on the sensor device 806 is diffraction limited throughout the relevant wavelength range for which the optical system 800 is being used. Other than the controllably variable polarizer 805, which replaces the intermediate (e.g., patterned) polarizer 105, the optical system 800 is similar in many aspects to the optical system 100 described hereinabove with reference to Figs. 1A to 5H and can be accommodated in the same way, including various configurations of the aperture assembly 802, which can be implemented as exemplified in Figs. 2A to 2J. The bandpass filter 804 can be a narrowband wavelength range filter, or it may include different "colors" (and be located in the vicinity of the focal plane 806f). As exemplified in Fig. 8B, the front polarizer 801 is configured such that its polarization orientation is at 45°, while the aperture assembly 802 (as illustrated in Fig. 8C) comprises the central partial aperture 808 limiting the passage of light/radiation therethrough to horizontally polarized light/radiation components, and the ring/annular partial aperture 809 limiting the passage of light/radiation therethrough to vertically polarized light/radiation components. Both partial apertures (808, 809) are also configured to allow light/radiation therethrough such that the portion of light/radiation energy traversing through the central partial aperture 808 is typically similar to the portion of light/radiation energy traversing through the ring/annular partial aperture 809. The controllable variable polarizer 805 can be positioned at various locations along the optical path (830t), as long as it is located posterior to the aperture assembly 802 and anterior to the sensor device 806. As illustrated in Fig. 
8D, the variable polarizer 805 is configured such that light/radiation traversing through it emerges at a polarization orientation θi (hereafter referred to as the polarization orientation of the controllable variable polarizer 805). The polarization orientation θi of the controllable variable polarizer 805 can be controlled and changed at different points in time, e.g., by control signals 807c generated by the control processor unit 807; thus, at any specific time (ti) the polarization orientation is denoted as θi. The polarization orientation θi of the controllable variable polarizer 805 can be controlled electrically (e.g., using liquid crystal), mechanically (e.g., using a physically rotating variable polarizer 805), or in any suitable alternative method. As exemplified hereinabove with reference to Fig. 4C using equation (2), the energy measured by a sensor element (pixel) 806p of the sensor device 806 corresponds to the phase shift between the wavefronts/beams emerging from a point-source in the FOV of the optical system 800 reaching the ring/annular aperture 809 vs. the central aperture 808 (said phase shift is denoted as φ), and to the polarization orientation of the controllable variable polarizer 805, θi [replacing the annotation α in equation (2)]. By applying the variable polarization orientation θi of the controllable variable polarizer 805 at different points in time (ti), the energy measured at each such point in time can be expressed as follows: E(φ, θi) = (E0/2)·(1 + sin 2θi · cos φ) (5) where the annotations in equation (5) (other than θi, which replaces α) are similar to those described with reference to equation (2). By measuring the energy (i.e., intensity) of the light/radiation reaching the sensor elements (pixels) 806p of the sensor device 806 at different points in time (ti), corresponding to at least two (2) different polarization orientations θi 
affected by the controllable variable polarizer 805, an equation system can be constructed based on equation (5), from which the phase shift φ can be derived, and hence the distance from (1) the optical system 800 to (2) the point-source in the field of view of optical system 800 from which said light/radiation has emerged. It is noted that, optionally, but in some embodiments preferably, the polarization orientation θi may not be spatially uniform throughout the controllable variable polarizer 805, though in such case the controllable variable polarizer 805 is required to be located in the vicinity of the sensor device 806 i.e., in the vicinity of the focal plane 806f. It is further noted that throughout the present disclosure, in addition to determining the distance of a point in the FOV of an optical system according to any of the embodiments disclosed herein, the local energy, which has been denoted as E0 in equations (2), (4), and (5), is also determined, providing a "grey scale" level for each pixel, and can be presented to the user by means described hereinabove as part of the implementation of processor 107/607/807. Fig. 9A is a flowchart illustrating a distance measurement procedure 150 according to some possible embodiments. In this specific and non-limiting example, an initial measurement (u1) is carried out at an initial state of the aperture assembly, in which its apertures are configured to pass the light/radiation from the object without applying any phase changes thereto. Next (u2), a first predefined phase difference is applied to the light/radiation from the object passed through the two or more partial apertures of the aperture assembly, and a corresponding measurement (u3) is carried out. Optionally, but in some embodiments preferably, one or more additional phase differences (u21) are applied to the light/radiation from the object passed through the two or more apertures of the aperture assembly, and corresponding measurements (u31) are carried out.
Depending on the application and configuration of the optical system, additional phase changes can be applied as necessary, and corresponding intensity measurements are made. After measuring the intensity of the light/radiation received by the sensor device for two or more (and in some embodiments three or more) different phase differences applied to the light/radiation from the object passed through the two or more partial apertures of the aperture assembly, the distance of at least one point on the object is determined (u4) e.g., by solving an equation system constructed based on equation (4) to determine the phase shift φ and thereafter using equation (1) to determine the distance. Fig. 9B illustrates a process 150', similar to the process 150 described hereinabove with reference to Fig. 9A, that can be carried out using the embodiments utilizing controlled variable polarization orientation, where, rather than applying a phase change between the different partial apertures of the aperture assembly, different polarization orientations (θi) are applied to the variable controlled polarizer and the corresponding intensity measurements are taken, as described in detail hereinabove with reference to Figs. 8A-8D. The steps u1', u2', u3', u4', u21' and u31' of the process 150' are mutatis mutandis similar to the corresponding steps u1, u2, u3, u4, u21 and u31 of the process 150, and thus will not be described herein in detail, for the sake of brevity. Relative terms such as "lower," "upper," "horizontal," "vertical," "above," "below," "up," "down," "top" and "bottom", as well as derivatives thereof (e.g., "horizontally," "downwardly," "upwardly," etc.), and similar adjectives relating to the orientation of the described elements/components, refer to the manner in which the illustrations are positioned on the paper, and not to any limitation on the orientations in which these elements/components can be used in actual applications. 
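The polarization-orientation variant (process 150' of Fig. 9B) can be sketched similarly. The snippet below is a hedged illustration only: it assumes equation (5), E(φ, θi) = (E0/2)·(1 + sin 2θi · cos φ), holds exactly, uses illustrative polarizer orientations, and fits the two unknowns by a simple least-squares step; the function name and the fitting approach are not taken from the disclosure.

```python
import math

def recover_phase_variable_polarizer(samples):
    """Recover phi and E0 from (theta_i, E_i) pairs modeled per equation (5):
        E_i = (E0/2) * (1 + sin(2*theta_i) * cos(phi)).
    Writing a = E0/2 and b = (E0/2)*cos(phi) makes each sample the linear
    relation E_i = a + b*sin(2*theta_i), solved here via the normal equations.
    """
    n = len(samples)
    sx = sum(math.sin(2 * t) for t, _ in samples)
    sxx = sum(math.sin(2 * t) ** 2 for t, _ in samples)
    sy = sum(E for _, E in samples)
    sxy = sum(math.sin(2 * t) * E for t, E in samples)
    det = n * sxx - sx * sx            # nonzero when at least two distinct sin(2*theta_i) exist
    a = (sxx * sy - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    E0 = 2 * a
    phi = math.acos(max(-1.0, min(1.0, b / a)))  # cos(phi) leaves the sign of phi ambiguous
    return phi, E0
```

At least two orientations θi with distinct sin 2θi values are needed, matching the at-least-two requirement stated hereinabove.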
It should be understood that throughout this disclosure, where a process or method is shown or described, the steps/acts of the method may be performed in any order and/or simultaneously, and/or with other steps/acts not illustrated/described herein, unless it is clear from the context that one step depends on another being performed first. In possible embodiments, not all of the illustrated/described steps/acts are required to carry out the method. As described hereinabove and shown in the associated figures, the present application provides designs and techniques for determining the distance, as well as the grey scale and/or color intensity, of at least some portion of imaged object(s) located in a field of view of an optical system. Derivation of the distance is performed in possible embodiments by measuring the energy distribution of polarized light e.g., using a polarizer array, as well as an aperture assembly configured to divide a wavefront emerging from the imaged object(s) into at least two partial wavefronts with similar/equal energy and different polarization orientations. As noted hereinabove, it is possible to evaluate the relevant parameters of the object(s) in the FOV of the optical system (e.g., distance, color intensity) even when the partial apertures do not yield similar/equal energy portions, and/or when the polarization orientations of the light/radiation portions passing through the different partial apertures are not perpendicular one with respect to the other. Such configurations, though not optimal, are possible; the equations expressing them are more complex and are not detailed in the current disclosure for the sake of brevity. The determination of the distance can be done by derivation of the phase difference between at least two partial wavefronts, which is indicative of the required distance. This technique is usable for 3D imaging and related methods. 
While particular embodiments of the application have been described, it will be understood, however, that the disclosure hereof is not limited thereto, since modifications may be made by those skilled in the art, particularly in light of the foregoing teachings. As will be appreciated by the skilled person, the invention can be carried out in a great variety of ways, employing more than one technique from those described above, all without exceeding the scope of the claims.

Claims (38)

298239/ - 50 - CLAIMS:
1. An imaging device comprising: a detector assembly having sensor elements configured to measure intensity of light/radiation thereby received and generate measurement data/signals indicative thereof; an aperture assembly having at least two partial apertures configured to divide light/radiation received from an object and passed through said aperture assembly into at least two portions having different polarization orientations; a polarizer arrangement located between said detector assembly and said aperture assembly, said polarizer arrangement configured to spatially affect at least two different polarization orientations to light/radiation passing therethrough; and a processor configured to process the measurement data/signals from the sensor elements associated with said spatially affected at least two different polarization orientations of said at least two portions of light/radiation from said aperture assembly, and determine based thereon a distance of said object from said imaging device.
2. The device of claim 1 comprising an optical assembly configured to direct light/radiation from the aperture assembly to the detector assembly.
3. The device of claim 1 or 2 wherein the polarizer arrangement is configured to define a spatial distribution of at least two different polarization orientations, and wherein the processor is configured to process the measurement data/signals from respective at least two sensor elements of the detector assembly associated with said spatially affected at least two different polarization orientations of the at least two portions of light/radiation from said aperture assembly.
4. The device of claim 3 wherein the polarizer arrangement is embedded in the detector assembly.
5. The device of any one of the preceding claims wherein the aperture assembly is configured to apply a predefined phase difference between the at least two portions of the light/radiation, and wherein the processor is configured and operable to determine the distance of the object based on said predefined phase difference and the measurement data/signals.
6. The device of any one of the preceding claims comprising at least one polarizer configured to apply a predefined polarization orientation to the light/radiation received by the aperture assembly.
7. The device of any one of the preceding claims comprising a light/radiation source configured to illuminate the object with light/radiation having a defined polarization orientation.
8. The device of any one of the preceding claims comprising a light/radiation source configured to illuminate the object with light/radiation having a defined band of wavelengths.
9. The device of any one of the preceding claims comprising one or more bandpass filters each configured to limit passage of light/radiation therethrough towards the sensor elements of the detector assembly to a different range of wavelengths.
10. The device of claim 9 wherein the one or more bandpass filters form a defined spatial distribution of bandpass filter elements.
11. The device of claim 9 or 10 wherein the one or more bandpass filters are arranged such that there is no spatial overlapping portions therebetween.
12. The device of any one of claims 9 to 11 wherein the one or more bandpass filters form a spatial distribution of at least one bandpass element of a predefined wavelength range and at least one passthrough non-filtering element allowing complete passage of the light/radiation therethrough.
13. The device of any one of the preceding claims wherein the polarizer arrangement comprises a plurality of polarizer elements configured to form a defined spatial distribution of polarizer elements arranged such that the polarization orientation of at least some of said polarizer elements is different from polarization orientations of polarizer elements adjacently located thereto.
14. The device of claim 13 wherein one or more bandpass filters are configured to direct light/radiation of certain wavelength ranges onto respective polarizer elements of the polarizer arrangement having certain polarization orientations.
15. The device of claim 13 wherein the polarizer elements are configured to direct light/radiation of certain polarization orientations onto respective one or more bandpass filters having certain wavelength ranges.
16. The device of claim 14 or 15 wherein the wavelength ranges of the one or more bandpass filters or bandpass filter elements are within the "Red", "Green", "Blue" wavelength ranges.
17. The device of any one of claims 9 to 16 wherein one or more bandpass filters are embedded as part of the detector assembly.
18. The device of any one of the preceding claims comprising a microlens array located anterior to the sensor elements of the detector assembly.
19. The device of any one of the preceding claims comprising a retarder element configured to affect a desired polarization to light/radiation.
20. The device of any one of the preceding claims comprising an array of retardation elements in the detector assembly and having a defined arrangement of different retardation elements spatially alternatingly distributed therein.
21. The device of claim 20 wherein the array of retardation elements is configured to form a rectangular alternating grid of the retardation elements to thereby pass polarized light/radiation onto some of the polarizer elements of the array of polarizer elements having certain one or more polarization orientations.
22. The device of any one of the preceding claims wherein the processor is configured to further determine one or more colors and/or grey-scale levels from the measurement data for each sensor element of the sensor device.
23. The device of any one of the preceding claims wherein a relative rotation of polarizations of some of the polarization elements is used to calibrate the device.
24. The device of any one of the preceding claims wherein a combined rotation of some of the polarization elements is used to manipulate the energy distribution measured by the detector assembly, to thereby improve sensitivity.
25. An imaging method comprising: dividing light/radiation from an object into at least two portions having different polarization orientations; spatially affecting at least two different polarization orientations to said at least two light/radiation portions; measuring intensity of the spatially polarized light/radiation and generating measurement data/signals indicative thereof; and processing the measurement data/signals and determining based thereon a distance of said object.
26. The method of claim 25 wherein the dividing of the light/radiation into at least two portions is applied to light/radiation components having a predefined polarization orientation.
27. The method of any one of claims 25 to 26 comprising radiating the object with light/radiation having a defined polarization orientation.
28. The method of any one of claims 25 to 27 comprising illuminating the object with light/radiation having a defined band of wavelengths.
29. The method of any one of claims 25 to 28 comprising filtering the light/radiation by one or more bandpass filters having different wavelength ranges.
30. The method of claim 29 comprising forming a defined spatial distribution of the filtering of the light/radiation by the one or more bandpass filters.
31. The method of any one of claims 25 to 30 comprising forming a defined distribution of polarization orientations by the spatially affecting of the at least two different polarization orientations to the at least two light/radiation portions.
32. The method of claim 31 comprising associating bandpass filters of the one or more bandpass filters with polarization orientations of the defined distribution of polarization orientations.
33. The method of claim 31 comprising associating polarization orientations of the defined distribution of polarization orientations with bandpass filters of the one or more bandpass filters.
34. The method of any one of claims 25 to 33 comprising affecting a phase shift to light/radiation so as to affect a desired polarization thereto.
35. The method of any one of claims 25 to 34 comprising forming a defined spatial distribution of different phase shifts to the at least two light/radiation portions.
36. The method of any one of claims 25 to 35 comprising determining one or more colors and/or grey-scale levels from the measurement data.
37. The method of any one of claims 25 to 36 comprising a calibration step including affecting a relative rotation of polarization orientation to one or more components of the light/radiation from the object.
38. The method of any one of claims 25 to 36 comprising performing a combined rotation of polarization orientations to components of the light/radiation from the object to cause an energy distribution in the measurement data.
IL298239A 2022-11-15 2022-11-15 Imaging system and method IL298239B2 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
IL298239A IL298239B2 (en) 2022-11-15 2022-11-15 Imaging system and method
PCT/IL2023/051175 WO2024105664A1 (en) 2022-11-15 2023-11-14 Imaging system and method
EP23810179.4A EP4551902B1 (en) 2022-11-15 2023-11-14 Imaging system and method
IL320027A IL320027A (en) 2022-11-15 2023-11-14 Imaging system and method
JP2025526239A JP2025540911A (en) 2022-11-15 2023-11-14 Imaging systems and methods
KR1020257019015A KR20250107890A (en) 2022-11-15 2023-11-14 Imaging systems and methods
CN202380071981.6A CN120112769A (en) 2022-11-15 2023-11-14 Imaging system and method
US19/081,428 US12429325B2 (en) 2022-11-15 2025-03-17 Imaging system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL298239A IL298239B2 (en) 2022-11-15 2022-11-15 Imaging system and method

Publications (3)

Publication Number Publication Date
IL298239A IL298239A (en) 2022-12-01
IL298239B1 IL298239B1 (en) 2024-02-01
IL298239B2 true IL298239B2 (en) 2024-06-01

Family

ID=89845888

Family Applications (1)

Application Number Title Priority Date Filing Date
IL298239A IL298239B2 (en) 2022-11-15 2022-11-15 Imaging system and method

Country Status (1)

Country Link
IL (1) IL298239B2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6639683B1 (en) * 2000-10-17 2003-10-28 Remy Tumbar Interferometric sensor and method to detect optical fields
JP2008292939A (en) * 2007-05-28 2008-12-04 Graduate School For The Creation Of New Photonics Industries Quantitative phase microscope
US20170052430A1 (en) * 2014-05-14 2017-02-23 Sony Corporation Imaging device and imaging method
US20200116558A1 (en) * 2018-08-09 2020-04-16 Ouster, Inc. Multispectral ranging/imaging sensor arrays and systems

