IL258918B - Electro-optical means to support efficient autonomous car neural processing by imprinting depth cues in the raw data - Google Patents

Electro-optical means to support efficient autonomous car neural processing by imprinting depth cues in the raw data

Info

Publication number
IL258918B
Authority
IL
Israel
Prior art keywords
light source
image
camera
cameras
electro
Prior art date
Application number
IL258918A
Other languages
Hebrew (he)
Other versions
IL258918A (en)
Inventor
Korech Omer
Original Assignee
Korech Omer
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korech Omer filed Critical Korech Omer
Priority to IL258918A priority Critical patent/IL258918B/en
Publication of IL258918A publication Critical patent/IL258918A/en
Publication of IL258918B publication Critical patent/IL258918B/en


Classifications

    • G – PHYSICS
    • G01 – MEASURING; TESTING
    • G01C – MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C1/00 – Measuring angles
    • G01C1/02 – Theodolites
    • G01C1/04 – Theodolites combined with cameras
    • B – PERFORMING OPERATIONS; TRANSPORTING
    • B60 – VEHICLES IN GENERAL
    • B60R – VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00 – Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G – PHYSICS
    • G01 – MEASURING; TESTING
    • G01C – MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 – Measuring distances in line of sight; Optical rangefinders

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Description

Inventor Omer Korech, Ph.D.
Title Electro-optical means to support efficient autonomous car neural network processing by imprinting depth cues in the raw data Field of Invention The technology relates to the general field of depth sensing for autonomous cars. It has specific application to reduce the amount of necessary computation needed for depth extraction from 2D images.
Background
One of the most significant challenges for autonomous vehicles is equipping the car with an accurate depth sensor without increasing the cost significantly.
Depth sensing for autonomous vehicles usually relies on either Time-of-Flight (ToF) sensors (such as LiDAR, RADAR and ultrasonic) or image sensors supported by computer-vision algorithms. The disclosed invention belongs to the second group of sensors.
In the former group (ToF sensors), a signal (light or sound, pulsed or modulated) is first transmitted. A sensor then detects the reflection off the target object and records the signal's time of flight. Knowing that time and the constant speed of light or sound, the system can calculate how far away the target object is.
The latter (image-based) sensors are potentially richer in information (such as object recognition) and typically cheaper in terms of the sensor bill of materials. However, extracting explicit depth values from images currently necessitates heavy computation (numerous instructions), which manifests as long processing times or expensive processors. As a benchmark example: applying contemporary deep neural network algorithms to extract the information from the data of a single camera requires Nvidia's Titan X (a very expensive, state-of-the-art GPU) in order to process frames per second. A more realistic approach to extracting depth information from images involves a camera array rather than a single camera. The camera-array approach uses multiple cameras placed at different positions to capture multiple images of the same target, and a depth map is calculated based on geometry. This is also called "stereoscopic view" or "stereo view". The additional cameras add little information (as evidently humans can drive with one eye closed); however, the information is easier to extract with known algorithms compared to the single-camera case. Nevertheless, the camera-array approach still necessitates intensive computation in order to find matching points in multiple images, also known as registration. The registration computation might be too challenging to process satisfactorily in real time with current affordable hardware.
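For reference (a textbook relation, not part of this disclosure), the camera-array approach ultimately recovers depth by triangulation: with focal length f, camera baseline B and measured disparity d between matched points, the depth is Z = f · B / d. The dominant computational cost is the registration step that produces d for every point of interest.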
Summary
The disclosed configurations of cameras, light sources and filters result in images of the object and its corresponding shadow that enable calculating the distance to the object while circumventing the challenge of finding matching points in multiple frames (a frame being a full digital image captured by a camera), thus reducing the computation significantly.
Counterintuitively, the angle between the image of an object and the image of the object's shadow does not depend (or depends very weakly) on the object distance, under the conditions imposed by the electro-optical system described in this patent application. The value of this angle is a strong function of the known camera location relative to the light source; hence it is known, or at least highly predictable. Consequently, the image of an object and the image of the object's shadow may be matched easily (in computational terms) within the same frame. For non-flying objects, the intersection between an object and its shadow occurs on the ground. Since the ground height is known, only two more dimensions are needed to completely describe the 3D location of the object. The intersection between the contour of the shadow and the contour of the object in the image provides this 2D information. Note that the intersection between the continuation of the shadow contour and the object contour may be located in the frame even when the intersection itself is not in the camera's direct line of sight, i.e. when it is obscured by another object. The scope of this invention is limited to the electro-optical layout; the above-described algorithm is given for conceptual illustration purposes, because its validity is limited to synthetic examples. In a realistic implementation, the enhanced data will be analyzed using contemporary neural networks (denoted hereafter as NN). The NN framework is fundamentally numerical (non-analytical). As such, NNs are less amenable to rigorously proving general image-enhancement claims regarding the a-priori required computation. Nevertheless, it is well understood (Henry W. Lin, 2017) that the optimal NN depth is directly related to the number of nonlinear operations needed to extract the data. Thus, since the analytical extraction of the enhanced data is simpler, as described above, it is expected that the enhanced data will enable a shallower and smaller network to perform the same task. A smaller network (fewer neurons) results in faster training and faster real-time processing.
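As a purely conceptual sketch (not part of the disclosed electro-optical layout), the following hypothetical Python fragment illustrates how the predicted, distance-independent angle could be used to pair object edge segments with shadow edge segments within a single frame; the edge representation and the tolerance value are assumptions for illustration only.

    def pair_object_and_shadow_edges(object_edges, shadow_edges,
                                     predicted_angle_deg, tol_deg=3.0):
        """Pair object edges with shadow edges whose orientation difference
        matches the predicted angle (known from the camera/light-source geometry).

        object_edges, shadow_edges: lists of (x, y, orientation_deg) tuples, e.g.
        produced by any off-the-shelf edge/line detector (an assumption here).
        """
        pairs = []
        for ox, oy, o_ang in object_edges:
            for sx, sy, s_ang in shadow_edges:
                # Acute angle between the two (undirected) edge directions.
                diff = abs((o_ang - s_ang + 90.0) % 180.0 - 90.0)
                if abs(diff - predicted_angle_deg) <= tol_deg:
                    pairs.append(((ox, oy), (sx, sy)))
        return pairs

Because the search is confined to one frame and to one known angle, the matching cost stays far below that of full two-frame registration.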
Brief Description of the Drawings
Various embodiments of the invention are disclosed in the following detailed description. The archetype of the embodiments is represented in the accompanying drawings.
Fig. 1 illustrates the relative spatial locations of the components whose location is a meaningful parameter: the light source (1); the camera, which is composed of a sensor (2) placed in the image plane of the lens (3); and the road (4), whose distance relative to the optical system is determined by the car (5).
Fig. 2 illustrates the corresponding abstracted optical system. The lens is represented by a pinhole (6); the sensor (2) is enlarged for clarity in the subsequent ray-trace model. A typical emitted ray (7) from the light source (1) is false-colored blue, while the corresponding ray (8) scattered from the road (4) towards the pinhole (6) is false-colored yellow (in reality the ray's color/wavelength is conserved upon scattering).
Fig. 3 shows a ray trace of the above optical system, with the addition of an external object (9) whose distance is to be estimated by the system. Emitted rays (7) from the source (1) towards the object (9) and the road (4) are relevant and hence traced. Conversely, among the scattered rays, only rays scattered towards the aperture (6) are sensed by the sensor (2) and therefore traced (8), with proper weighting. The resulting image (denoted hereafter as a 'frame') on the sensor plane (2) is calculated by commercial ray-trace software and shown in figure 4.
Fig. 4 shows the image formed on the sensor due to a close object, as modeled in figure 3. Note that one of the object contour lines, labeled (10), has a corresponding contour line in the shadow, labeled (11). Like any line pair in Euclidean geometry, there is a certain angle between them, denoted (12). For the sake of completeness, it should be noted that the image quality has been artificially enhanced by two manipulations for clarity. First, an upper limit on the grayscale has been imposed; the limit is only a few decibels (dB) lower than the maximum intensity (for comparison, a common camera dynamic range is typically a few tens of dB). Secondly, only the image region of interest is traced, i.e. the road section between the object and the car has been ignored.
In order to demonstrate the independence between the object distance and the above-mentioned angle, the ray trace has been repeated with the object distance increased significantly.
Fig. 5 shows a ray trace of essentially the same electro-optical system as in figure 3; however, the object distance from the car has been greatly extended. The resulting image on the sensor plane is shown in figure 6.
Fig. 6 shows the image formed on the sensor due to a far object, as modeled in figure 5. Note that, similarly to fig. 4, one of the object contour lines, labeled in fig. 6 as (13), has a corresponding shadow contour line, labeled (14). The angle between them, denoted (15), has the same value as the corresponding angle (12) in the case of the close object. As a side note, the main differences between the images of the close and far objects are the size of the object images and the image intensities.
Fig. 7 illustrates a more sophisticated image-acquisition system that may replace the camera in the previous drawings. In this system there are two cameras, each composed of a sensor (16) and a lens (17). The cameras are placed in mirror-image locations with respect to a beam splitter (18). The purpose of the system is to differentiate the light (19) emanating from the car's light source from ambient light, by choosing a beam splitter whose Transmission/Reflection (T/R) ratio for the light source differs significantly from its T/R ratio for ambient (external) light.
Fig. 8 illustrates another system with the same functionality as the system in figure 7, with the difference that in fig. 8 the two sensors (20) share a common lens (21), while in fig. 7 each sensor (16) has its own lens (17). The configuration in figure 8 has more demanding optical requirements (for the back focal length of the lens (21), the beam splitter (22) coatings and the alignment of the sensors (20)), but may save the cost and space of one lens.
Definitions:
Frame or Digital Image – the full image captured by the camera.
Object image – the part of the image within the frame that corresponds to the object.
Object – the item in 3D space whose distance is to be estimated.
External light – light that emanates from sources other than the source described in the figures, e.g. the sun, streetlights, other cars, or other light sources within the car.
T/R - Transmittance/Reflectance ratio of light, dictated by the beam splitter. In the context of this patent T/R is generalized to include the effect of filters on the sensors, in addition to the beam splitters.
Detailed Description and Preferred Embodiment
I. Overview and General Discussion
The observation that enables the patent is that, under the illumination conditions of the electro-optical system (first described in fig. 1), the angle between a contour segment of the object and the corresponding shadow is independent of the object distance. This observation is exemplified by the ray traces in figures 4 and 6, and calculated in Appendix I – Projective geometry calculations.
Based on the above observation, it should be easier to match the image of an object contour with its corresponding shadow (as suggested in this patent) within the same digital frame than to match corresponding images captured from different points of view in different frames (as practiced in the prior art, either directly by means of registration or indirectly by neural networks).
Once the image of an object contour and its corresponding shadow are matched, their intersection may easily be calculated in the 2D image space. Adding the 1D prior knowledge that the object and shadow contours intersect on the ground results in a complete 3D characterization of the intersection point in real (3D) space. Hence the object distance may be deduced.
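As a minimal illustrative sketch (a simplified pinhole model, not part of the claimed electro-optical layout), the following hypothetical Python fragment back-projects the matched intersection pixel onto the ground plane; the intrinsics, the zero-tilt camera and the flat road are all assumptions made only for this illustration.

    def intersection_pixel_to_ground_point(u, v, fx, fy, cx, cy, camera_height_m):
        """Back-project pixel (u, v) onto a flat ground plane.

        Camera convention (assumption): x right, y down, z forward, optical axis
        parallel to the road, camera mounted camera_height_m above the road.
        Returns the 3D point (x, y, z) in camera coordinates; z is the forward
        distance to the object/shadow intersection.
        """
        # Ray direction through the pixel in normalized camera coordinates.
        dx = (u - cx) / fx
        dy = (v - cy) / fy
        dz = 1.0
        if dy <= 0:
            raise ValueError("pixel does not look down towards the ground plane")
        # The ground plane is y = camera_height_m (the y axis points down).
        t = camera_height_m / dy
        return (t * dx, t * dy, t * dz)

    # Example usage (all numbers are illustrative assumptions):
    # x, y, z = intersection_pixel_to_ground_point(800, 620, fx=1200, fy=1200,
    #                                              cx=640, cy=480, camera_height_m=0.7)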
The frame images presented in figure 4 and figure 6 were obtained by considering a single light source (the light source of the electro-optical system). In reality, however, there are numerous other, external light sources that may compromise the contrast of the desired image or add misleading features. In order to factor out the effect of the external light sources, the proposed embodiments use two cameras and a beam splitter, as described in either figure 7 or figure 8. Had the beam splitter split the light 50/50 (T/R), the two cameras would have captured nominally identical frames. However, the beam splitter and/or filters are engineered such that the frames captured by the two cameras are practically identical only when illuminated by an external light source; thus, pixel-wise subtraction of one image from the other factors out the influence of the external light sources. At the same time, when illuminated by the intended light source, the intensity of each pixel in the image of one camera is a constant multiple of the corresponding pixel in the image of the other camera. Thus, pixel-wise subtraction of the images, when all light sources (external and intended) are present, yields the same information content as if the frame had been captured in the absence of the external light sources (up to an irrelevant multiplication by a constant). Trivially, the same technique may be extended to cases where the beam splitter T/R is not exactly 50/50 for external light. To be explicit, it is sufficient that the T/R ratio for the external light sources differ from the T/R ratio of the same system for the intended light source.
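A minimal numerical sketch of this cancellation, assuming idealized noise-free sensors and the hypothetical ratios below (an illustration, not a specification):

    import numpy as np

    def remove_external_light(frame_a, frame_b, external_ratio=1.0):
        """Pixel-wise subtraction that cancels the external-light contribution.

        frame_a, frame_b: frames from the two cameras behind the beam splitter.
        external_ratio: ratio of the two cameras' responses to external light
        (1.0 means the frames are nominally identical under external light, as in
        the 50/50 description; any other known value works the same way). The
        intended light source contributes with a different ratio, so its
        contribution survives the subtraction (scaled by a constant).
        """
        return frame_a.astype(np.float32) - external_ratio * frame_b.astype(np.float32)

    # Toy example (illustrative numbers): intended source seen only by camera A.
    # external = np.full((4, 4), 100.0)
    # intended = np.zeros((4, 4)); intended[2, 2] = 50.0
    # frame_a, frame_b = external + intended, external
    # remove_external_light(frame_a, frame_b)  # leaves only the intended pattern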
The following is a description of illustrative embodiments that demonstrate the principles of the invention. The embodiments are provided to illustrate aspects of the invention, but the invention is not limited to any embodiment. The scope of the invention encompasses numerous alternatives, modifications and equivalents; it is limited only by the claims.
Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. However, the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
II. Detailed Discussion of a Preferred Implementation of the Invention
First embodiment – single light source, 50/50 beam splitter, and polarizers
In this embodiment, the camera pair is placed at one side of the car (left or right). The car's headlamp at the opposite side serves as the intended illumination source. A polarization filter is applied to the aforementioned headlamp cover, such that the emanating light is linearly polarized. One camera sensor should be covered with a polarizer (a.k.a. analyzer) whose polarization axis is parallel to that of the headlamp polarizer, while the other camera should have a polarizer that transmits the orthogonal polarization. For simplicity, assume the beam splitter is a 50/50 T/R non-polarizing beam splitter. In this configuration, the two cameras will capture nominally identical frames when illuminated by the nominally non-polarized external light. At the same time, only one of the two cameras (the camera whose polarizer axis is parallel to that of the light source's polarizer) will be influenced by the intended light source. Thus, the pixel-wise difference between the frames captured by the cameras will result in a frame as if it had been captured under the intended light source alone (up to shot noise and possibly saturation effects, which have been neglected for brevity in this description).
Note that, had a polarizing beam splitter been used, the two polarizer (a.k.a. analyzer) sheets on top of the camera sensors would be redundant; moreover, the SNR would have been increased. However, since polarizer sheets are typically cheaper than a heavy-duty beam splitter, the preferred embodiment uses polarizers. Nevertheless, the fact that a polarizing beam splitter could be used illustrates that the requirement for a 50/50 non-polarizing beam splitter matters only for simplicity of the description; in practice, a cheaper beam splitter can be used.
Second embodiment – two light sources, two beam splitters, and polarizers
The above-mentioned first embodiment may be duplicated, as a left-right mirror image, with a minor modification: the polarizers of the mirror-image system are rotated by 45 degrees around the optical axis (the axis perpendicular to the polarizer sheets). For definiteness of the description, let us denote the left headlamp as the light source of subsystem A. The other ingredients of subsystem A are the camera pair on the right side, along with their polarizers and beam splitter. Similarly, the right headlamp is denoted as the light source of subsystem B. The polarization axis of light source B (the right headlamp) is at 45 degrees with respect to the polarization axis of each of the two analyzers of subsystem A. Hence, each of the two analyzers of subsystem A is influenced by the right headlamp light (belonging to subsystem B) in the same manner. Therefore, the light emanating from the right headlamp (B) will not be distinguished from other non-polarized external light, because its influence will be factored out during the pixel-wise subtraction. At the same time, the resultant frame of subsystem B (resulting from pixel-wise subtraction of the camera pair in subsystem B) will only be influenced by subsystem B's light source (the right-side light source). In effect, the two subsystems are independent. The two mirror-image subsystems (mirror images up to a rotation of the polarization axis) may be designed even more symmetrically if the polarization axes are chosen to be at 22.5° (= 45/2 degrees) with respect to either the horizon or the vertical. In this case, the mirror-image symmetry is complete (and not just up to a rotation of the polarization axis). The fully symmetric configuration may be advantageous if the stereoscopic viewpoints of the two mirror-image subsystems are to be utilized as in the prior art (in addition to the independent depth cues of each subsystem that form the subject of this invention). The advantage of the fully symmetric configuration is reduced sensitivity to unexpected polarized stray light, assuming the stray-light polarization is nearly horizontal or vertical, which might be the case due to the left-right symmetry of abundant objects. The reduced sensitivity of the full system as a stereoscopic system stems from the similarity in sensitivity of the two subsystems.
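A minimal numerical check of the 45° cancellation, using Malus's law (I = I0·cos²θ) with illustrative, assumed analyzer axes:

    import math

    def analyzer_response(source_axis_deg, analyzer_axis_deg, intensity=1.0):
        """Malus's law: intensity transmitted through a linear analyzer."""
        theta = math.radians(source_axis_deg - analyzer_axis_deg)
        return intensity * math.cos(theta) ** 2

    # Subsystem A analyzers at 0 deg and 90 deg (assumed axes); source B at 45 deg.
    a_parallel   = analyzer_response(45.0, 0.0)    # 0.5
    a_orthogonal = analyzer_response(45.0, 90.0)   # 0.5
    # Equal responses -> source B cancels in the pixel-wise subtraction of subsystem A,
    # while subsystem A's own source (0 deg) gives 1.0 vs 0.0 and therefore survives.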
Third embodiment – polarizers without a beam splitter
The beam splitter arrangement, as suggested in either fig. 7 or fig. 8, is advantageous because it eliminates the need for registration between the two cameras within a pair. Nevertheless, the beam splitter can be omitted. Without the beam splitter, the two cameras of the pair should be mounted side by side. Although the registration in the case without a beam splitter is not perfect, as long as the ratio of the distance between the cameras to the distance to the object of interest is smaller than the ratio of the camera pixel pitch to the lens focal length, the registration should be sufficient. In cases where this ratio condition is violated, the registration might be corrected with a minor shift of the pixel locations prior to the pixel-wise subtraction.
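A minimal sketch of this sufficiency condition, with all numbers being illustrative assumptions:

    def registration_is_sufficient(baseline_m, object_distance_m,
                                   pixel_pitch_m, focal_length_m):
        """Check the side-by-side condition: baseline / Z < pixel_pitch / f.

        Equivalently, the stereo disparity on the sensor (f * baseline / Z) stays
        below one pixel pitch, so the two frames are registered to within a pixel
        without any correction.
        """
        disparity_on_sensor = focal_length_m * baseline_m / object_distance_m
        return disparity_on_sensor < pixel_pitch_m

    # Illustrative numbers: 2 cm baseline, 6 mm lens, 3 um pixels, object at 50 m.
    # registration_is_sufficient(0.02, 50.0, 3e-6, 6e-3)  # -> True (2.4 um < 3 um)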
Generalization
The above three embodiments use polarization to discriminate the cameras' sensitivity to the intended light source from their sensitivity to external light. Thus, the three embodiments (first embodiment – a single pair with a beam splitter; second embodiment – two pairs with beam splitters; third embodiment – one or two pairs without a beam splitter) do not differ in the discrimination mechanism (the discrimination mechanism is polarization in all of the above embodiments).
In the next embodiments, other discrimination mechanisms are suggested. The below-mentioned mechanisms may be used with a beam splitter and one pair of cameras (as in the first embodiment), with two pairs of cameras (as in the second embodiment), or without beam splitters (as in the third embodiment). Hence, each of the following embodiments could be repeated in three versions.
The motivation to use discrimination mechanisms other than polarization is that polarization dumps about half of the headlamp power. The dumping results in doubling the power requirements for the headlamp and might compromise the headlamp polarizer due to heat.
Another generalization side note: The disclosed embodiments are illustrative, not restrictive. While specific configurations have been described, it is understood that there are other alternative ways of implementing the invention.
Fourth embodiment – absorption filters on the intended light source and on the cameras
Common headlamps use incandescent lights (Xenon or Halogen). These have a broad spectral band, unfortunately with large spectral overlap with external light sources. Nevertheless, incorporating an absorbing material into the plastic that encapsulates the light source gives the headlamp a distinguishable spectral signature. The same absorbing material can be applied to one of the two cameras in the pair. In this embodiment, the cameras have nominally equal sensitivity to the headlamp but slightly unequal sensitivity to external light sources, so the external light influence may be differentiated from the light coming from the relevant headlamp. This approach is very cheap to implement but might not prove sensitive enough.
Fifth embodiment – absorption filter on the intended light source and interference filters on the cameras
A more sensitive approach that manipulates the spectrum (wavelengths) of an incandescent headlamp uses the absorption filter as in the fourth embodiment, but only for the plastic that encapsulates the light source. The camera filters, on the other hand, transmit (rather than absorb) a narrow spectral band. (Usually, a narrow-band transmission spectrum is fabricated using coating technology rather than a cheap absorbing material. The coatings may be part of the beam splitter rather than implemented directly on the sensor.) To be more explicit, one of the cameras should have a filter whose transmission spectrum is similar to the absorption spectrum of the headlamp filter. The second camera should ideally have a filter with a similar transmission spectrum, but offset in wavelength such that there is little or no overlap with the absorption spectrum of the material in the plastic. Nevertheless, the method will also work if the second camera has no filter at all.
Sixth embodiment – LED as the intended light source and absorption filters on the cameras
The spectral bandwidth of LEDs is typically narrow enough to be considered monochromatic for the sake of this patent. Thus, if LEDs rather than the car headlamp are used as the intended light source, it is sufficient that one of the two cameras be covered with an absorption filter that blocks the LED light from reaching the camera. Alternatively, or in addition to the absorption filter, a dichroic beam splitter can be used. The dichroic beam splitter should have significantly different transmission and reflection (T/R) characteristics within the LED spectral bandwidth compared to its T/R characteristics in the rest of the relevant spectrum. (The relevant spectrum, 400-1000 nm, is determined by the spectral response of silicon cameras.) The preferred LED wavelength is around 760 or 940 nm, where the solar radiation is relatively lower than in other parts of the relevant spectrum.
Seventh embodiment – LED as the intended light source with a temporal filter
In this embodiment, the intended light source is turned on and off with a duty cycle of, say, 50% and a period of, say, 20 milliseconds (ms). That is, the light is turned on for 10 ms, then turned off for 10 ms, then on again and off again, and so forth. The two cameras in the corresponding camera pair are synchronized such that the (electronic) shutter of one of the two cameras is open only when the light is on, while the shutter of the second camera is open only when the light is off. If a mirror-image subsystem is used, as in the second embodiment, a phase shift of a quarter cycle (5 ms if the suggested parameter values are used) is to be implemented between the two pairs, assuming a duty cycle of 50%. In this case, each camera pair is fully sensitive to its intended LED, while the other LED illuminates both cameras during half of their open-shutter period, thus rendering the cameras insensitive to the unintended LED (after pixel-wise subtraction).
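A minimal timing sketch of this scheme; the 20 ms period, 50% duty cycle and 5 ms quarter-cycle shift come from the text, while everything else is an illustrative assumption:

    def overlap_ms(window, interval, period=20.0):
        """Overlap (ms) of a shutter window with an LED on-interval, both given as
        (start, end) within one period; intervals may wrap around the period."""
        def segments(a, b):
            return [(a, b)] if a <= b else [(a, period), (0.0, b)]
        total = 0.0
        for ws, we in segments(*window):
            for s0, s1 in segments(*interval):
                total += max(0.0, min(we, s1) - max(ws, s0))
        return total

    # LED A on during [0, 10) ms; pair-A shutters: camera 1 open [0, 10), camera 2 open [10, 20).
    # LED B (other subsystem), shifted by a quarter cycle, is on during [5, 15) ms.
    cam1_from_led_b = overlap_ms((0.0, 10.0), (5.0, 15.0))    # 5 ms
    cam2_from_led_b = overlap_ms((10.0, 20.0), (5.0, 15.0))   # 5 ms
    # Equal exposure to LED B -> cancelled by the pixel-wise subtraction,
    # while LED A exposes only camera 1 (10 ms vs 0 ms) and therefore survives.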
The reason that this embodiment does not use the car's headlamp is that incandescent light cannot be switched on and off very rapidly. In the future, when LEDs become bright enough to serve as headlamps, this method could be implemented with the LEDs serving as the car headlamps.
Eighth embodiment – structured / holographic light source
In this embodiment, structuring the intended light source in a fixed and known pattern enables discriminating it from other light sources. For example, the left (or right) flash-lamp illumination is modulated with horizontal (or vertical) stripes. The data from the corresponding camera on the other side, that is, the right (or left) camera, is filtered such that the spatial horizontal (or vertical) frequency that matches the light-source frequency is greatly enhanced. An optical concept that can achieve such illumination is composed of a collimator that collimates the light source, followed by either a Fresnel lens or a holographic element (such as a grating in the case of stripes). A three-step image-processing algorithm that can filter the data may be composed of: a) a wavelet transform (or DFT); b) multiplying the coefficients of the relevant frequency by a large factor; c) the inverse transform. It is expected that the resulting image will be similar to an image taken with an effectively much brighter intended light source. The shortcomings of the {transform – filter – inverse transform} image processing can be corrected by the post-processing of the neural network.
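A minimal sketch of the three-step filter using a DFT (one of the two transforms named above); the stripe frequency, the gain and the image shape are illustrative assumptions:

    import numpy as np

    def enhance_stripe_frequency(frame, stripes_per_frame, gain=20.0):
        """Boost the known horizontal-stripe frequency imprinted by the light source.

        frame: 2D grayscale image (numpy array).
        stripes_per_frame: number of stripe periods across the image height,
        known from the light-source design (an assumption for this sketch).
        """
        spectrum = np.fft.fft2(frame.astype(np.float32))   # a) forward transform
        k = stripes_per_frame
        spectrum[k, 0] *= gain                             # b) amplify the matching
        spectrum[-k, 0] *= gain                            #    pair of frequency bins
        return np.real(np.fft.ifft2(spectrum))             # c) inverse transform

    # Illustrative usage: enhance_stripe_frequency(np.random.rand(480, 640), 12)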
While this embodiment has lower differentiation capability than the other embodiments, it has the fringe benefit of taking advantage of the well-known and established methodology of structured-light 3D scanning (in addition to the shadow concept introduced in this patent).

Claims (3)

WHAT IS CLAIMED IS:
1. An electro-optical system for extracting depth information from a digital image, comprising: (a) at least one light source; (b) at least one camera being spatially separated from said at least one light source, said at least one camera being configured for capturing a digital image of an environment illuminated by said at least one light source; and (c) a processing unit executing an algorithm for: (i) detecting an object illuminated by said light source and a shadow of said object based on a predicted angle that is formed within the captured digital image between the image of said shadow and the image of said object; and (ii) calculating a position of said object based on a ground surface height at which said shadow and said object converge.
2. The system of claim 1, wherein (ii) is effected by extracting an intersection between said shadow and said object.
3. The system of claim 1, wherein said predicted angle is determined by a spatial relationship between said at least one camera and said at least one light source.
IL258918A 2018-04-24 2018-04-24 Electro-optical means to support efficient autonomous car neural processing by imprinting depth cues in the raw data IL258918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
IL258918A IL258918B (en) 2018-04-24 2018-04-24 Electro-optical means to support efficient autonomous car neural processing by imprinting depth cues in the raw data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL258918A IL258918B (en) 2018-04-24 2018-04-24 Electro-optical means to support efficient autonomous car neural processing by imprinting depth cues in the raw data

Publications (2)

Publication Number Publication Date
IL258918A IL258918A (en) 2018-06-28
IL258918B true IL258918B (en) 2022-08-01

Family

ID=66624583

Family Applications (1)

Application Number Title Priority Date Filing Date
IL258918A IL258918B (en) 2018-04-24 2018-04-24 Electro-optical means to support efficient autonomous car neural processing by imprinting depth cues in the raw data

Country Status (1)

Country Link
IL (1) IL258918B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009008447A (en) * 2007-06-26 2009-01-15 Mazda Motor Corp Obstacle recognition device of vehicle
EP3692329A1 (en) * 2017-10-06 2020-08-12 Aaron Bernstein Generation of one or more edges of luminosity to form three-dimensional models of objects

Also Published As

Publication number Publication date
IL258918A (en) 2018-06-28
