EP2926162A1 - Procede de production d'images avec information de profondeur et capteur d'image - Google Patents
Procédé de production d'images avec information de profondeur et capteur d'image (Method for producing images with depth information, and image sensor)
- Publication number
- EP2926162A1 (application EP13789585.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- pixel
- distance
- pulse
- light
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/18—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves wherein range gates are used
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4865—Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/487—Extracting wanted echo signals, e.g. pulse detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/254—Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the invention relates to the production of images associating at each point of the image a depth, that is to say a distance between the observed point and the camera that produces the image.
- the production of images with an associated depth is used in particular, but not exclusively, for the visualization of relief images: in this application, for example, it is possible to produce an image with a camera and depth values associated with each point; then, from this single image, one can produce a left image and a right image that differ from each other; a point in the scene occupies the same position in the left and right images if it is at an infinite distance; if it is closer to the camera, it occupies different positions, spaced laterally by a distance that increases as the point gets closer to the camera.
- the left and right images, projected simultaneously but each observed by a respective eye, give the impression of relief. Most often, the depth is obtained with two cameras spaced apart from each other.
- the invention proposes a solution with a single camera.
- the present invention proposes a method for producing images of a three-dimensional scene, including distance information for each point of the scene, this method using a pulsed light source and an image sensor, the sensor comprising pixels capable of accumulating, in a respective storage node of each pixel, charges generated by the light, the method comprising the emission of N successive trains of light pulses from the light source, under the control of a reference clock, N being an integer representing the number of depth levels desired for the relief information, and, iteratively for each pulse train of rank i among the N pulse trains: a) emission of the i-th train of light pulses, the pulses being emitted at times and at intervals determined from the reference clock,
- a1) integration of charges for each light pulse of the i-th train during a short integration time slot of duration T_int beginning with a time offset t_i with respect to the pulse, this time offset representing the travel time of the light pulse between the light source and the sensor after reflection on a point located at an i-th distance d_i from the sensor, the i-th time offset t_i being the same for all light pulses of the i-th pulse train, and the time offset values t_i for the N trains being different from each other so as to correspond to different distances from the sensor, and being spaced from each other by an increment of duration greater than the integration duration T_int,
- each train of pulses is intended for the observation of the points of the scene situated at a definite distance d_i, the other points being excluded from this observation.
- the points closer than the distance d_i are not seen because the light pulse reflected by these points arrives before time t_i, that is, before the charge-integration time slot (from t_i to t_i + T_int) begins.
- the more distant points are not seen because the light pulse reflected by these points arrives too late, after the charge-integration time slot has already closed.
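The gating principle described in the two points above can be sketched numerically (this sketch is not part of the patent; the pulse and gate durations are illustrative, and the simple rectangular-pulse overlap model is an assumption):

```python
# Sketch: fraction of a reflected pulse that falls inside the integration
# gate [t_i, t_i + T_int]. A pulse returning from a point at distance d
# arrives over [2d/c, 2d/c + T_imp]; only the overlapping part contributes
# charge. Rectangular pulse and gate shapes are assumed.

C = 3.0e8  # speed of light, m/s

def gate_overlap(d, t_i, t_imp=10e-9, t_int=10e-9):
    """Overlap (in seconds) between the reflected pulse and the gate."""
    t_return = 2.0 * d / C                      # round-trip delay to distance d
    start = max(t_return, t_i)                  # later of the two openings
    end = min(t_return + t_imp, t_i + t_int)    # earlier of the two closings
    return max(0.0, end - start)

t_i = 20e-9                        # gate delay matching d_i = 3 m
print(gate_overlap(3.0, t_i))      # point in the gated plane: full overlap
print(gate_overlap(1.0, t_i))      # too close: pulse returns before the gate opens
print(gate_overlap(6.0, t_i))      # too far: the gate has already closed
```

Points closer than d_i give zero overlap because the reflection ends before the gate opens; more distant points give zero because the gate closes first.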
- the charge-integration time slot of duration T_int is preferably established between the end of a reset signal of the photodiode, common to all the pixels, and the end of a charge transfer signal common to all the pixels.
- the charge transfer signal allows charge transfer from a photodiode to the charge storage node of the pixel.
- the reset signal evacuates the charges from the photodiode and prevents the integration of charges therein.
- the light pulses are brief and the charge-integration time slots are also brief, because it is this brevity that allows precise localization at the distance d_i.
- the duration of the integration slots is less than the difference between two neighboring time offsets t_i and t_{i+1}, so that the corresponding neighboring distances d_i and d_{i+1} can be correctly distinguished.
- the light pulses are numerous (if possible) in each pulse train to compensate for their brevity and to ensure the cumulative reception of a sufficient quantity of photons before the charges accumulated in the storage node of each pixel are read at the end of the pulse train.
- the signal provided by a pixel for a given pulse train exists if the point of the scene observed by this pixel is at the distance d_i associated with this train, and does not exist if the point of the scene is not at this distance.
- the level of the signal provided, representing the amount of charge accumulated at the end of the pulse train, is approximately proportional to the reflectance (albedo) of the point, with however a degraded signal level if the distance of the point is such that the return of the reflected pulse coincides only partially with the charge-integration slot.
- N frames of the scene are obtained with depth information associated with each image, and from there, depth information for each pixel is obtained.
- the resulting information can be transmitted either in the form of N images representing N view planes corresponding to N different distances, or in the form of a single image gathering the luminances of the N images, added pixel by pixel, associated with a matrix of distances representing for each pixel the distance associated with this pixel, i.e., the distance from the point of the scene portion observed by the pixel.
- the distance associated with the pixel may be the unique distance d_i for which the pixel has received a signal, or the distance for which the pixel has received the strongest signal among the N images. But it may also be, as will be indicated later, a distance calculated by interpolation if the pixel has received a non-zero signal for several different distances.
- the time offset values t_i differ from each other by a value one might call the time increment. If the duration of a light pulse is T_imp and the duration of a charge-integration time slot is T_int, then the increment between the values t_i, which defines the resolution in depth, is preferably equal to the sum of the durations T_imp and T_int. These two durations can be equal or nearly equal. If the increment between two offsets t_i is greater than this sum T_imp + T_int, there is a risk of missing reflections on points situated between a distance d_i and a distance d_{i+1}.
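The depth resolution implied by this recommended increment can be computed directly (a sketch, not part of the patent; the 10 ns values are the illustrative ones used later in the text):

```python
# Sketch: distance step between neighboring observation planes when the
# offset increment between trains is T_imp + T_int.

C = 3.0e8  # speed of light, m/s

def depth_increment(t_imp, t_int):
    """Distance step for an offset step of T_imp + T_int (halved: round trip)."""
    return C * (t_imp + t_int) / 2.0

print(depth_increment(10e-9, 10e-9))   # 20 ns of round-trip time per plane
```

With T_imp = T_int = 10 ns, neighboring planes are about 3 m apart.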
- a point situated at approximately the distance d_i can give a response on the pulse train of rank i corresponding to this distance d_i, but also on the pulse train of rank i-1 or i+1, and one can have a problem of discriminating the most relevant distance value among several possible values.
- distance information can nevertheless be extracted from the different responses, for example by selecting the distance for which the response is the highest, or by calculating a distance by interpolation from the distances for which a signal is received by the pixel: for example, a weighted interpolation distance is calculated over three values, from the distance d_i corresponding to the pulse train of rank i for which the signal read has the highest value, and from the signals read for this pixel corresponding to the distances d_{i-1} and/or d_{i+1}, by assigning to each distance a weight corresponding to the received signal level.
- the interpolation can be done over five consecutive distances or even more.
- An array of numerical values of distances is then established, associating with each pixel of the sensor a distance of a point of the scene observed by this pixel.
- the invention relates to an image pickup apparatus comprising a pixel-matrix image sensor and a light source capable of providing light pulses, the apparatus providing an image of the scene and distance information associated with each pixel of the pixel array, each pixel having a photodiode, photodiode reset means, charge storage means in the pixel, and means for reading the charges accumulated in the storage node, the light source having means for providing N light pulse trains calibrated in duration and in intervals, the apparatus further comprising sequencing means for controlling the reset of the photodiode and the transfer of charges from the photodiode to the storage node, the sequencing means being synchronized with respect to the light pulses, characterized in that the sequencing means are arranged to produce, for each of the N light pulse trains, a charge integration during a short time slot shifted with respect to each light pulse by a time offset t_i identical for all the pulses of the train.
- FIG. 1 represents the general principle of producing images of a scene with distance information according to the invention
- FIG. 2 represents successive images produced from the scene of FIG. 1;
- FIG. 3 represents the constitution of a pixel and its reading circuit for the implementation of the invention
- FIG. 4 represents a timing diagram of the operation of the method;
- FIG. 5 represents the detail of the synchronization of the signals for the establishment of an integration time slot following a light pulse.
- the process according to the invention is shown schematically in FIG. 1. It uses a camera CAM associated with a pulsed light source LS, the operation of the image sensor of the camera being synchronized with respect to the operation of the light source.
- the light source may be a source in the near infrared, particularly in the case of the production of images intended for the observation or the detection of obstacles in the fog.
- the camera includes a lens and an image sensor.
- the sensor includes an array of active pixels and internal sequencing circuits for establishing the internal control signals, including row and column control signals that allow the integration of generated photo charges and the reading of these charges.
- control means are provided for synchronizing the operation of the image sensor with respect to the pulsed operation of the light source. They can be part of the light source, or of the camera, or of an electronic synchronization circuit SYNC connected to both the camera and the light source. These control means comprise a reference clock which is used by the sequencing circuits of the image sensor to ensure synchronization.
- a scene with objects in relief is represented in front of the camera, that is to say that the different parts of the scene are not all located at the same distance from the camera.
- the light source emits short pulses of light.
- when a pulse is emitted, it is reflected by the objects of the scene, and the pulse reflected by an object or part of an object situated in an observation plane P_i, at a distance d_i, returns to the image sensor with a delay t_i proportional to this distance.
- the light source is assumed to be located at the same place as the image sensor.
- the delay t_i is then equal to 2d_i/c, where c is the speed of light.
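The relation t_i = 2d_i/c can be expressed as a pair of small conversion helpers (a sketch, not part of the patent; function names are illustrative):

```python
# Sketch: conversion between an observation distance d_i and the gate delay
# t_i = 2 * d_i / c (round trip at the speed of light).

C = 3.0e8  # speed of light, m/s

def offset_for_distance(d):
    """Gate delay t_i for a plane at distance d_i."""
    return 2.0 * d / C

def distance_for_offset(t):
    """Plane distance d_i corresponding to a gate delay t_i."""
    return C * t / 2.0

print(offset_for_distance(3.0))    # a 3 m plane needs a 20 ns gate delay
```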
- by deciding to collect an image by integrating photocharges generated only during a very narrow time slot corresponding to the moment of return of a brief pulse reflected by the points of the scene situated in a plane P_i, at a distance d_i, we produce an image that contains only the points of the scene located in this plane.
- FIG. 2 illustrates the different images IM_1 to IM_6 that can be obtained with the sensor if, for each image, only the light signal arriving at time t_i is collected, that is to say if one observes only the object parts located in the plane P_i at the distance d_i, for different planes P_i ranging for example from P_1 to P_6.
- N successive images of the scene are produced, each image corresponding only to a determined plane P_i.
- distance information is therefore inherently contained in the succession of images obtained since, to each pixel of the sensor, a distance to the camera can be associated, depending on whether or not it provides a signal in the different images, or on the value of this signal in the different images.
- FIG. 3 is a reminder of the conventional constitution of a matrix image sensor pixel in CMOS technology and its reading circuit, which make it possible to implement the invention.
- the pixel conventionally comprises a photodiode PH and a charge storage node ND in which the charges generated by the photodiode can be stored during an integration time T_int.
- the pixel furthermore comprises a plurality of MOS transistors which serve to control the pixel to define the integration time and to extract a signal representing the quantity of charges stored during the integration time.
- the pixel comprises:
- a transistor T1 which resets the potential of the photodiode before a new integration period of duration T_int begins; this transistor is controlled by a global reset signal RG common to all the pixels of the matrix; the end of the signal RG defines the beginning of the integration duration T_int;
- a charge transfer transistor T2 which makes it possible to empty into the storage node ND the charges generated during an integration duration T_int; this transistor is controlled by a charge transfer signal TR which may be common to all the pixels; the end of this signal defines the end of the integration duration T_int;
- a reset transistor T3 which makes it possible to reset the potential of the storage node after reading the quantity of charges that has been stored therein; this transistor is controlled by a reset signal RST which may be common to all the pixels;
- a read transistor T4 which is mounted as a voltage follower and which makes it possible to transfer from its gate to its source the potential level of the charge storage node;
- a selection transistor T5 which is connected to the source of the transistor T4 and which makes it possible to transfer to a column conductor COL (common to the pixels of the same column of the matrix) the potential of the charge storage node when it is desired to read the quantity of charges stored therein; this transistor is controlled by a line selection signal SEL common to all the pixels of a line; the pixels are read line by line.
- the reading circuit external to the pixel matrix and connected to the different column conductors, comprises a sampling circuit which samples, for example in two capacitors Cr and Cs, the potential of the column conductor by means of switches Kr and Ks, respectively at a time when the storage node has been reset and at a time when it is desired to determine the amount of charge accumulated in the storage node.
- the difference between the potentials sampled in the capacitors represents the amount of accumulated charge. It can be read by a differential amplifier AMP and then digitized, or directly digitized, for example using a counter, a linear voltage ramp, and a comparator.
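The double sampling performed by the switches Kr/Ks and capacitors Cr/Cs can be sketched as follows (not part of the patent; the voltage values are hypothetical):

```python
# Sketch: correlated double sampling. The reset level of the storage node is
# sampled first (Kr/Cr), then the level after charge transfer (Ks/Cs); the
# difference cancels the pixel's reset offset.

def double_sample(v_reset, v_signal):
    """Signal amplitude seen by the differential amplifier AMP (volts)."""
    return v_reset - v_signal   # accumulated charge lowers the node potential

# Two pixels with different reset offsets but the same accumulated charge
# yield the same output after the subtraction.
print(double_sample(2.80, 2.30))
print(double_sample(2.75, 2.25))
```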
- Figure 4 shows the timing diagram that leads to the production of N successive images representing the elements of the scene at different distances.
- the line LP represents the light pulses.
- the line INT represents the integration periods of the image sensor following each pulse.
- the N images are obtained by producing N light pulse trains TR_i, where i is an integer index from 1 to N, and N is the number of planes P_i at different distances d_i for which we want to collect an image.
- Each pulse train comprises several pulses, regularly distributed over the duration of the pulse train.
- the pulses are clocked by a reference clock, not shown, which may be part of the sensor, of the light source, or of the control means mentioned above, and which serves to synchronize the operation of the light source and that of the sensor.
- for each pulse of a pulse train TR_i, the image sensor records the photocharges generated during an integration time slot of duration T_int. If we take as time reference (for each pulse) the beginning of the pulse, the time slot of duration T_int begins at a time t_i and ends at a time t_i + T_int.
- the value t_i is the time offset between the light pulse and the start of the integration slot. It represents the time taken by the light pulse to travel to the plane P_i and return to the sensor.
- the charges generated by the light during this duration T_int are stored in the storage node ND of each pixel at the end of the slot. They are accumulated with the charges already stored in this node, resulting from the other pulses of the same train TR_i.
- the offset t_i and the duration T_int are the same for all the pulses of a train TR_i.
- the storage nodes are then all reset by the reset signal RST.
- the information contained in the image IM_i is essentially the amount of light coming from the pulse and reflected by parts of objects located at the distance d_i or in the vicinity of this distance. It is assumed here that the ambient illumination is negligible compared to the illumination provided by the pulsed light source.
- the integration duration T_int for the pulses of the train TR_{i+1} is preferably the same as for the train TR_i. But the delay t_{i+1} from which this integration slot begins is different from the delay t_i, and it corresponds to a distance d_{i+1} different from d_i.
- the charges generated in the photodiodes during the time interval from t_{i+1} to t_{i+1} + T_int following a light pulse are stored in the respective storage nodes and accumulated in these nodes with the charges generated by the other light pulses of the same train TR_{i+1}.
- an image IM_{i+1} is read after receiving the last pulse. Then the storage nodes are reset again. And so on, the N pulse trains are emitted and give rise to N images IM_1 to IM_N, the set of N images providing both an observation of the scene, with the signal level of each pixel depending on the albedo of the point observed by this pixel, and distance information associated with each pixel of the image, which is the distance corresponding to the image in which this pixel has provided the maximum signal level.
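The overall acquisition loop can be simulated end to end (a sketch, not part of the patent: the scene model, the rectangular pulse/gate overlap, and all numeric values are assumptions; the depth assignment uses the maximum-signal rule just described):

```python
# Sketch: simulate N gated pulse trains. A scene maps pixel -> (albedo,
# distance); each train i gates on plane d_i; the depth assigned to a pixel
# is the plane of the image where it gave its strongest signal.

C = 3.0e8          # speed of light, m/s
T_IMP = T_INT = 10e-9

def overlap(t_return, t_i):
    """Temporal overlap between a returning pulse and the gate of train i."""
    return max(0.0, min(t_return + T_IMP, t_i + T_INT) - max(t_return, t_i))

def acquire(scene, distances, pulses_per_train=10):
    """Return (list of N images, depth map) for the given gated distances."""
    images = []
    for d_i in distances:
        t_i = 2.0 * d_i / C                 # gate delay for this train
        images.append({pix: pulses_per_train * albedo * overlap(2.0 * d / C, t_i)
                       for pix, (albedo, d) in scene.items()})
    depth_map = {pix: distances[max(range(len(distances)),
                                    key=lambda k: images[k][pix])]
                 for pix in scene}
    return images, depth_map

scene = {(0, 0): (0.9, 3.0), (0, 1): (0.5, 9.0)}   # (albedo, distance in m)
_, depth = acquire(scene, distances=[3.0, 6.0, 9.0, 12.0])
print(depth)   # each pixel recovers the plane matching its true distance
```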
- FIG. 5 shows the practical way in which integration slots are produced.
- the line LP represents the emission of a light pulse of duration T_imp.
- the line RG represents the global reset signal of the photodiodes of the sensor, which prevents the integration of charges into the photodiode as long as it is at the high level and which authorizes it when it ends, that is to say when it goes down to the low level.
- the falling edge of the reset signal RG, i.e. the end of this signal, is emitted with a time offset t_i after the start of the light pulse. This falling edge defines the beginning of the integration duration T_int.
- the signal TR defines the transfer of charges from the photodiode to the storage node. The beginning of this signal is later than or simultaneous with the end of the reset signal. It is the end of this transfer signal that defines the end of the transfer and therefore the end of the integration time. It takes place a duration T_int after the end of the reset signal RG.
- the INT line represents the resulting integration time.
- the time offset t_i between the light pulse and the integration time window is here precisely an offset between the start of the light pulse and the beginning of the integration of charges in the photodiodes of the sensor.
- alternatively, the offset can be counted differently, for example between the middle of the light pulse and the middle of the integration slot T_int.
- the choice can be made for example according to the relative durations T int and T imp which are not necessarily equal.
- the distance resolution, that is to say the separation step between the different observation planes P_i, is governed by the separation between the different time offset values t_i, t_{i+1}, etc. corresponding to the different pulse trains.
- there is a risk that an integration window beginning at time t_i captures not only a pulse reflected by the plane P_i but also a pulse reflected by the plane P_{i+1} or P_{i-1}. This risk exists if the duration of the pulses is too long or if the duration of the integration slots is too long.
- the duration separating two pulses in a pulse train is in principle such that N times (T_imp + T_int) can fit between two successive light pulses, N being the number of images desired and therefore the number of different distances observed. This duration is therefore at least N·(T_imp + T_int). If there are Z pulses in the train, the duration of the train is Z·N·(T_imp + T_int). And since there are N pulse trains, the duration for obtaining a global image, that is to say the N images IM_1 to IM_N, is Z·N²·(T_imp + T_int).
- the duration of the light pulse is preferably equal to or less than the integration duration, otherwise part of the light energy would be systematically lost by the sensor, even when the pulse is reflected exactly in the plane P_i corresponding to the pulse train.
- it is advantageous to choose T_int approximately equal to T_imp.
- the number of pulses in each pulse train will be limited by the desired production rate for the overall image and the ability to achieve very short light pulses and very short integration times.
- one can choose, for example, T_imp + T_int less than or equal to 20 nanoseconds (light travels 6 meters, i.e. a 3-meter round trip, in 20 nanoseconds), in practice 10 nanoseconds for T_imp and 10 nanoseconds for T_int.
- with, for example, N = 10 observation planes, the duration between two pulses of a train is then 200 nanoseconds. If there are 10 pulses per train, the total duration of the N trains is 20 microseconds, which gives the possible rate for the supply of an overall image comprising N images.
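The timing budget of this numeric example can be checked with the formula Z·N²·(T_imp + T_int) given above (a sketch, not part of the patent; parameter names are illustrative):

```python
# Sketch: total acquisition time for a full set of N gated images,
# with Z pulses per train and N * (T_imp + T_int) between pulses.

def frame_time(n_planes, z_pulses, t_imp, t_int):
    inter_pulse = n_planes * (t_imp + t_int)   # room for all N gate positions
    train = z_pulses * inter_pulse             # duration of one pulse train
    return n_planes * train                    # = Z * N^2 * (T_imp + T_int)

t = frame_time(n_planes=10, z_pulses=10, t_imp=10e-9, t_int=10e-9)
print(t)   # with these illustrative figures the arithmetic gives 2e-5 s
```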
- the level of the digital signal coming from a pixel for a given image IM_i depends on the albedo (reflective power) of the point of the scene which has reflected a light pulse and which is therefore globally in the plane P_i observed by this image produced by the i-th pulse train.
- the luminous pulses have a non-zero duration; time slots for integration also have a non-zero duration.
- the light pulse may coincide only partially with the time slot corresponding to it, for example because the point observed by the pixel is not exactly at the distance d_i but at a distance slightly greater or less than d_i. In this case, the signal level obtained is lower than it should be considering the albedo of the point.
- to assign a distance to a pixel, the simplest approach is to consider the N images and to select the image IM_i for which the signal level provided by this pixel is the highest among the different values for this same pixel in the N images.
- the associated distance is then the distance d_i.
- alternatively, one can prefer to carry out a weighted interpolation over several images in the following way: one selects the image IM_i for which the signal level of the pixel is the highest, as well as the neighboring images IM_{i-1} and IM_{i+1}, and one calculates an average distance which is the normalized weighted sum a·d_{i-1} + b·d_i + c·d_{i+1} of the distances d_{i-1}, d_i, and d_{i+1}, where a, b, and c represent the relative signal levels of the pixel in the three images, normalized to 1, i.e., a + b + c = 1.
- the weighting can be performed on a larger number of consecutive images, for example on 5 images with the same principle.
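The three-point weighted interpolation just described can be sketched as follows (not part of the patent; the distance and signal values are illustrative, and clamping at the ends of the range is an assumption):

```python
# Sketch: pick the image with the strongest pixel signal and average the
# neighboring plane distances, weighted by the signal levels (the weights
# are normalized by dividing by their sum, i.e. a + b + c = 1).

def interpolate_distance(distances, signals):
    """Weighted 3-point interpolation around the peak image for one pixel."""
    i = max(range(len(signals)), key=lambda k: signals[k])   # peak image
    lo, hi = max(0, i - 1), min(len(signals), i + 2)         # clamp window
    total = sum(signals[lo:hi])
    return sum(d * s for d, s in zip(distances[lo:hi], signals[lo:hi])) / total

# A point midway between two planes responds equally on both trains.
d = interpolate_distance([3.0, 6.0, 9.0], [0.0, 5.0, 5.0])
print(d)   # falls between the 6 m and 9 m planes
```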
- the output of the camera may consist of a group of N images, the processing that associates a distance with each pixel being performed outside the camera.
- the camera provides on the one hand an image of the luminances and on the other hand an array of distances associating a distance value with each pixel.
- the luminance image is constituted by a numerical luminance value for each pixel. This value can be the maximum value obtained for this pixel in the N images. But it can also be obtained in other ways, for example by combining the different numerical values obtained in the N images. This combination can be, for example, the sum of the digital values detected in the N images, or the sum of the numerical values exceeding a minimum threshold (to avoid adding noise from low-value signals which do not necessarily correspond to a true reflection of a light pulse).
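The luminance-composition options just listed can be sketched per pixel (not part of the patent; the sample values and the threshold are illustrative):

```python
# Sketch: build one luminance value per pixel from its N per-image values,
# either as the per-pixel maximum or as a thresholded sum that rejects
# noise-level readings.

def luminance_max(values):
    """Maximum value of the pixel across the N images."""
    return max(values)

def luminance_thresholded_sum(values, threshold):
    """Sum of the values exceeding a minimum threshold (noise rejection)."""
    return sum(v for v in values if v >= threshold)

samples = [0.02, 0.03, 0.85, 0.10]   # one strong response amid low values
print(luminance_max(samples))
print(luminance_thresholded_sum(samples, 0.05))   # keeps 0.85 and 0.10 only
```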
- it is also possible to process the luminance image and the distance matrix to reconstruct a binocular image, that is to say a left image and a right image that are transformations of the luminance image, such that the luminance value assigned to a pixel of the luminance matrix is assigned to a pixel of the left image and a pixel of the right image which are offset with respect to each other (with respect to the lateral edges of the pixel matrix) by a distance that is all the greater as the associated distance provided by the camera for this pixel is lower.
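This binocular reconstruction can be sketched with a hypothetical disparity model (not part of the patent: the rule disparity = round(k / distance), the constant k, and the sparse-dictionary image representation are all assumptions made for illustration; they only preserve the stated property that closer points get a larger lateral shift):

```python
# Sketch: synthesize left/right views from the luminance image and the
# distance matrix. Images are sparse dicts (row, col) -> luminance.

def make_stereo_pair(luma, dist, width, k=10.0):
    """Shift each pixel left/right by a disparity inverse to its distance."""
    left, right = {}, {}
    for (r, c), v in luma.items():
        disp = round(k / dist[(r, c)])     # closer point -> larger shift
        if 0 <= c - disp < width:
            left[(r, c - disp)] = v        # left-eye position
        if 0 <= c + disp < width:
            right[(r, c + disp)] = v       # right-eye position
    return left, right

luma = {(0, 5): 0.8}                       # one lit pixel at column 5
dist = {(0, 5): 5.0}                       # 5 m away -> disparity of 2
left, right = make_stereo_pair(luma, dist, width=12)
print(left, right)
```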
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Measurement Of Optical Distance (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Studio Devices (AREA)
- Focusing (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1261270A FR2998666B1 (fr) | 2012-11-27 | 2012-11-27 | Procede de production d'images avec information de profondeur et capteur d'image |
PCT/EP2013/073844 WO2014082864A1 (fr) | 2012-11-27 | 2013-11-14 | Procede de production d'images avec information de profondeur et capteur d'image |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2926162A1 true EP2926162A1 (fr) | 2015-10-07 |
Family
ID=47902112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP13789585.0A Withdrawn EP2926162A1 (fr) | 2012-11-27 | 2013-11-14 | Procede de production d'images avec information de profondeur et capteur d'image |
Country Status (7)
Country | Link |
---|---|
US (1) | US9699442B2 (fr) |
EP (1) | EP2926162A1 (fr) |
JP (1) | JP6320406B2 (fr) |
CN (1) | CN104884972A (fr) |
CA (1) | CA2892659A1 (fr) |
FR (1) | FR2998666B1 (fr) |
WO (1) | WO2014082864A1 (fr) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10282623B1 (en) * | 2015-09-25 | 2019-05-07 | Apple Inc. | Depth perception sensor data processing |
US10397546B2 (en) * | 2015-09-30 | 2019-08-27 | Microsoft Technology Licensing, Llc | Range imaging |
US11297258B2 (en) * | 2015-10-01 | 2022-04-05 | Qualcomm Incorporated | High dynamic range solid state image sensor and camera system |
CN107370913B (zh) * | 2016-05-11 | 2021-03-16 | 松下知识产权经营株式会社 | 摄像装置、摄像系统以及光检测方法 |
US10451713B2 (en) * | 2016-09-16 | 2019-10-22 | Analog Devices, Inc. | Interference handling in time-of-flight depth sensing |
EP3301477A1 (fr) * | 2016-10-03 | 2018-04-04 | Xenomatix NV | Système de télémétrie d'un objet |
EP3301479A1 (fr) * | 2016-10-03 | 2018-04-04 | Xenomatix NV | Procédé d'atténuation d'éclairage d'arrière-plan à partir d'une valeur d'exposition d'un pixel dans une mosaïque, et pixel pour une utilisation dans celle-ci |
CN108259702B (zh) * | 2016-12-28 | 2022-03-11 | 手持产品公司 | 一种用于同步多传感器成像器中的照明定时的方法和系统 |
EP3343246A1 (fr) * | 2016-12-30 | 2018-07-04 | Xenomatix NV | Système de caractérisation de l'environnement d'un véhicule |
US10928489B2 (en) * | 2017-04-06 | 2021-02-23 | Microsoft Technology Licensing, Llc | Time of flight camera |
WO2019014494A1 (fr) * | 2017-07-13 | 2019-01-17 | Apple Inc. | Comptage d'impulsions précoce-retardées pour capteurs de profondeur émettant de la lumière |
JP7198507B2 (ja) * | 2017-08-08 | 2023-01-04 | 国立大学法人静岡大学 | 距離画像測定装置及び距離画像測定方法 |
US10670722B2 (en) * | 2017-08-15 | 2020-06-02 | Samsung Electronics Co., Ltd. | Increase depth resolution and depth accuracy in ToF sensors by avoiding histogrammization |
CN115628808A (zh) | 2017-11-24 | 2023-01-20 | 浜松光子学株式会社 | 光子计数装置和光子计数方法 |
EP3633406B1 (fr) * | 2018-07-18 | 2022-05-11 | Shenzhen Goodix Technology Co., Ltd. | Système à temps de vol et procédé d'étalonnage |
US10708514B2 (en) * | 2018-08-30 | 2020-07-07 | Analog Devices, Inc. | Blending depth images obtained with multiple exposures |
US11486984B2 (en) | 2018-12-26 | 2022-11-01 | Beijing Voyager Technology Co., Ltd. | Three-dimensional light detection and ranging system using hybrid TDC and ADC receiver |
US11506764B2 (en) | 2018-12-26 | 2022-11-22 | Beijing Voyager Technology Co., Ltd. | System and methods for ranging operations using multiple signals |
WO2020139380A1 (fr) * | 2018-12-26 | 2020-07-02 | Didi Research America, Llc | Système de détection de lumière et de télémétrie tridimensionnelles utilisant un récepteur hybride à tdc et can |
CN110087057B (zh) * | 2019-03-11 | 2021-10-12 | 歌尔股份有限公司 | 一种投影仪的深度图像获取方法和装置 |
US11438486B2 (en) * | 2019-08-26 | 2022-09-06 | Qualcomm Incorporated | 3D active depth sensing with laser pulse train bursts and a gated sensor |
US11768277B2 (en) * | 2019-11-05 | 2023-09-26 | Pixart Imaging Incorporation | Time-of-flight sensor and control method thereof |
KR20210055821A (ko) | 2019-11-07 | 2021-05-18 | 삼성전자주식회사 | 깊이의 측정 범위에 기초하여 동작하는 센서 및 이를 포함하는 센싱 시스템 |
CN113744355B (zh) * | 2020-05-29 | 2023-09-26 | 杭州海康威视数字技术股份有限公司 | 一种脉冲信号的处理方法、装置及设备 |
CN112584067A (zh) * | 2020-12-14 | 2021-03-30 | 天津大学合肥创新发展研究院 | 基于脉冲间隔的脉冲图像传感器的噪声消除方法及装置 |
WO2022137919A1 (fr) * | 2020-12-22 | 2022-06-30 | パナソニックIpマネジメント株式会社 | Dispositif d'imagerie |
CN113281765A (zh) * | 2021-05-21 | 2021-08-20 | 深圳市志奋领科技有限公司 | 背景抑制光电传感器 |
EP4235219A1 (fr) * | 2022-02-28 | 2023-08-30 | Imasenic Advanced Imaging, S.L. | Capteur d'image à balayage de profondeur |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS4862462A (fr) * | 1971-12-03 | 1973-08-31 | ||
JPH09178853A (ja) * | 1995-12-25 | 1997-07-11 | Hitachi Ltd | イメージングレーザー測距装置 |
JP5115912B2 (ja) * | 2001-02-23 | 2013-01-09 | 独立行政法人日本原子力研究開発機構 | 高速ゲート掃引型3次元レーザーレーダー装置 |
KR100770805B1 (ko) * | 2001-08-06 | 2007-10-26 | 지멘스 악티엔게젤샤프트 | 3차원 거리측정 이미지를 기록하기 위한 방법 및 장치 |
US7382008B2 (en) | 2006-05-02 | 2008-06-03 | Eastman Kodak Company | Ultra-small CMOS image sensor pixel using a photodiode potential technique |
EP2106527A2 (fr) * | 2007-01-14 | 2009-10-07 | Microsoft International Holdings B.V. | Procédé, dispositif et système d'imagerie |
WO2008152647A2 (fr) * | 2007-06-15 | 2008-12-18 | Ben Gurion University Of The Negev Research And Development Authority | Procédé et appareil d'imagerie tridimensionnelle |
KR101448152B1 (ko) | 2008-03-26 | 2014-10-07 | 삼성전자주식회사 | 수직 포토게이트를 구비한 거리측정 센서 및 그를 구비한입체 컬러 이미지 센서 |
JP5192880B2 (ja) * | 2008-03-31 | 2013-05-08 | 三菱重工業株式会社 | 監視装置 |
JP5485288B2 (ja) | 2008-11-25 | 2014-05-07 | テトラビュー, インコーポレイテッド | 高解像度三次元撮像のシステムおよび方法 |
FR2940463B1 (fr) * | 2008-12-23 | 2012-07-27 | Thales Sa | Systeme d'imagerie passive equipe d'un telemetre |
JP5713159B2 (ja) * | 2010-03-24 | 2015-05-07 | 独立行政法人産業技術総合研究所 | ステレオ画像による3次元位置姿勢計測装置、方法およびプログラム |
JP2011211535A (ja) * | 2010-03-30 | 2011-10-20 | Sony Corp | 固体撮像素子およびカメラシステム |
US9052381B2 (en) * | 2010-05-07 | 2015-06-09 | Flir Systems, Inc. | Detector array for high speed sampling of an optical pulse |
US8569700B2 (en) * | 2012-03-06 | 2013-10-29 | Omnivision Technologies, Inc. | Image sensor for two-dimensional and three-dimensional image capture |
US8890812B2 (en) * | 2012-10-25 | 2014-11-18 | Jds Uniphase Corporation | Graphical user interface adjusting to a change of user's disposition |
-
2012
- 2012-11-27 FR FR1261270A patent/FR2998666B1/fr active Active
-
2013
- 2013-11-14 EP EP13789585.0A patent/EP2926162A1/fr not_active Withdrawn
- 2013-11-14 WO PCT/EP2013/073844 patent/WO2014082864A1/fr active Application Filing
- 2013-11-14 CN CN201380069412.4A patent/CN104884972A/zh active Pending
- 2013-11-14 US US14/647,492 patent/US9699442B2/en active Active
- 2013-11-14 JP JP2015543396A patent/JP6320406B2/ja active Active
- 2013-11-14 CA CA2892659A patent/CA2892659A1/fr not_active Abandoned
Non-Patent Citations (3)
Title |
---|
JENS BUSCK ET AL: "Gated viewing and high-accuracy three-dimensional laser radar", APPLIED OPTICS, vol. 43, no. 24, 20 August 2004 (2004-08-20), WASHINGTON, DC; US, pages 4705, XP055377094, ISSN: 0003-6935, DOI: 10.1364/AO.43.004705 * |
PETER CENTEN ET AL: "R22: A Multi-Functional Imager for TOF and High Performance Video Applications Using a Global Shuttered 5 µm CMOS Pixel", 9 June 2011 (2011-06-09), XP055377092, Retrieved from the Internet <URL:http://www.imagesensors.org/Past Workshops/2011 Workshop/2011 Papers/R22_Centen_Multifunctional.pdf> [retrieved on 20170530] * |
See also references of WO2014082864A1 * |
Also Published As
Publication number | Publication date |
---|---|
FR2998666B1 (fr) | 2022-01-07 |
WO2014082864A1 (fr) | 2014-06-05 |
JP6320406B2 (ja) | 2018-05-09 |
CN104884972A (zh) | 2015-09-02 |
US9699442B2 (en) | 2017-07-04 |
FR2998666A1 (fr) | 2014-05-30 |
US20150319422A1 (en) | 2015-11-05 |
JP2016506492A (ja) | 2016-03-03 |
CA2892659A1 (fr) | 2014-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2926162A1 (fr) | Procede de production d'images avec information de profondeur et capteur d'image | |
BE1022488B1 (fr) | Systeme d'appareil de prise de vues a temps-de-vol | |
US7911496B2 (en) | Method of generating range images and apparatus therefor | |
TWI780462B (zh) | 距離影像攝像裝置及距離影像攝像方法 | |
FR3042912A1 (fr) | Capteur d'images a grande gamme dynamique | |
FR3033973A1 (fr) | Procede de reconstruction 3d d'une scene | |
EP3423860B1 (fr) | Dispositif de détection d'un spot laser | |
FR2996957A1 (fr) | Procede de lecture d'un pixel | |
FR3054093B1 (fr) | Procede et dispositif de detection d'un capteur d'images | |
EP2538665B1 (fr) | Detecteur a fonction d'imagerie selective et procédé de detection d'eclairs | |
EP3386191B1 (fr) | Capteur matriciel a codage temporel sans arbitrage | |
EP2056126B1 (fr) | Procédé de détection d'une impulsion lumineuse réfléchie sur un objet pour déterminer la distance de l'objet, capteur et dispositif de mise en oeuvre | |
EP3979648A1 (fr) | Dispositif de compensation du mouvement d'un capteur événementiel et système d'observation et procédé associés | |
EP3310039B1 (fr) | Dispositif électronique d'analyse d'une scène | |
FR2583882A1 (fr) | Dispositif de mesure de la vitesse et de la position d'un mobile par rapport au sol | |
EP2926544A1 (fr) | Procede de capture d'image avec un temps d'integration tres court | |
EP3069319B1 (fr) | Système et un procédé de caractérisation d'objets d'intérêt présents dans une scène | |
EP2735886B1 (fr) | Procédé d'imagerie 3D | |
FR3115145A1 (fr) | Dispositif d'acquisition d'une image 2d et d'une image de profondeur d'une scene | |
FR3066271B1 (fr) | Capteur de mouvement et capteur d'images | |
WO2014131726A1 (fr) | Procede de production d'images et camera a capteur lineaire | |
FR3131163A1 (fr) | Système d’observation et procédé d’observation associé | |
FR3116977A1 (fr) | Dispositif de compensation du mouvement d’un capteur événementiel, systèmes et procédés associés | |
FR3138529A1 (fr) | Acquisition de distances d'un capteur à une scène | |
FR2943179A1 (fr) | Capteur d'image mos et procede de lecture avec transistor en regime de faible inversion. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20150615 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: FEREYRE, PIERRE Inventor name: DIASPARRA, BRUNO Inventor name: PREVOST, VINCENT |
|
DAX | Request for extension of the european patent (deleted) | ||
TPAC | Observations filed by third parties |
Free format text: ORIGINAL CODE: EPIDOSNTIPA |
|
17Q | First examination report despatched |
Effective date: 20170607 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20171018 |