WO2016146938A1 - Procede de reconstruction 3d d'une scene - Google Patents

Procede de reconstruction 3d d'une scene Download PDF

Info

Publication number
WO2016146938A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
sensor
function
events
luminance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/FR2016/050575
Other languages
English (en)
French (fr)
Inventor
Sio-Hoï IENG
Ryad BENOSMAN
Bertram SHI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Centre National de la Recherche Scientifique CNRS
Universite Pierre et Marie Curie
Institut National de la Sante et de la Recherche Medicale INSERM
Original Assignee
Centre National de la Recherche Scientifique CNRS
Universite Pierre et Marie Curie
Institut National de la Sante et de la Recherche Medicale INSERM
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centre National de la Recherche Scientifique CNRS, Universite Pierre et Marie Curie, Institut National de la Sante et de la Recherche Medicale INSERM
Priority to CN201680016173.XA (CN107750372B)
Priority to EP16713966.6A (EP3272119B1)
Priority to US15/556,596 (US11335019B2)
Priority to JP2017549009A (JP6839091B2)
Priority to KR1020177029337A (KR102432644B1)
Publication of WO2016146938A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present invention relates to the field of 3D reconstruction of a scene, especially when it is captured using asynchronous sensors.
  • Asynchronous event-based vision sensors deliver compressed digital data as events.
  • Event-based vision sensors have the advantage of removing redundancy, reducing latency and increasing the dynamic range, both temporal and in gray levels, compared with conventional cameras.
  • the output of such a vision sensor may consist, for each pixel address, of a sequence of asynchronous events representative of changes in reflectance of the scene as they occur.
  • Each pixel of the sensor is independent and detects changes in light intensity greater than a threshold since the emission of the last event (for example a contrast of 15% on the logarithm of the intensity). When the intensity change exceeds the set threshold, an ON or OFF event is generated by the pixel according to whether the intensity increases or decreases (DVS sensors).
  • Some asynchronous sensors associate detected events with absolute light intensity measurements (ATIS sensors).
  • Since the sensor is not sampled on a clock like a conventional camera, it can account for the sequencing of events with very high temporal precision (for example of the order of 1 μs). If such a sensor is used to reconstruct a sequence of images, an image rate of several kilohertz can be reached, against a few tens of hertz for conventional cameras.
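  • As an illustration of this asynchronous output, the sketch below shows one possible in-memory representation of such an event stream in Python; the field names, the microsecond timestamps and the optional gray-level field are assumptions made for illustration, not the actual interface of a DVS or ATIS sensor.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """One asynchronous event e(p, t) emitted by a pixel of the sensor."""
    x: int                             # pixel column (position p in the matrix)
    y: int                             # pixel row
    t: float                           # timestamp in microseconds (order-of-1-us precision)
    pol: int                           # polarity: +1 (ON, intensity increase) or -1 (OFF, decrease)
    intensity: Optional[float] = None  # absolute gray level, only for ATIS-like sensors

# A stream is simply a time-ordered sequence of such events, one per detected change,
# instead of full frames sampled on a clock.
stream = [
    Event(x=120, y=64, t=10.0, pol=+1, intensity=0.42),
    Event(x=121, y=64, t=13.5, pol=-1),
]
```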
  • the present invention aims to improve the situation.
  • the present invention proposes a method specially adapted to asynchronous sensors for reconstructing scenes observed in 3D.
  • the present invention thus aims at a 3D reconstruction method of a scene, the method comprising:
  • receiving first asynchronous information from a first sensor having a first pixel matrix arranged facing the scene, the first asynchronous information comprising, for each pixel of the first matrix, first successive events coming from said pixel; receiving second asynchronous information from a second sensor having a second matrix of pixels arranged facing the scene, the second asynchronous information comprising, for each pixel of the second matrix, second successive events coming from said pixel, the second sensor being distinct from the first sensor; and pairing a first event among the first successive events with a second event among the second successive events as a function of a minimization of a cost function;
  • in which the cost function comprises at least one of:
  • luminance component being a function of at least:
  • motion component being a function of at least:
  • temporal values relating to the occurrence of events spatially localized at a predetermined distance from a pixel of the first sensor, and temporal values relating to the occurrence of events spatially localized at a predetermined distance from a pixel of the second sensor.
  • the cost function can further include:
  • temporal component being a function of a difference between:
  • the cost function may furthermore comprise:
  • a geometrical component, said geometrical component being a function of:
  • the luminance signal of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum encoding a time of occurrence of a luminance variation
  • the convolution kernel may be a Gaussian of predetermined variance.
  • said luminance component can further be a function of:
  • said motion component can further be a function of:
  • said motion component can be a function, for a given time, of:
  • said motion component can be a function of:
  • the present invention also provides a device for 3D reconstruction of a scene, the device comprising:
  • a processor adapted for pairing a first event among the first successive events with a second event among the second successive events according to a minimization of a cost function; in which the cost function comprises at least one of:
  • luminance component being a function of at least:
  • motion component being a function of at least:
  • temporal values relating to the occurrence of events spatially localized at a predetermined distance from a pixel of the first sensor, and temporal values relating to the occurrence of events spatially localized at a predetermined distance from a pixel of the second sensor.
  • a computer program implementing all or part of the method described above, installed on pre-existing equipment, is in itself advantageous.
  • the present invention also relates to a computer program comprising instructions for implementing the method described above, when this program is executed by a processor.
  • This program can use any programming language (for example, an object-oriented language or another), and be in the form of interpretable source code, partially compiled code or fully compiled code.
  • Figure 6 described in detail below can form the flow chart of the general algorithm of such a computer program.
  • FIG. 1 is a block diagram of an asynchronous light sensor of the ATIS type;
  • FIG. 2 is a diagram showing events generated by an asynchronous sensor placed opposite a scene comprising a rotating star;
  • FIG. 3 is an example of the calculation of a luminance component for two points of two distinct sensors
  • FIGS. 4a and 4b are examples of representation of an activity signal of a given pixel
  • FIG. 4c is a representation of motion maps generated using separate asynchronous sensors
  • FIGS. 5a and 5b are representations of examples of calculation of geometric components in one embodiment of the invention.
  • FIG. 6 illustrates a flow chart showing an embodiment according to the invention
  • FIG. 7 illustrates a device for implementing an embodiment according to the invention.
  • Figure 1 illustrates the principle of ATIS.
  • a pixel 101 of the matrix constituting the sensor comprises two photosensitive elements 102a, 102b, such as photodiodes, respectively associated with electronic detection circuits 103a, 103b.
  • the photodiode 102a and its circuit 103a produce a pulse P0 when the light intensity received by the photodiode 102a varies by a predefined amount.
  • the pulse P0 marking this intensity change triggers the electronic circuit 103b associated with the other photodiode 102b.
  • This circuit 103b then generates a first pulse P1 and a second pulse P2 as soon as a given amount of light (number of photons) is received by the photodiode 102b.
  • the time difference between the pulses P1 and P2 is inversely proportional to the light intensity received by the pixel 101 just after the appearance of the pulse P0.
  • the asynchronous information from the ATIS comprises two combined pulse trains for each pixel (104): the first pulse train P0 indicates the instants when the light intensity has changed beyond the detection threshold, while the second train consists of the pulses P1 and P2, whose time difference indicates the corresponding light intensities or gray levels.
  • An event e(p, t) coming from a pixel 101 of position p in the matrix of the ATIS then comprises two types of information: temporal information given by the position of the pulse P0, giving the instant t of the event, and gray-level information given by the time difference between the pulses P1 and P2.
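  • As a minimal sketch of this encoding, the function below recovers the event instant and a gray level from the three pulse times, assuming only the inverse-proportionality relation stated above; the calibration constant k is a hypothetical parameter introduced for illustration.

```python
def decode_atis_event(t_p0: float, t_p1: float, t_p2: float, k: float = 1.0):
    """Decode one ATIS event from its pulse times.

    t_p0       : time of the change-detection pulse P0 (the event instant t)
    t_p1, t_p2 : times of the two exposure pulses P1 and P2
    k          : hypothetical calibration constant (intensity = k / (t_p2 - t_p1))
    """
    dt = t_p2 - t_p1
    if dt <= 0:
        raise ValueError("P2 must occur after P1")
    intensity = k / dt          # time difference inversely proportional to light intensity
    return t_p0, intensity      # (event instant, decoded gray level)
```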
  • each point identifies an event e(p, t) generated asynchronously at an instant t at a pixel p of the sensor by the movement of a star rotating at constant angular velocity.
  • the majority of these points are distributed in the vicinity of a generally helical surface.
  • the figure also shows a number of events, at a distance from the helical surface, that do not correspond to the actual motion of the star; these events are acquisition noise.
  • the events e(p, t) can then be defined by the following set of information: e(p, t) = { p, pol, I(p, t) }, with C the spatial domain of the sensor (p ∈ C), pol the polarity representing the direction of the luminance change (e.g. 1 for an increase or -1 for a decrease), and I(p, t) the luminous intensity signal of the point p at the instant t.
  • the intensity signal of the pixel situated at the coordinate p thus makes it possible to code the luminance information temporally. This information can come directly from the electronic circuit of the sensor with a minimum of transformation.
  • FIG. 3 is an example of the calculation of a luminance component for two points p and q of two distinct sensors u and v.
  • the surfaces composing the observed scene are Lambertian surfaces (i.e. surfaces whose luminance is the same whatever the angle of observation).
  • the luminance component can be expressed as a function of ∫ I_u(p_i, t) · I_v(q_j, t) dt, the temporal correlation between the luminance signals of the pixel p_i of the sensor u and the pixel q_j of the sensor v.
  • the kernel g_σ(t) is advantageously a Gaussian of predetermined variance σ. It can also be a gate function of width σ.
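  • The exact expression of the luminance component is only partially legible above; as an illustration only, the sketch below assumes that each pixel's luminance signal is obtained by spreading its gray-level measurements with the kernel g_σ(t) and that the component is one minus the normalized temporal correlation of the two signals. These assumptions are made for illustration and may differ from the precise formula of the patent.

```python
import numpy as np

def gaussian_kernel(ts: np.ndarray, sigma: float) -> np.ndarray:
    """Convolution kernel g_sigma(t): a Gaussian of predetermined variance sigma."""
    return np.exp(-0.5 * ts ** 2 / sigma) / np.sqrt(2.0 * np.pi * sigma)

def convolved_signal(event_times, gray_levels, t_grid, sigma):
    """Luminance signal of one pixel: its gray-level measurements spread by g_sigma."""
    t_grid = np.asarray(t_grid, dtype=float)
    sig = np.zeros_like(t_grid)
    for t_e, g in zip(event_times, gray_levels):
        sig += g * gaussian_kernel(t_grid - t_e, sigma)
    return sig

def luminance_component(sig_u, sig_v, dt):
    """Assumed form: 1 - normalized temporal correlation of the two convolved signals."""
    num = np.sum(sig_u * sig_v) * dt
    den = np.sqrt(np.sum(sig_u ** 2) * dt) * np.sqrt(np.sum(sig_v ** 2) * dt)
    return 1.0 - num / den if den > 0 else 1.0
```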
  • FIG. 4a shows three possible activity signals t ↦ S(p, t), for three pixels p1, p2 and p3 of the sensor (and for a given value of polarity pol).
  • S(p1, t), S(p2, t) or S(p3, t) is zero.
  • S(p1, t) takes a predetermined threshold value (here h; this value h may be unitary).
  • the value of the activity signal S(p1, t) then decreases gradually after this event, tending towards 0.
  • the value of the activity signal S can be set to the sum (possibly weighted) of the current value of S just before the event 422 (i.e. h0) and h.
  • the decay of the curve S will then start from the value h + h0, as shown in FIG. 4b.
  • alternatively, the value of the curve S is set to the value h regardless of the value of h0 (i.e. the events prior to this last event are ignored).
  • the last event time can be defined as follows: T(p, pol, i) = max(t_j), with j < i, or T(p, pol, t) = max(t_j), with t_j < t.
  • p ↦ T(p, pol, t) thus defines a map of the times of the last events of the same polarity occurring temporally just before a reference time t.
  • p ↦ S(p, pol, t) is then a function of this map of times T(p, pol, t), decreasing with t − T(p, pol, t) with a predetermined time constant τ (S may be any function decreasing with time t over an interval having T(p, pol, t) as lower bound).
  • a pixel map S representative of the "freshness" of the events of these pixels is advantageous because it provides a continuous and simple representation of discontinuous notions (i.e. events). This map makes it possible to transform the representation of the events into a domain that is simpler to handle.
  • This S function is representative of a "freshness" of events occurring for this pixel.
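  • A minimal sketch of such a "freshness" map, assuming the common exponential-decay form with time constant τ and threshold value h; one map would be kept per sensor and per polarity, and the array layout and the exponential form are assumptions made for illustration.

```python
import numpy as np

class FreshnessMap:
    """Map p -> S(p, t): value h at a pixel's last event, then decaying towards 0.

    One instance per sensor and per polarity; earlier events of a pixel are simply
    overwritten by its last event (the "ignore previous events" variant above).
    """

    def __init__(self, height: int, width: int, tau: float, h: float = 1.0):
        self.T = np.full((height, width), -np.inf)  # time of the last event per pixel
        self.tau = tau
        self.h = h

    def update(self, x: int, y: int, t: float) -> None:
        """Record the time of the last event received at pixel (x, y)."""
        self.T[y, x] = t

    def value(self, t: float) -> np.ndarray:
        """Assumed form S(p, t) = h * exp(-(t - T(p)) / tau); 0 where no event yet."""
        return self.h * np.exp(-(t - self.T) / self.tau)
```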
  • the maps 401 and 402 of FIG. 4c are representations of this function S for a given time t and for two asynchronous sensors capturing the movement of the same hand from two different points of view.
  • the darkest points represent points whose latest events are the most recent with respect to time t (i.e. having the largest value of S).
  • the lightest points represent points whose last events are the farthest from time t (i.e. having the smallest value of S; the background of the image is grayed out to make the light values easier to see, although the background corresponds to zero values of the function S).
  • the scattered dark spots correspond to acquisition noise from the sensors.
  • each pixel p of the map has the value S(p, t0).
  • the set of points near p defines a set v_u(p) (405) and the set of points near q defines a set v_v(q) (406) (N being the cardinality of these sets).
  • the motion component for a given instant t, can be expressed as follows:
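  • The expression of the motion component is not reproduced above; as an illustration only, the sketch below assumes a motion cost that compares the freshness values S over the two neighbourhoods v_u(p) and v_v(q) through a mean absolute difference. The neighbourhood shape and the combination used here are assumptions and may differ from the patent's precise formula.

```python
import numpy as np

def motion_component(S_u: np.ndarray, S_v: np.ndarray,
                     p: tuple, q: tuple, radius: int) -> float:
    """Assumed motion cost: mean |S_u - S_v| over square neighbourhoods of p and q.

    S_u, S_v : freshness maps of the two sensors evaluated at the same instant t
    p, q     : (x, y) pixel coordinates on the first and second sensor
    radius   : half-size of the neighbourhood v_u(p) / v_v(q); pixels are assumed
               to lie far enough from the image border for the slice to be full
    """
    def patch(S, c):
        x, y = c
        return S[y - radius:y + radius + 1, x - radius:x + radius + 1]

    return float(np.mean(np.abs(patch(S_u, p) - patch(S_v, q))))  # small => similar local motion
```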
  • FIGS. 5a and 5b are representations of examples of calculation of geometric components in one embodiment of the invention.
  • R_u is the projection center of the sensor 501 and R_v is the projection center of the sensor 502.
  • This epipolar line l_uv is defined as the intersection of the plane (X(t), R_u, R_v) with the sensor 502.
  • a point p of the first sensor 501 defines an epipolar line l_v(p) on the second sensor 502 and a point q of the second sensor 502 defines an epipolar line l_u(q) on the first sensor 501.
  • a point p of the first sensor and a point q of the second sensor define an epipolar intersection i_w(p, q) on the third sensor;
  • a point p of the first sensor and a point r of the third sensor define an epipolar intersection i_v(p, r) on the second sensor;
  • a point q of the second sensor and a point r of the third sensor define an epipolar intersection i_u(q, r) on the first sensor.
  • if the camera has more than three sensors (e.g. Q sensors), it is possible to generalize the previous formula by considering that the epipolar intersection on a given sensor is the point closest to all the epipolar lines defined on this sensor by the current points of the other sensors (for example, the point minimizing the sum of the distances, or of the squared distances, to these epipolar lines).
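  • A minimal sketch of this generalization, assuming the epipolar lines are given in homogeneous form a·x + b·y + c = 0 with normalized coefficients (a² + b² = 1), and using the squared-distance criterion mentioned above:

```python
import numpy as np

def epipolar_intersection(lines) -> np.ndarray:
    """Point minimizing the sum of squared distances to several epipolar lines.

    lines : iterable of (a, b, c) with a*x + b*y + c = 0 and a**2 + b**2 = 1,
            one line per other sensor of the camera.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for a_i, b_i, c_i in lines:
        n = np.array([a_i, b_i])
        A += np.outer(n, n)       # normal equations of the least-squares problem
        b -= c_i * n
    return np.linalg.solve(A, b)  # (x, y) of the generalized epipolar intersection

# With two non-parallel lines this is their ordinary intersection; with Q - 1 >= 3
# lines it is the least-squares compromise described above.
```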
  • Figure 6 illustrates a flow chart showing an embodiment according to the invention.
  • on receiving two sets of asynchronous events 601 and 602 coming from two different asynchronous sensors facing the same scene, it is possible to select two events from these sensors (step 603), defined by the pixel p_i and the time t_1i for the first sensor and by the pixel q_j and the time t_2j for the second sensor.
  • a temporal component (step 605)
  • a luminance component (step 607).
  • the calculated distances (or the position of the point X (t) in the space) are then returned (612).
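  • To tie the steps of this flowchart together, the sketch below shows one possible matching loop: for a given event of the first sensor, the candidate events of the second sensor are scored with a weighted sum of cost components and the pairing of lowest cost is kept. The weights, the component functions and the way they are combined are placeholders introduced for illustration, not values taken from the patent.

```python
def match_event(e1, candidates, components, weights):
    """Pair one event of the first sensor with the best event of the second sensor.

    e1         : event (p_i, t_1i) of the first sensor
    candidates : events (q_j, t_2j) of the second sensor considered for pairing
    components : dict name -> function(e1, e2) returning one cost component
                 (e.g. temporal, luminance, motion, geometric)
    weights    : dict name -> weight of that component in the total cost
    """
    best, best_cost = None, float("inf")
    for e2 in candidates:
        cost = sum(weights[name] * f(e1, e2) for name, f in components.items())
        if cost < best_cost:
            best, best_cost = e2, cost
    return best, best_cost  # the retained pairing then allows triangulating X(t)
```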
  • FIG. 7 illustrates a device for implementing an embodiment according to the invention.
  • the device comprises a computer 700, comprising a memory 705 for storing instructions for implementing the method, the received measurement data, and temporary data for performing the various steps of the method as described above.
  • the computer further comprises a circuit 704.
  • This circuit can be, for example:
  • a processor capable of interpreting instructions in the form of a computer program, or
  • a programmable electronic chip such as an FPGA (for "Field Programmable Gate Array").
  • This computer has an input interface 703 for receiving the events of the sensors, and an output interface 706 for the delivery of the distances 707.
  • the computer can comprise, to allow easy interaction with a user, a screen 701 and a keyboard 702.
  • the keyboard is optional, especially in the context of a computer in the form of a touch pad, for example.
  • FIG. 6 is a typical example of a program whose instructions can be implemented with the device described. As such, FIG. 6 may correspond to the flowchart of the general algorithm of a computer program within the meaning of the invention.
  • the flowchart of FIG. 6 may also include an iteration over the events e(p_i, t_1i) of the first sensor, in order to associate several events of the first sensor with events of the second sensor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
PCT/FR2016/050575 2015-03-16 2016-03-15 Procede de reconstruction 3d d'une scene Ceased WO2016146938A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201680016173.XA CN107750372B (zh) 2015-03-16 2016-03-15 一种场景三维重建的方法、装置及计算机可读介质
EP16713966.6A EP3272119B1 (fr) 2015-03-16 2016-03-15 Procédé et dispositif de reconstruction 3d d'une scene
US15/556,596 US11335019B2 (en) 2015-03-16 2016-03-15 Method for the 3D reconstruction of a scene
JP2017549009A JP6839091B2 (ja) 2015-03-16 2016-03-15 シーンを3d再構成するための方法
KR1020177029337A KR102432644B1 (ko) 2015-03-16 2016-03-15 장면의 3d 재구성 방법

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1552154A FR3033973A1 (fr) 2015-03-16 2015-03-16 Procede de reconstruction 3d d'une scene
FR1552154 2015-03-16

Publications (1)

Publication Number Publication Date
WO2016146938A1 (fr)

Family

ID=53879566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FR2016/050575 Ceased WO2016146938A1 (fr) 2015-03-16 2016-03-15 Procede de reconstruction 3d d'une scene

Country Status (7)

Country Link
US (1) US11335019B2 (en)
EP (1) EP3272119B1 (fr)
JP (1) JP6839091B2 (ja)
KR (1) KR102432644B1 (ko)
CN (1) CN107750372B (zh)
FR (1) FR3033973A1 (fr)
WO (1) WO2016146938A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3690736A1 (en) 2019-01-30 2020-08-05 Prophesee Method of processing information from an event-based sensor
EP3694202A1 (en) 2019-02-11 2020-08-12 Prophesee Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764078B (zh) * 2018-05-15 2019-08-02 上海芯仑光电科技有限公司 一种事件数据流的处理方法及计算设备
CN108765487B (zh) * 2018-06-04 2022-07-22 百度在线网络技术(北京)有限公司 重建三维场景的方法、装置、设备和计算机可读存储介质
US12131544B2 (en) * 2019-05-16 2024-10-29 Prophesee Method for capturing motion of an object and a motion capture system
CN111369482B (zh) * 2020-03-03 2023-06-23 北京市商汤科技开发有限公司 图像处理方法及装置、电子设备和存储介质
CN116134289B (zh) 2020-09-07 2025-08-22 发那科株式会社 三维测量装置
JP7648652B2 (ja) 2020-11-25 2025-03-18 ファナック株式会社 三次元計測装置及び三次元計測プログラム
JP2022188988A (ja) * 2021-06-10 2022-12-22 キヤノン株式会社 情報処理装置、情報処理方法およびプログラム

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100731350B1 (ko) * 2005-07-26 2007-06-21 삼성전자주식회사 공간적 균일성 보상 장치 및 그 방법
US8290250B2 (en) * 2008-12-26 2012-10-16 Five Apes, Inc. Method and apparatus for creating a pattern recognizer
US8797387B2 (en) * 2011-04-27 2014-08-05 Aptina Imaging Corporation Self calibrating stereo camera
EP2574511B1 (en) * 2011-09-30 2016-03-16 Honda Research Institute Europe GmbH Analyzing road surfaces
FR2983998B1 (fr) * 2011-12-08 2016-02-26 Univ Pierre Et Marie Curie Paris 6 Procede de reconstruction 3d d'une scene faisant appel a des capteurs asynchrones
FR2985065B1 (fr) * 2011-12-21 2014-01-10 Univ Paris Curie Procede d'estimation de flot optique a partir d'un capteur asynchrone de lumiere
KR101896666B1 (ko) * 2012-07-05 2018-09-07 삼성전자주식회사 이미지 센서 칩, 이의 동작 방법, 및 이를 포함하는 시스템
KR20140056986A (ko) * 2012-11-02 2014-05-12 삼성전자주식회사 모션 센서 어레이 장치, 상기 모선 센서 어레이를 이용한 거리 센싱 시스템, 및 거리 센싱 방법
KR20140095793A (ko) * 2013-01-25 2014-08-04 삼성디스플레이 주식회사 패널 얼룩 평가 방법 및 시스템
US9767571B2 (en) * 2013-07-29 2017-09-19 Samsung Electronics Co., Ltd. Apparatus and method for analyzing image including event information
US10055013B2 (en) * 2013-09-17 2018-08-21 Amazon Technologies, Inc. Dynamic object tracking for user interfaces
KR101861515B1 (ko) * 2016-04-21 2018-05-25 송경희 급수 가압용 부스터 펌프 시스템에서 설정 양정에 대한 회전수별 유량 계산 방법

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JURGEN KOGLER ET AL: "Event-Based Stereo Matching Approaches for Frameless Address Event Stereo Data", 26 September 2011, ADVANCES IN VISUAL COMPUTING, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 674 - 685, ISBN: 978-3-642-24027-0, XP019166064 *
PAUL ROGISTER ET AL: "Asynchronous Event-Based Binocular Stereo Matching", IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 23, no. 2, 1 February 2012 (2012-02-01), pages 347 - 353, XP011406609, ISSN: 2162-237X, DOI: 10.1109/TNNLS.2011.2180025 *
POSCH CHRISTOPH ET AL: "Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 102, no. 10, 1 October 2014 (2014-10-01), pages 1470 - 1484, XP011559302, ISSN: 0018-9219, [retrieved on 20140916], DOI: 10.1109/JPROC.2014.2346153 *
T. DELBRÜCK ET AL.: "Activity-Driven, Event-Based Vision Sensors", PROCEEDINGS OF 2010 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS, 2010, pages 2426 - 2429

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3690736A1 (en) 2019-01-30 2020-08-05 Prophesee Method of processing information from an event-based sensor
WO2020157157A1 (en) 2019-01-30 2020-08-06 Prophesee Method of processing information from an event-based sensor
US11995878B2 (en) 2019-01-30 2024-05-28 Prophesee Method of processing information from an event-based sensor
EP3694202A1 (en) 2019-02-11 2020-08-12 Prophesee Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor
WO2020165106A1 (en) 2019-02-11 2020-08-20 Prophesee Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor
US20220100658A1 (en) * 2019-02-11 2022-03-31 Prophesee Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor
US11871125B2 (en) 2019-02-11 2024-01-09 Prophesee Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor

Also Published As

Publication number Publication date
CN107750372A (zh) 2018-03-02
FR3033973A1 (fr) 2016-09-23
JP2018516395A (ja) 2018-06-21
KR20180020952A (ko) 2018-02-28
KR102432644B1 (ko) 2022-08-16
US11335019B2 (en) 2022-05-17
EP3272119A1 (fr) 2018-01-24
CN107750372B (zh) 2021-12-10
JP6839091B2 (ja) 2021-03-03
US20180063506A1 (en) 2018-03-01
EP3272119B1 (fr) 2021-11-10

Similar Documents

Publication Publication Date Title
EP3272119B1 (fr) Procédé et dispositif de reconstruction 3d d'une scene
CA2859900C (fr) Procede d'estimation de flot optique a partir d'un capteur asynchrone de lumiere
EP3138079B1 (fr) Procédé de suivi de forme dans une scène observée par un capteur asynchrone de lumière
EP3271869B1 (fr) Procédé de traitement d'un signal asynchrone
WO2013052781A1 (en) Method and apparatus to determine depth information for a scene of interest
FR2882160A1 (fr) Procede de capture d'images comprenant une mesure de mouvements locaux
US11539895B1 (en) Systems, methods, and media for motion adaptive imaging using single-photon image sensor data
EP3435332A1 (fr) Dispositif électronique et procédé de génération, à partir d'au moins une paire d'images successives d'une scène, d'une carte de profondeur de la scène, drone et programme d'ordinateur associés
FR2952743A3 (fr) Procede d'estimation du mouvement d'un instrument d'observation a defilement survolant un corps celeste
WO2014174061A1 (fr) Suivi visuel d'objet
EP3979648A1 (fr) Dispositif de compensation du mouvement d'un capteur événementiel et système d'observation et procédé associés
FR2919407A1 (fr) Procede, dispositif et systeme pour la fusion d'informations provenant de plusieurs capteurs.
EP2943935B1 (fr) Estimation de mouvement d'une image
EP3360055A1 (fr) Procede d'optimisation de decomposition d'un signal asynchrone
WO2006032650A1 (fr) Procede de detection et de pistage de cibles ponctuelles, dans un systeme de surveillance optronique
EP4341897B1 (fr) Procédé et dispositif de traitement d'une séquence d'images pour la détermination de vignettes poursuivies dans ladite séquence d'images
EP3370205A1 (fr) Procédé et dispositif électronique de détermination d'un modèle en perspective d'un objet, programme d'ordinateur et appareil électronique d'affichage associés
EP4203490A1 (fr) Système d observation et procédé d observation associé
WO2026006545A1 (en) Systems, methods, and media for concurrent depth and motion estimation using indirect time of flight imaging
WO2018178104A1 (fr) Procédé et dispositif d'acquisition et de restitution d'images numériques avec une dynamique étendue
FR3103940A1 (fr) Procédé et dispositif de traitement d’images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16713966

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2016713966

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 15556596

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2017549009

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20177029337

Country of ref document: KR

Kind code of ref document: A