WO2023186582A1 - Sensing arrangement, method and computer program - Google Patents

Sensing arrangement, method and computer program

Info

Publication number
WO2023186582A1
WO2023186582A1 (PCT/EP2023/056943)
Authority
WO
WIPO (PCT)
Prior art keywords
vision sensor
dynamic vision
field
view
scene
Prior art date
Application number
PCT/EP2023/056943
Other languages
French (fr)
Inventor
Dario BRESCIANINI
Carter FANG
Original Assignee
Sony Group Corporation
Sony Europe B.V.
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation, Sony Europe B.V. filed Critical Sony Group Corporation
Publication of WO2023186582A1 publication Critical patent/WO2023186582A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/245Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns


Abstract

A sensing arrangement is provided. The sensing arrangement comprises a first dynamic vision sensor and a second dynamic vision sensor. The first dynamic vision sensor is calibrated to observe a first field of view of a scene and the second dynamic vision sensor is calibrated to observe a second field of view of the scene, which is larger than the first field of view of the scene.

Description

Sensing Arrangement, Method and Computer Program
Field
Examples relate to a sensing arrangement, a method and a computer program.
Background
3-dimensional (3D) laser scanners capture a 3D structure of an object or a scene. They can operate based on time-of-flight or triangulation.
Triangulation-based systems leverage two components, namely a laser and an imaging sensor for observing how the laser beam or fan interacts with an object or a scene. By calibrating these components with respect to one another, a triangulation of the observations of the imaging sensor can be performed. Thus, a single 3D point measurement for each observed laser pixel can be recovered. A speed and an accuracy of these systems can be limited by individual components. To maximize speed, the imaging sensor requires a high framerate, and the laser requires a high sweeping frequency. There exists, however, a trade-off between speed and accuracy. Increasing accuracy may require higher image resolution, which may require additional computation. With high resolution, however, these systems can achieve good accuracies in the micrometer range. A drawback of these systems is an inefficient use of data and a limited range. At each laser position, an image may be captured and may be used to recover a set of 3D measurements. But only a small subset of the image pixels may be used. For these reasons, triangulation-based systems are used in applications requiring high accuracy measurements at close range, such as parts inspection.
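As an illustrative aside, not part of the original disclosure, the triangulation step can be sketched in a few lines of Python: an observed laser pixel is back-projected to a viewing ray, and the 3D point is recovered by intersecting that ray with the calibrated laser plane. The intrinsic matrix, plane parameters and pixel coordinates below are hypothetical example values.

```python
import numpy as np

def triangulate_point(pixel, K, plane_normal, plane_offset):
    """Intersect the camera ray of an observed laser pixel with the
    calibrated laser plane (all quantities in the camera frame)."""
    # Back-project the pixel (u, v) to a viewing ray through the camera centre.
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # The point is X = t * ray with plane_normal . X = plane_offset.
    t = plane_offset / (plane_normal @ ray)
    return t * ray

# Hypothetical calibration: 800 px focal length, laser plane about 0.8 m away,
# tilted about the x-axis by 0.3 rad.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.0, np.sin(0.3), np.cos(0.3)])
print(triangulate_point((400, 260), K, n, 0.8))  # one 3D point per observed laser pixel
```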
Time-of-flight-based systems capture the 3D structure of a scene using time-of-flight, rather than image analysis. Laser beams are emitted at known deviation angles and the time required to receive the reflected beam is used to determine the distance to the surface. By emitting many such beams from the sensor, light imaging, detection and ranging sensors can describe a structure of a scene with many 3D point measurements known as a “point-cloud”. In contrast to triangulation-based systems, these systems can operate at large distances of several kilometers. A drawback of these systems is that they can only scan a limited number of points resulting in relatively low resolution.
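For comparison, and again only as an illustration outside the original text, the time-of-flight principle reduces to halving the round-trip travel time multiplied by the speed of light; the round-trip time used below is a hypothetical value.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface from the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A reflection received ~6.67 microseconds after emission is roughly 1 km away.
print(tof_distance(6.67e-6))  # ~999.8 m
```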
Thus, there may be a need to improve a sensing arrangement, especially for sensing a 3D structure, e.g., to determine a depth information.
Summary
This demand is met by sensing arrangements and methods in accordance with the independent claims. Advantageous embodiments are addressed by the dependent claims.
According to a first aspect, the present disclosure provides a sensing arrangement, comprising a first dynamic vision sensor and a second dynamic vision sensor. The first dynamic vision sensor is calibrated to observe a first field of view of a scene and the second dynamic vision sensor is calibrated to observe a second field of view of the scene, which is larger than the first field of view of the scene.
According to a second aspect, the present disclosure provides a method, comprising detecting reflected light from a scene with a first dynamic vision sensor and detecting reflected light from the scene with a second dynamic vision sensor. A first field of view of the first dynamic vision sensor of the scene is smaller than a second field of view of the second dynamic vision sensor of the scene.
According to a third aspect, the present disclosure provides a computer program having a program code for performing the method as described above, when the computer program is executed on a computer, a processor, or a programmable hardware component.
Brief description of the Figures
Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
Fig. 1 shows an example of a sensing arrangement; Fig. 2 shows another example of a sensing arrangement; and
Fig. 3 shows a block diagram of an example of a method.
Detailed Description
Some examples are now described in more detail with reference to the enclosed figures. However, other possible examples are not limited to the features of these embodiments described in detail. Other examples may include modifications of the features as well as equivalents and alternatives to the features. Furthermore, the terminology used herein to describe certain examples should not be restrictive of further possible examples.
Throughout the description of the figures same or similar reference numerals refer to same or similar elements and/or features, which may be identical or implemented in a modified form while providing the same or a similar function. The thickness of lines, layers and/or areas in the figures may also be exaggerated for clarification.
When two elements A and B are combined using an “or”, this is to be understood as disclosing all possible combinations, i.e. only A, only B as well as A and B, unless expressly defined otherwise in the individual case. As an alternative wording for the same combinations, "at least one of A and B" or "A and/or B" may be used. This applies equivalently to combinations of more than two elements.
If a singular form, such as “a”, “an” and “the” is used and the use of only a single element is not defined as mandatory either explicitly or implicitly, further examples may also use several elements to implement the same function. If a function is described below as implemented using multiple elements, further examples may implement the same function using a single element or a single processing entity. It is further understood that the terms "include", "including", "comprise" and/or "comprising", when used, describe the presence of the specified features, integers, steps, operations, processes, elements, components and/or a group thereof, but do not exclude the presence or addition of one or more other features, integers, steps, operations, processes, elements, components and/or a group thereof.

Fig. 1 shows an example of a sensing arrangement 100. The sensing arrangement 100 comprises a first dynamic vision sensor 110 (DVS) and a second dynamic vision sensor 120. The first dynamic vision sensor 110 is calibrated to observe a first field of view 130 of a scene 150 and the second dynamic vision sensor 120 is calibrated to observe a second field of view 140 of the scene 150, which is larger than the first field of view 130 of the scene 150. By combining the first DVS 110 and the second DVS 120, a sensing of the scene 150 can be improved, especially by combining the first DVS 110 with the first field of view 130 different from the second field of view 140 of the second DVS 120. For example, the first DVS 110 can be used to scan a first region-of-interest and the second DVS 120 can be used to scan a second region-of-interest. The first region-of-interest may be smaller than the second region-of-interest, and thus a spatial resolution for scanning the first region-of-interest may be higher than a spatial resolution for scanning the second region-of-interest. Further, an accuracy for scanning the first region-of-interest with the first DVS 110 may be higher than an accuracy for scanning the second region-of-interest with the second DVS 120.
By combining two DVS 110, 120 with different fields of view 130, 140, a high-bandwidth adaptive spatial resolution three-dimensional scan of arbitrary elements-of-interest (e.g., in a first region-of-interest) within a scene 150 with high resolution can be performed. The second DVS 120 can be used to scan the scene 150 for a (second) element/region-of-interest, which could be scanned by the first DVS 110 with a higher spatial resolution. Thus, a detection of the second element/region-of-interest can be performed faster, since the second field of view 140 may cover a larger area of the scene 150 and the scan of the first element/region-of-interest may still provide a desired spatial resolution due to the higher accuracy of the smaller first field of view 130 of the first DVS 110.
For example, by combining the first DVS 110 and the second DVS 120 with different fields of view 130, 140, the second DVS 120 can be used to scan a large area of the scene 150, e.g., to determine the first region-of-interest for the first DVS 110. Based on information provided by the second DVS 120, the first region-of-interest for the first DVS 110 can be determined (e.g., by a processing circuitry). Thus, the first field of view 130 can be determined or adjusted in an improved way. For example, the first field of view 130 can be determined or adjusted to comprise an area of the scene 150 for which a high spatial resolution is needed to determine a depth information, e.g., an area of the scene 150 comprising an element-of-interest. For example, a(n achievable) spatial resolution/accuracy of the first DVS 110 can be substantially identical to a(n achievable) spatial resolution/accuracy of the second DVS 120. Thus, due to the smaller first field of view 130, a spatial resolution/accuracy of the scan of the first field of view 130 may be increased in comparison to a spatial resolution/accuracy of the scan of the second field of view 140. With this setup, each DVS 110, 120 could in principle be used to scan either a smaller field of view or a larger field of view compared to the other DVS 110, 120. Alternatively, the spatial resolution/accuracy of the first DVS 110 may be different from the spatial resolution/accuracy of the second DVS 120, e.g., the spatial resolution/accuracy of the first DVS 110 may be better. For this case, the first DVS 110 may be assigned to a smaller field of view (e.g., the first field of view 130), e.g., for scanning an element-of-interest, and the second DVS 120 may be assigned to a larger field of view (e.g., the second field of view 140), e.g., for scanning the (whole) scene 150.
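The effect of assigning the smaller field of view to the first DVS can be illustrated with a simple back-of-the-envelope calculation, which is not part of the original disclosure: for two sensors with the same pixel count, the angle covered per pixel shrinks in proportion to the field of view. The pixel counts and field-of-view angles below are hypothetical.

```python
import math

def angular_resolution_mrad(fov_deg: float, pixels: int) -> float:
    """Approximate angle covered per pixel (milliradians) for a sensor whose
    `pixels` pixels span a field of view of `fov_deg` degrees."""
    return math.radians(fov_deg) / pixels * 1e3

# Same sensor resolution, different fields of view: the narrow field of view
# resolves the region-of-interest roughly ten times finer.
wide = angular_resolution_mrad(60.0, 1280)   # second DVS, whole scene
narrow = angular_resolution_mrad(6.0, 1280)  # first DVS, region-of-interest
print(f"{wide:.3f} mrad/px vs {narrow:.3f} mrad/px")
```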
A DVS 110, 120 may capture a light intensity (e.g., a brightness, luminous intensity) change of light received from the scene 150 over time. The DVS 110, 120 may include pixels operating independently and asynchronously. The pixels may detect the light intensity change as it occurs. The pixels may stay silent otherwise. The pixels may generate an electrical signal, e.g., called event, which may indicate a per-pixel light intensity change by a predefined threshold. Accordingly, the DVS 110, 120 may be an example for an event-based image sensor.
Each pixel may include a photo-sensitive element exposed to the light received from the scene 150. The received light may cause a photocurrent in the photo-sensitive element depending on a value of light intensity of the received light. A difference between a resulting output voltage and a previous voltage reset-level may be compared against the predefined threshold. For instance, a circuit of the pixel may include comparators with different bias voltages for an ON- and an OFF-threshold. The comparators may compare an output voltage against the ON- and the OFF-threshold. The ON- and the OFF-threshold may correspond to a voltage level higher or lower than the voltage reset-level by the predefined threshold, respectively. When the ON- or the OFF-threshold is crossed, an ON- or an OFF-event may be communicated to a periphery of the DVS 110, 120, respectively. Then, the voltage reset-level may be newly set to the output voltage that triggered the event. In this manner, the pixel may log a light-intensity change since a previous event. The periphery of the DVS 110, 120 may include a readout circuit to associate each event with a time stamp and pixel coordinates of the pixel that recorded the event. A series of events captured by the DVS 110, 120 at a certain perspective and over a certain time may be considered as an event stream.
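A toy model of this per-pixel behaviour, added here only for illustration and not taken from the original text, could look as follows; the threshold value and the intensity samples are arbitrary assumptions.

```python
import math

class DvsPixel:
    """Toy DVS pixel: emits an ON or OFF event when the log light intensity
    deviates from the last reset level by more than a threshold."""

    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold
        self.reset_level = None  # log intensity at the last event / reset

    def update(self, intensity: float, timestamp_us: int):
        level = math.log(intensity)
        if self.reset_level is None:
            self.reset_level = level          # initialise on the first sample
            return None
        diff = level - self.reset_level
        if diff >= self.threshold:            # ON-threshold crossed
            self.reset_level = level
            return ("ON", timestamp_us)
        if diff <= -self.threshold:           # OFF-threshold crossed
            self.reset_level = level
            return ("OFF", timestamp_us)
        return None                           # pixel stays silent

pixel = DvsPixel()
for t, intensity in enumerate([1.0, 1.05, 1.4, 1.35, 0.9]):
    event = pixel.update(intensity, t)
    if event:
        print(event)  # ('ON', 2) then ('OFF', 4)
```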
Thus, a DVS 110, 120 has a much higher bandwidth than a traditional sensor as each pixel responds asynchronously to a light intensity change. A DVS 110, 120 may achieve a temporal resolution of 1 µs. 3D points may be triangulated at the same temporal resolution of 1 µs. Thus, complete depth scans at rates larger than 1 kHz can be achieved.
In an example, an accuracy of the first dynamic vision sensor 110 for the first field of view 130 may be larger than an accuracy of the second dynamic vision sensor 120 for the second field of view 140. Thus, a scan of the first region-of-interest can be improved, e.g., increasing an accuracy of the first DVS 110.
In an example, a (spatial) resolution of the first dynamic vision sensor 110 for the first field of view 130 may be larger than a (spatial) resolution of the second dynamic vision sensor 120 for the second field of view 140. For example, the spatial resolution of the first DVS 110 may be chosen, such that the first region-of-interest can be scanned with a desired accuracy.
In an example, at least one of the first dynamic vision sensor 110 or the second dynamic vision sensor 120 may be configured to determine information, which could be used to determine a depth information of the scene 150. Thus, a depth information of the scene 150 can be determined. For example, the sensing arrangement 100 can be an arrangement for depth sensing of the scene 150.
In an example, the sensing arrangement 100 may further comprise a first lens corresponding to the first dynamic vision sensor 110 and a second lens corresponding to the second dynamic vision sensor 120. The second lens may have a wider field of view than the first lens. Thus, the first field of view 130 and the second field of view 140 can be adjusted by use of the respective first lens or second lens, which may ease an adjustment of the fields of view 130, 140.

In an example, the sensing arrangement 100 may further comprise a light source to emit light onto the scene 150. Thus, an event, which can be detected by the first DVS 110 and/or the second DVS 120, can be controlled/triggered by the light source. Further, a light intensity at the first DVS 110 and/or the second DVS 120 can be controlled by the light source, which may increase a signal-to-noise ratio.
In an example, the sensing arrangement 100 may further comprise an optical diffraction grating to generate a light pattern that is cast onto the scene 150 and reflected by the scene 150 towards the first dynamic vision sensor 110 and the second dynamic vision sensor 120. Thus, by using the optical diffraction grating an illumination of the scene 150 can be adjusted.
In an example, the sensing arrangement 100 may further comprise a scanning mirror that can be used to change an illuminance of the light pattern onto the scene 150. This way, the illuminance of the light pattern can be controlled by the scanning mirror, e.g., by an orientation of the scanning mirror, such that a correlation between the illuminance of the light pattern and an event determined by the first DVS 110 and/or the second DVS 120 can be determined. For example, an event determined by the first DVS 110 and/or the second DVS 120 may be assigned to a specific orientation of the scanning mirror. Thus, the scanning mirror can be used to trigger events at the first DVS 110 and/or the second DVS 120. Further, the scanning mirror can be used to direct the light pattern towards the first/second region-of-interest in the scene 150, e.g., towards the first field of view 130. This way, a scanning of the first region-of-interest can be improved.
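How an event timestamp might be assigned to a scanning-mirror orientation can be illustrated by the following sketch, which is not part of the original disclosure and assumes a hypothetical triangular sweep of the mirror at a known frequency and amplitude.

```python
def mirror_angle_at(timestamp_us: float, sweep_hz: float, amplitude_deg: float) -> float:
    """Mirror orientation at an event timestamp, assuming the scanning mirror
    follows a triangular sweep between -amplitude_deg and +amplitude_deg."""
    period_us = 1e6 / sweep_hz
    phase = (timestamp_us % period_us) / period_us   # position within one period, 0..1
    triangle = 4.0 * abs(phase - 0.5) - 1.0          # triangular wave in -1..1
    return amplitude_deg * triangle

# An event time-stamped at t = 1000 us during a 200 Hz sweep of +/-15 degrees
# is assigned the corresponding laser-plane orientation for triangulation.
print(mirror_angle_at(1000.0, 200.0, 15.0))  # 3.0 degrees
```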
In an example, the sensing arrangement 100 may further comprise a further scanning mirror that can be used to change the first field of view 130 captured by the first dynamic vision sensor 110. This way, the first field of view 130 captured by the first DVS 110 can be controlled/adjusted by the further scanning mirror, e.g., by an orientation of the further scanning mirror. For example, an event determined by the first DVS 110 may be assigned to a specific orientation of the further scanning mirror and thus a specific first field of view 130. Thus, the further scanning mirror can be used to trigger events at the first DVS 110 for the first field of view 130.

In an example, the sensing arrangement 100 may further comprise processing circuitry communicatively coupled to the first dynamic vision sensor 110, the second dynamic vision sensor 120 and at least one of the scanning mirror or the further scanning mirror. The processing circuitry may be configured to control an orientation of at least one of the scanning mirror or the further scanning mirror and to receive information from the first dynamic vision sensor 110 and the second dynamic vision sensor 120. This way, the processing circuitry can determine a correlation between an orientation of the scanning mirror and the first DVS 110 and/or the second DVS 120.
In an example, the processing circuitry may be further configured to read events of at least one of the first dynamic vision sensor 110 or the second dynamic vision sensor 120 for time synchronization between an orientation of at least one of the scanning mirror or the further scanning mirror and at least one of the first dynamic vision sensor 110 or the second dynamic vision sensor 120. This way, the processing circuitry can perform a desired operation/calculation. For example, for each event triggered at the first DVS 110, which correlates with an assigned orientation or movement of the scanning mirror, a depth map for the first region-of-interest (or the first field of view) can be calculated by the processing circuitry.

In an example, the processing circuitry may be further configured to determine a (first) region-of-interest in the scene 150 for the first field of view 130. For example, the first region-of-interest may be determined based on depth information determined by events detected by the second DVS 120 for the second field of view 140. Thus, the second field of view 140 can be used to determine the first region-of-interest for the first DVS 110. For example, the first region-of-interest may have substantially the same size as or may be smaller than the size of the first field of view 130. Thus, the whole first region-of-interest can be scanned with the first field of view 130. Alternatively, the size of the first region-of-interest can be larger than the size of the first field of view 130. In this case, the first region-of-interest can be rasterized with the first field of view 130 to scan the whole first region-of-interest.
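As a sketch of how the processing circuitry might derive the first region-of-interest from coarse depth information and rasterize it when it exceeds the first field of view, the following is an illustration with hypothetical helper functions and data, not the claimed implementation.

```python
import numpy as np

def find_region_of_interest(depth_map: np.ndarray, threshold: float):
    """Bounding box (row0, col0, row1, col1) of pixels whose local depth
    variation exceeds `threshold` -- a simple stand-in for detecting an
    element-of-interest in the coarse scan of the second DVS."""
    grad_r, grad_c = np.gradient(depth_map)
    rows, cols = np.nonzero(np.hypot(grad_r, grad_c) > threshold)
    if rows.size == 0:
        return None
    return rows.min(), cols.min(), rows.max() + 1, cols.max() + 1

def rasterize(roi, fov_rows: int, fov_cols: int):
    """Tile a region-of-interest larger than the first field of view into
    field-of-view-sized windows to be scanned one after another."""
    r0, c0, r1, c1 = roi
    for r in range(r0, r1, fov_rows):
        for c in range(c0, c1, fov_cols):
            yield r, c, min(r + fov_rows, r1), min(c + fov_cols, c1)

coarse_depth = np.ones((120, 160))
coarse_depth[40:80, 60:140] = 0.5            # a nearby object in the coarse scan
roi = find_region_of_interest(coarse_depth, 0.1)
for window in rasterize(roi, 32, 32):
    print(window)                             # windows for the narrow field of view
```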
In an example, the processing circuitry may be further configured to adapt the first field of view 130 to the (first) region-of-interest. For example, the first region-of-interest for the first DVS 110 may change over time, e.g., during a depth sensing of the scene 150. Consequently, the first field of view 130 may need to be adapted. This way, the first field of view 130 can be adjusted to the scene 150, e.g., to the first region-of-interest of the scene 150 which needs to be scanned with a higher spatial resolution.
In an example, if the (first) region-of-interest is larger than the first field of view 130, the processing circuitry may be further configured to scan the (first) region-of-interest by adapting the first field of view 130 based on the orientation of the further scanning mirror. This way, the first field of view 130 can be used to rasterize the first region-of-interest.
In an example, the processing circuitry may be further configured to change the orientation of the further scanning mirror based on the orientation of the scanning mirror. This way, the first field of view 130 can be adjusted to an illuminance of the scene 150, an event triggered by the scanning mirror, etc.
More details and aspects are mentioned in connection with the examples described below. The example shown in Fig. 1 may comprise one or more optional additional features corresponding to one or more aspects mentioned in connection with the proposed concept or one or more examples described below (e.g., Fig. 2 - 3).
Fig. 2 shows another example of a sensing arrangement 200. The sensing arrangement may comprise a laser 255 which emits a laser beam 258. The laser beam 258 may be fanned out by an optical diffraction grating 260, producing one or several laser lines that constitute a laser plane 270. The laser plane 270 may be specularly reflected at a scanning mirror 266, producing a laser plane 272. The laser plane 272 may finally illuminate an object-of-interest, a region-of-interest or the whole scene O. The laser plane 272 may intersect the scene O and may be diffusely reflected. A portion of the diffuse reflection 276 may travel towards a wide field of view lens 222 and may be captured by a second imaging sensor 120, e.g., a second DVS 120. Another portion of the diffuse reflection 274 may travel towards a further scanning mirror 268 and may be reflected towards a narrow field of view lens 212 with high magnification to be finally captured by a first imaging sensor 110, e.g., a first DVS 110.
Either, or both, of the imaging sensors 110 and 120 may be a DVS 110, 120. The pixels of a DVS 110, 120 may respond asynchronously to brightness changes in the image plane. A dynamic vision sensor can measure such brightness changes with a high temporal resolution, e.g., 1 µs. Due to the asynchronous response, bandwidth may not be wasted on inactive pixels during the triangulation process and scanning bandwidth of more than 500 Hz for the entire scene O can be achieved.
For example, the first DVS 110 may be used to observe reflected light from the scene for the first field of view. In particular, the depth of the scene may already need to be known such that a viewing direction of the first DVS 110, e.g., its field of view, can be set correctly. Therefore, the scene can also be observed by the second DVS 120 with a larger field of view, and the information provided by the second DVS 120 can be used to set the first field of view of the first DVS 110.
The scanning mirror 266 and the further scanning mirror 268 may be two-dimensional scanning mirrors for rotating around both a horizontal axis and a vertical axis. The further scanning mirror 268 may also be a one-dimensional scanning mirror rotating only around the vertical axis. The scanning mirror 266 and the further scanning mirror 268 may be actuated using galvanometer or MEMS actuators which achieve driving frequencies in the range of 100 Hz - 10 kHz depending on the mirror size.
A control device 290, e.g., the processing circuitry as described above, may be connected to the imaging sensors 110, 120 and the scanning mirrors 266, 268 for time synchronization and control of an orientation of the scanning mirror 266 and/or the further scanning mirror 268.
Object, motion or other feature detection algorithms may run on the control device 290 to help identify regions-of-interest in the image and/or point-cloud captured by imaging sensor 120. These regions-of-interest may be used to control the further scanning mirror 268 in order to scan these regions-of-interest at high resolution, e.g., the first region-of-interest. These algorithms may be learned using machine learning techniques such as reinforcement learning, where a reward is given if the resulting high-resolution scan does contain an interesting feature, yielding an active sensing device.

Depending on the applied scanning pattern of the scanning mirror 266, the further scanning mirror 268 may change its gaze direction multiple times during a complete scan of the scene O to obtain a high-resolution scan of multiple regions-of-interest and/or to rasterize a (first) region-of-interest.

More details and aspects are mentioned in connection with the examples described above and/or below. The example shown in Fig. 2 may comprise one or more optional additional features corresponding to one or more aspects mentioned in connection with the proposed concept or one or more examples described above (e.g., Fig. 1) and/or below (e.g., Fig. 3).
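Returning to the feature detection algorithms mentioned above for identifying regions-of-interest, a very simple stand-in, purely illustrative and not taken from the original text, could rank grid cells of the wide-field-of-view event stream by event density and hand the densest cell to the further scanning mirror; the grid geometry and the (x, y, timestamp, polarity) event format are assumptions.

```python
import numpy as np

def detect_roi_from_events(events, grid_shape=(12, 16), cell_px=40, top_k=1):
    """Return the grid cells with the highest event density as candidate
    regions-of-interest; `events` are (x, y, timestamp, polarity) tuples."""
    counts = np.zeros(grid_shape, dtype=int)
    for x, y, _t, _polarity in events:
        counts[min(y // cell_px, grid_shape[0] - 1),
               min(x // cell_px, grid_shape[1] - 1)] += 1
    ranked = np.argsort(counts, axis=None)[::-1][:top_k]
    return [np.unravel_index(i, grid_shape) for i in ranked]

# Events clustered around pixel (500, 200), e.g. caused by a moving object,
# make the corresponding cell the next region-of-interest to scan closely.
rng = np.random.default_rng(0)
events = [(500 + rng.integers(-5, 5), 200 + rng.integers(-5, 5), t, 1)
          for t in range(300)]
print(detect_roi_from_events(events))  # [(5, 12)]
```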
Fig. 3 shows a block diagram of an example of a method 300 for sensing. The method 300 comprises detecting 310 reflected light from a scene with a first dynamic vision sensor and detecting 320 reflected light from the scene with a second dynamic vision sensor. A first field of view of the first dynamic vision sensor of the scene is smaller than a second field of view of the second dynamic vision sensor of the scene. For performing the method 300, a sensing arrangement as described above, e.g., with reference to Figs. 1-2, may be used.
More details and aspects are mentioned in connection with the examples described above. The example shown in Fig. 3 may comprise one or more optional additional features corresponding to one or more aspects mentioned in connection with the proposed concept or one or more examples described above (e.g., Fig. 1 - 2).
The following examples pertain to further embodiments:
(1) A sensing arrangement, comprising a first dynamic vision sensor and a second dynamic vision sensor. The first dynamic vision sensor is calibrated to observe a first field of view of a scene and the second dynamic vision sensor is calibrated to observe a second field of view of the scene, which is larger than the first field of view of the scene.
(2) The sensing arrangement of (1), wherein an accuracy of the first dynamic vision sensor for the first field of view is larger than an accuracy of the second dynamic vision sensor for the second field of view.
(3) The sensing arrangement of any one of (1) to (2) wherein a resolution of the first dynamic vision sensor for the first field of view is larger than a resolution of the second dynamic vision sensor for the second field of view.

(4) The sensing arrangement of any one of (1) to (3), wherein at least one of the first dynamic vision sensor or the second dynamic vision sensor is configured to determine information, which could be used to determine a depth information of the scene.
(5) The sensing arrangement of any one of (1) to (4), further comprising a first lens corresponding to the first dynamic vision sensor and a second lens corresponding to the second dynamic vision sensor, wherein the second lens has a wider field of view than the first lens.
(6) The sensing arrangement of any one of (1) to (5) further comprising a light source to emit light onto the scene.
(7) The sensing arrangement of any one of (1) to (6) further comprising an optical diffraction grating to generate a light pattern that is cast onto the scene and reflected by the scene towards the first dynamic vision sensor and the second dynamic vision sensor.
(8) The sensing arrangement of (7) further comprising a scanning mirror that can be used to change an illuminance of the light pattern onto the scene.
(9) The sensing arrangement of any one of (7) to (8) further comprising a further scanning mirror that can be used to change the first field of view captured by the first dynamic vision sensor.
(10) The sensing arrangement of any one of (8) to (9) further comprising processing circuitry communicatively coupled to the first dynamic vision sensor, the second dynamic vision sensor and at least one of the scanning mirror or the further scanning mirror and configured to control an orientation of at least one of the scanning mirror or the further scanning mirror and receive information from the first dynamic vision sensor and the second dynamic vision sensor.
(11) The sensing arrangement of (10) wherein the processing circuitry is further configured to read events of at least one of the first dynamic vision sensor or the second dynamic vision sensor for time synchronization between an orientation of at least one of the scanning mirror or the further scanning mirror and at least one of the first dynamic vision sensor or the second dynamic vision sensor.

(12) The sensing arrangement of any one of (10) to (11) wherein the processing circuitry is further configured to determine a region of interest in the scene for the first field of view.
(13) The sensing arrangement of (12) wherein the processing circuitry is further configured to adapt the first field of view to the region of interest.
(14) The sensing arrangement of any one of (12) to (13), wherein, if the region of interest is larger than the first field of view, the processing circuitry is further configured to scan the region of interest by adapting the first field of view based on the orientation of the further scanning mirror.
(15) The sensing arrangement of any one of (12) to (14) wherein the processing circuitry is further configured to change the orientation of the further scanning mirror based on the orientation of the scanning mirror.
(16) A method, comprising detecting reflected light from a scene with a first dynamic vision sensor and detecting reflected light from the scene with a second dynamic vision sensor. A first field of view of the first dynamic vision sensor of the scene is smaller than a second field of view of the second dynamic vision sensor of the scene.
(17) A computer program having a program code for performing the method of (16), when the computer program is executed on a computer, a processor, or a programmable hardware component.
(18) A non-transitory machine-readable medium having stored thereon a program having a program code for performing the method of (16), when the program is executed on a processor or a programmable hardware component.
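To illustrate examples (4), (10) and (11) above, the following is a minimal sketch of how events read from a dynamic vision sensor may be time-synchronized with the scanning-mirror orientation and turned into depth information by ray-plane triangulation. It is given in Python for illustration only and is not part of the claimed subject matter; the function and parameter names (pixel_to_ray, mirror_angle_at, plane_from_angle, K) are hypothetical, and a calibrated pinhole model as well as a laser plane swept by the scanning mirror are assumed.

import numpy as np

def pixel_to_ray(u, v, K):
    # Back-project pixel (u, v) into a unit-norm viewing ray in the sensor frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def triangulate_event(event, mirror_angle_at, plane_from_angle, K):
    # event:            (u, v, t) pixel coordinates and timestamp of a DVS event
    # mirror_angle_at:  maps a timestamp to the scanning-mirror orientation
    #                   (this lookup is the time-synchronization step of example (11))
    # plane_from_angle: calibration model mapping a mirror angle to the plane
    #                   n . X = d currently illuminated by the light pattern
    # K:                3x3 intrinsic matrix of the dynamic vision sensor
    u, v, t = event
    n, d = plane_from_angle(mirror_angle_at(t))
    r = pixel_to_ray(u, v, K)
    s = d / np.dot(n, r)   # ray-plane intersection: distance along the viewing ray
    return s * r           # 3D point in the sensor frame (depth information, example (4))

# Hypothetical usage: accumulate a point cloud from an event stream.
# points = [triangulate_event(e, mirror_angle_at, plane_from_angle, K) for e in events]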
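Likewise, to illustrate examples (12) to (15), the following sketch shows one possible way a region of interest could be derived from event activity of the second (wide field of view) dynamic vision sensor and converted into orientations for the further scanning mirror, with several orientations being generated when the region is larger than the first field of view. All names (region_of_interest, pixel_to_angles, mirror_targets, K_wide, fov_narrow) are hypothetical, a pinhole model is assumed, and the mapping from angles to actual mirror commands is deliberately simplified.

import numpy as np

def region_of_interest(events, min_events=50):
    # Bounding box (u_min, v_min, u_max, v_max) of recent event activity on the
    # wide-field-of-view (second) sensor; None if the activity is too sparse.
    if len(events) < min_events:
        return None
    uv = np.array([(u, v) for u, v, _t in events])
    return (*uv.min(axis=0), *uv.max(axis=0))

def pixel_to_angles(u, v, K):
    # Approximate azimuth/elevation of a pixel under the assumed pinhole model.
    x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return np.arctan(x), np.arctan(y)

def mirror_targets(roi, K_wide, fov_narrow):
    # Orientations (azimuth, elevation) for the further scanning mirror so that the
    # narrow first field of view covers the region of interest; several orientations
    # are returned when the region is larger than the first field of view (example (14)).
    u0, v0, u1, v1 = roi
    az0, el0 = pixel_to_angles(u0, v0, K_wide)
    az1, el1 = pixel_to_angles(u1, v1, K_wide)
    n_az = max(1, int(np.ceil((az1 - az0) / fov_narrow)))
    n_el = max(1, int(np.ceil((el1 - el0) / fov_narrow)))
    az = np.linspace(az0, az1, n_az) if n_az > 1 else [(az0 + az1) / 2.0]
    el = np.linspace(el0, el1, n_el) if n_el > 1 else [(el0 + el1) / 2.0]
    return [(a, e) for a in az for e in el]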
Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component. Thus, steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components. Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions. Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example. Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor units (GPUs), application-specific integrated circuits (ASICs), integrated circuits (ICs) or system-on-a-chip (SoC) systems programmed to execute the steps of the methods described above.
It is further understood that the disclosure of several steps, processes, operations or functions disclosed in the description or claims shall not be construed to imply that these operations are necessarily dependent on the order described, unless explicitly stated in the individual case or necessary for technical reasons. Therefore, the previous description does not limit the execution of several steps or functions to a certain order. Furthermore, in further examples, a single step, function, process or operation may include and/or be broken up into several sub-steps, -functions, -processes or -operations.
If some aspects have been described in relation to a device or system, these aspects should also be understood as a description of the corresponding method. For example, a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method. Accordingly, aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
The processing circuitry described above may be a computer, processor, control unit, (field) programmable logic array ((F)PLA), (field) programmable gate array ((F)PGA), graphics processor unit (GPU), application-specific integrated circuit (ASIC), integrated circuit (IC) or system-on-a-chip (SoC) system.
The following claims are hereby incorporated in the detailed description, wherein each claim may stand on its own as a separate example. It should also be noted that although in the claims a dependent claim refers to a particular combination with one or more other claims, other examples may also include a combination of the dependent claim with the subject matter of any other dependent or independent claim. Such combinations are hereby explicitly proposed, unless it is stated in the individual case that a particular combination is not intended. Furthermore, features of a claim should also be included for any other independent claim, even if that claim is not directly defined as dependent on that other independent claim.

Claims

What is claimed is:
1. A sensing arrangement, comprising: a first dynamic vision sensor; and a second dynamic vision sensor; wherein the first dynamic vision sensor is calibrated to observe a first field of view of a scene and the second dynamic vision sensor is calibrated to observe a second field of view of the scene, which is larger than the first field of view of the scene.
2. The sensing arrangement according to claim 1, wherein an accuracy of the first dynamic vision sensor for the first field of view is higher than an accuracy of the second dynamic vision sensor for the second field of view.

3. The sensing arrangement according to claim 1, wherein a resolution of the first dynamic vision sensor for the first field of view is higher than a resolution of the second dynamic vision sensor for the second field of view.
4. The sensing arrangement according to claim 1, wherein at least one of the first dynamic vision sensor or the second dynamic vision sensor is configured to determine information which can be used to determine depth information of the scene.
5. The sensing arrangement according to claim 1, further comprising: a first lens corresponding to the first dynamic vision sensor; and a second lens corresponding to the second dynamic vision sensor, wherein the second lens has a wider field of view than the first lens.

6. The sensing arrangement according to claim 1, further comprising a light source to emit light onto the scene.

7. The sensing arrangement according to claim 1, further comprising an optical diffraction grating to generate a light pattern that is cast onto the scene and reflected by the scene towards the first dynamic vision sensor and the second dynamic vision sensor.

8. The sensing arrangement according to claim 7, further comprising a scanning mirror that can be used to change an illuminance of the light pattern onto the scene.

9. The sensing arrangement according to claim 7, further comprising a further scanning mirror that can be used to change the first field of view captured by the first dynamic vision sensor.

10. The sensing arrangement according to claim 8, further comprising processing circuitry communicatively coupled to the first dynamic vision sensor, the second dynamic vision sensor and at least one of the scanning mirror or the further scanning mirror and configured to: control an orientation of at least one of the scanning mirror or the further scanning mirror; and receive information from the first dynamic vision sensor and the second dynamic vision sensor.
11. The sensing arrangement according to claim 10, wherein the processing circuitry is further configured to read events of at least one of the first dynamic vision sensor or the second dynamic vision sensor for time synchronization between an orientation of at least one of the scanning mirror or the further scanning mirror and at least one of the first dynamic vision sensor or the second dynamic vision sensor.
12. The sensing arrangement according to claim 10, wherein the processing circuitry is further configured to determine a region of interest in the scene for the first field of view.
13. The sensing arrangement according to claim 12, wherein the processing circuitry is further configured to adapt the first field of view to the region of interest.
14. The sensing arrangement according to claim 12, wherein, if the region of interest is larger than the first field of view, the processing circuitry is further configured to scan the region of interest by adapting the first field of view based on the orientation of the further scanning mirror.
15. The sensing arrangement according to claim 12, wherein the processing circuitry is further configured to change the orientation of the further scanning mirror based on the orientation of the scanning mirror.
16. A method, comprising: detecting reflected light from a scene with a first dynamic vision sensor; and detecting reflected light from the scene with a second dynamic vision sensor, wherein a first field of view of the first dynamic vision sensor of the scene is larger than a second field of view of the second dynamic vision sensor of the scene.

17. A computer program having a program code for performing the method according to claim 16, when the computer program is executed on a computer, a processor, or a programmable hardware component.
PCT/EP2023/056943 2022-03-29 2023-03-17 Sensing arrangement, method and computer program WO2023186582A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22164903 2022-03-29
EP22164903.1 2022-03-29

Publications (1)

Publication Number Publication Date
WO2023186582A1 true WO2023186582A1 (en) 2023-10-05

Family

ID=81324896

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/056943 WO2023186582A1 (en) 2022-03-29 2023-03-17 Sensing arrangement, method and computer program

Country Status (1)

Country Link
WO (1) WO2023186582A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190045173A1 (en) * 2017-12-19 2019-02-07 Intel Corporation Dynamic vision sensor and projector for depth imaging
WO2020163663A1 (en) * 2019-02-07 2020-08-13 Magic Leap, Inc. Lightweight and low power cross reality device with high temporal resolution
WO2022056145A1 (en) * 2020-09-09 2022-03-17 Velodyne Lidar Usa, Inc. Apparatus and methods for long range, high resolution lidar

Similar Documents

Publication Publication Date Title
CN109458928B (en) Laser line scanning 3D detection method and system based on scanning galvanometer and event camera
US11550056B2 (en) Multiple pixel scanning lidar
US6600168B1 (en) High speed laser three-dimensional imager
US6750974B2 (en) Method and system for 3D imaging of target regions
US6366357B1 (en) Method and system for high speed measuring of microscopic targets
US6098031A (en) Versatile method and system for high speed, 3D imaging of microscopic targets
US9797708B2 (en) Apparatus and method for profiling a depth of a surface of a target object
WO2019076072A1 (en) Optical distance measurement method and apparatus
CN110325879A (en) System and method for compress three-dimensional depth sense
US20040021877A1 (en) Method and system for determining dimensions of optically recognizable features
EP3465249A1 (en) Multiple pixel scanning lidar
WO2023186582A1 (en) Sensing arrangement, method and computer program
US20220196386A1 (en) Three-dimensional scanner with event camera
KR20190129693A (en) High-sensitivity low-power camera system for 3d structured light application
WO2022195954A1 (en) Sensing system
US11736816B2 (en) Image sensor circuitry for reducing effects of laser speckles
WO2023186581A1 (en) Arrangement for depth sensing, device, methods and computer program
JP2626611B2 (en) Object shape measurement method
WO2023187951A1 (en) Computer system, method, and program
JP2731681B2 (en) 3D measurement system
JP4032556B2 (en) 3D input device
WO2022113877A1 (en) Three-dimensional-measurement device and three-dimensional-measurement method
WO2023057343A1 (en) Apparatuses and methods for event guided depth estimation
JPH01250706A (en) Method and apparatus for measuring shape of three-dimensional curved surface
JP2023127498A (en) Information processing device and distance estimation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23712246

Country of ref document: EP

Kind code of ref document: A1