EP3782117A1 - Method, device and computer-readable storage medium comprising instructions for processing sensor data - Google Patents

Method, device and computer-readable storage medium comprising instructions for processing sensor data

Info

Publication number
EP3782117A1
Authority
EP
European Patent Office
Prior art keywords
measurement
sensor
camera
data
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19715031.1A
Other languages
German (de)
English (en)
Inventor
Simon Steinmeyer
Marek Musial
Carsten Deeg
Thorsten Bagdonat
Thorsten Graf
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volkswagen AG
Original Assignee
Volkswagen AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volkswagen AG filed Critical Volkswagen AG
Publication of EP3782117A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/269 - Analysis of motion using gradient-based methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 - Combination of radar systems with cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20072 - Graph-based image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to a method, an apparatus and a computer-readable storage medium having instructions for processing sensor data.
  • The invention further relates to a motor vehicle in which a method according to the invention or a device according to the invention is used.
  • DE 10 2011 013 776 A1 describes a method for detecting and/or tracking objects in a vehicle environment.
  • The objects are detected by means of an optical flow on the basis of a determination of corresponding pixels in at least two images.
  • A distance of the objects is determined from the optical flow on the basis of the determination of the corresponding pixels in the at least two images.
  • DE 10 2017 100 199 A1 describes a method for detecting pedestrians.
  • In a first step, an image of an area near a vehicle is received.
  • The image is processed to determine locations within the image where pedestrians are likely to be present.
  • The identified locations of the image are then processed using a second neural network to determine whether a pedestrian is present.
  • A notification is then sent to a driver assistance system or an automated driving system.
  • The neural networks may include a deep convolutional network.
  • A camera sensor and a 3D sensor which cover the relevant area are assumed to be present.
  • Possible 3D sensors are laser scanners or a radar sensor with elevation measurement.
  • Object tracking sets up object hypotheses, which are confirmed and updated by new sensor measurements.
  • A Kalman filter, for example, can be used for this purpose.
  • For example, an on-board computer determines that data of a new object corresponding to the tracked object is available.
  • The on-board computer registers the data of the new object and estimates an expected location and appearance for the object according to a prediction algorithm in order to generate a predicted track for the object.
  • The on-board computer then analyzes the movement of the object, which includes comparing the predicted track with an existing track that is associated with the object and stored in a database.
  • Classical object tracking involves a number of challenges, especially in the association step, where ambiguities have to be avoided.
  • The dynamic state cannot always be estimated well: depending on the measurements and the state of a track, a Cartesian velocity vector is often not known. Acceleration can only be estimated through prolonged observation. This can lead to large errors in the prediction step.
  • Moreover, an object can behave contrary to the dynamic model, e.g. by braking abruptly. Such deviating behavior further degrades the prediction.
  • In addition, different sensors perceive different features of an object. A laser scanner, for example, perceives particularly well-reflecting surfaces, whereas a radar sensor perceives points with a large radar cross section particularly well, such as taillights, kinked metal sheets, etc.
  • The sensors therefore measure different points of an object, which may be far apart from each other but are nevertheless assigned to the same object.
  • Furthermore, some sensors, e.g. radar sensors, have a comparatively low selectivity, so the ambiguity problem is exacerbated here.
  • An erroneous handling of ambiguities can lead to misassociations, in which object tracks are updated with incorrect measurement data. This can have unpleasant consequences.
  • For example, a roadside structure may erroneously be assigned a lateral velocity. The roadside structure then appears dynamic and migrates into the driving corridor. This may cause emergency braking due to a "ghost object".
  • Conversely, a bollard scanned by a laser scanner may be assigned to a nearby dynamic object, e.g. a vehicle that is just passing the bollard. This prevents the bollard from being recognized as such in time, which can lead to a collision with the roadside structure.
  • Against this background, a method for processing sensor data comprises the steps of: acquiring camera images with a camera, acquiring 3D measurement points with at least one 3D sensor, and fusing the camera images with the 3D measurement points into data of a virtual sensor.
  • Accordingly, a computer-readable storage medium contains instructions that, when executed by a computer, cause the computer to carry out these steps for processing sensor data.
  • Analogously, an apparatus for processing sensor data comprises a data fusion unit for fusing the camera images with the 3D measurement points into data of a virtual sensor.
  • According to the invention, the concept of a virtual sensor is introduced. The virtual sensor fuses the measurement data of the camera and of the 3D sensors at an early, measurement-point level and thus abstracts from the individual sensors. The resulting data of the virtual sensor can then be used in the subsequent object tracking.
  • The solution according to the invention prevents object hypotheses of different sensors with systematic errors from being fused over time in a common model, whereby association errors easily occur. This enables a robust perception of the environment.
  • In one embodiment, the fusion of the camera images with the 3D measurement points into data of a virtual sensor comprises: calculating an optical flow from at least a first camera image and a second camera image, and determining, on the basis of the optical flow, pixels in at least one of the camera images that belong to one of the 3D measurement points at the time of its measurement.
  • In this way, the 3D measurement points are synchronized with the camera images. This is particularly advantageous since the optical flow automatically accounts for both the motion of other objects and the ego motion.
  • In a specific embodiment, the determination of the pixels that belong to one of the 3D measurement points at the time of the measurement comprises: converting a camera image acquired in temporal proximity to a measurement time of the 3D sensor on the basis of the optical flow, and projecting the 3D measurement points into the converted camera image.
  • In this way, the entire camera image can be converted to the measurement time of the 3D sensor.
  • The 3D measurement points of the depth-measuring sensor can then simply be projected into the converted camera image.
  • For this purpose, the pixels can be treated as infinitely long rays that are intersected with the 3D measurement points.
  • Optionally, a time to collision is determined for the pixels of the camera images from the optical flow. From the time to collision, the optical flow and a distance measurement for a 3D measurement point, a Cartesian velocity vector can then be calculated for this 3D measurement point. This can be used, for example, to distinguish overlapping objects of the same class. To make such a distinction, previous sensors have to track objects over time by means of dynamic and association models, which is relatively error-prone.
  • Alternatively, a time to collision for a 3D measurement is determined from a radial relative velocity and a distance measurement. From the time to collision and the optical flow, a Cartesian velocity vector can then be calculated for this 3D measurement point.
  • This approach has the advantage that the measurement of the time to collision is particularly accurate when the radial relative velocity comes, for example, from a radar sensor.
  • Object movements both horizontally and vertically (optical flow) can be observed quite accurately in the image.
  • The resulting velocity vector is therefore generally more accurate than when the time to collision is estimated from the image alone.
  • Preferably, the 3D measurement points are extended by attributes from at least one of the camera images.
  • The attributes can be, for example, the (averaged) optical flow or the position in image space of the associated pixel(s) of the camera image.
  • In addition, the velocity vector, a Doppler velocity, the reflectivity, the radar cross section or a confidence can be added.
  • The additional attributes allow a more robust object tracking or even a better segmentation to be realized.
  • Preferably, a camera image captured close to a measurement time of the 3D measurement is segmented.
  • The measurement points of the 3D sensor are precisely projected into the image by means of the optical flow, and their measurement attributes are stored in additional dimensions. This allows a cross-sensor segmentation.
  • The segmentation is preferably carried out by a neural network.
  • The segmentation avoids association errors, and ambiguities between two classes can be resolved.
  • Class information or identifiers resulting from the segmentation are preferably also added to the 3D measurement points as attributes.
  • Preferably, an algorithm for object tracking is applied to the data of the virtual sensor.
  • This algorithm preferably performs an accumulating sensor data fusion. An accumulating sensor data fusion enables filtering of the data over time and therefore reliable object tracking.
  • Fig. 1 schematically shows the course of a classical object tracking;
  • Fig. 2 schematically shows a method for processing sensor data;
  • Fig. 3 schematically shows the fusion of camera images with 3D measurement points;
  • Fig. 4 shows a first embodiment of a device for processing sensor data;
  • Fig. 5 shows a second embodiment of a device for processing sensor data;
  • Fig. 6 schematically illustrates a motor vehicle in which a solution according to the invention is realized;
  • Fig. 7 schematically shows the concept of a virtual sensor; and
  • Fig. 8 schematically shows the concept of a virtual sensor with a classifier.
  • Fig. 1 schematically shows the course of a classical object tracking.
  • Input variables for the object tracking are sensor data E and track states transformed into the measurement space.
  • In a first step 10, an attempt is made to associate a measurement with a track. It is then checked 11 whether the association was successful. If this is the case, the corresponding track is updated 12. If the association fails, however, a new track is initialized 13. This procedure is repeated for all measurements.
  • The output of the object tracking is an object list A.
  • In addition, the associated tracks are predicted 16 for the next measurement time, and the resulting track states are transformed into the measurement space 17 again for the next pass of the object tracking. A simple sketch of this loop is given below.
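To make the loop concrete, the following is a minimal Python sketch of one tracking pass. The hypothetical `Track` class uses a trivial update and prediction in place of a real Kalman filter and dynamic model; the gating threshold and the nearest-neighbour association are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

class Track:
    """Minimal track; a real system would use e.g. a Kalman filter with a dynamic model."""
    def __init__(self, z):
        self.state = np.asarray(z, dtype=float)

    def to_measurement_space(self):          # 17: track state expressed in the measurement space
        return self.state                    # identity here, for simplicity

    def update(self, z, alpha=0.5):          # 12: crude exponential update instead of a Kalman gain
        self.state = (1.0 - alpha) * self.state + alpha * np.asarray(z, dtype=float)

    def predict(self):                       # 16: trivial prediction (static dynamic model)
        pass

def tracking_pass(tracks, measurements, gate=2.0):
    """One pass of the classical object tracking of Fig. 1."""
    for z in measurements:                                   # sensor data E
        z = np.asarray(z, dtype=float)
        # 10: try to associate the measurement with an existing track
        dists = [np.linalg.norm(t.to_measurement_space() - z) for t in tracks]
        if dists and min(dists) < gate:                      # 11: association successful?
            tracks[int(np.argmin(dists))].update(z)          # 12: update the associated track
        else:
            tracks.append(Track(z))                          # 13: initialize a new track
    object_list = [t.state.copy() for t in tracks]           # A: output object list
    for t in tracks:
        t.predict()                                          # 16: predict for the next measurement time
    return object_list

tracks = []
print(tracking_pass(tracks, [[1.0, 2.0], [10.0, -3.0]]))     # two new tracks are initialized
```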
  • Fig. 2 schematically shows a method for processing sensor data.
  • In a first step, camera images are acquired by a camera 20.
  • 3D measurement points are also acquired by at least one 3D sensor 21.
  • Optionally, at least one of the camera images can be segmented 22, e.g. by a neural network.
  • The camera images are then fused by a data fusion unit with the 3D measurement points into data of a virtual sensor 23.
  • For this purpose, an optical flow is determined, which is used for the synchronization of the camera images and the 3D measurement points.
  • In addition, the 3D measurement points can be extended by attributes from at least one of the camera images.
  • An object tracking algorithm may be applied to the data of the virtual sensor.
  • The algorithm may, for example, perform an accumulating sensor data fusion.
  • Alternatively, the data of the virtual sensor can be segmented. The segmentation can in turn be carried out by a neural network. A sketch of this processing chain is given below.
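The overall chain can be written down as a thin pipeline. The helper functions passed in (`compute_optical_flow`, `segment_image`, `fuse_to_virtual_sensor`, `track_objects`) are hypothetical placeholders for the steps described in the text; only the call order reflects Fig. 2.

```python
def process_sensor_data(camera_images, points_3d, t_meas,
                        compute_optical_flow, fuse_to_virtual_sensor,
                        segment_image=None, track_objects=None):
    """Sketch of the processing chain of Fig. 2."""
    # 20/21: camera images and 3D measurement points are assumed to have been acquired
    flow = compute_optical_flow(camera_images[0], camera_images[-1])              # used for synchronization
    segmentation = segment_image(camera_images[-1]) if segment_image else None    # 22: optional segmentation
    virtual_sensor_data = fuse_to_virtual_sensor(camera_images, flow,
                                                 points_3d, t_meas, segmentation) # 23: data fusion
    if track_objects is not None:
        return track_objects(virtual_sensor_data)    # e.g. an accumulating sensor data fusion
    return virtual_sensor_data                       # 24: output for further processing
```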
  • Fig. 3 schematically shows the fusion of camera images with 3D measurement points into data of a virtual sensor.
  • In a first step, an optical flow is calculated 30 from at least a first camera image and a second camera image.
  • From the optical flow, a time to collision can optionally be determined 31 for the pixels of the camera images. From the time to collision, the optical flow and a distance measurement for a 3D measurement point, a Cartesian velocity vector for this 3D measurement point can then be calculated.
  • Alternatively, a time to collision can be determined from a 3D measurement. From the time to collision and the optical flow, a Cartesian velocity vector for this 3D measurement point can then be calculated.
  • On the basis of the optical flow, pixels in at least one of the camera images that belong to one of the 3D measurement points are finally determined 32. For this purpose, a camera image acquired in temporal proximity to a measurement time of the 3D sensor can first be converted on the basis of the optical flow. The 3D measurement points can then be projected into the converted camera image, as sketched below.
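A minimal sketch of step 32 under a pinhole assumption: each 3D point (already transformed into camera coordinates) is projected into the image that has been converted to the measurement time, and the optical flow is used to step back to the pixel of the original image. The camera constant `K`, the principal point and the dense per-pixel flow are illustrative assumptions; extrinsics and lens distortion are ignored.

```python
import numpy as np

def associate_points_with_pixels(image, flow, dt, points_cam, K, cx, cy):
    """Associate 3D measurement points with camera pixels (cf. Fig. 3, step 32).

    image:      (H, W, 3) camera image taken dt seconds before the 3D measurement
    flow:       (H, W, 2) dense optical flow of that image in pixels per second
    points_cam: (N, 3) 3D measurement points in the camera coordinate system
    K:          camera constant (focal length in pixels); cx, cy: principal point
    """
    h, w = image.shape[:2]
    extended = []
    for x, y, z in points_cam:
        if z <= 0.0:
            continue                                   # point behind the camera
        u = int(round(K * x / z + cx))                 # pinhole projection at the measurement time
        v = int(round(K * y / z + cy))
        if not (0 <= u < w and 0 <= v < h):
            continue
        ox, oy = flow[v, u]                            # flow sampled at the projected position (approximation)
        u0 = int(round(u - ox * dt))                   # undo the shift o*dt back to the image time
        v0 = int(round(v - oy * dt))
        if not (0 <= u0 < w and 0 <= v0 < h):
            continue
        extended.append({
            "point": (x, y, z),                        # the 3D measurement point
            "pixel": (u0, v0),                         # associated pixel in the original camera image
            "flow": (float(ox), float(oy)),            # attribute from the camera image
            "color": image[v0, u0].tolist(),           # attribute from the camera image
        })
    return extended
```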
  • The device 40 has an input 41, via which camera images I1, I2 of a camera 61 and 3D measurement points MP of at least one 3D sensor 62, 64 can be received.
  • The device 40 also optionally has a segmenter 42 for segmenting at least one camera image I1, I2 or a camera image extended by measurement attributes, e.g. by means of a neural network.
  • By a data fusion unit 43, the camera images I1, I2 are fused with the 3D measurement points MP into data VS of a virtual sensor.
  • In doing so, the 3D measurement points MP can be extended by attributes from at least one of the camera images I1, I2.
  • For this purpose, the data fusion unit 43 can calculate an optical flow from at least a first camera image I1 and a second camera image I2 in a first step.
  • From the optical flow, a time to collision for the pixels of the camera images I1, I2 can be determined. From the time to collision, the optical flow and a distance measurement, a Cartesian velocity vector for the respective 3D measurement point MP can be calculated.
  • Alternatively, a time to collision can be determined from a 3D measurement.
  • From the time to collision and the optical flow, a Cartesian velocity vector for this 3D measurement point MP can then be calculated.
  • On the basis of the optical flow, the data fusion unit 43 determines pixels in at least one of the camera images I1, I2 that are assigned to one of the 3D measurement points MP. For this purpose, a camera image I1, I2 acquired in temporal proximity to a measurement time of the 3D sensor 62, 64 can first be converted on the basis of the optical flow.
  • The 3D measurement points MP can then be projected into the converted camera image.
  • A likewise optional object tracker 44 can perform an object tracking on the basis of the data VS of the virtual sensor.
  • The object tracker 44 may, for example, perform an accumulating sensor data fusion.
  • The data VS of the virtual sensor or the results of the object tracking or the segmentation are output via an output 47 for further processing.
  • The segmenter 42, the data fusion unit 43 and the object tracker 44 may be controlled by a control unit 45. Via a user interface 48, settings of the control unit 45 can be changed if necessary.
  • The data accumulating in the device 40 can be stored in a memory 46 of the device 40, for example for later evaluation or for use by the components of the device 40.
  • the segmenter 42, the data fusion unit 43, the object tracker 44 and the control unit 45 may be implemented as dedicated hardware, for example as integrated circuits. Of course, they can also be partially or fully combined or implemented as software running on a suitable processor, such as a GPU or a CPU.
  • the input 41 and the output 47 may be implemented as separate interfaces or as a combined bidirectional interface.
  • FIG. 5 shows a simplified schematic representation of a second embodiment of a device 50 for processing sensor data.
  • the device 50 has a processor 52 and a memory 51.
  • The device 50 is, for example, a computer or a control unit.
  • The memory 51 stores instructions which, when executed by the processor 52, cause the device 50 to execute the steps according to one of the described methods.
  • The instructions stored in the memory 51 thus embody a program, executable by the processor 52, which realizes the method according to the invention.
  • the device 50 has an input 53 for receiving information, in particular sensor data. Data generated by the processor 52 is provided via an output 54. In addition, they can be stored in the memory 51.
  • the input 53 and the output 54 may be combined to form a bidirectional interface.
  • the processor 52 may include one or more processing units, such as microprocessors, digital signal processors, or combinations thereof.
  • The memories 46, 51 of the described embodiments can have both volatile and non-volatile memory areas and can comprise a wide variety of storage devices and storage media.
  • Fig. 6 schematically illustrates a motor vehicle 60 in which a solution according to the invention is realized.
  • The motor vehicle 60 has a camera 61 for capturing camera images and a radar sensor 62 for capturing 3D measurement points.
  • Furthermore, the motor vehicle 60 has a device 40 for processing sensor data, by means of which the camera images are fused with the 3D measurement points into data of a virtual sensor.
  • Further components of the motor vehicle 60 are ultrasonic sensors 63 and a lidar system 64 for environment detection, a data transmission unit 65 and a number of assistance systems 66, one of which is shown by way of example.
  • The assistance systems may use the data provided by the device 40, for example for object tracking.
  • By means of the data transmission unit 65, a connection to service providers can be established, for example for retrieving data.
  • For storing data, a memory 67 is provided.
  • The data exchange between the various components of the motor vehicle 60 takes place via a network 68.
  • Fig. 7 schematically shows the concept of a virtual sensor as a basis for the sensor data fusion.
  • Input variables for the sensor fusion by a data fusion unit 43 are 3D measurement points of a 3D sensor, here a radar sensor 62, and camera images of a camera 61.
  • The camera 61 may already carry out the processing of the camera images, e.g. a structure-from-motion evaluation.
  • Alternatively, this processing of the camera images can also be carried out by the data fusion unit 43.
  • Furthermore, the camera 61 can transmit information about the camera position. Further possible data sources are ultrasonic sensors 63 or a lidar system 64. The data fusion unit 43 fuses the data over a very short period of time. The 3D points from the data fusion unit 43 are then passed to an accumulating sensor data fusion 44, which allows filtering over time.
  • A major challenge for the data fusion is that the sensors 61, 62 measure at different times. Therefore, a precise synchronization of the data of the various sensors 61, 62 is required.
  • For the synchronization of the sensors 61, 62, the optical flow determined from the camera images is preferably used. The basic principles of this synchronization are explained below.
  • At least two camera images are now used, e.g. the camera images before and after the measurement time t of the 3D sensor, in order to first calculate an optical flow o.
  • In addition, the image whose recording time is closest to the measurement time of the 3D sensor is used.
  • The time difference between the recording time of this image and the measurement is Δt.
  • The optical flow o is measured in the image space (polar space); TTC denotes the time to collision.
  • By shifting the pixels in accordance with the optical flow o and the time difference Δt, the entire image can be converted to the measurement time t of the 3D sensor.
  • Alternatively, the 3D measurement points from the depth-measuring sensor can easily be projected into the image.
  • In this case, the pixels can be treated as infinitely long rays that are intersected with the 3D measurement points.
  • For an efficient association, all optical flow vectors can be rendered with line algorithms so that for each pixel the bounding box of the corresponding flow vector is stored. If several flow vectors overlap in one pixel, the bounding box is enlarged accordingly so that all of these vectors are contained in the box.
  • The search algorithm then only needs to consider the bounding box in which the searched pixel must be contained.
  • A further acceleration is possible by means of search trees, e.g. quadtrees, similar to collision detection. A simplified sketch of this bounding-box lookup is given below.
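The following is a simplified sketch of the idea: one axis-aligned bounding box is stored per flow vector (rather than rendering the boxes into every pixel along the vector, as described above), and a query simply tests which boxes contain the searched pixel. A quadtree over these boxes would replace the linear scan.

```python
def flow_vector_boxes(flow_vectors):
    """flow_vectors: list of ((x0, y0), (x1, y1)) pixel positions at the image time and,
    shifted by the optical flow, at the 3D measurement time."""
    return [(min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
            for (x0, y0), (x1, y1) in flow_vectors]

def candidate_flow_vectors(boxes, px, py):
    """Indices of all flow vectors whose bounding box contains the searched pixel (px, py)."""
    return [i for i, (xmin, ymin, xmax, ymax) in enumerate(boxes)
            if xmin <= px <= xmax and ymin <= py <= ymax]
```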
  • A 3D measurement point usually has an angular uncertainty, e.g. due to beam divergence. Therefore, all pixels within this uncertainty are preferably taken into account when extending the 3D measurement point by attributes from the image.
  • The attributes may, for example, be the averaged optical flow o = (o_x, o_y) or the position in image space p = (p_x, p_y).
  • The 3D measurement points can additionally be extended by the class resulting from the segmentation as well as the associated identifier.
  • The resulting points of the virtual sensor can be clustered into high-quality object hypotheses because they contain extensive information for separating classes.
  • This information comprises the class information and the identifier from the segmentation as well as the Cartesian velocity vector, which is useful, for example, for overlapping objects of the same class (a simple clustering sketch is given below).
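The following is a minimal sketch of such a clustering, grouping extended points that share the class identifier and have similar positions and Cartesian velocity vectors; the greedy rule and the thresholds are illustrative assumptions.

```python
import numpy as np

def cluster_extended_points(points, max_dist=1.5, max_dv=1.0):
    """Greedy clustering of extended 3D measurement points into object hypotheses.

    points: list of dicts with keys 'xyz' (3D position), 'v' (Cartesian velocity)
            and 'class_id' (from the segmentation)."""
    clusters = []
    for p in points:
        for cluster in clusters:
            q = cluster[0]                                  # compare against the cluster seed
            if (p["class_id"] == q["class_id"]
                    and np.linalg.norm(np.subtract(p["xyz"], q["xyz"])) < max_dist
                    and np.linalg.norm(np.subtract(p["v"], q["v"])) < max_dv):
                cluster.append(p)
                break
        else:
            clusters.append([p])                            # no matching cluster: start a new one
    return clusters
```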
  • The extended 3D measurement points or clusters of the virtual sensor are then transferred to an accumulating sensor data fusion, which allows filtering over time.
  • Alternatively, changes of the individual image segments can be determined over time, which can be implemented particularly efficiently.
  • From the optical flow, a time to collision (TTC) can be determined for the pixels. The TTC describes when a point penetrates the principal plane of the camera optics.
  • The TTC can be calculated from the optical flow.
  • A pinhole camera model is used for the mathematical representation. From the image position p_x, p_y (in pixels), the TTC (in s), the optical flow o (in pixels/s) and the distance measurement d in the direction of the image plane of the camera sensor (in m), a Cartesian velocity vector v (in m/s) can be calculated for the 3D measurement, which is relative to the ego motion in the camera coordinate system. It should be noted that the optical flow o and the pixel position p are given in image space, while the velocities v_x, v_y, v_z are determined in the camera coordinate system.
  • For this, a camera constant K is needed that takes into account the image distance b (in m) and the resolution D (pixels per m) of the imaging system.
  • The velocities then follow from the pinhole model as functions of d, K, p, o and the TTC, as sketched below.
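The velocity computation can be reconstructed from the quantities defined above. The sketch below assumes a pinhole model with pixel coordinates measured relative to the principal point and the sign convention TTC = -d/v_z (positive for approaching points); the exact form and sign conventions of the patent's own equations may differ.

```python
def camera_constant(b, D):
    """Camera constant K from the image distance b (in m) and the resolution D (in pixels per m)."""
    return b * D                                  # focal length expressed in pixels

def cartesian_velocity(px, py, ox, oy, d, ttc, K):
    """Cartesian velocity (in m/s) of a 3D measurement relative to the ego motion,
    expressed in the camera coordinate system.

    px, py: pixel position relative to the principal point (in pixels)
    ox, oy: optical flow at this pixel (in pixels/s)
    d:      distance measurement towards the image plane (in m)
    ttc:    time to collision (in s)
    Derived by differentiating the pinhole projection p_x = K * x / z with z = d
    and v_z = -d / ttc."""
    v_z = -d / ttc
    v_x = (d / K) * (ox - px / ttc)
    v_y = (d / K) * (oy - py / ttc)
    return v_x, v_y, v_z

def ttc_from_radar(d, v_radial):
    """Alternative TTC as the quotient of a radar distance and the radial (Doppler)
    relative velocity; useful near the focus of expansion, where the flow is small."""
    return d / abs(v_radial)

# Purely illustrative numbers:
K = camera_constant(b=0.006, D=200000.0)          # 6 mm image distance, 5 um pixel pitch -> K = 1200
print(cartesian_velocity(px=100.0, py=-20.0, ox=25.0, oy=-5.0, d=30.0, ttc=3.0, K=K))
```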
  • In addition, the radial relative velocity can be used to stabilize the measurement:
  • by means of this relative velocity (Doppler velocity) and the distance measurement, an alternative TTC can be determined by forming the quotient. This is especially useful for features near the camera's focus of expansion, where there is little optical flow, and therefore mainly affects objects in the driving corridor.
  • The driving corridor is usually covered by a particularly large number of sensors, so that this information is generally available.
  • Fig. 8 schematically shows the concept of a virtual sensor with a classifier. The concept largely corresponds to the concept known from Fig. 7.
  • Convolutional neural networks are currently often used for image classification. Where possible, these require data with local correlations, which naturally exist in an image.
  • Neighboring pixels often belong to the same object and describe the neighborhood in the polar image space.
  • The neural networks should not rely on image data alone, which provide little information in low-light conditions and also make distance estimation generally difficult. In additional dimensions, measurement data from other sensors, in particular from laser and radar measurements, are therefore projected into the image. For good performance, it makes sense to synchronize the measurement data by means of the optical flow, so that the neural networks can make good use of the data locality.
  • The synchronization can be carried out in the following manner.
  • The starting point is a camera image whose recording time is as close as possible to that of all the sensor data.
  • Further data are now annotated to this image:
  • First, the pixel shift to the respective measurement time is determined, for example using the optical flow.
  • Then the pixels are identified which, according to this pixel shift, are associated with the available 3D measurement data, for example from laser or radar measurements.
  • Owing to beam divergence, several pixels are usually affected here.
  • The associated pixels are extended by additional dimensions, and the measurement attributes are entered accordingly. Possible attributes are, for example, in addition to the distance measurement from laser, radar or ultrasound, the Doppler velocity of the radar, the reflectivity and the radar cross section.
  • The camera image synchronized and enhanced with measurement attributes in this way is then classified with a classifier or segmenter 42, preferably a convolutional neural network (a sketch of how such a multi-channel input can be assembled is given below). All further information can then be generated as described above in connection with Fig. 7.
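A sketch of how such a measurement-enhanced, synchronized camera image could be assembled as a multi-channel array for a convolutional network; the channel layout, the attribute names and the nearest-pixel entry of the sparse measurements are assumptions for illustration.

```python
import numpy as np

def build_classifier_input(image, projected_measurements):
    """Stack the synchronized camera image with sparse measurement channels.

    image:                  (H, W, 3) camera image
    projected_measurements: iterable of dicts with the pixel position 'uv' and the
                            attributes 'distance', 'doppler' and 'rcs' (radar cross section)
    Returns an (H, W, 7) array: RGB + distance + Doppler velocity + RCS + validity mask."""
    h, w = image.shape[:2]
    extra = np.zeros((h, w, 4), dtype=np.float32)
    for m in projected_measurements:
        u, v = m["uv"]
        if 0 <= u < w and 0 <= v < h:
            extra[v, u, 0] = m["distance"]
            extra[v, u, 1] = m["doppler"]
            extra[v, u, 2] = m["rcs"]
            extra[v, u, 3] = 1.0                      # mask: a measurement is present at this pixel
    return np.concatenate([image.astype(np.float32), extra], axis=-1)
```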
  • In the following, a camera is assumed that can be modeled as a pinhole camera. This assumption serves only to make the transformations easier to handle. If the camera used cannot be adequately modeled as a pinhole camera, distortion models can instead be used to generate views that correspond to those of a pinhole camera.
  • A number of coordinate systems are defined. In total, five coordinate systems are used: the world coordinate system, the 3D coordinate system C_v of the ego vehicle, the 3D coordinate system of the 3D sensor, the 3D coordinate system of the camera and the 2D image coordinate system.
  • T_v←w(t) is the transformation that transforms a 3D point given in the world coordinate system into the 3D coordinate system of the ego vehicle. This transformation depends on the time t, as the ego vehicle moves over time.
  • T_s←v is the transformation that transforms a 3D point given in the 3D coordinate system of the ego vehicle into the 3D coordinate system of the 3D sensor.
  • T_c←v is the transformation that transforms a 3D point given in the 3D coordinate system of the ego vehicle into the 3D coordinate system of the camera.
  • P_i←c is the projection that maps a 3D point given in the 3D coordinate system of the camera into the 2D image coordinate system.
  • A world point moving in the world coordinate system, e.g. a point on another vehicle, can be described by x_w(t).
  • Equations (6) and (7) are linked to each other by the motion of the ego vehicle and the motion of the world point. While information about the motion of the ego vehicle is available, the motion of the world point is unknown.
  • Equation (11) represents a relationship between the optical flow vector between the camera images taken at the times t_0 and t_2 and Δx_v(t_0, t_2), the corresponding motion vector of the world point expressed in C_v.
  • The optical flow vector is thus the projection of the motion vector in 3D space.
  • With this alone, the measurements of the camera and of the 3D sensor cannot yet be combined directly with each other. As an additional assumption, it must first be introduced that the motion in the image plane is linear between the times t_0 and t_2. Under this assumption, the pixel belonging to a world point at an intermediate time t is obtained by linear interpolation along the optical flow vector.
  • Together with equation (6), the pixel coordinates that the world point would have in a virtual camera image recorded at the time t can then be determined.
  • Equation (16) thus represents a relationship between the measurements of the camera and the measurements of the 3D sensor.
  • Equation (16) establishes a complete relationship, i.e. it contains no unknown quantities (a possible reconstruction of the interpolation step is sketched below).
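Since the numbered equations are not reproduced on this page, the interpolation step can only be reconstructed under assumptions. With the optical flow vector $o(t_0, t_2)$ between the images taken at $t_0$ and $t_2$ and the assumption of linear motion in the image plane, the pixel of the world point in a virtual camera image recorded at the measurement time $t$ of the 3D sensor would be

```latex
p_i(t) \approx p_i(t_0) + \frac{t - t_0}{t_2 - t_0}\, o(t_0, t_2), \qquad t_0 \le t \le t_2 .
```

Equating this interpolated pixel with the projection of the 3D measurement into the image then yields the complete relationship of equation (16), in which no unknown quantities remain.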

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a method, a device and a computer-readable storage medium with instructions for processing sensor data. In a first step, camera images are acquired by a camera (20). 3D measurement points are also acquired by at least one 3D sensor (21). Optionally, at least one of the camera images can be segmented (22). The camera images are then fused by a data fusion unit with the 3D measurement points to obtain data of a virtual sensor (23). Finally, the resulting data are output for further processing (24).
EP19715031.1A 2018-04-18 2019-03-27 Procédé, dispositif et support d'enregistrement lisible par ordinateur comprenant des instructions pour le traitement de données de capteur Pending EP3782117A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102018205879.2A DE102018205879A1 (de) 2018-04-18 2018-04-18 Verfahren, Vorrichtung und computerlesbares Speichermedium mit Instruktionen zur Verarbeitung von Sensordaten
PCT/EP2019/057701 WO2019201565A1 (fr) 2018-04-18 2019-03-27 Procédé, dispositif et support d'enregistrement lisible par ordinateur comprenant des instructions pour le traitement de données de capteur

Publications (1)

Publication Number Publication Date
EP3782117A1 true EP3782117A1 (fr) 2021-02-24

Family

ID=66001192

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19715031.1A Pending EP3782117A1 (fr) 2018-04-18 2019-03-27 Procédé, dispositif et support d'enregistrement lisible par ordinateur comprenant des instructions pour le traitement de données de capteur

Country Status (5)

Country Link
US (1) US11935250B2 (fr)
EP (1) EP3782117A1 (fr)
CN (1) CN111937036A (fr)
DE (1) DE102018205879A1 (fr)
WO (1) WO2019201565A1 (fr)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018205879A1 (de) 2018-04-18 2019-10-24 Volkswagen Aktiengesellschaft Verfahren, Vorrichtung und computerlesbares Speichermedium mit Instruktionen zur Verarbeitung von Sensordaten
US11899099B2 (en) * 2018-11-30 2024-02-13 Qualcomm Incorporated Early fusion of camera and radar frames
DE102019134985B4 (de) * 2019-12-18 2022-06-09 S.M.S, Smart Microwave Sensors Gmbh Verfahren zum Erfassen wenigstens eines Verkehrsteilnehmers
DE102020005343A1 (de) * 2020-08-31 2022-03-03 Daimler Ag Verfahren zur Objektverfolgung von mindestens einem Objekt, Steuereinrichtung zur Durchführung eines solchen Verfahrens, Objektverfolgungsvorrichtung mit einer solchen Steuereinrichtung und Kraftfahrzeug mit einer solchen Objektverfolgungsvorrichtung
US20220153262A1 (en) * 2020-11-19 2022-05-19 Nvidia Corporation Object detection and collision avoidance using a neural network
US12106492B2 (en) * 2021-11-18 2024-10-01 Volkswagen Aktiengesellschaft Computer vision system for object tracking and time-to-collision
US20230273308A1 (en) * 2022-01-31 2023-08-31 Qualcomm Incorporated Sensor based object detection
US20230350050A1 (en) * 2022-04-27 2023-11-02 Toyota Research Institute, Inc. Method for generating radar projections to represent angular uncertainty
CN115431968B (zh) * 2022-11-07 2023-01-13 北京集度科技有限公司 车辆控制器、车辆及车辆控制方法
EP4369045A1 (fr) * 2022-11-14 2024-05-15 Hexagon Technology Center GmbH Filtrage de points réfléchis dans un balayage lidar 3d par évaluation conjointe de données lidar et de données d'image avec un classificateur de points de réflexion
DE102023100418A1 (de) 2023-01-10 2024-07-11 Bayerische Motoren Werke Aktiengesellschaft Verfahren und Vorrichtung zur Nachverfolgung des lateralen Abstands eines Objektes

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101188588B1 (ko) * 2008-03-27 2012-10-08 주식회사 만도 모노큘러 모션 스테레오 기반의 주차 공간 검출 장치 및방법
US8704887B2 (en) 2010-12-02 2014-04-22 GM Global Technology Operations LLC Multi-object appearance-enhanced fusion of camera and range sensor data
DE102010063133A1 (de) * 2010-12-15 2012-06-21 Robert Bosch Gmbh Verfahren und System zur Bestimmung einer Eigenbewegung eines Fahrzeugs
DE102011013776A1 (de) 2011-03-12 2011-11-10 Daimler Ag Verfahren zur Erfassung und/oder Verfolgung von Objekten
US9255989B2 (en) 2012-07-24 2016-02-09 Toyota Motor Engineering & Manufacturing North America, Inc. Tracking on-road vehicles with sensors of different modalities
DE102012023030A1 (de) 2012-11-26 2014-05-28 Audi Ag Verfahren zur Ermittlung der Bewegung eines Kraftfahrzeugs
DE102014211166A1 (de) * 2013-11-20 2015-05-21 Continental Teves Ag & Co. Ohg Verfahren, Fusionsfilter und System zur Fusion von Sensorsignalen mit unterschiedlichen zeitlichen Signalausgabeverzügen zu einem Fusionsdatensatz
CN110171405B (zh) * 2014-05-22 2021-07-13 御眼视觉技术有限公司 基于检测对象制动车辆的系统和方法
DE102015012809A1 (de) * 2015-10-02 2017-04-06 Audi Ag Bildaufnahmeeinrichtung für ein Kraftfahrzeug und Verfahren zum Betreiben einer derartigen Bildaufnahmeeinrichtung
US10482331B2 (en) 2015-11-20 2019-11-19 GM Global Technology Operations LLC Stixel estimation methods and systems
US20170206426A1 (en) 2016-01-15 2017-07-20 Ford Global Technologies, Llc Pedestrian Detection With Saliency Maps
US10328934B2 (en) * 2017-03-20 2019-06-25 GM Global Technology Operations LLC Temporal data associations for operating autonomous vehicles
US10444759B2 (en) * 2017-06-14 2019-10-15 Zoox, Inc. Voxel based ground plane estimation and object segmentation
US10535138B2 (en) * 2017-11-21 2020-01-14 Zoox, Inc. Sensor data segmentation
US10984257B2 (en) * 2017-12-13 2021-04-20 Luminar Holdco, Llc Training multiple neural networks of a vehicle perception component based on sensor settings
US20190235521A1 (en) * 2018-02-01 2019-08-01 GM Global Technology Operations LLC System and method for end-to-end autonomous vehicle validation
US11435752B2 (en) * 2018-03-23 2022-09-06 Motional Ad Llc Data fusion system for a vehicle equipped with unsynchronized perception sensors
DE102018205879A1 (de) 2018-04-18 2019-10-24 Volkswagen Aktiengesellschaft Verfahren, Vorrichtung und computerlesbares Speichermedium mit Instruktionen zur Verarbeitung von Sensordaten
DE102019201565A1 (de) 2019-02-07 2020-08-13 Aktiebolaget Skf Lagerkäfigsegment mit einer Stoßkante im Bereich eines zu bildenden Stegs

Also Published As

Publication number Publication date
US11935250B2 (en) 2024-03-19
DE102018205879A1 (de) 2019-10-24
WO2019201565A1 (fr) 2019-10-24
CN111937036A (zh) 2020-11-13
US20210158544A1 (en) 2021-05-27

Similar Documents

Publication Publication Date Title
WO2019201565A1 (fr) Procédé, dispositif et support d'enregistrement lisible par ordinateur comprenant des instructions pour le traitement de données de capteur
DE69322306T2 (de) Gegenstandserkennungssystem mittels Bildverarbeitung
EP1589484B1 (fr) Procédé pour la détection et/ou le suivi d'objets
EP1531343B1 (fr) Procédé de suivi d'objets
EP1298454A2 (fr) Procédé de reconnaissance et de suivi d'objets
DE102016114535A1 (de) Wegbestimmung für automatisierte Fahrzeuge
DE102016225595A1 (de) Verfahren und Anordnung zur Kalibrierung mindestens eines Sensors eines Schienenfahrzeugs
DE112014003818T5 (de) Objektschätzvorrichtung und Objektschätzverfahren
EP3877776A1 (fr) Procédé et unité de traitement servant à déterminer une information concernant un objet dans un champ environnant d'un véhicule
WO2020244717A1 (fr) Détection, reconstruction 3d et suivi de plusieurs objets indéformables mobiles les uns par rapport aux autres
DE102021113651B3 (de) System zur Sensordatenfusion für die Umgebungswahrnehmung
EP2200881A1 (fr) Procédé pour évaluer le déplacement relatif d'objets vidéo et système d'assistance à la conduite pour des véhicules automobiles
DE102018100909A1 (de) Verfahren zum Rekonstruieren von Bildern einer Szene, die durch ein multifokales Kamerasystem aufgenommen werden
EP1460454A2 (fr) Procédé de traitement combiné d'images à haute définition et d'images vidéo
DE102006039104A1 (de) Verfahren zur Entfernungsmessung von Objekten auf von Bilddaten eines Monokamerasystems
DE102017212513A1 (de) Verfahren und System zum Detektieren eines freien Bereiches innerhalb eines Parkplatzes
WO2021170321A1 (fr) Procédé de détection d'objets en mouvement dans l'environnement d'un véhicule, et véhicule à moteur
DE102014113372B4 (de) Filtervorrichtung
EP2579228A1 (fr) Procédé et système de génération d'une représentation numérique d'un environnement de véhicule
DE102021206475A1 (de) Hindernisdetektion im Gleisbereich auf Basis von Tiefendaten
DE102021101336A1 (de) Verfahren zur Auswertung von Sensordaten eines Abstandssensors, Ermittlungseinrichtung, Computerprogramm und elektronisch lesbarer Datenträger
DE102011111856B4 (de) Verfahren und Vorrichtung zur Detektion mindestens einer Fahrspur in einem Fahrzeugumfeld
DE102020200875A1 (de) Verfahren zum Bereitstellen von Sensordaten durch eine Sensorik eines Fahrzeugs
DE10148063A1 (de) Verfahren zur Erkennung und Verfolgung von Objekten
DE102023203319A1 (de) Spurgebundene Hinderniserkennung

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20201118

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230202