WO2020119997A1 - Verfahren zur bestimmung einer relativbewegung mittels einer digitalen bilderfolge - Google Patents
- Publication number
- WO2020119997A1 (PCT/EP2019/079550)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- partial
- flow field
- flow
- flow fields
- Prior art date
Classifications
- G06T7/215—Motion-based segmentation
- G06T7/20—Analysis of motion
- G06T7/11—Region-based segmentation
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016—Video; Image sequence
Definitions
- The invention relates to a method for determining a relative movement of a device relative to at least one object by means of a digital image sequence.
- Driver assistance systems record scenarios with the help of electronic systems, e.g. radar, lidar, ultrasound or camera systems, in order to capture data and process it into characterizing parameters.
- Depending on the processing of the data, a three-dimensional model can be available for analysis.
- Alternatively, the apparent movement of the object in an image sequence can be analyzed using two-dimensional image coordinates.
- A relative movement of a device relative to at least one object can be determined, e.g., from the scaling change of object detections or from a scale-change estimate of an optical flow computed from the image sequences.
- The estimate of the relative movement from scaling changes of object detections is subject to a number of inaccuracies, such as the often erroneous assumption of known and fixed object widths, or inaccuracies due to the variation of the object's extent in successive camera images.
- For the optical-flow approach, the flow belonging to the target object must be selected from the flow fields, which can be done via a movement segmentation.
- Movement segmentation of the optical flow is a non-trivial and computation-intensive problem.
- The resulting segmentations are typically very noisy, so subsequent calculations must handle a large number of outliers.
- To determine a collision time TTC (time-to-collision or time-to-contact), the optical flow estimate is typically calculated with a fixed time interval over the entire image.
- DE 10 2011 006 629 A1 describes a method for determining the collision time of a vehicle equipped with a video system on the basis of an evaluation of video images of the vehicle surroundings, whereby a collision parameter TTC for determining the collision time of the vehicle with any part of the vehicle environment is calculated.
- The collision parameter TTC is calculated by integrating the expansion rate of the optical flow field of the entire vehicle environment recorded by the video system in the video image.
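The relationship between a scale change and a collision time can be sketched as follows. This is an illustrative formulation, not the exact integration-based computation of the cited document; the function name and the constant-relative-speed assumption are ours:

```python
def ttc_from_scale_change(scale: float, dt: float) -> float:
    """Time-to-collision from the scale change of an object's image.

    scale: factor by which the object's image size grew between two
           frames taken `dt` seconds apart.
    Sketch under a constant-relative-speed assumption: if the image
    grows by factor `scale` > 1, the distance shrinks by 1/scale,
    which gives TTC = dt / (scale - 1).
    """
    if scale <= 1.0:
        return float("inf")  # object is not approaching
    return dt / (scale - 1.0)
```

For example, a 10 % growth of the object's image over 0.1 s corresponds to a TTC of about one second.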
- That document further relates to a vehicle guidance system for performing the described method.
- The object of the invention is to provide a method, a computer program product and a computer-readable storage medium for determining a relative movement of a device relative to at least one object on the basis of a digital image sequence captured from the location of the device.
- The invention is based on the insight that objects farther away from the device appear smaller in an image of an image sequence, e.g. one captured in perspective with a camera, than nearby objects. Therefore, according to the invention, optical flow fields are calculated from pairs of images of such an image sequence with different time intervals from one another and with different sizes, taking into account the partial area of the images in which the object is depicted. This partial area in which the object is depicted can also be referred to as a "bounding box" or as a "region of interest" (ROI). Suitable flow fields are then selected from a multiplicity of flow fields for determining relative movements on the basis of criteria which improve the reliability and accuracy of this estimate.
- The method thus determines the relative movement of the device relative to at least one object on the basis of a digital image sequence of the object captured from the location of the device.
- The device can be a motor vehicle that itself moves relative to a surface.
- The device can also be stationary, with an object moving relative to it.
- The optical flow describes the displacement of the image points of an object recorded at a time i in an image of an image sequence, relative to the image points recorded at a time j, using flow vectors which are assigned to the current image points and thus define the optical flow field.
- Furthermore, in the method according to the invention, the at least one object whose relative movement is to be determined is located in a partial image area of the most current image of the digital image sequence, and this partial image area is assigned to the object.
- A region is thus identified for each image of the image sequence in which the object was detected, and it is sufficient for the further calculations to consider this partial image region, the region of interest (ROI), which characterizes the object.
- For example, a rectangular area which largely covers the object can be assigned to it.
- The partial image area can also be derived from a so-called "semantic segmentation".
- The at least one object can be located only for selected images of the image sequence. If several objects are located in the respective most current image, the objects can be tracked through the entire method by means of the assigned partial image areas, one partial image area being assigned to each object.
- A multiplicity of optical partial flow fields is formed from the multiplicity of optical flow fields, each partial flow field resulting as the intersection of one of the flow fields with the partial image area of the most current image of the image sequence.
- The calculation of each flow field is based on the most current image of the image sequence and on the other image of the image pair, which has the corresponding time interval to the current image.
- The object was located in the partial image area of the current image, and the intersection of this partial image area with the flow field calculated on the basis of this most current image yields the partial flow field assigned to the object.
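The intersection of a flow field with the partial image area can be sketched as follows; the dict-based representation of a flow field and the box format are illustrative assumptions, not the patent's data structures:

```python
def partial_flow_field(flow_field, roi):
    """Intersect a flow field with a partial image area (ROI).

    flow_field: dict mapping (x, y) pixel coordinates to (u, v) flow vectors.
    roi: bounding box (x_min, y_min, x_max, y_max) of the located object.
    Returns only those flow vectors whose anchor point lies inside the ROI,
    i.e. the partial flow field assigned to the object.
    """
    x0, y0, x1, y1 = roi
    return {(x, y): uv for (x, y), uv in flow_field.items()
            if x0 <= x <= x1 and y0 <= y <= y1}
```

Applied once per calculated flow field, this yields the multiplicity of partial flow fields from which the selection step then chooses.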
- At least one partial flow field is selected according to at least one criterion in order to support the estimation of a change in the scale of the object.
- The change in scale for the at least one object is then estimated on the basis of the at least one selected partial flow field.
- For this purpose, pairs of flow vectors are formed, by means of which the change in scaling is calculated. It is therefore beneficial for the scale estimation if a suitable partial flow field is selected, because the scale estimate is based on this partial flow field.
- A suitable partial flow field is selected for each object on the basis of the criteria. For this purpose, a unique identification is made for each object or for each associated partial image area.
- The most suitable partial flow field can be selected according to predefined criteria.
- The quality of the scaling calculation is thus improved by selecting the most suitable optical partial flow field from among those of different time intervals. If, for example, an object is located at a greater distance from the device, a flow field with a larger time interval between the images of the corresponding image pair provides a longer flow vector for the estimation. To estimate the change in scale, a different time interval for calculating the optical flow can therefore be used at a time i than at a time j.
- One criterion for selecting a partial flow field takes into account the degree of coverage of the partial image area by the calculated flow field. This ensures that the object lies in an image area within the calculated flow field and that a sufficient proportion of the object, represented by the partial image area, can be taken into account. This is especially important if, for example, the flow field is not calculated over the entire area of the images of the image sequence in order to reduce the computational effort, as explained in more detail below. Additionally or alternatively, the criterion can be an absolute size of the intersection of the partial image area and the calculated flow field.
- Another criterion for the selection of a partial flow field can be a measure of the signal-to-noise ratio of the flow vectors of the partial flow field; additionally or alternatively, a quality measure from the calculation of the flow vectors can be taken into account. Confidence in the quality of the partial flow field is given either by a determined signal-to-noise ratio of the flow vectors or by a quality value determined when the flow vectors were calculated. Furthermore, the criterion for selecting a partial flow field can additionally or alternatively take into account the number of flow vectors in the partial flow field and/or the length of the flow vectors of the partial flow field.
- The criterion for selecting a partial flow field can also take into account a detection quality of the location of the object in the partial image area of the image of the image sequence. If the location of an object in a partial area of an image is reliable, the correct flow vectors are also selected in the partial flow field, which improves the estimate.
- The criterion can also take into account characteristics of the image areas in the intersection of the partial image area and the calculated flow field.
- Properties of an image of the image sequence that have an obvious influence on, for example, the quality of the flow vectors are thus also included in the selection of the partial flow fields. These can be contrasts, homogeneity, gray-value distribution, gradients, or more complex features such as a histogram of oriented gradients (HOG), in which the distribution of local intensities or the arrangement of edges is evaluated, and similar variables that characterize the meaningfulness of the image.
- A further criterion assumes FV > alpha * (SZ / SZMAX) * FVMAX and additionally takes into account the size of the time interval with which the partial flow field was calculated. This criterion primarily evaluates the number of flow vectors in the partial flow field in comparison to the number in the other partial flow fields. The formula is derived later in this description.
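A minimal sketch of this criterion, assuming each candidate partial flow field is described only by its flow-vector count FV and its coverage area SZ; the data layout and the value of alpha are illustrative assumptions:

```python
def select_partial_flow_fields(candidates, alpha=0.5):
    """Keep partial flow fields satisfying FV > alpha * (SZ / SZMAX) * FVMAX.

    candidates: list of dicts with keys
        'fv': number of flow vectors in the coverage area, and
        'sz': size of the coverage area (intersection of ROI and flow field).
    alpha: empirically chosen weighting factor (assumed value here).
    SZMAX and FVMAX are the maxima over all candidates, as in the description.
    """
    if not candidates:
        return []
    sz_max = max(c["sz"] for c in candidates)
    fv_max = max(c["fv"] for c in candidates)
    return [c for c in candidates
            if c["fv"] > alpha * (c["sz"] / sz_max) * fv_max]
```

Normalizing by SZ/SZMAX means a small coverage area is not penalized for having few vectors in absolute terms, only for having few vectors relative to its size.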
- The multiplicity of optical flow fields can be calculated from an image section of the images of the image pairs, the image section representing a partial area of the respective images.
- In this way, an object, e.g. a vehicle moving on a surface in front of the device relative to the device, is shown at a greater distance from the device in a different image section of an image of the image sequence than an object in the close range in front of the device.
- The calculation of the plurality of optical flow fields can be carried out for at least two distinguishable image sections of the images of the image pairs, each with different time intervals.
- The at least two distinguishable image sections are arranged, with respect to their position and size on the images of the image pairs, according to perspective considerations for locating objects at different distances from the device.
- The partial image area of an image in which distant objects in front of the device are imaged is significantly smaller than for objects in the close range in front of the device, provided that the objects move on an essentially flat surface in front of the device.
- To account for these ratios, an image section for distant objects can, for example, be made smaller and placed farther toward the upper edge of the images of the image sequence than an image section for objects in the close range of the device.
- If the at least two image sections are distinguished by their areal extent, the smaller image section can be arranged entirely within the larger image section, and the assigned time interval for calculating the flow fields is larger, the smaller the image section is.
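The assignment of nested image sections to time intervals can be sketched as follows; the box format and the concrete interval values are illustrative assumptions:

```python
def time_interval_for_roi(roi, sections):
    """Pick the time interval for the flow calculation based on which
    nested image section fully contains the object's ROI.

    sections: list of (section_box, dt) pairs ordered from smallest to
        largest section; smaller sections (distant objects) carry larger
        time intervals dt, as described above.
    Boxes are (x_min, y_min, x_max, y_max).
    """
    x0, y0, x1, y1 = roi
    for (sx0, sy0, sx1, sy1), dt in sections:
        if sx0 <= x0 and sy0 <= y0 and x1 <= sx1 and y1 <= sy1:
            return dt
    return None  # ROI not covered by any configured section
```

A distant object whose small ROI fits into the inner section thus gets the longer interval, while a large near-range ROI falls through to the full-image section with the shorter interval.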
- The multiplicity of optical flow fields can thus be calculated from image sections of the image pairs, and the image sections and/or the assignment of the image sections to time intervals of the image pairs can be determined after the acquisition of the respective image sequence.
- Using information from the location of objects, the parameters used to calculate the flow fields, such as the position, the size and the assigned time interval of the image sections, can be adjusted.
- The invention further provides a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method described above.
- The invention also provides a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method described above.
- The invention also provides a system for determining a relative movement of the system relative to at least one object.
- This system has a device for capturing digital image sequences, which can be, for example, a digital camera or a digital video camera. Furthermore, the system has an evaluation unit which is coupled to the device in such a way that the image sequences are transferred to the evaluation unit.
- The coupling can be realized, for example, with a bus system which connects the device for capturing digital image sequences to the evaluation unit.
- The evaluation unit is set up to carry out the method according to the invention described above.
- Exemplary embodiments of the invention are shown in FIGS. 1 and 2 and are explained in more detail below. The figures show:
- FIG. 1: a flow chart of a method for determining a relative movement of a device relative to at least one object;
- FIG. 2: a system for determining a relative movement of a device relative to at least one object.
- In FIG. 1, the method steps for determining a relative movement of a device relative to at least one object are shown in flow diagram 10.
- For the method, a sequence of digital images is required; from it, in step S2, a multiplicity of optical flow fields {U(x, y, τ)} is calculated.
- Each flow field {U(x, y, τ)} is formed from a set of flow vectors U, which are each assigned to the coordinates x and y of an image.
- The flow vectors U are calculated from image pairs of the sequence of digital images with a time interval τ.
- The individual flow fields {U(x, y, τ)} are calculated with a large number of different time intervals τ, i.e. from a current image and earlier images of the sequence of images.
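The calculation of flow fields for several time intervals τ can be sketched as follows, with any optical-flow routine passed in as a callable; the representation of frames and flow fields is deliberately left abstract, since the patent does not fix a specific flow algorithm:

```python
def flow_fields_for_intervals(images, compute_flow, intervals):
    """Calculate the multiplicity of flow fields {U(x, y, tau)} between
    the most current image and earlier images of the sequence.

    images: list of frames, most current frame last.
    compute_flow: callable (earlier_frame, current_frame) -> flow field;
        a stand-in for an arbitrary optical-flow routine (assumption).
    intervals: frame offsets tau (1 = previous frame, 2 = two back, ...).
    Offsets reaching beyond the start of the sequence are skipped.
    """
    current = images[-1]
    fields = {}
    for tau in intervals:
        if tau < len(images):
            fields[tau] = compute_flow(images[-1 - tau], current)
    return fields
```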
- Step S7 is an optional part of the exemplary embodiment.
- Step S7 precedes step S2, the calculation of the large number of flow fields.
- The calculation of the multiplicity of optical flow fields in S2 is then based only on an image section of the images of the image pairs, the image section thus representing a partial area of the respective images.
- Step S7 thus provides the definition of a smaller image section for the flow field calculations, as outlined in FIG. 1.
- In this way, an area that is not relevant for the estimation of the change in scale, such as the sky, can be excluded from an image, thereby reducing the computational effort.
- In step S7, for the calculation of the plurality of optical flow fields in step S2, it can also be specified that distinguishable image sections of the images of the image pairs are each assigned different time intervals.
- This calculation of the multiplicity of flow fields in S2 can be carried out with any number of image sections and associated time intervals. As already shown, this enables objects whose partial image areas appear smaller in the distance for perspective reasons to be completely captured during the flow field calculation using a smaller image section. The computational effort for the calculation of the flow fields is thus reduced.
- The estimate from one image section can also be compared with the estimate from another image section.
- For example, three image sections of different sizes can be centered and nested, with the longest time interval assigned to the smallest image section, the shortest time interval to the largest, and the middle image section assigned a time interval between the other two.
- The method according to the invention thus enables a robust and precise estimate of the change in scale for objects in the near and far range, which can also be adapted to other circumstances.
- In step S3, at least one object is located in a partial image area of an image of the image sequence, and the respective object is assigned to this partial image area.
- The result of the location step S3 is a partial image area of an image of the image sequence in which the object was located and to which it was assigned.
- A partial flow field in each case results from the intersection of a calculated flow field {U(x, y, τ)} with the at least one partial image area, so that a plurality of partial flow fields is formed from the plurality of calculated flow fields {U(x, y, τ)}.
- Each partial flow field carries the time interval used to determine its flow field {U(x, y, τ)}.
- In step S5, a partial flow field is selected from this large number of partial flow fields according to a criterion, the selection of a suitable criterion promoting the estimation of a change in the scale of the object.
- The criteria for this selection of at least one suitable partial flow field are based on the one hand on necessary conditions, for example that the partial image area has a sufficient number of flow vectors by means of which the change in scale can be estimated, and on the other hand on quality criteria such as the signal-to-noise ratio or a quality measure from the flow-vector calculation.
- The quality criteria for the selection of a partial flow field can refer to the flow vectors of the partial flow fields or to features of the object detection in the partial image area.
- The quality measure of the flow vectors can be provided via an optional data exchange V1 between the calculation of the multiplicity of flow fields in step S2 and the selection in step S5. Furthermore, quality criteria can relate to features of the partial image area that result from the location of the at least one object.
- For this purpose, an optional data exchange V2 can take place between the object location in step S3 and the selection of partial flow fields in step S5.
- More complex methods such as automated or learned classifiers, e.g. from machine learning (e.g. decision trees) or deep learning, which can for example be trained offline, can also be used to select the most suitable partial flow field.
- Only partial flow fields whose coverage area SZ, i.e. the intersection of the partial image area and the calculated flow field, lies above a certain minimum size are taken into account in the selection process. For the partial flow fields selected in this way, the maximum coverage area SZMAX and the maximum number of flow vectors FVMAX in a respective coverage area SZ of one of the selected partial flow fields are determined.
- alpha is a weighting factor to be selected, which can, for example, be determined empirically.
- The most suitable flow field is selected separately for each coverage area.
- In step S6, after the selection of the most suitable partial flow field, the change in scale is estimated.
- For the estimation of the scale change in S6, pairs of flow vectors are formed from the coverage area in order to be able to quantify relative movements of objects along the optical axis of the image acquisition system.
- In the case of a scale change, one pixel in one image is mapped onto several pixels in another image, and vice versa.
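The estimation of the scale change from pairs of flow vectors can be sketched as follows. Taking the median over pair-wise distance ratios is one robust choice against the outliers mentioned earlier, not necessarily the patent's exact procedure, and the sketch assumes the motion within the coverage area is a pure scaling:

```python
from itertools import combinations
from math import hypot

def estimate_scale_change(partial_flow):
    """Estimate an object's scale change from pairs of flow vectors.

    partial_flow: dict mapping a point (x, y) to its flow vector (u, v).
    For each pair of points, the ratio of their distance after applying
    the flow to their distance before gives one scale estimate; the
    median over all pairs is robust against outlier vectors.
    """
    ratios = []
    for (p, up), (q, uq) in combinations(partial_flow.items(), 2):
        d_before = hypot(p[0] - q[0], p[1] - q[1])
        d_after = hypot(p[0] + up[0] - q[0] - uq[0],
                        p[1] + up[1] - q[1] - uq[1])
        if d_before > 0:
            ratios.append(d_after / d_before)
    ratios.sort()
    return ratios[len(ratios) // 2] if ratios else None
```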
- From the estimated change in scale, a collision time TTC (time-to-collision or time-to-contact) can be determined.
- The method shown can be implemented by means of a computer program product which comprises instructions which, when the program is executed by a computer, cause the computer to carry out this method with all of its optional features.
- The computer program product can also be stored on a computer-readable storage medium.
- FIG. 2 shows a system 20 for determining a relative movement of this system 20 relative to at least one object.
- This system has a device 1 for capturing digital image sequences of, for example, objects.
- Such a device for capturing digital image sequences can be, for example, a digital camera or a digital video camera.
- The system 20 has an evaluation unit 2 which is coupled to the device 1 in such a way that the image sequences are transferred to the evaluation unit 2.
- This can be realized, for example, with a bus system 1a which couples the device 1 for capturing digital image sequences with the evaluation unit 2.
- The evaluation unit 2 is set up to carry out the method according to the invention described above and to provide the result of the method at an output 3 of the evaluation unit.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217021389A KR20210102343A (ko) | 2018-12-13 | 2019-10-29 | 디지털 이미지 시퀀스를 이용하여 상대 운동을 결정하기 위한 방법 |
US17/278,006 US11989889B2 (en) | 2018-12-13 | 2019-10-29 | Method for determining a relative movement using a digital image sequence |
CN201980082459.1A CN113168703A (zh) | 2018-12-13 | 2019-10-29 | 用于借助数字图像序列来确定相对运动的方法 |
JP2021533439A JP7337165B2 (ja) | 2018-12-13 | 2019-10-29 | デジタル画像シーケンスを用いた相対運動の決定方法 |
EP19797642.6A EP3895123A1 (de) | 2018-12-13 | 2019-10-29 | Verfahren zur bestimmung einer relativbewegung mittels einer digitalen bilderfolge |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102018221617.7 | 2018-12-13 | ||
DE102018221617.7A DE102018221617A1 (de) | 2018-12-13 | 2018-12-13 | Verfahren zur Bestimmung einer Relativbewegung mittels einer digitalen Bilderfolge |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020119997A1 true WO2020119997A1 (de) | 2020-06-18 |
Family
ID=68426443
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2019/079550 WO2020119997A1 (de) | 2018-12-13 | 2019-10-29 | Verfahren zur bestimmung einer relativbewegung mittels einer digitalen bilderfolge |
Country Status (7)
Country | Link |
---|---|
US (1) | US11989889B2 (de) |
EP (1) | EP3895123A1 (de) |
JP (1) | JP7337165B2 (de) |
KR (1) | KR20210102343A (de) |
CN (1) | CN113168703A (de) |
DE (1) | DE102018221617A1 (de) |
WO (1) | WO2020119997A1 (de) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120019655A1 (en) * | 2009-04-15 | 2012-01-26 | Toyota Jidosha Kabushiki Kaisha | Object detection device |
DE102011006629A1 (de) | 2011-04-01 | 2012-10-04 | Robert Bosch Gmbh | Verfahren zur Bestimmung der Kollisionszeit eines mit einem Videosystem ausgerüsteten Fahrzeuges und Fahrzeugführungssystem |
US20120314071A1 (en) * | 2011-04-27 | 2012-12-13 | Mobileye Technologies Ltd. | Pedestrian collision warning system |
US20170203744A1 (en) * | 2014-05-22 | 2017-07-20 | Mobileye Vision Technologies Ltd. | Systems and methods for braking a vehicle based on a detected object |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4269781B2 (ja) * | 2003-05-27 | 2009-05-27 | 日本電気株式会社 | オプティカルフロー検出システム、検出方法および検出プログラム |
JP4919036B2 (ja) * | 2007-01-30 | 2012-04-18 | アイシン精機株式会社 | 移動物体認識装置 |
JP5012718B2 (ja) * | 2008-08-01 | 2012-08-29 | トヨタ自動車株式会社 | 画像処理装置 |
JP5338566B2 (ja) * | 2009-08-25 | 2013-11-13 | 富士通株式会社 | 車両検出装置、車両検出プログラム、および車両検出方法 |
US9008363B1 (en) * | 2013-01-02 | 2015-04-14 | Google Inc. | System and method for computing optical flow |
-
2018
- 2018-12-13 DE DE102018221617.7A patent/DE102018221617A1/de active Pending
-
2019
- 2019-10-29 JP JP2021533439A patent/JP7337165B2/ja active Active
- 2019-10-29 US US17/278,006 patent/US11989889B2/en active Active
- 2019-10-29 WO PCT/EP2019/079550 patent/WO2020119997A1/de unknown
- 2019-10-29 KR KR1020217021389A patent/KR20210102343A/ko active Search and Examination
- 2019-10-29 EP EP19797642.6A patent/EP3895123A1/de active Pending
- 2019-10-29 CN CN201980082459.1A patent/CN113168703A/zh active Pending
Non-Patent Citations (1)
Title |
---|
LEI CHEN ET AL: "Real-Time Optical Flow Estimation Using Multiple Frame-Straddling Intervals", JOURNAL OF ROBOTICS AND MECHATRONICS, vol. 24, no. 4, 20 August 2012 (2012-08-20), JP, pages 686 - 698, XP055648203, ISSN: 0915-3942, DOI: 10.20965/jrm.2012.p0686 * |
Also Published As
Publication number | Publication date |
---|---|
JP7337165B2 (ja) | 2023-09-01 |
EP3895123A1 (de) | 2021-10-20 |
JP2022515046A (ja) | 2022-02-17 |
US20210350548A1 (en) | 2021-11-11 |
CN113168703A (zh) | 2021-07-23 |
KR20210102343A (ko) | 2021-08-19 |
US11989889B2 (en) | 2024-05-21 |
DE102018221617A1 (de) | 2020-06-18 |
Legal Events

Code | Title | Reference
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document: 19797642 (EP, A1)
ENP | Entry into the national phase | Ref document: 2021533439 (JP, A)
NENP | Non-entry into the national phase | Ref country code: DE
ENP | Entry into the national phase | Ref document: 20217021389 (KR, A)
ENP | Entry into the national phase | Ref document: 2019797642 (EP); effective date: 2021-07-13