WO2023061955A1 - Method and device for determining a vehicle's own position (Verfahren und Vorrichtung zum Bestimmen einer Eigenposition eines Fahrzeugs) - Google Patents


Info

Publication number
WO2023061955A1
WO2023061955A1 · PCT/EP2022/078136 · EP2022078136W
Authority
WO
WIPO (PCT)
Prior art keywords
landmark
lines
line
vehicle
detection
Prior art date
Application number
PCT/EP2022/078136
Other languages
German (de)
English (en)
French (fr)
Inventor
Roland Kube
Michael Holicki
Ralph Hänsel
Timo Iken
Carolin Last
Stefan Wappler
Original Assignee
Volkswagen Aktiengesellschaft
Cariad Se
Application filed by Volkswagen Aktiengesellschaft, Cariad Se filed Critical Volkswagen Aktiengesellschaft
Priority to CN202280067941.XA priority Critical patent/CN118103881A/zh
Publication of WO2023061955A1 publication Critical patent/WO2023061955A1/de

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • The present invention relates to a method for determining a vehicle's own position by detecting a landmark together with a data set relating to a detection position of the vehicle, assigning the detected landmark to a map mark of a map, and obtaining the own position from the data set relating to the detection position and a mapped position of the map mark.
  • The present invention also relates to a corresponding device for determining the own position and to a vehicle equipped with such a device.
  • In this document, the word "position" is also used as shorthand for "pose", i.e. position plus orientation.
  • Motor vehicles, and in particular autonomously driving motor vehicles, carry a large number of sensors in order to perceive their surroundings.
  • One or more cameras and possibly also ultrasonic sensors, radar sensors and the like are used in such vehicles.
  • The cameras are used to extract natural landmarks such as bollards and ground markings.
  • The recorded images are usually compared with external 3D maps, from which the vehicle's own position can be determined.
  • US 2016/0305794 A1 discloses a vehicle position estimation system that estimates the position of a vehicle using a landmark.
  • A control device installed on board the vehicle acquires landmark information, including position information, for a landmark that can be recognized on the road where the vehicle is expected to be located.
  • The on-board control device evaluates the recognition results for the landmark and transmits them, together with the landmark image recognized by the camera, to a server.
  • The server combines the evaluated recognition results with the received camera images and reflects them in recognition scores for the landmark information, which are transmitted back to the on-board controller.
  • A method and a system for classifying data points in a point cloud representing the surroundings of a vehicle are known from US 2019/0226853 A1.
  • Features of a digital map that relate to an assumed current position of the vehicle are used for this purpose.
  • Such methods and systems can be used to detect road actors, such as other vehicles, in the vicinity of a vehicle.
  • The method or system can preferably be used in highly and fully automated driving applications.
  • The object of the present invention is to present a method and a device with which different types of objects can be extracted from sensor images (in particular camera images) in order to determine a vehicle's own position.
  • To this end, a method for determining the own position of a vehicle is provided.
  • The vehicle is preferably a motor vehicle, but in principle it can be any type of vehicle that moves in an environment and may have to orient itself there.
  • First, a landmark and a data set relating to a detection position of the vehicle are acquired.
  • The landmark is detected, for example, by a camera that generates a corresponding image of the landmark; if necessary, the landmark is recorded in a sequence of several images. The detection can, however, also take place with a different sensor system based, for example, on ultrasound, radar or laser technology.
  • In addition, a data set relating to a detection position of the vehicle is recorded.
  • This data set can include, for example, odometry data and a position of the vehicle relative to the landmark. If necessary, the data set can also include rough absolute positioning data (e.g. GPS data).
  • The detected landmark is then assigned to a map mark of a map.
  • The map is preferably a digital 3D map in which landmarks are entered in more or less detail. The landmarks entered in the map are referred to as map marks in the present document, in contrast to the real landmarks.
  • Finally, the vehicle's own position is obtained from the data set relating to the detection position and the mapped position of the map mark. Since the exact position of the landmark is known from the corresponding map mark, the absolute detection position, i.e. the vehicle's own position, can be inferred from the data set, which directly or indirectly contains the detection position of the vehicle at the moment the landmark is detected.
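The last step admits a compact geometric sketch: if the map stores the landmark's absolute 2D position and the data set contains the landmark's position relative to the vehicle plus a heading estimate, the own position follows by rotating the relative offset into the world frame and subtracting it from the mapped position. The function below is only an illustration; the frame convention (x forward, y left) and all names are assumptions, not taken from the patent.

```python
import math

def ego_position(map_mark_xy, rel_xy_vehicle, heading_rad):
    """Recover the vehicle's absolute 2D position from the mapped
    position of a landmark and the landmark's position measured
    relative to the vehicle (assumed vehicle frame: x forward, y left).
    """
    mx, my = map_mark_xy
    rx, ry = rel_xy_vehicle
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    # Rotate the relative offset into the world frame...
    wx = c * rx - s * ry
    wy = s * rx + c * ry
    # ...then subtract it from the mapped landmark position.
    return (mx - wx, my - wy)
```

For example, a landmark mapped at (2, 8) and seen 5 m straight ahead by a vehicle heading along +y yields the own position (2, 3).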
  • The detection of the landmark includes the extraction of at least two lines from a raw image of the landmark.
  • The raw image of the landmark is, for example, a pixel image obtained with a corresponding sensor system (e.g. a camera) of the vehicle.
  • The raw image can also be a pre-processed image from the sensor system.
  • At least two lines are extracted from the raw image by line extraction. This extraction takes place as part of the detection of the landmark. In principle, more than two lines can of course be extracted for a landmark; it is often necessary to extract a large number of lines in order to recognize a certain type of landmark.
  • Each line is assigned a direction. This turns the line into a vector that not only reflects the course of the line but can also carry additional information by virtue of its direction.
  • Furthermore, a descriptor is generated for each line.
  • The descriptor contains information about which color transition or brightness transition is present perpendicular to the line in question, relative to its assigned direction. If a line is traversed along its assigned direction, the line in the raw image can mark a light-to-dark or a dark-to-light transition; because of the specified direction, a transition from left to right or from right to left can also be distinguished. For example, a white pavement marking is a white stripe on dark asphalt, bounded by two parallel lines.
  • A direction can be assigned to these lines, e.g. a direction pointing away from the camera.
  • One of the two lines then represents a dark-to-light transition and the other a light-to-dark transition with respect to the assigned direction.
  • The corresponding transition information is called a descriptor.
  • In this way, a ground marking can be inferred in a resource-saving manner.
  • The landmark is thus determined from the at least two lines and the associated descriptors.
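As a concrete illustration, a minimal descriptor of this kind can be computed by sampling the image brightness on either side of a directed line; which side counts as "left" follows from the assigned direction. A numpy sketch under assumed conventions (single-channel image, image y axis pointing down, fixed sampling offset):

```python
import numpy as np

def transition_descriptor(img, p0, p1, offset=2.0):
    """Classify the brightness transition perpendicular to a directed
    line from p0 to p1 in a grayscale image, read from left to right
    of the travel direction. Sampling scheme is illustrative; the
    patent leaves the exact construction open.
    """
    a = np.asarray(p0, float)
    b = np.asarray(p1, float)
    d = (b - a) / np.linalg.norm(b - a)
    n = np.array([-d[1], d[0]])          # left normal of the direction
    mid = (a + b) / 2.0
    left = mid + offset * n
    right = mid - offset * n
    lv = img[int(round(left[1])), int(round(left[0]))]
    rv = img[int(round(right[1])), int(round(right[0]))]
    return 'light_to_dark' if lv > rv else 'dark_to_light'
```

Traversing the same edge in the opposite direction flips the descriptor, which is exactly the property the directed lines exploit.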
  • Each line can additionally be assigned orientation information indicating whether it runs mainly horizontally or vertically, and this orientation information is used to determine the landmark.
  • The extrinsic camera parameters describe the pose of the camera in an external coordinate system (e.g. a world coordinate system),
  • while the intrinsic parameters do not depend on the position and orientation of the camera but describe its internal geometry (e.g. focal length et cetera).
  • With these parameters, the detected lines can be classified into horizontal and vertical lines. It is not mandatory that a line runs exactly horizontally or vertically;
  • the main directional component of the line, which runs either horizontally or vertically, can be decisive.
  • This orientation information (vertical or horizontal) is then used to determine the landmark: ground markings usually have horizontal main directional components (depending on the camera position), while bollards usually produce vertical lines.
  • For two lines, it can further be determined whether they have an intersection point, and this information is used to determine the landmark. If the two lines do not intersect but run parallel, for example, the landmark can be a straight, strip-shaped ground marking or a bollard. If, on the other hand, the two lines intersect, it can specifically be a corner of a parking bay bordered by ground markings. The point of intersection can therefore point to a very specific landmark, namely a corner of a parking bay. Especially in a multi-storey car park or in a parking lot, such an intersection can be a very helpful cue for orientation.
  • A direction can now be assigned to each of the two lines that meet at the intersection point.
  • The vector product of the two resulting vectors gives a product vector that is perpendicular to both.
  • The two vectors together with the product vector form a right-handed system, which uniquely determines the orientation of the product vector.
  • The vector product, or the direction of the product vector, can thus be used in a simple manner to classify a landmark more precisely. For example, a left corner of a parking bay yields an upward product vector from its intersecting lines, while a right corner yields a downward product vector. In this way, corners of parking bays can easily be distinguished.
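For 2D image lines this reduces to the sign of the z component of the cross product of the two direction vectors. A tiny sketch (the left/right naming is an assumed convention; the patent only requires that the two corner types yield opposite signs):

```python
def corner_type(v1, v2):
    """Classify a corner from the two line direction vectors meeting
    there (both assumed to point away from the intersection).
    The z component of their cross product changes sign between the
    two corner types; the 'left'/'right' labels are illustrative.
    """
    z = v1[0] * v2[1] - v1[1] * v2[0]
    return 'left' if z > 0 else 'right'
```

For instance, directions (1, 0) and (0, 1) at a corner give a positive z component, while (-1, 0) and (0, 1) give a negative one.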
  • This property of the vector product can be exploited for rectangular landmarks in general.
  • Such rectangular landmarks can also be, for example, ringed bollards, whose alternating red and white bands appear as rectangles in a camera image.
  • This constellation of rectangles can likewise be analyzed easily using vector products.
  • To this end, at least two corners at which lines intersect are determined.
  • At one of the at least two corners the vector product has a first direction, and at the other corner it has a second direction opposite to the first. This is because one of the vectors at, for example, the left front corner of the rectangular landmark is antiparallel to the corresponding vector at the right front corner; because of this antiparallelism, the product vectors also have opposite directions.
  • According to a further embodiment, two parallel vertical lines are determined, an object is assigned to these lines on the basis of their descriptors, the change in the distance between the two lines while the vehicle is travelling is determined using odometry, the width of the object is inferred from this change, and the object is identified on the basis of the width.
  • This procedure can be particularly useful for identifying bollards.
  • A bollard or other vertically oriented object (such as a sign pole) produces two parallel vertical lines.
  • The descriptors describe a brightness or color transition at the edges of the object. To narrow the object type down further, its actual width must be determined.
  • The path travelled is known from odometry, so that the distance to the object, and thus the real distance between the two lines, can be deduced. If an object is further away, the distance between its lines changes less during vehicle motion than for an object closer to the vehicle. Since the width of the object can be inferred from this change, the object can be identified more precisely on the basis of its width. For example, if a width of 10 centimetres is determined, the object can be identified as a bollard, and a street sign pole can be ruled out.
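Under a pinhole model this width estimate can be written in closed form: with pixel widths p1 and p2 measured before and after driving straight toward the object over a known odometry distance Δ, and p = f·w/d, eliminating the unknown distances gives w = Δ·p1·p2 / (f·(p2 − p1)). A sketch with assumed variable names (straight approach and known focal length in pixels are assumptions):

```python
def object_width(p1_px, p2_px, travel_m, focal_px):
    """Metric width of a vertical object from its pixel width at two
    points of a straight approach. From p = f * w / d and
    d1 - d2 = travel_m:  w = travel_m * p1 * p2 / (f * (p2 - p1)).
    A sketch, not the patent's exact formulation.
    """
    return travel_m * p1_px * p2_px / (focal_px * (p2_px - p1_px))
```

With f = 1000 px, a 0.1 m object seen from 10 m (10 px) and then 8 m (12.5 px) after 2 m of travel recovers the 0.1 m width exactly.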
  • In a further embodiment, the lines extracted during the detection of the landmark are classified into three groups: vertical lines, non-vertical lines below a detected horizon, and non-vertical lines above the horizon.
  • Such a classification is helpful for identifying the detected landmarks.
  • The vertical lines are candidates for vertical objects such as bollards.
  • Non-vertical lines below the horizon are candidates for ground markings, while non-vertical lines above the horizon can usually be discarded.
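The three-group classification above can be sketched as a simple rule over an image line segment and a horizon row. This approximates "vertical" by a near-vertical image direction, which holds for an undistorted, roughly level camera; the thresholds are illustrative assumptions:

```python
def classify_line(p0, p1, horizon_row, vert_tol=0.1):
    """Sort an image line segment into the three groups named in the
    text: 'vertical' (candidate bollard edge), 'ground' (non-vertical,
    below the horizon; candidate ground marking) or 'discard'
    (non-vertical, above the horizon).
    """
    dx = abs(p1[0] - p0[0])
    dy = abs(p1[1] - p0[1])
    if dx <= vert_tol * max(dy, 1e-9):
        return 'vertical'
    # Image y grows downwards, so "below the horizon" means larger y.
    if (p0[1] + p1[1]) / 2.0 > horizon_row:
        return 'ground'
    return 'discard'
```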
  • Furthermore, extracted lines whose endpoints lie within a predetermined maximum distance can be grouped using known camera intrinsics and extrinsics, and this grouping can be used to determine the landmark.
  • Line pairs with directly adjacent endpoints can represent segments of a common 3D straight line.
  • Other pairs of lines whose endpoints lie in the immediate vicinity, i.e. within the specified maximum distance, can instead have an intersection point.
  • Such a pair of lines with an intersection point can then be used, for example, to identify a corner of a rectangular landmark (e.g. a parking bay).
  • Grouping lines with adjacent endpoints can therefore be used for the further identification of landmarks.
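A minimal version of this grouping pairs segments whose endpoints are close and labels each pair by its angle difference: roughly collinear pairs may be fragments of one 3D straight line, clearly non-parallel pairs are corner candidates. The record layout and thresholds below are illustrative assumptions:

```python
import math

def pair_lines(lines, max_dist=5.0, angle_tol=0.2):
    """Group line segments whose endpoints lie within max_dist pixels
    and label each pair 'collinear' (possible fragments of one 3D
    line) or 'corner' (intersection candidate).
    lines: list of ((x0, y0), (x1, y1)).
    """
    def angle(seg):
        (x0, y0), (x1, y1) = seg
        return math.atan2(y1 - y0, x1 - x0)

    pairs = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            close = any(math.dist(a, b) <= max_dist
                        for a in lines[i] for b in lines[j])
            if not close:
                continue
            da = abs(angle(lines[i]) - angle(lines[j])) % math.pi
            da = min(da, math.pi - da)   # undirected angle difference
            pairs.append((i, j, 'collinear' if da < angle_tol else 'corner'))
    return pairs
```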
  • The invention also includes developments of the device according to the invention that have features as already described in connection with the developments of the method according to the invention. For this reason, the corresponding developments of the device according to the invention are not described again here.
  • The invention further includes the combinations of features of the described embodiments.
  • FIG. 1 shows a schematic block diagram of an exemplary embodiment of a method according to the invention;
  • FIG. 2 shows a diagram of the landmark detection according to the invention, using the example of ground markings;
  • FIG. 3 shows a schematic representation of an exemplary embodiment of a vehicle according to the invention.
  • The exemplary embodiments explained below are preferred exemplary embodiments of the invention.
  • In the exemplary embodiments, the described components each represent individual features of the invention that are to be considered independently of one another, that also develop the invention independently of one another, and that are therefore also to be regarded as part of the invention individually or in a combination other than that shown.
  • The exemplary embodiments described can also be supplemented by further features of the invention already described.
  • FIG. 1 shows a possible embodiment of a method according to the invention with individual steps S1 to S7.
  • A raw image is captured in a first step S1, for example by a camera, an ultrasonic sensor, a radar sensor or the like.
  • In a second step S2, at least two lines are extracted from the raw image of the landmark as part of the detection of a landmark. If necessary, more than two lines are extracted.
  • A predeterminable type of landmark is then to be identified on the basis of the extracted lines.
  • To this end, a direction is assigned to each line in a step S3. The direction can be defined by a start point and an end point of the line.
  • In a step S4, a descriptor is generated for each line, containing information about which color transition or brightness transition is present perpendicular to the line in question, relative to its assigned direction.
  • Specifically, the descriptor contains information about a light-dark transition. This also requires the information on which side of the line the dark area lies and on which side the light area lies, which is why reference is made to the assigned direction of the line. The direction unambiguously defines on which side (e.g. left or right) the light and dark areas lie.
  • In a step S5, the landmark is determined from the at least two lines and the associated descriptors.
  • The lines and the descriptors, which carry color or brightness information perpendicular to a line relative to its direction, enable an assignment to a landmark in the simplest case. If further information is available, it can also be used to identify the landmark more precisely.
  • The detected landmark can then be assigned to a map mark of a map in a step S6.
  • For this, additional information may be necessary, for example from odometry or other positioning systems. In this case, information on the detection position or pose of the vehicle must therefore be obtained.
  • The data set also contains, for example, information about the position or pose of the vehicle relative to the landmark.
  • The vehicle's own position can then be obtained in step S7. Since, for example, the absolute position of the map mark and the relative position of the vehicle at the moment of landmark detection are known, the vehicle's own position at that moment can be determined from them.
  • For this purpose, a line extraction can be carried out in which, for example, lines or line segments are extracted from a grey-value camera image using a standard line detector (e.g. the LSD method, Line Segment Detector).
  • Each segment can be represented by a start and an end point in the image, and the order of start and end point is used to assign the line a direction, which in turn can be used to encode the light-dark transition perpendicular to the line.
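One way to realize this ordering convention is to flip start and end point so that, say, the brighter side always lies to the left of the travel direction; the direction then encodes the transition implicitly. A numpy sketch (the left-is-bright convention and the sampling offset are assumptions; the text leaves the exact convention open):

```python
import numpy as np

def orient_segment(img, p0, p1, offset=2.0):
    """Order a segment's start and end point so that the brighter
    side of the line lies to the left of the travel direction.
    Returns the (start, end) pair in that order.
    """
    a = np.asarray(p0, float)
    b = np.asarray(p1, float)
    d = (b - a) / np.linalg.norm(b - a)
    n = np.array([-d[1], d[0]])          # left normal of the direction
    mid = (a + b) / 2.0
    sl = mid + offset * n
    sr = mid - offset * n
    left = img[int(round(sl[1])), int(round(sl[0]))]
    right = img[int(round(sr[1])), int(round(sr[0]))]
    return (p0, p1) if left >= right else (p1, p0)
```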
  • Next, a rough classification of the lines can take place.
  • The extracted lines can be divided into three groups, for example: potentially perpendicular lines, i.e. lines that lie in a plane that is perpendicular to the ground plane and passes through the camera centre; these lines are candidates for vertical objects such as bollards.
  • Non-perpendicular lines below the horizon are candidates for ground markings.
  • Non-perpendicular lines above the horizon can be discarded.
  • Furthermore, pairs of lines or line segments that have endpoints in the immediate vicinity, i.e. within a specified maximum distance, are analysed.
  • These pairs can be divided into two classes:
  • pairs of lines that could come from a common 3D straight line which, due to camera distortion for example, breaks down during projection into several segments that do not lie on one image line (these pairs can possibly be discarded), and pairs whose lines intersect.
  • On this basis, ground markings can be extracted.
  • Such pavement markings are typically white stripes on dark asphalt. They are reliably recognizable due to their geometry and their contrast, which is why ground markings are often extracted from the data as the first landmark type. In particular, these are lines or segments below the horizon, which carry the direction of the light-dark transition as a descriptor or additional information.
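Such a stripe can be found by pairing parallel lines whose descriptors show opposite brightness transitions: a dark-to-light edge followed by a nearby parallel light-to-dark edge bounds a bright stripe on dark ground. A sketch with an illustrative record layout and thresholds:

```python
import math

def find_stripes(lines, max_gap_px=30.0, angle_tol=0.1):
    """Pair parallel lines with opposite brightness transitions.
    lines: list of (midpoint_xy, angle_rad, transition), where
    transition is 'dark_to_light' or 'light_to_dark'.
    Returns index pairs of lines that bound a candidate stripe.
    """
    stripes = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (m1, a1, t1), (m2, a2, t2) = lines[i], lines[j]
            da = abs(a1 - a2) % math.pi
            da = min(da, math.pi - da)   # undirected angle difference
            if da < angle_tol and math.dist(m1, m2) <= max_gap_px and t1 != t2:
                stripes.append((i, j))
    return stripes
```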
  • In addition, intersection landmarks can be extracted.
  • Such intersection landmarks are, for example, the ground markings 1 of parking bays 2, as shown by way of example in FIG. 2.
  • The four parking bays 2 shown are separated from a lane 3 by a first marking section 4.
  • The parking bays 2 are in turn separated from one another by second marking sections 5.
  • The second marking sections 5 are perpendicular to the first marking section 4, resulting in rectangular parking bays 2.
  • The first marking section 4 directly meets the second marking sections 5. Because the parking bays are dark here and the ground markings are light, a line detector clearly detects the transitions or boundaries as lines.
  • A first line 6 is located at the dark-light transition from a parking bay 2 to the first marking section 4.
  • A second line 7 is located at the dark-light transition to a second marking section 5.
  • The two lines 6 and 7 intersect in an intersection point 8.
  • Line 6 is assigned a direction pointing to the right in the image, and line 7 a direction pointing upwards in the image.
  • The direction of each line preferably points away from the intersection point 8.
  • A second class of landmarks can now be identified on the basis of such intersection points 8.
  • The lines are also provided with a descriptor containing the directions assigned to the lines by definition.
  • The two lines 6 and 7 divide the area around the intersection point 8 into two sectors, a light and a dark sector.
  • The main direction of the light sector can additionally be added to the descriptor as a "white vector".
  • A third vector 9 can be determined by vector multiplication from the two vectors that are defined by the lines 6 and 7 including their directions.
  • This third vector 9 is perpendicular to the two vectors based on lines 6 and 7.
  • Here, the third vector 9 points out of the image plane. Pointing out of the image plane, it marks a left corner of a parking bay 2.
  • At a right corner, the vector based on the first line 6 would have the opposite direction, so the vector product yields a third vector pointing into the drawing plane. It therefore has the opposite direction to the third vector 9 at the left corner of the parking bay 2 shown in FIG. 2.
  • The left and right corners of a parking bay 2 can thus be easily distinguished using the vector product.
  • Furthermore, linear bollards can be extracted.
  • Linear bollards can be viewed as pairs of parallel lines, with the distance between the lines serving as a descriptor. This distance can be obtained, for example, by observing the bollard several times during a journey: on the basis of the odometry data accumulated along the journey, the actual distance between the lines, i.e. the thickness of the bollard, can finally be determined.
  • A bollard can thus be distinguished from the pole of a traffic sign, for example.
  • In addition, ringed bollards can be extracted.
  • Ringed bollards have known colors (e.g. red/white).
  • The individual rings of such a bollard appear as rectangles, which in turn can be identified in a similar way to the rectangular parking bays.
  • The rectangles are linked to linear, vertical objects using the known camera parameters; these objects represent the extracted bollards.
  • The number of rings and the width of the object can be used as a descriptor.
  • FIG. 3 schematically shows a vehicle (here a motor vehicle) 10 that has a device 11 for determining its own position or pose.
  • The device 11 has a detection device 12, represented here symbolically as a camera on the side mirror 13.
  • The detection device 12 can also be based on other sensors, as already mentioned above (for example ultrasound, radar, etc.).
  • The detection device 12 also serves to record a data set relating to a detection position of the vehicle. This means that the detection device 12 must be able to detect a position of the device 11 or of the vehicle 10, if necessary in relation to the landmark.
  • This detection position can be a rough or estimated position. In many cases, the detected landmark can only be assigned to a map mark by means of this rough position. If necessary, odometry and/or other position detection techniques are used to determine it.
  • The device 11 also includes an assignment device 14 for assigning the landmark detected by the detection device 12 to a map mark of a map, as well as a determination device 15 for obtaining the own position from the data set relating to the detection position and a mapped position of the map mark.
  • The detection device 12 is able to extract at least two lines from a raw image of the landmark and to assign a direction to each line.
  • The detection device is also designed to generate a descriptor for each line, containing information about which color transition or brightness transition is present perpendicular to the line in question, relative to its assigned direction.
  • From the at least two lines and the associated descriptors, the detection device 12 can determine the landmark.
  • The invention and the exemplary embodiments enable landmarks to be identified from camera images, for example, independently of special camera models.
  • The method described can therefore be transferred to any camera model with known parameters.
  • The landmark detection is largely independent of lighting and weather conditions and of the type of scene being viewed.
  • The method can also be implemented without manual annotation effort, without graphics cards, and with moderate CPU effort.
  • Descriptors are provided that allow a comparison with a 3D map.
  • The 3D map should contain the same types of landmarks as are detected from the images; however, it does not necessarily have to be generated from camera images.
  • Preferably, a camera system permanently installed in the vehicle with known intrinsics and extrinsics is used.
  • A computer integrated in the vehicle can have modules that carry out the landmark detection from the camera images.
  • A localization module of the computer can compare the extracted landmarks with the 3D map and use this comparison to determine the position or pose of the vehicle.
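For the localization module, the comparison with the map can be sketched in 2D: once extracted landmarks have been matched to map marks, a least-squares rigid transform between the landmark positions in the vehicle frame and their mapped positions yields the vehicle's pose. A Kabsch-style numpy sketch (the patent does not prescribe this particular solver; correspondences are assumed to be given):

```python
import numpy as np

def fit_pose_2d(vehicle_pts, map_pts):
    """Least-squares 2D rigid transform (R, t) mapping landmark
    positions measured in the vehicle frame onto their mapped
    positions; t is then the vehicle's own position and R encodes
    its heading.
    """
    P = np.asarray(vehicle_pts, float)
    Q = np.asarray(map_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T   # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```

For noise-free correspondences of at least two distinct landmarks the fit is exact; with noisy detections it gives the least-squares pose.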

PCT/EP2022/078136 2021-10-11 2022-10-10 Method and device for determining a vehicle's own position (Verfahren und Vorrichtung zum Bestimmen einer Eigenposition eines Fahrzeugs) WO2023061955A1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280067941.XA CN118103881A (zh) 2021-10-11 2022-10-10 用于确定车辆自身位置的方法及装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021126288.7A DE102021126288A1 (de) 2021-10-11 2021-10-11 Verfahren und Vorrichtung zum Bestimmen einer Eigenposition eines Fahrzeugs
DE102021126288.7 2021-10-11

Publications (1)

Publication Number Publication Date
WO2023061955A1 (de) 2023-04-20

Family

ID=84329929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/078136 WO2023061955A1 (de) 2021-10-11 2022-10-10 Verfahren und vorrichtung zum bestimmen einer eigenposition eines fahrzeugs

Country Status (3)

Country Link
CN (1) CN118103881A (zh)
DE (1) DE102021126288A1 (zh)
WO (1) WO2023061955A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160305794A1 (en) 2013-12-06 2016-10-20 Hitachi Automotive Systems, Ltd. Vehicle position estimation system, device, method, and camera device
US20170248960A1 (en) * 2015-02-10 2017-08-31 Mobileye Vision Technologies Ltd. Sparse map for autonomous vehicle navigation
DE102016214028A1 (de) * 2016-07-29 2018-02-01 Volkswagen Aktiengesellschaft Verfahren und System zum Bestimmen einer Position einer mobilen Einheit
US20190226853A1 (en) 2016-09-28 2019-07-25 Tomtom Global Content B.V. Methods and Systems for Generating and Using Localisation Reference Data
CN110197173A (zh) * 2019-06-13 2019-09-03 重庆邮电大学 一种基于双目视觉的路沿检测方法
WO2021004810A1 (de) * 2019-07-11 2021-01-14 Volkswagen Aktiengesellschaft Verfahren und vorrichtung zum kamerabasierten bestimmen eines abstandes eines bewegten objektes im umfeld eines kraftfahrzeugs



Also Published As

Publication number Publication date
CN118103881A (zh) 2024-05-28
DE102021126288A1 (de) 2023-04-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22801097

Country of ref document: EP

Kind code of ref document: A1