WO2023213482A1 - Method and device for generating an environment image for a parking assistant of a vehicle


Info

Publication number
WO2023213482A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
environment
identified
image
open spaces
Prior art date
Application number
PCT/EP2023/058624
Other languages
German (de)
English (en)
Inventor
Philipp Hüger
Carolin Last
Hellward Broszio
Original Assignee
Volkswagen Aktiengesellschaft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volkswagen Aktiengesellschaft
Publication of WO2023213482A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view
    • B60R1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R2300/301 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R2300/607 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement
    • B60R2300/806 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement for aiding parking

Definitions

  • The invention relates to a method and a device for generating an environment image for a parking assistant of a vehicle.
  • The invention is based on the object of improving a method and a device for generating an environment image for a parking assistant of a vehicle.
  • A method for generating an environment image for a parking assistant of a vehicle is proposed, wherein an environment of the vehicle is captured at several positions of the vehicle by means of several environment cameras and the environment is captured by means of at least one distance-resolving sensor; wherein open spaces and raised objects in the environment are identified based on captured sensor data from the at least one distance-resolving sensor; wherein a local environment map is generated based on the captured sensor data and/or the identified open spaces and the identified raised objects; wherein identified open spaces and identified raised objects are linked, depending on the position and the detection direction, to camera images corresponding to them; wherein an image signal for an environment image in a top view of the vehicle is generated by projecting the camera images, at least partially depending on the position and the detection direction, onto the open spaces and objects in the environment map; and wherein the generated image signal is provided.
  • A device for generating an environment image for a parking assistant of a vehicle is also proposed, comprising several environment cameras that are set up to capture an environment of the vehicle, at least one distance-resolving sensor that is set up to capture the environment, and a data processing device, wherein the data processing device is set up to identify open spaces and raised objects in the environment based on sensor data captured at several positions of the vehicle from the at least one distance-resolving sensor, to generate a local environment map based on the captured sensor data and/or the identified open spaces and the identified raised objects, to link identified open spaces and identified raised objects, depending on the position and the detection direction, with camera images corresponding to them, to generate an image signal for an environment image in a top view of the vehicle by projecting the camera images for this purpose, at least partially depending on the position and the detection direction, onto the open spaces and objects in the environment map, and to provide the generated image signal.
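  • As a non-authoritative illustration of the claimed processing steps, the following minimal Python sketch accumulates detections over several positions; all names, the height threshold, and the map structure are assumptions made for illustration and are not taken from the patent:

    import numpy as np

    def classify(points, height_threshold=0.3):
        # Split 3-D sensor points (x, y, z) into ground returns (open-space
        # candidates) and raised-object returns. The 0.3 m threshold is an
        # illustrative assumption, not a value from the patent.
        points = np.asarray(points, dtype=float)
        raised = points[points[:, 2] > height_threshold]
        ground = points[points[:, 2] <= height_threshold]
        return ground, raised

    env_map = {"open_spaces": [], "raised_objects": [], "textures": {}}

    def update_map(env_map, pose, sensor_points, camera_images):
        # One update step per vehicle position: identify open spaces and
        # raised objects, extend the local map, and link the camera images
        # to the detections by position (the projection step is omitted).
        ground, raised = classify(sensor_points)
        env_map["open_spaces"].append((pose, ground))
        env_map["raised_objects"].append((pose, raised))
        env_map["textures"][pose] = camera_images
        return env_map

    update_map(env_map, pose=(0.0, 0.0, 0.0),
               sensor_points=[[2.0, 1.0, 0.05], [4.0, -1.0, 1.2]],
               camera_images=["front", "rear", "left", "right"])
    print(len(env_map["raised_objects"][0][1]))  # 1 raised return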
  • The method and the device make it possible to generate an improved camera-based environment image in a top view.
  • Several positions of the vehicle are taken into account, so that not only an environment image for a current position of the vehicle can be provided, but also an environment image that goes beyond this.
  • In particular, the multiple positions at which the sensor data and the camera images are captured correspond to positions within a parking scenario.
  • In this way, a history of the environment can be taken into account, that is, a camera-based environment image, especially in a parking scenario, can be built up step by step.
  • For example, the vehicle drives (position by position) through the parking scenario (e.g. a parking lot or a parking garage), with the environment image being generated step by step while driving through the parking scenario.
  • The positions are determined, for example, by means of the vehicle's own odometry and/or retrieved from it.
  • The environment of the vehicle is captured by means of several environment cameras and by means of at least one distance-resolving sensor. This is done in particular at each of the positions.
  • A number and an orientation of the environment cameras are selected, and the environment cameras are arranged, in such a way that they can capture a complete image of the environment, in particular all horizontal directions around the vehicle, that is, a horizontal angle of 0 to 360°. Based on the captured sensor data from the at least one distance-resolving sensor, open spaces and raised objects in the environment are identified.
  • A local environment map is generated based on the captured sensor data and/or the identified open spaces and the identified raised objects. This is done in particular using methods known per se.
  • The identified open spaces and the identified raised objects are linked to corresponding camera images depending on the position and the detection direction. For example, image elements of the camera images captured at the multiple positions can each be linked to corresponding solid angles of the detection areas of the environment cameras and stored for the positions. For each of the positions, this enables the image elements in the captured camera images to be assigned to areas in the environment, whereby the camera images can be projected onto the respective areas, as in the sketch below.
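  • A minimal sketch of such a pixel-to-direction link, assuming an ideal pinhole model as a stand-in for the real (fisheye) optics; the intrinsic matrix K and the pose are illustrative values, not from the patent:

    import numpy as np

    # Hypothetical camera intrinsics (pinhole stand-in for the fisheye optics).
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def pixel_ray_world(u, v, cam_to_world):
        # Direction of the viewing ray of pixel (u, v) in map coordinates;
        # this is the link between an image element and a solid angle.
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
        ray_cam /= np.linalg.norm(ray_cam)
        return cam_to_world[:3, :3] @ ray_cam

    # Camera mounted 1.5 m above ground at position (2, 0), identity rotation:
    cam_to_world = np.eye(4)
    cam_to_world[:3, 3] = [2.0, 0.0, 1.5]
    print(pixel_ray_world(320, 240, cam_to_world))  # optical axis, here [0 0 1]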
  • An image signal for an environment image in a top view of the vehicle is generated by projecting the camera images onto the open spaces and objects in the environment map, at least partially in a position-dependent and detection-direction-dependent manner.
  • The generated image signal is provided, for example, as an analog or digital signal or as a digital data packet.
  • The generated image signal can then be displayed, for example, on a display device of the vehicle in order to support a driver of the vehicle when parking and/or maneuvering.
  • It can be provided that the environment image only takes into account a limited area of the environment, that is, that the history only extends back to a specific point in time or to a specific position in the past.
  • For this purpose, the environment map can, for example, be designed in the manner of a shift register, in which only sensor data and camera images captured up to a specific point in time in the past or up to a specific past position of the vehicle are taken into account (see the sketch below).
  • Alternatively, it can also be provided to generate the environment image without limitation until a memory space in the data processing device reserved for this purpose is completely occupied.
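  • A minimal sketch of such a shift-register-like history in Python, assuming a fixed number of retained positions (the value 50 is an illustrative choice, not from the patent):

    from collections import deque

    N = 50                     # number of past positions to keep (assumed)
    history = deque(maxlen=N)  # oldest entries drop out automatically

    def add_position(pose, sensor_data, camera_images):
        # Each entry bundles the data captured at one vehicle position.
        history.append({"pose": pose, "sensor": sensor_data, "images": camera_images})

    for i in range(60):
        add_position((float(i), 0.0), sensor_data=[], camera_images=[])
    print(len(history))  # 50 -- the 10 oldest positions were shifted out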
  • A distance-resolving sensor is in particular an ultrasonic sensor, a radar sensor or a lidar sensor. In principle, several (different) sensors can also be used. A number and/or an arrangement and/or an orientation of the at least one distance-resolving sensor is selected in particular such that a horizontal angle (e.g. azimuth) of 0 to 360° can be detected. A vertical angle range (e.g. elevation) lies, for example, in a range from 0 to approximately 30-45°, but can in principle also cover a different range.
  • Open spaces are, for example, areas in the environment for which the distance-resolved sensor data show that no object and/or no reflection above a predetermined vertical angle is detected within a predetermined minimum distance from the sensor.
  • The specified minimum distance is, for example, a few meters (e.g. 10 m, 15 m, etc.) and the specified vertical angle is, for example, 5°, 10° or 15°, etc.
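  • A minimal sketch of this open-space rule, using example values from the text (10 m, 10°); the function and argument names are illustrative assumptions:

    import numpy as np

    def is_open_space(ranges, elevations, min_distance=10.0, max_elevation_deg=10.0):
        # An area counts as an open space if no sensor return above the
        # predetermined vertical angle lies within the minimum distance.
        ranges = np.asarray(ranges, dtype=float)
        elevations = np.asarray(elevations, dtype=float)
        blocking = (ranges < min_distance) & (elevations > np.radians(max_elevation_deg))
        return not blocking.any()

    # One return at 4 m distance and 20 deg elevation -> not an open space:
    print(is_open_space([4.0], [np.radians(20.0)]))  # False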
  • The local environment map is in particular a three-dimensional environment map, which is described, for example, using a local Cartesian coordinate system.
  • Projecting is done in particular by placing the camera images as textures over the open spaces and raised objects.
  • The projection is carried out in a perspectively correct form, in particular in a position-dependent and detection-direction-dependent manner, so that the environment image is generated in a top view in such a way that it ideally corresponds to an image of the environment captured from a bird's eye view using a camera.
  • Methods known per se, such as stitching and/or brightness and/or contrast adjustments, are used in particular to connect different camera images with one another.
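  • As a non-authoritative sketch of a perspectively correct ground projection, the following snippet computes which image pixel observes a given ground point; this is the sampling step behind placing a camera image as a texture on the ground plane. A pinhole model stands in for the fisheye optics, and all numbers are illustrative:

    import numpy as np

    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def ground_to_pixel(point_xy, world_to_cam):
        # Image pixel at which a ground point (x, y, 0) is observed.
        p_world = np.array([point_xy[0], point_xy[1], 0.0, 1.0])
        p_cam = world_to_cam @ p_world
        uvw = K @ p_cam[:3]
        return uvw[:2] / uvw[2]

    # Camera 1.5 m above ground, pitched 45 degrees down, looking along +x:
    c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
    world_to_cam = np.array([[0.0, -1.0, 0.0, 0.0],
                             [-s, 0.0, -c, 1.5 * c],
                             [c, 0.0, -s, 1.5 * s],
                             [0.0, 0.0, 0.0, 1.0]])
    print(ground_to_pixel((3.0, 0.0), world_to_cam))  # approx. [320, 107]

  • In practice, overlapping views would additionally be blended (stitching) and adjusted in brightness and contrast, as the text notes.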
  • Parts of the device, in particular the data processing device, can be designed individually or collectively as a combination of hardware and software, for example as program code that is executed on a microcontroller or microprocessor. However, it can also be provided that parts are designed, individually or combined, as an application-specific integrated circuit (ASIC) and/or a field-programmable gate array (FPGA).
  • In one embodiment, a position-dependent ground projection plane is gradually defined for the multiple positions, onto which the camera images linked to the open spaces are projected.
  • Position-dependent means in particular that the ground projection plane is only defined where open spaces have been identified. In this way, a ground plane for the standing surface of the vehicle can be defined step by step. This is based on the idea that open spaces do not appear as open spaces from every position of the vehicle, for example because an object obscures the open space when viewed from some positions of the vehicle but not from others. However, if this open space can be detected and identified as an open space from several positions or from at least one position, this is sufficient to take this open space into account as such in the environment image. In particular, areas in the environment that have once been identified as open spaces always remain identified as open spaces, even if they are not identified as open spaces from other positions (see the sketch below).
  • The ground projection plane is defined in particular in a local (for example Cartesian) coordinate system of the environment map.
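  • A minimal sketch of the "once open, always open" rule on a grid-based ground projection plane; the grid size and cell resolution are illustrative assumptions:

    import numpy as np

    grid = np.zeros((200, 200), dtype=bool)   # False = not (yet) ground plane
    CELL = 0.1                                # assumed 10 cm per grid cell

    def mark_open(x, y):
        # A cell becomes part of the ground projection plane as soon as it
        # is identified as open space from at least one position; it is
        # never reset to False afterwards.
        i, j = int(x / CELL) + 100, int(y / CELL) + 100
        grid[i, j] = True

    mark_open(2.0, -1.0)   # seen as open from position A
    # ... from position B the same area is occluded: no update, cell stays free
    print(grid.sum())      # 1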
  • At least one position-dependent projection plane is gradually defined for the multiple positions, onto which the camera images associated with the raised objects are projected.
  • This allows raised objects to be displayed as such in the image of the environment in a top view.
  • If, for example, another vehicle is located in the environment, this other vehicle is identified as a raised object and, for example starting from the highest position of the associated sensor data, a projection plane arranged at this highest position is defined for the other vehicle.
  • This projection plane then ideally coincides with a height of a hood or a roof or a trunk lid (in the case of a hatchback) of the other vehicle.
  • The captured camera images are then projected onto the projection planes defined for this purpose, depending on the position and the detection direction, so that a more realistic representation results in the generated image signal or in the environment image in a top view.
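  • A minimal sketch of deriving such a projection plane from the highest position of an object's sensor data; the function name and the axis-aligned footprint are illustrative assumptions:

    import numpy as np

    def object_projection_plane(cluster_points):
        # Place the projection plane of a raised object at the highest
        # position of its associated sensor data, e.g. the roof edge of
        # another vehicle. cluster_points: (N, 3) array in map coordinates.
        pts = np.asarray(cluster_points, dtype=float)
        height = pts[:, 2].max()
        footprint = (pts[:, 0].min(), pts[:, 0].max(),
                     pts[:, 1].min(), pts[:, 1].max())
        return height, footprint

    pts = [[5.0, 1.0, 0.4], [5.5, 1.2, 1.1], [6.0, 1.1, 1.45]]  # e.g. lidar hits
    print(object_projection_plane(pts))  # (1.45, (5.0, 6.0, 1.0, 1.2))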
  • In one embodiment, when the at least one projection plane is generated, at least one area is added if the sensor data of the at least one distance-resolving sensor captured for this area corresponds to another vehicle.
  • A projection plane for a roof of another vehicle can be added, for example, if the height of a roof edge is known in the environment map. The roof is then added at the height that coincides with the edge of the roof.
  • For this purpose, other vehicles are detected, in particular using methods known per se, such as pattern recognition and/or machine learning. In principle, other objects can also be recognized and supplemented accordingly.
  • In one embodiment, a degree of the respective radial distortion of the environment cameras is taken into account when capturing and/or projecting.
  • Environment cameras generally have a large detection range (e.g. 180°).
  • Imaging optics are used to image this large detection range onto an image sensor.
  • The imaging optics image edge areas in particular in a highly distorted manner ("fisheye optics"), so that, compared to the center of the image, a larger solid angle is imaged onto an equal number of sensor elements of the image sensor of the environment cameras. This results in increasing distortion and decreasing resolution radially outward from the center of the image.
  • A predetermined threshold value can be taken into account here, for example specified as a number of image elements per solid angle or as a change in the number of image elements per solid angle.
  • In this way, the overall resolution of the environment image can be increased.
  • In addition, distortion and/or falsification during projection can be reduced or even prevented.
  • If an open space is captured at several positions from different directions, those camera images, or areas of those camera images, can be used that have the highest resolution or at least a resolution above a predetermined threshold value. Provision can also be made to dynamically adapt such a threshold value depending on the availability of captured camera images.
  • The threshold value can be adjusted dynamically accordingly. In one embodiment, only a predetermined inner image area of the captured camera images is used during projection. This also allows the radial distortion to be taken into account. In particular, this enables a variant that is particularly easy to implement and requires little effort, since the inner image area can be defined once and then only this inner image area is taken into account when projecting.
  • The inner image area can be defined, for example, by specifying a minimum resolution.
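  • A minimal sketch of restricting the projection to a predetermined inner image area: only pixels within a radius around the image center, where the resolution per solid angle remains high, are used. The radius fraction of 0.6 is an illustrative assumption, not a value from the patent:

    import numpy as np

    def inner_mask(height, width, radius_fraction=0.6):
        # Boolean mask selecting the inner image area of a fisheye camera;
        # pixels outside the radius are ignored during projection.
        v, u = np.mgrid[0:height, 0:width]
        cu, cv = (width - 1) / 2.0, (height - 1) / 2.0
        r = np.hypot(u - cu, v - cv)
        return r <= radius_fraction * min(cu, cv)

    mask = inner_mask(480, 640)
    print(mask.mean())  # fraction of the image used for projection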
  • In one embodiment, the image signal is extrapolated and/or estimated for areas of the environment image for which no camera data and/or no sensor data are available due to occlusion, based on neighboring areas that border on these areas.
  • In this way, an image signal can also be provided for areas that are not captured by the environment cameras.
  • The idea behind this is that, for example, a texture for a roof of another vehicle (or another object) can be supplemented and/or estimated when a texture of a hood, a tailgate and/or another part of the other vehicle (or of the other object) is known.
  • The roof is then supplemented with the help of the extrapolated and/or estimated texture, or an extrapolated and/or estimated camera image is projected onto the associated projection plane.
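  • A minimal sketch of such a texture estimation, here simply repeating the nearest observed texture row into the occluded area; real systems may use more elaborate extrapolation, and all names and sizes are illustrative:

    import numpy as np

    def extrapolate_rows(texture, known_rows):
        # texture: (H, W, 3) array; known_rows: number of observed rows.
        # Unknown rows (e.g. an occluded roof) are filled with the nearest
        # known row (e.g. from the observed hood).
        out = texture.copy()
        out[known_rows:] = texture[known_rows - 1]
        return out

    hood = np.random.randint(0, 255, (10, 16, 3), dtype=np.uint8)
    roof_and_hood = extrapolate_rows(np.vstack([hood, np.zeros_like(hood)]), 10)
    print(roof_and_hood.shape)  # (20, 16, 3)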
  • In one embodiment, identified raised objects are at least partially rendered in the environment image, this being done based on the captured sensor data of the distance-resolving sensor.
  • A corresponding image signal is generated for the at least partially rendered objects.
  • The rendering is carried out in particular using the data processing device and methods known per se.
  • In one embodiment, a three-dimensional point cloud of the environment is additionally generated and taken into account when identifying the open spaces and/or the raised objects in the environment.
  • In other words, distance-resolving data or three-dimensional data of the environment can also be generated based on the captured camera images and taken into account.
  • This is done, for example, using monocular SLAM (Simultaneous Localization and Mapping) methods, for example the Direct Sparse Odometry (DSO) method.
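  • As a non-authoritative illustration of obtaining three-dimensional data from camera images alone, the following sketch triangulates a single matched feature between two vehicle positions using OpenCV; a full monocular SLAM/DSO pipeline does this densely and jointly with pose estimation. The intrinsics, poses, and pixel coordinates are illustrative:

    import numpy as np
    import cv2

    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Projection matrices of the camera at two positions 1 m apart:
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

    pt1 = np.array([[320.0], [240.0]])   # feature pixel at position 1
    pt2 = np.array([[280.0], [240.0]])   # same feature at position 2
    X = cv2.triangulatePoints(P1, P2, pt1, pt2)
    print((X[:3] / X[3]).ravel())        # 3-D point, approx. [0, 0, 10]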
  • In particular, a vehicle is also provided, comprising at least one device according to one of the described embodiments.
  • A vehicle is in particular a motor vehicle.
  • FIG. 1 shows a schematic representation of an embodiment of the device for generating an environment image for a parking assistant of a vehicle;
  • FIG. 2 shows a schematic representation to illustrate sensor data from distance-resolving sensors;
  • FIG. 3 shows a schematic representation to illustrate a ground projection plane and projection planes for raised objects;
  • FIG. 4 shows a schematic representation of an environment image; and
  • FIGS. 5a and 5b show schematic representations to illustrate an embodiment of the device and the method.
  • FIG. 1 shows a schematic representation of an embodiment of the device 1 for generating an environment image 21 for a parking assistant of a vehicle 50.
  • The device 1 is arranged, for example, in a vehicle 50, in particular a motor vehicle.
  • The device 1 carries out the method described in this disclosure.
  • The device 1 includes several environment cameras 2, at least one distance-resolving sensor 3 and a data processing device 4.
  • The device 1 comprises, for example, four environment cameras 2 (for the sake of clarity, only one is shown in FIG. 1), which are each designed as 180° cameras with fisheye optics and are arranged on the vehicle 50 in such a way that a horizontal angular range of the environment from 0 to 360° around the vehicle 50 can be captured without gaps.
  • The at least one distance-resolving sensor 3 includes, for example, an ultrasonic sensor, a radar sensor and/or a lidar sensor.
  • A number and an arrangement of the distance-resolving sensors 3 are selected in particular such that a horizontal angular range of the environment from 0 to 360° around the vehicle 50 can be detected without gaps.
  • The environment cameras 2 and the distance-resolving sensors 3 are calibrated to one another, that is, a relative pose (relative position and relative orientation) to one another is known. This enables an assignment of captured sensor data 11 to captured camera images 10.
  • The data processing device 4 includes a computing device 4-1 and a memory 4-2.
  • The data processing device 4 is set up to identify open spaces and raised objects in the environment based on sensor data 11 of the at least one distance-resolving sensor 3 captured at several positions of the vehicle 50, to generate a local environment map 30 based on the captured sensor data 11 and/or the identified open spaces and the identified raised objects, and to link identified open spaces and identified raised objects, in a position-dependent and detection-direction-dependent manner, with camera images 10 corresponding to them.
  • The data processing device 4 is furthermore set up to generate an image signal 20 for an environment image 21 in a top view of the vehicle 50, to project the camera images 10 for this purpose, at least partially depending on the position and the detection direction, onto the open spaces and objects in the environment map 30, and to provide the generated image signal 20.
  • The provided image signal 20 can, for example, be fed to a display device 51 of the vehicle 50 and displayed there to support a driver when parking and/or maneuvering.
  • FIG. 2 shows a schematic representation to illustrate sensor data from distance-resolving sensors. Shown is an environment map 30 in which three-dimensionally located data points 31-x (only some are marked with a reference number) of an ultrasonic sensor (31-1), a radar sensor (31-2) and a lidar sensor (31-3) are stored.
  • The data points 31-x were captured by the sensors at several positions 32 and stored in the environment map 30.
  • The data points 31-x mark areas where raised objects 41 are arranged in the environment. Between the raised objects 41 there are open spaces 40 in the environment.
  • The raised objects 41 are identified, for example, by clustering or combining locally accumulated data points 31-x.
  • An accumulation is defined here, for example, as a predetermined density of data points 31-x per solid angle or horizontal angle or area in the environment, or as a predetermined signal strength of a reflection of the signal emitted by the sensor, which must be reached so that a raised object 41 is identified. In the area of open spaces 40, however, such accumulations and/or reflections of a signal emitted by the sensor are missing.
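  • A minimal sketch of identifying raised objects 41 by density-based clustering of the data points 31-x, here with scikit-learn's DBSCAN as one possible clustering method; the patent does not name a specific algorithm, and eps and min_samples are illustrative:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Locally accumulated points form a cluster (a raised object); isolated
    # returns without sufficient density are treated as noise.
    points = np.array([[5.0, 1.0], [5.1, 1.1], [5.05, 0.95],   # dense -> object
                       [12.0, 7.0]])                           # isolated -> noise
    labels = DBSCAN(eps=0.3, min_samples=3).fit_predict(points)
    print(labels)  # [0 0 0 -1]: one raised object, one noise point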
  • Camera images are captured for all directions around the vehicle 50 at each of the multiple positions 32.
  • Image information in the captured camera images is linked to the data points 31-x depending on the position and direction of capture.
  • The camera images are (at least partially) projected onto the open spaces 40 and the raised objects 41 in the environment map 30 depending on the position and the detection direction.
  • In other words, the captured camera images are (at least partially) placed as textures over the open spaces 40 and the raised objects 41 in a position-dependent and detection-direction-dependent manner, thereby generating an environment image 21 in a top view.
  • In one embodiment, a position-dependent ground projection plane 33 is gradually defined for the multiple positions 32, onto which the camera images linked to the open spaces 40 are projected.
  • This is shown in FIG. 3, which basically corresponds to FIG. 2, so that the same reference numbers designate the same features and terms.
  • Furthermore, at least one position-dependent projection plane 34 is gradually defined for the multiple positions 32, onto which the camera images associated with the raised objects 41 are projected.
  • The main aim here is to display the raised objects 41 in the environment image 21 (FIG. 1) in a top view at a different height (in the direction perpendicular to the standing surface on which the vehicle 50 stands) than the ground projection plane 33.
  • More than one projection plane 34 may be provided in order to be able to image intermediate levels, for example of other vehicles 51 (e.g. intermediate levels for the rear, roof, hood, loading area, etc.).
  • When the at least one projection plane 34 is generated, at least one area is added if the sensor data of the at least one distance-resolving sensor captured for this area corresponds to another vehicle 51.
  • For this purpose, pattern recognition and/or object recognition can be carried out on the captured sensor data, in particular on the data points 31-x. If it is recognized, based on the pattern recognition and/or object recognition, that a raised object 41 is another vehicle 51, this prior knowledge can be used to supplement more distant areas of the other vehicle 51 for which a density of the data points 31-x decreases, for example by adding data points 31-x to reproduce an outer contour of the other vehicle 51.
  • In one embodiment, a degree of the respective radial distortion of the environment cameras 2 is taken into account when capturing and/or projecting.
  • For this purpose, empirical tests can be carried out and/or optical property parameters 15 (FIG. 1) of the imaging optics ("fisheye optics") can be read into the data processing device 4.
  • A suitable inner image area can be determined, for example, based on empirical tests and/or simulations.
  • The image signal 20 is extrapolated and/or estimated for areas of the environment image 21 for which no camera data 10 and/or no sensor data 11 are available due to occlusion, starting from neighboring areas that border on these areas.
  • For example, the image signal 20 for the side 35 (FIG. 3) of the raised objects 41 or of other vehicles 51 facing away from the vehicle 50 can be supplemented accordingly by image data from camera images 10 that were captured for the side 36 facing the vehicle 50.
  • For example, a paint color or a texture of the facing sides 36 in captured camera images 10 can be used to extrapolate and/or estimate a texture for the facing-away side 35.
  • In one embodiment, a three-dimensional point cloud of the environment is additionally generated and taken into account when identifying the open spaces 40 and/or the raised objects 41 in the environment. This is done, for example, using known SLAM methods, for example using the Direct Sparse Odometry (DSO) method.
  • FIG. 4 shows a schematic representation of an environment image 21, which corresponds to the generated image signal 20 (FIG. 4 is merely an example and does not correspond in content to the previously shown FIGS. 2 and 3).
  • An improved environment image 21 can be provided by the device 1 (FIG. 1) and the method.
  • The environment image 21 or the image signal 20 shown in FIG. 4 was generated in particular with an embodiment of the device 1 and the method in which only a predetermined inner image area of the captured camera images 10 is used during projection.
  • A direct comparison between an embodiment in which outer areas of the camera images 10 were also used, that is, areas radially distorted by the fisheye optics, and the environment image 21 shown in FIG. 4 is given in FIGS. 5a and 5b.
  • In FIG. 5a it can be clearly seen that, due to the use of the radially distorted outer image areas of the camera images 10, other vehicles 51 appear strongly distorted in the environment image 21. This is not the case with the other vehicles 51 shown in FIG. 5b, for which only the predetermined inner image area of the camera images 10 (restricted in particular laterally) was used during projection.
  • List of reference numerals: 1 device; 2 environment camera; 3 distance-resolving sensor; 4 data processing device; 4-1 computing device; 4-2 memory; 10 camera image; 11 sensor data; 15 optical property parameters; 20 image signal; 21 environment image; 30 local environment map; 31-1 data point (ultrasonic sensor); 31-2 data point (radar sensor); 31-3 data point (lidar sensor); 32 position; 33 ground projection plane; 34 projection plane; 35 facing-away side; 36 facing side; 40 open space; 41 raised object; 50 vehicle; 51 display device

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for generating an environment image (21) for a parking assistant of a vehicle (50), in which the environment of the vehicle (50) is captured at a plurality of positions (32) of the vehicle (50) by means of a plurality of environment cameras (2) and the environment is captured by means of at least one distance-resolving sensor (3); open spaces (40) and raised objects (41) in the environment are identified based on captured sensor data (11) from the at least one distance-resolving sensor (3); a local environment map (30) is generated based on the captured sensor data (11) and/or the identified open spaces (40) and the identified raised objects (41); identified open spaces (40) and identified raised objects (41) are linked, depending on the position and the detection direction, to camera images (10) corresponding to them; an image signal (20) for an environment image (21) in a top view of the vehicle (50) is generated by projecting the camera images (10) for this purpose onto the open spaces (40) and the objects (41) in the environment map (30), at least partially depending on the position and the detection direction; and the generated image signal (20) is provided. The invention also relates to a device (1) for generating an environment image (21).
PCT/EP2023/058624 2022-05-02 2023-04-03 Method and device for generating an environment image for a parking assistant of a vehicle WO2023213482A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022204313.8 2022-05-02
DE102022204313.8A DE102022204313A1 (de) 2022-05-02 2022-05-02 Verfahren und Vorrichtung zum Erzeugen eines Umfeldabbildes für einen Parkassistenten eines Fahrzeugs

Publications (1)

Publication Number Publication Date
WO2023213482A1 (fr)

Family

ID=86054130

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/058624 WO2023213482A1 (fr) 2022-05-02 2023-04-03 Method and device for generating an environment image for a parking assistant of a vehicle

Country Status (2)

Country Link
DE (1) DE102022204313A1 (de)
WO (1) WO2023213482A1 (fr)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007027847A2 (fr) 2005-09-01 2007-03-08 Geosim Systems Ltd. Systeme et procede de modelisation 3d rentable haute fidelite d'environnements urbains a grande echelle
JP2007172541A (ja) 2005-12-26 2007-07-05 Toyota Motor Corp 運転支援装置
KR20180084952A (ko) 2015-12-21 2018-07-25 로베르트 보쉬 게엠베하 다중 카메라 차량 시스템들을 위한 동적 이미지 혼합
DE102018214874B3 (de) 2018-08-31 2019-12-19 Audi Ag Verfahren und Anordnung zum Erzeugen einer mit Bildinformationen texturierten Umfeldkarte eines Fahrzeugs und Fahrzeug umfassend eine solche Anordnung

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2583869A1 (fr) * 2010-06-18 2013-04-24 Aisin Seiki Kabushiki Kaisha Dispositif d'aide au stationnement
DE102010051204A1 (de) * 2010-11-12 2012-05-16 Valeo Schalter Und Sensoren Gmbh Verfahren zum Darstellen eines Hindernisses und Abbildungsvorrichtung
EP2832590A1 (fr) * 2012-03-30 2015-02-04 Panasonic Corporation Dispositif et procédé d'aide au stationnement
DE102018210812A1 (de) * 2018-06-30 2020-01-02 Robert Bosch Gmbh Verfahren zu einer sensor- und speicherbasierten Darstellung einer Umgebung, Anzeigevorrichtung und Fahrzeug mit der Anzeigevorrichtung
WO2021043732A1 (fr) * 2019-09-05 2021-03-11 Valeo Schalter Und Sensoren Gmbh Affichage d'un environnement de véhicule pour déplacer le véhicule vers une position cible
EP3939863A1 (fr) * 2020-07-13 2022-01-19 Faurecia Clarion Electronics Co., Ltd. Dispositif de génération d'image en vue aérienne, système de génération d'image en vue aérienne, et dispositif de stationnement automatique

Also Published As

Publication number Publication date
DE102022204313A1 (de) 2023-11-02


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23718188

Country of ref document: EP

Kind code of ref document: A1