EP3815044B1 - Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus - Google Patents

Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus

Info

Publication number
EP3815044B1
EP3815044B1
Authority
EP
European Patent Office
Prior art keywords
vehicle
model
surround
basis
distance data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP19724396.7A
Other languages
German (de)
French (fr)
Other versions
EP3815044A1 (en)
Inventor
Paul Robert Herzog
Uwe Brosch
Lidia Rosario Torres Lopez
Dirk Raproeger
Paul-Sebastian Lauer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of EP3815044A1
Application granted
Publication of EP3815044B1
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/53 Constructional details of electronic viewfinders, e.g. rotatable or detachable
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/31 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing stereoscopic vision
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Definitions

  • The present invention relates to a method for a sensor- and memory-based representation of the surroundings of a vehicle, a display device for carrying out the method, and the vehicle with the display device.
  • The document EP 1 462 762 A1 discloses a surroundings detection device for a vehicle.
  • The surroundings detection device creates a virtual three-dimensional environment model and displays this model.
  • The document DE 10 2008 034 594 A1 relates to a method for informing an occupant of a vehicle, in which a representation of the surroundings of the vehicle is generated.
  • A camera of a vehicle cannot capture areas of the surroundings behind objects in the environment of the vehicle. For example, it is not possible to display to the driver a rear view of another vehicle that a camera has captured only from the front.
  • An environment model determined solely as a function of captured camera images, which is usually shown from a perspective obliquely from above, therefore typically shows the driver the environment incompletely.
  • Furthermore, with the known methods, tall and nearby objects give rise to unnatural distortions in the displayed representation of the environment.
  • The object of the present invention is to improve the representation of the surroundings of a vehicle for the driver.
  • The present invention relates to a method for a sensor- and memory-based representation of the surroundings of a vehicle, the vehicle having at least one imaging sensor for capturing the surroundings.
  • The imaging sensor preferably comprises a camera.
  • The method comprises capturing a sequence of images by means of the imaging sensor.
  • Distance data are then determined as a function of the captured images, in particular a two-dimensional depth map and/or a three-dimensional point cloud.
  • Alternatively or additionally, the distance data, in particular the two-dimensional depth map and/or the three-dimensional point cloud, can be determined as a function of distances between objects in the surroundings and the vehicle that are detected by at least one distance sensor of the vehicle.
  • The optional distance sensor comprises an ultrasonic, radar and/or lidar sensor.
  • The distance data represent the detected and/or determined distances between the vehicle and objects in the surroundings of the vehicle. Determining the distances to the vehicle, or the distance data, by means of an active distance sensor, for example the lidar sensor and/or the radar sensor and/or ultrasonic sensors, has the fundamental advantage over a camera-based distance determination that distances are also reliably detected in poor lighting and/or weather conditions. Provision can be made for a sensor type (camera and/or ultrasonic sensor and/or lidar sensor and/or radar sensor) to be selected for determining the distance data as a function of lighting conditions and/or weather conditions and/or a vehicle speed.
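
The sensor-type selection described above can be illustrated with a minimal sketch in Python. The threshold values, the `SensorType` names and the heuristic itself are illustrative assumptions, not values taken from the patent.

```python
from enum import Enum, auto

class SensorType(Enum):
    CAMERA = auto()
    ULTRASONIC = auto()
    LIDAR = auto()
    RADAR = auto()

def select_distance_sensors(lux: float, rain_intensity: float, speed_kmh: float) -> set:
    """Pick the sensor types used to build the distance data.

    Hypothetical heuristic: camera and lidar only in good light and weather,
    ultrasonic sensors only at low (parking) speeds, radar as a robust default.
    """
    selected = {SensorType.RADAR}
    if lux > 50.0 and rain_intensity < 0.3:   # assumed thresholds
        selected.add(SensorType.CAMERA)
        selected.add(SensorType.LIDAR)
    if speed_kmh < 15.0:                      # assumed parking-speed limit
        selected.add(SensorType.ULTRASONIC)
    return selected

# Example: night-time rain at parking speed -> radar and ultrasonic only.
print(select_distance_sensors(lux=10.0, rain_intensity=0.8, speed_kmh=8.0))
```
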
  • In a further method step, a three-dimensional structure of an environment model is generated as a function of the determined distance data, in particular the determined depth map and/or point cloud.
  • The structure of the environment model comprises, in particular, a three-dimensional grid.
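
The following is a minimal sketch of how such a three-dimensional grid could be generated from a two-dimensional depth map by back-projecting each pixel with a pinhole camera model and triangulating neighbouring pixels. The intrinsics fx, fy, cx, cy and the regular-grid triangulation are assumptions for illustration; the patent does not prescribe a particular meshing scheme.

```python
import numpy as np

def depth_map_to_grid(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float):
    """Back-project a dense depth map (H x W, metres) into a 3D vertex grid
    and triangulate neighbouring pixels into faces, giving a simple mesh.
    Pinhole intrinsics fx, fy, cx, cy are assumed known from calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad, via index arithmetic on the regular grid.
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces

verts, faces = depth_map_to_grid(np.full((120, 160), 5.0), fx=100, fy=100, cx=80, cy=60)
print(verts.shape, faces.shape)
```
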
  • Furthermore, at least one object in the surroundings of the vehicle is recognized as a function of the captured images.
  • The object is recognized by a first neural network.
  • For example, vehicles, pedestrians, infrastructure objects such as traffic lights, and/or buildings are recognized as objects.
  • Optionally, an object class of the recognized object and/or an object type of the recognized object is additionally determined or recognized by the first neural network; for example, a vehicle class such as a compact car can be recognized as the object class and/or a manufacturer model as the object type.
  • A synthetic object model is then loaded from an electrical memory as a function of the recognized object, the memory being arranged, for example, within a control unit of the vehicle.
  • Optionally, the object model is additionally loaded as a function of the recognized object class and/or the recognized object type.
  • The synthetic object model can be a specific object model representing the recognized object or a generic object model.
  • The generic object model can be parameterized, i.e. it is modified as a function of the recognized object and/or the recognized object class and/or the recognized object type and/or as a function of the distance data.
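
A hedged sketch of the "recognize object, then load a specific or generic object model from memory" step follows. The `Detection` structure, the `OBJECT_MODEL_STORE` contents and the fallback/parameterization logic are hypothetical; a real system would query the first neural network and the electrical memory 107/202 instead.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    object_class: str            # e.g. "car", "pedestrian", "building"
    object_type: Optional[str]   # e.g. a specific manufacturer model, if recognized
    bbox: tuple                  # (u_min, v_min, u_max, v_max) in image pixels

# Hypothetical in-memory store standing in for the electrical memory 107/202.
OBJECT_MODEL_STORE = {
    "car/generic": {"mesh": "car_generic.obj", "length_m": 4.5},
    "car/brand_x_compact": {"mesh": "brand_x_compact.obj", "length_m": 3.7},
}

def load_object_model(det: Detection, measured_length_m: Optional[float] = None) -> dict:
    """Prefer a specific object model for the recognized object type; otherwise
    fall back to a generic model of the object class and parameterize it with
    the extent measured from the distance data."""
    specific_key = f"{det.object_class}/{det.object_type}" if det.object_type else None
    model = OBJECT_MODEL_STORE.get(specific_key) if specific_key else None
    if model is None:
        model = dict(OBJECT_MODEL_STORE[f"{det.object_class}/generic"])  # copy, then adapt
        if measured_length_m is not None:
            model["length_m"] = measured_length_m
    return model

print(load_object_model(Detection("car", None, (120, 200, 380, 340)), measured_length_m=4.1))
```
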
  • The generated three-dimensional structure of the environment model is adapted as a function of the loaded synthetic object model and the distance data, with the synthetic object model replacing a structural region of the generated environment model.
  • In other words, the generated environment model is augmented with the loaded object models as a function of the detected and/or determined distances.
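
One possible way to let the loaded object model replace a structural region of the generated environment model is sketched below, assuming both are vertex/face meshes and the region covered by the detected object is approximated by an axis-aligned box. This is a simplified illustration, not the patented procedure in detail.

```python
import numpy as np

def replace_structure_with_object(env_vertices: np.ndarray, env_faces: np.ndarray,
                                  object_vertices: np.ndarray, object_faces: np.ndarray,
                                  region_min: np.ndarray, region_max: np.ndarray):
    """Remove all environment faces that touch the axis-aligned box
    [region_min, region_max] (the structural region covered by the detected
    object) and append the synthetic object model mesh in its place."""
    inside = np.all((env_vertices >= region_min) & (env_vertices <= region_max), axis=1)
    keep_faces = env_faces[~inside[env_faces].any(axis=1)]

    merged_vertices = np.vstack([env_vertices, object_vertices])
    merged_faces = np.vstack([keep_faces, object_faces + len(env_vertices)])
    return merged_vertices, merged_faces

# Usage sketch: region_min/region_max come from the distance data (footprint of
# the detected vehicle); object_vertices/object_faces are the loaded object
# model already translated and rotated to the measured position and orientation.
```
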
  • This environment model, adapted with the synthetic object model, is displayed to the driver, preferably on a display of the vehicle and/or on a display of a mobile electronic device.
  • The method can advantageously reduce unnatural distortions in a view of the environment model, so that the displayed environment model appears more realistic and free of errors.
  • Furthermore, by adapting the environment model using object models from the memory, areas that are not visible to a camera are represented realistically in the environment model, for example a view of another vehicle that was not captured by a camera.
  • The method also enables the driver to assess a driving situation, the distance to an object or a parking space better and more quickly, which additionally increases driving comfort for the driver.
  • In a preferred embodiment, an object orientation of the recognized object is determined as a function of the captured images; in particular, the object orientation is recognized by a second neural network and/or by another type of artificial intelligence or another classification method.
  • In this embodiment, the generated three-dimensional structure of the environment model is additionally adapted as a function of the determined object orientation. As a result, the generated environment model is advantageously adapted more quickly and reliably.
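
A small sketch of how the coarse object orientation delivered by the second network could be turned into an initial yaw angle for placing the object model. The four categories follow the ones named later in the description; the angle values and the vehicle-frame convention are assumptions.

```python
import math

# Coarse orientation categories as produced by the (hypothetical) second
# classifier; the mapping to a yaw angle in the vehicle frame is an assumption.
ORIENTATION_TO_YAW_RAD = {
    "front": 0.0,            # object faces towards the ego vehicle
    "rear": math.pi,         # object seen from behind
    "left": 0.5 * math.pi,
    "right": -0.5 * math.pi,
}

def initial_object_yaw(category: str) -> float:
    """First approximation of the object yaw used to place the object model;
    it can later be refined from the distance data."""
    return ORIENTATION_TO_YAW_RAD[category]

print(initial_object_yaw("rear"))
```
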
  • In a particularly preferred embodiment, segments or object instances in the surroundings of the vehicle are recognized as a function of the captured images.
  • The segments or object instances are preferably recognized by means of a third neural network and/or by another type of artificial intelligence or another classification method.
  • The distances or depth information in the distance data are then assigned to the recognized segment or the recognized object instance.
  • In this embodiment, the generated three-dimensional structure of the environment model is additionally adapted as a function of the recognized segments or object instances, in particular as a function of the segments or object instances assigned to the distances.
  • As a result, the adaptation of the generated three-dimensional structure of the environment model as a function of the loaded synthetic object model is advantageously more precise and faster.
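
A minimal sketch of assigning the depth information in the distance data to recognized segments or object instances, here simply by taking the median depth of all pixels carrying the same instance label. The label convention and the use of the median are assumptions.

```python
import numpy as np

def median_depth_per_segment(depth_map: np.ndarray, segment_map: np.ndarray) -> dict:
    """Assign distance information to each recognized segment/object instance:
    here the median depth of all pixels with that label. `segment_map` holds
    one integer instance id per pixel (same shape as the depth map); 0 is
    treated as 'unlabelled'; invalid depths are NaN."""
    depths = {}
    for seg_id in np.unique(segment_map):
        if seg_id == 0:
            continue
        values = depth_map[(segment_map == seg_id) & np.isfinite(depth_map)]
        if values.size:
            depths[int(seg_id)] = float(np.median(values))
    return depths

depth = np.full((4, 6), np.nan); depth[1:3, 2:5] = 7.5
labels = np.zeros((4, 6), dtype=int); labels[1:3, 2:5] = 3
print(median_depth_per_segment(depth, labels))   # {3: 7.5}
```
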
  • A texture for the adapted three-dimensional structure of the environment model is determined as a function of the captured images, the captured images preferably being camera images.
  • The adapted environment model is then displayed with the determined texture. For example, a color of a vehicle in the environment model is determined. Provision can be made for the determined texture to comprise captured camera images or perspectively transformed camera images. As a result of this development, the environment model is displayed with realistic imaging and/or coloring, which enables easy orientation for the driver.
  • The texture for the structural region of the environment model adapted by the loaded object model is loaded from the memory or determined, in particular as a function of the recognized object or a recognized object class or a recognized object type. For example, the texture of a manufacturer model of a vehicle is loaded from the electrical memory when a corresponding object type has been recognized. This embodiment makes the environment model appear realistic to the driver. In addition, unnatural distortions in the texture are avoided.
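
The two texture paths described above, sampling the captured camera image for ordinary structure and using a stored texture for regions replaced by a loaded object model, could look roughly as follows. The pinhole projection, the nearest-pixel lookup and the `texture_store` dictionary are illustrative assumptions.

```python
import numpy as np

def texture_from_camera(vertices: np.ndarray, image: np.ndarray,
                        fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Sample one RGB value per mesh vertex by projecting the vertex back into
    the camera image (nearest-pixel lookup, pinhole model). Vertices behind the
    camera or outside the image get a neutral grey."""
    color = np.full((len(vertices), 3), 128, dtype=np.uint8)
    z = vertices[:, 2]
    valid = z > 0.1
    u = np.round(vertices[valid, 0] / z[valid] * fx + cx).astype(int)
    v = np.round(vertices[valid, 1] / z[valid] * fy + cy).astype(int)
    in_img = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    idx = np.flatnonzero(valid)[in_img]
    color[idx] = image[v[in_img], u[in_img]]
    return color

def texture_for_object(object_type, texture_store: dict, fallback_color=(160, 160, 160)):
    """For structural regions replaced by a loaded object model, prefer a stored
    texture (e.g. of the recognized manufacturer model) over camera pixels."""
    return texture_store.get(object_type, fallback_color)
```
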
  • In a preferred development, the adapted environment model is displayed within a predefined area around the vehicle.
  • In other words, the environment model is displayed only within an area that is delimited by a predefined distance around the vehicle.
  • The predefined distance around the vehicle is preferably less than or equal to 200 meters, particularly preferably 50 meters, in particular less than or equal to 10 meters.
  • The predefined area is defined, for example, by a base area which represents the predefined distance or the predefined area.
  • A center point of the base area represents, in particular, a center point of the vehicle.
  • The base area can have any shape; preferably, it has a square, elliptical or circular shape. This advantageously reduces the computational effort for generating the environment model and the computational effort for adapting it. Furthermore, this development advantageously achieves a low rate of unnatural distortions in the texture of the displayed environment model.
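
A sketch of restricting the displayed environment model to the predefined area around the vehicle by clipping the mesh against a square or circular base area. The 10 m half-extent is only an example value consistent with the distances mentioned above.

```python
import numpy as np

def clip_to_base_area(vertices: np.ndarray, faces: np.ndarray,
                      half_extent_m: float = 10.0, shape: str = "square"):
    """Keep only the part of the environment model inside the predefined area
    around the vehicle (vehicle centre at the origin of the x/y ground plane)."""
    x, y = vertices[:, 0], vertices[:, 1]
    if shape == "square":
        inside = (np.abs(x) <= half_extent_m) & (np.abs(y) <= half_extent_m)
    elif shape == "circle":
        inside = x ** 2 + y ** 2 <= half_extent_m ** 2
    else:
        raise ValueError(f"unknown base-area shape: {shape}")
    kept_faces = faces[inside[faces].all(axis=1)]
    return vertices, kept_faces
```
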
  • Provision can be made for a size of the predefined area and/or a shape of the predefined area and/or the observer perspective of the displayed environment model to be adapted as a function of a vehicle speed and/or a steering angle of the vehicle and/or the distance data, or as a function of a detected distance to an object.
  • In a further embodiment, at least one projection surface arranged at least partially vertically to a base area of the predefined area is displayed outside of the adapted environment model. At least a partial region of a currently captured image is projected onto this projection surface, in particular at least a partial region of a captured camera image.
  • The images shown on the displayed projection surface represent a view of the distant environment, i.e. of an area of the surroundings which lies outside the displayed environment model and further away from the vehicle than the predefined distance delimiting the predefined area.
  • The projection surface is displayed as a function of the vehicle speed and/or the steering angle of the vehicle and/or the distance data, or as a function of a detected distance to an object.
  • Alternatively or additionally, a size and/or a shape of the projection surface can be adapted as a function of the vehicle speed and/or the steering angle of the vehicle and/or the distance data.
  • As a result, the driver can advantageously be shown, for example when driving at higher speed, a narrow section of the surroundings lying ahead in the direction of travel, which minimizes the computing effort in a driving situation at increased speed and focuses the driver's attention on the area that is essential in this driving situation.
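
The speed- and steering-dependent handling of the projection surface could be sketched as follows; the thresholds, the scaling of the length c and height d, and the returned parameters are purely illustrative assumptions.

```python
from typing import Optional

def projection_surface_layout(speed_kmh: float, steering_angle_deg: float,
                              nearest_obstacle_m: float) -> Optional[dict]:
    """Decide whether the far-field projection surface is shown and how large
    it is. The thresholds and scalings are assumptions, not patent values."""
    if speed_kmh < 10.0 and nearest_obstacle_m < 5.0:
        return None                      # parking situation: no distant view needed
    length_c = 20.0 + 0.5 * speed_kmh    # lateral extent in metres
    height_d = 4.0 + 0.02 * speed_kmh    # vertical extent in metres
    if speed_kmh > 80.0:
        length_c *= 0.5                  # narrow the far view at high speed
    return {"length_c_m": length_c,
            "height_d_m": height_d,
            "heading_offset_deg": 0.5 * steering_angle_deg}

print(projection_surface_layout(speed_kmh=120.0, steering_angle_deg=-4.0, nearest_obstacle_m=60.0))
```
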
  • The invention also relates to a display device with a display, which is set up to carry out a method according to the invention.
  • The display device preferably comprises an imaging sensor, in particular a camera, and a control unit.
  • The control unit is set up to carry out the method according to the invention, that is to say to capture the images, to generate and adapt the displayed environment model, and to control the display to show the adapted environment model.
  • The invention also relates to a vehicle with the display device.
  • In figure 1, a vehicle 100 is shown in a plan view.
  • The vehicle 100 has a forward-facing camera 101 as an imaging sensor.
  • Wide-angle cameras 102 are additionally arranged on vehicle 100 as imaging sensors at the front, at the rear and on each side of the vehicle, and capture the surroundings 190 of the vehicle.
  • Vehicle 100 also has distance sensors 103 and 104, the distance sensors in this exemplary embodiment having a lidar sensor 103, which can also be an imaging sensor, and a plurality of ultrasonic sensors 104, which can also be imaging sensors.
  • A radar sensor, which can also be an imaging sensor, can be arranged on the vehicle.
  • Lidar sensor 103 and ultrasonic sensors 104 are set up to detect distances between vehicle 100 and objects 108a and 108b in the area surrounding vehicle 100.
  • Vehicle 100 also has a control unit 105 which receives the images captured by the camera and the distances detected by lidar sensor 103 and/or ultrasonic sensors 104.
  • Control unit 105 is also set up to control a display 106 in vehicle 100 to display a visual representation of the environment for the driver, in particular to display an environment model generated and adapted by means of the control unit and, if appropriate, projection surfaces outside of the environment model.
  • Control unit 105 loads data, in particular synthetic and/or generic object models, from an electrical memory 107 of vehicle 100 and/or from an electrical memory of control unit 105.
  • Control unit 105 captures at least one sequence of images using camera 101 and/or optionally using the multiple wide-angle cameras 102 and/or using lidar sensor 103. Furthermore, control unit 105 can optionally detect distances using lidar sensor 103 and/or ultrasonic sensors 104. Control unit 105 is set up to load data from external memory 107 and/or from internal memory 202 of the control unit. A computing unit 201 of control unit 105 calculates an environment model as a function of the recorded images and/or the detected distances, which are combined into distance data, in particular into a determined depth map and/or a determined point cloud, and/or as a function of the data from memory 107 and/or 202.
  • Control unit 105 is set up to control a display 106 to show a representation of the environment of vehicle 100, the calculated representation being in particular the adapted environment model, and it being possible to add further information to the display, for example driving-dynamics parameters such as the vehicle speed and/or a projection surface.
  • In figure 3, a flowchart of a method according to the invention is shown as a block diagram by way of example.
  • The method begins with an acquisition 301 of a sequence of images using an imaging sensor, in particular a camera 101 and/or 102.
  • Distances between vehicle 100 and objects in the area surrounding vehicle 100 are detected using at least one distance sensor 103 and/or 104.
  • Distance data, in particular a two-dimensional depth map and/or a three-dimensional point cloud, are determined as a function of the recorded images and/or as a function of the detected distances.
  • The distance data include the detected or determined distances of vehicle 100 to objects 108a, 108b in the area surrounding vehicle 100.
  • The distance data are determined, for example, as a function of the recorded sequence of images, in particular by evaluating an optical flow between captured camera images. Each distance in the distance data, or each point of the depth map and/or the point cloud, represents, for example, a determined distance between vehicle 100 and objects 108a, 108b in the area surrounding vehicle 100. Alternatively or additionally, it can be provided in step 303 that the distance data are determined as a function of images captured by means of a stereo camera.
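
For the stereo-camera variant mentioned in step 303, the distance data can be obtained from a disparity map of a rectified stereo pair via the standard relation Z = f * B / d. The sketch below assumes the disparity map is already available (e.g. from block matching, which is not shown) and that focal length and baseline are known from calibration.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert a disparity map from a rectified stereo pair into a metric depth
    map using Z = f * B / d. Non-positive disparities become NaN (no estimate)."""
    depth = np.full(disparity_px.shape, np.nan)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example: 64 px disparity, 700 px focal length, 12 cm baseline -> about 1.3 m.
print(depth_from_disparity(np.array([[64.0, 0.0]]), focal_px=700.0, baseline_m=0.12))
```
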
  • The distance data, or the depth map and/or the point cloud, are determined in step 303 as a function of sensor systems 101, 102, 103 and/or 104 that are independent of one another.
  • The distance data, or the depth map and/or the point cloud, are determined as a function of a time profile of the data from a sensor system 101, 102, 103 and/or 104.
  • Ultrasonic sensors 104, for example, have a specific advantage over a camera 101, 102: the detected distances are relatively independent of poor lighting and/or weather conditions.
  • A three-dimensional structure of an environment model is generated as a function of the distance data, in particular the depth map and/or the point cloud.
  • The three-dimensional structure comprises a three-dimensional grid, the three-dimensional grid preferably simplifying or representing the distance data.
  • The surrounding areas captured in an image are segmented as a function of the sequence of captured images. For example, a "roadway" segment or object instance, an "object" segment, a "building" segment and/or an "infrastructure object" segment is recognized.
  • The recognized segments or object instances are assigned to the distances or the depth information in the distance data.
  • At least one object in the area surrounding the vehicle is recognized as a function of the captured images. This recognition is carried out with at least one first neural network trained for this purpose.
  • A synthetic object model is loaded from memory 107 and/or 202 depending on the recognized object.
  • An object orientation of the recognized object is determined as a function of the captured images, preferably by a second neural network.
  • The object orientation can represent a first approximation of the orientation of the object; for example, a category of the orientation of the recognized object relative to the vehicle is determined from a set comprising the categories "object orientation to the front", "object orientation to the rear", "object orientation to the right" and/or "object orientation to the left".
  • The generated three-dimensional structure of the environment model is adapted as a function of the synthetic object model and the distance data, with the synthetic object model replacing or adapting a structural region of the generated environment model.
  • The adaptation 310 of the generated three-dimensional structure of the environment model can preferably also take place as a function of the determined object orientation.
  • A texture is then determined for the adapted three-dimensional structure of the environment model as a function of the captured images.
  • A determination 311 of the texture for adapted structural regions of the environment model is not carried out if, in an optional step 312, a texture for this adapted structural region is loaded from the memory.
  • The adapted environment model is then displayed 314, with the determined and/or loaded texture optionally being displayed on the three-dimensional structure of the adapted environment model.
  • The display 314 of the adapted environment model takes place within the predefined area, or the base area of the predefined area, around vehicle 100.
  • A projection surface which is at least partially vertical to the base area of the predefined area is displayed outside of the adapted environment model, with at least a partial region of a captured image, in particular a camera image, being projected onto this projection surface.
  • A size and/or a shape of the projection surface can optionally be adapted depending on the vehicle speed, the steering angle and/or the distance data.
  • Figure 4 shows a captured camera image of the forward-facing front camera 101 of the vehicle with the objects 401, 402, 403, 404 and 405 recognized in step 307.
  • Objects 401, 402, 403, 404 and 405 are recognized by at least one first neural network trained for this purpose.
  • An object class can be recognized for, or assigned to, each of the recognized objects, for example vehicle (401, 402 and 403), building (405) or tree (404).
  • In figure 5, the object orientations 501 and 502 of the objects 401, 402 and 403, recognized in step 309 as a function of the camera image shown in figure 4, are shown as a dashed line for category 501 "object orientation to the front" and as a dotted line for category 502 "object orientation to the rear", with object orientations 501 and 502 being recognized by at least one second neural network trained for this purpose.
  • Figure 6 shows the segments or object instances 601, 602, 603, 605 and 606 recognized in step 305 as a function of the camera image shown in figure 4, or of a sequence of camera images, the segments 601, 602, 603, 605 and 606 having been recognized by at least one third neural network trained for this purpose.
  • Segment 601 represents, for example, an area that can be driven over by the vehicle.
  • Segment 602 represents an object area and segment 603 a non-drivable area.
  • A green-space area is represented by segment 605 and a sky area by segment 606.
  • The first neural network and/or the second neural network and/or the third neural network can be replaced by a more general neural network, or by a recognition or classification method or an artificial intelligence, that recognizes objects, object orientations and segments alike.
  • In figure 7, a displayed environment model 701 is shown.
  • Vehicles were recognized as objects, as a result of which the environment model was adapted with an object model 702 and 703, respectively.
  • The object models 702 and 703 were inserted into the environment model 701, or the environment model was adapted by the object models 702 and 703, depending on the recognized objects, the recognized object orientation and the recognized segments.
  • The environment model 701 accordingly has a structure adapted by the two object models 702 and 703.
  • The adapted environment model 701 is displayed in figure 7 only within a predefined square area 704 around a center point 705 of the vehicle, the vehicle 100 also being inserted as an additional object model into the environment model 701.
  • Additional projection surfaces 802 can be arranged at the edge of, and outside, the displayed environment model 701.
  • A partial region of the captured images, in particular of captured camera images, can be displayed on these projection surfaces 802.
  • The partial regions of the images displayed on the projection surfaces 802 represent a long-distance view for the driver.
  • In figure 8, a vehicle 100 is shown with a predefined area arranged around the vehicle, which is represented by a base area 801 of the predefined area, and with a projection surface 802.
  • The base area 801 of the predefined area is shown in perspective and is square.
  • The shape of the base area could also be elliptical or circular.
  • The display can also use a perspective from above or obliquely from the side.
  • The shape of the base area 801 of the predefined area and/or the length a and/or the width b of the base area 801 are adapted, for example, as a function of the vehicle speed and/or the weather conditions and/or the visibility conditions, for example the brightness or the time of day.
  • The projection surface 802 is curved and is vertical or perpendicular to the base area 801 of the predefined area.
  • The projection surface 802 can also be arranged as a non-curved plane on at least one side of the base area 801 of the predefined area; for example, a projection surface 802 can be arranged on each side of the base area 801.
  • The projection surface 802 can also be arranged closed around 360°, or as a cylindrical lateral surface, around the base area 801 or around the predefined area.
  • The length c and/or the height d of the projection surface 802 are adjusted, for example, depending on the vehicle speed.
  • The adjustment of the length c and/or the height d of the projection surface 802 is symbolized in figure 8 by the arrows 804.
  • The base area 801 is preferably also displayed as part of the environment model, the base area being determined in particular as a function of the captured images and/or the distance data, in particular the depth map and/or the point cloud, so that the base area 801 depicts, for example, bumps in a roadway.
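
A sketch of how the base area 801 could be given a height profile from the point cloud so that the displayed ground reflects, for example, bumps in the roadway. The grid resolution and the per-cell averaging are assumptions.

```python
import numpy as np

def base_area_heights(point_cloud: np.ndarray, half_extent_m: float = 10.0,
                      cell_m: float = 0.5) -> np.ndarray:
    """Rasterize ground points (N x 3, vehicle at the origin) into a regular
    height grid for the base area 801. Cells without points stay at height 0."""
    n = int(2 * half_extent_m / cell_m)
    heights = np.zeros((n, n))
    counts = np.zeros((n, n))
    ix = ((point_cloud[:, 0] + half_extent_m) / cell_m).astype(int)
    iy = ((point_cloud[:, 1] + half_extent_m) / cell_m).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    np.add.at(heights, (iy[ok], ix[ok]), point_cloud[ok, 2])
    np.add.at(counts, (iy[ok], ix[ok]), 1.0)
    np.divide(heights, counts, out=heights, where=counts > 0)  # mean height per cell
    return heights

cloud = np.array([[1.2, -0.4, 0.05], [1.3, -0.4, 0.07], [8.0, 8.0, 0.0]])
print(base_area_heights(cloud).shape)   # (40, 40)
```
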

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Description

The present invention relates to a method for a sensor- and memory-based representation of the surroundings of a vehicle, a display device for carrying out the method, and the vehicle with the display device.

State of the art

The document EP 1 462 762 A1 discloses a surroundings detection device for a vehicle.

The surroundings detection device creates a virtual three-dimensional environment model and displays this model.

The document DE 10 2008 034 594 A1 relates to a method for informing an occupant of a vehicle, in which a representation of the surroundings of the vehicle is generated.

The document US 7,161,616 B1 discloses the display of a synthetic image from a virtual observer perspective.

Relevant prior art is also disclosed in US 2014/278065 A1, US 9 013 286 B2 and DE 602 07 655 T2.

A camera of a vehicle cannot capture areas of the surroundings behind objects in the environment of the vehicle. For example, it is not possible to display to the driver a rear view of another vehicle that a camera has captured only from the front. An environment model determined solely as a function of captured camera images, which is usually shown from a perspective obliquely from above, therefore typically shows the driver the environment incompletely. Furthermore, with the known methods, tall and nearby objects give rise to unnatural distortions in the displayed representation of the environment.

The object of the present invention is to improve the representation of the surroundings of a vehicle for the driver.

Disclosure of the invention

The above object is achieved by the features of independent claims 1 and 9.

The present invention relates to a method for a sensor- and memory-based representation of the surroundings of a vehicle, the vehicle having at least one imaging sensor for capturing the surroundings. The imaging sensor preferably comprises a camera. The method comprises capturing a sequence of images by means of the imaging sensor. Distance data are then determined as a function of the captured images, in particular a two-dimensional depth map and/or a three-dimensional point cloud. Alternatively or additionally, the distance data, in particular the two-dimensional depth map and/or the three-dimensional point cloud, can be determined as a function of distances between objects in the surroundings and the vehicle that are detected by means of at least one distance sensor of the vehicle. The optional distance sensor comprises an ultrasonic, radar and/or lidar sensor. The distance data represent the detected and/or determined distances between the vehicle and objects in the surroundings of the vehicle. Determining the distances to the vehicle, or the distance data, by means of an active distance sensor, for example the lidar sensor and/or the radar sensor and/or ultrasonic sensors, has the fundamental advantage over a camera-based distance determination that distances are also reliably detected in poor lighting conditions and/or poor weather conditions. Provision can be made for a sensor type (camera and/or ultrasonic sensor and/or lidar sensor and/or radar sensor) to be selected for determining the distance data as a function of lighting conditions and/or weather conditions and/or a vehicle speed. Subsequently, in a further method step, a three-dimensional structure of an environment model is generated as a function of the determined distance data, in particular the determined depth map and/or point cloud. The structure of the environment model comprises, in particular, a three-dimensional grid. Furthermore, at least one object in the surroundings of the vehicle is recognized as a function of the captured images. The object is recognized by a first neural network. For example, vehicles, pedestrians, infrastructure objects such as traffic lights, and/or buildings are recognized as objects. Optionally, an object class of the recognized object and/or an object type of the recognized object is additionally determined or recognized by the first neural network. Provision can be made, for example, for a vehicle class of a compact car to be recognized as the object class and/or a manufacturer model of the compact car as the object type. A synthetic object model is then loaded from an electrical memory as a function of the recognized object, the memory being arranged, for example, within a control unit of the vehicle. Optionally, the object model is additionally loaded as a function of the recognized object class and/or the recognized object type. The synthetic object model can be a specific object model representing the recognized object or a generic object model. The generic object model can be parameterized, i.e. it is modified as a function of the recognized object and/or the recognized object class and/or the recognized object type and/or as a function of the distance data.
The generated three-dimensional structure of the environment model is then adapted as a function of the loaded synthetic object model and the distance data, with the synthetic object model replacing a structural region of the generated environment model. In other words, the generated environment model is augmented with the loaded object models as a function of the detected and/or determined distances. This environment model, adapted with the synthetic object model, is displayed to the driver, preferably on a display of the vehicle and/or on a display of a mobile electronic device. The method can advantageously reduce unnatural distortions in a view of the environment model, so that the displayed environment model appears more realistic and free of errors. Furthermore, by adapting the environment model using object models from the memory, areas that are not visible to a camera are represented realistically in the environment model, for example a view of another vehicle that was not captured by a camera. The method also enables the driver to assess a driving situation, the distance to an object or a parking space better and more quickly, which additionally increases driving comfort for the driver.

In a preferred embodiment, an object orientation of the recognized object is determined as a function of the captured images; in particular, the object orientation is recognized by a second neural network and/or by another type of artificial intelligence or another classification method. In this embodiment, the generated three-dimensional structure of the environment model is additionally adapted as a function of the determined object orientation. As a result, the generated environment model is advantageously adapted more quickly and reliably.

In a particularly preferred embodiment, segments or object instances in the surroundings of the vehicle are recognized as a function of the captured images. The segments or object instances are preferably recognized by means of a third neural network and/or by another type of artificial intelligence or another classification method. The distances or depth information in the distance data are then assigned to the recognized segment or the recognized object instance. In this embodiment, the generated three-dimensional structure of the environment model is additionally adapted as a function of the recognized segments or object instances, in particular as a function of the segments or object instances assigned to the distances. As a result, the adaptation of the generated three-dimensional structure of the environment model as a function of the loaded synthetic object model is advantageously more precise and faster.

A texture for the adapted three-dimensional structure of the environment model is determined as a function of the captured images, the captured images preferably being camera images. The adapted environment model is then displayed with the determined texture. For example, a color of a vehicle in the environment model is determined. Provision can be made for the determined texture to comprise captured camera images or perspectively transformed camera images. As a result of this development, the environment model is displayed with realistic imaging and/or coloring, which enables easy orientation for the driver. The texture for the structural region of the environment model adapted by the loaded object model is loaded from the memory or determined, in particular as a function of the recognized object or a recognized object class or a recognized object type. For example, the texture of a manufacturer model of a vehicle is loaded from the electrical memory when a corresponding object type has been recognized. This embodiment makes the environment model appear realistic to the driver. In addition, unnatural distortions in the texture are avoided.

In einer bevorzugten Weiterführung erfolgt die Anzeige des angepassten Umgebungsmodells innerhalb eines vorgegebenen Bereichs um das Fahrzeug. Mit anderen Worten erfolgt eine Darstellung des Umgebungsmodells nur innerhalb eines Bereichs, welcher durch einen vorgegebenen Abstand um das Fahrzeug begrenzt ist. Der vorgegebene Abstand um das Fahrzeug ist vorzugsweise kleiner oder gleich 200 Meter, besonders bevorzugt 50 Meter, insbesondere kleiner oder gleich 10 Meter. Der vorgegebene Bereich wird beispielsweise durch eine Grundfläche des vorgegebenen Bereichs definiert, welche den vorgegebenen Abstand beziehungsweise den vorgegebenen Bereich repräsentiert. Ein Mittelpunkt der Grundfläche repräsentiert insbesondere einen Mittelpunkt des Fahrzeugs. Die Grundfläche kann eine beliebige Form aufweisen, vorzugsweise weist die Grundfläche eine quadratische, elliptische oder kreisrunde Form auf. Somit wird vorteilhafterweise der Rechenaufwand zur Erzeugung des Umgebungsmodells sowie der Rechenaufwand zur Anpassung des Umgebungsmodells reduziert. Des Weiteren wird vorteilhafterweise durch diese Weiterführung eine niedrige Quote an unnatürlichen Verzerrungen in der Textur des angezeigten Umgebungsmodells erreicht.In a preferred development, the adapted environment model is displayed within a predefined area around the vehicle. In other words, the environment model is only displayed within an area that is delimited by a predetermined distance around the vehicle. The predetermined distance around the vehicle is preferably less than or equal to 200 meters, particularly preferably 50 meters, in particular less than or equal to 10 meters. The specified area is defined, for example, by a base area of the specified area, which represents the specified distance or the specified area. A center point of the base area represents, in particular, a center point of the vehicle. The base can have any shape, preferably the base has a square, elliptical or circular shape. This advantageously reduces the computational complexity for generating the environment model and the computational complexity for adapting the environment model. Furthermore, this continuation advantageously achieves a low rate of unnatural distortions in the texture of the displayed environment model.

Es kann vorgesehen sein, dass eine Größe des vorgegebenen Bereichs und/oder eine Form des vorgegebenen Bereichs und/oder eine Beobachterperspektive des angezeigten Umgebungsmodells in Abhängigkeit einer Fahrzeuggeschwindigkeit und/oder eines Lenkwinkels des Fahrzeugs und/oder der Abstandsdaten beziehungsweise in Abhängigkeit eines erfassten Abstandes zu einem Objekt angepasst wird.It can be provided that a size of the specified area and/or a shape of the specified area and/or an observer's perspective of the displayed environment model depends on a vehicle speed and/or a steering angle of the vehicle and/or the distance data or depending on a detected distance matched to an object.

In einer weiteren Ausgestaltung der Erfindung wird wenigstens eine zumindest teilweise vertikal zu einer Grundfläche des vorgegebenen Bereichs angeordnete Projektionsfläche außerhalb des angepassten Umgebungsmodells angezeigt. Auf diese Projektionsfläche wird mindestens ein Teilbereich eines aktuell erfassten Bildes projiziert, insbesondere mindestens ein Teilbereich eines erfassten Kamerabildes. Die auf der angezeigten Projektionsfläche dargestellten Bilder repräsentieren eine Ansicht einer fernen Umgebung, das heißt eines Umgebungsbereichs, welcher außerhalb des angezeigten Umgebungsmodells und weiter als der vorgegebene Abstand, welcher den vorgegebenen Bereich begrenzt, von dem Fahrzeug entfernt liegt.In a further embodiment of the invention, at least one projection surface arranged at least partially vertically to a base surface of the predefined area is displayed outside of the adapted environment model. At least one partial area of a currently recorded image is projected onto this projection surface, in particular at least one partial area of a recorded camera image. The images displayed on the displayed projection surface represent a view of a distant environment, ie an environmental area which is outside the displayed environmental model and further away from the vehicle than the predetermined distance delimiting the predetermined area.

In a further embodiment, the projection surface is displayed as a function of the vehicle speed and/or the steering angle of the vehicle and/or the distance data, or as a function of a detected distance to an object. As a result, no distant view is advantageously displayed on a projection surface during a parking maneuver, and the computational effort in this driving situation is consequently minimized. Alternatively or additionally, a size and/or a shape of the projection surface can be adapted as a function of the vehicle speed and/or the steering angle of the vehicle and/or the distance data. In this way, the driver can advantageously be shown a narrow section of the surrounding area ahead in the direction of travel, for example when driving at a higher speed, which minimizes the computational effort in a driving situation at increased speed and focuses the driver's attention on the area that is essential in this driving situation.
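A hedged sketch of such a rule set follows; the suppression of the distant view during parking and the narrowing at higher speed follow the description, whereas the concrete thresholds and the returned parameters are assumptions:

```python
def projection_surface_config(speed_mps, steering_angle_deg, min_obstacle_distance_m):
    """Hypothetical rule set: suppress the distant-view projection surface in
    low-speed parking situations and narrow it when driving fast, so that only
    the forward field of view in the direction of travel is rendered."""
    parking = speed_mps < 2.0 and min_obstacle_distance_m < 5.0
    if parking:
        return None                                      # no distant view while parking
    width_deg = 180.0 if speed_mps < 10.0 else 60.0      # narrower cut-out at speed
    height_m = 3.0 if speed_mps < 10.0 else 5.0
    yaw_offset_deg = 0.5 * steering_angle_deg            # bias towards the steering direction
    return {"width_deg": width_deg, "height_m": height_m, "yaw_offset_deg": yaw_offset_deg}
```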

The invention also relates to a display apparatus with a display which is configured to carry out a method according to the invention. The display apparatus preferably has an imaging sensor, in particular a camera, and a control unit. The control unit is configured to carry out the method according to the invention, that is, to capture the images, to generate and adapt the displayed environment model, and to control the display to show the adapted environment model.

The invention also relates to a vehicle with the display apparatus.

Further advantages result from the following description of exemplary embodiments with reference to the figures.
Figure 1: Vehicle
Figure 2: Control unit
Figure 3: Flow chart of a method according to the invention
Figure 4: Image with recognized objects
Figure 5: Recognized object orientations as a function of the image of Figure 4
Figure 6: Recognized segments as a function of the image of Figure 4
Figure 7: Example of a displayed environment model
Figure 8: Base area of the predefined area for displaying the environment model, and projection surface

In Figure 1, a vehicle 100 is shown in a top view. The vehicle 100 has a forward-facing camera 101 as an imaging sensor. Furthermore, wide-angle cameras 102, which capture the surroundings 190 of the vehicle, are arranged on the vehicle 100 as imaging sensors at the front, at the rear and on each side of the vehicle. The vehicle 100 also has distance sensors 103 and 104; in this exemplary embodiment, the distance sensors comprise a lidar sensor 103, which can also be an imaging sensor, and several ultrasonic sensors 104, which can also be imaging sensors. Alternatively or additionally, a radar sensor, which can also be an imaging sensor, can be arranged on the vehicle. The lidar sensor 103 and the ultrasonic sensors 104 are configured to detect distances between the vehicle 100 and objects 108a and 108b in the surroundings of the vehicle 100. The vehicle 100 furthermore has a control unit 105, which captures the images recorded by means of the camera and the distances detected by means of the lidar sensor 103 and/or the ultrasonic sensors 104. The control unit 105 is also configured to control a display 106 in the vehicle 100 to show a visual representation of the surroundings for the driver, in particular to display an environment model generated and adapted by means of the control unit and, where applicable, projection surfaces outside the environment model. To compute the environment model, the control unit 105 loads data, in particular synthetic and/or generic object models, from an electrical memory 107 of the vehicle 100 and/or from an electrical memory of the control unit 105.
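Purely as an illustration, this sensor set-up could be captured in a configuration structure such as the one below; only the reference numerals and sensor types correspond to Figure 1, while the field names are assumptions:

```python
# Hypothetical sensor inventory mirroring the set-up of Figure 1.
VEHICLE_SENSORS = {
    "front_camera_101":       {"type": "camera",     "imaging": True,  "measures_distance": False},
    "wide_angle_cameras_102": {"type": "camera",     "imaging": True,  "measures_distance": False},
    "lidar_103":              {"type": "lidar",      "imaging": True,  "measures_distance": True},
    "ultrasonic_104":         {"type": "ultrasonic", "imaging": False, "measures_distance": True},
}

def distance_capable_sensors(sensors=VEHICLE_SENSORS):
    """Return the sensors that can contribute distances to objects 108a/108b."""
    return [name for name, props in sensors.items() if props["measures_distance"]]
```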

In Figure 2, the control unit 105 is shown as a block diagram. The control unit 105 captures at least one sequence of images by means of the camera 101 and/or optionally by means of several wide-angle cameras 102 and/or by means of the lidar sensor 103. Furthermore, the control unit 105 can optionally detect distances by means of the lidar sensor 103 and/or the ultrasonic sensors 104. The control unit 105 is configured to load data from the external memory 107 and/or from the internal memory 202 of the control unit. A computing unit 201 of the control unit 105 calculates an environment model as a function of the captured images and/or the detected distances, which are combined into distance data, in particular into a determined depth map and/or a determined point cloud, and/or as a function of the data from the memory 107 and/or 202. In addition, the control unit 105 is configured to control a display 106 to show a representation of the surroundings of the vehicle 100; in particular, the calculated representation is the adapted environment model, and the display can be supplemented with further information, for example driving-dynamics parameters such as a vehicle speed, and/or with a projection surface.
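A strongly simplified, self-contained sketch of one processing cycle of such a control unit, using toy data structures; it only mirrors the data flow described above (collecting sensor data into distance data, building and adapting a model, supplementing the display with driving-dynamics information), and none of the names or structures are taken from the patent:

```python
def control_unit_cycle(camera_images, measured_distances_m, stored_object_models, speed_mps):
    """Condensed, illustrative processing cycle of a control unit like 105."""
    # 1. Summarise the detected distances as distance data (here: a flat point list).
    distance_data = [{"range_m": d} for d in measured_distances_m]
    # 2. Generate a trivial 'model': one structure entry per measured point.
    model = [{"range_m": p["range_m"], "structure": "mesh_cell"} for p in distance_data]
    # 3. Adapt: replace nearby cells by a stored (synthetic) object model, if any.
    if stored_object_models:
        for cell in model:
            if cell["range_m"] < 5.0:
                cell["structure"] = stored_object_models[0]
    # 4. Hand the adapted model plus additional information to the display.
    return {"model": model, "images_used": len(camera_images), "speed_kmh": speed_mps * 3.6}
```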

In Figure 3, a flow chart of a method according to the invention is shown as a block diagram by way of example. The method begins with the capture 301 of a sequence of images by means of an imaging sensor, in particular a camera 101 and/or 102. Optionally, in a step 302, distances between the vehicle 100 and objects in the surroundings of the vehicle 100 are detected by means of at least one distance sensor 103 and/or 104. Then, in a step 303, distance data, in particular a two-dimensional depth map and/or a three-dimensional point cloud, are determined as a function of the captured images and/or as a function of the detected distances. The distance data, in particular the depth map and/or the point cloud, comprise the detected or determined distances of the vehicle 100 to objects 108a, 108b in the surroundings of the vehicle 100. The distance data are determined, for example, as a function of the captured sequence of images, in particular as a function of an evaluation of an optical flow between captured camera images. Each distance in the distance data, or each point of the depth map and/or the point cloud, represents, for example, a determined distance between the vehicle 100 and objects 108a, 108b in the surroundings of the vehicle 100. Alternatively or additionally, it can be provided in step 303 that the distance data are determined as a function of images captured by means of a stereo camera. Alternatively, the distance data, or the depth map and/or the point cloud, are determined in step 303 as a function of sensor systems 101, 102, 103 and/or 104 that are independent of one another. In addition, it can be provided that the distance data, or the depth map and/or the point cloud, are determined as a function of a time profile of data from a sensor system 101, 102, 103 and/or 104. For determining the distance data, ultrasonic sensors 104 have, for example, the specific advantage over a camera 101, 102 that the detected distances are relatively independent of poor lighting and/or weather conditions. In a step 304, a three-dimensional structure of an environment model is generated as a function of the distance data, in particular the depth map and/or the point cloud; in particular, the three-dimensional structure has a three-dimensional grid, the three-dimensional grid preferably simplifying or representing the distance data. In an optional step 305, the surrounding areas captured in an image are segmented as a function of a sequence of the captured images. For example, a segment or object instance "roadway", a segment "object", a segment "building" and/or a segment "infrastructure object" are recognized. In an optional step 306, the recognized segments or object instances are assigned to the distances or the depth information in the distance data. In a step 307, at least one object in the surroundings of the vehicle is recognized as a function of the captured images. This recognition is carried out with at least one first neural network trained for this purpose. In a subsequent step 308, a synthetic object model is loaded from the memory 107 and/or 202 as a function of the recognized object. In an optional step 309, an object orientation of the recognized object is determined as a function of the captured images, preferably by a second neural network. The object orientation can represent a first approximation of the orientation of the object; for example, a category of a relative orientation of the recognized object to the vehicle is determined from a set comprising the categories "object orientation to the front", "object orientation to the rear", "object orientation to the right" and/or "object orientation to the left". Then, in a further method step 310, the generated three-dimensional structure of the environment model is adapted as a function of the synthetic object model and the distance data, the synthetic object model replacing or adapting a structure region of the generated environment model. The adaptation 310 of the generated three-dimensional structure of the environment model can preferably additionally take place as a function of the determined object orientation. In a step 311, a texture for the adapted three-dimensional structure of the environment model is then determined as a function of the captured images. A determination 311 of the texture for adapted structure regions of the environment model is not carried out if, in an optional step 312, a texture for this adapted structure region is loaded from the memory. In a further optional step 313, a shape of a predefined area, a size of the predefined area and/or a display perspective of the adapted environment model are adapted as a function of a vehicle speed, a steering angle of the vehicle, a detected distance between the vehicle and an object and/or the current lighting conditions and/or the current weather conditions and/or as a function of the sensor type selected for generating the distance data, the predefined area being represented, for example, by a base area. The adapted environment model is then displayed 314, the determined and/or loaded texture optionally being displayed on the three-dimensional structure of the adapted environment model. The display 314 of the adapted environment model takes place within the predefined area, or the base area of the predefined area, around the vehicle 100. In a further step 315, provision can be made to display a projection surface, arranged at least partially vertically to a base area of the predefined area, outside the adapted environment model, at least a partial region of a captured image, in particular of a camera image, being projected onto this projection surface. A size and/or a shape of the projection surface can optionally be adapted as a function of the vehicle speed, the steering angle and/or the distance data.
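The flow of Figure 3 can be summarized, purely as an illustration, in the sketch below; the object detector, the object-model store and all data structures are assumed placeholders, and the optional steps 305/306 and 312 to 315 are omitted or reduced to single lines:

```python
from dataclasses import dataclass, field

@dataclass
class PipelineState:
    """Minimal container for the intermediate results of steps 301-311 (simplified)."""
    images: list = field(default_factory=list)
    distances: list = field(default_factory=list)
    distance_data: list = field(default_factory=list)   # depth map / point cloud (simplified)
    structure: list = field(default_factory=list)       # 3D structure of the environment model
    objects: list = field(default_factory=list)         # recognized objects (step 307)
    texture: dict = field(default_factory=dict)

def run_pipeline(images, distances, object_detector, object_model_store):
    """Illustrative walk through the flow chart of Figure 3 with placeholder steps."""
    s = PipelineState(images=list(images), distances=list(distances))
    s.distance_data = [{"range_m": d} for d in s.distances]            # step 303
    s.structure = [{"cell": i, "range_m": p["range_m"]}                # step 304
                   for i, p in enumerate(s.distance_data)]
    s.objects = object_detector(s.images)                              # step 307 (first neural network)
    for obj in s.objects:                                              # steps 308-310
        model = object_model_store.get(obj.get("class"))
        if model is not None and s.structure:
            s.structure[0] = {"cell": 0, "object_model": model,        # replace a structure region
                              "orientation": obj.get("orientation")}   # optional step 309
    s.texture = {"source": "camera" if s.images else "memory"}         # step 311
    return {"environment_model": s.structure, "texture": s.texture}

# Usage: run_pipeline(images=["img_0"], distances=[2.5, 7.1],
#                     object_detector=lambda imgs: [{"class": "car", "orientation": "rear"}],
#                     object_model_store={"car": "generic_car_mesh"})
```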

In Figure 4, a captured camera image of the forward-facing front camera 101 of the vehicle is shown as an image, with the objects 401, 402, 403, 404 and 405 recognized in step 307. The objects 401, 402, 403, 404 and 405 are recognized by at least one first neural network trained for this purpose. An object class can be recognized for, or assigned to, each of the recognized objects, for example vehicle 401, 402 and 403, building 405 or tree 404.
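A purely hypothetical representation of such a detection result for Figure 4 is sketched below; the reference numerals and classes follow the description, while the data layout does not:

```python
# Assumed output of the first neural network for the camera image of Figure 4.
detected_objects = [
    {"id": 401, "class": "vehicle"},
    {"id": 402, "class": "vehicle"},
    {"id": 403, "class": "vehicle"},
    {"id": 404, "class": "tree"},
    {"id": 405, "class": "building"},
]

def objects_of_class(objects, object_class):
    """Select all detections of a given class, e.g. to load matching object models."""
    return [o for o in objects if o["class"] == object_class]
```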

In Figure 5, the object orientations 501 and 502 of the recognized objects 401, 402 and 403, recognized in step 309 as a function of the camera image shown in Figure 4, are shown with dashed lines for the category 501 "object orientation to the front" and with dotted lines for the category 502 "object orientation to the rear", the object orientations 501 and 502 having been recognized by at least one second neural network trained for this purpose.
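As an assumed illustration, the four coarse orientation categories could be derived from a relative yaw angle as follows; the angle thresholds are not part of the description:

```python
ORIENTATION_CATEGORIES = ("front", "rear", "left", "right")   # coarse categories as above

def orientation_category(yaw_relative_deg):
    """Map a (hypothetical) yaw angle of a detected object relative to the vehicle
    to one of the four coarse orientation categories used as a first approximation."""
    yaw = yaw_relative_deg % 360.0
    if yaw < 45.0 or yaw >= 315.0:
        return "front"
    if yaw < 135.0:
        return "right"
    if yaw < 225.0:
        return "rear"
    return "left"
```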

In Figure 6, the segments or object instances 601, 602, 603, 605 and 606, recognized in step 305 as a function of the camera image shown in Figure 4 or of a sequence of camera images, are shown, the segments 601, 602, 603, 605 and 606 having been recognized by at least one third neural network trained for this purpose. The segment 601 represents, for example, an area that can be driven on by the vehicle. The segment 602 represents an object area and the segment 603 represents a non-drivable area. A green-space area is represented by the segment 605 and a sky area by the segment 606. The first neural network and/or the second neural network and/or the third neural network can be replaced by a more general neural network, or by a recognition method, classification method or artificial intelligence, which recognizes objects, object orientations and segments alike.
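The assignment of segment labels to the distance data (optional step 306) could look like the following sketch, assuming depth points with pixel coordinates and a per-pixel label lookup; the data layout is an assumption:

```python
def assign_segments_to_depth(depth_points, segmentation_mask):
    """Attach to each depth point (with pixel coordinates u, v) the segment label
    from a segmentation mask, e.g. "roadway", "object", "building"; points without
    a label keep None."""
    labelled = []
    for p in depth_points:
        label = segmentation_mask.get((p["u"], p["v"]))
        labelled.append({**p, "segment": label})
    return labelled

# Usage:
# assign_segments_to_depth([{"u": 10, "v": 20, "range_m": 4.2}], {(10, 20): "roadway"})
```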

In Figure 7, a displayed environment model 701 is shown. In step 307, vehicles were recognized as objects, whereby the environment model was adapted by an object model 702 and 703 in each case. In other words, the object models 702 and 703 were inserted into the environment model 701, or the environment model was adapted by the object models 702 and 703, as a function of the recognized objects, the recognized object orientations and the recognized segments. The environment model 701 accordingly has a structure adapted by two object models 702 and 703. The adapted environment model 701 is displayed in Figure 7 only within a predefined square area 704 around a center point 705 of the vehicle, the vehicle 100 also having been inserted into the environment model 701 as an additional object model.
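A hypothetical sketch of how object models such as 702 and 703 could replace structure regions of the environment model, assuming detections with approximate positions; the nearest-cell heuristic and all data structures are illustrative only:

```python
def insert_object_models(model_cells, detections, object_model_store):
    """Replace the model cell closest to each detection by a stored synthetic
    object model, keeping the detection's approximate position and orientation."""
    adapted = list(model_cells)
    for det in detections:
        model = object_model_store.get(det["class"])
        if model is None or not adapted:
            continue
        # pick the cell closest to the detection (squared 2D distance)
        idx = min(range(len(adapted)),
                  key=lambda i: (adapted[i]["x"] - det["x"]) ** 2
                              + (adapted[i]["y"] - det["y"]) ** 2)
        adapted[idx] = {"x": det["x"], "y": det["y"],
                        "object_model": model, "orientation": det.get("orientation")}
    return adapted
```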

Additional projection surfaces 802 can be arranged at the edge of and outside the displayed environment model 701. A partial region of the captured images, in particular of captured camera images, can be displayed on these projection surfaces 802. The partial regions of the images displayed on the projection surfaces 802 represent a distant view for a driver.

In Figure 8, a vehicle 100 is shown with a predefined area arranged around the vehicle, which is represented by a base area 801 of the predefined area, and with a projection surface 802. In this exemplary embodiment, the base area 801 of the predefined area is shown in perspective and is square. Alternatively, the shape of the base area could also be elliptical or circular. Alternatively, the display can also take place with a perspective vertically from above or obliquely from the side. The shape of the base area 801 of the predefined area and/or the length a and/or the width b of the base area 801 of the predefined area are adapted, for example, as a function of the vehicle speed and/or the weather conditions and/or the visibility conditions, for example a brightness or time of day. The adaptation of the length a and/or the width b of the base area 801 of the predefined area is symbolized in Figure 8 by the arrows 803. In this exemplary embodiment, the projection surface 802 is curved and stands vertically, or perpendicular, to the base area 801 of the predefined area. Alternatively, the projection surface 802 can be arranged as a non-curved plane on at least one side of the base area 801 of the predefined area; for example, a projection surface 802 can be arranged on each side of the base area 801. Furthermore, in another exemplary embodiment, the projection surface 802 can be arranged around 360° and closed, that is, as a cylindrical lateral surface, around the base area 801 or around the predefined area. The length c and/or the height d of the projection surface 802 are adapted, for example, as a function of the vehicle speed. The adaptation of the length c and/or the height d of the projection surface 802 is symbolized in Figure 8 by the arrows 804. The base area 801 is preferably also displayed as part of the environment model, the base area being determined in particular as a function of the captured images and/or the distance data, in particular the depth map and/or the point cloud, so that the base area 801 depicts, for example, unevenness of a roadway.
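An illustrative adaptation of the dimensions a, b of the base area 801 and c, d of the projection surface 802 as a function of speed and visibility could look as follows; all factors and limits are placeholder values, only the direction of the adaptation (arrows 803 and 804) follows the description:

```python
def adapt_geometry(speed_mps, is_night=False, poor_weather=False):
    """Grow the base area with speed and reduce the displayed extents under poor
    visibility; returns the adapted lengths a, b (base area 801) and c, d
    (projection surface 802) in meters."""
    a = b = max(10.0, min(200.0, 10.0 + 2.0 * speed_mps))   # length/width of base area 801
    c = 2.0 * a                                             # length of projection surface 802
    d = 4.0 + 0.1 * speed_mps                               # height of projection surface 802
    if is_night or poor_weather:
        a, b, c = 0.5 * a, 0.5 * b, 0.5 * c
    return {"a_m": a, "b_m": b, "c_m": c, "d_m": d}
```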

Claims (9)

  1. Method for a sensor-based and memory-based representation of a surround of a vehicle (100), the vehicle (100) having at least one imaging sensor, more particularly a camera (101, 102), for capturing the surround, comprising the following method steps:
    • capturing (301) a sequence of images, in particular camera images,
    • determining (303) distance data on the basis of the captured images and/or a distance sensor (103, 104) of the vehicle (100), the distance data comprising distances between the vehicle and objects in the surround of the vehicle,
    • generating (304) a three-dimensional structure of a surround model (701) on the basis of the distance data,
    • recognizing (307), by way of a first neural network, at least one object (108a, 108b, 401, 402, 403, 404, 405) in the surround of the vehicle (100) on the basis of the captured images,
    • loading (308) a synthetic object model (702, 703) on the basis of the recognized object (108a, 108b, 401, 402, 403, 404, 405),
    • adapting (310) the generated three-dimensional structure of the surround model (701) on the basis of the synthetic object model (702, 703) and on the basis of the distance data, the synthetic object model replacing a structure region of the generated surround model,
    • ascertaining (311) a texture for the adapted three-dimensional structure of the surround model (701) on the basis of the captured images, with the texture for the structure region of the surround model adapted by the loaded object model being loaded from the memory on the basis of the recognized object, and
    • displaying (314) the adapted surround model (701) with the ascertained texture.
  2. Method according to Claim 1, characterized in that the following steps are carried out:
    • ascertaining (309), in particular by way of a neural network, an object alignment (501, 502) of the recognized object (108a, 108b, 401, 402, 403, 404, 405) on the basis of the captured images, and
    • additionally adapting (310) the generated three-dimensional structure of the surround model (701) on the basis of the ascertained object alignment (501, 502).
  3. Method according to either of the preceding claims, characterized in that the following steps are carried out:
    • recognizing (305), in particular by way of a neural network, an object entity (601, 602, 603, 605, 606) in the surround of the vehicle on the basis of the captured images,
    • assigning (306) the distances in the distance data to a recognized object entity (601, 602, 603, 605, 606), and
    • additionally adapting (310) the generated three-dimensional structure of the surround model (701) on the basis of the object entity (601, 602, 603, 605, 606) assigned in the distance data.
  4. Method according to any one of the preceding claims, characterized in that the adapted surround model (701) is displayed (314) within a predetermined region around the vehicle (100).
  5. Method according to Claim 4, characterized in that a size of the predetermined region and/or a shape of the predetermined region and/or a display perspective of the adapted surround model (701) is adapted on the basis of a vehicle speed and/or the distance data.
  6. Method according to either of Claims 4 and 5, characterized in that the following method step is carried out:
    • displaying a projection surface (802), which is arranged at least partially perpendicular to a footprint (801) of the predetermined region, outside of the adapted surround model (701), with at least a partial region of a captured image, in particular of a camera image, being projected onto the projection surface (802).
  7. Method according to Claim 6, characterized in that the projection surface (802) is displayed on the basis of a vehicle speed and/or the distance data.
  8. Display apparatus, wherein the display apparatus is configured to carry out a method according to any one of Claims 1 to 7.
  9. Vehicle having a display apparatus according to Claim 8.
EP19724396.7A 2018-06-30 2019-05-09 Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus Active EP3815044B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102018210812.9A DE102018210812A1 (en) 2018-06-30 2018-06-30 Method for a sensor- and memory-based representation of an environment, display device and vehicle with the display device
PCT/EP2019/061953 WO2020001838A1 (en) 2018-06-30 2019-05-09 Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus

Publications (2)

Publication Number Publication Date
EP3815044A1 (en) 2021-05-05
EP3815044B1 (en) 2023-08-16

Family

ID=66554356

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19724396.7A Active EP3815044B1 (en) 2018-06-30 2019-05-09 Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus

Country Status (5)

Country Link
US (1) US11580695B2 (en)
EP (1) EP3815044B1 (en)
CN (1) CN112334947A (en)
DE (1) DE102018210812A1 (en)
WO (1) WO2020001838A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018214875A1 (en) * 2018-08-31 2020-03-05 Audi Ag Method and arrangement for generating an environmental representation of a vehicle and vehicle with such an arrangement
TWI760678B (en) * 2020-01-16 2022-04-11 國立虎尾科技大學 Smart curb parking meter, management system and payment method
DE102022204313A1 (en) * 2022-05-02 2023-11-02 Volkswagen Aktiengesellschaft Method and device for generating an image of the environment for a parking assistant of a vehicle

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3102250B2 (en) * 1994-02-14 2000-10-23 三菱自動車工業株式会社 Ambient information display device for vehicles
US7161616B1 (en) 1999-04-16 2007-01-09 Matsushita Electric Industrial Co., Ltd. Image processing device and monitoring system
ATE311725T1 (en) * 2001-09-07 2005-12-15 Matsushita Electric Ind Co Ltd DEVICE FOR DISPLAYING THE SURROUNDINGS OF A VEHICLE AND SYSTEM FOR PROVIDING IMAGE
FR2853121B1 (en) 2003-03-25 2006-12-15 Imra Europe Sa DEVICE FOR MONITORING THE SURROUNDINGS OF A VEHICLE
DE102008034594B4 (en) 2008-07-25 2021-06-24 Bayerische Motoren Werke Aktiengesellschaft Method and information system for informing an occupant of a vehicle
US8892358B2 (en) * 2013-03-14 2014-11-18 Robert Bosch Gmbh System and method for distortion correction in three-dimensional environment visualization
US9013286B2 (en) 2013-09-23 2015-04-21 Volkswagen Ag Driver assistance system for displaying surroundings of a vehicle
DE102014208664A1 (en) * 2014-05-08 2015-11-12 Conti Temic Microelectronic Gmbh METHOD AND DEVICE FOR DISABLING DISPLAYING A VEHICLE ENVIRONMENT ENVIRONMENT
US10055643B2 (en) * 2014-09-19 2018-08-21 Bendix Commercial Vehicle Systems Llc Advanced blending of stitched images for 3D object reproduction
CN107563256A (en) * 2016-06-30 2018-01-09 北京旷视科技有限公司 Aid in driving information production method and device, DAS (Driver Assistant System)
US10394237B2 (en) * 2016-09-08 2019-08-27 Ford Global Technologies, Llc Perceiving roadway conditions from fused sensor data
US10330787B2 (en) * 2016-09-19 2019-06-25 Nec Corporation Advanced driver-assistance system
CN107944390B (en) * 2017-11-24 2018-08-24 西安科技大学 Motor-driven vehicle going objects in front video ranging and direction localization method

Also Published As

Publication number Publication date
WO2020001838A1 (en) 2020-01-02
DE102018210812A1 (en) 2020-01-02
US20210327129A1 (en) 2021-10-21
US11580695B2 (en) 2023-02-14
CN112334947A (en) 2021-02-05
EP3815044A1 (en) 2021-05-05

Similar Documents

Publication Publication Date Title
DE102007043110B4 (en) A method and apparatus for detecting a parking space using a bird's-eye view and a parking assistance system using the same
DE102011086512B4 (en) fog detection
DE102013209415B4 (en) Dynamic clue overlay with image cropping
DE102009045682B4 (en) Method of warning against lane change and lane departure warning
EP3815044B1 (en) Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus
WO2015173092A1 (en) Method and apparatus for calibrating a camera system in a motor vehicle
DE102019115459A1 (en) METHOD AND DEVICE FOR EVALUATING A ROAD SURFACE FOR MOTOR VEHICLES
DE102013226476B4 (en) IMAGE PROCESSING METHOD AND SYSTEM OF AN ALL-ROUND SURVEILLANCE SYSTEM
DE102013205854B4 (en) Method for detecting a free path using temporary coherence
DE102018100215A1 (en) Image display device
DE112016000689T5 (en) Kameraparametereinstellvorrichtung
DE102018131424A1 (en) Vehicle and control procedures therefor
WO2017198429A1 (en) Ascertainment of vehicle environment data
EP3721371A1 (en) Method for position determination for a vehicle, controller and vehicle
DE102019218479A1 (en) Method and device for classifying objects on a roadway in the surroundings of a vehicle
DE102019208507A1 (en) Method for determining the degree of overlap between an object and a lane
WO2019162327A2 (en) Method for determining a distance between a motor vehicle and an object
EP3688412A1 (en) Method and device for determining a highly precise position and for operating an automated vehicle
DE112015005317B4 (en) IMAGE CONVERSION DEVICE AND IMAGE CONVERSION METHOD
WO2020043461A1 (en) Method and arrangement for generating a representation of surroundings of a vehicle, and vehicle having such an arrangement
EP3871006A1 (en) Rain detection by means of an environment sensor for capturing the environment of a vehicle point by point, in particular by means of a lidar-based environment sensor
DE102020215696B4 (en) Method for displaying an environment of a vehicle, computer program product, storage medium, control device and vehicle
DE102013210607B4 (en) Method and apparatus for detecting raised environment objects
DE102021212949A1 (en) Method for determining a calibration quality of a sensor system of a vehicle, computer program, control unit and vehicle
WO2023208456A1 (en) Method and device for detecting an environment of a vehicle

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210201

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06V 20/00 20220101ALI20230301BHEP

Ipc: G06T 19/20 20110101ALI20230301BHEP

Ipc: G06T 17/20 20060101ALI20230301BHEP

Ipc: G06T 7/55 20170101AFI20230301BHEP

INTG Intention to grant announced

Effective date: 20230323

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: DE

Ref legal event code: R096

Ref document number: 502019008958

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20230816

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231117

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231218

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231116

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231216

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231117

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230816