EP3815044B1 - Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus - Google Patents
- Publication number
- EP3815044B1 (application EP19724396.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- vehicle
- model
- surround
- basis
- distance data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/586—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/53—Constructional details of electronic viewfinders, e.g. rotatable or detachable
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/31—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing stereoscopic vision
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/60—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Definitions
- The present invention relates to a method for a sensor- and memory-based representation of an environment of a vehicle, a display device for carrying out the method, and a vehicle with the display device.
- The document EP 1 462 762 A1 discloses a surroundings detection device for a vehicle.
- The surroundings detection device creates a virtual three-dimensional environment model and displays this model.
- The document DE 10 2008 034 594 A1 relates to a method for informing an occupant of a vehicle, in which a representation of an area surrounding the vehicle is generated.
- A camera of a vehicle cannot capture environmental areas behind objects in the environment of the vehicle. For example, a rear view of a third-party vehicle detected from the front by a camera cannot be displayed to the driver.
- An environment model determined solely as a function of captured camera images, which is usually shown from a perspective obliquely from above, therefore typically shows the driver the environment incompletely.
- In the case of tall and nearby objects, the known methods result in unnatural distortions in a displayed representation of the environment.
- The object of the present invention is to improve the representation of the surroundings of a vehicle for the driver.
- The present invention relates to a method for a sensor- and memory-based representation of an environment of a vehicle, the vehicle having at least one imaging sensor for detecting the environment.
- The imaging sensor preferably comprises a camera.
- The method includes capturing a sequence of images using the imaging sensor.
- Distance data, in particular a two-dimensional depth map and/or a three-dimensional point cloud, are then determined as a function of the captured images.
- Alternatively or additionally, the distance data, in particular the two-dimensional depth map and/or the three-dimensional point cloud, can be determined as a function of distances between objects in the environment and the vehicle that are detected by at least one distance sensor of the vehicle.
- The optional distance sensor comprises an ultrasonic, radar and/or lidar sensor.
- The distance data represent the detected and/or determined distances between the vehicle and objects in the vicinity of the vehicle. Determining the distances or the distance data using an active distance sensor, for example the lidar sensor, the radar sensor and/or the ultrasonic sensors, has the fundamental advantage over a camera-based distance determination that distances can be reliably recorded even in poor lighting conditions and/or poor weather conditions. Provision can be made for a sensor type, i.e. camera and/or ultrasonic sensor and/or lidar sensor and/or radar sensor, to be selected for determining the distance data as a function of light conditions and/or weather conditions and/or a vehicle speed.
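The sensor-type selection described above can be sketched as a simple rule set. This is purely illustrative: the function name, the condition labels and all thresholds are assumptions and are not taken from the patent.

```python
# Illustrative sketch of choosing sensor types for determining the distance
# data as a function of light conditions, weather and vehicle speed.
# All names and thresholds are hypothetical assumptions.

def select_distance_sensors(lux: float, weather: str, speed_kmh: float) -> list:
    """Return the sensor types used to determine the distance data."""
    sensors = ["ultrasonic"]              # robust short-range baseline
    if weather in ("rain", "fog", "snow") or lux < 10.0:
        sensors.append("radar")           # active sensing copes with poor visibility
    else:
        sensors.append("camera")          # passive camera-based ranging needs light
    if speed_kmh > 30.0:
        sensors.append("lidar")           # longer range useful at higher speed
    return sensors
```

In fog, for example, such a rule would prefer radar over the camera, matching the advantage of active distance sensors described above.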
- A three-dimensional structure of an environment model is generated as a function of the distance data, in particular of the determined depth map and/or the determined point cloud.
- The structure of the environment model comprises, in particular, a three-dimensional grid.
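One common way to obtain the point cloud behind such a three-dimensional grid is to back-project the two-dimensional depth map through a pinhole camera model. The following sketch is a minimal illustration under that assumption; the patent does not prescribe this camera model or these names.

```python
import numpy as np

def depth_map_to_points(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project a 2D depth map into a 3D point cloud (pinhole model).

    fx, fy are focal lengths in pixels, (cx, cy) the principal point.
    Returns an (N, 3) array of points in camera coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

The resulting point cloud can then be meshed or rasterized into the grid of the environment model.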
- At least one object in the area surrounding the vehicle is recognized as a function of the captured images.
- The object is recognized by a first neural network.
- For example, vehicles, pedestrians, infrastructure objects such as traffic lights, and/or buildings are recognized as objects.
- An object class of the detected object and/or an object type of the detected object may also be determined by the first neural network.
- A synthetic object model is then loaded from an electronic memory as a function of the detected object, the memory being arranged, for example, within a control unit of the vehicle.
- Optionally, the object model is additionally loaded as a function of the identified object class and/or the identified object type.
- The synthetic object model can be a specific object model representing the recognized object, or a generic object model.
- The generic object model can be parameterized, or changed, depending on the detected object, the detected object class, the detected object type and/or the distance data.
- The generated three-dimensional structure of the environment model is adapted as a function of the loaded synthetic object model and the distance data, with the synthetic object model replacing a structural area of the generated environment model.
- In other words, the generated environment model is extended by the loaded object models depending on the detected and/or determined distances.
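The replacement of a structural area by a loaded object model can be sketched as removing the (often distorted) grid points near the detected object and inserting the vertices of the synthetic model in their place. Everything here, from the function name to the radius-based selection, is an illustrative assumption, not the patent's implementation.

```python
import numpy as np

def replace_structure(grid_points: np.ndarray, obj_points: np.ndarray,
                      obj_center: np.ndarray, radius: float) -> np.ndarray:
    """Replace the structural area of the environment model near a detected
    object with the vertices of a loaded synthetic object model.

    grid_points: (N, 3) environment-model vertices.
    obj_points:  (M, 3) object-model vertices, centered at the origin.
    obj_center:  (3,) position of the detected object in model coordinates.
    """
    # Horizontal distance of each grid vertex to the object position.
    d = np.linalg.norm(grid_points[:, :2] - obj_center[:2], axis=1)
    kept = grid_points[d > radius]        # drop the (distorted) area near the object
    placed = obj_points + obj_center      # position the object model
    return np.vstack([kept, placed])
```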
- This environment model, adapted by the synthetic object model, is displayed to the driver, preferably on a display of the vehicle and/or on a display of a mobile electronic device.
- The method can advantageously reduce unnatural distortions in a view of the environment model, so that the displayed environment model appears more realistic and error-free.
- By adapting the environment model using object models from the memory, areas that are not visible to a camera are represented realistically in the environment model, for example a view of another vehicle that is not captured by a camera.
- The method also enables the driver to assess a driving situation, the distance to an object or a parking space better and more quickly, which also increases driving comfort for the driver.
- An object orientation of the detected object is determined as a function of the captured images; in particular, the object orientation is detected by a second neural network and/or by another type of artificial intelligence or another classification method.
- The generated three-dimensional structure of the environment model is also adapted as a function of the determined object orientation. As a result, the generated environment model is advantageously adapted more quickly and reliably.
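Adapting the model to the determined object orientation amounts to rotating the loaded object model before it is placed into the environment model. A minimal sketch, assuming the orientation is available as a yaw angle about the vertical axis (the patent itself only requires a coarse orientation):

```python
import numpy as np

def orient_object(points: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate an object model about the vertical (z) axis to match the
    object orientation determined from the images."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```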
- Segments or object instances in the area surrounding the vehicle are recognized as a function of the captured images.
- The segments or object instances are preferably recognized by means of a third neural network and/or by another type of artificial intelligence or another classification method.
- The distances or the depth information in the distance data are then assigned to the detected segments or object instances.
- The generated three-dimensional structure of the environment model is also adapted as a function of the recognized segments or object instances, in particular as a function of the segments or object instances assigned to the distances.
- As a result, the generated three-dimensional structure of the environment model is advantageously adapted more precisely and more quickly as a function of the loaded synthetic object model.
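Assigning the depth information to the recognized segments can be sketched as aggregating, per segment label, the depth values of the pixels belonging to that segment. The choice of the median as aggregate and the array layout are assumptions for illustration:

```python
import numpy as np

def distances_per_segment(depth: np.ndarray, seg: np.ndarray) -> dict:
    """Assign depth information to each recognized segment/object instance.

    depth: (H, W) depth map; seg: (H, W) integer segment labels.
    Returns {segment_label: median depth of its pixels}.
    """
    return {int(s): float(np.median(depth[seg == s])) for s in np.unique(seg)}
```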
- A texture for the adapted three-dimensional structure of the environment model is determined as a function of the captured images, the captured images preferably being camera images.
- The adapted environment model with the determined texture is then displayed. For example, a color of a vehicle in the environment model is determined. Provision can be made for the determined texture to comprise captured camera images or perspectively transformed camera images. As a result of this development, the environment model is displayed with realistic imaging and/or coloring, which enables easy orientation for the driver.
- Alternatively, the texture for the structural area of the environment model adapted by the loaded object model is loaded from the memory, in particular as a function of the recognized object or a recognized object class or a recognized object type. For example, the texture of a manufacturer's model of a vehicle is loaded from the electronic memory when a corresponding object type has been identified. This design makes the environment model appear realistic to the driver. In addition, unnatural distortions in the texture are avoided.
- The adapted environment model is displayed within a predefined area around the vehicle.
- In other words, the environment model is only displayed within an area that is delimited by a predetermined distance around the vehicle.
- The predetermined distance around the vehicle is preferably less than or equal to 200 meters, particularly preferably 50 meters, in particular less than or equal to 10 meters.
- The predefined area is defined, for example, by a base area which represents the predetermined distance or the predefined area.
- A center point of the base area represents, in particular, a center point of the vehicle.
- The base area can have any shape; preferably, it has a square, elliptical or circular shape. This advantageously reduces the computational complexity for generating and for adapting the environment model. Furthermore, this development advantageously achieves a low rate of unnatural distortions in the texture of the displayed environment model.
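Restricting the displayed environment model to the predefined area can be sketched as a simple point filter against the base area. The two-shape simplification below (square vs. circular) mirrors the shapes named above; all names are illustrative assumptions:

```python
import numpy as np

def clip_to_base_area(points: np.ndarray, center: np.ndarray,
                      half_size: float, shape: str = "square") -> np.ndarray:
    """Keep only environment-model points inside the predefined area
    around the vehicle, defined by a square or circular base area."""
    rel = points[:, :2] - center[:2]
    if shape == "square":
        mask = np.max(np.abs(rel), axis=1) <= half_size   # Chebyshev distance
    else:  # circular base area
        mask = np.linalg.norm(rel, axis=1) <= half_size   # Euclidean distance
    return points[mask]
```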
- A size of the predefined area, a shape of the predefined area and/or an observer perspective of the displayed environment model can be adapted as a function of a vehicle speed, a steering angle of the vehicle, the distance data and/or a detected distance to an object.
- At least one projection surface arranged at least partially vertically to a base area of the predefined area is displayed outside of the adapted environment model. At least one partial area of a currently recorded image, in particular of a recorded camera image, is projected onto this projection surface.
- The images displayed on the projection surface represent a view of the distant environment, i.e. an environmental area which lies outside the displayed environment model and further away from the vehicle than the predetermined distance delimiting the predefined area.
- The projection surface is displayed as a function of the vehicle speed, the steering angle of the vehicle, the distance data and/or a detected distance to an object.
- A size and/or a shape of the projection surface can be adjusted depending on the vehicle speed, the steering angle of the vehicle and/or the distance data.
- For example, when driving at higher speed, the driver can advantageously be shown a narrow section of the surrounding area ahead in the direction of travel, which minimizes the computing effort in this driving situation and focuses the driver's attention on the area that is essential in it.
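The speed- and steering-dependent adjustment of the projection surface could, for instance, follow a heuristic like the one below: narrower and taller at higher speed, wider when steering strongly. The formula and every constant are invented purely for illustration; the patent does not specify any particular mapping.

```python
def projection_surface_size(speed_kmh: float, steering_deg: float) -> tuple:
    """Illustrative heuristic for the displayed projection-surface size.

    Narrows the surface with increasing speed (focus ahead), widens it
    with larger steering angles, and raises its height at speed.
    All constants are hypothetical.
    """
    width = max(20.0, 120.0 - speed_kmh) + abs(steering_deg)
    height = 10.0 + 0.1 * speed_kmh
    return width, height
```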
- The invention also relates to a display device with a display, which is set up to carry out a method according to the invention.
- The display device preferably has an imaging sensor, in particular a camera, and a control unit.
- The control unit is set up to carry out the method according to the invention, that is to say to capture the images, to generate and adapt the environment model, and to control the display to show the adapted environment model.
- The invention also relates to a vehicle with the display device.
- A vehicle 100 is shown in a top view.
- The vehicle 100 has a forward-facing camera 101 as an imaging sensor.
- Wide-angle cameras 102 are additionally arranged on vehicle 100 as imaging sensors at the front, rear and on each side of the vehicle, and capture the surroundings 190 of the vehicle.
- Vehicle 100 also has distance sensors 103 and 104; in this exemplary embodiment, the distance sensors comprise a lidar sensor 103, which can also be an imaging sensor, and a plurality of ultrasonic sensors 104, which can also be imaging sensors.
- Alternatively or additionally, a radar sensor, which can also be an imaging sensor, can be arranged on the vehicle.
- Lidar sensor 103 and ultrasonic sensors 104 are set up to detect distances between vehicle 100 and objects 108a and 108b in the area surrounding vehicle 100.
- Vehicle 100 also has a control unit 105 which receives the images captured by the cameras and the distances captured by lidar sensor 103 and/or ultrasonic sensors 104.
- Control unit 105 is also set up to control a display 106 in vehicle 100 to display a visual representation of the environment for the driver, in particular to display an environment model generated and adapted by means of the control unit and, if appropriate, projection surfaces outside of the environment model.
- For this purpose, control unit 105 loads data, in particular synthetic and/or generic object models, from an electronic memory 107 of vehicle 100 and/or from an electronic memory of control unit 105.
- Control unit 105 captures at least one sequence of images using camera 101 and/or optionally using the multiple wide-angle cameras 102 and/or using lidar sensor 103. Furthermore, control unit 105 can optionally detect distances using lidar sensor 103 and/or ultrasonic sensors 104. Control unit 105 is set up to load data from external memory 107 and/or from internal memory 202 of the control unit. A computing unit 201 of control unit 105 calculates an environment model as a function of the recorded images and/or the recorded distances, which are combined into distance data, in particular into a determined depth map and/or a determined point cloud, and/or as a function of the data from memory 107 and/or 202.
- Control unit 105 is set up to control a display 106 to display a representation of the environment of vehicle 100; in particular, the calculated representation is the adapted environment model, to which additional information, for example driving-dynamics parameters such as the vehicle speed and/or a projection surface, can be added.
- In FIG 3, a flowchart of a method according to the invention is shown by way of example as a block diagram.
- The method begins with the acquisition 301 of a sequence of images using an imaging sensor, in particular a camera 101 and/or 102.
- Optionally, distances between vehicle 100 and objects in the area surrounding vehicle 100 are detected using at least one distance sensor 103 and/or 104.
- In step 303, distance data, in particular a two-dimensional depth map and/or a three-dimensional point cloud, are determined as a function of the recorded images and/or as a function of the recorded distances.
- The distance data include the recorded or determined distances of vehicle 100 to objects 108a, 108b in the area surrounding vehicle 100.
- The distance data are determined, for example, as a function of the recorded sequence of images, in particular by evaluating an optical flow between captured camera images. Each distance of the distance data, or each point of the depth map and/or of the point cloud, represents, for example, a determined distance between vehicle 100 and objects 108a, 108b in the area surrounding vehicle 100. Alternatively or additionally, it can be provided in step 303 that the distance data are determined as a function of images captured by means of a stereo camera.
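For the stereo-camera variant mentioned above, the distances follow directly from the disparity between the two camera images via Z = f·B/d. A minimal sketch; in a real system the focal length and baseline come from calibration, and the names here are assumptions:

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, fx: float,
                         baseline_m: float) -> np.ndarray:
    """Stereo depth per pixel: Z = f * B / d.

    fx: focal length in pixels, baseline_m: camera baseline in meters.
    Invalid (zero) disparities map to infinity.
    """
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0, fx * baseline_m / disparity, np.inf)
```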
- The distance data or the depth map and/or the point cloud can be determined in step 303 as a function of sensor systems 101, 102, 103 and/or 104 that are independent of one another.
- The distance data or the depth map and/or the point cloud can also be determined as a function of a time profile of data from a sensor system 101, 102, 103 and/or 104.
- Ultrasonic sensors 104 have a specific advantage over a camera 101, 102, for example a relative independence of the detected distances from poor light and/or weather conditions.
- A three-dimensional structure of an environment model is generated as a function of the distance data, in particular the depth map and/or the point cloud.
- The three-dimensional structure comprises a three-dimensional grid, the three-dimensional grid preferably simplifying or representing the distance data.
- The surrounding areas captured in an image are segmented as a function of a sequence of the captured images. For example, a "roadway" segment or object instance, an "object" segment, a "building" segment and/or an "infrastructure object" segment is recognized.
- The recognized segments or object instances are assigned to the distances or the depth information in the distance data.
- At least one object in the area surrounding the vehicle is recognized in step 307 as a function of the captured images. This recognition is carried out with at least one first neural network trained for this purpose.
- A synthetic object model is loaded from memory 107 and/or 202 depending on the detected object.
- In step 309, an object orientation of the detected object is determined as a function of the captured images, preferably by a second neural network.
- The object orientation can represent a first approximation of the orientation of the object; for example, a category of the relative orientation of the detected object to the vehicle is determined from a set comprising the categories "object orientation to the front", "object orientation to the rear", "object orientation to the right" and/or "object orientation to the left".
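If the relative orientation is available as a yaw angle, the coarse four-way categorization described above could be implemented as a simple binning. The 90-degree bins and the angle convention (0° meaning "object orientation to the front") are illustrative assumptions:

```python
def orientation_category(yaw_deg: float) -> str:
    """Map a relative yaw angle to one of four coarse orientation categories.

    Bin edges at 45, 135, 225 and 315 degrees are hypothetical choices.
    """
    yaw = yaw_deg % 360.0
    if yaw < 45.0 or yaw >= 315.0:
        return "object orientation to the front"
    if yaw < 135.0:
        return "object orientation to the right"
    if yaw < 225.0:
        return "object orientation to the rear"
    return "object orientation to the left"
```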
- In step 310, the generated three-dimensional structure of the environment model is adapted as a function of the synthetic object model and the distance data, with the synthetic object model replacing or adapting a structural area of the generated environment model.
- The adaptation 310 of the generated three-dimensional structure of the environment model can preferably also take place as a function of the determined object orientation.
- A texture is then determined 311 for the adapted three-dimensional structure of the environment model as a function of the captured images.
- The determination 311 of the texture for adapted structural areas of the environment model is not carried out if, in an optional step 312, a texture for the adapted structural area is loaded from the memory.
- The adapted environment model is then displayed 314, with the determined and/or loaded texture optionally being displayed on the three-dimensional structure of the adapted environment model.
- The display 314 of the adapted environment model takes place within the specified area or the base area of the specified area around vehicle 100.
- Optionally, a projection surface which is at least partially vertical to a base area of the specified area is displayed outside of the adapted environment model, with at least a partial area of a captured image, in particular a camera image, being projected onto this projection surface.
- A size and/or a shape of the projection surface can optionally be adjusted depending on the vehicle speed, the steering angle and/or the distance data.
- FIG 4 shows a captured camera image of the forward-facing front camera 101 of the vehicle with the objects 401, 402, 403, 404 and 405 recognized in step 307.
- Objects 401, 402, 403, 404 and 405 are recognized by at least one first neural network trained for this purpose.
- Each recognized object can be assigned an object class, for example vehicle (401, 402 and 403), building (405) or tree (404).
- In FIG 5, the object orientations 501 and 502 of the detected objects 401, 402 and 403, determined in step 309 as a function of the camera image shown in figure 4, are shown as a broken line for category 501 "object orientation to the front" and as a dotted line for category 502 "object orientation to the rear", the object orientations 501 and 502 being recognized by at least one second neural network trained for this purpose.
- In FIG 6, the segments or object instances 601, 602, 603, 605 and 606, recognized in step 305 as a function of the camera image shown in figure 4 or of a sequence of camera images, are shown, the segments 601, 602, 603, 605 and 606 having been recognized by at least one third neural network trained for this purpose.
- Segment 601 represents, for example, an area that can be driven over by the vehicle.
- Segment 602 represents an object area and segment 603 a non-drivable area.
- A green-space area is represented by segment 605 and a sky area by segment 606.
- The first neural network and/or the second neural network and/or the third neural network can be replaced by a more general neural network, or by another recognition method, classification method or artificial intelligence, that recognizes objects, object orientations and segments alike.
- In FIG 7, a displayed environment model 701 is shown.
- Vehicles were recognized as objects, as a result of which the environment model was adapted by an object model 702 and 703 in each case.
- The object models 702 and 703 were inserted into the environment model 701, or the environment model was adapted by the object models 702 and 703, depending on the detected objects, the determined object orientations and the recognized segments.
- The environment model 701 accordingly has a structure adapted by the two object models 702 and 703.
- The adapted environment model 701 is displayed in figure 7 only within a predetermined square area 704 around a center point 705 of the vehicle, the vehicle 100 itself also being inserted as an additional object model into the environment model 701.
- Additional projection surfaces 802 can be arranged at the edge of and outside the displayed environment model 701.
- A partial area of the captured images, in particular of captured camera images, can be displayed on these projection surfaces 802.
- The partial areas of the images displayed on the projection surfaces 802 represent a long-distance view for the driver.
- a vehicle 100 is shown with a predetermined area arranged around the vehicle, which is represented by a base area 801 of the predetermined area, and a projection surface 802 .
- the base 801 of the predetermined area is shown in perspective and is square.
- the shape of the base area could also be elliptical or circular.
- the display can also be made with a perspective from above or diagonally from the side.
- the shape of the base area 801 of the predetermined area and/or the length a and/or the width b of the base area 801 are adapted, for example, as a function of the vehicle speed and/or the weather conditions and/or the visibility conditions, for example the brightness or the time of day.
- the projection surface 802 is curved and stands vertically, that is perpendicular, on the base surface 801 of the predetermined area.
- the projection surface 802 can be arranged as a non-curved plane on at least one side of the base surface 801 of the predetermined area, for example a projection surface 802 can be arranged on each side of the base surface 801 .
- the projection surface 802 can also be arranged closed around 360°, as a cylindrical lateral surface around the base surface 801 or around the predetermined area.
- the length c and/or the height d of the projection surface 802 are adjusted depending on the vehicle speed, for example.
- the adjustment of the length c and/or the height d of the projection surface 802 is symbolized in figure 8 by the arrows 804.
- the base area 801 is preferably also displayed as part of the environment model, the base area being determined in particular as a function of the captured images and/or the distance data, in particular the depth map and/or the point cloud, so that the base area 801 depicts, for example, bumps in a roadway.
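The speed-dependent adaptation of the base area dimensions a, b and the projection surface dimensions c, d described above can be sketched as follows. This is a minimal illustration; the linear mapping and all numeric bounds are assumptions, not values from the patent:

```python
def adapt_display_geometry(speed_kmh, min_len=10.0, max_len=50.0, max_speed=130.0):
    """Scale the base area length/width (a, b) and the projection surface
    length/height (c, d) with vehicle speed (assumed linear mapping)."""
    frac = max(0.0, min(speed_kmh / max_speed, 1.0))
    a = b = min_len + frac * (max_len - min_len)   # base area grows with speed
    c, d = 4.0 * a, 0.3 * a                        # projection surface follows the base area
    return {"a": a, "b": b, "c": c, "d": d}
```

At standstill the region collapses to the minimum extent; above `max_speed` it is capped at the maximum.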
Description
The present invention relates to a method for a sensor- and memory-based representation of the surroundings of a vehicle, a display device for carrying out the method, and a vehicle having the display device.
The document
The environment detection device creates a virtual three-dimensional environment model and displays this model.
The document
The document
Relevant prior art is also disclosed in
A camera of a vehicle cannot capture environmental areas behind objects in the surroundings of the vehicle. For example, it is not possible to display to the driver a rear view of another vehicle that was captured from the front by a camera. An environment model determined solely as a function of captured camera images, usually shown from a perspective obliquely from above, therefore typically presents the surroundings to the driver incompletely. Furthermore, in the case of tall and nearby objects, the known methods produce unnatural distortions in the displayed representation of the surroundings.
The object of the present invention is to improve the representation of the surroundings of a vehicle for a driver.
The above object is achieved by the features of independent claims 1 and 9.
The present invention relates to a method for a sensor- and memory-based representation of the surroundings of a vehicle, the vehicle having at least one imaging sensor for capturing the surroundings. The imaging sensor preferably comprises a camera. The method comprises capturing a sequence of images by means of the imaging sensor. Distance data are then determined as a function of the captured images, in particular a two-dimensional depth map and/or a three-dimensional point cloud. Alternatively or additionally, the distance data, in particular the two-dimensional depth map and/or the three-dimensional point cloud, can be determined as a function of distances between objects in the surroundings and the vehicle, captured by means of at least one distance sensor of the vehicle. The optional distance sensor comprises an ultrasonic, radar and/or lidar sensor. The distance data represent the captured and/or determined distances between the vehicle and objects in the surroundings of the vehicle. Determining the distances to the vehicle, or the distance data, by means of an active distance sensor, for example by means of the lidar sensor and/or the radar sensor and/or ultrasonic sensors, has the fundamental advantage over a camera-based distance determination that distances are reliably captured even in poor lighting conditions and/or poor weather conditions. Provision can be made for a sensor type (camera and/or ultrasonic sensor and/or lidar sensor and/or radar sensor) to be selected for determining the distance data as a function of the lighting conditions and/or the weather conditions and/or a vehicle speed.
Then, in a further method step, a three-dimensional structure of an environment model is generated as a function of the determined distance data, in particular the depth map and/or the determined point cloud. The structure of the environment model comprises, in particular, a three-dimensional grid. Furthermore, at least one object in the surroundings of the vehicle is recognized as a function of the captured images. The object is recognized by a first neural network. For example, vehicles, pedestrians, infrastructure objects such as traffic lights, and/or buildings are recognized as objects. Optionally, an object class of the recognized object and/or an object genus of the recognized object is additionally determined or recognized by the first neural network. Provision can be made, for example, for a vehicle class of a small car to be recognized as the object class and/or a manufacturer model of the small car to be recognized as the object genus. A synthetic object model is then loaded from an electronic memory as a function of the recognized object, the memory being arranged, for example, within a control unit of the vehicle. Optionally, the object model is additionally loaded as a function of the recognized object class and/or the recognized object genus. The synthetic object model can be a specific object model representing the recognized object, or a generic object model. The generic object model can be parameterized, or is modified as a function of the recognized object and/or the recognized object class and/or the recognized object genus and/or as a function of the distance data.
Thereafter, the generated three-dimensional structure of the environment model is adapted as a function of the loaded synthetic object model and the distance data, the synthetic object model replacing a structure region of the generated environment model. In other words, the generated environment model is extended by the loaded object models as a function of the captured and/or determined distances. This environment model, adapted by the synthetic object model, is displayed to the driver, the display preferably taking place on a display of the vehicle and/or on a display of a mobile electronic device. The method can advantageously reduce unnatural distortions in a view of the environment model, so that the displayed environment model appears more realistic and free of errors. Furthermore, by adapting the environment model using object models from the memory, areas that are not visible to a camera are represented realistically in the environment model, for example a view of another vehicle that was not captured by a camera. The method also enables the driver to assess a driving situation, a distance to an object or a parking space better and more quickly, which additionally increases driving comfort for the driver.
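The distance data mentioned above (two-dimensional depth map, three-dimensional point cloud) are related by a standard pinhole back-projection. A minimal sketch, assuming known camera intrinsics fx, fy, cx, cy, which the patent does not specify:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a 2D depth map (depth in meters per pixel) into an
    N x 3 point cloud using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The resulting point cloud can then serve as the basis for the three-dimensional grid of the environment model.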
In a preferred embodiment, an object orientation of the recognized object is determined as a function of the captured images; in particular, the object orientation is recognized by a second neural network and/or by another type of artificial intelligence or another classification method. In this embodiment, the generated three-dimensional structure of the environment model is additionally adapted as a function of the determined object orientation. As a result, the generated environment model is advantageously adapted more quickly and reliably.
In a particularly preferred embodiment, segments or object instances in the surroundings of the vehicle are recognized as a function of the captured images. The segments or object instances are preferably recognized by means of a third neural network and/or by another type of artificial intelligence or another classification method. The distances or depth information in the distance data are then assigned to the recognized segment or the recognized object instance. In this embodiment, the generated three-dimensional structure of the environment model is additionally adapted as a function of the recognized segments or object instances, in particular as a function of the segments or object instances assigned to the distances. As a result, the adaptation of the generated three-dimensional structure of the environment model as a function of the loaded synthetic object model is advantageously more precise and faster.
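The assignment of depths in the distance data to recognized segments can be sketched as a per-segment aggregation over a segmentation mask. The mean is an assumed aggregation choice; the patent does not name one:

```python
import numpy as np

def assign_distances_to_segments(depth, seg):
    """Map each segment id in the mask `seg` to a representative
    distance: the mean depth over that segment's pixels."""
    return {int(s): float(depth[seg == s].mean()) for s in np.unique(seg)}
```

The resulting per-segment distances can then steer which structure regions of the environment model are replaced by object models.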
A texture for the adapted three-dimensional structure of the environment model is determined as a function of the captured images, the captured images preferably being camera images. The adapted environment model is then displayed with the determined texture. For example, a color of a vehicle in the environment model is determined. Provision can be made for the determined texture to comprise captured camera images or perspectively modified camera images. As a result of this development, the environment model is displayed with realistic imaging and/or coloring, which makes orientation easy for the driver.
The texture for the structure region of the environment model adapted by the loaded object model is loaded from the memory or determined, in particular as a function of the recognized object or a recognized object class or a recognized object genus. For example, the texture of a manufacturer model of a vehicle is loaded from the electronic memory when a corresponding object genus has been recognized. As a result of this configuration, the environment model appears realistic to the driver. In addition, unnatural distortions in the texture are avoided.
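Selecting the texture source per structure region, a stored texture for a recognized object genus and the camera image otherwise, reduces to a lookup with a fallback. The `texture_store` mapping and the region dictionary are illustrative names, not from the patent:

```python
def texture_for_region(region, texture_store, camera_patch):
    """Use the stored texture of the recognized object genus when the
    memory holds one; otherwise fall back to the (perspective-adjusted)
    camera image patch for this structure region."""
    genus = region.get("object_genus")
    if genus in texture_store:
        return texture_store[genus]
    return camera_patch
```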
In a preferred development, the adapted environment model is displayed within a predetermined region around the vehicle. In other words, the environment model is only displayed within a region that is delimited by a predetermined distance around the vehicle. The predetermined distance around the vehicle is preferably less than or equal to 200 meters, particularly preferably 50 meters, in particular less than or equal to 10 meters. The predetermined region is defined, for example, by a base area of the predetermined region, which represents the predetermined distance or the predetermined region. A center point of the base area represents, in particular, a center point of the vehicle. The base area can have any shape; preferably, the base area has a square, elliptical or circular shape. This advantageously reduces the computational effort for generating the environment model as well as the computational effort for adapting it. Furthermore, this development advantageously achieves a low rate of unnatural distortions in the texture of the displayed environment model.
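Limiting the display to the predetermined region around the vehicle amounts to a membership test against the base area. A sketch for the square and circular shapes mentioned above, with the 10 m bound used as an example half-extent:

```python
import math

def inside_display_region(px, py, vx=0.0, vy=0.0, shape="square", half_extent=10.0):
    """True if point (px, py) lies within the base area of the
    predetermined region centred on the vehicle at (vx, vy)."""
    dx, dy = px - vx, py - vy
    if shape == "square":
        return max(abs(dx), abs(dy)) <= half_extent
    if shape == "circle":
        return math.hypot(dx, dy) <= half_extent
    raise ValueError(f"unsupported base area shape: {shape}")
```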
Provision can be made for a size of the predetermined region and/or a shape of the predetermined region and/or an observer perspective of the displayed environment model to be adapted as a function of a vehicle speed and/or a steering angle of the vehicle and/or the distance data, or as a function of a captured distance to an object.
In a further embodiment of the invention, at least one projection surface, arranged at least partially vertically to a base area of the predetermined region, is displayed outside of the adapted environment model. At least a partial region of a currently captured image is projected onto this projection surface, in particular at least a partial region of a captured camera image. The images shown on the displayed projection surface represent a view of the distant surroundings, that is, of an environmental region which lies outside the displayed environment model and further away from the vehicle than the predetermined distance delimiting the predetermined region.
In a further refinement, the projection surface is displayed as a function of the vehicle speed and/or the steering angle of the vehicle and/or the distance data, or as a function of a captured distance to an object. Advantageously, no distant view is then displayed on a projection surface during a parking maneuver, and the computational effort in this driving situation is consequently minimized. Alternatively or additionally, a size and/or a shape of the projection surface can be adapted as a function of the vehicle speed and/or the steering angle of the vehicle and/or the distance data. As a result, the driver can advantageously be shown, for example when driving at a higher speed, a narrow section of the environmental region lying ahead in the direction of travel, which minimizes the computational effort in a driving situation at increased speed and focuses the driver's attention on the region that is essential in this driving situation.
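The speed-dependent handling of the projection surface, no distant view while parking and a narrower front section at higher speed, can be sketched as follows. The parking threshold and the angular mapping are illustrative assumptions:

```python
def projection_surface_params(speed_kmh, parking_threshold=10.0):
    """Return None (no far-view surface) below the parking threshold,
    otherwise an angular width that narrows as speed increases."""
    if speed_kmh < parking_threshold:
        return None  # parking maneuver: omit the far view entirely
    width_deg = max(60.0, 180.0 - speed_kmh)  # narrower section when faster
    return {"width_deg": width_deg}
```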
The invention also relates to a display apparatus with a display, which is set up to carry out a method according to the invention. The display apparatus preferably comprises an imaging sensor, in particular a camera, and a control unit. The control unit is set up to carry out the method according to the invention, that is, to capture the images, to generate and adapt the displayed environment model, and to control the display for displaying the adapted environment model.
The invention also relates to a vehicle having the display apparatus.
Further advantages emerge from the following description of exemplary embodiments with reference to the figures.
- Figure 1:
- vehicle
- Figure 2:
- control unit
- Figure 3:
- Flow chart of a method according to the invention
- Figure 4:
- Image with detected objects
- Figure 5:
- Detected object orientation depending on the image
figure 4 - Figure 6:
- Detected segments depending on the image
figure 4 - Figure 7:
- Example of an environment model displayed
- Figure 8:
- Base area of the specified area for displaying the environment model and projection area
Additional projection surfaces 802 can be arranged at the edge of and outside the displayed environment model 701. A partial area of the captured images, in particular captured camera images, can be displayed on these projection surfaces 802. The partial areas of the images displayed on the projection surfaces 802 represent a distant view for a driver.
Claims (9)
- Method for a sensor-based and memory-based representation of a surround of a vehicle (100), the vehicle (100) having at least one imaging sensor, more particularly a camera (101, 102), for capturing the surround, comprising the following method steps:
  • capturing (301) a sequence of images, in particular camera images,
  • determining (303) distance data on the basis of the captured images and/or a distance sensor (103, 104) of the vehicle (100), the distance data comprising distances between the vehicle and objects in the surround of the vehicle,
  • generating (304) a three-dimensional structure of a surround model (701) on the basis of the distance data,
  • recognizing (307), by way of a first neural network, at least one object (108a, 108b, 401, 402, 403, 404, 405) in the surround of the vehicle (100) on the basis of the captured images,
  • loading (308) a synthetic object model (702, 703) on the basis of the recognized object (108a, 108b, 401, 402, 403, 404, 405),
  • adapting (310) the generated three-dimensional structure of the surround model (701) on the basis of the synthetic object model (702, 703) and on the basis of the distance data, the synthetic object model replacing a structure region of the generated surround model,
  • ascertaining (311) a texture for the adapted three-dimensional structure of the surround model (701) on the basis of the captured images, with the texture for the structure region of the surround model adapted by the loaded object model being loaded from the memory on the basis of the recognized object, and
  • displaying (314) the adapted surround model (701) with the ascertained texture.
- Method according to Claim 1, characterized in that the following steps are carried out:
  • ascertaining (309), in particular by way of a neural network, an object alignment (501, 502) of the recognized object (108a, 108b, 401, 402, 403, 404, 405) on the basis of the captured images, and
  • additionally adapting (310) the generated three-dimensional structure of the surround model (701) on the basis of the ascertained object alignment (501, 502).
- Method according to either of the preceding claims, characterized in that the following steps are carried out:
  • recognizing (305), in particular by way of a neural network, an object entity (601, 602, 603, 605, 606) in the surround of the vehicle on the basis of the captured images,
  • assigning (306) the distances in the distance data to a recognized object entity (601, 602, 603, 605, 606), and
  • additionally adapting (310) the generated three-dimensional structure of the surround model (701) on the basis of the object entity (601, 602, 603, 605, 606) assigned in the distance data.
- Method according to any one of the preceding claims, characterized in that the adapted surround model (701) is displayed (314) within a predetermined region around the vehicle (100).
- Method according to Claim 4, characterized in that a size of the predetermined region and/or a shape of the predetermined region and/or a display perspective of the adapted surround model (701) is adapted on the basis of a vehicle speed and/or the distance data.
- Method according to either of Claims 4 and 5, characterized in that the following method step is carried out:
  • displaying a projection surface (802), which is arranged at least partially perpendicular to a footprint (801) of the predetermined region, outside of the adapted surround model (701), with at least a partial region of a captured image, in particular of a camera image, being projected onto the projection surface (802).
- Method according to Claim 6, characterized in that the projection surface (802) is displayed on the basis of a vehicle speed and/or the distance data.
- Display apparatus, wherein the display apparatus is configured to carry out a method according to any one of Claims 1 to 7.
- Vehicle having a display apparatus according to Claim 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102018210812.9A DE102018210812A1 (en) | 2018-06-30 | 2018-06-30 | Method for a sensor- and memory-based representation of an environment, display device and vehicle with the display device |
PCT/EP2019/061953 WO2020001838A1 (en) | 2018-06-30 | 2019-05-09 | Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3815044A1 EP3815044A1 (en) | 2021-05-05 |
EP3815044B1 true EP3815044B1 (en) | 2023-08-16 |
Family
ID=66554356
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19724396.7A Active EP3815044B1 (en) | 2018-06-30 | 2019-05-09 | Method for sensor and memory-based depiction of an environment, display apparatus and vehicle having the display apparatus |
Country Status (5)
Country | Link |
---|---|
US (1) | US11580695B2 (en) |
EP (1) | EP3815044B1 (en) |
CN (1) | CN112334947A (en) |
DE (1) | DE102018210812A1 (en) |
WO (1) | WO2020001838A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102018214875A1 (en) * | 2018-08-31 | 2020-03-05 | Audi Ag | Method and arrangement for generating an environmental representation of a vehicle and vehicle with such an arrangement |
TWI760678B (en) * | 2020-01-16 | 2022-04-11 | 國立虎尾科技大學 | Smart curb parking meter, management system and payment method |
DE102022204313A1 (en) * | 2022-05-02 | 2023-11-02 | Volkswagen Aktiengesellschaft | Method and device for generating an image of the environment for a parking assistant of a vehicle |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3102250B2 (en) * | 1994-02-14 | 2000-10-23 | 三菱自動車工業株式会社 | Ambient information display device for vehicles |
US7161616B1 (en) | 1999-04-16 | 2007-01-09 | Matsushita Electric Industrial Co., Ltd. | Image processing device and monitoring system |
ATE311725T1 (en) * | 2001-09-07 | 2005-12-15 | Matsushita Electric Ind Co Ltd | DEVICE FOR DISPLAYING THE SURROUNDINGS OF A VEHICLE AND SYSTEM FOR PROVIDING IMAGE |
FR2853121B1 (en) | 2003-03-25 | 2006-12-15 | Imra Europe Sa | DEVICE FOR MONITORING THE SURROUNDINGS OF A VEHICLE |
DE102008034594B4 (en) | 2008-07-25 | 2021-06-24 | Bayerische Motoren Werke Aktiengesellschaft | Method and information system for informing an occupant of a vehicle |
US8892358B2 (en) * | 2013-03-14 | 2014-11-18 | Robert Bosch Gmbh | System and method for distortion correction in three-dimensional environment visualization |
US9013286B2 (en) | 2013-09-23 | 2015-04-21 | Volkswagen Ag | Driver assistance system for displaying surroundings of a vehicle |
DE102014208664A1 (en) * | 2014-05-08 | 2015-11-12 | Conti Temic Microelectronic Gmbh | METHOD AND DEVICE FOR DISABLING DISPLAYING A VEHICLE ENVIRONMENT ENVIRONMENT |
US10055643B2 (en) * | 2014-09-19 | 2018-08-21 | Bendix Commercial Vehicle Systems Llc | Advanced blending of stitched images for 3D object reproduction |
CN107563256A (en) * | 2016-06-30 | 2018-01-09 | 北京旷视科技有限公司 | Aid in driving information production method and device, DAS (Driver Assistant System) |
US10394237B2 (en) * | 2016-09-08 | 2019-08-27 | Ford Global Technologies, Llc | Perceiving roadway conditions from fused sensor data |
US10330787B2 (en) * | 2016-09-19 | 2019-06-25 | Nec Corporation | Advanced driver-assistance system |
CN107944390B (en) * | 2017-11-24 | 2018-08-24 | 西安科技大学 | Motor-driven vehicle going objects in front video ranging and direction localization method |
- 2018
  - 2018-06-30: DE DE102018210812.9A patent/DE102018210812A1/en active Pending
- 2019
  - 2019-05-09: CN CN201980044427.2A patent/CN112334947A/en active Pending
  - 2019-05-09: EP EP19724396.7A patent/EP3815044B1/en active Active
  - 2019-05-09: WO PCT/EP2019/061953 patent/WO2020001838A1/en unknown
  - 2019-05-09: US US17/049,327 patent/US11580695B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
WO2020001838A1 (en) | 2020-01-02 |
DE102018210812A1 (en) | 2020-01-02 |
US20210327129A1 (en) | 2021-10-21 |
US11580695B2 (en) | 2023-02-14 |
CN112334947A (en) | 2021-02-05 |
EP3815044A1 (en) | 2021-05-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20210201 |
|
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06V 20/00 20220101ALI20230301BHEP Ipc: G06T 19/20 20110101ALI20230301BHEP Ipc: G06T 17/20 20060101ALI20230301BHEP Ipc: G06T 7/55 20170101AFI20230301BHEP |
|
INTG | Intention to grant announced |
Effective date: 20230323 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: CH
Ref legal event code: EP
Ref country code: DE
Ref legal event code: R096
Ref document number: 502019008958
Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE
Ref legal event code: FG4D
Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
REG | Reference to a national code |
Ref country code: LT
Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL
Ref legal event code: MP
Effective date: 20230816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20231117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20231216 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT, in the contracting states SE (effective 20230816), RS (20230816), PT (20231218), NO (20231116), NL (20230816), LV (20230816), LT (20230816), IS (20231216), HR (20230816), GR (20231117) and FI (20230816) |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20230816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20230816 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT, in the contracting states SM, RO, ES, EE, DK, CZ and SK (all effective 20230816) |