EP4208372A1 - Method for displaying the surroundings of a vehicle on a display device, processing unit and vehicle - Google Patents

Method for displaying the surroundings of a vehicle on a display device, processing unit and vehicle

Info

Publication number
EP4208372A1
EP4208372A1 (application EP21769721.8A)
Authority
EP
European Patent Office
Prior art keywords
display device
overlay structure
vehicle
displayed
round
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21769721.8A
Other languages
German (de)
English (en)
Inventor
Tobias KLINGER
Ralph-Carsten Lülfing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZF CV Systems Global GmbH
Original Assignee
ZF CV Systems Global GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZF CV Systems Global GmbH filed Critical ZF CV Systems Global GmbH
Publication of EP4208372A1 publication Critical patent/EP4208372A1/fr
Pending legal-status Critical Current

Classifications

    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements providing all-round vision, e.g. using omnidirectional cameras
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T7/50 Image analysis: depth or shape recovery
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • B60R2300/105 Viewing arrangements using cameras and displays, specially adapted for use in a vehicle, using multiple cameras
    • B60R2300/305 Image processing using merged images, merging camera image with lines or icons
    • B60R2300/607 Monitoring and displaying vehicle exterior scenes from a transformed perspective, from a bird's eye viewpoint
    • G06T2207/30248 Indexing scheme for image analysis: vehicle exterior or interior

Definitions

  • The invention relates to a method for displaying the surroundings of a vehicle, in particular a commercial vehicle, on a display device, a processing unit for carrying out the method, and a vehicle.
  • DE 10 2017 108 254 B4 describes how to create an image composed of individual images and display it on a display.
  • The individual images are each recorded by one of several cameras.
  • Depth information relating to an object, in particular a position of the object, can be determined by triangulating two or more images.
  • The objects can also be tracked over time.
  • A first image from the first camera and a second image from the second camera are projected via a homography matrix onto a ground plane and a reference plane, respectively, before the first image and the second image are rotated to create a combined image of the environment.
  • DE 100 35 223 A1 describes creating an overall image or all-round view image from a number of individual images and projecting the vehicle itself into the all-round view image as an artificial graphic object or overlay structure.
  • EP 3 293 700 B1 describes reconstructing parts of the environment from a number of individual images from a camera using a structure-from-motion method and thus determining depth information on individual objects. A quality metric is determined in order to obtain an optimal reconstruction of the environment. By combining several cameras or individual images, this can also be done in an all-round view.
  • DE 10 2018 100 909 A1 describes how the structure-from-motion method is used to obtain a reconstruction of the current environment and to classify or categorize objects with a neural network.
  • The object of the invention is to specify a method for displaying the surroundings of a vehicle on a display device which can be carried out with little hardware and computing effort and enables an observer to orientate himself easily in the environment.
  • A further object is to specify a processing unit and a vehicle.
  • According to the invention, a method for displaying the surroundings of a vehicle on a display device, and also a processing unit for carrying out the method, are provided with at least the following steps:
  • The environment around the vehicle is recorded, particularly in a close-up area, with at least two cameras, each camera having a different field of view, with the fields of view of neighboring cameras overlapping at least in certain areas, in particular at the edges.
  • An all-round view image is then created from the individual images determined in this way, with each individual image being recorded by a different camera at approximately the same time and the individual images for creating the all-round view image being projected into a reference plane, for example using a homography matrix.
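The projection of the individual images into the reference plane can be illustrated with a minimal sketch of applying a 3×3 homography matrix to pixel coordinates; the matrix H would in practice come from camera calibration, and the values used here are purely hypothetical:

```python
import numpy as np

def project_to_reference_plane(points_px, H):
    """Map pixel coordinates into the reference plane using a 3x3
    homography matrix H (H would come from camera calibration;
    the values used here are purely illustrative)."""
    pts = np.asarray(points_px, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coords
    mapped = homog @ H.T                                  # apply homography
    return mapped[:, :2] / mapped[:, 2:3]                 # dehomogenize

# with the identity homography, points are left unchanged
print(project_to_reference_plane([[10.0, 20.0]], np.eye(3)))  # -> [[10. 20.]]
```

Creating the all-round view image then amounts to applying one such mapping per camera and compositing the warped images in the common reference plane.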
  • Depth information is determined for at least one object in the recorded environment, the depth information being obtained by triangulation from at least two different individual images from the same camera, preferably by a so-called structure-from-motion (SfM) method, the at least one object being imaged in the at least two different individual images, preferably from at least two different viewpoints.
  • A stereo camera system is therefore not needed to determine the depth information. Rather, a perspective reconstruction of the surroundings or the object is obtained merely by image processing of the individual images of a single camera, from which the depth information can be derived.
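The core of this single-camera depth recovery can be sketched in simplified form, assuming rectified geometry and purely sideways camera motion; the focal length, pixel shift and baseline below are hypothetical values, not from the patent:

```python
def depth_from_motion(x1_px, x2_px, focal_px, baseline_m):
    """Depth of a feature tracked across two frames of the SAME camera,
    which moved sideways by baseline_m between the shots (simplified,
    rectified two-view geometry; all numeric values are hypothetical)."""
    disparity = abs(x1_px - x2_px)  # shift of the feature between the frames
    if disparity == 0:
        return float("inf")  # no parallax: point effectively at infinity
    return focal_px * baseline_m / disparity

# a feature shifts by 25 px for 0.5 m of camera motion at 800 px focal length
print(depth_from_motion(400.0, 425.0, 800.0, 0.5))  # -> 16.0 (metres)
```

The same relation underlies a full SfM pipeline, which additionally estimates the camera motion itself and handles arbitrary (non-sideways) displacements.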
  • At least one overlay structure is generated as a function of the depth information previously determined by the SfM method, with each overlay structure being uniquely assigned to an imaged object.
  • The created all-round view image containing the at least one object, together with the at least one generated overlay structure, is then displayed on the display device in such a way that the at least one overlay structure is displayed on and/or adjacent to the respectively assigned object.
  • An overlay structure can therefore advantageously be displayed on the display device without additional sensors, based solely on methods of image processing, in particular structure-from-motion. Using the previously determined depth information, the overlay structure can be displayed in a targeted manner at, or adjacent to, the position on the display device at which the object is also displayed. Even if the respective object in the all-round view image is not displayed on the display device in a geometrically interpretable manner, the overlay structure can help to ensure that the all-round view image can be used for reliable orientation, since the overlay structure emphasizes the essential information relating to an object on the display device. A distortion of raised objects in the all-round view image that occurs as a result of the projection into the reference plane can at least be compensated for in this way.
  • The display of the overlay structure can also solve the problem that objects in the overlapping areas of individual images from two adjacent cameras often "disappear" or are perceived insufficiently, since these objects are tilted backwards in the respective camera perspective and usually do not appear in the all-round view image at all, or at least cannot be perceived adequately.
  • Such an object is still contained in the overlay structures generated according to the method described above and is therefore also indicated in the all-round view image.
  • The outer edge is preferably understood to mean an outer boundary of the respective object, with the bar or the respective overlay structure being superimposed at least on that outer edge or boundary that is closest to the host vehicle.
  • A boundary of the object in the direction of the host vehicle can thus be identified on the all-round view image by means of a bar.
  • The observer can then clearly see up to which point in space the host vehicle can be maneuvered or positioned without touching the object, for example during a parking or maneuvering operation.
  • A polygon can also be displayed on the display device as an overlay structure, in such a way that the polygon at least partially, preferably completely, spans the respectively assigned object.
  • If the object contour or object shape is known, the polygon can be adapted to it. If the object contour or the object shape is not known, a rectangle can be assumed as the polygon, for example, which covers the object points of the respective object depicted in the all-round view image.
  • The viewer can also be provided with information regarding the spatial characteristics of the object by means of color coding.
  • An overlay structure on the display device that is further away from the host vehicle can be given a color which indicates a low level of danger, for example green.
  • An overlay structure that is closer to the host vehicle on the display device can be given a color that indicates a higher risk, for example red. Any color gradations are possible depending on the object distance. Provision can preferably also be made for the color and/or the type of overlay structure of the respective object to depend on a movement indicator assigned to the object, the movement indicator indicating whether the respective object can move, e.g. a person or a vehicle, or is permanently stationary, for example a building or a street lamp, the movement indicator being obtained from the ascertained depth information with regard to the respectively associated object. As a result, it can additionally be highlighted on the display device which additional danger can emanate from the displayed object due to a potential movement, if this object cannot be clearly recognized directly on the all-round view image due to the distortion.
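Such a distance-dependent color coding could look like the following sketch; the linear red-to-green ramp and the 10 m range (matching the close-up area) are illustrative assumptions, not values fixed by the patent:

```python
def danger_color(distance_m, max_dist_m=10.0):
    """RGB color for an overlay structure: red for objects close to the
    vehicle, green for distant ones, linearly blended in between.
    The 10 m range and the linear ramp are illustrative choices."""
    t = max(0.0, min(1.0, distance_m / max_dist_m))  # clamp to [0, 1]
    return (int(round(255 * (1.0 - t))), int(round(255 * t)), 0)

print(danger_color(0.0))   # -> (255, 0, 0): high risk, red
print(danger_color(10.0))  # -> (0, 255, 0): low risk, green
```

Any other monotone mapping from object distance to color would serve the same purpose; the movement indicator could, for example, shift the ramp toward red for movable objects.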
  • To derive the movement indicator for the object in question, object points on the object depicted in the individual images are tracked over time. Accordingly, by forming difference images, for example, it can be determined how individual image pixels behave over time, and a movement of the object can be inferred from this. Provision can preferably also be made for the at least one overlay structure to be displayed opaquely or at least partially transparently on the display device, so that the at least one overlay structure completely or at least partially covers the all-round view image on and/or adjacent to the respectively assigned object, depending on the transparency.
  • The respective object can thus be emphasized by the overlay structure, while at the same time the viewer is given the opportunity to still recognize the object lying behind it, in order to make a plausible assessment of the danger that the object can pose.
  • The transparency of the overlay structure can also be defined, in a manner similar to the color coding, as a function of the depth information relating to the object. For example, objects with a larger object distance from the vehicle can be displayed with greater transparency than objects with a small object distance from the vehicle, as a result of which more relevant objects are highlighted more strongly.
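Analogously to the color coding, the distance-dependent transparency can be sketched as a simple opacity ramp; the 10 m range and the 0.2 minimum opacity are illustrative assumptions:

```python
def overlay_alpha(distance_m, max_dist_m=10.0):
    """Overlay opacity as a function of object distance: near objects get
    a nearly opaque overlay, far ones a faint one. The 10 m range and
    the 0.2 minimum opacity are illustrative assumptions."""
    t = max(0.0, min(1.0, distance_m / max_dist_m))  # clamp to [0, 1]
    return round(1.0 - 0.8 * t, 3)

print(overlay_alpha(0.0))   # -> 1.0 (object next to the vehicle, opaque)
print(overlay_alpha(10.0))  # -> 0.2 (distant object, faint overlay)
```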
  • The display device has display pixels, with all-round view pixels of the all-round view image being displayed on the display pixels of the display device. An object contained in the all-round view image is displayed on object pixels, the object pixels being a subset of the display pixels, and the overlay structure assigned to the respective object is displayed or superimposed on and/or adjacent to the respective object pixels on the display device.
  • The overlay structures are preferably superimposed by overlaying an overlay image containing at least one overlay structure onto the all-round view image containing the at least one object, on the display device, in such a way that the overlay structure assigned to the respective object is displayed on and/or adjacent to the respective object pixels.
  • For this purpose, a pre-processing is first carried out using the depth information, in such a way that an overlay image is created in which the overlay structures are placed at the positions at which, or adjacent to which, the objects are displayed in the all-round view image.
  • These two images can then be displayed simultaneously on the display device in order to achieve the superimposition according to the invention.
  • Alternatively, the all-round view image itself can contain the at least one overlay structure, the all-round view image being adapted at and/or adjacent to the all-round view pixels on which an object is depicted, in such a way that the overlay structure assigned to the respective object is displayed on and/or adjacent to the respective object pixels on the display device.
  • In this case, only a single image, which has been "provided" with the overlay structures in advance by appropriately "manipulating" its pixels, is sent to the display device for display.
  • The vehicle itself can thus also be viewed as an object to which overlay structures are assigned, which the viewer can use to orientate himself.
  • The position of the isolines depends on the spatial information that is determined via the individual images. For example, the isolines can be displayed at intervals of 1 m around the host vehicle.
  • The isolines can also be color-coded with color gradations, so that isolines closer to the vehicle are shown in red and isolines further away are shown in green, with a corresponding color gradient for isolines in between.
  • The transparency of the isolines can also vary depending on the isoline distance, in order to make the distances to the vehicle easily recognizable to the viewer.
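Assigning a ground point to one of these distance isolines can be sketched as follows; the radial distance model and the 1 m spacing (taken from the example above) are illustrative simplifications:

```python
def isoline_index(dx_m, dy_m, spacing_m=1.0):
    """Index of the distance isoline that a ground point around the
    vehicle falls into, using the 1 m spacing mentioned as an example;
    the purely radial distance model is an illustrative simplification."""
    dist = (dx_m ** 2 + dy_m ** 2) ** 0.5  # distance from the vehicle reference point
    return int(dist // spacing_m)

# a point 3 m to the side and 4 m ahead lies 5 m away -> ring index 5
print(isoline_index(3.0, 4.0))  # -> 5
```

The ring index could then feed the same color and transparency ramps used for the object overlays.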
  • A vehicle according to the invention, in which the method according to the invention can be carried out, therefore has at least two cameras, each camera having a different field of view, with the fields of view of adjacent cameras overlapping at least partially, in particular at the edges.
  • Furthermore, a display device and a processing unit according to the invention are provided in the vehicle, the display device being designed to display a created all-round view image containing at least one object, together with at least one generated overlay structure (as part of the all-round view image or as a separate overlay image), in such a way that the at least one overlay structure is displayed on and/or adjacent to the respectively assigned object.
  • Each individual camera preferably has a field of view with a viewing angle of at least 120°, in particular at least 170°, the camera being designed, for example, as a fisheye camera. The cameras are arranged on at least two sides of the vehicle, selected from the group consisting of a front side, a rear side and at least one longitudinal side.
  • The individual cameras are preferably aimed at a close-up area of the environment, i.e. at the ground, in order to enable a bird's-eye view.
  • FIG. 1 shows a schematic view of a vehicle for carrying out the method according to the invention;
  • FIG. 2a shows a detailed view of an object imaged in two individual images from a single camera;
  • FIG. 2b shows a detailed view of a display device in the vehicle according to FIG. 1;
  • FIG. 3 shows a detailed view of the environment displayed on the display device.
  • FIG. 1 shows a vehicle 1, in particular a commercial vehicle, which, according to the embodiment shown, has a front camera 3a on a front side 2a, for example in the roof liner, and a rear camera 3b on a rear side 2b. Furthermore, side cameras 3c are arranged on the longitudinal sides 2c of the vehicle 1, for example on the mirrors. Additional cameras 3 (not shown) can also be provided in the vehicle 1 in order to capture an environment U, in particular a close-up area N (environment up to 10 m away from the vehicle 1).
  • Each camera 3 has a field of vision 4, with a front field of vision 4a of the front camera 3a looking forward, a rear field of vision 4b of the rear camera 3b looking backward, and a side field of vision 4c of each side camera 3c being aligned to the respective side of the vehicle 1.
  • The cameras 3 are aligned towards the ground on which the vehicle 1 is moving.
  • The number and the position of the cameras 3 are preferably chosen such that the fields of vision 4 of adjacent cameras 3 overlap in the close-up area N, so that all fields of vision 4 together can cover the close-up area N without gaps and thus over the entire area.
  • The cameras 3 can each be designed, for example, as fisheye cameras, which can each cover a field of vision 4 with a viewing angle W of equal to or greater than 170°.
  • Each camera 3 outputs image signals SB that characterize the surroundings U in the field of vision 4 imaged on the sensor of the respective camera 3.
  • The image signals SB are output to a processing unit 6, the processing unit 6 being designed to generate individual images EBk (running index k) for each camera 3 on the basis of the image signals SB.
  • The kth individual image EBk has a number Ni of individual image pixels EBkPi (running index i from 0 to Ni) on which the surroundings U are imaged.
  • Certain individual image pixels EBkPi are assigned to object points PPn (running index n) which belong to an object O in the environment U according to FIG. 2a.
  • The individual images EBk from the different cameras 3 are combined by projecting them into a reference plane RE, for example a horizontal plane under the vehicle 1 (cf. a plane parallel to the plane spanned by xO and yO in FIG. 2a): an all-round view image RB with a number Np of all-round view image pixels RBPp (running index p from 0 to Np) is created by a perspective transformation, e.g. as a function of a homography matrix, using an all-round view algorithm A1.
  • In the all-round view image RB, the surroundings U around the vehicle 1 are displayed without gaps on all sides, at least in the close-up range N (cf. FIG. 2b).
  • The all-round view image RB results in an all-round view field of view 4R, which is larger than the individual fields of vision 4 of the cameras 3.
  • This all-round view image RB of the close-up area N can be output on a display device 7 to an occupant, for example the driver of the vehicle 1, so that the driver can orientate himself, for example when parking or maneuvering.
  • The environment U can thereby be presented to the viewer in a bird's-eye view.
  • The display device 7 has a number Nm of display pixels APm (running index m from 0 to Nm), with each all-round view image pixel RBPp being displayed on a specific display pixel APm of the display device 7, so that the all-round view image RB visible to the viewer results on the display device 7.
  • A dynamic subset of the display pixels APm are object pixels OAPq (running index q), on which an object O from the environment U is displayed (shown only schematically in FIG. 2b). All-round view image pixels RBPp on which a specific object O from the environment U, or a specific object point PPn, is imaged are therefore assigned to the object pixels OAPq.
  • The application of the all-round view algorithm A1 can result in distortions occurring at the image edges of the all-round view image RB.
  • To address this, further information, which results from depth information TI for the objects O shown, is superimposed on the created all-round view image RB.
  • The depth information TI is obtained from a number of individual images EBk from a single camera 3 using the so-called structure-from-motion (SfM) method. Depth information TI is therefore extracted camera by camera, for each camera 3 individually.
  • For this, the relevant three-dimensional object O in the environment U with its object points PPn is recorded by the respective camera 3 from at least two different viewpoints SP1, SP2, as shown in FIG. 2a.
  • The depth information TI relating to the respective three-dimensional object O can then be obtained by triangulation T:
  • Image coordinates xB, yB are determined for at least one first single-image pixel EB1P1 in a first single image EB1, e.g. of the front camera 3a, and at least one second single-image pixel EB2P1 in a second single image EB2, also of the front camera 3a.
  • Both individual images EB1, EB2 are recorded by the front camera 3a at different positions SP1, SP2, i.e. the vehicle 1 or the front camera 3a moves by a base length L between the individual images EB1, EB2.
  • The two individual image pixels EB1P1, EB2P1 are selected in the respective individual images EB1, EB2 in such a way that they are assigned to the same object point PPn on the depicted three-dimensional object O.
  • In this way, one or more pairs of individual image pixels EB1Pi, EB2Pi for one or more object points PPn can be determined for one or more objects O in the environment U.
  • A certain number of individual image pixels EB1Pi, EB2Pi in the respective individual image EB1, EB2 can also be combined into a feature point MP1, MP2 (see FIG. 2a), the individual image pixels to be combined being selected in such a way that the respective feature point MP1, MP2 is assigned to a specific, clearly localizable feature M of the three-dimensional object O.
  • The feature M can be, for example, a corner ME or an outer edge MK on the three-dimensional object O (cf. FIG. 2a).
  • By means of the triangulation T, the absolute, actual object coordinates xO, yO, zO (world coordinates) of the three-dimensional object O, or of the object point PPn or the feature M, are calculated or estimated.
  • For this, a correspondingly determined base length L between the positions SP1, SP2 of the front camera 3a is used.
  • Both a position and an orientation, i.e. a pose, of the vehicle 1 relative to the respective three-dimensional object O can then be determined from geometric considerations, provided the triangulation T has been carried out for a sufficient number of object points PPn or features M of an object O.
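Geometrically, the triangulation T amounts to intersecting the two viewing rays from the positions SP1, SP2. Reduced to two dimensions it can be sketched as follows; in practice the ray directions would come from the calibrated camera model, and the positions and directions below are hypothetical:

```python
import numpy as np

def triangulate(sp1, sp2, dir1, dir2):
    """Intersect the two viewing rays from camera positions sp1 and sp2
    (2-D sketch of the triangulation; real code would intersect
    calibrated 3-D rays and handle near-parallel directions)."""
    sp1, sp2 = np.asarray(sp1, float), np.asarray(sp2, float)
    dir1, dir2 = np.asarray(dir1, float), np.asarray(dir2, float)
    # solve sp1 + t1*dir1 = sp2 + t2*dir2 for the ray parameters t1, t2
    A = np.column_stack([dir1, -dir2])
    t1, _ = np.linalg.solve(A, sp2 - sp1)
    return sp1 + t1 * dir1

# camera moved by a base length L = 2 m along x between the two shots
print(triangulate([0, 0], [2, 0], [1, 1], [-1, 1]))  # -> [1. 1.]
```

Repeating this for many object points yields the point cloud from which the pose, object shape and object contour can be estimated.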
  • The processing unit 6 can also at least estimate an object shape FO and/or an object contour CO if the exact object coordinates xO, yO, zO of a plurality of object points PPn or features M of an object O are known.
  • The object shape FO and/or the object contour CO can be supplied to a deep learning algorithm A2 for further processing.
  • In the same way, objects O and their object coordinates xO, yO, zO can also be detected by any other camera 3 in the vehicle 1, and their position and orientation in space can be determined via these coordinates.
  • It can additionally be provided that more than two individual images EB1, EB2 are recorded with the respective camera 3 and evaluated by triangulation T as described above, and/or that a bundle adjustment is additionally carried out.
  • For the SfM method, the object O is viewed from at least two different viewpoints SP1, SP2 by the respective camera 3, as shown schematically in FIG. 2a.
  • To this end, the respective camera 3 is to be moved into the different positions SP1, SP2 in a controlled manner.
  • Based on odometry data OD, it can be determined which base length L results from this movement between the positions SP1, SP2. Different methods can be used for this:
  • The vehicle 1 can be set in motion in its entirety, either actively, for example by the drive system, or passively, for example by a slope. If at least two individual images EB1, EB2 are recorded by the respective camera 3 during this movement with a time offset, the base length L can be determined using odometry data OD, from which the vehicle movement, and thus also the camera movement, can be derived. The two positions SP1, SP2 assigned to the individual images EB1, EB2 are thus determined by odometry.
  • For example, wheel speed signals S13 from active and/or passive wheel speed sensors 13 on the wheels of the vehicle 1 can be used as odometry data OD.
  • These can be used to determine how far the vehicle 1 or the respective camera 3 has moved between the positions SP1, SP2, from which the base length L follows.
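In the simplest case, deriving the base length L from wheel odometry reduces to speed times frame offset, as in this sketch; straight-line motion is assumed, and the numeric values are hypothetical:

```python
def base_length_from_wheel_odometry(wheel_speed_mps, dt_s):
    """Base length L travelled between the two exposures, from the wheel
    speed and the time offset between the frames (straight-line motion
    assumed; the steering angle LW / yaw rate G would be needed to
    account for rotation on curved paths)."""
    return wheel_speed_mps * dt_s

# 2.5 m/s at a 0.2 s frame offset gives a base length of 0.5 m
print(base_length_from_wheel_odometry(2.5, 0.2))  # -> 0.5
```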
  • In addition, further odometry data OD available in the vehicle 1 can be accessed.
  • For example, a steering angle LW and/or a yaw rate G, which are correspondingly determined by sensors or analytically, can be used in order to also take into account the rotational movement of the vehicle 1.
  • Visual odometry can also be used.
  • In this case, a camera position can be continuously determined from the image signals SB of the respective camera 3, or from information in the captured individual images EB1, EB2, insofar as object coordinates xO, yO, zO of a specific object point PPn, for example, are known at least at the beginning.
  • The odometry data OD can therefore also depend on the camera position determined in this way, since the vehicle movement between the two positions SP1, SP2, or directly the base length L, can be derived from it.
  • Instead of moving the entire vehicle 1, an active adjustment of the camera 3 can also be provided without changing the state of motion of the entire vehicle 1. Accordingly, any movements of the respective camera 3 are possible in order to bring it into the different positions SP1, SP2 in a controlled and measurable manner.
  • Overlay structures 20 can subsequently be superimposed on the all-round view image RB, as shown in FIG. 3.
  • The superimposition can take place in such a way that the all-round view image RB is transmitted to the display device 7 via an all-round view image signal SRB, and an overlay image OB to be superimposed, with the respective overlay structures 20, is transmitted via an overlay signal SO.
  • The display device 7 then displays both images RB, OB on the corresponding display pixels APm, for example by pixel addition or pixel multiplication or any other pixel operation.
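One possible such pixel operation is a per-pixel alpha blend of the overlay image onto the all-round view image, sketched below; the patent leaves the concrete operation open, and the pixel values are hypothetical:

```python
def blend_pixel(rb_px, ob_px, alpha):
    """Alpha-blend an overlay-image pixel (OB) onto the corresponding
    all-round view image pixel (RB) -- one possible 'pixel operation';
    the patent leaves the concrete operation open."""
    return tuple(round(alpha * o + (1.0 - alpha) * r)
                 for r, o in zip(rb_px, ob_px))

# a half-transparent red bar blended over a grey road pixel
print(blend_pixel((100, 100, 100), (255, 0, 0), 0.5))  # -> (178, 50, 50)
```

With alpha driven by the object distance, this single operation realizes both the superimposition and the distance-dependent transparency described above.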
  • the all-round view image RB can also be changed or adjusted directly in the processing unit 6 at the corresponding all-round view pixels RBPp, so that the display device 7 can use the all-round view image signal SRB to show an all-round view image RB that contains the overlay structures 20 includes, is transmitted for display.
  • the additional overlay structures 20 are uniquely assigned to a specific object O in the environment U.
  • the observer can thus be presented with additional information about the respective object O, which makes orientation in the environment U based on the display more convenient.
  • the overlay structure 20 can be, for example, a bar 20a and/or a polygon 20b and/or a text 20c, which can additionally be encoded depending on the respectively associated depth information TI.
  • the superimposition takes place in such a way that the overlay structure 20 appears on or adjacent to the object pixels OAPq of the display device 7 which are assigned to the object O.
  • the respective object pixels OAPq can be dynamically identified by the processing unit 6 via the all-round view algorithm A1. Based on this, the overlay image OB can be created, or the all-round view pixels RBPp of the all-round view image RB can be changed or adjusted directly, so that the respective overlay structure 20 appears on or adjacent to the respective object O on the display device 7.
  • a bar 20a can be displayed on the display pixels APm of the display device 7 that are located on or adjacent to the outer edge MK of the respective object O that is closest to the host vehicle 1.
  • the alignment of the bar 20a can be chosen such that the bar 20a is perpendicular to an object normal NO, as shown in FIG., and thereby indicates the delimitation of the object O, for example if the object O has no straight outer edge MK.
  • the object normal NO can be estimated from the depth information TI on this object, i.e. from the position and the orientation, which follow from the SfM method.
  • the object pixels OAPq of the object O, to which the bar 20a is also assigned, can be colored in a fixed color F.
  • the object O itself is thereby displayed more clearly, so that possible distortions in its display are less noticeable.
  • a polygon 20b with the object shape OF or the object contour OC in a specific color F is thus superimposed as a further overlay structure 20 on the all-round view image RB in the area of the object pixels OAPq. If the object shape OF or the object contour OC cannot be clearly determined in the SfM method, only a rectangle can be assumed for the object O, which then runs “behind” the bar 20a as seen from the vehicle 1 .
  • the further overlay structure 20 is a polygon 20b with four corners.
  • Black, for example, can be selected as the color F.
  • the color F can also be selected as a function of an object distance OA from the respective object O.
  • the bar 20a itself can also be color-coded depending on the object distance OA from the respective object O.
  • the object distance OA between the vehicle 1 and the object O also follows from the depth information TI on this object obtained via the SfM method, i.e. from the position and the orientation.
  • if the object distance OA of the object O is less than 1 m, the color F of the respective overlay structure 20, i.e. the bar 20a and/or the polygon 20b, can be displayed in a warning color, in particular red. If the object distance OA of the object O is in a range between 1 m and 5 m, yellow can be used as the color F for the overlay structure 20 associated with this object O. For object distances OA greater than 5 m, green can be provided as the color F. In this way, the danger emanating from the respective object O can be shown to the observer in a clear manner. Since the depth information TI is obtained from the individual images EBk, the distortions from the all-round view algorithm A1 have no influence on the overlay structures 20, which can therefore be displayed at the correct position on the display device 7 relative to the host vehicle 1.
  • the respective overlay structure 20 can be displayed opaquely or at least partially transparently on the display device 7, so that the at least one overlay structure 20 completely or at least partially covers the all-round view image RB on and/or adjacent to the respectively assigned object O, depending on its transparency.
  • the color F of the overlay structure 20 can be selected as a function of a movement indicator B.
  • an object contour OC and/or an object shape OF for the respectively detected object O can be determined using the SfM method. However, no direct conclusions about the dynamics of the object O can be drawn from the SfM method. If the object contour OC and/or the object shape OF is supplied to a deep learning algorithm A2 in the processing unit 6, however, at least a classification of the object O can take place, from which the possible dynamics of the object O can then be inferred.
  • the object contour OC and/or the object shape OF of the respective object O can be compared with known objects. These can be stored in a database, which is held, for example, in a memory fixed to the vehicle or which can be accessed from the vehicle 1 via a mobile data connection. Based on the entries of known objects in the database, it can be determined whether the detected object O is a person, a building, a vehicle or the like. Each detected object O can be assigned a movement indicator B stored in the database, which indicates whether and how the object O normally moves in the environment U. From this it can be concluded whether increased attention to the object O, e.g. in the case of people, is required.
  • the overlay structure 20, for example the bar 20a and/or the polygon 20b or another overlay structure 20, can be coded according to the movement indicator B, for example in color.
  • a text 20c, for example in the form of an “!” (exclamation mark), etc., can be superimposed as a further overlay structure 20.
  • the movement indicator B of an object O can also be estimated by chronologically tracking individual image pixels EBkPi, which are assigned to an object point PPn in the environment U. This can be done, for example, by forming a pixel-by-pixel difference between successive individual images EBk. A movement of the respective object O can also be inferred from this.
  • isolines 20d can be superimposed as overlay structures 20, each of which characterizes a fixed iso-distance A1 to a vehicle exterior 1a.
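The depth determination described above (triangulation between two individual images captured from positions SP1, SP2, with the base length L derived from odometry data OD) can be sketched minimally as follows. The function names are illustrative assumptions, not part of the patent, and straight-line motion is assumed, i.e. the refinement via steering angle LW and yaw rate G mentioned above is omitted.

```python
def base_length_from_odometry(speed_mps: float, dt_s: float) -> float:
    """Base length L between two camera positions SP1, SP2, assuming
    straight-line motion at constant speed over the time step dt."""
    return speed_mps * dt_s

def depth_from_disparity(focal_px: float, base_length_m: float,
                         disparity_px: float) -> float:
    """Classic triangulation: depth Z = f * L / d, where d is the pixel
    disparity of the same object point PPn in the two individual images."""
    if disparity_px <= 0:
        raise ValueError("object point must show a positive disparity")
    return focal_px * base_length_m / disparity_px

# Example: driving at 2 m/s, images 0.5 s apart -> base length 1 m;
# with a focal length of 800 px and a 40 px disparity the point is 20 m away.
L = base_length_from_odometry(2.0, 0.5)
print(depth_from_disparity(800.0, L, 40.0))
```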
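The distance-dependent color coding of the overlay structures 20 described above can be expressed as a simple threshold function. The thresholds (1 m and 5 m) follow the text; the function name and the plain color strings are illustrative assumptions.

```python
def overlay_color(object_distance_m: float) -> str:
    """Return the color F for an overlay structure 20 based on the
    object distance OA obtained from the SfM depth information TI."""
    if object_distance_m < 1.0:
        return "red"      # warning color for very close objects
    if object_distance_m <= 5.0:
        return "yellow"   # intermediate range between 1 m and 5 m
    return "green"        # objects further than 5 m away

print(overlay_color(0.5), overlay_color(3.0), overlay_color(10.0))
```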
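The pixel-by-pixel differencing of successive individual images EBk used to estimate the movement indicator B can be sketched as below. Frames are modelled as plain 2D lists of grey values; the function name and the threshold value are illustrative assumptions.

```python
def moved(frame_a, frame_b, threshold: float = 10.0) -> bool:
    """Estimate the movement indicator B by forming the mean absolute
    pixel difference between two equally sized successive frames; above
    `threshold` the imaged object point is taken to have moved."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(frame_a, frame_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs) > threshold

eb1 = [[0, 0], [0, 0]]
eb2 = [[50, 50], [0, 0]]   # upper pixels changed between the frames
print(moved(eb1, eb2), moved(eb1, eb1))
```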

Abstract

The invention relates to a method for displaying the environment of a vehicle (1) on a display device (7), comprising at least the following steps: capturing the environment by means of at least two cameras, each having a different field of view, the fields of view of adjacent cameras overlapping; generating an all-round view image (RB) from at least two individual images from different cameras, said individual images being projected onto a reference plane in order to generate the all-round view image (RB); determining depth information on at least one object (O) in the environment from at least two different individual images from the same camera by triangulation; generating at least one overlay structure (20) on the basis of the determined depth information, each overlay structure (20) being uniquely assigned to an imaged object (O); and displaying the generated all-round view image (RB) containing the object(s) (O) and the generated overlay structure(s) (20) on the display device (7) such that the overlay structure(s) (20) are displayed on and/or adjacent to the respective object (O).
EP21769721.8A 2020-09-02 2021-08-31 Procédé pour afficher l'environnement d'un véhicule sur un dispositif d'affichage, unité de traitement et véhicule Pending EP4208372A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020122908.9A DE102020122908A1 (de) 2020-09-02 2020-09-02 Verfahren zum Anzeigen einer Umgebung eines Fahrzeuges auf einer Anzeigeeinrichtung, Verarbeitungseinheit und Fahrzeug
PCT/EP2021/073924 WO2022049040A1 (fr) 2020-09-02 2021-08-31 Procédé pour afficher l'environnement d'un véhicule sur un dispositif d'affichage, unité de traitement et véhicule

Publications (1)

Publication Number Publication Date
EP4208372A1 true EP4208372A1 (fr) 2023-07-12

Family

ID=77739074

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21769721.8A Pending EP4208372A1 (fr) 2020-09-02 2021-08-31 Procédé pour afficher l'environnement d'un véhicule sur un dispositif d'affichage, unité de traitement et véhicule

Country Status (5)

Country Link
US (1) US20230226977A1 (fr)
EP (1) EP4208372A1 (fr)
CN (1) CN115968485A (fr)
DE (1) DE102020122908A1 (fr)
WO (1) WO2022049040A1 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10035223A1 (de) 2000-07-20 2002-01-31 Daimler Chrysler Ag Vorrichtung und Verfahren zur Überwachung der Umgebung eines Objekts
US20150286878A1 (en) 2014-04-08 2015-10-08 Bendix Commercial Vehicle Systems Llc Generating an Image of the Surroundings of an Articulated Vehicle
DE102017108254B4 (de) 2016-04-19 2019-10-02 GM Global Technology Operations LLC Rundumsichtkamerasystem zur Objekterkennung und -verfolgung und Verfahren zum Ausstatten eines Fahrzeugs mit einem Rundumsichtkamerasystem
EP3293700B1 (fr) 2016-09-09 2019-11-13 Panasonic Automotive & Industrial Systems Europe GmbH Reconstruction 3d pour véhicule
DE102016223388A1 (de) * 2016-11-25 2018-05-30 Conti Temic Microelectronic Gmbh Bildverarbeitungssystem und bildverarbeitungsverfahren
DE102018100909A1 (de) 2018-01-17 2019-07-18 Connaught Electronics Ltd. Verfahren zum Rekonstruieren von Bildern einer Szene, die durch ein multifokales Kamerasystem aufgenommen werden

Also Published As

Publication number Publication date
US20230226977A1 (en) 2023-07-20
CN115968485A (zh) 2023-04-14
WO2022049040A1 (fr) 2022-03-10
DE102020122908A1 (de) 2022-03-03

Similar Documents

Publication Publication Date Title
DE102006003538B3 (de) Verfahren zum Zusammenfügen mehrerer Bildaufnahmen zu einem Gesamtbild in der Vogelperspektive
DE102014107158B4 (de) Verbesserte Top-Down-Bilderzeugung in einem Frontbordstein-Visualisierungssystem
DE102014107156B4 (de) System und Verfahren zum Bereitstellen einer verbesserten perspektivischen Bilderzeugung in einem Frontbordstein-Visualisierungssystem
DE102014107155B4 (de) Verbessertes Frontbordstein-Visualisierungssystem
DE102013209415B4 (de) Dynamische Hinweisüberlagerung mit Bildbeschneidung
EP1875442B1 (fr) Procede pour la representation graphique de l'environnement d'un vehicule automobile
EP2805183B1 (fr) Procédé et dispositif de visualisation de l'environnement d'un véhicule
WO2016162245A1 (fr) Procédé de représentation d'un environnement d'un véhicule
DE102013205882A1 (de) Verfahren und Vorrichtung zum Führen eines Fahrzeugs im Umfeld eines Objekts
EP2934947A1 (fr) Véhicule automobile pourvu d'un système de surveillance à caméra
WO2013087362A1 (fr) Procédé permettant d'améliorer la détection d'un objet par des systèmes multi-caméras
DE102019105630B4 (de) Anzeigesteuervorrichtung, Fahrzeugumgebungsanzeigesystem und Computerprogramm
DE102018108751B4 (de) Verfahren, System und Vorrichtung zum Erhalten von 3D-Information von Objekten
DE102013220477A1 (de) Verfahren zur Korrektur einer bildlichen Darstellung einer Fahrzeugumgebung
DE102018100909A1 (de) Verfahren zum Rekonstruieren von Bildern einer Szene, die durch ein multifokales Kamerasystem aufgenommen werden
DE102010051204A1 (de) Verfahren zum Darstellen eines Hindernisses und Abbildungsvorrichtung
WO2017198429A1 (fr) Détermination de données d'environnement de véhicule
DE102006037600B4 (de) Verfahren zur auflösungsabhängigen Darstellung der Umgebung eines Kraftfahrzeugs
DE102020127278A1 (de) Around-View-Synthese-System und -Verfahren
EP3292535B1 (fr) Procédé de production d'une image complète d'un environnement d'un véhicule et d'un dispositif correspondant
DE102016208370A1 (de) Verfahren zum Ermitteln von Daten, die eine Umgebung unterhalb eines Fahrzeugs repräsentieren
EP4208372A1 (fr) Procédé pour afficher l'environnement d'un véhicule sur un dispositif d'affichage, unité de traitement et véhicule
DE112018005744T5 (de) Bedienerassistenz-sichtsystem
DE102022206127A1 (de) Verfahren zur Darstellung einer Umgebung eines Fahrzeugs mit einem angekoppelten Anhänger, Computerprogramm, Rechenvorrichtung und Fahrzeug
DE102021206608A1 (de) Kamerasystem sowie Verfahren für ein Kamerasystem

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230403

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)