EP3555808A1 - Vorrichtung zur Bereitstellung einer verbesserten Hinderniserkennung (Device for providing improved obstacle detection) - Google Patents
Device for providing improved obstacle detection
- Publication number
- EP3555808A1 (application number EP17829121.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image data
- image
- image features
- processing unit
- overlap
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 35
- 230000007613 environmental effect Effects 0.000 claims description 35
- 238000000034 method Methods 0.000 claims description 22
- 238000004590 computer program Methods 0.000 claims description 12
- 238000000605 extraction Methods 0.000 claims description 10
- 239000013598 vector Substances 0.000 claims description 8
- 239000000284 extract Substances 0.000 abstract description 4
- 238000003384 imaging method Methods 0.000 description 5
- 230000001419 dependent effect Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000002195 synergetic effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/301—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/60—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
- B60R2300/607—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- the present invention relates to a device for providing improved obstacle detection, a system for providing improved obstacle detection, a method for providing improved obstacle detection, and a computer program element.
- a driver assistance system is used in a vehicle to assist a driver in making driving maneuvers, particularly during parking maneuvers.
- a conventional driver assistance system may include an environmental imaging system having cameras adapted to capture camera images of the vehicle environment to create an environmental image. The generated environmental image can be displayed to the driver on a display during a driving maneuver.
- a top view may be composed of several individual camera images.
- the environmental imaging system may include multiple cameras, where adjacent cameras may have an overlapping field of view, FOV.
- conventional environmental imaging systems provide poor obstacle detection in the overlapping areas: obstacles in the overlapping areas, and obstacles that extend into the overlapping areas, are poorly visible. This can lead to inadequate safety functions of the driver assistance system.
- an apparatus for providing improved obstacle detection, comprising:
- the first camera is configured to capture first vehicle image data, and the first camera is configured to provide the first vehicle image data to the processing unit.
- the second camera is configured to capture second vehicle image data, and the second camera is configured to provide the second vehicle image data to the processing unit.
- the first vehicle image data and the second vehicle image data extend over a ground plane and wherein the region of image overlap extends over an overlap region of the ground plane.
- the processing unit is configured to extract first image features from the first
- Extract vehicle image data and is configured to extract second image features from the second vehicle image data.
- the processing unit is also configured to project the first image features onto the ground plane and is configured to project the second image features to the ground plane.
- the processing unit is configured to generate at least one environmental image, either comprising (a) at least a portion of the first vehicle image data associated with the overlap region, or (b) at least a portion of the second vehicle image data associated with the overlap region. The generation is based in part on the determination of first image features whose projections lie in the overlapping region of the ground plane, and of second image features whose projections lie in the overlap region of the ground plane.
- an overlapping area of an environmental image may use images from one of two cameras whose fields of view overlap, taking into account the projections of an object seen by each camera that lie in the overlap area. This allows camera images to be selected for the overlap area that, combined with the individual camera images from each camera, provide more representative images of obstacles.
- images having more projected features in an image overlap area may be prioritized. Both objects in the overlap area whose projections lie in the overlap area, and objects that are outside the overlap area but whose projections lie in the overlap area, can be considered.
- the processing unit is configured to determine a number of first image features whose projections are in the overlap region of the ground plane and is configured to determine a number of second image features whose projections are in the overlap region of the ground plane.
- the processing unit is configured to generate the at least one environmental image comprising at least a portion of the first vehicle image data associated with the overlap area when the number of first image features whose projections lie in the overlap area is greater than the number of second image features whose projections lie in the overlap area.
- the processing unit is also configured to generate the at least one environmental image comprising at least a portion of the second vehicle image data associated with the overlap area when the number of second image features whose projections lie in the overlap area is greater than the number of first image features whose projections lie in the overlap area.
- in other words, the image for the overlap area is determined according to which image has more recognizable image features whose projections lie in the overlap area.
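The selection rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and parameter names are assumptions:

```python
import numpy as np

def select_camera_for_overlap(proj1_xy, proj2_xy, overlap_contains):
    """Choose which camera's image to use for the overlap area.

    proj1_xy, proj2_xy: (N, 2) arrays of ground-plane projection points of
    the image features extracted from camera 1 and camera 2.
    overlap_contains: callable mapping an (N, 2) array of points to a boolean
    mask of which points fall inside the overlap region of the ground plane.
    Returns 1 or 2: the camera whose image data should be prioritized.
    """
    n1 = int(np.count_nonzero(overlap_contains(proj1_xy)))
    n2 = int(np.count_nonzero(overlap_contains(proj2_xy)))
    # the camera with more projected features in the overlap area wins
    return 1 if n1 > n2 else 2
```

A tie here falls back to camera 2; the patent text does not specify tie-breaking, so either choice is defensible.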
- the extraction of the first image features comprises a determination of binary data
- the extraction of the second image features comprises a determination of binary data.
- the feature extraction method results in a binary image that may, for example, have ones where features were detected and zeros where no features were detected. This simplifies the determination of the number of features whose projections lie in the overlap area, which only requires one summation procedure.
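Because the extraction yields a binary image, counting the features whose projections fall in the overlap area reduces to a masked summation. A minimal sketch, assuming the binary image and overlap mask are already aligned (the array names are illustrative):

```python
import numpy as np

def count_features_in_overlap(binary_image, overlap_mask):
    """Sum the ones of a binary feature image inside the overlap-area mask.

    binary_image: 2D array of 0/1 values (1 where a feature was detected).
    overlap_mask: 2D boolean array of the same shape marking the overlap area.
    """
    return int(np.sum(binary_image[overlap_mask]))
```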
- the first image features are projected along vectors that extend from the first camera through the first image features to the ground plane
- the second image features are projected along vectors that extend from the second camera through the second image features to the ground plane.
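Projecting an image feature onto the ground plane amounts to intersecting the ray from the camera center through the feature with the plane z = 0. A sketch under the simplifying assumption of a known camera position and a viewing direction per feature (not the patent's exact formulation):

```python
import numpy as np

def project_to_ground(camera_pos, direction):
    """Intersect the ray camera_pos + t * direction with the ground plane z = 0.

    camera_pos: (3,) camera center above the ground (z > 0).
    direction: (3,) viewing direction through the image feature; must point
    downward (negative z component) to hit the ground plane.
    Returns the (x, y) ground-plane point.
    """
    camera_pos = np.asarray(camera_pos, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if direction[2] >= 0:
        raise ValueError("ray does not intersect the ground plane")
    t = -camera_pos[2] / direction[2]  # solve camera_pos.z + t * direction.z = 0
    return (camera_pos + t * direction)[:2]
```

Note that a tall object's feature points intersect the ground plane well behind the object itself, which is why projections can land inside an overlap area even when the object is outside it.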
- the at least one environmental image includes the first vehicle image data outside the overlap area and includes the second vehicle image data outside the overlap area.
- the environmental image uses the appropriate image for the overlap area and non-overlapping images to provide an environmental image for improved obstacle detection about a vehicle.
- the generation of the at least one environmental image is based in part on first image features located in the overlap area and on second image features located in the overlap area, as well as on features that lie in the overlap area but whose projections onto the ground plane fall outside the overlap area. In this way, tall objects at the side away from the overlap area can be suitably taken into account when the images for displaying the overlap area are selected.
- the processing unit is configured to determine a number of first image features in the overlap area and is configured to determine a number of second image features in the overlap area.
- the processing unit is also configured to generate the at least one environmental image comprising at least a portion of the first vehicle image data associated with the overlap area when the number of first image features whose projections lie in the overlap area, added to the number of first image features in the overlap area, is greater than the number of second image features whose projections lie in the overlap area, added to the number of second image features in the overlap area.
- the processing unit is likewise configured to generate the at least one environmental image comprising at least a portion of the second vehicle image data associated with the overlap area when the number of second image features whose projections lie in the overlap area, added to the number of second image features in the overlap area, is greater than the number of first image features whose projections lie in the overlap area, added to the number of first image features in the overlap area.
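The extended criterion, summing per camera the features located in the overlap area with the features whose projections fall there, can be written as follows (the function and parameter names are hypothetical):

```python
def select_with_combined_counts(n_proj_1, n_in_1, n_proj_2, n_in_2):
    """Compare per-camera sums: projected-into-overlap count + in-overlap count.

    n_proj_k: number of camera k's features whose ground-plane projections
    lie in the overlap area; n_in_k: number of camera k's features located
    in the overlap area itself.
    Returns 1 if camera 1's combined count is larger, 2 if camera 2's is
    larger, and 0 on a tie (either image may then be used).
    """
    s1 = n_proj_1 + n_in_1
    s2 = n_proj_2 + n_in_2
    if s1 > s2:
        return 1
    if s2 > s1:
        return 2
    return 0
```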
- a vehicle configured to provide improved obstacle detection, comprising:
- the display unit is configured to display the at least one environmental image.
- a method of providing improved obstacle detection comprising:
- there is a region of image overlap that includes at least a portion of the first vehicle image data and at least a portion of the second vehicle image data, wherein the first vehicle image data and the second vehicle image data extend over a ground plane and wherein the region of image overlap extends over an overlapping region of the ground plane; the second vehicle image data are provided to the processing unit;
- generating at least one environmental image by the processing unit, either comprising (i-a) at least a portion of the first vehicle image data associated with the overlap area or (i-b) at least a portion of the second vehicle image data associated with the overlap area, the generation based in part on a determination of first image features whose projections lie in the overlap region of the ground plane and of second image features whose projections lie in the overlap region of the ground plane.
- step g) includes determining, by the processing unit, a number of first image features whose projections lie in the overlap region of the ground plane; step h) comprises determining, by the processing unit, a number of second image features whose projections lie in the overlap region of the ground plane; step i-a) occurs when the number of first image features whose projections lie in the overlap region is greater than the number of second image features whose projections lie in the overlap region; and step i-b) occurs when the number of second image features whose projections lie in the overlap region is greater than the number of first image features whose projections lie in the overlap region.
- a computer program element for controlling the device described above which, when executed by a processing unit, is adapted to carry out the method steps described above.
- a computer readable medium having stored the computer program element described above.
- Fig. 1 shows a schematic structure of an example of a device for providing an improved obstacle detection
- Fig. 2 shows a schematic structure of an example of a system for providing improved obstacle detection
- Fig. 3 shows a method for providing improved obstacle detection
- Fig. 4 shows the classification of regions around a vehicle into different sectors
- Fig. 5 shows a schematic structure of an example of the projections of an image feature on the ground plane
- Fig. 6 shows a schematic structure of an example of a system for providing improved obstacle detection
- Fig. 7 shows a schematic structure of an example of a system for providing improved obstacle detection.
- FIG. 1 shows an example of an apparatus 10 for providing improved obstacle detection.
- the apparatus includes a first camera 20, a second camera 30, and a processing unit 40.
- the first camera 20 is configured to acquire first vehicle image data, and the first camera 20 is configured to provide the first vehicle image data to the processing unit 40.
- the second camera 30 is configured to acquire second vehicle image data, and the second camera 30 is configured to supply the second vehicle image data to the processing unit 40.
- Provision of image data may be via wired or wireless communication.
- There is a region of image overlap that includes at least a portion of the first vehicle image data and at least a portion of the second vehicle image data.
- the first vehicle image data and the second vehicle image data extend over a ground plane and the region of image overlap extends over an overlapping region of the ground plane.
- the processing unit 40 is configured to extract first image features from the first vehicle image data and is configured to extract second image features from the second vehicle image data.
- the processing unit 40 is also configured to project the first image features onto the ground plane and is configured to project the second image features onto the ground plane.
- the processing unit 40 is configured to generate at least one environmental image, either comprising (a) at least a portion of the first vehicle image data associated with the overlap region, or (b) at least a portion of the second vehicle image data associated with the overlap region. The generation is based in part on the determination of first image features whose projections lie in the overlap region of the ground plane, and of second image features whose projections lie in the overlap region of the ground plane.
- the processing unit is configured to generate at least one environmental image in real time.
- the first and second cameras are mounted on different sides of a chassis of a vehicle.
- the apparatus further includes a third camera 50 and a fourth camera 60 configured to capture third vehicle image data and fourth vehicle image data.
- there is a second region of image overlap that includes at least a portion of the first vehicle image data and at least a portion of the third vehicle image data. There is a third region of image overlap that includes at least a portion of the second vehicle image data and at least a portion of the fourth vehicle image data. There is a fourth region of image overlap that includes at least a portion of the third vehicle image data and at least a portion of the fourth vehicle image data.
- each of the cameras has a field of view which is greater than 180 degrees.
- a radar sensor is used in conjunction with the first camera to determine the distance of objects imaged in the field of view of the camera.
- a radar sensor is used with the second camera to determine the distance of objects imaged in the field of view of the camera.
- lidar and / or ultrasonic sensors may be used alternatively or in addition to the radar sensors to determine the distances of objects imaged in the fields of view of the cameras.
- the processing unit 40 is configured to determine a number of first image features whose projections lie in the overlap region of the ground plane and is configured to determine a number of second image features whose projections lie in the overlap region of the ground plane.
- the processing unit 40 is also configured to generate at least one environmental image having at least a portion of the first vehicle image data associated with the overlap region when the number of first image features whose projections lie in the overlap region is greater than the number of second image features whose projections lie in the overlap region.
- the processing unit 40 is also configured to generate at least one environmental image having at least a portion of the second vehicle image data associated with the overlap region when the number of second image features whose projections lie in the overlap region is greater than the number of first image features whose projections lie in the overlap region.
- an edge detection algorithm is used to capture first and second image features.
- the extraction of the first image features comprises a determination of binary data.
- the extraction of the second image features comprises a determination of binary data.
- the first image features are projected along vectors extending from the first camera 20 through the first image features to the ground plane
- the second image features are projected along vectors extending from the second camera 30 through the second image features to the ground plane
- the at least one environmental image includes the first vehicle image data outside the overlap region and includes the second vehicle image data outside the overlap region.
- the processing unit is configured to determine a number of first image features in the overlap region and is configured to determine a number of second image features in the overlap region.
- the processing unit is also configured to generate at least one environmental image having at least a portion of the first vehicle image data associated with the overlap region when the number of first image features whose projections lie in the overlap region, added to the number of first image features in the overlap region, is greater than the number of second image features whose projections lie in the overlap region, added to the number of second image features in the overlap region.
- the processing unit is configured to generate at least one environmental image having at least a portion of the second vehicle image data associated with the overlap region when the number of second image features whose projections lie in the overlap region, added to the number of second image features in the overlap region, is greater than the number of first image features whose projections lie in the overlap region, added to the number of first image features in the overlap region.
- FIG. 2 shows an example of a vehicle 100 configured to provide improved obstacle detection.
- the vehicle 100 has a device 10 as described above.
- the vehicle 100 also includes a display unit 110.
- the display unit 110 is configured to display the at least one environmental image.
- FIG. 3 shows a method 200 for providing improved obstacle detection.
- the method 200 includes:
- a detection step 210 which is also referred to as step a) acquiring first vehicle image data with a first camera 20;
- a providing step 220 also referred to as step b) providing the first vehicle image data to a processing unit 40 by the first camera;
- a detection step 230, also referred to as step c), acquiring second vehicle image data with a second camera 30;
- a providing step 240, also referred to as step d), providing the second vehicle image data to the processing unit by the second camera;
- an extraction step 250, also referred to as step e), extracting the first image features from the first vehicle image data by the processing unit;
- an extraction step 260, also referred to as step f), extracting the second image features from the second vehicle image data by the processing unit;
- a projection step 270, also referred to as step g), projecting the first image features onto the ground plane by the processing unit;
- a projection step 280, also referred to as step h), projecting the second image features onto the ground plane by the processing unit;
- a generation step 290, also referred to as step i), generating at least one environmental image by the processing unit, either comprising (i-a) at least a portion of the first vehicle image data associated with the overlap area, or (i-b) at least a portion of the second vehicle image data associated with the overlap area, wherein the generation is based in part on the determination of first image features whose projections lie in the overlap region of the ground plane and of second image features whose projections lie in the overlap region of the ground plane.
- step g) comprises determining 272 by the processing unit a number of first image features whose projections lie in the overlap region of the ground plane.
- step h) comprises determining 282 by the processing unit a number of second image features whose projections lie in the overlap region of the ground plane.
- step i-a) applies if the number of first image features whose projections lie in the overlap region is greater than the number of second image features whose projections lie in the overlap region.
- step i-b) applies if the number of second image features whose projections lie in the overlap region is greater than the number of first image features whose projections lie in the overlap region.
- step e) includes determining 252 binary data
- step f) includes determining 262 binary data
- step g) includes projecting 274 the first image features along vectors that extend from the first camera 20 through the first image features to the ground plane.
- step h) includes projecting 284 second image features along vectors that extend from the second camera 30 through the second image features to the ground plane.
- step i) includes generating the at least one environmental image based in part on the first image features that are located in the overlap area and the second image features that are located in the overlap area.
- the method includes determining a number of first image features in the overlap region and determining a number of second image features in the overlap region. In this example, step i-a) occurs when the number of first image features whose projections lie in the overlap area, added to the number of first image features in the overlap area, is greater than the number of second image features whose projections lie in the overlap area, added to the number of second image features in the overlap area.
- step i-b) occurs when the number of second image features whose projections lie in the overlap area, added to the number of second image features in the overlap area, is greater than the number of first image features whose projections lie in the overlap area, added to the number of first image features in the overlap area.
- the top view is made up of several individual camera images.
- the nodes in the fence data are represented in world coordinates.
- the various areas of the plan view are divided into sectors as shown in FIG.
- Each node or feature, such as a detected edge, in the fence data is categorized into one of the sectors with respect to its position in world coordinates.
- image features such as edges are detected or extracted and classified into the regions in which they are located, as shown in Fig. 5.
- Nodes in the overlapping area have two starting points in the fence process.
- step ii) is repeated for the projected points; this is the output of the fence process for the adaptive selection of the overlap area.
- each overlapping sector has two sets of parent nodes.
- the camera that creates more output nodes in the overlapping area gets a higher priority. For example, if the output of the fence process in the front-right overlapping sector has more projected nodes from the front camera, the front camera image data in the overlapping area is prioritized higher.
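The sector bookkeeping of the fence process can be sketched by counting, per overlapping sector, how many output nodes each source camera contributed and giving the camera with more nodes the higher priority. This is a simplified stand-in for the sector layout of Fig. 4; sector and camera identifiers are illustrative:

```python
from collections import defaultdict

def prioritize_cameras_by_sector(nodes):
    """Count projected fence nodes per (sector, camera) and pick a winner.

    nodes: iterable of (sector_id, camera_id) pairs, one per node projected
    into an overlapping sector.
    Returns a dict mapping each sector to the camera that contributed the
    most nodes there.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for sector, camera in nodes:
        counts[sector][camera] += 1
    # for each sector, prioritize the camera with the most output nodes
    return {sector: max(cams, key=cams.get) for sector, cams in counts.items()}
```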
- FIG. 6 shows a detailed example of an environmental image system 100 as described in FIG. 2.
- the system 100 includes at least one camera pair formed by two cameras 20, 30 with overlapping fields of view, FOV, adapted to produce camera images CI having an overlap area OA, as shown in FIG. 4.
- the camera pair is directed forward, but the cameras could also be aligned with their optical axes at right angles to each other, as shown in FIG. 7.
- the field of view FOV of each camera 20, 30 may be more than 180 degrees.
- the cameras 20, 30 may be provided with so-called fisheye lenses mounted on a chassis of a vehicle.
- the cameras 20, 30 are connected to a processing unit 40, which may comprise at least one microprocessor.
- the processing unit 40 is configured to calculate the surrounding images including overlapping areas OAs with respect to each camera.
- the processing unit extracts features from the images and projects them onto the ground plane, as shown in FIG. 5. Depending on whether the number of ground-plane-projected features seen by the camera 20 that lie in an overlap area OA is greater than the number of ground-plane-projected features seen by the camera 30 that lie in the overlap area OA, priority is given when merging the environmental image to either the camera image CI captured by the camera 20 or by the camera 30, whichever has the larger number of projected features in the OA.
- the images may be temporarily stored in a buffer memory of the processing unit 40.
- the processing unit may perform a feature extraction process to produce a binary image in which features such as edges in an image are represented as ones and other parts of the image are represented as zeros.
- the first camera 20 of the camera pair captures a first camera image CI1 and the second camera 30 of the camera pair captures a second camera image CI2.
- the processing unit 40 calculates plan-view images for the camera images CI1, CI2.
- the processing unit 40 includes an edge detector or feature detector capable of calculating edges or features for all images that may be provided as two binary images BI, if desired.
- the processing unit 40 includes a projector that projects detected or extracted features from the images from the position of the camera through the feature onto the ground plane, as shown in FIG. 5.
- the processing unit then sums the number of features projected onto the ground plane by each camera and uses that information to prioritize the images of the particular camera used for composing the image for the overlap area.
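Putting the stages together, the pipeline in this example is: detect edges into a binary image, project the feature pixels of each camera onto the ground plane, sum the projections that land in the overlap area, and prioritize the camera with the larger sum. A compressed sketch, where the gradient-threshold edge detector and the projection/overlap callables are simplifying assumptions rather than the patent's specific components:

```python
import numpy as np

def edge_binary(image, threshold=0.2):
    """Crude edge detector: threshold on the image gradient magnitude,
    yielding a binary image (1 = edge feature, 0 = no feature)."""
    gy, gx = np.gradient(image.astype(float))
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)

def prioritized_camera(img1, img2, project1, project2, in_overlap):
    """Return 1 or 2: which camera's image to use for the overlap area.

    project1/project2 map a binary image to an (N, 2) array of ground-plane
    points for its feature pixels; in_overlap maps points to a boolean mask.
    """
    n1 = int(np.count_nonzero(in_overlap(project1(edge_binary(img1)))))
    n2 = int(np.count_nonzero(in_overlap(project2(edge_binary(img2)))))
    return 1 if n1 >= n2 else 2
```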
- the number of cameras used for capturing the camera images CI may vary.
- FIG. 7 shows in detail a vehicle with a detailed example of an environmental image system 100 as described in FIGS. 2 and 6.
- the vehicle VEH with the surrounding image system 100 includes, in the illustrated example, four cameras 20, 30, 50, 60 positioned on different sides of the vehicle chassis. In the illustrated example, each camera has a FOV of more than 180 degrees.
- the illustrated vehicle VEH may be any type of vehicle, such as a car, bus, or truck, that performs a driving maneuver that may be assisted by a driver assistance system having an integrated environmental imaging system 100, as shown in FIGS. 2, 6 is shown.
- the four vehicle cameras 20, 30, 50, 60 are mounted on different sides of the vehicle chassis, so that four different overlapping areas OAs are visible to the vehicle cameras, as shown in Figs. 4 and 7.
- for example, in the front left corner of the vehicle chassis, there is an overlapping area OA12 of the camera images CI received by the front camera 20 and the left camera 30 of the environmental imaging system 100 of the vehicle VEH.
- in the example shown in Fig. 7, the overlap area OA12 includes an obstacle.
- the obstacle in the illustrated example is a wall of a garage into which the driver of the vehicle VEH wishes to maneuver the vehicle VEH.
- the wall is within the overlap area and extends out of the overlap area.
- the processing unit determines which image from the cameras 20 or 30 is to be prioritized for the overlap area OA12.
- a computer program or a computer program element is provided, which is characterized in that it is configured to execute the method steps of the method according to one of the preceding embodiments on a suitable system.
- the computer program element can therefore be stored on a computing unit, which could also be part of an embodiment.
- this computing unit may be configured to perform the steps of the method described above.
- the computing unit may be configured to control the components of the apparatus and / or the system described above.
- the computing unit may be configured to automatically operate and / or execute the commands of a user.
- a computer program can be loaded into a memory of a data processor.
- the data processor may thus be configured to perform the method according to one of the preceding embodiments.
- a computer readable medium, such as a CD-ROM, is presented, wherein the computer readable medium has the computer program element described above stored thereon.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102016225073.6A DE102016225073A1 (de) | 2016-12-15 | 2016-12-15 | Vorrichtung zur bereitstellung einer verbesserten hinderniserkennung |
PCT/DE2017/200129 WO2018108215A1 (de) | 2016-12-15 | 2017-12-06 | Vorrichtung zur bereitstellung einer verbesserten hinderniserkennung |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3555808A1 true EP3555808A1 (de) | 2019-10-23 |
Family
ID=60971881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17829121.7A Withdrawn EP3555808A1 (de) | 2016-12-15 | 2017-12-06 | Vorrichtung zur bereitstellung einer verbesserten hinderniserkennung |
Country Status (4)
Country | Link |
---|---|
US (1) | US10824884B2 (de) |
EP (1) | EP3555808A1 (de) |
DE (2) | DE102016225073A1 (de) |
WO (1) | WO2018108215A1 (de) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11011063B2 (en) * | 2018-11-16 | 2021-05-18 | Toyota Motor North America, Inc. | Distributed data collection and processing among vehicle convoy members |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4297501B2 (ja) * | 2004-08-11 | 2009-07-15 | 国立大学法人東京工業大学 | 移動体周辺監視装置 |
JP4328692B2 (ja) * | 2004-08-11 | 2009-09-09 | 国立大学法人東京工業大学 | 物体検出装置 |
DE102006003538B3 (de) | 2006-01-24 | 2007-07-19 | Daimlerchrysler Ag | Verfahren zum Zusammenfügen mehrerer Bildaufnahmen zu einem Gesamtbild in der Vogelperspektive |
US7728879B2 (en) * | 2006-08-21 | 2010-06-01 | Sanyo Electric Co., Ltd. | Image processor and visual field support device |
US20100020170A1 (en) * | 2008-07-24 | 2010-01-28 | Higgins-Luthman Michael J | Vehicle Imaging System |
WO2010137265A1 (ja) | 2009-05-25 | 2010-12-02 | パナソニック株式会社 | 車両周囲監視装置 |
DE102012215026A1 (de) | 2012-08-23 | 2014-05-28 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren und Vorrichtung zum Betreiben eines Fahrzeugs |
US10179543B2 (en) * | 2013-02-27 | 2019-01-15 | Magna Electronics Inc. | Multi-camera dynamic top view vision system |
JP6371553B2 (ja) * | 2014-03-27 | 2018-08-08 | クラリオン株式会社 | 映像表示装置および映像表示システム |
US10127463B2 (en) * | 2014-11-21 | 2018-11-13 | Magna Electronics Inc. | Vehicle vision system with multiple cameras |
DE102015204213B4 (de) * | 2015-03-10 | 2023-07-06 | Robert Bosch Gmbh | Verfahren zum Zusammensetzen von zwei Bildern einer Fahrzeugumgebung eines Fahrzeuges und entsprechende Vorrichtung |
- 2016
  - 2016-12-15 DE DE102016225073.6A patent/DE102016225073A1/de not_active Withdrawn
- 2017
  - 2017-12-06 WO PCT/DE2017/200129 patent/WO2018108215A1/de unknown
  - 2017-12-06 DE DE112017005010.3T patent/DE112017005010A5/de active Pending
  - 2017-12-06 US US16/468,093 patent/US10824884B2/en active Active
  - 2017-12-06 EP EP17829121.7A patent/EP3555808A1/de not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
WO2018108215A1 (de) | 2018-06-21 |
DE102016225073A1 (de) | 2018-06-21 |
US20200074191A1 (en) | 2020-03-05 |
US10824884B2 (en) | 2020-11-03 |
DE112017005010A5 (de) | 2019-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3328686B1 (de) | Verfahren und vorrichtung zum anzeigen einer umgebungsszene eines fahrzeuggespanns | |
DE102014222617B4 (de) | Fahrzeugerfassungsverfahren und Fahrzeugerfassungssytem | |
EP3292510B1 (de) | Verfahren und vorrichtung zur erkennung und bewertung von fahrbahnreflexionen | |
DE102013205882A1 (de) | Verfahren und Vorrichtung zum Führen eines Fahrzeugs im Umfeld eines Objekts | |
DE102015105248A1 (de) | Erzeugen eines bildes von der umgebung eines gelenkfahrzeugs | |
EP3281178A1 (de) | Verfahren zur darstellung einer fahrzeugumgebung eines fahrzeuges | |
DE112016001150T5 (de) | Schätzung extrinsischer kameraparameter anhand von bildlinien | |
EP3308361B1 (de) | Verfahren zur erzeugung eines virtuellen bildes einer fahrzeugumgebung | |
EP2888715B1 (de) | Verfahren und vorrichtung zum betreiben eines fahrzeugs | |
EP3117399B1 (de) | Verfahren zum zusammenfügen von einzelbildern, die von einem kamerasystem aus unterschiedlichen positionen aufgenommen wurden, zu einem gemeinsamen bild | |
EP2562681B1 (de) | Objektverfolgungsverfahren für ein Kamerabasiertes Fahrerassistenzsystem | |
DE102018100909A1 (de) | Verfahren zum Rekonstruieren von Bildern einer Szene, die durch ein multifokales Kamerasystem aufgenommen werden | |
DE102016124747A1 (de) | Erkennen eines erhabenen Objekts anhand perspektivischer Bilder | |
DE102019132996A1 (de) | Schätzen einer dreidimensionalen Position eines Objekts | |
EP2996327B1 (de) | Surround-view-system für fahrzeuge mit anbaugeräten | |
DE102008050809A1 (de) | Lichtstreifendetektionsverfahren zur Innennavigation und Parkassistenzvorrichtung unter Verwendung desselben | |
EP3555808A1 (de) | Vorrichtung zur bereitstellung einer verbesserten hinderniserkennung | |
DE102014007565A1 (de) | Verfahren zum Ermitteln einer jeweiligen Grenze zumindest eines Objekts, Sensorvorrichtung, Fahrerassistenzeinrichtung und Kraftfahrzeug | |
DE102014201409A1 (de) | Parkplatz - trackinggerät und verfahren desselben | |
EP3476696A1 (de) | Verfahren zum ermitteln von objektgrenzen eines objekts in einem aussenbereich eines kraftfahrzeugs sowie steuervorrichtung und kraftfahrzeug | |
DE102017104957A1 (de) | Verfahren zum Bestimmen einer Bewegung von zueinander korrespondierenden Bildpunkten in einer Bildsequenz aus einem Umgebungsbereich eines Kraftfahrzeugs, Auswerteeinrichtung, Fahrerassistenzsystem sowie Kraftfahrzeug | |
DE102020003465A1 (de) | Verfahren zur Detektion von Objekten in monokularen RGB-Bildern | |
EP3465608B1 (de) | Verfahren und vorrichtung zum bestimmen eines übergangs zwischen zwei anzeigebildern, und fahrzeug | |
DE102020107949A1 (de) | Sichtfeldunterstützungsbild-Erzeugungsvorrichtung und Bildumwandlungsverfahren | |
DE102022112318B3 (de) | Verfahren zur Ermittlung einer Ausdehnungsinformation eines Zielobjekts, Kraftfahrzeug, Computerprogramm und elektronisch lesbarer Datenträger |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
20190715 | 17P | Request for examination filed |
| AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the european patent | Extension state: BA ME
| DAV | Request for validation of the european patent (deleted) |
| DAX | Request for extension of the european patent (deleted) |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: CONTI TEMIC MICROELECTRONIC GMBH
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS
20210217 | 17Q | First examination report despatched |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN
20210608 | 18W | Application withdrawn |