DE102016207209A1 - STIXEL ESTIMATION AND ROAD SCENE SEGMENTATION USING DEEP LEARNING - Google Patents
- Publication number
- DE102016207209A1 (application DE102016207209.9A)
- Authority
- DE
- Germany
- Prior art keywords
- data
- image
- vertical
- deep learning
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
Methods and systems are provided for detecting an object in an image. In one embodiment, a method comprises: receiving, via a processor, data from a single sensor, the data representing an image; dividing, by the processor, the image into vertical sub-images; processing, by the processor, the vertical sub-images based on deep-learning models; and detecting, by the processor, an object based on the processing.
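The slicing step named in the abstract can be illustrated with a minimal sketch. This is not the patented implementation — the function name, slice width, and NumPy representation are assumptions chosen for illustration:

```python
import numpy as np

def split_into_vertical_slices(image: np.ndarray, slice_width: int) -> list:
    """Split an H x W image into vertical strips, each slice_width columns wide.

    The final strip may be narrower when the width is not an exact multiple.
    """
    w = image.shape[1]
    return [image[:, x:x + slice_width] for x in range(0, w, slice_width)]

# Example: a 4 x 10 image split into strips 3 columns wide.
image = np.arange(40).reshape(4, 10)
slices = split_into_vertical_slices(image, 3)
print([s.shape for s in slices])  # → [(4, 3), (4, 3), (4, 3), (4, 1)]
```

Stacking the strips back side by side recovers the original image, so no pixels are lost by the division.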
Description
TECHNICAL FIELD
The technical field generally relates to object detection systems and methods, and more particularly to object detection systems and methods that detect objects based on so-called "deep learning".
BACKGROUND
Various systems process data to detect objects in proximity to the system. For example, some vehicle systems detect objects in the vicinity of the vehicle and use the information about the object to alert the driver to the object and/or to control the vehicle. The vehicle systems detect the object based on sensors placed about the vehicle. For example, multiple cameras are mounted at the rear, the sides, and/or the front of the vehicle in order to detect objects. Images from the multiple cameras are used to detect the object based on stereo vision. Mounting multiple cameras in a vehicle, or in any system, increases the overall cost.
Accordingly, it is desirable to provide methods and systems that detect objects in an image from a single camera. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
SUMMARY
Methods and systems are provided for detecting an object in an image. In one embodiment, a method includes: receiving, via a processor, data from a single sensor, the data representing an image; dividing, by the processor, the image into vertical sub-images; processing, by the processor, the vertical sub-images based on deep-learning models; and detecting, by the processor, an object based on the processing.
In one embodiment, a system includes a non-transitory computer-readable medium. The non-transitory computer-readable medium includes a first computer module that receives, via a processor, data from a single sensor, the data representing an image. The non-transitory computer-readable medium includes a second computer module that divides, via the processor, the image into vertical sub-images. The non-transitory computer-readable medium includes a third computer module that processes, via the processor, the vertical sub-images based on deep-learning models, and that detects, via the processor, an object based on the processing.
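The three-module pipeline described in the summary — receive an image, divide it into vertical sub-images, classify each sub-image — can be sketched as below. This is a toy illustration under stated assumptions: the mean-intensity rule in `classify_slice` is a hypothetical stand-in for the deep-learning models, and all names are invented for the example:

```python
import numpy as np

def classify_slice(slice_img: np.ndarray) -> str:
    """Hypothetical stand-in for the deep-learning models: a real system would
    run a trained network on the slice instead of this intensity threshold."""
    return "object" if slice_img.mean() > 128 else "free-space"

def detect_objects(image: np.ndarray, slice_width: int = 4) -> list:
    """Pipeline sketch: receive image -> split into vertical slices -> classify each."""
    w = image.shape[1]
    slices = [image[:, x:x + slice_width] for x in range(0, w, slice_width)]
    return [classify_slice(s) for s in slices]

# An 8 x 8 frame whose bright right half stands in for an "object".
image = np.zeros((8, 8), dtype=np.uint8)
image[:, 4:] = 255
print(detect_objects(image))  # → ['free-space', 'object']
```

The per-slice structure is what makes the single-camera approach tractable: each vertical strip gets an independent label, so no stereo disparity is needed.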
DESCRIPTION OF THE DRAWINGS
The embodiments will hereinafter be described in conjunction with the following figures, wherein like numerals denote like elements, and wherein:
DETAILED DESCRIPTION
The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary, or the following detailed description. It should be noted that throughout the drawings the same reference numbers refer to the same or corresponding parts and features. As used herein, the term "module" refers to an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Referring now to
The object detection system
The single sensor
The single sensor
In various embodiments, the object detection module captures
In various embodiments, the object detection module processes
Referring now to
The model data store
With further reference to
The image processing module
The deep-learning module
The stixel detection module
The stixel detection module
With further reference to
With further reference to
With further reference to
Referring now to
As can further be appreciated, the method of
In one example, the method may begin at
While at least one embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the embodiment or embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description provides those skilled in the art with a convenient road map for implementing the embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and their legal equivalents.
Claims (10)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562155948P | 2015-05-01 | 2015-05-01 | |
US62/155,948 | 2015-05-01 | ||
US15/092,853 | 2016-04-07 | ||
US15/092,853 US20160217335A1 (en) | 2009-02-27 | 2016-04-07 | Stixel estimation and road scene segmentation using deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
DE102016207209A1 true DE102016207209A1 (en) | 2016-11-03 |
Family
ID=57135985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- DE102016207209.9A Withdrawn DE102016207209A1 (en) 2016-04-27 STIXEL ESTIMATION AND ROAD SCENE SEGMENTATION USING DEEP LEARNING
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106096493A (en) |
DE (1) | DE102016207209A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10860034B1 (en) | 2017-09-27 | 2020-12-08 | Apple Inc. | Barrier detection |
US20230053786A1 (en) * | 2021-08-19 | 2023-02-23 | Ford Global Technologies, Llc | Enhanced object detection |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190072978A1 (en) * | 2017-09-01 | 2019-03-07 | GM Global Technology Operations LLC | Methods and systems for generating realtime map information |
CN109508673A (en) * | 2018-11-13 | 2019-03-22 | 大连理工大学 | It is a kind of based on the traffic scene obstacle detection of rodlike pixel and recognition methods |
WO2021056309A1 (en) * | 2019-09-26 | 2021-04-01 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for detecting road markings from a laser intensity image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4797794B2 (en) * | 2006-05-24 | 2011-10-19 | 日産自動車株式会社 | Pedestrian detection device and pedestrian detection method |
US8385599B2 (en) * | 2008-10-10 | 2013-02-26 | Sri International | System and method of detecting objects |
CN102930274B (en) * | 2012-10-19 | 2016-02-03 | 上海交通大学 | A kind of acquisition methods of medical image and device |
-
2016
- 2016-04-27 DE DE102016207209.9A patent/DE102016207209A1/en not_active Withdrawn
- 2016-05-03 CN CN201610285721.0A patent/CN106096493A/en active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10860034B1 (en) | 2017-09-27 | 2020-12-08 | Apple Inc. | Barrier detection |
US20230053786A1 (en) * | 2021-08-19 | 2023-02-23 | Ford Global Technologies, Llc | Enhanced object detection |
US11922702B2 (en) * | 2021-08-19 | 2024-03-05 | Ford Global Technologies, Llc | Enhanced object detection |
Also Published As
Publication number | Publication date |
---|---|
CN106096493A (en) | 2016-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102018116111B4 (en) | A unified deep convolutional neural network for free space estimation, object recognition estimation, and object pose estimation | |
DE102017201717B4 (en) | Method for using a rear vision system for a host vehicle and rear vision system for carrying out the method | |
DE102017201852B4 (en) | Parking assistance system for a vehicle and a method for using it | |
- DE102016207209A1 (en) | STIXEL ESTIMATION AND ROAD SCENE SEGMENTATION USING DEEP LEARNING | |
DE102015121339B4 (en) | SYSTEMS AND METHODS FOR DETERMINING A CONDITION OF A ROAD | |
EP2569953B1 (en) | Optical self-diagnosis of a stereoscopic camera system | |
DE102017120112A1 (en) | DEPTH CARE VALUATION WITH STEREO IMAGES | |
DE102016122190A1 (en) | Stixel estimation methods and systems | |
DE102015208782A1 (en) | Object detection device, driving support device, object detection method, and object detection program | |
WO2015173092A1 (en) | Method and apparatus for calibrating a camera system in a motor vehicle | |
DE102012222963A1 (en) | Apparatus and method for detecting a three-dimensional object using an image of the surroundings of a vehicle | |
WO2020207528A1 (en) | Method and processing unit for ascertaining the size of an object | |
DE112016003517T5 (en) | Apparatus for displaying assistance images for a driver and method thereto | |
DE102014227032A1 (en) | System for filtering LiDAR data in a vehicle and corresponding method | |
DE102015201747A1 (en) | SENSOR SYSTEM FOR A VEHICLE AND METHOD | |
DE102011111440A1 (en) | Method for representation of environment of vehicle, involves forming segments of same width from image points of equal distance in one of image planes, and modeling objects present outside free space in environment | |
DE112016003546T5 (en) | Apparatus for displaying assistance images for a driver and method thereto | |
DE102006005512A1 (en) | System and method for measuring the distance of a preceding vehicle | |
DE102012223360A1 (en) | Apparatus and method for detecting an obstacle to an all-round surveillance system | |
DE112018007485T5 (en) | Road surface detection device, image display device using a road surface detection device, obstacle detection device using a road surface detection device, road surface detection method, image display method using a road surface detection method, and obstacle detection method using a road surface detection method | |
DE102013226476A1 (en) | IMAGE PROCESSING SYSTEM AND SYSTEM OF A ROUND MONITORING SYSTEM | |
DE102016106293A1 (en) | Dynamic Stixel estimation using a single moving camera | |
DE102011118171A1 (en) | Method for continuous estimation of driving surface plane of motor vehicle, involves determining current three-dimensional points of surrounding of motor vehicle from current image of image sequence of image capture device | |
DE102011082881A1 (en) | Method for representing surroundings of vehicle e.g. motor vehicle e.g. car, involves transforming primary image information into secondary image information corresponding to panoramic view using spatial information | |
DE102013012780A1 (en) | Method for detecting a target object by clustering characteristic features of an image, camera system and motor vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
R012 | Request for examination validly filed | ||
R082 | Change of representative |
Representative=s name: SCHWEIGER & PARTNERS, DE |
|
R119 | Application deemed withdrawn, or ip right lapsed, due to non-payment of renewal fee |