US11461627B2 - Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands - Google Patents
Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands
- Publication number
- US11461627B2 (application US15/827,982, US201715827982A)
- Authority
- US
- United States
- Prior art keywords
- neural network
- driving
- command
- maneuvers
- definitive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0088—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G06K9/6256—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G06N3/0445—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- G05D2201/0213—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- the present disclosure relates to systems, components, and methodologies for using an artificial neural network in autonomous driving.
- the present disclosure relates to training an artificial neural network and controlling a vehicle using the trained artificial neural network.
- an end-to-end artificial neural network that has a single model.
- the single model is trained on definitive and neutral commands to control a vehicle maneuver more smoothly.
- the definitive commands may be forward, left and right turn commands that are input into the model along with respective forward, left, and right maneuvers executed by a vehicle driver.
- the neutral commands may be a plurality of neutral commands that are input into the model along with randomly selected forward, left, and right turn maneuvers executed by the vehicle driver. In this manner, the neutral commands are input to induce “confusion” into the neural network during training.
- the maneuvers are input in real-time via sensors on the vehicle.
- the neural network may be trained so that it can determine right, left, or forward trajectories at upcoming intersections.
- the neural network may be further configured to process navigational inputs such as voice commands or mapped route guidance by mapping them to particular spatial coordinates.
- the neural network may control a command controller so that a command is output to a vehicle component to execute the predicted trajectory at the intersection.
- FIG. 1 is a schematic and diagrammatic illustration of a vehicle control system including sensors and inputs to an autonomous driving system having a neural network and command controller output to execute driving maneuvers;
- FIG. 2 is a block diagram of a training regime for training the neural network of FIG. 1 to identify driving trajectories and predict driving maneuvers;
- FIG. 3 is a schematic and diagrammatic illustration of an end-to-end neural network of FIG. 1 ;
- FIG. 4 is an illustration of an exemplary two-dimensional model of a mapping of neural network commands for the neural network of FIG. 1 , including straight, right, left, and neutral, and their relative polarity and distance from neutral in the plane;
- FIG. 5 is an illustration of an exemplary three-dimensional model of a semantically-sensitive mapping of neural network commands for the neural network of FIG. 1 , including straight, right, left, and neutral, their relative polarity, as well as secondary commands and their relative coordinates and distance from neutral in the planes;
- FIG. 6A is an illustration of an exemplary steering angle over time for a human driver compared with an autonomous vehicle having a neural network that is trained only in definitive commands;
- FIG. 6B is an illustration of an exemplary steering angle over time for a human driver compared with an autonomous vehicle having a neural network that is trained in definitive and confusion-inducing commands.
- FIG. 1 is a schematic and diagrammatic view of an exemplary vehicle control system 10 .
- a vehicle 15 may include an autonomous driving system 20 including a neural network 25 , a local storage or memory 30 , and a component controller 35 implemented in part using one or more computer processors.
- Vehicle 15 may further include sensors 40 , such as camera or video sensors, coupled to the vehicle to capture data about the environment surrounding the vehicle and communicate the captured data to the neural network.
- Vehicle 15 may also include a navigation system 45 including one or more computer processors running software configured to capture global positioning data and communicate navigational data to the vehicle neural network 25 .
- Neural network 25 may be trained using real-time and/or stored driving maneuver data along with commands, as described further in FIG. 2 .
- the data and command inputs alternatively may not be in ‘real-time’.
- the commands plus imagery data can be stored on disk and processed repetitively until the model is trained.
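The stored-data workflow described above can be sketched as a simple epoch loop over the same commands and imagery until convergence. The dataset layout, the `train_step` stub, and the convergence threshold below are illustrative assumptions; the patent's actual training updates a real neural network rather than per-command weights.

```python
import random

# Illustrative stored dataset: (command, image) pairs read back from disk.
# Images are stubbed as tuples; a real system would load sensor frames.
dataset = [("right", ("img_right",)), ("left", ("img_left",)),
           ("straight", ("img_straight",))]

def train_step(command, image, weights):
    """Stub update: nudge a per-command weight toward 1.0 (a stand-in
    for one gradient step on the real model)."""
    w = weights.get(command, 0.0)
    weights[command] = w + 0.5 * (1.0 - w)
    return abs(1.0 - weights[command])  # pseudo-loss for this sample

def train_until_converged(dataset, threshold=1e-3, max_epochs=100):
    """Reprocess the same stored pairs, epoch after epoch, until the
    worst per-sample pseudo-loss falls below the threshold."""
    weights = {}
    for epoch in range(max_epochs):
        random.shuffle(dataset)
        losses = [train_step(c, img, weights) for c, img in dataset]
        if max(losses) < threshold:
            return weights, epoch + 1
    return weights, max_epochs
```

The point of the sketch is the control flow, repeated passes over fixed on-disk data, not the update rule itself.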
- a neural network 225 may be a single model end-to-end neural network as described in FIG. 3 .
- Neural network training 200 may include two modes of training: definitive mode 230 and random mode 232 . In each mode, the network is trained on the same trajectories of straight, right, and left, with different definitive and neutral input commands.
- in definitive mode training 230 , a plurality of definitive commands is provided as an input to the neural network along with corresponding definitive driving maneuver data.
- a discrete definitive command may be a right turn command 234 that is input along with right turn sensor data 236 for an approaching intersection.
- Another definitive command may be a left turn command 238 that is input with left turn sensor data 240 for an approaching intersection.
- Another definitive command may be a straight command 242 and straight sensor data 244 for an approaching intersection.
- the sensor data may be image data selected and input from stored historic data sets of known driving maneuvers.
- the sensor data may be image data collected in real time during manual driving of the vehicle that includes the neural network 225 .
- in random mode training 232 , a neutral mode command is provided as an input to the neural network 225 along with corresponding randomly selected driving data, the driving data corresponding to the same straight, right turn, and left turn data used in definitive mode training.
- the random command may be a neutral command 246 that is input along with right turn sensor data 248 for an approaching intersection.
- the neutral command 246 may also be input along with left turn sensor data 250 for an approaching intersection.
- the neutral command may also be input along with straight sensor data 252 for an approaching intersection.
- by training the network in this manner in random mode, the network learns the overall dynamics of driving under all valid maneuvers, while still maintaining control over which trajectory or maneuver to choose. While the control input is in random training mode, the neural network is allowed to learn the possible valid maneuvers in each driving situation because the random mode is simply a mix of all the other definitive commands. In this manner, the neural network learns to follow the free space in the road and avoid obstacles.
- because the neural network model is monolithic, the command inputs flow through the same computational graph, and therefore lessons learned in random mode training are shared with the other definitive command modes as well. Due to the monolithic nature and the confusion induced by the random mode with its unique random commands, such as neutral command inputs, some of the neurons in the network automatically learn to detect features in the images that are crucial to safe navigation, such as open spaces.
- the neural network 225 receives 50% of its training in definitive mode training 230 and 50% of its training in random mode training. However, the percentage of time in each training mode may vary depending on the dataset used and driving scenarios the network is being trained for.
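The two-mode regime above (50% definitive, 50% random/neutral) can be sketched as a dataset-assembly step. The string placeholders for commands and sensor data are hypothetical stand-ins for the real command encodings and image frames:

```python
import random

MANEUVERS = ["straight", "left", "right"]

def build_training_set(n_samples, definitive_fraction=0.5, seed=0):
    """Pair commands with maneuver sensor data for the two training modes.

    Definitive mode: the command matches the recorded maneuver.
    Random (neutral) mode: the 'neutral' command is paired with a
    randomly selected maneuver, inducing "confusion" so the network
    learns the dynamics of all valid maneuvers.
    """
    rng = random.Random(seed)
    samples = []
    n_definitive = int(n_samples * definitive_fraction)
    for _ in range(n_definitive):
        maneuver = rng.choice(MANEUVERS)
        samples.append(("command:" + maneuver, "sensor:" + maneuver))
    for _ in range(n_samples - n_definitive):
        maneuver = rng.choice(MANEUVERS)   # randomly selected trajectory
        samples.append(("command:neutral", "sensor:" + maneuver))
    rng.shuffle(samples)
    return samples
```

Changing `definitive_fraction` models the disclosure's note that the split may vary with the dataset and driving scenarios.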
- Sensor data 350 may be captured and input from a plurality of optical sensors such as cameras 366 configured to capture at least left, front, and right images relative to the exterior of the vehicle. These inputs may be fed into convolutional layers 352 of the network.
- NavFusion embedding 354 of input from a navigational system 368 may be added to the convolutional image output.
- Each voice command or guidance given by a navigation system may be mapped according to the semantic mapping described in FIGS. 4-5 , and fed, directly, into the neural network.
- Dense layers 356 predict the vehicle command (left, right, straight), and recurrent layers 358 may determine the time-series of the collected data as part of a feedback loop 370 with the command controller 362 , which outputs the command and drives execution 360 of a vehicle component 364 .
- when the component 364 is a steering wheel, the recurrent layers 358 utilize long short-term memory, the real-time continuous input of images, and the real-time command controller 362 output to adjust the steering wheel angle over time throughout the execution of the turn maneuver.
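The FIG. 3 pipeline, convolutional feature extraction, NavFusion embedding, dense prediction, and recurrent smoothing feeding the command controller, can be sketched structurally as below. Every stage body is a stand-in (string lengths in place of convolutional features, exponential smoothing in place of the LSTM recurrence); this is the data flow, not the patent's model:

```python
def conv_features(images):
    """Stand-in for the convolutional layers 352: reduce the left,
    front, and right camera frames to a fixed-length feature vector."""
    return [float(len(img)) for img in images]

def nav_fusion(features, nav_embedding):
    """NavFusion embedding 354: append the navigation-command
    embedding to the convolutional image features."""
    return features + list(nav_embedding)

def dense_predict(fused):
    """Stand-in for the dense layers 356: map fused features to a raw
    steering target in [-1, 1] (left negative, right positive)."""
    return max(-1.0, min(1.0, sum(fused) / (10.0 * len(fused))))

def recurrent_smooth(prev_angle, target, alpha=0.2):
    """Stand-in for the recurrent layers 358 and feedback loop 370:
    blend toward the target over successive frames so the steering
    angle changes smoothly throughout a turn."""
    return prev_angle + alpha * (target - prev_angle)

def drive(frames_sequence, nav_embedding):
    """Run the pipeline over a sequence of camera frames, returning
    the steering-angle trace sent to the vehicle component 364."""
    angle, trace = 0.0, []
    for images in frames_sequence:
        fused = nav_fusion(conv_features(images), nav_embedding)
        angle = recurrent_smooth(angle, dense_predict(fused))
        trace.append(angle)
    return trace
```

The recurrent stage is what turns a per-frame prediction into the smooth angle-over-time behavior compared in FIGS. 6A-6B.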
- the vehicle can be operated in the random mode for prolonged periods of time.
- the mode of operation may be input by a user via a user interface such as a keyboard, touchscreen, or vehicle interface.
- Sensors are active and capture images or other data indicative of the vehicle surroundings.
- the vehicle executes all the discrete maneuvers it has been trained on when each of them becomes relevant and safe/valid to execute. For example, if the vehicle is inside a parking lot in random mode, it will keep going straight until it reaches a turn or intersection, at which point it will randomly choose a maneuver it has been trained on, and execute that maneuver if feasible.
- the vehicle can then drive as if in straight definitive mode until it reaches another turn or intersection.
- the vehicle avoids curbs and obstacles, and allows the neural network to produce trajectories which are a combination of the definitive commands it has been trained on.
- the network may use lessons learned across different definitive training sessions together in a single driving control/command mode.
- the network deployed in the command mode can make a trajectory determination using a minimum of a single captured image.
- FIG. 4 illustrates the two-dimensional relationship of the neutral command to each of the right, left and straight commands so that they are suitable numerical inputs for the neural network.
- the neutral command 402 is at the origin point (0, 0).
- each of straight command 404 , left command 408 , and right command 406 is located on a unit circle, forming a triangle 410 .
- the coordinates for each definitive command 404 , 406 , and 408 are chosen so that their polarities are distinct, thereby helping the neural network to learn and associate polarities with the definitive command.
- straight command 404 coordinates are both positive (+, +);
- right command 406 coordinates are positive, negative (+, −);
- left command 408 coordinates are negative, positive (−, +).
- Neutral command (0,0) has no polarity and is symmetrically at the center of the triangle as it is the combination of all three definitive commands represented by the vertices of the triangle.
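A minimal sketch of the FIG. 4 embedding, assuming the three definitive commands sit 120 degrees apart on the unit circle. The exact angles are not stated in the disclosure; 45, 165, and 285 degrees are chosen here because they reproduce the stated polarities and place the neutral command at the centroid of the triangle:

```python
import math

def command_coordinates():
    """Map the four FIG. 4 commands to 2-D numerical inputs: the three
    definitive commands lie on the unit circle at 120-degree spacing
    (illustrative angles), with distinct polarities per the disclosure:
    straight (+,+), left (-,+), right (+,-). Neutral sits at the
    origin, the centroid of the triangle."""
    angles = {"straight": 45.0, "left": 165.0, "right": 285.0}
    coords = {cmd: (math.cos(math.radians(a)), math.sin(math.radians(a)))
              for cmd, a in angles.items()}
    coords["neutral"] = (0.0, 0.0)
    return coords
```

Equal spacing on the unit circle makes neutral exactly the average of the three definitive commands, matching its role as their combination.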
- FIG. 5 illustrates how mapping the four commands of FIG. 4 , can be extended to semantically sensitive numerical mapping of additional commands to number sequences.
- similar commands are close to each other in the embedding space.
- exit left 516 may lie between straight command 504 and left turn command 508 (90 degrees from straight), but closer to straight command 504 depending on the angle of the exit. In this manner, relative degrees of turns between 0 degrees and a 90 degree turn can be defined.
- exit right 514 may lie between straight command 504 and right turn command 506 .
- An optional third dimension “z” may be added to show relationships including braking and accelerating driving maneuvers.
- stop slowly 512 may lie in the z-plane indicative of the rate of deceleration and smooth transition to a stopped state.
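The semantically sensitive placement of intermediate commands such as "exit left" can be sketched as interpolation between embedded commands. The 30-degree ramp angle, the base placements at 45 and 165 degrees, and the renormalization step are illustrative assumptions:

```python
import math

def interpolate_command(a, b, t):
    """Place an intermediate command between two embedded commands by
    linear interpolation, renormalized onto the unit circle so that
    similar commands stay close together in the embedding space.
    t=0 gives a, t=1 gives b; a shallow exit uses a small t."""
    x = (1 - t) * a[0] + t * b[0]
    y = (1 - t) * a[1] + t * b[1]
    norm = math.hypot(x, y)
    return (x / norm, y / norm)

# Assumed base placements (45 and 165 degrees, as in the 2-D sketch):
straight = (math.cos(math.radians(45)), math.sin(math.radians(45)))
left = (math.cos(math.radians(165)), math.sin(math.radians(165)))

# 'exit left' at a 30-degree ramp: between straight and a 90-degree
# left turn, closer to straight (t = 30/90).
exit_left = interpolate_command(straight, left, 30.0 / 90.0)
```

The same interpolation with the right-turn vertex would place "exit right", and a third coordinate could encode deceleration for "stop slowly".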
- FIGS. 6A-6B illustrate how the neural network of this disclosure and the use of the random training command results in driving maneuvers that more closely resemble smooth human driving trajectories.
- FIG. 6A depicts the steering angle over time 682 that the neural network may produce when it has not been trained with the neutral mixed command.
- compared with the human manual trajectory 680 for the same steering maneuvers, the neural-network-driven vehicle exhibits abrupt turning behaviors 683 , 684 and an inability to transition fully and smoothly 685 . This is due to the strong data correlation between the definitive commands and the outputs that occur.
- in FIG. 6B , the human trajectory 690 is closely followed by the neural-network-driven vehicle trajectory 692 . Vertical dashed lines may indicate transitions between control modes.
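The qualitative difference between FIGS. 6A and 6B can be quantified with a simple frame-to-frame metric; the two traces below are synthetic illustrations, not the patent's measured data:

```python
def max_step(trace):
    """Largest frame-to-frame change in steering angle (degrees): a
    simple proxy for the abruptness visible in FIG. 6A versus the
    smooth transitions of FIG. 6B."""
    return max(abs(b - a) for a, b in zip(trace, trace[1:]))

# Synthetic illustration: a definitive-only network snaps between
# command outputs, while a confusion-trained network ramps through
# intermediate angles on its way into and out of the turn.
abrupt = [0.0, 0.0, 30.0, 30.0, 0.0]
smooth = [0.0, 7.5, 15.0, 22.5, 30.0, 20.0, 10.0, 0.0]
```

Both traces reach the same 30-degree peak; only the transitions differ, which is exactly what the induced-confusion training is said to improve.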
- although the network has been disclosed as having convolutional layers and sensed image inputs, other sensor inputs such as radar or LIDAR sensors and inputs may be used. Additional inputs can be provided via a vehicle CAN bus or similar internal communication network of the vehicle. Further, commands may be expanded beyond simply left, right, and straight, such as reverse or stop commands as described with respect to FIGS. 4-5 .
- the system may permit a range of operation from full autonomy to partially-supervised, or semi-autonomous driving capabilities.
- Deep neural networks have recently been shown to be able to control the steering of an automotive vehicle by learning a mapping between raw image sensor data to steering direction.
- the network may operate in an end-to-end manner without any external control over the network's predictions.
- although such a network could theoretically be trained on a set of external commands to control the network, such a system would be unable to learn the dynamics of driving. As a result, such training would be unable to transition smoothly, or in a human-like fashion, from one command or mode to another.
- This technical problem stems from a strong data correlation between commands and output trajectories. More specifically, a single neural network cannot perform different discrete driving tasks using a single type of output (e.g. steering).
- the disclosed embodiments provide a technical solution to these conventional deficiencies by providing a method to control the sometimes unpredictable output of a driving artificial neural network by devising a new training method for driving networks. This approach may be termed “induced confusion mode.”
- this approach involves training the same neural network on all commands separately, and additionally, training the same network on an additional auxiliary random mode or command.
- This auxiliary random mode may constitute training the network on all other trajectories while keeping this random mode as the control input for the network.
- This approach teaches the network the overall dynamics of driving under all valid maneuvers, while still maintaining control over which trajectory or maneuver to choose, given an external command.
- this approach enables smooth, human-like steering behavior for the vehicle, while paying attention to dynamics of driving by learning to avoid obstacles and drive within free road space automatically.
- the neural network may be trained to direct and maneuver other systems or robotic devices.
- Disclosed embodiments may include apparatus/systems for performing the operations disclosed herein.
- An apparatus/system may be specially constructed for the desired purposes, or it may comprise a general purpose apparatus/system selectively activated or reconfigured by a program stored in the apparatus/system.
- Disclosed embodiments may also be implemented in one or a combination of hardware, firmware, and software. They may be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the operations described herein.
- a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
- a machine-readable medium may include read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices including thumb drives and solid state drives, and others.
- processor may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.
- a “computing platform” or “controller” may comprise one or more processors.
- computer readable medium is meant to refer to any machine-readable medium (automated data medium) capable of storing data in a format readable by a mechanical device.
- machine-readable media include magnetic media such as magnetic disks, cards, tapes, and drums, punched cards and paper tapes, optical disks, barcodes and magnetic ink characters.
- computer readable and/or writable media may include, for example, a magnetic disk (e.g., a floppy disk, a hard disk), an optical disc (e.g., a CD, a DVD, a Blu-ray), a magneto-optical disk, a magnetic tape, semiconductor memory (e.g., a non-volatile memory card, flash memory, a solid state drive, SRAM, DRAM), an EPROM, an EEPROM, etc.).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Radar, Positioning & Navigation (AREA)
- Aviation & Aerospace Engineering (AREA)
- Remote Sensing (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
- Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Medical Informatics (AREA)
- Vascular Medicine (AREA)
- Multimedia (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
Description
Claims (7)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/827,982 US11461627B2 (en) | 2017-11-30 | 2017-11-30 | Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands |
| EP18811240.3A EP3717981B1 (en) | 2017-11-30 | 2018-11-28 | Methods for training and controlling an artificial neural network with discrete vehicle driving commands |
| PCT/EP2018/082782 WO2019105974A1 (en) | 2017-11-30 | 2018-11-28 | Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/827,982 US11461627B2 (en) | 2017-11-30 | 2017-11-30 | Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20190164051A1 US20190164051A1 (en) | 2019-05-30 |
| US11461627B2 true US11461627B2 (en) | 2022-10-04 |
Family
ID=64556912
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/827,982 Active 2039-02-24 US11461627B2 (en) | 2017-11-30 | 2017-11-30 | Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11461627B2 (en) |
| EP (1) | EP3717981B1 (en) |
| WO (1) | WO2019105974A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220153294A1 (en) * | 2019-03-19 | 2022-05-19 | Uisee Technologies (beijing) Co., Ltd. | Methods for updating autonomous driving system, autonomous driving systems, and on-board apparatuses |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10839230B2 (en) * | 2018-09-06 | 2020-11-17 | Ford Global Technologies, Llc | Multi-tier network for task-oriented deep neural network |
| US11514293B2 (en) * | 2018-09-11 | 2022-11-29 | Nvidia Corporation | Future object trajectory predictions for autonomous machine applications |
| US20200241542A1 (en) * | 2019-01-25 | 2020-07-30 | Bayerische Motoren Werke Aktiengesellschaft | Vehicle Equipped with Accelerated Actor-Critic Reinforcement Learning and Method for Accelerating Actor-Critic Reinforcement Learning |
| CN110428693B (en) * | 2019-07-31 | 2021-08-24 | 驭势科技(北京)有限公司 | User driving habit training method, training module, vehicle-mounted device and storage medium |
| US11829150B2 (en) | 2020-06-10 | 2023-11-28 | Toyota Research Institute, Inc. | Systems and methods for using a joint feature space to identify driving behaviors |
| KR102525191B1 (en) * | 2020-08-07 | 2023-04-26 | 한국전자통신연구원 | System and method for generating and controlling driving paths in autonomous vehicles |
| US20220274603A1 (en) * | 2021-03-01 | 2022-09-01 | Continental Automotive Systems, Inc. | Method of Modeling Human Driving Behavior to Train Neural Network Based Motion Controllers |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190113917A1 (en) * | 2017-10-16 | 2019-04-18 | Toyota Research Institute, Inc. | System and method for leveraging end-to-end driving models for improving driving task modules |
-
2017
- 2017-11-30 US US15/827,982 patent/US11461627B2/en active Active
-
2018
- 2018-11-28 EP EP18811240.3A patent/EP3717981B1/en active Active
- 2018-11-28 WO PCT/EP2018/082782 patent/WO2019105974A1/en not_active Ceased
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190113917A1 (en) * | 2017-10-16 | 2019-04-18 | Toyota Research Institute, Inc. | System and method for leveraging end-to-end driving models for improving driving task modules |
Non-Patent Citations (8)
| Title |
|---|
| Andreu Catala, Antoni Gau, Bernardo Morcego, Josep M. Fuertes, "A Neural Network Texture Segmentation System for Open Road Vehicle Guidance", Department ESA, pp. 247-252, 1992. (Year: 1992). * |
| Bojarski, Mariusz, et al. "End to end learning for self-driving cars." arXiv preprint arXiv:1604.07316 (2016). (Year: 2016). * |
| Hubschneider et al.; Adding Navigation to the Equation: Turning Decisions for End-to-End Vehicle Control; 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC); IEEE; Oct. 16, 2017; pp. 1-8. |
| Jesus Muñoz-Bulnes, Carlos Fernandez, Ignacio Parra, David Fernández-Llorca, Miguel A. Sotelo, "Deep Fully Convolutional Networks with Random Data Augmentation for Enhanced Generalization in Road Detection", 2017 IEEE 20th International Conference on Intelligent Transportation Systems, 366-371. (Year: 2017). * |
| Mahmud, Firoz, Al Arafat, and Syed Tauhid Zuhori. "Intelligent autonomous vehicle navigated by using artificial neural network." 2012 7th International Conference on Electrical and Computer Engineering. IEEE, 2012. (Year: 2012). * |
| Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, Karol Zieba, "End to End Learning for Self-Driving Cars", NVIDIA Corporation, Apr. 2016, pp. 1-9 (Year: 2016). * |
| Pomerleau, Dean A. "Efficient training of artificial neural networks for autonomous navigation." Neural computation 3.1 (1991): 88-97. (Year: 1991). * |
| Search Report and Written Opinion for International Patent Application No. PCT/EP2018/082782; dated Apr. 15, 2019. |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220153294A1 (en) * | 2019-03-19 | 2022-05-19 | Uisee Technologies (beijing) Co., Ltd. | Methods for updating autonomous driving system, autonomous driving systems, and on-board apparatuses |
| US11685397B2 (en) * | 2019-03-19 | 2023-06-27 | Uisee Technologies (beijing) Co., Ltd. | Methods for updating autonomous driving system, autonomous driving systems, and on-board apparatuses |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3717981A1 (en) | 2020-10-07 |
| US20190164051A1 (en) | 2019-05-30 |
| EP3717981B1 (en) | 2023-07-12 |
| WO2019105974A1 (en) | 2019-06-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11461627B2 (en) | Systems and methods for training and controlling an artificial neural network with discrete vehicle driving commands |
| US11835962B2 (en) | Analysis of scenarios for controlling vehicle operations |
| JP7655866B2 (en) | Planning room for reversing vehicles |
| CN113242958B (en) | Automatic vehicle hierarchical planning system and method |
| EP3535636B1 (en) | Method and system for controlling vehicle |
| US9921585B2 (en) | Detailed map format for autonomous driving |
| JP7680452B2 (en) | Blocked Area Guidance |
| AU2019251362A1 (en) | Techniques for considering uncertainty in use of artificial intelligence models |
| CA3096415A1 (en) | Dynamically controlling sensor behavior |
| CN110341700A (en) | Self-navigation using deep reinforcement learning |
| US11912302B2 (en) | Autonomous control engagement |
| US11738777B2 (en) | Dynamic autonomous control engagement |
| EP3990328B1 (en) | Techniques for contacting a teleoperator |
| CN109297502A (en) | Laser projection pointing method and device based on image processing and GPS navigation technology |
| CN115279640A (en) | External control strategy derivation for autonomous vehicles |
| Chauvin | Hierarchical decision-making for autonomous driving |
| Yan | Automotive safety-assisted driving technology based on computer artificial intelligence environment |
| Huang et al. | iCOIL: scenario aware autonomous parking via integrated constrained optimization and imitation learning |
| Chithra et al. | Survey on intelligent transport systems: insights of machine learning and deep learning algorithms |
| Khan | Automated vehicle control |
| Rosero | Leveraging Modular Architectures and End-to-End Learning for Autonomous Driving in Unmapped Environments |
| WO2024177936A1 (en) | Systems and methods for controlling a vehicle |
| WO2022140063A1 (en) | Autonomous control engagement |
| CN119116981A (en) | Automatic driving control method, device, computer equipment and storage medium |
| Macdonald | A Simulated Autonomous Car |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: VOLKSWAGEN GROUP OF AMERICA, INC., VIRGINIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SALEEM, MUNEEB; REEL/FRAME: 044267/0064. Effective date: 20171128 |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | AS | Assignment | Owner names: PORSCHE AG, GERMANY; AUDI AG, GERMANY; VOLKSWAGEN AG, GERMANY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: VOLKSWAGEN GROUP OF AMERICA, INC.; REEL/FRAME: 044463/0120. Effective date: 20171207 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |