CA3087361A1 - Autonomous driving methods and systems - Google Patents


Info

Publication number
CA3087361A1
Authority
CA
Canada
Prior art keywords
driving
vehicle
arrow
driving situation
cognitive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3087361A
Other languages
French (fr)
Inventor
Deyi Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Driving Brain International Ltd
Original Assignee
Driving Brain International Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Driving Brain International Ltd filed Critical Driving Brain International Ltd
Publication of CA3087361A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process

Abstract

Methods and systems are described for autonomously driving a vehicle. Sensor data is received from one or more sensors and a graphical representation of the sensor data is generated, which may comprise a driving situation map. The driving situation map may be represented using a logarithmic polar coordinate system comprising an angle dimension and a radius dimension. From the driving situation map, a graphical driving command is determined, which may comprise a graphically depicted arrow, referred to as a cognitive arrow. The vehicle is driven based on the graphical driving command, which represents the desired control parameters. Pairs of driving situation maps (input) and cognitive arrows (output) may be used for end-to-end deep learning to train the algorithm that generates the cognitive arrow.

Description

AUTONOMOUS DRIVING METHODS AND SYSTEMS
Technical Field
[0001] The application is directed to self-driving methods and systems. More specifically, the application is directed to autonomous driving methods and systems incorporating deep learning techniques.
Background
[0002] The Society of Automotive Engineers (SAE) defines 6 levels of autonomous driving from "no automation" (level 0) to "full automation" (level 5) (see SAE J3016 Automated Driving Levels of Driving Automation as defined in New SAE International Standard J3016).
Herein, "autonomous driving" refers to level 3, 4 and 5 automation. Passengers in a level 3, 4 or 5 autonomously driven vehicle are physically disengaged from controlling the vehicle (i.e.
their hands and feet are disengaged from the vehicle's controls), and may only be responsible for taking control of the vehicle when required by the system.
[0003] Many existing autonomous driving systems capture images and parameters of objects surrounding a vehicle to create a view of the area around the vehicle.
The view may be in two or three dimensions. The system then takes actions based on the view. To determine what action to take, most autonomous driving systems rely on a library of if-then rules covering various circumstances. For example, if an object is within a certain distance of the vehicle, moving towards the vehicle, and within the direction of travel of the vehicle, then the system should apply the brakes. Such existing systems are limited to a finite number of circumstances they are capable of handling, since a rule covering a circumstance must be in the library for the system to respond to it correctly. This is particularly limiting for exceptional and unexpected driving circumstances which may be encountered while driving.
[0004] Furthermore, many existing systems are designed as an extension of a vehicle's functions. They are not designed from the perspective of a human driver. Such systems may respond to situations differently than a human driver and are limited in the range of driving circumstances that they are capable of responding to.
[0005] There is a general desire for an autonomous driving system which more closely emulates the driving ability of a skilled human driver and is capable of responding to a broader range of driving circumstances.
[0006] The foregoing examples of the related art and limitations related thereto are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.
Summary
[0007] The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools, and methods which are meant to be exemplary and illustrative, not limiting in scope. In various embodiments, one or more of the above-described problems have been reduced or eliminated, while other embodiments are directed to other improvements.
[0008] The application is directed to autonomous driving methods and systems, which define and describe how a device composed of hardware and software can be used to drive vehicles and other ground conveyances to achieve level 3 (L3) and above autonomous driving as defined by SAE.
[0009] One aspect of the systems and methods described herein is the deep learning techniques implemented to deal with uncertainties on the road similar to how human drivers would. With proper sensors equipped on a vehicle, the systems can be used to learn from human drivers' cognitive ability, accumulate driving skills, and eventually replace human drivers by replicating a human driver's driving skills.
[0010] One aspect of the invention provides a method of autonomously driving a vehicle, the method comprising: receiving, refining and integrating data from one or more sensors;
creating a graphical representation of the driving situation based on the sensor data;
producing a graphical driving command based on the graphical representation;
and controlling the vehicle based at least in part on the graphical driving command. In some embodiments the graphical driving command is based on the graphical representation and previously learnt driving skills.
[0011] In some embodiments of the claimed invention the graphical representation comprises a graphical representation of the driving situation. The graphical representation is expressed in particular embodiments using a polar coordinate system comprising an angle dimension and a radius dimension.
[0012] In some embodiments of the claimed invention the graphical representation comprises a current driving situation map, and the graphical driving command comprises a cognitive arrow.
[0013] In some embodiments of the claimed invention generating the cognitive arrow comprises: searching a set of historical driving situation maps; determining a closest matching driving situation map from the set of historical driving situation maps most similar to the current driving situation map; and retrieving a cognitive arrow associated with the closest matching driving situation map.
[0014] In some embodiments of the claimed invention determining a closest matching driving situation map comprises executing a machine learning algorithm.
[0015] In some embodiments of the claimed invention the machine learning algorithm uses one or more of a convolutional neural network and a recurrent neural network.
[0016] In some embodiments of the claimed invention calculating the cognitive arrow comprises: determining a right of way of the vehicle from the current driving situation map;
and calculating the cognitive arrow based at least in part on the right of way.
[0017] Another aspect of the invention provides a method of determining the final driving command for a driverless vehicle, which is a comprehensive result of two individual commands derived from two training mechanisms: the first driving command is produced based on a sequence of driving situation maps to achieve the purpose of the trip, based on a route pre-defined before the journey begins; the other driving command is generated based on various parameters while the vehicle is operating, which are not related to the purpose of the trip but are vital for the safety, comfort, smoothness, and energy efficiency of the passengers' experience and the operation of the vehicle.
[0018] Another aspect of the invention provides a method for determining a driving command for a vehicle, the method comprising: receiving a graphical representation of a driving situation; searching a first database of previous driving situations for a closest matching driving situation; retrieving a first driving command from the first database associated with the closest matching driving situation; receiving sensor data;
determining a driving scenario from the sensor data; searching a second database of driving scenarios for a closest matching driving scenario; retrieving a second driving command from the database of driving scenarios associated with the closest matching driving scenario;
and combining the first driving command and the second driving command to generate a composite driving command. In some embodiments, the first database comprises a graphical database. In certain embodiments the training of an autonomous system for driving the vehicle in accordance with driving situations and/or a determination of a closest matching driving situation map comprises executing a machine learning algorithm. This machine learning algorithm may be based on a convolutional neural network. In certain embodiments the training of an autonomous system for driving a vehicle in accordance with driving scenarios and/or a determination of a closest matching driving scenario comprises executing a machine learning algorithm. This machine learning algorithm may be based on a recurrent neural network.
[0019] Another aspect of the invention provides a method of providing autonomous driving for a vehicle, wherein the vehicle comprises an autonomous driving system, a manual driving system and a sensor system, the method comprising: operating the vehicle in either one of a training and a driving mode, wherein operating the vehicle in the training mode comprises: (i) representing a driving situation at a point in time based on data output from the sensor system; (ii) recording an output of the manual driving system at the point in time, and representing the output as a first image denoting an actual steering direction and an actual acceleration or deceleration; (iii) generating, by the autonomous driving system, an autonomous driving output based on the driving situation at the point in time and a training route plan, wherein the autonomous driving output is represented as a second image denoting a planned steering direction and a planned acceleration or deceleration; (iv) determining a difference between the manual driving system output and the autonomous driving output; and (v) configuring a real time planning module of the autonomous driving system based at least in part on the difference, wherein the real time planning module determines autonomous driving outputs for the driving mode based on driving situations and route plans.
[0020] In some embodiments, operating the vehicle in the driving mode comprises: (i) representing a driving situation at a second point in time based on data output from the sensor system; (ii) generating, by the autonomous driving system, an autonomous driving output based on the driving situation at the second point in time and a predefined route plan, and representing the output as a third image denoting a calculated steering direction and a calculated acceleration or deceleration; and (iii) applying the autonomous driving output to one or more control units of a vehicle to drive the vehicle along the predefined route plan.
[0021] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the drawings and by study of the following detailed descriptions.
Brief Description of the Drawings
[0022] Exemplary embodiments are illustrated in referenced figures of the drawings. It is intended that the embodiments and figures disclosed herein are to be considered illustrative rather than restrictive.
[0023] FIG. 1 illustrates an autonomous driving system according to one embodiment.
[0024] FIG. 2 illustrates modules of an autonomous driving system which may be implemented as software.
[0025] FIG. 3A illustrates a flowchart of a method of generating one or more Driving Situation Maps (DSMs) which may be executed by an autonomous driving system according to one embodiment.
[0026] FIG. 3B illustrates an example driving situation.
[0027] FIG. 3C illustrates an example DSM.
[0028] FIG. 4 is a flowchart of a general route planning method which may be executed by an autonomous driving system according to one embodiment.
[0029] FIG. 5A is a flowchart of a method which may be executed by the cognition module of an autonomous driving system according to one embodiment to generate driving commands.
[0030] FIGS. 5B to 5H illustrate example cognitive arrows that may be generated by an autonomous driving system according to one embodiment.
[0031] FIGS. 5I-1 to 5I-8 illustrate an example sequence of cognitive arrows that may be generated for the scenario of overtaking another vehicle.
[0032] FIG. 5J illustrates a flowchart of a method for generating cognitive arrows during an autonomous driving mode.
[0033] FIG. 6A illustrates a flowchart of a method for generating cognitive arrows including strategic and tactical considerations.
[0034] FIG. 6B illustrates a right of way during an example driving situation.
[0035] FIG. 7 illustrates a flowchart of a method for training an autonomous driving system according to one embodiment.
[0036] FIG. 8 shows perception, cognition and control modules of an autonomous driving system according to one embodiment.
[0037] FIG. 9 is a data flowchart illustrating the factors that affect the decision-making process of the cognition module.
[0038] FIG. 10 is a data flowchart illustrating the relationships between instantaneous memory, working memory and long-term memory.
[0039] FIG. 11 illustrates positive learning performed by a cognition module of an autonomous driving system according to one embodiment.
[0040] FIG. 12 illustrates negative learning performed by a cognition module of an autonomous driving system according to one embodiment.
[0041] FIG. 13 schematically illustrates a control chip for tactical driving and the input that may be provided to such control chip for purposes of determining a tactical cognitive arrow.
[0042] FIG. 14 illustrates a comparison between a model driver's trajectory and an autonomous driving system or robot's trajectory during training mode.
Description
[0043] Throughout the following description specific details are set forth in order to provide a more thorough understanding to persons skilled in the art. However, well known elements may not have been shown or described in detail to avoid unnecessarily obscuring the disclosure. Accordingly, the description and drawings are to be regarded in an illustrative, rather than a restrictive, sense.
[0044] Particular embodiments described herein provide self-driving systems for use in autonomously driven vehicles, which may be used for transportation of people or cargo, including land vehicles such as cars, buses, trucks, motorcycles, carts, and/or the like.
Vehicle, as used herein, includes any ground conveyance, including ground conveyances that are not necessarily primarily for purposes of transportation or cargo, such as, for example, lawnmowers, vacuums, ice cleaners, tractors, forklifts, camera vehicles, tracking vehicles, and/or the like.
[0045] Modules implemented by systems described herein emulate or replace a series of driver activities such as visual perception, auditory hearing, thinking, memory, learning, reasoning, decision making, interaction, control, and so on. The system may apply deep learning techniques to deal with uncertainties on the road. By obtaining and processing data from sensors installed on a vehicle, the system can learn from a human driver's cognitive ability, accumulate driving skills, and replace many or all of driving functions normally performed by a human driver. An objective of such an autonomous driving system is to remove human drivers from driving tasks so that they are free to enjoy the ride as passengers. As an additional benefit and unlike human drivers, an autonomous driving system does not become distracted and is not affected by emotions, or physical or mental fatigue.
[0046] Embodiments of autonomous driving systems as described herein may be developed based on how human drivers perceive and process images, receive feedback and information during driving, and apply accumulated experiences to deal with uncertainties on the road. For example, which vehicle has the right of way when two vehicles meet head to head in a narrow road? To answer this question, a study of drivers' behaviour is important.
Drivers' behaviour informs communications and interactions with other vehicles.
[0047] A human driver's decision-making process may be decomposed into three stages:
perception, cognition and control. Perception is the state of seeing, feeling or detecting surrounding objects such as other vehicles, the lane width or curvature, traffic signs, traffic signals and the like. Perception may also include sensing conditions relating to the vehicle itself like vehicle status, speed, acceleration, deceleration, vibrations, or shaking, etc. The information collected at the perception stage may be used by the driver during the cognition stage to make a driving decision like steering around an obstacle or applying the brake. The decisions are converted into particular ways in which to control the vehicle at the control stage, for example, by turning the steering wheel by a certain angle, or pressing the brake by a certain amount.
[0048] These three stages may be implemented by modules of an autonomous driving system according to the invention described herein. For the perception module, an autonomous driving system may include a plurality of sensors in or on the vehicle which receive sensor data or otherwise provide data relevant to perception. These devices may include visible light cameras, infrared cameras, radar, LIDAR (Light Detection and Ranging), satellite navigation sensors such as GPS, inertial navigation sensors such as accelerometers and gyroscopes, thermometers, compasses, clocks, and/or other sensors.
[0049] The perception module collects data from these sensors, and provides them to the cognition module for further processing. Embodiments of autonomous driving systems described herein comprise a cognition module for generating real-time driving decisions, such as changing lanes, steering, accelerating and/or braking, based in part on the end result of the perception module.
[0050] Particular embodiments provide for methods and systems for processing, refining and integrating data received from the plurality of sensors into a graphical representation. In certain embodiments, the graphical representation comprises a driving situation map (DSM), as described in further detail below, for use by a cognition module to determine a driving decision. Unlike existing autonomous driving systems which rely on a detailed three-dimensional "view" of the vehicle environment, a DSM represents in a particular format the information provided by the perception module, so as to place greater emphasis or more selective attention on objects which are closer to the vehicle. This is similar to human perception of a situation. The DSM also contains information that is relevant to the driving situation and to making assessments based on the Right of Way (RoW), defined below. The DSM reflects real time situations, like whether or not there is a pedestrian crossing in front of the vehicle. The DSM does not show information that is not relevant to the driving situation, for example, what particular clothing a pedestrian is wearing.
[0051] In the cognition module, the autonomous driving system uses the DSMs to calculate a RoW for the vehicle. In the cognition module, the autonomous driving system also performs validity checks for safety, comfort, smoothness, speed and/or energy efficiency. It is in the cognition domain that an experienced driver would also apply the knowledge from previous journeys to the current situation to make a driving decision, for example, knowledge based on experience driving on icy roads, wet roads, or other edge driving conditions. Thus, particular embodiments may gather driving data from model drivers during edge driving conditions, and use the data as part of the cognitive decision-making process.
[0052] In the control stage, the vehicle is operated to adjust its direction and/or speed (e.g.
turning the steering wheel, engaging or releasing the accelerator or gas pedal, engaging or releasing the brake) to implement the decision made by the cognition module.
[0053] Unlike many existing autonomous driving systems, embodiments of an autonomous driving system according to the invention as described herein are not based on "if-then" rules.
The system is based instead on employing certain deep learning techniques during a learning or a training phase to enable it to make real-time decisions to deal with uncertainties on the road. Deep learning techniques using Convolutional Neural Networks and Recurrent Neural Networks may be employed here, as described elsewhere herein (see "ImageNet Classification with Deep Convolutional Neural Networks", published at the Neural Information Processing Systems Conference by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton, 2012; and "Speech Recognition with Deep Recurrent Neural Networks", published at IEEE, DOI: 10.1109/ICASSP.2013.6638947, by Alex Graves, Abdel-rahman Mohamed and Geoffrey Hinton, 2013).
[0054] In particular embodiments as described herein, input-output pairings comprise a sequence of driving situation maps (DSMs) as an input and a cognitive arrow (CA) as an output. The DSM is a graphical representation of the driving situation. The cognitive arrow is a graphical representation of the command to be applied to drive the vehicle.
These input-output pairings form the fundamental structure for the deep learning mechanism of the system. Deep learning may be applied to train an autonomous driving system for deployment in different types of vehicles and is adaptable to different situations ranging, for example, from driving on city streets to freeway driving.
[0055] Representing CAs and DSMs graphically provides many advantages, specifically in the context of deep learning algorithms. Some deep learning algorithms are particularly tuned to operate with images, and can be efficiently applied to the tasks described herein.
Furthermore, graphical representations may provide advantages in classifying, pairing, storing, searching, and retrieving CAs and DSMs.
[0056] The methods described herein enable an autonomous driving system to learn how to control a vehicle from the actions of humans, including humans serving as model drivers.
The output of the control, learning and/or training methods is a cognitive arrow, which is an image that can be translated into parameters that directly control a vehicle's electronic control units. An autonomous driving mode is one wherein the autonomous driving system evaluates a driving situation and makes a decision to control the vehicle. A
training mode is one wherein a model human driver drives the vehicle and the system uses the output provided by the model human driver's actions to train the control algorithms for the autonomous driving system.
[0057] Before a vehicle can be self-driving, the autonomous driving system is operated in training mode, wherein the autonomous driving control algorithm is trained by a human operating the vehicle. The algorithm is trained with pairs of DSM sets and cognitive arrows.
To generate the DSMs, sensor outputs are captured at discrete time increments (e.g. every 100 ms) over a certain period. The sensor outputs may be processed to generate a sequence of DSMs. A set of sequential DSMs may be used to represent a driving situation.
Cognitive arrows may indicate a direction and an acceleration or deceleration of the vehicle at a point in time.
[0058] In a training mode, when a set of DSMs is generated, the system records a human model driver's operation in a graphic representation as a Human Operation Cognitive Arrow (HOCA). Concurrently, the system generates a Program Generated Cognitive Arrow (PGCA), with a deep learning algorithm embedded in the generation of the PGCA.
Using the DSM set at one end and the HOCA at the other, end-to-end learning techniques are applied to train a convolutional neural network. As deep learning techniques are continually applied to tune the algorithm, the PGCAs generated by the system will more and more closely approximate the HOCAs generated for the same driving situation.
An objective is to train the autonomous driving system so that the PGCA approaches the HOCA
for a similar driving situation. Over time, such a system may be tuned to perform more and more like a human driver.
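As an illustration of this end-to-end pairing, the sketch below trains a small convolutional network to map a stack of recent DSM images to the two parameters behind a cognitive arrow (steering direction and acceleration), driving the program-generated arrow (PGCA) toward the human arrow (HOCA). PyTorch, the four-DSM stack, the layer sizes and the two-parameter output are assumptions for illustration only; the patent does not specify a framework or architecture.

```python
# Minimal sketch of end-to-end training: a CNN maps a stack of DSM images to the
# (steering, acceleration) parameters behind a cognitive arrow, and is tuned so the
# program-generated arrow (PGCA) approaches the human arrow (HOCA).
import torch
import torch.nn as nn

class CognitiveArrowNet(nn.Module):
    def __init__(self, dsm_stack=4):                  # e.g. the 4 most recent DSMs
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(dsm_stack, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, 2)           # outputs: steering angle, acceleration

    def forward(self, dsm_batch):
        return self.head(self.features(dsm_batch))

model = CognitiveArrowNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(dsm_set, hoca_params):
    """dsm_set: (batch, 4, H, W) stack of DSM images; hoca_params: (batch, 2) human arrow."""
    pgca_params = model(dsm_set)                       # program-generated cognitive arrow
    loss = loss_fn(pgca_params, hoca_params)           # drive PGCA toward HOCA
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```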
[0059] FIG. 1 shows one embodiment of an autonomous driving system 100. The autonomous driving system 100 comprises a Self-Driving Cognitive System (SDCS) 110.
SDCS 110 may be in communication with a plurality of vehicle sensors 140, Controller Area Network (CAN) Bus 150, and user interface 180. SDCS 110 comprises hardware 120, and software 130 stored thereon. Hardware 120 may comprise processors, data storage devices, application specific integrated circuits (ASIC), field programmable gate arrays (FPGA), and communications devices.
[0060] Sensors 140 may comprise visible light cameras, infrared cameras, radar, LIDAR
(Light Detection and Ranging), satellite navigation receivers for receiving satellite signals such as GPS or BeiDou signals, inertial navigation sensors such as accelerometers and gyroscopes, thermometers, compasses, clocks, and/or other sensors.
[0061] SDCS 110 receives data from sensors 140 using one or more of a number of communication methods. These communication methods may include TCP/IP, Wi-Fi, Bluetooth wireless, or other communications protocols.
[0062] CAN Bus 150 is in communication with various Electronic Control Unit (ECU) subsystems, for example, steering ECU 160 and throttle/brake ECU 170 as shown in FIG. 1.
SDCS 110 may issue commands to steering ECU 160 and throttle/brake ECU 170 via CAN
Bus 150. Steering ECU 160 controls the direction of the vehicle.
Throttle/brake ECU 170 controls the acceleration and deceleration of the vehicle.
[0063] User interface 180 may comprise any interface capable of receiving user commands.
For example, user interface 180 may be a hardware user-interface such as a touch screen or vocal user interface to enable a driver/passenger to relay their commands by touch or voice.
User interface 180 may comprise a wired or wireless interface for receiving commands from another user device, for example a wireless handheld device such as a smart phone or tablet, or from another system external to autonomous driving system 100.
[0064] As shown in the embodiment of FIG. 2, software 130 of SDCS 110 implements three modules: perception module 210, cognition module 220, and control module 230.
Software instructions for implementing each of modules 210, 220, and 230 may be provided in different memory stores.
[0065] Perception module 210 implements perception method 300 shown in FIG.
3A.
Perception method 300 comprises data receiving step 310, and data processing step 320.
Data receiving step 310 comprises receiving sensor data 330 from one or more of sensors 140. The set of data from sensors 140 prior to processing may be referred to as a Rich Point Cloud (RPC) 330. Information comprising RPC 330 may be derived from various sources, such as a satellite navigation receiver, a sensor equipped with inertial navigation, a radar/LIDAR channel, and/or a camera channel. Data receiving step 310 may capture sensor data comprising RPC 330 at regular time intervals (e.g. every 100 ms).
Next, data processing step 320 processes RPC 330 to generate one or more Driving Situation Maps (DSMs) 350. DSM 350 may be generated as frequently as RPC 330 is captured (e.g. every 100 ms, in the example above). The generation of DSMs 350 marks the completion of perception and provides a starting point for cognition.
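A minimal sketch of this perception loop, assuming placeholder sensor and map-building functions, is shown below; the 100 ms interval follows the example above and everything else is illustrative rather than part of the patent's disclosure.

```python
# Sketch of perception method 300: capture a Rich Point Cloud from the sensors at a fixed
# interval and convert it into a DSM that is handed off to the cognition module.
import time

CAPTURE_INTERVAL_S = 0.1          # e.g. every 100 ms

def perception_loop(sensors, build_dsm, dsm_queue):
    while True:
        rpc = {name: s.read() for name, s in sensors.items()}    # step 310: Rich Point Cloud
        dsm = build_dsm(rpc, timestamp=time.time())               # step 320: Driving Situation Map
        dsm_queue.append(dsm)                                      # input for cognition module 220
        time.sleep(CAPTURE_INTERVAL_S)
```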
[0066] In some embodiments, data processing step 320 processes RPC 330 to produce DSMs 350. Each DSM may show static objects that are in the vicinity of the vehicle like road markings, green belts, road shoulders, traffic lights and roadblocks, and dynamic objects in the vicinity of the vehicle, for example, other moving vehicles, pedestrians and animals.
[0067] FIG. 3C is an example of a single DSM 350 representing both static and dynamic objects in example driving situation 301 represented in FIG. 3B. DSM 350 shows objects within a certain radius of vehicle 311 to enable assessment for possible collision from all directions including, front, back, left and right of the vehicle.
[0068] In some embodiments, data receiving step 310 comprises receiving sensor data 330 from one or more of sensors 140 at two or more different times (e.g. t1, t2, t3, separated by 100 ms intervals). Data processing step 320 may comprise comparing sensor data 330 from one or more of sensors 140 at different times (e.g. t1, t2, t3, separated by 100 ms intervals) to identify objects in a vicinity of the vehicle which are static, and objects in a vicinity of the vehicle which are dynamic. Data processing step 320 may comprise generating a DSM
based in part on the identification of dynamic and static objects from the sensor data.
[0069] Perception method 300 comprises performing data receiving step 310 and data processing step 320 with a frequency F. In some embodiments, frequency F may be fixed for a particular autonomous driving system 100 and may depend on the speed at which sensor data or RPC 330 can be captured and processed into DSMs. In some embodiments, frequency F varies with the speed of the vehicle, for example, a higher frequency F for a higher vehicle speed. For example, in some embodiments, a DSM may be generated once every 100 ms (i.e. F = 10 Hz). When the vehicle is travelling at a speed of 36 km/h (i.e. 10 m/s), this frequency corresponds to a new DSM being generated for every 1 m travelled by the vehicle. Since, in this example, the frequency at which DSMs are generated is fixed, as the vehicle speed increases the distance that the vehicle travels in between successive DSMs will increase. Conversely, as the vehicle speed decreases, the distance that the vehicle travels in between successive DSMs will decrease.
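The spacing between successive DSMs follows directly from the vehicle speed and the generation frequency; a small worked example (the helper name is an assumption) is shown below.

```python
# Worked example of the spacing between successive DSMs at a fixed generation frequency F.
def metres_per_dsm(speed_kmh, frequency_hz):
    return (speed_kmh / 3.6) / frequency_hz   # speed in m/s divided by DSMs per second

print(metres_per_dsm(36, 10))    # 1.0 m between DSMs at 36 km/h and F = 10 Hz
print(metres_per_dsm(108, 10))   # 3.0 m between DSMs at a higher speed
```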
[0070] A sequence of DSMs (at various times t1, t2, t3, etc.) is generated while the vehicle is moving. Each DSM in the sequence has an associated time stamp and location identification to identify where the DSM was captured. The sequence of DSMs may be used to make driving decisions by the cognition module 220 which is also referred to herein as a decision-making module. By receiving a sequence of DSMs and comparing successive DSMs, the cognition module of the SDCS 110 may determine the position, velocity and acceleration of all driving situation-related objects in the vicinity of the vehicle at any given time.
[0071] The coordinates of a DSM may be expressed in polar coordinates, such as logarithmic polar coordinates (or log-polar coordinates). In particular embodiments, the forward direction of the vehicle may be used as the baseline of the logarithmic polar coordinate system (i.e. directly in front of the vehicle would be an angle of 0°). In other embodiments, the baseline of the logarithmic polar coordinate system may be dynamic, for example a forward direction when the vehicle is moving forward, and a reverse direction when the vehicle is moving in reverse (i.e. backing up). Each object or point on an object may be represented in the DSM by two numbers: a first number δ for the logarithm of the distance between the object and the vehicle, and a second number θ denoting the angle between the object and the baseline. The position of any object in the vicinity of the vehicle may therefore be represented by δ and θ. For example, where the baseline is the direction of travel of the vehicle, an object directly in front of the vehicle would be represented with θ = 0°, and an object directly behind the vehicle would be represented with θ = 180°.
[0072] In some embodiments, a DSM is divided into a grid of regions. For example, where a DSM implements log-polar coordinates, the 360° surrounding the vehicle may be divided into a number of regions each of an equal angle. Similarly, the distance from the vehicle δ may be divided into regions of equal size. Where the DSM implements the distance δ as a logarithm of the distance of an object from the vehicle, the distance from the vehicle represented by each region in the grid will increase with distance from the vehicle, if each region in the DSM is equal in size. The logarithmic nature of the distance δ facilitates perception and analysis at a higher resolution closer to a vehicle, and perception and analysis at a lower resolution further from the vehicle. Logarithmic coordinates rather than, for example, WGS-84 direct coordinates, enable selective attention: the closer the object is, the more important it is to calculations of the vehicle's right of way by the SDCS 110. Thus, a DSM is not simply a reflection of nearby objects, but enables the selective attention of the SDCS 110, similar to a human driver.
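The following sketch illustrates how an object's position relative to the vehicle could be mapped into such a log-polar grid. The bin counts, the minimum radius (points closer than this fall inside the vehicle) and the axis conventions are assumptions for illustration, not values given in the patent.

```python
# Sketch of placing an object into a log-polar DSM grid with equal-sized bins in delta and theta.
import math

N_ANGLE_BINS = 72        # 360 degrees split into equal 5-degree sectors (assumed)
N_RADIUS_BINS = 16       # equal bins in log-distance delta (assumed)
R_MIN, R_MAX = 1.0, 80.0 # metres; objects closer than R_MIN fall inside the vehicle

def to_log_polar(dx, dy):
    """dx, dy: object position relative to the vehicle, x forward, y to the right (metres)."""
    r = math.hypot(dx, dy)
    theta = math.degrees(math.atan2(dy, dx)) % 360.0   # 0 deg = straight ahead
    delta = math.log(r)                                 # log of distance, as in the DSM
    return delta, theta

def to_grid_cell(delta, theta):
    """Map (delta, theta) to a DSM grid cell; cells cover equal spans of delta and theta."""
    d_min, d_max = math.log(R_MIN), math.log(R_MAX)
    i_r = int((delta - d_min) / (d_max - d_min) * N_RADIUS_BINS)
    i_a = int(theta / 360.0 * N_ANGLE_BINS)
    return max(0, min(N_RADIUS_BINS - 1, i_r)), i_a % N_ANGLE_BINS

print(to_grid_cell(*to_log_polar(10.0, 0.0)))   # an object 10 m directly ahead
```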
[0073] A DSM may omit irrelevant information. For example, there is no need to identify the year, make or model of surrounding vehicles, and no need to capture a pedestrian's gender, age, clothing, etc. in order for SDCS 110 to understand the driving situation.
A DSM
comprising a two-dimensional image showing the surrounding objects in logarithmic polar coordinates may be sufficient for cognition module 220 to understand the situation and make driving decisions.
[0074] In some embodiments, data processing step 320 comprises prioritizing and selecting a subset of data from RPC 330 before generating one or more DSMs 350.
[0075] Additionally, each object appearing in a DSM may be associated with a set of attributes A. Attributes A may include an object type such as: vehicle, pedestrian, green belt, road shoulder, etc., and a size of the object. Where a DSM is implemented as a log-polar coordinate grid, the size of an object may be represented by the regions in the grid occupied by the object, and stored in an array. Due to the logarithmic nature of the regions in the grid, an object of a given size will occupy more regions in the grid if it is closer to the vehicle than if it is further from the vehicle.
[0076] Due to the logarithmic nature of the coordinate system, the polar regions are asymptotic as they approach the center of the vehicle. For this reason, polar regions less than a certain distance from a vehicle in a DSM may not be represented in the DSM. Such polar regions may be safely ignored, because they represent regions inside the vehicle.
[0077] In some embodiments, objects represented in a DSM are stored in an array associated with the DSM. The array may be indexed by the position (δ, θ) of each object on the grid in the DSM. Using an array, a large number of objects and their attributes may be stored and quickly searched by the SDCS. As an example of two attributes, "Surface Coefficient of Friction" and "Reflection of Light on the Road" may both be stored for the same coordinate (δ1, θ1) in the grid, and simultaneously retrieved when the SDCS searches for that coordinate (δ1, θ1). Storing and indexing objects in this manner allows a large number of attributes to be efficiently stored and quickly retrieved.
[0078] Each DSM may have a timestamp T, representing the time at which the sensor data that was processed to produce the DSM was captured or received. The SDCS may calculate a speed and/or an acceleration of an object represented in a DSM by:
• calculating the time T' between DSM1 with timestamp T1 and DSM2 with timestamp T2;
• calculating the change in position (δ', θ') of an object A with position (δ1, θ1) in DSM1 and position (δ2, θ2) in DSM2;
• dividing the change in position (δ', θ') by the change in time T' to calculate a speed and a direction of travel of object A (together velocity VA); and/or
• differentiating velocity VA of object A to calculate an acceleration AA of object A.
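A minimal sketch of these steps is shown below, assuming each object is recorded as a (δ, θ) entry in a timestamped DSM; the coordinate conventions and function names are illustrative.

```python
# Sketch of estimating an object's velocity and acceleration from successive timestamped DSMs.
import math

def to_xy(delta, theta_deg):
    """Convert a DSM entry (delta = log distance, theta in degrees) back to local x/y metres."""
    r = math.exp(delta)
    th = math.radians(theta_deg)
    return r * math.cos(th), r * math.sin(th)

def velocity(entry1, t1, entry2, t2):
    """entry = (delta, theta); returns (vx, vy) in m/s between DSM1 at t1 and DSM2 at t2."""
    (x1, y1), (x2, y2) = to_xy(*entry1), to_xy(*entry2)
    dt = t2 - t1
    return (x2 - x1) / dt, (y2 - y1) / dt

def acceleration(v1, v2, dt):
    """Differentiate two successive velocity estimates to get acceleration (m/s^2)."""
    return (v2[0] - v1[0]) / dt, (v2[1] - v1[1]) / dt
```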
[0079] An example DSM 350 is depicted in more detail in FIG. 3C. DSM 350 is generated from an example driving situation 301 which is shown schematically in FIG. 3B
as a bird's-eye view. FIG. 3B shows an autonomous vehicle 311 driven along a road by an autonomous driving system 100. In the illustrated driving situation 301, the road comprises three lanes which are marked by line markings 341, 351, 361 and 371. Line markings 351 and 361 are broken lines, denoting boundaries between lanes of traffic travelling in the same direction. The boundaries marked by line markings 351 and 361 may be safely crossed by vehicle 311 in certain situations. Line markings 341 and 371 are solid lines, denoting boundaries of lanes that vehicle 311 may not cross. For example, line markings 341 and 371 may denote boundaries between lanes of traffic travelling in an opposite direction, or a shoulder of the road.
[0080] In driving situation 301, other vehicles, including vehicles 321 and 331, are seen travelling along the same road and in the same direction as vehicle 311, but in a separate lane from vehicle 311.
[0081] DSM 350 depicted in FIG. 3C comprises a structured set of data representing objects and the environment in the vicinity of vehicle 311 in driving situation 301.
The objects depicted in DSM 350 may be represented in a logarithmic polar coordinate system, with an angle θ, a logarithmic distance δ, and a set of attributes A, as described above. In the illustrated polar coordinate system, DSM 350 comprises boundaries 312, 322 and 332 of equally sized polar regions concentric with vehicle 311. Due to the nature of the polar coordinates, each polar region represents an increasingly large area the further the polar region is from vehicle 311. For example, the region bounded by 332 and 322 is larger than the region bounded by 322 and 312.
[0082] In example DSM 350, lanes 341, 351, 361 and 371 are represented by a series of dots. Lanes 351 and 361 are represented by hollow dots to denote the broken lines, and lanes 341 and 371 are represented by solid dots to denote the solid lines. A general route plan (a "GRP", described below) is represented by lined dots 381. In a logarithmic polar coordinate system, the parallel lines defining the lanes in FIG. 3B are depicted in FIG. 3C as lines curving toward each other with increasing distance away from the center of the logarithmic polar coordinate system, the center representing the vehicle. Vehicles 321 and 331 are represented in DSM 350 by irregular polygons due to the transformation of a rectangular object in FIG. 3B by the logarithmic polar coordinates used in FIG. 3C.
[0083] The polar coordinates of DSM 350 result in the display of objects closer to vehicle 311 at a higher resolution, and the display of objects further from vehicle 311 at a relatively lower resolution. For example in DSM 350, lane dots 351 appear further apart closer to vehicle 311, and closer together further from vehicle 311. Similarly, vehicle 321 is represented by a larger shape than vehicle 331, because vehicle 321 is closer to vehicle 311 than vehicle 331.
[0084] The representation of closer objects as larger shapes in DSM 350 prioritizes the closer objects in the DSM and tunes the cognition module as described below to emphasize closer objects more than further objects. In this manner, the DSMs implement selective attention in accordance with the Weber-Fechner Law, which states that a change Δx in a large data set is more difficult to detect than the same change Δx in a smaller data set (see Scheler G. (2017). "Logarithmic distributions prove that intrinsic learning is Hebbian". F1000Research.
6: 1222. doi:10.12688/f1000research.12130.2. PMC 5639933. PMID 29081973). By reducing the data set by minimizing objects further from the vehicle, the DSM
focuses the cognition module on making decisions on objects closer to the vehicle.
[0085] Each DSM has a time stamp. Differentiation between static and dynamic objects, and the motion (speed and direction) of dynamic objects may be determined by analysing at least two DSMs. The present DSM, expressed as DSMt, along with one or more of the DSMs generated prior to the present DSM, expressed as DSMt-3, DSMt-2, DSMt-1, etc., may be stored in short-term memory, and together they represent a driving situation, providing information about objects and road markings in the vicinity of the vehicle which may be used for decision making by the cognition module 220 of SDCS 110.
[0086] Before departure, SDCS 110 calculates a general route plan (GRP) of the trip from the starting point to the destination. The GRP provides an ideal plan of travel on the roads on which the vehicle should travel to reach the desired destination, assuming no vehicles or other obstacles on the roads. General route planning may be performed with the help of high-definition (HD) maps. The GRP may detail the specific steps required to reach the destination, including which lane to take, when to change lanes, when to turn, etc. The GRP
accounts for driving rules such as speed limits, traffic signals, and driving lanes. The GRP
does not account for obstacles which may be encountered along the way, for example, other vehicles, pedestrians, and construction or traffic obstacles or detours. The portion of the GRP that is within the vicinity of the vehicle (i.e. within the area covered by the instant DSM
for the vehicle) may be shown on the DSM as a dotted trajectory line, for example, by lined dots 381 in DSM 350 in FIG. 3C.
[0087] While the vehicle is on the road, other vehicles may be encountered and other disturbances or threats may appear (e.g. road obstructions, detours, emergency vehicles requiring right of way, pedestrians, animals, etc.). At any given point in time, the vehicle has a right of way (RoW) defined as the physical space in the vicinity of the vehicle which the vehicle may safely move into. As the RoW becomes limited by other vehicles and objects, SDCS 110 responds by evaluating the current driving situation, and making necessary changes such as changing lanes, stopping, or temporarily deviating from the route set by the GRP. In this instance, the vehicle may follow a different route from the GRP.
The new route becomes the actual driving trajectory line, also referred to herein as the real-time route plan (RTRP). The steps for calculating the actual driving trajectory line in response to the current situation may be referred to herein as real-time route planning.
[0088] The required RoW is related to various factors including: the vehicle size, speed, surrounding traffic, etc. The higher the speed is, the greater the RoW needed.
Therefore, the RoW may be considered as a non-linear function of various parameters like vehicle length, speed, acceleration, and any other parameter which may impact the space the vehicle requires to safely travel.
[0089] There is a required or expected RoW, which is the RoW that is required for the vehicle to safely move forward without touching or colliding with other objects. There is also an actual RoW which refers to the actual space that is available for the vehicle to move into.
When the two do not agree, or when the actual RoW is less than the required RoW, for example, when a pedestrian is approaching and is anticipated to occupy the RoW so that the vehicle no longer has sufficient space to move forward without colliding with the pedestrian, SDCS 110 recognizes the necessity for an adjustment. Since the SDCS 110 makes its judgment by comparing the required RoW and the actual RoW, it constantly performs the steps of detecting, calculating, comparing, requesting, competing for, abandoning (a previous decision) or occupying the actual RoW.
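The comparison can be sketched as follows. The patent only states that the required RoW is a non-linear function of parameters such as vehicle length and speed; the particular formula below (reaction distance plus braking distance plus vehicle length) is an assumed stand-in for illustration.

```python
# Illustrative sketch of the required-vs-actual RoW comparison made by the SDCS.
def required_row(speed_ms, vehicle_length_m, reaction_time_s=1.0, decel_ms2=6.0):
    # stopping distance plus reaction distance plus the vehicle itself (assumed model)
    return vehicle_length_m + speed_ms * reaction_time_s + speed_ms ** 2 / (2 * decel_ms2)

def needs_adjustment(actual_row_m, speed_ms, vehicle_length_m):
    """True when the available space ahead is smaller than the space the vehicle needs."""
    return actual_row_m < required_row(speed_ms, vehicle_length_m)

print(needs_adjustment(actual_row_m=20.0, speed_ms=10.0, vehicle_length_m=4.5))  # True: adjust
```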
[0090] Therefore, in some embodiments, operating the vehicle in the driving mode comprises: (i) determining the available moving space, called the Right of Way, needed for the vehicle based on changes in the driving situation maps; (ii) making real-time decisions for the vehicle if the available Right of Way does not meet the requirement to pursue the route plan predefined before the journey begins; and (iii) conducting route changes based on the actual Right of Way and determining when to return to the predefined route plan after the temporary route deviation.
[0091] FIG. 4 shows a general route planning method 400 that may be implemented by the cognition module 220 of FIG. 2. FIG. 5A shows a real-time route planning method 500 that may be implemented by the cognition module 220 of FIG. 2. General route planning method 400 and real-time route planning method 500 may be executed while SDCS 110 is operating in driving mode. Various steps of real-time route planning method 500 may also be executed while SDCS 110 is operating in training mode for comparison to a Human Operation Cognitive Arrow generated during a training mode. Driving and training modes of SDCS 110 are described in further detail below.
[0092] General route planning method 400 comprises steps for receiving a destination 440 and vehicle information 445 at block 410, receiving one or more high-definition maps 421 at block 420, and generating a General Route Plan (GRP) 450 at block 430. GRP 450 may comprise a plurality of calibration points used to calibrate the actual route of the vehicle along a journey, and to aid in navigation.
[0093] Generating GRP 450 at block 430 comprises calculating the ideal path to reach the destination 440, assuming no other vehicles or other obstacles are on the road. The ideal path will depend in part on the vehicle attributes, for example the length, height, and/or weight of the vehicle. As an example, a taller vehicle may be unable to pass under an overpass, necessitating a different path than a shorter vehicle. Generating GRP 450 at block 430 may also depend on roadmaps and traffic rules.
[0094] GRP 450 may be based on a RoW along the ideal path to destination 440.
The required or expected RoW along the ideal path may be referred to as a restraint band. The RoW may be represented graphically on each generated DSM.
[0095] Destination 440 may be provided by a variety of methods, including from a user through user interface 180. In other embodiments, destination 440 may be predefined for multiple journeys like those taken by cargo trucks, commuting buses and lawn-mowers, in which case destination 440 is stored within cognition module 220, and receiving destination step 410 comprises retrieving destination 440.
[0096] Real-time route planning method 500 of FIG. 5A comprises receiving input at block 510, and generating output at block 520 to control the actual trajectory of the vehicle in response to the current driving situation, while maintaining the vehicle as close to the general route plan as possible. Input received at 510 may comprise one or more DSMs 530, which are generated by the perception module 210 executing perception method 300 (see FIG. 3A).
[0097] The output generated at block 520 comprises a cognitive arrow 550. A
cognitive arrow is the expression of a cognitive decision made to control the vehicle.
In some embodiments, cognitive arrow 550 may represent a driving command comprising a steering direction and an acceleration or deceleration for the vehicle.
[0098] In some embodiments, each cognitive arrow 550 is stored in database 580 along with the DSM set 530 from which it was generated. Database 580 comprises pairs of cognitive arrows and DSM sets. Database 580 may contain DSM sets for common sequences such as crossing an intersection, merging onto a highway, making a right turn in city traffic, making a left turn in city traffic, giving way to a pedestrian, etc. When evaluating a DSM set to determine a driving action, SDCS 110 first searches in the database for a matching sequence; if one is found, a cognitive arrow may be generated fairly quickly because these cognitive arrows are already stored in memory. In some embodiments, database 580 is a graphical database.
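A minimal sketch of database 580 as a store of (DSM set, cognitive arrow) pairs with a nearest-match lookup is shown below. The similarity measure (mean absolute pixel difference between DSM images) and the match threshold are assumptions; the patent leaves the matching method open (e.g. the machine learning approaches described above).

```python
# Sketch of database 580: store (DSM set, cognitive arrow) pairs and look up the closest match.
import numpy as np

class DsmCaDatabase:
    def __init__(self, match_threshold=0.05):
        self.pairs = []                        # list of (dsm_set: np.ndarray, cognitive_arrow)
        self.match_threshold = match_threshold

    def store(self, dsm_set, cognitive_arrow):
        self.pairs.append((np.asarray(dsm_set, dtype=float), cognitive_arrow))

    def lookup(self, dsm_set):
        """Return the cognitive arrow of the closest stored DSM set, or None if nothing is close."""
        query = np.asarray(dsm_set, dtype=float)
        best_arrow, best_dist = None, float("inf")
        for stored_set, arrow in self.pairs:
            if stored_set.shape != query.shape:
                continue
            dist = float(np.mean(np.abs(stored_set - query)))
            if dist < best_dist:
                best_arrow, best_dist = arrow, dist
        return best_arrow if best_dist <= self.match_threshold else None
```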
[0098] When there is no existing sequence of DSMs that is comparable to the current one, the SDCS will initiate an embedded program based on a predictive algorithm.
The algorithm calculates an available RoW for the front, left, and right of the vehicle.
When the RoW is expanding, the vehicle may accelerate in the direction of the increasing RoW.
When the RoW is shrinking, the vehicle may decelerate in the direction of the decreasing RoW. The algorithm may also implement other pre-defined rules. The pre-defined rules may implement existing driving skills expressed in semantic expressions like "decelerate rather than change lane when the situation is uncertain." In the case of such a rule, the SDCS
prioritizes decelerating over changing a direction of travel.
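The fallback rule can be sketched as below, comparing the RoW ahead, to the left and to the right across two successive evaluations and preferring deceleration when the situation is uncertain. The numeric command values and thresholds are illustrative assumptions.

```python
# Sketch of the predictive fallback: accelerate toward an expanding RoW, decelerate toward a
# shrinking one, and prefer decelerating over a lane change when the situation is uncertain.
def predictive_command(row_prev, row_now):
    """row_*: dicts with keys 'front', 'left', 'right' giving available RoW in metres."""
    growth = {k: row_now[k] - row_prev[k] for k in ("front", "left", "right")}
    if growth["front"] > 0:
        return ("straight", +1.0)                      # RoW ahead expanding: accelerate
    if growth["front"] < 0 and max(growth["left"], growth["right"]) <= 0:
        return ("straight", -1.0)                      # everything shrinking: decelerate
    # RoW ahead shrinking but a side is opening up; rule: decelerate rather than change
    # lane when the situation is uncertain
    return ("straight", -0.5)

print(predictive_command({"front": 30, "left": 5, "right": 5},
                         {"front": 25, "left": 12, "right": 5}))   # ('straight', -0.5)
```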
[0099] In preferred embodiments, cognitive arrow 550 is represented graphically. For example, in particular embodiments, cognitive arrow 550 is represented as a single image.
The graphical representation of cognitive arrow 550 may comprise an arrow. A
steering direction for controlling the vehicle steering wheel may be indicated by the direction of the arrow. For example, the angle of the arrow with respect to a reference center axis may indicate how much the steering wheel should be rotated and in which direction.
Different colors may be used to indicate a particular type of action such as acceleration and deceleration. For example, a red arrow may be used to indicate acceleration, and a blue arrow may be used to indicate deceleration.
[0100] A magnitude of an acceleration or deceleration may be indicated by a width and/or a length of the arrow. Where the magnitude of an acceleration or deceleration is depicted by a width and/or length of an arrow, the arrow may have a minimum width and/or length, wherein an arrow of the minimum width and/or length represents a steering direction and an acceleration of zero.
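A sketch of rendering such a cognitive arrow image from a steering direction and an acceleration is shown below. The colour coding (red for acceleration, blue for deceleration), the length scaling and the use of PIL are assumptions for illustration; the arrowhead is omitted for brevity.

```python
# Sketch of encoding a driving command as a cognitive arrow image.
from PIL import Image, ImageDraw
import math

def render_cognitive_arrow(steer_deg, accel, size=128):
    """steer_deg: 0 = straight ahead, positive = right; accel: m/s^2, negative = brake."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    cx, cy = size // 2, size - 10          # arrow base near the bottom centre
    min_len, gain = 20, 8                  # minimum length plus extra length per m/s^2 (assumed)
    length = min_len + gain * abs(accel)
    rad = math.radians(steer_deg)          # 0 degrees points straight up (forward)
    tip = (cx + length * math.sin(rad), cy - length * math.cos(rad))
    colour = "red" if accel > 0 else ("blue" if accel < 0 else "black")
    draw.line([(cx, cy), tip], fill=colour, width=4)
    return img

# e.g. a FIG. 5C-style arrow: steer right while accelerating
render_cognitive_arrow(steer_deg=30, accel=2.0).save("cognitive_arrow.png")
```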
[0101] Figures 5B to 5H depict seven example cognitive arrows corresponding to the following steering directions and accelerations (wherein a magnitude of the acceleration corresponds to a length of the arrow depicted):

Figure    Steering Direction    Acceleration
5B        Forward               Accelerate
5C        Right                 Accelerate
5D        Left                  Accelerate
5E        Forward               None
5F        Right                 None
5G        Left                  None
5H        Forward               Decelerate (Brake)
[0102] Figures 5I-1 to 5I-8 depict example cognitive arrows for an example overtaking scenario:

Figure    Action
5I-1      Steer left and accelerate into left lane
5I-2      Steer straight and accelerate forwards into left lane
5I-3      Steer right to align vehicle with left lane
5I-4      Steer straight along left lane
5I-5      Steer right back into original lane
5I-6      Steer straight to continue back into original lane
5I-7      Steer left to align vehicle with original lane
5I-8      Steer straight to continue along in original lane, and brake to return to original speed
[0103] Figure 5J shows an example method 501 which may be implemented at generate output step 520 (see FIG. 5A) to generate cognitive arrow 550. Method 501 receives a current DSM set 511 comprising two or more DSMs. Method 501 comprises searching a database of DSMs and associated cognitive arrows to determine, at step 531, if there is a DSM set in the database matching DSM set 511. If there is a matching DSM set, then method 501 comprises at step 541 retrieving the cognitive arrow paired with the matching DSM set. If the database does not contain a matching DSM set, then method 501 comprises, at steps 551 and 561 respectively, calculating a RoW and generating a cognitive arrow based on the predicted changes of RoW. The determination of RoW at step 551 takes into account the physical space required for the vehicle to safely advance (without colliding into any objects). The determination of the cognitive arrow, including the representation of the steering direction and magnitude of acceleration, is based on what is required to advance the vehicle to the next point along the GRP, or as close to the next point along the GRP as possible if the vehicle is required to deviate from the GRP, as described elsewhere herein.
[0104] FIG. 6B depicts an example RoW 641 for a driving situation 601. In driving situation 601, an autonomous vehicle 611 is traveling along a three-lane road. Vehicles 621 and 631 are travelling in the same direction as vehicle 611, in a lane to the left-hand side of vehicle 611. The RoW 641 is the room available for vehicle 611 to travel into. Because vehicles 621 and 631 occupy the lane to the left-hand side of vehicle 611, RoW 641 does not include the lane to the left-hand side of vehicle 611. Because the lane to the right-hand side of vehicle 611 is unoccupied, RoW 641 includes the lane to the right-hand side of vehicle 611.
[0105] In some embodiments, the DSM has a radius r in the range of 50 to 150 meters (therefore showing objects that are within a distance r of the vehicle). In certain embodiments, the DSM has a radius r in the range of 50 to 80 meters. The DSM takes the driver as the center of the log-polar coordinate system (δ, θ). As the center of the log-polar coordinate system is fixed with respect to the center of the vehicle, the DSM moves along with the vehicle. The coordinate baseline may be set as the direction extending straight forward from the center of the vehicle. Whenever there is an object that falls into a RoW section, the closer it is, the more attention it receives from the SDCS 110. The importance of an object decays logarithmically, not linearly, with distance from the vehicle. In this way, the DSM reflects the driver's selective attention. The DSM is a map showing the vehicle's surrounding situation in a particular instance. The DSM may show the portion of the General Route Plan and/or the actual trajectory of the real-time route plan that is located within the radius r of the DSM, which is a direct mapping from the WGS-84 coordinate system. In most driving situations, the partial General Route Plan appears most of the time as a straight path in front of the vehicle in the DSM.
[0106] An online predictive control algorithm may be developed for application in the cognition domain. In particular embodiments, the online predictive algorithm is designed based on the following ideas:
• A driverless vehicle in a six-dimension space can be expressed by six parameters, s = {x, y, z, α, β, γ}, wherein {x, y, z} provides a description of the latitudinal, longitudinal, and altitudinal position of the vehicle; and {α, β, γ} describes the heading angle, rolling angle, and pitching angle of the vehicle.
• If the original position is S0 and the final position is Sg, the algorithm is designed to calculate and select from available options to move from S0 to Sg, by turning the steering wheel along angles φ0, φ1, ..., φn and changing the acceleration along a0, a1, ..., an.
• With S0 and Sg set and the paths defined, the algorithm then produces all the φ and a needed in the transition process.
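A minimal sketch of the six-parameter state and of a plan expressed as a sequence of steering-wheel angles φi and accelerations ai is shown below; the dataclass layout and the plan representation are assumptions for illustration.

```python
# Sketch of the six-parameter vehicle state s = {x, y, z, alpha, beta, gamma} and a plan
# from S0 to Sg expressed as (steering wheel angle phi_i, acceleration a_i) pairs.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VehicleState:
    x: float      # latitudinal position
    y: float      # longitudinal position
    z: float      # altitudinal position
    alpha: float  # heading angle
    beta: float   # rolling angle
    gamma: float  # pitching angle

Plan = List[Tuple[float, float]]   # one (phi_i, a_i) pair per transition step

def describe_plan(s0: VehicleState, sg: VehicleState, plan: Plan) -> None:
    print(f"from ({s0.x:.1f}, {s0.y:.1f}) to ({sg.x:.1f}, {sg.y:.1f}) in {len(plan)} steps")
    for i, (phi, a) in enumerate(plan):
        print(f"  step {i}: steering {phi:+.1f} deg, acceleration {a:+.2f} m/s^2")

describe_plan(VehicleState(0, 0, 0, 0, 0, 0),
              VehicleState(0, 50, 0, 0, 0, 0),
              [(0.0, 1.0), (0.0, 0.0), (0.0, -1.0)])
```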
[0107] Such an online predictive control algorithm may be subject to fine-tuning control changes when the vehicle is on the road. Real-time Route Planning (RTRP) combined with predictive control may significantly reduce the computational complexity for determining the output for controlling the vehicle's ECUs. Convolutional neural network (CNN) deep learning methodologies may be applied to develop data-driven algorithms based on pairing of a DSM
set with a cognitive arrow for end-to-end decision making.
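A minimal sketch of such an end-to-end mapping is shown below using a small convolutional network. PyTorch is used purely for illustration; the framework, layer sizes, image size and the two-value arrow encoding are assumptions, not part of the specification. The input is a stack of DSM images and the output is a steering/acceleration pair that can be rendered as a cognitive arrow.

```python
import torch
import torch.nn as nn

class ArrowCNN(nn.Module):
    """End-to-end sketch: a stack of DSM images in -> (steering, acceleration) out."""
    def __init__(self, n_dsm: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_dsm, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)   # [steering direction, acceleration amplitude]

    def forward(self, dsm_stack: torch.Tensor) -> torch.Tensor:
        z = self.features(dsm_stack).flatten(1)
        return self.head(z)

# Usage: a batch of eight stacked 4 x 128 x 128 DSM sets yields one arrow vector each.
model = ArrowCNN()
arrows = model(torch.randn(8, 4, 128, 128))   # shape (8, 2)
```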
[0108] FIG. 6A shows a method 600 according to one embodiment that may be used to generate the output at method 500's block 520. Method 600 comprises a real-time route planning step 610 comprising determining a strategic cognitive arrow 650 based on DSM sets 630 and general route plan 640. Determination of strategic cognitive arrow 650 may involve, for example, using the steps discussed in relation to FIG. 5J's method 501 (either calculating a cognitive arrow or retrieving a best-match cognitive arrow) in order to navigate the vehicle along, or close to, the General Route Plan. The cognitive arrow resulting from the performance of method 501 (depicted in FIG. 5J) is shown as strategic cognitive arrow 650 in FIG. 6A's method 600. As described above in relation to method 501, the determination of strategic cognitive arrow 650 at step 610 may be implemented using a deep learning algorithm, such as one using a cognitive neural network, as described herein.
[0109] The driving skills needed to deal with "edge driving conditions", such as extreme weather conditions (wind, rain, snow, fog) or exceptional maneuvering situations (such as parallel parking on a sloped road), do not impact the route that the vehicle needs to take to get to its destination. However, according to some embodiments, a specific Cognitive Arrow (referred to as the tactical cognitive arrow) is produced to accommodate the edge driving conditions and keep the vehicle steady and safe.
[0110] Method 600 also incorporates tactical cognitive processing. An objective of tactical cognitive processing is to ensure steady and safe movement of vehicle 611 as it moves along its real-time route planning trajectory. Therefore, method 600 comprises retrieving a tactical cognitive arrow 670 at step 620.
[0111] The tactical cognitive processing is realized through capturing a "driver's fingerprint". Each experienced driver has his or her unique behavioural patterns and habits for driving. A driver's personalized driving style, behaviour, and skills may be referred to as the driver's fingerprint. When the vehicle is running under certain weather and road surface conditions, driving comfort, ride characteristics and energy consumption may be influenced by the driver's operational style for application of the steering wheel, accelerator or gas pedal, and brake pedal. A driver's fingerprint may also be used to uniquely identify the driver.
[0112] The driver's fingerprint represents the driver's special skills in maintaining the balance of the vehicle, saving fuel when driving, and keeping passengers comfortable in different road conditions or scenarios (also referred to elsewhere herein as "tactical" cognition; different drivers may produce different tactical cognitive arrows in certain driving conditions in accordance with their unique driver's fingerprints). These techniques may not be related to the route that the vehicle is taking or the destination of the trip. For example, in theory, if the driver is driving forward at a steady speed, the cognitive arrow should indicate an acceleration of 0. However, even on a straight path the driver controls the vehicle to keep the vehicle balanced and maintain comfort for the passengers. This results in a series of cognitive arrows (e.g. tactical cognitive arrows) for control of the vehicle specific to that driver, although such cognitive arrows are not used for reaching the destination (i.e. they are not controls for navigating the vehicle along the route plan).
[0113] Different drivers may have different fingerprints, and the tactical cognition process described herein may result in different tactical cognitive arrows for different drivers, reflecting each person's driving style under an edge condition. For example, one driver may rush through a flooded area while another may drive slowly to pass through it. These two different driving styles would result in two different tactical cognitive arrows for the same driving situation, corresponding to the two different drivers' fingerprints.
[0114] In certain vehicles, parameters that may be monitored and stored include speed, mileage, four-wheel or two-wheel rotating speed, slight unbalance degree, pitch and roll, engine torque, sideslip angle, body vibration frequency and amplitude, tire pressure, etc. These parameters, along with the commands to control the vehicle as performed by an experienced driver, may form the driver's fingerprint. The driver's fingerprint may be stored on a dedicated chip. The parameters and the driver's commands may be provided as inputs to a machine learning algorithm.
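One possible way to organize such records is sketched below; the field names and units are illustrative assumptions, chosen only to mirror the parameters listed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FingerprintSample:
    """One time step of a driver's fingerprint: monitored vehicle parameters
    paired with the driver's commands. Field names and units are illustrative."""
    speed_kmh: float
    wheel_speeds_rpm: List[float]     # four-wheel or two-wheel rotating speed
    pitch_deg: float
    roll_deg: float
    engine_torque_nm: float
    sideslip_angle_deg: float
    vibration_amplitude: float
    tire_pressures_kpa: List[float]
    steering_angle_deg: float         # driver command
    accelerator_position: float       # driver command (0..1)
    brake_position: float             # driver command (0..1)

@dataclass
class DriverFingerprint:
    driver_id: str
    samples: List[FingerprintSample] = field(default_factory=list)

    def record(self, sample: FingerprintSample) -> None:
        """Append one observation; the collected pairs later feed the learning algorithm."""
        self.samples.append(sample)
```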
[0115] Parking the vehicle is an example application of the driver's fingerprint. For parking maneuvers, an experienced driver will typically use both hands and feet, and the four wheels of the vehicle have different trajectories when the speed is low. The SDCS 110 may be trained by collecting a model driver's fingerprint for different kinds of parking maneuvers (e.g. parallel parking, parking on a slope, rear-in parking, head-in parking, etc.), and in different road conditions (e.g. slopes, mud roads, cobblestones, etc.).
[0116] To retrieve tactical cognitive arrow 670, input 660 may be provided which may comprise a library of tactical cognitive arrows paired with real-time data collected from vehicle sensors which are relevant to certain driving conditions (also referred to herein as scenarios). As seen in FIG. 13, such parameters may include vehicle speed, mileage, tire rotation speed, distortion angle, body course, angle of pitch, parallel rolling, engine torque, and the like. At step 620, the tactical cognitive arrow 670 may be retrieved based on input 660. The database of tactical cognitive arrows from which tactical cognitive arrow 670 is retrieved may be developed from prior training data.
[0117] Such training data may be collected by recording the actions of a model driver driving in similar conditions on a straightaway, and by recording the corresponding driving condition parameters. During the training, the model driver keeps the vehicle moving forward at a steady speed. In this situation, all the driving actions of the driver are primarily for the purpose of keeping the vehicle steady and smooth and have no relationship to where the vehicle is driven. The driver's actions may include: angle/torque of the steering wheel, drive pedal displacement and brake pedal displacement (see FIG. 13). Each of the driver's actions may be represented, at various points in time, as a tactical cognitive arrow paired with a set of corresponding driving condition parameters. For example, on a flooded road, the model driver may be recorded handling the steering wheel and/or pressing or releasing the gas pedal or the brake in a certain manner. Based on these actions, a sequence of tactical cognitive arrows may be generated and associated, for example by using a recurrent neural network, with corresponding sets of driving condition parameters which are reflective of the flooded road conditions (see: Raul Rojas (1996). Neural networks: a systematic introduction. Springer. p. 336. ISBN 978-3-540-60505-8). These training operations may be repeated several times to collect sufficient data, including the vehicle/driving parameters and tactical cognitive arrows, for the library that is provided as input 660. The input to the recurrent neural network is therefore a set of driving condition parameters (recorded while the driver is operating the vehicle under certain conditions in a training mode) and the output is the tactical cognitive arrow. Therefore, machine learning based on a recurrent neural network may be used to train the SDCS 110 to determine a tactical cognitive arrow in response to the vehicle encountering various driving conditions or scenarios while in autonomous driving mode.
[0118] The tactical cognitive arrows and associated sets of driving condition parameters may be stored in a dedicated memory storage unit. To retrieve tactical cognitive arrow 670 at step 620, the current driving conditions are matched to similar driving conditions in the tactical cognition memory store, and the tactical cognitive arrow that is paired with those driving conditions may be selected as tactical cognitive arrow 670. In particular embodiments, various types of driving conditions are identified (e.g. bumpy road, icy road, wet road, snow-covered road, etc.) and the driving conditions are classified based on the type. Tactical cognition at step 620 may comprise recognizing the type of the current driving condition based on the real-time data collected from the vehicle sensors, and searching through the specific portion of the tactical cognition memory store that relates to that type of driving condition.
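A simplified version of this type-then-search lookup might look as follows; the store layout, the condition-type labels and the squared-difference matching are assumptions rather than details from the specification.

```python
def retrieve_tactical_arrow(condition_type, current_params, store):
    """Step 620 sketch: classify the current driving condition (e.g. "bumpy road",
    "icy road"), search only the portion of the tactical memory store for that type,
    and return the tactical cognitive arrow paired with the closest recorded
    parameter set.

    store: dict mapping condition type -> list of (parameter tuple, tactical arrow)."""
    candidates = store.get(condition_type, [])
    if not candidates:
        return None
    def distance(entry):
        params, _arrow = entry
        return sum((p - q) ** 2 for p, q in zip(params, current_params))
    _params, arrow = min(candidates, key=distance)
    return arrow
```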
[0119] In some embodiments, the computational expense required at step 620 for retrieving a tactical cognitive arrow 670 may be less, or substantially less, than that required for calculating a strategic cognitive arrow 650 at step 610. Where the tactical cognition module and strategic cognition module are implemented on separate hardware, the modules may be implemented using different hardware tailored to the computational expense of each module.
[0120] Strategic cognitive arrow 650 and tactical cognitive arrow 670 are combined at block 680 to produce aggregate cognitive arrow 690. In some embodiments, block 680 comprises adding the strategic cognitive arrow 650 and tactical cognitive arrow 670. The resulting aggregate cognitive arrow 690 may correspond to cognitive arrow 550 of FIG. 5A and is used to control the vehicle. A steering direction for the vehicle may be indicated by the direction of the arrow 690. An amplitude of an acceleration or deceleration of the vehicle may be indicated by a width or a length of the arrow 690. Acceleration and deceleration may be represented as different colors for the arrow 690.
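Block 680 can be illustrated with a small sketch in which the two arrows are added component-wise and the aggregate arrow is mapped onto its graphical attributes. The Arrow fields, the colour choice and the use of length for amplitude are illustrative assumptions consistent with the description above.

```python
from dataclasses import dataclass

@dataclass
class Arrow:
    steering_deg: float    # arrow direction -> steering direction
    accel_mps2: float      # arrow length/width -> acceleration amplitude; sign -> colour

def aggregate(strategic: Arrow, tactical: Arrow) -> Arrow:
    """Block 680 sketch: form the aggregate cognitive arrow by adding the
    strategic and tactical arrows component-wise."""
    return Arrow(strategic.steering_deg + tactical.steering_deg,
                 strategic.accel_mps2 + tactical.accel_mps2)

def graphical_attributes(arrow: Arrow) -> dict:
    """Map the aggregate arrow onto its graphical attributes (colours assumed)."""
    return {
        "direction_deg": arrow.steering_deg,                    # steering direction
        "length": abs(arrow.accel_mps2),                        # acceleration/deceleration amplitude
        "color": "green" if arrow.accel_mps2 >= 0 else "red",   # acceleration vs. deceleration
    }

# Example in the spirit of the next paragraph: an overtaking arrow moderated by a braking arrow.
overtake = Arrow(steering_deg=5.0, accel_mps2=1.5)
brake_for_manhole = Arrow(steering_deg=0.0, accel_mps2=-1.0)
print(graphical_attributes(aggregate(overtake, brake_for_manhole)))
```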
[0121] Method 600 depicted in FIG. 6A produces two cognitive arrows, strategic cognitive arrow 650 and tactical cognitive arrow 670. For example, strategic cognitive arrow 650 may be generated to overtake a vehicle, while tactical cognitive arrow 670 is generated because there is a manhole cover in front of the vehicle. The manhole cover causes an edge driving condition (a bumpy road scenario) that requires a certain learned technique to address. In order to reduce the bumpy experience for the passenger, SDCS 110 generates tactical cognitive arrow 670 directing the vehicle to decelerate (i.e. apply the brakes). In order to determine the output to control the vehicle, an aggregate cognitive arrow 690 is generated (as described above), which results in strategic cognitive arrow 650 being offset by tactical cognitive arrow 670. The resulting aggregate cognitive arrow 690 will control the vehicle to travel not as fast as indicated by strategic cognitive arrow 650, but not as slow as indicated by tactical cognitive arrow 670.
[0122] Particular embodiments employ deep learning to "train" the autonomous driving system 100 to drive, and more particularly, to train the cognition module 220 in its determination of the output (cognitive arrow 560) based on input received at block 510 (e.g. DSM 530 and GRP 540 in FIG. 5J). One example embodiment of a machine learning-based method 700 that may be performed by autonomous driving system 100, while a model driver operates the vehicle in training mode, is shown in FIG. 7. The model driver may have extensive driving experience enabling him or her to deal with many kinds of difficulties and unexpected challenges encountered when driving. The model driver may be expected to know how to operate a vehicle smoothly, safely and/or fuel-efficiently. Model drivers may also be expected to comply with every driving rule and regulation. Their behavior can be set as a model for training the SDCS 110.
[0123] In method 700, a model driver operates the vehicle, while the SDCS 110 receives and processes sensor data 705 at block 710. Sensor data 705 may comprise one or more of the sensor data described above with reference to sensors 140 of FIG. 1 and may comprise Rich Point Cloud (RPC) data. Sensor data 705 may be processed to produce DSM 720 at block 710 as described above with reference to FIG. 3A.
[0124] At block 730, method 700 comprises applying a learning module to generate a Program Generated Cognitive Arrow (PGCA) 740. PGCA 740 may be generated according to FIG. 5A's method 500 described above for generating a cognitive arrow, which may take into account the right of way (ROW) and route plan (as described with reference to FIG. 5J's method 501). PGCA 740 corresponds to cognitive arrow 550 output by method 500.
[0125] At block 730, the learning module may also generate a Human Operation Cognitive Arrow (HOCA) 750 according to the actions of the model driver. HOCA 750 may be generated by receiving sensor data from the vehicle as the model driver is driving the vehicle, and processing the sensor data to determine RPC data and/or DSMs. The output (i.e. the cognitive decision, comprising, for example, steering direction and acceleration/deceleration amplitude) may be recorded as the HOCA 750.
[0126] Method 700 then proceeds to training step 760, wherein PGCA 740 is compared to HOCA 750, and unified cognitive arrow 770 is generated. By applying a deep learning algorithm, the difference between PGCA 740 and HOCA 750 may be used to train the algorithm used in method 500 for generating the cognitive arrow 550. Cognitive arrow 770 is used to drive the vehicle. As the algorithm is trained, the differences between PGCA 740 and HOCA 750 will diminish, and PGCA 740 and HOCA 750 will converge.
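A minimal sketch of training step 760 is shown below, again using PyTorch for illustration. The mean-squared-error loss and the two-value arrow encoding are assumptions; the essential point from the description is only that the PGCA is the model output, the HOCA is the target, and their difference drives the update.

```python
import torch.nn as nn

def training_step(model, optimizer, dsm_batch, hoca_batch):
    """Sketch of blocks 730/760: the model's program-generated cognitive arrow
    (PGCA) is compared with the recorded human-operation cognitive arrow (HOCA);
    their difference is the training signal. Works with any model mapping a DSM
    stack to a 2-value arrow (e.g. the ArrowCNN sketch above)."""
    pgca = model(dsm_batch)                          # program-generated arrow
    loss = nn.functional.mse_loss(pgca, hoca_batch)  # difference between PGCA and HOCA
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```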
[0127] Therefore, one aspect of the invention provides a method of providing driving commands based on a learning module embedded in the system. The method comprises: (i) recording a series of driving situations, as graphic representations, based on the refined and integrated data from one or more sensors; (ii) recording the output of a model human driver's operation as a graphic command denoting an actual steering direction and an actual acceleration or deceleration; (iii) generating an autonomous driving output by the self-driving system, represented in a graphic command similar to (ii); (iv) determining the differences between the human operation and the system-generated operation; and (v) applying machine learning techniques for the system to adapt its embedded Convolutional Neural Network, which then produces further driving commands that approach human operation commands in similar driving situations.
[0128] A vehicle on the road is in an open and uncertain environment. A driver may encounter fog, snow, rain, hail, ice, strong wind, smoke and/or other unfavorable weather conditions. There may also be challenges due to narrow roadways, rugged paths, winding mountain roads, flooded areas, wading paths, icy roads, bumpy roads, etc.
Issues such as traffic light failures, unmarked road construction or potholes or bumps, sudden accidents, pedestrian violations, drunk driving and unexpected changes may also occur and may not be predicted.
[0129] In the testing and training field, some of the above driving scenarios were reproduced and a model driver operated a vehicle with an installed SDCS 110. Using a deep learning LSTM (Long Short-Term Memory) model, a general recurrent neural network composed of 7 layers with 64 neuron nodes, anti-rehabilitation training and parameter adjustment were performed. A special adapter controller chip was thereby developed and evolved through the training methods described herein, capturing the driving skills of the model driver for the smooth operation of the vehicle in future operations of the SDCS 110.
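For illustration, a network with the stated shape (7 recurrent layers, 64 hidden nodes) could be sketched as below; the input feature count, output encoding and framework are assumptions.

```python
import torch
import torch.nn as nn

class TacticalLSTM(nn.Module):
    """Sketch of the tactical skill model: a 7-layer LSTM with 64 hidden units, as
    described in the text. Input: a time series of driving-condition parameters;
    output: a tactical-arrow vector (encoding assumed)."""
    def __init__(self, n_params: int = 12):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_params, hidden_size=64,
                            num_layers=7, batch_first=True)
        self.head = nn.Linear(64, 2)   # [steering adjustment, acceleration adjustment]

    def forward(self, params_seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(params_seq)
        return self.head(out[:, -1])   # arrow for the latest time step

# Example: a batch of 4 sequences, 20 time steps, 12 parameters each.
model = TacticalLSTM()
tactical_arrows = model(torch.randn(4, 20, 12))   # shape (4, 2)
```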
[0130] An example of the training method is described here for a parking situation. Parking skills do not affect where a vehicle is heading, but they may be challenging for new drivers. In a difficult parking situation, the driver is busy operating with both hands and feet, and the operational frequency and amplitude are high. This makes parking a good opportunity to collect a driver's behavior data.
[0131] In order to achieve self-parking, different scenarios were created in the testing and training field, such as road conditions with mud, sand, grass, cement, or rugged stone pavement. When a model driver trains the SDCS 110 to park in different styles or options (e.g. head-in, tail-in, inclined side and parallel parking), he or she repeats the operation under these conditions while the SDCS 110 generates HOCAs to store numerous operation details. With all the data collected and the benchmark set, the repetitive training of the LSTM network obtains and reproduces the driver's fingerprint for parking.
[0132] In particular embodiments, one or more of the steps performed at blocks 730 and 760 of method 700 may be based on Cognitive Neural Networks (CNN) and/or Recurrent Neural Networks (RNN). In particular, the SDCS 110 may associate each input with a particular output image. The input may comprise a sequence of DSMs with a route plan identifying the next positions of the autonomous vehicle along the route. The output may comprise a cognitive arrow. Input and output pairs (DSM set / cognitive arrow pairs) are generated and stored in memory storage while the vehicle is being driven by a model driver. When a sequence of DSMs is encountered, the SDCS 110 may search the memory storage for a similar DSM sequence, and identify a cognitive arrow paired with the similar DSM sequence. That cognitive arrow may then be used as the output for the DSM sequence encountered by an autonomous vehicle.
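A bare-bones version of this memory lookup is sketched below using a summed-squared-difference comparison over stored DSM sequences; the similarity measure and the threshold are assumptions, since the specification leaves the matching criterion open (it may instead be learned, as described above).

```python
import numpy as np

def find_paired_arrow(current_seq, stored_seqs, stored_arrows, threshold=1000.0):
    """Memory-cognition sketch: compare the encountered DSM sequence against every
    stored sequence and return the cognitive arrow paired with the closest one,
    or None when nothing is similar enough.

    current_seq:   array of shape (T, H, W), the DSM sequence just encountered
    stored_seqs:   array of shape (N, T, H, W), sequences recorded during training
    stored_arrows: length-N sequence of the paired cognitive arrows."""
    diffs = (stored_seqs - current_seq[None, ...]) ** 2
    diffs = diffs.reshape(diffs.shape[0], -1).sum(axis=1)   # one score per stored sequence
    best = int(np.argmin(diffs))
    return stored_arrows[best] if diffs[best] <= threshold else None
```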
[0133] The system described herein is based on memory cognition. As opposed to computational cognition (as explained, for example, with reference to FIG. 5's method 500), the "memory cognition" employed by SDCS 110 provides for the accumulation and continuous application of previous driving "experiences" and "memories" to make immediate decisions through the pairing of DSM sets and cognitive arrows, as explained above. In certain circumstances, memory cognition may take priority over computational cognition during decision-making. This is described above with reference to FIG. 10, where the memory cognition defines rules that assist the execution module 232 to select a cognitive arrow where more than one cognitive arrow is produced, or to otherwise influence a driving decision in a particular case (e.g. slow down when approaching an intersection). Different memory sticks or storage areas may be used to store different categories of Cognitive Arrows. These may comprise, for example, a first memory stick for storing Cognitive Arrows for road-crossing, a second memory stick for storing Cognitive Arrows for overtaking other vehicles, a third memory stick for merging onto a highway, etc. When a typical DSM set is encountered while driving, SDCS 110 searches through the memory storages to locate existing Cognitive Arrows for the current situation. There may be three types of memories based on the various types of work that the SDCS 110 is performing: instantaneous memory, short-term memory and long-term memory. All three types of memories may be distinctively designed and expressed. "Instant memory" refers to the process whereby the SDCS receives RPC and starts to produce a DSM; the working memory produces the cognitive arrow to direct the car; and long-term memory stores general rules to guide the production of cognitive arrows. On many occasions, the instant memory and the working memory work in parallel in the system, and the long-term memory facilitates the working memory to produce Cognitive Arrows.
[0134] Memory cognition may be thought of as the convolution of the cognitive time function and the "forgetting" time function. Instantaneous memory, short-term memory, and long-term memory may be expressed in different formats for perception, decision making and knowledge accumulation respectively.
[0135] In the perspective coordinate system, instantaneous memory may be used to refer to the inter-frame correlation for image or radar/LIDAR RPC. Referring to FIG. 10, instantaneous memory 215 may be engaged in the process of converting the raw data from the sensors 140 into Rich Point Cloud (RPC) data and ultimately into Driving Situation Maps (DSM) 350. A sequence of DSMs 350 is generated using the instantaneous memory. As described elsewhere herein, the DSMs may be represented using log polar coordinates to facilitate cognitive recognition of the driving situation. Subsequently, working memory 217 may be employed to generate cognitive arrows from the DSMs. For example, working memory may be engaged for searching the database for a similar DSM sequence. If a match is located, then that match may be used to produce the cognitive arrow for the control module 230. The accumulation of knowledge, expressed as semantic rules with concepts or scene forms, may be stored in long-term memory 219. Concepts or phrases are semantic annotations of pictures. Concept trees may form a driving knowledge atlas. For example, at a crossroad, an experienced driver may decide that he or she would rather slow down than rush out when the situation is complex and uncertainty may appear at any minute. The decision to slow down is made from the immediate perception of danger rather than from a general law of driving. Rules stored in long-term memory may be represented using semantic expressions, such as "slow down rather than change lanes when approaching an intersection", "decelerate when making a turn", and "stop instead of accelerating". The long-term memory 219 provides rules and guidelines that are applied by the SDCS 110 when generating a cognitive arrow. Such rules and guidelines may assist the SDCS 110, and in particular, the execution module 232, to select a cognitive arrow where there is more than one possible cognitive arrow produced for the control module 230.
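One way such rules could be encoded and applied is sketched below; the rule representation, the situation dictionary and the preference ordering are assumptions meant only to illustrate how long-term-memory guidelines might break a tie between candidate cognitive arrows.

```python
def select_arrow(candidate_arrows, situation, rules):
    """Apply long-term-memory rules to pick one cognitive arrow when the
    cognition step produces several; fall back to the first candidate.
    Candidate arrows are assumed to expose an acceleration_mps2 attribute."""
    for rule in rules:
        if rule["applies"](situation):
            return min(candidate_arrows, key=rule["prefer"])
    return candidate_arrows[0]

# Example rule: when approaching an intersection, prefer the candidate with the
# lowest acceleration, i.e. "slow down rather than rush out".
slow_at_intersection = {
    "applies": lambda situation: situation.get("approaching_intersection", False),
    "prefer": lambda arrow: arrow.acceleration_mps2,
}
```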
[0136] These experiences may be stored in the memory sticks as "frequent crossroad base", "common accidents bases", "plight bases", etc. Not only does short-term memory contribute to the current perception, it may also activate the long-term memory to search similar situations and create feedback loops for current decision-making (see FIG. 10).
[0137] Whenever the SDCS meets new conditions for perception and cognition, it may prioritize the memory cognition in decision-making. Memories may be emphasized (affirmative learning), restrained (negative learning), or enhanced during the process. Taking the above sequence as an example, when a vehicle with the SDCS 110 is approaching a crossroad, the instantaneous memory is capturing information for traffic lights and pedestrians; the working memory is producing a real-time route plan (RTRP) for crossing the street; and the long-term memory is the general law of "slowing down is better than rushing out when crossing a road". The decision-making process would take the suggestion from "long-term memory" for a safer adjustment to slow down.
[0138] Each sensor has limits and therefore may be imperfect. With an increased number of sensors installed in a vehicle, the data received also increases. This may make the task of checking redundant data and merging data into RPC a burden that prevents the SDCS 110 from reacting or generating decisions in a timely manner. However, it is not necessary, nor is it suggested, that there be an integration among sensor data, such as merging between multiple radar/LIDAR data, between multiple image data, or between radar and GPS navigation data. The feedback provided by memory cognition may avoid the space and time cost of such integration, and significantly improve cognitive performance.
[0139] A further advantage of this approach is to reduce the data conflict that may exist between sensors. Separation of the autonomous system into three working zones (i.e. perception, cognition, and control) may address the potential problems associated with processing a huge amount of data at once, and may improve the cognitive understanding. As a result, the SDCS 110 does not need to create a 3D perception model to understand the driving situation and make a decision.
[0140] The methods of the SDCS as described above are realized through systems installed in a fully wired automobile, which lays the foundation for complete digital control of the vehicle. Digital control may be in many ways superior to manual control. Digital control enables online collection of detailed data about the vehicle when it is being manually driven, and acquires data on a driver's behavior, such as changes to the lights, steering wheel, accelerator, brake pedal, etc. When a vehicle is wired, the SDCS 110 may record parameters such as speed, mileage, four-wheel or two-wheel rotating speed, slight unbalance degree, pitch and roll, engine torque, sideslip angle, body vibration frequency and amplitude, tire pressure, and the like.
[0141] Embodiments of the invention described herein may be built with a universal architecture facilitating parallel operations over multiple buses. The construction of a decision bus, learning bus, and interactive bus allows the modules to work concurrently and independently. Therefore, the computation, memory and interaction modules are linked but not entangled. This general architecture also leaves possibilities of extension for future add-ons like tour guide, entertainment and mobile communications.
[0142] Each bus is responsible for the operation and data processing of its own modules that are connected to it. In particular embodiments, a virtual exchange system is provided allowing for communications between modules on each bus; no direct point-to-point communication is allowed. Messages may be exchanged in a "subscribe/publish" pattern. This may eliminate the necessity for encapsulation and data analysis. Modules may remain fairly independent by transmitting the "object" directly between them. Modules are ready to be used and may be added, stopped or moved at any point in time as required. In particular, the learning bus and the interactive bus enable the SDCS 110 to learn new driver skills, and experiences and knowledge can be accumulated along with the learning process. Learning and interaction tasks do not occupy the bus bandwidth for decision-making, nor would they affect the CAN bus 150. The architecture improves real-time decision-making performance, so cognitive arrows can be sent to the CAN bus 150 in time. Remote hackers on the internet can be blocked by a safety monitoring module if they attempt to hack in from the interaction bus. Such attempts will not hijack the SDCS 110 from the control of its owner or actual user.
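A toy version of the per-bus virtual exchange is sketched below; the class and topic names are assumptions. It illustrates only the subscribe/publish pattern and the direct passing of objects with no point-to-point links between modules.

```python
from collections import defaultdict

class VirtualExchange:
    """Per-bus virtual exchange: modules never talk point-to-point; they publish
    objects on a topic and every subscribed module receives the object directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)    # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, obj):
        for handler in self._subscribers[topic]:
            handler(obj)                          # the "object" is passed as-is, no encapsulation layer

# Example: an interaction module publishes a route change request; a planning module
# subscribed on the decision bus receives it and can revise the General Route Plan.
decision_bus = VirtualExchange()
decision_bus.subscribe("route_change", lambda request: print("revise GRP:", request))
decision_bus.publish("route_change", "add unplanned stop")
```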
[0143] In particular embodiments, the decision-making bus, learning bus and interactive bus depend respectively on three physical Ethernet buses. The buses are not connected to each other. The communication among the working modules on each bus is implemented through the virtual switching of the modules. The three virtual switching modules are not connected. Because the communication among the modules is only through the virtual switching module of that bus, modules can be added or deleted as required for tasks. Working modules may exchange information across the buses. For example, when the operator of the vehicle interactive user interface 180 provides a command to make an unplanned stop at a destination which is not part of the existing General Route Plan (e.g. a stop at a particular coffee shop), the interaction module can communicate that information to the decision bus for the production of a revised General Route Plan which includes a stop at that coffee shop.
[0144] The SDCS 110 has passed actual road tests on various types of vehicles. These vehicles include a 64-seat bus, a 5-seat sedan, a two-seat coupe, and specialized vehicles such as an RV and cleaning vehicles. The road tests include open highways, closed parks such as an industrial working zone, city traffic, etc. These tests have proved the general applicability of the architecture of SDCS 110, which can adapt to different environments and is capable of dealing with many of the uncertainties encountered while driving. The general architecture has potential for future extensions to IoT (Internet of Things), Cloud Computing, mobile networks and big data.
[0145] While a number of exemplary aspects and embodiments have been discussed above, those of skill in the art will recognize certain modifications, permutations, additions and sub-combinations thereof. It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such modifications, permutations, additions and sub-combinations as are consistent with the broadest interpretation of the specification as a whole.

Claims (37)

What is claimed is:
1. A method of autonomously driving a vehicle, the method comprising:
receiving sensor data from one or more sensors;
generating a graphical representation of the sensor data;
generating a graphical driving command from the graphical representation;
and controlling the vehicle based at least in part on the graphical driving command, wherein the graphical representation comprises a driving situation map, and the graphical driving command comprises a cognitive arrow.
2. A method according to claim 1, wherein generating the graphical representation comprises representing the sensor data using a polar coordinate system comprising an angle dimension and a radius dimension.
3. A method according to claim 2, wherein the radius dimension of the polar coordinate system is logarithmic.
4. A method according to claim 3, comprising generating an array from the driving situation map, wherein the array comprises objects and features represented in the driving situation map and the coordinates of each object or feature in the driving situation map.
5. A method according to claim 1, wherein the cognitive arrow comprises a graphically depicted arrow.
6. A method according to claim 5, wherein a length of the graphically depicted arrow represents an acceleration, and an angle of the graphically depicted arrow represents a steering direction.
7. A method according to claim 5, wherein a width of the graphically depicted arrow represents an acceleration, and an angle of the graphically depicted arrow represents a steering direction.
8. A method according to either of claims 6 or 7, wherein representation of the graphically depicted arrow in a first color indicates a positive acceleration, and representation of the graphically depicted arrow in a second color indicates a negative acceleration.
9. A method according to claim 1, wherein generating the graphical representation comprises representing a portion of a route in the graphical representation.
10. A method according to claim 9, comprising calculating the route from a current location, a destination location, and one or more street maps.
11. A method according to claim 1, wherein generating the cognitive arrow comprises:
searching a set of historical driving situation maps;
determining a closest matching driving situation map from the set of historical driving situation maps most similar to the current driving situation map; and retrieving a cognitive arrow associated with the closest matching driving situation map.
12. A method according to claim 11, wherein determining a closest matching driving situation map comprises executing a machine learning algorithm.
13. A method according to claim 12, wherein the machine learning algorithm uses a convolutional neural network.
14. A method according to claim 1, wherein generating the cognitive arrow comprises:
searching a set of historical driving situation maps;
determining if at least one driving situation map in a set of historical driving situation maps is within a threshold of similarity to the current driving situation map;
and if at least one driving situation map in the set of historical driving situation maps is within the threshold of similarity to the current driving situation map, retrieving a cognitive arrow associated with the driving situation map in the set of historical driving situation maps most similar to the current driving situation map; and if no driving situation map in the set of historical driving situation maps is within the threshold of similarity to the current driving situation map, calculating a cognitive arrow from the current driving situation map.
15. A method according to claim 14, wherein calculating the cognitive arrow comprises:
determining a right of way of the vehicle from the current driving situation map;
and calculating the cognitive arrow based at least in part on the right of way.
16. A method according to claim 1, wherein the vehicle comprises a controller area network (CAN) bus, and controlling the vehicle based at least in part on the graphical driving command comprises translating the graphical driving command into one or more CAN bus commands.
17. A method according to claim 16, wherein the CAN bus commands comprise commands for one or more of a steering electronic control unit, a throttle electronic control unit, and a brake electronic control unit.
18. A method according to claim 1, wherein the method is executed with a frequency at least partly dependent on the speed of travel of the vehicle.
19. A method according to claim 1, wherein the vehicle is one of a motor vehicle, a lawn mower, a vacuum cleaner, and an ice cleaner.
20. A method according to claim 1, wherein the sensor data comprises data from one or more of visible light cameras, infrared cameras, radar sensors, LIDAR (Light Detection and Ranging) sensors, satellite driving sensors, inertial navigation system, thermometers, compasses, and clocks.
21. A method for determining a driving command for a vehicle, the method comprising:
receiving a graphical representation of a driving situation;
searching a first knowledge base of previous driving situations for a closest matching driving situation;
retrieving a first driving command from the first knowledge base associated with the closest matching driving situation;
receiving sensor data;
determining a driving scenario from the sensor data;
searching an edge driving scenario knowledge base of driving scenarios for a closest matching driving scenario;
retrieving a second driving command from the edge driving scenario knowledge base of driving scenarios associated with the closest matching driving scenario; and combining the first driving command and the second driving command to generate a composite driving command.
22. The method according to claim 21, wherein:
determining the driving scenario comprises identifying the driver of the vehicle;
and searching the edge driving scenario knowledge base of driving scenarios comprises searching the edge driving scenario knowledge base of driving scenarios for the driving scenarios associated with a profile of the identified driver.
23. A method of providing autonomous driving for a vehicle, wherein the vehicle comprises an autonomous driving system, a manual driving system and a sensor system, the method comprising:
operating the vehicle in either one of a training and a driving mode, wherein operating the vehicle in the training mode comprises:
(i) representing a driving situation at a point in time based on data output from the sensor system;
(ii) recording an output of the manual driving system at the point in time, and representing the output as a first image denoting an actual steering direction and an actual acceleration or deceleration;
(iii) generating, by the autonomous driving system, an autonomous driving output based on the driving situation at the point in time and a training route plan, wherein the autonomous driving output is represented as a second image denoting a planned steering direction and a planned acceleration or deceleration;
(iv) determining a difference between the manual driving system output and the autonomous driving output; and (v) configuring a real-time planning module of the autonomous driving system based at least in part on the difference, wherein the real-time planning module determines autonomous driving outputs for the driving mode based on driving situations and route plans.
24. The method according to claim 23, wherein operating the vehicle in the driving mode comprises:
(i) representing a driving situation at a second point in time based on data output from the sensor system;
(ii) generating, by the autonomous driving system, an autonomous driving output based on the driving situation at the second point in time and a predefined route plan, and representing the output as a third image denoting a calculated steering direction and a calculated acceleration or deceleration;
and (iii) applying the autonomous driving output to one or more control units of a vehicle to drive the vehicle along the predefined route plan.
25. The method according to claim 24, wherein each of the first image, the second image and the third image comprises an arrow, wherein a steering direction for controlling a steering wheel of the vehicle is indicated by a direction of the arrow, and an amplitude of acceleration or deceleration of the vehicle is indicated by a length or a width of the arrow.
26. The method according to either one of claims 24 or 25 wherein representing a driving situation at a point in time comprises creating a sequence of driving situation maps of static objects and dynamic objects in a vicinity of the vehicle in logarithmic polar coordinates.
27. The method according to claim 26 comprising pairing the sequence of driving situation maps with a corresponding arrow generated from or associated with the driving situation, and storing the pair in a memory storage.
28. The method according to claim 27, wherein generating an autonomous driving output based on the driving situation at the second point in time comprises searching in the memory storage for a similar driving situation represented by a sequence of driving situation maps, and selecting the arrow corresponding to the similar driving situation as the autonomous driving output.
29. The method according to claim 27, comprising detecting an edge driving situation and searching in the memory storage for a similar driving situation represented by a sequence of driving situation maps, and selecting the arrow corresponding to the similar driving situation as the autonomous driving output.
30. The method according to any one of claims 25 to 29, wherein generating an autonomous driving output based on the driving situation at the second point in time comprises generating a strategic cognitive arrow for directing the vehicle along the predefined route plan.
31. A system for autonomously driving a vehicle, the system comprising:
a first module configured to receive sensor data from one or more sensors and generate a graphical representation of the sensor data;
a second module configured to generate a graphical driving command from the graphical representation; and a third module configured to control the vehicle based at least in part on the graphical driving command, wherein the graphical representation comprises a driving situation map, and the graphical driving command comprises a cognitive arrow.
32. A system according to claim 31, wherein generating the graphical representation comprises representing the sensor data using a polar coordinate system comprising an angle dimension and a radius dimension.
33. A system according to claim 32, wherein the radius dimension of the polar coordinate system is logarithmic.
34. A system according to claim 31, wherein the cognitive arrow comprises a graphically depicted arrow.
35. A system according to claim 34, wherein a length of the graphically depicted arrow represents an acceleration, and an angle of the graphically depicted arrow represents a steering direction.
36. A system according to claim 31, wherein the vehicle is one of a motor vehicle, a lawn mower, a vacuum cleaner, and an ice cleaner.
37. A system according to claim 31, wherein the one or more sensors comprise one or more of visible light cameras, infrared cameras, radar sensors, LIDAR (Light Detection and Ranging) sensors, satellite driving sensors, inertial navigation systems, thermometers, compasses, and clocks.
CA3087361A 2018-01-05 2018-01-05 Autonomous driving methods and systems Pending CA3087361A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/071516 WO2019134110A1 (en) 2018-01-05 2018-01-05 Autonomous driving methods and systems

Publications (1)

Publication Number Publication Date
CA3087361A1 true CA3087361A1 (en) 2019-07-11

Family

ID=67143508

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3087361A Pending CA3087361A1 (en) 2018-01-05 2018-01-05 Autonomous driving methods and systems

Country Status (2)

Country Link
CA (1) CA3087361A1 (en)
WO (1) WO2019134110A1 (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110488872B (en) * 2019-09-04 2023-03-07 中国人民解放军国防科技大学 Unmanned aerial vehicle real-time path planning method based on deep reinforcement learning
CN111178402B (en) * 2019-12-13 2023-04-07 赛迪检测认证中心有限公司 Scene classification method and device for road test of automatic driving vehicle
US11432306B2 (en) 2020-08-05 2022-08-30 International Business Machines Corporation Overtaking anticipation and proactive DTCH adjustment
US11810364B2 (en) * 2020-08-10 2023-11-07 Volvo Car Corporation Automated road damage detection
CN113126620B (en) * 2021-03-23 2023-02-24 北京三快在线科技有限公司 Path planning model training method and device
DE102021203057A1 (en) 2021-03-26 2022-09-29 Volkswagen Aktiengesellschaft Segment-based driver analysis and individualized driver assistance
CN113568324B (en) * 2021-06-29 2023-10-20 之江实验室 Knowledge graph correction method based on simulation deduction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6359986B2 (en) * 2015-02-13 2018-07-18 トヨタ自動車株式会社 Vehicle driving support system
DE102016202590A1 (en) * 2016-02-19 2017-09-07 Robert Bosch Gmbh Method and device for operating an automated motor vehicle
DE102016202594A1 (en) * 2016-02-19 2017-08-24 Robert Bosch Gmbh Method and device for interpreting a vehicle environment of a vehicle and vehicle
US20170277182A1 (en) * 2016-03-24 2017-09-28 Magna Electronics Inc. Control system for selective autonomous vehicle control
CN107161141B (en) * 2017-03-08 2023-05-23 深圳市速腾聚创科技有限公司 Unmanned automobile system and automobile

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821536A (en) * 2022-05-13 2022-07-29 河南科技大学 Improved method for identifying field obstacles of yolov5 unmanned tractor
CN114821536B (en) * 2022-05-13 2024-02-20 河南科技大学 Unmanned tractor field obstacle recognition method for improving yolov5

Also Published As

Publication number Publication date
WO2019134110A1 (en) 2019-07-11

Similar Documents

Publication Publication Date Title
WO2019134110A1 (en) Autonomous driving methods and systems
US11878683B1 (en) Automated system and method for modeling the behavior of vehicles and other agents
CN113165652B (en) Verifying predicted trajectories using a mesh-based approach
JP7433391B2 (en) Interaction between vehicle and teleoperation system
US10829116B2 (en) Affecting functions of a vehicle based on function-related information about its environment
US20200272160A1 (en) Motion Prediction for Autonomous Devices
US11698638B2 (en) System and method for predictive path planning in autonomous vehicles
CN114489044A (en) Trajectory planning method and device
US20180004210A1 (en) Affecting Functions of a Vehicle Based on Function-Related Information about its Environment
JP2021524410A (en) Determining the drive envelope
CN110573978A (en) Dynamic sensor selection for self-driving vehicles
US20220105959A1 (en) Methods and systems for predicting actions of an object by an autonomous vehicle to determine feasible paths through a conflicted area
CN116249947A (en) Predictive motion planning system and method
CN112698645A (en) Dynamic model with learning-based location correction system
WO2018005819A1 (en) Affecting functions of a vehicle based on function-related information about its environment
US11341866B2 (en) Systems and methods for training a driver about automated driving operation
CN116249644A (en) Method and system for performing out-of-path inference by autonomous vehicles to determine viable paths through an intersection
EP3869342A1 (en) System and method for generating simulation scenario definitions for an autonomous vehicle system
US11774259B2 (en) Mapping off-road entries for autonomous vehicles
CN113496189A (en) Sensing method and system based on static obstacle map
CN115339437A (en) Remote object detection, localization, tracking, and classification for autonomous vehicles
DE112022003364T5 (en) COMPLEMENTARY CONTROL SYSTEM FOR AN AUTONOMOUS VEHICLE
WO2022154986A1 (en) Methods and systems for safe out-of-lane driving
CN112829762A (en) Vehicle running speed generation method and related equipment
CN117916682A (en) Motion planning using a time-space convex corridor

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20220922
