CN111572562A - Automatic driving method, device, equipment, system, vehicle and computer readable storage medium - Google Patents

Automatic driving method, device, equipment, system, vehicle and computer readable storage medium

Info

Publication number
CN111572562A
Authority
CN
China
Prior art keywords
data
neural network
vehicle
feature extraction
autonomous vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010630799.8A
Other languages
Chinese (zh)
Inventor
由长喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010630799.8A priority Critical patent/CN111572562A/en
Publication of CN111572562A publication Critical patent/CN111572562A/en
Pending legal-status Critical Current


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2520/00 Input parameters relating to overall vehicle dynamics
    • B60W2520/06 Direction of travel
    • B60W2520/10 Longitudinal speed
    • B60W2552/00 Input parameters relating to infrastructure
    • B60W2552/50 Barriers
    • B60W2555/00 Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W2555/60 Traffic rules, e.g. speed limits or right of way

Abstract

The application discloses an automatic driving method based on a neural network. The neural network comprises a trained first feature extraction part, a trained second feature extraction part and a trained decision part; the first feature extraction part and the decision part are feed-forward neural networks, and the second feature extraction part is a convolutional neural network. The method comprises the following steps: inputting first-class data associated with automatic driving of a vehicle into the first feature extraction part to extract first-class features, the first-class data being one-dimensional data; inputting second-class data associated with automatic driving of the vehicle into the second feature extraction part to extract second-class features, the second-class data being multi-dimensional data; and inputting the first-class features and the second-class features into the decision part to determine a current driving strategy of the autonomous vehicle. An automatic driving device and the like are also disclosed.

Description

Automatic driving method, device, equipment, system, vehicle and computer readable storage medium
Technical Field
The present application relates to autonomous driving, and more particularly, to an autonomous driving method, apparatus, device, system, vehicle, and computer-readable storage medium.
Background
The automatic driving technology generally comprises technologies such as high-precision maps, environment perception, behavior decision, path planning, motion control and the like, and has wide application prospects.
For example, in 2013 the National Highway Traffic Safety Administration (NHTSA), an agency of the U.S. Department of Transportation, first released a classification standard for autonomous vehicles. Its description of automation comprises four stages: function-specific automation, partial automation, conditional automation, and full automation. These levels roughly express the degree of vehicle automation and the degree of human participation required when control of the vehicle is taken over; the lower the required participation, the higher the degree of automatic driving.
In 2014, SAE International (the Society of Automotive Engineers) also published a set of grading standards for automatic driving of vehicles, whose description of automation comprises five grades, adding a highest level of automatic driving: full automation. This provides a reliable criterion for determining the automatic driving level of the vehicles on sale in the market today.
The following table describes the autopilot levels given by NHTSA and SAE.
[Table: NHTSA and SAE automatic driving levels; provided as an image in the original publication]
Disclosure of Invention
Embodiments of the present invention provide a neural network-based autopilot method, apparatus, device, system, vehicle, and computer-readable storage medium.
According to a first aspect of the present invention, there is provided a neural network-based automatic driving method, wherein the neural network comprises a trained first feature extraction part, a second feature extraction part and a decision part, wherein the first feature extraction part and the decision part are feedforward neural networks and the second feature extraction part is a convolutional neural network, the method comprising: inputting first-class data associated with automatic driving of a vehicle into a first feature extraction portion of a neural network to extract first-class features, wherein the first-class data is one-dimensional data; acquiring second-class data associated with automatic driving of the vehicle, and inputting the second-class data into a second feature extraction part of the neural network to extract second-class features, wherein the second-class data are multi-dimensional data; and inputting the first type of feature and the second type of feature into a decision portion of the neural network to determine a current driving strategy of the autonomous vehicle.
According to one embodiment, the first type of data includes: current mission data of the autonomous vehicle, status data of the autonomous vehicle itself and data related to current traffic regulations, the second type of data comprising: the data of the current target obstacle of the autonomous vehicle and the data related to the road in the current driving environment of the autonomous vehicle, and the current driving strategy of the autonomous vehicle comprise: a decision result for the target obstacle.
According to one embodiment, the task data comprises task data associated with a particular road scene.
According to one embodiment, the second type of data comprises coordinate data of a trajectory of a current target obstacle of the sampled autonomous vehicle and coordinate data of a road boundary in a current driving environment of the sampled autonomous vehicle.
According to one embodiment, the sampled coordinate data of the trajectory of the current target obstacle of the autonomous vehicle and the sampled coordinate data of the road boundary in the current driving environment of the autonomous vehicle are in the same format.
According to one embodiment, the output nodes of the first feature extraction part of the neural network and the corresponding input nodes of the decision part of the neural network are connected in a fully connected manner, and the output nodes of the second feature extraction part of the neural network and the corresponding input nodes of the decision part of the neural network are connected in a one-to-one correspondence manner.
According to one embodiment, the own state data includes: position coordinates, speed, heading, and geometric information of the autonomous vehicle.
According to one embodiment, the data relating to the current traffic rules comprises: data indicating the state of the traffic lights that the autonomous vehicle is currently facing.
According to one embodiment, the data relating to the current traffic rules comprises: speed limit data for a current road of the autonomous vehicle.
According to one embodiment, the target obstacle comprises a plurality of target obstacles, and the decision result comprises a decision result for each of the plurality of target obstacles, respectively.
According to one embodiment, the determination of the target obstacle is associated with task data.
According to one embodiment, the first type of data further comprises geometric data of the target obstacle.
According to one embodiment, the task data includes: going straight, changing lanes to the left, changing lanes to the right, going straight through an intersection, turning left at an intersection, turning right at an intersection, and merging.
According to one embodiment, the decision result comprises: ignore, yield, bypass, follow, avoid, and cut-in.
According to a second aspect of the present invention, there is provided an automatic driving apparatus based on a neural network, the neural network including a trained first feature extraction portion, a second feature extraction portion and a decision portion, wherein the first feature extraction portion and the decision portion are feedforward neural networks, and the second feature extraction portion is a convolutional neural network, the apparatus comprising: a first extraction module configured to input a first type of data associated with automatic driving of a vehicle into a first feature extraction portion of a neural network to extract a first type of feature, wherein the first type of data is one-dimensional data; a second extraction module configured to input a second type of data associated with the automatic driving of the vehicle into a second feature extraction portion of the neural network to extract a second type of feature, wherein the second type of data is multi-dimensional data; and a determination module configured to input the first class of features and the second class of features into a decision portion of a neural network to determine a current driving strategy of the autonomous vehicle.
According to a third aspect of the present invention, there is provided an automatic driving apparatus comprising: a processor; and a memory configured to have computer executable instructions stored thereon, which when executed in the processor, cause the processor to implement the method of the first aspect and any embodiment thereof described above.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a computing device, cause the computing device to implement the method of the first aspect and any embodiment thereof described above.
According to a fifth aspect of the present invention, there is provided an automatic driving system comprising: data acquisition means configured to acquire first class data and second class data associated with automatic driving of a vehicle, and an automatic driving apparatus according to a third aspect of the present invention.
According to one embodiment, the data acquisition device includes one or more of a perception system available to the vehicle, an electronic map, and an electronic navigation.
According to a sixth aspect of the invention, there is provided a vehicle comprising an autonomous driving system according to the fifth aspect of the invention.
By designing composite input data, i.e. one-dimensional data and multi-dimensional data; by designing a deep neural network to process the simple data, including the current scene type of the autonomous vehicle, the state of the autonomous vehicle itself, the state of the signal light the vehicle currently faces, and the like; and by designing a convolutional neural network, parallel to the deep neural network, to process the multi-dimensional data signals, including obstacle trajectories and the geometric boundary information of roads, existing data that benefits automatic driving can be used effectively without being constrained by its data structure. At the same time, the structurally optimized neural network can fully exploit the advantages of different neural networks for processing different data and simplify the structure of each part of the network, thereby simplifying the structure of the whole neural network and improving its overall efficiency.
By associating tasks with scenes, the tasks are further refined, so that refined task acquisition and the neural network can support each other functionally and interact, and the expected driving behavior can be computed more accurately.
By associating the determination of the target obstacle with the task, and the task with the road scene, the task-analysis-based acquisition of target obstacle data and the neural network can support each other functionally and interact, so that the expected driving behavior can be computed more accurately.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 illustrates a conceptual diagram of multiple obstacles at an intersection.
Fig. 2a illustrates one structural example of a neural network for obtaining a driving strategy of an autonomous vehicle according to an embodiment of the present invention.
Fig. 2b illustrates yet another structural example of a neural network for obtaining a driving strategy of an autonomous vehicle according to an embodiment of the present invention.
FIG. 3a illustrates surrounding vehicles in one scenario according to an embodiment of the present invention.
FIG. 3b illustrates surrounding vehicles in another scenario according to an embodiment of the present invention.
FIG. 3c illustrates surrounding vehicles in yet another scenario according to an embodiment of the present invention.
FIG. 4a illustrates an example of one data transition in a portion of a neural network for obtaining a driving strategy for an autonomous vehicle in accordance with an embodiment of the present invention.
FIG. 4b illustrates an example of yet another data transition in a portion of a neural network for obtaining a driving strategy for an autonomous vehicle in accordance with an embodiment of the present invention.
FIG. 5 illustrates an example of an autonomous driving method based on a neural network according to an embodiment of the present invention.
Fig. 6 illustrates an example of a neural network-based autopilot device according to an embodiment of the present invention.
FIG. 7 illustrates a hardware implementation environment diagram according to an embodiment of the invention.
Detailed Description
The solution provided by the embodiments of the present application relates to artificial intelligence technologies such as automatic driving. To make the purpose, technical solution, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Artificial intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision making.
Artificial intelligence is a comprehensive discipline involving a wide range of fields, covering both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
With the research and progress of artificial intelligence technology, it has been developed and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, and smart customer service.
While autonomous driving is not limited to artificial intelligence technology, artificial intelligence plays an increasingly important role in autonomous driving applications.
At present, most vehicle models on the market support automatic driving only up to level L2: under specific road or environmental conditions the vehicle can drive itself, and the driver does not need to operate the steering wheel or the accelerator. Freeing the hands and feet in this way only opens the door to automatic driving; at this level the driver must still watch the road conditions at all times, so many people experience automatic driving without it being fully delivered. A full-speed-range adaptive cruise system covering 0-150 km/h, for example, is L2-level automatic driving.
Companies and organizations currently developing unmanned-driving technologies beyond L3 essentially all face the same difficulty: dealing with complex scenarios, especially those that require game-like interaction between the host vehicle and multiple surrounding vehicles.
Tesla has developed the Autopilot system. Its detection and processing of lane lines is stable and its human-computer interaction is well developed, but its handling of obstacles is not robust enough: relying on pure sensing data, it cannot accurately recognize and process complex scenes, and obstacles at intersections, highway entrances and exits, and the like pose great challenges. Cadillac developed the CT6 system, which likewise has difficulty interacting with multiple obstacles in complex scenes, is slow to handle vehicles cutting in, and is currently available in mass-produced models only for closed-road scenarios. Fig. 1 shows, by way of example, a conceptual diagram of multiple obstacles at an intersection; the vehicle passing through the middle of the intersection has an automatic driving function and faces complex road and obstacle conditions, so behavior decisions for obstacles during automatic driving are challenging. It is desirable that autonomous vehicles can optimize decisions for various driving scenarios and tasks.
In addition, in artificial-intelligence-based automatic driving, optimizing the efficiency of the neural network that processes complex scenes is also a problem to be considered.
Fig. 2a illustrates one structural example of a neural network for obtaining a driving strategy of an autonomous vehicle according to an embodiment of the present invention. The structure is a composite neural network that can be divided into three parts: a first feature extraction part 201 (shown within a dotted box in Fig. 2a), a second feature extraction part 202 (shown within a dotted box in Fig. 2a), and a decision part 203 (shown within a dotted box in Fig. 2a).
Neural networks fall into several categories according to how their neurons are organized. Common categories are the feedforward neural network (FNN), the convolutional neural network (CNN), which encodes spatial correlation, the recurrent neural network (RNN), which encodes temporal correlation, the generative adversarial network (GAN), and so on. A deep neural network in the broad sense includes any of the above and combinations thereof; a deep neural network in the narrow sense refers to a feedforward neural network. A simple feedforward neural network may include only an input layer and an output layer, while a more complex feedforward neural network with hidden layers is also referred to as a multi-layer perceptron (MLP) and typically has three parts: an input layer, an output layer, and at least one hidden layer. In the embodiment of the present invention, the first feature extraction part 201 and the decision part 203 of the neural network are feedforward neural networks, and the second feature extraction part is a convolutional neural network, but the present invention is not limited thereto. In the example of Fig. 2a, the first feature extraction part 201 has three layers and the decision part 203 has three layers, but the present invention is not limited thereto; in other examples the first feature extraction part 201 may have two layers, four layers, etc., and the decision part 203 may have another number of layers, depending on the complexity of the data processing.
In this configuration, the inputs of the first feature extraction part 201 of the neural network are all one-dimensional data, with data types such as integer (int), boolean, and floating point (double, float), and may include several data sets related to automatic driving. Fig. 2a shows three such sets: the current task data of the autonomous vehicle (task for short), the state data of the autonomous vehicle itself (host-vehicle state for short), and signal lights as data related to current traffic rules. The inputs are not limited thereto and may also include other data related to the autonomous vehicle, such as target obstacle geometry data and data related to other traffic rules (such as the speed limit of the current road). The output nodes of the first feature extraction part 201 of the neural network are shared with, or coincide with, the corresponding input nodes of the decision part 203. Fig. 2b shows an equivalent neural network that separates the overlapping parts, i.e. the output nodes of the first feature extraction part 201 and the corresponding input nodes of the decision part 203 are connected in one-to-one correspondence. The output nodes of the second feature extraction part 202 and the corresponding input nodes of the decision part 203 are likewise connected in one-to-one correspondence; that is, the input of the other part of the decision part 203 comes directly from the output of the second feature extraction part 202. Of course, the input-output relationships among the first feature extraction part 201, the second feature extraction part 202, and the decision part 203 are not limited to those shown in Fig. 2a and may vary; for example, the output nodes of the first feature extraction part 201 and the corresponding input nodes of the decision part 203 may be connected in one-to-one correspondence, but to achieve the same effect it may then be necessary to provide one more layer of nodes in the decision part 203, as shown in Fig. 2b. The input-output relationship between the first feature extraction part 201 and the decision part 203 shown in Fig. 2a therefore reduces the number of neurons and further simplifies the network structure.
The input to the second feature extraction part 202 of the neural network is multi-dimensional data, including but not limited to two-dimensional data; Fig. 2a shows obstacle and road data, but the input is not limited thereto. In one example, the second feature extraction part 202 comprises, in order, a convolutional layer, a pooling layer, a further convolutional layer, a further pooling layer, a flattening layer, and a fully connected layer. The convolutional neural network extracts from the input information the feature data needed for the decision of the subsequent decision part 203. In one example, the decision part 203 comprises three layers, all connected in sequence, and finally outputs the current driving strategy of the autonomous vehicle.
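As a concrete illustration of this three-part structure, a minimal PyTorch sketch follows. It is not part of the patent disclosure: the layer widths, the (1, 4) and (1, 7) kernel shapes, and the split of the 36 one-dimensional inputs (7 task + 6 host-vehicle state + 3 signal light + 2N obstacle geometry neurons) are assumptions chosen for illustration, and the intermediate shapes differ slightly from the example dimensions given later in the description.

```python
import torch
import torch.nn as nn

class CompositeDrivingNet(nn.Module):
    """Minimal sketch of the three-part structure of Fig. 2a (illustrative)."""
    def __init__(self, n_obstacles=10, n_labels=6):
        super().__init__()
        self.n_obstacles, self.n_labels = n_obstacles, n_labels
        one_dim_in = 7 + 6 + 3 + 2 * n_obstacles  # task + ego + lights + geometry
        # Part 201: feedforward network for the one-dimensional data.
        self.ffn = nn.Sequential(
            nn.Linear(one_dim_in, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Part 202: convolutional network for the multi-dimensional data,
        # shaped 2 coords x (N + 2) trajectory/boundary rows x 30 samples.
        rows = n_obstacles + 2
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=(1, 4)), nn.MaxPool2d((1, 2)),
            nn.Conv2d(8, 6, kernel_size=(1, 7)), nn.MaxPool2d((1, 2)),
            nn.Flatten(),
            # 3 time steps remain of the 30 under these kernels and pools
            nn.Linear(6 * rows * 3, 20 * rows), nn.ReLU(),  # 240 features
        )
        # Part 203: fully connected decision network, 6 channels per obstacle.
        self.decision = nn.Sequential(
            nn.Linear(64 + 20 * rows, 128), nn.ReLU(),
            nn.Linear(128, n_obstacles * n_labels),
        )

    def forward(self, x1, x2):
        f1 = self.ffn(x1)   # first-class features from one-dimensional data
        f2 = self.cnn(x2)   # second-class features from multi-dimensional data
        out = self.decision(torch.cat([f1, f2], dim=1))
        return out.view(-1, self.n_obstacles, self.n_labels)

# Usage: CompositeDrivingNet()(torch.zeros(1, 36), torch.zeros(1, 2, 12, 30))
```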
By designing composite input data, i.e. one-dimensional data and multi-dimensional data; by designing a deep neural network to process the simple data, including the current scene type of the autonomous vehicle, the state of the autonomous vehicle itself, the state of the signal light the vehicle currently faces, and the like; and by designing a convolutional neural network, parallel to the deep neural network, to process the multi-dimensional data signals, including obstacle trajectories and the geometric boundary information of roads, existing data that benefits automatic driving can be used effectively without being constrained by its data structure. At the same time, the structurally optimized neural network can fully exploit the advantages of different neural networks for processing different data and simplify the structure of each part of the network, thereby simplifying the structure of the whole neural network and improving its overall efficiency.
An example of the structure of a neural network for obtaining a driving strategy of an autonomous vehicle according to an embodiment of the present invention is further described below with reference to Figs. 3a to 4b.
It is desirable that autonomous vehicles optimize decisions for various driving scenarios and driving tasks, such as merging, lane changing, and intersections. In the prior art, the driving task cannot be combined well with the driving scene. Yet for different scenarios, such as intersection versus non-intersection, the same driving task, such as going straight, may call for a different driving strategy, and the target obstacles to be considered may also differ; the cases cannot be lumped together. The inventors of the present invention therefore reflected this association with the road scene in the design of the input data. For example, the first set of input data of the first feature extraction part 201 of the neural network is task data, and the task data is designed to comprise tasks associated with a specific road scene, as shown in the following table:
Task | going straight, changing lanes to the left, changing lanes to the right, going straight through an intersection, turning left at an intersection, turning right at an intersection, merging
Based on the table there are seven types of tasks, and in one example seven input-layer neurons are designed to correspond to this group of data, the data type input to each neuron being boolean. Of course, those skilled in the art will appreciate that the number and variety of tasks are not limited thereto, nor are the types of data input.
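A minimal sketch of such an encoding follows; the task label strings are hypothetical, since the patent fixes only the number of boolean input neurons.

```python
# Hypothetical task encoding: one boolean input neuron per task type.
TASKS = ["straight", "left_lane_change", "right_lane_change",
         "intersection_straight", "intersection_left_turn",
         "intersection_right_turn", "merge"]

def encode_task(task):
    """One-hot vector over the task types for the current driving task."""
    return [1.0 if task == t else 0.0 for t in TASKS]

assert encode_task("merge") == [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
```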
According to the embodiment of the invention, associating tasks with scenes refines the tasks, so that refined task acquisition and the neural network support each other functionally and interact, and the expected driving behavior can be computed more accurately. In one example, the refined tasks may be acquired through one or more of the electronic navigation, perception system, electronic map, etc. available to the autonomous vehicle, although the invention is not limited thereto.
Referring to Fig. 2a, the second set of input data of the first feature extraction part 201 of the neural network is the host-vehicle state, as shown in the following table:
Host-vehicle state | position coordinate x, position coordinate y, speed, heading, length, width
This includes the position coordinates, speed, heading, and necessary geometric information, such as length and width, of the host vehicle. In one example, six input-layer neurons are designed to correspond to this set of data, and the data type input to each neuron may be floating point. Of course, those skilled in the art will appreciate that the number and kinds of host-vehicle state variables are not limited thereto, nor are the types of data input.
Referring to Fig. 2a, the third set of input data of the first feature extraction part 201 of the neural network is the signal light, as shown in the following table:
Signal light | red light, green light, yellow light
Data comprising the three conventional signal lights (red, green, yellow) can be used to describe four states: red, yellow, green, and no signal light. In one example, three input-layer neurons are designed to correspond to this group of data, and the data type input to each neuron may be boolean; if any one of the three inputs is "1", the corresponding light is on, and if all inputs are "0", no signal light is on. Of course, those skilled in the art will understand that the number and type of signal lights are not limited thereto but depend on the specific rules of specific regions, such as deceleration signals or pedestrian-priority signals, and the types of data input are likewise not limited.
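The host-vehicle state and signal-light groups might be packed as follows; the field names, units, and the no-light convention are assumptions, as the patent fixes only the neuron counts and the boolean/floating-point types.

```python
# Hypothetical encoding of the host-vehicle state (six floating-point
# neurons) and the signal light (three boolean neurons; all zero means
# "no signal light"). Field names and units are assumptions.
def encode_ego_state(x, y, speed, heading, length, width):
    return [x, y, speed, heading, length, width]

def encode_signal_light(color):
    return [1.0 if color == c else 0.0 for c in ("red", "green", "yellow")]

encode_signal_light(None)  # -> [0.0, 0.0, 0.0], the no-signal-light state
```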
In another embodiment, the input layer of the first feature extraction part 201 of the neural network may add more neurons for inputting other sets of data, such as geometric data describing the target obstacles, as shown in the following table:
Target obstacle geometry data | length and width of target obstacle 1; length and width of target obstacle 2; ...; length and width of target obstacle N
Based on the table, there are N target obstacles, each with two data values, giving 2N data in total. In one example, 2N neurons corresponding to this group of data may be provided at the input layer of the first feature extraction part 201 of the neural network, the data type input to each neuron being floating point. In one example N is 10, although those skilled in the art will appreciate that the number of target obstacles and the kinds of geometric data are not limited thereto, nor are the types of data input.
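A sketch of such packing, with zero-padding for absent obstacles (an assumption consistent with the "no input or input is 0" convention in the scenarios below), might look like this:

```python
# Hypothetical packing of target-obstacle geometry into 2N input neurons,
# zero-padded when fewer than N obstacles are selected (N = 10 here).
N = 10

def encode_obstacle_geometry(obstacles):
    """obstacles: list of (length, width) tuples, at most N entries."""
    flat = [v for lw in obstacles[:N] for v in lw]
    return flat + [0.0] * (2 * N - len(flat))

assert len(encode_obstacle_geometry([(4.5, 1.8), (12.0, 2.5)])) == 20
```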
The determination of the target obstacles is associated with the task, which in turn may be associated with the road scene. The following description takes the number of target obstacles N = 10 as an example. Fig. 3a illustrates the surrounding vehicles in one scenario according to an embodiment of the present invention. The scene is a non-intersection scene. The vehicle 31, drawn with hatching, is an autonomous vehicle according to an embodiment of the present invention equipped with the neural network for obtaining a driving strategy, and its current task is to change lanes to the right. The selected target obstacles are then: one lead vehicle, vehicle 321, directly in front of it; the three vehicles nearest to it in the left lane, vehicles 322, 323, and 324; the three vehicles nearest to it in the right lane, vehicles 325 and 326 (with fewer than three, the remaining corresponding neurons receive no input or an input of 0); and the three nearest vehicles in the farther lane where lane-change conflicts may arise, i.e. the three vehicles in the rearmost lane in Fig. 3a, vehicles 327, 328, and 329.
Fig. 3b illustrates the surrounding vehicles in another scenario according to an embodiment of the present invention. The scene is again a non-intersection scene, and the hatched vehicle 31 is an autonomous vehicle according to an embodiment of the present invention. Its current task is to merge. The selected target obstacles are then: one lead vehicle, vehicle 321, directly in front of it; the three vehicles nearest to it in the left lane, vehicles 322 and 323 (with fewer than three, the corresponding neurons receive no input or an input of 0); and the three nearest vehicles in the far lane where lane-change conflicts may arise, i.e. vehicles 324 and 325 in the last lane in Fig. 3b (again padded with 0 where fewer than three exist). Three neurons of the N = 10 target-obstacle slots then remain, and they receive no input or an input of 0.
Fig. 3c illustrates the surrounding vehicles in yet another scenario according to an embodiment of the present invention. The scene is an intersection scene and relatively complex. The hatched vehicle 31 is an autonomous vehicle according to an embodiment of the present invention, and its current task is to turn right at the intersection. The selected target obstacles are then: one lead vehicle, vehicle 321, directly in front of it; the three vehicles closest to it in the left lane, vehicles 322 and 323 (padded with 0 where fewer than three exist); and other vehicles that may pose lane-change conflicts, such as two vehicles in the lane adjacent to the target lane, vehicle 324 (padded with 0 where fewer than two exist), two vehicles crossing the intersection in the straight lane north of the intersection corresponding to the target lane and its adjacent lane, vehicles 325 and 326, and one vehicle crossing the intersection in the left-turn lane east of the intersection, vehicle 327. Likewise, the remaining neurons receive no input or an input of 0.
According to the embodiment of the invention, the determination of the target obstacles is associated with the task, and the task with the road scene, so that the task-analysis-based acquisition of target obstacle data and the neural network support each other functionally and interact, and the expected driving behavior can be computed more accurately. The task-based analysis of target obstacle data may be accomplished through one or more of the electronic navigation, perception system, electronic map, etc. available to the autonomous vehicle, although the invention is not limited thereto.
It should be understood that although the target obstacles listed in the embodiments of the present application are all vehicles, conflicting pedestrians, animals (such as pets), or even stationary objects may also be target obstacles, such as the pedestrians and pets waiting to cross the road at the southwest corner of the intersection in Fig. 3c. The input data of the neural network can be set flexibly accordingly.
Fig. 4a illustrates an example of one data transition in the second feature extraction part 202 of the neural network, which is part of the neural network for obtaining a driving strategy of an autonomous vehicle according to an embodiment of the present invention. In one example, the input data passes through a convolutional layer and a pooling layer of the second feature extraction part 202 to obtain data Cov1, passes through another convolutional layer and another pooling layer to obtain data Cov2, passes through a flattening layer to obtain data FC1, and finally passes through a fully connected layer to obtain data FC2. The second feature extraction part 202 processes the data of the current target obstacles of the autonomous vehicle and the data related to the road in its current driving environment. In one example, the data of the current target obstacles are sampled coordinate data of the trajectories of the current target obstacles, and the road-related data are sampled coordinate data of the road boundaries in the current driving environment of the autonomous vehicle.
The selection of the current target obstacles of the autonomous vehicle may follow the embodiments described in conjunction with Figs. 3a to 3c. In one example, N obstacles are selected with N = 10, and each obstacle corresponds to one input channel of the second feature extraction part 202 of the neural network. The track points of the last 3 seconds are taken at a sampling frequency of 10 Hz, so the coordinate data collected per track amount to 2x30 (each sampled track point comprises an abscissa and an ordinate), and the data input to the second feature extraction part 202, counting target obstacle data alone, is 2xNx30. After processing by the convolutional layer (in one example, kernel size 4x1, 8 channels) and the pooling layer, data Cov1 (8xNx24 in this example) is obtained; after the further convolutional layer (in one example, kernel size 7x1, 6 channels) and the further pooling layer, data Cov2 (6xNx12 in this example) is obtained. The flattening layer then yields data FC1 of size 72xN, and the fully connected layer yields data FC2 of size 20xN, i.e. 200 in this example. That is, the second feature extraction part 202 extracts 200 features from the historical tracks of the 10 selected target obstacles. The function of the second feature extraction part 202 is to predict the trajectory of each target obstacle over the coming period from its past 3 seconds of trajectory, thereby providing a basis for the driving strategy of the autonomous vehicle.
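The assembly of this 2xNx30 input from sampled track points might be sketched as follows; padding short or missing tracks with zeros is an assumption consistent with the "no input or input is 0" convention above.

```python
import torch

# Sketch (not from the patent text) of building the 2 x N x 30 trajectory
# tensor: 3 s of track points at 10 Hz for up to N target obstacles.
N, T = 10, 30  # obstacles, samples per track (3 s x 10 Hz)

def trajectory_tensor(tracks):
    """tracks: list of up to N sequences of (x, y) points, newest last."""
    out = torch.zeros(2, N, T)
    for i, pts in enumerate(tracks[:N]):
        if len(pts) == 0:
            continue  # absent obstacle: row stays zero
        pts = torch.as_tensor(pts, dtype=torch.float32)[-T:]
        out[0, i, -len(pts):] = pts[:, 0]  # abscissa channel
        out[1, i, -len(pts):] = pts[:, 1]  # ordinate channel
    return out  # shape (2, N, 30); one sample for part 202
```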
Similarly, the same processing may be applied to the coordinate data of the road boundaries in the current driving environment of the autonomous vehicle. Fig. 4b illustrates an example of another data transition in part of the neural network for obtaining a driving strategy of an autonomous vehicle according to an embodiment of the present invention. The current lane boundary coordinates may be processed in the same format as the sampled trajectories of the target obstacles; for example, the boundary on one side and the boundary coordinate data on the other side are handled as two additional channels in the same format, and accordingly the data input to the second feature extraction part 202 of the neural network is 2x(N+2)x30. After the convolutional layer and the pooling layer, data Cov1 of size 8x(N+2)x24 is obtained, and after the other convolutional layer and pooling layer, data Cov2 of size 6x(N+2)x12. The flattening layer then yields data FC1 of size 72x(N+2), and the fully connected layer yields data FC2 of size 20x(N+2), i.e. 240 in this example. That is, the second feature extraction part 202 extracts 240 features from the historical tracks of the 10 selected target obstacles plus the boundary information. Of course, in other examples more road-boundary channels may be required depending on the actual scene. It will be appreciated by those skilled in the art that other sampling arrangements are possible, and that the sampled coordinate data of the target obstacle trajectories and of the road boundaries may also use different formats.
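Appending the two boundary polylines as two extra rows in the same coordinate format might look like this (a sketch continuing the one above, so torch is already imported; the (30, 2) boundary sampling is an assumption):

```python
# Sketch: append the two road-boundary polylines as two extra rows in the
# same 2 x 30 coordinate format, giving the 2 x (N + 2) x 30 input.
def with_boundaries(traj, left_xy, right_xy):
    """traj: (2, N, 30); left_xy, right_xy: (30, 2) sampled boundary points."""
    rows = torch.stack([
        torch.as_tensor(left_xy, dtype=torch.float32).T,   # (2, 30)
        torch.as_tensor(right_xy, dtype=torch.float32).T,  # (2, 30)
    ])                                                      # (2 rows, 2 coords, 30)
    return torch.cat([traj, rows.permute(1, 0, 2)], dim=1)  # (2, N + 2, 30)
```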
According to the embodiment of the invention, because the sampled coordinate data of the target obstacle trajectories and of the road boundaries share the same format, the structure of the convolutional neural network can be simplified, which simplifies the structure of the whole neural network and improves its operating efficiency.
In this way, the second feature extraction part can extract from the input information the feature data necessary for the decision of the subsequent decision part 203 of the neural network.
According to the embodiment of the invention, using sampled coordinate data of the target obstacle trajectories and of the road boundaries avoids processing raw image data directly; the data are acquired with the perception system available to the vehicle and the electronic map, which reduces the complexity of the second feature extraction part of the neural network and, as a whole, improves the operating efficiency of the entire neural network.
From this point on, the feature data generated by the second feature extraction part 202 of the neural network are one-dimensional and correspond to one part of the neurons of the decision part 203 into which they are input, while another part of the neurons of the decision part 203 receives the output of the first feature extraction part 201 in a fully connected manner. The decision part 203 is a fully connected network, and in one example its activation function is the rectified linear unit (ReLU). The linear rectification function generally refers to the nonlinear functions represented by the ramp function and its variants. Compared with traditional neural network activation functions, such as the logistic sigmoid and hyperbolic functions like tanh, the linear rectification function has the following advantages:
First, it follows biological principles: brain research has shown that the information encoding of biological neurons is usually scattered and sparse; typically only about 1%-4% of the neurons in the brain are active at the same time. Linear rectification together with regularization can be used to tune the activity (i.e. positive output) of the neurons in an artificial neural network. By contrast, the logistic function reaches 1/2 when the input is 0, i.e. it is already in a half-saturated stable state, which does not match what biology suggests for a simulated neural network. It should be noted, however, that typically about 50% of the neurons in a neural network using rectified linear units are active.
Second, more efficient gradient descent and back propagation: the problems of exploding and vanishing gradients are avoided.
Third, a simplified calculation process: the cost of complex activation functions such as exponentials is avoided, and the sparsity of activity reduces the overall computational cost of the neural network.
Further, in this example the optimizer is Adam, which combines the advantages of the AdaGrad and RMSProp optimization algorithms: it jointly considers the first moment estimate of the gradient (the mean of the gradient) and the second moment estimate (the uncentered variance of the gradient) to compute the update step. The Adam optimizer performs well in practice and has the following notable advantages: it is simple to implement, computationally efficient, and has low memory requirements; parameter updates are invariant to gradient rescaling; the hyperparameters are interpretable and typically require little or no tuning; the update step size is confined to a rough range by the initial learning rate; a step-size annealing process (automatic adjustment of the learning rate) arises naturally; and it is well suited to large-scale data and parameter settings, to non-stationary objective functions, and to problems with sparse or very noisy gradients.
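A training-step sketch combining the pieces named above (ReLU activations, the Adam optimizer, and a per-obstacle classification loss) might look as follows; the learning rate and the use of cross-entropy are assumptions, since the patent does not specify them.

```python
import torch

net = CompositeDrivingNet()  # from the earlier sketch
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)  # lr is an assumption
loss_fn = torch.nn.CrossEntropyLoss()  # 6-way decision label per obstacle

def train_step(x1, x2, labels):
    """x1: (B, 36); x2: (B, 2, 12, 30); labels: (B, N) integer class indices."""
    optimizer.zero_grad()
    logits = net(x1, x2)                                   # (B, N, 6)
    loss = loss_fn(logits.reshape(-1, 6), labels.reshape(-1))
    loss.backward()
    optimizer.step()
    return loss.item()
```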
In this way, through the calculations of, for example, three to four fully connected layers, the current driving strategy of the autonomous vehicle can be derived, which in one example is a decision for each obstacle. Fig. 2a shows the decision part 203 of the neural network with three layers, but the invention is not limited thereto. In one example, the decision for each obstacle is shown in the following table:
Decision label | ignore, yield, bypass, follow, avoid, cut-in
There are six classes of labels in total. In one example, the output layer of the decision part 203 of the neural network contains 6N channels, with each group of six channels corresponding to the classification of one obstacle and a data type such as boolean representing the decision for that obstacle. In this way, the decision problem for the target obstacles is converted into a multi-classification problem over multiple target obstacles.
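Decoding the 6N output channels into one label per obstacle is then a row-wise argmax, for example:

```python
# Sketch: turn the network's (N, 6) output for one sample into a decision
# label per target obstacle via a row-wise argmax.
LABELS = ["ignore", "yield", "bypass", "follow", "avoid", "cut-in"]

def decode_decisions(logits):
    """logits: (N, 6) tensor from the decision part for one sample."""
    return [LABELS[i] for i in logits.argmax(dim=1).tolist()]
```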
The above description has focused on the neural network on which embodiments of the present invention are based. The data input to the neural network and related to the automatic driving of the vehicle may be obtained by one or more of the perception system, electronic map, and electronic navigation available to the vehicle, and is not limited to the data described; any data that affects and/or facilitates the automatic driving of the vehicle may be designed as input data for training the neural network.
The data required to train the neural network shown in Fig. 2a may be collected by real-vehicle testing in specific scenarios, where the test vehicle carries one or more of a perception system, an electronic map, electronic navigation, and the like to provide the required inputs of the neural network; the neural network is then trained in combination with subsequent manual screening and labeling.
Fig. 5 illustrates an example of a neural-network-based automatic driving method according to an embodiment of the present invention. The neural network is, for example, the one described above in connection with Figs. 2a to 4b and is not described again here. The method starts with step 501, in which first-class data associated with the automatic driving of the vehicle are input into the first feature extraction part 201 of the neural network to extract first-class features. At step 502, second-class data associated with the automatic driving of the vehicle are acquired and input into the second feature extraction part 202 of the neural network to extract second-class features. Then, at step 503, the first-class features and the second-class features are input into the decision part 203 of the neural network to determine the current driving strategy of the autonomous vehicle.
It should be understood that the order of description of the above steps does not represent the order of execution unless the steps described later are premised on the execution of the steps described earlier.
Fig. 6 illustrates an example of a neural-network-based automatic driving apparatus according to an embodiment of the present invention. The neural network is, for example, the one described above in connection with Figs. 2a to 4b and is not described again here. The automatic driving apparatus 600 includes: a first extraction module 601 configured to input first-class data associated with automatic driving of a vehicle into the first feature extraction part of the neural network to extract first-class features, wherein the first-class data are one-dimensional data; a second extraction module 602 configured to input second-class data associated with automatic driving of the vehicle into the second feature extraction part of the neural network to extract second-class features, wherein the second-class data are multi-dimensional data; and a determination module 603 configured to input the first-class features and the second-class features into the decision part of the neural network to determine the current driving strategy of the autonomous vehicle.
Fig. 7 illustrates a hardware implementation environment diagram according to an embodiment of the invention. Referring to Fig. 7, in an embodiment of the present invention, the autopilot device 702 includes a processor 704 comprising a hardware element 710. The processor 704 includes, for example, one or more processors such as digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. The term "processor", as used herein, may refer to any of the foregoing structures or any other structure suitable for implementing the techniques described herein. Additionally, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for autonomous driving, or incorporated in combined hardware and/or software modules. The techniques may also be fully implemented in one or more circuits or logic elements. The methods in this disclosure may be implemented in various components, modules, or units, but need not be implemented by different hardware units; rather, as noted above, the various components, modules, or units may be combined or provided by a collection of interoperating hardware units (including one or more processors as noted above) in combination with appropriate software and/or firmware.
In one or more examples, the aspects described above in connection with fig. 1-6 may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium 706 and executed by a hardware-based processor. Computer-readable media 706 may include computer-readable storage media corresponding to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, such as according to a communication protocol. In this manner, the computer-readable medium 706 may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or carrier wave. The data storage medium can be any available medium that can be read by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. The computer program product may include a computer-readable medium 706.
By way of example, and not limitation, such computer-readable storage media can comprise memory such as RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage, flash memory, or any other memory 712 that can be used to store desired program code in the form of instructions or data structures and that can be read by a computer. Also, any connection is properly termed a computer-readable medium 706. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media 706.
The autopilot device 702 may also be provided in the autopilot apparatus 700 together with an I/O interface 706 for transferring data, as well as other functionality 714 (e.g., data acquisition devices configured to acquire the first-class and second-class data associated with the automatic driving of a vehicle, including for example one or more of a perception system, an electronic map, and electronic navigation). The autopilot apparatus 700 may be included in different vehicles such as cars, motorcycles, tricycles, and trucks, whether fuel-powered and/or electric; cars 716, trucks 718, and other vehicles 720 are illustrated. Each of these configurations includes devices that may have generally different configurations and capabilities, and the autopilot apparatus 700 may therefore be configured according to one or more of the different vehicle categories. The techniques of this disclosure may also be implemented, in whole or in part, on the "cloud" 722 through the use of a distributed system, such as through the platform 724 described below.
Cloud 722 includes and/or is representative of platform 724 for resources 726. The platform 724 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 722. The resources 726 may include applications and/or data that may be used when executing computer processes on servers remote from the computing device 702. The resources 726 may also include services provided over the internet and/or over a subscriber network such as a cellular or Wi-Fi network.
The platform 724 may abstract resources and functionality to connect the computing device 702 with other computing devices. The platform 724 may also serve to abstract the scaling of resources, providing a level of scale that corresponds to the demand encountered for the resources 726 implemented via the platform 724. Accordingly, in interconnected-device embodiments, implementation of the functionality described herein may be distributed throughout the system; for example, the functionality may be implemented in part on the computing device 702 and in part by the platform 724 that abstracts the functionality of the cloud 722.
According to the embodiments of the present invention, the processing of target obstacles is applicable to a wide variety of scenes, including complex scenes, and to autonomous driving at level L4 and above, because tasks are defined per scene and target obstacles are determined, at a fine granularity, on the basis of those tasks. The neural network of the embodiments has a flexible structure: it can effectively utilize existing data that benefit automatic driving without being limited to a particular data structure. At the same time, the structurally optimized network lets different sub-networks play to their strengths in processing different kinds of data, and allows each part of the network to be simplified, thereby simplifying the structure of the network as a whole and improving its overall efficiency.
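For illustration only, the two-branch structure summarized above might be sketched as follows in PyTorch; the class name, layer sizes, and input shapes are assumptions made for the sketch and are not part of the disclosure:

    # Sketch of a two-branch driving network: a feed-forward branch for
    # one-dimensional data, a convolutional branch for multi-dimensional
    # data, and a feed-forward decision part (all sizes are assumed).
    import torch
    import torch.nn as nn

    class TwoBranchDrivingNet(nn.Module):
        def __init__(self, vec_dim=32, grid_channels=1, n_decisions=6):
            super().__init__()
            # First feature extraction portion: feed-forward network for
            # one-dimensional data (task, ego state, traffic-rule data).
            self.first_part = nn.Sequential(
                nn.Linear(vec_dim, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),
            )
            # Second feature extraction portion: convolutional network for
            # multi-dimensional data (obstacle trajectories, road boundaries).
            self.second_part = nn.Sequential(
                nn.Conv2d(grid_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Decision portion: feed-forward network over both feature sets.
            self.decision_part = nn.Sequential(
                nn.Linear(32 + 16, 64), nn.ReLU(),
                nn.Linear(64, n_decisions),
            )

        def forward(self, first_class_data, second_class_data):
            f1 = self.first_part(first_class_data)    # first-class features
            f2 = self.second_part(second_class_data)  # second-class features
            return self.decision_part(torch.cat([f1, f2], dim=-1))

Each sub-network in such a sketch stays small because it handles only one kind of data, which is the simplification the paragraph above describes.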
It should be noted that terms such as "first" and "second" in this disclosure do not indicate importance or order; they are used only for distinction. Unless a specific description or a prerequisite constraint dictates otherwise (i.e., the execution of one step is premised on the result of another step), the order in which method steps are described does not represent the order in which they must be executed, and the described method steps can be executed in any feasible and reasonable order.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.

Claims (15)

1. An automatic driving method based on a neural network, characterized in that:
the neural network comprises a trained first feature extraction portion, a second feature extraction portion, and a decision portion, wherein the first feature extraction portion and the decision portion are feed-forward neural networks and the second feature extraction portion is a convolutional neural network, and the method comprises:
inputting first class data associated with automatic driving of a vehicle into the first feature extraction portion of the neural network to extract first class features, wherein the first class data is one-dimensional data;
inputting second class data associated with the automatic driving of the vehicle into the second feature extraction portion of the neural network to extract second class features, wherein the second class data is multi-dimensional data; and
inputting the first class features and the second class features into the decision portion of the neural network to determine a current driving strategy of the autonomous vehicle.
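As a hedged illustration of these three inputting steps, reusing the hypothetical TwoBranchDrivingNet sketched after the summary of the embodiments above (all shapes and values are assumptions):

    import torch

    net = TwoBranchDrivingNet()
    first_class = torch.randn(1, 32)          # one-dimensional data, batch of 1
    second_class = torch.randn(1, 1, 64, 64)  # multi-dimensional data, e.g. a grid
    logits = net(first_class, second_class)   # output of the decision portion
    strategy = logits.argmax(dim=-1)          # index of the current driving strategy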
2. The method of claim 1, wherein:
the first type of data comprises: current task data of the autonomous vehicle, status data of the autonomous vehicle itself, and data relating to current traffic regulations;
the second type of data comprises: data of a current target obstacle of the autonomous vehicle and data relating to a road in a current driving environment of the autonomous vehicle; and
the current driving strategy of the autonomous vehicle comprises: a decision result for the target obstacle.
3. The method of claim 2, wherein: the task data includes task data associated with a particular road scene.
4. The method of claim 1 or 2, wherein: the second type of data includes sampled coordinate data of a trajectory of a current target obstacle of the autonomous vehicle and sampled coordinate data of a road boundary in a current driving environment of the autonomous vehicle.
5. The method of claim 4, wherein: the sampled coordinate data of the trajectory of the current target obstacle of the autonomous vehicle and the sampled coordinate data of the road boundary in the current driving environment of the autonomous vehicle are in the same format.
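One plausible way to put both kinds of samples into the same format is fixed-length resampling along arc length; the following sketch, including the function name, is an assumption for illustration, not the method of the disclosure:

    import numpy as np

    def sample_polyline(points, n_samples=16):
        # Resample a polyline of (x, y) points at equal arc-length steps so
        # obstacle trajectories and road boundaries share one fixed shape.
        pts = np.asarray(points, dtype=np.float32)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
        s = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative length
        t = np.linspace(0.0, s[-1], n_samples)
        return np.stack([np.interp(t, s, pts[:, 0]),
                         np.interp(t, s, pts[:, 1])], axis=1)  # (n_samples, 2)

    trajectory = sample_polyline([(0, 0), (5, 1), (12, 3)])  # target obstacle
    boundary = sample_polyline([(0, -2), (20, -2)])          # road boundary
    assert trajectory.shape == boundary.shape                # same format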
6. The method of claim 1 or 2, wherein: an output node of the first feature extraction portion of the neural network is connected to the corresponding input node in the decision portion of the neural network in a fully connected manner, and an output node of the second feature extraction portion of the neural network is connected to the corresponding input node in the decision portion of the neural network in a one-to-one correspondence.
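One speculative reading of this mixed connection scheme, sketched in PyTorch (treating "one-to-one" as an identity pass-through is our assumption, as are all sizes and names):

    import torch
    import torch.nn as nn

    class MixedConnectionDecision(nn.Module):
        def __init__(self, n_first=32, n_second=16, n_decisions=6):
            super().__init__()
            # Full connection: every output node of the first feature
            # extraction portion feeds the corresponding decision inputs.
            self.full = nn.Linear(n_first, n_second)
            self.head = nn.Sequential(
                nn.Linear(n_second + n_second, 64), nn.ReLU(),
                nn.Linear(64, n_decisions),
            )

        def forward(self, first_features, second_features):
            # One-to-one: each output node of the second feature extraction
            # portion passes unchanged to its matching input node.
            inputs = torch.cat([self.full(first_features), second_features],
                               dim=-1)
            return self.head(inputs)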
7. The method of claim 2, wherein:
the status data of the autonomous vehicle itself comprises: position coordinates, speed, heading, and geometric information of the autonomous vehicle;
the data relating to current traffic regulations comprises: data indicating the state of the traffic light that the autonomous vehicle is currently facing, and/or speed limit data for the road on which the autonomous vehicle is currently traveling;
the target obstacle comprises a plurality of target obstacles, and the decision result comprises a decision result for each of the plurality of target obstacles respectively;
the first type of data further comprises geometric data of the target obstacle; and
the decision result comprises: ignore, yield, detour, follow, dodge, and cut.
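As a purely illustrative encoding of the one-dimensional inputs listed in this claim (the field names, the one-hot task encoding over the seven tasks of claim 9, and all values are hypothetical):

    import numpy as np

    def build_first_class_vector(task_id, ego, traffic):
        # One-hot encoding over seven task types (an assumed encoding).
        task = np.zeros(7, dtype=np.float32)
        task[task_id] = 1.0
        # Ego state: position coordinates, speed, heading, geometry.
        ego_vec = np.array([ego["x"], ego["y"], ego["speed"], ego["heading"],
                            ego["length"], ego["width"]], dtype=np.float32)
        # Traffic-rule data: traffic-light state and road speed limit.
        rules = np.array([traffic["light"], traffic["speed_limit"]],
                         dtype=np.float32)
        return np.concatenate([task, ego_vec, rules])  # one-dimensional data

    vec = build_first_class_vector(
        0,  # e.g., going straight
        {"x": 10.0, "y": 3.5, "speed": 8.2, "heading": 0.1,
         "length": 4.6, "width": 1.9},
        {"light": 1.0, "speed_limit": 16.7},
    )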
8. The method of claim 2, wherein: the determination of the target obstacle is associated with the task data.
9. The method of claim 2, wherein: the task data includes one of: going straight, changing to the left lane, changing to the right lane, proceeding straight through an intersection, turning left at an intersection, turning right at an intersection, and merging.
10. An automatic driving device based on a neural network, characterized in that:
the neural network comprises a trained first feature extraction portion, a second feature extraction portion, and a decision portion, wherein the first feature extraction portion and the decision portion are feed-forward neural networks and the second feature extraction portion is a convolutional neural network, and the apparatus comprises:
a first extraction module configured to input a first class of data associated with automatic driving of a vehicle into the first feature extraction portion of the neural network to extract a first class of features, wherein the first class of data is one-dimensional data;
a second extraction module configured to input a second class of data associated with the automatic driving of the vehicle into the second feature extraction portion of the neural network to extract a second class of features, wherein the second class of data is multi-dimensional data; and
a determination module configured to input the first class of features and the second class of features into the decision portion of the neural network to determine a current driving strategy of the autonomous vehicle.
11. An autopilot device, characterized by comprising:
a processor; and
a memory configured to store computer-executable instructions that, when executed by the processor, cause the processor to implement the method of any one of claims 1-9.
12. A computer-readable storage medium having computer-executable instructions stored thereon, characterized in that: the instructions, when executed by a computing device, cause the computing device to implement the method of any one of claims 1-9.
13. An autopilot system, characterized by comprising:
a data acquisition device configured to acquire a first type of data and a second type of data associated with autonomous driving of a vehicle; and
the autopilot device of claim 11.
14. The autopilot system of claim 13 wherein:
the data acquisition device includes one or more of a perception system available to the vehicle, an electronic map, and an electronic navigation.
15. A vehicle, characterized by comprising the autopilot system of claim 13.
CN202010630799.8A 2020-07-03 2020-07-03 Automatic driving method, device, equipment, system, vehicle and computer readable storage medium Pending CN111572562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010630799.8A CN111572562A (en) 2020-07-03 2020-07-03 Automatic driving method, device, equipment, system, vehicle and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111572562A true CN111572562A (en) 2020-08-25

Family

ID=72124037

Country Status (1)

Country Link
CN (1) CN111572562A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090003703A1 (en) * 2007-06-26 2009-01-01 Microsoft Corporation Unifield digital ink recognition
CN107609633A (en) * 2017-05-03 2018-01-19 同济大学 The position prediction model construction method of vehicle traveling influence factor based on deep learning in car networking complex network
CN108225364A (en) * 2018-01-04 2018-06-29 吉林大学 A kind of pilotless automobile driving task decision system and method
CN109697875A (en) * 2017-10-23 2019-04-30 华为技术有限公司 Plan the method and device of driving trace
CN110371112A (en) * 2019-07-06 2019-10-25 深圳数翔科技有限公司 A kind of intelligent barrier avoiding system and method for automatic driving vehicle
CN110647839A (en) * 2019-09-18 2020-01-03 深圳信息职业技术学院 Method and device for generating automatic driving strategy and computer readable storage medium
CN111258217A (en) * 2018-11-30 2020-06-09 百度(美国)有限责任公司 Real-time object behavior prediction
CN111252061A (en) * 2018-11-30 2020-06-09 百度(美国)有限责任公司 Real-time decision making for autonomous vehicles

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114148344A (en) * 2020-09-08 2022-03-08 华为技术有限公司 Vehicle behavior prediction method and device and vehicle
WO2022052556A1 (en) * 2020-09-08 2022-03-17 华为技术有限公司 Method and apparatus for predicting vehicle behaviour, and vehicle
CN114148344B (en) * 2020-09-08 2023-06-02 华为技术有限公司 Vehicle behavior prediction method and device and vehicle
CN112561057A (en) * 2020-12-09 2021-03-26 清华大学 State control method and device
CN112859885A (en) * 2021-04-25 2021-05-28 四川远方云天食品科技有限公司 Cooperative optimization method for path of feeding robot
CN113276884A (en) * 2021-04-28 2021-08-20 吉林大学 Intelligent vehicle interactive decision passing method and system with variable game mode
CN113341941A (en) * 2021-08-04 2021-09-03 北京三快在线科技有限公司 Control method and device of unmanned equipment
EP4140845A1 (en) * 2021-08-17 2023-03-01 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for predicting motion track of obstacle and autonomous vehicle
CN113568416A (en) * 2021-09-26 2021-10-29 智道网联科技(北京)有限公司 Unmanned vehicle trajectory planning method, device and computer readable storage medium
CN113706737A (en) * 2021-10-27 2021-11-26 北京主线科技有限公司 Road surface inspection system and method based on automatic driving vehicle
CN113706737B (en) * 2021-10-27 2022-01-07 北京主线科技有限公司 Road surface inspection system and method based on automatic driving vehicle

Similar Documents

Publication Publication Date Title
US11899411B2 (en) Hybrid reinforcement learning for autonomous driving
CN111572562A (en) Automatic driving method, device, equipment, system, vehicle and computer readable storage medium
CN110796856B (en) Vehicle lane change intention prediction method and training method of lane change intention prediction network
CN107229973B (en) Method and device for generating strategy network model for automatic vehicle driving
CN112099496B (en) Automatic driving training method, device, equipment and medium
US11256964B2 (en) Recursive multi-fidelity behavior prediction
Wang et al. End-to-end autonomous driving: An angle branched network approach
Hu et al. Learning a deep cascaded neural network for multiple motion commands prediction in autonomous driving
US20220067511A1 (en) Methods, systems, and apparatuses for user-understandable explainable learning models
Zaghari et al. Improving the learning of self-driving vehicles based on real driving behavior using deep neural network techniques
Chen Multimedia for autonomous driving
CN114882457A (en) Model training method, lane line detection method and equipment
Wang et al. Autonomous Driving System Driven by Artificial Intelligence Perception Fusion
Wang et al. End-to-end self-driving approach independent of irrelevant roadside objects with auto-encoder
Koenig et al. Bridging the gap between open loop tests and statistical validation for highly automated driving
CN114973199A (en) Rail transit train obstacle detection method based on convolutional neural network
Sun et al. Cross validation for CNN based affordance learning and control for autonomous driving
Wang et al. Vision-Based Autonomous Driving: A Hierarchical Reinforcement Learning Approach
WO2023060963A1 (en) Method and apparatus for identifying road information, electronic device, vehicle, and medium
US20230311930A1 (en) Capturing and simulating radar data for autonomous driving systems
CN115107806A (en) Vehicle track prediction method facing emergency scene in automatic driving system
CN115062202A (en) Method, device, equipment and storage medium for predicting driving behavior intention and track
CN114495050A (en) Multitask integrated detection method for automatic driving forward vision detection
Wang et al. A front water recognition method based on image data for off-road intelligent vehicle
Tewari et al. AI-based autonomous driving assistance system

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40027871; Country of ref document: HK)
SE01 Entry into force of request for substantive examination