CN113954858A - Method for planning vehicle driving route and intelligent automobile - Google Patents

Method for planning vehicle driving route and intelligent automobile

Info

Publication number
CN113954858A
CN113954858A (application CN202010698231.XA)
Authority
CN
China
Prior art keywords
information
vehicle
model
groups
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010698231.XA
Other languages
Chinese (zh)
Inventor
古强
庄雨铮
王志涛
刘武龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010698231.XA priority Critical patent/CN113954858A/en
Priority to PCT/CN2021/084330 priority patent/WO2022016901A1/en
Publication of CN113954858A publication Critical patent/CN113954858A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0011 Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
    • B60W60/005 Handover processes
    • B60W60/0059 Estimation of the risk associated with autonomous or manual driving, e.g. situation too complex, sensor failure or driver incapacity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a method for planning a vehicle driving route, which can be applied to intelligent automobiles and intelligent connected automobiles and comprises the following steps: obtaining first information, where the first information includes one or more of position information, lane information, navigation information and obstacle information of a vehicle. When the vehicle is in the automatic driving mode, the first information is input data of a first model, the output data of the first model is used to plan a driving route for the vehicle, and the first model is a model obtained by training on second information acquired in the manual driving mode with the driving trajectory of the vehicle in the manual driving mode as the training target; the kinds of information included in the second information are consistent with the kinds of information included in the first information, and the similarity between the first information and the second information satisfies a preset condition. When the vehicle is in the manual driving mode, the first information is used to train the first model. According to the scheme provided by the application, the driver take-over rate can be reduced for a large number of long-tail scenes.

Description

Method for planning vehicle driving route and intelligent automobile
Technical Field
The application relates to the field of automatic driving, in particular to a method for planning a vehicle driving route and an intelligent automobile.
Background
Artificial intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making. Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision and reasoning, human-computer interaction, recommendation and search, AI basic theory, and the like.
Automatic driving is a mainstream application in the field of artificial intelligence. Automatic driving technology relies on the cooperation of computer vision, radar, monitoring devices, a global positioning system and the like, so that a motor vehicle can drive automatically without active human operation. Autonomous vehicles use various computing systems to assist in transporting passengers from one location to another. Some autonomous vehicles may require some initial input or continuous input from an operator, such as a pilot, driver, or passenger. An autonomous vehicle permits the operator to switch from a manual operation mode to an autonomous driving mode or an intermediate mode. Because automatic driving technology does not require a human to drive the motor vehicle, human driving errors can in theory be effectively avoided, the occurrence of traffic accidents can be reduced, and the transport efficiency of the road can be improved. Therefore, automatic driving technology receives increasing attention.
In the field of automatic driving technology, route planning for an autonomous vehicle determines and optimizes the route of the vehicle (the route is also referred to as a path or trajectory) and in turn enables a better control strategy. However, the actual driving conditions of an autonomous vehicle are very complex, and a large number of long-tail scenes arise under the influence of weather, temperature and humidity, special obstacles, road shapes, and the like. A long-tail scene refers to one of the large number of atypical, mutually different scenes that an autonomous vehicle faces and that are difficult to handle with a uniform rule; how to plan the driving route of the vehicle for this large number of long-tail scenes therefore urgently needs to be solved.
Disclosure of Invention
The application provides a method for planning a vehicle driving route and related devices, which can reduce the driver take-over rate in a large number of long-tail scenes.
In order to solve the technical problem, the application provides the following technical scheme:
the first aspect of the application provides a method for planning a vehicle driving route, which can be used in the automatic driving field within the field of artificial intelligence. The method may include the following steps: first information is acquired, where the first information may include one or more of position information, lane information, navigation information and obstacle information of the vehicle. For example, the first information may include one kind of information, such as the position information of the vehicle. Or the first information may include two kinds of information, such as the position information and the lane information of the vehicle. Or the first information may include three kinds of information, such as the position information, the lane information and the navigation information of the vehicle. Or the first information may include four kinds of information, such as the position information, the lane information, the navigation information and the obstacle information of the vehicle. The lane information is used to determine the relative position of the vehicle and the lane line, the navigation information is used to predict the driving direction of the vehicle, and the obstacle information is used to determine the relative position of the vehicle and an obstacle. The environmental information around the vehicle can be determined from the acquired first information and may be used to determine whether the vehicle has entered the same or a similar scene. In some scenes with simple road conditions, the same scene may be determined as long as the position information of the vehicle is consistent; in some scenes with complex road conditions, the same scene may be determined only when the position information, the lane information, the navigation information and the obstacle information of the vehicle are all consistent. When the vehicle is in the automatic driving mode, the first information is input data of a first model, the output data of the first model is used to plan a driving route for the vehicle, and the first model is a model obtained by training on second information acquired in the manual driving mode with the driving trajectory of the vehicle in the manual driving mode as the training target. The kinds of information that the second information may include are consistent with the kinds of information that the first information may include, and the similarity between the first information and the second information satisfies a preset condition; the similarity between the first information and the second information satisfying the preset condition is used to represent that a first scene and a second scene are the same or similar, where the first scene is the scene in which the vehicle is located when the vehicle acquires the first information, and the second scene is the scene in which the vehicle is located when the vehicle acquires the second information. For example, the first information includes position information of the vehicle, where the position information may include longitude and latitude information of the location where the vehicle is located.
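For illustration only, the composition of the first information can be sketched as a simple data structure; the field names and types below are assumptions made for this example and are not prescribed by the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical container for the "first information"; field names are
# illustrative assumptions, not part of the application.
@dataclass
class SceneInfo:
    position: Optional[Tuple[float, float]] = None   # (longitude, latitude) of the vehicle
    lane: Optional[List[float]] = None                # relative position to lane lines, e.g. lateral offsets
    navigation: Optional[List[Tuple[float, float]]] = None  # upcoming route points, used to predict heading
    obstacles: List[Tuple[float, float]] = field(default_factory=list)  # obstacle positions relative to the vehicle

# Example: first information containing only position information.
first_info = SceneInfo(position=(116.3975, 39.9087))

# Example: first information containing all four kinds of information.
first_info_full = SceneInfo(
    position=(116.3975, 39.9087),
    lane=[-1.6, 1.9],
    navigation=[(116.3980, 39.9090), (116.3990, 39.9095)],
    obstacles=[(5.0, 1.2)],
)
```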
For example, the first information includes first longitude and latitude information, the second information includes second longitude and latitude information, if a difference between the first longitude and latitude information and the second longitude and latitude information is smaller than a preset threshold, the similarity between the first information and the second information may be considered to satisfy a preset condition, and if the difference between the first longitude and latitude information and the second longitude and latitude information is larger than the preset threshold, the similarity between the first information and the second information may be considered to not satisfy the preset condition. When the vehicle is in the manual driving mode, the first information is used for training a first model, and output data of the first model is used for planning a driving route for the vehicle. According to the first aspect, the scheme provided by the application acquires environmental information around the vehicle in real time, such as the first information and the second information. When the vehicle is in the manual driving mode, the environment information around the vehicle is used as training data to train the model, so that the track output by the model can be close to the running track of the vehicle in the manual driving mode. When the vehicle is in the automatic driving mode, if the similarity between the environmental information of the vehicle and the environmental information around the vehicle used by the model training is determined to meet the preset condition, the environmental information around the vehicle is used as the input data of the model, and a driving route is planned for the vehicle according to the output data of the model. The scheme provided by the application can reduce the frequency of taking over by a driver.
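A minimal sketch of the preset condition for the position-only case described above is given below; the threshold value and the element-wise comparison are illustrative assumptions, since the application only requires that the difference be compared against a preset threshold.

```python
# Illustrative threshold in degrees; the application only requires "a preset threshold".
LATLON_THRESHOLD = 1e-4

def position_condition_met(first_pos, second_pos, threshold=LATLON_THRESHOLD):
    """Return True if the longitude/latitude difference between the first and
    second information is smaller than the preset threshold, i.e. the two
    scenes are treated as the same or similar."""
    d_lon = abs(first_pos[0] - second_pos[0])
    d_lat = abs(first_pos[1] - second_pos[1])
    return d_lon < threshold and d_lat < threshold

# Usage: compare the position in the first information against the position
# in the second information recorded in the manual driving mode.
same_scene = position_condition_met((116.3975, 39.9087), (116.39752, 39.90869))
```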
Optionally, with reference to the first aspect, in a first possible implementation manner, the method may further include: evaluating the similarity between the first information and M groups of information, where each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity of any two groups of information in the M groups of information does not satisfy the preset condition, and each group of information in the M groups of information is respectively used for training one model. That the first information is input data of the first model when the vehicle is in the automatic driving mode may include: when the similarity between the first information and the second information in the M groups of information satisfies the preset condition and the similarity between the second information and the first information is the largest among the M groups of information, the first information is input data of the first model. As can be seen from the first possible implementation manner of the first aspect, when the vehicle is in the automatic driving mode, the similarity of the scene (surrounding environment information) in which the vehicle is located may be evaluated by evaluating the similarity between the first information and the M groups of information; if the similarity between the first information and the M groups of information satisfies the preset condition, that is, the similarity between the current scene of the vehicle and a scene corresponding to an existing available model is considered to exceed a threshold, the available model with the highest similarity may be selected from the M models, the currently acquired surrounding environment information (for example, the first information) is used as the input of the available model with the highest similarity (for example, the first model), and the output data of that model may be used to plan a driving route for the vehicle.
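The selection of an available model in the automatic driving mode can be sketched as follows; the storage layout, the similarity function, and the threshold are assumptions for illustration.

```python
def select_model(first_info, stored_groups, similarity_fn, threshold):
    """stored_groups: list of (group_info, model) pairs, one model per group of
    information collected in the manual driving mode.  Returns the available
    model with the highest similarity to first_info if that similarity meets
    the preset condition, otherwise None."""
    best_model, best_sim = None, float("-inf")
    for group_info, model in stored_groups:
        sim = similarity_fn(first_info, group_info)
        if sim > best_sim:
            best_model, best_sim = model, sim
    if best_sim >= threshold:          # preset condition satisfied
        return best_model              # e.g. the "first model"
    return None
```

If a model is returned, the currently acquired surrounding environment information (the first information) is used as its input data and its output data is used to plan the driving route; if no model meets the preset condition, no available model can plan a route for this scene.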
Optionally, with reference to the first aspect, in a second possible implementation manner, the method may further include: and evaluating the similarity between the first information and the M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet a preset condition, each group of information in the M groups of information is respectively used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer. The first information is used to train the first model when the vehicle is in the manual driving mode, and may include: and when the similarity between the first information and the second information in the M groups of information meets a preset condition and the similarity between the second information and the first information in the M groups of information is maximum, the first information is used for training the first model to obtain an updated first model. As can be seen from the second possible implementation manner of the first aspect, if the similarity between the current scene and the scene corresponding to the existing available models exceeds the threshold, the available model with the highest similarity is selected, for example, the first model with the highest similarity is selected, the first model is updated and trained by using the first information, and the available model for the scene is updated, that is, the first model is updated.
Optionally, with reference to the first aspect, in a third possible implementation manner, the method may further include: evaluating the similarity between the first information and M groups of information, where each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, each group of information in the M groups of information is respectively used for training one model, and M is a positive integer. That the first information is used to train the first model when the vehicle is in the manual driving mode may include: when the similarity between the first information and each group of information in the M groups of information does not satisfy the preset condition, the first information is used to perform a first training of the first model. When the similarity between the first information and each group of the M groups of information does not satisfy the preset condition, it may be considered that no scene that is the same as or similar to the current scene of the vehicle exists in the historical manual driving mode. At this time no output data of an available model can be used to plan a driving route for the vehicle. A temporary model is then constructed and trained using the first information, forming a temporary model for the scene; that is, the first model is trained for the first time using the first information. The first model may not yet satisfy the training target, that is, the deviation between the output trajectory of the first model and the driving trajectory of the vehicle in the manual driving mode may not fall within a preset range. The first model after the first training is therefore a temporary model rather than an available model; when information similar to the first information is acquired multiple more times in the manual driving mode, the first model is trained iteratively until it meets the training target, at which point the first model becomes an available model and can be used to plan a driving route for the vehicle.
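The dispatch between updating an existing model and creating a temporary model in the manual driving mode can be sketched as follows; the data structures and callback functions are illustrative assumptions rather than a prescribed implementation.

```python
def on_manual_driving_sample(first_info, manual_trajectory, stored_groups,
                             similarity_fn, threshold, new_model_fn,
                             train_step_fn, meets_target_fn):
    """Sketch of the training dispatch in the manual driving mode.

    stored_groups  : list of dicts {"info": ..., "model": ..., "available": bool}
    new_model_fn   : builds a temporary (untrained) model
    train_step_fn  : one training step with the manual trajectory as target
    meets_target_fn: True when the model's output trajectory deviates from the
                     manual driving trajectory by no more than a preset range
    """
    best, best_sim = None, float("-inf")
    for group in stored_groups:
        sim = similarity_fn(first_info, group["info"])
        if sim > best_sim:
            best, best_sim = group, sim

    if best is not None and best_sim >= threshold:
        # Same or similar scene already seen: update the existing model.
        train_step_fn(best["model"], first_info, manual_trajectory)
        best["available"] = meets_target_fn(best["model"])
    else:
        # No matching scene: build a temporary model and train it for the
        # first time; it only becomes "available" once the target is met.
        model = new_model_fn()
        train_step_fn(model, first_info, manual_trajectory)
        stored_groups.append({"info": first_info, "model": model,
                              "available": meets_target_fn(model)})
```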
Optionally, with reference to the first aspect or the first to third possible implementation manners of the first aspect, in a fourth possible implementation manner, when the vehicle is in an automatic driving mode, the method may further include: and when the current position information of the vehicle is determined to be consistent with the position information of the vehicle acquired in the manual driving mode, sending a prompt message, wherein the prompt message is used for indicating the vehicle to plan a driving route for the vehicle according to the output data of the first model, and the first model is obtained by training the position information of the vehicle acquired in the manual driving mode.
Optionally, with reference to the first aspect or the first to third possible implementation manners of the first aspect, in a fifth possible implementation manner, when the vehicle is in a manual driving mode, the method may further include: and sending a prompt message, wherein the prompt message is used for instructing the training of the first model according to the current position information of the vehicle.
Optionally, with reference to the first aspect or the first to fifth possible implementation manners of the first aspect, in a sixth possible implementation manner, the method may further include: and determining the similarity of the same kind of information in the first information and the second information, wherein the linear weighted sum of the similarities of the same kind of information is used for determining the similarity of the first information and the second information.
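A minimal sketch of the linear weighted sum is shown below; the weight values and the normalization over the kinds of information that are present are illustrative assumptions.

```python
# Illustrative weights for the four kinds of information; the application only
# requires a linear weighted sum, not these particular values.
WEIGHTS = {"position": 0.4, "lane": 0.2, "navigation": 0.2, "obstacles": 0.2}

def overall_similarity(per_type_similarity, weights=WEIGHTS):
    """per_type_similarity maps each kind of information shared by the first
    and second information to a similarity in [0, 1]; the overall similarity
    is their linear weighted sum over the kinds that are present."""
    total = sum(weights[k] * s for k, s in per_type_similarity.items())
    norm = sum(weights[k] for k in per_type_similarity)
    return total / norm if norm else 0.0

# Example: only position and lane information are present in both groups.
sim = overall_similarity({"position": 0.95, "lane": 0.80})
```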
Optionally, with reference to the first aspect or the first to sixth possible implementation manners of the first aspect, in a seventh possible implementation manner, the first model may include a convolutional neural network CNN or a recurrent neural network RNN. According to a seventh possible implementation manner of the first aspect, two possible first models are provided, so that the diversity of the scheme is increased.
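As a hedged illustration of one possible first model, the sketch below uses a small recurrent network (PyTorch and all dimensions are assumptions; the application does not prescribe a framework) that maps encoded environment information to a trajectory and is trained with the manual-driving trajectory as the target.

```python
import torch
import torch.nn as nn

# A minimal sketch of a possible "first model": a recurrent network that maps a
# sequence of environment-information vectors to a sequence of (x, y) trajectory
# points.  Feature size, hidden size, and horizon are illustrative assumptions.
class TrajectoryRNN(nn.Module):
    def __init__(self, feature_dim=16, hidden_dim=64, horizon=20):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon * 2)  # horizon (x, y) points
        self.horizon = horizon

    def forward(self, x):                  # x: (batch, seq_len, feature_dim)
        _, h = self.rnn(x)                 # h: (1, batch, hidden_dim)
        out = self.head(h.squeeze(0))      # (batch, horizon * 2)
        return out.view(-1, self.horizon, 2)

# Training step with the manual-driving trajectory as the target (sketch).
model = TrajectoryRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(8, 10, 16)           # stand-in for encoded second information
manual_traj = torch.randn(8, 20, 2)          # stand-in for the recorded driving track
loss = nn.functional.mse_loss(model(features), manual_traj)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```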
A second aspect of the present application provides an apparatus for planning a driving route of a vehicle, which may include: an acquisition module, configured to acquire first information, where the first information may include one or more of position information, lane information, navigation information and obstacle information of the vehicle, the lane information is used to determine the relative position of the vehicle and a lane line, the navigation information is used to predict the driving direction of the vehicle, and the obstacle information is used to determine the relative position of the vehicle and an obstacle; a control module, configured to plan a driving route for the vehicle according to output data of a first model when the vehicle is in the automatic driving mode, where the first information acquired by the acquisition module is input data of the first model, the first model is a model obtained by training on second information acquired in the manual driving mode with the driving trajectory of the vehicle in the manual driving mode as the training target, the kinds of information that the second information may include are consistent with the kinds of information that the first information may include, and the similarity between the first information and the second information satisfies a preset condition; and a training module, configured to train the first model according to the first information acquired by the acquisition module when the vehicle is in the manual driving mode.
Optionally, with reference to the second aspect, in a first possible implementation manner, the apparatus may further include: an evaluation module, configured to evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, and each group of information in the M groups of information is respectively used for training one model. When the similarity between the first information and the second information in the M groups of information satisfies the preset condition and the similarity between the second information and the first information is the largest among the M groups of information, the first information is input data of the first model.
Optionally, with reference to the second aspect, in a second possible implementation manner, the apparatus may further include: an evaluation module, configured to evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, each group of information in the M groups of information is respectively used for training one model, the second information in the M groups of information is used for training the first model, and M is a positive integer. When the similarity between the first information and the second information in the M groups of information satisfies the preset condition and the similarity between the second information and the first information is the largest among the M groups of information, the first information is used to train the first model to obtain an updated first model.
Optionally, with reference to the second aspect, in a third possible implementation manner, the apparatus may further include: an evaluation module, configured to evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, each group of information in the M groups of information is respectively used for training one model, and M is a positive integer. When the similarity between the first information and each group of information in the M groups of information does not satisfy the preset condition, the first information is used to perform a first training of the first model.
Optionally, with reference to the second aspect or the first to third possible implementation manners of the second aspect, in a fourth possible implementation manner, the apparatus may further include: a sending module, configured to send a prompt message when the evaluation module determines that the current position information of the vehicle is consistent with the position information of the vehicle acquired in the manual driving mode, where the prompt message is used to instruct the vehicle to plan a driving route for the vehicle according to the output data of the first model, and the first model is a model obtained by training on the position information of the vehicle acquired in the manual driving mode.
Optionally, with reference to the second aspect or the first to third possible implementation manners of the second aspect, in a fifth possible implementation manner, the apparatus may further include: a sending module, configured to send a prompt message, where the prompt message is used to indicate that the first model is to be trained according to the current position information of the vehicle.
Optionally, with reference to the second aspect or the first to fifth possible implementation manners of the second aspect, in a sixth possible implementation manner, the apparatus may further include: a processing module, configured to determine the similarity of each same kind of information in the first information and the second information, where a linear weighted sum of the similarities of the same kinds of information is used to determine the similarity between the first information and the second information.
Optionally, with reference to the second aspect or the first to sixth possible implementation manners of the second aspect, in a seventh possible implementation manner, the first model may include a convolutional neural network CNN or a recurrent neural network RNN.
The third aspect of the present application provides a system for planning a driving route of a vehicle, which may include the vehicle and a cloud-side device, wherein the vehicle is configured to obtain first information, and the first information may include one or more of position information of the vehicle, lane information, navigation information, and obstacle information, the lane information is used to determine a relative position of the vehicle and a lane line, the navigation information is used to predict a driving direction of the vehicle, and the obstacle information is used to determine a relative position of the vehicle and an obstacle. The vehicle is further used for sending the first information and the driving mode of the vehicle to the cloud-side equipment. The cloud-side equipment is used for planning a driving route for the vehicle according to output data of the first model when the vehicle is determined to be in the automatic driving mode, the first information is input data of the first model, the first model is a model obtained by training second information acquired in the manual driving mode by taking a driving track of the vehicle in the manual driving mode as a training target, the type of the information which can be included in the second information is consistent with the type of the information which can be included in the first information, and the similarity between the first information and the second information meets a preset condition. And the cloud side equipment is also used for training the first model according to the first information when the vehicle is determined to be in the manual driving mode.
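One way the vehicle-to-cloud exchange in this system could be organized is sketched below; the JSON payload, the field names, and the two callback functions are assumptions for illustration only.

```python
import json

# Hypothetical payload the vehicle could send to the cloud-side device: the
# first information plus the current driving mode.  Field names and the use of
# JSON are assumptions for illustration; the application does not fix a format.
def build_upload_message(first_info: dict, driving_mode: str) -> bytes:
    assert driving_mode in ("manual", "autonomous")
    return json.dumps({
        "first_information": first_info,   # position / lane / navigation / obstacle info
        "driving_mode": driving_mode,
    }).encode("utf-8")

# Cloud side (sketch): train in the manual driving mode, plan a route in the
# automatic driving mode.
def handle_upload(payload: bytes, train_fn, plan_fn):
    msg = json.loads(payload.decode("utf-8"))
    if msg["driving_mode"] == "manual":
        train_fn(msg["first_information"])        # update/train the first model
        return None
    return plan_fn(msg["first_information"])      # output data plans the route
```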
Optionally, with reference to the third aspect, in a first possible implementation manner, the cloud-side device is further configured to: and evaluating the similarity between the first information and the M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity of any two groups of information in the M groups of information does not meet a preset condition, and each group of information in the M groups of information is respectively used for training a model. And when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information in the M groups of information and the first information is maximum, the first information is input data of the first model.
Optionally, with reference to the third aspect, in a second possible implementation manner, the cloud-side device is further configured to: evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, each group of information in the M groups of information is respectively used for training one model, the second information in the M groups of information is used for training the first model, and M is a positive integer. When the similarity between the first information and the second information in the M groups of information satisfies the preset condition and the similarity between the second information and the first information is the largest among the M groups of information, the first information is used to train the first model to obtain an updated first model.
Optionally, with reference to the third aspect, in a third possible implementation manner, the cloud-side device is further configured to evaluate similarity between the first information and M groups of information, where each group of information in the M groups of information is information obtained when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy a preset condition, each group of information in the M groups of information is used for training one model, and M is a positive integer. And when the similarity of the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for carrying out first training on the first model.
Optionally, with reference to the third aspect or the first to third possible implementation manners of the third aspect, in a fourth possible implementation manner, the cloud-side device is further configured to send a prompt message when it is determined that the current position information of the vehicle is consistent with the position information of the vehicle acquired in the manual driving mode, where the prompt message is used to instruct the vehicle to plan a driving route for the vehicle according to the output data of the first model, and the first model is a model obtained by training on the position information of the vehicle acquired in the manual driving mode.
Optionally, with reference to the third aspect or the first to third possible implementation manners of the third aspect, in a fifth possible implementation manner, the cloud-side device is further configured to send a prompt message, where the prompt message is used to instruct that the first model be trained according to the current position information of the vehicle.
Optionally, with reference to the third aspect or the first to fifth possible implementation manners of the third aspect, in a sixth possible implementation manner, the cloud-side device is further configured to determine a similarity between the same kind of information in the first information and the second information, and a linear weighted sum of the similarities between the same kind of information is used to determine the similarity between the first information and the second information.
Optionally, with reference to the third aspect or the first to sixth possible implementation manners of the third aspect, in a seventh possible implementation manner, the first model may include a convolutional neural network (CNN) or a recurrent neural network (RNN).
A fourth aspect of the present application provides an apparatus for planning a driving route of a vehicle, which may include a processor and a memory coupled to the processor, where the memory stores program instructions that, when executed by the processor, implement the method for planning a vehicle driving route described in the first aspect or any one of the possible implementations of the first aspect.
A fifth aspect of the present application provides a computer-readable storage medium, which may include a program that, when run on a computer, causes the computer to perform the method of planning a vehicle travel route described in the first aspect or any one of the possible implementations of the first aspect.
A sixth aspect of the present application provides a smart car that may comprise processing circuitry and memory circuitry configured to perform the method of planning a vehicle driving route described in the first aspect or any one of the possible implementations of the first aspect.
A seventh aspect of the present application provides circuitry that may include processing circuitry configured to perform the method for planning a vehicle travel route described in the first aspect or any one of the possible implementations of the first aspect.
An eighth aspect of the present application provides a computer program which, when run on a computer, causes the computer to perform the method of planning a vehicle driving route as described in the first aspect or any one of the possible embodiments of the first aspect.
A ninth aspect of the present application provides a chip system, which may include a processor configured to support a cloud-side device or an apparatus for planning a driving route of a vehicle in implementing the functions referred to in the above aspects, for example, transmitting or processing the data and/or information referred to in the above methods. In one possible design, the chip system may further include a memory for storing program instructions and data necessary for the server or communication device. The chip system may consist of a chip, or may include a chip and other discrete devices.
For specific implementation steps of the second aspect to the ninth aspect and various possible implementation manners and beneficial effects brought by each possible implementation manner in the present application, reference may be made to descriptions in various possible implementation manners in the first aspect, and details are not repeated here.
Drawings
FIG. 1a is a schematic diagram of an application scenario for planning a driving route of a vehicle according to the present application;
FIG. 1b is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 1c is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 2a is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 2b is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 2c is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 3 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of a method for planning a driving route of a vehicle according to the present application;
FIG. 5a is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 5b is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 5c is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 5d is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 6 is a schematic flow chart of a method for planning a driving route of a vehicle according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of another method for planning a driving route of a vehicle according to an embodiment of the present application;
FIG. 8a is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 8b is a schematic diagram of another application scenario for planning a driving route of a vehicle according to the present application;
FIG. 9 is a schematic diagram illustrating a scenario of another method for planning a vehicle travel route provided herein;
FIG. 10 is a schematic structural diagram of an apparatus for planning a driving route of a vehicle according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The embodiments of the application provide a method and related devices for planning a vehicle driving route: with the driving trajectory of the vehicle in the manual driving mode as the training target, the motion information and surrounding environment information of the vehicle acquired in the manual driving mode are used to train a first model, and when the vehicle passes through the same or a similar scene again, a driving route can be planned for the vehicle according to the output data of the first model.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the present application may be applied to scenarios in which route planning is performed for various automatically driven agents; for example, they may be applied to route planning for an autonomous vehicle. Specifically, in automatic driving scenarios, many complex working conditions have no significant common features (for example, they are tied to a specific road section and specific road conditions), and such special working conditions are difficult to handle with a uniform regulation and control method. When driving in such scenarios, the take-over rate often cannot be reduced, i.e. the driver is always required to take over the vehicle. In addition, different scenes require different strategies, and different users have different driving styles. At present, automatic driving regulation and control is difficult to customize for a specific scene and a specific user so as to provide a personalized, customized regulation and control strategy. When an autonomous vehicle passes through a road environment containing a large number of obstacles that hinder information perception, for example many vehicles parked at the roadside, traffic congestion, passing a bus station, roadside barriers during road repair, severe weather, or other road environments with many obstacles hindering information perception, the autonomous vehicle may need the driver to take over. For example, fig. 1a to 1c are schematic diagrams of application scenarios for planning a vehicle driving route provided by the present application. The scenes shown in fig. 1a to 1c are parking lot scenes. As shown in fig. 1a, there is an obstacle to the right front of the parking space in which the vehicle is located, and it is assumed that the automatic parking trajectory planned by the automatic driving system only supports leaving the parking space to the right and forward. Under the current working condition, there is not enough space to leave the parking space to the right and forward; as shown in fig. 1b, multiple collisions may occur, and the current trajectory planning algorithm cannot plan a reasonable trajectory. At this time the driver intervenes and takes over the vehicle; as shown in fig. 1c, the vehicle can exit the parking lot by first moving forward to the left until it is perpendicular to the parking space. Fig. 2a to 2c are schematic diagrams of another application scenario for planning a vehicle driving route provided by the present application. The scenarios shown in fig. 2a to 2c are scenarios in which the vehicle travels on a road: the autonomous vehicle travels on a two-lane road with both lanes in the same direction, as shown in fig. 2a, where the left lane has a series of continuous manhole covers, and the trajectory planned by the autonomous planning module follows the center line of the left lane, which results in poor passenger comfort during traveling. After the driver takes over the vehicle manually, as shown in fig. 2b, the vehicle can be driven slightly to the right of the center line of the left lane; the vehicle still remains in the left lane, but the left wheels no longer keep running over the manhole covers. Alternatively, as shown in fig. 2c, the driver may also manually switch to the right lane when there is no other vehicle in the right lane.
After the driver takes over, in order to reduce the automatic driving take-over rate, the log of the scene may be analyzed; the analysis result shows that trajectory planning for this scene cannot be supported by the planning control module of the current version (the planning control module will be introduced below). For example, in the scenarios shown in fig. 1a to 1c, the planning module of the current version cannot support route planning for leaving the parking space in such scenarios, and in the scenarios shown in fig. 2a to 2c, the planning module of the current version cannot support trajectory planning in such scenarios. Therefore, the current planning module needs to be re-analyzed and re-designed, and trajectory planning logic for the scene (similar to the driving trajectory of the driver) needs to be added. Then, when the vehicle passes through this scene again, the trajectory for leaving the parking space can be followed automatically, or the trajectory that avoids the continuous manhole covers can be followed automatically. This approach requires the scene to be analyzed separately and a modification scheme to be made every time a take-over occurs, which is cumbersome, inefficient, and costly. Furthermore, similar to the above examples, long-tail scenes are numerous and difficult to handle with a unified regulation and control approach. In the scenes shown in fig. 1a to 1c, part of the space on the left side of the parking space needs to be borrowed. In other parking space scenes, if the planning rule newly added for this scene is also used, unsafe conditions such as collisions may be caused, creating danger and leading to manual take-over in those other scenes, so the take-over rate of automatic driving cannot be reduced.
Therefore, a vehicle route planning scheme is needed that plans routes for autonomous vehicles so that they can cope with a large number of long-tail scenes. As another example, the embodiments of the present application may also be applied to scenarios in which routes are planned for various types of robots, such as a freight robot, a probe robot, a sweeping robot, or another type of robot. Taking a freight robot as an example: when a plurality of freight robots work simultaneously, or a freight robot shuttles between shelves, other freight robots and the shelves may all become obstacles that hinder the freight robot from perceiving information. Therefore, when a freight robot travels in such a freight environment, in order to reduce collisions between freight robots and between freight robots and shelves, a route may be planned for the freight robot using the solution provided in the present application. It should be understood that the examples here are only intended to facilitate understanding of the application scenarios of the embodiments of the present application; the application scenarios of the embodiments of the present application are not exhaustively enumerated, and the embodiments of the present application are described in detail only by taking route planning for an autonomous vehicle as an example.
Embodiments of the present application are described below with reference to the accompanying drawings. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
To facilitate understanding of the present solution, in the embodiment of the present application, first, referring to fig. 3, please refer to fig. 3, where fig. 3 is a schematic structural diagram of an autonomous vehicle provided in the embodiment of the present application, and the autonomous vehicle 100 is configured in a full or partial autonomous driving mode, for example, the autonomous vehicle 100 may control itself while in the autonomous driving mode, and may determine a current state of the vehicle and its surrounding environment by human operation, determine a possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to a possibility that the other vehicle performs the possible behavior, and control the autonomous vehicle 100 based on the determined information. The autonomous vehicle 100 may also be placed into operation without human interaction while the autonomous vehicle 100 is in the autonomous mode.
Autonomous vehicle 100 may include various subsystems such as a travel system 102, a sensor system 104, a control system 106, one or more peripherals 108, as well as a power supply 110, a computer system 112, and a user interface 116. Alternatively, the autonomous vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the autonomous vehicle 100 may be interconnected by wires or wirelessly.
The travel system 102 may include components that provide powered motion to the autonomous vehicle 100. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels 121.
The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 119 may also provide energy to other systems of the autonomous vehicle 100. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more shafts that may be coupled to one or more wheels 121.
The sensor system 104 may include a number of sensors that sense information about the environment surrounding the autonomous vehicle 100. For example, the sensor system 104 may include a global positioning system 122 (the positioning system may be a global positioning system (GPS), a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor internal systems of the autonomous vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). The sensing data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a key function for the safe operation of the autonomous vehicle 100.
The positioning system 122 may be used, among other things, to estimate the geographic location of the autonomous vehicle 100. The IMU 124 is used to sense position and orientation changes of the autonomous vehicle 100 based on inertial acceleration. In one embodiment, IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may utilize radio signals to sense objects within the surrounding environment of the autonomous vehicle 100, which may be embodied as millimeter wave radar or lidar. In some embodiments, in addition to sensing objects, radar 126 may also be used to sense the speed and/or heading of an object. The laser rangefinder 128 may use a laser to sense objects in the environment in which the autonomous vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components. The camera 130 may be used to capture multiple images of the surrounding environment of the autonomous vehicle 100. The camera 130 may be a still camera or a video camera.
The control system 106 is used to control the operation of the autonomous vehicle 100 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the autonomous vehicle 100; for example, in one embodiment, it may be a steering wheel system. The throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the autonomous vehicle 100. The brake unit 136 is used to control the deceleration of the autonomous vehicle 100. The brake unit 136 may use friction to slow the wheels 121. In other embodiments, the brake unit 136 may convert the kinetic energy of the wheels 121 into an electric current. The brake unit 136 may also take other forms to slow the rotational speed of the wheels 121 so as to control the speed of the autonomous vehicle 100. The computer vision system 140 may be operable to process and analyze images captured by the camera 130 in order to identify objects and/or features in the environment surrounding the autonomous vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The route control system 142 is used to determine the travel route and travel speed of the autonomous vehicle 100. In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are used to determine the travel route and travel speed of the autonomous vehicle 100 in combination with data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps, respectively. The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the autonomous vehicle 100; such obstacles may be actual obstacles or virtual moving objects that may collide with the autonomous vehicle 100. In one example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The autonomous vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through peripherals 108. The peripheral devices 108 may include a wireless communication system 146, an in-vehicle computer 148, a microphone 150, and/or speakers 152. In some embodiments, the peripheral devices 108 provide a means for a user of the autonomous vehicle 100 to interact with the user interface 116. For example, the onboard computer 148 may provide information to a user of the autonomous vehicle 100. The user interface 116 may also operate the in-vehicle computer 148 to receive user input. The in-vehicle computer 148 may be operated via a touch screen. In other cases, the peripheral devices 108 may provide a means for the autonomous vehicle 100 to communicate with other devices located within the vehicle. For example, the microphone 150 may receive audio (e.g., voice commands or other audio input) from a user of the autonomous vehicle 100. Similarly, the speaker 152 may output audio to a user of the autonomous vehicle 100. The wireless communication system 146 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication such as Code Division Multiple Access (CDMA), EVDO, Global System for Mobile communications (GSM), or General Packet Radio Service (GPRS), or 4G cellular communication such as Long Term Evolution (LTE), or 5G cellular communication. The wireless communication system 146 may communicate using a Wireless Local Area Network (WLAN). In some embodiments, the wireless communication system 146 may utilize an infrared link, Bluetooth, or ZigBee to communicate directly with a device. The wireless communication system 146 may also use other wireless protocols, such as various vehicle communication systems; for example, it may include one or more Dedicated Short Range Communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the autonomous vehicle 100. In one embodiment, power source 110 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to various components of the autonomous vehicle 100. In some embodiments, the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
Some or all of the functions of the autonomous vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113, the processor 113 executing instructions 115 stored in a non-transitory computer readable medium, such as the memory 114. The computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the autonomous vehicle 100 in a distributed manner. The processor 113 may be any conventional processor, such as a commercially available Central Processing Unit (CPU). Alternatively, the processor 113 may be a dedicated device such as an Application Specific Integrated Circuit (ASIC) or other hardware-based processor. Although fig. 3 functionally illustrates the processor, memory, and other components of the computer system 112 in the same block, those skilled in the art will appreciate that the processor or memory may actually comprise multiple processors or memories that are not stored within the same physical housing. For example, the memory 114 may be a hard drive or other storage medium located in a different enclosure than the computer system 112. Thus, references to processor 113 or memory 114 are to be understood as including references to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to the component-specific functions. In various aspects described herein, the processor 113 may be located remotely from the autonomous vehicle 100 and in wireless communication with the autonomous vehicle 100. In other aspects, some of the processes described herein are executed on a processor 113 disposed within the autonomous vehicle 100 while others are executed by a remote processor 113, including taking the steps necessary to execute a single maneuver. In some embodiments, the memory 114 may contain instructions 115 (e.g., program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the autonomous vehicle 100, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108. For example, taking a lane change to the right, a human driver performs the following operations: first, considering safety factors and traffic rules, determine the moment to change lanes; second, plan a driving trajectory; third, operate the accelerator, brake, and steering wheel so that the vehicle travels along the planned trajectory. For an autonomous vehicle, these operations may be performed by a behavior planner (BP), a motion planner (MoP), and a motion controller (Control), respectively. The BP is responsible for issuing high-level decisions, the MoP is responsible for planning the expected trajectory and speed, and the Control is responsible for operating the accelerator, brake, and steering wheel so that the autonomous vehicle reaches the target speed along the target trajectory.
It should be understood that the operations performed by the behavior planner, the motion planner, and the motion controller may be the processor 113 shown in fig. 3 executing instructions 115 in the memory 114, which instructions 115 may be used to instruct the route control system 142. The embodiment of the application also sometimes refers to the behavior planner, the motion planner, and the motion controller collectively as a planning control module.
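To make the division of labor among the behavior planner, the motion planner, and the motion controller more concrete, the following Python sketch mirrors the right-lane-change example above. It is an illustrative sketch only: the class names, data fields, waypoints, and numeric values are assumptions and are not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    maneuver: str          # e.g. "lane_change_right"
    start_time: float      # when it is safe to begin, per traffic rules

class BehaviorPlanner:
    def decide(self, environment):
        # Step 1 (human analogue): weigh safety and traffic rules, pick the moment to change lanes.
        return Decision(maneuver="lane_change_right", start_time=environment["safe_time"])

class MotionPlanner:
    def plan(self, decision, environment):
        # Step 2: plan the expected trajectory and speed for the chosen maneuver.
        return {"trajectory": [(0.0, 0.0), (10.0, 1.5), (20.0, 3.5)],  # (x, lateral offset) waypoints
                "target_speed": environment["current_speed"]}

class MotionController:
    def actuate(self, plan):
        # Step 3: operate throttle, brake, and steering so the vehicle follows the target trajectory.
        return {"throttle": 0.2, "brake": 0.0, "steering_angle": 0.05}

env = {"safe_time": 3.2, "current_speed": 16.7}
bp, mop, ctrl = BehaviorPlanner(), MotionPlanner(), MotionController()
commands = ctrl.actuate(mop.plan(bp.decide(env), env))
print(commands)
```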
In addition to instructions 115, memory 114 may also store data such as road maps, route information, the location, direction, and speed of the vehicle, and other such vehicle data, among other information. Such information may be used by the autonomous vehicle 100 and the computer system 112 during operation of the autonomous vehicle 100 in autonomous, semi-autonomous, and/or manual modes. The user interface 116 is used to provide information to or receive information from a user of the autonomous vehicle 100. Optionally, the user interface 116 may include one or more input/output devices within the collection of peripheral devices 108, such as the wireless communication system 146, the in-vehicle computer 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the autonomous vehicle 100 based on inputs received from various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the autonomous vehicle 100 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the autonomous vehicle 100. For example, the memory 114 may exist partially or completely separate from the autonomous vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in actual applications, components in the above modules may be added or deleted according to actual needs, and fig. 3 should not be construed as limiting the embodiments of the present application. An autonomous vehicle traveling on a roadway, such as the autonomous vehicle 100 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and based on the respective characteristics of the object, such as its current speed, acceleration, and separation from the vehicle, the speed to which the autonomous vehicle is to be adjusted may be determined.
Optionally, the autonomous vehicle 100 or a computing device associated with the autonomous vehicle 100, such as the computer system 112, the computer vision system 140, or the memory 114 of fig. 3, may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, the identified objects may depend on each other's behavior, so the behavior of a single identified object may also be predicted by considering all identified objects together. The autonomous vehicle 100 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle 100 is able to determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the autonomous vehicle 100, such as the lateral position of the autonomous vehicle 100 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and so forth. In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the autonomous vehicle 100 to cause the autonomous vehicle 100 to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle 100 (e.g., cars in adjacent lanes on a road).
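As a minimal, purely illustrative sketch of the speed-adjustment idea described above: the function keeps the current speed while the gap to a tracked object stays comfortable and otherwise slows toward the object's predicted speed. The function name, thresholds, and units are assumptions.

```python
def adjust_speed(current_speed, predicted_object_speed, gap_m, min_gap_m=15.0):
    """If the predicted behavior of the tracked object leaves less than the desired gap,
    slow toward the object's speed; otherwise keep the current speed (all in m/s, meters)."""
    if gap_m < min_gap_m:
        return min(current_speed, predicted_object_speed)
    return current_speed

print(adjust_speed(current_speed=16.7, predicted_object_speed=12.0, gap_m=10.0))  # -> 12.0
```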
The autonomous vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, an amusement car, a playground vehicle, construction equipment, an electric car, a golf cart, a train, etc., and the embodiment of the present invention is not particularly limited.
In conjunction with the above description, the present embodiment provides a method for planning a driving route of a vehicle, which can be applied to the autonomous vehicle 100 shown in fig. 3. The scheme provided by the application can include at least two working modes: an automatic driving mode and a manual driving mode. The following description addresses each of these operating modes separately.
Fig. 4 is a schematic flow chart of a method for planning a driving route of a vehicle according to the present application.
Referring to fig. 4, a method for planning a driving route of a vehicle according to an embodiment of the present application may include the following steps:
401. first information is acquired.
The first information includes one or more of position information of the vehicle, lane information for determining a relative position of the vehicle and a lane line, navigation information for predicting a driving direction of the vehicle, and obstacle information for determining a relative position of the vehicle and an obstacle.
The position information of the vehicle can be obtained by a Global Positioning System (GPS), a real-time kinematic (RTK) system, a camera, a laser radar, and the like. In one possible implementation, the specific location of the vehicle may be determined by combining a pre-stored map, GPS location information, and millimeter-wave measurement information to determine candidate locations where the vehicle may be, and calculating the probability of each candidate location. It should be noted that the solution provided by the present application may obtain the position information of the vehicle in various ways, and any way of obtaining the position information of the vehicle in the related art may be adopted in the embodiments of the present application.
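The following sketch illustrates one way the candidate-location probability described above could be computed, assuming Gaussian scoring of each candidate map location against a GPS fix and a millimeter-wave range measurement to a mapped landmark. All names, noise constants, and coordinates are illustrative assumptions, not the claimed implementation.

```python
import math

def candidate_probability(candidate, gps_fix, landmark_range, expected_range,
                          sigma_gps=5.0, sigma_radar=1.0):
    """Score one candidate map location (higher = more probable).
    Gaussian likelihoods over the GPS residual and the millimeter-wave range residual
    are multiplied; the noise constants are illustrative."""
    d_gps = math.dist(candidate, gps_fix)
    gps_term = math.exp(-(d_gps ** 2) / (2 * sigma_gps ** 2))
    radar_term = math.exp(-((landmark_range - expected_range) ** 2) / (2 * sigma_radar ** 2))
    return gps_term * radar_term

# Candidate locations taken from the pre-stored map (local x/y in meters, illustrative),
# each mapped to its expected range to a mapped landmark.
candidates = {(0.0, 0.0): 12.0, (3.0, 1.0): 9.5}
gps_fix = (1.0, 0.5)
measured_range = 9.7   # millimeter-wave range to the same landmark

best = max(candidates, key=lambda c: candidate_probability(c, gps_fix, measured_range, candidates[c]))
print("most probable vehicle position:", best)   # -> (3.0, 1.0)
```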
The lane lines may be detected by processing images collected by image collection devices mounted on the vehicle. For example, a large number of pictures including lane lines may be used as training data to train a target detection model, and the trained target detection model may identify the lane lines in the image acquired by the image acquisition device and determine the positions of the lane lines in the image. In one possible implementation, road condition detection may be performed by an installed millimeter-wave radar and image acquisition device, and the road condition detection result may be used as the lane information to determine the relative position of the vehicle and the lane line. It should be noted that the scheme provided by the present application may obtain the lane line information of the vehicle in various ways, and any way of obtaining the lane line information of the vehicle in the related art may be adopted in the embodiments of the present application.
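As an illustration of turning detected lane-line positions into the relative position of the vehicle and the lane line, the sketch below assumes a trained detection model has already returned the pixel columns of the left and right lane lines near the bottom of the image; the interface, calibration constants, and nominal lane width are assumptions.

```python
def lateral_offset(left_line_x, right_line_x, image_width, lane_width_m=3.5):
    """Estimate the vehicle's lateral offset from the lane center (meters).
    left_line_x / right_line_x: pixel columns of the detected lane lines near the image bottom,
    e.g. as returned by a trained lane-line detection model (illustrative interface).
    A positive offset means the vehicle sits right of the lane center."""
    lane_center_px = (left_line_x + right_line_x) / 2.0
    meters_per_pixel = lane_width_m / (right_line_x - left_line_x)
    return (image_width / 2.0 - lane_center_px) * meters_per_pixel

print(lateral_offset(left_line_x=420, right_line_x=860, image_width=1280))  # 0.0 means centered
```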
The navigation information may be obtained through a navigation request. The navigation request may include location information of a start location and location information of a destination of the vehicle. For example, the vehicle may obtain the navigation request through the user clicking or touching a screen of the vehicle navigation system, or through a voice command of the user. A path may then be planned using the vehicle-mounted GPS together with an electronic map to obtain the navigation information of the vehicle, and the driving direction of the vehicle may be predicted according to the navigation information. It should be noted that the scheme provided by the present application may obtain the navigation information of the vehicle in various ways, and any way of obtaining the navigation information of the vehicle in the related art may be adopted in the embodiments of the present application.
For the obstacle information, in one possible implementation, the vehicle may perform obstacle scanning on a sector area in front of the vehicle with the millimeter-wave radar, and perform image acquisition on the area in front of the vehicle with the image acquisition device. The sector area is rasterized according to the fields of view of the millimeter-wave radar and the image acquisition device, so that the obstacles detected by the millimeter-wave radar are located in grid cells. Then, the obstacles detected in the image acquired by the image acquisition device are converted into grid cells under a polar coordinate system, and the obstacles detected by the millimeter-wave radar are strictly matched, cell by cell, with the obstacles detected by the image acquisition device to obtain the final obstacle information. It should be noted that the scheme provided by the present application may obtain the obstacle information in various ways, and any way of obtaining the obstacle information in the related art may be adopted in the embodiments of the present application.
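The following sketch illustrates the polar-grid matching step described above under simplifying assumptions: detections from both sensors are already expressed in vehicle coordinates, and the sector is rasterized by fixed range and angle steps. The names and grid resolutions are illustrative.

```python
import math

def polar_cell(x, y, range_step=2.0, angle_step_deg=5.0):
    """Map a detection in vehicle coordinates (x forward, y left, meters)
    into a (range bin, angle bin) cell of the rasterized forward sector."""
    r = math.hypot(x, y)
    theta = math.degrees(math.atan2(y, x))
    return (int(r // range_step), int(theta // angle_step_deg))

def match_detections(radar_dets, camera_dets):
    """Keep only obstacles seen by both sensors in the same grid cell (strict matching)."""
    camera_cells = {polar_cell(*d) for d in camera_dets}
    return [d for d in radar_dets if polar_cell(*d) in camera_cells]

radar_dets = [(12.1, 0.4), (30.0, -6.0)]   # millimeter-wave detections
camera_dets = [(12.4, 0.6), (18.0, 2.0)]   # camera detections projected into vehicle coordinates
print(match_detections(radar_dets, camera_dets))   # -> [(12.1, 0.4)]
```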
The first information may include one or more of the position information, lane information, navigation information, and obstacle information of the vehicle. For example, the first information may include one kind of information, such as the location information of the vehicle. Or the first information may include two kinds of information, such as the position information and lane information of the vehicle. Or the first information may include three kinds of information, such as the position information, lane information, and navigation information of the vehicle. Or the first information may include four kinds of information, such as the position information, lane information, navigation information, and obstacle information of the vehicle. It should be noted that the combinations of the first information listed above are for illustration purposes only and are not meant to be limiting; for example, the first information may include the position information and the obstacle information, or the first information may include other information besides the above-mentioned position information, lane information, navigation information, and obstacle information of the vehicle.
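A minimal container for the first information might look as follows; the field names and value formats are assumptions chosen only to make the four kinds of information explicit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstInformation:
    position: Optional[tuple] = None      # (latitude, longitude)
    lane: Optional[str] = None            # e.g. "left_turn_right_lane"
    navigation: Optional[str] = None      # predicted driving direction, e.g. "straight"
    obstacles: Optional[list] = None      # list of obstacle descriptors

# One kind of information ...
info_a = FirstInformation(position=(31.2304, 121.4737))
# ... or all four kinds.
info_b = FirstInformation(position=(31.2304, 121.4737), lane="left_turn_right_lane",
                          navigation="straight", obstacles=[{"rel_pos": (8.0, 1.2), "speed": 5.0}])
```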
The acquired first information may be used to determine the environmental information around the vehicle, which in turn may be used to determine whether the vehicle has entered the same or a similar scene. In some scenes with simple road conditions, the scenes can be considered the same as long as the position information of the vehicle is consistent. In some scenes with complex road conditions, the scenes may be considered the same only when the position information, lane information, navigation information, and obstacle information of the vehicle are all consistent.
Firstly, the vehicle is in an automatic driving mode
402. When the vehicle is in the automatic driving mode, the first information is input data of a first model, output data of the first model is used for planning a driving route for the vehicle, the first model is a model obtained by training second information acquired in the manual driving mode by taking a driving track of the vehicle in the manual driving mode as a training target, the type of the information included in the second information is consistent with the type of the information included in the first information, and the similarity between the first information and the second information meets a preset condition.
The first information mentioned in step 401 may include one or more of position information, lane information, navigation information, and obstacle information of the vehicle. The second information may include one or more of position information, lane information, navigation information, and obstacle information of the vehicle. Specifically, for example, the first information includes position information of the vehicle, and the second information includes position information of the vehicle; the first information includes position information and lane information of the vehicle, and the second information includes position information and lane information of the vehicle; the first information comprises position information, lane information and navigation information of the vehicle, and the second information comprises the position information, the lane information and the navigation information of the vehicle; the first information includes position information, lane information, navigation information, and obstacle information of the vehicle, and the second information includes position information, lane information, navigation information, and obstacle information of the vehicle.
The similarity of the first information and the second information satisfies a preset condition in order to determine whether the vehicles enter the same scene. For example, the ambient environment information acquired by the vehicle passing through the first road segment is first information, the ambient environment information acquired by the vehicle passing through the second road segment is second information, and if the similarity between the first information and the second information meets a preset condition, the first road segment and the second road segment are considered to be identical or similar scenes, for example, the first road segment and the second road segment may be the same road segment.
Regarding how to determine the similarity between two pieces of information, the means available in the related art can be adopted in the embodiments of the present application. For example, the first information includes location information of the vehicle, where the location information may include longitude and latitude information of the location where the vehicle is located. Specifically, the first information includes first longitude and latitude information, and the second information includes second longitude and latitude information; if the difference between the first longitude and latitude information and the second longitude and latitude information is smaller than a preset threshold, the similarity between the first information and the second information may be considered to satisfy the preset condition, and if the difference is larger than the preset threshold, the similarity may be considered not to satisfy the preset condition. In other words, whether the similarity of two pieces of information satisfies the preset condition can be judged by whether the deviation of the two pieces of information is smaller than a preset threshold. For another example, the first information includes lane information; specifically, the first information includes first lane information, and the second information includes second lane information. Assume a certain road section includes 2 left-turn lanes in total, the first lane information indicates the right one of the 2 left-turn lanes, and the second lane information indicates the left one. If the preset condition is that the first lane information and the second lane information must be identical, the similarity of the first information and the second information is considered not to satisfy the preset condition; if the preset condition is only that the vehicle is located in a left-turn lane, the similarity of the first information and the second information can be considered to satisfy the preset condition. For another example, the first information includes navigation information; specifically, the first information includes first navigation information, and the second information includes second navigation information. If the first information indicates that the next driving direction of the vehicle is straight and the second information indicates that the next driving direction of the vehicle is a left turn, the similarity between the first information and the second information can be considered not to satisfy the preset condition; if both indicate that the next driving direction of the vehicle is straight, the similarity between the first information and the second information can be considered to satisfy the preset condition.
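The per-kind similarity checks described above can be sketched as simple predicates; the thresholds, lane labels, and function names below are illustrative assumptions.

```python
def position_similar(latlon_a, latlon_b, threshold_deg=1e-4):
    """Deviation of latitude/longitude smaller than a preset threshold."""
    return (abs(latlon_a[0] - latlon_b[0]) < threshold_deg and
            abs(latlon_a[1] - latlon_b[1]) < threshold_deg)

def lane_similar(lane_a, lane_b, require_identical=True):
    """Either the lanes must be identical, or it is enough that both are left-turn lanes."""
    if require_identical:
        return lane_a == lane_b
    return lane_a.startswith("left_turn") and lane_b.startswith("left_turn")

def navigation_similar(nav_a, nav_b):
    """The predicted next driving directions must agree (e.g. both 'straight')."""
    return nav_a == nav_b

print(position_similar((31.23040, 121.47370), (31.23043, 121.47372)))                    # True
print(lane_similar("left_turn_right_lane", "left_turn_left_lane", require_identical=False))  # True
print(navigation_similar("straight", "left_turn"))                                       # False
```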
For another example, the first information includes obstacle information, where the obstacle information may include static obstacle information and dynamic obstacle information; the first information may include only the static obstacle information, only the dynamic obstacle information, or both. The obstacle information may include the relative position relationship between the obstacle and the vehicle; the dynamic obstacle information may further include the speed of the dynamic obstacle, and, when the dynamic obstacle is another vehicle, the heading of that other vehicle. Assume that the first information includes first obstacle information and the second information includes second obstacle information. The first obstacle information includes first relative position information between the obstacle and the vehicle, a first speed of the obstacle, and a first heading of the obstacle; the second obstacle information includes second relative position information between the obstacle and the vehicle, a second speed of the obstacle, and a second heading of the obstacle. The preset condition may be that the deviation between the first relative position information and the second relative position information is smaller than a first threshold, the deviation between the first speed and the second speed is smaller than a second threshold, and the deviation between the first heading and the second heading is smaller than a third threshold; if all of these hold, the similarity between the first information and the second information is considered to satisfy the preset condition. Alternatively, the preset condition may be that the deviation between the first relative position information and the second relative position information is smaller than the first threshold and the deviation between the first speed and the second speed is smaller than the second threshold; if both hold, the similarity between the first information and the second information is considered to satisfy the preset condition. It should be noted that, in practical applications, the preset conditions may be set according to the scene and the requirements.
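A corresponding sketch of the obstacle-information check, assuming each obstacle is described by its relative position, speed, and heading, with one illustrative threshold per quantity.

```python
def obstacle_similar(obs_a, obs_b, pos_thresh=2.0, speed_thresh=1.0, heading_thresh=15.0):
    """Deviation of relative position, speed, and heading each below its own threshold.
    obs = {"rel_pos": (dx, dy) in meters, "speed": m/s, "heading": degrees}; thresholds are illustrative."""
    dx = obs_a["rel_pos"][0] - obs_b["rel_pos"][0]
    dy = obs_a["rel_pos"][1] - obs_b["rel_pos"][1]
    return ((dx * dx + dy * dy) ** 0.5 < pos_thresh and
            abs(obs_a["speed"] - obs_b["speed"]) < speed_thresh and
            abs(obs_a["heading"] - obs_b["heading"]) < heading_thresh)

print(obstacle_similar({"rel_pos": (8.0, 1.2), "speed": 5.0, "heading": 90.0},
                       {"rel_pos": (8.5, 1.0), "speed": 5.4, "heading": 95.0}))  # True
```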
In one possible embodiment, the similarity of each kind of information shared by the first information and the second information is determined, and a linear weighted sum of these per-kind similarities is used to determine the similarity of the first information and the second information. For example, the first information includes first position information, first lane information, first navigation information, and first obstacle information of the vehicle, and the second information includes second position information, second lane information, second navigation information, and second obstacle information of the vehicle. Assume the weight of the similarity of the position information is 3/8, the weight of the similarity of the lane information is 3/8, the weight of the similarity of the navigation information is 1/8, and the weight of the similarity of the obstacle information is 1/8. Assume further that when the similarity of two pieces of information satisfies its preset condition, the similarity is set to 1, and when it does not, the similarity is set to 0. If the similarity of the first position information and the second position information satisfies a first preset condition, the similarity of the first lane information and the second lane information does not satisfy a second preset condition, the similarity of the first navigation information and the second navigation information does not satisfy a third preset condition, and the similarity of the first obstacle information and the second obstacle information satisfies a fourth preset condition, then the linear weighted sum of the per-kind similarities is 3/8 × 1 + 3/8 × 0 + 1/8 × 0 + 1/8 × 1 = 1/2. If the preset condition is that the similarity is not less than 1/2, the similarity of the first information and the second information may be considered to satisfy the preset condition; if the preset condition is that the similarity must be greater than 1/2, the similarity of the first information and the second information may be considered not to satisfy the preset condition. It should be noted that the weights listed for the similarities of the different kinds of information, setting the satisfied similarity to 1, and setting the unsatisfied similarity to 0 are all examples and do not limit the present application; in an actual application scenario, the weights of the different kinds of information may be selected according to the scenario requirements, and the linear weighted sum of the per-kind similarities may be determined accordingly.
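The linear weighted sum in the example above can be reproduced directly; the weights follow the example, and the function name is an assumption.

```python
WEIGHTS = {"position": 3/8, "lane": 3/8, "navigation": 1/8, "obstacle": 1/8}

def weighted_similarity(per_type_results):
    """per_type_results: dict mapping each information kind to 1 (its preset condition
    is met) or 0 (it is not). Returns the linear weighted sum from the example above."""
    return sum(WEIGHTS[k] * v for k, v in per_type_results.items())

score = weighted_similarity({"position": 1, "lane": 0, "navigation": 0, "obstacle": 1})
print(score)           # 0.5
print(score >= 0.5)    # preset condition "not less than 1/2" -> satisfied
print(score > 0.5)     # preset condition "greater than 1/2"  -> not satisfied
```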
In one possible embodiment, the similarity between the first information and the second information may also be determined in other ways. For example, the first information and the second information each include 4 kinds of information, and if 3 of those kinds satisfy the preset condition, the similarity between the first information and the second information may be considered to satisfy the preset condition. It should be noted that the 3 kinds of information satisfying the preset condition means that each of the 3 kinds of information satisfies its own preset condition, which is not repeated here; how to determine whether each kind of information satisfies its preset condition has been explained above.
If the similarity of the first information and the second information satisfies the preset condition, the vehicle can be considered to have entered the same or a similar driving scene. When the vehicle is in the automatic driving mode, the first information is input data of the first model, the output data of the first model is used for planning a driving route for the vehicle, and the first model is a model obtained by training on the second information acquired in the manual driving mode, with the driving track of the vehicle in the manual driving mode as the training target. In order to better understand the solution provided by the present application, the following describes the automatic driving mode, the manual driving mode, and the scenario in which the automatic driving mode is used after entering the same scene again, respectively, with reference to fig. 5a to 5c.
Fig. 5a is a schematic flow chart of the vehicle in the automatic driving mode. The sensor system sends the sensed surrounding environment information (such as the first information) to the behavior planner, the behavior planner issues a high-level decision according to the surrounding environment acquired by the sensor system, the motion planner plans the expected trajectory and speed according to the high-level decision issued by the behavior planner, and the motion controller is responsible for operating the accelerator, brake, and steering wheel so that the autonomous vehicle reaches the target speed along the target trajectory. In the automatic driving mode, the driver is not involved; the dashed line in fig. 5a represents the flow of the driver's operation.
As shown in fig. 5b, when the vehicle enters the similar scene again, the similarity between the first information and the second information satisfies the preset condition, and the second information is the training data of the first model. If the vehicle is in the autonomous driving mode, a driving route may be planned for the vehicle based on the output data of the first model based on the first information as input data of the first model.
Secondly, the vehicle is in a manual driving mode
403. The first information is used to train a first model when the vehicle is in a manual driving mode.
As shown in fig. 5c, when the vehicle may be in danger or is not driving as the driver expects (it should be noted that references to the driver in the present embodiment refer to a human driver), the driver may intervene in the driving of the vehicle by operating the steering wheel or depressing the brake pedal. When the driver takes over, the sensor system keeps operating normally and continues to output the surrounding environment information. When the driver takes over, the behavior planner, the motion planner, and the motion controller do not intervene in the control of the vehicle. The first model is trained using the data (i.e., the surrounding environment information) acquired by the sensor system in the manual driving mode as training data. The data acquired by the sensors in the manual driving mode can be understood as the data acquired by the sensors from the moment the driver takes over until the automatic driving mode is switched back on, or the data acquired by the sensors while the driver drives the vehicle for a preset distance or a preset time. The training target of the first model is the driving trajectory of the vehicle in the manual driving mode, i.e., the training target is to make the output data of the first model closer to the driving trajectory of the driver. In other words, the training aims at making the trajectory distribution output by the trained first model match the trajectory distribution of the vehicle in the manual driving mode, or keeping the deviation between them within a preset range. It should be noted that imitation learning algorithms in the related art can be adopted in the embodiments of the present application.
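The training step can be sketched as a minimal behavior-cloning loop, assuming the surrounding environment information is reduced to a single numeric feature and the driver's trajectory to a lateral offset; the linear model, learning rate, and names are illustrative assumptions rather than the imitation learning algorithm actually used.

```python
def train_first_model(samples, epochs=200, lr=0.01):
    """Minimal behavior-cloning sketch: fit a linear policy y = w*x + b so that the
    trajectory it outputs approaches the driver's trajectory recorded in manual mode.
    samples: list of (feature, driver_lateral_offset); the model form is illustrative."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y_driver in samples:
            y_model = w * x + b
            err = y_model - y_driver     # deviation from the driver's trajectory
            w -= lr * err * x            # gradient step on the squared error
            b -= lr * err
    return w, b

# Ambient information reduced to one feature (e.g. distance along the road segment),
# paired with the lateral offset the human driver actually chose.
driver_trace = [(x / 10.0, 0.3 * (x / 10.0)) for x in range(0, 50)]
w, b = train_first_model(driver_trace)
print(round(w, 2), round(b, 2))   # w close to 0.3, b close to 0.0: the model imitates the driver
```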
As can be seen from the embodiment corresponding to fig. 4, the solution provided by the present application obtains the environmental information around the vehicle, such as the first information and the second information, in real time. When the vehicle is in the manual driving mode, the environment information around the vehicle is used as training data to train the model, so that the trajectory output by the model can approach the driving trajectory of the vehicle in the manual driving mode. When the vehicle is in the automatic driving mode, if it is determined that the similarity between the environmental information around the vehicle and the environmental information used for training the model satisfies the preset condition, the environmental information around the vehicle is used as the input data of the model, and a driving route is planned for the vehicle according to the output data of the model. The scheme provided by the application can reduce the frequency of manual takeover; for example, in the typical scenario shown in fig. 2a to 2c, the vehicle does not need to be taken over by the driver each time it passes through that road section, because the vehicle can plan a driving route according to the data output by the model and thereby avoid the series of manhole covers. In addition, the scheme provided by the application can greatly reduce the data required for training. The prior art does not consider the large number of long-tail scenes; it aims to solve the planning problem in the most common scenes and optimizes for the scenes that occur more often during automatic driving, whose data are easier to collect and more generic, such as lane changing, lane keeping, and high-speed cruising. Since more common scenes must be covered, such a solution needs to generalize, which in the prior art often requires a large amount of driving data that an ordinary manufacturer or research unit cannot obtain. The scheme provided by the application is mainly optimized for a specific scene and does not pursue generalization of the model, so only data for that scene needs to be collected, and the required data volume is much smaller than in the prior art.
In addition, for a large number of long-tail scenes, in the prior art, researchers are required to analyze the scenes one by one, the analysis process is complex, special processing is often required for a solution, the logic of other existing scenes is often influenced, and automatic incremental optimization cannot be performed. The present application also addresses this problem, as described in more detail below.
As shown in fig. 5d, when the vehicle is in the automatic driving mode, the similarity of the scene (surrounding environment information) in which the vehicle is located may be evaluated by evaluating the similarity between the first information and the M groups of information. If the similarity between the first information and one of the M groups of information satisfies the preset condition, that is, the similarity between the current scene of the vehicle and a scene corresponding to an existing available model exceeds the threshold, the available model with the highest similarity may be selected from the M models, the currently acquired surrounding environment information (e.g., the first information) is used as the input of the available model with the highest similarity (e.g., the first model), and the output data of that model may be used to plan a driving route for the vehicle. This scenario is described in detail below.
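Selecting the most similar available model among the M models, as described above, can be sketched as follows; the library layout, threshold, and similarity function are illustrative assumptions.

```python
def pick_model(first_info, model_library, similarity_fn, threshold=0.5):
    """model_library: list of (group_info, model, is_available) tuples, one per scene.
    Returns the available model whose training-data group is most similar to first_info,
    or None if no group clears the preset condition."""
    best_model, best_score = None, threshold
    for group_info, model, is_available in model_library:
        score = similarity_fn(first_info, group_info)
        if is_available and score >= best_score:
            best_model, best_score = model, score
    return best_model

library = [("scene_fig_1c", "model_A", True), ("scene_fig_2b", "model_B", True)]
sim = lambda a, b: 1.0 if a == b else 0.0     # stand-in for the similarity evaluation above
print(pick_model("scene_fig_2b", library, sim))   # -> "model_B"
```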
Referring to fig. 6, fig. 6 is a schematic flow chart of a method for planning a vehicle driving route according to an embodiment of the present application, where the method for planning a vehicle driving route by a vehicle according to the embodiment of the present application may include:
601. first information is acquired.
Step 601 can be understood by referring to step 401 in the corresponding embodiment of fig. 4, and is not repeated herein.
602. And evaluating the similarity of the first information and the M groups of information.
Each group of information in the M groups of information is information acquired when the vehicle is in the manual driving mode, the similarity of any two groups of information in the M groups does not satisfy the preset condition, and each group of information in the M groups is used to train one model. Because the similarity of any two groups of information in the M groups does not satisfy the preset condition, each group of information in the M groups can correspond to a different scene, and different scenes correspond to different models. For example, M is 2, and the M groups of information are the M1 group and the M2 group. Both the M1 and M2 groups of information are obtained in the manual driving mode; for example, the M1 group of information is obtained in the scenario of fig. 1c, and the M2 group of information is obtained in the scenario of fig. 2b. Suppose the type of information included in the M groups of information is the position information of the vehicle; in the scenes shown in fig. 1c and fig. 2b, the geographic positions of the vehicle deviate greatly, so the similarity between the M1 group and the M2 group of information can be considered not to satisfy the preset condition. Of course, M groups containing 2 groups of information is merely an example; in an actual scenario, the M groups of information may include more than two groups. For example, if the type of information included in the M groups of information is the position information of the vehicle, each group of information in the M groups may correspond to a different road segment. How to evaluate the similarity of two pieces of information has been described above and can be understood with reference to step 402 in the embodiment corresponding to fig. 4, and the details are not repeated here.
Firstly, the vehicle is in an automatic driving mode
603. And when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information in the M groups of information and the first information is maximum, the first information is input data of the first model.
For example, assume that the M groups of information are the M1 group and the M2 group, the M1 group is information acquired in the scenario of fig. 1c, and the M2 group is information acquired in the scenario of fig. 2b. Assume that the model corresponding to the M1 group of information is the A model and the model corresponding to the M2 group of information is the B model. Taking the A model as an example: when the vehicle is in the manual driving mode in the scene shown in fig. 1c, the driving track of the vehicle is obtained, and the surrounding environment information in the scene of fig. 1c is obtained as training data to train the A model; when the deviation between the output trajectory of the A model and the vehicle trajectory in the manual driving mode in the scene of fig. 1c is within the preset range, the A model is considered trained, and its output data can be used for planning the driving route for the vehicle. If the similarity between the first information and the M1 group of information satisfies the preset condition and is greater than the similarity between the first information and the M2 group of information, the first information can be used as the input data of the A model, the output data of the A model can be used to plan a driving route for the vehicle, and the planned driving route is similar to the driving track of the vehicle in the manual driving mode and will not collide with the obstacle at the parking lot exit. In the present application, the trajectory output by a model may also be referred to as the data output by the model; when the distinction is not emphasized, the two have the same meaning. Taking the B model as an example: when the vehicle is in the manual driving mode in the scene shown in fig. 2b, the driving track of the vehicle is obtained, and the surrounding environment information in the scene of fig. 2b is obtained as training data to train the B model; when the deviation between the output trajectory of the B model and the vehicle trajectory in the manual driving mode in the scene of fig. 2b is within the preset range, the B model is considered trained, and its output data can be used for planning the driving route for the vehicle. If the similarity between the first information and the M2 group of information satisfies the preset condition and the similarity between the first information and the M1 group of information is smaller than the similarity between the first information and the M2 group of information, the first information can be used as the input data of the B model, the output data of the B model can be used to plan a driving route for the vehicle, the planned driving route is similar to the driving route of the vehicle in the manual driving mode, and the series of manhole covers on the road can be avoided.
The similarity of the scene (surrounding environment information) in which the vehicle is located can be evaluated by evaluating the similarity between the first information and the M groups of information. If the similarity between the first information and one of the M groups of information satisfies the preset condition, that is, the similarity between the current scene of the vehicle and a scene corresponding to an existing available model exceeds the threshold, the available model with the highest similarity can be selected, the currently acquired surrounding environment information (such as the first information) is used as the input of that available model (such as the first model), and the output data of that model can be used to plan a driving route for the vehicle. A model can be considered an available model when the deviation between the distribution of its output trajectory and the distribution of the vehicle trajectory in the manual driving mode is within the preset range. The available model is described further below.
Secondly, the vehicle is in a manual driving mode
604. And when the similarity between the first information and the second information in the M groups of information meets a preset condition and the similarity between the second information and the first information in the M groups of information is maximum, the first information is used for training the first model to obtain an updated first model.
How to determine whether the similarity of each of the first information and the M groups of information satisfies the similarity condition may be understood by referring to how to determine the similarity of the two information, and the detailed description is not repeated here.
If the similarity of the first information and one of the M groups of information meets a preset condition, for example, the similarity of the first information and the second information in the M groups meets a preset condition, it may be considered that the current scene of the vehicle is the same as or similar to a certain scene in the historical manual driving mode.
If the similarity of the current scene and the scene corresponding to the existing available model exceeds a threshold, selecting the available model with the highest similarity, for example, the first model with the highest similarity, performing update training on the first model by using the first information, and updating the available model for the scene, that is, updating the first model.
605. And when the similarity of the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for carrying out first training on the first model.
When the similarity between the first information and each of the M groups of information does not satisfy the preset condition, it may be considered that there is no scene in the historical manual driving mode that is the same as or similar to the current scene of the vehicle. In this case, no output data of an available model can be used to plan a driving route for the vehicle. A temporary model is then constructed and trained using the first information, forming a temporary model for the scene; that is, the first model is trained for the first time using the first information. At this point the first model may not satisfy the training target, i.e., the deviation between the output trajectory of the first model and the driving trajectory of the vehicle in the manual driving mode is not yet within the preset range. After the first training, the first model is a temporary model rather than an available model. When information similar to the first information is obtained multiple times in the manual driving mode, the first model is iteratively trained until it satisfies the training target, at which point the first model becomes an available model and can be used to plan a driving route for the vehicle. For example, assume that the similarity between the first information and each of the M groups of information does not satisfy the preset condition, so the first model is trained for the first time using the first information. In a later manual driving mode, if surrounding environment information is obtained, such as third information, whose similarity with the first information satisfies the preset condition and is the largest among all currently stored surrounding environment information, the first model is trained a second time using the third information, and so on; the first model is iteratively trained until the deviation between its output trajectory and the driving trajectory of the vehicle in the manual driving mode is within the preset range, at which point the first model is considered an available model. In one possible embodiment, before the first model becomes an available model, if the vehicle is in the automatic driving mode, then even if the similarity between acquired fourth information and the first information satisfies the preset condition, the fourth information is not used as the input of the first model, and the output of the first model is not used to plan a driving route for the vehicle at that time. Since the deviation between the distribution of the output trajectory of the first model and the trajectory of the vehicle in the manual driving mode is not yet within the preset range, using the output of the first model to plan a driving route for the vehicle could be dangerous.
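The distinction between a temporary model and an available model can be sketched as follows, assuming a single scalar deviation stands in for the comparison between the output-trajectory distribution and the driver-trajectory distribution; the class name, the promotion threshold, and the shrinking-deviation stand-in for training are all illustrative assumptions.

```python
class SceneModel:
    """Tracks whether a per-scene model is still 'temporary' or already 'available'.
    The promotion criterion mimics 'output trajectory distribution within a preset
    range of the driver trajectory'; the numbers are illustrative."""
    def __init__(self, deviation_threshold=0.2):
        self.deviation = float("inf")
        self.deviation_threshold = deviation_threshold

    def train_once(self, manual_drive_info):
        # One round of training with newly collected manual-mode information;
        # here the deviation simply shrinks to mimic convergence.
        self.deviation = 1.0 if self.deviation == float("inf") else self.deviation * 0.5

    @property
    def available(self):
        return self.deviation <= self.deviation_threshold

model = SceneModel()
model.train_once("first information")          # first training -> temporary model
print(model.available)                         # False: must not be used to plan a route yet
for info in ["third information", "later info", "later info"]:
    model.train_once(info)                     # iterative training on similar scenes
print(model.available)                         # True once the training target is reached
```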
In order to better understand the technical solution provided by the present application, the solution provided by the present application is described below with reference to a specific flow, and a temporary model and an available model are described.
Fig. 7 is a schematic flow chart of a method for planning a driving route of a vehicle according to an embodiment of the present application. As shown in fig. 7, the vehicle acquires the surrounding environment information (e.g., the vehicle acquires the first information). Whether the vehicle is taken over by the driver may be determined, for example, by determining whether the driver intervenes in the driving of the vehicle by operating the steering wheel or depressing the brake pedal. If it is determined that the vehicle is not being taken over by the driver, the vehicle is in the automatic driving mode, and the similarity of the scene is evaluated, as described in step 602 of the embodiment corresponding to fig. 6. It is then judged whether an available model for a similar scene exists; if so, referring to step 603 of the embodiment corresponding to fig. 6, a driving route is planned for the vehicle using the output data of the model corresponding to that scene. If no available model for a similar scene exists, the vehicle is controlled to drive according to the normal automatic driving mode, i.e., according to the preset automatic driving strategy. The absence of an available model for a similar scene can be understood as either the case where a similar scene exists but its model has not yet reached the training target, or the case where no similar scene exists at all. If it is determined that the vehicle is taken over by the driver, the vehicle is in the manual driving mode, the driving track of the vehicle in the manual driving mode is obtained, and the similarity of the scene is evaluated. If a model corresponding to a similar scene exists and is already an available model, the model is trained with the currently acquired surrounding environment information to obtain an updated model. If no model corresponding to a similar scene exists, a model is trained with the currently acquired surrounding environment information; at this stage the model is called a temporary model in the present application, to distinguish it from an available model. A temporary model means that the model has not reached the training target, i.e., the deviation between the distribution of the output trajectory of the model and the driving trajectory of the vehicle in the manual driving mode is not within the preset range, and the model cannot yet imitate the driving trajectory of the vehicle in the manual driving mode. Whether a model is an available model or a temporary model can be determined by evaluating whether the model is safe and stable. Besides the above way of checking whether the deviation between the output data of the model and the distribution of the driving trajectory of the vehicle in the manual driving mode is within the preset range, other ways may be used to evaluate whether a model is safe and stable. For example, more verification scenes can be constructed based on the existing scenes to ensure that no collision occurs within the driving range, and the model is then confirmed to meet the requirements of safety and stability.
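One control cycle of the flow in fig. 7 can be sketched as below, reusing the SceneModel class from the sketch following step 605; the function signature, the library layout, and the returned strings are illustrative assumptions.

```python
def planning_step(ambient_info, taken_over_by_driver, library, similarity_fn, threshold=0.5):
    """One control cycle of the flow in fig. 7 (illustrative sketch).
    library: dict scene_key -> SceneModel-like object with .available and .train_once()."""
    best_key, best_score = None, threshold
    for key in library:
        score = similarity_fn(ambient_info, key)
        if score >= best_score:
            best_key, best_score = key, score

    if taken_over_by_driver:                        # manual driving mode
        if best_key is None:
            library[ambient_info] = SceneModel()    # no similar scene: create a temporary model
            best_key = ambient_info
        library[best_key].train_once(ambient_info)  # update (or first-train) the scene's model
        return "driver controls the vehicle"

    if best_key is not None and library[best_key].available:
        return f"plan route with the model for scene {best_key!r}"
    return "fall back to the normal automatic driving strategy"
```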
The scheme provided by the embodiment of the present application is described below with reference to several exemplary application scenarios. Fig. 8a is a schematic diagram illustrating a scenario of a method for planning a driving route of a vehicle according to the present application. As shown in fig. 8a, the geographical position of the vehicle, static obstacle information (such as street lamps on both sides of the road shown in fig. 8 a), and dynamic obstacle information (such as other vehicles on the road, not shown) can be obtained. When the vehicle is in the manual driving mode, the driving track of the vehicle is obtained, and a prompt message for prompting a user that the vehicle is learning the current driving mode can be sent out. The prompting mode may include a text prompt or a voice prompt, which is not limited in the embodiment of the present application. It should be noted that the information about the surrounding environment may be embodied in different manners, for example, in one possible embodiment, the surrounding environment information may include only the geographic location, and when the vehicle is in the manual driving mode, a prompt message is sent, where the prompt message is used to instruct the vehicle to train the first model according to the current location information of the vehicle. Fig. 8b is a schematic diagram illustrating a scenario of a method for planning a driving route of a vehicle according to the present application. As shown in fig. 8b, the geographic location of the vehicle, static obstacle information (e.g., street lamps on both sides of the road shown in fig. 8 b), and dynamic obstacle information (e.g., other vehicles on the road, not shown) may be obtained. When the vehicle is in the automatic driving mode, if it is determined that the similarity between the surrounding environment information of the vehicle and the environment information around the vehicle in the manual driving mode satisfies a preset condition, for example, the similarity between the environment information around the vehicle shown in fig. 8b and the environment information around the vehicle shown in fig. 8a satisfies the preset condition, a prompt message is sent to prompt that a driving route is planned for the vehicle according to data output by the model. For example, a successful scene match may be prompted and a driving route for the vehicle may be planned according to the model output data.
It should be noted that the solution provided in the present application can be implemented by a plurality of devices together. For example, fig. 9 is a schematic view of a scenario of another method for planning a driving route of a vehicle provided by the present application. In this scenario, vehicle A, vehicle B, and vehicle C all enter the manual driving mode on the same road segment, and they may send the obtained surrounding environment information (for example, the first information) and their respective driving tracks to the cloud-side device. The cloud-side device trains the first model according to the first information sent by the multiple vehicles and sends the trained first model back to vehicle A, vehicle B, and vehicle C. In one possible embodiment, the cloud-side device may further send the first model to vehicles that did not participate in sending the first information, i.e., to vehicles other than vehicle A, vehicle B, and vehicle C. When other vehicles pass through the road segment, a driving route can be planned for them according to the output data of the first model, so that the series of manhole covers is avoided.
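The cloud-side variant can be sketched as follows, reusing the train_first_model function from the sketch following step 403; the class, method names, and fleet identifiers are illustrative assumptions.

```python
class CloudSide:
    """Illustrative cloud-side aggregation: several vehicles upload the surrounding
    environment information and driving tracks collected on the same road segment in
    manual mode; the cloud trains one first model and pushes it to the fleet,
    including vehicles that did not contribute data."""
    def __init__(self):
        self.samples = []

    def upload(self, vehicle_id, ambient_info, driving_track):
        self.samples.append((vehicle_id, ambient_info, driving_track))

    def train_and_distribute(self, fleet):
        model = train_first_model([(x, y) for _, x, y in self.samples])
        return {vehicle_id: model for vehicle_id in fleet}

cloud = CloudSide()
for vid, x in [("vehicle_A", 1.0), ("vehicle_B", 2.0), ("vehicle_C", 3.0)]:
    cloud.upload(vid, x, 0.3 * x)
models = cloud.train_and_distribute(["vehicle_A", "vehicle_B", "vehicle_C", "vehicle_D"])
```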
In one possible embodiment, the vehicle is configured to obtain first information, and the first information may include one or more of position information of the vehicle, lane information, navigation information, and obstacle information, the lane information is used to determine a relative position of the vehicle and a lane line, the navigation information is used to predict a driving direction of the vehicle, and the obstacle information is used to determine a relative position of the vehicle and an obstacle. The vehicle is further used for sending the first information and the driving mode of the vehicle to the cloud-side equipment. The cloud-side equipment is used for planning a driving route for the vehicle according to output data of the first model when the vehicle is determined to be in the automatic driving mode, the first information is input data of the first model, the first model is a model obtained by training second information acquired in the manual driving mode by taking a driving track of the vehicle in the manual driving mode as a training target, the type of the information which can be included in the second information is consistent with the type of the information which can be included in the first information, and the similarity between the first information and the second information meets a preset condition. And the cloud side equipment is also used for training the first model according to the first information when the vehicle is determined to be in the manual driving mode.
In one possible embodiment, the cloud-side device is further configured to: and evaluating the similarity between the first information and the M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity of any two groups of information in the M groups of information does not meet a preset condition, and each group of information in the M groups of information is respectively used for training a model. And when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information in the M groups of information and the first information is maximum, the first information is input data of the first model.
In one possible embodiment, the cloud-side device is further configured to: and evaluating the similarity between the first information and the M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet a preset condition, each group of information in the M groups of information is respectively used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer. And when the similarity between the first information and the second information in the M groups of information meets a preset condition and the similarity between the second information and the first information in the M groups of information is maximum, the first information is used for training the first model to obtain an updated first model.
In a possible implementation manner, the cloud-side device is further configured to evaluate similarity between the first information and M groups of information, each group of information in the M groups of information is information obtained when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy a preset condition, each group of information in the M groups of information is used for training a model, and M is a positive integer. And when the similarity of the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for carrying out first training on the first model.
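The three matching cases described in the preceding embodiments (use the matched model for planning, retrain the matched model, or perform a first training of a new model) can be summarized in the following sketch, provided for illustration only; evaluate_similarity, the 0.8 threshold, and the returned labels are assumptions, not the embodiments' implementation.

```python
from typing import Callable, List, Tuple

def route_or_train(first_info,
                   groups: List[Tuple[object, object]],
                   evaluate_similarity: Callable[[object, object], float],
                   mode: str,
                   threshold: float = 0.8):
    """groups: list of (second_information, model_trained_on_it) pairs (the M groups)."""
    scored = [(evaluate_similarity(first_info, info), info, model)
              for info, model in groups]
    best_score, _, best_model = max(scored, key=lambda s: s[0],
                                    default=(0.0, None, None))

    if best_score >= threshold:                # preset condition satisfied
        if mode == "automatic":
            return "plan", best_model          # first_info becomes the model's input
        return "update", best_model            # retrain the matched first model
    if mode == "manual":
        return "train_new", None               # first training of a new model
    return "no_match", None

decision, model = route_or_train([1, 2],
                                 [([1, 2], "model_A"), ([9, 9], "model_B")],
                                 lambda a, b: float(a == b),
                                 "automatic")
print(decision)   # plan
```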
In a possible implementation manner, the cloud-side device is further configured to send a prompt message when it is determined that the current location information of the vehicle is consistent with the location information of the vehicle acquired in the manual driving mode, where the prompt message is used to instruct the vehicle to plan a driving route for the vehicle according to output data of a first model, and the first model is a model obtained by training the location information of the vehicle acquired in the manual driving mode.
In one possible implementation, the cloud-side device is further configured to send a prompt message, where the prompt message is used to instruct training of the first model according to the current location information of the vehicle.
In one possible implementation, the cloud-side device is further configured to determine a similarity of the same kind of information in the first information and the second information, and a linear weighted sum of the similarities of the same kind of information is used to determine the similarity of the first information and the second information.
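As a worked example of the linear weighted sum, assuming illustrative per-type weights and similarity values (none of which are specified in the embodiments):

```python
# Sketch of similarity(first, second) = sum_k w_k * similarity_k; values are assumptions.
def overall_similarity(per_type_similarity: dict, weights: dict) -> float:
    return sum(weights[k] * per_type_similarity[k] for k in per_type_similarity)

sims = {"position": 0.9, "lane": 0.8, "navigation": 1.0, "obstacle": 0.6}
w    = {"position": 0.4, "lane": 0.2, "navigation": 0.2, "obstacle": 0.2}
print(overall_similarity(sims, w))   # approximately 0.84
```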
In one possible embodiment, the first model may comprise a convolutional neural network CNN or a recurrent neural network RNN.
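A minimal recurrent first model could look like the following sketch, assuming PyTorch and arbitrary layer sizes; the embodiments do not prescribe this architecture, and the names and dimensions are illustrative only.

```python
import torch
import torch.nn as nn

class FirstModel(nn.Module):
    """Maps a sequence of environment features to trajectory points (assumed shapes)."""
    def __init__(self, feature_dim: int = 16, hidden: int = 64, out_dim: int = 2):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)   # (x, y) waypoint per time step

    def forward(self, env_seq: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(env_seq)                 # env_seq: (batch, time, feature)
        return self.head(h)                      # predicted trajectory points

model = FirstModel()
traj = model(torch.randn(1, 20, 16))             # 20 time steps of first information
print(traj.shape)                                # torch.Size([1, 20, 2])
```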
On the basis of the embodiments corresponding to fig. 4 to 9, in order to better implement the above-mentioned solution of the embodiments of the present application, related equipment for implementing the solution is also provided below. Referring to fig. 10, fig. 10 is a schematic structural diagram of a device for planning a vehicle driving route according to an embodiment of the present application. The device for planning the vehicle driving route comprises an acquisition module 1001, a planning control module 1002, a training module 1003, an evaluation module 1004, a processing module 1005, and a sending module 1006.
In one possible implementation, the device may include: the acquisition module 1001, configured to acquire first information, where the first information may include one or more of position information of a vehicle, lane information used to determine the relative position of the vehicle and a lane line, navigation information used to predict the driving direction of the vehicle, and obstacle information used to determine the relative position of the vehicle and an obstacle. The planning control module 1002 is configured to plan a driving route for the vehicle according to output data of a first model when the vehicle is in the automatic driving mode, where the first information acquired by the acquisition module 1001 is input data of the first model, the first model is a model obtained by training on second information acquired in the manual driving mode with the driving track of the vehicle in the manual driving mode as the training target, the types of information that may be included in the second information are consistent with those of the first information, and the similarity between the first information and the second information satisfies a preset condition. The training module 1003 is configured to train the first model according to the first information acquired by the acquisition module 1001 when the vehicle is in the manual driving mode.
In one possible implementation, the device may further include: the evaluation module 1004, configured to evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information obtained when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, and each group of information in the M groups of information is used for training a model. When the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information in the M groups of information and the first information is the maximum, the first information is input data of the first model.
In one possible implementation, the device may further include: the evaluation module 1004, configured to evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information obtained when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, each group of information in the M groups of information is used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer. When the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is used for training the first model to obtain an updated first model.
In one possible implementation, the device may further include: the evaluation module 1004, configured to evaluate the similarity between the first information and M groups of information, where each group of information in the M groups of information is information obtained when the vehicle is in the manual driving mode, the similarity between any two groups of information in the M groups of information does not satisfy the preset condition, each group of information in the M groups of information is used for training a model, and M is a positive integer. When the similarity between the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for performing a first training of the first model.
In one possible implementation, the device may further include: the sending module 1006, configured to send a prompt message when the evaluation module 1004 determines that the current location information of the vehicle is consistent with the location information of the vehicle acquired in the manual driving mode, where the prompt message is used to instruct the vehicle to plan a driving route according to output data of the first model, and the first model is a model obtained by training on the location information of the vehicle acquired in the manual driving mode.
In one possible implementation, the device may further include: the sending module 1006, configured to send a prompt message, where the prompt message is used to indicate that the first model is trained according to the current location information of the vehicle.
In a possible implementation, the processing module 1005 is configured to determine a similarity between the same kind of information in the first information and the second information, and a linear weighted sum of the similarities between the same kind of information is used to determine the similarity between the first information and the second information.
In one possible embodiment, the first model may comprise a convolutional neural network CNN or a recurrent neural network RNN.
It should be noted that, the information interaction, the execution process, and the like between the modules in the device 1000 for planning the vehicle driving route are based on the same concept as the method embodiments corresponding to fig. 4 to 9 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not described herein again.
Fig. 11 is a schematic structural diagram of the autonomous vehicle provided in the embodiment of the present application. The autonomous vehicle 100 may be provided with the device for planning a vehicle driving route described in the embodiment corresponding to fig. 10, so as to implement the functions of the autonomous vehicle in the embodiments corresponding to fig. 4 to fig. 9. Since in some embodiments the autonomous vehicle 100 may also include communication functionality, the autonomous vehicle 100 may include, in addition to the components shown in fig. 3, a receiver 1201 and a transmitter 1202, and the processor 113 may include an application processor 1131 and a communication processor 1132. In some embodiments of the present application, the receiver 1201, the transmitter 1202, the processor 113, and the memory 114 may be connected by a bus or in another manner.
The processor 113 controls operation of the autonomous vehicle. In a particular application, the various components of the autonomous vehicle 100 are coupled together by a bus system that may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. For clarity of illustration, the various buses are referred to in the figures as a bus system.
Receiver 1201 may be used to receive input numeric or character information and to generate signal inputs related to settings and function controls associated with the autonomous vehicle. The transmitter 1202 may be configured to output numeric or character information via the first interface; the transmitter 1202 is also operable to send instructions to the disk group via the first interface to modify data in the disk group; the transmitter 1202 may also include a display device such as a display screen.
In an embodiment of the present application, the processor 1131 is configured to execute a method for planning a driving route of a vehicle, which is executed by an autonomous vehicle in the embodiment corresponding to fig. 2. Specifically, the application processor 1131 is configured to perform the following steps:
when the vehicle is in the automatic driving mode, a driving route is planned for the vehicle according to output data of a first model, where first information acquired by the sensor system is input data of the first model, the first model is a model obtained by training on second information acquired in the manual driving mode with the driving track of the vehicle in the manual driving mode as the training target, the types of information that may be included in the second information are consistent with those of the first information, and the similarity between the first information and the second information meets a preset condition. When the vehicle is in the manual driving mode, the first model is trained according to the first information acquired by the sensor system.
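The two branches executed by the application processor can be sketched as follows, assuming a PyTorch model (for example the GRU sketch above) and a mean-squared-error loss against the recorded human trajectory; the optimiser choice, loss, and tensor shapes are assumptions made only for illustration.

```python
from typing import Optional
import torch
import torch.nn as nn

def step(model: nn.Module, mode: str, first_info: torch.Tensor,
         human_trajectory: Optional[torch.Tensor] = None,
         optimiser: Optional[torch.optim.Optimizer] = None):
    if mode == "automatic":
        with torch.no_grad():
            return model(first_info)             # output data used to plan the route
    # Manual driving mode: the recorded human trajectory is the training target.
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(first_info), human_trajectory)
    loss.backward()
    optimiser.step()
    return loss.item()

net = nn.Linear(16, 2)                           # toy stand-in for the first model
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
print(step(net, "manual", torch.randn(8, 16), torch.randn(8, 2), opt))
```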
In one possible implementation manner, the similarity between the first information and M groups of information is evaluated, each group of information in the M groups of information is information obtained when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet a preset condition, and each group of information in the M groups of information is respectively used for training a model. And when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information in the M groups of information and the first information is maximum, the first information is input data of the first model.
In one possible implementation manner, the similarity between the first information and M groups of information is evaluated, each group of information in the M groups of information is information obtained when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet a preset condition, each group of information in the M groups of information is used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer. And when the similarity between the first information and the second information in the M groups of information meets a preset condition and the similarity between the second information and the first information in the M groups of information is maximum, the first information is used for training the first model to obtain an updated first model.
In one possible implementation manner, the similarity between the first information and M groups of information is evaluated, each group of information in the M groups of information is information obtained when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet a preset condition, each group of information in the M groups of information is respectively used for training a model, and M is a positive integer. And when the similarity of the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for carrying out first training on the first model.
In one possible embodiment, the transmitter is configured to send the first information to the cloud-side device.
In one possible embodiment, the receiver is configured to receive the first model transmitted by the cloud-side device.
It should be noted that, for the specific implementation manner and the beneficial effects brought by the method for executing the vehicle driving route planning by the application processor 1131, reference may be made to descriptions in each method embodiment corresponding to fig. 4 to fig. 9, and details are not repeated here.
Also provided in an embodiment of the present application is a computer-readable storage medium having stored therein a program for planning a vehicle travel route, which when run on a computer, causes the computer to perform the steps performed by an autonomous vehicle (or an apparatus for planning a vehicle travel route) in the method described in the aforementioned embodiments shown in fig. 4 to 9.
Embodiments of the present application also provide a computer program product, which when running on a computer, causes the computer to perform the steps performed by the autonomous vehicle in the method described in the embodiments of fig. 4 to 9.
Further provided in embodiments of the present application is a circuit system comprising a processing circuit configured to perform the steps performed by the autonomous vehicle in the method described in the embodiments of fig. 4-9 above.
The device for planning the vehicle driving route or the automatic driving vehicle provided by the embodiment of the application can be specifically a chip, and the chip comprises: a processing unit, which may be for example a processor, and a communication unit, which may be for example an input/output interface, a pin or a circuit, etc. The processing unit may execute the computer executable instructions stored in the storage unit to cause the chip in the server to execute the method for planning the driving route of the vehicle described in the embodiments shown in fig. 2 to 9. Optionally, the storage unit is a storage unit in the chip, such as a register, a cache, and the like, and the storage unit may also be a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a Random Access Memory (RAM), and the like.
Specifically, referring to fig. 12, fig. 12 is a schematic structural diagram of a chip provided in the embodiment of the present application, where the chip may be represented as a neural network processor NPU 130, and the NPU 130 is mounted on a main CPU (Host CPU) as a coprocessor, and the Host CPU allocates tasks. The core portion of the NPU is an arithmetic circuit 1303, and the arithmetic circuit 1303 is controlled by a controller 1304 to extract matrix data in a memory and perform multiplication.
In some implementations, the arithmetic circuit 1303 includes a plurality of processing units (PEs). In some implementations, the arithmetic circuit 1303 is a two-dimensional systolic array. The arithmetic circuit 1303 may also be a one-dimensional systolic array or another electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 1303 is a general-purpose matrix processor.
For example, assume that there is an input matrix A, a weight matrix B, and an output matrix C. The arithmetic circuit fetches the data corresponding to the matrix B from the weight memory 1302 and buffers the data on each PE in the arithmetic circuit. The arithmetic circuit takes the matrix A data from the input memory 1301 and performs a matrix operation with the matrix B, and a partial result or the final result of the resulting matrix is stored in an accumulator 1308.
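A toy software model of this dataflow (not the NPU microarchitecture itself) is the following tiled matrix multiplication, in which the weight tiles stay resident, input tiles stream in, and partial products accumulate into the output, mirroring the role of accumulator 1308; the tile size is an arbitrary assumption.

```python
import numpy as np

def tiled_matmul(A: np.ndarray, B: np.ndarray, tile: int = 4) -> np.ndarray:
    C = np.zeros((A.shape[0], B.shape[1]))       # plays the role of the accumulator
    for k in range(0, A.shape[1], tile):
        A_tile = A[:, k:k + tile]                # data fetched from the input memory
        B_tile = B[k:k + tile, :]                # weights buffered on the PEs
        C += A_tile @ B_tile                     # partial result accumulated
    return C

A, B = np.random.rand(8, 16), np.random.rand(16, 8)
assert np.allclose(tiled_matmul(A, B), A @ B)
```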
The unified memory 1306 is used to store input data as well as output data. Weight data is transferred to the weight memory 1302 through a Direct Memory Access Controller (DMAC) 1305. Input data is also carried into the unified memory 1306 through the DMAC.
A Bus Interface Unit (BIU) 1310 is used for the interaction of the AXI bus with the DMAC and the instruction fetch buffer (IFB) 1309.

The BIU 1310 is used by the instruction fetch buffer 1309 to fetch instructions from the external memory, and is also used by the direct memory access controller 1305 to fetch the original data of the input matrix A or the weight matrix B from the external memory.
The DMAC is mainly used to transfer input data from the external memory DDR to the unified memory 1306, to transfer weight data into the weight memory 1302, or to transfer input data into the input memory 1301.
The vector calculation unit 1307 includes a plurality of operation processing units and, when necessary, performs further processing such as vector multiplication, vector addition, exponential operations, logarithmic operations, and magnitude comparison on the output of the arithmetic circuit. It is mainly used for the non-convolutional/fully connected layer computations in the neural network, such as batch normalization, pixel-level summation, and up-sampling of a feature plane.
In some implementations, the vector calculation unit 1307 can store the processed output vector to the unified memory 1306. For example, the vector calculation unit 1307 may apply a linear function and/or a nonlinear function to the output of the arithmetic circuit 1303, for example linear interpolation of the feature planes extracted by the convolutional layer, or a nonlinear function applied to a vector of accumulated values to generate activation values. In some implementations, the vector calculation unit 1307 generates normalized values, pixel-level summed values, or both. In some implementations, the processed output vector can be used as an activation input to the arithmetic circuit 1303, e.g., for use in a subsequent layer of the neural network.
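For illustration, the post-processing role of the vector calculation unit can be sketched as an activation followed by batch normalization applied to the matmul output, with the result reusable as the next layer's activation input; the shapes and the choice of a ReLU-style activation are assumptions.

```python
import numpy as np

def vector_postprocess(matmul_out: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    activated = np.maximum(matmul_out, 0.0)                  # ReLU-style activation
    mean = activated.mean(axis=0, keepdims=True)
    var = activated.var(axis=0, keepdims=True)
    return (activated - mean) / np.sqrt(var + eps)           # batch normalization

next_layer_input = vector_postprocess(np.random.randn(32, 64))
print(next_layer_input.shape)                                # (32, 64)
```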
An instruction fetch buffer (IFB) 1309 is connected to the controller 1304 and is used to store instructions used by the controller 1304.
The unified memory 1306, input memory 1301, weight memory 1302 and instruction fetch memory 1309 are all On-Chip memories. The external memory is private to the NPU hardware architecture.
Here, the operation of each layer in the recurrent neural network may be performed by the operation circuit 1303 or the vector calculation unit 1307.
Wherein any of the aforementioned processors may be a general purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits configured to control the execution of the programs of the method of the first aspect.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware, and can certainly also be implemented by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. Generally, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may vary, for example analog circuits, digital circuits, or dedicated circuits. However, for the present application, implementation by a software program is generally preferable. Based on such an understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.

Claims (22)

1. A method of planning a vehicle travel route, comprising:
acquiring first information, wherein the first information comprises one or more of position information, lane information, navigation information and obstacle information of a vehicle;
when the vehicle is in an automatic driving mode, the first information is input data of a first model, and output data of the first model is used for planning a driving route for the vehicle, wherein the first model is a model obtained by training on second information acquired in a manual driving mode with the driving track of the vehicle in the manual driving mode as a training target, the type of information included in the second information is consistent with the type of information included in the first information, and the similarity between the first information and the second information meets a preset condition, the similarity between the first information and the second information meeting the preset condition being used for indicating that a first scene and a second scene are the same or similar, the first scene being a scene where the vehicle is located when the vehicle acquires the first information, and the second scene being a scene where the vehicle is located when the vehicle acquires the second information;
when the vehicle is in a manual driving mode, the first information is used for training the first model, and output data of the first model is used for planning a driving route for the vehicle.
2. The method of planning a vehicle travel route according to claim 1, further comprising:
evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, and each group of information in the M groups of information is respectively used for training a model;
the first information is input data of a first model when the vehicle is in an autonomous driving mode, and includes:
and when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is input data of a first model.
3. The method of planning a vehicle travel route according to claim 1, further comprising:
evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, each group of information in the M groups of information is respectively used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer;
when the vehicle is in a manual driving mode, the first information is used for training the first model, and the method comprises the following steps:
and when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is used for training the first model to obtain the updated first model.
4. The method of planning a vehicle travel route according to claim 1, further comprising:
evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, each group of information in the M groups of information is respectively used for training a model, and M is a positive integer;
when the vehicle is in a manual driving mode, the first information is used for training the first model, and the method comprises the following steps:
and when the similarity of the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for carrying out first training on the first model.
5. The method of planning a vehicle travel route according to any one of claims 1 to 4, wherein when the vehicle is in an autonomous driving mode, the method further comprises:
and when the current position information of the vehicle is determined to be consistent with the position information of the vehicle acquired in the manual driving mode, sending a prompt message, wherein the prompt message is used for indicating the vehicle to plan a driving route for the vehicle according to the output data of the first model, and the first model is obtained by training the position information of the vehicle acquired in the manual driving mode.
6. The method of planning a vehicle travel route according to any one of claims 1 to 4, wherein when the vehicle is in manual driving mode, the method further comprises:
and sending a prompt message, wherein the prompt message is used for indicating that the first model is trained according to the current position information of the vehicle.
7. The method for planning a vehicle travel route according to any one of claims 1 to 6, characterized in that the method further comprises:
and determining the similarity of the same kind of information in the first information and the second information, wherein the linear weighted sum of the similarities of the same kind of information is used for determining the similarity of the first information and the second information.
8. The method for planning a vehicle travel route according to any one of claims 1 to 7, wherein the first model comprises a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN).
9. An apparatus for planning a route for a vehicle, comprising:
an acquisition module, configured to acquire first information, wherein the first information comprises one or more of position information, lane information, navigation information and obstacle information of the vehicle;
a planning control module, configured to plan a driving route for the vehicle according to output data of a first model when the vehicle is in an automatic driving mode, wherein the first information acquired by the acquisition module is input data of the first model, the first model is a model obtained by training on second information acquired in a manual driving mode with the driving track of the vehicle in the manual driving mode as a training target, the type of information included in the second information is consistent with the type of information included in the first information, and the similarity between the first information and the second information meets a preset condition, the similarity between the first information and the second information meeting the preset condition being used for indicating that a first scene and a second scene are the same or similar, the first scene being a scene where the vehicle is located when the vehicle acquires the first information, and the second scene being a scene where the vehicle is located when the vehicle acquires the second information;
and the training module is used for training the first model according to the first information acquired by the acquisition module when the vehicle is in a manual driving mode, and the output data of the first model is used for planning a driving route for the vehicle.
10. The apparatus for planning a vehicle travel route according to claim 9, further comprising:
the evaluation module is used for evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, and each group of information in the M groups of information is respectively used for training a model;
and when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is input data of a first model.
11. The apparatus for planning a vehicle travel route according to claim 9, further comprising:
the evaluation module is used for evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, each group of information in the M groups of information is respectively used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer;
and when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is used for training the first model to obtain the updated first model.
12. The apparatus for planning a vehicle travel route according to claim 9, further comprising:
the evaluation module is used for evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, each group of information in the M groups of information is respectively used for training a model, and M is a positive integer;
and when the similarity of the first information and each group of information in the M groups of information does not meet the preset condition, the first information is used for carrying out first training on the first model.
13. The apparatus for planning a driving route of a vehicle according to any one of claims 9 to 12, further comprising:
and the sending module is used for sending a prompt message when the evaluation module determines that the current position information of the vehicle is consistent with the position information of the vehicle acquired in the manual driving mode, wherein the prompt message is used for indicating the vehicle to plan a driving route for the vehicle according to the output data of the first model, and the first model is obtained by training the position information of the vehicle acquired in the manual driving mode.
14. The apparatus for planning a driving route of a vehicle according to any one of claims 9 to 12, further comprising:
and the sending module is used for sending a prompt message, and the prompt message is used for indicating that the first model is trained according to the current position information of the vehicle.
15. The apparatus for planning a driving route of a vehicle according to any one of claims 9 to 14, further comprising:
and the processing module is used for determining the similarity of the same kind of information in the first information and the second information, and the linear weighted sum of the similarity of the same kind of information is used for determining the similarity of the first information and the second information.
16. The apparatus for planning a vehicle travel route according to any one of claims 9 to 15, wherein the first model comprises a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN).
17. A system for planning a driving route of a vehicle, the system comprising the vehicle and a cloud-side device,
the vehicle is used for acquiring first information, wherein the first information comprises one or more of position information, lane information, navigation information and obstacle information of the vehicle;
the vehicle is further used for sending the first information and the driving mode of the vehicle to the cloud-side equipment;
the cloud-side device is configured to plan a driving route for the vehicle according to output data of a first model when it is determined that the vehicle is in an automatic driving mode, wherein the first information is input data of the first model, the first model is a model obtained by training on second information acquired in a manual driving mode with the driving track of the vehicle in the manual driving mode as a training target, the type of information included in the second information is consistent with the type of information included in the first information, and the similarity between the first information and the second information meets a preset condition, the similarity between the first information and the second information meeting the preset condition being used for indicating that a first scene and a second scene are the same or similar, the first scene being a scene where the vehicle is located when the vehicle acquires the first information, and the second scene being a scene where the vehicle is located when the vehicle acquires the second information;
the cloud-side device is further configured to train the first model according to the first information when it is determined that the vehicle is in a manual driving mode, and output data of the first model is used for planning a driving route for the vehicle.
18. The system for planning a vehicle travel route according to claim 17, wherein the cloud-side device is further configured to:
evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, and each group of information in the M groups of information is respectively used for training a model;
and when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is input data of a first model.
19. The system for planning a vehicle travel route according to claim 17, wherein the cloud-side device is further configured to:
evaluating the similarity between the first information and M groups of information, wherein each group of information in the M groups of information is information acquired when the vehicle is in a manual driving mode, the similarity between any two groups of information in the M groups of information does not meet the preset condition, each group of information in the M groups of information is respectively used for training a model, the second information in the M groups of information is used for training the first model, and M is a positive integer;
and when the similarity between the first information and the second information in the M groups of information meets the preset condition and the similarity between the second information and the first information in the M groups of information is the maximum, the first information is used for training the first model to obtain the updated first model.
20. An apparatus for planning a vehicle travel route, comprising a processor coupled to a memory, the memory storing program instructions that, when executed by the processor, implement the method of any of claims 1 to 8.
21. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 8.
22. An intelligent car, characterized in that the intelligent car comprises a processing circuit and a storage circuit, the processing circuit and the storage circuit being configured to perform the method of any of claims 1 to 8.
CN202010698231.XA 2020-07-20 2020-07-20 Method for planning vehicle driving route and intelligent automobile Pending CN113954858A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010698231.XA CN113954858A (en) 2020-07-20 2020-07-20 Method for planning vehicle driving route and intelligent automobile
PCT/CN2021/084330 WO2022016901A1 (en) 2020-07-20 2021-03-31 Method for planning driving route of vehicle, and intelligent vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010698231.XA CN113954858A (en) 2020-07-20 2020-07-20 Method for planning vehicle driving route and intelligent automobile

Publications (1)

Publication Number Publication Date
CN113954858A true CN113954858A (en) 2022-01-21

Family

ID=79459534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010698231.XA Pending CN113954858A (en) 2020-07-20 2020-07-20 Method for planning vehicle driving route and intelligent automobile

Country Status (2)

Country Link
CN (1) CN113954858A (en)
WO (1) WO2022016901A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293301A (en) * 2022-10-09 2022-11-04 腾讯科技(深圳)有限公司 Estimation method and device for lane change direction of vehicle and storage medium
CN117804490A (en) * 2024-02-28 2024-04-02 四川交通职业技术学院 Comprehensive planning method and device for vehicle running route

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115665699B (en) * 2022-12-27 2023-03-28 博信通信股份有限公司 Multi-scene signal coverage optimization method, device, equipment and medium
CN116206441A (en) * 2022-12-30 2023-06-02 云控智行科技有限公司 Optimization method, device, equipment and medium of automatic driving planning model

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106080590A (en) * 2016-06-12 2016-11-09 百度在线网络技术(北京)有限公司 Control method for vehicle and device and the acquisition methods of decision model and device
CN106338988A (en) * 2015-07-06 2017-01-18 丰田自动车株式会社 Control system of automated driving vehicle
CN107340773A (en) * 2017-06-26 2017-11-10 怀效宁 A kind of method of automatic driving vehicle user individual
CN108205830A (en) * 2016-12-20 2018-06-26 百度(美国)有限责任公司 Identify the personal method and system for driving preference for automatic driving vehicle
CN109085837A (en) * 2018-08-30 2018-12-25 百度在线网络技术(北京)有限公司 Control method for vehicle, device, computer equipment and storage medium
CN109109863A (en) * 2018-07-28 2019-01-01 华为技术有限公司 Smart machine and its control method, device
CN109491375A (en) * 2017-09-13 2019-03-19 百度(美国)有限责任公司 The path planning based on Driving Scene for automatic driving vehicle
CN109747659A (en) * 2018-11-26 2019-05-14 北京汽车集团有限公司 The control method and device of vehicle drive
CN109839937A (en) * 2019-03-12 2019-06-04 百度在线网络技术(北京)有限公司 Determine method, apparatus, the computer equipment of Vehicular automatic driving planning strategy
CN109857118A (en) * 2019-03-12 2019-06-07 百度在线网络技术(北京)有限公司 For planning the method, apparatus, equipment and storage medium of driving strategy
CN110325935A (en) * 2017-09-18 2019-10-11 百度时代网络技术(北京)有限公司 The lane guide line based on Driving Scene of path planning for automatic driving vehicle
US20190317512A1 (en) * 2018-04-17 2019-10-17 Baidu Usa Llc Method to evaluate trajectory candidates for autonomous driving vehicles (advs)
CN110733504A (en) * 2019-11-27 2020-01-31 禾多科技(北京)有限公司 Driving method of automatic driving vehicle with backup path
CN110893858A (en) * 2018-09-12 2020-03-20 华为技术有限公司 Intelligent driving method and intelligent driving system
CN111399490A (en) * 2018-12-27 2020-07-10 华为技术有限公司 Automatic driving method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8036425B2 (en) * 2008-06-26 2011-10-11 Billy Hou Neural network-controlled automatic tracking and recognizing system and method
CN106548645B (en) * 2016-11-03 2019-07-12 济南博图信息技术有限公司 Vehicle route optimization method and system based on deep learning
DE102016223830A1 (en) * 2016-11-30 2018-05-30 Robert Bosch Gmbh Method for operating an automated vehicle
CN109697875B (en) * 2017-10-23 2020-11-06 华为技术有限公司 Method and device for planning driving track
CN109492835B (en) * 2018-12-28 2021-02-19 东软睿驰汽车技术(沈阳)有限公司 Method for determining vehicle control information, method for training model and related device
CN109901574B (en) * 2019-01-28 2021-08-13 华为技术有限公司 Automatic driving method and device

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106338988A (en) * 2015-07-06 2017-01-18 丰田自动车株式会社 Control system of automated driving vehicle
CN106080590A (en) * 2016-06-12 2016-11-09 百度在线网络技术(北京)有限公司 Control method for vehicle and device and the acquisition methods of decision model and device
CN108205830A (en) * 2016-12-20 2018-06-26 百度(美国)有限责任公司 Identify the personal method and system for driving preference for automatic driving vehicle
CN107340773A (en) * 2017-06-26 2017-11-10 怀效宁 A kind of method of automatic driving vehicle user individual
CN109491375A (en) * 2017-09-13 2019-03-19 百度(美国)有限责任公司 The path planning based on Driving Scene for automatic driving vehicle
CN110325935A (en) * 2017-09-18 2019-10-11 百度时代网络技术(北京)有限公司 The lane guide line based on Driving Scene of path planning for automatic driving vehicle
US20190317512A1 (en) * 2018-04-17 2019-10-17 Baidu Usa Llc Method to evaluate trajectory candidates for autonomous driving vehicles (advs)
CN109109863A (en) * 2018-07-28 2019-01-01 华为技术有限公司 Smart machine and its control method, device
CN109085837A (en) * 2018-08-30 2018-12-25 百度在线网络技术(北京)有限公司 Control method for vehicle, device, computer equipment and storage medium
CN110893858A (en) * 2018-09-12 2020-03-20 华为技术有限公司 Intelligent driving method and intelligent driving system
CN109747659A (en) * 2018-11-26 2019-05-14 北京汽车集团有限公司 The control method and device of vehicle drive
CN111399490A (en) * 2018-12-27 2020-07-10 华为技术有限公司 Automatic driving method and device
CN109839937A (en) * 2019-03-12 2019-06-04 百度在线网络技术(北京)有限公司 Determine method, apparatus, the computer equipment of Vehicular automatic driving planning strategy
CN109857118A (en) * 2019-03-12 2019-06-07 百度在线网络技术(北京)有限公司 For planning the method, apparatus, equipment and storage medium of driving strategy
CN110733504A (en) * 2019-11-27 2020-01-31 禾多科技(北京)有限公司 Driving method of automatic driving vehicle with backup path

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293301A (en) * 2022-10-09 2022-11-04 腾讯科技(深圳)有限公司 Estimation method and device for lane change direction of vehicle and storage medium
CN115293301B (en) * 2022-10-09 2023-01-31 腾讯科技(深圳)有限公司 Estimation method and device for lane change direction of vehicle and storage medium
CN117804490A (en) * 2024-02-28 2024-04-02 四川交通职业技术学院 Comprehensive planning method and device for vehicle running route
CN117804490B (en) * 2024-02-28 2024-05-17 四川交通职业技术学院 Comprehensive planning method and device for vehicle running route

Also Published As

Publication number Publication date
WO2022016901A1 (en) 2022-01-27

Similar Documents

Publication Publication Date Title
CN110379193B (en) Behavior planning method and behavior planning device for automatic driving vehicle
CN110550029B (en) Obstacle avoiding method and device
CN109901574B (en) Automatic driving method and device
WO2021102955A1 (en) Path planning method for vehicle and path planning apparatus for vehicle
CN112230642B (en) Road travelable area reasoning method and device
CN110471411A (en) Automatic Pilot method and servomechanism
WO2022016901A1 (en) Method for planning driving route of vehicle, and intelligent vehicle
CN113156927A (en) Safety control method and safety control device for automatic driving vehicle
CN112512887B (en) Driving decision selection method and device
CN113492830A (en) Vehicle parking path planning method and related equipment
WO2022142839A1 (en) Image processing method and apparatus, and intelligent vehicle
CN114440908B (en) Method and device for planning driving path of vehicle, intelligent vehicle and storage medium
CN113835421A (en) Method and device for training driving behavior decision model
WO2022062825A1 (en) Vehicle control method, device, and vehicle
WO2022017307A1 (en) Autonomous driving scenario generation method, apparatus and system
CN111950726A (en) Decision method based on multi-task learning, decision model training method and device
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
CN112810603B (en) Positioning method and related product
CN113552867A (en) Planning method of motion trail and wheel type mobile equipment
CN115042821A (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN114261404A (en) Automatic driving method and related device
CN112829762A (en) Vehicle running speed generation method and related equipment
CN113859265A (en) Reminding method and device in driving process
CN114549610A (en) Point cloud data processing method and related device
CN113799794A (en) Method and device for planning longitudinal motion parameters of vehicle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination