CN117657190A - Automatic driving control method, device, electronic equipment and storage medium


Info

Publication number
CN117657190A
Authority
CN
China
Prior art keywords
layer
decision
automatic driving
cognitive
vehicle
Prior art date
Legal status
Pending
Application number
CN202311571498.2A
Other languages
Chinese (zh)
Inventor
周熙钦
崔迎杰
梁振宝
陈勇
符茂磊
万雷鸣
涂正
Current Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Geely Automobile Research Institute Ningbo Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202311571498.2A
Publication of CN117657190A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application discloses an automatic driving control method, an automatic driving control device, electronic equipment and a storage medium. The automatic driving control method adopts a cognitive decision model that includes a cognitive layer and a plurality of decision layers connected in series. The automatic driving control method includes the steps of: acquiring in-vehicle and out-of-vehicle monitoring data; extracting environmental cognition characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer; and sequentially inputting the environmental cognition characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine automatic driving control parameters. The method and the device improve the interactivity between the automatic driving vehicle and the user, reduce negative emotions of the user such as confusion, panic and anxiety, and improve the user experience.

Description

Automatic driving control method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to an automatic driving control method, an automatic driving control device, an electronic device, and a storage medium.
Background
With continuous advances in science and technology and continuous breakthroughs in computing power, automatic driving has developed rapidly. It can provide people with safer, more convenient and more efficient modes of travel, while also profoundly affecting the overall traffic system and urban planning.
Mainstream automatic driving systems fall into two main types: rule-based sub-module architectures and data-driven end-to-end perception-decision architectures. Although rule-based sub-module architectures have reached mass production by virtue of low computing-power requirements, simple deployment and an interpretable process, they struggle to cover the full scene domain, and the combined result of the sub-module solutions is not a globally optimal solution. The end-to-end scheme takes raw sensor data as input and generates planning and/or low-level control actions as output; it supports joint, global optimization and yields more accurate automatic driving decisions.
However, the end-to-end perception-decision architecture is poorly interpretable and cannot support real-time human-machine interaction with the automatic driving system. When the vehicle executes a sudden maneuver, a user riding in the automatic driving vehicle cannot understand its driving behavior and develops negative emotions such as confusion, panic and anxiety; at the same time, it is difficult for the user to express his or her own demands in the intelligent driving domain, so the user experience is poor.
Disclosure of Invention
The main purpose of the application is to provide an automatic driving control method, an automatic driving control device, an electronic device and a storage medium, and aims to solve the technical problem in the related art that the end-to-end perception-decision scheme is poorly interpretable because its decisions are not layered.
In order to achieve the above objective, the present application provides an automatic driving control method, which adopts a cognitive decision model, wherein the cognitive decision model includes a cognitive layer and a plurality of decision layers, and each decision layer is connected in series; the automatic driving control method includes the steps of:
acquiring in-vehicle and out-of-vehicle monitoring data;
extracting environmental cognitive characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
and sequentially inputting the environmental cognition characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine automatic driving control parameters.
Optionally, the decision layer includes a first decision layer and a second decision layer; the step of inputting the environmental cognition characteristics and the user demand characteristics into each decision layer in turn to carry out multi-layer decision reasoning, and determining the automatic driving control parameters comprises the following steps:
determining a task sequence to be executed based on the environmental cognitive characteristics and the user demand characteristics through the first decision layer, wherein the task sequence to be executed comprises at least one task to be executed;
determining a target task to be executed from the tasks to be executed in sequence, and inputting the target task to be executed into the second decision layer;
and determining an automatic driving control parameter based on each target task to be executed and the environmental cognitive characteristics corresponding to each target task to be executed through the second decision layer.
Optionally, the step of determining, by the second decision layer, an autopilot control parameter based on each target task to be performed and the environmental cognitive feature corresponding to each target task to be performed includes:
determining, by the second decision layer, an autopilot control parameter based on the target task to be performed and the environmental awareness feature;
determining the target task to be executed as an executed task;
acquiring new in-vehicle and out-of-vehicle monitoring data, and extracting new environmental cognitive characteristics from the new monitoring data through the cognitive layer;
and returning to the step of determining a target task to be executed from the tasks to be executed in turn and inputting the target task to be executed into the second decision layer.
Optionally, the second decision layer includes an inference layer and a decoding layer; the step of determining, by the second decision layer, an autopilot control parameter based on each of the target tasks to be performed and the environmental awareness features corresponding to each of the target tasks to be performed includes:
acquiring environmental cognitive characteristics and vehicle running state data matched in time with each target task to be executed;
determining driving behavior characteristics corresponding to each target task to be executed based on the environmental cognitive characteristics corresponding to each target task to be executed through the reasoning layer;
and decoding, by the decoding layer, an automatic driving control parameter based on each target task to be executed, driving behavior characteristics corresponding to each target task to be executed and vehicle running state data corresponding to each target task to be executed.
Optionally, after the step of sequentially inputting the environmental awareness feature and the user demand feature into each decision layer to perform multi-layer decision-making reasoning and determining the autopilot control parameter, the method further includes:
generating a driving behavior interpretation according to the in-vehicle and out-of-vehicle monitoring data and the automatic driving control parameters;
outputting the driving behavior interpretation.
Optionally, the cognitive layer includes a user feature extraction model; the in-vehicle and out-of-vehicle monitoring data comprise user operation data and user state data;
the step of extracting the user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer includes the following steps:
extracting user explicit demand characteristics from the user operation data through the user feature extraction model, and/or extracting user implicit demand characteristics from the user state data through the user feature extraction model.
Optionally, the cognitive layer comprises a scene cognitive model, a map building model, a risk target detection tracking model and a motion prediction model; the environment cognition features comprise scene cognition features, map features, risk target detection tracking features and risk target motion features;
the step of extracting the environmental cognitive characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer comprises the following steps of:
extracting scene cognitive characteristics from the in-vehicle and out-of-vehicle monitoring data through the scene cognitive model, extracting map characteristics from the in-vehicle and out-of-vehicle monitoring data through the map building model, and extracting risk target detection tracking characteristics from the in-vehicle and out-of-vehicle monitoring data through the risk target detection tracking model;
and predicting the risk target motion based on the scene cognitive feature, the map feature and the risk target detection tracking feature through the motion prediction model to obtain the risk target motion feature, wherein the motion prediction model adopts an attention mechanism.
The application also provides an automatic driving control device, the automatic driving control device is provided with a cognitive decision model, the cognitive decision model comprises a cognitive layer and a plurality of decision layers, each decision layer is connected in series, and the automatic driving control device comprises:
the first acquisition module is used for acquiring the monitoring data inside and outside the vehicle;
the first cognition module is used for extracting environmental cognition characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognition layer;
and the multi-layer decision module is used for sequentially inputting the environmental cognition characteristics and the user demand characteristics into each decision layer to carry out multi-layer decision reasoning and determine automatic driving control parameters.
The application also provides an electronic device, which is a physical device and includes: a memory, a processor, and a program of the automatic driving control method stored in the memory and executable on the processor, where the program of the automatic driving control method, when executed by the processor, implements the steps of the automatic driving control method described above.
The present application also provides a storage medium, which is a computer-readable storage medium storing a program for implementing the automatic driving control method, where the program, when executed by a processor, implements the steps of the automatic driving control method described above.
The application provides an automatic driving control method, an automatic driving control device, an electronic device and a storage medium. The automatic driving control method adopts a cognitive decision model that includes a cognitive layer and a plurality of decision layers connected in series. First, in-vehicle and out-of-vehicle monitoring data are acquired, and environmental cognition characteristics and user demand characteristics are extracted from them through the cognitive layer. Then the environmental cognition characteristics and the user demand characteristics are input into each decision layer in turn for multi-layer decision reasoning, and the automatic driving control parameters are determined; in this way, the environmental cognition characteristics and the user demand characteristics are fused for joint multi-layer decision reasoning. On the one hand, the user demand is realized progressively, layer by layer, a process that conforms to the user's own logical reasoning, so the determined automatic driving control parameters are more orderly, more logical, and easier for the user to understand. On the other hand, decisions are made objectively from the environmental cognition characteristics to ensure that the automatic driving vehicle drives safely, and the control parameters that guarantee safe driving are usually allowed to fluctuate within a certain safety range: for example, the vehicle can pass through an intersection safely within a certain speed range, and within a certain range of turning angles it will not collide with other traffic participants. By combining the user demand characteristics, a comfort range suited to the user's demands can then be found within this safety range. Because the user's demands are catered to, interactivity with the user improves and the final driving behavior of the automatic driving vehicle becomes easier for the user to understand, thereby improving the interpretability of the end-to-end perception-decision architecture, reducing negative emotions such as confusion, panic and anxiety, and improving the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described, and it is obvious to those skilled in the art that other drawings can be obtained according to these drawings without inventive effort.
FIG. 1 is a flow chart of a first embodiment of an autopilot control method of the present application;
FIG. 2 is a schematic structural view of an embodiment of an autopilot control apparatus according to the present disclosure;
FIG. 3 is a flow chart of a second embodiment of the autopilot control method of the present application;
fig. 4 is a schematic structural diagram of an autopilot control apparatus according to an embodiment of the present application;
fig. 5 is a schematic device structure diagram of a hardware operating environment related to an autopilot control method in an embodiment of the present application.
The implementation, functional features and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
In order to make the above objects, features and advantages of the present invention more comprehensible, the following description of the embodiments accompanied with the accompanying drawings will be given in detail. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present invention without making any inventive effort, are intended to be within the scope of the present invention.
Example 1
Referring to fig. 1, in a first embodiment of an automatic driving control method of the present application, the automatic driving control method adopts a cognitive decision model, where the cognitive decision model includes a cognitive layer and a plurality of decision layers, and each of the decision layers is connected in series, the automatic driving control method includes the following steps:
s10, acquiring in-vehicle and out-of-vehicle monitoring data;
the main execution body of the method of the embodiment may be an automatic driving control device, or may be an automatic driving control terminal device or a server, and the automatic driving control device is exemplified by the automatic driving control device, and the automatic driving control device may be integrated on a terminal device such as a vehicle, a vehicle-mounted terminal, a smart phone, a computer, etc. with a data processing function.
In this embodiment, it should be noted that the automatic driving control method is applied to a cognitive decision model, where the cognitive decision model is used to implement end-to-end driving behavior control of an automatic driving vehicle. The cognitive decision model adopts an attention mechanism; it may be a deep learning model based on a Transformer structure, or another deep learning neural network model employing an attention mechanism, and may be designed according to the actual situation.
The cognitive decision model comprises at least a cognitive layer and a decision layer. The cognitive layer performs feature extraction, fusion, encoding, reasoning and the like on the acquired in-vehicle and out-of-vehicle monitoring data to obtain the risk features required by the decision layer. The decision layer plans the driving behavior of the automatic driving vehicle for the next time step based on the risk features passed from the cognitive layer, so that the automatic driving vehicle can avoid risks and drive safely to the destination. The decision layer has a multi-layer serial structure: the environmental cognition features and the user demand features are input into the first layer to obtain the first layer's output, the first layer's output is then input into the second layer, and so on, layer by layer, until the last layer outputs the automatic driving control parameters. In this way, reasoning and decision-making can be triggered by the user demand and carried out progressively, layer by layer, until the final automatic driving control parameters are obtained. This process conforms to the user's own logical reasoning, so the determined automatic driving control parameters are more orderly, more logical, and easier for the user to understand.
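To make the serial structure concrete, the following is a minimal Python sketch of a cognitive layer feeding serially connected decision layers; all names here are hypothetical and not taken from the patent:

```python
from typing import Any, Callable, List, Tuple

class CognitiveDecisionModel:
    """Sketch: a cognitive layer followed by serially connected decision layers."""

    def __init__(self,
                 cognitive_layer: Callable[[Any], Tuple[Any, Any]],
                 decision_layers: List[Callable[[Any], Any]]):
        self.cognitive_layer = cognitive_layer   # extracts features from monitoring data
        self.decision_layers = decision_layers   # layer i's output is layer i+1's input

    def infer(self, monitoring_data: Any) -> Any:
        # Cognitive layer: environmental cognition features + user demand features.
        env_features, demand_features = self.cognitive_layer(monitoring_data)
        x = (env_features, demand_features)
        # Multi-layer decision reasoning: progress layer by layer until the
        # last layer outputs the automatic driving control parameters.
        for layer in self.decision_layers:
            x = layer(x)
        return x  # automatic driving control parameters
```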
The in-vehicle and out-of-vehicle monitoring data refer to the data required for automatic driving control and comprise out-of-vehicle monitoring data and cabin monitoring data; they may be in one or more modalities such as image data, text data, sound data and signal data. The out-of-vehicle monitoring data may be exterior image data collected by cameras, point cloud data collected by lidar and millimeter-wave radar, vehicle position data collected by the vehicle positioning system, exterior sound data collected by microphones, traffic-light countdown data obtained through a communication module, and the like. The cabin monitoring data may be vehicle driving data such as speed and acceleration acquired from a vehicle controller or the controller area network bus, cabin image data and driver image data collected by cameras, cabin sound data collected by microphones, text data converted from collected speech or extracted from collected images, user demand signal data collected through a human-machine interaction interface, and the like. The in-vehicle and out-of-vehicle monitoring data can be collected in real time, so that they carry real-time information at the current moment about things that may change, such as the states of other traffic participants, the state of the driver and the road conditions.
As an example, step S10 includes: collecting sensor data at the current moment as the in-vehicle and out-of-vehicle monitoring data through one or more sensors arranged on the automatic driving vehicle; the communication module on the automatic driving vehicle may also communicate with external devices to obtain external data as in-vehicle and out-of-vehicle monitoring data, where the external data include traffic light signals, traffic accident information, roadside monitoring data collected by roadside monitoring devices, and the like.
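Purely as an illustration, the kinds of monitoring data enumerated above could be bundled per time step roughly as follows; the field names are assumptions, since the patent does not prescribe a data schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class MonitoringFrame:
    """One time-stamped bundle of in-vehicle and out-of-vehicle monitoring data.

    Field names are illustrative only; the patent does not define a schema.
    """
    timestamp: float
    camera_images: List[bytes] = field(default_factory=list)    # exterior/cabin cameras
    lidar_points: Optional[bytes] = None                         # lidar / mm-wave radar point clouds
    vehicle_position: Optional[Tuple[float, float]] = None       # from the positioning system
    traffic_light_countdown: Optional[int] = None                # seconds, via the communication module
    bus_signals: Dict[str, float] = field(default_factory=dict)  # speed, acceleration from the CAN bus
    cabin_audio: Optional[bytes] = None                          # microphone audio, incl. user speech
    hmi_requests: List[str] = field(default_factory=list)        # demands entered on the HMI
```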
Step S20, extracting environmental cognitive characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
In this embodiment, it should be noted that different models may be used to extract features from in-vehicle and out-of-vehicle monitoring data of different modalities; for example, a convolutional neural network may be used for image data and a long short-term memory (LSTM) network for time-series data. Different models may also be used to process the features so as to extract the different risk features required by the decision layer. Multiple models may therefore be deployed in the cognitive layer for feature processing, and the specific model structures and types may be determined according to the actual situation.
The environmental awareness features characterize the automatic driving vehicle's cognition of the current environment, such as cognition of traffic facilities, ground obstacles, the driving state of the host vehicle, other traffic participants, the risk of the scene, and the risk items present in the scene. The main purpose of automatic driving behavior control is to enable the automatic driving vehicle to reach the destination safely, that is, to control the vehicle to travel to the destination while avoiding risks. In one embodiment, this may be implemented by predicting the high-risk travel area and/or low-risk travel area of the automatic driving vehicle at the next time step, so as to plan driving behavior that avoids the high-risk travel area or passes through the low-risk travel area. The environmental cognition features are an objective representation of the vehicle's current environment, so the more accurately they are extracted, the more accurate the vehicle's current risk judgment is, and the safer the automatic driving control. The environmental cognition features may include scene cognition features, map features, risk target detection tracking features, risk target motion features, low-risk travel area features and the like, and may also include other features. The cognitive layer may have a multi-layer structure, that is, the input of any one cognitive sub-layer may be the output of other cognitive sub-layers, and the output of any one cognitive sub-layer may be the input of others. In this way, fusion and reasoning over multiple features can be realized, the relevance and dependency among the features extracted by the cognitive layer can be captured, and a more comprehensive understanding of, and deeper logical reasoning about, the actual driving situation can be achieved.
The user demand features characterize the demands of a user riding in the automatic driving vehicle, for example, the user's demand for a destination, for a particular route to the destination, for arriving in time when the user is in a hurry, or the user's desire for smooth, gentle driving. User demand information can be captured by detecting user operations, collecting user images and the like, and the user demand features can then be extracted from it. For example, user images may be collected by a camera; if image recognition shows that the user is currently sleeping, the sleeping state usually implies a demand for the vehicle to run smoothly. As another example, user voice may be collected by a microphone; if the user is recognized saying "hurry up, hurry up", the user can be known to have a demand for acceleration.
The cognitive layer may employ an attention mechanism, and in one embodiment may adopt a Transformer structure. An attention mechanism is well suited to feature extraction, fusion, encoding and reasoning over the many types and modalities of in-vehicle and out-of-vehicle monitoring data: it extracts real-time information about things that may change at the current moment, forms an information sequence from this information together with historical information from a period before the current moment, captures the relevance and dependency among different types of information, performs feature reasoning, and extracts risk features carrying deep logical information for the decision layer to control the automatic driving vehicle. In this way, the mutual influence among changing information, multiple risk items and user demands can be fully considered when predicting the driving risk of the automatic driving vehicle, improving prediction accuracy, so that the vehicle can avoid more risks while better meeting user demands, and the safety and comfort of automatic driving control are improved. The relevance and dependency among different types of information can strongly affect the automatic driving control of an automatic driving vehicle; for example, interactions occur between traffic participants, between traffic participants and traffic facilities, and between traffic participants and the scene. Consider an intersection where the host vehicle travels east-west, a second vehicle travels north-south, and pedestrians walk east-west nearby. Without considering interaction between traffic participants, one would assume the host vehicle and the second vehicle keep their current speeds; but if the second vehicle decelerates or stops to avoid the pedestrians, the two vehicles may collide, so once this interaction is considered the host vehicle still needs to decelerate or plan another route. If instead the second vehicle is in a turning lane, then considering the interaction between traffic participants and traffic facilities, the host vehicle need not decelerate if the second vehicle's turning trajectory does not intersect its own. Furthermore, considering the interaction between traffic participants and the scene, the complexity of the intersection scene raises the risk factors of the people and vehicles in it, so it may still be necessary to slow the vehicle down.
As an example, step S20 includes: inputting the in-vehicle and out-of-vehicle monitoring data into the cognitive layer and performing multi-level progressive feature processing, such as feature extraction, feature fusion, feature encoding and feature reasoning, to obtain the cognitive layer features, which include at least the environmental cognition features and the user demand features.
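A minimal PyTorch-style sketch of attention-based fusion over multimodal features follows. It assumes each modality has already been embedded to a common dimension by its own extractor, and the class name and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of attention-based fusion of multimodal monitoring features.

    Assumes each modality (image, point cloud, audio, CAN signals) was
    already embedded to a common dimension by its own extractor.
    """

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, modality_tokens: torch.Tensor) -> torch.Tensor:
        # modality_tokens: (batch, num_tokens, dim). Self-attention lets every
        # token attend to every other, capturing cross-modal dependencies.
        fused, _ = self.attn(modality_tokens, modality_tokens, modality_tokens)
        return self.norm(fused + modality_tokens)

# Usage: fuse 4 modality tokens for a batch of 2 frames.
tokens = torch.randn(2, 4, 256)
features = CrossModalFusion()(tokens)  # shape (2, 4, 256)
```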
Optionally, the cognitive layer includes a user feature extraction model; the in-vehicle and out-of-vehicle monitoring data comprise user operation data and user state data;
the step of extracting the user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer includes the following steps:
extracting user explicit demand characteristics from the user operation data through the user feature extraction model, and/or extracting user implicit demand characteristics from the user state data through the user feature extraction model.
In this embodiment, the user operation data refer to data generated by the user actively interacting with the automatic driving vehicle, for example, issuing a voice command to the vehicle or pressing a button on the display interface of the vehicle-mounted terminal. The user state data refer to user-related information actively collected by the automatic driving vehicle. For example, when the user falls asleep, the user is not actively interacting with the vehicle; user image data can then be collected by a camera and the sleeping state recognized. As another example, when the user is on a phone call with a friend and expresses feeling anxious and pressed for time, the user is not actively interacting with the vehicle either; the microphone can then collect the user's voice data, from which the anxious, hurried state is recognized.
As an example, the user operation data may be input into the user feature extraction model for multi-level progressive feature processing, such as feature extraction, fusion, encoding and reasoning, to obtain the user explicit demand features; and/or the user state data may be input into the user feature extraction model for the same multi-level progressive feature processing to obtain the user implicit demand features.
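The explicit/implicit distinction can be illustrated with a toy rule-based sketch; the real extraction model is a learned network, and the function names, dictionary keys and demand labels below are hypothetical:

```python
from typing import Optional

def extract_explicit_demand(operation_data: dict) -> Optional[str]:
    """Explicit demands come from active interaction (voice commands, HMI buttons)."""
    if "hurry up" in operation_data.get("voice_text", ""):
        return "accelerate"                      # user verbally asked to go faster
    return operation_data.get("hmi_request")     # e.g. a destination tapped on screen

def extract_implicit_demand(state_data: dict) -> Optional[str]:
    """Implicit demands are inferred from passively observed user state."""
    if state_data.get("user_state") == "sleeping":
        return "smooth_driving"                  # a sleeping user implies gentle driving
    if state_data.get("phone_call_sentiment") == "anxious":
        return "minimize_travel_time"            # anxiety about time implies hurrying
    return None

print(extract_explicit_demand({"voice_text": "hurry up, hurry up"}))  # accelerate
print(extract_implicit_demand({"user_state": "sleeping"}))            # smooth_driving
```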
In this embodiment, the user may actively interact with the automatic driving vehicle to raise demands, and the vehicle responds to the interaction so as to satisfy them, improving the interactivity between the automatic driving vehicle and the user. The vehicle may also actively discover user demands and proactively adopt driving behavior that better meets them, reducing the user's discomfort.
Optionally, the cognitive layer comprises a scene cognitive model, a map building model, a risk target detection tracking model and a motion prediction model; the environment cognition features comprise scene cognition features, map features, risk target detection tracking features and risk target motion features;
The step of extracting the environmental cognitive characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer comprises the following steps of:
step A10, extracting scene cognitive characteristics from the in-vehicle and out-of-vehicle monitoring data through the scene cognitive model, extracting map characteristics from the in-vehicle and out-of-vehicle monitoring data through the map building model, and extracting risk target detection tracking characteristics from the in-vehicle and out-of-vehicle monitoring data through the risk target detection tracking model;
and step A20, performing risk target motion prediction through the motion prediction model based on the scene cognitive characteristics, the map characteristics and the risk target detection tracking characteristics to obtain risk target motion characteristics, wherein the motion prediction model adopts an attention mechanism.
In this embodiment, it should be noted that the cognitive layer includes a scene cognitive model, a map building model, a risk target detection tracking model and a motion prediction model, where a risk target is an object that may pose a risk to the running of the automatic driving vehicle, such as one or more traffic participants or pieces of traffic infrastructure. The scene cognition features, map features and risk target detection tracking features output by the scene cognitive model, the map building model and the risk target detection tracking model all influence the prediction of risk target motion at the next time step, and a degree of relevance and dependency exists among them. They can therefore serve as inputs to the motion prediction model, which adopts an attention mechanism to capture the relevance and dependency among these features and then performs further feature fusion, encoding and reasoning to determine the risk target motion features of the risk targets.
The risk features include the scene cognition features, map features, risk target detection tracking features and risk target motion features. The scene cognition features characterize risk cognition of the scene in which the vehicle is currently located, such as attention distribution and semantic understanding of the risk items present in the scene. The map features characterize map information of the scene, such as ground facilities and road elements. The risk target detection tracking features characterize the identities of risk targets and their position information over a time range before the current moment. The risk target motion features characterize the motion information of risk targets over a time range after the current moment; the motion information may include position information, trajectory information, motion intent, motion trends, and the like.
As an example, steps A10 to A20 include: inputting the in-vehicle and out-of-vehicle monitoring data into the scene cognitive model and performing multi-level progressive feature processing, such as feature extraction, fusion, encoding and reasoning, to extract the scene cognition features; synchronously, inputting the in-vehicle and out-of-vehicle monitoring data into the map building model for the same feature processing to extract the map features; and synchronously, inputting the in-vehicle and out-of-vehicle monitoring data into the risk target detection tracking model for the same feature processing to extract the risk target detection tracking features. The scene cognition features, map features and risk target detection tracking features are then input into the motion prediction model, which uses an attention mechanism to capture the relevance and dependency among them; based on this relevance and dependency, risk target motion prediction can be performed more accurately to determine the risk target motion features of at least one risk target.
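The wiring of the four cognitive sub-models in steps A10 to A20 might look as follows; every function here is a trivial hypothetical stand-in for a trained network, included only to show the data flow:

```python
# Hypothetical stand-ins for the four sub-models; each would be a trained network.
def scene_model(frame):
    return {"scene": "intersection"}                # scene-level risk cognition

def map_model(frame):
    return {"lanes": 2, "stop_line": True}          # road elements, ground facilities

def tracker(frame):
    # identities and past positions of risk targets
    return [{"id": 0, "history": [(0.0, 0.0), (1.0, 0.5)]}]

def motion_predictor(scene_feat, map_feat, track_feats):
    # The patent's model fuses all three with attention; this placeholder just
    # extrapolates each track's last displacement one step ahead.
    out = []
    for t in track_feats:
        (x0, y0), (x1, y1) = t["history"][-2], t["history"][-1]
        out.append({"id": t["id"], "next": (2 * x1 - x0, 2 * y1 - y0)})
    return out

def cognitive_layer(frame):
    """Steps A10-A20: extract three feature sets in parallel, then predict motion."""
    scene_feat = scene_model(frame)
    map_feat = map_model(frame)
    track_feats = tracker(frame)
    motion_feats = motion_predictor(scene_feat, map_feat, track_feats)
    return scene_feat, map_feat, track_feats, motion_feats

print(cognitive_layer({})[3])  # [{'id': 0, 'next': (2.0, 1.0)}]
```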
In one embodiment, the risk target detection tracking feature includes at least one risk target detection tracking sub-feature; the motion prediction model comprises an attention layer, a multi-layer perceptron layer and a prediction layer;
the step of performing, by the motion prediction model, risk target motion prediction based on the scene cognitive feature, the map feature and the risk target detection tracking feature to obtain the risk target motion features, where the motion prediction model adopts an attention mechanism, includes:
step A21, inputting the scene cognition feature, the map feature and the risk target detection tracking feature into the attention layer, and determining, with an attention mechanism, the risk target interaction features among the risk target detection tracking sub-features, the scene interaction features between the risk target detection tracking sub-features and the scene cognition feature, and the map interaction features between the risk target detection tracking sub-features and the map feature;
step A22, inputting the risk target interaction features, the scene interaction features and the map interaction features into the multi-layer perceptron layer to obtain the scene risk features and motion query features;
And step A23, inputting the scene risk features and the motion query features into the prediction layer to perform risk target motion prediction and obtain the risk target motion features.
In this embodiment, it should be noted that a complex scene may involve multiple risk targets. The risk target detection tracking model can identify multiple risk targets and track them synchronously, so the resulting risk target detection tracking feature can contain information about multiple risk targets; that is, the risk target detection tracking feature includes at least one risk target detection tracking sub-feature. In this case, motion prediction may be performed separately for each risk target. The attention mechanism captures the relevance and dependency between each risk target and the other risk targets, between each risk target and the map elements, and between each risk target and the scene cognition; based on these, motion prediction can be carried out more accurately for each risk target, and each risk target's motion features determined.
The scene risk feature and the motion query feature characterize the existing risk of the current scene, and motion information of the risk target after the current moment can be further predicted based on the scene risk feature and the motion query feature.
As an example, steps A21 to A23 include: inputting the scene cognition feature, the map feature and the risk target detection tracking feature into the attention layer; using an attention mechanism to capture the dependency among the risk target detection tracking sub-features so as to determine the interaction between each risk target and the other risk targets, generating the risk target interaction features; capturing the dependency between each risk target detection tracking sub-feature and the scene cognition feature so as to determine the interaction between each risk target and the scene cognition, generating the scene interaction features; and capturing the dependency between each risk target detection tracking sub-feature and the map feature so as to determine the interaction between each risk target and the map elements, generating the map interaction features. The risk target interaction features, scene interaction features and map interaction features are then concatenated and input into the multi-layer perceptron layer to obtain the scene risk features and motion query features. Finally, the scene risk features and motion query features are input into the prediction layer, which predicts the motion of each risk target to obtain each risk target's motion features.
In an implementation manner, the multi-layer perceptron layer can also decode scene risk features and motion query features, and output the decoded scene risk prediction information and motion information of a risk target as intermediate results for a user to view.
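One plausible PyTorch realization of the attention layer, multi-layer perceptron layer and prediction layer of steps A21 to A23 is sketched below; all dimensions, the head count and the prediction horizon are assumptions:

```python
import torch
import torch.nn as nn

class MotionPredictionModel(nn.Module):
    """Sketch of steps A21-A23: attention layer -> MLP layer -> prediction layer."""

    def __init__(self, dim: int = 128, heads: int = 4, horizon: int = 12):
        super().__init__()
        self.target_attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # target-target
        self.scene_attn = nn.MultiheadAttention(dim, heads, batch_first=True)   # target-scene
        self.map_attn = nn.MultiheadAttention(dim, heads, batch_first=True)     # target-map
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 2 * dim))
        self.pred = nn.Linear(dim, horizon * 2)  # (x, y) per future step, per target

    def forward(self, targets, scene, map_feat):
        # targets: (B, N, dim) risk-target sub-features; scene: (B, S, dim); map_feat: (B, M, dim)
        tt, _ = self.target_attn(targets, targets, targets)   # risk target interaction features
        ts, _ = self.scene_attn(targets, scene, scene)        # scene interaction features
        tm, _ = self.map_attn(targets, map_feat, map_feat)    # map interaction features
        mixed = self.mlp(torch.cat([tt, ts, tm], dim=-1))     # concatenated -> MLP layer
        scene_risk, motion_query = mixed.chunk(2, dim=-1)     # scene risk + motion query features
        return self.pred(scene_risk + motion_query)           # risk target motion features

model = MotionPredictionModel()
out = model(torch.randn(1, 3, 128), torch.randn(1, 5, 128), torch.randn(1, 7, 128))
print(out.shape)  # torch.Size([1, 3, 24]): 3 risk targets, 12 future (x, y) points each
```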
And step S30, sequentially inputting the environmental cognition characteristics and the user demand characteristics into each decision layer to perform multi-layer decision-making reasoning, and determining automatic driving control parameters.
In this embodiment, it should be noted that the automatic driving control parameters may include a driving trajectory, vehicle control parameters, and the like, where the driving trajectory refers to the trajectory of the automatic driving vehicle over a period of time after the current moment (the vehicle can be controlled to drive along it), and the vehicle control parameters include speed, accelerator pedal opening, steering wheel angle, and the like.
As an example, the step S30 includes: and sequentially inputting the environmental awareness features and the user demand features into each decision layer to perform multi-layer decision reasoning, and determining automatic driving control parameters of the automatic driving vehicle at the current moment or within a period of time after the current moment.
Optionally, after the step of sequentially inputting the environmental awareness feature and the user demand feature into each decision layer to perform multi-layer decision-making reasoning and determining the autopilot control parameter, the method further includes:
Step S40, generating driving behavior interpretation according to the in-vehicle monitoring data and the automatic driving control parameters;
and step S50, outputting the driving behavior interpretation.
In this embodiment, it should be noted that although meeting the user's demands helps the user better understand the automatic driving decisions made by the cognitive decision model, the model synthesizes a large amount of in-vehicle and out-of-vehicle monitoring data and makes its decision after comprehensive, multi-level reasoning, so the logic chain between a decision and its root cause is long. The user only perceives the decision finally made and, as time passes, may no longer even be able to see the root cause that influenced it. For example, a cat suddenly runs across the road, causing the automatic driving vehicle to stop abruptly; by the time the user, feeling the sudden stop, looks for the cause, the cat may already have run off. For another example, suppose the predicted trajectory of a pedestrian is likely to intersect the vehicle's trajectory after some time, creating a collision risk that deceleration cannot avoid, so the vehicle accelerates and changes lanes to avoid the obstacle; when the user perceives the acceleration and lane change and looks for the cause, the user cannot predict that trajectory and, at least in the short term, cannot accurately know why the vehicle accelerated and changed lanes. In such cases it is still difficult for the user to understand the behavior of the automatic driving vehicle, producing negative emotions such as confusion, panic and anxiety, and the user experience is poor.
By generating and outputting the driving behavior interpretation, the method can actively interact with the user and explain the reason for the driving behavior, eliminating the user's confusion about it, reducing negative emotions such as confusion, panic and anxiety, increasing the user's trust in the automatic driving vehicle, and improving the user experience.
As an example, steps S40 to S50 include: decoding, from the in-vehicle and out-of-vehicle monitoring data, information about the influencing factors that led the decision layer to output the automatic driving control parameters; fusing the automatic driving control parameters with the information about their corresponding influencing factors to generate the driving behavior interpretation; and outputting the driving behavior interpretation as images, sound, text and the like, so that the user learns the reason for the driving behavior. For example, when the automatic driving vehicle is about to stop at a red-light intersection, the automatic driving control parameter may be "stop", and the influencing-factor information may be the red light at the intersection ahead and the position of the stop line; fusing the two generates the driving behavior interpretation "the vehicle stops before the stop line because of the red light ahead", which can then be output through a speaker as a voice broadcast.
In one embodiment, the driving behavior interpretation may be output before the automatic driving vehicle executes the automatic driving control parameters, so that the user is informed of the driving behavior the vehicle is about to make and is not startled when it happens suddenly.
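As a toy sketch of steps S40 and S50 (the template and dictionary keys are assumptions; the patent only requires that an interpretation be generated and output):

```python
def generate_behavior_interpretation(control_action: str, influencing_factors: dict) -> str:
    """Fuse a control action with its decoded influencing factors into an explanation.

    The sentence template and keys are hypothetical illustrations.
    """
    reason = influencing_factors.get("reason", "current road conditions")
    location = influencing_factors.get("location", "")
    return f"The vehicle will {control_action}{location} because of {reason}."

# Example from the description above: stopping for a red light.
msg = generate_behavior_interpretation(
    "stop", {"reason": "the red light ahead", "location": " before the stop line"})
print(msg)  # The vehicle will stop before the stop line because of the red light ahead.
```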
In an embodiment, referring to fig. 2, the automatic driving control device includes a human-machine interaction interface for the cabin domain and the intelligent driving domain, the cognitive decision model, and a vehicle driving behavior module. The human-machine interaction interface receives user demand information and the environmental cognition information obtained by the cabin domain and the intelligent driving domain; the cognitive decision model determines the automatic driving control parameters; and the vehicle driving behavior module controls the automatic driving vehicle to perform the corresponding driving behavior based on the automatic driving control parameters. After the driving behavior is performed, the user's feedback on it can be obtained through the human-machine interaction interface, and the cycle repeats, so that the automatic driving vehicle makes driving behavior suited to the user's demands.
In this embodiment, the automatic driving control method adopts a cognitive decision model that includes a cognitive layer and a plurality of decision layers connected in series. First, in-vehicle and out-of-vehicle monitoring data are acquired, and environmental cognition characteristics and user demand characteristics are extracted from them through the cognitive layer. Then the environmental cognition characteristics and the user demand characteristics are input into each decision layer in turn for multi-layer decision reasoning, and the automatic driving control parameters are determined; in this way, the environmental cognition characteristics and the user demand characteristics are fused for joint multi-layer decision reasoning. On the one hand, the user demand is realized progressively, layer by layer, a process that conforms to the user's own logical reasoning, so the determined automatic driving control parameters are more orderly, more logical, and easier for the user to understand. On the other hand, decisions are made objectively from the environmental cognition characteristics to ensure that the automatic driving vehicle drives safely, and the control parameters that guarantee safe driving are usually allowed to fluctuate within a certain safety range: for example, the vehicle can pass through an intersection safely within a certain speed range, and within a certain range of turning angles it will not collide with other traffic participants. By combining the user demand characteristics, a comfort range suited to the user's demands can then be found within this safety range. Because the user's demands are catered to, interactivity with the user improves and the final driving behavior of the automatic driving vehicle becomes easier for the user to understand, thereby improving the interpretability of the end-to-end perception-decision architecture, reducing negative emotions such as confusion, panic and anxiety, and improving the user experience.
Example two
Further, in the second embodiment of the present application, for content that is the same as or similar to the first embodiment, reference may be made to the description above; it will not be repeated below. On this basis, referring to fig. 3, the decision layers include a first decision layer and a second decision layer, and the step of inputting the environmental cognition characteristics and the user demand characteristics into each decision layer in turn to carry out multi-layer decision reasoning and determine the automatic driving control parameters includes the following steps:
step S31, through the first decision layer, decision reasoning is carried out based on the environmental cognitive characteristics and the user demand characteristics, and a task sequence to be executed is determined, wherein the task sequence to be executed comprises at least one task to be executed;
In this embodiment, it should be noted that user demands are varied, and some can be realized only through multiple steps. If only an overall control strategy corresponding to the user demand is output (for example, outputting a driving trajectory or a destination when the user wants to go to a certain place, or switching to a gentle driving mode when the user wants smooth driving), the user demand can be met to a certain extent, but this requires the cooperation of other functional modules of the vehicle, places certain requirements on the vehicle configuration, and involves a large amount of communication, which may affect the overall control efficiency of the automatic driving vehicle.
The decision layer is therefore divided into two levels. The first decision layer makes task-level decisions on the user demand, dividing it into a series of tasks to be executed. Because the captured user demands are scattered and subjective, the first decision layer must infer, from the user demand features combined with the environmental cognition features, the exact and complete driving task the user expects to be executed; a driving task that takes many steps or a long time to complete can be further divided into several tasks to be executed in sequence. The second decision layer then generates, in order, the automatic driving control parameters for each task to be executed under the current environment, according to the task and the environmental cognition features; the generated parameters can be determined, according to the actual situation of the automatic driving vehicle, as parameters that the vehicle can execute directly.
In an embodiment, the automatic driving control parameter may be a driving parameter such as an accelerator pedal opening, a steering wheel angle, etc., and outputting the driving parameter may implement finer control of the automatic driving vehicle, so as to implement comprehensive regulation and control of the automatic driving vehicle.
The task sequence to be executed is formed by arranging at least one task to be executed in time sequence, and the automatic driving vehicle can realize the user requirement by sequentially executing the tasks to be executed in the task sequence to be executed.
As an example, step S31 includes: inputting the environmental cognition features and the user demand features into the first decision layer for decision reasoning, and determining the sequence of tasks that the automatic driving vehicle needs to execute, in chronological order, over a period of time after the current moment.
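Purely for illustration, a rule-based stand-in for the first decision layer's task-level decision of step S31 might look like this (the real layer is a learned model; all keys and task names are hypothetical):

```python
from collections import deque

def first_decision_layer(demand_features: dict, env_features: dict) -> deque:
    """Decompose an inferred driving task into an ordered task sequence."""
    tasks = deque()
    if demand_features.get("destination"):
        tasks.append(("navigate_to", demand_features["destination"]))
    if env_features.get("approaching_intersection"):
        tasks.appendleft(("pass_intersection", None))    # nearer tasks execute first
    if demand_features.get("implicit") == "smooth_driving":
        tasks.appendleft(("switch_mode", "comfort"))
    return tasks  # time-ordered task sequence to be executed

seq = first_decision_layer({"destination": "airport", "implicit": "smooth_driving"},
                           {"approaching_intersection": True})
print(list(seq))
# [('switch_mode', 'comfort'), ('pass_intersection', None), ('navigate_to', 'airport')]
```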
Step S32, determining a target task to be executed from the tasks to be executed in turn, and inputting the target task to be executed into the second decision layer;
As an example, step S32 includes: after the first decision layer determines the task sequence to be executed, determining in turn the earliest task that has not yet been executed as the target task to be executed, and inputting it into the second decision layer so that the second decision layer can generate the corresponding automatic driving control parameters. Whether a task has been executed may be tracked by adding a flag to each task to distinguish executed from unexecuted tasks, or by deleting each task from the task sequence once it has been determined as a target task to be executed.
Step S33, determining, by the second decision layer, an autopilot control parameter based on each target task to be executed and the environmental cognitive characteristics corresponding to each target task to be executed.
In this embodiment, it should be noted that the environmental cognition features corresponding to the tasks to be executed may be the same as those used by the first decision layer to determine the task sequence, that is, the same environmental cognition features are used for the whole task sequence. Alternatively, the environmental cognition features corresponding to each target task may differ from those used by the first decision layer; that is, new environmental cognition features may be obtained to adapt to changes in the environment over time. For example, during automatic driving, the collection of in-vehicle and out-of-vehicle monitoring data and the extraction of environmental cognition features may run continuously, or at intervals, in parallel with the automatic driving control method; or they may be performed once before each time the second decision layer determines the automatic driving control parameters.
As an example, the step S33 includes: and inputting each target task to be executed and the corresponding environmental cognitive characteristics of each target task to be executed into the second decision layer to generate the corresponding automatic driving control parameters of each target task to be executed.
Optionally, the step of determining, by the second decision layer, an automatic driving control parameter based on each target task to be executed and the environmental cognitive characteristics corresponding to each target task to be executed includes:
step B10, determining automatic driving control parameters based on the target task to be executed and the environmental cognitive characteristics through the second decision layer;
step B20, determining the target task to be executed as the executed task;
step B30, acquiring new in-vehicle and out-of-vehicle monitoring data, and extracting new environmental cognitive characteristics from the new in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
and step B40, returning to execute the step of determining a target task to be executed from the tasks to be executed in turn and inputting the target task to be executed into the second decision layer.
As an example, steps B10 to B40 include: inputting the target task to be executed and the environmental cognitive characteristics into the second decision layer to generate the automatic driving control parameters corresponding to that task. In an implementation, once these parameters have been generated they may be decoded immediately so as to control the automatic driving vehicle to perform the corresponding driving behavior. The target task to be executed is then determined as an executed task; new in-vehicle and out-of-vehicle monitoring data are acquired and input into the cognitive layer, and new environmental cognitive characteristics are extracted to update the existing ones. After the environmental cognitive characteristics have been updated, the process returns to the step of determining a target task to be executed from the tasks to be executed in turn and inputting it into the second decision layer, so that the automatic driving vehicle continues with the next task until all tasks to be executed are completed.
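Viewed as code, steps B10 to B40 form a simple control loop. The sketch below is illustrative only and reuses the hypothetical helpers from the earlier sketches; `vehicle.sense()`, `vehicle.execute()`, and the layer interfaces are all assumptions:

```python
def run_task_sequence(cognitive_layer, second_layer, task_sequence, vehicle):
    """B10-B40: generate control parameters for the current target task,
    apply them, mark the task executed, refresh the environmental
    cognitive characteristics, and repeat until no tasks remain."""
    env_features = cognitive_layer.extract_env(vehicle.sense())
    while (target := next_target_by_flag(task_sequence)) is not None:
        # B10: the second decision layer produces the control parameters.
        params = second_layer.infer(target, env_features)
        vehicle.execute(params)       # decode and apply immediately
        # B20: determine the target task as an executed task.
        target.executed = True
        # B30: re-acquire in-vehicle and out-of-vehicle monitoring data
        # and extract new environmental cognitive characteristics.
        env_features = cognitive_layer.extract_env(vehicle.sense())
        # B40: the loop condition returns to selecting the next target task.
```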
Optionally, the second decision layer includes an inference layer and a decoding layer; the step of determining, by the second decision layer, an automatic driving control parameter based on each target task to be executed and the environmental cognitive characteristics corresponding to each target task to be executed includes:
step C10, acquiring environmental cognitive characteristics and vehicle running state data which are matched with the time of each target task to be executed;
in this embodiment, it should be noted that the second decision layer includes an inference layer for inferring driving behavior conforming to the current actual situation, and a decoding layer for decoding automatic driving control parameters that can be directly executed by the automatic driving vehicle.
In the process of executing each task to be executed, the external environment and the vehicle state may change continuously, so the external environment and the vehicle driving state need to be sensed continuously; time matching then allows the vehicle to make automatic driving decisions suited to the actual situation. In one embodiment, data is time-matched with the target task to be executed when its generation time or collection time is closest to the execution time of that task.
As an example, step C10 includes: acquiring the environmental cognitive characteristics and vehicle running state data that are time-matched with each target task to be executed.
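One plausible reading of this time matching, sketched in Python (the storage of features and state data as timestamped snapshots is an assumption):

```python
def match_by_time(snapshots, task_exec_time):
    """Return the snapshot whose timestamp is closest to the target
    task's execution time. `snapshots` is an iterable of
    (timestamp, data) pairs; ties resolve arbitrarily."""
    return min(snapshots, key=lambda s: abs(s[0] - task_exec_time))[1]
```

For example, `match_by_time(env_snapshots, task.exec_time)` would return the environmental cognitive characteristics generated closest to the task's planned execution time.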
Step C20, determining driving behavior characteristics corresponding to each target task to be executed based on the environmental cognitive characteristics corresponding to each target task to be executed through the reasoning layer;
as an example, step C20 includes: inputting each target task to be executed, together with its corresponding environmental cognitive characteristics, into the reasoning layer, performing feature reasoning, and determining the driving behavior characteristics corresponding to each target task to be executed.
And step C30, decoding, by the decoding layer, automatic driving control parameters based on each target task to be executed, driving behavior characteristics corresponding to each target task to be executed and vehicle running state data corresponding to each target task to be executed.
As an example, the step C30 includes: and inputting each target task to be executed, driving behavior characteristics corresponding to each target task to be executed and vehicle running state data corresponding to each target task to be executed into the decoding layer, and decoding automatic driving control parameters corresponding to each target task to be executed.
In an implementation, steps C10 to C30 may be performed cyclically: the first decision layer determines one target task to be executed at a time; the second decision layer obtains the environmental cognitive characteristics and vehicle running state data closest in time to that task; the inference layer determines the driving behavior characteristics from those environmental cognitive characteristics; and the decoding layer decodes the corresponding automatic driving control parameters from the target task, the driving behavior characteristics, and the vehicle running state data. A completion signal may then be transmitted to the first decision layer so that it determines the next target task to be executed, and the process repeats until all tasks to be executed have been executed.
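Putting steps C10 to C30 together, the second decision layer can be sketched as two stubbed sub-models built around the hypothetical time-matching helper above; all interfaces here are assumptions, not the actual model architecture of this application:

```python
class SecondDecisionLayer:
    """Inference layer plus decoding layer, per steps C10-C30."""
    def __init__(self, inference_model, decoding_model):
        self.inference_model = inference_model  # (task, env) -> behaviour features
        self.decoding_model = decoding_model    # (task, behaviour, state) -> control params

    def infer_for_task(self, task, env_snapshots, state_snapshots):
        # C10: time-match environmental features and vehicle state to the task.
        env = match_by_time(env_snapshots, task.exec_time)
        state = match_by_time(state_snapshots, task.exec_time)
        # C20: the inference layer derives driving behaviour features.
        behaviour = self.inference_model(task, env)
        # C30: the decoding layer emits directly executable control
        # parameters, e.g. {"throttle_opening": 0.12, "steering_angle": -0.03}.
        return self.decoding_model(task, behaviour, state)
```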
In this embodiment, hierarchical decision-making makes it possible to subdivide the driving tasks that the user requirement demands, so that execution of the whole task sequence to be executed is controlled end to end. On the one hand, no other functional modules of the vehicle are needed, which reduces the time spent on communication and improves the control efficiency of the automatic driving vehicle. On the other hand, for driving tasks with long execution times or many steps, the control process can be adjusted adaptively throughout, based on changes in the external environment and the vehicle state; on the basis of meeting the user requirement, this better accommodates continuously changing real conditions, reduces driving risk, and improves the safety of automatic driving.
Example III
Further, referring to fig. 4, a cognitive decision model is disposed on the automatic driving control device; the cognitive decision model includes a cognitive layer and a plurality of decision layers connected in series. The automatic driving control device includes:
a first acquisition module 10 for acquiring in-vehicle and out-of-vehicle monitoring data;
the first cognition module 20 is configured to extract environmental cognitive characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
and the hierarchical decision module 30 is used for sequentially inputting the environmental cognitive characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine automatic driving control parameters.
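As an illustration only, the three modules chain together roughly as follows; the module interfaces are simplified assumptions:

```python
class AutoDrivingControlDevice:
    """Example III as a pipeline: acquisition -> cognition -> hierarchical decision."""
    def __init__(self, acquirer, cognitive_layer, decision_layers):
        self.acquirer = acquirer                # first acquisition module
        self.cognitive_layer = cognitive_layer  # first cognition module
        self.decision_layers = decision_layers  # decision layers connected in series

    def step(self):
        monitor_data = self.acquirer.acquire()                 # in-vehicle and out-of-vehicle data
        result = self.cognitive_layer.extract(monitor_data)    # (env, demand) features
        for layer in self.decision_layers:                     # hierarchical decision module
            result = layer.infer(result)
        return result  # the last layer yields the automatic driving control parameters
```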
Optionally, the decision layer includes a first decision layer and a second decision layer; the hierarchical decision module 30 is further configured to:
determining a task sequence to be executed based on the environmental cognitive characteristics and the user demand characteristics through the first decision layer, wherein the task sequence to be executed comprises at least one task to be executed;
determining a target task to be executed from the tasks to be executed in sequence, and inputting the target task to be executed into the second decision layer;
And determining an automatic driving control parameter based on each target task to be executed and the corresponding environmental cognitive characteristics of each target task to be executed through the second decision layer.
Optionally, the hierarchical decision module 30 is further configured to:
determining, by the second decision layer, an automatic driving control parameter based on the target task to be executed and the environmental cognitive characteristics;
determining the target task to be executed as an executed task;
acquiring new in-vehicle and out-of-vehicle monitoring data, and extracting new environmental cognitive characteristics from the new in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
and returning to execute the step of determining a target task to be executed from the tasks to be executed in turn and inputting the target task to be executed into the second decision layer.
Optionally, the second decision layer includes an inference layer and a decoding layer; the hierarchical decision module 30 is further configured to:
acquiring environmental cognitive characteristics and vehicle running state data matched with the time of each target task to be executed;
determining driving behavior characteristics corresponding to each target task to be executed based on the environmental cognitive characteristics corresponding to each target task to be executed through the reasoning layer;
And decoding, by the decoding layer, an automatic driving control parameter based on each target task to be executed, driving behavior characteristics corresponding to each target task to be executed and vehicle running state data corresponding to each target task to be executed.
Optionally, after the operation of sequentially inputting the environmental cognitive characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine the automatic driving control parameters, the automatic driving control device further includes an interpretation module, where the interpretation module is configured to:
generating a driving behavior interpretation according to the in-vehicle and out-of-vehicle monitoring data and the automatic driving control parameters;
outputting the driving behavior interpretation.
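A toy sketch of such an interpretation module; the message template and field names are invented for illustration:

```python
def generate_interpretation(monitor_data: dict, control_params: dict) -> str:
    """Render the decision into a user-facing driving behavior
    interpretation, reducing confusion about what the vehicle is doing."""
    return ("Slowing down: a {kind} was detected {dist:.0f} m ahead, "
            "so the brake is applied at {brake:.0%}.").format(
        kind=monitor_data.get("nearest_risk_type", "risk target"),
        dist=monitor_data.get("nearest_risk_distance_m", 0.0),
        brake=control_params.get("brake_pressure", 0.0),
    )
```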
Optionally, the cognitive layer includes a user feature extraction model; the in-vehicle and out-of-vehicle monitoring data comprise user operation data and user state data;
the first cognition module 20 is further configured to:
and extracting user explicit demand characteristics from the user operation data through the user feature extraction model, and/or extracting user implicit demand characteristics from the user state data through the user feature extraction model.
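Sketched under the assumption that the user feature extraction model exposes two separate entry points (both names are hypothetical):

```python
def extract_user_demand(feature_model, user_operations=None, user_state=None):
    """Explicit demand characteristics come from user operations (e.g. a
    spoken destination or a touch input); implicit demand characteristics
    come from user state (e.g. drowsiness, posture)."""
    explicit = feature_model.from_operations(user_operations) if user_operations else None
    implicit = feature_model.from_state(user_state) if user_state else None
    return explicit, implicit
```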
Optionally, the cognitive layer comprises a scene cognitive model, a map building model, a risk target detection tracking model and a motion prediction model; the environment cognition features comprise scene cognition features, map features, risk target detection tracking features and risk target movement features;
The first cognition module 20 is further configured to:
extracting scene cognitive characteristics from the out-of-vehicle monitoring data through the scene cognitive model, extracting map characteristics from the out-of-vehicle monitoring data through the map building model, and extracting risk target detection tracking characteristics from the out-of-vehicle monitoring data through the risk target detection tracking model;
and predicting the risk target motion based on the scene cognitive feature, the map feature and the risk target detection tracking feature through the motion prediction model to obtain the risk target motion feature, wherein the motion prediction model adopts an attention mechanism.
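The attention mechanism that the motion prediction model is said to adopt can be illustrated with plain scaled dot-product attention; the fusion scheme below is an assumption for illustration, not the actual model of this application:

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention. Shapes: query (d,), keys (n, d),
    values (n, d_v)."""
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ values

def predict_risk_target_motion(scene_feat, map_feat, track_feats):
    """Hypothetical fusion: each tracked risk target attends over the
    scene and map context to yield one motion feature per target."""
    context = np.stack([scene_feat, map_feat])            # (2, d)
    return np.array([attention(t, context, context) for t in track_feats])
```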
By adopting the automatic driving control method of the above embodiments, the automatic driving control device provided by the invention solves the technical problem in the related art that end-to-end perception-decision schemes have poor interpretability because decisions are not layered. Compared with the related art, the benefits of the automatic driving control device provided by this embodiment of the present invention are the same as those of the automatic driving control method provided by the above embodiments, and the other technical features of the device are the same as those disclosed by the method of the above embodiments, so they are not described in detail here.
Example IV
Further, an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the automatic driving control method of the above embodiments.
Referring now to fig. 5, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as Bluetooth headsets, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage means into a random access memory (RAM). The RAM also stores various programs and data required for the operation of the electronic device. The processing means, the ROM, and the RAM are connected to each other via a bus, and an input/output (I/O) interface is also connected to the bus.
In general, the following systems may be connected to the I/O interface: input devices including, for example, touch screens, touch pads, keyboards, mice, image sensors, microphones, accelerometers, gyroscopes, etc.; output devices including, for example, liquid crystal displays (LCDs), speakers, vibrators, etc.; storage devices including, for example, magnetic tape, hard disks, etc.; and a communication device. The communication device may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 5 shows an electronic device having various systems, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device, or installed from a storage device, or installed from ROM. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by a processing device.
The electronic device provided by the invention adopts the automatic driving control method of the above embodiments, and thus solves the technical problem in the related art that end-to-end perception-decision schemes have poor interpretability because decisions are not layered. Compared with the related art, the benefits of the electronic device provided by this embodiment of the present invention are the same as those of the automatic driving control method provided by the above embodiments, and the other technical features of the electronic device are the same as those disclosed by the method of the above embodiments, so they are not described in detail here.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Example V
Further, the present embodiment provides a computer-readable storage medium having computer-readable program instructions stored thereon for executing the automatic driving control method in the above-described embodiment.
The computer-readable storage medium according to the embodiments of the present invention may be, for example, a USB flash drive, but is not limited thereto; it may be any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber-optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The above-described computer-readable storage medium may be contained in an electronic device; or may exist alone without being assembled into an electronic device.
The computer-readable storage medium carries one or more programs that, when executed by an electronic device, cause the electronic device to: acquire in-vehicle and out-of-vehicle monitoring data; extract environmental cognitive characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer; and sequentially input the environmental cognitive characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine automatic driving control parameters.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
The computer-readable storage medium provided by the invention stores computer-readable program instructions for executing the automatic driving control method, and thereby solves the technical problem in the related art that end-to-end perception-decision schemes have poor interpretability because decisions are not layered. Compared with the related art, the benefits of the computer-readable storage medium provided by the embodiments of the present invention are the same as those of the automatic driving control method provided by the above embodiments, and are not described here again.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims.

Claims (10)

1. An automatic driving control method is characterized in that the automatic driving control method adopts a cognitive decision model, wherein the cognitive decision model comprises a cognitive layer and a plurality of decision layers, and the decision layers are connected in series; the automatic driving control method includes the steps of:
acquiring in-vehicle and out-of-vehicle monitoring data;
extracting environmental cognitive characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
And sequentially inputting the environmental cognition characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine automatic driving control parameters.
2. The automatic driving control method according to claim 1, wherein the decision layer includes a first decision layer and a second decision layer; the step of inputting the environmental cognition characteristics and the user demand characteristics into each decision layer in turn to carry out multi-layer decision reasoning, and determining the automatic driving control parameters comprises the following steps:
determining a task sequence to be executed based on the environmental cognitive characteristics and the user demand characteristics through the first decision layer, wherein the task sequence to be executed comprises at least one task to be executed;
determining a target task to be executed from the tasks to be executed in sequence, and inputting the target task to be executed into the second decision layer;
and determining an automatic driving control parameter based on each target task to be executed and the corresponding environmental cognitive characteristics of each target task to be executed through the second decision layer.
3. The automatic driving control method according to claim 2, wherein the step of determining, by the second decision layer, the automatic driving control parameter based on each target task to be executed and the environmental cognitive characteristics corresponding to each target task to be executed includes:
determining, by the second decision layer, an automatic driving control parameter based on the target task to be executed and the environmental cognitive characteristics;
determining the target task to be executed as an executed task;
acquiring new in-vehicle and out-of-vehicle monitoring data, and extracting new environmental cognitive characteristics from the new in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
and returning to execute the step of determining a target task to be executed from the tasks to be executed in turn and inputting the target task to be executed into the second decision layer.
4. The automatic driving control method according to claim 2, wherein the second decision layer includes an inference layer and a decoding layer; the step of determining, by the second decision layer, an automatic driving control parameter based on each target task to be executed and the environmental cognitive characteristics corresponding to each target task to be executed includes:
acquiring environmental cognitive characteristics and vehicle running state data matched with the time of each target task to be executed;
determining driving behavior characteristics corresponding to each target task to be executed based on the environmental cognitive characteristics corresponding to each target task to be executed through the reasoning layer;
And decoding, by the decoding layer, an automatic driving control parameter based on each target task to be executed, driving behavior characteristics corresponding to each target task to be executed and vehicle running state data corresponding to each target task to be executed.
5. The automatic driving control method according to claim 1, wherein after the step of sequentially inputting the environmental cognitive characteristics and the user demand characteristics into each of the decision layers to perform multi-layer decision reasoning and determine the automatic driving control parameters, the automatic driving control method further comprises:
generating a driving behavior interpretation according to the in-vehicle and out-of-vehicle monitoring data and the automatic driving control parameters;
outputting the driving behavior interpretation.
6. The automatic driving control method according to claim 1, wherein the cognitive layer includes a user feature extraction model; the in-vehicle and out-of-vehicle monitoring data comprise user operation data and user state data;
the step of extracting the user demand characteristics from the in-vehicle monitoring data through the cognitive layer comprises the following steps of:
and extracting user explicit demand characteristics from the user operation data through the user feature extraction model, and/or extracting user implicit demand characteristics from the user state data through the user feature extraction model.
7. The automatic driving control method according to claim 1, wherein the cognitive layer includes a scene cognitive model, a map building model, a risk target detection tracking model, and a motion prediction model; the environment cognition features comprise scene cognition features, map features, risk target detection tracking features and risk target movement features;
the step of extracting the environmental cognitive characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer comprises the following steps of:
extracting scene cognitive characteristics from the out-of-vehicle monitoring data through the scene cognitive model, extracting map characteristics from the out-of-vehicle monitoring data through the map building model, and extracting risk target detection tracking characteristics from the out-of-vehicle monitoring data through the risk target detection tracking model;
and predicting the risk target motion based on the scene cognitive feature, the map feature and the risk target detection tracking feature through the motion prediction model to obtain the risk target motion feature, wherein the motion prediction model adopts an attention mechanism.
8. An automatic driving control device, wherein a cognitive decision model is disposed on the automatic driving control device, the cognitive decision model includes a cognitive layer and a plurality of decision layers, each decision layer is connected in series, the automatic driving control device includes:
The first acquisition module is used for acquiring the monitoring data inside and outside the vehicle;
the first cognition module is used for extracting environmental cognitive characteristics and user demand characteristics from the in-vehicle and out-of-vehicle monitoring data through the cognitive layer;
and the hierarchical decision module is used for sequentially inputting the environmental cognitive characteristics and the user demand characteristics into each decision layer to perform multi-layer decision reasoning and determine automatic driving control parameters.
9. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the automatic driving control method according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium is a computer-readable storage medium having stored thereon a program for realizing an automatic driving control method, the program for realizing an automatic driving control method being executed by a processor to realize the steps of the automatic driving control method according to any one of claims 1 to 7.