CN112396183A - Method, device and equipment for automatic driving decision and computer storage medium - Google Patents


Info

Publication number
CN112396183A
CN112396183A (application number CN202110078838.2A)
Authority
CN
China
Prior art keywords
information
decision
automatic driving
agent
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110078838.2A
Other languages
Chinese (zh)
Inventor
韩晓健
潘晏涛
冉雪峰
王磊
Current Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd filed Critical Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202110078838.2A priority Critical patent/CN112396183A/en
Publication of CN112396183A publication Critical patent/CN112396183A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques


Abstract

The embodiment of the application provides a method, an apparatus, a device, and a computer storage medium for automatic driving decision. Environment perception fusion information of an automatic driving environment is acquired, and first decision information is generated from it; the first decision information comprises at least one piece of agent state information describing the driving state of the automatic driving vehicle in the automatic driving environment and at least one piece of agent control information for changing that driving state. Second decision information is then generated according to the at least one piece of agent state information or the at least one piece of agent control information, so that decision results need not be obtained from the switching conditions of multiple functional states; the influence of ill-defined switching conditions on the decision results is reduced, and the decision results are more accurate.

Description

Method, device and equipment for automatic driving decision and computer storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method, an apparatus, a device, and a computer storage medium for automatic driving decision.
Background
Automatic driving, as an emerging technology, is a current hotspot in the automotive industry. An automatic driving function means that an autonomous vehicle can determine its driving behavior according to collected environmental information. Such functions are usually developed with a finite-state-machine decision approach, for example, Adaptive Cruise Control (ACC), Emergency Brake Assist (EBA), and Lane Keeping System (LKS).
However, functions such as ACC and EBA are low-level automatic driving functions and cannot cope with the complex and changeable driving environments met in practice, so developing higher-level automatic driving functions is a problem that urgently needs to be solved. When a high-level automatic driving function is developed on a finite-state machine, the large number of functional states makes it impossible to clearly design the switching condition for each state, so the automatic driving vehicle cannot accurately obtain a decision result.
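To make the conventional approach concrete, the following is a toy finite-state-machine decision sketch; the states, events, and switching rules are invented for illustration and are not taken from the patent. It shows the style of hand-designed switch conditions that the background says becomes unmanageable as the number of functional states grows.

```python
# Toy FSM decision table: (current_state, event) -> next_state.
# Every transition must be written out by hand, which is exactly what
# becomes infeasible for high-level automatic driving functions.
TRANSITIONS = {
    ("cruise", "lead_vehicle_close"): "follow",
    ("follow", "lead_vehicle_gone"): "cruise",
    ("follow", "obstacle_imminent"): "emergency_brake",
}

def fsm_step(state: str, event: str) -> str:
    # Stay in the current state when no hand-written rule matches.
    return TRANSITIONS.get((state, event), state)

s = fsm_step("cruise", "lead_vehicle_close")   # -> "follow"
```

With only three states the table is manageable; each added functional state multiplies the number of (state, event) pairs that must be designed and verified, which motivates the combination-of-agents approach described below.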
Disclosure of Invention
The embodiment of the application provides a method, an apparatus, a device, and a computer storage medium for automatic driving decision that do not need to obtain decision results from the switching conditions of multiple functional states, reducing the influence of ill-defined switching conditions on the decision results and making them more accurate.
In a first aspect, an embodiment of the present application provides a method for automatic driving decision, where the method includes: acquiring environment perception fusion information of an automatic driving environment; generating first decision information according to the environment perception fusion information, wherein the first decision information comprises at least one agent state information of the driving state of the automatic driving vehicle in the automatic driving environment and at least one agent control information of the automatic driving vehicle changing the driving state in the automatic driving environment; and generating second decision information according to the at least one agent state information or the at least one agent control information.
In one possible implementation, generating the second decision information according to at least one piece of agent status information includes: obtaining a state transition matrix according to at least one agent state information; and generating second decision information according to the state transition matrix.
In one possible implementation manner, generating the second decision information according to the at least one piece of proxy control information includes: calculating a weight value of each agent control information through a decision model according to at least one agent control information; and generating second decision information according to the weight value of the agent control information.
In one possible implementation, generating the first decision information according to the context-aware fusion information includes: and generating first decision information through a low-level agent function module according to the environment perception fusion information.
In one possible implementation, obtaining context-aware fusion information of an autonomous driving environment includes: acquiring environmental information of an automatic driving environment; and fusing the environment information through a fusion perception algorithm to obtain environment perception fusion information.
In one possible implementation, the method further includes: and sending second decision information to the control end of the automatic driving vehicle, so that the control end controls the driving behavior of the automatic driving vehicle according to the second decision information.
In a second aspect, an embodiment of the present application provides an apparatus for automatic driving decision, where the apparatus includes: the acquisition module is used for acquiring environment perception fusion information of the automatic driving environment; the generating module is used for generating first decision information according to the environment perception fusion information, wherein the first decision information comprises at least one piece of agent state information of the driving state of the automatic driving vehicle in the automatic driving environment and at least one piece of agent control information of the automatic driving vehicle for changing the driving state in the automatic driving environment; and the generating module is further used for generating second decision information according to the at least one piece of agent state information or the at least one piece of agent control information.
In a possible implementation manner, the generating module is specifically configured to obtain a state transition matrix according to at least one piece of agent state information; and generating second decision information according to the state transition matrix.
In a third aspect, an embodiment of the present application provides an automatic driving decision apparatus, including: a processor, and a memory storing computer program instructions; the processor reads and executes computer program instructions to implement the method for automated driving decision-making in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement a method for automated driving decision in the first aspect or any one of the possible implementation manners of the first aspect.
According to the method, the device, the equipment and the computer storage medium for automatic driving decision, the environment perception fusion information of the automatic driving environment is obtained, and the first decision information is generated according to the environment perception fusion information, wherein the first decision information comprises at least one piece of agent state information of the driving state of the automatic driving vehicle in the automatic driving environment and at least one piece of agent control information of the automatic driving vehicle changing the driving state in the automatic driving environment. And generating second decision information according to the at least one agent state information or the at least one agent control information, so that decision results are not required to be obtained according to switching conditions of a plurality of functional states, and the influence of undefined switching conditions on the decision results is reduced, so that the decision results are more accurate.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an automatic driving decision system provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a method for automated driving decision-making provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an automatic driving decision device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an automatic driving decision device provided in an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
Currently, the widely recognized automatic driving classification standard is the one established by the Society of Automotive Engineers (SAE). The specific levels are as follows:
level 0 (no automation): the car is operated by a human driver in full authority, and can be assisted by a warning and protection system during driving.
Level 1 (driving support): the driving support is provided for one operation of the pay-off reel and acceleration and deceleration through the driving environment, and other driving actions are operated by a human driver.
Level 2 (partially automated): the driving support is provided for a plurality of operations in the steering wheel and acceleration/deceleration by the driving environment, and other driving actions are operated by the human driver.
Level 3 (conditional automation): all driving operations are completed by the unmanned system. Upon request by the system, the human driver provides an appropriate response.
Level 4 (highly automated): all driving operations are completed by the unmanned system. Depending on the system request, the human driver does not necessarily need to respond to all system requests, define road and environmental conditions, etc.
Level 5 (fully automated): all driving operations are completed by the unmanned system. Human drivers take over, if possible, driving under all road and environmental conditions.
At present, automatic driving functions are generally developed with a finite-state machine or a behavior tree, in which the transitions between functional states are explicit. For low-level automatic driving functions, such as those at the L2 level, the number of functional states is small, so the finite-state-machine or behavior-tree approach is relatively simple and convenient. When a higher-level automatic driving function is developed, however, the number of functional states becomes too large, the switching condition for each state cannot be clearly designed, and the decision result cannot be obtained accurately. The conventional development approach is therefore unsuitable for advanced automatic driving functions.
In another development approach, vehicle and environment inputs are fed to the input layer of a neural network, vehicle-control commands form the output layer, and the network is trained to achieve a given automatic driving function, yielding an automatic driving decision model. However, such a decision model is a neural network with a large number of parameters and deep layers: training consumes substantial computing resources, and the model is difficult to converge.
In order to solve the problems of the prior art, embodiments of the present application provide a method, an apparatus, a device, and a computer storage medium for automatic driving decision.
Fig. 1 shows a schematic structural diagram of an automatic driving decision system provided in an embodiment of the present application. As shown in fig. 1, the automatic driving decision system 100 includes a low-level agent function module 110, a decision agent function module 120, an environmental information acquisition module 130, and an automatic driving vehicle control end 140.
The low-level proxy function module 110 is composed of N sub-modules: the first proxy function module 111, the second proxy function module 112, ..., and the Nth proxy function module 11N, where N can be set according to actual requirements and is not limited here. The sub-modules within the low-level proxy function module 110 may be configured as an adaptive cruise control proxy function, an automatic emergency braking proxy function, a blind-spot detection proxy function, and the like.
A denotes the agent state information generated by the first agent function module 111, B the agent control information generated by the first agent function module 111, C the agent state information generated by the second agent function module 112, D the agent control information generated by the second agent function module 112, E the agent state information generated by the Nth agent function module 11N, and F the agent control information generated by the Nth agent function module 11N.
G denotes decision information generated by the decision agent function module 120 from all information transmitted from the low-level agent function module 110. H represents a traffic report formed by the autonomous vehicle control end 140 controlling the driving behavior of the autonomous vehicle according to the decision information sent by the decision agent function module 120. The environment information acquisition module 130 is configured to acquire environment information I of an autonomous driving environment.
The automatic driving decision system 100 builds a high-level automatic driving function module by combining low-level automatic driving function modules. It does not need to build a huge state machine to handle complex states, nor to manage the state switching of such a machine, which reduces the influence of unclear switching conditions on the decision result and makes it more accurate. Nor does it need to construct a deep neural network with a huge number of parameters, which reduces the resources needed to train the network. Reusing existing basic automatic driving functions also accelerates model convergence and reduces the impact of neural-network cold start.
The following describes a method for automatic driving decision provided by the embodiment of the present application.
Fig. 2 is a flow chart illustrating a method for automated driving decision provided in an embodiment of the present application. As shown in fig. 2, the method may include the steps of: and S210, acquiring environment perception fusion information of the automatic driving environment.
The environment perception fusion information is the combination of environment information of the automatic driving environment, and comprises road information, pedestrian information, traffic vehicle information and the like.
Specifically, environmental information of an autonomous driving environment is acquired; and fusing the environment information through a fusion perception algorithm to obtain environment perception fusion information.
The low-level proxy function module 110 may send request information to the environment information acquisition module 130, which then returns the environment information according to the request; alternatively, the environment information acquisition module 130 may be configured to send the environment information to the low-level proxy function module 110 periodically.
The environment information acquisition module 130 includes a camera, a lidar, a millimeter-wave radar, a vehicle-to-everything (V2X) communication device, and the like. Vehicle-mounted equipment such as the camera, lidar, and millimeter-wave radar collects road information about infrastructure such as roads and lane lines, while the position information of surrounding vehicles and pedestrians is obtained through the V2X communication device. In addition, the position data of the automatic driving vehicle can be acquired through satellite positioning, and the vehicle's operating-condition data through the vehicle sensors of an Electronic Control Unit (ECU).
The environmental information is fused through the fusion perception algorithm, the obtained environmental perception fusion information contains more abundant environmental factors, the information participating in the automatic driving decision is more accurate, and the accuracy of the automatic driving decision result is improved.
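As a rough illustration of the fusion step in S210, the sketch below merges per-sensor observations into a single fused-perception dictionary. The sensor names, field names, and merge rule are all invented stand-ins; the patent does not specify the fusion-perception algorithm itself.

```python
def fuse_environment_info(sensor_readings: dict) -> dict:
    """Merge per-sensor readings into one fused-perception dict.

    sensor_readings maps a source name ("camera", "lidar", "v2x", ...)
    to a dict of observed fields. Later sources overwrite earlier ones
    for duplicated keys -- a trivial stand-in for a real fusion algorithm.
    """
    fused = {}
    for source, fields in sensor_readings.items():
        for key, value in fields.items():
            fused[key] = value
    return fused

# Illustrative readings matching the sensor types listed above.
readings = {
    "camera": {"lane_lines": 2, "road_type": "highway"},
    "lidar": {"nearest_obstacle_m": 34.5},
    "v2x": {"nearby_vehicles": 3, "pedestrians": 0},
}
fused = fuse_environment_info(readings)
```

The fused dict then plays the role of the environment perception fusion information consumed by the low-level agent modules in S220.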
S220, generating first decision information according to the environment perception fusion information, wherein the first decision information comprises at least one piece of agent state information of the driving state of the automatic driving vehicle in the automatic driving environment and at least one piece of agent control information of the automatic driving vehicle for changing the driving state in the automatic driving environment.
The low-level agent function module 110 is a combination of low-level automatic driving agent function modules, and each sub-module of the low-level agent function module 110 may be regarded as an agent function module, for example, an ACC function module is regarded as an agent function module.
The low-level agent function module 110 includes at least one sub-module, and each sub-module may receive the context-aware fusion information and generate the first decision information according to the context-aware fusion information. The first decision information includes agent status information, such as A, C, E in fig. 1, and agent control information, such as B, D, F in fig. 1.
The agent state information characterizes the driving state of the autonomous vehicle in the autonomous driving environment, such as a cruise state, a car-following state, an emergency braking state, a lane keeping state, a lane changing state, an abnormal lane change state, and the like.
The agent control information is a control instruction for changing the driving state of the autonomous vehicle, which is generated by the agent function module according to the autonomous driving environment of the autonomous vehicle, for example, straight, turning left, turning right, or the like.
The low-level agent function module 110 may adopt Highway Assist (HWA); HWA combines low-level automatic driving functions such as ACC and LKS to construct a higher-level automatic driving function.
And S230, generating second decision information according to the at least one piece of agent state information or the at least one piece of agent control information.
The decision-making proxy function module 120 generates second decision information according to the at least one proxy state information generated by the low-level proxy function module 110, or the decision-making proxy function module 120 generates second decision information according to the at least one proxy control information generated by the low-level proxy function module 110. The number of agent state information or agent control information is determined by the number of sub-modules in the low-level agent function module 110.
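The data flow of S220/S230 can be sketched as follows: each low-level agent module maps the fused perception info to a (state, control) pair, corresponding to A/B, C/D, E/F in Fig. 1, and the decision agent then works from either the collected states or the collected controls. The two example agents and their rules below are invented for illustration.

```python
def acc_agent(env: dict):
    """Adaptive-cruise-control stand-in: follow when a lead vehicle exists."""
    state = "follow" if env.get("lead_vehicle") else "cruise"
    control = "decelerate" if state == "follow" else "hold_speed"
    return state, control

def lks_agent(env: dict):
    """Lane-keeping stand-in: keep the lane when both lane lines are seen."""
    state = "lane_keep" if env.get("lane_lines", 0) >= 2 else "lane_change"
    control = "steer_center" if state == "lane_keep" else "steer_adjust"
    return state, control

def first_decision(env: dict, agents) -> list:
    """Collect per-agent (state, control) pairs -- the first decision information."""
    return [agent(env) for agent in agents]

env = {"lead_vehicle": True, "lane_lines": 2}
pairs = first_decision(env, [acc_agent, lks_agent])
states = [s for s, _ in pairs]      # at least one piece of agent state info
controls = [c for _, c in pairs]    # at least one piece of agent control info
```

The number of pairs equals the number of sub-modules, matching the statement above that the count of agent state or control information is determined by the sub-module count.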
In the embodiment of the application, environment perception fusion information of an automatic driving environment is obtained, and first decision information is generated according to the environment perception fusion information, wherein the first decision information comprises at least one agent state information of a driving state of an automatic driving vehicle in the automatic driving environment and at least one agent control information of the automatic driving vehicle changing the driving state in the automatic driving environment. And generating second decision information according to the at least one agent state information or the at least one agent control information, so that decision results are not required to be obtained according to switching conditions of a plurality of functional states, and the influence of undefined switching conditions on the decision results is reduced, so that the decision results are more accurate.
In some embodiments, generating the second decision information based on the at least one agent state information comprises: obtaining a state transition matrix according to at least one agent state information; and generating second decision information according to the state transition matrix.
The driving state of the automatic driving vehicle corresponding to each piece of agent state information is acquired, and a state transition matrix is calculated from those driving states. The state transition matrix gives the probability that the automatic driving vehicle transitions from one driving state to another; the switching corresponding to the maximum probability value is selected from the matrix, and the second decision information is generated according to that switching. For example, if the maximum probability value in the state transition matrix is 0.8 and the corresponding switching is from the lane keeping state to the lane changing state, the second decision information is then a lane change to the left or right.
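A minimal sketch of the state-transition-matrix route: the states and probability values below are invented (only the 0.8 figure echoes the example above), and the patent specifies only the selection rule of taking the transition with the largest probability.

```python
STATES = ["lane_keep", "lane_change", "emergency_brake"]

def pick_transition(transition_matrix, current_state: str):
    """Return (next_state, probability) for the most probable transition."""
    row = transition_matrix[STATES.index(current_state)]
    best = max(range(len(row)), key=lambda j: row[j])
    return STATES[best], row[best]

# Row i gives transition probabilities out of STATES[i]; from lane_keep,
# lane_change is most probable (0.8), matching the description's example.
matrix = [
    [0.15, 0.8, 0.05],   # from lane_keep
    [0.6,  0.3, 0.1],    # from lane_change
    [0.7,  0.1, 0.2],    # from emergency_brake
]
next_state, prob = pick_transition(matrix, "lane_keep")
```

Here the second decision information would be derived from the selected switching, e.g. a lane change to the left or right.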
In some embodiments, generating the second decision information based on the at least one agent control information comprises: calculating a weight value of each agent control information through a decision model according to at least one agent control information; and generating second decision information according to the weight value of the agent control information.
The decision model is a neural network model obtained by training in advance according to environment perception fusion information, agent control information and a decision result of the automatic driving environment, the weight value of each agent control information can be calculated through the decision model, and the agent control information with the largest weight value is selected as second decision information.
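A hedged sketch of the weight-value route: the patent's decision model is a pre-trained neural network, which is replaced here by a plain scoring function; only the selection rule — keep the agent control information with the largest weight — follows the description. The controls and weights are invented.

```python
def select_control(controls: list, weight_fn) -> str:
    """Score each agent control with weight_fn and return the max-weight one."""
    weights = {c: weight_fn(c) for c in controls}
    return max(weights, key=weights.get)

# Invented weights standing in for the trained decision model's output.
example_weights = {"hold_speed": 0.2, "decelerate": 0.7, "steer_adjust": 0.1}
chosen = select_control(list(example_weights), example_weights.get)
```

The chosen control then becomes the second decision information sent to the control end of the autonomous vehicle.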
In some embodiments, the second decision information is sent to the control end of the autonomous vehicle, so that the control end can control the driving behavior of the autonomous vehicle according to the second decision information.
Fig. 3 is a schematic structural diagram of an automatic driving decision apparatus provided in an embodiment of the present application. As shown in fig. 3, the automatic driving decision apparatus 300 may include an obtaining module 310 and a generating module 320.
An obtaining module 310 is configured to obtain environment awareness fusion information of an autonomous driving environment.
The generating module 320 is configured to generate first decision information according to the environment awareness fusion information, where the first decision information includes at least one agent state information of a driving state of the autonomous vehicle in the autonomous driving environment and at least one agent control information of the autonomous vehicle changing the driving state in the autonomous driving environment.
The generating module 320 is further configured to generate second decision information according to the at least one agent status information or the at least one agent control information.
In the embodiment of the application, the decision result does not need to be obtained according to the switching conditions of various functional states, so that the influence on the decision result due to the unclear switching conditions is reduced, and the decision result is more accurate.
In some embodiments, the generating module 320 is specifically configured to obtain a state transition matrix according to at least one piece of agent state information; and generating second decision information according to the state transition matrix.
In some embodiments, the generating module 320 is specifically configured to calculate, according to at least one agent control information, a weight value of each agent control information through a decision model; and generating second decision information according to the weight value of the agent control information.
In some embodiments, the generating module 320 is specifically configured to generate the first decision information through the low-level proxy function module according to the context-aware fusion information.
In some embodiments, the obtaining module 310 is specifically configured to obtain environmental information of an autonomous driving environment; and fusing the environment information through a fusion perception algorithm to obtain environment perception fusion information.
In some embodiments, the apparatus further comprises: a sending module 330, configured to send the second decision information to a control end of the autonomous vehicle, so that the control end controls the driving behavior of the autonomous vehicle according to the second decision information.
Each module in the apparatus shown in fig. 3 has a function of implementing each step in fig. 2, and can achieve the corresponding technical effect, and for brevity, is not described again here.
Fig. 4 shows a hardware structure diagram of an automatic driving decision device provided in an embodiment of the present application.
The autonomous driving decision device may include a processor 401 and a memory 402 storing computer program instructions.
Specifically, the processor 401 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. In one example, memory 402 may include removable or non-removable (or fixed) media, or memory 402 is non-volatile solid-state memory. The memory 402 may be internal or external to the automatic driving decision device.
In one example, memory 402 may include Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory 402 comprises one or more tangible (non-transitory) computer-readable storage media (e.g., a memory device) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors), it is operable to perform operations described with reference to a method according to an aspect of the present application.
The processor 401 reads and executes the computer program instructions stored in the memory 402 to implement steps S210 to S230 in the embodiment shown in fig. 2 and achieve the corresponding technical effects of the steps in the example shown in fig. 1; for brevity, details are not repeated here.
In one example, the automatic driving decision device may further include a communication interface 403 and a bus 410. As shown in fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected via the bus 410 and communicate with one another.
The communication interface 403 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present application.
Bus 410 includes hardware, software, or both that couple the components of the automatic driving decision device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low pin count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 410 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The automatic driving decision device can execute the method of automatic driving decision in the embodiments of the present application based on the environment perception fusion information, the agent state information, and the agent control information, thereby implementing the method of automatic driving decision described in conjunction with fig. 2.
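As an illustration of the two decision branches recited in claims 2 and 3 — deriving a state transition matrix from the agent state information, or weighting the agent control information — the following Python sketch shows one possible realization. All names (ACTIONS, the scoring fields, the function names) and the placeholder logic are assumptions for illustration only; the patent does not disclose a concrete implementation.

```python
import numpy as np

# Illustrative action/state vocabulary (assumed, not from the patent).
ACTIONS = ["keep_lane", "change_left", "change_right", "brake"]

def second_decision_from_states(state_sequences):
    """Branch 1 (cf. claim 2): build a state transition matrix from the
    agents' observed state sequences, then pick the most likely next state."""
    n = len(ACTIONS)
    counts = np.zeros((n, n))
    for seq in state_sequences:              # count observed transitions
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows to probabilities; unseen states fall back to uniform.
    transition = np.divide(counts, row_sums,
                           out=np.full_like(counts, 1.0 / n),
                           where=row_sums > 0)
    current = state_sequences[0][-1]         # last observed state
    return ACTIONS[int(np.argmax(transition[current]))]

def second_decision_from_controls(control_proposals):
    """Branch 2 (cf. claim 3): normalize each agent's control score into a
    weight and select the highest-weighted control action."""
    scores = np.array([p["score"] for p in control_proposals], dtype=float)
    weights = scores / scores.sum()          # scores -> normalized weights
    return control_proposals[int(np.argmax(weights))]["action"]

# Example: two low-level agents propose controls scored from fused perception.
proposals = [{"action": "keep_lane", "score": 0.8},
             {"action": "brake", "score": 0.2}]
print(second_decision_from_controls(proposals))  # prints "keep_lane"
```

In this sketch the first decision information corresponds to the per-agent proposals (or state sequences), and the second decision information is the single aggregated action; how scores and states are actually produced from the environment perception fusion information is left open, as in the claims.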
In addition, in combination with the method of automatic driving decision in the foregoing embodiments, the embodiments of the present application may provide a computer storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement the method of automatic driving decision of any of the above embodiments.
It is to be understood that the present application is not limited to the particular arrangements and instrumentalities described above and shown in the drawings. A detailed description of known methods is omitted here for brevity. In the above embodiments, several specific steps are described and shown as examples; however, the method processes of the present application are not limited to those specific steps, and those skilled in the art can make changes, modifications, and additions, or change the order of the steps, after comprehending the spirit of the present application.
The functional blocks shown in the structural block diagrams above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, a functional block may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical discs, hard disks, fiber-optic media, radio frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above are only specific embodiments of the present application. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. It should be understood that the scope of the present application is not limited thereto; any person skilled in the art can readily conceive various equivalent modifications or substitutions within the technical scope disclosed in the present application, and such modifications or substitutions shall fall within the scope of the present application.

Claims (10)

1. A method of automated driving decision making, comprising:
acquiring environment perception fusion information of an automatic driving environment;
generating first decision information according to the environment perception fusion information, wherein the first decision information comprises at least one piece of agent state information of the driving state of the automatic driving vehicle in the automatic driving environment and at least one piece of agent control information of the automatic driving vehicle for changing the driving state in the automatic driving environment;
and generating second decision information according to the at least one agent state information or the at least one agent control information.
2. The method of claim 1, wherein generating second decision information based on the at least one agent state information comprises:
obtaining a state transition matrix according to the at least one agent state information;
and generating the second decision information according to the state transition matrix.
3. The method of claim 1, wherein generating second decision information based on the at least one agent control information comprises:
calculating a weight value of each agent control information through a decision model according to the at least one agent control information;
and generating the second decision information according to the weight value of the agent control information.
4. The method according to claim 1, wherein generating first decision information based on the context-aware fusion information comprises:
and generating first decision information through a low-level agent function module according to the environment perception fusion information.
5. The method of claim 1, wherein the obtaining context-aware fusion information for the autonomous driving environment comprises:
acquiring environmental information of an automatic driving environment;
and fusing the environment information through a fusion perception algorithm to obtain the environment perception fusion information.
6. The method of claim 1, further comprising:
and sending the second decision information to a control end of the automatic driving vehicle, so that the control end controls the driving behavior of the automatic driving vehicle according to the second decision information.
7. An apparatus for automated driving decision-making, the apparatus comprising:
the acquisition module is used for acquiring environment perception fusion information of the automatic driving environment;
the generation module is used for generating first decision information according to the environment perception fusion information, wherein the first decision information comprises at least one piece of agent state information of the driving state of the automatic driving vehicle in the automatic driving environment and at least one piece of agent control information of the automatic driving vehicle for changing the driving state in the automatic driving environment;
the generating module is further configured to generate second decision information according to the at least one agent state information or the at least one agent control information.
8. The apparatus according to claim 7, wherein the generating module is specifically configured to obtain a state transition matrix according to the at least one agent state information;
and generating the second decision information according to the state transition matrix.
9. An automated driving decision device, comprising: a processor, and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement the method of automated driving decision according to any of claims 1-6.
10. A computer storage medium having computer program instructions stored thereon that, when executed by a processor, implement a method of automated driving decision according to any one of claims 1-6.
CN202110078838.2A 2021-01-21 2021-01-21 Method, device and equipment for automatic driving decision and computer storage medium Pending CN112396183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110078838.2A CN112396183A (en) 2021-01-21 2021-01-21 Method, device and equipment for automatic driving decision and computer storage medium


Publications (1)

Publication Number Publication Date
CN112396183A 2021-02-23

Family

ID=74625124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110078838.2A Pending CN112396183A (en) 2021-01-21 2021-01-21 Method, device and equipment for automatic driving decision and computer storage medium

Country Status (1)

Country Link
CN (1) CN112396183A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357263A1 (en) * 2016-06-14 2017-12-14 The Boeing Company Autonomous Vehicle Re-Tasking During Performance of a Programmed Task based on Detection of a Task Interruption Scenario
CN108225364A (en) * 2018-01-04 2018-06-29 吉林大学 A kind of pilotless automobile driving task decision system and method
CN109991987A (en) * 2019-04-29 2019-07-09 北京智行者科技有限公司 Automatic Pilot decision-making technique and device
CN110228473A (en) * 2019-05-27 2019-09-13 驭势科技(北京)有限公司 A kind of intelligent vehicle lane-change decision-making technique, device, storage medium and intelligent vehicle
CN110764507A (en) * 2019-11-07 2020-02-07 舒子宸 Artificial intelligence automatic driving system for reinforcement learning and information fusion
CN111123948A (en) * 2019-12-31 2020-05-08 北京新能源汽车技术创新中心有限公司 Vehicle multidimensional perception fusion control method and system and automobile
US20200255021A1 (en) * 2017-12-19 2020-08-13 Intel Corporation Road surface friction based predictive driving for computer assisted or autonomous driving vehicles


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114244880A (en) * 2021-12-16 2022-03-25 云控智行科技有限公司 Operation method, device, equipment and medium for intelligent internet driving cloud control function
CN114244880B (en) * 2021-12-16 2023-12-26 云控智行科技有限公司 Operation method, device, equipment and medium of intelligent network driving cloud control function
CN114435395A (en) * 2021-12-31 2022-05-06 赛可智能科技(上海)有限公司 Method, apparatus, device, medium and computer program product for automatic driving

Similar Documents

Publication Publication Date Title
JP7047089B2 (en) Cellular network-based driving support method and traffic control unit
CN111127931B (en) Vehicle road cloud cooperation method, device and system for intelligent networked automobile
US10235881B2 (en) Autonomous operation capability configuration for a vehicle
US10642268B2 (en) Method and apparatus for generating automatic driving strategy
CN113223317B (en) Method, device and equipment for updating map
US10755565B2 (en) Prioritized vehicle messaging
CN112396183A (en) Method, device and equipment for automatic driving decision and computer storage medium
Mo et al. Simulation and analysis on overtaking safety assistance system based on vehicle-to-vehicle communication
CN109910880B (en) Vehicle behavior planning method and device, storage medium and terminal equipment
CN113734193A (en) System and method for estimating take over time
CN114348025A (en) Vehicle driving monitoring system, method, equipment and storage medium
US10953871B2 (en) Transportation infrastructure communication and control
US11877217B2 (en) Message processing for wireless messages based on value of information
CN112712608B (en) System and method for collecting performance data by a vehicle
CN112446466A (en) Measuring confidence in deep neural networks
CN111731285B (en) Vehicle anti-collision method and device based on V2X technology
US10950129B1 (en) Infrastructure component broadcast to vehicles
EP3819888B1 (en) Vehicle system of a vehicle for detecting and validating an event using a deep learning model
CN112829757A (en) Automatic driving control system, server device, and storage medium storing program
CN113092135A (en) Test method, device and equipment for automatically driving vehicle
US20230286514A1 (en) Detection of abnormal driving based on behavior profiles
CN116894225B (en) Driving behavior abnormality analysis method, device, equipment and medium thereof
CN114446042B (en) Method, device, equipment and storage medium for early warning traffic accidents
US20230347925A1 (en) Agent and scenario modeling extracted via an mbse classification on a large number of real-world data samples
US20230339517A1 (en) Autonomous driving evaluation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210223