CN116721536A - Unmanned decision result determining method, device, equipment and medium - Google Patents

Unmanned decision result determining method, device, equipment and medium

Info

Publication number
CN116721536A
CN116721536A
Authority
CN
China
Prior art keywords
state information
driving state
feature vector
determining
unmanned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211394041.4A
Other languages
Chinese (zh)
Inventor
王兆麒
姜珊
张晓谦
孙忠刚
王兆麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202211394041.4A
Publication of CN116721536A
Legal status: Pending


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Analytical Chemistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method, an apparatus, a device and a medium for determining an unmanned decision result. Driving state information of the host vehicle and surrounding vehicles is acquired in real time while the host vehicle drives autonomously; the driving state information of the host vehicle and the surrounding vehicles is organized into a plurality of driving state information matrices; the driving state information feature vector corresponding to each matrix is computed in parallel by a pre-trained long short-term memory (LSTM) neural network model, and a driving state information total feature vector is obtained from the individual feature vectors; finally, a pre-optimized decision classifier determines the unmanned decision result corresponding to the total feature vector. This addresses the low accuracy and poor generalization of unmanned operation based on mathematical modeling methods and machine learning algorithms, improving both the accuracy of unmanned decision results and the generalization capability.

Description

Unmanned decision result determining method, device, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a medium for determining an unmanned decision result.
Background
The huge number of automobiles brings convenience to travel, but also puts great pressure on the current traffic environment. According to data published by relevant organizations, 90% of traffic accidents are caused by human error. Once well-designed driving rules are established, an unmanned driving system can maintain a consistently good driving level and avoid the traffic accidents that human error causes.
In implementing the present invention, the inventors found the following drawbacks in the prior art: rules for unmanned driving are currently formulated with mathematical modeling methods and machine learning algorithms that use a softmax classifier for the classification task. Both approaches have shortcomings: the lane-change decision judgment of the unmanned vehicle is inaccurate, and the generalization capability of the unmanned judgment system is low.
Disclosure of Invention
The invention provides a method, an apparatus, a device and a medium for determining unmanned decision results, which improve the accuracy of unmanned decision results and the generalization capability.
According to an aspect of the present invention, there is provided an unmanned decision result determining method, including:
in the automatic driving process of the own vehicle, driving state information of the own vehicle and surrounding vehicles is obtained in real time;
Organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes;
calculating, in parallel through a pre-trained long short-term memory (LSTM) neural network model, the driving state information feature vector corresponding to each driving state information matrix, and obtaining a driving state information total feature vector from the individual feature vectors;
and determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier.
According to another aspect of the present invention, there is provided an unmanned decision result determination apparatus, including:
the driving state information acquisition module is used for acquiring driving state information of the host vehicle and surrounding vehicles in real time in the automatic driving process of the host vehicle;
the driving state information matrix determining module is used for organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes;
the driving state information total feature vector determining module is used for calculating driving state information feature vectors corresponding to each driving state information matrix respectively in parallel through a pre-trained LSTM model, and obtaining the driving state information total feature vector according to each driving state information feature vector;
And the unmanned decision result determining module is used for determining an unmanned decision result corresponding to the driving state information total feature vector through a pre-optimized decision classifier.
According to another aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the unmanned decision result determination method of any of the embodiments of the present invention when executing the computer program.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the unmanned decision result determination method of any of the embodiments of the present invention when executed.
According to the technical scheme, the driving state information of the own vehicle and surrounding vehicles is obtained in real time in the automatic driving process of the own vehicle; organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes; calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector; and determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier. The problems of low accuracy of unmanned operation and poor generalization capability based on a mathematical modeling method and a machine learning algorithm are solved, the accuracy of unmanned decision results is improved, and the generalization capability is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an unmanned decision result determination method provided in accordance with a first embodiment of the present invention;
FIG. 2 is a flow chart of another method for determining unmanned decision results provided in accordance with a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an unmanned decision result determining apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "target," "current," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of an unmanned decision result determining method according to an embodiment of the present invention, where the present embodiment is applicable to a case of making a lane change result decision in a driving process in the unmanned field, and the method may be performed by an unmanned decision result determining apparatus, and the unmanned decision result determining apparatus may be implemented in a form of hardware and/or software.
Accordingly, as shown in fig. 1, the method includes:
s110, acquiring driving state information of the host vehicle and surrounding vehicles in real time in the automatic driving process of the host vehicle.
The driving state information may be state information describing a driving process of the vehicle.
In this embodiment, the host vehicle may collect driving state information of the host vehicle and surrounding vehicles through the sensors provided in the environment sensing module of the host vehicle during the automatic driving.
Further, according to the collected driving state information, whether the own vehicle should be in a lane change state or a non-lane change state is determined.
Specifically, the non-lane-change state means that the unmanned vehicle has a good driving environment in its current lane, with no rear-end collision, violation, collision or similar risk. Two cases arise in this state. In the first, there are no obstacles or other vehicles ahead, so the unmanned vehicle can maintain a high safe speed and drive with a low risk coefficient. In the second, another vehicle is ahead of the current vehicle, but its driving behavior has little influence on the current vehicle's speed and driving state, so the vehicle can simply follow it. In neither case is a lane change required, and the vehicle stays in the current lane.
The lane-change state means that the unmanned vehicle leaves its current lane and moves to another lane to continue driving. Many scenarios require a lane change: the driving environment of the current lane is poor while an adjacent lane offers better conditions (for example, the vehicle ahead is slow or the current road surface is in poor condition), or the vehicle must move over (for example, into a left-turn lane). A lane change must satisfy certain conditions. First, the target lane must have enough space for the vehicle to drive in after changing lanes; this depends on the driving states of the vehicles ahead of and behind the gap in the target lane. Second, there must be enough time to complete the lane change, which is determined by the driving state and spacing of the vehicle behind in the original lane. Therefore, when the vehicle is about to change lanes, it must consider the states of the surrounding vehicles and precisely adjust its lateral and longitudinal motion so that the lane change is performed safely and collisions are avoided.
S120, organizing driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes.
The driving state information matrix may describe driving state information for a period of time in a matrix form.
Optionally, the driving state information includes: instantaneous speed, instantaneous acceleration, instantaneous steering angle, lane number, vehicle number, lateral position, and longitudinal position; and acquiring the current time period corresponding to the driving state information, and organizing to obtain a plurality of driving state information matrixes corresponding to the driving state information.
In this embodiment, suppose the driving state information is acquired in real time at current time T. The current time period [T-2, T] corresponding to T is taken, and with a step size of 1 s, the driving state information of the host vehicle and surrounding vehicles at times T-2, T-1 and T is selected.
The driving state information specifically consists of the instantaneous speed, instantaneous acceleration, instantaneous steering angle, lane number, vehicle number, lateral position and longitudinal position, denoted m1, m2, m3, m4, m5, m6 and m7 respectively.
Correspondingly, a plurality of driving state information matrixes can be determined according to the current time period and the driving state information.
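The sliding-window selection described above (times T-2, T-1 and T with a 1 s step) can be sketched as follows. The function name `window_rows`, the toy `history` structure and the placeholder feature values are hypothetical illustrations, not part of the patent:

```python
# Hypothetical sketch of the time-window selection described above.
# Each vehicle contributes one 7-element feature row per time step:
# [m1 speed, m2 acceleration, m3 steering angle, m4 lane number,
#  m5 vehicle number, m6 lateral position, m7 longitudinal position].

def window_rows(history, t, window=3, step=1):
    """Return one vehicle's feature rows at times t-2, t-1, t."""
    times = [t - step * k for k in range(window - 1, -1, -1)]  # [t-2, t-1, t]
    return [history[ti] for ti in times]

# Toy history: time -> 7 features (placeholder values).
history = {ti: [10.0 + ti, 0.0, 0.0, 1, 5, 0.0, float(ti)] for ti in range(10)}
rows = window_rows(history, t=4)   # rows for times 2, 3, 4
```

Each call yields a 3×7 block for one vehicle; the blocks are then organized into the driving state information matrices.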
Optionally, the surrounding vehicles include: the rear vehicle of the adjacent lane, the front vehicle of the adjacent lane, the front vehicle and the rear vehicle. The step of acquiring the current time period corresponding to the driving state information and organizing a plurality of driving state information matrixes corresponding to the driving state information includes: determining a first driving state information matrix according to the current time period and the driving state information corresponding to the rear vehicle of the adjacent lane, the front vehicle of the adjacent lane and the rear vehicle; determining a second driving state information matrix according to the driving state information corresponding to the front vehicle and the current time period; determining a third driving state information matrix according to the driving state information corresponding to the host vehicle and the current time period; and constructing a plurality of driving state information matrixes from the first, second and third driving state information matrixes.
The first driving state information matrix may describe driving state information of the rear vehicle of the adjacent lane, the front vehicle of the adjacent lane, and the rear vehicle in a matrix form. The second driving state information matrix may be a matrix describing driving state information of the preceding vehicle for a period of time. The third driving state information matrix describes driving state information of the own vehicle for a period of time in a matrix form.
In the present embodiment, the surrounding vehicles corresponding to the host vehicle may include the rear vehicle of the adjacent lane, the front vehicle of the adjacent lane, and the rear vehicle in the host vehicle's own lane. Because the driving state information of these vehicles influences the lane-change conditions of the host vehicle, their information must be collected in order to analyze whether the host vehicle's lane-change conditions are met.
In the following example, assume the host vehicle is M; driving state information is collected for the rear vehicle TB of the adjacent lane, the front vehicle TP of the adjacent lane, the front vehicle MP and the rear vehicle MB.
According to the collected driving state information of the rear vehicle TB of the adjacent lane, the front vehicle TP of the adjacent lane and the rear vehicle MB, together with the selected times T-2, T-1 and T of the current time period, a 3×21 matrix can be constructed (one row per time step, each row concatenating the seven features m1 to m7 of TB, TP and MB); this is the first driving state information matrix.
Further, according to the collected driving state information of the front vehicle MP and the selected times T-2, T-1 and T of the current time period, a 3×7 matrix can be constructed; this is the second driving state information matrix.
Similarly, according to the acquired driving state information of the host vehicle M and the selected times T-2, T-1 and T of the current time period, a 3×7 matrix can be constructed; this is the third driving state information matrix.
It is understood that the driving state information matrix includes a first driving state information matrix, a second driving state information matrix, and a third driving state information matrix.
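The construction of the three matrices can be sketched as below. The function `build_matrices`, the vehicle labels used as dictionary keys, and the placeholder feature values are illustrative assumptions; only the 3×21 and 3×7 shapes come from the description above:

```python
def build_matrices(windows):
    """windows maps a vehicle label to its 3x7 list of feature rows
    (times T-2, T-1, T). Labels TB, TP, MB, MP, M follow the example above."""
    # First matrix (3 x 21): each row concatenates the 7 features of TB, TP, MB.
    first = [windows["TB"][r] + windows["TP"][r] + windows["MB"][r]
             for r in range(3)]
    second = windows["MP"]  # 3 x 7: front vehicle in the host lane
    third = windows["M"]    # 3 x 7: host vehicle M
    return first, second, third

# Toy feature rows standing in for m1..m7 of each vehicle.
toy = {v: [[float(r)] * 7 for r in range(3)] for v in ("TB", "TP", "MB", "MP", "M")}
first, second, third = build_matrices(toy)
```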
S130, calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector.
The driving state information feature vector may be a feature vector obtained by inputting the first driving state information matrix, the second driving state information matrix, and the third driving state information matrix into parallel LSTM sub-models in the LSTM model, respectively. And the driving state information total feature vector is a total feature vector obtained by splicing the feature vectors output by the three parallel LSTM sub-models.
Optionally, the LSTM model includes: three parallel LSTM sub-models and one serial LSTM sub-model;
the driving state information feature vectors corresponding to each driving state information matrix are calculated in parallel through a pre-trained LSTM model, and a driving state information total feature vector is obtained according to each driving state information feature vector, and the method comprises the following steps: inputting the first driving state information matrix into a first parallel LSTM sub-model, and determining a first driving state information feature vector; inputting the second driving state information matrix into a second parallel LSTM sub-model, and determining a second driving state information feature vector; inputting the third driving state information matrix into a third parallel LSTM sub-model, and determining a third driving state information feature vector; performing splicing processing on the first driving state information feature vector, the second driving state information feature vector and the third driving state information feature vector to obtain a driving state information spliced feature vector; and inputting the driving state information spliced feature vector into a serial LSTM sub-model to obtain a driving state information total feature vector.
The first driving state information feature vector may be a feature vector obtained by analyzing the first driving state information matrix through the first parallel LSTM sub-model. The second driving state information feature vector may be a feature vector obtained by analyzing the second driving state information matrix through a second parallel LSTM sub-model. The third driving state information feature vector may be a feature vector obtained by analyzing the third driving state information matrix through a third parallel LSTM sub-model.
In this embodiment, the neural network neurons of the first parallel LSTM sub-model, the second parallel LSTM sub-model, and the third parallel LSTM sub-model are lstm_unit1; the neural network neurons of the serial LSTM sub-model are lstm_unit2.
Specifically, the first parallel LSTM sub-model analyzes the lane-change conditions; its input is the driving state information of the rear vehicle TB of the adjacent lane, the front vehicle TP of the adjacent lane and the rear vehicle MB of the own lane. The second parallel LSTM sub-model analyzes the driving state information of the front vehicle MP of the own lane, and the third parallel LSTM sub-model analyzes the driving state information of the host vehicle.
Continuing the previous example, the first driving state information matrix is input into the first parallel LSTM sub-model to determine the first driving state information feature vector α1; the second driving state information matrix is input into the second parallel LSTM sub-model to determine the second driving state information feature vector α2; and the third driving state information matrix is input into the third parallel LSTM sub-model to determine the third driving state information feature vector α3. The feature vectors α1, α2 and α3 are spliced to obtain the driving state information spliced feature vector α, which is then input into the serial LSTM sub-model to obtain the driving state information total feature vector.
The advantages of this arrangement are that: the three parallel LSTM sub-models and the serial LSTM sub-model are used for analyzing each driving state information matrix to obtain the total feature vector of the driving state information, so that lane information, the front vehicle condition and the own vehicle condition can be analyzed and processed more accurately, more accurate lane change decisions are obtained, and the accuracy of unmanned lane change decisions is improved.
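As a rough illustration of the parallel-then-serial arrangement, the sketch below uses a minimal NumPy LSTM cell with random, untrained weights standing in for LSTM_unit1 and LSTM_unit2; all function names, the hidden size H and the random inputs are assumptions for illustration, not the patent's trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(X, Wx, Wh, b):
    """Minimal LSTM forward pass returning the final hidden state.
    X: (T, d) sequence; stacked gate order in the weights: i, f, o, g."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in X:
        z = Wx @ x + Wh @ h + b          # Wx: (4H, d), Wh: (4H, H), b: (4H,)
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def make_params(d, H, rng):
    return (rng.standard_normal((4 * H, d)) * 0.1,
            rng.standard_normal((4 * H, H)) * 0.1,
            np.zeros(4 * H))

rng = np.random.default_rng(0)
H = 8                                     # hidden size (assumed)
p1, p2, p3 = make_params(21, H, rng), make_params(7, H, rng), make_params(7, H, rng)
ps = make_params(3 * H, H, rng)           # serial sub-model over the spliced vector

M1 = rng.standard_normal((3, 21))         # first matrix (TB, TP, MB)
M2 = rng.standard_normal((3, 7))          # second matrix (MP)
M3 = rng.standard_normal((3, 7))          # third matrix (host vehicle)

a1 = lstm_last_hidden(M1, *p1)            # alpha_1
a2 = lstm_last_hidden(M2, *p2)            # alpha_2
a3 = lstm_last_hidden(M3, *p3)            # alpha_3
alpha = np.concatenate([a1, a2, a3])      # spliced feature vector alpha
total = lstm_last_hidden(alpha[None, :], *ps)  # total feature vector
```

The three parallel calls are independent and could run concurrently; the serial sub-model then compresses the spliced vector into the total feature vector fed to the classifier.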
And S140, determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier.
The decision classifier can be a classifier that uses a support vector machine (SVM) to make the unmanned action decision and obtain the corresponding unmanned decision result.
Optionally, the determining, by using a pre-optimized decision classifier, an unmanned decision result corresponding to the driving state information total feature vector includes: inputting the total feature vector of the driving state information into a pre-optimized decision classifier, and respectively determining left lane change probability, right lane change probability and original lane keeping probability; and comparing the left lane change probability, the right lane change probability and the original lane keeping probability, and determining an unmanned decision result corresponding to the highest probability.
For example, suppose the driving state information total feature vector is input into the pre-optimized decision classifier, which analyzes it and outputs a left lane-change probability of 0.8, a right lane-change probability of 0.1 and a keep-lane probability of 0.1. Comparing the three, the left lane-change probability is highest, so the unmanned decision result is determined to be a left lane change.
The advantages of this arrangement are that: the unmanned decision result corresponding to the highest probability is determined by calculating and comparing the left lane change probability, the right lane change probability and the original lane keeping probability, so that the host vehicle is instructed to perform lane change processing, lane change operation can be performed more accurately, and the unmanned decision accuracy can be improved.
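The highest-probability comparison above reduces to a simple argmax over the three classifier outputs; the action labels and function name below are hypothetical:

```python
def decide(p_left, p_right, p_keep):
    """Pick the action with the highest classifier probability (sketch)."""
    probs = {"change_left": p_left, "change_right": p_right, "keep_lane": p_keep}
    return max(probs, key=probs.get)

result = decide(0.8, 0.1, 0.1)
```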
According to the technical scheme, the driving state information of the own vehicle and surrounding vehicles is obtained in real time in the automatic driving process of the own vehicle; organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes; calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector; and determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier. The problems of low accuracy of unmanned operation and poor generalization capability based on a mathematical modeling method and a machine learning algorithm are solved, the accuracy of unmanned decision results is improved, and the generalization capability is improved.
Example two
Fig. 2 is a flowchart of another method for determining an unmanned decision result according to the second embodiment of the present invention, where the method is optimized based on the foregoing embodiments, and in this embodiment, before determining, by using a pre-optimized decision classifier, an unmanned decision result corresponding to the total feature vector of the driving state information, a specific optimization operation of the decision classifier is further included.
Accordingly, as shown in fig. 2, the method includes:
s210, acquiring driving state information of the host vehicle and surrounding vehicles in real time in the automatic driving process of the host vehicle.
S220, organizing the driving state information of the host vehicle and the surrounding vehicles to obtain a plurality of driving state information matrixes.
S230, calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector.
S240, constructing a Gaussian kernel accuracy function corresponding to the decision classifier; taking the α parameter of the Gaussian kernel and the penalty factor of the decision classifier as independent variables and maximization of the objective function as the optimization target, performing iterative optimization based on an improved bat optimization algorithm to obtain the optimal α parameter and the optimal penalty factor.
The bat optimization algorithm is a swarm intelligence optimization algorithm that simulates the hunting behavior of bats. Owing to its simple model, few parameters, and strong generality, it is widely applied to practical problems.
In this embodiment, the Gaussian kernel accuracy function is constructed as the objective function; the α parameter of the Gaussian kernel and the penalty factor of the decision classifier serve as independent variables; and with maximization of the objective function as the optimization target, multiple iterations of the improved bat optimization algorithm yield the optimal α parameter and the optimal penalty factor.
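For reference, one iteration of the standard bat optimization algorithm, on which the improved algorithm builds, can be sketched as follows. The frequency range, loudness, and pulse-rate values are illustrative defaults, not parameters fixed by this disclosure, and the fitness function stands in for the Gaussian kernel accuracy function:

```python
import numpy as np

def bat_step(positions, velocities, best, fitness, rng,
             f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5):
    """One iteration of the standard bat algorithm.

    positions/velocities: (N, D) arrays; best: (D,) current global best;
    fitness: callable mapping a position to a score to MAXIMIZE.
    Hyper-parameter values here are illustrative assumptions.
    """
    n, d = positions.shape
    for i in range(n):
        # Frequency tuning pulls each bat toward the global best.
        freq = f_min + (f_max - f_min) * rng.random()
        velocities[i] += (positions[i] - best) * freq
        candidate = positions[i] + velocities[i]
        # Local random walk around the best solution with some probability.
        if rng.random() > pulse_rate:
            candidate = best + 0.01 * rng.standard_normal(d)
        # Accept the move stochastically, and only if it improves fitness,
        # so the global best never worsens.
        if rng.random() < loudness and fitness(candidate) > fitness(positions[i]):
            positions[i] = candidate
            if fitness(candidate) > fitness(best):
                best = candidate.copy()
    return positions, velocities, best
```

In the decision-classifier setting, each position would be the (α, penalty factor) pair and the fitness would be the classifier's accuracy.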
Optionally, iteratively optimizing with the improved bat optimization algorithm to obtain the optimal α parameter and the optimal penalty factor includes: after each iteration, sorting the accuracies computed from the Gaussian kernel accuracy function for each bat in ascending order to obtain a sorting result, where each decision classifier corresponds to one bat and the bats together form the bat optimization data set; according to the sorting result, deleting a first number of bats with the lowest accuracy at a set proportion, and randomly generating the same number of new bats to add to the bat optimization data set; and returning to the ascending-sort operation after each subsequent iteration, until the optimal α parameter and the optimal penalty factor are determined.
The bat optimization data set is a data set containing a plurality of bats; specifically, each decision classifier corresponds to one bat. The population size may be set to N, meaning that N decision classifiers are iteratively optimized to determine the optimal decision classifier.
Specifically, the α parameter is an independent variable in constructing the Gaussian kernel accuracy function, and the penalty factor is an independent variable of the decision classifier.
In this embodiment, the improved bat optimization algorithm builds on the standard bat optimization algorithm: in each optimization iteration, the fitness values of all bats are sorted in ascending order, and following a survival-of-the-fittest principle, the worst bats at a set proportion (for example 10%, without limitation) are eliminated and an equal number of new bats are randomly generated as replacements, which substantially improves the optimization capability.
Illustratively, assume a population of 10 bats and an elimination proportion of 10%. After one iteration, the accuracy given by the Gaussian kernel accuracy function is computed for each of the 10 bats: bat 1: 99%; bat 2: 98%; bat 3: 96%; bat 4: 90%; bat 5: 91%; bat 6: 89%; bat 7: 88%; bat 8: 96%; bat 9: 90%; bat 10: 91%. The accuracies of the 10 bats are then sorted in ascending order to obtain the sorting result: bat 1 has the highest accuracy and bat 7 the lowest.
Thus, according to the sorting result, 10% of the bats with the lowest accuracy (here, one bat) are deleted, namely bat 7. After bat 7 is deleted, a new bat is randomly generated and added to the bat optimization data set, so the set still contains 10 bats. Meanwhile, once bat 1 is identified as having the highest accuracy, the remaining 9 bats move toward bat 1.
Correspondingly, after each subsequent iteration, the accuracies computed from the Gaussian kernel accuracy function for the bats are again sorted in ascending order, until the optimal α parameter and the optimal penalty factor are determined.
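The eliminate-and-replenish step described above can be sketched as follows. Representing each bat as an (α, C, accuracy) record and the sampling ranges for newly generated bats are assumptions for illustration only:

```python
import random

def cull_and_replenish(population, ratio=0.10, rng=None):
    """Drop the worst `ratio` of bats by accuracy and replace them with
    randomly generated newcomers, keeping the population size constant.

    Each bat is a dict {"alpha": ..., "C": ..., "acc": ...}; the sampling
    ranges for the new bats below are illustrative assumptions."""
    rng = rng or random.Random()
    n_drop = max(1, int(len(population) * ratio))
    ranked = sorted(population, key=lambda b: b["acc"])  # ascending accuracy
    survivors = ranked[n_drop:]
    newcomers = [{"alpha": rng.uniform(1e-3, 10.0),
                  "C": rng.uniform(1e-2, 100.0),
                  "acc": 0.0}  # accuracy is recomputed in the next iteration
                 for _ in range(n_drop)]
    return survivors + newcomers
```

Running this on the 10-bat example above removes exactly the bat with 88% accuracy and restores the population to 10 members.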
S250, completing the optimization of the decision classifier according to the optimal α parameter and the optimal penalty factor.
S260, determining the unmanned driving decision result corresponding to the driving state information total feature vector through the pre-optimized decision classifier.
According to this technical solution, driving state information of the host vehicle and surrounding vehicles is acquired in real time while the host vehicle drives autonomously; the driving state information is organized into a plurality of driving state information matrices; driving state information feature vectors corresponding to each matrix are computed in parallel by a pre-trained LSTM model, and a driving state information total feature vector is obtained from them; a Gaussian kernel accuracy function corresponding to the decision classifier is constructed as the objective function, the α parameter of the Gaussian kernel and the penalty factor of the decision classifier serve as independent variables, and with maximization of the objective function as the optimization target, iterative optimization with the improved bat optimization algorithm yields the optimal α parameter and the optimal penalty factor; the optimization of the decision classifier is completed according to these optimal values; and the pre-optimized decision classifier then determines the unmanned driving decision result corresponding to the total feature vector. Optimizing the decision classifier with the improved bat optimization algorithm produces a classifier with a more accurate decision strategy and thus improves the accuracy of unmanned driving decision results.
Embodiment Three
Fig. 3 is a schematic structural diagram of an unmanned driving decision result determining device according to the third embodiment of the present invention. The device may be implemented in software and/or hardware and configured in a terminal device to carry out the unmanned decision result determining method of the embodiments of the present invention. As shown in Fig. 3, the device includes: a driving state information acquisition module 310, a driving state information matrix determination module 320, a driving state information total feature vector determination module 330 and an unmanned decision result determination module 340.
The driving state information obtaining module 310 is configured to obtain driving state information of the host vehicle and surrounding vehicles in real time during an automatic driving process of the host vehicle;
a driving state information matrix determining module 320, configured to organize driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrices;
the driving state information total feature vector determining module 330 is configured to calculate driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained LSTM model, and obtain driving state information total feature vectors according to each driving state information feature vector;
And the unmanned decision result determining module 340 is configured to determine, through a pre-optimized decision classifier, an unmanned decision result corresponding to the driving state information total feature vector.
According to this technical solution, driving state information of the host vehicle and surrounding vehicles is acquired in real time while the host vehicle drives autonomously; the driving state information of the host vehicle and the surrounding vehicles is organized into a plurality of driving state information matrices; driving state information feature vectors corresponding to each driving state information matrix are computed in parallel by a pre-trained LSTM model, and a driving state information total feature vector is obtained from the individual feature vectors; and a pre-optimized decision classifier determines the unmanned driving decision result corresponding to the total feature vector. This solves the problems of low decision accuracy and poor generalization that affect unmanned driving approaches based on mathematical modeling and conventional machine learning algorithms, improving both the accuracy of unmanned driving decision results and the generalization capability.
Optionally, the driving state information includes: instantaneous speed, instantaneous acceleration, instantaneous steering angle, lane number, vehicle number, lateral position, and longitudinal position;
The driving state information matrix determination module 320 may specifically include: a driving state information matrix determining unit, configured to acquire the current time period corresponding to the driving state information and organize a plurality of driving state information matrices corresponding to the driving state information.
Optionally, the surrounding vehicles include: a rear vehicle of an adjacent lane, a front vehicle of the adjacent lane, a front vehicle, and a rear vehicle. The driving state information matrix determining unit may be specifically configured to: determine a first driving state information matrix according to the driving state information corresponding to the rear vehicle of the adjacent lane, the front vehicle of the adjacent lane and the rear vehicle, and the current time period; determine a second driving state information matrix according to the driving state information corresponding to the front vehicle and the current time period; determine a third driving state information matrix according to the driving state information corresponding to the host vehicle and the current time period; and construct a plurality of driving state information matrices from the first, second and third driving state information matrices.
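As a concrete illustration of this organization, the matrices might be assembled as below. The field ordering, the time-window length, and the choice to concatenate the three neighbouring vehicles' matrices along the feature axis are assumptions, since the disclosure does not fix them:

```python
import numpy as np

# The seven per-frame state fields named in the disclosure; ordering is an assumption.
FIELDS = ["speed", "accel", "steer", "lane_id", "vehicle_id", "x_lat", "y_lon"]

def vehicle_matrix(frames):
    """Stack one vehicle's per-frame states into a (T, 7) matrix."""
    return np.array([[f[k] for k in FIELDS] for f in frames], dtype=float)

def organize(host, front, adj_rear, adj_front, rear):
    """Build the three driving state information matrices: M1 from the
    three neighbouring vehicles, M2 from the front vehicle, M3 from the
    host vehicle. Here M1 concatenates the three (T, 7) matrices along
    the feature axis, giving (T, 21) -- an assumption, since the
    disclosure does not fix the stacking layout."""
    m1 = np.concatenate([vehicle_matrix(adj_rear),
                         vehicle_matrix(adj_front),
                         vehicle_matrix(rear)], axis=1)
    m2 = vehicle_matrix(front)
    m3 = vehicle_matrix(host)
    return m1, m2, m3
```

Each matrix then covers the current time period row by row, one time step per row.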
Optionally, the LSTM model includes: three parallel LSTM sub-models and one serial LSTM sub-model; the driving state information total feature vector determining module 330 may be specifically configured to: inputting the first driving state information matrix into a first parallel LSTM sub-model, and determining a first driving state information feature vector; inputting the second driving state information matrix into a second parallel LSTM sub-model, and determining a second driving state information feature vector; inputting the third driving state information matrix into a third parallel LSTM sub-model, and determining a third driving state information feature vector; performing splicing processing on the first driving state information feature vector, the second driving state information feature vector and the third driving state information feature vector to obtain a driving state information spliced feature vector; and inputting the driving state information spliced feature vector into a serial LSTM sub-model to obtain a driving state information total feature vector.
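A minimal numeric sketch of the three-parallel-plus-one-serial LSTM structure is given below. The hidden sizes, the use of the final hidden state as the feature vector, and feeding the spliced vector to the serial LSTM as a length-1 sequence are assumptions the disclosure leaves open:

```python
import numpy as np

def lstm_last_hidden(x, Wx, Wh, b):
    """Minimal single-layer LSTM over x of shape (T, d_in); returns the
    final hidden state of size d_h. Weight layout: Wx (d_in, 4*d_h),
    Wh (d_h, 4*d_h), gate order [i, f, g, o]."""
    d_h = Wh.shape[0]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        i, f, g, o = np.split(x[t] @ Wx + h @ Wh + b, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

def total_feature_vector(m1, m2, m3, params, serial_params):
    """Three parallel LSTMs, one per driving state information matrix,
    followed by one serial LSTM over the spliced feature vector."""
    feats = [lstm_last_hidden(m, *p) for m, p in zip((m1, m2, m3), params)]
    spliced = np.concatenate(feats)          # splicing step
    return lstm_last_hidden(spliced[None, :], *serial_params)
```

With hidden size 4 per parallel branch, the spliced vector has 12 entries, and the serial LSTM maps it to the total feature vector.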
Optionally, the unmanned decision result determination module 340 may be specifically configured to: inputting the total feature vector of the driving state information into a pre-optimized decision classifier, and respectively determining left lane change probability, right lane change probability and original lane keeping probability; and comparing the left lane change probability, the right lane change probability and the original lane keeping probability, and determining an unmanned decision result corresponding to the highest probability.
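The comparison of the three probabilities reduces to an argmax; the maneuver label names here are assumed:

```python
def decide(p_left, p_right, p_keep):
    """Return the unmanned driving decision corresponding to the highest of
    the three probabilities output by the pre-optimized decision classifier."""
    probs = {"change_left": p_left, "change_right": p_right, "keep_lane": p_keep}
    return max(probs, key=probs.get)
```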
Optionally, the decision classifier optimization module may specifically include: an optimal α parameter and optimal penalty factor determining unit, configured to, before the unmanned driving decision result corresponding to the driving state information total feature vector is determined by the pre-optimized decision classifier, construct a Gaussian kernel accuracy function corresponding to the decision classifier as the objective function, take the α parameter of the Gaussian kernel and the penalty factor of the decision classifier as independent variables, take maximization of the objective function as the optimization target, and iteratively optimize with the improved bat optimization algorithm to obtain the optimal α parameter and the optimal penalty factor; and a decision classifier optimizing unit, configured to complete the optimization of the decision classifier according to the optimal α parameter and the optimal penalty factor.
Optionally, the optimal α parameter and optimal penalty factor determining unit may be specifically configured to: after each iteration, sort the accuracies computed from the Gaussian kernel accuracy function for each bat in ascending order to obtain a sorting result, where each decision classifier corresponds to one bat and the bats together form the bat optimization data set; according to the sorting result, delete a first number of bats with the lowest accuracy at a set proportion and randomly generate the same number of new bats to add to the bat optimization data set; and return to the ascending-sort operation after each subsequent iteration until the optimal α parameter and the optimal penalty factor are determined.
The unmanned decision result determining device provided by the embodiment of the invention can execute the unmanned decision result determining method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Embodiment Four
Fig. 4 shows a schematic diagram of an electronic device 10 that may be used to implement a fourth embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in Fig. 4, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 11 performs the various methods and processes described above, such as the unmanned decision result determining method.
In some embodiments, the unmanned decision result determination method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the unmanned decision result determination method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the unmanned decision result determination method by any other suitable means (e.g. by means of firmware).
The method comprises the following steps: in the automatic driving process of the own vehicle, driving state information of the own vehicle and surrounding vehicles is obtained in real time; organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes; calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained long-short-term memory neural network unit LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector; and determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special-purpose or general-purpose and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network; their relationship arises from computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of traditional physical hosts and VPS services, namely high management difficulty and weak service scalability.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Embodiment Five
A fifth embodiment of the present invention also provides a computer-readable storage medium containing computer-readable instructions, which when executed by a computer processor, are configured to perform a method of determining an unmanned decision result, the method comprising: in the automatic driving process of the own vehicle, driving state information of the own vehicle and surrounding vehicles is obtained in real time; organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes; calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained long-short-term memory neural network unit LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector; and determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier.
Of course, in the computer-readable storage medium provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the unmanned decision result determining method provided by any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, etc., including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method of the embodiments of the present invention.
It should be noted that, in the embodiment of the unmanned decision result determining apparatus, each unit and module included are only divided according to the functional logic, but not limited to the above-mentioned division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.

Claims (10)

1. A method for determining an unmanned decision result, comprising:
in the automatic driving process of the own vehicle, driving state information of the own vehicle and surrounding vehicles is obtained in real time;
organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrices;
calculating driving state information feature vectors corresponding to each driving state information matrix in parallel through a pre-trained long-short-term memory neural network unit LSTM model, and obtaining a driving state information total feature vector according to each driving state information feature vector;
and determining an unmanned decision result corresponding to the total feature vector of the driving state information through a pre-optimized decision classifier.
2. The method of claim 1, wherein the driving state information includes: instantaneous speed, instantaneous acceleration, instantaneous steering angle, lane number, vehicle number, lateral position, and longitudinal position;
the method further comprising: acquiring the current time period corresponding to the driving state information, and organizing to obtain a plurality of driving state information matrices corresponding to the driving state information.
3. The method of claim 2, wherein the surrounding vehicles comprise: a rear vehicle of an adjacent lane, a front vehicle of the adjacent lane, a front vehicle, and a rear vehicle;
wherein the acquiring the current time period corresponding to the driving state information and organizing to obtain a plurality of driving state information matrices corresponding to the driving state information comprises:
determining a first driving state information matrix according to driving state information corresponding to the rear vehicle of the adjacent lane, the front vehicle of the adjacent lane and the rear vehicle and the current time period;
determining a second driving state information matrix according to the driving state information corresponding to the front vehicle and the current time period;
determining a third driving state information matrix according to the driving state information corresponding to the host vehicle and the current time period;
and constructing a plurality of driving state information matrices according to the first driving state information matrix, the second driving state information matrix and the third driving state information matrix.
4. The method of claim 3, wherein the LSTM model comprises: three parallel LSTM sub-models and one serial LSTM sub-model;
The driving state information feature vectors corresponding to each driving state information matrix are calculated in parallel through a pre-trained LSTM model, and a driving state information total feature vector is obtained according to each driving state information feature vector, and the method comprises the following steps:
inputting the first driving state information matrix into a first parallel LSTM sub-model, and determining a first driving state information feature vector;
inputting the second driving state information matrix into a second parallel LSTM sub-model, and determining a second driving state information feature vector;
inputting the third driving state information matrix into a third parallel LSTM sub-model, and determining a third driving state information feature vector;
performing splicing processing on the first driving state information feature vector, the second driving state information feature vector and the third driving state information feature vector to obtain a driving state information spliced feature vector;
and inputting the driving state information spliced feature vector into a serial LSTM sub-model to obtain a driving state information total feature vector.
5. The method according to claim 4, wherein the determining, by a pre-optimized decision classifier, the unmanned decision result corresponding to the driving state information total feature vector includes:
Inputting the total feature vector of the driving state information into a pre-optimized decision classifier, and respectively determining left lane change probability, right lane change probability and original lane keeping probability;
and comparing the left lane change probability, the right lane change probability and the original lane keeping probability, and determining an unmanned decision result corresponding to the highest probability.
6. The method according to claim 1, further comprising, before the determining, by the pre-optimized decision classifier, the unmanned decision result corresponding to the driving state information total feature vector:
constructing a Gaussian kernel accuracy function corresponding to the decision classifier as an objective function, taking the α parameter of the Gaussian kernel accuracy function and the penalty factor of the decision classifier as independent variables, taking maximization of the objective function as the optimization target, and performing iterative optimization based on an improved bat optimization algorithm to obtain an optimal α parameter and an optimal penalty factor;
and completing the optimization of the decision classifier according to the optimal α parameter and the optimal penalty factor.
7. The method of claim 6, wherein the performing iterative optimization based on the improved bat optimization algorithm to obtain the optimal α parameter and the optimal penalty factor includes:
after each iteration, sorting the bats in ascending order of the accuracy each bat obtains from the Gaussian kernel accuracy function, to obtain a sorting result;
wherein each decision classifier corresponds to one bat, and a plurality of bats form a bat optimization data set;
deleting, according to the sorting result and a preset proportion, a first number of target bats with the lowest accuracy, and randomly generating a first number of new bats to be added to the bat optimization data set;
and returning to the operation of sorting the bats, after each iteration, in ascending order of the accuracy obtained from the Gaussian kernel accuracy function, until the optimal α parameter and the optimal penalty factor are determined.
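The improved bat optimization of claims 6-7 can be sketched as below. The accuracy function here is a toy analytic stand-in (peaking at an assumed α = 0.5, C = 10); a real implementation would cross-validate the decision classifier for each (α, penalty factor) pair. The bounds, population size, and cull ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the Gaussian kernel accuracy function (assumed peak at
# alpha = 0.5, C = 10); the real objective would be classifier accuracy.
def accuracy(alpha, C):
    return float(np.exp(-((alpha - 0.5) ** 2 + ((C - 10.0) / 10.0) ** 2)))

lo = np.array([0.01, 0.1])      # lower bounds for (alpha, penalty factor C)
hi = np.array([2.0, 100.0])     # upper bounds (assumed)

n_bats, n_iter, cull_ratio = 20, 60, 0.2
pos = rng.uniform(lo, hi, size=(n_bats, 2))   # one bat per candidate classifier
vel = np.zeros_like(pos)

fit = np.array([accuracy(*p) for p in pos])
best = pos[np.argmax(fit)].copy()

for _ in range(n_iter):
    freq = rng.uniform(0.0, 1.0, size=(n_bats, 1))   # pulse frequencies
    vel += (pos - best) * freq                       # standard bat velocity update
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([accuracy(*p) for p in pos])
    if fit.max() > accuracy(*best):
        best = pos[np.argmax(fit)].copy()
    # improvement step from claim 7: sort by accuracy ascending, delete the
    # least accurate bats, and add freshly generated random bats
    order = np.argsort(fit)
    n_cull = int(cull_ratio * n_bats)
    pos[order[:n_cull]] = rng.uniform(lo, hi, size=(n_cull, 2))
    vel[order[:n_cull]] = 0.0

print(best)  # best (alpha, C) found
```

Replacing the worst bats with random newcomers each iteration is the claimed modification to the basic bat algorithm: it keeps the population diverse and reduces premature convergence to a local optimum.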
8. An unmanned decision result determining apparatus, comprising:
the driving state information acquisition module is used for acquiring driving state information of the host vehicle and surrounding vehicles in real time in the automatic driving process of the host vehicle;
the driving state information matrix determining module is used for organizing the driving state information of the host vehicle and surrounding vehicles to obtain a plurality of driving state information matrixes;
the driving state information total feature vector determining module is used for calculating, in parallel through a pre-trained long short-term memory (LSTM) neural network model, the driving state information feature vector corresponding to each driving state information matrix, and obtaining the driving state information total feature vector according to each driving state information feature vector;
and the unmanned decision result determining module is used for determining, through a pre-optimized decision classifier, an unmanned decision result corresponding to the driving state information total feature vector.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the unmanned decision result determination method according to any one of claims 1-7.
10. A computer readable storage medium, wherein the computer readable storage medium stores computer instructions which, when executed, cause a processor to implement the unmanned decision result determination method according to any one of claims 1-7.
CN202211394041.4A 2022-11-08 2022-11-08 Unmanned decision result determining method, device, equipment and medium Pending CN116721536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211394041.4A CN116721536A (en) 2022-11-08 2022-11-08 Unmanned decision result determining method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN116721536A true CN116721536A (en) 2023-09-08

Family

ID=87863703


Country Status (1)

Country Link
CN (1) CN116721536A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination