CN115716476A - Automatic driving lane changing method and device, electronic equipment and storage medium - Google Patents

Automatic driving lane changing method and device, electronic equipment and storage medium

Info

Publication number
CN115716476A
Authority
CN
China
Prior art keywords
vehicle
information
lane
driving
electroencephalogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211445165.0A
Other languages
Chinese (zh)
Inventor
颜莉蓉
詹君
尹智帅
颜伏伍
卢炽华
胡宇风
黄虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Xianhu Laboratory
Original Assignee
Foshan Xianhu Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Xianhu Laboratory filed Critical Foshan Xianhu Laboratory
Priority to CN202211445165.0A priority Critical patent/CN115716476A/en
Publication of CN115716476A publication Critical patent/CN115716476A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of automatic driving, and discloses an automatic driving lane changing method, an automatic driving lane changing device, electronic equipment and a storage medium. The method comprises the following steps: acquiring real-time road condition information and real-time vehicle condition information; acquiring electroencephalogram information of a driver to obtain real-time electroencephalogram information, wherein the electroencephalogram information represents driving intention of the driver; predicting a driving route of the self-vehicle running on the current road condition by using a trained lane change decision model based on the real-time road condition information and the real-time vehicle condition information to obtain a route prediction result; and comparing the route prediction result with the driving intention of the driver represented by the electroencephalogram information, and controlling the self-vehicle to execute lane-changing driving action along the route prediction result when the route prediction result is consistent with the driving intention of the driver. The embodiment of the invention can improve the synchronism and safety during automatic driving lane change.

Description

Automatic driving lane changing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to an automatic driving lane changing method, an automatic driving lane changing device, electronic equipment and a storage medium.
Background
At present, road traffic injuries have become one of the leading causes of death. About 95 percent of all motor vehicle accidents involve improper driver operation to some extent, and accidents induced entirely by improper driving behavior account for three quarters of the total number of accidents. Therefore, analysis of driver behavior and driver state is very necessary.
At present, automatic driving is mostly applied to specific scenarios; under complex working conditions its decision and control capability cannot match that of a human driver, and full-working-condition application is difficult to realize.
Disclosure of Invention
The invention aims to provide an automatic driving lane changing method, an automatic driving lane changing device, electronic equipment and a storage medium, and aims to improve the synchronism and safety during automatic driving lane changing.
In a first aspect, an automatic driving lane changing method is provided, including:
acquiring real-time road condition information and real-time vehicle condition information, wherein the road condition information comprises the number of lanes, the lane where the self-vehicle is located, running vehicles, and obstacle distances representing the relative distances between the self-vehicle and the running vehicles or obstacles, and the vehicle condition information comprises the self-vehicle speed and the self-vehicle positioning;
performing electroencephalogram acquisition on a driver to obtain real-time electroencephalogram information, wherein the electroencephalogram information represents the driving intention of the driver;
predicting a driving route of the self-vehicle running on the current road condition by using a trained lane change decision model based on the real-time road condition information and the real-time vehicle condition information to obtain a route prediction result;
and comparing the route prediction result with the driving intention of the driver represented by the electroencephalogram information, and controlling the self-vehicle to execute lane-changing driving action along the route prediction result when the route prediction result is consistent with the driving intention of the driver.
In some embodiments, before the acquiring of the road condition information and the vehicle condition information, the method further includes:
acquiring offline training information, wherein the offline training information comprises offline road condition information and offline vehicle condition information;
constructing a decision table by using the offline training information;
discretizing the off-line training information in the decision table, and classifying the discretized off-line training information by using a preset neural network model to obtain a discrete classification result;
performing attribute reduction on the decision table according to the discrete classification result, and determining a minimum condition attribute set for expressing a decision;
integrating minimum condition attribute sets with the same expression decision result, and calculating the coverage and confidence level of the integrated minimum condition attribute sets to determine an offline decision rule;
and training the lane change decision model by using an offline decision rule, and comparing a route prediction result of the lane change decision model with offline electroencephalogram information.
In some embodiments, the performing attribute reduction on the decision table according to the discrete classification result and determining the minimum condition attribute set for expressing the decision result include:
constructing a training data set according to the off-line training information, and randomly coding each type of off-line training information in the training data set;
calculating the degree of dependence of the decision attributes on off-line training information required by decision;
judging whether the current dependency degree is equal to the minimum relative dependency value;
if so, outputting the minimum condition attribute set once the optimal fitness of the training data set no longer improves over a plurality of consecutive generations;
if not, iteratively performing random selection, crossover and mutation on the training data set, and copying the training data set with the optimal fitness to the next generation, until the optimal fitness no longer improves over a plurality of consecutive generations, and then outputting the minimum condition attribute set.
In some embodiments, the acquiring electroencephalogram information of the driver to obtain electroencephalogram information representing driving intentions of the driver includes:
collecting brain wave signals generated when a driver drives to obtain initial brain wave information;
preprocessing initial electroencephalogram information, and comparing the preprocessed initial electroencephalogram information by using a preset electroencephalogram analog template to obtain an electroencephalogram analog result;
and determining error potentials in the preprocessed initial electroencephalogram information according to the electroencephalogram analog result, and removing to obtain electroencephalogram information representing the driving intention of the driver.
In some embodiments, the controlling the host vehicle to perform the lane-change driving action along the route prediction result includes:
finding a plurality of preview points according to the distance between the self-vehicle and the obstacle, and drawing a lane change route by using a Bezier curve constructed from the preview points;
performing boundary expansion processing on the obstacle, and setting the expansion radius of the obstacle according to the current lane change route and the self-vehicle environment information;
judging whether the expansion radius of the obstacle is larger than the width of the obstacle;
if so, controlling the self-vehicle to run along the current lane change route;
if not, resetting the position of the preview points and redrawing the lane change route by using a Bezier curve constructed from the reset preview points, wherein the vehicle steering angle corresponding to the redrawn lane change route is larger than that corresponding to the previously drawn lane change route, and returning to the step of performing boundary expansion processing on the obstacle.
In some embodiments, the controlling the host vehicle to travel along the current lane-change route includes:
when no continuous obstacle appears in the detection range, controlling the self-vehicle, after changing lanes and driving past the obstacle, to change back to the lane of the initial driving route; when continuous obstacles appear in the detection range, controlling the self-vehicle, after changing lanes, to change back to the lane of the initial driving route only after it has driven past the continuous obstacles;
wherein a continuous obstacle is an obstacle whose distance from the previous obstacle in the same lane is less than a preset obstacle distance threshold.
In some embodiments, a simulated driving scenario is constructed according to the real-time traffic information and the real-time vehicle condition information, and the simulated driving scenario is displayed, wherein the simulated driving scenario converts the real-time traffic information into the simulated traffic information and converts the real-time vehicle condition information into the simulated vehicle condition information.
In a second aspect, there is provided an automatic driving lane change device, the device comprising:
the acquiring module is used for acquiring real-time road condition information and real-time vehicle condition information, wherein the road condition information comprises the number of lanes, the lane where the self-vehicle is located, running vehicles, and the obstacle distance representing the relative distance between the self-vehicle and a running vehicle or an obstacle, and the vehicle condition information comprises the self-vehicle speed and the self-vehicle positioning;
the acquisition module is used for carrying out electroencephalogram acquisition on a driver to obtain real-time electroencephalogram information, and the electroencephalogram information represents the driving intention of the driver;
the prediction module is used for predicting a driving route of the self-vehicle running on the current road condition by utilizing the trained lane change decision model based on the real-time road condition information and the real-time vehicle condition information to obtain a route prediction result;
and the control module is used for comparing the route prediction result with the driving intention of the driver represented by the electroencephalogram information, and controlling the vehicle to execute lane change driving action along the route prediction result when the route prediction result is consistent with the driving intention of the driver.
In a third aspect, an electronic device is provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the automatic driving lane change method according to the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored, which, when executed by a processor, implements the automatic driving lane-changing method of the first aspect.
The invention has the beneficial effects that: the lane change decision model is trained based on offline road condition information, vehicle condition information and electroencephalogram information representing the driving intention of a driver, so that the realization of human-like automatic driving is promoted, the driver intention recognition based on the electroencephalogram is objective and high in accuracy, and the synchronism and safety during automatic driving lane change are improved.
Drawings
Fig. 1 is a schematic flow chart of an automatic driving lane change method according to a first embodiment.
Fig. 2 is a flow chart of an automatic driving lane change method according to a second embodiment.
Fig. 3 is a first flowchart of step S204 in fig. 2.
Fig. 4 is a second flowchart of step S102 in fig. 1.
Fig. 5 is a third flowchart of step S104 in fig. 1.
Fig. 6 is a schematic structural diagram of an automatic driving lane change device according to an embodiment of the present application.
Fig. 7 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Fig. 8 is a schematic diagram showing the lane change route drawing according to the first embodiment.
Fig. 9 is a schematic diagram showing the lane change route drawn in the second embodiment.
Fig. 10 is a schematic diagram showing a lane change route drawn in the third embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the present invention will be further described with reference to the embodiments and the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
First, several terms referred to in the present application are explained:
1) Artificial Intelligence (AI)
Artificial intelligence is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the implementation method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
2) Machine Learning (Machine Learning, ML)
Machine learning is a multi-field cross discipline, and relates to a plurality of disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and the like. The method specially studies how a computer simulates or realizes the learning behavior of human beings so as to acquire new knowledge or skills and reorganize the existing knowledge structure to continuously improve the performance of the computer. Machine learning is the core of artificial intelligence, is the fundamental approach for computers to have intelligence, and is applied to all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
In the related art, behavior decision algorithms mainly include rule-based and learning-based behavior decision algorithms. Rule-based decision methods have the advantages of broad scene coverage, strong logical interpretability and easy modular design by scenario, and many decision systems at home and abroad apply finite state machines. However, this system structure imposes certain bottlenecks on scene traversal depth and decision accuracy, and complex working conditions are difficult to handle. Behavior decision systems based on learning algorithms have the advantage of scene traversal depth: for a given subdivided scene, a big-data system can more easily cover all working conditions. However, learning algorithms lack breadth of scene coverage, and the learning models required by different scenes may be completely different; machine learning requires a large amount of test data as learning samples; and the decision effect depends on data quality. Insufficient samples, poor data quality, an unreasonable network structure and similar issues can cause over-learning or under-learning, so the decision and control capability of a human driver cannot be matched under complex working conditions, and full-working-condition application is difficult to realize.
Based on this, the embodiment of the application provides an automatic driving lane changing method, an automatic driving lane changing device, an electronic device and a storage medium, and aims to improve the synchronism and the safety during automatic driving lane changing.
The following embodiments are provided to specifically describe the automatic driving lane changing method, the automatic driving lane changing device, the electronic device and the storage medium; the automatic driving lane changing method in the embodiments of the present application is described first.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The embodiment of the application provides an automatic driving lane changing method, and relates to the technical field of artificial intelligence. The automatic driving lane changing method provided by the embodiment of the application can be applied to a terminal, a server side, and software running in the terminal or the server side. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, or the like; the server side can be configured as an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN (content delivery network), big data and artificial intelligence platforms; the software may be an application or the like that implements the automatic driving lane changing method, but is not limited to the above forms.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In each embodiment of the present application, when data related to the identity or characteristics of a user, such as user information, user behavior data, user history data, and user location information, is processed, permission or consent of the user is obtained, and the collection, use, and processing of the data comply with relevant laws and regulations and standards of relevant countries and regions. In addition, when the embodiment of the present application needs to acquire sensitive personal information of a user, individual permission or individual consent of the user is obtained through a pop-up window or a jump to a confirmation page, and after the individual permission or individual consent of the user is definitely obtained, necessary user-related data for enabling the embodiment of the present application to operate normally is acquired.
Fig. 1 is a flowchart illustrating an automatic driving lane change method according to a first embodiment, and the method in fig. 1 may include, but is not limited to, steps S101 to S104.
And step S101, acquiring real-time road condition information and real-time vehicle condition information.
The road condition information comprises the number of lanes, the lane where the self-vehicle is located, the running vehicle and the obstacle distance representing the relative distance between the self-vehicle and the running vehicle or the obstacle, and the vehicle condition information comprises the speed and the positioning of the self-vehicle.
In a possible implementation manner, the road condition information and the vehicle condition information are acquired through equipment with a sensing module, and the sensing module comprises a laser radar, a camera, a navigation system, a vehicle-mounted machine system and the like.
In a possible implementation manner, the road condition information may refer to all data related to vehicle driving safety in the environment where the vehicle is located, acquired by a sensing module such as a laser radar or a camera. For example, it may be the driving states of all vehicles in the environment where the self-vehicle is located, or the driving environment around the self-vehicle (such as lane line data, obstacle data, real-time location data and map data), which is not particularly limited in this embodiment of the present application.
In a possible implementation manner, the vehicle condition information may refer to all data of a running state of the vehicle, such as vehicle state information, actual vehicle speed data, sensing data on a steering wheel, a brake pedal, an accelerator pedal, and a clutch, acquired by a sensing module such as a vehicle system during running of the vehicle, which is not particularly limited in this embodiment of the present application.
And S102, carrying out electroencephalogram acquisition on the driver to obtain real-time electroencephalogram information, wherein the electroencephalogram information represents the driving intention of the driver.
It can be understood that when a reaction occurs, the cranial nerve cells generate corresponding electric waves, which are called brain waves, the brain waves are collected and expressed in the form of signals, i.e., electroencephalogram information, and the driving intention of the driver can be determined by analyzing the waveform of the electroencephalogram information.
In a possible implementation manner, the electroencephalogram information may be electroencephalogram information of a driver collected through a device having a brain-computer interface, the brain-computer interface has at least two electrodes, and in a process of collecting signals for the driver through the brain-computer interface, the two electrodes are located in different areas of the head of the driver, so as to collect electroencephalogram information generated by the different areas of the driver.
More specifically, a cosine similarity matrix is constructed from the calculated cosine similarities, and the type of the cosine similarity matrix is predicted by the trained convolutional neural network, so that the driving intention type of the brain wave signal is obtained. For example, if there are 128 feature vectors of the electroencephalogram information, 128 cosine similarities can be calculated; a cosine similarity matrix can then be constructed based on these 128 cosine similarities, and the type to which the cosine similarity matrix belongs is predicted by the trained convolutional neural network, thereby obtaining the driving intention type to which the electroencephalogram information belongs.
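As an illustration of how such a similarity-matrix classifier could be assembled, the sketch below computes cosine similarities between 128 EEG feature vectors and a reference template, arranges them into a matrix, and classifies the matrix with a small convolutional network. The feature dimensions, the outer-product matrix construction, the intention label set and the network architecture are illustrative assumptions, not the patent's specification.

```python
# Hypothetical sketch: classify driving intention from EEG features by
# (1) computing cosine similarities against a reference template,
# (2) arranging them into a similarity matrix, and
# (3) feeding the matrix to a small CNN. All shapes and templates are assumptions.
import numpy as np
import torch
import torch.nn as nn

def cosine_similarity_matrix(features: np.ndarray, template: np.ndarray) -> np.ndarray:
    """features: (128, d) EEG feature vectors; template: (d,) reference vector.
    Returns a (128, 128) matrix built from the 128 cosine similarities
    (the outer product is one simple way to form a 2-D input; an assumption here)."""
    sims = features @ template / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(template) + 1e-8)
    return np.outer(sims, sims)

class IntentionCNN(nn.Module):
    """Tiny CNN mapping a 128 x 128 similarity matrix to 3 intention classes
    (keep lane / change left / change right -- an assumed label set)."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Flatten(), nn.Linear(16 * 8 * 8, n_classes))

    def forward(self, x):
        return self.net(x)

# Usage with random stand-in data
feats = np.random.randn(128, 64).astype(np.float32)
template = np.random.randn(64).astype(np.float32)
matrix = cosine_similarity_matrix(feats, template)
logits = IntentionCNN()(torch.from_numpy(matrix).float()[None, None])
print("predicted intention class:", logits.argmax(dim=1).item())
```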
And step S103, predicting a driving route of the self-vehicle running on the current road condition by using the trained lane-changing decision model based on the real-time road condition information and the real-time vehicle condition information to obtain a route prediction result.
The road condition information and vehicle condition information acquired in real time are input into the trained lane change decision model, and the lane change decision model predicts the driving route. The route prediction result may include a lane keeping mode, a lane changing driving mode, or an intersection turning mode. It can be understood that the lane keeping mode, the lane changing driving mode and the intersection turning mode are all preset driving modes; the route prediction result is obtained by judging the current road condition of the self-vehicle and the driving intention of the driver based on the road condition information, the vehicle condition information and the electroencephalogram information, and, when the driving space of the real-time driving area meets the preset condition of the corresponding driving mode, executing the driving intention of the driver on the basis of ensuring safety.
In the present embodiment, the lane keeping mode may be understood as a driving mode in which the own vehicle maintains driving along the current lane, the lane changing driving mode may be understood as a driving mode in which the own vehicle switches from the current driving lane to driving along another lane, and the intersection turning may be understood as a driving mode in which the own vehicle turns according to the initial driving route or the real-time driving route when the own vehicle reaches the intersection.
And step S104, comparing the route prediction result with the driver driving intention represented by the electroencephalogram information, and controlling the self vehicle to execute lane-changing driving action along the route prediction result when the route prediction result is consistent with the driver driving intention.
In one possible implementation, the route prediction result is compared with the driving intention of the driver represented by the electroencephalogram information. When the route prediction result is consistent with the driving intention of the driver, the route prediction result is sent to the control system of the vehicle through interaction with that system; the control system converts the driving mode indicated by the received route prediction result into a control instruction for the vehicle and controls the vehicle to drive according to the indicated driving mode, so that when the route prediction result is a lane change, the vehicle is controlled to execute the lane-change driving action. Otherwise, the driving action corresponding to the driving intention of the driver is executed.
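The overall decision flow of steps S101 to S104 can be summarized in a short sketch; the helper callables and the RoutePrediction structure below are illustrative placeholders rather than the patent's actual interfaces.

```python
# Illustrative top-level loop for steps S101-S104. perceive, collect_eeg,
# lane_change_model and execute are assumed placeholders supplied by the caller.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RoutePrediction:
    mode: str                                       # "keep_lane", "change_lane" or "turn_at_intersection"
    waypoints: list = field(default_factory=list)   # planned path points

def autopilot_step(perceive: Callable, collect_eeg: Callable,
                   lane_change_model: Callable, execute: Callable) -> None:
    state = perceive()                      # S101: real-time road and vehicle condition
    intention = collect_eeg()               # S102: driver intention decoded from EEG
    prediction = lane_change_model(state)   # S103: route prediction
    # S104: execute the predicted route only when it agrees with the driver's intention;
    # otherwise fall back to the driving action matching the intention itself.
    if prediction.mode == intention:
        execute(prediction)
    else:
        execute(RoutePrediction(mode=intention))
```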
Fig. 2 is a flowchart illustrating an automatic driving lane change method according to a second embodiment, and based on the first embodiment, the method in fig. 2 may include, but is not limited to, steps S201 to S206.
Step S201, acquiring offline training information.
The offline training information includes offline road condition information and offline vehicle condition information.
It can be understood that the offline training information is stored historical information, i.e., historical road condition information and historical vehicle condition information, and the historical road condition information and historical vehicle condition information are matched according to the time at which they were generated to form a training set.
Step S202, a decision table is constructed by using the off-line training information.
A decision table, also called a judgment table, is a tabular tool suitable for describing situations in which multiple judgment conditions are combined with one another and multiple decision schemes exist. It describes complex logic accurately and concisely by mapping each combination of satisfied conditions to the actions to be performed. Unlike control statements in conventional programming languages, a decision table can clearly express the direct connections between multiple independent conditions and multiple actions.
In a possible implementation mode, off-line training information is preprocessed, off-line road condition information and off-line vehicle condition information are matched according to time, and a rough neural network algorithm is applied to an incomplete matching result to remove the incomplete off-line training information, so that a decision table is formed.
Step S203, discretizing the off-line training information in the decision table, and classifying the discretized off-line training information by using a preset neural network model to obtain a discrete classification result.
In a possible implementation manner, the discretized offline training information is input into a preset neural network model. The preset neural network model is initialized with probability parameters of a categorical distribution (the specific probability values of the preset probability parameters can be set in advance), and each group of data in the offline training information is numbered in sequence. After the preset probability parameters are obtained, the data whose numbers match the parameters sampled from the distribution are selected, and the selected data are assigned to the corresponding categories to obtain the discrete classification result.
And step S204, performing attribute reduction on the decision table according to the discrete classification result, and determining a minimum condition attribute set for expressing the decision.
It can be understood that attribute reduction deletes, from the decision table, offline training information that contributes little to the decision, thereby reducing the amount of data in the decision table. Because the discretized offline training information has large, diverse and high-quality data samples, deleting such information does not affect the expression of the objects; removing the redundant data simplifies the condition attributes of the decision.
Step S205, minimum condition attribute sets with the same expression decision result are integrated, and the coverage and the confidence level of the integrated minimum condition attribute sets are calculated to determine an offline decision rule.
In a possible implementation manner, the minimum condition attribute sets expressing the same decision result are summarized so as to extract decision rules. Specifically, rule extraction summarizes the earlier work: the decision table is simplified again according to the attribute reduction result, the discretized and reduced condition attribute intervals and the rules with the same decision attribute are merged together, and the coverage of each rule is calculated, where the coverage is the proportion of the rule among all rules with the same decision attribute.
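As a concrete illustration of the coverage and confidence calculation, the small sketch below counts, for one candidate rule (condition pattern → decision), what share of the records with the same decision it covers and how often its condition pattern actually leads to that decision; the record format and attribute values are assumptions.

```python
# Hypothetical coverage / confidence calculation for extracted decision rules.
# A record is (condition_tuple, decision); a rule has the same form.
def rule_coverage_and_confidence(records, rule):
    cond, decision = rule
    same_decision = [r for r in records if r[1] == decision]
    matches_rule = [r for r in same_decision if r[0] == cond]
    same_condition = [r for r in records if r[0] == cond]
    coverage = len(matches_rule) / len(same_decision) if same_decision else 0.0
    confidence = len(matches_rule) / len(same_condition) if same_condition else 0.0
    return coverage, confidence

records = [(("gap_small", "speed_high"), "change_lane"),
           (("gap_small", "speed_high"), "change_lane"),
           (("gap_large", "speed_high"), "keep_lane"),
           (("gap_small", "speed_low"), "change_lane")]
rule = (("gap_small", "speed_high"), "change_lane")
print(rule_coverage_and_confidence(records, rule))
# -> (0.666..., 1.0): the rule covers 2 of the 3 "change_lane" records,
#    and its condition pattern always leads to "change_lane" in this data.
```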
Step S206, training the lane change decision model by using an offline decision rule, and comparing the route prediction result of the lane change decision model with offline electroencephalogram information.
In a possible implementation manner, parameters of the lane change decision model are adjusted iteratively in the training process, and when the coincidence degree of the route prediction result of the lane change decision model and the off-line electroencephalogram information reaches a preset accuracy rate, the training is ended.
Referring to fig. 3, in some embodiments, step S204 may include, but is not limited to, step S301 to step S305.
Step S301, a training data set is constructed according to the off-line training information, and the off-line training information of each type in the training data set is randomly encoded.
In one possible implementation, each type of offline training information in the training data set is encoded by representing each data item with a fixed-length random binary string, where each bit represents an attribute; for example, 1 indicates that the corresponding attribute is selected as required data for the decision, and 0 indicates that it is not selected.
Illustratively, for a typical lane-change scenario, the training data set constructed from the offline training information is {x_i, y_i, v_i, x_i^env, y_i^env, v_i^env}, where v_i is the speed of the self-vehicle, (x_i, y_i) are the horizontal and vertical coordinates of the self-vehicle's GPS position, (x_i^env, y_i^env) are the position coordinates of a surrounding vehicle or obstacle relative to the self-vehicle, and v_i^env is the speed of the surrounding vehicle relative to the self-vehicle. The training data set is randomly encoded to obtain a binary string such as {0, 1, ..., 0}.
Step S302, calculating the dependence degree of the decision attribute on off-line training information required by decision.
In one possible implementation, a fitness function is first defined:
[fitness function f(x) — the formula is presented as an image in the original publication]
and the degree of dependence of the decision attribute on the offline training information required for the decision is calculated using a dependence degree formula:
[dependence degree V_C(D) — the formula is presented as an image in the original publication]
wherein f(x) is the fitness function, h(x) is the number of 1s in the binary string, i.e., the number of selected offline training information items, n is the total number of characters in the binary string, i.e., the number of all offline training information items, and m is the degree of dependence of the decision attribute on the condition attributes represented by 1 in the binary string; the larger the value of m, the stronger the dependence of the decision attribute on those condition attributes. The goal is to make h(x) as small as possible while keeping the overall dependence of the decision attribute unchanged. V_C(D) represents the degree of dependence of the decision attribute D on the condition attributes C, C is the offline training information data, and U is the universe of discourse.
In step S303, it is determined whether the current dependency level is equal to the minimum relative dependency value. If yes, go to step S304; if not, go to step S305.
And step S304, outputting the minimum condition attribute set when the optimal fitness of the training data sets no longer improves over a plurality of consecutive generations.
And step S305, iteratively performing random selection, crossover and mutation on the training data set, and copying the training data set with the optimal fitness to the next generation, until the optimal fitness no longer improves over a plurality of consecutive generations, at which point the minimum condition attribute set is output.
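A compact sketch of the genetic-algorithm attribute reduction in steps S301 to S305 is given below. Because the patent's fitness and dependence formulas are reproduced only as images, the sketch assumes the common rough-set choices f(x) = (n − h(x))/n + m and dependence = |positive region| / |universe|; the record format and GA hyperparameters are likewise assumptions.

```python
# Hypothetical GA-based attribute reduction (steps S301-S305).
import random
from collections import defaultdict

def dependence(records, selected):
    """Assumed rough-set dependence of the decision on the selected condition
    attributes: fraction of records whose selected-attribute values determine
    the decision unambiguously (|positive region| / |universe|)."""
    groups = defaultdict(set)
    for cond, decision in records:
        groups[tuple(cond[i] for i in selected)].add(decision)
    consistent = sum(1 for cond, _ in records
                     if len(groups[tuple(cond[i] for i in selected)]) == 1)
    return consistent / len(records)

def fitness(bits, records):
    """Assumed fitness: prefer few selected attributes while keeping dependence high."""
    selected = [i for i, b in enumerate(bits) if b]
    n, h = len(bits), sum(bits)
    return (n - h) / n + dependence(records, selected)

def reduce_attributes(records, n_attr, pop=20, gens=50, patience=5):
    population = [[random.randint(0, 1) for _ in range(n_attr)] for _ in range(pop)]
    best, stall = None, 0
    for _ in range(gens):
        population.sort(key=lambda b: fitness(b, records), reverse=True)
        if best is None or fitness(population[0], records) > fitness(best, records):
            best, stall = population[0][:], 0          # new optimum found
        else:
            stall += 1
        if stall >= patience:                          # optimum unchanged for several generations
            break
        parents = population[:pop // 2]                # selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_attr)          # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                  # mutation
                child[random.randrange(n_attr)] ^= 1
            children.append(child)
        population = parents + children
    return [i for i, bit in enumerate(best) if bit]    # minimum condition attribute set

records = [((0, 1, 1), "change_lane"), ((0, 0, 1), "keep_lane"), ((1, 1, 0), "change_lane")]
print(reduce_attributes(records, n_attr=3))
```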
Referring to fig. 4, in some embodiments, step S102 may include, but is not limited to, step S401 to step S403.
Step S401, collecting brain wave signals generated when a driver drives to obtain initial brain wave information.
And S402, preprocessing initial electroencephalogram information, and comparing the preprocessed initial electroencephalogram information by using a preset electroencephalogram analog template to obtain an electroencephalogram analog result.
And S403, determining error potentials in the preprocessed initial electroencephalogram information according to the electroencephalogram analog result, and performing rejection processing to obtain electroencephalogram information representing the driving intention of the driver.
In a possible implementation, electroencephalogram signals are acquired from 32 EEG channels (FP1 to O2) based on a Biopac actiCHAmp 64 electroencephalogram acquisition and analysis system. The area around the vehicle is divided into six regions (left front, right front, left rear, right rear, and so on) according to position relative to the self-vehicle, and the position and speed information of the self-vehicle and of other vehicles in the six regions is collected. Spatial filtering is performed using the common average reference (CAR) of the electroencephalogram signals of the FP1, F3, F7, FC5, FC1, FP2, F4, F8, FC6 and FC2 channels to eliminate noise; band-pass filtering is performed with a 1–10 Hz band-pass filter; the features of the electroencephalogram signals are enhanced through canonical correlation analysis (CCA); and finally classification is realized through template matching based on an analog template, with error-related potentials removed if they are generated.
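The preprocessing chain described above (common average reference, 1–10 Hz band-pass filtering, template matching) could look roughly like the sketch below; the sampling rate, channel ordering and the correlation-based matching criterion are assumptions, and the CCA feature enhancement step is omitted for brevity.

```python
# Hypothetical EEG preprocessing sketch: common average reference over frontal
# channels, 1-10 Hz band-pass, then nearest-template classification of intention.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz

def common_average_reference(eeg: np.ndarray, ref_idx) -> np.ndarray:
    """eeg: (n_channels, n_samples). Subtract the mean of the reference channels."""
    return eeg - eeg[ref_idx].mean(axis=0, keepdims=True)

def bandpass_1_10hz(eeg: np.ndarray, fs: int = FS) -> np.ndarray:
    b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

def match_template(epoch: np.ndarray, templates: dict) -> str:
    """Pick the intention whose analog template correlates best with the epoch
    (a simple stand-in for the template matching / ErrP rejection described above)."""
    flat = epoch.ravel()
    scores = {name: np.corrcoef(flat, t.ravel())[0, 1] for name, t in templates.items()}
    return max(scores, key=scores.get)

# Usage with random stand-in data: 32 channels, 2-second epoch
eeg = np.random.randn(32, 2 * FS)
eeg = bandpass_1_10hz(common_average_reference(eeg, ref_idx=list(range(10))))
templates = {"keep_lane": np.random.randn(32, 2 * FS),
             "change_lane": np.random.randn(32, 2 * FS)}
print("decoded intention:", match_template(eeg, templates))
```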
Referring to fig. 5, in some embodiments, step S104 may include, but is not limited to, step S501 to step S505.
Step S501, a plurality of pre-aiming points are found according to the distance between the self-vehicle and the obstacle, and a lane changing route is drawn by using a Bezier curve constructed by the pre-aiming points.
A Bezier curve (also written Bézier curve) is a mathematical curve used in two-dimensional graphics applications. A Bezier curve is created and edited by controlling four points on the curve (a start point, an end point, and two separate intermediate points), among which the control line located at the center of the curve plays an important role. This line is virtual: its middle crosses the Bezier curve and its two ends are the control endpoints. Moving the endpoints at the two ends changes the curvature (degree of bending) of the curve; moving the middle points (i.e., moving the virtual control line) moves the Bezier curve uniformly while the start point and end point stay locked.
A Bezier curve has the advantages of continuous curvature, easy tracking, and satisfaction of vehicle dynamics constraints. Therefore, the embodiment of the invention selects a fifth-order Bezier curve to draw the lane change route.
The fifth-order Bezier curve may be expressed as:
B(t) = Σ_{i=0}^{n} C(n, i) · P_i · (1 − t)^(n−i) · t^i,
where the binomial coefficient C(n, i) can be expressed as:
C(n, i) = n! / (i! · (n − i)!),
wherein B(t) represents the Bezier curve, P_0, P_1, P_2, P_3, P_4 and P_5 are all preview points, t ∈ [0, 1], n ∈ [0, 5], and i ∈ [0, 5].
The plurality of preview points found according to the distance between the self-vehicle and the obstacle can be selected in the real-time driving area, and the preview points are connected to obtain a Bezier curve used as the lane change route. As shown in FIG. 8, the coordinates P_i = (X_i, Y_i) of the 6 preview points of the fifth-order Bezier curve are determined according to the detected distance between the autonomous vehicle and the obstacle, specifically:
Firstly, the coordinates of preview points P_0 and P_5 are determined:
P_0 = (X_0, Y_0);
P_5 = (X_5, Y_5) = (L, w_r);
then the distance between the self-vehicle and the obstacle is divided into 4 parts to obtain the coordinates of P_1, P_2, P_3 and P_4:
P_1 = (X_1, Y_1) = (X_5/4, Y_0);
P_2 = (X_2, Y_2) = (X_5/2, Y_0);
P_3 = (X_3, Y_3) = (X_5/2, Y_5);
P_4 = (X_4, Y_4) = (3X_5/4, Y_5);
wherein L is the distance between the self-vehicle and the obstacle in the free coordinate system detected by the sensing system, and w_r is the lane width information provided by lane line detection in the sensing system.
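A short sketch of drawing the lane change route from the six preview points with a fifth-order Bezier curve is given below; it follows the coordinate construction above, with the starting point P_0 taken as the origin of the free coordinate system as an assumption.

```python
# Sketch of the quintic Bezier lane-change route built from six preview points.
# The preview-point layout follows the description above; P0 = (0, 0) is assumed.
import numpy as np
from math import comb

def preview_points(L: float, w_r: float, y0: float = 0.0) -> np.ndarray:
    """L: distance to the obstacle along the lane, w_r: lane width."""
    x5, y5 = L, w_r
    return np.array([(0.0, y0),
                     (x5 / 4, y0), (x5 / 2, y0),       # first half: keep the current lateral position
                     (x5 / 2, y5), (3 * x5 / 4, y5),   # second half: target-lane lateral position
                     (x5, y5)])

def quintic_bezier(points: np.ndarray, n_samples: int = 50) -> np.ndarray:
    """B(t) = sum_i C(5, i) * (1 - t)^(5 - i) * t^i * P_i, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, 2))
    for i, p in enumerate(points):
        curve += comb(5, i) * (1 - t) ** (5 - i) * t ** i * p
    return curve

route = quintic_bezier(preview_points(L=30.0, w_r=3.5))
print(route[:3])   # first few (x, y) points of the lane change route
```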
Step S502, the boundary expansion processing is carried out on the barrier, and the expansion radius of the barrier is set according to the current lane changing route and the self vehicle environment information.
When the lane change route is drawn with the Bezier curve, the self-vehicle is treated as a mass point and its width is ignored, so there is a possibility of collision between the self-vehicle and an obstacle when driving along the lane change route. Collision evaluation is therefore performed by applying boundary expansion processing to the obstacle and comparing the distance between the vehicle and the obstacle when driving along the lane change route with the obstacle collision radius obtained from the boundary expansion processing. The formula for the collision assessment can be expressed as:
[expansion radius formula — presented as an image in the original publication]
wherein r is the expansion radius of the obstacle, [symbol presented as an image in the original] is the width of the vehicle itself, d_min is the desired minimum safe distance between the obstacle and the self-vehicle, y_ob is the projected distance of the center of the obstacle from the lane line in the free coordinate system, and w_c1 is the width of the obstacle detected by the sensing system.
In step S503, it is determined whether the expansion radius of the obstacle is larger than the width of the obstacle. If yes, go to step S504; if not, go to step S505.
The judgment of whether the expansion radius of the obstacle is larger than the width of the obstacle is made according to the following relation:
[judgment formula — presented as an image in the original publication]
and step S404, controlling the self vehicle to run along the current lane change route.
And step S505, resetting the position of the preview points and redrawing the lane change route by using a Bezier curve constructed from the reset preview points, wherein the vehicle steering angle corresponding to the redrawn lane change route is larger than that corresponding to the previously drawn lane change route. Return to step S502.
A preview point is reselected in the real-time driving area. First, preview point P_5 is re-determined: by shortening the distance between P_0 and P_5, the vehicle steering angle corresponding to the redrawn lane change route becomes larger than that corresponding to the previously drawn lane change route, which reduces the risk of collision between the self-vehicle and the obstacle. In the present embodiment, preview point P_5 is modified by the following formula:
P_5 = (X_5, Y_5) = (4L/5, w_r).
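The replanning loop of steps S502 to S505 can be sketched as follows, reusing preview_points and quintic_bezier from the Bezier sketch above. The expansion-radius computation is left as a caller-supplied function because the patent's formula and judgment relation are reproduced only as images; only the control flow follows the description.

```python
# Hypothetical control flow for steps S502-S505. expansion_radius_fn stands in for
# the patent's expansion-radius formula, which is available only as an image.
# preview_points and quintic_bezier are the functions from the previous sketch.
from typing import Callable, Optional
import numpy as np

def plan_lane_change(L: float, w_r: float, w_c1: float,
                     expansion_radius_fn: Callable[[np.ndarray], float],
                     max_replans: int = 5) -> Optional[np.ndarray]:
    """w_c1: detected obstacle width; returns a drivable route or None."""
    x5 = L                                                # initial preview point P5 = (L, w_r)
    for _ in range(max_replans):
        route = quintic_bezier(preview_points(x5, w_r))   # (re)draw the lane change route
        r = expansion_radius_fn(route)                    # step S502: obstacle expansion radius
        if r > w_c1:                                      # step S503: compare with obstacle width
            return route                                  # step S504: follow the current route
        x5 = 4 * x5 / 5                                   # step S505: move P5 closer to P0 for a
                                                          # larger steering angle, then re-check
    return None   # no acceptable route found within the replanning budget
```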
as shown in fig. 9, after the host vehicle travels along the current lane change line and bypasses the obstacle, the control of returning the host vehicle to the lane of the initial travel route may be to draw a route on the lane returning to the initial travel route using a mirrored bezier curve, thereby returning to the lane of the initial travel route after bypassing the obstacle.
In some embodiments, as shown in fig. 10, when controlling the host vehicle to travel along the current lane change route: if no continuous obstacle appears within the detection range, the host vehicle is controlled, after changing lanes and driving past the obstacle, to change back to the lane of the initial travel route; if continuous obstacles appear within the detection range, the host vehicle is controlled to change back to the lane of the initial travel route only after it has driven past the continuous obstacles. A continuous obstacle is an obstacle whose distance from the previous obstacle in the same lane is smaller than a preset obstacle distance threshold.
In the above embodiment, the control of the own vehicle includes lateral control and longitudinal control.
The transverse control adopts an LQR algorithm, and the steps of the LQR algorithm under the discrete condition are as follows:
1. determining an iteration range N;
2. setting the iteration initial value: let P_N = Q_f, where Q_f = Q;
3. loop iteration, from back to front, t = N, ..., 1:
P_{t-1} = Q + A^T P_t A - A^T P_t B (R + B^T P_t B)^{-1} B^T P_t A;
4. for t = 0, ..., N-1, cyclically calculating the feedback coefficients:
K_t = (R + B^T P_{t+1} B)^{-1} B^T P_{t+1} A;
5. finally obtaining the optimized control quantity:
u_t* = -K_t x_t.
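The backward Riccati recursion and feedback-gain computation above translate directly into a few lines of NumPy; the state matrices A, B and weights Q, R below are placeholders for whatever lateral tracking-error model is used, which is not spelled out here.

```python
# Discrete finite-horizon LQR following the recursion above (A, B, Q, R are assumed
# to come from the chosen lateral tracking-error model).
import numpy as np

def lqr_gains(A: np.ndarray, B: np.ndarray, Q: np.ndarray, R: np.ndarray, N: int):
    P = Q.copy()                                            # step 2: P_N = Q_f = Q
    gains = []
    for _ in range(N):                                      # steps 3-4: iterate back to front
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # K_t
        P = Q + A.T @ P @ A - A.T @ P @ B @ K               # P_{t-1}
        gains.append(K)
    return list(reversed(gains))                            # K_0 ... K_{N-1}

# Usage with a toy 2-state model; the optimized control is u_t = -K_t x_t (step 5)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1) * 0.1
K0 = lqr_gains(A, B, Q, R, N=50)[0]
x = np.array([[0.5], [0.0]])          # e.g. lateral error and error rate
u = -K0 @ x
print("steering command:", u.item())
```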
the longitudinal control is controlled based on a PID algorithm, and specifically comprises the following steps:
constructing a control quantity output model based on the target speed, wherein the control quantity output model is as follows:
U_t = K_P e(t) + K_I ∫ e(t) dt + K_D de(t)/dt,
wherein U_t represents the control output quantity, e(t) = v_d - v_a, v_d represents the target speed, v_a represents the current speed, K_P denotes the proportional amplification factor, K_I denotes the integration time constant, and K_D represents the differential time constant;
discretizing the control quantity output model to obtain a discrete control quantity output model, wherein the discrete control quantity output model is as follows:
U_k = K_P e_k + K_I T Σ_{j=0}^{k} e_j + K_D (e_k - e_{k-1}) / T,
where T is the command cycle of the controller, e_j indicates the speed deviation at time j, e_k represents the speed deviation at time k, and e_{k-1} represents the speed deviation at time k-1;
and calculating the control output quantity by using the discrete control quantity output model, and performing driving assistance control on the vehicle through the calculated control output quantity.
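A minimal discrete PID speed controller matching the positional form above might look like the sketch below; the gains and command cycle are arbitrary placeholder values.

```python
# Positional discrete PID for longitudinal (speed) control, following the discrete
# model above. Gains and the command cycle T are placeholder values.
class SpeedPID:
    def __init__(self, kp: float, ki: float, kd: float, cycle_t: float):
        self.kp, self.ki, self.kd, self.t = kp, ki, kd, cycle_t
        self.error_sum = 0.0          # accumulates e_j over j = 0..k
        self.prev_error = 0.0         # e_{k-1}

    def step(self, target_speed: float, current_speed: float) -> float:
        error = target_speed - current_speed          # e_k = v_d - v_a
        self.error_sum += error
        u = (self.kp * error
             + self.ki * self.t * self.error_sum
             + self.kd * (error - self.prev_error) / self.t)
        self.prev_error = error
        return u                                      # control output U_k (throttle/brake)

# Usage: hold 20 m/s with placeholder gains and a 50 ms command cycle
pid = SpeedPID(kp=0.8, ki=0.1, kd=0.05, cycle_t=0.05)
print(pid.step(target_speed=20.0, current_speed=18.5))
```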
In some embodiments, a simulated driving scenario is constructed according to the real-time traffic information and the real-time vehicle condition information, and the simulated driving scenario is displayed, wherein the simulated driving scenario converts the real-time traffic information into the simulated traffic information and converts the real-time vehicle condition information into the simulated vehicle condition information.
As can be seen from the above, in the automatic driving lane change method provided in the embodiment of the present application, based on the deep learning method, the lane change decision model is trained based on the offline road condition information, the vehicle condition information, and the electroencephalogram information representing the driving intention of the driver, so as to promote the implementation of the human-like automatic driving, and the electroencephalogram-based driver intention recognition is objective and has high accuracy, thereby improving the synchronization and the safety during the automatic driving lane change.
In order to better implement the above method, an embodiment of the present invention further provides an automatic driving lane change device, which may be specifically integrated in an electronic device such as a server or a terminal.
Referring to fig. 6, an embodiment of the present application further provides an automatic driving lane change device, which can implement the automatic driving lane change method mentioned in the foregoing embodiment, and the device includes:
the acquiring module 601 is configured to acquire real-time traffic information and real-time vehicle condition information, where the traffic information includes the number of lanes, lanes where vehicles are located, running vehicles, and obstacle intervals representing relative distances between the vehicles and the running vehicles or obstacles, and the vehicle condition information includes vehicle speed and vehicle positioning;
the acquisition module 602 is used for acquiring electroencephalograms of a driver to obtain real-time electroencephalogram information, wherein the electroencephalogram information represents the driving intention of the driver;
the prediction module 603 is configured to predict a driving route of the self-vehicle running on the current road condition by using the trained lane-changing decision model based on the real-time road condition information and the real-time vehicle condition information, and obtain a route prediction result;
and the control module 604 is used for comparing the route prediction result with the driving intention of the driver represented by the electroencephalogram information, and controlling the vehicle to execute lane change driving action along the route prediction result when the route prediction result is consistent with the driving intention of the driver.
The specific implementation of the automatic driving lane-changing device is substantially the same as the specific implementation of the automatic driving lane-changing method, and is not described herein again.
An embodiment of the present application further provides an electronic device, which includes a memory and a processor; the memory stores a computer program, and the processor executes the computer program to implement the above automatic driving lane changing method. The electronic device can be any intelligent terminal, including a tablet computer, a vehicle-mounted computer, and the like.
Referring to fig. 7, fig. 7 illustrates a hardware structure of an electronic device according to another embodiment, where the electronic device includes:
the processor 701 may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute a relevant program to implement the technical solution provided in the embodiment of the present application;
the memory 702 may be implemented in a form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 702 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present disclosure is implemented by software or firmware, the relevant program codes are stored in the memory 702 and called by the processor 701 to execute the automatic lane change method according to the embodiments of the present disclosure;
an input/output interface 703 for realizing information input and output;
the communication interface 704 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (for example, USB, network cable, etc.) or in a wireless manner (for example, mobile network, WIFI, bluetooth, etc.);
a bus 705 that transfers information between the various components of the device (e.g., the processor 701, the memory 702, the input/output interface 703, and the communication interface 704);
wherein the processor 701, the memory 702, the input/output interface 703 and the communication interface 704 are communicatively connected to each other within the device via a bus 705.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the above-mentioned automatic driving lane change method.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The automatic driving lane changing method, the automatic driving lane changing device, the electronic equipment and the storage medium are based on a deep learning method, a lane changing decision model is trained based on offline road condition information, vehicle condition information and electroencephalogram information representing driving intention of a driver, realization of man-like automatic driving is promoted, recognition of the driver intention based on the electroencephalogram is objective and high in accuracy, and synchronism and safety during automatic driving lane changing are improved.
The embodiments described herein are intended to illustrate the technical solutions of the present application more clearly and do not limit them. It will be apparent to those skilled in the art that, as technology evolves and new application scenarios emerge, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the embodiments shown in the figures are not intended to limit the embodiments of the present application and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps may be included.
The above described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing a program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereto. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.

Claims (10)

1. An automatic driving lane change method, comprising:
acquiring real-time road condition information and real-time vehicle condition information, wherein the road condition information comprises the number of lanes, the lane in which the self-vehicle is located, running vehicles, and obstacle distances representing the relative distances between the self-vehicle and the running vehicles or obstacles, and the vehicle condition information comprises the self-vehicle speed and the self-vehicle positioning;
performing electroencephalogram acquisition on a driver to obtain real-time electroencephalogram information, wherein the electroencephalogram information represents the driving intention of the driver;
predicting a driving route of the self-vehicle running on the current road condition by using a trained lane change decision model based on the real-time road condition information and the real-time vehicle condition information to obtain a route prediction result;
and comparing the route prediction result with the driving intention of the driver represented by the electroencephalogram information, and controlling the self-vehicle to execute lane-changing driving action along the route prediction result when the route prediction result is consistent with the driving intention of the driver.
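By way of illustration only (this sketch is not part of the claims), the agreement check recited in claim 1 could look roughly as follows in Python; the names RoadCondition, VehicleCondition, predict_route, and decode_intention are hypothetical placeholders, and the concrete lane change decision model and electroencephalogram decoder are assumed to be supplied elsewhere.

```python
from dataclasses import dataclass
from enum import Enum

class Maneuver(Enum):
    KEEP_LANE = 0
    CHANGE_LEFT = 1
    CHANGE_RIGHT = 2

@dataclass
class RoadCondition:          # hypothetical container for the claimed road condition information
    lane_count: int
    ego_lane: int
    obstacle_distances: list  # relative distances to running vehicles / obstacles (m)

@dataclass
class VehicleCondition:       # hypothetical container for the claimed vehicle condition information
    speed_mps: float
    position: tuple           # (x, y) self-vehicle positioning

def decide_lane_change(road, vehicle, predict_route, decode_intention):
    """Execute the predicted lane change only when it matches the EEG-decoded intention."""
    predicted = predict_route(road, vehicle)   # trained lane change decision model (assumed given)
    intended = decode_intention()              # EEG-based driver intention (assumed given)
    if predicted == intended:
        return predicted                       # hand over to the lane-change controller
    return Maneuver.KEEP_LANE                  # disagreement: stay in lane and re-evaluate

# Minimal usage with stub model and decoder:
road = RoadCondition(lane_count=3, ego_lane=1, obstacle_distances=[18.0, 45.0])
veh = VehicleCondition(speed_mps=22.0, position=(0.0, 0.0))
print(decide_lane_change(road, veh,
                         predict_route=lambda r, v: Maneuver.CHANGE_LEFT,
                         decode_intention=lambda: Maneuver.CHANGE_LEFT))  # Maneuver.CHANGE_LEFT
```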
2. The automatic driving lane changing method according to claim 1, further comprising, before the acquiring of the real-time road condition information and the real-time vehicle condition information:
acquiring offline training information, wherein the offline training information comprises offline road condition information and offline vehicle condition information;
constructing a decision table by using the offline training information;
discretizing the offline training information in the decision table, and classifying the discretized offline training information by using a preset neural network model to obtain a discrete classification result;
performing attribute reduction on the decision table according to the discrete classification result, and determining a minimum condition attribute set for expressing a decision;
integrating minimum condition attribute sets that express the same decision result, and calculating the coverage and confidence of the integrated minimum condition attribute sets to determine an offline decision rule;
and training the lane change decision model by using the offline decision rule, and comparing a route prediction result of the lane change decision model with offline electroencephalogram information.
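By way of illustration only (not part of the claims), the coverage and confidence computation referred to in claim 2 could be sketched as follows over an already discretized decision table; the attribute names, the table contents, and the particular definitions used here (coverage relative to rows with the same decision, confidence relative to rows matching the condition) are assumptions, and the neural-network discretization step is omitted.

```python
# Toy discretized decision table: (condition attribute values, decision).
table = [
    ({"gap_front": "small", "speed": "high"}, "change"),
    ({"gap_front": "small", "speed": "low"},  "keep"),
    ({"gap_front": "small", "speed": "high"}, "change"),
    ({"gap_front": "large", "speed": "high"}, "keep"),
]

def rule_stats(rule_cond, rule_decision, rows):
    """Coverage: rows matching condition and decision / rows with that decision.
    Confidence: rows matching condition and decision / rows matching the condition."""
    cond_match = [r for r in rows if all(r[0].get(k) == v for k, v in rule_cond.items())]
    both_match = [r for r in cond_match if r[1] == rule_decision]
    decided = [r for r in rows if r[1] == rule_decision]
    coverage = len(both_match) / len(decided) if decided else 0.0
    confidence = len(both_match) / len(cond_match) if cond_match else 0.0
    return coverage, confidence

# The candidate rule "gap_front = small and speed = high -> change" is fully
# supported by the toy table, so both metrics are 1.0 and it would be retained
# as an offline decision rule.
print(rule_stats({"gap_front": "small", "speed": "high"}, "change", table))  # (1.0, 1.0)
```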
3. The automatic driving lane changing method according to claim 2, wherein the performing attribute reduction on the decision table according to the discrete classification result and determining the minimum condition attribute set for expressing the decision comprises:
constructing a training data set according to the offline training information, and randomly encoding each type of offline training information in the training data set;
calculating the degree of dependence of the decision attributes on the offline training information required for the decision;
judging whether the current dependency degree is equal to the minimum relative dependency value;
if yes, outputting the minimum condition attribute set when the optimal fitness of the training data set no longer improves over a plurality of consecutive generations;
if not, iteratively performing random selection, crossover, and mutation on the training data set, and copying the training data set with the optimal fitness into the next-generation training data set, until the optimal fitness no longer improves over a plurality of consecutive generations and the minimum condition attribute set is output.
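By way of illustration only (not part of the claims), the genetic search for a minimum condition attribute set described in claim 3 might be sketched as follows; the toy decision table, the fitness function, and the stopping patience are invented for the example, with the rough-set dependency degree approximated by the fraction of rows whose decision is uniquely determined by the selected attributes.

```python
import random

# Toy discretized decision table: rows of condition attribute values plus a decision.
ROWS = [((0, 1, 0, 1), 0), ((0, 1, 1, 1), 0), ((1, 0, 0, 0), 1),
        ((1, 1, 0, 0), 1), ((0, 0, 1, 1), 0), ((1, 0, 1, 0), 1)]
N_ATTR = 4

def dependency(mask):
    """Rough-set style dependency: fraction of rows whose decision is uniquely
    determined by the selected condition attributes."""
    classes = {}
    for cond, dec in ROWS:
        key = tuple(c for c, m in zip(cond, mask) if m)
        classes.setdefault(key, set()).add(dec)
    consistent = sum(1 for cond, dec in ROWS
                     if len(classes[tuple(c for c, m in zip(cond, mask) if m)]) == 1)
    return consistent / len(ROWS)

def fitness(mask):
    if not any(mask):
        return 0.0
    # Prefer full dependency achieved with as few attributes as possible.
    return dependency(mask) - 0.01 * sum(mask)

def minimal_attribute_set(pop_size=20, patience=5, p_mut=0.1):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(N_ATTR)] for _ in range(pop_size)]
    best, stale = max(pop, key=fitness), 0
    while stale < patience:                       # stop once the best stops improving
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N_ATTR)     # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                children.append([1 - g if random.random() < p_mut else g for g in child])
        children[0] = best[:]                     # elitism: copy the best into the next generation
        pop = children
        new_best = max(pop, key=fitness)
        if fitness(new_best) > fitness(best):
            best, stale = new_best, 0
        else:
            stale += 1
    return [i for i, g in enumerate(best) if g]   # indices of the selected condition attributes

print(minimal_attribute_set())  # indices of a candidate minimum condition attribute set, e.g. [0] or [3]
```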
4. The automatic driving lane changing method according to claim 1, wherein the performing electroencephalogram acquisition on the driver to obtain electroencephalogram information representing the driving intention of the driver comprises:
collecting electroencephalogram signals generated while the driver is driving to obtain initial electroencephalogram information;
preprocessing the initial electroencephalogram information, and comparing the preprocessed initial electroencephalogram information with a preset electroencephalogram simulation template to obtain an electroencephalogram simulation result;
and determining error potentials in the preprocessed initial electroencephalogram information according to the electroencephalogram simulation result, and removing them to obtain the electroencephalogram information representing the driving intention of the driver.
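By way of illustration only (not part of the claims), the template comparison and error-potential removal in claim 4 could be sketched as follows; the use of a normalized correlation against a preset error-potential template, the threshold value, and the epoch layout are assumptions, and the preprocessing (filtering, baseline correction) is taken as already done.

```python
import numpy as np

def remove_error_potentials(epochs, errp_template, threshold=0.7):
    """Drop EEG epochs that resemble a preset error-potential (ErrP) template.

    epochs        : array of shape (n_epochs, n_samples), assumed already preprocessed
    errp_template : array of shape (n_samples,), the preset comparison template
    threshold     : correlation above which an epoch is treated as an ErrP
    """
    template = (errp_template - errp_template.mean()) / errp_template.std()
    kept = []
    for epoch in epochs:
        z = (epoch - epoch.mean()) / (epoch.std() + 1e-12)
        corr = float(np.dot(z, template) / len(template))  # normalized correlation
        if corr < threshold:                               # keep only non-ErrP epochs
            kept.append(epoch)
    return np.asarray(kept)

# Toy usage: the second epoch matches the template and is removed.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, np.pi, 128))
epochs = np.stack([rng.normal(size=128),
                   template + 0.05 * rng.normal(size=128)])
print(remove_error_potentials(epochs, template).shape)  # (1, 128)
```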
5. The automatic driving lane changing method according to claim 1, wherein the controlling the self-vehicle to execute the lane-changing driving action along the route prediction result comprises:
finding a plurality of preview points according to the distance between the self-vehicle and the obstacle, and drawing a lane-changing route by using a Bezier curve constructed from the preview points;
performing boundary expansion processing on the obstacle, and setting the expansion radius of the obstacle according to the current lane-changing route and the environment information of the self-vehicle;
judging whether the expansion radius of the obstacle is larger than the width of the obstacle;
if so, controlling the self-vehicle to travel along the current lane-changing route;
if not, resetting the positions of the preview points, redrawing the lane-changing route by using a Bezier curve constructed from the reset preview points, wherein the steering angle of the self-vehicle corresponding to the redrawn lane-changing route is larger than the steering angle corresponding to the previously drawn lane-changing route, and returning to the step of performing boundary expansion processing on the obstacle.
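By way of illustration only (not part of the claims), the Bezier-curve planning loop of claim 5 might be sketched as follows; the preview-point placement, the inflation radius, and the re-planning step that pulls the preview points closer to produce a larger steering angle are simplified assumptions, and the claim's comparison of the expansion radius with the obstacle width is reduced here to a clearance check against the inflated obstacle.

```python
import math

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve defined by four control (preview) points."""
    pts = []
    for i in range(n + 1):
        t = i / n
        bez = lambda a, b, c, d: ((1 - t) ** 3 * a + 3 * (1 - t) ** 2 * t * b
                                  + 3 * (1 - t) * t ** 2 * c + t ** 3 * d)
        pts.append((bez(p0[0], p1[0], p2[0], p3[0]), bez(p0[1], p1[1], p2[1], p3[1])))
    return pts

def route_clear(route, obstacle_xy, inflation_radius):
    """Boundary-expansion check: every sampled point must lie outside the inflated obstacle."""
    return all(math.hypot(x - obstacle_xy[0], y - obstacle_xy[1]) > inflation_radius
               for x, y in route)

def plan_lane_change(ego, target_lateral, obstacle_xy, obstacle_width,
                     inflation_radius, max_attempts=5):
    """Re-plan with preview points pulled closer (i.e. a larger steering angle)
    until the inflated obstacle is cleared, loosely following the loop in claim 5."""
    ahead = 30.0                                   # initial longitudinal preview distance (m)
    for _ in range(max_attempts):
        p0 = ego
        p1 = (ego[0] + ahead / 3, ego[1])
        p2 = (ego[0] + 2 * ahead / 3, target_lateral)
        p3 = (ego[0] + ahead, target_lateral)      # preview points used as Bezier control points
        route = cubic_bezier(p0, p1, p2, p3)
        if route_clear(route, obstacle_xy, max(inflation_radius, obstacle_width)):
            return route                           # current lane-changing route is feasible
        ahead *= 0.8                               # reset preview points: sharper lane change
    return None                                    # no feasible lane-changing route found

route = plan_lane_change(ego=(0.0, 0.0), target_lateral=3.5,
                         obstacle_xy=(20.0, 0.0), obstacle_width=1.0,
                         inflation_radius=2.0)
print(route is not None, len(route) if route else 0)  # True 51
```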
6. The automatic driving lane changing method according to claim 5, wherein the controlling the self-vehicle to travel along the current lane-changing route comprises:
when no continuous obstacle appears in the detection range, controlling the self-vehicle, after changing lanes, to drive past the obstacle and then return to the lane of the initial driving route; and when continuous obstacles appear in the detection range, controlling the self-vehicle, after changing lanes, to drive past the continuous obstacles and then return to the lane of the initial driving route;
wherein a continuous obstacle is an obstacle whose distance from the previous obstacle in the same lane is less than a preset obstacle distance threshold.
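By way of illustration only (not part of the claims), the continuous-obstacle grouping in claim 6 could be sketched as follows; the gap threshold, the return margin, and the one-dimensional obstacle positions are assumptions chosen for the example.

```python
def return_point_after_obstacles(obstacle_positions, gap_threshold=10.0, margin=5.0):
    """Group same-lane obstacles whose gap to the previous obstacle is below the
    threshold ('continuous obstacles') and return a longitudinal position after
    which the self-vehicle may change back to the lane of the initial route."""
    if not obstacle_positions:
        return None
    xs = sorted(obstacle_positions)
    last_in_group = xs[0]
    for prev, cur in zip(xs, xs[1:]):
        if cur - prev < gap_threshold:   # still part of the continuous obstacle
            last_in_group = cur
        else:
            break                        # gap large enough: the group ends here
    return last_in_group + margin

# Obstacles at 20 m, 26 m and 50 m ahead: the first two are continuous, so the
# vehicle returns to its lane after ~31 m, before the isolated obstacle at 50 m.
print(return_point_after_obstacles([20.0, 26.0, 50.0]))  # 31.0
```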
7. The automatic driving lane changing method according to claim 1, further comprising:
constructing a simulated driving scene according to the real-time road condition information and the real-time vehicle condition information, and displaying the simulated driving scene, wherein constructing the simulated driving scene comprises converting the real-time road condition information into simulated road condition information and converting the real-time vehicle condition information into simulated vehicle condition information.
8. An automatic driving lane-changing device, characterized in that the device comprises:
the device comprises an acquisition module, a collection module, a prediction module, and a control module, wherein the acquisition module is used for acquiring real-time road condition information and real-time vehicle condition information, the road condition information comprises the number of lanes, the lane in which the self-vehicle is located, running vehicles, and obstacle distances representing the relative distances between the self-vehicle and the running vehicles or obstacles, and the vehicle condition information comprises the self-vehicle speed and the self-vehicle positioning;
the collection module is used for performing electroencephalogram acquisition on a driver to obtain real-time electroencephalogram information, wherein the electroencephalogram information represents the driving intention of the driver;
the prediction module is used for predicting a driving route of the self-vehicle running on the current road condition by utilizing the trained lane-changing decision model based on the real-time road condition information and the real-time vehicle condition information to obtain a route prediction result;
and the control module is used for comparing the route prediction result with the driver driving intention represented by the electroencephalogram information and controlling the self-vehicle to execute lane-changing driving action along the route prediction result when the route prediction result is consistent with the driver driving intention.
9. An electronic device, comprising a memory storing a computer program and a processor that implements the automatic driving lane changing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, the computer program, when executed by a processor, implementing the automatic driving lane changing method according to any one of claims 1 to 7.
CN202211445165.0A 2022-11-18 2022-11-18 Automatic driving lane changing method and device, electronic equipment and storage medium Pending CN115716476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211445165.0A CN115716476A (en) 2022-11-18 2022-11-18 Automatic driving lane changing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211445165.0A CN115716476A (en) 2022-11-18 2022-11-18 Automatic driving lane changing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115716476A true CN115716476A (en) 2023-02-28

Family

ID=85255535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211445165.0A Pending CN115716476A (en) 2022-11-18 2022-11-18 Automatic driving lane changing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115716476A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116788271A (en) * 2023-06-30 2023-09-22 北京理工大学 Brain control driving method and system based on man-machine cooperation control
CN116788271B (en) * 2023-06-30 2024-03-01 北京理工大学 Brain control driving method and system based on man-machine cooperation control
CN117184086A (en) * 2023-11-07 2023-12-08 江西科技学院 Intelligent driving system based on brain waves

Similar Documents

Publication Publication Date Title
JP7338052B2 (en) Trajectory prediction method, device, equipment and storage media resource
Yu et al. Occlusion-aware risk assessment for autonomous driving in urban environments
Lefèvre et al. A survey on motion prediction and risk assessment for intelligent vehicles
Gulzar et al. A survey on motion prediction of pedestrians and vehicles for autonomous driving
US11816901B2 (en) Multi-agent trajectory prediction
CN115716476A (en) Automatic driving lane changing method and device, electronic equipment and storage medium
Gidado et al. A survey on deep learning for steering angle prediction in autonomous vehicles
Cheung et al. Identifying driver behaviors using trajectory features for vehicle navigation
CN113076599A (en) Multimode vehicle trajectory prediction method based on long-time and short-time memory network
CN111583716B (en) Vehicle obstacle avoidance method and device, electronic equipment and storage medium
US11912301B1 (en) Top-down scenario exposure modeling
JP7511544B2 (en) Dynamic spatial scenario analysis
CN114323054A (en) Method and device for determining running track of automatic driving vehicle and electronic equipment
CN113942524A (en) Vehicle running control method and system and computer readable storage medium
Al Mamun et al. Lane marking detection using simple encode decode deep learning technique: SegNet
Chen Multimedia for autonomous driving
JP2023540613A (en) Method and system for testing driver assistance systems
CN117585017B (en) Automatic driving vehicle lane change decision method, device, equipment and storage medium
Li et al. Attention-based lane change and crash risk prediction model in highways
CN116331221A (en) Driving assistance method, driving assistance device, electronic equipment and storage medium
Das et al. Pore acceptance predictions of motorised Two-Wheelers during filtering at urban Mid-Block sections
Hodges et al. Deep learning for driverless vehicles
Vasile et al. Comfort and safety in conditional automated driving in dependence on personal driving behavior
Rizvi et al. Fuzzy adaptive cruise control system with speed sign detection capability
Mardiati et al. Motorcycle movement model based on markov chain process in mixed traffic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination