CN112356841B - Vehicle control method and device based on brain-computer interaction - Google Patents

Vehicle control method and device based on brain-computer interaction

Info

Publication number
CN112356841B
CN112356841B (application CN202011356563.6A)
Authority
CN
China
Prior art keywords
vehicle
driving
information
driver
driving behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011356563.6A
Other languages
Chinese (zh)
Other versions
CN112356841A (en
Inventor
徐昕
方强
尹昕
曾宇骏
刘学卿
兰亦星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202011356563.6A priority Critical patent/CN112356841B/en
Publication of CN112356841A publication Critical patent/CN112356841A/en
Application granted granted Critical
Publication of CN112356841B publication Critical patent/CN112356841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09Driving style or behaviour
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0019Control system elements or transfer functions
    • B60W2050/0028Mathematical models, e.g. for simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The invention discloses a vehicle control method and device based on brain-computer interaction, wherein the method comprises the following steps: S1, sensing external environment information of a controlled vehicle and state information of the controlled vehicle in real time during the driving process of the controlled vehicle; S2, judging all driving scenes matched with the current environment according to the real-time sensed information, and outputting these driving scenes as selectable driving decisions for the driver to choose from; S3, collecting the driver's brain wave signals and extracting the driver's decision selection instruction for the driving decisions; and S4, when a decision selection instruction is received, calling the corresponding target driving behavior model from a plurality of pre-trained driving behavior models according to the decision selection instruction, and controlling the driving behavior of the vehicle with the target driving behavior model. The method has the advantages of a simple implementation, a high degree of intelligence, good control stability and robustness, safety, reliability and the like.

Description

Vehicle control method and device based on brain-computer interaction
Technical Field
The invention relates to the technical field of vehicle control, in particular to a vehicle control method and device based on brain-computer interaction.
Background
A brain-controlled vehicle is intended to control vehicle driving through brain wave signals. At present, brain-controlled driving mainly focuses on using the brain-computer interface to make driving-behavior decisions for engineering vehicles or wheelchair-type vehicles with low running speed and low steering rate. Typically, a simple processing chip analyzes the original brain wave signals and, based on the analysis result, directly drives specific control components of the vehicle, such as the accelerator, the brake and the steering wheel, so as to control the vehicle speed and steering. For example, Chinese patent application 2017101716722 discloses a static smooth-wheel road roller controlled by electroencephalogram signals, which extracts the original brain wave signals, analyzes the control intentions of moving forward, moving backward, turning left and turning right contained therein, and thereby controls the road roller; and patent application 2013105670931 discloses a brain wave remote control car and method, in which an electroencephalogram acquisition part collects the original electroencephalogram signals for analysis and processing, the part of the car body the person intends to control is identified, and a controller drives the corresponding driving part of the car body according to the electroencephalogram analysis result, thereby controlling the motion of the car body.
However, the conventional driving control vehicle scheme based on brain waves mainly has the following defects:
firstly, the specific control actions of the vehicle are analyzed directly from the original brain wave signal, and neither the surrounding environment nor the vehicle's own state is taken into account, so this brain-wave vehicle-control mode requires the driver's attention to be concentrated enough to monitor the surrounding environment and the vehicle state manually in real time;
secondly, the brain's intention needs to be extracted directly from the original electroencephalogram signal; this extraction is actually difficult, errors may occur, the process is easily disturbed, and the stability and robustness of actual vehicle control are therefore poor;
thirdly, because a simple single-core processing chip without an operating system is used to extract the human brain waves, only simple human intentions, such as moving forward or backward, can be obtained, and there is no capability to understand complex intentions, so interaction between human intention and the machine cannot be realized during driving and complex vehicle control cannot be completed; the driver is still required to perform complex vehicle control in driving scenes such as overtaking, car following, left turning, right turning and roundabouts, and this driving control mode is therefore unsuitable for driving environments with complex changes such as overtaking, car following, left turning, right turning and roundabouts.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides a vehicle control method and device based on brain-computer interaction which are simple to implement, highly intelligent, good in control stability and robustness, safe and reliable, and suitable for various driving environments with complex changes.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a vehicle control method based on brain-computer interaction comprises the following steps:
s1, information perception: the method comprises the following steps that a controlled vehicle senses external environment information of the controlled vehicle and self state information of the controlled vehicle in real time in the driving process of the controlled vehicle;
s2, judging decision requirements: judging all driving scenes matched with the current environment according to the external environment information of the controlled vehicle sensed in real time and the self state information of the controlled vehicle, and outputting the driving scenes as optional driving decisions to be provided for a driver to select;
s3, decision instruction extraction: collecting brain wave signals of a driver in real time, and extracting decision selection instructions of the driver for the driving decision from the collected brain wave signals;
s4, controlling and driving the vehicle: and when the decision selection instruction is received, calling a corresponding target driving behavior model from a plurality of driving behavior models obtained by pre-training according to the decision selection instruction, wherein the driving behavior model comprises steering control information of a driver on a steering wheel under different driving scenes, and controlling the driving behavior of the vehicle by using the target driving behavior model.
Further, the driving scene comprises obstacle avoidance, overtaking, car following, road following, Uturn and steering.
Further, the external environment information includes one or more of boundary feature information, object state information and image information in the external environment; the self-state information of the vehicle comprises vehicle speed and/or acceleration, and the steering control information of the steering wheel comprises steering angle and/or steering speed.
Further, before step S1, a driving behavior model training step is further included, and the specific steps include: the method comprises the steps of obtaining external environment information of a vehicle, state information of the vehicle and steering control information of a driver to a steering wheel under different driving scenes, and training to obtain a plurality of driving behavior models corresponding to the driving scenes.
Further, when the selectable driving decisions in step S2 are output, all driving decisions selectable in the current scene are correspondingly displayed by LED lamps blinking at different frequencies, so as to form visual stimuli for the driver to select from.
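Flashing each option at its own frequency suggests an SSVEP-style paradigm, in which the attended flicker frequency dominates the occipital EEG spectrum. The patent does not specify the decoding algorithm, so the following is only a hypothetical sketch of one common way to pick the attended option by comparing spectral power at the tagged frequencies; the channel layout, frequencies and band width are assumptions.

```python
# Hypothetical sketch of decoding which flickering option the driver attends to,
# assuming each LED blinks at a distinct frequency (SSVEP-style tagging).
import numpy as np

def decode_flicker_choice(eeg: np.ndarray, fs: float,
                          option_freqs: dict) -> str:
    """eeg: 1-D occipital-channel segment; fs: sampling rate in Hz;
    option_freqs: e.g. {"overtake": 8.0, "follow": 10.0, "u_turn": 12.0}."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size))) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

    def band_power(f0: float, half_width: float = 0.3) -> float:
        mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
        return float(spectrum[mask].sum())

    # Pick the decision whose tagged frequency carries the most spectral power.
    return max(option_freqs, key=lambda name: band_power(option_freqs[name]))
```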
Further, the specific step of controlling the driving behavior of the vehicle in step S4 includes: inputting the external environment information of the current controlled vehicle and the state information of the controlled vehicle into the target driving behavior model to obtain the corresponding steering wheel steering target angle output, and controlling and adjusting the current steering wheel control angle and steering wheel control rotation speed according to the obtained steering wheel target steering angle.
Further, the step S3 further includes extracting a vehicle speed selection command selected by the driver corresponding to the vehicle speed, and the step S4 of controlling the driving behavior of the vehicle further includes controlling the vehicle speed of the vehicle according to the vehicle speed selection command.
A brain-computer interaction based vehicle control apparatus comprising:
the information perception module is used for detecting the external environment information of the controlled vehicle and the self state information of the controlled vehicle in real time in the driving process of the controlled vehicle;
the decision demand judging module is used for judging all driving scenes matched with the current environment according to the real-time sensed environment information outside the controlled vehicle and the state information of the controlled vehicle and outputting the driving scenes as optional driving decisions to be provided for the driver to select;
the decision instruction extraction module is used for acquiring brain wave signals of a driver in real time and extracting decision selection instructions of the driver for the driving decision from the acquired brain wave signals;
and the vehicle control driving module is used for calling a corresponding target driving behavior model from a plurality of driving behavior models obtained by pre-training according to the decision selection instruction when the decision selection instruction is received, wherein the driving behavior model comprises steering control information of a driver on a steering wheel under different driving scenes, and the target driving behavior model is used for controlling the driving behavior of the vehicle.
Furthermore, the information acquisition module comprises one or more of a boundary characteristic information sensing unit for sensing boundary characteristic information, an object state information sensing unit for sensing object state information and an image information sensing unit for sensing image information.
Further, the driving behavior model display device further comprises a data storage module for storing the driving behavior model and/or a display module for displaying data.
Compared with the prior art, the invention has the advantages that:
1. the invention continuously senses the external environment of the vehicle and the state of the vehicle in the driving process, judges all driving scenes matched with the current environment according to the current sensed information, provides all matched driving scenes as selectable driving decisions for a driver to select, extracts a decision selection instruction after collecting brain wave signals of the driver, calls different driving behavior models according to the obtained decision selection instruction, contains the steering control information of the expert driver to the steering wheel under different scenes obtained by pre-training in the different driving behavior models, and can realize the driving control of the vehicle behaviors by people by calling the corresponding driving behavior models.
2. The invention makes decision task by brain wave, extracts decision intention of human to current environment decision from brain wave signal, calls driving behavior model based on decision intention, and executes integral driving behavior in driving process by driving behavior model.
3. The invention can be applied to various complex driving environments with crossing obstacle avoidance, overtaking, car following, road following, Uturn, steering and the like, and can provide a selection interface for interaction between a person and a machine when the machine cannot be selected independently, thereby achieving the driving control effect close to the driving behavior of the person.
Drawings
Fig. 1 is a schematic flow chart of an implementation process of the vehicle control method based on brain-computer interaction according to the embodiment.
Fig. 2 is a schematic structural diagram of an apparatus for implementing vehicle control based on brain-computer interaction according to the present embodiment.
Fig. 3 is a schematic diagram of a hardware structure employed in an embodiment of the present invention.
Fig. 4 is a schematic flow chart of implementing vehicle control in an embodiment of the present invention.
FIG. 5 is a schematic diagram of an unstructured simulation scene setup according to an embodiment of the present invention.
Fig. 6 is a schematic view of a kinematic model of a two-wheeled vehicle used in an embodiment of the present invention.
FIG. 7 is a diagram of various driver behavior models trained according to an embodiment of the present invention.
Fig. 8 is a comparison graph of driving effects of different driver behavior models obtained in an embodiment of the present invention.
Fig. 9 is a comparison graph of the optimal driving behavior model and the behavior of the expert driver obtained in the embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
As shown in fig. 1, the steps of the vehicle control method based on brain-computer interaction in the present embodiment include:
s1, information perception: the method comprises the following steps that a controlled vehicle senses external environment information of the controlled vehicle and self state information of the controlled vehicle in real time in the driving process of the controlled vehicle;
s2, judging decision requirements: judging all driving scenes matched with the current scene according to the external environment information of the controlled vehicle sensed in real time and the self state information of the controlled vehicle, and providing the driving scenes as selectable driving decisions for a driver to select;
s3, decision instruction extraction: collecting brain wave signals of a driver in real time, and extracting decision selection instructions of the driver for driving decisions from the collected brain wave signals;
s4, controlling and driving the vehicle: and when a decision selection instruction is received, calling a corresponding target driving behavior model from a plurality of driving behavior models obtained by pre-training according to the decision selection instruction, wherein the driving behavior models comprise steering control information of a driver on a steering wheel under different driving scenes, and controlling the driving behavior of the vehicle by using the target driving behavior model.
The present embodiment continuously senses the external environment of the vehicle and the state of the vehicle itself during driving, and judges all driving scenes matched with the current environment according to the currently sensed information. The driving scenes can include obstacle avoidance, overtaking, car following, road following, Uturn (U-shaped turning) and steering. The matched driving scenes are provided as selectable driving decisions for the driver to choose from; after the driver's brain wave signals are acquired, the decision selection instruction is extracted, i.e., the driver's decision selection for tasks such as obstacle avoidance, overtaking, car following and road following. Different driving behavior models are then called according to the obtained decision selection instruction; these models contain the pre-trained steering control information of an expert driver on the steering wheel in different scenes, and by calling the corresponding driving behavior model, human-like driving control of the vehicle's behavior can be realized.
In the method, the decision-making task is performed through brain waves: the person's decision intention for the current environment, such as the intention to overtake, follow a car or turn at a fork, is extracted from the brain wave signals, the corresponding driving behavior model is called based on that decision intention, and the driving behavior model executes the complete driving behavior during driving. Compared with the traditional scheme of directly controlling specific vehicle components based on brain waves, the external environment and the vehicle's own state can be sensed automatically while the interaction between the brain's decision intention and the machine is realized, so the vehicle's control with respect to the current driving environment is more human-like, the stability and robustness of vehicle control are effectively improved, and safe and reliable driving is ensured. The method is particularly suitable for various complex environments in which obstacle avoidance, overtaking, car following, road following, U-turns, steering and the like intersect; at the same time the driver's attention does not need to be highly concentrated, which reduces driving fatigue.
The state information of the vehicle itself specifically includes vehicle speed, acceleration and the like, and the external environment information specifically includes boundary feature information, object state information, image information and the like in the external environment. As shown in fig. 2, in the present embodiment, the state information of the vehicle is collected by state sensors such as inertial navigation and an optical encoder disc; the external environment information is acquired through sensing devices such as a camera, a laser radar, a millimeter wave radar and an EQ4, and passable-boundary processing and optimization, detection of information such as object position and speed, and image processing are realized by a boundary processing module, an object feature detection module and an image processing module respectively. The brain-control decision demand judging module judges all driving scenes matched with the current scene according to the external environment information of the controlled vehicle detected in real time and the state information of the controlled vehicle itself, and provides these driving scenes as selectable driving decisions for the driver to select. The brain-control decision signal extraction module collects the driver's brain wave signals in real time, extracts the driver's decision selection instruction for the driving decisions from the collected brain wave signals, and sends the extracted decision selection instruction to the driving behavior model. When a decision selection instruction is received, the corresponding target driving behavior model is called according to the decision selection instruction, and the target driving behavior model adjusts the steering wheel angle, the vehicle speed and the like according to the currently sensed external environment information and vehicle state information.
In a specific application embodiment, the boundary detection module senses the external environment using an ADAS camera and a laser radar sensor: boundary line parameters are fitted from the ADAS camera output, and boundary features are extracted from the laser radar data after clustering, specifically by performing a polyfit polynomial fit on the coordinate points to obtain the corresponding high-order term parameters. The object detection module uses a millimeter wave radar and a 32-line laser radar sensor, and obtains information such as the relative position and speed of the corresponding obstacle through feature attribute analysis and clustering extraction of the object. The image detection module uses a camera, and the image processor performs image storage, display and perception fusion processing on the imported original images.
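As a rough illustration of the polyfit-based boundary fitting mentioned above, the following sketch fits a polynomial boundary curve to clustered boundary points; the point data, polynomial order and coordinate convention are assumptions.

```python
# Illustrative sketch: fit a passable-boundary curve y = f(x) to boundary points
# (e.g. clustered lidar returns or camera lane points) with numpy's polyfit.
import numpy as np

def fit_boundary(points_xy: np.ndarray, order: int = 3) -> np.ndarray:
    """points_xy: (N, 2) boundary points in the vehicle frame.
    Returns high-order polynomial coefficients, highest power first."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    return np.polyfit(x, y, order)

# Example: evaluate the fitted boundary 15 m ahead of the vehicle.
coeffs = fit_boundary(np.array([[1.0, 1.8], [5.0, 1.9], [10.0, 2.3], [20.0, 3.1]]))
lateral_offset_at_15m = np.polyval(coeffs, 15.0)
```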
In the specific configuration of the embodiment, when no decision selection instruction is received, the currently sensed external environment information and vehicle state information are input into each driving behavior model for matching; the target driving behavior model matching the current environment is found among the driving behavior models, and that matched target driving behavior model controls the driving behavior of the vehicle. Different driving behavior models, such as those corresponding to overtaking, car following, road following, U-turn and steering, can thus be called automatically for different scenes, realizing automatic driving control of the vehicle. Meanwhile, when a decision selection instruction is received, the corresponding target driving behavior model is called according to the decision selection instruction to control the vehicle, which provides a selection interface for the person to make an interactive decision with the machine when the machine cannot determine how to choose. The method is therefore suitable for various complex environments in which overtaking, car following, road following, U-turn, steering and the like intersect, and realizes the interaction of the person's driving intention with the machine. Of course, the external environment information and the vehicle's own state information may include other information according to actual requirements, and information sensing may also be implemented with other types of sensors or sensor combinations, configured according to actual requirements.
In this embodiment, before step S1, a driving behavior model training step is further included, and the specific steps include: obtaining the external environment information of the vehicle, the state information of the vehicle itself and the driver's steering control information on the steering wheel in different driving scenes, and training to obtain a plurality of driving behavior models corresponding to the driving scenes. The steering control information of the steering wheel includes the steering angle, the steering speed and the like. The driving behavior models are trained in advance, before vehicle control, by obtaining the external environments sensed by expert drivers in different environments together with the corresponding vehicle driving operations of those expert drivers, the vehicle driving operations including steering angle and steering speed control of the steering wheel, and by training with different models respectively, such as the extreme learning machine algorithm (ELM), the long short-term memory network (LSTM), Gaussian process regression (GP), the deep feedforward network Deep-BP and the support vector machine (SVM).
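As one concrete example of such data-driven training, the sketch below implements an ELM-style regressor (random hidden layer plus a generalized-inverse solve for the output weights) mapping perception features to the expert's steering-wheel angle. The feature dimensions and the synthetic data are assumptions, and in the invention one such model would be trained per driving scene.

```python
# Minimal ELM-style regressor: random hidden layer, closed-form output weights.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden: int = 200, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ELMRegressor":
        # Random input weights and biases, then solve the output weights
        # with the generalized (pseudo-)inverse, as in the ELM algorithm.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return np.tanh(X @ self.W + self.b) @ self.beta

# X: perception features (boundaries, object states, ego speed); y: expert steering angle.
X = np.random.rand(500, 12)   # synthetic placeholder data
y = np.random.rand(500)
overtake_model = ELMRegressor().fit(X, y)
```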
In a specific application embodiment, a large amount of data from expert drivers in different scenes (such as overtaking, meeting and following) is collected, and expert driver behavior models for the corresponding scenes are obtained in a data-driven way through algorithms such as the extreme learning machine ELM, the long short-term memory network LSTM, Gaussian process regression GP, the deep feedforward network Deep-BP and the support vector machine SVM. The quality of each model is compared by the mean deviation between the vehicle control (such as steering) learned by each model in a scene and the expert driver's operation behavior: the closer the steering angle obtained from a model is to the expert driver's control, the better the model. FIG. 8 shows the comparison of the effect of each algorithm in the meeting model tested in the specific application embodiment.
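The comparison criterion described above can be written as a simple mean-deviation score; the sketch below is illustrative, with variable names that are assumptions.

```python
# Mean absolute deviation between a model's predicted steering and the expert's steering.
import numpy as np

def mean_steering_deviation(predicted_angles: np.ndarray,
                            expert_angles: np.ndarray) -> float:
    return float(np.mean(np.abs(predicted_angles - expert_angles)))

# e.g.: best_model = min(models, key=lambda m: mean_steering_deviation(m.predict(X_test), y_test))
```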
After the expert driving behavior models are obtained through the above training, the current environment is matched against the inputs of the driving behavior models, and all successfully matched scenes are output and provided to the driver for decision selection; the human decision is provided to the vehicle through the electroencephalogram, so that the machine can call the corresponding driving behavior model to drive the vehicle.
The present embodiment may further be configured to extract the decision selection instruction for the current environment from the brain wave signal multiple times, and to determine the final decision selection instruction from the repeatedly extracted instructions, so as to perform multiple confirmations of the human brain decision, further ensure the safety and reliability of control, and avoid danger to the driving process caused by a single brain wave error.
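A hypothetical sketch of this repeated-confirmation idea is given below: the decision is decoded over several consecutive EEG windows and only acted on when the repeats agree. The agreement threshold and the windowing are assumptions, not values from the patent.

```python
# Only act on a decoded decision when repeated decodes agree strongly enough.
from collections import Counter
from typing import List, Optional

def confirmed_decision(decodes: List[str], min_agreement: float = 0.8) -> Optional[str]:
    """decodes: decision labels extracted from several consecutive EEG windows."""
    if not decodes:
        return None
    label, votes = Counter(decodes).most_common(1)[0]
    return label if votes / len(decodes) >= min_agreement else None

# Example: confirmed_decision(["overtake", "overtake", "follow", "overtake", "overtake"]) -> "overtake"
```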
In this embodiment, when the selectable driving decisions in step S2 are output, all driving decisions selectable in the current scene are displayed by LED lamps blinking at different frequencies, forming visual stimuli for the driver to select from. The machine responds to the driver's driving selection and calls the corresponding driving behavior model according to the driver's intention to drive the vehicle.
In this embodiment, before step S3, different passable-trajectory behavior decisions are obtained in advance by a curve interpolation method.
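By way of illustration, candidate passable trajectories can be generated by interpolating a smooth curve through a few sparse waypoints per behavior option; the sketch below uses a cubic spline, and the waypoints shown are made up for the example.

```python
# Illustrative trajectory generation: a cubic spline through sparse waypoints
# for each candidate behavior (e.g. keep-lane vs. lane-change).
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_trajectory(waypoints_xy: np.ndarray, n_samples: int = 50) -> np.ndarray:
    """waypoints_xy: (K, 2) sparse waypoints; returns (n_samples, 2) dense path."""
    s = np.linspace(0.0, 1.0, waypoints_xy.shape[0])       # path parameter
    spline = CubicSpline(s, waypoints_xy, axis=0)
    return spline(np.linspace(0.0, 1.0, n_samples))

keep_lane   = interpolate_trajectory(np.array([[0, 0], [10, 0], [20, 0], [30, 0]], float))
lane_change = interpolate_trajectory(np.array([[0, 0], [10, 0.5], [20, 3.0], [30, 3.5]], float))
```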
As shown in fig. 3, the specific steps of controlling the driving behavior of the vehicle in step S4 of the present embodiment include: inputting the external environment information of the current controlled vehicle and the state information of the controlled vehicle into the target driving behavior model to obtain the corresponding steering wheel steering target angle output, and controlling and adjusting the current steering wheel control angle and steering wheel control rotation speed according to the obtained steering wheel target steering angle.
In step S3, a vehicle speed selection instruction corresponding to the vehicle speed selected by the driver is also extracted, and controlling the driving behavior of the vehicle in step S4 further includes controlling the vehicle speed according to the vehicle speed selection instruction.
As shown in fig. 3, in this embodiment, the behavior data of the expert driver (e.g., the steering of the steering wheel) and the perception feature data are first trained in a data-driven manner to obtain driver behavior models for different scenes. In a specific scene (e.g., meeting, overtaking, following or obstacle avoidance), the person's decision intention (e.g., to meet, overtake, follow or avoid an obstacle) is obtained by detecting the person's brain waves, the corresponding driving behavior model is called according to that decision intention, the vehicle speed is matched by the corresponding driving behavior model, the steering target angle of the steering wheel is output, and the steering control angle and the steering control rotation speed of the steering wheel are adjusted according to the steering target angle, thereby completing a steering control process like that of the expert driver.
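The final adjustment of the steering-wheel angle toward the model's target angle can be pictured as a rate-limited tracking step, as in the sketch below; the rate limit and control period are assumed values for illustration only.

```python
# Drive the steering-wheel angle toward the target angle with a limited rotation speed.
def step_steering(current_angle: float, target_angle: float,
                  max_rate_deg_s: float = 180.0, dt: float = 0.02) -> float:
    """Return the next commanded steering-wheel angle (degrees) for one control cycle."""
    max_step = max_rate_deg_s * dt
    error = target_angle - current_angle
    step = max(-max_step, min(max_step, error))   # clamp to the allowed rotation speed
    return current_angle + step

angle = 0.0
for _ in range(10):                               # e.g. move toward a 45-degree target
    angle = step_steering(angle, 45.0)
```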
In order to implement the method, the embodiment further provides a vehicle control device based on brain-computer interaction, including:
the information acquisition module is used for detecting the external environment information of the controlled vehicle and the self state information of the controlled vehicle in real time in the driving process of the controlled vehicle;
the decision demand judging module is used for judging all driving scenes matched with the current environment according to the environment information outside the controlled vehicle detected in real time and the state information of the controlled vehicle and providing the driving scenes as selectable driving decisions for the driver to select;
the decision instruction extraction module is used for collecting brain wave signals of a driver and extracting decision selection instructions of the driver for driving decisions from the collected brain wave signals;
and the vehicle control driving module is used for calling, according to the decision selection instruction, the target driving behavior model of the corresponding scene from a plurality of driving behavior models obtained in advance from the driver's steering control information on the steering wheel in different driving scenes, and for controlling the driving behavior of the vehicle with the target driving behavior model.
The vehicle control device based on brain-computer interaction in this embodiment corresponds one-to-one with the vehicle control method based on brain-computer interaction.
In this embodiment, the information acquisition module specifically includes a boundary feature information sensing unit for sensing boundary feature information, an object state information sensing unit for sensing object state information, and an image information sensing unit for sensing image information, where the boundary feature information sensing unit specifically includes a boundary detection module and a boundary feature extraction module, the object state information sensing unit includes an object detection module and an object detection extraction state module, and the image information sensing unit includes an image detection module and an image preprocessing module, as shown in fig. 2.
In the embodiment, the hardware communication structure is shown in fig. 4: the 6108 industrial personal computer collects the perception feature information and image data required by model operation and transmits them via UDP to the 7164 industrial personal computer; the 7164 industrial personal computer receives the perception feature information and image data, obtains vehicle control values or decision strategies by calling the driving behavior models for different scenes, and transmits the vehicle decision control back to the 6108 industrial personal computer via UDP.
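A hypothetical sketch of such a UDP exchange between the two industrial computers is given below; the addresses, ports and JSON message format are assumptions chosen only to illustrate the perception-out / control-back pattern.

```python
# Illustrative UDP exchange: one side sends perception features, the other returns control values.
import json
import socket

PERCEPTION_ADDR = ("192.168.1.108", 15000)   # hypothetical receiver on the decision computer

def send_perception(features: dict) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(features).encode("utf-8"), PERCEPTION_ADDR)

def receive_control(port: int = 15001, bufsize: int = 4096) -> dict:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        payload, _addr = sock.recvfrom(bufsize)
        return json.loads(payload.decode("utf-8"))
```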
In this embodiment, the device further includes a data storage module storing the driving behavior models and a display module for displaying data; the data storage module stores the sensed data and the vehicle state transition data produced while the driver drives, and the display module displays the sensed external environment of the vehicle, the vehicle state information and the like in real time.
In this embodiment, the device further includes a driving behavior model switching module for controlling the switching of the driving behavior models; when the scene changes and a new decision selection instruction is extracted from the driver's brain wave signal, the switching module switches to and calls the corresponding driving behavior model to perform vehicle driving control.
In order to verify the effectiveness of the method, a simulation test was carried out. Specifically, on a curved road about 7 m wide, an obstacle, a vehicle running in the same direction as the simulated vehicle and a vehicle running in the opposite direction are arranged in front of the simulated vehicle, and the scene is built in PreScan; the resulting setup is shown in fig. 5. The simulated vehicle completes obstacle avoidance, overtaking and meeting while running on the road through the different models; its speed is 25 km/h to 50 km/h. Several road boundary detection sensors, image sensors, vehicle state sensors and the like are arranged on the vehicle, and these sensors and modules divide the work of transmitting the feature state information required by the models. The models can be matched according to different vehicle speeds to obtain the corresponding steering wheel steering and carry out vehicle control to complete the three functions. In the process of obstacle avoidance and overtaking, the speed of the target vehicle varies between 5 km/h and 20 km/h; its speed curve is shown in fig. 5.
The dynamics and kinematics model of the simulated vehicle specifically adopts a simplified two-wheel vehicle model, namely a single-track (bicycle) model, shown in fig. 6, where the model satisfies:
[Equation (1) — reproduced as an image in the original publication]
Defining the tire cornering (lateral) forces:
[Equation (2) — reproduced as an image in the original publication]
With the front and rear wheel cornering stiffnesses Caf and Car both positive, then under the small-slip-angle assumption:
[Equation (3) — reproduced as an image in the original publication]
Then, under the small-steering-angle assumption cos δ ≈ 1, substituting expressions (2) and (3) into expression (1) gives:
[Equation (4) — reproduced as an image in the original publication]
wherein
[the state and parameter definitions are given as an image in the original publication]
the absolute X-direction speed and the Y-direction speed in FIG. 3 for the controlled vehicle are reflected in the changes of the actual system and the simulation software; l represents the distance between the centers of the front and rear wheels, lfRepresenting the distance of the centre of mass of the vehicle to the centre of the rear wheel, lrRepresenting the distance of the vehicle's center of mass to the center of the front wheel; m represents the mass of the vehicle, CafRepresenting the cornering stiffness of the rear wheel, CarDenotes the side panel stiffness of the front wheel, IzRepresenting the moment of inertia.
In the embodiment, five models are used to train the driving behavior model: the extreme learning machine ELM, the long short-term memory network LSTM, Gaussian process regression GP, the deep feedforward network Deep-BP and the support vector machine SVM. As shown in FIG. 7, the driving behavior models trained by these five methods differ in their fields of application and in the amount of feature data required. In the ELM algorithm, training mainly consists of solving the generalized inverse of the input feature data, so training and testing are highly efficient, but parameter tuning is difficult. LSTM uses special memory cells that act like accumulators and gated neurons: a cell has a weight and couples to itself at the next time step, copying the true value of its state and accumulating the external signal, but this self-coupling is controlled by a multiplicative gate that another unit learns, which decides when to clear the memory. GP is a statistical method, a stochastic process whose observations occur over a continuous domain (e.g., time or space); in a Gaussian process, each point in the continuous input space is associated with a normally distributed random variable, any finite linear combination of those random variables is normally distributed, and the distribution of the Gaussian process is the joint distribution of all those (infinitely many) random variables, which yields a distribution over functions on a continuous domain. Deep-BP learning consists of the two processes of forward propagation of signals and backward propagation of errors, and a feed-forward network with more than three layers is here called a Deep-BP network. The SVM (support vector regression) algorithm, because it separates the data with a hyperplane, has the best tolerance to local disturbances of the training samples and has good robustness and generalization performance. In this embodiment the five models are trained, and the most suitable expert driving behavior network is found through training and data fitting according to the driving effects obtained by applying the different driving behavior models in the specific application embodiment; this network fits the expert's driving behavior best and has the smallest deviation from it. The comparison between the optimal driving model and the expert driver's behavior in the specific application embodiment is shown in fig. 9, i.e., the method of the present invention can achieve a driving effect close to that of a human.
The foregoing is merely a description of the preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Therefore, any simple modification, equivalent change or adaptation made to the above embodiments according to the technical spirit of the present invention, without departing from the content of the technical scheme of the present invention, shall fall within the protection scope of the technical scheme of the present invention.

Claims (10)

1. A vehicle control method based on brain-computer interaction is characterized by comprising the following steps:
s1. information perception: the method comprises the following steps that a controlled vehicle senses external environment information of the controlled vehicle and self state information of the controlled vehicle in real time in the driving process of the controlled vehicle;
s2, decision requirement judgment: judging all driving scenes matched with the current environment according to the external environment information of the controlled vehicle sensed in real time and the self state information of the controlled vehicle, and outputting the driving scenes as optional driving decisions to be provided for a driver to select;
s3. decision instruction fetch: collecting brain wave signals of a driver in real time, and extracting decision selection instructions of the driver for the driving decision from the collected brain wave signals;
s4. vehicle control driving: and when the decision selection instruction is received, calling a corresponding target driving behavior model from a plurality of driving behavior models obtained by pre-training according to the decision selection instruction, wherein the driving behavior model comprises steering control information of a driver on a steering wheel under different driving scenes, and controlling the driving behavior of the vehicle by using the target driving behavior model.
2. The brain-computer interaction based vehicle control method according to claim 1, wherein the driving scenarios include obstacle avoidance, passing, following, road following, Uturn, and steering.
3. The brain-computer interaction based vehicle control method according to claim 1, wherein the external environment information includes one or more of boundary feature information, object state information, and image information in an external environment; the self-state information of the vehicle comprises vehicle speed and/or acceleration, and the steering control information of the steering wheel comprises steering angle and/or steering speed.
4. The brain-computer interaction based vehicle control method according to claim 1, further comprising a driving behavior model training step before step S1, and the specific steps include: the method comprises the steps of obtaining external environment information of a vehicle, state information of the vehicle and steering control information of a driver to a steering wheel under different driving scenes, and training to obtain a plurality of driving behavior models corresponding to the driving scenes.
5. The brain-computer interaction based vehicle control method according to claim 1, wherein when the driving decision selectable in step S2 is output, all driving decisions selectable in the current scene are displayed correspondingly by using different frequency blinking LED lamps, so as to form a visual stimulus for the driver to select.
6. The brain-computer interaction based vehicle control method according to any one of claims 1 to 5, wherein the specific step of controlling the driving behavior of the vehicle in the step S4 comprises: and inputting the external environment information of the current controlled vehicle and the self state information of the controlled vehicle into a target driving behavior model to obtain the corresponding steering wheel steering target angle output, and controlling and adjusting the current steering wheel control angle and the steering wheel control rotating speed according to the obtained steering wheel target steering angle.
7. The brain-computer interaction based vehicle control method according to any one of claims 1-5, wherein the step S3 further comprises extracting a vehicle speed selection command selected by a driver corresponding to the vehicle speed, and the step S4 further comprises controlling the vehicle speed of the vehicle according to the vehicle speed selection command.
8. A vehicle control device based on brain-computer interaction is characterized by comprising:
the information perception module is used for detecting the external environment information of the controlled vehicle and the self state information of the controlled vehicle in real time in the driving process of the controlled vehicle;
the decision demand judging module is used for judging all driving scenes matched with the current environment according to the real-time sensed environment information outside the controlled vehicle and the state information of the controlled vehicle and outputting the driving scenes as optional driving decisions to be provided for the driver to select;
the decision instruction extraction module is used for acquiring brain wave signals of a driver in real time and extracting decision selection instructions of the driver for the driving decision from the acquired brain wave signals;
and the vehicle control driving module is used for calling a corresponding target driving behavior model from a plurality of driving behavior models obtained by pre-training according to the decision selection instruction when the decision selection instruction is received, wherein the driving behavior model comprises steering control information of a driver on a steering wheel under different driving scenes, and the target driving behavior model is used for controlling the driving behavior of the vehicle.
9. The brain-computer interaction based vehicle control device according to claim 8, wherein the information perception module comprises one or more of a boundary feature information perception unit for perceiving boundary feature information, an object state information perception unit for perceiving object state information, and an image information perception unit for perceiving image information.
10. The brain-computer interaction based vehicle control apparatus according to claim 8 or 9, further comprising a data storage module storing the driving behavior model and/or a display module for data display.
CN202011356563.6A 2020-11-26 2020-11-26 Vehicle control method and device based on brain-computer interaction Active CN112356841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011356563.6A CN112356841B (en) 2020-11-26 2020-11-26 Vehicle control method and device based on brain-computer interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011356563.6A CN112356841B (en) 2020-11-26 2020-11-26 Vehicle control method and device based on brain-computer interaction

Publications (2)

Publication Number Publication Date
CN112356841A CN112356841A (en) 2021-02-12
CN112356841B true CN112356841B (en) 2021-12-24

Family

ID=74535299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011356563.6A Active CN112356841B (en) 2020-11-26 2020-11-26 Vehicle control method and device based on brain-computer interaction

Country Status (1)

Country Link
CN (1) CN112356841B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112937551B (en) * 2021-03-04 2022-06-17 北京理工大学 Vehicle control method and system considering input characteristics of driver
CN113085851A (en) * 2021-03-09 2021-07-09 傅玥 Real-time driving obstacle avoidance system and method of dynamic self-adaptive SSVEP brain-computer interface
CN113232668B (en) * 2021-04-19 2022-08-30 上海科技大学 Driving assistance method, system, medium and terminal
CN113212410B (en) * 2021-04-28 2022-10-21 三峡大学 Brain wave intelligent driving system
CN113511215B (en) * 2021-05-31 2022-10-04 西安电子科技大学 Hybrid automatic driving decision method, device and computer storage medium
CN113110526B (en) * 2021-06-15 2021-09-24 北京三快在线科技有限公司 Model training method, unmanned equipment control method and device
CN113276867A (en) * 2021-06-15 2021-08-20 王晓铭 Brain wave emergency control system and method under automatic driving situation
CN113610373B (en) * 2021-07-28 2024-01-26 上海德衡数据科技有限公司 Information decision processing method and system based on intelligent manufacturing
CN116688479B (en) * 2023-06-07 2024-02-06 廊坊市珍圭谷科技有限公司 Athletics control device based on brain wave signals
CN116788271B (en) * 2023-06-30 2024-03-01 北京理工大学 Brain control driving method and system based on man-machine cooperation control

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923392A (en) * 2010-09-02 2010-12-22 上海交通大学 Asynchronous brain-computer interactive control method for EEG signal
KR101446845B1 (en) * 2012-12-14 2014-10-08 고려대학교 산학협력단 Method and apparatus for moving object control using brain signal and record media recorded program for realizing the same
CN104622466A (en) * 2013-11-14 2015-05-20 北华航天工业学院 Brain wave remote control car and control method
CN107015632A (en) * 2016-01-28 2017-08-04 南开大学 Control method for vehicle, system based on brain electricity driving
JP6579014B2 (en) * 2016-03-29 2019-09-25 株式会社デンソー Automatic brake control device and computer program
CN106569601A (en) * 2016-10-28 2017-04-19 华南理工大学 Virtual driving system control method based on P300 electroencephalogram

Also Published As

Publication number Publication date
CN112356841A (en) 2021-02-12

Similar Documents

Publication Publication Date Title
CN112356841B (en) Vehicle control method and device based on brain-computer interaction
US11919536B2 (en) Evaluation method and system for steering comfort in human machine cooperative take-over control process of autonomous vehicle, and storage medium
Koesdwiady et al. Recent trends in driver safety monitoring systems: State of the art and challenges
Dai et al. Automatic obstacle avoidance of quadrotor UAV via CNN-based learning
Gidado et al. A survey on deep learning for steering angle prediction in autonomous vehicles
Wu et al. Human-in-the-loop deep reinforcement learning with application to autonomous driving
CN108995654A (en) A kind of driver status recognition methods and system
Cummings Rethinking the maturity of artificial intelligence in safety-critical settings
CN112232490A (en) Deep simulation reinforcement learning driving strategy training method based on vision
Amini et al. Learning steering bounds for parallel autonomous systems
US11858523B2 (en) Vehicle travel control device
Xu et al. Left gaze bias between LHT and RHT: a recommendation strategy to mitigate human errors in left-and right-hand driving
Luo et al. A workload adaptive haptic shared control scheme for semi-autonomous driving
CN116331221A (en) Driving assistance method, driving assistance device, electronic equipment and storage medium
Cai et al. Carl-lead: Lidar-based end-to-end autonomous driving with contrastive deep reinforcement learning
Meng et al. Application and development of AI technology in automobile intelligent cockpit
Wang et al. Remote control system based on the Internet and machine vision for tracked vehicles
Ilievski Wisebench: A motion planning benchmarking framework for autonomous vehicles
Abdou et al. End-to-end deep conditional imitation learning for autonomous driving
Bao et al. Data-Driven Risk-Sensitive Control for Personalized Lane Change Maneuvers
Jiang et al. A study of human-robot copilot systems for en-route destination changing
Qiu et al. Learning a steering decision policy for end-to-end control of autonomous vehicle
US11794780B2 (en) Reward function for vehicles
Khan et al. Learning vision based autonomous lateral vehicle control without supervision
Rojas et al. Evaluation of autonomous vehicle control strategies using resilience engineering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant