CN114035570A - Anti-collision control method based on brain-computer interface and laser radar fusion perception - Google Patents
- Publication number: CN114035570A
- Application number: CN202111117840.2A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
Abstract
The invention discloses an anti-collision control method based on fusion perception by a brain-computer interface and a laser radar, comprising an automatic driving system control method based on brain-computer interface technology, an automatic driving system anti-collision method based on a laser radar, and a decision fusion method based on control and anti-collision. The method realizes electroencephalogram signal collection, amplification, preprocessing, feature extraction, feature classification and control; it guarantees the safe movement of the intelligent vehicle, finds obstacles timely and accurately, and makes correct avoidance or parking actions; and it comprehensively considers the intention of the brain-computer interface and the current running state of the intelligent vehicle to give control commands such as accelerator, brake or steering. By adding a confirmation link to the brain-computer interface module and adopting a two-step method that fuses brain-computer interface and radar data, the method achieves high fault tolerance, ensures the accuracy of navigation and improves the robustness of the navigation system.
Description
Technical Field
The invention relates to the technical field of automatic driving, in particular to an anti-collision control method based on brain-computer interface and laser radar fusion perception.
Background
With the accelerated aging process of society and the increasing number of people with lower limb injuries caused by various diseases, industrial injuries, traffic accidents and the like, providing a travel tool with excellent performance for the old and the disabled has become one of the important concerns of the whole society. Among them, as one of the transportation tools, smart vehicles have received much attention from researchers in various countries around the world. The intelligent vehicle not only has multiple functions of autonomous navigation, collision avoidance and the like, but also integrates multiple control modes of a human-computer interaction technology, such as voice, gestures, head movement, electroencephalogram signals and the like.
Compared with L2-level automatic driving, from L3-level automatic driving onward the vehicle itself handles the entire driving task once the function is engaged, including acceleration and deceleration, overtaking and even obstacle avoidance; it also means that, in the event of an accident, responsibility formally shifts from the human to the vehicle. Yet once the driver may take hands off the wheel and eyes off the road, expecting that same temporarily distracted driver to take over the vehicle instantly in an emergency is a contradiction. To fully resolve this safety risk, both the vehicle manufacturer and the supplier must share a complete, unified set of safety verification standards; this is the underlying logic that must be addressed in moving from level L2 to level L3.
Environmental awareness is one of the key technologies in intelligent vehicle research. Environmental information around the smart vehicle can be used for navigation, collision avoidance and the performance of specific tasks. The sensors that acquire this information need both a field of view large enough to cover the entire work area and an acquisition rate high enough to provide real-time information in a moving environment. In recent years, the use of laser radar in intelligent vehicle navigation has been increasing, mainly owing to the many advantages of laser-based distance measurement techniques, in particular their high accuracy. By scanning a laser beam or light plane in two or three dimensions, the lidar can provide a large amount of accurate range information at high frequency. Compared with other distance sensors, the laser radar satisfies the precision requirement and the speed requirement simultaneously, which makes it particularly suitable for the field of automatic driving systems.
On the other hand, the Brain-Computer Interface (BCI) is a completely new human-machine interaction system that establishes a direct information communication and control channel between the human brain and a computer or other electronic device without depending on the brain's conventional output channels (peripheral nerves and muscle tissue). A BCI system generally consists of four parts: a signal acquisition system, a signal processing system, a pattern recognition system, and a system for controlling external equipment. Electrophysiological signals reflecting brain activity are picked up from the scalp or from inside the brain by electrodes and fed into an amplifier; after amplification, filtering and A/D conversion they are passed to a computer for complex signal processing, where the signal features related to the user's intention are extracted; these features are then recognized and converted into control commands for external equipment.
The brain-computer interface has the advantage of controlling external equipment directly by brain signals, but it also suffers from a poor signal-to-noise ratio, low accuracy and long time delay; a driver relying purely on a brain-computer interface to control an intelligent vehicle faces multiple uncertain factors, which poses great danger to the driving of the intelligent vehicle.
Disclosure of Invention
In order to solve the limitations and defects of the prior art, the invention provides an anti-collision control method based on fusion perception of a brain-computer interface and a laser radar, which comprises an automatic driving system control method based on a brain-computer interface technology, an automatic driving system anti-collision method based on a laser radar and a decision fusion method based on control and anti-collision;
the automatic driving system control method based on the brain-computer interface technology comprises the following steps:
collecting and amplifying the electroencephalogram signals by using brain-computer interface equipment;
preprocessing, feature extraction and feature classification are carried out on the electroencephalogram signals through the established simulation model of the brain-computer interface system;
the control instructions obtained from the motor imagery classification are sent to the intelligent vehicle through wireless serial port communication, and real-time control is achieved by controlling the accelerator, brake and steering of the intelligent vehicle; the motor imagery tasks comprise: left hand, right leg, rest, and the control commands comprise: turning left, turning right, starting and stopping;
the automatic driving system collision avoidance method based on the laser radar comprises the following steps:
converting two-dimensional obstacle information of a field-of-view polar coordinate system into a one-dimensional angle domain by adopting an angle potential field method and taking the current field-of-view sight angle of the robot as a domain of discourse;
comprehensively evaluating the resistance effect of the obstacles in the field of view in the angle domain and the gravitational effect of the target point in the angle domain to obtain the target angle of the current state;
the decision fusion method based on control and collision avoidance comprises the following steps:
the intention of a brain-computer interface and the current running state of the intelligent vehicle are comprehensively considered through the established fusion decision module, and control commands such as an accelerator, a brake or a steering are given.
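As an illustration of such a fusion decision, the sketch below lets the lidar's pass-function result override the brain-computer intent; the command names, the angle normalisation and the throttle values are assumptions for illustration, not values stated in the invention:

```python
def fuse(bci_cmd, k_pg, theta_out):
    """Hypothetical fusion rule: lidar safety overrides the BCI intent.

    bci_cmd   -- classified intent: "left", "right", "start" or "stop"
    k_pg      -- maximum of the pass function over the field of view
    theta_out -- pass-optimal advance angle in degrees (240-degree FOV assumed)
    """
    if k_pg == 0 or bci_cmd == "stop":
        # no passable direction, or the driver asked to stop: brake
        return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
    steer = max(-1.0, min(1.0, theta_out / 120.0))  # normalise angle to [-1, 1]
    throttle = 0.5 if bci_cmd == "start" else 0.3   # illustrative values
    return {"throttle": throttle, "brake": 0.0, "steer": steer}
```

For example, `fuse("start", 0.0, 10)` brakes regardless of the intent to start, which reflects the safety-first role of the radar data in the fusion step.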
Optionally, the laser radar is a 4-line laser radar with a 240-degree wide viewing angle and a detection distance of 0.3 m to 200 m.
Optionally, the angular potential field method includes:
converting two-dimensional obstacle information of a current view field polar coordinate system into a one-dimensional angle domain;
comprehensively evaluating the resistance effect of the obstacles in the field of view in the angular domain and the gravitational effect of the target point in the angular domain;
calculating to obtain a current target angle and a pass function;
control outputs of the intelligent vehicle driving angle and speed are determined.
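The first of these steps, collapsing the two-dimensional polar obstacle map into a one-dimensional angle domain, might be sketched as follows; the 240-degree field of view and 200 m maximum range follow the lidar described here, while the 1-degree binning is an assumption:

```python
import numpy as np

def to_angle_domain(points_polar, fov=(-120, 120), bin_deg=1, d_max=200.0):
    """Collapse 2-D polar obstacle points (angle_deg, range_m) into a 1-D
    angle-domain profile: the nearest obstacle range per angular bin.
    Bins with no obstacle keep the sensor's maximum range d_max."""
    lo, hi = fov
    n = int((hi - lo) / bin_deg) + 1
    profile = np.full(n, d_max)
    for ang, rng in points_polar:
        if lo <= ang <= hi:
            i = int(round((ang - lo) / bin_deg))
            profile[i] = min(profile[i], rng)
    return profile
```

Taking the minimum range per bin is a conservative choice: the resistance evaluated later on each angle then reflects the closest obstacle in that direction.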
Optionally, the method further includes:
setting a lateral safe distance D_sf, the transverse distance to an obstacle at which the intelligent vehicle can safely pass it, and a radial safe distance D_sr, the distance the intelligent vehicle moves while decelerating from a driving speed v to a standstill; the expressions are as follows:

D_sf = k_sf · W/2, D_sr = k_sr · v²/(2a)

wherein W is the width of the vehicle body, a is the deceleration of the vehicle during normal braking, and k_sf and k_sr are amplification coefficients, both greater than 1;

describing with a plateau function the resistance produced on the angle domain by an obstacle point at a given angle φ: for the resistance R_φ(θ) that the obstacle point at angle φ produces at angle θ, d_φ is the distance of the obstacle point at angle φ, D_m is the maximum evaluation distance that is set, R_φ(θ) is the resistance function for angle φ, and D_st is the safe distance;

for a preset angle θ in the field of view, setting the total resistance to the maximum of the resistances produced at angle θ by the obstacle points of all angles, with the expression:

K_RF(θ) = max_φ R_φ(θ)

wherein K_RF(θ) is the resistance function;

setting the gravity produced by the target point at each angle with a cosine function, with the expression:

K_GF(θ) = cos(θ − θ_obj)

wherein θ_obj is the direction angle of the target point in the current field of view and K_GF(θ) is the gravity function;

for a preset angle θ in the field of view, setting the pass function to the product of the reciprocal of the resistance and the gravity value, and setting the pass function of the current field of view to the maximum over all angles, with the expression:

K_p(θ) = K_GF(θ) / K_RF(θ), K_PG = max_θ K_p(θ)

wherein K_p(θ) is the pass function, K_GF(θ) is the gravity function, and K_PG is the maximum of the pass function;

when K_PG = 0, the decision output is braking and deceleration of the intelligent vehicle;

when K_PG > 0, the angle maximizing K_p(θ) is selected as the angle output θ_out, wherein θ_left is the input optimal leftward advance angle, θ_right is the input optimal rightward advance angle, and θ_out is the overall optimal advance angle;

wherein U is a threshold value, with U = 3500.
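A minimal sketch of the resistance/gravity/pass-function evaluation follows. The exact plateau function is not reproduced in the text, so the decaying shape used here, and the D_st and D_m values, are illustrative assumptions; the gravity and pass functions match the cosine and reciprocal-product definitions above:

```python
import numpy as np

def resistance(dist, d_st=1.0, d_m=20.0):
    """Illustrative plateau-style resistance for one obstacle distance:
    very large inside the safe distance D_st, decaying to a baseline of 1
    at and beyond the maximum evaluation distance D_m."""
    if dist <= d_st:
        return 1e6
    if dist >= d_m:
        return 1.0
    return 1.0 + (d_m - dist) / (dist - d_st)

def pass_function(angles_deg, dists, theta_obj_deg):
    """K_p(theta) = K_GF(theta) / K_RF(theta); returns (K_PG, theta_out)."""
    k_rf = np.array([resistance(d) for d in dists])        # resistance K_RF
    k_gf = np.cos(np.radians(angles_deg - theta_obj_deg))  # gravity K_GF
    k_p = np.where(k_gf > 0, k_gf / k_rf, 0.0)             # pass function K_p
    i = int(np.argmax(k_p))
    return k_p[i], angles_deg[i]
```

In an obstacle-free field (all ranges beyond D_m) the pass function reduces to the cosine gravity term, so the output angle is simply the target direction.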
The invention has the following beneficial effects:
the invention discloses an anti-collision control method based on brain-computer interface and laser radar fusion perception, which comprises an automatic driving system control method based on brain-computer interface technology, an automatic driving system anti-collision method based on laser radar and a decision fusion method based on control and anti-collision. The automatic driving system control method based on the brain-computer interface technology mainly comprises electroencephalogram signal acquisition, amplification, preprocessing, feature extraction, feature classification and control realization, the automatic driving system anti-collision method based on the laser radar can guarantee safe movement of an intelligent vehicle, timely and accurately discover obstacles and make correct avoidance or parking actions, an angle potential field method is mainly adopted, and a decision fusion method based on control and anti-collision comprehensively considers the intention of the brain-computer interface and the current running state of the intelligent vehicle and gives control commands of an accelerator, a brake or a steering and the like. According to the anti-collision control method based on the brain-computer interface and laser radar fusion perception, the confirmation link is added into the brain-computer interface module, and meanwhile, the two-step method of fusing the brain-computer interface and radar data is adopted, so that the anti-collision control method has the characteristic of high fault tolerance, the accuracy of navigation is ensured, and the robustness of a navigation system is improved.
Drawings
Fig. 1 is a frame diagram of an overall automatic driving system collision avoidance navigation method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a control method of an automatic driving system based on a brain-computer interface technology according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a placement position of a brain-computer interface electrode according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a brain-computer interface system simulation model according to an embodiment of the present invention.
Fig. 5a is a schematic diagram of an intelligent vehicle body model according to a first embodiment of the present invention.
Fig. 5b is a schematic diagram of an intelligent vehicle body coordinate system according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a fusion decision module according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following describes in detail a collision avoidance control method based on brain-computer interface and lidar fusion sensing provided by the present invention with reference to the accompanying drawings.
Example one
The embodiment provides an anti-collision control method for an L3-level automatic driving system based on fusion perception by a brain-computer interface and a laser radar, which comprises an automatic driving system control method based on brain-computer interface technology, an automatic driving system anti-collision method based on a laser radar, and a decision fusion method based on control and anti-collision. As shown in fig. 1, the control method based on brain-computer interface technology mainly includes electroencephalogram signal acquisition, amplification, preprocessing, feature extraction, feature classification and control realization. The principle of the laser radar anti-collision method, which mainly adopts an angle potential field method, is to ensure the safe movement of the intelligent vehicle: obstacles must be found timely and accurately, and a correct avoidance or parking action must be made. The decision fusion method based on control and anti-collision comprehensively considers the intention of the brain-computer interface and the current running state of the intelligent vehicle, and gives control commands such as accelerator, brake or steering. The method can make up for many defects of existing intelligent vehicles based on pure brain-computer interface control; it has high fault tolerance, ensures the accuracy of navigation, improves the robustness of the navigation system, and has various advantages over the traditional single brain-computer interface.
The method provided by this embodiment has high fault tolerance, ensures navigation accuracy and improves the robustness of the navigation system. The brain-computer interface part mainly comprises electroencephalogram signal acquisition, amplification, preprocessing, feature extraction, feature classification and control realization. The anti-collision part mainly adopts an angle potential field method: taking the current field-of-view sight angle of the robot as the domain of discourse, it converts the two-dimensional obstacle information of the field-of-view polar coordinate system into a one-dimensional angle domain, then comprehensively evaluates the resistance effect of the obstacles in the field of view and the gravitational effect of the target point in the angle domain to obtain the target angle of the current state. Through the established fusion decision module, the method comprehensively considers the intention of the brain-computer interface and the current running state of the intelligent vehicle, and gives control commands such as accelerator, brake or steering.
The embodiment aims to provide an anti-collision control method of an L3-level automatic driving system based on fusion perception of a brain-computer interface and a laser radar, the method can make up for many defects of the existing pure automatic driving system based on brain-computer interface control, has the characteristic of high fault tolerance, ensures the accuracy of navigation, and improves the robustness of the navigation system.
The anti-collision control method provided by the embodiment comprises an automatic driving system control method based on a brain-computer interface technology, an automatic driving system anti-collision method based on a laser radar, and a decision fusion method based on control and anti-collision.
The automatic driving system control method based on brain-computer interface technology mainly comprises electroencephalogram signal acquisition, amplification, preprocessing, feature extraction, feature classification and control realization. The principle of the laser radar anti-collision method, which mainly adopts an angle potential field method, is to ensure the safe movement of the intelligent vehicle: obstacles must be found timely and accurately and a correct avoidance or parking action made. The decision fusion method based on control and anti-collision comprehensively considers the intention of the brain-computer interface and the current running state of the intelligent vehicle and gives control commands such as accelerator, brake or steering.
The brain-computer interface adopts a brain-computer interface device with 14 wet electrode sensors, and brain-computer data are taken from the F3, F4, FC5 and FC6 channels of the international standard 10-20 lead system. Electroencephalogram signal acquisition and amplification are completed in the brain-computer interface device, while preprocessing, feature extraction and feature classification are completed in a simulation model of the brain-computer interface system built on Matlab/Simulink. Control is realized by sending the instruction produced by the motor imagery classification to the intelligent vehicle through wireless serial port communication, and real-time control is achieved by controlling the accelerator, brake and steering of the intelligent vehicle. The motor imagery tasks include: left hand, right leg, rest. The control instructions include: left turn, right turn, start, stop.
The laser radar is a 4-line laser radar with a 240-degree wide viewing angle and a detection distance of 0.3 m to 200 m, and can be integrated into any vehicle body and observe at any angle. The angle potential field method converts the two-dimensional obstacle information of the current field-of-view polar coordinate system into a one-dimensional angle domain, comprehensively evaluates the resistance effect of the obstacles in the field of view and the gravitational effect of the target point in the angle domain, calculates the current target angle and pass function, and determines the control output of the driving angle and speed of the intelligent vehicle, balancing the safety of the intelligent vehicle against its advance toward the target point.
The embodiment provides an anti-collision control method of an L3-level automatic driving system based on fusion perception of a brain-computer interface and a laser radar. The method can solve the problems of poor signal-to-noise ratio, low accuracy, long delay time and the like existing in the existing intelligent vehicle based on brain-computer interface control, has the characteristic of high fault tolerance, ensures the accuracy of navigation and improves the robustness of a navigation system.
As shown in fig. 2, the method for controlling an automatic driving system based on a brain-computer interface technology provided in this embodiment mainly includes electroencephalogram signal acquisition, amplification, preprocessing, feature extraction, feature classification, control implementation, and the like.
In this embodiment, a brain-computer interface device with 14 wet electrode sensors developed by the Emotiv System company is used to collect electroencephalogram signals from the driver's scalp. The 14 electrodes are placed according to the international standard 10-20 system, as shown in fig. 3, and data are obtained from the F3, F4, FC5 and FC6 channels of the international standard 10-20 lead system. In addition, two reference electrodes are placed at the mastoids of the left and right ears and used to compute the algebraic average reference voltage of the two mastoids to help with noise reduction. The F3, F4, FC5 and FC6 positions of the human brain carry the richest information when imagining hand and foot movement, which is why the data of these 4 channels are used. The experiment consists of multiple trials, including imagining left-hand movement, imagining right-foot movement, and a resting state; samples and experimental data are obtained from each trial.
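The two-mastoid averaging described above can be sketched as a simple re-referencing step; the channels × samples layout is an assumption about how the data are arranged:

```python
import numpy as np

def rereference(eeg, left_mastoid, right_mastoid):
    """Subtract the algebraic mean of the two mastoid reference electrodes
    from every EEG channel (eeg has shape channels x samples)."""
    ref = (left_mastoid + right_mastoid) / 2.0
    return eeg - ref  # broadcasts the reference over all channels
```

Subtracting a common reference removes noise shared by all electrodes while leaving the channel-to-channel differences that carry the motor imagery information.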
In this embodiment, electroencephalogram signal processing in the brain-computer interface is a conversion from signal to control command, finally serving as the control signal for the peripheral device; the process is implemented by feature extraction and pattern classification. The electroencephalogram signal is inherently weak and needs to be amplified, filtered and denoised. Features are then extracted from the electroencephalogram signal, which can be done with a variety of algorithms. After feature extraction is complete, the signal must be discriminated and identified by different recognition algorithms and finally converted into different control commands. A simulation model of the brain-computer interface system is built on Matlab/Simulink; the model consists of 4 parts: data acquisition, preprocessing, feature extraction and feature classification, as shown in fig. 4, and the processing of the electroencephalogram signals is completed in this simulation model.
In this embodiment, electroencephalogram data are first collected and amplified by the brain-computer interface and passed to the data acquisition module of the brain-computer interface system simulation model built on Matlab/Simulink. In the BCI system simulation model, the acquired electroencephalogram data enter the data preprocessing module for filtering and denoising: multi-resolution analysis removes the various noises mixed into the electroencephalogram signals, and a Butterworth filter module in Simulink performs the filtering.
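The Butterworth filtering step might look like the SciPy-based sketch below; the 128 Hz sampling rate and the 8–30 Hz band (the mu/beta range typically used for motor imagery) are assumptions, not values stated in the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate of the headset, Hz

def bandpass(x, low=8.0, high=30.0, order=4):
    """Zero-phase Butterworth band-pass over a 1-D signal, keeping the
    assumed motor imagery band and suppressing DC drift and high-frequency noise."""
    b, a = butter(order, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, x)  # forward-backward filtering: no phase shift
```

`filtfilt` is chosen over `lfilter` so the filtering adds no phase delay, which matters when the filtered windows feed a real-time control loop.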
The Fourier transform is a basic frequency-domain analysis method, but when processing non-stationary signals one often needs the local frequency of the signal and the time interval in which that frequency occurs, which the plain Fourier transform cannot provide. The basic principle of the short-time Fourier transform is to multiply the signal by a finite window function before the Fourier transform is applied; as the window moves along the time axis, the signal can be analyzed and processed section by section, yielding the spectrum of the signal at different moments and thus its time-varying characteristics.
Considering the real-time speed requirement of the system, the short-time Fourier transform is adopted for feature extraction: a short-time Fourier transform is performed on every 2 s of data, and the frequency at which the amplitude maximum appears is determined. When 3 consecutive determinations give the same result, that frequency is taken to represent the control command the driver wants to issue; finally a four-dimensional feature vector is extracted for feature classification of the EEG signal.
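The 2 s window plus 3-consecutive-agreement rule above can be sketched as follows. The windowed FFT over a 2 s segment is the core of one short-time Fourier transform frame; the sampling rate and sine-wave test data are assumptions.

```python
import numpy as np

# Sketch of the feature-extraction rule: every 2 s of data, take a
# windowed FFT, find the frequency with the largest amplitude, and accept
# a command frequency only after 3 identical consecutive results.
fs = 128                                   # assumed sampling rate, Hz

def dominant_frequency(segment, fs):
    """Frequency (Hz) at which the windowed FFT amplitude is maximal."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    spectrum[0] = 0.0                      # ignore the DC bin
    return freqs[np.argmax(spectrum)]

def confirmed_frequency(segments, fs, n_agree=3):
    """Return a frequency once it is seen n_agree times in a row, else None."""
    run, last = 0, None
    for seg in segments:
        f = dominant_frequency(seg, fs)
        run = run + 1 if f == last else 1
        last = f
        if run >= n_agree:
            return f
    return None

# Three consecutive 2-s windows of a 10 Hz sine should confirm 10 Hz.
t = np.arange(2 * fs) / fs
segments = [np.sin(2 * np.pi * 10 * t) for _ in range(3)]
```

The confirmation step trades a few seconds of latency for robustness: a single noisy window cannot trigger a spurious control command.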
In the feature classification of the EEG signals, because the features are four-dimensional vectors, a relatively simple classification method can be adopted; such simple methods offer good stability and strong anti-interference capability. Classification of the EEG signals is carried out with Matlab's built-in Classification function, which supports several classifier types; this embodiment selects Mahalanobis-distance linear discriminant classification, whose training data and test data both achieve high classification accuracy.
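The Mahalanobis-distance classification idea can be sketched as follows. The patent relies on Matlab's built-in function; this re-implementation of the same idea, and the two-class sample data, are illustrative assumptions.

```python
import numpy as np

# Sketch (illustrative): assign a 4-D feature vector to the class whose
# mean is nearest in Mahalanobis distance, using per-class covariances.
def mahalanobis_classify(x, class_means, class_covs):
    """Index of the class minimising the squared Mahalanobis distance."""
    dists = []
    for mu, cov in zip(class_means, class_covs):
        d = x - mu
        dists.append(float(d @ np.linalg.inv(cov) @ d))
    return int(np.argmin(dists))

# Two well-separated synthetic classes in a 4-D feature space
rng = np.random.default_rng(2)
a = rng.normal(loc=0.0, scale=0.5, size=(50, 4))
b = rng.normal(loc=5.0, scale=0.5, size=(50, 4))
means = [a.mean(axis=0), b.mean(axis=0)]
covs = [np.cov(a.T), np.cov(b.T)]

label = mahalanobis_classify(np.array([5.1, 4.9, 5.0, 5.2]), means, covs)
```

Unlike plain Euclidean distance, the Mahalanobis distance accounts for the spread and correlation of each class, which helps when the four features have different scales.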
After the EEG signals of this embodiment pass through the EEG signal processing system, classification results are obtained. The API of the brain-computer interface generates 4 motion events from the classification result: "COG_LEFT", "COG_RIGHT", "COG_LIFT", and "COG_NEUTRAL". A program written in the VS environment then maps the 4 motion events to 4 motion control instructions for the intelligent vehicle, namely left turn, right turn, start, and stop, encoded respectively as "a,1,000,000,680", "a,1,000,000,850", "a,1,150,000,725", and "a,1,000,000,725". The control instructions are sent to the intelligent vehicle through wireless serial-port communication, and real-time control is achieved by controlling the vehicle's throttle, brake, and steering.
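The event-to-instruction mapping can be sketched directly from the strings quoted above. The `send_serial` function here is a hypothetical stand-in for the wireless serial-port transmission, which the patent does not detail.

```python
# Sketch: mapping brain-computer-interface motion events to the serial
# command strings quoted above. send_serial is a hypothetical placeholder.
COMMANDS = {
    "COG_LEFT":    "a,1,000,000,680",   # left turn
    "COG_RIGHT":   "a,1,000,000,850",   # right turn
    "COG_LIFT":    "a,1,150,000,725",   # start
    "COG_NEUTRAL": "a,1,000,000,725",   # stop
}

def send_serial(command):
    # Placeholder: a real implementation would write to the serial port.
    return command

def handle_event(event):
    """Translate a classified motion event into a vehicle control command."""
    command = COMMANDS.get(event)
    if command is None:
        raise ValueError(f"unknown motion event: {event}")
    return send_serial(command)
```

Rejecting unknown events, rather than silently ignoring them, keeps a misclassified or corrupted event from reaching the vehicle.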
The laser-radar-based collision avoidance method for the automatic driving system mainly adopts the angular potential field method: taking the current field-of-view sight angle of the robot as the domain of discourse, it converts the two-dimensional obstacle information of the field-of-view polar coordinate system into a one-dimensional angle domain. The resistance effect of the obstacles in the field of view and the gravitational effect of the target point are then jointly evaluated in the angle domain to obtain the target angle for the current state.
As shown in figs. 5a and 5b, the intelligent vehicle is an intelligently modified Changan Yue Xiang car, with a four-wheel chassis as the mechanical platform, rear-wheel drive, and front-wheel steering. When the speed is not too high and the turning radius is large, the vehicle can be approximated by a two-wheel bicycle model. A vehicle-body polar coordinate system is established with the emitting point of the front-mounted laser radar as the origin.
In the vehicle-body coordinate system, the radial direction is a complete degree of freedom and the lateral direction is an incomplete one; the effect of an obstacle on the intelligent vehicle therefore differs between the radial and lateral directions. This embodiment sets the lateral safe distance D_sf as the lateral distance to an obstacle at which the intelligent vehicle can safely pass it, and the radial safe distance D_sr as the distance the intelligent vehicle covers while decelerating from speed v to rest. The expressions (reconstructed here from the variable definitions, as the original formulas appear only as images) are as follows:

D_sf = k_sf · W / 2,  D_sr = k_sr · v² / (2a)
where W is the width of the vehicle body; a is the vehicle's acceleration during normal deceleration; and k_sf and k_sr are amplification coefficients, both greater than 1.
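The safe-distance computation can be sketched as follows. The patent's exact formulas are given as images and did not survive extraction, so this uses the plausible reconstruction above (lateral clearance proportional to half the body width, radial clearance equal to the braking distance from speed v at deceleration a); treat it as an assumption, and the numeric parameters as illustrative.

```python
# Sketch (assumed formulas, reconstructed from the variable definitions):
# D_sf = k_sf * W / 2 and D_sr = k_sr * v^2 / (2 a).
def lateral_safe_distance(W, k_sf):
    """D_sf: lateral distance needed to pass an obstacle safely."""
    return k_sf * W / 2.0

def radial_safe_distance(v, a, k_sr):
    """D_sr: distance covered while decelerating from speed v to rest."""
    return k_sr * v * v / (2.0 * a)

d_sf = lateral_safe_distance(W=1.7, k_sf=1.2)        # 1.7 m body width
d_sr = radial_safe_distance(v=5.0, a=2.5, k_sr=1.5)  # 5 m/s, 2.5 m/s^2
```

Both amplification coefficients exceed 1, so the vehicle keeps a margin beyond the bare geometric and kinematic minimums.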
In this embodiment, the resistance field is generated by the obstacles at each angle within the field of view, and the resistance increases as the distance to an obstacle decreases. Meanwhile, within the dangerous angle range around an obstacle's angle, resistance is also generated by the obstacle's presence. A plateau function is used to describe, over the angle domain, the resistance generated by an obstacle point at a given angle φ; the resistance that the obstacle point at angle φ generates at angle θ is defined as follows.
where d(φ) is the distance of the obstacle point at angle φ, and D_m is a set maximum evaluation distance; obstacle points farther than D_m all produce the minimum resistance value.
For a given angle θ in the field of view, the total resistance is set to the maximum of the resistances generated at angle θ by the obstacle points of all angles. The resistance field function can be expressed as:

K_RF(θ) = max_φ K_RF(θ, φ)
The resistance field above describes the effect of obstacles in the field of view on the intelligent vehicle. To guide the intelligent vehicle toward the planned target point, the gravitational field generated by the target point must also be considered. This embodiment uses a cosine function to define the attraction generated by the target point at each angle.
K_GF(θ) = cos(θ - θ_obj)
where θ_obj is the direction angle of the target point in the current field of view.
For an angle θ in the field of view, the pass function is defined as the product of the reciprocal of the resistance and the attraction value, K_p(θ) = K_GF(θ) / K_RF(θ); it describes the possibility that the robot passes at this angle and proceeds toward the target point. The maximum of the pass function over all angles, K_PG = max_θ K_p(θ), is defined as the pass function of the current field of view; it describes the possibility that the robot passes the obstacles and heads toward the target point under the current field of view.
The decision output rule provided by the embodiment is as follows:
(1) When K_PG = 0, the intelligent vehicle brakes and decelerates.
(2) When K_PG > 0, the angle at which K_p(θ) is maximal is selected as the angle output θ_out, where:
θ_left and θ_right are the input optimal leftward and rightward advance angles, and θ_out is the optimal overall advance angle; when the vehicle stops, left-stop or right-stop information is given accordingly. U is a threshold, taken as 3500.
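The angular potential field decision above can be sketched end to end. The plateau-function resistance values used here are illustrative placeholders (the patent's exact form appears only as an image); the field-of-view geometry, target direction, and obstacle placement are assumptions.

```python
import numpy as np

# Sketch of the angular-potential-field decision: combine a resistance
# field K_RF and a cosine attraction K_GF into a pass function
# K_p = K_GF / K_RF, then pick the angle maximising K_p.
angles = np.deg2rad(np.arange(-120, 121))   # 240-degree field of view, 1-deg steps
theta_obj = np.deg2rad(30)                  # assumed target direction

K_GF = np.cos(angles - theta_obj)           # attraction of the target point
K_RF = np.ones_like(angles)                 # illustrative unit resistance
# Illustrative obstacle blocking 20..60 degrees with high resistance
K_RF[(angles > np.deg2rad(20)) & (angles < np.deg2rad(60))] = 50.0

K_p = K_GF / K_RF                           # pass function per angle
K_PG = K_p.max()                            # pass function of the view

if K_PG == 0:
    decision = "brake"                      # rule (1): no passable angle
else:
    decision = "steer"                      # rule (2): steer to best angle
    theta_out = angles[np.argmax(K_p)]
```

With the target at 30 degrees blocked by the obstacle, the maximum of K_p shifts to the nearest unblocked angle, so the vehicle skirts the obstacle while still favouring the target direction.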
In the control-and-collision-avoidance decision fusion method provided by this embodiment, the established fusion decision module jointly considers the intention from the brain-computer interface and the current running state of the intelligent vehicle, and issues throttle, brake, or steering control commands, as shown in fig. 6.
Normally, the intelligent vehicle travels autonomously according to the collision avoidance method, and the brain-computer interface remains in a "no command" state. When the driver needs to intervene in the travel of the vehicle, a "left turn", "right turn", or "stop/start" command is issued. For safety, whenever the collision avoidance method yields "stop", a braking command is issued regardless of the brain-computer intention. When the collision avoidance result is not "stop", the decision follows the brain-computer result and the current running state of the intelligent vehicle, as shown in table 1.
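The priority ordering just described can be sketched as a small rule function. Table 1 itself appears only as an image, so the non-"stop" branch below is a plausible reading of the text (driver intervention overrides autonomous output), not the patent's exact table.

```python
# Sketch of the fusion-decision rule: a "stop" result from collision
# avoidance always brakes; otherwise an explicit brain-computer command
# overrides the autonomous output. The non-"stop" branch is an assumption
# read from the surrounding text, since Table 1 did not survive extraction.
def fuse(bci_command, avoidance_result):
    """Combine brain-computer intention with the collision-avoidance output."""
    if avoidance_result == "stop":
        return "brake"                 # safety always wins
    if bci_command != "no command":
        return bci_command             # driver intervention
    return avoidance_result            # autonomous travel

result = fuse("left turn", "stop")     # safety override -> "brake"
```

The asymmetry is deliberate: the driver can redirect the vehicle, but cannot override a braking decision triggered by the radar-based collision avoidance.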
TABLE 1 fusion decision Table
This embodiment discloses an anti-collision control method based on fused brain-computer interface and laser radar perception, comprising an automatic driving system control method based on brain-computer interface technology, a laser-radar-based automatic driving system collision avoidance method, and a decision fusion method based on control and collision avoidance. The control method based on brain-computer interface technology mainly covers EEG signal acquisition, amplification, preprocessing, feature extraction, feature classification, and control realization. The laser-radar-based collision avoidance method, built mainly on the angular potential field method, guarantees the safe movement of the intelligent vehicle by discovering obstacles promptly and accurately and making correct avoidance or parking actions. The decision fusion method jointly considers the intention from the brain-computer interface and the current running state of the intelligent vehicle and issues throttle, brake, or steering control commands. By adding a confirmation link in the brain-computer interface module and adopting a two-step method that fuses brain-computer interface and radar data, the anti-collision control method achieves high fault tolerance, ensures navigation accuracy, and improves the robustness of the navigation system.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.
Claims (4)
1. An anti-collision control method based on brain-computer interface and laser radar fusion perception is characterized by comprising an automatic driving system control method based on brain-computer interface technology, an automatic driving system anti-collision method based on laser radar and a decision fusion method based on control and anti-collision;
the automatic driving system control method based on the brain-computer interface technology comprises the following steps:
collecting and amplifying the electroencephalogram signals by using brain-computer interface equipment;
preprocessing, feature extraction and feature classification are carried out on the electroencephalogram signals through the established simulation model of the brain-computer interface system;
the control instructions obtained from motor imagery classification are sent to the intelligent vehicle through wireless serial-port communication, and real-time control is achieved by controlling the throttle, brake, and steering of the intelligent vehicle; the motor imagery tasks comprise: left hand, right leg, and rest; the control commands comprise: left turn, right turn, start, and stop;
the automatic driving system collision avoidance method based on the laser radar comprises the following steps:
converting two-dimensional obstacle information of a field-of-view polar coordinate system into a one-dimensional angle domain by adopting an angle potential field method and taking the current field-of-view sight angle of the robot as a domain of discourse;
comprehensively evaluating the resistance effect of the obstacles in the field of view in the angle domain and the gravitational effect of the target point in the angle domain to obtain the target angle of the current state;
the decision fusion method based on control and collision avoidance comprises the following steps:
jointly considering, through the established fusion decision module, the intention of the brain-computer interface and the current running state of the intelligent vehicle, and issuing throttle, brake, or steering control commands.
2. The anti-collision control method based on brain-computer interface and lidar fusion perception according to claim 1, wherein the lidar is a 4-line lidar having a wide viewing angle of 240 degrees and a detection distance of 0.3m to 200 m.
3. The anti-collision control method based on brain-computer interface and lidar fusion perception according to claim 1, wherein the angular potential field method comprises:
converting two-dimensional obstacle information of a current view field polar coordinate system into a one-dimensional angle domain;
comprehensively evaluating the resistance effect of the obstacles in the field of view in the angular domain and the gravitational effect of the target point in the angular domain;
calculating to obtain a current target angle and a pass function;
control outputs of the intelligent vehicle driving angle and speed are determined.
4. The anti-collision control method based on brain-computer interface and lidar fusion perception according to claim 3, further comprising:
setting a lateral safe distance D_sf as the lateral distance to an obstacle at which the intelligent vehicle can safely pass it, and a radial safe distance D_sr as the distance the intelligent vehicle covers while decelerating from speed v to rest, the expressions of D_sf and D_sr being as follows:
wherein W is the width of the vehicle body, a is the acceleration of the vehicle during normal deceleration, and k_sf and k_sr are amplification coefficients, both greater than 1;
describing, with a plateau function over the angle domain, the resistance generated by an obstacle point at a preset angle φ, the expression of the resistance generated at angle θ by the obstacle point at angle φ being as follows:
wherein d(φ) is the distance of the obstacle point at angle φ, D_m is the set maximum evaluation distance, K_RF(θ, φ) is the resistance function generated at angle θ by the obstacle point at angle φ, Δ(φ) is a calculated parameter of angle φ, and D_st is a safe distance;
for a preset angle θ in the field of view, setting the total resistance to the maximum of the resistances generated at angle θ by the obstacle points of all angles, the expression being as follows:
wherein K_RF(θ) is the resistance field function;
and setting, with a cosine function, the gravity generated by the target point at each angle, the expression being as follows:
K_GF(θ) = cos(θ - θ_obj)
wherein θ_obj is the direction angle of the target point in the current field of view, and K_GF(θ) is the gravity function;
for a preset angle θ in the field of view, setting a pass function as the product of the reciprocal of the resistance and the gravity value, and setting the maximum of the pass functions over all angles as the pass function of the current field of view, the expressions being as follows:

K_p(θ) = K_GF(θ) / K_RF(θ),  K_PG = max_θ K_p(θ)

wherein K_p(θ) is the pass function, K_GF(θ) is the gravity function, and K_PG is the maximum of the pass function;
when K_PG = 0, the decision output is braking and deceleration of the intelligent vehicle;
when K_PG > 0, the angle at which K_p(θ) is maximal is selected as the angle output θ_out, the expression being as follows:
wherein θ_left is the input optimal leftward advance angle, θ_right is the input optimal rightward advance angle, and θ_out is the optimal overall advance angle;
wherein U is a threshold, and U = 3500.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111117840.2A CN114035570A (en) | 2021-09-24 | 2021-09-24 | Anti-collision control method based on brain-computer interface and laser radar fusion perception |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114035570A true CN114035570A (en) | 2022-02-11 |
Family
ID=80140512
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |