CN111880656A - Intelligent brain control system and rehabilitation equipment based on P300 signal - Google Patents


Info

Publication number
CN111880656A
CN111880656A (application number CN202010737661.8A)
Authority
CN
China
Prior art keywords: module, signal, electroencephalogram, distance, depth
Legal status
Granted
Application number
CN202010737661.8A
Other languages
Chinese (zh)
Other versions
CN111880656B (en)
Inventor
于扬
刘凯玄
曾令李
刘亚东
周宗潭
胡德文
唐景昇
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202010737661.8A
Publication of CN111880656A
Application granted
Publication of CN111880656B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/04 Wheeled walking aids for disabled persons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Abstract

The invention discloses an intelligent brain control system and rehabilitation equipment based on the P300 signal. The system comprises a control module used for comprehensively processing the information sent by the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module, feeding the processing results back to the operation execution module in real time, and feeding the related operations made by the operation execution module back to the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module. The invention has the advantages of a wide application range, strong practicability and high robustness.

Description

Intelligent brain control system and rehabilitation equipment based on P300 signal
Technical Field
The invention relates to the technical field of medical instruments, in particular to an intelligent brain control system based on a P300 signal and a rehabilitation device comprising the intelligent brain control system.
Background
China is facing an increasingly serious aging problem and has a large disabled population; research on and application of brain-computer interface technology is expected to provide powerful motion-assistance means for the elderly, the disabled and other groups with dyskinesia. Currently, mainstream brain-computer interface technology mainly targets three electroencephalogram signals: the motion perception rhythm signal, the SSVEP signal (steady-state visual evoked potential) and the P300 signal. According to magnetic resonance studies of patients' brain function, brain-computer interfaces, particularly motion-perception-type ones, can improve the function of the motion-related brain areas of disabled patients to a certain extent, thereby achieving a certain rehabilitation-training effect. Because the frequency-response bandwidth of the brain's visual center to steady-state visual flicker stimuli is limited and the frequency resolution cannot be too low, SSVEP signals generated by two flicker frequencies differing by less than 0.5 Hz cannot be distinguished; therefore, the number of commands an SSVEP system can output is relatively small. The motion perception rhythm is relatively less accurate and requires a longer training period for the subject. Meanwhile, most brain-computer interface systems based on the P300 signal have a low actual information transfer rate and low accuracy, cannot realize asynchronous control, and do not combine the brain control instructions they send with external devices, so they cannot really control external devices with their brain control instructions; as a result, they are difficult to adapt to complex environments and lack good practicability and robustness.
Disclosure of Invention
In view of the above, the invention provides an intelligent brain control system and rehabilitation equipment based on the P300 signal. The P300 signal, which has high accuracy and a high actual information transfer rate and needs no prior training, is used as the input signal of the whole intelligent brain control system, and the whole system is operated according to the oddball paradigm. By combining Kinect depth point-cloud information extraction, YOLOv3 machine-vision target recognition, the Kinova mechanical arm and sonar technology, the functions of asynchronously and dynamically identifying and selecting a target in a real scene, displaying results online and automatically executing related operations are realized, so that the above technical problems are well solved.
On one hand, the invention provides an intelligent brain control system based on a P300 signal, which is applied to a rehabilitation vehicle and comprises:
the electroencephalogram signal acquisition and processing module is used for acquiring and identifying electroencephalogram signals of a user on the rehabilitation vehicle;
the depth and distance judging module is used for judging the position of the barrier in the current environment and the distance between the barrier and the rehabilitation vehicle;
the target identification module is used for framing out the obstacle target in each picture of the current environment and attaching the classification and classification probability information of the obstacle target;
the operation execution module is used for receiving the user instruction sent by the control module and making related operation;
the control module is used for comprehensively processing information sent by the electroencephalogram signal acquisition processing module, the depth and distance judging module and the target identification module, feeding back a processing result to the operation execution module in real time, and feeding back related operations made by the operation execution module to the electroencephalogram signal acquisition processing module, the depth and distance judging module and the target identification module;
the electroencephalogram signal acquisition and processing module, the depth and distance distinguishing module, the target identification module and the operation execution module are respectively connected with the control module.
Furthermore, the electroencephalogram signal acquisition and processing module comprises an electroencephalogram amplifier, an amplifier battery pack, a 64-channel electrode cap, a 64-channel electroencephalogram wet electrode sensor and a display; the electroencephalogram amplifier, the amplifier battery pack and the display are all mounted on the rehabilitation vehicle, the 64-channel electrode cap is worn on the user's head, and the display is used for inducing the P300 signal;
and/or the depth and distance judging module comprises a first sub-module and a second sub-module which are arranged in parallel and connected with the control module respectively, the first sub-module and the second sub-module are used for judging the position of a barrier in the current environment and the distance between the barrier and a rehabilitation vehicle, the first sub-module comprises a Kinect depth camera, the Kinect depth camera is placed on the central axis of the rehabilitation vehicle, and the second sub-module comprises a laser radar and a sonar;
and/or the operation execution module comprises a Mecanum wheel chassis and a mechanical arm, wherein the Mecanum wheel chassis and the mechanical arm are both connected with the control module and are used for receiving user instructions sent by the control module to perform relevant operations.
Further, the brain control system also comprises an asynchronous switch connected with the control module. The asynchronous switch is the CCA classifier used by the SSVEP signal and comprises two SSVEP stimulation blocks: the first stimulation block indicates that the rehabilitation vehicle is in use, with a flicker frequency of 11.6 Hz; the second stimulation block indicates that the rehabilitation vehicle is not in use, with a flicker frequency of 14.8 Hz;
the brain control system further comprises a SWLDA classifier used by the P300 signal, and the SWLDA classifier is connected with the SSVEP classifier.
Furthermore, the electroencephalogram signal is sampled in compliance with the 64-channel international 10-20 specification, wherein the electrode positions selected for the P300 signal are FC1, FC2, CP1, CP2 and CZ; the electrode positions selected for the SSVEP signal are P7, P3, Pz, P4, P8, PO3, POz, PO4, O1, Oz and O2; the reference electrodes are at positions FT7 and FT8; and FPz is grounded.
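As a minimal illustration (not code from the patent), the electrode selections above can be captured in a small lookup table; the grouping keys and helper name are hypothetical:

```python
# Electrode subsets from the 64-channel international 10-20 montage
# described above. Keys ("P300", "SSVEP", "REF", "GND") are illustrative.
MONTAGE = {
    "P300":  ["FC1", "FC2", "CP1", "CP2", "CZ"],
    "SSVEP": ["P7", "P3", "Pz", "P4", "P8",
              "PO3", "POz", "PO4", "O1", "Oz", "O2"],
    "REF":   ["FT7", "FT8"],   # reference electrodes
    "GND":   ["FPz"],          # ground electrode
}

def channels_for(signal_type):
    """Return the electrode labels used for a given signal type."""
    return MONTAGE[signal_type]
```

A downstream stage would use, e.g., `channels_for("SSVEP")` to select the eleven occipital/parieto-occipital channels before classification.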
On the other hand, the invention also provides an intelligent brain control method based on the P300 signal, which uses the intelligent brain control system based on the P300 signal, and the brain control method comprises the following steps:
S1, acquiring and identifying, through the electroencephalogram signal acquisition and processing module, the electroencephalogram signals sent out by the user on the rehabilitation vehicle;
S2, judging the position information of the obstacles in the current environment and the distance information between the obstacles and the rehabilitation vehicle through the depth and distance judging module;
S3, framing out the targets in the captured pictures of the obstacles through the target identification module, and attaching their classification and classification probability information;
S4, comprehensively processing, through the control module, the received electroencephalogram signal, the position information of the obstacles, the distance information between the obstacles and the rehabilitation vehicle, and the targets in each obstacle picture with the attached classification and classification probability information, and sending the processing result carrying the user instruction to the operation execution module;
S5, after the operation execution module receives the processing information and completes the user instruction, feeding the execution situation back, through the control module, to the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module respectively.
Further, in step S1, taking the P300 signal and the SSVEP signal as the input signals of the brain control method specifically includes:
s101, collecting electroencephalograms of a user wearing a 64-channel electrode cap through a 64-channel electroencephalogram wet electrode sensor in an electroencephalogram signal collecting and processing module, wherein the electroencephalograms carry P300 signals and SSVEP signals;
s102, enabling the electroencephalogram signals of the user to enter a CCA classifier based on the SSVEP signals through an electroencephalogram amplifier;
s103, judging whether the user uses the intelligent brain control system or not by the CCA classifier according to the SSVEP signal, if so, entering a step S104, otherwise, entering a step S105;
s104, starting to call a SWLDA classifier based on the P300 signal, and identifying a user instruction according to a specific mode of the P300 signal of the user;
and S105, the whole intelligent brain control system enters a dormant state until a user uses the system.
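The switching logic of steps S103 to S105 can be sketched as follows; the `dispatch` helper and the classifier stand-in are hypothetical, assuming only the two flicker frequencies stated elsewhere in this description:

```python
# Flicker frequencies of the two SSVEP stimulation blocks (from the text).
ON_FREQ, OFF_FREQ = 11.6, 14.8

def dispatch(detected_freq, swlda_classify, eeg_epoch):
    """Route one decision according to the asynchronous switch (S103-S105).

    detected_freq: frequency the CCA classifier attributed to the gaze.
    swlda_classify: stand-in for the P300/SWLDA classifier of step S104.
    """
    if detected_freq == ON_FREQ:           # S103: the user is operating
        return swlda_classify(eeg_epoch)   # S104: decode the P300 command
    return "sleep"                         # S105: system stays dormant
```

With a stand-in classifier, `dispatch(11.6, lambda e: "forward", None)` yields the decoded command, while a 14.8 Hz detection puts the system to sleep.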
Further, the following steps are included between steps S103 and S104:
1) intercepting the signal within 100-800 ms after each stimulation, and concatenating the signal slices of the different electroencephalogram channels end to end according to the code of each stimulation to form a one-dimensional vector;
2) filtering the one-dimensional vector spliced in step 1) with a mean filter whose convolution kernel size is 10;
3) down-sampling the signal mean-filtered in step 2) at a down-sampling rate of one tenth.
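A NumPy sketch of the three preprocessing steps above, assuming a hypothetical 1 kHz sampling rate (the patent does not state one):

```python
import numpy as np

def preprocess(epoch, fs=1000):
    """Steps 1)-3) for one stimulation epoch.

    epoch: array of shape (n_channels, n_samples) holding the raw EEG
    recorded after one stimulus; fs is an assumed sampling rate.
    """
    # 1) keep the 100-800 ms window and splice the per-channel slices
    #    end to end into a one-dimensional vector
    lo, hi = int(0.100 * fs), int(0.800 * fs)
    vec = epoch[:, lo:hi].reshape(-1)
    # 2) mean (moving-average) filter with a convolution kernel of 10
    vec = np.convolve(vec, np.ones(10) / 10.0, mode="same")
    # 3) down-sample at a rate of one tenth
    return vec[::10]
```

At 1 kHz, a 5-channel epoch yields 700 samples per channel, a 3500-element vector, and 350 features after down-sampling.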
Further, the step S2 is embodied as:
S201, acquiring a point cloud depth map of the current environment through the Kinect depth camera in the depth and distance judging module, or through the laser radar and the sonar, wherein the point cloud depth map comprises an RGB map and an original distance map, and each pixel point in the RGB map is completely registered with each point in the original distance map;
S202, converting the RGB map into a gray-scale map;
S203, respectively placing 20 x 20 joint bilateral filters on the gray-scale map and the original distance map at the corresponding position, and filtering according to the following formula:

$$J_p = \frac{1}{k_p}\sum_{q\in\Omega} I_q \, f(\lVert p-q\rVert)\, g\!\left(\lVert \tilde I_p-\tilde I_q\rVert\right) \qquad (3)$$

in the formula, p and q are coordinates in the image; f and g are weight calculation functions, taken as Gaussian functions; $\tilde I_p$ and $\tilde I_q$ are pixel values of the reference image (the RGB-derived gray-scale map) at the two points p and q; $I_q$ is the pixel value of the input image, i.e. the original distance map, at point q; $\Omega$ is the window of the joint bilateral filter; $k_p = \sum_{q\in\Omega} f(\lVert p-q\rVert)\, g(\lVert \tilde I_p-\tilde I_q\rVert)$ is the normalizing factor; and $J_p$ is the pixel value at point p of the output image after the filtering is finished;
S204, calculating, according to formula (3), the filtered result of each pixel point in the 20 x 20 region at the current position;
S205, continuously sliding the joint bilateral filter until the filtering of the whole depth map is completed;
S206, replacing the original distance map with the filtered result to form a new distance map.
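Steps S203 to S206 amount to sliding a joint bilateral filter over the registered gray-scale and distance maps. A minimal NumPy sketch, with window size and sigma values as illustrative assumptions (the patent specifies only the 20 x 20 window):

```python
import numpy as np

def joint_bilateral(depth, guide, radius=10, sigma_s=5.0, sigma_r=10.0):
    """Joint bilateral filtering of a distance map guided by the registered
    gray-scale map. A (2*radius+1)^2 window stands in for the 20 x 20
    filter; sigma_s and sigma_r are illustrative, not from the patent."""
    h, w = depth.shape
    pad = radius
    d = np.pad(depth.astype(float), pad, mode="edge")
    g = np.pad(guide.astype(float), pad, mode="edge")
    # spatial weight f: a Gaussian over the window offsets
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            win_d = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_g = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range weight g: a Gaussian on guide-image differences
            r = np.exp(-(win_g - g[y + pad, x + pad]) ** 2
                       / (2 * sigma_r**2))
            wgt = f * r
            out[y, x] = (wgt * win_d).sum() / wgt.sum()  # normalized J_p
    return out  # the new distance map that replaces the original (S206)
```

Because the weights are normalized, a constant distance map passes through unchanged, while depth edges that coincide with gray-scale edges are preserved.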
Further, the step S3 adopts YOLOV3 algorithm for identification, and the specific process is as follows:
S301, extracting the features of the input picture in the target identification module through the convolutional layers;
S302, down-sampling the input picture through the pooling layers to reduce the data dimensionality;
S303, collecting the residuals of the network and propagating them backward through the residual layers;
S304, classifying the targets in the picture, predicting their confidence and locating them, according to the input features, through the fully connected layer;
S305, concatenating feature maps of different resolutions through the Route layer, framing out the target in each input picture, and attaching its classification and classification probability information.
In another aspect, the invention further provides a rehabilitation device, which comprises a rehabilitation vehicle and any one of the intelligent brain control systems based on the P300 signal, wherein the intelligent brain control system is trained by using any one of the brain control methods.
The P300 signal is used as the input signal of the whole intelligent brain control system, and the whole system is operated according to the oddball paradigm. By combining Kinect depth point-cloud information extraction, YOLOv3 machine-vision target recognition, the Kinova mechanical arm and sonar technology, the functions of asynchronously and dynamically identifying and selecting targets in a real scene, displaying results online and automatically executing related operations are realized, so that the intelligent brain control system can easily adapt to complex environments and has good practicability and robustness.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of an intelligent brain control system based on a P300 signal according to an embodiment of the present invention;
FIG. 2 is a point cloud depth map obtained by a depth and distance determination module according to the present invention;
FIG. 3 is a flow chart of a brain control method of an intelligent brain control system based on P300 signals according to the present invention;
FIG. 4 is a flow chart of electroencephalogram signal determination according to the present invention;
FIG. 5 is the sixty-four lead International 10-20 electroencephalogram distribution specification;
fig. 6 is a P300 signal standard slice.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Also, for a better understanding of the present invention, the following definitions are specifically explained:
SSVEP (Steady-State Visual Evoked Potential) refers to the continuous response of the visual cortex of the human brain, related to the stimulation frequency (at the fundamental frequency or a harmonic of the stimulation frequency), when subjected to visual stimulation at a fixed frequency;
Kinect, a coinage combining "kinetics" and "connection", is a 3D somatosensory camera (development codename "Project Natal") that introduces functions such as real-time motion capture, image recognition, microphone input, voice recognition and community interaction;
Fast RCNN is a machine vision algorithm with high recognition accuracy, in particular accurate recognition of small targets, improved on the basis of RCNN (short for Region-based Convolutional Neural Network); however, its recognition speed is too slow, about 0.5 frames per second, so its real-time performance is poor;
YOLO, short for You Only Look Once, is a machine vision algorithm with a high recognition speed and a low background false-detection rate, but its target recognition accuracy is lower than that of Fast RCNN, especially for small targets;
the a-Star algorithm is the most effective direct search method for solving the shortest path in a static road network, and is also an effective algorithm for solving a plurality of search problems.
Fig. 1 is a block diagram of the intelligent brain control system based on the P300 signal according to the present invention. As shown in Fig. 1, the system comprises an electroencephalogram signal acquisition and processing module, a depth and distance judging module, a target identification module, an operation execution module and a control module, the first four modules being respectively connected with the control module. The electroencephalogram signal acquisition and processing module is used for acquiring and identifying the electroencephalogram signals of the user on the rehabilitation vehicle; the depth and distance judging module is used for judging the positions of obstacles in the current environment and their distances from the rehabilitation vehicle; the target identification module is used for framing the obstacles in pictures of the current environment, attaching classification and classification probability information; the operation execution module is used for receiving user instructions sent by the control module and making related operations; and the control module is used for comprehensively processing the information sent by the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module, feeding the processing results back to the operation execution module in real time, and simultaneously feeding the related operations made by the operation execution module back to the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module.
It should be noted that the control module is preferably a notebook computer operating under the BCI2000 framework; in order to improve the operating efficiency of the whole system, the electroencephalogram signal acquisition and processing module, the depth and distance distinguishing module, the target identification module and the operation execution module respectively operate in independent sub-threads without mutual interference, and information among the electroencephalogram signal acquisition and processing module, the depth and distance distinguishing module, the target identification module, the operation execution module and the control module is preferably in Queue communication through a Python Queue module.
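The per-module sub-thread and Queue communication described above can be sketched with Python's standard library; the message format and module name below are placeholders, not the patent's actual protocol:

```python
import queue
import threading

# One shared queue carries messages from module sub-threads to the
# control thread; queue.Queue is thread-safe, so no extra locking is
# needed and the sub-threads do not interfere with one another.
to_control = queue.Queue()

def depth_module():
    # a module thread pushes its measurement to the control module;
    # the payload fields here are illustrative
    to_control.put(("depth", {"obstacle_distance_m": 1.2}))

t = threading.Thread(target=depth_module)
t.start()
t.join()
source, payload = to_control.get()  # the control thread consumes messages
```

In the full system each of the four modules would own such a thread, and the control module would also hold a queue per module for the feedback direction.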
Through the arrangement, in the running process of the system, the information of the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module can be comprehensively processed in the notebook computer, the result can be sent to the operation execution module, the result executed by the operation execution module can be fed back to the notebook computer in real time, and then the result is sent to the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module respectively by the notebook computer, so that a closed loop is realized, and the running state of the whole intelligent brain-controlled rehabilitation vehicle can be known by a user in real time.
Meanwhile, the invention also provides a rehabilitation device, which comprises a rehabilitation vehicle and the intelligent brain control system based on the P300 signal shown in Fig. 1, the intelligent brain control system being applied to the rehabilitation vehicle. It should be noted that the rehabilitation vehicle can follow rehabilitation vehicle structures in the prior art, and preferably its length is about 900 mm. Specifically:
the electroencephalogram signal acquisition and processing module comprises an electroencephalogram amplifier, an amplifier battery pack, a 64-channel electrode cap, a 64-channel electroencephalogram wet electrode sensor and a display. The electroencephalogram amplifier, the amplifier battery pack and the display are all mounted on the rehabilitation vehicle, the 64-channel electrode cap is worn on the user's head, and the display is used for inducing the P300 signal: the user needs to watch the display interface inducing the P300 signal when operating the system. In this process the electroencephalogram signal is acquired by the 64-channel wet electrode sensor, amplified by the electroencephalogram amplifier, and then sent to the notebook computer for processing. Special attention is needed: when the number of targets appearing in the visual field is small, a sufficiently obvious P300 waveform cannot be evoked according to the oddball paradigm, so pseudo-target stimuli need to be added;
the depth and distance judging module comprises a first sub-module and a second sub-module which are arranged in parallel and respectively connected with the control module, both used for judging the position of an obstacle in the current environment and its distance from the rehabilitation vehicle. The two sub-modules operate independently and independently transmit their measured results to the notebook computer, which obtains the final result with a related algorithm; integrating the two sub-modules thus yields complementary advantages and improves the overall precision, so as to adapt to various complex environments. The first sub-module comprises a Kinect depth camera placed on the central axis of the rehabilitation vehicle; its principle is to emit a beam of coded infrared light into the environment and collect the infrared codes reflected back, so as to judge the depth information of the objects in the environment. The second sub-module comprises a laser radar and a sonar; their principle is to judge the distance information of each target in the environment by emitting laser and sound waves (generally ultrasonic waves) and then receiving the echoes, and an ultrasonic probe for receiving the ultrasonic waves is accordingly also arranged on the rehabilitation vehicle. Fig. 2 is a point cloud depth map obtained by the depth and distance judging module; the depth map comprises two parts, and each pixel point in the RGB image is completely registered with each point in the distance image, i.e. the distance between each object on the left and the Kinect depth camera is fully reflected at the corresponding position on the right. In the left RGB image of Fig. 2, the black part represents the passable region, which in the invention refers to ground without obstacles; all other colors represent objects that cannot be passed, and the more vivid the color, the closer the distance;
the operation execution module comprises a Mecanum wheel chassis and a mechanical arm, both connected with the notebook computer and used for receiving user instructions sent by the notebook computer to perform related operations;
it should be noted that, in order to save cost and reduce waste, the target identification module of the invention preferably runs on the notebook computer, which specifically uses the YOLOV3 algorithm for target recognition and classification.
In a further technical scheme, the invention adopts the P300 signal and the SSVEP signal as the input signals of the whole intelligent brain control system, wherein the SSVEP signal only plays the role of an asynchronous switch. That is, the invention also comprises an asynchronous switch connected with the control module, preferably the CCA classifier used by the SSVEP signal. The asynchronous switch based on the SSVEP signal is a "step" switch: after each trigger, its state remains unchanged until the next trigger changes it. The asynchronous switch comprises two SSVEP stimulation blocks: the first indicates that the rehabilitation vehicle is in use, with a flicker frequency of 11.6 Hz; the second indicates that the rehabilitation vehicle is not in use, with a flicker frequency of 14.8 Hz. The system judges from the SSVEP signal whether the user is currently using the rehabilitation vehicle system; while using the system, if the user wants to turn it off midway, the user only needs to look at the 14.8 Hz SSVEP stimulation block on the stimulation interface to trigger the quit switch, whereupon the system turns off and enters the sleep state. Meanwhile, the invention also comprises the SWLDA classifier used by the P300 signal, and the SWLDA classifier is connected with the SSVEP classifier.
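A minimal sketch of such a CCA-based asynchronous switch (not the patent's implementation): the EEG window is correlated against sinusoidal references at the two stated flicker frequencies, and the higher canonical correlation wins. The sampling rate, channel count and harmonic count are assumptions:

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X, Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(1.0, float(s[0]))

def refs(freq, fs, n, harmonics=2):
    """Sine/cosine reference set for one stimulation frequency."""
    t = np.arange(n) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def ssvep_switch(eeg, fs=250):
    """True if the 11.6 Hz (system in use) block wins, else False.

    eeg: window of shape (n_samples, n_channels) from the occipital
    electrodes; fs is an assumed sampling rate.
    """
    n = eeg.shape[0]
    r_on = max_canon_corr(eeg, refs(11.6, fs, n))
    r_off = max_canon_corr(eeg, refs(14.8, fs, n))
    return r_on > r_off
```

The "step" behaviour would then be obtained by latching the returned state until the opposite block wins a later window.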
Meanwhile, as a preferred embodiment of the present invention, the electroencephalogram signal is sampled in compliance with the 64-channel international 10-20 specification of Fig. 5 below. The P300 signal, a high-level cognitive signal, is particularly evident in the parietal region of the human brain, so the electrode positions selected are FC1, FC2, CP1, CP2 and CZ. The SSVEP signal is a vision-related potential induced by an external periodic flicker stimulus above 4 Hz and is mainly distributed over the occipital region, so the acquisition electrode positions selected are P7, P3, Pz, P4, P8, PO3, POz, PO4, O1, Oz and O2. FT7 and FT8, where the electroencephalogram signal changes least in the experiment, are selected as the reference electrodes; FPz is grounded, and a 50 Hz notch filter is used for power-frequency filtering to reduce power-frequency interference.
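The 50 Hz power-frequency notch can be illustrated with a standard biquad notch filter; the sampling rate and quality factor below are assumptions, since the patent only states that a 50 Hz notch is applied:

```python
import math

def notch_coeffs(f0, fs, q=5.0):
    """Biquad notch coefficients (b, a) for centre frequency f0 (Hz)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return b, a

def biquad(x, b, a):
    """Apply the filter sample by sample (direct form I)."""
    y = []
    x1, x2 = 0.0, 0.0   # previous inputs
    y1, y2 = 0.0, 0.0   # previous outputs
    for xn in x:
        yn = (b[0] * xn + b[1] * x1 + b[2] * x2
              - a[1] * y1 - a[2] * y2) / a[0]
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

After the transient settles, a 50 Hz mains component is suppressed almost completely while nearby EEG rhythms pass essentially unchanged.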
As shown in fig. 3, the present invention further provides a brain control method of an intelligent brain control system based on P300 signals, comprising the following steps:
S1, acquiring and identifying, through the electroencephalogram signal acquisition and processing module, the electroencephalogram signals sent out by the user on the rehabilitation vehicle;
S2, judging the position information of the obstacles in the current environment and the distance information between the obstacles and the rehabilitation vehicle through the depth and distance judging module;
S3, framing out the targets in the captured pictures of the obstacles through the target identification module, and attaching their classification and classification probability information;
S4, comprehensively processing, through the control module, the received electroencephalogram signal, the position information of the obstacles, the distance information between the obstacles and the rehabilitation vehicle, and the targets in each obstacle picture with the attached classification and classification probability information, and sending the processing result carrying the user instruction to the operation execution module;
S5, after the operation execution module receives the processing information and completes the user instruction, feeding the execution situation back, through the control module, to the electroencephalogram signal acquisition and processing module, the depth and distance judging module and the target identification module respectively.
Specifically, in step S1 of the brain control method, the P300 signal and the SSVEP signal serve as input signals, and the signal determination process comprises the following steps:
S101, collecting, through the 64-channel electroencephalogram wet electrode sensor in the electroencephalogram signal acquisition and processing module, the electroencephalogram signals of a user wearing the 64-channel electrode cap, the signals carrying the P300 and SSVEP components;
S102, passing the user's electroencephalogram signals through the electroencephalogram amplifier into the CCA classifier based on the SSVEP signal;
S103, judging, by the CCA classifier according to the SSVEP signal, whether the user intends to use the intelligent brain control system; if so, proceeding to step S104, otherwise proceeding to step S105;
S104, invoking the SWLDA classifier based on the P300 signal and identifying the user instruction from the specific pattern of the user's P300 signal;
S105, putting the whole intelligent brain control system into a dormant state until the user uses it again.
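The asynchronous-switch decision of steps S101–S105 can be sketched as a CCA-based frequency detector: build sine/cosine reference signals at each candidate flicker frequency, compute the largest canonical correlation between the occipital EEG segment and each reference set, and pick the frequency that correlates best. This is a minimal NumPy illustration, not the system's actual implementation; the function names and the two-harmonic reference design are assumptions.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # Singular values of Qx'Qy are the canonical correlations.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix for one flicker frequency."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def asynchronous_switch(eeg, fs, freqs=(11.6, 14.8)):
    """Return the flicker frequency whose reference set correlates best
    with the EEG segment (shape: n_samples x n_channels)."""
    corrs = [max_canonical_corr(eeg, ssvep_reference(f, fs, eeg.shape[0]))
             for f in freqs]
    return freqs[int(np.argmax(corrs))]
```

In the system described here, detecting 11.6 Hz would correspond to "use the rehabilitation vehicle" and 14.8 Hz to "do not use", with the dormant state entered in the latter case.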
Fig. 4 is a flow chart of the electroencephalogram signal determination. It should be noted that, because the raw electroencephalogram signal collected from the electroencephalogram amplifier has a small amplitude and a low signal-to-noise ratio, pattern recognition and classification cannot be performed on it directly; signal slicing, mean filtering and downsampling are carried out before the binary classification by the SWLDA classifier. That is, the following steps are performed between steps S103 and S104:
1) intercepting the signal within 100-800 ms after each stimulus, and concatenating the signal slices of the different electroencephalogram channels end to end according to each stimulus's code to form a one-dimensional vector;
2) filtering the one-dimensional vector spliced in step 1) with a mean filter whose convolution kernel has a width of 10;
3) downsampling the mean-filtered signal of step 2) at a rate of one tenth.
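The three preprocessing steps above can be sketched in NumPy as follows. This is an illustrative sketch, not the patented implementation; the function name and the (channels × samples) array layout are assumptions.

```python
import numpy as np

def preprocess_slice(eeg, onset, fs):
    """eeg: (n_channels, n_samples) array; onset: stimulus-onset sample index;
    fs: sampling rate in Hz. Returns one filtered, downsampled feature vector."""
    # 1) cut out 100-800 ms after the stimulus and join the channel slices end to end
    start, stop = onset + int(0.1 * fs), onset + int(0.8 * fs)
    vec = eeg[:, start:stop].reshape(-1)
    # 2) mean filter with a convolution kernel of width 10
    vec = np.convolve(vec, np.ones(10) / 10, mode="same")
    # 3) downsample at a rate of one tenth
    return vec[::10]
```

At a 1000 Hz sampling rate, a 700 ms slice over two channels yields a 1400-element vector, reduced to 140 features after downsampling.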
The signal slices produced by the above steps enter the SWLDA classifier for classification. In essence, the SWLDA algorithm projects complex high-dimensional features into a low-dimensional space such that the projected features have the smallest intra-class distance and the largest inter-class distance. Here, the high-dimensional feature input to the SWLDA is the slice signal processed by steps 1) to 3) above. In the invention, only one option needs to be selected as the target to be executed each time, and all other options are non-targets; essentially only a binary classification of the input signal is required, so the high-dimensional input feature can be reduced directly to a one-dimensional feature. Since a one-dimensional feature is essentially a scalar, it can also be regarded as a score for each input signal slice. The corresponding one-dimensional features, i.e. the signal slice scores, are then averaged for each stimulus code. The stimulus code with the largest average is the code corresponding to the target option, as shown in formula (1) below.
target = argmax_i (1/K) Σ_{k=1}^{K} w·x_{i,k}    (1)
wherein K represents the number of offline training repetitions, i.e. the number of trials; x is the offline-acquired electroencephalogram signal; and w is the weight projection matrix of the SWLDA classifier, trained on the offline-acquired electroencephalogram signals according to the principle shown in formula (2) below:
wx-b=0 (2)
in the formula, b is the label of each electroencephalogram signal slice, i.e. whether the slice is the electroencephalogram response corresponding to the target stimulus.
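The decision rule of formula (1) — project each slice onto the trained weights, average the scores per stimulus code, and pick the code with the largest mean — can be sketched as below. The weight vector w is assumed to be already trained; the stepwise feature-selection part of SWLDA training itself is not shown, and the function name is illustrative.

```python
import numpy as np

def select_target(w, slices, codes):
    """w: (n_features,) trained projection weights.
    slices: (n_trials, n_features) preprocessed signal slices.
    codes: (n_trials,) stimulus code of each slice.
    Returns the code whose mean score (1/K)*sum(w.x) is largest, per formula (1)."""
    scores = slices @ w                      # one scalar score per slice
    codes = np.asarray(codes)
    uniq = np.unique(codes)
    means = [scores[codes == c].mean() for c in uniq]
    return uniq[int(np.argmax(means))]
```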
Considering that the intended users of the invention are disabled persons with dyskinesia, in order to guarantee the accuracy, safety and reliability of brain control, four Runs are performed during offline training, each Run comprising ten tasks; the number of trials per task, i.e. the number of offline training repetitions, is 4. The final offline training accuracy must reach at least seventy percent; otherwise training continues until this requirement is met. Ideally, after offline training is completed, the electroencephalogram slice corresponding to the target stimulus shows an obvious peak at about 400 ms, as in fig. 6 below.
In a further technical solution, the step S2 is specifically represented as:
S201, acquiring a point cloud depth map of the current environment through the Kinect depth camera in the depth and distance discrimination module, or through the laser radar and sonar; the point cloud depth map comprises an RGB map and an original distance map, and each pixel in the RGB map is fully registered with the corresponding point in the original distance map. It should be noted that the Kinect depth camera is preferably a Kinect XBOX360 depth camera based on active distance detection, a device developed by Microsoft that uses infrared light-coding technology. Its depth value is at most 4096 mm, a value of 0 usually means the depth cannot be determined, and Microsoft recommends using values in the range of 1220 mm to 3810 mm during development. Since the rehabilitation vehicle in the present invention is about 900 mm long and the Kinect is placed on its central axis, all parts at a Kinect depth less than or equal to the vehicle length should be treated as the vehicle itself; to leave a margin, 900 mm is increased to 1000 mm, i.e. any object within a Kinect depth of 1000 mm is regarded as the rehabilitation vehicle itself and is not counted as an obstacle even if detected. Objects more than 3810 mm from the Kinect cannot be detected accurately because the Kinect's own detection accuracy decreases at that range, so any obstacle detected beyond this distance is likewise ignored;
S202, converting the RGB map into a gray-scale map;
S203, placing a 20 × 20 joint bilateral filter at corresponding positions on the gray-scale map and the original distance map according to the following formula:
J_P = Σ_{q∈Ω} f(p,q) g(Ĩ_p, Ĩ_q) I_q / Σ_{q∈Ω} f(p,q) g(Ĩ_p, Ĩ_q)    (3)
in the formula, p and q are coordinates in the image; f and g are weight calculation functions, both taken as Gaussian functions; Ĩ is the reference image, i.e. the pixel values of the RGB (gray-scale) map at the two points p and q; I_q is the pixel value of the input image, i.e. the original distance map, at point q; Ω is the window of the joint bilateral filter; and J_P is the pixel value at point P of the output image after filtering;
it should be noted that joint bilateral filtering is introduced because the distance image acquired by the Kinect depth camera often contains many noise points; its image quality is too poor for it to be used directly;
S204, calculating the filtered result of each pixel in the 20 × 20 region at the current position according to formula (3);
S205, continuously sliding the joint bilateral filter until the whole depth map has been filtered;
S206, replacing the original distance map with the filtered result to form a new distance map.
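Steps S203–S206 amount to a joint (cross) bilateral filter: a spatial Gaussian f over the pixel distance and a range Gaussian g over the guide-image (gray-scale) difference jointly weight the depth values in the window. The sketch below is a deliberately unoptimized illustration; the window radius and sigma parameters are assumptions (the text's 20 × 20 window would correspond to a larger radius).

```python
import numpy as np

def joint_bilateral_filter(gray, depth, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Filter `depth` guided by `gray` (same shape), in the style of formula (3):
    each output pixel is the normalized, jointly weighted mean of the depth
    values in its window."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            gi = gray[i0:i1, j0:j1].astype(float)
            di = depth[i0:i1, j0:j1].astype(float)
            yy, xx = np.mgrid[i0:i1, j0:j1]
            # spatial weight f: Gaussian on the pixel distance ||p - q||
            f = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            # range weight g: Gaussian on the guide-image difference
            g = np.exp(-((gi - float(gray[i, j])) ** 2) / (2 * sigma_r ** 2))
            wgt = f * g
            out[i, j] = (wgt * di).sum() / wgt.sum()
    return out
```

Because the weights come from the cleaner gray-scale guide, edges present in the RGB image are preserved while isolated depth noise is averaged away.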
In addition, it is worth mentioning that current target recognition algorithms mainly comprise the Faster RCNN series, which performs localization and classification of targets separately, and the YOLO series, which performs localization, classification and classification-confidence estimation simultaneously in a single network. The invention targets a brain-computer interface with high real-time requirements; algorithms represented by Faster RCNN, although often high in detection accuracy and recall, have poor real-time performance, cannot meet the requirement of the invention and are therefore not considered. The YOLO series is relatively inferior in detection accuracy, particularly on clustered small targets, but has a lower false-positive rate and far better real-time performance than Faster RCNN and the like, so the YOLO series is selected in the invention. For details, see table 1 below, which compares the Faster RCNN series and the YOLO series in terms of average precision and frame rate.
Detection model     Average precision   Frame rate (FPS)
Faster RCNN         70.0                0.5
YOLO                63.4                45
YOLOv2 288×288      69.0                91
YOLOv2 352×352      73.7                81
YOLOv2 416×416      76.8                67
YOLOv2 480×480      77.8                59
YOLOv2 544×544      78.6                40
TABLE 1 Average precision and frame rate of the various algorithms
The YOLO series is fast first because it is a relatively simple, complete end-to-end single-network method: detection, localization and target-confidence prediction are all carried out within one network; and second because its loss function is more appropriate. The YOLO series essentially consists of a series of convolutional layers, pooling layers, residual layers, a small number of fully connected layers and Route layers. It mainly includes four versions, whose merits are as follows:
YOLO model   Speed     Average precision   Recognized object classes
YOLOv1       Faster    Higher              More
YOLOv2       Fastest   Higher              More
YOLOv3       Faster    Highest             More
YOLO9000     Faster    Lower               Most
TABLE 2 Merits of the respective YOLO versions
Considering real-time performance and user experience, the method requires an algorithm that is both fast and accurate; the number of recognizable target classes, by contrast, does not need to be large, because the usage environment is usually a simple indoor one. Therefore, on balance, YOLOv3 is selected to perform target identification, which is specifically represented as:
S301, extracting the features of the input picture through the convolutional layers in the target identification module;
S302, downsampling the input picture through the pooling layers to reduce data dimensionality;
S303, collecting and back-propagating the network residual through the residual layers;
S304, classifying, predicting the confidence of and locating the targets in the picture from the input features through the fully connected layer;
S305, splicing feature maps of different resolutions together through the Route layer, framing the target in each input picture and attaching its class and class-probability information.
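The final "frame the targets and attach class and class-probability information" stage (S304–S305) can be illustrated independently of the network itself: keep boxes above a confidence threshold, suppress heavily overlapping duplicates, and attach the argmax class with its probability. This is a generic detection post-processing sketch under assumed thresholds, not YOLOv3's internal code; all names are hypothetical.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def frame_targets(boxes, conf, class_probs, conf_thresh=0.5, iou_thresh=0.5):
    """Confidence filtering + non-maximum suppression.
    Returns a list of (box, class_index, class_probability) per kept target."""
    order = np.argsort(conf)[::-1]          # most confident boxes first
    kept = []
    for i in order:
        if conf[i] < conf_thresh:
            break                            # all remaining boxes are weaker
        if any(iou(boxes[i], boxes[j]) > iou_thresh for j, _, _ in kept):
            continue                         # duplicate of an already-kept box
        c = int(np.argmax(class_probs[i]))
        kept.append((i, c, float(conf[i] * class_probs[i][c])))
    return [(boxes[i], c, p) for i, c, p in kept]
```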
The loss function, shown in formula (4) below, comprises penalties on the class, the confidence and the coordinates of the prediction box; a sum of squared errors of this kind is not only easier to differentiate for back-propagation, but also maximizes the measure of similarity between the predicted bounding box and the real bounding box of the object.
loss = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²]
     + λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²]
     + Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} (C_i − Ĉ_i)²
     + λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} (C_i − Ĉ_i)²
     + Σ_{i=0}^{S²} 1_i^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²    (4)
In practical application, the RGB camera in the middle of the Kinect XBOX360 depth camera installed above the rehabilitation vehicle acquires images of the environment as a video stream and feeds them into the YOLOv3 algorithm; after processing, YOLOv3 frames the targets in each picture of the video stream and attaches their class and class-probability information.
As a preferred embodiment of the present invention, the brain control method implements obstacle avoidance of the rehabilitation vehicle through the A* algorithm: the A* algorithm automatically gives an optimal route from the current location (i.e. the location of the rehabilitation vehicle in the invention) to the target location (i.e. the target the user is interested in during the experiment). Here the optimal route means the shortest passable route rather than the smoothest one. The reason is that the experimental site is indoors and the environment is usually simple, so the moving route of the rehabilitation vehicle should put efficiency first, and whether the route is smooth does not greatly affect the performance of the invention. The A* algorithm is essentially a path-selection algorithm over a prior map, so the rehabilitation vehicle needs to traverse the site once before the experiment to obtain the map information of the experimental field.
The programming steps of the A* algorithm in the present invention are as follows:
1. dividing the map of the experimental site into S × S sub-parts, namely the map of the experimental site consists of S × S small squares, and each small square represents a small area on the map;
2. preparing two sufficiently large lists, OpenList and CloseList, for storing respectively the small areas that can currently be passed through and the small areas already passed through, both initialized as empty arrays in advance; preparing a sufficiently large array Pointer to store the parent-area pointer of each area (i.e. recording from which area the rehabilitation vehicle reached the current area), also initialized as empty;
3. putting the area where the rehabilitation vehicle currently is into the OpenList; detecting the eight neighbourhoods of that area, storing into the OpenList those that are passable (i.e. contain no obstacle) and are not in the CloseList, pointing the parent-area pointer of each area just put into the OpenList at the area where the rehabilitation vehicle currently is, and storing these pointers in Pointer;
4. removing the area where the rehabilitation vehicle currently is from the OpenList and putting it into the CloseList; calculating the F value of every passable area put into the OpenList in step 3 according to the following formula:
F=G+H (5)
wherein G is the distance from the initial position of the rehabilitation vehicle on the map to its current position, and H is the Manhattan distance from the current position of the rehabilitation vehicle to the object the user is interested in, ignoring all obstacles on the current map;
5. sorting all areas in the OpenList by F value, selecting the area with the smallest F value, removing it from the OpenList and placing it into the CloseList; checking the eight neighbourhoods of that area and putting those that are passable and not in the CloseList into the OpenList. If one of those eight neighbourhoods is an area already in the OpenList, check whether the path through the smallest-F area is better, i.e. compare the existing G value of that area with the G value it would have if the smallest-F area were taken as its parent (the G value of the smallest-F area plus the cost of moving from it to that area). If the original path is better, do nothing; otherwise change the G value of the area already in the OpenList to the new value and point its parent-area pointer in Pointer at the smallest-F area. For example, suppose an area a already in the OpenList has G value 100, the smallest-F area b is a neighbourhood of a, the G value of b is 50, and the cost of reaching a from b is 20. Since the original G value of a (100) is greater than 50 + 20, it is reasonable to regard reaching b first and then going directly from b to a as the better path; the G value of a is therefore changed from 100 to 50 + 20 = 70, and the pointer in Pointer is modified so that the parent area of a is b.
The above steps are cycled continuously until the target the user is interested in appears in the OpenList; then, starting from that target, the parent-area pointers in Pointer are followed backwards step by step to the position of the rehabilitation vehicle in its initial state, which yields the optimal path.
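The grid-based path search described above can be sketched compactly with a priority queue standing in for the repeatedly sorted OpenList. This is an illustrative sketch, not the patented code: it uses the eight-neighbourhood and the Manhattan H of the text, but assumes a unit step cost for every move (the text does not specify a diagonal cost), so with diagonal moves the Manhattan heuristic does not guarantee strict optimality.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2-D list, 0 = passable, 1 = obstacle; start/goal: (row, col).
    Returns the path from start to goal as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance to the goal, ignoring obstacles
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]       # acts as the sorted OpenList
    g = {start: 0}                           # cost from the start area
    parent = {start: None}                   # the Pointer array
    closed = set()                           # the CloseList
    while open_heap:
        _, _, cur = heapq.heappop(open_heap)
        if cur in closed:
            continue
        if cur == goal:
            break
        closed.add(cur)
        for di in (-1, 0, 1):                # eight-neighbourhood
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = cur[0] + di, cur[1] + dj
                if not (0 <= ni < rows and 0 <= nj < cols):
                    continue
                if grid[ni][nj]:
                    continue                 # obstacle
                ng = g[cur] + 1              # assumed unit step cost
                if (ni, nj) not in g or ng < g[(ni, nj)]:
                    g[(ni, nj)] = ng         # better path found: update G
                    parent[(ni, nj)] = cur   # re-point the parent pointer
                    heapq.heappush(open_heap, (ng + h((ni, nj)), ng, (ni, nj)))
    if goal not in parent:
        return None
    path, node = [], goal                    # follow parent pointers backwards
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```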
In summary, the invention has the following advantages:
(1) through the intelligent brain control system based on the P300 signal, combined with YOLOv3 target identification, the invention can select in real time a target of interest from any of the targets identified in the current visual field and take the relevant operation;
(2) combining the Kinect depth camera, sonar and laser radar, the system can detect the state of the current environment in real time and, together with the depth point cloud, automatically and dynamically plan a path to the selected target in real time, including automatic obstacle avoidance, shortest-path realization and the like;
(3) according to the selected target and the currently planned path, the system can automatically reach the vicinity of the target, and if needed the Kinova mechanical arm automatically grasps the target according to the depth point cloud information;
(4) the invention is compatible with two different control modes, automatic and manual, to meet users' different needs in daily life. In the automatic mode, the user only needs to select any target of interest in the current scene and the system automatically executes all remaining operations until completion; for example, if the user selects a cup in front, the system automatically drives to the cup, extends the mechanical arm, grasps the cup and hands it to the user. In the manual mode, the user must select the specific operation at every step and the system executes each operation as selected, again taking fetching water as the example. Two modes are provided because a purely manual mode is too inefficient, while the automatic mode cannot satisfy the user who simply wants to move forward or backward.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An intelligent brain control system based on a P300 signal, applied to a rehabilitation vehicle, characterized by comprising:
the electroencephalogram signal acquisition and processing module is used for acquiring and identifying electroencephalogram signals of a user on the rehabilitation vehicle;
the depth and distance judging module is used for judging the position of the barrier in the current environment and the distance between the barrier and the rehabilitation vehicle;
the target identification module is used for framing out the obstacle target in each picture of the current environment and attaching the classification and classification probability information of the obstacle target;
the operation execution module is used for receiving the user instruction sent by the control module and making related operation;
the control module is used for comprehensively processing information sent by the electroencephalogram signal acquisition processing module, the depth and distance judging module and the target identification module, feeding back a processing result to the operation execution module in real time, and feeding back related operations made by the operation execution module to the electroencephalogram signal acquisition processing module, the depth and distance judging module and the target identification module;
the electroencephalogram signal acquisition and processing module, the depth and distance distinguishing module, the target identification module and the operation execution module are respectively connected with the control module.
2. The intelligent brain control system based on P300 signals according to claim 1, wherein the electroencephalogram signal acquisition and processing module comprises an electroencephalogram amplifier, an amplifier battery pack, a 64-channel electrode cap, a 64-channel electroencephalogram signal wet electrode sensor and a display, the electroencephalogram amplifier, the amplifier battery pack and the display are all mounted on a rehabilitation vehicle, the 64-channel electrode cap is worn by a user's brain, and the display is used for inducing P300 signals;
and/or the depth and distance judging module comprises a first sub-module and a second sub-module which are arranged in parallel and connected with the control module respectively, the first sub-module and the second sub-module are used for judging the position of a barrier in the current environment and the distance between the barrier and a rehabilitation vehicle, the first sub-module comprises a Kinect depth camera, the Kinect depth camera is placed on the central axis of the rehabilitation vehicle, and the second sub-module comprises a laser radar and a sonar;
and/or the operation execution module comprises a Mecanum wheel chassis and a mechanical arm, both connected with the control module and used for receiving the user instruction sent by the control module and performing the related operation.
3. The intelligent brain control system based on the P300 signal according to claim 1 or 2, characterized by further comprising an asynchronous switch connected with the control module, the asynchronous switch being a CCA classifier used for the SSVEP signal and comprising two SSVEP stimulation blocks, the first of which indicates using the rehabilitation vehicle and flickers at 11.6 Hz, while the second indicates not using the rehabilitation vehicle and flickers at 14.8 Hz;
the system further comprises a SWLDA classifier used for the P300 signal, the SWLDA classifier being connected with the CCA classifier.
4. The intelligent brain control system based on P300 signal according to claim 3, characterized in that the brain electrical signal is sampled according to 64-channel international 10-20 specification, wherein the electrode positions selected based on P300 signal are FC1, FC2, CP1, CP2 and CZ; the electrode positions selected based on the SSVEP signals are P7, P3, Pz, P4, P8, PO3, POz, PO4, O1, Oz and O2; the reference electrode selects electrode positions FT7 and FT 8; FPz to ground.
5. An intelligent brain control method based on P300 signal, characterized in that, the intelligent brain control system based on P300 signal of any claim 1 to 4 is used, the brain control method comprises the following steps:
S1, acquiring and identifying, through the electroencephalogram signal acquisition and processing module, the electroencephalogram signals sent out by a user on the rehabilitation vehicle;
S2, judging, through the depth and distance discrimination module, the position of obstacles in the current environment and their distance from the rehabilitation vehicle;
S3, framing, through the target identification module, the targets in each captured picture of the obstacles and attaching their class and class-probability information;
S4, comprehensively processing, through the control module, the received electroencephalogram signals, the obstacle position and distance information, and the framed targets with their class and class-probability information, and sending the processing result carrying the user instruction to the operation execution module;
S5, after the operation execution module receives the processing information and completes the user instruction, feeding the execution status back, through the control module, to the electroencephalogram signal acquisition and processing module, the depth and distance discrimination module and the target identification module respectively.
6. The brain-control method according to claim 5, wherein in step S1, the P300 signal and the SSVEP signal are used as input signals of the brain-control method, and are expressed as follows:
S101, collecting, through the 64-channel electroencephalogram wet electrode sensor in the electroencephalogram signal acquisition and processing module, the electroencephalogram signals of a user wearing the 64-channel electrode cap, the signals carrying the P300 and SSVEP components;
S102, passing the user's electroencephalogram signals through the electroencephalogram amplifier into the CCA classifier based on the SSVEP signal;
S103, judging, by the CCA classifier according to the SSVEP signal, whether the user intends to use the intelligent brain control system; if so, proceeding to step S104, otherwise proceeding to step S105;
S104, invoking the SWLDA classifier based on the P300 signal and identifying the user instruction from the specific pattern of the user's P300 signal;
S105, putting the whole intelligent brain control system into a dormant state until the user uses it again.
7. The brain control method according to claim 6, further comprising the following steps between steps S103 and S104:
1) intercepting the signal within 100-800 ms after each stimulus, and concatenating the signal slices of the different electroencephalogram channels end to end according to each stimulus's code to form a one-dimensional vector;
2) filtering the one-dimensional vector spliced in step 1) with a mean filter whose convolution kernel has a width of 10;
3) downsampling the mean-filtered signal of step 2) at a rate of one tenth.
8. The brain control method according to claim 5, wherein the step S2 is embodied as:
S201, acquiring a point cloud depth map of the current environment through the Kinect depth camera in the depth and distance discrimination module, or through the laser radar and sonar, the point cloud depth map comprising an RGB map and an original distance map, each pixel in the RGB map being fully registered with the corresponding point in the original distance map;
S202, converting the RGB map into a gray-scale map;
S203, placing a 20 × 20 joint bilateral filter at corresponding positions on the gray-scale map and the original distance map according to the following formula:
J_P = Σ_{q∈Ω} f(p,q) g(Ĩ_p, Ĩ_q) I_q / Σ_{q∈Ω} f(p,q) g(Ĩ_p, Ĩ_q)    (3)
in the formula, p and q are coordinates in the image; f and g are weight calculation functions, both taken as Gaussian functions; Ĩ is the reference image, i.e. the pixel values of the RGB (gray-scale) map at the two points p and q; I_q is the pixel value of the input image, i.e. the original distance map, at point q; Ω is the window of the joint bilateral filter; and J_P is the pixel value at point P of the output image after filtering;
S204, calculating the filtered result of each pixel in the 20 × 20 region at the current position according to formula (3);
S205, continuously sliding the joint bilateral filter until the whole depth map has been filtered;
S206, replacing the original distance map with the filtered result to form a new distance map.
9. The brain control method according to claim 5, wherein the step S3 adopts the YOLOv3 algorithm for identification, the specific process being:
S301, extracting the features of the input picture through the convolutional layers in the target identification module;
S302, downsampling the input picture through the pooling layers to reduce data dimensionality;
S303, collecting and back-propagating the network residual through the residual layers;
S304, classifying, predicting the confidence of and locating the targets in the picture from the input features through the fully connected layer;
S305, splicing feature maps of different resolutions together through the Route layer, framing the target in each input picture and attaching its class and class-probability information.
10. A rehabilitation device, comprising a rehabilitation vehicle and the intelligent brain control system based on the P300 signal, which is claimed in any one of claims 1 to 4, wherein the intelligent brain control system is trained by using the brain control method claimed in any one of claims 5 to 9.
CN202010737661.8A 2020-07-28 2020-07-28 Intelligent brain control system and rehabilitation equipment based on P300 signal Active CN111880656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010737661.8A CN111880656B (en) 2020-07-28 2020-07-28 Intelligent brain control system and rehabilitation equipment based on P300 signal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010737661.8A CN111880656B (en) 2020-07-28 2020-07-28 Intelligent brain control system and rehabilitation equipment based on P300 signal

Publications (2)

Publication Number Publication Date
CN111880656A true CN111880656A (en) 2020-11-03
CN111880656B CN111880656B (en) 2023-04-07

Family

ID=73201349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010737661.8A Active CN111880656B (en) 2020-07-28 2020-07-28 Intelligent brain control system and rehabilitation equipment based on P300 signal

Country Status (1)

Country Link
CN (1) CN111880656B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113616436A (en) * 2021-08-23 2021-11-09 南京邮电大学 Intelligent wheelchair based on motor imagery electroencephalogram and head posture and control method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104083258A (en) * 2014-06-17 2014-10-08 华南理工大学 Intelligent wheel chair control method based on brain-computer interface and automatic driving technology
US8884949B1 (en) * 2011-06-06 2014-11-11 Thibault Lambert Method and system for real time rendering of objects from a low resolution depth camera
EP2808842A2 (en) * 2013-05-31 2014-12-03 Technische Universität München An apparatus and method for tracking and reconstructing three-dimensional objects
CN106485672A (en) * 2016-09-12 2017-03-08 西安电子科技大学 Improved Block- matching reparation and three side Steerable filter image enchancing methods of joint
CN111399652A (en) * 2020-03-20 2020-07-10 Nankai University Multi-robot hybrid system based on hierarchical SSVEP and visual assistance

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113616436A (en) * 2021-08-23 2021-11-09 Nanjing University of Posts and Telecommunications Intelligent wheelchair based on motor imagery electroencephalogram and head posture, and control method
CN113616436B (en) * 2021-08-23 2024-01-16 Nanjing University of Posts and Telecommunications Intelligent wheelchair based on motor imagery electroencephalogram and head posture, and control method

Also Published As

Publication number Publication date
CN111880656B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN107352032B (en) Method for monitoring pedestrian flow data, and unmanned aerial vehicle
CN111166357A (en) Fatigue monitoring device system with multi-sensor fusion and monitoring method thereof
CN110070056A (en) Image processing method, device, storage medium and equipment
EP3709134A1 (en) Tool and method for annotating a human pose in 3d point cloud data
CN110113116B (en) Human behavior identification method based on WIFI channel information
Sáez et al. Aerial obstacle detection with 3-D mobile devices
US20240062558A1 (en) Systems and methods for detecting symptoms of occupant illness
CN205898143U (en) Robot navigation system based on machine vision and laser sensor fusion
CN105354985A (en) Fatigue driving monitoring device and method
CN110717918B (en) Pedestrian detection method and device
CN107290975A (en) Household intelligent robot
JP2022507635A (en) Intelligent vehicle motion control methods and devices, equipment and storage media
CN106371459A (en) Target tracking method and target tracking device
CN111880656B (en) Intelligent brain control system and rehabilitation equipment based on P300 signal
Rivera-Rubio et al. Appearance-based indoor localization: A comparison of patch descriptor performance
CN113257415A (en) Health data collection device and system
CN114818788A (en) Tracking target state identification method and device based on millimeter wave perception
CN112595728B (en) Road problem determination method and related device
CN210256167U (en) Intelligent obstacle avoidance system and robot
CN106257553A (en) Multifunctional intelligent traffic flow monitoring system and method
CN115082690B (en) Target recognition method, target recognition model training method and device
CN115909498A (en) Intelligent fall monitoring method and system based on three-dimensional laser point clouds
CN111861275B (en) Household work mode identification method and device
US20230343228A1 (en) Information processing apparatus, information processing system, and information processing method, and program
CN111062311B (en) Pedestrian gesture recognition and interaction method based on depthwise separable convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant