CN113138668B - Automatic driving wheelchair destination selection method, device and system - Google Patents


Info

Publication number
CN113138668B
CN113138668B (application CN202110446334.1A)
Authority
CN
China
Prior art keywords
destination
image
candidate
electroencephalogram
electrode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110446334.1A
Other languages
Chinese (zh)
Other versions
CN113138668A (en)
Inventor
郑晓宇
马其远
沈晓梅
李勇
吴剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Tsinghua University
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Shenzhen International Graduate School of Tsinghua University
Priority to CN202110446334.1A
Publication of CN113138668A
Application granted
Publication of CN113138668B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G - TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G 5/00 - Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G 5/04 - Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs, motor-driven
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G - TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G 5/00 - Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G 5/10 - Parts, details or accessories
    • A61G 5/1051 - Arrangements for steering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G - TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G 2203/00 - General characteristics of devices
    • A61G 2203/10 - General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a method, a device and a system for selecting the destination of an automatic driving wheelchair, wherein the method comprises the following steps: receiving an initial environment image acquired by an AR device; extracting regional image features from the initial environment image; inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations; displaying the plurality of candidate destinations in the user field of view of the AR device; and receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when the user first gazes at a candidate destination. The invention enables real-time selection of the destination of the automatic driving wheelchair, with high diversity and convenience in destination selection.

Description

Automatic driving wheelchair destination selection method, device and system
Technical Field
The invention relates to the technical field of brain-computer interfaces and automatic driving wheelchairs, and in particular to a method, a device and a system for selecting the destination of an automatic driving wheelchair.
Background
At present, the destination of an automatic driving wheelchair is mainly selected from preset destinations or from destinations within the field of view of a fixed camera; the candidate destinations cannot change synchronously as the wheelchair moves.
For example, in the prior art, destinations and the wheelchair position are marked with digital tags: a digital tag is arranged at each navigation destination and on the wheelchair, and the tag content comprises the name of the corresponding place or wheelchair, its unique ID, its two-dimensional coordinates, the executable operation information of the wheelchair, and three-dimensional posture information. An environment sensing system identifies the digital tags, extracts the tag content of each navigation destination, and obtains the movable-area information of the wheelchair. In February 2015, a Japanese institute of technology configured a map and a plurality of destinations within a specific facility in its system; each destination has a corresponding number, a brain-wave sensing device reads the number the user merely thinks of, and a computer program drives the wheelchair around obstacles to that destination. The disadvantage is that the patient can only choose among preset destinations and cannot change the target site in real time according to the situation.
There is also an electroencephalogram acquisition system that records scalp EEG signals while the user imagines left- or right-hand movement: the user imagines left-hand movement to issue a left-turn command to the wheelchair and right-hand movement to issue a right-turn command. Imagined left- and right-hand movement causes characteristic event-related desynchronization and event-related synchronization phenomena in the scalp EEG. The electric wheelchair carries a left-wheel motor and a right-wheel motor that separately drive the left and right wheels, thereby determining the direction of motion; the running states of the two motors are controlled by four connected control-voltage channels. In July 2009, Japan's Toyota Motor Corporation was the first to announce the successful development of an electroencephalogram-controlled wheelchair. A signal-processing system on the wheelchair analyses brain waves and controls the electric wheelchair to move forward, move backward, rotate, and so on; the latency from the brain issuing a command to the wheelchair responding was reduced to only 125 milliseconds. However, to stop the wheelchair the user must raise one cheek, transmitting a signal through a sensor placed on the face. A student innovation team at Xi'an Jiaotong University developed a brain-controlled electric wheelchair that can be operated by thought alone: wearing a cap fitted with sensors and using a tablet computer, the patient can drive the electric wheelchair by mental intent.
After brain-wave signals are collected through the electrodes, they are combined with the user's visual system: when the patient looks at different directional patterns on a computer screen, the wheelchair can be commanded to move left, right, forward or backward, so that movement is controlled through combined visual and brain-wave instructions. The disadvantage is that travelling anywhere requires the brain to issue instructions continuously, which is burdensome. Moreover, motor-imagery control instructions are limited in number, and the need to issue them continuously imposes a heavy mental burden on disabled users. Because brain signals are unstable, the prior art cannot reach the information transmission rate of a wheelchair joystick, and it is difficult to achieve joystick-level control. Many people also fail to produce clearly distinguishable control signals even after long motor-imagery training.
Thus, there is a need for an efficient and convenient method of destination selection in real time.
Disclosure of Invention
The embodiment of the invention provides a destination selecting method of an automatic driving wheelchair, which is used for selecting the destination of the automatic driving wheelchair in real time, and has high destination selection diversity and convenience, and the method comprises the following steps:
receiving an initial environment image acquired by AR equipment;
extracting regional image features from the initial environmental image;
inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
displaying the plurality of candidate destinations in a user field of view of the AR device;
and receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode, and obtaining an identified first destination, wherein the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time.
The embodiment of the invention provides an automatic driving wheelchair destination selecting device, which is used for selecting the automatic driving wheelchair destination in real time, and has high destination selecting diversity and convenience, and the device comprises:
the image receiving module is used for receiving an initial environment image acquired by the AR equipment;
The feature extraction module is used for extracting regional image features from the initial environment image;
the candidate destination obtaining module is used for inputting the extracted regional image characteristics into the image recognition model to obtain a plurality of candidate destinations;
a display module for displaying the plurality of candidate destinations in a user field of view of the AR device;
the identification module is used for receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified first destination, and the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time.
The embodiment of the invention provides an automatic driving wheelchair destination selection system, which is used for selecting the automatic driving wheelchair destination in real time, with high destination selection diversity and convenience, the system comprising:
AR equipment, an electroencephalogram acquisition electrode and the automatic driving wheelchair destination selecting device, wherein,
the AR equipment is used for acquiring an initial environment image and sending the initial environment image to the automatic driving wheelchair destination selecting device;
and the electroencephalogram acquisition electrode is used for acquiring a first electroencephalogram signal of a user and sending the first electroencephalogram signal to the automatic driving wheelchair destination selection device.
The embodiment of the invention also provides a computer device, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, wherein the processor implements the automatic driving wheelchair destination selection method when executing the computer program.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program for executing the automatic driving wheelchair destination selection method.
In the embodiment of the invention, an initial environment image acquired by the AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user first gazes at a candidate destination. In this process, the AR device and the electroencephalogram acquisition electrode allow the destination selected by the user to be identified in real time; no destination needs to be preset and no digital tag needs to be arranged, candidate destination generation is not limited by the field of view of a fixed camera, the diversity and convenience of destination selection are greatly improved, and the patient can also change the destination temporarily within the real-time field of view.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. In the drawings:
FIG. 1 is a flow chart of a method for automatically selecting a destination for a wheelchair in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of AR glasses, an electroencephalogram acquisition dry electrode and a headband assembled together according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a user wearing an autopilot wheelchair destination selection system in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of a plurality of candidate destination areas displayed at a preset frequency on an AR device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a first destination displayed in the user field of view of an AR device in a first preset manner according to an embodiment of the present invention;
FIG. 6 is a diagram showing the user view of an AR device when the first destination and the second destination are consistent in an embodiment of the present invention;
FIG. 7 is a diagram showing the user view of an AR device when the first destination and the second destination are inconsistent in an embodiment of the present invention;
FIG. 8 is a detailed flow chart of a method for automatically selecting a destination for a wheelchair in accordance with an embodiment of the present invention;
FIG. 9 is a schematic view of an autopilot wheelchair destination selection apparatus in accordance with an embodiment of the present invention;
FIG. 10 is a schematic diagram of an autopilot wheelchair destination selection system in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of a computer device in an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
In the description of the present specification, the terms "comprising," "including," "having," "containing," and the like are open-ended terms, meaning including, but not limited to. Reference to the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the embodiments is used to schematically illustrate the practice of the present application, and is not limited thereto and may be appropriately adjusted as desired.
First, abbreviations and key terms involved in the embodiments of the present invention are defined.
Image processing technique: using a computer to analyze an image to obtain the required result. A digital image is a large two-dimensional array obtained by photographing with equipment such as an industrial camera, a video camera or a scanner; the elements of the array are called pixels, and their values are called gray values. Image processing techniques generally include three parts: image compression; enhancement and restoration; and matching, description and recognition.
Image features: mainly the color features, texture features, shape features and spatial-relationship features of an image. The color feature is a global feature describing the surface properties of the scene corresponding to the image or image region; texture features are likewise global features describing those surface properties. Shape features have two kinds of representation: contour features, aimed mainly at the outer boundary of an object, and region features, which relate to the entire shape region. The spatial-relationship feature refers to the mutual spatial positions or relative directional relationships among the objects segmented from an image; these relationships can be divided into connection/adjacency, overlap/occlusion, inclusion/containment, and so on.
HSV (Hue, Saturation, Value) color model: a color space, also called the hexagonal-cone model, created by A. R. Smith in 1978 based on the intuitive properties of color. The parameters of a color in this model are hue (H), saturation (S) and value (V, i.e. brightness).
Histogram of oriented gradients (HOG): a feature descriptor used for object detection in computer vision and image processing, which computes statistics of the orientations of local image gradients. The method is very similar to edge-orientation histograms, the scale-invariant feature transform and shape contexts, except that HOG descriptors are computed on a dense grid of uniformly sized cells and use overlapping local contrast normalization to improve performance.
Support vector machine (SVM): a generalized linear classifier that performs binary classification of data in a supervised-learning manner; its decision boundary is the maximum-margin hyperplane solved for the training samples. The SVM measures empirical risk with the hinge loss and adds a regularization term to the solution to optimize structural risk, making it a sparse and robust classifier. Through the kernel method, one of the common kernel-learning approaches, the SVM can also perform nonlinear classification. The SVM was proposed in 1964, developed rapidly in the 1990s, spawned a series of improved and extended algorithms, and has been applied to pattern-recognition problems such as image recognition and text classification.
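For illustration only, a maximum-margin linear classifier of the kind described above can be sketched with subgradient descent on the regularized hinge loss; the toy data, learning rate and regularization strength below are invented values for the sketch, not parameters from the patent:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Train a linear SVM by subgradient descent on the regularized hinge loss.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns the weight vector w and bias b.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # The point violates the margin: hinge-loss subgradient step.
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                # Only the L2 regularizer contributes a subgradient here.
                w -= lr * lam * w
    return w, b

# Linearly separable toy data: class +1 on the right, class -1 on the left.
X = np.array([[2.0, 1.0], [3.0, 2.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

A real system would typically use a library implementation with a kernel; this sketch only shows the hinge-loss mechanics the definition refers to.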
Brain-computer interface (Brain-Computer Interface, BCI): a direct communication and control channel established between the brain and a computer or other electronic device. It allows a person to express intentions or control devices directly through the brain, without language or movement, and can effectively enhance the ability of severely disabled patients to communicate with, or control, their external environment, thereby improving their quality of life. Brain-computer interface technology is a multidisciplinary, cross-cutting technology involving neuroscience, signal detection, signal processing, pattern recognition, and so on. Brain-computer interfaces fall into two categories, invasive and noninvasive: invasive BCI mainly implants microelectrode arrays into the brain, while noninvasive BCI mainly records electroencephalogram signals (including P300, slow cortical potentials, sensorimotor rhythms, steady-state visual evoked potentials, etc.) on the scalp surface.
Path planning: one of the main research topics of motion planning. Motion planning consists of path planning and trajectory planning; the sequence of points or curves connecting the start position and the end position is called a path, and the strategy that forms the path is called path planning. Planning problems on topological point-line networks can basically be solved with path-planning methods.
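As a hedged illustration of planning on a point-line network, a breadth-first search over a small occupancy grid finds a shortest obstacle-avoiding path from a start to a goal; the grid, start and goal below are assumed toy values, not data from the patent:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking the predecessor chain backwards.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A 3x3 room with one blocked cell; plan from top-left to bottom-right.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 2))
```

An actual wheelchair planner would work on a metric map with kinematic constraints; BFS here only illustrates the graph-search core.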
The inventors found that, to enable destination selection within a real-time field of view, candidate destinations with distinct features can be identified from an AR image and the selection made through a brain-computer interface. In addition, the AR device and the electroencephalogram acquisition device can be integrated into a headband, so that the patient wears the equipment more comfortably and conveniently. The invention therefore aims to design an automatic driving wheelchair destination selection method based on AR image recognition and a brain-computer interface. The method should achieve recognition speed as fast as possible, a training and control burden on the patient as light as possible, destination types as diverse as possible, and equipment as comfortable and portable as possible. It selects the destination in real time without presetting destinations or arranging digital tags; candidate destination generation is not limited by the field of view of a fixed camera, greatly improving the diversity and convenience of destination selection, and the patient can also change the destination temporarily within the real-time field of view.
Fig. 1 is a flowchart of a method for selecting a destination of an autopilot wheelchair according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step 101, receiving an initial environment image acquired by AR equipment;
step 102, extracting regional image features from the initial environmental image;
Step 103, inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
step 104, displaying the plurality of candidate destinations in a user field of view of the AR device;
step 105, receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified first destination, wherein the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time.
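The five steps above can be sketched as a single pipeline. Every function below is a hypothetical stub standing in for the patent's actual modules (feature extraction, recognition model, AR rendering, EEG decoding); the names, data shapes and threshold are assumptions for illustration only:

```python
def select_destination(initial_image, eeg_signal):
    """Hypothetical end-to-end sketch of steps 101-105."""
    features = extract_region_features(initial_image)   # step 102
    candidates = recognize_candidates(features)         # step 103
    display_in_ar(candidates)                           # step 104
    return decode_first_gaze(eeg_signal, candidates)    # step 105

# --- illustrative stubs; a real system would replace each one ---
def extract_region_features(image):
    # Stand-in for the HSV-mask + HOG extraction described later.
    return list(image)

def recognize_candidates(features):
    # Stand-in for the image recognition model; 0.5 is an invented cutoff.
    return [f for f in features if f["score"] > 0.5]

def display_in_ar(candidates):
    # Stand-in for rendering candidates in the AR field of view.
    pass

def decode_first_gaze(eeg_signal, candidates):
    # Stand-in: the EEG decoder yields the index of the first gazed candidate.
    return candidates[eeg_signal["gazed_index"]]

image = [{"name": "door", "score": 0.9}, {"name": "wall", "score": 0.2},
         {"name": "bed", "score": 0.8}]
eeg = {"gazed_index": 1}
destination = select_destination(image, eeg)
```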
In the embodiment of the invention, the AR device and the electroencephalogram acquisition electrode allow the destination selected by the user to be identified in real time; in the identification process, no destination needs to be preset and no digital tag needs to be arranged, candidate destination generation is not limited by the field of view of a fixed camera, the diversity and convenience of destination selection are greatly improved, and the patient can also change the destination temporarily within the real-time field of view.
In step 101, an initial environment image acquired by the AR device is received. In one embodiment, the AR device is a pair of AR glasses and the electroencephalogram acquisition electrode is a dry electrode. The device, together with the AR equipment and the electroencephalogram acquisition electrode, forms an automatic driving wheelchair destination selection system, which may further comprise a flexible wireless electronic device (such as a Bluetooth module) and an electrode arranged behind the user's ear. FIG. 2 is a schematic diagram of the AR glasses, the electroencephalogram acquisition dry electrode and the headband integrated together.
AR glasses are based on augmented-reality technology: virtual information (objects, pictures, videos, sounds, etc.) is fused into the real environment, enriching the real world. Google was the first to manufacture such glasses. AR glasses can realize many functions and can be regarded as a miniature mobile phone: by tracking the eye-gaze trajectory they judge the user's current state and can start corresponding functions, display information about the road or surrounding buildings being looked at, and connect to a mobile phone. After Google Glass, augmented-reality glasses appeared continuously. Monocular glasses such as Google Glass display information on a single eye and offer navigation, messaging, telephone and video functions; because they are monocular they cannot display 3D effects, and their appearance limits the application scenarios. Binocular glasses such as Meta Glass present images to both eyes and can use binocular parallax to produce the 3D effect intended by the developer. By detecting the real scene and supplementing it with information, the wearer obtains information that cannot be obtained quickly in the real world; and because the interaction is more natural, virtual objects appear more real. Mainstream implementation techniques for augmented-reality glasses include transparent display screens, virtual retinal displays, semi-transparent beam-splitting LCD projection, ToF cameras, monochrome glass projection, and so on.
AR glasses mainly consist of a camera and a wide strip-shaped processor suspended in front of the glasses, with a built-in chip, multi-core CPU and AI engine, configured memory and flash storage, and support for WiFi and Bluetooth transmission. In the embodiment of the invention, the destination selection of the automatic driving wheelchair can be placed in the wide strip-shaped processor, realizing functional integration.
Besides AR glasses, the AR device may also be a tablet, a mobile phone, or other devices with AR functions, and related variations should fall within the protection scope of the present invention.
The electroencephalogram acquisition electrode and the AR device may communicate wirelessly over a local area network, a 4G or 5G connection, or Bluetooth; the flexible wireless electronic device (such as a Bluetooth module) can serve as the communication device.
In the embodiment of the invention, electroencephalogram signals need to be acquired, and dry electrodes can be used: for example, three elastic scalp dry electrodes fixed by an elastic headband, as shown in fig. 2. The dry electrode can contact the scalp directly through the hair and performs very well: when slight downward pressure is applied, its conductive, flexible elastomeric legs spread slightly apart and make better contact with the scalp.
When the dry electrodes are used, a reference electrode is generally required in order to obtain a reference signal. An ultrathin nano electroencephalogram acquisition electrode can be used as the reference electrode: it has a skin-like mesh structure and is produced by aerosol-jet printing, which reduces motion artifacts and the contact impedance between the skin and the electrode. The ultrathin nano electrode is generally placed behind the ear and connected to the automatic driving wheelchair destination selection device through a flexible thin-film cable.
Fig. 3 is a schematic diagram of a destination selection system for a user wearing an autopilot wheelchair according to an embodiment of the present invention, and of course, it will be understood that other wearing forms are also possible, and related variations should fall within the scope of the present invention.
In step 102, there are various methods for extracting the regional image features from the initial environmental image, and one of the embodiments is described below.
In one embodiment, extracting the regional image features from the initial environmental image includes:
clipping the initial environment image based on a preset height range of the candidate destination to obtain a clipped image;
detecting the color of each pixel point in the cut image to obtain a plurality of pixel points meeting the color requirement of the candidate destination;
connecting the plurality of pixel points meeting the color requirement of the candidate destination to obtain a plurality of connected regions;
region image features are extracted from the plurality of connected regions.
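The crop-threshold-connect steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the image is assumed to be a nested list of (H, S, V) tuples, and the HSV range is a hypothetical placeholder for one candidate-destination color.

```python
from collections import deque

# Hypothetical HSV range for one candidate-destination color (e.g. a red guideboard).
H_RANGE, S_RANGE, V_RANGE = (0, 10), (100, 255), (80, 255)

def in_color_range(pixel):
    """Check whether one (H, S, V) pixel falls inside the candidate-destination range."""
    h, s, v = pixel
    return (H_RANGE[0] <= h <= H_RANGE[1]
            and S_RANGE[0] <= s <= S_RANGE[1]
            and V_RANGE[0] <= v <= V_RANGE[1])

def connected_regions(image, row_min, row_max):
    """Crop rows [row_min, row_max), threshold by color, then group pixels
    into 4-connected regions with a breadth-first flood fill."""
    cropped = image[row_min:row_max]
    height, width = len(cropped), len(cropped[0])
    mask = [[in_color_range(cropped[r][c]) for c in range(width)] for r in range(height)]
    seen = [[False] * width for _ in range(height)]
    regions = []
    for r in range(height):
        for c in range(width):
            if mask[r][c] and not seen[r][c]:
                queue, region = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y + row_min, x))  # coordinates in the full image
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx < width and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions
```

Each returned region is a list of pixel coordinates and can then be passed to the region-filtering and HOG-extraction steps described below.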
In an embodiment, detecting a color of each pixel in the clipped image to obtain a plurality of pixels satisfying a candidate destination color requirement includes:
and in the hue saturation brightness HSV mode, determining the pixel point as the pixel point meeting the requirement of the candidate destination color if the color of each pixel point in the cut image is within the HSV threshold range corresponding to the candidate destination color.
In one embodiment, extracting region image features from the plurality of connected regions includes:
performing region filtering on the connected regions according to candidate destination image features, wherein the candidate destination image features comprise one or any combination of size features, aspect ratio features, rotation angle features and color distribution features;
and extracting region image features from the filtered connected regions, wherein the region image features are histogram of oriented gradients (HOG) features.
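A minimal sketch of the region-filtering step, using only the size and aspect-ratio features (the rotation-angle and color-distribution criteria are omitted); all thresholds are hypothetical placeholders, not values from the patent:

```python
def bounding_box(region):
    """Axis-aligned bounding box of a region given as a list of (y, x) pixels."""
    ys = [y for y, _ in region]
    xs = [x for _, x in region]
    return min(ys), min(xs), max(ys), max(xs)

def filter_regions(regions, min_area=20, max_area=5000, min_aspect=0.3, max_aspect=3.0):
    """Keep regions whose pixel count and bounding-box aspect ratio are
    plausible for a candidate destination such as a guideboard."""
    kept = []
    for region in regions:
        if not (min_area <= len(region) <= max_area):
            continue  # too small (noise) or too large (background)
        y0, x0, y1, x1 = bounding_box(region)
        aspect = (x1 - x0 + 1) / (y1 - y0 + 1)  # width / height
        if min_aspect <= aspect <= max_aspect:
            kept.append(region)
    return kept
```

Only the surviving regions would then be cropped and handed to the HOG extractor and the image recognition model.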
In an embodiment, the image recognition model is a support vector machine, and the training method of the support vector machine is as follows:
Obtaining a plurality of destination area images;
extracting histogram of oriented gradients (HOG) features from the destination area images;
and training the support vector machine with the HOG features to obtain a trained support vector machine.
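The patent does not fix a particular SVM implementation. As an illustration of the training step, the following is a minimal linear SVM trained by Pegasos sub-gradient descent (bias-free, as in the classic Pegasos formulation) on toy feature vectors standing in for HOG features; in practice an off-the-shelf SVM library would be used.

```python
import random

def train_linear_svm(features, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM via the Pegasos sub-gradient method.
    `labels` are +1 / -1; `lam` is the regularization strength."""
    rng = random.Random(seed)
    w = [0.0] * len(features[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(features)), len(features)):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            x, y = features[i], labels[i]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss violation: move toward the sample
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    """Sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Here each feature vector would be the HOG descriptor of one destination-area image, with label +1 for a genuine destination and -1 otherwise.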
In the above embodiment, the destination area may be a motor-vehicle-free wheelchair driving scene such as a park or a residential community, which contains candidate destinations with distinctive features (such as a guideboard or a tree).
In an embodiment, the first electroencephalogram signal is an SSVEP signal. Note that the EEG signal is collected by the electroencephalogram acquisition dry electrode, and since the embodiment of the present invention adopts the SSVEP selection paradigm, an SSVEP signal is obtained. When receiving a visual stimulus at a fixed frequency, the visual cortex of the brain produces a continuous response at the fundamental frequency or harmonics of the stimulus frequency. This response is called the steady-state visual evoked potential (SSVEP), and it can be reliably applied to brain-computer interface (BCI) systems. Compared with BCIs based on other signals (such as P300 or motor imagery), SSVEP-BCI generally offers a higher information transfer rate, a simpler system and experimental design, and simpler signal generation and acquisition; the signal has an obvious spectral peak, strong anti-interference capability, and can be induced in most users without training. SSVEP can generally be evoked by flickering light, pattern reversal, or steady-state oscillatory motion (such as Newton's rings), and instructions are encoded by frequency-division multiple access. Although vision-dependent, it is the most practical signal type among brain-computer interface systems.
Among non-invasive BCI systems, the SSVEP system is currently the fastest, with a peak information transfer rate approaching 200 bit/min. The user may face multiple visual stimuli simultaneously while using the system. All visual stimuli flicker at specific, mutually distinct frequencies (> 6 Hz), and each stimulus represents a specific instruction. When the user wants to output an instruction, he or she only needs to gaze at the corresponding stimulus. The system determines which stimulus the user is gazing at by decoding the oscillation frequency of the primary visual cortex and finally converts it into the corresponding machine instruction. The SSVEP paradigm can thus be used to make candidate destinations flicker at different frequencies in order to identify the destination selected by the user.
In an embodiment, displaying the plurality of candidate destination areas on the AR device comprises:
the plurality of candidate destination areas are displayed on the AR device at a predetermined frequency, wherein the frequency of each candidate destination is different.
For example, in the AR glasses projection the candidate destinations may flicker under the SSVEP paradigm at frequencies forming an arithmetic series in the 8-20 Hz range (the spacing varying with the number of candidate targets). All candidate destinations flicker simultaneously during stimulation; one stimulus lasts 3 s, the brightness of each target varies sinusoidally at its predetermined frequency, and the interval between stimuli is 0.5 s. Fig. 4 is a schematic diagram of a plurality of candidate destination areas displayed on an AR device at predetermined frequencies in an embodiment of the present invention.
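The frequency assignment and the sinusoidal luminance modulation described above can be sketched in a few lines. The base brightness and modulation depth below are illustrative assumptions, not values specified by the patent:

```python
import math

def stimulus_frequencies(n_targets, f_min=8.0, f_max=20.0):
    """Evenly spaced flicker frequencies (an arithmetic series) in the
    8-20 Hz band; the spacing shrinks as the number of targets grows."""
    if n_targets == 1:
        return [f_min]
    step = (f_max - f_min) / (n_targets - 1)
    return [f_min + i * step for i in range(n_targets)]

def brightness(freq, t, base=0.5, depth=0.5):
    """Sinusoidal luminance of one target at time t (seconds),
    oscillating around `base` at its assigned frequency."""
    return base + depth * math.sin(2 * math.pi * freq * t)
```

With four candidate destinations this yields flicker frequencies of 8, 12, 16 and 20 Hz, each target's brightness tracing its own sinusoid during the 3 s stimulus.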
In an embodiment, receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode, obtaining an identified first destination, includes:
receiving a first SSVEP signal in a set time period acquired by an electroencephalogram acquisition electrode;
performing band-pass filtering processing on the first SSVEP signal;
applying a common spatial pattern (CSP) projection to the band-pass-filtered first SSVEP signal to obtain a feature vector;
inputting the feature vector into a TRCA classifier to obtain the frequency corresponding to the first SSVEP signal;
and determining a first destination corresponding to the frequency.
In the above embodiment, the set duration may be 200 ms, and the band-pass filtering range may be 8 to 30 Hz. Since each destination is displayed on the AR device blinking at a predetermined frequency, the first destination can be determined from the frequency corresponding to the obtained first SSVEP signal.
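The patent's pipeline uses band-pass filtering, a CSP projection and a TRCA classifier. As a greatly simplified stand-in for that frequency-recognition step, the sketch below scores each candidate flicker frequency by its narrow-band power in the EEG segment (via the Goertzel algorithm) and picks the strongest one; it illustrates the principle only, not the patented classifier.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at one target frequency via the Goertzel algorithm."""
    n = len(samples)
    k = round(freq * n / fs)          # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode_frequency(samples, fs, candidate_freqs):
    """Pick the candidate flicker frequency with the strongest narrow-band power."""
    return max(candidate_freqs, key=lambda f: goertzel_power(samples, fs, f))
```

The decoded frequency is then mapped back to the destination that was flickering at that rate.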
In an embodiment, after obtaining the identified first destination, further comprising:
displaying the first destination in a user field of view of the AR equipment according to a first preset mode;
receiving and identifying a second electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified second destination, wherein the second electroencephalogram signal is generated when a user gazes at a candidate destination for the second time;
And when the first destination and the second destination are consistent, determining that the first destination is the destination selected by the user, and displaying the destination selected by the user in the user field of view of the AR equipment according to a second preset mode.
And when the first destination and the second destination are inconsistent, displaying the second destination in the user field of view of the AR equipment according to a third preset mode.
The above process is a secondary confirmation process. Fig. 5 is a schematic diagram of displaying the first destination in the user field of view of the AR device in the first preset manner, where the first preset manner is to frame the first destination with a yellow box. Fig. 6 is a schematic diagram of the user field of view of the AR device when the first destination is consistent with the second destination in the embodiment of the present invention; the second preset manner is to display a green box, indicating that the secondary confirmation succeeded. Fig. 7 is a schematic diagram of the user field of view of the AR device when the first destination is inconsistent with the second destination; the third preset manner is to display a red box. After the red box is displayed, the secondary confirmation has failed, and destination selection can continue in the above manner.
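One reading of this secondary confirmation loop (steps 809-815 of Fig. 8) can be sketched as a small control routine. Here `read_gazed_destination` is a hypothetical callable standing in for the SSVEP decoding pipeline, and `max_attempts` is an assumed safety bound not specified in the patent:

```python
FIRST_MARK, CONFIRM_MARK, RETRY_MARK = "yellow", "green", "red"

def select_destination(read_gazed_destination, max_attempts=10):
    """Repeat gaze decoding until the second selection matches the first.

    Returns (confirmed_destination_or_None, list of (destination, frame_colour)
    pairs describing what was drawn in the AR field of view)."""
    first = read_gazed_destination()           # first gaze
    frames = [(first, FIRST_MARK)]             # yellow box around first pick
    for _ in range(max_attempts):
        second = read_gazed_destination()      # second gaze
        if second == first:
            frames.append((first, CONFIRM_MARK))   # green box: confirmed
            return first, frames
        frames.append((second, RETRY_MARK))        # red box: mismatch, retry
    return None, frames
```

On success the confirmed destination would then trigger the destination determination signal sent to the autopilot wheelchair.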
In an embodiment, the method further comprises:
Generating a destination determination signal after determining that the first destination is a user-selected destination;
and sending the destination determining signal to an autopilot wheelchair, wherein the autopilot wheelchair performs destination path planning based on the destination determining signal.
Based on the above embodiments, the present invention proposes the following embodiment to explain the detailed flow of the method for selecting the destination of the wheelchair, and fig. 8 is a detailed flow chart of the method for selecting the destination of the wheelchair according to the embodiment of the present invention, as shown in fig. 8, including:
step 801, receiving an initial environment image acquired by an AR device;
step 802, clipping an initial environment image based on a preset height range of a candidate destination to obtain a clipped image;
step 803, detecting the color of each pixel point in the cut image to obtain a plurality of pixel points meeting the color requirement of the candidate destination;
step 804, connecting a plurality of pixel points meeting the color requirement of the candidate destination to obtain a plurality of connected areas;
step 805, performing region filtering on the connected region according to the candidate destination image features;
step 806, extracting region image features from the filtered connected regions;
Step 807, inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
step 808, displaying a plurality of candidate destination areas on the AR device according to a predetermined frequency, wherein the frequency of each candidate destination is different;
step 809, receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified first destination, wherein the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time;
step 810, displaying the first destination in a first preset manner in a user field of view of the AR device;
step 811, receiving and identifying a second electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified second destination, wherein the second electroencephalogram signal is generated when a user gazes at a candidate destination for the second time;
step 812, judging whether the first destination is consistent with the second destination, and if so, going to step 813, otherwise going to step 815;
step 813, determining that the first destination is a destination selected by the user, and displaying the destination selected by the user in a user field of view of the AR device according to a second preset mode;
Step 814, generating a destination determining signal, and transmitting the destination determining signal to an autopilot wheelchair, wherein the autopilot wheelchair performs destination path planning based on the destination determining signal;
step 815, displaying the second destination in the user field of view of the AR device according to a third preset manner; go to step 811.
Of course, it is to be understood that other variations of the above detailed procedures are also possible, and all related variations should fall within the protection scope of the present invention.
The method provided by the embodiment of the invention is applicable to users who are conscious, have normal visual function, have lost the use of their hands and feet as well as language function, and whose condition is stable.
In summary, in the method provided by the embodiment of the present invention, an initial environment image acquired by an AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user gazes at a candidate destination for the first time. In this process, the AR device and the electroencephalogram acquisition electrode are adopted, so the destination selected by the user can be identified in real time; during identification, there is no need to preset destinations and assign digital labels, and candidate destination generation is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also change the destination temporarily within the real-time field of view.
The embodiment of the invention also provides an autopilot wheelchair destination selection device, the principle of which is similar to that of the autopilot wheelchair destination selection method, and details are not repeated here.
Fig. 9 is a schematic view of an apparatus for selecting a destination of an autopilot wheelchair in accordance with the present invention, as shown in fig. 9, the apparatus comprising:
the image receiving module 901 is used for receiving an initial environment image acquired by the AR equipment;
a feature extraction module 902, configured to extract a regional image feature from the initial environmental image;
a candidate destination obtaining module 903, configured to input the extracted regional image features to an image recognition model, to obtain a plurality of candidate destinations;
a display module 904 for displaying the plurality of candidate destinations in a user field of view of the AR device;
the identification module 905 is configured to receive and identify a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode, and obtain an identified first destination, where the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time.
In one embodiment, the feature extraction module 902 is specifically configured to:
clipping the initial environment image based on a preset height range of the candidate destination to obtain a clipped image;
Detecting the color of each pixel point in the cut image to obtain a plurality of pixel points meeting the color requirement of the candidate destination;
connecting the plurality of pixel points meeting the color requirement of the candidate destination to obtain a plurality of connected regions;
region image features are extracted from the plurality of connected regions.
In one embodiment, the feature extraction module 902 is specifically configured to:
in the hue saturation brightness HSV mode, for the color of each pixel point in the cut image, if the color of the pixel point is within the HSV threshold range corresponding to the candidate destination color, determining the pixel point as the pixel point meeting the requirement of the candidate destination color.
In one embodiment, the feature extraction module 902 is specifically configured to:
performing region filtering on the connected regions according to candidate destination image features, wherein the candidate destination image features comprise one or any combination of size features, aspect ratio features, rotation angle features and color distribution features;
and extracting region image features from the filtered connected regions, wherein the region image features are direction gradient histogram HOG features.
In one embodiment, the display module 904 is specifically configured to:
The plurality of candidate destination areas are displayed on the AR device at a predetermined frequency, wherein the frequency of each candidate destination is different.
In one embodiment, the display module 904 is specifically configured to: after the identified first destination is obtained, displaying the first destination in a user field of view of the AR device in a first preset manner;
the identification module 905 is specifically configured to: receiving and identifying a second electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified second destination, wherein the second electroencephalogram signal is generated when a user gazes at a candidate destination for the second time; determining the first destination as the destination selected by the user when the first destination and the second destination are consistent;
the display module 904 is specifically configured to: displaying the destination selected by the user in the user field of view of the AR equipment according to a second preset mode; and when the first destination and the second destination are inconsistent, displaying the second destination in the user field of view of the AR equipment according to a third preset mode.
In summary, in the apparatus provided by the embodiment of the present invention, an initial environment image acquired by an AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user gazes at a candidate destination for the first time. In this process, the AR device and the electroencephalogram acquisition electrode are adopted, so the destination selected by the user can be identified in real time; during identification, there is no need to preset destinations and assign digital labels, and candidate destination generation is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also change the destination temporarily within the real-time field of view.
As shown in fig. 10, an embodiment of the present invention further proposes a destination selection system for an autopilot wheelchair, including: an AR apparatus 1001, an electroencephalogram acquisition electrode 1002, and the above-described automatic wheelchair destination selection apparatus 1003, in which,
an AR apparatus 1001 for acquiring an initial environment image and transmitting to an autopilot wheelchair destination selection device 1003;
an electroencephalogram acquisition electrode 1002 for acquiring a first electroencephalogram signal of the user and transmitting to the automatic wheelchair destination selection apparatus 1003.
The principle and function of the AR device, the electroencephalogram acquisition electrode and the autopilot wheelchair destination selection device are described above and are not repeated here.
In summary, in the system provided by the embodiment of the present invention, an initial environment image acquired by an AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user gazes at a candidate destination for the first time. In this process, the AR device and the electroencephalogram acquisition electrode are adopted, so the destination selected by the user can be identified in real time; during identification, there is no need to preset destinations and assign digital labels, and candidate destination generation is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also change the destination temporarily within the real-time field of view.
The embodiment of the present application further provides a computer device, and fig. 11 is a schematic diagram of the computer device in the embodiment of the present invention, where the computer device can implement all the steps in the method for selecting a destination of an autopilot wheelchair in the foregoing embodiment, and the computer device specifically includes the following contents:
a processor 1101, a memory 1102, a communication interface (Communications Interface) 1103 and a communication bus 1104;
wherein the processor 1101, the memory 1102, and the communication interface 1103 accomplish the communication with each other through the communication bus 1104; the communication interface 1103 is configured to implement information transmission among related devices such as a server device, a detection device, and a user device;
the processor 1101 is configured to invoke a computer program in the memory 1102, which when executed implements all the steps in the method for selecting a destination for an autonomous wheelchair in the above-described embodiment.
Embodiments of the present application also provide a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements all the steps of the method for automatically selecting a wheelchair destination in the above embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing describes specific embodiments for the purpose of illustrating the principles of the invention and is not intended to limit the protection scope of the invention; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (10)

1. A method of automatically driving a wheelchair destination selection, comprising:
receiving an initial environment image acquired by AR equipment;
extracting regional image features from the initial environmental image;
inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
displaying the plurality of candidate destinations in a user field of view of the AR device;
receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified first destination, wherein the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time;
the AR equipment and the electroencephalogram acquisition equipment are integrated in one headband, the AR equipment is AR glasses, and the electroencephalogram acquisition electrode is an electroencephalogram acquisition dry electrode;
the first electroencephalogram signal is an SSVEP signal, and an SSVEP selection paradigm is adopted;
a flexible wireless electronic device and an electrode positioned behind the ear of the user;
the electroencephalogram acquisition dry electrode adopts three elastic scalp dry electrodes and is fixed by an elastic headband; the dry electrode makes direct contact with the scalp through the hair; when slight downward pressure is applied, the conductive flexible elastomer legs of the electroencephalogram acquisition dry electrode spread slightly apart to contact the scalp;
An electrode placed behind the user's ear is used to collect the reference electrode.
2. The method of automatically driving a wheelchair destination selection of claim 1, wherein extracting regional image features from the initial environmental image comprises:
clipping the initial environment image based on a preset height range of the candidate destination to obtain a clipped image;
detecting the color of each pixel point in the cut image to obtain a plurality of pixel points meeting the color requirement of the candidate destination;
connecting the plurality of pixel points meeting the color requirements of the candidate destinations to obtain a plurality of connected regions;
region image features are extracted from the plurality of connected regions.
3. The method for selecting a destination of an autonomous wheelchair according to claim 2, wherein detecting the color of each pixel in the clipped image to obtain a plurality of pixels satisfying the candidate destination color requirement comprises:
in the hue saturation brightness HSV mode, for the color of each pixel point in the cut image, if the color of the pixel point is within the HSV threshold range corresponding to the candidate destination color, determining the pixel point as the pixel point meeting the requirement of the candidate destination color.
4. The autopilot wheelchair destination selection method of claim 2 wherein extracting region image features from the plurality of connected regions comprises:
performing region filtering on the connected regions according to candidate destination image features, wherein the candidate destination image features comprise one or any combination of size features, aspect ratio features, rotation angle features and color distribution features;
and extracting region image features from the filtered connected regions, wherein the region image features are direction gradient histogram HOG features.
5. The autopilot wheelchair destination selection method of claim 1 wherein displaying the plurality of candidate destination areas on an AR device comprises:
the plurality of candidate destination areas are displayed on the AR device at a predetermined frequency, wherein the frequency of each candidate destination is different.
6. The method of automatically driving a wheelchair destination selection of claim 1, further comprising, after obtaining the identified first destination:
displaying the first destination in a user field of view of the AR equipment according to a first preset mode;
receiving and identifying a second electroencephalogram signal acquired by an electroencephalogram acquisition electrode, and obtaining an identified second destination, wherein the second electroencephalogram signal is generated when a user gazes at a candidate destination for the second time;
When the first destination is consistent with the second destination, determining that the first destination is a destination selected by a user, and displaying the destination selected by the user in a user field of view of the AR equipment in a second preset mode;
and when the first destination and the second destination are inconsistent, displaying the second destination in the user field of view of the AR equipment according to a third preset mode.
7. An autopilot wheelchair destination selection apparatus comprising:
the image receiving module is used for receiving an initial environment image acquired by the AR equipment;
the feature extraction module is used for extracting regional image features from the initial environment image;
the candidate destination obtaining module is used for inputting the extracted regional image characteristics into the image recognition model to obtain a plurality of candidate destinations;
a display module for displaying the plurality of candidate destinations in a user field of view of the AR device;
the identification module is used for receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified first destination, and the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time;
the AR equipment and the electroencephalogram acquisition equipment are integrated in one headband, the AR equipment is AR glasses, and the electroencephalogram acquisition electrode is an electroencephalogram acquisition dry electrode;
The first electroencephalogram signal is an SSVEP signal, and an SSVEP selection paradigm is adopted;
a flexible wireless electronic device and an electrode positioned behind the ear of the user;
the electroencephalogram acquisition dry electrode adopts three elastic scalp dry electrodes and is fixed by an elastic headband; the dry electrode makes direct contact with the scalp through the hair; when slight downward pressure is applied, the conductive flexible elastomer legs of the electroencephalogram acquisition dry electrode spread slightly apart to contact the scalp;
an electrode placed behind the user's ear is used to collect the reference electrode.
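The claim specifies an SSVEP selection paradigm: each candidate destination flickers at a distinct frequency, and the gazed-at destination is identified from the frequency dominating the EEG. A standard (though not necessarily the patented) decoder for this is canonical correlation analysis (CCA) against sine/cosine reference templates. The sketch below is a self-contained CCA classifier under those assumptions; channel counts, harmonic count, and frequency set are illustrative.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between two multichannel time series.

    X: (channels, samples) EEG window; Y: (2*harmonics, samples) references.
    Uses the QR-based formulation: singular values of Qx^T Qy are the
    canonical correlations.
    """
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(X.T)
    Qy, _ = np.linalg.qr(Y.T)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_classify(eeg, fs, stim_freqs, harmonics=2):
    """Return the stimulation frequency whose references best match the EEG.

    eeg: (channels, samples) window; fs: sampling rate in Hz;
    stim_freqs: candidate flicker frequencies, one per destination.
    """
    t = np.arange(eeg.shape[1]) / fs
    scores = []
    for f in stim_freqs:
        refs = []
        for h in range(1, harmonics + 1):  # fundamental plus harmonics
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        scores.append(cca_corr(eeg, np.asarray(refs)))
    return stim_freqs[int(np.argmax(scores))], scores
```

Mapping the winning frequency back to the candidate destination it tags yields the "identified first destination" of the claim.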
8. An automatic driving wheelchair destination selection system, comprising: an AR device, an electroencephalogram acquisition electrode, and the automatic driving wheelchair destination selection apparatus according to claim 7, wherein
the AR device is configured to acquire an initial environment image and send the initial environment image to the automatic driving wheelchair destination selection apparatus;
and the electroencephalogram acquisition electrode is configured to acquire a first electroencephalogram signal of a user and send the first electroencephalogram signal to the automatic driving wheelchair destination selection apparatus.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 6 when executing the computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program for executing the method of any one of claims 1 to 6.
CN202110446334.1A 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system Active CN113138668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110446334.1A CN113138668B (en) 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110446334.1A CN113138668B (en) 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system

Publications (2)

Publication Number Publication Date
CN113138668A CN113138668A (en) 2021-07-20
CN113138668B true CN113138668B (en) 2023-07-18

Family

ID=76811901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110446334.1A Active CN113138668B (en) 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system

Country Status (1)

Country Link
CN (1) CN113138668B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115192045B (en) * 2022-09-16 2023-01-31 季华实验室 Destination identification/wheelchair control method, device, electronic device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN209574688U (en) * 2019-01-16 2019-11-05 北京布润科技有限责任公司 A kind of brain wave acquisition cap
CN111247505A (en) * 2017-10-27 2020-06-05 索尼公司 Information processing device, information processing method, program, and information processing system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7551952B2 (en) * 2005-10-26 2009-06-23 Sam Technology, Inc. EEG electrode headset
CN103083014B (en) * 2013-01-08 2015-04-29 北京理工大学 Method controlling vehicle by electroencephalogram and intelligent vehicle using method
US9031631B2 (en) * 2013-01-31 2015-05-12 The Hong Kong Polytechnic University Brain biofeedback device with radially adjustable electrodes
CN104083258B (en) * 2014-06-17 2016-10-05 华南理工大学 Method for controlling an intelligent wheelchair based on a brain-computer interface and automatic driving technology
CN108090459B (en) * 2017-12-29 2020-07-17 北京华航无线电测量研究所 Traffic sign detection and identification method suitable for vehicle-mounted vision system
CN111694425A (en) * 2020-04-27 2020-09-22 中国电子科技集团公司第二十七研究所 Target identification method and system based on AR-SSVEP
CN112223288B (en) * 2020-10-09 2021-09-14 南开大学 Visual fusion service robot control method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111247505A (en) * 2017-10-27 2020-06-05 索尼公司 Information processing device, information processing method, program, and information processing system
CN209574688U (en) * 2019-01-16 2019-11-05 北京布润科技有限责任公司 A kind of brain wave acquisition cap

Also Published As

Publication number Publication date
CN113138668A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
US10667697B2 (en) Identification of posture-related syncope using head-mounted sensors
US10376153B2 (en) Head mounted system to collect facial expressions
Al-Rahayfeh et al. Eye tracking and head movement detection: A state-of-art survey
Hammoud Passive eye monitoring: Algorithms, applications and experiments
CN112970056A (en) Human-computer interface using high speed and accurate user interaction tracking
CN112034977A (en) Method for MR intelligent glasses content interaction, information input and recommendation technology application
KR102029219B1 (en) Method for recogniging user intention by estimating brain signals, and brain-computer interface apparatus based on head mounted display implementing the method
KR20120060978A (en) Method and Apparatus for 3D Human-Computer Interaction based on Eye Tracking
US11467662B1 (en) Identifying object of user focus with eye tracking and visually evoked potentials
CN111728608A (en) Augmented reality-based electroencephalogram signal analysis method, device, medium and equipment
KR20200093235A (en) Apparatus and method for generating highlight video using biological data
CN110688910A (en) Method for realizing wearable human body basic posture recognition
Hu et al. StereoPilot: A wearable target location system for blind and visually impaired using spatial audio rendering
CN113190114A (en) Virtual scene experience system and method with haptic simulation and emotional perception
CN109074487A (en) It is read scene cut using neurology into semantic component
CN113138668B (en) Automatic driving wheelchair destination selection method, device and system
Kumar et al. A novel approach to video-based pupil tracking
CN114003129A (en) Idea control virtual-real fusion feedback method based on non-invasive brain-computer interface
Shi et al. Indoor space target searching based on EEG and EOG for UAV
KR101955293B1 (en) Visual fatigue analysis apparatus and method thereof
CN115357113A (en) SSVEP brain-computer interface stimulation modulation and decoding method under dynamic background
WO2021236738A1 (en) Systems and methods for authenticating a user of a head-mounted display
Olszewska Human computer interaction feedback based-on data visualization using MVAR and NN
KR100651104B1 (en) Gaze-based computer interface apparatus and method of using the same
US20230123330A1 (en) Interaction training system for autistic patient using image warping, method for training image warping model, and computer readable storage medium including executions causing processor to perform same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant