CN113138668A - Method, device and system for selecting destination of automatic wheelchair driving - Google Patents


Info

Publication number
CN113138668A
Authority
CN
China
Prior art keywords
destination
candidate
image
wheelchair
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110446334.1A
Other languages
Chinese (zh)
Other versions
CN113138668B (en)
Inventor
郑晓宇
马其远
沈晓梅
李勇
吴剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Tsinghua University
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Shenzhen International Graduate School of Tsinghua University
Priority to CN202110446334.1A
Publication of CN113138668A
Application granted
Publication of CN113138668B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/04 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs, motor-driven
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/10 Parts, details or accessories
    • A61G5/1051 Arrangements for steering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2203/00 General characteristics of devices
    • A61G2203/10 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Multimedia (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a method, a device and a system for selecting a destination for an automatic driving wheelchair, wherein the method comprises the following steps: receiving an initial environment image collected by an AR device; extracting regional image features from the initial environment image; inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations; displaying the plurality of candidate destinations in the user's field of view on the AR device; and receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when the user first gazes at one of the candidate destinations. The invention can select the destination of the automatic driving wheelchair in real time, with high diversity and convenience of destination selection.

Description

Method, device and system for selecting destination of automatic wheelchair driving
Technical Field
The invention relates to the technical field of autonomous wheelchair control and brain-computer interfaces, and in particular to a method, a device and a system for selecting a destination for an automatic driving wheelchair.
Background
Currently, destination selection for automatic driving wheelchairs is mainly limited to preset destinations or destinations within the view of a fixed camera, and the candidate destinations cannot change synchronously as the wheelchair moves.
For example, in the prior art, destinations and the wheelchair position are marked with digital tags: a digital tag is placed at each navigation destination and on the wheelchair, and the tag content includes the name of the corresponding location or wheelchair, its unique ID, its two-dimensional coordinates, and the wheelchair's executable operation information and three-dimensional posture information. An environment perception system identifies the digital tags, extracts the tag content of each navigation destination, and obtains information on the wheelchair's movable area. In February 2015, Kanazawa University in Japan built a map with a plurality of destinations in a specific facility, each destination having a corresponding number; when the user "looked at" a number, a brain-wave sensing device could read it, and a computer program could drive the wheelchair to that destination while avoiding obstacles. The disadvantages are that the patient can only choose among preset destinations, and the target location cannot be changed in real time according to the situation.
There is also an electroencephalogram acquisition system that records the user's scalp EEG signals during motor imagery of left- and right-hand movement: to send a left-turn command to the wheelchair, the user imagines moving the left hand, and for a right-turn command, the right hand. Imagined left- and right-hand movements cause the event-related desynchronization and event-related synchronization phenomena characteristic of scalp EEG signals. The electric wheelchair is provided with left and right wheel motors that respectively drive the left and right rear wheels and thereby determine the wheelchair's direction of motion; the running states of the two motors are controlled by four connected control voltages. In July 2009, Toyota Motor Corporation of Japan announced the first successful development of a brain-controlled wheelchair: brain waves are analyzed by a signal processing system on the wheelchair to control forward motion, reversing, turning, and so on, and the time from thought to wheelchair response was shortened to only 125 milliseconds. To stop the wheelchair, however, the user must raise one cheek so that a sensor placed on the face transmits a stop signal. A student innovation team at a university in Xi'an has also developed a brain-controlled wheelchair that can be driven by thought: as long as the patient wears a sensor cap and carries a tablet computer, the patient can control the electric wheelchair by thought alone.
After the brain-wave signals are collected by the electrodes, they are matched with the user's visual system: when the patient looks at patterns with different orientations on a computer screen, the wheelchair moves left, right, forward, or backward, so the wheelchair can be controlled through combined visual and brain-wave commands. The disadvantage is that to go anywhere, the user's brain must issue commands continuously, which is cumbersome. In addition, motor-imagery control commands are limited in number and must be issued continuously, creating a heavy mental burden for disabled users. Because of the instability of brain signals, existing technology cannot achieve the same information transfer rate as a wheelchair joystick, let alone the joystick's control capability. Moreover, many people still cannot produce clearly distinguishable control signals even after long motor-imagery training.
Therefore, there is a need for an efficient and convenient method of selecting a destination in real time.
Disclosure of Invention
The embodiment of the invention provides a method for selecting a destination of an automatic driving wheelchair, which is used for selecting the destination of the automatic driving wheelchair in real time, and has high diversity and convenience in destination selection, and the method comprises the following steps:
receiving an initial environment image collected by AR equipment;
extracting regional image features from the initial environment image;
inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
displaying the plurality of candidate destinations in a user field of view of the AR device;
and receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when the user first gazes at one of the candidate destinations.
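As an illustrative sketch only (all function names are hypothetical stand-ins, and the AR capture, image recognition, and EEG decoding stages are stubbed rather than taken from the patent), the five method steps above can be outlined as a single pipeline:

```python
# Minimal sketch of the five-step destination-selection loop.
# Every component function below is a hypothetical stub standing in
# for the real AR, image-recognition, and EEG-decoding stages.

def extract_region_features(image):
    # Step 2: crop, color-threshold, and connect regions (stubbed).
    return [{"region": i, "hog": [0.0] * 9} for i in range(3)]

def recognize_candidates(features):
    # Step 3: an image recognition model maps features to candidate labels (stubbed).
    return [f"candidate_{f['region']}" for f in features]

def display_candidates(candidates):
    # Step 4: render candidate markers in the AR field of view (stubbed).
    return {c: f"marker_{i}" for i, c in enumerate(candidates)}

def decode_gazed_destination(eeg_signal, candidates):
    # Step 5: decode which candidate the user's first gaze selected (stubbed).
    return candidates[eeg_signal % len(candidates)]

def select_destination(initial_image, eeg_signal):
    features = extract_region_features(initial_image)        # step 2
    candidates = recognize_candidates(features)              # step 3
    display_candidates(candidates)                           # step 4
    return decode_gazed_destination(eeg_signal, candidates)  # step 5

first_destination = select_destination(initial_image=None, eeg_signal=1)
print(first_destination)  # candidate_1
```

The sketch only shows the data flow between the five steps; the actual embodiments replace each stub with the components described below.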
The embodiment of the invention provides a destination selection device of an automatic driving wheelchair, which is used for selecting the destination of the automatic driving wheelchair in real time, and the destination selection has high diversity and convenience, and the device comprises:
the image receiving module is used for receiving an initial environment image collected by the AR equipment;
the characteristic extraction module is used for extracting regional image characteristics from the initial environment image;
a candidate destination obtaining module, configured to input the extracted regional image features to an image recognition model, and obtain multiple candidate destinations;
a display module to display the plurality of candidate destinations in a user field of view of the AR device;
the identification module is used for receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when the user first gazes at one of the candidate destinations.
The embodiment of the invention provides a destination selection system of an automatic driving wheelchair, which is used for selecting the destination of the automatic driving wheelchair in real time, and the destination selection has high diversity and convenience, and the system comprises:
AR equipment, an electroencephalogram acquisition electrode and the automatic driving wheelchair destination selection device, wherein,
the AR equipment is used for acquiring an initial environment image and sending the initial environment image to the automatic driving wheelchair destination selection device;
the electroencephalogram acquisition electrode is used for acquiring a first electroencephalogram signal of a user and sending the first electroencephalogram signal to the automatic driving wheelchair destination selection device.
The embodiment of the invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above automatic driving wheelchair destination selection method when executing the computer program.
Embodiments of the present invention further provide a computer-readable storage medium storing a computer program for executing the above-mentioned method for selecting a destination of an automatic wheelchair.
In the embodiment of the invention, an initial environment image collected by the AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user's field of view on the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, wherein the first electroencephalogram signal is generated when the user first gazes at one of the candidate destinations. In this process, the AR device and the electroencephalogram acquisition electrode are adopted, so the destination selected by the user can be identified in real time. In the identification process, there is no need to preset destinations or place digital tags, and the generation of candidate destinations is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also make temporary destination changes within the real-time field of view.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a flow chart of a method for automatic wheelchair destination selection in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the integration of AR glasses, dry EEG electrodes and a headband in an embodiment of the invention;
FIG. 3 is a schematic view of a user wearing an automatic wheelchair destination selection system in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of multiple candidate destination areas displayed on an AR device according to a predetermined frequency in an embodiment of the present invention;
FIG. 5 is a diagram illustrating a first destination displayed in a first predetermined manner in the user's field of view on an AR device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a display of a user's view of an AR device when a first destination and a second destination coincide in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a display of a user's field of view of an AR device when a first destination and a second destination do not coincide in an embodiment of the present invention;
FIG. 8 is a detailed flow chart of a method for automated wheelchair destination selection in an embodiment of the present invention;
FIG. 9 is a schematic view of an automatic wheelchair destination selection apparatus in an embodiment of the present invention;
FIG. 10 is a schematic view of an automated wheelchair destination selection system in an embodiment of the present invention;
FIG. 11 is a diagram of a computer device in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
In the description of the present specification, the terms "comprising," "including," "having," "containing," and the like are used in an open-ended fashion, i.e., to mean including, but not limited to. Reference to the description of the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the embodiments is for illustrative purposes to illustrate the implementation of the present application, and the sequence of steps is not limited and can be adjusted as needed.
First, abbreviations and key terms involved in the embodiments of the present invention are defined.
Image processing technique: the technique of analyzing an image with a computer to achieve a desired result. A digital image is a large two-dimensional array of elements called pixels, whose values are called gray levels; such images are captured by industrial cameras, video cameras, scanners, and so on. Image processing techniques generally comprise three parts: image compression; enhancement and restoration; and matching, description, and recognition.
Image features: mainly the color features, texture features, shape features, and spatial relationship features of an image. A color feature is a global feature describing the surface properties of the scene corresponding to an image or image region; texture features are likewise global features describing those surface properties. Shape features are of two types: contour features, which concern the outer boundary of an object, and region features, which relate to the entire shape region. Spatial relationship features refer to the mutual spatial positions or relative directional relationships among multiple targets segmented from an image; these relationships can be divided into connection/adjacency, overlap/occlusion, inclusion/containment, and so on.
HSV (Hue, Saturation, Value) color model: a color space, also known as the hexcone model, created by A. R. Smith in 1978 based on the intuitive characteristics of color. The parameters of a color in this model are hue (H), saturation (S), and value (V).
Histogram of Oriented Gradients (HOG): a feature descriptor used for object detection in computer vision and image processing, which computes statistics of the orientations of local image gradients. It has many similarities with edge orientation histograms, scale-invariant feature transform, and shape context methods; the differences are that a HOG descriptor is computed on a dense grid of uniformly sized cells and, to improve performance, uses overlapping local contrast normalization.
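The core of the HOG descriptor, a per-cell histogram of gradient orientations weighted by gradient magnitude, can be sketched as follows (a simplified illustration without block normalization; cell size and bin count are illustrative choices, not values from the patent):

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-orientation gradient histogram for one cell.

    `cell` is a 2-D list of gray values; gradients are taken with
    simple central differences on the interior pixels, and each
    pixel votes into an orientation bin with its gradient magnitude.
    """
    hist = [0.0] * n_bins
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned 0-180 degrees
            hist[int(ang // (180.0 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge: all gradient energy falls in the first (0 degree) bin.
cell = [[0, 0, 9, 9]] * 4
hist = hog_cell_histogram(cell)
print(hist[0], sum(hist[1:]))  # 36.0 0.0
```

A full HOG descriptor would tile the image into such cells, normalize overlapping blocks of cells, and concatenate the normalized histograms into one feature vector.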
Support Vector Machine (SVM): a generalized linear classifier that performs binary classification of data by supervised learning; its decision boundary is the maximum-margin hyperplane solved from the learning samples. The SVM uses a hinge loss function to compute empirical risk and adds a regularization term to the solution to optimize structural risk; the resulting classifier is sparse and robust. SVMs can perform nonlinear classification through the kernel method, one of the common kernel learning approaches. The SVM was proposed in 1964 and developed rapidly after the 1990s, yielding a series of improved and extended algorithms applied to pattern recognition problems such as image recognition and text classification.
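The linear decision function and the hinge loss mentioned above can be written out directly (an illustrative sketch with toy weights and samples, not a trained model from the patent):

```python
def svm_decision(w, b, x):
    # Linear SVM score w . x + b; its sign gives the binary class.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def hinge_loss(w, b, samples):
    # Empirical hinge risk over (x, y) samples with labels y in {-1, +1}:
    # a sample contributes only when it violates the unit margin.
    return sum(max(0.0, 1.0 - y * svm_decision(w, b, x)) for x, y in samples)

w, b = [1.0, -1.0], 0.0
samples = [([2.0, 0.0], +1),   # correctly classified, outside the margin
           ([0.0, 2.0], -1),   # correctly classified, outside the margin
           ([0.5, 0.0], +1)]   # correct but inside the margin: contributes 0.5
print(hinge_loss(w, b, samples))  # 0.5
```

Training minimizes this hinge risk plus a regularization term such as ||w||^2, which is what yields the maximum-margin hyperplane.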
Brain-Computer Interface (Brain-Computer Interface, BCI): the direct communication and control channel is established between the brain of a human and a computer or other electronic equipment, through which the human can express ideas or operate the equipment directly through the brain without language or actions, and the ability of the severely disabled patient to communicate with the outside or control the external environment can be effectively enhanced to improve the quality of life of the patient. The brain-computer interface technology is a cross technology relating to multiple disciplines such as neuroscience, signal detection, signal processing, pattern recognition and the like. The brain-computer interface is divided into two types of invasive and non-invasive, and in the invasive BCI, the main method is to implant a microelectrode array into the brain. In non-invasive BCI, brain electrical signals (including P300, slow cortical potentials, perceptual motor rhythms, steady-state visual evoked potentials, etc.) are recorded primarily on the scalp surface.
Path planning: path planning is one of the main research contents of motion planning. The motion planning is composed of path planning and trajectory planning, sequence points or curves connecting the starting position and the end position are called paths, and the strategy for forming the paths is called path planning. The planning problem of any topologically dotted line network can be basically solved by adopting a path planning method.
The inventors found that, to select a destination within a real-time field of view, candidate destinations with salient features can be identified from an AR image and the destination selected using a brain-computer interface. In addition, the AR device and the EEG acquisition device can be integrated into a single headband, making wearing more comfortable and lightweight for the patient. The invention therefore aims to provide an automatic driving wheelchair destination selection method based on AR image recognition and a brain-computer interface. The method should achieve recognition speed as fast as possible, a training and control burden on the patient as light as possible, destination choices as diverse as possible, and equipment as comfortable and light to wear as possible. The method performs real-time destination selection without presetting destinations or placing digital tags, and the generation of candidate destinations is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also make temporary destination changes within the real-time field of view.
Fig. 1 is a flowchart of a method for selecting a destination of an automatic wheelchair according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step 101, receiving an initial environment image collected by the AR device;
step 102, extracting regional image features from the initial environment image;
step 103, inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
step 104, displaying the plurality of candidate destinations in the user's field of view on the AR device;
step 105, receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when the user first gazes at one of the candidate destinations.
In the embodiment of the invention, the AR device and the electroencephalogram acquisition electrode are adopted, so the destination selected by the user can be identified in real time. In the identification process, there is no need to preset destinations or place digital tags, and the generation of candidate destinations is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also make temporary destination changes within the real-time field of view.
In step 101, an initial environment image acquired by the AR device is received. In one embodiment, the AR device is a pair of AR glasses, and the electroencephalogram acquisition electrode is a dry EEG electrode. The automatic driving wheelchair destination selection device implementing the method, together with the AR device and the electroencephalogram acquisition electrode, forms an automatic driving wheelchair destination selection system; the system may further comprise flexible wireless electronics (such as a Bluetooth module) and an electrode placed behind the user's ear. Fig. 2 is a schematic diagram of the integration of the AR glasses, the dry EEG electrodes, and a headband in an embodiment of the invention.
AR glasses are based on augmented reality technology, which fuses virtual information (objects, pictures, videos, sounds, and the like) into the real environment, enriching the real world. AR glasses were first produced by Google and can perform many functions: they can be regarded as a miniature mobile phone that judges the user's current state by tracking the eye gaze and starts corresponding functions accordingly. AR glasses can determine where the user is looking, display information about the road surface or surrounding buildings, and connect to a mobile phone. Augmented reality glasses began to appear with Google Glass. Monocular glasses such as Google Glass present information to a single eye for navigation, text messages, phone calls, video recording, and so on; because they are monocular they cannot present a 3D effect, and for reasons of form factor their application scenarios are limited. Binocular glasses such as Meta Glass present images to both eyes and can use binocular parallax to produce the 3D effect the developer intends. By detecting a real scene and supplementing it with information, the wearer can obtain information that cannot be obtained quickly in the real world; and because the interaction mode is more natural, virtual objects appear more real. Mainstream augmented reality glasses use transparent display screens, virtual retinal displays, semitransparent beam-splitting LCD projection, ToF cameras, monochrome glass projection, and the like.
The main structure of the AR glasses comprises a camera and a wide strip-shaped processor suspended in front of the glasses, a built-in chip with a multi-core CPU and an AI engine, configured memory and flash storage, and support for WiFi and Bluetooth transmission. In the embodiment of the invention, the destination selection function for the automatic driving wheelchair can be placed in the wide strip-shaped processor to achieve functional integration.
Besides AR glasses, the AR device may also be a tablet, mobile phone, or other device with AR functionality; all related modifications fall within the scope of the present invention.
The electroencephalogram acquisition electrode and the AR device are interconnected by wireless communication over a local area network, a 4G or 5G connection, or Bluetooth; flexible wireless electronics (such as a Bluetooth module) can serve as the communication device.
In the embodiment of the invention, EEG signals need to be acquired, and the dry EEG electrodes mentioned above can be used: for example, three elastic scalp dry electrodes fixed by an elastic headband, as shown in Fig. 2. The dry electrodes can contact the scalp directly through the hair, and their performance is excellent: when slight downward pressure is applied, the conductive flexible elastomer legs of the dry electrode splay slightly, making better contact with the scalp.
When the dry EEG electrodes are used, a reference electrode is generally required to obtain a reference signal. The reference can be acquired with an ultrathin nano EEG acquisition electrode, which has a skin-like mesh structure, is fabricated by aerosol jet printing, and is stretchable, reducing motion artifacts and improving the contact between the skin and the electrode. The ultrathin nano EEG electrode is generally placed behind the ear and connected to the automatic driving wheelchair destination selection device by a flexible thin-film cable.
Fig. 3 is a schematic diagram of a user wearing the automatic driving wheelchair destination selection system according to an embodiment of the present invention, but it should be understood that other wearing forms are also possible, and related modifications are all within the scope of the present invention.
There are various methods for extracting the region image feature from the initial environment image in step 102, and one embodiment is given below.
In one embodiment, extracting region image features from the initial environment image includes:
cropping the initial environment image based on a preset height range of the candidate destination to obtain a cropped image;
detecting the color of each pixel in the cropped image to obtain a plurality of pixels meeting the color requirement of the candidate destination;
grouping the pixels meeting the color requirement of the candidate destination into a plurality of connected regions;
extracting regional image features from the plurality of connected regions.
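The pixel-grouping step above amounts to a connected-component search over the colour mask. The following is a minimal illustrative sketch (4-connectivity, plain NumPy/Python), not the patent's actual implementation:

```python
import numpy as np

def connected_regions(mask):
    """Group mask pixels (4-connectivity) into connected regions.

    mask: 2-D boolean array of pixels that passed the colour test.
    Returns a list of regions, each a list of (row, col) coordinates.
    """
    visited = np.zeros_like(mask, dtype=bool)
    regions = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                stack, region = [(r, c)], []
                visited[r, c] = True
                while stack:  # iterative flood fill from this seed pixel
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=bool)
regions = connected_regions(mask)
print(len(regions))  # 2
```

In practice a library routine (e.g. an OpenCV connected-components call) would replace this loop; the sketch only shows the grouping logic named in the text.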
In an embodiment, detecting the color of each pixel in the cropped image to obtain a plurality of pixels meeting the color requirement of the candidate destination includes:
in hue-saturation-value (HSV) mode, for the color of each pixel in the cropped image, determining the pixel to be one meeting the color requirement of the candidate destination if its color falls within the HSV threshold range corresponding to the candidate destination color.
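As a hedged sketch of this HSV test, the threshold box below is hypothetical (the real ranges depend on the candidate destination's colour and on calibration; OpenCV-style scales are assumed, H in 0-179 and S/V in 0-255). Only the thresholding logic mirrors the text:

```python
import numpy as np

# Hypothetical HSV range, e.g. for a red guideboard (illustrative values only).
H_LO, H_HI = 0, 10
S_LO, S_HI = 100, 255
V_LO, V_HI = 80, 255

def hsv_mask(hsv):
    """Return a boolean mask of pixels inside the candidate-colour HSV box."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((H_LO <= h) & (h <= H_HI) &
            (S_LO <= s) & (s <= S_HI) &
            (V_LO <= v) & (v <= V_HI))

# Four pixels: two inside the box, one wrong hue, one too unsaturated.
hsv = np.array([[[5, 200, 150], [90, 200, 150]],
                [[8, 120, 90], [5, 20, 150]]], dtype=np.uint8)
print(hsv_mask(hsv).sum())  # 2
```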
In one embodiment, extracting region image features from the plurality of connected regions includes:
performing region filtering on the connected region according to candidate destination image features, wherein the candidate destination image features comprise one or any combination of a size feature, a width-to-height ratio feature, a rotation angle feature and a color distribution feature;
extracting regional image features from the filtered connected regions, wherein the regional image features are histogram of oriented gradients (HOG) features.
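The region filtering by size and width-to-height ratio might look like the sketch below. The thresholds are invented for illustration, and the rotation-angle and colour-distribution features are omitted:

```python
# Illustrative thresholds; the patent leaves the exact values open.
MIN_AREA, MAX_AREA = 50, 50_000      # size feature (pixel count)
MIN_RATIO, MAX_RATIO = 0.2, 5.0      # width-to-height ratio feature

def filter_regions(regions):
    """Keep connected regions whose pixel count and bounding-box aspect
    ratio are plausible for a candidate destination (e.g. a guideboard)."""
    kept = []
    for region in regions:
        ys = [p[0] for p in region]
        xs = [p[1] for p in region]
        width = max(xs) - min(xs) + 1
        height = max(ys) - min(ys) + 1
        if (MIN_AREA <= len(region) <= MAX_AREA
                and MIN_RATIO <= width / height <= MAX_RATIO):
            kept.append(region)
    return kept

block = [(r, c) for r in range(10) for c in range(10)]  # 10x10 blob, plausible
speck = [(0, 0)]                                        # single noise pixel
kept = filter_regions([block, speck])
print(len(kept))  # 1
```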
In one embodiment, the image recognition model is a support vector machine, and the training method of the support vector machine is as follows:
obtaining a plurality of destination area images;
extracting histogram of oriented gradients (HOG) features from the destination area images;
training the support vector machine with the HOG features to obtain a trained support vector machine.
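A minimal version of this training procedure, using scikit-learn's SVC on synthetic stand-ins for the HOG vectors (the real features would come from destination area images, and the dimensionality here is arbitrary):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic, well-separated stand-ins for HOG feature vectors.
pos = rng.normal(loc=1.0, scale=0.3, size=(40, 36))   # "destination" regions
neg = rng.normal(loc=-1.0, scale=0.3, size=(40, 36))  # "background" regions
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [0] * 40)

# Linear-kernel SVM, as a plain stand-in for the patent's classifier.
clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)
print(acc)  # training accuracy on this toy data
```

On real data one would of course evaluate on held-out images rather than the training set; this only shows the fit/predict shape of the step.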
In the above embodiment, the destination area may come from a wheelchair driving scene such as a park, a residential community, or a similar area containing candidate destinations with obvious features (such as guideboards or trees).
In one embodiment, the first electroencephalogram signal is an SSVEP signal. It should be noted that the electroencephalogram acquisition dry electrodes acquire EEG signals; the embodiment of the invention adopts an SSVEP selection paradigm, so SSVEP signals are obtained. When receiving a visual stimulus at a fixed frequency, the visual cortex of the brain produces a continuous response at the fundamental frequency or harmonics of the stimulation frequency. This response is called the steady-state visual evoked potential (SSVEP) and can be reliably applied in brain-computer interface (BCI) systems. Compared with BCIs based on other signals (such as P300 or motor imagery), SSVEP-based BCIs generally offer a higher information transfer rate, simpler system and experimental design, and simpler signal generation and acquisition; the signal has an obvious spectral peak, strong resistance to interference, and can be evoked strongly in virtually all users without training. SSVEPs can typically be elicited by flickering light, pattern reversal, or steady-state oscillatory motion (such as a Newton ring) stimulation, and instructions are encoded by frequency-division multiple access. Being vision-dependent, SSVEP is among the most practical signal types in brain-computer interface systems.
Among non-invasive BCI systems, the SSVEP system is currently the fastest, with a peak information transfer rate approaching 200 bit/min. While using the system, the user is presented with multiple visual stimuli simultaneously. All visual stimuli flicker at specific, mutually distinct frequencies (>6 Hz), and each stimulus represents a specific instruction. When the user wants to output an instruction, he or she only needs to gaze at the corresponding stimulus. The system determines which stimulus the user is gazing at by decoding the oscillation frequency of the primary visual cortex and finally converts it into the corresponding machine instruction. The SSVEP paradigm can thus flash the destinations at different frequencies to identify the user-selected destination.
In an embodiment, displaying the plurality of candidate destination areas on the AR device comprises:
a plurality of candidate destination areas are displayed on the AR device at a predetermined frequency, wherein the frequency of each candidate destination is different.
For example, the candidate destinations may flicker in the SSVEP paradigm in the AR glasses projection at frequencies in arithmetic progression (the interval varies with the number of candidate targets) within the range 8-20 Hz. All candidate destinations flicker simultaneously during stimulation; one stimulation lasts 3 s, the brightness of each target varies sinusoidally at its predetermined frequency, and successive stimulations are separated by 0.5 s. Fig. 4 is a schematic diagram of a plurality of candidate destination areas displayed on the AR device at predetermined frequencies in an embodiment of the present invention.
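The frequency assignment and sinusoidal luminance modulation described above can be sketched as follows. The 8-20 Hz range, arithmetic progression, and 3 s duration come from the text, while the 60 Hz display refresh rate and five targets are assumptions for illustration:

```python
import numpy as np

N_TARGETS = 5                   # assumed number of candidate destinations
F_LO, F_HI = 8.0, 20.0          # Hz, flicker range given in the text
REFRESH = 60                    # assumed display refresh rate (Hz)
DURATION = 3.0                  # one stimulation lasts 3 s

# Arithmetic progression of flicker frequencies, one per candidate.
freqs = np.linspace(F_LO, F_HI, N_TARGETS)

t = np.arange(int(DURATION * REFRESH)) / REFRESH
# Luminance of target k varies sinusoidally at its own frequency,
# normalised to [0, 1] for display; row k drives candidate k's brightness.
luminance = 0.5 * (1 + np.sin(2 * np.pi * freqs[:, None] * t))
print(freqs)            # distinct frequencies spanning 8-20 Hz
print(luminance.shape)  # one luminance trace per candidate
```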
In one embodiment, receiving and identifying a first brain electrical signal acquired by a brain electrical acquisition electrode to obtain an identified first destination comprises:
receiving a first SSVEP signal within a set time length acquired by an electroencephalogram acquisition electrode;
performing band-pass filtering on the first SSVEP signal;
projecting the band-pass filtered first SSVEP signal using common spatial patterns (CSP) to obtain a feature vector;
inputting the feature vector into a task-related component analysis (TRCA) classifier to obtain the frequency corresponding to the first SSVEP signal;
a first destination corresponding to the frequency is determined.
In the above embodiment, the set duration may be 200 ms, and the band-pass filtering range may be 8 to 30 Hz. Since each destination flickers on the AR device at a predetermined frequency, the first destination can be determined from the frequency corresponding to the obtained first SSVEP signal.
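The full pipeline projects the filtered signal with common spatial patterns and classifies it with TRCA; those steps need multi-channel training data and are beyond a short sketch. As a deliberately simplified stand-in, the code below band-pass filters a single channel via the FFT (8-30 Hz, as above) and picks the candidate frequency with the largest spectral peak. The 250 Hz sampling rate and candidate frequencies are assumptions:

```python
import numpy as np

FS = 250.0                                  # assumed EEG sampling rate (Hz)
CANDIDATE_FREQS = [8.0, 11.0, 14.0, 17.0, 20.0]  # assumed flicker frequencies

def bandpass_fft(x, lo=8.0, hi=30.0, fs=FS):
    """Crude band-pass filter: zero FFT bins outside [lo, hi] Hz."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    spec[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(spec, n=len(x))

def classify_ssvep(x, fs=FS):
    """Pick the candidate frequency with the most spectral power.
    (A much-simplified stand-in for the CSP + TRCA pipeline in the text.)"""
    x = bandpass_fft(x)
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    powers = [spec[np.argmin(np.abs(f - fc))] for fc in CANDIDATE_FREQS]
    return CANDIDATE_FREQS[int(np.argmax(powers))]

# Synthetic 3 s "EEG": a 14 Hz SSVEP component buried in Gaussian noise.
t = np.arange(int(3 * FS)) / FS
eeg = np.sin(2 * np.pi * 14.0 * t) \
    + 0.5 * np.random.default_rng(1).normal(size=t.size)
print(classify_ssvep(eeg))  # 14.0
```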
In an embodiment, after obtaining the identified first destination, further comprising:
displaying the first destination in the user field of view of the AR device according to a first preset manner;
receiving and identifying a second electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified second destination, wherein the second electroencephalogram signal is generated when the user gazes at a candidate destination for the second time;
when the first destination and the second destination are consistent, determining that the first destination is the destination selected by the user, and displaying the destination selected by the user in the user field of view of the AR device according to a second preset manner;
and when the first destination and the second destination are inconsistent, displaying the second destination in the user field of view of the AR device according to a third preset manner.
The above process is a secondary confirmation process. Fig. 5 is a schematic diagram of displaying the first destination in the user field of view of the AR device according to the first preset manner in an embodiment of the present invention; the first preset manner encloses the first destination in a yellow box. Fig. 6 is a schematic diagram of the user field of view of the AR device when the first destination and the second destination are consistent; the second preset manner displays a green box, and the secondary confirmation succeeds. Fig. 7 is a schematic diagram of the user field of view of the AR device when the first destination and the second destination are inconsistent; the third preset manner displays a red box. After the red box is displayed, the secondary confirmation has failed, and destination selection can continue in the manner described above.
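The secondary confirmation reduces to a small comparison; a minimal sketch follows, with colour names mirroring the figures (green for a confirmed selection, red for a mismatch):

```python
def confirm_destination(first, second):
    """Two-step confirmation: return (confirmed_destination_or_None, colour).

    first/second: destinations identified from the first and second gaze.
    """
    if first == second:
        return first, "green"    # selection confirmed, can be sent onward
    return None, "red"           # mismatch: selection continues

print(confirm_destination("guideboard", "guideboard"))  # ('guideboard', 'green')
print(confirm_destination("guideboard", "tree"))        # (None, 'red')
```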
In an embodiment, the method further comprises:
generating a destination determination signal after determining that the first destination is a user-selected destination;
and sending the destination determination signal to an automatic driving wheelchair, and planning a destination path by the automatic driving wheelchair based on the destination determination signal.
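The format of the destination determination signal is not fixed by the text; a hypothetical JSON payload for the wheelchair's path planner might look like the following (all field names are invented for illustration):

```python
import json

def make_destination_signal(destination, position=None):
    """Build a destination-determination message for the wheelchair.

    destination: label of the confirmed destination.
    position: optional coordinates of the destination, if known.
    """
    return json.dumps({
        "type": "destination_determined",   # hypothetical message type
        "destination": destination,
        "position": position,
    })

msg = make_destination_signal("guideboard", position=[12.5, 3.0])
print(msg)
```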
Based on the above embodiments, the present invention provides the following embodiment to explain the detailed flow of the method for selecting the destination of the automatic driving wheelchair. Fig. 8 is a detailed flow chart of the method according to an embodiment of the present invention; as shown in fig. 8, the method includes:
step 801, receiving an initial environment image collected by AR equipment;
step 802, cropping the initial environment image based on a preset height range of the candidate destination to obtain a cropped image;
step 803, detecting the color of each pixel in the cropped image to obtain a plurality of pixels meeting the color requirement of the candidate destination;
step 804, grouping the pixels meeting the color requirement of the candidate destination into a plurality of connected regions;
step 805, performing region filtering on the connected region according to the candidate destination image characteristics;
step 806, extracting regional image features from the filtered connected regions;
step 807, inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
step 808, displaying a plurality of candidate destination areas on the AR device according to a predetermined frequency, wherein the frequency of each candidate destination is different;
step 809, receiving and identifying a first electroencephalogram signal acquired by an electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when a user watches a candidate destination for the first time;
step 810, displaying the first destination in the user field of view of the AR device according to a first preset manner;
step 811, receiving and identifying a second electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified second destination, wherein the second electroencephalogram signal is generated when the user gazes at a candidate destination for the second time;
step 812, judging whether the first destination is consistent with the second destination, if so, turning to step 813, otherwise, turning to step 815;
step 813, determining that the first destination is the destination selected by the user, and displaying the destination selected by the user in the user field of view of the AR device according to a second preset manner;
step 814, generating a destination determination signal, and sending the destination determination signal to an automatic driving wheelchair, wherein the automatic driving wheelchair performs destination path planning based on the destination determination signal;
step 815, displaying the second destination in the user field of view of the AR device according to a third preset manner; go to step 811.
Of course, it is understood that other variations of the above detailed flow can be made, and all such variations are intended to fall within the scope of the present invention.
The method provided by the embodiment of the invention is applicable to patients who are conscious, have normal visual function, cannot move their hands and feet, have lost language function, and are in stable condition.
In summary, in the method provided in the embodiment of the present invention, an initial environment image acquired by the AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user gazes at one candidate destination for the first time. In this process, the AR device and the electroencephalogram acquisition electrode make it possible to identify the destination selected by the user in real time. Unlike identification schemes in which destinations are preset and assigned digital labels, the generation of candidate destinations is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also change the destination temporarily within the real-time field of view.
The embodiment of the invention also provides a device for selecting the destination of an automatic driving wheelchair; its principle is similar to that of the method for selecting the destination of an automatic driving wheelchair, so repeated details are omitted.
Fig. 9 is a schematic view of an automatic wheelchair destination selection apparatus in accordance with an embodiment of the present invention, as shown in fig. 9, the apparatus comprising:
an image receiving module 901, configured to receive an initial environment image acquired by an AR device;
a feature extraction module 902, configured to extract a region image feature from the initial environment image;
a candidate destination obtaining module 903, configured to input the extracted regional image features to an image recognition model, and obtain multiple candidate destinations;
a display module 904 for displaying the plurality of candidate destinations in a user field of view of the AR device;
the identification module 905 is configured to receive and identify a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode, and obtain an identified first destination, where the first electroencephalogram signal is generated when a user gazes at a candidate destination for the first time.
In an embodiment, the feature extraction module 902 is specifically configured to:
crop the initial environment image based on a preset height range of the candidate destination to obtain a cropped image;
detect the color of each pixel in the cropped image to obtain a plurality of pixels meeting the color requirement of the candidate destination;
group the pixels meeting the color requirement of the candidate destination into a plurality of connected regions;
extracting regional image features from the plurality of connected regions.
In an embodiment, the feature extraction module 902 is specifically configured to:
in hue-saturation-value (HSV) mode, for the color of each pixel in the cropped image, determine the pixel to be one meeting the color requirement of the candidate destination if its color falls within the HSV threshold range corresponding to the candidate destination color.
In an embodiment, the feature extraction module 902 is specifically configured to:
performing region filtering on the connected region according to candidate destination image features, wherein the candidate destination image features comprise one or any combination of a size feature, a width-to-height ratio feature, a rotation angle feature and a color distribution feature;
extracting regional image features from the filtered connected regions, wherein the regional image features are histogram of oriented gradients (HOG) features.
In an embodiment, the display module 904 is specifically configured to:
a plurality of candidate destination areas are displayed on the AR device at a predetermined frequency, wherein the frequency of each candidate destination is different.
In an embodiment, the display module 904 is specifically configured to: after obtaining the identified first destination, displaying the first destination in a user field of view of the AR device in a first preset manner;
the identification module 905 is specifically configured to: receiving and identifying a second electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified second destination, wherein the second electroencephalogram signal is generated when a user gazes at a candidate destination for the second time; when the first destination and the second destination are consistent, determining that the first destination is the destination selected by the user;
the display module 904 is specifically configured to: display the destination selected by the user in the user field of view of the AR device according to a second preset manner; and when the first destination and the second destination are inconsistent, display the second destination in the user field of view of the AR device according to a third preset manner.
In summary, in the apparatus provided in the embodiment of the present invention, an initial environment image acquired by the AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user gazes at one candidate destination for the first time. In this process, the AR device and the electroencephalogram acquisition electrode make it possible to identify the destination selected by the user in real time. Unlike identification schemes in which destinations are preset and assigned digital labels, the generation of candidate destinations is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also change the destination temporarily within the real-time field of view.
As shown in fig. 10, an embodiment of the present invention further provides an automatic wheelchair driving destination selecting system, including: an AR device 1001, an electroencephalogram acquisition electrode 1002, and the above-described automated wheelchair destination selection device 1003, wherein,
an AR device 1001 for acquiring an initial environment image and sending the initial environment image to an automatic wheelchair destination selection device 1003;
the electroencephalogram acquisition electrode 1002 is used for acquiring a first electroencephalogram signal of a user and sending the first electroencephalogram signal to the automatic wheelchair destination selection device 1003.
The principles and functions of the AR device, the electroencephalogram collecting electrode, and the automatic wheelchair destination selecting apparatus have been described in the foregoing, and will not be described in detail here.
In summary, in the system provided in the embodiment of the present invention, an initial environment image acquired by the AR device is received; regional image features are extracted from the initial environment image; the extracted regional image features are input into an image recognition model to obtain a plurality of candidate destinations; the plurality of candidate destinations are displayed in the user field of view of the AR device; and a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode is received and identified to obtain an identified first destination, the first electroencephalogram signal being generated when the user gazes at one candidate destination for the first time. In this process, the AR device and the electroencephalogram acquisition electrode make it possible to identify the destination selected by the user in real time. Unlike identification schemes in which destinations are preset and assigned digital labels, the generation of candidate destinations is not limited by the field of view of a fixed camera, which greatly improves the diversity and convenience of destination selection; meanwhile, the patient can also change the destination temporarily within the real-time field of view.
An embodiment of the present application further provides a computer device. Fig. 11 is a schematic diagram of a computer device in an embodiment of the present invention; the computer device can implement all steps of the method for selecting a destination of an automatic driving wheelchair in the foregoing embodiments and specifically includes the following components:
a processor (processor)1101, a memory (memory)1102, a communication Interface (Communications Interface)1103, and a communication bus 1104;
the processor 1101, the memory 1102 and the communication interface 1103 complete mutual communication through the communication bus 1104; the communication interface 1103 is configured to implement information transmission between related devices, such as a server-side device, a detection device, and a client-side device;
the processor 1101 is configured to call a computer program in the memory 1102, and the processor when executing the computer program implements all the steps of the automated driving wheelchair destination selection method in the above-described embodiment.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements all steps of the automatic driving wheelchair destination selection method in the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for automated wheelchair destination selection, comprising:
receiving an initial environment image collected by AR equipment;
extracting regional image features from the initial environment image;
inputting the extracted regional image features into an image recognition model to obtain a plurality of candidate destinations;
displaying the plurality of candidate destinations in a user field of view of the AR device;
and receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode, and acquiring an identified first destination, wherein the first electroencephalogram signal is generated when a user watches one candidate destination for the first time.
2. The automated wheelchair destination selection method of claim 1 wherein extracting regional image features from the initial environment image comprises:
cropping the initial environment image based on a preset height range of the candidate destination to obtain a cropped image;
detecting the color of each pixel in the cropped image to obtain a plurality of pixels meeting the color requirement of the candidate destination;
grouping the pixels meeting the color requirement of the candidate destination into a plurality of connected regions;
extracting regional image features from the plurality of connected regions.
3. The method of claim 2, wherein detecting the color of each pixel in the cropped image to obtain a plurality of pixels meeting the candidate destination color requirement comprises:
in hue-saturation-value (HSV) mode, for the color of each pixel in the cropped image, determining the pixel to be one meeting the color requirement of the candidate destination if its color falls within the HSV threshold range corresponding to the candidate destination color.
4. The automated wheelchair destination selection method of claim 2 wherein extracting regional image features from the plurality of connected regions comprises:
performing region filtering on the connected region according to candidate destination image features, wherein the candidate destination image features comprise one or any combination of a size feature, a width-to-height ratio feature, a rotation angle feature and a color distribution feature;
extracting regional image features from the filtered connected regions, wherein the regional image features are histogram of oriented gradients (HOG) features.
5. The automated wheelchair destination selection method of claim 1 wherein displaying the plurality of candidate destination areas on an AR device comprises:
a plurality of candidate destination areas are displayed on the AR device at a predetermined frequency, wherein the frequency of each candidate destination is different.
6. The automated wheelchair destination selection method of claim 1 further comprising, after obtaining the identified first destination:
displaying the first destination in the user field of view of the AR device according to a first preset manner;
receiving and identifying a second electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified second destination, wherein the second electroencephalogram signal is generated when the user gazes at a candidate destination for the second time;
when the first destination and the second destination are consistent, determining that the first destination is the destination selected by the user, and displaying the destination selected by the user in the user field of view of the AR device according to a second preset manner;
and when the first destination and the second destination are inconsistent, displaying the second destination in the user field of view of the AR device according to a third preset manner.
7. An autonomous wheelchair destination selection apparatus comprising:
the image receiving module is used for receiving an initial environment image collected by the AR equipment;
the characteristic extraction module is used for extracting regional image characteristics from the initial environment image;
a candidate destination obtaining module, configured to input the extracted regional image features to an image recognition model, and obtain multiple candidate destinations;
a display module to display the plurality of candidate destinations in a user field of view of the AR device;
the identification module is used for receiving and identifying a first electroencephalogram signal acquired by the electroencephalogram acquisition electrode to obtain an identified first destination, wherein the first electroencephalogram signal is generated when a user watches one candidate destination for the first time.
8. An automated wheelchair destination selection system, comprising: AR device, brain-electrical acquisition electrode and the device for selecting a destination for an autopilot wheelchair of claim 7 wherein,
the AR equipment is used for acquiring an initial environment image and sending the initial environment image to the automatic driving wheelchair destination selection device;
the electroencephalogram acquisition electrode is used for acquiring a first electroencephalogram signal of a user and sending the first electroencephalogram signal to the automatic driving wheelchair destination selection device.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 6.
CN202110446334.1A 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system Active CN113138668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110446334.1A CN113138668B (en) 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system


Publications (2)

Publication Number Publication Date
CN113138668A true CN113138668A (en) 2021-07-20
CN113138668B CN113138668B (en) 2023-07-18

Family

ID=76811901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110446334.1A Active CN113138668B (en) 2021-04-25 2021-04-25 Automatic driving wheelchair destination selection method, device and system

Country Status (1)

Country Link
CN (1) CN113138668B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115192045A (en) * 2022-09-16 2022-10-18 季华实验室 Destination identification/wheelchair control method, device, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070093706A1 (en) * 2005-10-26 2007-04-26 Sam Technology, Inc EEG electrode headset
CN103083014A (en) * 2013-01-08 2013-05-08 北京理工大学 Method controlling vehicle by electroencephalogram and intelligent vehicle using method
US20140213874A1 (en) * 2013-01-31 2014-07-31 The Hong Kong Polytechnic University Brain biofeedback device with radially adjustable electrodes
CN104083258A (en) * 2014-06-17 2014-10-08 华南理工大学 Intelligent wheel chair control method based on brain-computer interface and automatic driving technology
CN108090459A (en) * 2017-12-29 2018-05-29 北京华航无线电测量研究所 A kind of road traffic sign detection recognition methods suitable for vehicle-mounted vision system
CN209574688U (en) * 2019-01-16 2019-11-05 北京布润科技有限责任公司 A kind of brain wave acquisition cap
CN111247505A (en) * 2017-10-27 2020-06-05 索尼公司 Information processing device, information processing method, program, and information processing system
CN111694425A (en) * 2020-04-27 2020-09-22 中国电子科技集团公司第二十七研究所 Target identification method and system based on AR-SSVEP
CN112223288A (en) * 2020-10-09 2021-01-15 南开大学 Visual fusion service robot control method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUDYMILA R. BORGES et al.: "Multimodal System for Training at Distance in a Virtual or Augmented Reality Environment for Users of Electric-Powered Wheelchairs", IFAC-PapersOnLine, vol. 49, no. 30, 31 December 2016 (2016-12-31), pages 156-160 *


Also Published As

Publication number Publication date
CN113138668B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US10667697B2 (en) Identification of posture-related syncope using head-mounted sensors
US10813559B2 (en) Detecting respiratory tract infection based on changes in coughing sounds
US10376153B2 (en) Head mounted system to collect facial expressions
Zhang et al. Multimodal spontaneous emotion corpus for human behavior analysis
Hammoud Passive eye monitoring: Algorithms, applications and experiments
KR101840563B1 (en) Method and device for reconstructing 3d face using neural network
US11103140B2 (en) Monitoring blood sugar level with a comfortable head-mounted device
Al-Rahayfeh et al. Eye tracking and head movement detection: A state-of-art survey
CN112034977A (en) Method for MR intelligent glasses content interaction, information input and recommendation technology application
JP2022522667A (en) Makeup processing methods, devices, electronic devices, and recording media
KR102029219B1 (en) Method for recogniging user intention by estimating brain signals, and brain-computer interface apparatus based on head mounted display implementing the method
CN111728608A (en) Augmented reality-based electroencephalogram signal analysis method, device, medium and equipment
KR20120060978A (en) Method and Apparatus for 3D Human-Computer Interaction based on Eye Tracking
US11467662B1 (en) Identifying object of user focus with eye tracking and visually evoked potentials
JPWO2019031005A1 (en) Information processing apparatus, information processing method, and program
CN114821675B (en) Object processing method and system and processor
KR20220062062A (en) Rendering improvements based in part on eye tracking
CN113138668B (en) Automatic driving wheelchair destination selection method, device and system
CN111984123A (en) Electroencephalogram data interaction method and device
CN114187166A (en) Image processing method, intelligent terminal and storage medium
Shi et al. Indoor space target searching based on EEG and EOG for UAV
KR20220066972A (en) User interface based in part on eye movements
US20200250498A1 (en) Information processing apparatus, information processing method, and program
WO2021236738A1 (en) Systems and methods for authenticating a user of a head-mounted display
KR100651104B1 (en) Gaze-based computer interface apparatus and method of using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant