CN112230777A - Cognitive training system based on non-contact interaction - Google Patents

Cognitive training system based on non-contact interaction

Info

Publication number
CN112230777A
CN112230777A
Authority
CN
China
Prior art keywords
training
data
equipment
cognitive
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011178035.6A
Other languages
Chinese (zh)
Inventor
邱飞岳
李丽萍
章国道
盛馨
孔德伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011178035.6A priority Critical patent/CN112230777A/en
Publication of CN112230777A publication Critical patent/CN112230777A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A cognitive training system based on non-contact interaction, belonging to the technical field of computer applications. The system comprises a computing device, a non-contact interaction device, a stereoscopic display screen, an audio device and a wireless transmission unit. The invention offers natural human-computer interaction: by adopting non-contact interaction technology, it removes the dependence of traditional cognitive training on physical props and mouse-and-keyboard input, so that the user is not limited by technology or equipment during training, improving the training effect. Situational applicability is high: the system places no special requirements on the projection screen, hardware or usage space, needs only a low-cost non-contact interaction device, and adopts an adaptive training matching mechanism, making it suitable for various training venues such as classrooms or homes. The gesture recognition rate is high and interactive training is accurate: the system is designed on somatosensory interaction technology, adopts a skeleton-coordinate detection method to improve recognition accuracy, and defines posture semantic commands and voice semantic commands for cognitive training, meeting its interaction requirements.

Description

Cognitive training system based on non-contact interaction
Technical Field
The invention belongs to the technical field of computer application, and particularly relates to a cognitive training system based on non-contact interaction.
Background
Non-contact human-computer interaction means that the interacting object and the object interacted with communicate without physical contact; the interaction may be wireless, wired, or based on visual analysis.
Traditional cognitive training utilizes paper cards, toys and other physical devices in many ways, and these traditional devices have the disadvantages of single form, insufficient content expansibility, easy loss, incapability of analyzing and evaluating the memory level in real time and the like. With the development of technologies such as perceptual interaction, speech recognition, image recognition, virtual reality, etc., computerized assisted cognitive training software and systems are beginning to emerge. The technical scheme which is closer to the invention comprises the following steps:
The most widely used computerized memory-training program, Cogmed (www.cogmed.com), comprises visuospatial and verbal working-memory tasks; the Cognifit program comprises visual-verbal, non-verbal auditory and visuospatial working-memory tasks; and the Brain firm program comprises a series of tasks such as n-back. Scores and verbal feedback are given after each task, and the difficulty level is adjusted adaptively according to each score; numerous studies have found that these programs can train children's working-memory capacity and enhance cognitive ability. However, most of these programs simply transfer traditional training tasks onto the computer: mouse-and-keyboard operation and the learning curve of each computer function are demanding, the elderly and the young are often unaccustomed to mouse and keyboard, and the interaction mode lacks immersion.
The closest prior work includes a published system framework (Brain cognition training system framework and key technology based on virtual reality [J]. System Simulation Technology, 2018, 14(04): 248-254) and the invention patent of application number CN201811138657.9, "Cognitive rehabilitation training control system, method and device based on virtual reality". These adopt virtual reality technology to reconstruct realistic scenes with strong immersion and intuitive interactivity, assisted by an enjoyable and comfortable memory-training mode, while replacing the traditional paper scales and manual record analysis, thereby meeting users' training requirements. However, because these systems are built on virtual reality, dedicated VR equipment is required: the hardware is expensive and uncomfortable to wear, handle and helmet devices are difficult to operate, and because the hands of children and special groups are not free, attention is easily diverted during training, so the final training effect suffers. Moreover, considering that some users readily experience dizziness or discomfort in three-dimensional scenes, the approach lacks general applicability.
In summary, the current cognitive training system has the following disadvantages:
1) The system lacks universality: most current cognitive training systems are created and optimized for the diagnosis and rehabilitation training of elderly patients with cognitive deficits, and their training content and interaction forms do not generalize to typically developing children or special-needs children (such as hyperactive children and children with learning difficulties).
2) System interactivity is insufficient: most cognitive training still uses traditional physical training modes, with the drawbacks noted above, while computerized training systems operated through keyboard, mouse and helmet interfaces are difficult to operate and weakly interactive; the user must concentrate on operating the computer during cognitive training, which does not fit the characteristics of cognitive development and impairs the final training effect.
Accordingly, interaction technology and training content need to be improved and optimized together, according to the different requirements and cognitive-development characteristics of the target populations. Limb movement is an important part of cognitive development; using the posture of the human body, without other equipment, enables natural, strongly immersive human-computer interaction free from outside interference. Human-computer interaction therapy is used as a treatment for cognitive dysfunction; its idea is to improve the neural plasticity of the brain through multi-sensory training, and research shows that training with human-computer interaction technology can improve a patient's memory, executive and other cognitive functions. In particular, when the user controls the operation with body movement rather than a handle or keypad, the user's sense of presence is increased.
Disclosure of Invention
In view of the above problems in the prior art, an object of the present invention is to provide a cognitive training system and device based on non-contact interaction, which accurately identifies gesture actions, effectively improves the user's cognitive ability through a more natural, controllable interaction mode and varied training content, enhances the user's concentration and immersion during cognitive training, and meets the requirements of multiple situations.
The invention provides the following technical scheme: a cognitive training system based on non-contact interaction is characterized in that: the system comprises computing equipment, non-contact interaction equipment, a three-dimensional display screen, audio equipment and a wireless transmission unit;
the non-contact interaction equipment is used for extracting human body gestures and voice features, completing real-time recognition and analysis of actions and voice, and transmitting related data to the matching unit;
the wireless transmission unit is used for realizing the two-way communication between the devices through a wireless network HTTP protocol and realizing the transmission, the reception and the control of data;
the three-dimensional display screen is used for providing visual stimulation to a user through program control in the training process;
the audio equipment is used for realizing auditory stimulation of a user in the training process and providing commands, prompts and real-time feedback of correct and incorrect operation of training operation;
the computing equipment UNICOM non-contact interactive equipment, stereoscopic display screen, audio equipment and wireless transmission unit carry out data acquisition, transmission and processing, and it includes: training control unit and central processing unit.
The cognitive training system based on non-contact interaction is characterized in that the training control unit realizes the storage and execution of program training contents suitable for any configuration device, and the central processing unit receives training data of all users from the training control unit and performs sequencing processing on the training data so as to calculate and evaluate a large database of the users.
The cognitive training system based on the non-contact interaction is characterized in that the wireless transmission unit is executed through a hybrid communication infrastructure.
The cognitive training system based on non-contact interaction is characterized in that the training control unit can be configured to provide training instructions and tasks, and send the recognized user gesture and motion data to the matching unit so as to receive instructions or feedback of further training tasks.
The cognitive training system based on non-contact interaction is characterized in that the training control unit comprises a self-adaptive training unit and a matching unit, the self-adaptive training unit carries out auxiliary diagnosis on the cognitive level of a user, matches corresponding training grade scenes and data for the current level and the capability of the user, and adaptively adjusts the difficulty of the next training period of cognitive training according to the data of the last training and the evaluation result;
and the matching unit is used for judging the selection instruction information of the user by matching the gesture recognition action corresponding to the training selection instruction when the user trains, and finally sending the user gesture, voice data and selection instruction data recognized by the non-contact interactive equipment in the training process to the central processing unit for corresponding evaluation so as to obtain a further training feedback instruction.
The cognitive training system based on non-contact interaction is characterized in that the central processing unit comprises a database and an evaluation unit, wherein the database is used for storing cognitive training scene data, trainer feedback data and feedback instruction data matched with training operation selection; and the evaluation unit is used for receiving selection instructions, process operations and matching result data of different users in the training process in the database, evaluating whether the selection in the training process meets the training rules and requirements, and finally outputting the evaluation result to the three-dimensional display screen assisted with audio output.
The cognitive training system based on non-contact interaction is characterized in that the evaluation unit performs evaluation judgment in an absolute correct/incorrect manner.
By adopting the technology, compared with the prior art, the invention has the following beneficial effects:
1) Natural human-computer interaction: the invention adopts non-contact interaction technology, removes the dependence of traditional cognitive training on physical props and mouse-and-keyboard input, and expands the interaction channel to multi-dimensional channels such as gesture, body action and voice. Within a certain distance of an ordinary projection screen or television, the user achieves an at-a-distance interaction effect that enhances immersion and provides a more natural interaction mode. The user is not limited by technology or equipment during training and can devote body and mind fully to it, improving the training effect;
2) High situational applicability: touch-screen interaction equipment is expensive and limited by its physical size, whereas the invention places no special requirements on the projection screen, hardware or usage space, needing only access to a low-cost non-contact interaction device such as an Azure Kinect depth camera. With the adaptive training matching mechanism, the system is suitable for various training venues such as classrooms or homes;
3) High gesture recognition rate and accurate interactive training: the system design in the embodiment of the invention is based on the Azure Kinect somatosensory interaction technology and adopts a skeleton-coordinate detection method to improve recognition accuracy; posture semantic commands and voice semantic commands for cognitive training are defined, meeting the interaction requirements of cognitive training.
Drawings
FIG. 1 is a schematic structural diagram of a non-contact interactive cognitive training system according to the present invention;
fig. 2 is a schematic diagram of a device based on a non-contact interactive cognitive training system according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications and equivalents which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to fig. 1-2, a cognitive training system based on non-contact interaction includes a computing device, a non-contact interaction device, a stereo display screen, an audio device, and a wireless transmission unit.
The non-contact interaction device realizes functions such as human-body motion capture, image recognition, voice input and voice recognition, and acquires the related data. Non-contact interaction devices include infrared acquisition devices, ultrasonic acquisition devices, face recognition devices, TOF acquisition devices and the like. In this embodiment, an Azure Kinect non-contact interaction device is adopted to obtain data such as color images, depth images, joint-point data and sample-library postures, extract human posture and voice features, complete real-time recognition and analysis of actions and voice, and transmit the related data to the matching unit.
The Azure Kinect non-contact interaction device includes, but is not limited to: a depth sensor with wide and narrow field-of-view options; a microphone array for capturing scene sounds; an RGB camera providing a color image stream matched with the depth data; and an accelerometer and gyroscope for calibrating sensor orientation and spatial tracking.
The wireless transmission unit realizes two-way communication between the devices over a wireless-network HTTP protocol, providing data transmission, data reception, command control and similar functions. It is carried over a hybrid communication infrastructure using known means and protocols such as Wi-Fi, Bluetooth, LTE/4G and infrared (IR) communication. In this embodiment, data between the Azure Kinect device, the computing device, the stereoscopic display screen and the audio device are transmitted over the wireless network.
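The patent specifies only that the devices exchange data over a wireless HTTP link; a minimal sketch of how one frame of recognition results might be packaged as an HTTP POST body is shown below. The field names (`user_id`, `joints`, `speech`) are illustrative assumptions, not part of the patent.

```python
import json
import time

def build_frame_message(user_id, joints, speech_text=None):
    """Package one frame of recognition results as a JSON payload
    suitable for an HTTP POST body. Field names are illustrative;
    the source specifies only that devices exchange data over a
    wireless HTTP link."""
    return json.dumps({
        "user_id": user_id,
        "timestamp": time.time(),
        # Joint coordinates as lists so they survive JSON round-trips.
        "joints": {name: list(xyz) for name, xyz in joints.items()},
        "speech": speech_text,
    })

msg = build_frame_message("u01", {"left_hand": (0.1, 1.6, 0.0)}, "select A")
decoded = json.loads(msg)
print(decoded["user_id"])   # u01
print(decoded["speech"])    # select A
```

The payload could then be sent to the computing device with any HTTP client; the serialization itself is independent of the transport chosen (Wi-Fi, LTE/4G, etc.).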
The stereoscopic display screen provides visual stimulation to the user under program control during training; its main content comprises the real-time, dynamically operated training scene together with the correspondingly matched training and evaluation results from the central processing unit.
The audio device realizes auditory stimulation of the user during training, providing commands, prompts and background music for the training operation as well as real-time feedback on correct and wrong operations.
In this example's training scenario, the visual stimuli are provided by the stereoscopic display screen and the auditory stimuli are delivered by the audio device, giving the user specific encouragement and visual and auditory cues of success and failure. Visually, varied and engaging training content is used and the trainer's state is tracked and presented in real time; aurally, background music and directional cues are provided. Real-time visual or auditory stimulation gives the user better immersion in the scene and improves training effectiveness.
The computing device connects the non-contact interaction device, the stereoscopic display screen, the audio device and the wireless transmission unit and performs data acquisition, transmission and processing; it may be a game console, a head-mounted display device or a controller peripheral, and it comprises:
The training control unit, implemented on devices of any configuration such as a desktop computer or a notebook, stores and executes the program training content. It may be configured to provide training instructions and tasks and to send recognized user gesture and motion data to the matching unit, so as to receive instructions or feedback for further training tasks.
Wherein the training control unit comprises:
and the self-adaptive training unit is used for performing auxiliary diagnosis on the cognitive memory level of the user, calling training and evaluation data in the database, adaptively adjusting the content of the next training period of the cognitive training according to the data of the last training and the evaluation result, and matching the training scene and data of the corresponding cognitive grade for the current level and the ability of the user. In this example, the last training data and the evaluation result are used as feature input, an input sequence is encoded into a vector sequence by using an embedded layer of a neural network, the vector sequence is converted into a single vector by using a long-term short-term memory artificial neural network, the single vector comprises the selection mode features of the user, the extracted features are further transmitted to a classifier, the cognitive degree of the user is evaluated, and the selection training content is adaptively matched and adjusted.
Specifically, in this example, to help children train cognitive memory more effectively, development follows classical experimental paradigms from cognitive neuroscience, and memory training is divided into two modules: contextual-memory training and working-memory training.
The matching unit judges the user's selection-instruction information during training by matching recognized gesture actions against the corresponding training selection instructions, and sends the user's action data and selection-instruction information recognized by the Azure Kinect device to the central processing unit for matching, so as to obtain further training feedback instructions.
In this example, training selection instructions matched to recognized gesture actions are designed. Limb movements are recognized by a feature-extraction method to obtain feature vectors reflecting the movement intention, recognition is made accurate by judging skeleton coordinates against thresholds, and finally individual action-option parameters are set. These include but are not limited to: mapping gestures such as raising the left or right hand, waving the left or right hand, or lifting the left or right leg to the corresponding options A-D, or displaying the corresponding gesture action at the lower-right corner of each picture option during training. Finally, the matching result is transmitted to the central processing unit for judgment and evaluation.
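The action-option parameter table can be sketched as below. The patent names raising and waving hands and lifting legs as candidate gestures for options A-D; the particular pairing chosen here is an illustrative assumption.

```python
# Illustrative mapping from recognized limb actions to answer options;
# the source lists the candidate gestures but the exact pairing with
# options A-D is a per-deployment design choice.
GESTURE_OPTIONS = {
    "raise_left_hand": "A",
    "raise_right_hand": "B",
    "wave_left_hand": "C",
    "wave_right_hand": "D",
}

def match_selection(recognized_gesture):
    """Translate a recognized gesture into a training selection,
    or None when the gesture carries no selection semantics."""
    return GESTURE_OPTIONS.get(recognized_gesture)

print(match_selection("raise_left_hand"))  # A
print(match_selection("nod"))              # None
```

Returning `None` for unmapped gestures lets the matching unit ignore incidental movement rather than misreport a selection.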
Recognition of limb actions can be realized by a feature-extraction method. For example, to detect completion of a left-hand-raise action, the coordinates of the left hand, left elbow and left shoulder are tracked: when the left-hand node is level with or higher than the left-elbow joint point and higher than the left-shoulder node, the action can be judged complete, with a minimum threshold set on the recognition difference. Subtract the Y-axis value of the left-shoulder skeleton-node coordinate from the Y-axis value of the left-hand skeleton-node coordinate; when the system detects that this difference is larger than the given threshold, the action is considered complete.
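The threshold check above can be sketched as follows. The joint names, the coordinate convention (Y axis pointing up, metres), and the 0.05 threshold are illustrative assumptions rather than values from the patent.

```python
def left_hand_raised(joints, min_diff=0.05):
    """Detect a completed left-hand-raise gesture from skeleton joints.

    `joints` maps joint names to (x, y, z) coordinates, with the
    Y axis assumed to point up. The gesture counts as complete when
    the left hand is at or above the left elbow AND higher than the
    left shoulder by more than a minimum threshold; the threshold
    filters out jitter in the tracked coordinates."""
    hand_y = joints["left_hand"][1]
    elbow_y = joints["left_elbow"][1]
    shoulder_y = joints["left_shoulder"][1]
    return hand_y >= elbow_y and (hand_y - shoulder_y) > min_diff


# A raised pose: hand above both elbow and shoulder.
raised = {"left_hand": (0.1, 1.60, 0.0),
          "left_elbow": (0.1, 1.30, 0.0),
          "left_shoulder": (0.1, 1.40, 0.0)}
# A resting pose: hand hanging below the elbow.
resting = {"left_hand": (0.1, 0.90, 0.0),
           "left_elbow": (0.1, 1.10, 0.0),
           "left_shoulder": (0.1, 1.40, 0.0)}

print(left_hand_raised(raised))   # True
print(left_hand_raised(resting))  # False
```

In a real deployment the joint coordinates would come from the body-tracking stream of the depth camera; note that some SDK coordinate systems point Y downward, in which case the comparisons must be flipped.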
The central processing unit receives the training data of all users from the training control unit and sorts and processes them into a large database used to calculate and evaluate users. These data are used to analyze personal training performance, form new training tasks and finally output training results.
Wherein the central processing unit comprises:
and the database is used for receiving and storing cognitive training scene data, user training process data, training feedback instruction data and posture action data which are transmitted by the Azure Kinect sensor and are matched with training operation selection from the training control x control unit. And providing corresponding training scene content and operation command class guide data for the training control unit, and transmitting the information data to the evaluation unit.
And the evaluation unit is used for receiving selection instructions, process operations and matching result data of different users in the training process in the database. The unit evaluates whether the training result meets the training rules and requirements or not according to the cognitive training standard paradigm, and finally outputs the evaluation result to a three-dimensional display screen assisted with audio output.
In this example, the evaluation unit uses a set of standard research scales developed from classical experimental paradigms in cognitive neuroscience to evaluate the user's selections during training in an absolute correct/incorrect manner. The evaluated content includes but is not limited to: whether actions are made within the correct range, whether the correct options are selected, the user's reaction duration, the training completion time, the accumulated numbers of correct and wrong answers, and the cognitive evaluation of different users.
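The right/wrong bookkeeping the evaluation unit performs over those measures can be sketched as follows; the per-trial record fields (`correct`, `reaction_s`) are illustrative assumptions.

```python
def summarize_session(trials):
    """Aggregate per-trial records into the evaluation measures the
    text lists: correct/incorrect counts, mean reaction time, and
    total completion time. Each trial is a dict with 'correct'
    (bool) and 'reaction_s' (seconds); field names are illustrative."""
    correct = sum(1 for t in trials if t["correct"])
    total_time = sum(t["reaction_s"] for t in trials)
    return {
        "n_correct": correct,
        "n_wrong": len(trials) - correct,
        "mean_reaction_s": total_time / len(trials) if trials else 0.0,
        "completion_s": total_time,
    }

session = [{"correct": True, "reaction_s": 1.5},
           {"correct": False, "reaction_s": 2.5},
           {"correct": True, "reaction_s": 2.0}]
print(summarize_session(session))
# {'n_correct': 2, 'n_wrong': 1, 'mean_reaction_s': 2.0, 'completion_s': 6.0}
```

Summaries in this form are what the adaptive training unit would consume when setting the next period's content.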
This embodiment's cognitive training system, realized with Azure Kinect-based non-contact interaction equipment, is illustrated by the application-device diagram of fig. 2; the embodiment comprises the Azure Kinect interaction device, the computing device, the stereoscopic display screen and the audio device.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A cognitive training system based on non-contact interaction is characterized in that: the system comprises computing equipment, non-contact interaction equipment, a three-dimensional display screen, audio equipment and a wireless transmission unit;
the non-contact interaction equipment is used for extracting human body gestures and voice features, completing real-time recognition and analysis of actions and voice, and transmitting related data to the matching unit;
the wireless transmission unit is used for realizing the two-way communication between the devices through a wireless network HTTP protocol and realizing the transmission, the reception and the control of data;
the three-dimensional display screen is used for providing visual stimulation to a user through program control in the training process;
the audio equipment is used for realizing auditory stimulation of a user in the training process and providing commands, prompts and real-time feedback of correct and incorrect operation of training operation;
the computing device connects the non-contact interaction device, the stereoscopic display screen, the audio device and the wireless transmission unit, and performs data acquisition, transmission and processing; it comprises: a training control unit and a central processing unit.
2. The cognitive training system based on non-contact interaction of claim 1, wherein the training control unit stores and executes the program training content on devices of any configuration, and the central processing unit sorts and processes the training data received from all users via the training control unit into a large user database for calculation and evaluation.
3. The system of claim 1, wherein the wireless transmission unit is implemented via a hybrid communication infrastructure.
4. The system of claim 1, wherein the training control unit is configured to provide training instructions and tasks, and to send the recognized user gesture motion data to the matching unit to receive instructions or feedback for further training tasks.
5. The cognitive training system based on non-contact interaction of claim 4, wherein the training control unit comprises an adaptive training unit and a matching unit, the adaptive training unit performs auxiliary diagnosis on the cognitive level of the user, matches corresponding training grade scenes and data for the current level and ability of the user, and adaptively adjusts the difficulty of the next training period of cognitive training according to the data of the last training and the evaluation result;
and the matching unit is used for judging the selection instruction information of the user by matching the gesture recognition action corresponding to the training selection instruction when the user trains, and finally sending the user gesture, voice data and selection instruction data recognized by the non-contact interactive equipment in the training process to the central processing unit for corresponding evaluation so as to obtain a further training feedback instruction.
6. The cognitive training system based on non-contact interaction as claimed in claim 1, wherein the central processing unit comprises a database and an evaluation unit, the database is used for storing cognitive training scene data, trainer feedback data and feedback instruction data matched with training operation selection; and the evaluation unit is used for receiving selection instructions, process operations and matching result data of different users in the training process in the database, evaluating whether the selection in the training process meets the training rules and requirements, and finally outputting the evaluation result to the three-dimensional display screen assisted with audio output.
7. The system according to claim 6, wherein the evaluation unit performs its evaluation judgment in an absolute error-correction manner.
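A minimal reading of claim 7's "absolute" judgment is that each selection is scored strictly right or wrong, with no partial credit, and every error produces a corrective message. This is an interpretation offered for illustration only; the names below are assumptions.

```python
def judge_absolute(selection: str, expected: str) -> tuple[bool, str]:
    """Judge one selection absolutely: exact match or error with a
    corrective feedback message."""
    if selection == expected:
        return True, "correct"
    return False, f"incorrect: expected {expected!r}"
```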
CN202011178035.6A 2020-10-29 2020-10-29 Cognitive training system based on non-contact interaction Pending CN112230777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011178035.6A CN112230777A (en) 2020-10-29 2020-10-29 Cognitive training system based on non-contact interaction

Publications (1)

Publication Number Publication Date
CN112230777A 2021-01-15

Family

ID=74110216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011178035.6A Pending CN112230777A (en) 2020-10-29 2020-10-29 Cognitive training system based on non-contact interaction

Country Status (1)

Country Link
CN (1) CN112230777A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133813A (en) * 2014-08-11 2014-11-05 南通大学 Navy semaphore training method based on Kinect
CN104933278A (en) * 2014-03-20 2015-09-23 中国科学院软件研究所 Multi-channel interactive method and system used for speech disorder rehabilitation training
US20180204480A1 (en) * 2017-01-18 2018-07-19 Chao-Wei CHEN Cognitive training system
CN108853946A (en) * 2018-07-10 2018-11-23 燕山大学 A kind of exercise guide training system and method based on Kinect
CN108919950A (en) * 2018-06-26 2018-11-30 上海理工大学 Autism children based on Kinect interact device for image and method
CN109298779A (en) * 2018-08-10 2019-02-01 济南奥维信息科技有限公司济宁分公司 Virtual training System and method for based on virtual protocol interaction
CN111156855A (en) * 2019-12-04 2020-05-15 中国人民解放军国防科技大学 Electronic warfare equipment virtual training man-machine interaction system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113113115A (en) * 2021-04-09 2021-07-13 北京未名脑脑科技有限公司 Cognitive training method, system and storage medium
CN113380377A (en) * 2021-04-09 2021-09-10 阿呆科技(北京)有限公司 Training system based on cognitive behavioral therapy
CN113113115B (en) * 2021-04-09 2022-11-08 北京未名脑脑科技有限公司 Cognitive training method, system and storage medium

Similar Documents

Publication Publication Date Title
US11493993B2 (en) Systems, methods, and interfaces for performing inputs based on neuromuscular control
Yang et al. Gesture interaction in virtual reality
US20230072423A1 (en) Wearable electronic devices and extended reality systems including neuromuscular sensors
US20220269346A1 (en) Methods and apparatuses for low latency body state prediction based on neuromuscular data
CN111063416A (en) Alzheimer disease rehabilitation training and capability assessment system based on virtual reality
CN108983636B (en) Man-machine intelligent symbiotic platform system
CN110890140A (en) Virtual reality-based autism rehabilitation training and capability assessment system and method
US11262851B2 (en) Target selection based on human gestures
Gavrilova et al. Multi-modal motion-capture-based biometric systems for emergency response and patient rehabilitation
Zaraki et al. Design and evaluation of a unique social perception system for human–robot interaction
CN107491648A (en) Hand recovery training method based on Leap Motion motion sensing control devices
CN114998983A (en) Limb rehabilitation method based on augmented reality technology and posture recognition technology
Wang et al. Feature evaluation of upper limb exercise rehabilitation interactive system based on kinect
WO2022188022A1 (en) Hearing-based perception system and method for using same
CN112230777A (en) Cognitive training system based on non-contact interaction
US20200121247A1 (en) Human-computer interactive rehabilitation system
Hu et al. StereoPilot: A wearable target location system for blind and visually impaired using spatial audio rendering
KR101160868B1 (en) System for animal assisted therapy using gesture control and method for animal assisted therapy
WO2023019376A1 (en) Tactile sensing system and method for using same
Sosa-Jiménez et al. A prototype for Mexican sign language recognition and synthesis in support of a primary care physician
CN113035000A (en) Virtual reality training system for central integrated rehabilitation therapy technology
Vyas et al. Gesture recognition and control
JP2022032362A (en) System, method, and information processing device
Zidianakis et al. Building a sensory infrastructure to support interaction and monitoring in ambient intelligence environments
CN112684890A (en) Physical examination guiding method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination