CN111009318A - Virtual reality technology-based autism training system, method and device - Google Patents


Info

Publication number
CN111009318A
Authority
CN
China
Prior art keywords
virtual reality
scene
training
information
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911169005.6A
Other languages
Chinese (zh)
Inventor
翟广涛
方艺
范磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201911169005.6A
Publication of CN111009318A
Legal status: Pending

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 … for the operation of medical equipment or devices
    • G16H 40/63 … for the operation of medical equipment or devices for local operation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M 2021/0005 … by the use of a particular sense, or stimulus
    • A61M 2021/0027 … by the hearing sense
    • A61M 2021/0044 … by the sight sense
    • A61M 2021/005 … by the sight sense: images, e.g. video

Abstract

The invention provides an autism training system, method and device based on virtual reality technology. The system comprises: a head-mounted stereoscopic imaging subsystem, which provides a virtual reality scene with selectable common-sense life cognitive questions and offers the user cognitive training based on the virtual reality scene; an interactive control subsystem, which collects hand motion trajectory information, selection instruction information and head action information; and a central processing subsystem, which judges whether the user's responses to different scenes are socially adaptive, adjusts the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmits the adjustment result to the head-mounted stereoscopic imaging subsystem. With the invention, the caregiver does not need professional knowledge of autism-assisted therapy, alleviating the shortage of such professionals; the virtual scene is displayed without a visible frame, which enhances the autistic patient's immersion, shields the patient from external interference, allows full engagement in the training, and accelerates rehabilitation.

Description

Virtual reality technology-based autism training system, method and device
Technical Field
The invention relates to autism training systems, and in particular to an autism training system based on virtual reality technology, as well as an autism training method and device based on that system.
Background
Autism spectrum disorder, also known as autism, is a complex group of neurodevelopmental disorders. Its typical features include impaired social interaction, impaired verbal communication, and restricted interests with repetitive, stereotyped patterns of behavior. According to the Report on the Development Status of China's Autism Education and Rehabilitation Industry II, released in 2017, with a conservatively estimated prevalence of 1% in a population of 1.3 billion, China has more than 10 million people with autism, including over 2 million autistic children, and the number grows by nearly 200,000 per year.
There is currently no cure for autism, but existing studies indicate that early, intensive and continuous educational and behavioral intervention can effectively improve autistic patients' skills in independent living, social communication and work. However, Applied Behavior Analysis (ABA), the currently most widely accepted framework for autism behavioral intervention, relies on specialized autism medical institutions and also requires the patient's family to invest substantial effort and time. This places a heavy burden on the patient's family on the one hand, and higher demands on the corresponding social infrastructure on the other. Autistic patients who cannot receive timely intervention become an even greater burden on their families and on society as they age. The goal of existing behavioral intervention training is therefore to help autistic patients learn how to deal with simple life scenarios and build the ability to live partially independently, reducing reliance on professional treatment institutions.
With the development of virtual reality technology, people can be immersed in a virtual world through a head-mounted display or similar device. Virtual reality has three defining characteristics: immersion, interaction and imagination. These characteristics are well suited to intervention training for autistic patients' deficits such as impaired social communication and stereotyped behaviors. At the same time, because autistic patients have difficulty concentrating and exhibit stereotyped behaviors, early intervention for autistic children usually involves a large amount of repetitive teaching, which consumes a great deal of a teacher's energy and makes teaching efficiency hard to improve. Virtual teaching allows autistic children to receive early intervention in a virtual world, effectively addressing these problems. First, virtual reality hardware has reached consumer level and is affordable for most working families. Second, because the teaching system is implemented in software and virtualized, parents with limited training, and even special education schools, do not need professional autism intervention knowledge. Finally, virtual teaching is well suited to highly repetitive teaching situations, greatly freeing up the time of professional teachers and parents.
A search of the prior art found Chinese utility model application No. CN201620212200.8, entitled "Children autism VR rehabilitation system", publication No. CN205451066U, which is described as follows: the utility model discloses a children autism VR rehabilitation system comprising an image acquisition device, a sound acquisition device, a sound playing device, a VR video playing device and a central control device. The image acquisition device has a built-in miniature high-definition camera for acquiring the user's facial expression images and sending them to the central control device; the sound acquisition device has a built-in miniature microphone for collecting the user's voice information and sending it to the central control device; the sound playing device has a built-in loudspeaker that can play audio; the VR video playing device is internally provided with 30 electronic screens for displaying the played video content; the central control device centrally processes the signals for image acquisition, sound playing and VR video playing, and controls the sound playing device and the VR video playing device according to the acquired expression or voice information of the user.
However, the above utility model has no interactive system: while the user (the patient) watches the video, the rehabilitation system offers no options to select at particular points in the video, so the patient can only watch videos repeatedly in the hope of improvement. It is conceivable that such a system, which mainly tries to correct the autistic patient's abnormal behaviors through massive, rote video playback, will leave the patient feeling bored due to the lack of participation, which inevitably affects the therapeutic effect. Moreover, its central control device cannot judge and automatically select the content to be played based on the acquired information, so the system still requires many professional autism rehabilitation practitioners, which is difficult to achieve given today's severe shortage of such personnel.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide an autism training system, method and device based on virtual reality technology that can enhance the autistic patient's immersion and shield the patient from external interference.
According to a first aspect of the present invention, there is provided an autism training system based on virtual reality technology, comprising:
a head-mounted stereoscopic imaging subsystem, which provides a virtual reality scene, sets selectable common-sense life cognitive questions in the virtual reality scene, and provides the user with cognitive training based on the virtual reality scene;
the interactive control subsystem collects hand motion track information, selection instruction information and head action information of a user in the cognitive training and transmits the collected hand motion track information, selection instruction information and head action information to the central processing subsystem;
and a central processing subsystem, which judges, based on the information from the interactive control subsystem, whether the user's responses to different scenes are socially adaptive, adjusts the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmits the adjustment result to the head-mounted stereoscopic imaging subsystem.
Optionally, the head-mounted stereoscopic imaging subsystem comprises:
a stereoscopic imaging display module, which provides the virtual reality scene and presents common-sense life cognitive questions based on the virtual reality scene for the user to select;
and a loudspeaker module, which produces the sounds to be presented in the virtual reality scene through speech synthesis.
Optionally, the stereoscopic imaging display module is further configured to eliminate a display frame and enlarge a field of view of a user.
Optionally, the interaction control subsystem comprises:
the handle control selection module tracks and records the position, the movement condition and the button action of the handle, and judges hand motion tracks and selection instructions made by a user in different scenes;
the head action sensing module acquires the spatial position, angle, speed and acceleration information of the head-mounted stereoscopic imaging subsystem through a six-axis sensor and an optical positioning system;
and the information transmission module transmits the information obtained by the handle control selection module and the head action sensing module to the central processing subsystem.
Optionally, the central processing subsystem includes:
the evaluation module collects the selection instructions of each training period obtained by the handle control selection module, evaluates whether the selections in that period conform to socially recognized responses and choices for the corresponding life scene, and obtains an evaluation result;
the adaptive difficulty selection module takes as feature input the hand motion trajectory obtained by the handle control selection module and the head action sensing module, together with the spatial position, angle, speed and acceleration information of the head-mounted stereoscopic imaging subsystem; it encodes the input sequence into a sequence of dense vectors using an embedding layer of a neural network, converts that sequence into a single vector, which contains the user's selection-pattern features, using a long short-term memory (LSTM) network, and then feeds the extracted features into stacked fully connected layers and a softmax classifier to assist in assessing the degree of the user's autism symptoms, to judge a training level appropriate to the user's current knowledge and ability, and to adaptively adjust the difficulty of the cognitive training in the next training period.
Optionally, the central processing subsystem further comprises:
and the adaptive scene repetition module adjusts the number of repetitions of similar scenes according to the evaluation results accumulated by the evaluation module over several scenes in one period, so that repeated training targets the user's weak knowledge areas and reinforces learning.
Optionally, the central processing subsystem further comprises:
and the scene selection module selects scenes according to the results obtained by the adaptive difficulty selection module and the adaptive scene repetition module, adjusts the common-sense cognitive questions according to the difficulty of the cognitive training, and transmits these, together with the number of repetitions of similar scenes determined by the adaptive scene repetition module, to the head-mounted stereoscopic imaging subsystem for presentation to the user.
Optionally, the evaluation module performs its evaluation and judgment in an absolute correct/incorrect manner.
According to a second aspect of the present invention, there is provided a virtual reality technology-based autism training method, which employs any one of the above-described virtual reality technology-based autism training systems, including:
providing a user with cognitive training based on a virtual reality scene, wherein the virtual reality scene is provided by a head-mounted stereoscopic imaging subsystem and selectable common-sense life cognitive questions are set in the virtual reality scene;
collecting hand motion track information, selection instruction information and head action information of a user in the cognitive training;
and judging whether the user's responses to different scenes are socially adaptive according to the collected hand motion track information, selection instruction information and head action information, adjusting the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmitting the adjustment result to the head-mounted stereoscopic imaging subsystem.
According to a third aspect of the present invention, there is provided an autism training apparatus based on virtual reality technology, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor is configured to execute the virtual reality technology-based autism training method described above when executing the program.
Compared with the prior art, the invention has the following beneficial effects:
1) By adopting an adaptive scene selection mechanism, the system, method and device allow the caregiver to work without professional knowledge of autism-assisted therapy, alleviating the current shortage of practitioners in professional autism-assisted training.
2) Through training in simple, non-social life scenarios, the system, method and device help the patient improve the ability to live independently, providing a more proactive safeguard for the autistic patient's personal safety and quality of life.
3) By adopting virtual reality technology, the virtual scene is displayed without a visible frame, which enhances the autistic patient's immersion, shields the patient from external interference, allows full engagement in the training, and accelerates rehabilitation.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a block diagram of an autism training system based on virtual reality technology in an embodiment of the invention;
fig. 2 is a flowchart of an autism training method based on virtual reality technology according to an embodiment of the invention;
In the figure: head-mounted stereoscopic imaging subsystem 1, stereoscopic imaging display module 11, loudspeaker module 12, interactive control subsystem 2, handle control selection module 21, head action sensing module 22, information transmission module 23, central processing subsystem 3, evaluation module 31, adaptive difficulty selection module 32, adaptive scene repetition module 33, and scene selection module 34.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any way. It should be noted that a person skilled in the art can make various changes and modifications without departing from the concept of the present invention, and any part not described in detail in the following embodiments can be implemented using the prior art.
Although systems for autism training exist in the prior art, as described in the background section, the user cannot interact with them while watching the video and can only watch the video repeatedly in the hope of treating and improving autism. It is conceivable that such rote correction of the autistic patient's abnormal behaviors with large amounts of video will produce boredom due to the lack of participation, which inevitably affects the therapeutic effect. Meanwhile, the prior art generally cannot judge and automatically select the content to be played based on the acquired information, and still requires many professional autism rehabilitation practitioners, which is difficult to achieve given the severe shortage of such personnel. The invention provides a concrete solution to these problems.
Referring to fig. 1, a block diagram of an autism training system based on virtual reality technology in an embodiment of the present invention includes: a head-mounted stereoscopic imaging subsystem 1, an interactive control subsystem 2 and a central processing subsystem 3. The head-mounted stereoscopic imaging subsystem 1 presents to the patient a virtual reality scene with matching sound and offers corresponding common-sense life cognitive questions for the patient to select, thereby providing scene-based cognitive training; the interactive control subsystem 2 collects the patient's hand motion trajectory information, selection instruction information and head action information and transmits them to the central processing subsystem 3; and the central processing subsystem 3 judges whether the patient's responses to different scenes are socially adaptive, adjusts the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmits the adjustment result to the head-mounted stereoscopic imaging subsystem 1.
Specifically, in this embodiment, the cognitive questions presented in virtual reality by the head-mounted stereoscopic imaging subsystem 1 are offered to the patient as options, and the cognitive training based on simple, non-social life scenarios provided by the head-mounted stereoscopic imaging subsystem 1 includes, but is not limited to: scene-based common-sense life cognitive questions, prompts for the next action, and socially accepted correct or incorrect ways of handling the situation, presented as options. Through the interactive control subsystem 2, the patient can select the options listed by the head-mounted stereoscopic imaging subsystem 1 or enter the trigger point of the next scene. The central processing subsystem 3 collects the patient's hand motion trajectory information, selection instruction information and head action information, judges from this information whether the patient's responses to different scenes are socially adaptive, adjusts the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmits the adjustment result to the head-mounted stereoscopic imaging subsystem 1, which then presents the new scene and new common-sense life cognitive questions to the patient. With this adaptive scene selection mechanism, the caregiver does not need professional knowledge of autism-assisted therapy, alleviating the current shortage of practitioners in professional autism-assisted training.
Preferably, on the basis of the above embodiment and referring to fig. 1, the head-mounted stereoscopic imaging subsystem 1 includes: a stereoscopic imaging display module 11, which provides the virtual reality scene and presents scene-based common-sense life options for selection, thereby offering the patient scene-based cognitive training; and a loudspeaker module 12, which produces the sounds to be presented in the scene through speech synthesis.
Specifically, in this embodiment, the stereoscopic imaging display module 11 may be connected to the central processing subsystem 3 in a wired or wireless manner, and the central processing subsystem 3 may send instructions to the stereoscopic imaging display module 11 to display a virtual reality scene or play a video. The loudspeaker module 12 is built into the head-mounted display and can produce, through speech synthesis, the voice prompts to be presented in the life scene. The sound information includes, but is not limited to: background sounds required by the scene, prompts explaining the options offered to the patient, alert tones when no response has been received from the patient within a set time limit, and feedback sounds provided by the central processing subsystem 3 for the user's selection.
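Purely as an illustration of how such prompts could be driven with off-the-shelf speech synthesis, the following minimal sketch uses the pyttsx3 library; the prompt texts, categories and the speak_prompt helper are assumptions and not part of this embodiment.

```python
# Minimal sketch of speaker-module prompt playback via speech synthesis.
# Prompt categories and helper names are illustrative assumptions only.
import pyttsx3

PROMPT_TEMPLATES = {
    "option_hint": "Please choose one of the options shown in the scene.",
    "timeout": "You have not made a choice yet. Please try again.",
    "feedback_correct": "Well done, that is the right choice.",
    "feedback_wrong": "That is not quite right. Let's try once more.",
}

def speak_prompt(kind: str, engine=None) -> None:
    """Synthesize and play one voice prompt of the given kind."""
    engine = engine or pyttsx3.init()
    engine.say(PROMPT_TEMPLATES[kind])
    engine.runAndWait()

if __name__ == "__main__":
    speak_prompt("option_hint")
```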
Preferably, on the basis of any of the above embodiments, the stereoscopic imaging display module 11 is further configured to eliminate the display frame and enlarge the patient's field of view. With virtual reality technology, the virtual scene is displayed without a visible frame, which enhances the autistic patient's immersion, shields the patient from external interference, allows full engagement in the training, and accelerates rehabilitation.
Specifically, in this embodiment, the stereoscopic imaging display module 11 may use a thin circular prism array sheet to achieve the same effect as a bulky curved lens and to magnify the display screen in the stereoscopic imaging display module 11, so that the visual stimulus material presented on the screen fills the whole field of view. The thin circular prism array sheet is placed between the display screen and the eye; light emitted by the display screen passes through the sheet and is spread toward the eye, which removes the display frame from the field of view, allowing the patient to become more immersed in the virtual reality scene and enhancing the treatment. Further, one thin circular prism array sheet is used for each eye, with the two sheets mounted on the left and right sides of the head-mounted device respectively.
Preferably, on the basis of any of the above embodiments, referring to fig. 1, the interactive control subsystem 2 includes: the handle control selection module 21 is used for tracking and recording the position, the movement condition and the button action of the handle and judging the hand motion track and the selection instruction of the patient in different scenes; the head motion sensing module 22 is used for acquiring the spatial position, angle information, speed and acceleration information of the head-mounted stereoscopic imaging subsystem 1 through a six-axis sensor and an optical positioning system; and the information transmission module 23 is used for collecting and transmitting hand motion track information, selection instruction information and head action information to the central processing subsystem 3.
Specifically, the handle control selection module 21 may use an existing motion-sensing handle, which can be held with one or both hands. In this embodiment, the motion trajectory of the patient's hand is transmitted through the handle control selection module 21 to the central processing subsystem 3 and, after processing, forwarded to the stereoscopic imaging display module 11 to be presented to the patient as a cursor, making it easy for the patient to click an option or a trigger point in the virtual scene; the cursor's trajectory is synchronized in real time with the trajectory of the patient's hand. The head action sensing module 22 acquires the spatial position, angle, speed and acceleration information of the head-mounted display through a six-axis sensor and an optical positioning system and transmits it to the central processing subsystem 3 via the information transmission module 23, from which the dwell time of the patient on a given option, and other information about the selection process, can be obtained. The information transmission between the information transmission module 23 and the central processing subsystem 3 may use wired transmission equipment (such as transmission lines and interfaces) or wireless transmission equipment (such as a wireless transmitter, connecting wires and interfaces). By acquiring this information, the system can, in subsequent processing, judge and automatically select the content to be played and carry out the related training automatically, greatly reducing the dependence on professional autism rehabilitation personnel.
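To make the data flow from the interactive control subsystem 2 to the central processing subsystem 3 concrete, the following is a minimal sketch of what one transmitted interaction record might contain; the class and field names are illustrative assumptions rather than a format defined by the embodiment.

```python
# Sketch of one interaction record sent from the interactive control
# subsystem (handle + head sensing) to the central processing subsystem.
# Field names are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class HeadPose:
    position: Vec3      # spatial position from the optical positioning system
    angles: Vec3        # orientation (e.g. yaw, pitch, roll) from the six-axis sensor
    velocity: Vec3
    acceleration: Vec3

@dataclass
class InteractionSample:
    timestamp: float
    hand_trajectory: List[Vec3] = field(default_factory=list)  # cursor/hand path
    selected_option: Optional[int] = None                      # button/selection instruction
    head_poses: List[HeadPose] = field(default_factory=list)   # sampled head motion
    dwell_time_s: float = 0.0                                   # time spent hovering on an option
```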
Preferably, on the basis of any of the above embodiments and referring to fig. 1, the central processing subsystem 3 includes: an evaluation module 31, which collects the selection instructions of each training period obtained by the handle control selection module, evaluates whether the selections in that period conform to socially recognized responses and choices for the corresponding life scene, and produces an evaluation result;
specifically, the evaluation module 31 may use a set of standard research scales, and the standard research scales may evaluate cognitive selection of autism in multiple life scenarios, for example, cognitive of seasons, cognitive of traffic safety, and the like, and specific contents may be selected according to actual situations. The content evaluated by the evaluation module 31 includes, but is not limited to: whether the patient made an action within the correct range, whether the correct option was selected, the number of subjects the patient accumulated to be correct, the number of subjects the patient accumulated to be incorrect, the patient's level of cognition with different types of subjects, and so on.
An adaptive difficulty selection module 32 takes as feature input the hand motion trajectory obtained by the handle control selection module and the head action sensing module, together with the spatial position, angle, speed and acceleration information of the head-mounted stereoscopic imaging subsystem. It encodes the input sequence into a sequence of dense vectors using an embedding layer of a neural network, converts that sequence into a single vector, which contains the user's selection-pattern features, using a long short-term memory (LSTM) network, and then feeds the extracted features into stacked fully connected layers and a softmax classifier to assist in assessing the degree of the user's autism symptoms, to judge a training level appropriate to the user's current knowledge and ability, and to adaptively adjust the difficulty of the cognitive training in the next training period.
An adaptive scene repetition module 33 adjusts the number of repetitions of scenes containing similar knowledge points according to the accumulated evaluation results obtained by the evaluation module 31.
A scene selection module 34 selects scenes according to the results obtained by the adaptive difficulty selection module 32 and the adaptive scene repetition module 33, and transmits them, together with the information on the number of repetitions of similar scenes determined by the adaptive scene repetition module 33, to the head-mounted stereoscopic imaging subsystem 1 for presentation to the patient.
In the adaptive difficulty selection module 32 of the above embodiment, the selection-pattern feature is an extraction and integration of phenotype information, of which the hand motion trajectory and the spatial position, angle, speed and acceleration of the head-mounted stereoscopic imaging subsystem are specific components. Phenotype information is the expression of some or all of an organism's traits; here it refers to the user's behavioral patterns.
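For illustration, the processing pipeline described for the adaptive difficulty selection module 32 (embedding layer, LSTM encoder, stacked fully connected layers and softmax classifier) might be sketched as follows; the layer sizes, the number of output levels and the way the continuous head-pose features are concatenated with the embedded selection events are assumptions, since the embodiment does not fix them.

```python
# Sketch of the adaptive-difficulty model: embed discrete selection events,
# encode the sequence with an LSTM, then classify with stacked fully
# connected layers and softmax. Dimensions and class count are assumptions.
import torch
import torch.nn as nn

class SelectionPatternClassifier(nn.Module):
    def __init__(self, n_event_types: int = 64, embed_dim: int = 32,
                 pose_dim: int = 12, hidden_dim: int = 128, n_levels: int = 4):
        super().__init__()
        self.embedding = nn.Embedding(n_event_types, embed_dim)   # dense vectors for discrete events
        self.lstm = nn.LSTM(embed_dim + pose_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(                                 # stacked fully connected layers
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_levels),
        )

    def forward(self, event_ids: torch.Tensor, pose_feats: torch.Tensor) -> torch.Tensor:
        # event_ids: (batch, seq_len) discrete selection/trajectory events
        # pose_feats: (batch, seq_len, pose_dim) head position/angle/velocity/acceleration
        x = torch.cat([self.embedding(event_ids), pose_feats], dim=-1)
        _, (h_n, _) = self.lstm(x)            # final hidden state summarizes the selection pattern
        logits = self.head(h_n[-1])
        return torch.softmax(logits, dim=-1)  # probabilities over training-difficulty levels

# Usage sketch: scores = SelectionPatternClassifier()(ids, poses); next_level = scores.argmax(-1)
```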
The repetition mechanism of the adaptive scene repetition module 33 in the above embodiments includes, but is not limited to: repeating the same scene that the patient frequently gets wrong; repeating similar scenes that the patient frequently gets wrong; and repeating, according to the elapsed usage time and by analogy with the forgetting curve, scenes that the patient frequently got wrong within a certain period.
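As one possible reading of the forgetting-curve-based repetition rule above, the following minimal sketch uses an exponential retention model; the half-life, error weighting and repetition cap are assumptions for illustration only.

```python
# Sketch of adaptive scene repetition: scenes answered wrongly more often,
# or not practiced for a long time (low modelled retention), get more
# repetitions in the next period. Constants are illustrative assumptions.
import math

def repetitions_for_scene(error_rate: float,
                          hours_since_last_practice: float,
                          base_reps: int = 1,
                          max_reps: int = 5,
                          retention_half_life_h: float = 24.0) -> int:
    """Return how many times a scene of this type should repeat next period."""
    # Exponential retention model analogous to a forgetting curve.
    retention = math.exp(-math.log(2) * hours_since_last_practice / retention_half_life_h)
    # More errors and lower modelled retention both increase repetitions.
    extra = round(max_reps * error_rate * (1.0 - retention) + error_rate)
    return max(base_reps, min(max_reps, base_reps + extra))

# Example: a frequently missed scene not practiced for two days
# print(repetitions_for_scene(error_rate=0.6, hours_since_last_practice=48))
```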
The types of scenes and common-sense life cognitive questions that the scene selection module 34 of the above embodiment provides to the head-mounted stereoscopic imaging subsystem 1 include a safety education category and a life habits category. Safety education questions include, but are not limited to: cognition of traffic rules, safe use of electricity, gas and fire, food hygiene, and emergency calls. Life habits questions include, but are not limited to: cognition of hygiene habits, good eating habits, and a good daily routine.
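Purely for illustration, the question bank supplied by the scene selection module 34 could be organized by the two categories above; the layout and sample entries below are hypothetical and not specified by the embodiment.

```python
# Illustrative organization of the common-sense question bank by category
# and difficulty; entries and layout are assumptions, not patent content.
QUESTION_BANK = {
    "safety_education": {
        "traffic_rules":   [{"difficulty": 1, "scene": "crossing_street",
                             "question": "The light is red. What should you do?"}],
        "electricity_gas_fire": [],
        "food_hygiene":    [],
        "emergency_calls": [],
    },
    "life_habits": {
        "personal_hygiene": [{"difficulty": 1, "scene": "before_meal",
                              "question": "What should you do before eating?"}],
        "eating_habits":    [],
        "daily_routine":    [],
    },
}
```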
Preferably, on the basis of any of the above embodiments, in order to better promote and popularize the system and to avoid limitations arising from the varying skill levels of different educators, the evaluation module 31 performs its evaluation and judgment in an absolute correct/incorrect manner.
Specifically, in this embodiment, in order to help the autistic patient build cognition of simple, non-social life scenarios more clearly, questions and answers widely recognized by society may be adopted.
Based on the system in the above embodiments, another embodiment of the present invention provides a virtual reality technology-based autism training method that uses the virtual reality technology-based autism training system described in any of the above embodiments. Specifically, referring to fig. 2, the method includes: S1, providing the user with cognitive training based on a virtual reality scene, wherein the virtual reality scene is provided by the head-mounted stereoscopic imaging subsystem and selectable common-sense life cognitive questions are set in the scene; S2, collecting the user's hand motion trajectory information, selection instruction information and head action information during the cognitive training; and S3, judging whether the user's responses to different scenes are socially adaptive according to the collected information, adjusting the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmitting the adjustment result to the head-mounted stereoscopic imaging subsystem. Through the user's interactive participation in the training process, this embodiment provides a stronger sense of immersion. At the same time, the system judges and automatically selects the content to be played according to the acquired information, so large numbers of professional autism rehabilitation personnel are not required.
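Read as a closed loop, steps S1 to S3 could be orchestrated roughly as in the following sketch; the subsystem objects and their methods (present_scene, collect_interaction, assess_and_adjust, apply, initial_plan) are hypothetical placeholders for the components of fig. 1, not an interface defined by this embodiment.

```python
# High-level sketch of the S1-S3 training loop. The subsystem interfaces
# are hypothetical placeholders for the components described in Fig. 1.
def run_training_session(headset, controller, central, n_periods: int = 10):
    plan = central.initial_plan()                     # starting scene, questions, difficulty
    for _ in range(n_periods):
        # S1: present the VR scene with its selectable common-sense questions
        headset.present_scene(plan.scene, plan.questions)
        # S2: collect hand trajectory, selection and head-motion information
        sample = controller.collect_interaction()
        # S3: judge social adaptivity of the responses, then adjust scene,
        #     repetition count and question difficulty for the next period
        plan = central.assess_and_adjust(sample, plan)
        headset.apply(plan)
```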
It should be noted that the steps of the method provided by the present invention can be implemented with the corresponding subsystems or modules of the system, and those skilled in the art can refer to the technical solution of the system to implement the step flow of the method. Those skilled in the art will also appreciate that, in addition to implementing the methods provided by the present invention purely as computer-readable program code, the system provided by the present invention and its elements can achieve the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like, by logically programming the method steps. Therefore, the system provided by the present invention may be regarded as a hardware component, and the units it contains for implementing the various functions may be regarded as structures within that hardware component; those units may also be regarded both as software modules implementing the method and as structures within the hardware component.
Based on the method in the foregoing embodiments, in another embodiment of the present invention, there is provided an autism training apparatus based on virtual reality technology, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor is configured to execute the method for training autism based on virtual reality technology.
In the above embodiments, the memory is used to store programs. The memory may be implemented with any storage technology in the art, including but not limited to volatile memory (RAM), such as static random access memory (SRAM) or double data rate synchronous dynamic random access memory (DDR SDRAM); the memory may also include non-volatile memory, such as flash memory. The memory stores computer programs (for example, applications or functional modules implementing the above methods), computer instructions and the like, which may be stored in partitions in one or more memories and may be called by the processor. The processor executes the computer programs stored in the memory to implement the steps of the method in the above embodiments. The processor and the memory may be separate structures or may be integrated into one structure; when they are separate, the memory and the processor may be connected by a bus.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (10)

1. An autism training system based on virtual reality technology, comprising:
a head-mounted stereo imaging subsystem, which provides a virtual reality scene, sets selectable common-sense life cognitive questions in the virtual reality scene, and provides the user with cognitive training based on the virtual reality scene;
the interactive control subsystem collects hand motion track information, selection instruction information and head action information of a user in the cognitive training and transmits the collected hand motion track information, selection instruction information and head action information to the central processing subsystem;
and a central processing subsystem, which judges, based on the information from the interactive control subsystem, whether the user's responses to different scenes are socially adaptive, adjusts the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmits the adjustment result to the head-mounted stereo imaging subsystem.
2. The virtual reality technology-based autism training system of claim 1, wherein the head-mounted stereo imaging subsystem comprises:
a stereoscopic imaging display module, which provides the virtual reality scene and presents common-sense life cognitive questions based on the virtual reality scene for the user to select;
and a loudspeaker module, which produces the sounds to be presented in the virtual reality scene through speech synthesis.
3. The virtual reality technology-based autism training system of claim 2, wherein the stereoscopic imaging display module is further configured to eliminate a display frame and enlarge a user's field of view.
4. The virtual reality technology-based autism training system of claim 1, wherein the interactive control subsystem comprises:
the handle control selection module tracks and records the position, the movement condition and the button action of the handle, and judges hand motion tracks and selection instructions made by a user in different scenes;
the head action sensing module acquires the spatial position, angle, speed and acceleration information of the head-mounted stereo imaging subsystem through a six-axis sensor and an optical positioning system;
and the information transmission module transmits the information obtained by the handle control selection module and the head action sensing module to the central processing subsystem.
5. The virtual reality technology-based autism training system of claim 1, wherein the central processing subsystem comprises:
the evaluation module collects the selection instructions of each training period obtained by the handle control selection module, evaluates whether the selections in that period conform to socially recognized responses and choices for the corresponding life scene, and obtains an evaluation result;
the adaptive difficulty selection module takes as feature input the hand motion trajectory obtained by the handle control selection module and the head action sensing module, together with the spatial position, angle, speed and acceleration information of the head-mounted stereo imaging subsystem; it encodes the input sequence into a sequence of dense vectors using an embedding layer of a neural network, converts that sequence into a single vector, which contains the user's selection-pattern features, using a long short-term memory (LSTM) network, and then feeds the extracted features into stacked fully connected layers and a softmax classifier to assist in assessing the degree of the user's autism symptoms, to judge a training level appropriate to the user's current knowledge and ability, and to adaptively adjust the difficulty of the cognitive training in the next training period.
6. The virtual reality technology-based autism training system of claim 5, wherein the central processing subsystem further comprises:
and the adaptive scene repetition module adjusts the number of repetitions of similar scenes according to the evaluation results accumulated by the evaluation module over several scenes in one period, so that repeated training targets the user's weak knowledge areas and reinforces learning.
7. The virtual reality technology-based autism training system of claim 6, wherein the central processing subsystem further comprises:
and the scene selection module selects scenes according to the results obtained by the adaptive difficulty selection module and the adaptive scene repetition module, adjusts the common-sense cognitive questions according to the difficulty of the cognitive training, and transmits these, together with the number of repetitions of similar scenes determined by the adaptive scene repetition module, to the head-mounted stereo imaging subsystem for presentation to the user.
8. The virtual reality technology-based autism training system according to claim 5, wherein the evaluation module performs its evaluation and judgment in an absolute correct/incorrect manner.
9. A virtual reality technology-based autism training method, wherein the virtual reality technology-based autism training system according to any one of claims 1 to 8 is used, and the method comprises:
providing a user with cognitive training based on a virtual reality scene, wherein the virtual reality scene is provided by a head-mounted stereo imaging subsystem and selectable common-sense life cognitive questions are set in the virtual reality scene;
collecting hand motion track information, selection instruction information and head action information of a user in the cognitive training;
and judging whether the user's responses to different scenes are socially adaptive according to the collected hand motion track information, selection instruction information and head action information, adjusting the scenes, the number of scene repetitions and the difficulty-graded common-sense life cognitive questions according to the judgment result, and transmitting the adjustment result to the head-mounted stereo imaging subsystem.
10. An autism training device based on virtual reality technology, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program is operable to perform the method of claim 9.
CN201911169005.6A 2019-11-25 2019-11-25 Virtual reality technology-based autism training system, method and device Pending CN111009318A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911169005.6A CN111009318A (en) 2019-11-25 2019-11-25 Virtual reality technology-based autism training system, method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911169005.6A CN111009318A (en) 2019-11-25 2019-11-25 Virtual reality technology-based autism training system, method and device

Publications (1)

Publication Number Publication Date
CN111009318A true CN111009318A (en) 2020-04-14

Family

ID=70113109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911169005.6A Pending CN111009318A (en) 2019-11-25 2019-11-25 Virtual reality technology-based autism training system, method and device

Country Status (1)

Country Link
CN (1) CN111009318A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102553222A (en) * 2012-01-13 2012-07-11 南京大学 Brain function feedback training method supporting combat mode and system
CN103268392A (en) * 2013-04-15 2013-08-28 福建中医药大学 Cognitive function training system for scene interaction and application method thereof
CN107884947A (en) * 2017-11-21 2018-04-06 中国人民解放军海军总医院 Auto-stereoscopic mixed reality operation simulation system
CN108536807A (en) * 2018-04-04 2018-09-14 联想(北京)有限公司 A kind of information processing method and device
CN109919712A (en) * 2019-01-30 2019-06-21 上海市精神卫生中心(上海市心理咨询培训中心) Neurodevelopmental disorder shopping training system and its training method
CN110070944A (en) * 2019-05-17 2019-07-30 段新 Training system is assessed based on virtual environment and the social function of virtual role

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113952583A (en) * 2021-12-22 2022-01-21 山东省心岛人工智能科技有限公司 Cognitive training method and system based on VR technology
CN113952583B (en) * 2021-12-22 2022-04-08 山东省心岛人工智能科技有限公司 Cognitive training method and system based on VR technology

Similar Documents

Publication Publication Date Title
US11961197B1 (en) XR health platform, system and method
US11815951B2 (en) System and method for enhanced training using a virtual reality environment and bio-signal data
Goldstein et al. Empathy: Development, training, and consequences
CN110890140A (en) Virtual reality-based autism rehabilitation training and capability assessment system and method
CN108919950A (en) Autism children based on Kinect interact device for image and method
US20040197750A1 (en) Methods for computer-assisted role-playing of life skills simulations
CN110931111A (en) Autism auxiliary intervention system and method based on virtual reality and multi-mode information
Grillo An online telepractice model for the prevention of voice disorders in vocally healthy student teachers evaluated by a smartphone application
Konstantareas et al. Simultaneous communication with autistic and other severely dysfunctional nonverbal children
Garzotto et al. Motion-based touchless interaction for ASD children: a case study
CN107402633B (en) A kind of safety education method based on image simulation technology
US11393357B2 (en) Systems and methods to measure and enhance human engagement and cognition
Spooner et al. Teaching Children to Listen: A practical approach to developing children's listening skills
CN110930780A (en) Virtual autism teaching system, method and equipment based on virtual reality technology
CN114341964A (en) System and method for monitoring and teaching children with autism series disorders
CN111009318A (en) Virtual reality technology-based autism training system, method and device
CN113284625A (en) Training method for social communication function of autism spectrum disorder children
Ramsdell-Hudock et al. Utterance duration as it relates to communicative variables in infant vocal development
US20220254506A1 (en) Extended reality systems and methods for special needs education and therapy
Teeters Use of a wearable camera system in conversation: Toward a companion tool for social-emotional learning in autism
CN111477055A (en) Virtual reality technology-based teacher training system and method
Das et al. An automated speech-language therapy tool with interactive virtual agent and peer-to-peer feedback
Tadayon A person-centric design framework for at-home motor learning in serious games
Solano Bridging Communication Deficits in Preschool Children Diagnosed with Autism Spectrum Disorder, a Review of Literature in Technology Aided Instruction and Intervention
KR102217990B1 (en) System for foreign language sleep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination