CN107168525B - System and method for assisting autistic children in pairing training by using fine gesture recognition device


Info

Publication number
CN107168525B
CN107168525B (application CN201710265310.XA)
Authority
CN
China
Prior art keywords
training
model
trainee
hand
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710265310.XA
Other languages
Chinese (zh)
Other versions
CN107168525A (en)
Inventor
蔡苏
杨阳
王涛
胡晓毅
任媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Normal University
Priority to CN201710265310.XA
Publication of CN107168525A
Application granted
Publication of CN107168525B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/653Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a system and a method for assisting autistic children in pairing training by using a fine gesture recognition device. The motion capture module performs fine gesture recognition on the grasping action with the recognition device, captures the position and motion of the hand, maps them in real time onto a virtual three-dimensional hand model, and presents them at the corresponding position in a virtual scene. The invention aims to assist the trainer in arranging the training process and recording training data, reducing the trainer's burden, while gesture recognition keeps the trainee's actions true and natural and improves the training effect. The invention is applied to the pairing training of autistic children, has low equipment cost and is easy to popularize.

Description

System and method for assisting autistic children in pairing training by using fine gesture recognition device
Technical Field
The invention belongs to the field of basic-skill training for children with autism in special education, and particularly relates to a system and method for assisting autistic children in pairing training by using a fine gesture recognition device.
Background
Human-computer natural interaction technology is abbreviated as natural interaction (HCNI; also human-machine natural interaction, HMNI). Natural interaction has become a general term for the wide variety of interaction styles that control a system using gestures or other body movements as input. (see O'Hara, K., Harper, R., Mentis, H., et al. (2013). On the naturalness of touchless: Putting the "interaction" back into NUI. ACM Transactions on Computer-Human Interaction, 20(1).)
In the Horizon Reports published by the New Media Consortium (NMC) from 2010 to 2016, technologies with natural interaction at their core, such as gesture computing, wearable devices and virtual assistants, have repeatedly been identified as a focus of attention in the field of education (http://www.nmc.org/nmc-horizons/). The introduction of low-cost devices such as the Microsoft Kinect and the Leap Motion has greatly facilitated the deployment of natural interaction technologies. (see Pescarin, S., Pietroni, E., Rescic, L., & Omar, K. (2013). NICH: A preliminary study on natural interaction applied to cultural heritage contexts. Paper presented at the Digital Heritage International Congress.)
With the advance of education informatization, the education field is paying increasing attention to new technologies. As a development direction of human-computer interaction, natural interaction technology provides new ideas for the design, development, application, management and evaluation of educational methods, processes and resources, prompting research on the modes and roles of natural interaction applications in education.
At present, educational applications of natural interaction technology have produced research results in three kinds of teaching scenarios: those where key-press or touch-screen interaction is difficult or impossible, the teaching of motor skills, and teaching that drives cognitive development through motor skills. These findings suggest that natural interaction technology applied in education can mainly play the following roles:
(1) Recording changes in students and tracking their learning. (see Li & Wang (2014). A children's attention assessment system based on Kinect somatosensory interaction. Modern Educational Technology, 24(7), 120-.)
(2) Increasing the fun of teaching and improving students' learning motivation. (see Li, T., et al. (2016). Exploration of somatosensory interaction technology applied in the field of education. Computer Programming Skills & Maintenance, (11), 82.)
(3) Reducing operational interference and improving students' learning efficiency and outcomes. (see Cho, O.H. (2014), a study of a Leap Motion-based serious game; and Zhang, Y., Liu, S., Tao, L., Yu, C., Shi, Y., & Xu, Y. (2015). ChinAR: Facilitating Chinese guqin learning through interactive projected augmentation. Paper presented at the International Symposium of Chinese CHI.)
(4) Reducing teachers' workload, so that teachers can pay more attention to students and interact with them closely. (see Li, T., et al. (2016). Exploration of somatosensory interaction technology applied in the field of education. Computer Programming Skills & Maintenance, (11), 82.)
In the daily basic-skill training of autistic children, pairing training is the most important. Traditional pairing training with real objects and physical models suffers from the large number of models required, the difficulty of collecting and arranging them, and the need for teachers to undertake the tasks of guiding training, arranging the process, providing feedback and recording large amounts of data.
Pairing training with flat picture cards, in turn, reduces the trainee's cognitive cues about object features. Computer-based pairing training builds on picture-card training but eliminates the trainee's grasping action; replacing it with meaningless clicking reduces both the motor training and the motion-driven cognitive processing. Zhu et al. designed a Leap Motion based flat pairing game with three colors (red, green and blue): the user receives a bead of one of the three colors and must place it in the box of the same color. However, the game is a fixed round with fixed content and difficulty; it can only train red-green-blue color matching and cannot adapt to the developing abilities of autistic children. Moreover, its flat design cannot show the gesture changes of the autistic child or intuitively reflect the interaction between hand and object. (see Zhu, G., Cai, S., Ma, Y., & Liu, E. (2015, 6-9 July). A Series of Leap Motion-Based Matching Games for Enhancing the Fine Motor Skills of Children with Autism. Paper presented at the 2015 IEEE 15th International Conference on Advanced Learning Technologies (ICALT).)
In summary, the prior art has the following disadvantages:
(1) teachers have a heavy workload, undertaking the work of guiding training, arranging the process, providing feedback and recording large amounts of data;
(2) the paired items are flat photos or models, losing feature information of the real articles;
(3) the interaction differs from everyday actions, so the goal of training fine hand movements cannot be achieved;
(4) the content and difficulty are fixed and cannot adapt to the developing abilities of autistic children.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to overcome the difficulty of collecting and presenting physical models and the heavy teacher workload in current pairing training, the loss of the grasping action in mouse-click pairing training, and the simple, abstract presentation of fixed-round flat games, the invention provides a system that uses a fine gesture recognition device to assist autistic children in pairing training. While preserving complete cognitive and motor training, it relieves the trainer of arranging the training process and recording data, so that the trainer can concentrate on guiding the autistic child. The training equipment is simple and easy to operate, and is suitable for the daily basic-skill training of autistic children.
The technical scheme of the invention is as follows: a system for assisting autistic children in pairing training by using a fine gesture recognition device comprises a computer and the fine gesture recognition device. A pairing training system runs on the computer and comprises an input module, a data processing and visualization module, a motion capture module, an output module and a model library. Wherein:
the input module implements parameter setting: the trainer selects the training target object, the category and number of distractors, and the number of repetitions by key selection or text input; the set parameters are passed to the data processing and visualization module and the output module;
the data processing and visualization module imports models from the model library, processes model data in real time and visualizes model changes; it judges from the parameters set in the input module whether the number of repetitions has been reached, calls models from the model library according to those parameters in each trial, and presents simulated 3D models of the target object and the distractors; it also maintains a simulated 3D hand model: after the motion capture module recognizes the position and motion of the trainee's hand through the fine gesture recognition device, the hand's position and motion are mapped onto the hand model, the interaction between the hand model and the object models is evaluated, and the system judges whether the correct model has been grabbed to the correct position, i.e. whether the pairing is correct once the grabbed model collides with the target model (a judgment sketch follows this module list);
the motion capture module captures the position and motion of the trainee's hand in real time; it performs fine gesture recognition on the grasping action with the recognition device, and, working with the data processing and visualization module, maps the captured position and motion onto the simulated 3D hand model in real time and presents them at the corresponding position in the virtual scene, replacing the interaction between the trainee's hand and the simulated object models with the interaction between the hand model and the object models;
the output module presents feedback and reports: in each pairing trial it outputs correct or incorrect sound and image feedback according to the judgment of the data processing and visualization module; after the set number of repetitions is completed, it outputs a training report containing the categories and numbers of target objects and distractors, the number of trials, the accuracy rate and the training time;
the model library stores the simulated 3D object models for the data processing and visualization module to call.
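For illustration only, the pairing judgment above reduces to a collision test followed by a name comparison. The following is a minimal Python sketch of that logic, assuming bounding-sphere collision; the class, function and field names (Model3D, collides, judge_pairing) are assumptions of this example, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Model3D:
    name: str        # unique pairing name, e.g. "apple"
    position: tuple  # (x, y, z) centre in the virtual scene
    radius: float    # bounding-sphere radius used for collision tests

def collides(a: Model3D, b: Model3D) -> bool:
    """Bounding-sphere collision test between two simulated 3D models."""
    dx, dy, dz = (a.position[i] - b.position[i] for i in range(3))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 <= a.radius + b.radius

def judge_pairing(grabbed: Model3D, target: Model3D) -> Optional[bool]:
    """None while no collision has happened yet; True/False once the
    grabbed model touches the target (same name means correct pairing)."""
    if not collides(grabbed, target):
        return None
    return grabbed.name == target.name
```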
The fine gesture recognition device uses a depth camera to recognize the trainee's gestures.
The simulated 3D object models under each category in the model library comprise at least 10 different articles of that category. All object models and the hand model represent the shape, color and other characteristics of the real objects and are clearly distinguishable from one another.
The motion and position of the simulated 3D hand model in the motion capture module reflect changes in the motion and position of the trainee's hand in real time.
In each pairing trial, three optional channels (sound signal, key control or gesture recognition) allow the trainer to assist in judging correct or incorrect responses, refreshing the scene, or skipping the trial.
Correct or incorrect feedback is given to the trainee in the output module in the form of images and sound.
Within one pairing session, when the number of distractors is not 0, the distractors should be changed randomly from trial to trial.
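A minimal sketch of this random-change rule, assuming the model library is keyed by category; the names (model_library, draw_distractors) are illustrative assumptions of this example.

```python
import random

def draw_distractors(model_library: dict, category: str,
                     target: str, count: int) -> list:
    """Draw `count` distractors from `category`, never the target itself;
    called once per trial so the distractor set varies randomly each time."""
    if count == 0:
        return []
    pool = [item for item in model_library[category] if item != target]
    return random.sample(pool, count)  # assumes the category holds enough items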
A method for assisting autistic children in pairing training by using a fine gesture recognition device comprises the following steps:
Step (1): the trainer sets the pairing-training parameters by key or text input, including the categories and numbers of target objects and distractors and the number of repetitions; the parameters are transmitted to the data processing and visualization module and the output module.
Step (2): the data processing and visualization module calls the corresponding models from the model library according to the parameters from the input module and places them at preset positions; the trainer guides the trainee to pair correctly, and may give verbal instructions and assist the trainee's hand movements, but this assistance should be gradually withdrawn in subsequent rounds so that the trainee can complete the training independently.
Step (3): under the trainer's guidance, the trainee places a hand in the recognition area of the fine gesture recognition device; the motion capture module captures the motion and position of the trainee's hand with the device, and a simulated 3D hand model is introduced at a preset area; the trainee adjusts the hand's motion and position according to the motion and position of the hand model presented by the computer, so that the hand model grabs the selected model to the target object; throughout this process the motion capture module acquires the hand's motion and position in real time through the device, and the data processing and visualization module maps them onto the hand model in real time.
Step (4): after the set number of repetitions is completed, the trainer sets the parameters of a new round of pairing training according to the data presented by the computer, and steps (2) to (4) are repeated until the day's training is finished.
In step (1), the number of distractors set by the trainer increases one by one from 0; one distractor is added when the trainee independently achieves an accuracy above 80% in a round. The distractor categories progress from categories very different from the target object back to the target's own category, and the distractor category is changed when the accuracy exceeds 80% in a round. The number of repetitions starts at 10; when the trainee has basically mastered a pairing and needs consolidation (accuracy above 50% but below 80%), it can be adjusted to 5-10; when the trainee has not mastered a pairing and needs extensive practice (accuracy below 50%), it can be adjusted to 10-15.
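Under the thresholds stated above (80% to advance, 50% separating consolidation from retraining), the progression can be sketched as follows; the function and returned field names are assumptions of this example, with repetition counts matching the worked example later in this description (5 for consolidation, 15 for retraining).

```python
def next_session(accuracy: float, distractors: int) -> dict:
    """Suggest the next round's parameters from the last round's accuracy."""
    if accuracy >= 0.8:
        # mastered: add one distractor (or change its category) at 10 trials
        return {"distractors": distractors + 1, "repetitions": 10}
    if accuracy > 0.5:
        # basically mastered, needs consolidation: 5-10 trials
        return {"distractors": distractors, "repetitions": 5}
    # not mastered, needs extensive practice: 10-15 trials
    return {"distractors": distractors, "repetitions": 15}
```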
In step (3), the trainee completes the interaction with the system directly through natural hand movements, without a mouse, keyboard or other special marker equipment.
Compared with the prior art, the invention has the beneficial effects that:
(1) the system is not a fixed single round: the content and difficulty of each round can be changed in the input module, making it a developmental training system that adapts to the trainee's changing abilities, in clear contrast to round-based games with fixed content and difficulty;
(2) the visual space is three-dimensional and all models are 3D models; 3D models are easier to obtain than physical models, resemble real objects more closely than pictures do, and present object features more completely than the flat presentation of existing games;
(3) the simulated 3D hand model presents the changes in the trainee's hand motion and position and represents the real-time interaction between hand and model, whereas the user's hand motion cannot be seen intuitively in the existing game.
The invention applies natural interaction technology to pairing training through a fine gesture recognition device, achieves the goal of pairing training for autistic children, and helps to remedy the defects of existing forms of pairing training.
(1) Optimizing the teacher's work: in traditional pairing training the teacher must guide students, arrange the process and record data, a heavy task; the system takes over process arrangement and data recording so that the teacher can focus on guidance.
(2) Preserving complete cognitive and motor training: the invention requires the autistic child to select and move articles with the same grasping action as in daily life, which achieves the goal of motor-skill training and drives deep cognitive processing through meaningful actions.
(3) Promoting generalization of skills: the simulated 3D hand and object models are virtual models; they are easier to obtain than physical models, reflect object characteristics more realistically than pictures, and promote the transfer of basic skills into daily life.
(4) Adapting to changes in trainee ability: the system is not a fixed single round; through the input module the content and difficulty of each round can be changed, making it a developmental training system that adapts to the trainee's changing abilities.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a flow chart of a method of use of the present invention;
FIG. 3 is an example of a model rendering diagram of the present invention;
FIG. 4 is an example of a real-time mapping of gestures in accordance with the present invention;
FIG. 5 is an example of a correct feedback image of the present invention;
FIG. 6 is an example of an error feedback image of the present invention;
FIG. 7 is a report example of the present invention.
Detailed Description
As shown in fig. 1, the system for assisting autistic children in pairing training by using a fine gesture recognition device according to the present invention includes a computer and a fine gesture recognition device. The computer runs a pairing training system comprising an input module, a data processing and visualization module, a motion capture module, an output module and a model library. Wherein:
the input module of the invention is specifically realized as follows:
the input module needs to realize the function of setting parameters, and a developer can select the training target object, the type and the number of the interferent objects and the training times in a key selection or text input mode when using the device by setting keys or a text box. The data result of each key or text input should be stored by the system, can be transferred and called.
The function is as follows:
(1) the data is transmitted to a data processing and visualization module for calling out a corresponding model from the model library and carrying out training for a set number of times.
(2) The data will be passed to the output module as items in a report to be fed back to the trainee and trainee after the training is completed.
The data processing and visualization module of the invention is specifically realized as follows:
the data processing and visualization module needs to implement the following functions:
(1) judging whether the training times are reached according to the parameters set by the input module;
(2) and calling the model from the model library according to the set parameters in each training, and presenting the simulated 3D model of the target object and the interferent.
(3) And setting a hand simulation 3D model, and mapping the position and the motion of the hand of the trainee to the hand model after the motion capture module identifies the position and the motion of the hand of the trainee through the fine gesture recognition device.
(4) And (4) judging the interaction between the hand model and the object model, and judging whether to grab the correct model to the correct position, namely judging whether the pairing is correct after the model and the object model collide.
The developer records, in the program, the number of completed trials and the number of correct trials. Before each trial starts, the program checks whether the number of completed trials equals the number of repetitions obtained from the input module; if so, control passes to the output module; if not, a new pairing trial begins: the corresponding models are called according to the input-module data and visualized in the virtual 3-dimensional space. The program receives data from the fine gesture recognition device, acquires the motion and position of the trainee's hand in real time, maps them onto the hand model, and visualizes the changes of the hand model in real time. By setting model attributes, the hand model can interact with the models serving as selection objects: at certain gestures and positions the hand model grabs a selection model, and the grabbed model then moves synchronously with the hand. As soon as the grabbed model touches the target model, pairing feedback is triggered: the program checks whether the two colliding models have the same name; if so, the output module outputs correct feedback and both the trial counter and the correct counter are incremented by 1; if not, the output module outputs incorrect feedback and only the trial counter is incremented. The developer also records the time when the first trial starts and the time when the completed-trial count reaches the set number of repetitions.
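This bookkeeping can be sketched as below, assuming a run_trial() callback that blocks until the grabbed model collides with the target and returns the two model names; all names here are illustrative assumptions, not the patent's code.

```python
import time

def run_session(repetitions: int, run_trial) -> dict:
    """Run the set number of trials, count correct pairings, time the session."""
    started = time.time()
    correct = 0
    for _ in range(repetitions):
        grabbed_name, target_name = run_trial()  # blocks until a collision
        if grabbed_name == target_name:
            correct += 1  # correct feedback; correct counter advances
        # incorrect feedback otherwise; the trial counter advances either way
    return {"trials": repetitions,
            "correct": correct,
            "accuracy": correct / repetitions,
            "duration_s": time.time() - started}
```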
The action capture module of the invention is specifically realized as follows:
the motion capture module has the functions of performing fine gesture recognition on the capture motion by using the recognition device, capturing the position and the motion of the hand, mapping the position and the motion to the virtual three-dimensional model of the hand in real time, and presenting the position and the motion to a relative position in a virtual scene. Interacting with the data processing and visualization module to replace the interaction of the trainee hand and the 3D model with the interaction of the hand model and the 3D model.
The developer uses a fine gesture recognition device with a depth camera, such as the Leap Motion, to capture the motion and position of the trainee's hands. The Leap Motion is a hand-tracking somatosensory controller released by Leap Motion, Inc. in 2013 that can recognize hand motion within about one meter. With the SDK provided by the company, which contains a rich API, developers can capture hand motion and position, set up the hand model, and map the trainee's hand motion and position onto the virtual three-dimensional hand model in real time.
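As one hedged illustration, the legacy Leap Motion Python SDK (v2) exposed a Controller that could be polled frame by frame; the sketch below follows that API (palm_position, grab_strength), but the grab threshold of 0.8 and the helper name are assumptions of this example, not the patent's code.

```python
import Leap  # legacy Leap Motion Python SDK binding

def poll_hand(controller: Leap.Controller):
    """Return (palm position, is_grabbing) for the first tracked hand,
    or None when no hand is in the recognition area."""
    frame = controller.frame()
    if frame.hands.is_empty:
        return None
    hand = frame.hands[0]
    pos = (hand.palm_position.x, hand.palm_position.y, hand.palm_position.z)
    return pos, hand.grab_strength > 0.8  # near-closed fist counts as a grab
```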
The output module of the invention is specifically realized as follows:
the output module functions as:
(1) outputting correct or incorrect sound and image feedback according to the judgment result of the data processing and visualization module in each pairing training;
(2) and after the training times are finished, outputting a training report, wherein the training report comprises the types and the number of the target objects and the interferents, the training times, the correct rate and the training time.
After each trial, the developer outputs correct or incorrect feedback according to the name comparison of the two colliding models in the data processing and visualization module, and outputs the training report once the check before a trial finds that the completed-trial count equals the number of repetitions set in the input module.
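The report items listed above can be assembled as in the following sketch, reusing the statistics dictionary from the session sketch earlier; the formatting and field names are illustrative assumptions.

```python
def format_report(target: str, distractor_category: str,
                  distractor_count: int, stats: dict) -> str:
    """Render the report: object types/numbers, trials, accuracy, time."""
    return ("Target: {t} | Distractors: {k} x {n}\n"
            "Trials: {tr} | Correct: {c} | "
            "Accuracy: {a:.0%} | Duration: {d:.0f} s").format(
                t=target, k=distractor_category, n=distractor_count,
                tr=stats["trials"], c=stats["correct"],
                a=stats["accuracy"], d=stats["duration_s"])
```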
The model library of the invention is specifically realized as follows:
and storing the 3D model of the preset object for the data processing and visualization module to call.
The developer needs to have a unique name or number for each model in the model library and ensure that each model can be called by the program according to the name or number.
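A minimal sketch of this unique-name requirement, assuming models are stored as files keyed by a category-qualified name; the keys and file paths are hypothetical.

```python
MODEL_LIBRARY = {
    "fruit/apple":  "models/fruit/apple.fbx",
    "fruit/banana": "models/fruit/banana.fbx",
    "vehicle/car":  "models/vehicle/car.fbx",
}

def model_path(key: str) -> str:
    """Look a model up by its unique name; fail loudly on a missing key."""
    try:
        return MODEL_LIBRARY[key]
    except KeyError:
        raise KeyError("model '%s' is not registered in the library" % key)
```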
The method of the invention for assisting autistic children in pairing training by using a fine gesture recognition device comprises the following implementation steps:
Step (1): the trainer sets the pairing-training parameters, including the categories and numbers of target objects and distractors and the number of repetitions, by key or text input.
Step (2): the trainer guides the trainee to pair correctly, and may give verbal instructions and assist the trainee's hand movements; this assistance should be gradually withdrawn in subsequent rounds so that the trainee can complete the training independently.
Step (3): under the trainer's guidance, the trainee's hand motion and position are recognized by the fine gesture recognition device, and the trainee adjusts the hand's motion and position according to the motion and position of the hand model presented by the computer, so that the hand model grabs the selected model to the target object.
Step (4): after the set number of repetitions is completed, the trainer sets the parameters of a new round of pairing training according to the data presented by the computer, and steps (2) to (4) are repeated until the day's training is finished.
A concrete training session using the system and method of the invention proceeds as follows:
(1) The trainer selects the target object "fruit-apple", distractor count "0", repetitions "10", and "start"; the called models are presented as shown in fig. 3.
(2) The trainer asks the trainee to move the apple 3D model at the bottom of the screen onto the apple 3D model above.
(3) The fine gesture recognition device captures the trainee's hand motion and position and maps them in real time onto the 3D hand model, so that the interaction between the hand model and the apple model represents the interaction between hand and apple, as shown in fig. 4.
(4) If the trainee correctly grabs the lower apple model, moves it to the upper apple model and releases it, image (fig. 5) and sound feedback indicate that the pairing succeeded. If the trainee performs no operation within 10 seconds, the trainer triggers the image (fig. 6) and sound feedback through the assisted-judgment channel, indicating that the pairing failed (see the timing sketch after this example).
(5) Steps (2) to (4) are repeated until the total count reaches 10.
(6) The next round is set according to the accuracy in the report, shown in fig. 7. If the accuracy is below 80% and the trainee has basically mastered the pairing and needs consolidation, select target "fruit-apple", distractor count "0", repetitions "5", then "start"; if the trainee has not mastered it and needs extensive practice, select target "fruit-apple", distractor count "0", repetitions "15", then "start"; and repeat steps (2) to (6). If the accuracy reaches 80%, select target "fruit-apple", distractor category "vehicle" (any option other than fruit), distractor count "1", repetitions "10", then "start".
(7) The trainer asks the trainee to grab the apple 3D model among the 3D models at the bottom of the screen and move it onto the apple 3D model above.
(8) The fine gesture recognition device captures the trainee's hand motion and position and maps them in real time onto the 3D hand model, so that the interactions between the hand model and all the models represent the interactions between the hand and the objects, as in fig. 4.
(9) If the trainee correctly grabs the lower apple model, moves it to the upper apple model and releases it, image and sound feedback indicate that the pairing succeeded. If the trainee performs no operation within 10 seconds, the trainer triggers image and sound feedback through assisted judgment, indicating that the pairing failed.
(10) Steps (7) to (9) are repeated until the total count reaches 10.
(11) The next round is set according to the accuracy in the report, shown in fig. 7. If the accuracy is below 80% and the trainee has basically mastered the pairing and needs consolidation, select target "fruit-apple", distractor "vehicle" (same as last time), distractor count "1", repetitions "5", then "start"; if the trainee has not mastered it and needs extensive practice, select target "fruit-apple", distractor "vehicle" (same as last time), distractor count "1", repetitions "15", then "start"; and repeat steps (1) to (11). If the accuracy reaches 80%, select target "fruit-apple", distractor "vehicle" (same as last time), distractor count "2", repetitions "10", then "start", and repeat steps (7) to (11).
(12) When the trainee reaches 80% accuracy pairing the apple with distractor count "2", the trainer changes the distractor category (to any category other than those already passed at 80% and other than "fruit") and repeats steps (7) to (12).
(13) When the trainee reaches 80% accuracy with distractor count "2" for several categories other than "fruit", the distractor category is changed to "fruit" itself, and steps (7) to (11) are repeated until the accuracy with target "fruit-apple", distractor category "fruit" and distractor count "2" reaches 80%, completing the pairing training for one article.
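The 10-second no-operation rule used in steps (4) and (9) can be sketched as a per-trial deadline; poll(), the polling rate and the timeout handling are illustrative assumptions of this example.

```python
import time

def wait_for_grab(poll, timeout_s: float = 10.0):
    """poll() returns a grabbed model's name or None; give up at the deadline."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        name = poll()
        if name is not None:
            return name      # the trainee grabbed something in time
        time.sleep(0.05)     # ~20 Hz polling, purely illustrative
    return None              # timeout: the trial is scored as failed
```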
When developing the pairing training system, it is necessary to:
(1) make the object models used for pairing and the hand model with 3D modeling tools, all models being near-realistic;
(2) build the settings menu for "target object", "distractor category", "distractor count" and "repetitions";
(3) build the 3D virtual space, in which the corresponding models are called according to the menu and the hand model is called once the fine gesture recognition device recognizes the trainee's hand;
(4) change the motion and position of the hand model in real time according to the trainee's hand, and render the position of any model it interacts with in real time;
(5) provide image and sound feedback for correct and incorrect pairings;
(6) trigger the feedback when the lower candidate model touches the upper target model;
(7) present the training data when the trial count reaches the set number of repetitions, including the categories and numbers of target objects and distractors, the number of trials, the accuracy rate and the training time;
(8) provide a means for the trainer to assist in judging correct or incorrect responses, refresh the scene and skip a trial;
(9) when mapping motion in real physical space onto the motion of the virtual models (virtual arm and virtual article), smooth the spatial changes of the virtual models with an interpolation algorithm, as sketched below.
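Item (9) can be realized, for example, with simple exponential interpolation; in the following sketch the smoothing factor is an assumed tuning parameter, not a value taken from the patent.

```python
def smooth(prev: tuple, raw: tuple, alpha: float = 0.3) -> tuple:
    """Each frame, move a fraction `alpha` from the previously rendered
    position toward the newly captured one, damping sensor jitter."""
    return tuple(p + alpha * (r - p) for p, r in zip(prev, raw))
```

Smaller values of alpha give smoother but laggier motion; larger values track the hand more tightly at the cost of visible jitter.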
In short, the system is not a fixed single round; the content and difficulty of each round can be changed in the input module, making it a developmental training system that adapts to the trainee's changing abilities, in clear contrast to round-based games with fixed content and difficulty. The visual space is three-dimensional and all models are 3D models, which are easier to obtain than physical models, resemble real objects more closely than pictures do, and present object features more completely than flat game presentations. The hand model presents the changes in the trainee's hand motion and position and represents the real-time interaction between hand and model, which cannot be seen intuitively in the existing game.
The invention applies natural interaction technology to pairing training through a fine gesture recognition device, achieves the goal of pairing training for autistic children, and helps to remedy the defects of existing forms of pairing training.

Claims (9)

1. A system for assisting autistic children in pairing training by using a fine gesture recognition device, characterized in that: the system comprises a computer and a fine gesture recognition device, wherein a pairing training system runs on the computer and comprises an input module, a data processing and visualization module, a motion capture module, an output module and a model library;
the input module implements parameter setting: the trainer selects the training target object, the category and number of distractors, and the number of repetitions by key selection or text input; the set parameters are passed to the data processing and visualization module and the output module;
the data processing and visualization module imports models from the model library, processes model data in real time and visualizes model changes; it judges from the parameters set in the input module whether the number of repetitions has been reached, calls models from the model library according to those parameters in each trial, and presents simulated 3D models of the target object and the distractors; it also maintains a simulated 3D hand model: after the motion capture module recognizes the position and motion of the trainee's hand through the fine gesture recognition device, the hand's position and motion are mapped onto the hand model, the interaction between the hand model and the object models is evaluated, and the system judges whether the correct model has been grabbed to the correct position, i.e. whether the pairing is correct once the grabbed model collides with the target model;
the motion capture module captures the position and motion of the trainee's hand in real time; it performs fine gesture recognition on the grasping action with the recognition device, and, working with the data processing and visualization module, maps the captured position and motion onto the simulated 3D hand model in real time and presents them at the corresponding position in the virtual scene, replacing the interaction between the trainee's hand and the simulated object models with the interaction between the hand model and the object models;
the output module presents feedback and reports: in each pairing trial it outputs correct or incorrect sound and image feedback according to the judgment of the data processing and visualization module; after the set number of repetitions is completed, it outputs a training report containing the categories and numbers of target objects and distractors, the number of trials, the accuracy rate and the training time; within one pairing session, when the number of distractors is not 0, the distractors should be changed randomly from trial to trial;
the model library stores the simulated 3D object models for the data processing and visualization module to call.
2. The system for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 1, wherein: the fine gesture recognition device uses a depth camera to recognize the trainee's gestures.
3. The system for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 1, wherein: the simulated 3D object models under each category in the model library comprise at least 10 different articles of that category, and all object models and the hand model represent the shape and color characteristics of the real objects and are clearly distinguishable from one another.
4. The system for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 1, wherein: the motion and position of the simulated 3D hand model in the motion capture module reflect changes in the motion and position of the trainee's hand in real time.
5. The system for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 1, wherein: in each pairing trial, three optional channels (sound signal, key control or gesture recognition) allow the trainer to assist in judging correct or incorrect responses, refreshing the scene, or skipping the trial.
6. The system for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 1, wherein: correct or incorrect feedback is given to the trainee in the output module in the form of images and sound.
7. A method for assisting autistic children in pairing training by using a fine gesture recognition device, characterized by comprising the following steps:
step (1): the trainer sets the pairing-training parameters by key or text input, including the categories and numbers of target objects and distractors and the number of repetitions; the parameters are transmitted to the data processing and visualization module and the output module;
step (2): the data processing and visualization module calls the corresponding models from the model library according to the parameters from the input module and places them at preset positions; the trainer guides the trainee to pair correctly, giving verbal instructions and assisting the trainee's hand movements, but this assistance is gradually withdrawn in subsequent rounds so that the trainee can complete the training independently;
step (3): under the trainer's guidance, the trainee places a hand in the recognition area of the fine gesture recognition device; the motion capture module captures the motion and position of the trainee's hand with the device, and a simulated 3D hand model is introduced at a preset area; the trainee adjusts the hand's motion and position according to the motion and position of the hand model presented by the computer, so that the hand model grabs the selected model to the target object; throughout this process the motion capture module acquires the hand's motion and position in real time through the device, and the data processing and visualization module maps them onto the hand model in real time;
step (4): after the set number of repetitions is completed, the trainer sets the parameters of a new round of pairing training according to the data presented by the computer, and steps (2) to (4) are repeated until the day's training is finished.
8. The method for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 7, wherein: in step (1), the number of distractors set by the trainer increases one by one from 0, one distractor being added when the trainee independently achieves an accuracy above 80% in a round; the distractor categories progress from categories very different from the target object back to the target's own category, the category being changed when the accuracy exceeds 80% in a round; the number of repetitions starts at 10, is adjusted to 5-10 when the trainee needs consolidation of a pairing and the accuracy is above 50% but below 80%, and is adjusted to 10-15 when the trainee has not mastered a pairing and needs extensive practice and the accuracy is below 50%.
9. The method for assisting autistic children in pairing training by using a fine gesture recognition device as claimed in claim 7, wherein: in step (3), the trainee completes the interaction with the system directly through natural movements of both hands, without a mouse or keyboard.
CN201710265310.XA 2017-04-21 2017-04-21 System and method for assisting autistic children in pairing training by using fine gesture recognition device Active CN107168525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710265310.XA CN107168525B (en) 2017-04-21 2017-04-21 System and method for assisting autistic children in pairing training by using fine gesture recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710265310.XA CN107168525B (en) 2017-04-21 2017-04-21 System and method for assisting autistic children in pairing training by using fine gesture recognition device

Publications (2)

Publication Number Publication Date
CN107168525A CN107168525A (en) 2017-09-15
CN107168525B (en) 2020-10-30

Family

ID=59813357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710265310.XA Active CN107168525B (en) 2017-04-21 2017-04-21 System and method for assisting autistic children in pairing training by using fine gesture recognition device

Country Status (1)

Country Link
CN (1) CN107168525B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7073702B2 (en) * 2017-12-11 2022-05-24 富士フイルムビジネスイノベーション株式会社 Information processing equipment and information processing programs
CN107982619A (en) * 2017-12-21 2018-05-04 中国地质大学(武汉) A kind of autism children rehabilitation training interactive product
CN109550233A (en) * 2018-11-15 2019-04-02 东南大学 Autism child attention training system based on augmented reality
CN109846497A (en) * 2019-01-21 2019-06-07 上海交通大学 A kind of early screen method of self-closing disease auxiliary and device of view-based access control model
CN110215373A (en) * 2019-06-04 2019-09-10 北京虚实空间科技有限公司 It is a kind of based on the training system and method that immerse vision
CN111145865A (en) * 2019-12-26 2020-05-12 中国科学院合肥物质科学研究院 Vision-based hand fine motion training guidance system and method
CN114495643B (en) * 2022-01-25 2024-05-14 福建中科多特健康科技有限公司 Training assisting method and storage device
CN115054903A (en) * 2022-06-30 2022-09-16 北京工业大学 Virtual game rehabilitation system and method for active rehabilitation of stroke patient
CN116312081B (en) * 2022-09-07 2024-05-07 中山大学 Child autism treatment device based on ball game
CN116440382B (en) * 2023-03-14 2024-01-09 北京阿叟阿巴科技有限公司 Autism intervention system and method based on multilayer reinforcement strategy

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104111861A (en) * 2014-07-07 2014-10-22 中国人民解放军军械工程学院 Unmanned aerial vehicle simulation training system and control method thereof
CN104794942A (en) * 2015-03-30 2015-07-22 深圳市龙岗区蓝天社特殊儿童康复中心 Object recognition multi-stage training system for mental-handicapped children
CN105404395A (en) * 2015-11-25 2016-03-16 北京理工大学 Stage performance assisted training method and system based on augmented reality technology
CN106095105A (en) * 2016-06-21 2016-11-09 西南交通大学 A kind of traction substation operator on duty's virtual immersive Training Simulation System and method
CN106293073A (en) * 2016-07-29 2017-01-04 深圳市前海安测信息技术有限公司 Auxiliary patients of senile dementia based on virtual reality finds the system and method for article
CN106383586A (en) * 2016-10-21 2017-02-08 东南大学 Training system for children suffering from autistic spectrum disorders

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201610750A (en) * 2014-09-03 2016-03-16 Liquid3D Solutions Ltd Gesture control system interactive with 3D images

Also Published As

Publication number Publication date
CN107168525A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
CN107168525B (en) System and method for assisting autistic children in pairing training by using fine gesture recognition device
Fittkau et al. Exploring software cities in virtual reality
De Melo et al. Analysis and comparison of robotics 3d simulators
CN104346081B (en) Augmented reality learning system and method thereof
CN107945602A (en) A kind of equipment operation examination/Training Methodology, apparatus and system
CN106683193B (en) Design method and design device of three-dimensional model
CN110210012A (en) One kind being based on virtual reality technology interactivity courseware making methods
JP2022500795A (en) Avatar animation
CN106529838A (en) Virtual assembling method and device
Kuťák et al. State of the art of molecular visualization in immersive virtual environments
US20210375025A1 (en) Systems and methods performing object occlusion in augmented reality-based assembly instructions
CN112270853A (en) Calligraphy teaching system and method
CN108304806A (en) A kind of gesture identification method integrating feature and convolutional neural networks based on log path
VanHorn et al. Deep learning development environment in virtual reality
Hu et al. Interactive visual computer vision analysis based on artificial intelligence technology in intelligent education
Albertini et al. Designing natural gesture interaction for archaeological data in immersive environments
CN110741327B (en) Mud toy system and method based on augmented reality and digital image processing
KR20160005841A (en) Motion recognition with Augmented Reality based Realtime Interactive Human Body Learning System
JP2020086075A (en) Learning support system and program
CN105843479A (en) Content interaction method and system
Wang et al. Augmented Reality and Quick Response Code Technology in Engineering Drawing Course
Aruanno et al. Enhancing Inclusive Education for Young Students with Special Needs through Mixed Reality: Exploring the Potential of CNC Milling Machine Application
CN116127789B (en) Fourier transform virtual simulation teaching method, system, equipment and storage medium
TW200811767A (en) Learning assessment method and device using a virtual tutor
Šiđanin et al. Immersive virtual reality course at the digital production studies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant