CN106821333A - Virtual-scene-based cognitive dysfunction rehabilitation detection device, method and therapeutic equipment - Google Patents

Virtual-scene-based cognitive dysfunction rehabilitation detection device, method and therapeutic equipment

Info

Publication number
CN106821333A
CN106821333A
Authority
CN
China
Prior art keywords: user, module, scene, global, virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710169955.3A
Other languages
Chinese (zh)
Inventor
李尔逊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Seoul Technology Co Ltd
Original Assignee
Heilongjiang Seoul Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heilongjiang Seoul Technology Co Ltd
Priority to CN201710169955.3A
Publication of CN106821333A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B 5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B 5/4088 Diagnosing or monitoring cognitive diseases, e.g. Alzheimer's, prion diseases or dementia
    • A61B 5/48 Other medical applications
    • A61B 5/4848 Monitoring or testing the effects of treatment, e.g. of medication
    • A61B 5/74 Details of notification to user or communication with user or patient; user input means
    • A61B 5/742 Details of notification to user or communication with user or patient using visual displays
    • A61B 5/7445 Display arrangements, e.g. multiple display units
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B 69/00 Training appliances or apparatus for special sports
    • A63B 71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B 71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B 71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B 71/0622 Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A63B 2071/0638 Displaying moving images of recorded environment, e.g. virtual environment

Abstract

The present invention relates to a virtual-scene-based cognitive dysfunction rehabilitation detection device, method and therapeutic equipment, and belongs to the field of cognitive dysfunction rehabilitation detection. To overcome the shortcoming of prior-art rehabilitation software, which is not combined with scenes and therefore cannot be applied by trainees in daily life, limiting the therapeutic effect, the invention proposes a virtual-scene-based cognitive dysfunction rehabilitation detection device comprising at least one scene, a global counting module, an indicating module and a user position detection module. The user position detection module detects the user's specific location in the virtual scene; the indicating module guides the user to operate according to prompt information; the global counting module judges the user's degree of rehabilitation from the value of a global counting variable; each scene contains any detection module or any combination of detection modules, and each detection module tests a different aspect of rehabilitation efficacy. The invention is applicable to cognitive dysfunction rehabilitation equipment.

Description

Virtual-scene-based cognitive dysfunction rehabilitation detection device, method and therapeutic equipment
Technical field
The present invention relates to a virtual-scene-based cognitive dysfunction rehabilitation detection device, method and therapeutic equipment, and belongs to the field of cognitive dysfunction rehabilitation detection.
Background art
Rehabilitation in the traditional Chinese sense is based mainly on conventional methods such as limb and speech therapy. Because Chinese patients generally believe that being able to stand, walk and speak means the goal of rehabilitation has been reached, such patients find it difficult to return to society and to the jobs they held before injury. This not only causes a great physiological and psychological impact on the patients themselves and their families and lowers quality of life, but also imposes a heavy financial burden on the patient's family and on society. One main cause of this situation is that domestic patients and their families do not yet have a deep understanding of cognitive function rehabilitation; demand for it is weaker than for limb rehabilitation, so its development has been slower. Another cause is the domestic model for training therapists: they first learn theoretical knowledge and only after entering clinical practice are assigned, according to hospital needs, to become physiotherapists, occupational therapists or speech therapists. The drawback is that the rehabilitation knowledge they learn is broad rather than deep, and cognitive dysfunction rehabilitation is covered only as material to be understood, without much content devoted to it in the teaching process.
At present, most domestic training for cognitive dysfunction is confined to basic cognitive training, for example training on colour, shape, size and object names, while the higher cognitive functions involved in cognitive dysfunction rehabilitation are addressed very little.
In the prior art, after receiving cognitive training, trainees cannot promptly and effectively apply it in reality because of their own physical condition (orientation disorders, memory disorders and so on), so they cannot verify the effectiveness of the training in time. For example, after a trainee completes executive-ability training in a treatment institution, the trainee's own condition (likewise involving orientation and memory disorders) prevents the training from being applied in daily life (for example, shopping in a supermarket), and the trainee may even get lost. The literature shows that applying the training content in reality within a short time gives the best training results. Moreover, if the effectiveness check is carried out in an unfamiliar environment after training, the trainee's mental and psychological state will change, which in turn affects the training effect.
In attention training, one of the more typical methods is Schulte grid training: a square card is divided into 25 cells, and the Arabic numerals 1 to 25 are filled into the cells in arbitrary order. During training, the trainee is asked to point to the numbers with a finger in order from 1 to 25 while reading them aloud, and the trainer records the time taken; the shorter the time needed to count through the 25 numbers, the higher the attention level. Although this method is a classic for training attention, and sustained attention in particular, most trainees must convert the abstract training into a real application when applying it in daily life. This is an extremely complex process, and not every trainee has the ability, which greatly increases the difficulty of transferring the training and affects the training effect.
Meanwhile, in the area of cognitive dysfunction rehabilitation software, some companies are already involved; their products are mainly installed in hospital departments and used on a per-session basis as auxiliary treatment items. The main technology copies software systems from the United States and Germany; little is developed independently, the technical level is relatively low, the content is mainly basic cognition (shape, colour, attention, simple summary-type memory), and there is no independently developed core training method. Newer, more typical training software is listed in the table below:
As can be seen from the table above, most related treatment software is not combined with scenes, so trainees cannot apply it in daily life. A new rehabilitation-efficacy detection method and therapeutic equipment are therefore needed to solve this problem.
Summary of the invention
The purpose of the invention is to overcome the shortcoming of prior-art rehabilitation software, which is not combined with scenes and therefore cannot be applied by trainees in daily life, limiting the therapeutic effect; to this end, a virtual-scene-based cognitive dysfunction rehabilitation detection device, method and therapeutic equipment are proposed.
According to a first aspect of the invention, there is provided a virtual-scene-based cognitive dysfunction rehabilitation detection device, comprising at least one scene, a global counting module, an indicating module and a user position detection module, wherein: the user position detection module detects the user's specific location in the virtual scene and sends position information; the indicating module generates prompt information from the position information so as to guide the user to operate according to the prompt; the global counting module contains a global counting variable that counts up from 0 and judges the user's degree of rehabilitation from the value of that variable.
Each scene contains any one, or any combination of two or more, of the following modules:
An orientation test module, comprising: an input unit for receiving user input; and a judging unit for judging whether the input matches the prompt information, the global counting variable being incremented by 1 on a match.
An executive-ability test module, comprising: a target-object generation unit for generating a target object to be touched; and a collision detection unit for detecting whether the virtual figure representing the user's hand collides with the target object, the global counting variable being incremented by 1 on a collision.
A quantification-ability test module, comprising: a target-object generation unit for generating at least one object used to test the degree of quantification; a collision detection unit for detecting whether the virtual figure representing the hand collides with that object and, on a collision, sending a selection signal; and a judging unit for judging, after the selection signal is received, whether the attribute tag value of the object is a preset value, the global counting variable being incremented by 1 if it is.
A contrast-recognition-ability test module, comprising: a contrast-recognition object generation unit for generating two target objects to be distinguished, with at least one chosen point of difference between them; a difference-marking unit for marking a specific position of a target object, according to the prompt, via user input through the input module; and a difference detection unit for matching the position marked by the user against the chosen point of difference, the global counting variable being incremented by 1 if the match succeeds.
An object-name recognition module, comprising: a name-recognition object generation unit for generating at least one object to be identified, each object corresponding to one name, one of the objects being chosen as the target object; a selection prompt unit for generating graphic regions for the user to select, each region corresponding to the name of one object; and a name-recognition detection unit for matching the name represented by the region the user selects through the input module against the name of the target object, the global counting variable being incremented by 1 if the match succeeds.
A specified-object selection recognition module, comprising: an object-to-be-selected generation unit for generating at least one object to be selected, one specific object among them serving as the chosen object; a collision detection unit for detecting whether the virtual figure representing the user's hand collides with an object to be selected and, if so, sending a judgement signal; and an object-selection judging unit for judging whether the collided object is the chosen object, the global counting variable being incremented by 1 if it is.
An object-position recognition-ability test module, comprising: an object-to-be-placed generation unit for generating at least one object to be placed; a position generation unit for generating at least one virtual location to be matched with an object to be placed, the matching relation being determined by the instruction sent by the indicating module; and a position-recognition judging unit for judging whether each object to be placed is at its matched virtual location, the global counting variable being incremented by 1 if it is.
A speech-recognition-ability test module, comprising: a voice playing unit for playing pre-recorded audio information; a voice receiving unit for obtaining the voice information input by the user; and a speech-recognition judging unit for matching the recorded audio information against the user's voice information, the global counting variable being incremented by 1 if the matching degree exceeds a preset value.
According to a second aspect of the invention, there is provided a virtual-scene-based cognitive dysfunction rehabilitation detection method. The method is applied to a virtual scene and comprises a global counting step, an indicating step and a user position detection step: the user position detection step detects the user's specific location in the virtual scene and sends position information; the indicating step generates prompt information from the position information so as to guide the user to operate according to the prompt; the global counting step judges the user's degree of rehabilitation from the value of a global counting variable, the global counting variable counting up from 0.
The method performs, in the virtual scene, any one, or any combination of two or more, of the following steps:
An orientation test step: receive the information input by the user; judge whether it matches the prompt information; if it matches, increment the global counting variable by 1.
An executive-ability test step: generate a target object to be touched; detect whether the virtual figure representing the user's hand collides with the target object; if it does, increment the global counting variable by 1.
A quantification-ability test step: generate at least one object used to test the degree of quantification; detect whether the virtual figure representing the hand collides with that object and, on a collision, send a selection signal; after the selection signal is received, judge whether the attribute tag value of the object is a preset value; if it is, increment the global counting variable by 1.
A contrast-recognition-ability test step: generate two target objects to be distinguished, with at least one chosen point of difference between them; mark a specific position of a target object, according to the prompt, via user input through the input module; match the marked position against the chosen point of difference; if the match succeeds, increment the global counting variable by 1.
An object-name recognition step: generate at least one object to be identified, each corresponding to one name, one of them being chosen as the target object; generate graphic regions for the user to select, each region corresponding to the name of one object; match the name represented by the selected region against the name of the target object; if the match succeeds, increment the global counting variable by 1.
A specified-object selection recognition step: generate at least one object to be selected, one specific object serving as the chosen object; detect whether the virtual figure representing the user's hand collides with an object to be selected and, if so, send a judgement signal; judge whether the collided object is the chosen object; if it is, increment the global counting variable by 1.
An object-position recognition-ability test step: generate at least one object to be placed; generate at least one virtual location to be matched with an object to be placed, the matching relation being determined by the instruction sent in the indicating step; judge whether each object is at its matched location; if it is, increment the global counting variable by 1.
A speech-recognition-ability test step: play pre-recorded audio information; obtain the voice information input by the user; match the recorded audio against the user's voice; if the matching degree exceeds a preset value, increment the global counting variable by 1.
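Purely as an illustration of how these steps fit together (this sketch is not part of the patent; the function names detect_position, make_prompt and run_test and the test-item structure are assumptions), a top-level session loop in the spirit of the method might look like:

```python
# Illustrative sketch only: one possible top-level loop tying the steps together.
def run_session(test_items, detect_position, make_prompt, run_test):
    """Run every test item in the virtual scene and return the pass rate."""
    global_count = 0                               # global counting variable, starts at 0
    for item in test_items:
        position = detect_position()               # user position detection step
        prompt = make_prompt(item, position)       # indicating step
        if run_test(item, prompt):                 # one of the test steps listed above
            global_count += 1                      # global counting step
    return global_count / len(test_items) if test_items else 0.0

pass_rate = run_session(
    test_items=["orientation", "executive"],
    detect_position=lambda: "living_room",
    make_prompt=lambda item, pos: f"Please complete the {item} task in the {pos}",
    run_test=lambda item, prompt: True,            # stub: assume the user passes every item
)
print(pass_rate)                                   # -> 1.0
```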
According to a third aspect of the invention, there is provided virtual-scene-based cognitive dysfunction rehabilitation equipment, comprising:
a display device for displaying the virtual reality scene;
an interactive device for receiving the operation instructions and/or voice issued by the user and feeding voice and/or vibration signals back to the user; and a virtual-scene-based cognitive dysfunction rehabilitation detection device according to the first aspect of the invention.
The beneficial effects of the invention are: 1. A scene-based virtual reality system is established; by simulating life scenes and coordinating the instruction and detection modules, the user can practise basic living skills during rehabilitation training. 2. Through the global counting module, a test result is obtained for each aspect of the user's brain function, which on the one hand facilitates statistical analysis and on the other hand allows the parts where the training effect is poor to be practised repeatedly, achieving the rehabilitation effect. 3. The scenes used are close to the real environment, so when the user moves from the virtual environment to the real one, no resistance to an unfamiliar environment arises, and a better rehabilitation training effect is reached. 4. If a user whose recovery is poor and whose living skills are weak trains directly in the real environment, danger may arise from poor physical condition or insufficient ability, for example using a knife while bodily coordination is poor, or going out while memory has not recovered well, either of which may lead to accidental injury or getting lost; using the device of the invention reduces such dangers to the greatest extent and trains the relevant abilities in a safe environment.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the virtual-scene-based cognitive dysfunction rehabilitation equipment of the invention;
Fig. 2 is a flow chart of the virtual-scene-based cognitive dysfunction rehabilitation detection method of the invention.
Specific embodiments
Embodiment 1: The virtual-scene-based cognitive dysfunction rehabilitation detection device of this embodiment comprises at least one scene, a global counting module, an indicating module and a user position detection module, wherein:
The user position detection module detects the user's specific location in the virtual scene and sends position information. The position information may be represented by two-dimensional coordinates or by regions; for example, a kitchen scene may be divided into several regions, and when the user enters a particular region, the position information is expressed as information about that region.
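As a minimal illustration of the region-based variant of this detection (not part of the patent; the Region class, the locate_user function and the coordinate layout are assumptions), the position information could be derived roughly as follows:

```python
# Illustrative sketch: the scene is divided into named rectangular floor regions,
# and the tracked (x, z) position of the user is mapped to a region name.
from dataclasses import dataclass

@dataclass
class Region:
    name: str          # e.g. "near_fridge"
    x_min: float
    x_max: float
    z_min: float
    z_max: float

    def contains(self, x, z):
        return self.x_min <= x <= self.x_max and self.z_min <= z <= self.z_max

def locate_user(x, z, regions):
    """Return the name of the region the user currently stands in, if any."""
    for region in regions:
        if region.contains(x, z):
            return region.name
    return None

kitchen_regions = [Region("near_fridge", 0.0, 1.0, 2.0, 3.0),
                   Region("near_stove", 2.0, 3.0, 0.0, 1.0)]
print(locate_user(0.5, 2.4, kitchen_regions))      # -> "near_fridge"
```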
The indicating module generates prompt information from the position information so as to guide the user to operate according to the prompt. The prompt may be text, for example a pop-up dialog box reading "Please enter today's date", or it may be given by voice.
The global counting module contains a global counting variable that counts up from 0 and judges the user's degree of rehabilitation from its value. The value of the global counting variable indicates how many detection items the user has passed; the exact way of judging also differs from module to module. Rehabilitation efficacy can be judged from the number of passed items or from their percentage of the total: for example, if there are 50 modules in total and the global counting variable is 30 after the user has completed them all, the pass rate is 60%, and the training effect can be judged from this ratio. The items that were not passed can also be practised repeatedly by the user, so as to improve the rehabilitation effect.
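A minimal sketch of the global counting and pass-rate idea described above (illustrative only; the GlobalCounter class and its method names are assumptions, not taken from the patent):

```python
# Illustrative sketch: count passed detection items and report the pass rate.
class GlobalCounter:
    def __init__(self, total_items):
        self.total_items = total_items     # e.g. 50 detection items in the session
        self.value = 0                     # global counting variable, starts at 0

    def item_passed(self):
        self.value += 1                    # "the global counting variable adds 1"

    def pass_rate(self):
        return self.value / self.total_items

counter = GlobalCounter(total_items=50)
for _ in range(30):                        # suppose the user passes 30 of the 50 items
    counter.item_passed()
print(f"pass rate: {counter.pass_rate():.0%}")   # -> pass rate: 60%
```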
Each scene contains any one, or any combination of two or more, of the following modules:
Orientation test module, comprising: an input unit for receiving user input; and a judging unit for judging whether the input matches the prompt information, the global counting variable being incremented by 1 on a match. This module implements orientation rehabilitation training. Orientation refers to a person's awareness, within the scene, of time, place, people and his or her own state. Orientation training can therefore ask questions about time, place, people or the user's own state, for example a dialog box popped up by the indicating module reading "Please select whether the scene you are in is the living room or the kitchen", or "Please enter what day of the week it is today".
Executive-ability test module, comprising: a target-object generation unit for generating a target object to be touched; and a collision detection unit for detecting whether the virtual figure representing the user's hand collides with the target object, the global counting variable being incremented by 1 on a collision. This part performs rehabilitation training and testing of executive ability. For example, if the prompt is "Please pick up the magazine on the tea table", the system judges whether the virtual figure corresponding to the user's hand has collided with the magazine; if it has, the user is considered to have deliberately picked up the magazine, that is, to have made the expected choice.
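One possible form of the collision check, assuming the hand avatar and the target object are each approximated by a bounding sphere (illustrative only; Sphere and hand_touches_target are assumed names, not from the patent):

```python
# Illustrative sketch: the hand "collides" with the target when the bounding spheres overlap.
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    x: float
    y: float
    z: float
    radius: float

def hand_touches_target(hand, target):
    """True when the two bounding spheres overlap."""
    distance = math.dist((hand.x, hand.y, hand.z), (target.x, target.y, target.z))
    return distance <= hand.radius + target.radius

global_count = 0
hand = Sphere(0.40, 0.95, 0.62, 0.08)
magazine = Sphere(0.45, 0.90, 0.60, 0.15)      # the magazine on the tea table
if hand_touches_target(hand, magazine):
    global_count += 1                          # the global counting variable adds 1
```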
Quantification-ability test module, comprising: a target-object generation unit for generating at least one object used to test the degree of quantification; a collision detection unit for detecting whether the virtual figure representing the hand collides with that object and, on a collision, sending a selection signal; and a judging unit for judging, after the selection signal is received, whether the attribute tag value of the object is a preset value, the global counting variable being incremented by 1 if it is. This part detects the user's calculation ability, that is, the ability to summarise and convert quantities within the scene: abstract or complex mathematical expressions or numbers in the scene are converted, by mathematical means, into a form the user can understand. For example, if the prompt is "Please pick up the cup that is one third full of water", the fraction 1/3 is not displayed directly in the scene but must be abstracted from the quantity of water, so this test detects the user's calculation ability.
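A hedged sketch of the attribute-tag check for the cup example (illustrative only; the "fill" attribute tag, the cup list and the tolerance are assumptions):

```python
# Illustrative sketch: each cup carries a fill-level attribute tag, and the selected
# cup passes only if its tag equals the preset value (1/3 in the example above).
cups = [
    {"id": "cup_half",  "fill": 1 / 2},
    {"id": "cup_full",  "fill": 1.0},
    {"id": "cup_third", "fill": 1 / 3},        # the correct answer for the prompt
    {"id": "cup_3_4",   "fill": 3 / 4},
]
PRESET_FILL = 1 / 3

def quantification_check(selected_cup, global_count):
    """Increment the global counting variable if the selected cup has the preset fill level."""
    if abs(selected_cup["fill"] - PRESET_FILL) < 1e-9:
        global_count += 1
    return global_count

global_count = quantification_check(cups[2], global_count=0)   # user picked the 1/3 cup -> 1
```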
Contrast-recognition-ability test module, comprising: a contrast-recognition object generation unit for generating two target objects to be distinguished, with at least one chosen point of difference between them; a difference-marking unit for marking a specific position of a target object, according to the prompt, via user input through the input module; and a difference detection unit for matching the position marked by the user against the chosen point of difference, the global counting variable being incremented by 1 if the match succeeds. This part tests the user's observation ability, that is, the perceptual activity by which the features of objects in the scene are perceived through the senses. For example, the virtual scene is a living room with two slightly different pictures hanging on the wall, and the prompt is "Please find the differences between the two pictures"; the user marks the differences between the two pictures through the input module, and the user's observation ability is thereby examined.
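A hedged sketch of the difference-matching step, assuming a marked point is accepted when it lies within a tolerance radius of one of the planted differences (the coordinates, tolerance value and names are assumptions):

```python
# Illustrative sketch: compare the position the user marks with the planted differences.
import math

chosen_differences = [(120, 85), (310, 200)]   # pixel coordinates of the planted differences
TOLERANCE_PX = 25

def difference_matched(marked, global_count):
    """Increment the counter when the marked point is close enough to a planted difference."""
    if any(math.dist(marked, diff) <= TOLERANCE_PX for diff in chosen_differences):
        global_count += 1
    return global_count

global_count = difference_matched((118, 90), global_count=0)   # close to (120, 85) -> 1
```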
Object-name recognition module, comprising: a name-recognition object generation unit for generating at least one object to be identified, each object corresponding to one name, one of the objects being chosen as the target object; a selection prompt unit for generating graphic regions for the user to select, each region corresponding to the name of one object; and a name-recognition detection unit for matching the name represented by the region the user selects through the input module against the name of the target object, the global counting variable being incremented by 1 if the match succeeds. For example, the prompt is "What is the title of the magazine on your left?", and graphical options are given for the user to choose from.
Specified-object selection recognition module, comprising: an object-to-be-selected generation unit for generating at least one object to be selected, one specific object among them serving as the chosen object; a collision detection unit for detecting whether the virtual figure representing the user's hand collides with an object to be selected and, if so, sending a judgement signal; and an object-selection judging unit for judging whether the collided object is the chosen object, the global counting variable being incremented by 1 if it is. This part trains the user's information-processing ability: the ability to collect information scientifically and effectively (search strategies and means), to organise it (classify information, discard wrong information, produce ordered information), to control and analyse it, and to draw conclusions that greatly help correct and effective decisions. For example, the scene is a study and the indicating module says "Please select the book titled Das Kapital from the bookshelf"; the user must check whether each book on the shelf matches the specified title, so the abilities of collecting and organising information are tested.
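A hedged sketch of the specified-object check for the bookshelf example (illustrative only; the shelf contents other than Das Kapital and the function name are assumptions):

```python
# Illustrative sketch: the book the hand collided with passes only if its title is the requested one.
books_on_shelf = ["Das Kapital", "A Brief History of Time", "The Art of War"]
requested_title = "Das Kapital"

def book_selection_check(collided_title, global_count):
    """Increment the counter when the collided book is the requested one."""
    if collided_title == requested_title:
        global_count += 1
    return global_count

global_count = book_selection_check("Das Kapital", global_count=0)   # -> 1
```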
Object-position recognition-ability test module, comprising: an object-to-be-placed generation unit for generating at least one object to be placed; a position generation unit for generating at least one virtual location to be matched with an object, the matching relation being determined by the instruction sent by the indicating module; and a position-recognition judging unit for judging whether each object is at its matched virtual location, the global counting variable being incremented by 1 if it is. For example, the scene is a kitchen, the objects to be placed are several vegetables and fruits, the virtual locations are the layers of a refrigerator, and the prompt is "Please put the vegetables on the first layer and the fruit on the second layer". This tests the user's grasp of the abstract concepts of vegetable and fruit, the user's mathematical judgement, and the user's executive ability at the same time.
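A hedged sketch of the placement check for the refrigerator example (illustrative only; the food-to-layer mapping and all names are assumptions):

```python
# Illustrative sketch: the prompt defines which layer each food belongs on, and the
# counter is incremented once for every food placed on its matched layer.
required_layer = {"cabbage": 1, "apple": 2, "banana": 2}   # from the indicating module's prompt

def placement_check(placements, global_count):
    """placements maps each food to the refrigerator layer the user actually put it on."""
    for food, layer in placements.items():
        if required_layer.get(food) == layer:
            global_count += 1                  # one increment per correctly placed object
    return global_count

global_count = placement_check({"cabbage": 1, "apple": 2, "banana": 1}, global_count=0)   # -> 2
```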
Speech-recognition-ability test module, comprising: a voice playing unit for playing pre-recorded audio information; a voice receiving unit for obtaining the voice information input by the user; and a speech-recognition judging unit for matching the recorded audio against the user's voice information, the global counting variable being incremented by 1 if the matching degree exceeds a preset value. This part detects the user's language ability. For example, a newspaper lies on a chair in the living room; after the user picks it up, the voice playing unit (for example the audio playback device in the virtual reality headset) starts to play the audio corresponding to the newspaper content, and the prompt says "Please read aloud along with the voice". The voice information read by the user is then judged against the text of the newspaper, and the user's language ability is detected in this way.
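A hedged sketch of the voice-matching step; a real system would use a speech recogniser, whereas here the user's reading is assumed to be already transcribed and the matching degree is a simple word-overlap ratio (the threshold and all names are assumptions):

```python
# Illustrative sketch: compare the reference text of the newspaper with the user's reading.
def matching_degree(reference_text, spoken_text):
    """Fraction of reference words that also appear in the user's reading."""
    ref_words = reference_text.lower().split()
    spoken_words = set(spoken_text.lower().split())
    if not ref_words:
        return 0.0
    return sum(word in spoken_words for word in ref_words) / len(ref_words)

THRESHOLD = 0.8
global_count = 0
reference = "local weather will be sunny with light wind tomorrow"
spoken = "local weather will be sunny with light wind"
if matching_degree(reference, spoken) >= THRESHOLD:
    global_count += 1                          # the global counting variable adds 1
```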
It should be noted that a scene may contain any one of the above modules or any combination of them; the more kinds there are, the more comprehensive the ability test. In addition, each of the above modules is used for testing, but during the test the corresponding abilities of the user are in fact also strengthened. For example, in the quantification-ability test module, if the user answers wrongly, a similar example is substituted and the test is repeated; in this process not only are multiple test results obtained, but the user's grasp of quantification deepens, so the user's quantification ability is also improved.
It should also be noted that the above modules are highly related to one another: they are all based on the same inventive idea of "cognitive ability testing" plus "living-skill improvement". There are many papers in the related field on the theory of cognitive ability, so the theory is not described in detail here.
Embodiment 2: This embodiment differs from Embodiment 1 in that each scene is a life scene, and each module simulates a cognitive process and an action process in that life scene.
That is, the virtual scene can reproduce life scenes as closely as possible, which greatly helps improve the user's living skills.
The invention differs from an ordinary virtual reality system. Although virtual reality systems all simulate real life, they are not equipped with an indicating module that gives concrete action instructions, they do not evaluate test results, and still less do they incorporate the theory of cognitive ability testing into the virtual environment. The scenes designed in the invention are not ordinary life scenes: the operation of each module strictly follows cognitive-science theory and tests specific abilities of the user in a targeted way, so that a virtual reality headset built on this basis can achieve a therapeutic effect for users with cognitive disorders. A system that merely simulates the real world offers no targeted rehabilitation training; the user receives no clear behavioural instruction, does not know whether an action is correct, and cannot reach the effect of the invention.
Other steps and parameters are the same as in Embodiment 1.
Embodiment 3: This embodiment differs from Embodiment 1 or 2 in that the life scene is a kitchen, and the object-position recognition-ability test module simulates picking up specific foods in the kitchen and placing them on particular layers of the refrigerator.
The user position detection module detects that the user is near the refrigerator in the kitchen scene and sends position information.
The indicating module receives the position information and generates a prompt that guides the user to pick up specific foods and place them on particular layers of the refrigerator.
The object-to-be-placed generation unit generates at least one food; the position generation unit generates at least one layer in the refrigerator; the matching relation between each food and each refrigerator layer is determined by the instruction sent by the indicating module; and the position-recognition judging unit judges whether each food is on its matched refrigerator layer and, if so, increments the global counting variable by 1.
Other steps and parameters are the same as in Embodiment 1 or 2.
Embodiment 4: This embodiment differs from Embodiments 1 to 3 in that the scene is a living room, and the quantification-ability test module simulates choosing a cup containing a given amount of water. The user position detection module detects that the user is near the cups in the living-room scene and sends position information. The indicating module receives the position information and generates a prompt that guides the user to pick up the cup containing the specified amount of water. The target-object generation unit generates at least four transparent cups: a half-full cup, a full cup, a cup one-third full of water and a cup three-quarters full of water. The collision detection unit detects whether the virtual figure representing the hand collides with any of the cups and, on a collision, sends a selection signal. The judging unit judges, after the selection signal is received, whether the attribute tag value of the selected cup is the preset value and, if so, increments the global counting variable by 1.
Other steps and parameters are the same as in one of Embodiments 1 to 3.
Embodiment 5: This embodiment differs from Embodiments 1 to 4 in that the scene is a living room, and the speech-recognition-ability test module simulates the user reading a newspaper aloud. The user position detection module detects that the user is near the newspaper and sends position information. The indicating module receives the position information and generates a prompt that guides the user to read the newspaper content aloud along with the voice. The voice playing unit plays pre-recorded audio corresponding to the newspaper content. The voice receiving unit obtains the voice information produced as the user reads along with the audio. The speech-recognition judging unit matches the recorded audio against the voice produced by the user's reading and, if the matching degree exceeds a preset value, increments the global counting variable by 1.
Other steps and parameters are the same as in one of Embodiments 1 to 4.
Embodiment 6: This embodiment differs from Embodiments 1 to 5 in that the device of the invention further comprises a memory module for storing the historical data produced while a patient uses the device.
Other steps and parameters are the same as in one of Embodiments 1 to 5.
Embodiment 7: The device of the invention further comprises a parameter configuration module for adjusting the parameters of the scenes and of each module.
Other steps and parameters are the same as in one of Embodiments 1 to 6.
Embodiment 8: The invention also provides a virtual-scene-based cognitive dysfunction rehabilitation detection method. The method is applied to a virtual scene and comprises a global counting step, an indicating step and a user position detection step:
The user position detection step detects the user's specific location in the virtual scene and sends position information.
The indicating step generates prompt information from the position information so as to guide the user to operate according to the prompt.
The global counting step judges the user's degree of rehabilitation from the value of the global counting variable; the global counting variable counts up from 0.
The method performs, in the virtual scene, any one, or any combination of two or more, of the following steps:
An orientation test step, comprising: receiving the information input by the user; judging whether it matches the prompt information; and, if it matches, incrementing the global counting variable by 1.
An executive-ability test step, comprising: generating a target object to be touched; detecting whether the virtual figure representing the user's hand collides with the target object; and, if it does, incrementing the global counting variable by 1.
A quantification-ability test step, comprising: generating at least one object used to test the degree of quantification; detecting whether the virtual figure representing the hand collides with that object and, on a collision, sending a selection signal; and judging, after the selection signal is received, whether the attribute tag value of the object is a preset value and, if it is, incrementing the global counting variable by 1.
A contrast-recognition-ability test step, comprising: generating two target objects to be distinguished, with at least one chosen point of difference between them; marking a specific position of a target object, according to the prompt, via user input through the input module; and matching the marked position against the chosen point of difference and, if the match succeeds, incrementing the global counting variable by 1.
An object-name recognition step, comprising: generating at least one object to be identified, each corresponding to one name, one of them being chosen as the target object; generating graphic regions for the user to select, each region corresponding to the name of one object; and matching the name represented by the selected region against the name of the target object and, if the match succeeds, incrementing the global counting variable by 1.
A specified-object selection recognition step, comprising: generating at least one object to be selected, one specific object serving as the chosen object; detecting whether the virtual figure representing the user's hand collides with an object to be selected and, if so, sending a judgement signal; and judging whether the collided object is the chosen object and, if it is, incrementing the global counting variable by 1.
An object-position recognition-ability test step, comprising: generating at least one object to be placed; generating at least one virtual location to be matched with an object to be placed, the matching relation being determined by the instruction sent in the indicating step; and judging whether each object is at its matched location and, if it is, incrementing the global counting variable by 1.
A speech-recognition-ability test step, comprising: playing pre-recorded audio information; obtaining the voice information input by the user; and matching the recorded audio against the user's voice and, if the matching degree exceeds a preset value, incrementing the global counting variable by 1.
The above process can be summarised by the flow chart shown in Fig. 2.
Embodiment 8 corresponds to Embodiment 1 and is not described in detail here.
Embodiment 9: The invention also provides virtual-scene-based cognitive dysfunction rehabilitation equipment 100, as shown in Fig. 1, comprising: a rehabilitation processor 102 for performing the virtual-scene-based cognitive dysfunction rehabilitation detection of any one of Embodiments 1 to 7; a display device 103 for showing the virtual reality scene generated by the rehabilitation processor 102; and an interactive device 101 for receiving the operation instructions and/or voice issued by the user, sending them to the rehabilitation processor 102, and feeding voice and/or vibration signals back to the user.
The interactive device 101 may be, for example, a Bluetooth handle with a vibration feedback function, a joystick or a touch pad. The display device may be a liquid crystal display.
The invention may also have various other embodiments. Without departing from the spirit and essence of the invention, those skilled in the art may make corresponding changes and modifications, and all such changes and modifications shall fall within the protection scope of the appended claims of the invention.

Claims (9)

1. A virtual-scene-based cognitive dysfunction rehabilitation detection device, characterised by comprising at least one scene, a global counting module, an indicating module and a user position detection module, wherein:
the user position detection module detects the user's specific location in the virtual scene and sends position information;
the indicating module generates prompt information from the position information so as to guide the user to operate according to the prompt;
the global counting module contains a global counting variable that counts up from 0 and judges the user's degree of rehabilitation from the value of the global counting variable;
each said scene contains any one, or any combination of two or more, of the following modules:
an orientation test module, comprising:
an input unit for receiving user input; and
a judging unit for judging whether the input matches the prompt information, the global counting variable being incremented by 1 on a match;
an executive-ability test module, comprising:
a target-object generation unit for generating a target object to be touched; and
a collision detection unit for detecting whether the virtual figure representing the user's hand collides with the target object, the global counting variable being incremented by 1 on a collision;
a quantification-ability test module, comprising:
a target-object generation unit for generating at least one object used to test the degree of quantification;
a collision detection unit for detecting whether the virtual figure representing the hand collides with the object used to test the degree of quantification and, on a collision, sending a selection signal; and
a judging unit for judging, after the selection signal is received, whether the attribute tag value of the object used to test the degree of quantification is a preset value, the global counting variable being incremented by 1 if it is;
a contrast-recognition-ability test module, comprising:
a contrast-recognition object generation unit for generating two target objects to be distinguished, with at least one chosen point of difference between the target objects;
a difference-marking unit for marking a specific position of a target object, according to the prompt, via user input through the input module; and
a difference detection unit for matching the position marked by the user against the chosen point of difference, the global counting variable being incremented by 1 if the match succeeds;
an object-name recognition module, comprising:
a name-recognition object generation unit for generating at least one object to be identified, each object to be identified corresponding to one name, one of the objects to be identified being chosen as the target object;
a selection prompt unit for generating graphic regions for the user to select, each graphic region corresponding to the name of one object to be identified; and
a name-recognition detection unit for matching the name represented by the region the user selects through the input module against the name of the target object, the global counting variable being incremented by 1 if the match succeeds;
a specified-object selection recognition module, comprising:
an object-to-be-selected generation unit for generating at least one object to be selected, one specific object to be selected serving as the chosen object;
a collision detection unit for detecting whether the virtual figure representing the user's hand collides with an object to be selected and, if so, sending a judgement signal; and
an object-selection judging unit for judging whether the collided object to be selected is the chosen object, the global counting variable being incremented by 1 if it is;
an object-position recognition-ability test module, comprising:
an object-to-be-placed generation unit for generating at least one object to be placed;
a position generation unit for generating at least one virtual location to be matched with an object to be placed, the matching relation being determined by the instruction sent by the indicating module; and
a position-recognition judging unit for judging whether each object to be placed is at its matched virtual location, the global counting variable being incremented by 1 if it is; and
a speech-recognition-ability test module, comprising:
a voice playing unit for playing pre-recorded audio information;
a voice receiving unit for obtaining the voice information input by the user; and
a speech-recognition judging unit for matching the recorded audio information against the voice information input by the user, the global counting variable being incremented by 1 if the matching degree exceeds a preset value.
2. The device according to claim 1, characterised in that each said scene is a life scene, and each said module simulates a cognitive process and an action process in the life scene.
3. The device according to claim 2, characterised in that the life scene is a kitchen, and the object-position recognition-ability test module simulates picking up specific foods in the kitchen and placing them on particular layers of the refrigerator;
the user position detection module detects that the user is near the refrigerator in the kitchen scene and sends position information;
the indicating module receives the position information and generates a prompt that guides the user to pick up specific foods and place them on particular layers of the refrigerator; and
the object-to-be-placed generation unit generates at least one food; the position generation unit generates at least one layer in the refrigerator; the matching relation between each food and each refrigerator layer is determined by the instruction sent by the indicating module; and the position-recognition judging unit judges whether each food is on its matched refrigerator layer and, if so, increments the global counting variable by 1.
4. The device according to claim 2, characterised in that the scene is a living room, and the quantification-ability test module simulates choosing a cup containing a given amount of water;
the user position detection module detects that the user is near the cups in the living-room scene and sends position information;
the indicating module receives the position information and generates a prompt that guides the user to pick up the cup containing the specified amount of water;
the target-object generation unit generates at least four transparent cups: a half-full cup, a full cup, a cup one-third full of water and a cup three-quarters full of water;
the collision detection unit detects whether the virtual figure representing the hand collides with any of the cups and, on a collision, sends a selection signal; and
the judging unit judges, after the selection signal is received, whether the attribute tag value of the selected cup is the preset value and, if so, increments the global counting variable by 1.
5. The device according to claim 2, characterised in that the scene is a living room, and the speech-recognition-ability test module simulates the user reading a newspaper aloud;
the user position detection module detects that the user is near the newspaper and sends position information;
the indicating module receives the position information and generates a prompt that guides the user to read the newspaper content aloud along with the voice;
the voice playing unit plays pre-recorded audio corresponding to the newspaper content;
the voice receiving unit obtains the voice information produced as the user reads along with the audio; and
the speech-recognition judging unit matches the recorded audio against the voice produced by the user's reading and, if the matching degree exceeds a preset value, increments the global counting variable by 1.
6. The device according to any one of claims 1 to 5, characterised by further comprising a memory module for storing the historical data produced while a patient uses the device.
7. The device according to claim 6, characterised by further comprising a parameter configuration module for adjusting the parameters of the scenes and of each module.
8. A virtual-scene-based cognitive dysfunction rehabilitation detection method, characterised in that the method is applied to a virtual scene and comprises a global counting step, an indicating step and a user position detection step:
the user position detection step detects the user's specific location in the virtual scene and sends position information;
the indicating step generates prompt information from the position information so as to guide the user to operate according to the prompt;
the global counting step judges the user's degree of rehabilitation from the value of the global counting variable, the global counting variable counting up from 0;
the method performs, in the virtual scene, any one, or any combination of two or more, of the following steps:
an orientation test step, comprising:
receiving the information input by the user; and
judging whether the input matches the prompt information and, if it matches, incrementing the global counting variable by 1;
an executive-ability test step, comprising:
generating a target object to be touched; and
detecting whether the virtual figure representing the user's hand collides with the target object and, if it does, incrementing the global counting variable by 1;
a quantification-ability test step, comprising:
generating at least one object used to test the degree of quantification;
detecting whether the virtual figure representing the hand collides with the object used to test the degree of quantification and, on a collision, sending a selection signal; and
judging, after the selection signal is received, whether the attribute tag value of the object used to test the degree of quantification is a preset value and, if it is, incrementing the global counting variable by 1;
a contrast-recognition-ability test step, comprising:
generating two target objects to be distinguished, with at least one chosen point of difference between the target objects;
marking a specific position of a target object, according to the prompt, via user input through the input module; and
matching the position marked by the user against the chosen point of difference and, if the match succeeds, incrementing the global counting variable by 1;
an object-name recognition step, comprising:
generating at least one object to be identified, each object to be identified corresponding to one name, one of the objects to be identified being chosen as the target object;
generating graphic regions for the user to select, each graphic region corresponding to the name of one object to be identified; and
matching the name represented by the region the user selects through the input module against the name of the target object and, if the match succeeds, incrementing the global counting variable by 1;
a specified-object selection recognition step, comprising:
generating at least one object to be selected, one specific object to be selected serving as the chosen object;
detecting whether the virtual figure representing the user's hand collides with an object to be selected and, if so, sending a judgement signal; and
judging whether the collided object to be selected is the chosen object and, if it is, incrementing the global counting variable by 1;
an object-position recognition-ability test step, comprising:
generating at least one object to be placed;
generating at least one virtual location to be matched with an object to be placed, the matching relation being determined by the instruction sent in the indicating step; and
judging whether each object to be placed is at its matched virtual location and, if it is, incrementing the global counting variable by 1; and
a speech-recognition-ability test step, comprising:
playing pre-recorded audio information;
obtaining the voice information input by the user; and
matching the recorded audio information against the voice information input by the user and, if the matching degree exceeds a preset value, incrementing the global counting variable by 1.
9. Virtual-scene-based cognitive dysfunction rehabilitation equipment, characterised by comprising:
a rehabilitation processor for performing the virtual-scene-based cognitive dysfunction rehabilitation detection of any one of claims 1 to 7;
a display device for showing the virtual reality scene generated by the rehabilitation processor; and
an interactive device for receiving the operation instructions and/or voice issued by the user, sending them to the rehabilitation processor, and feeding voice and/or vibration signals back to the user.
CN201710169955.3A 2017-03-21 2017-03-21 A kind of cognition dysfunction rehabilitation detection means based on virtual scene, method and therapeutic equipment Pending CN106821333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710169955.3A CN106821333A (en) 2017-03-21 2017-03-21 A kind of cognition dysfunction rehabilitation detection means based on virtual scene, method and therapeutic equipment

Publications (1)

Publication Number Publication Date
CN106821333A true CN106821333A (en) 2017-06-13

Family

ID=59130987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710169955.3A Pending CN106821333A (en) 2017-03-21 2017-03-21 A kind of cognition dysfunction rehabilitation detection means based on virtual scene, method and therapeutic equipment

Country Status (1)

Country Link
CN (1) CN106821333A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080280276A1 (en) * 2007-05-09 2008-11-13 Oregon Health & Science University And Oregon Research Institute Virtual reality tools and techniques for measuring cognitive ability and cognitive impairment
US20120108909A1 (en) * 2010-11-03 2012-05-03 HeadRehab, LLC Assessment and Rehabilitation of Cognitive and Motor Functions Using Virtual Reality
CN103268392A (en) * 2013-04-15 2013-08-28 福建中医药大学 Cognitive function training system for scene interaction and application method thereof
CN106327049A (en) * 2015-07-08 2017-01-11 广州市第人民医院 Cognitive assessment system and application thereof
CN106355010A (en) * 2016-08-30 2017-01-25 深圳市臻络科技有限公司 Self-service cognition evaluation apparatus and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
D. GOURLAY, K.C. LUN, Y.N. LEE, J. TAY: "Virtual reality for relearning daily living skills", International Journal of Medical Informatics *
LIU Lin, WU Pingping, XIONG Wei: "Research on key technologies for the development of a virtual supermarket cognitive rehabilitation training system", Journal of Graphics *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107789803A (en) * 2017-10-31 2018-03-13 深圳先进技术研究院 A kind of cerebral apoplexy rehabilitation training of upper limbs method and system
CN107789803B (en) * 2017-10-31 2020-07-24 深圳先进技术研究院 Cerebral stroke upper limb rehabilitation training method and system
CN109192278A (en) * 2018-08-28 2019-01-11 齐辉 Interactive approach and system based on virtual reality
CN109173187A (en) * 2018-09-28 2019-01-11 广州乾睿医疗科技有限公司 Control system, the method and device of cognitive rehabilitative training based on virtual reality
CN109616193A (en) * 2018-12-21 2019-04-12 杭州颐康医疗科技有限公司 A kind of virtual reality cognitive rehabilitation method and system
CN110313895A (en) * 2019-04-28 2019-10-11 江南大学 Training of cognitive function method
CN111887845A (en) * 2020-07-31 2020-11-06 昆明理工大学 Attention regulation system based on EEG nerve feedback
CN112102344A (en) * 2020-09-11 2020-12-18 深圳大学 Hand-eye coordination test system and method based on image processing
CN112102344B (en) * 2020-09-11 2023-07-14 深圳大学 Hand-eye coordination test system and method based on image processing
CN114241837A (en) * 2021-11-08 2022-03-25 福建医科大学 Virtual teaching assessment system for daily life activity assessment and training
CN114241837B (en) * 2021-11-08 2023-06-30 福建医科大学 Virtual teaching assessment system for daily life activity assessment and training
CN114129852A (en) * 2021-12-02 2022-03-04 上海市第五人民医院 VR (virtual reality) targeted training system for cognitive impairment

Similar Documents

Publication Publication Date Title
CN106821333A (en) A kind of cognition dysfunction rehabilitation detection means based on virtual scene, method and therapeutic equipment
Rowe et al. Integrating learning, problem solving, and engagement in narrative-centered learning environments
Bellotti et al. Designing effective serious games: opportunities and challenges for research
Hughes et al. The essentials of performance analysis: an introduction
Denison et al. Effective coaching as a modernist formation: A Foucauldian critique
CN106327049A (en) Cognitive assessment system and application thereof
CN105105772B (en) A kind of stimulus information preparation method for cognition ability value test
Hebert The effects of observing a learning model (or two) on motor skill acquisition
Mironcika et al. Smart toys design opportunities for measuring children's fine motor skills development
Hainey et al. Assessment integration in serious games
Ali et al. Traditional games and social skills of children in the pandemic era
Shahid et al. Child-robot interaction: playing alone or together?
Whitehill et al. Towards an optimal affect-sensitive instructional system of cognitive skills
CN107331272A (en) Medical teaching manikin and application method based on emulation technology
Ramos et al. Elementary students’ construct of physical education teacher credibility
Freina et al. Evaluation of Visuo-Spatial Perspective Taking Skills using a Digital Game with Different Levels of Immersion.
Harvey et al. Ethics in Youth Sport
Pappa et al. Effective design and evaluation of serious games: The case of the e-VITA project
Gao et al. Game features in inquiry game-based learning strategies: A systematic synthesis
Jones Reading John Grisham’s Bleachers with Foucault: lessons for sports retirement
TW201102982A (en) Game based learning system and the method thereof, and the method of analyzing learning result
Shahid et al. Who is more expressive during child-robot interaction: Pakistani or Dutch children?
Pinheiro Motor skill diagnosis: Diagnostic processes of expert and novice coaches
A Alojepan-Mas The Perceived Effect of Online Games on the Critical Thinking Skills of Grade 9 and 10 High School Students of St. John’s Institute, Inc.
Chinzer et al. “Escape with Pulcinella”: Development of A Gamified Environment And Pilot Study On Escape Rooms For Language Learning And Cultural Knowledge Acquisition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170613