CN101657143A - Unitary vision and neuro-processing testing center - Google Patents

Unitary vision and neuro-processing testing center

Info

Publication number
CN101657143A
CN101657143A (application CN200880011894A)
Authority
CN
China
Prior art keywords
test
neural
input
processing ability
visual indicia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200880011894A
Other languages
Chinese (zh)
Other versions
CN101657143B (en)
Inventor
艾伦·W·瑞秋
瑞安·科尔特
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nike Innovate CV USA
Original Assignee
Nike International Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nike International Ltd
Priority claimed from PCT/US2008/060249 (WO2008128190A1)
Publication of CN101657143A
Application granted
Publication of CN101657143B
Expired - Fee Related (current legal status)
Anticipated expiration


Landscapes

  • Eye Examination Apparatus (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Rehabilitation Tools (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Systems and methods are provided for testing and/or training the visual and neuro-processing abilities of a subject. More specifically, the method may include testing various aspects of the subject's visual and neuro-processing abilities, such as depth perception, anticipation timing, perception speed, perception span, and the like. By using a variety of tests, efficient examination can be achieved. In accordance with the invention, such tests may be presented to an individual, and the testing and/or training methods administered, at a unitary center that receives input from the individual and processes the received input. Such a unitary testing center may further be configured so that the tests administered vary according to the individual's needs. The received input may then be used, for example, to compute data associated with the user's visual and neuro-processing abilities, both for the tests as a whole and for each individual test.

Description

Unitary vision and neuro-processing testing center
Cross-reference to related applications
This application claims priority to U.S. Provisional Patent Application No. 60/923,434, filed April 13, 2007, entitled "System and Method for Testing Visual Ability in a Simulated Game," which is incorporated herein by reference. This application also claims priority to U.S. Provisional Patent Application No. 60/941,915, filed June 4, 2007, entitled "System and Method for Damped Visual Ability Testing," which is incorporated herein by reference.
Statement regarding federally sponsored research or development
Not applicable.
Technical field
The present invention relates generally to the evaluation and/or training of an individual's visual and neuro-processing abilities.
Background
When an individual participates in an activity such as a sport, the individual's vision, together with physical ability, plays a role in the individual's level of performance. Typically, to improve at a sport or activity, an individual concentrates on improving his or her physical abilities in order to raise the overall level of play. However, an individual's performance may also be improved by testing and training the individual's visual and coordination abilities or acuity.
Summary of the invention
This summary is provided to introduce, in a simplified form, a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed invention, nor is it intended to be used as an aid in determining the scope of the claimed invention.
In accordance with the present invention, a method of testing and/or training the visual and coordination abilities of a subject is provided. More specifically, the method may include testing various aspects of the subject's visual and coordination abilities. By utilizing a variety of tests, a more efficient examination can be performed. In accordance with the invention, vision and coordination tests may be presented to an individual, and the tests and/or training methods may be administered to the individual, at a unitary center that receives input from the individual and processes the received input. Such a unitary testing center may further be configured so that the tests administered vary according to the individual's needs. The received input may then be used, for example, to compute data associated with the individual's visual and coordination abilities, both for the tests as a whole and for each individual test.
Brief description of the drawings
The present invention is described in detail below with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram of a computing system environment suitable for implementing the present invention;
Fig. 2 is a block diagram of exemplary testing components in accordance with an embodiment of the present invention;
Fig. 3 is a block diagram of exemplary processing components for implementing the present invention;
Fig. 4 illustrates an exemplary unitary vision and coordination testing unit in accordance with an embodiment of the present invention;
Fig. 5 illustrates another embodiment of a unitary vision and coordination testing unit in accordance with the present invention; and
Fig. 6 is a flow chart illustrating a method of testing the visual and coordination abilities of a subject at a unitary site in accordance with an embodiment of the present invention.
Detailed description
The subject matter of the present invention is described with specificity herein to meet statutory requirements. The description itself, however, is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, in conjunction with other present or future technologies, so as to include steps different from, or combinations of steps similar to, the ones described in this document.
In accordance with the present invention, systems and methods are provided for testing the visual and coordination abilities of a subject at a unitary testing unit. The method may include testing various aspects of the subject's visual and coordination abilities (for example, eye-hand coordination, split-attention response time, body coordination, and the like) at a unitary testing unit capable of processing the resulting data and/or transmitting the data over a network to another site for processing. Through these operations, a unitary testing center can make the process of testing a subject's visual and coordination abilities more efficient and can reduce the expense needed to administer the tests (for example, by reducing the amount of equipment). In addition, the unitary testing center can be configured so that the tests administered vary according to the individual's needs. The received input can then be used, for example, to compute results associated with the user's visual and coordination abilities, both for the tests as a whole and for each individual test.
In one embodiment, a testing apparatus for testing the visual and coordination abilities of a subject is provided. Such a testing apparatus may include a presenting component, an input component, and a processing component, wherein the presenting component can present visual tests to the subject, such as visual tracking tests, distance focusing tests, visual aiming tests, and the like. The subject may provide input to the testing apparatus in response to each test. The input component may be configured to receive the input, and the processing component may be configured to process the received input.
In another embodiment, a method of testing the visual and coordination abilities of a subject is provided, wherein the method is performed at a unitary site. The method includes, in part, administering two or more visual ability tests to the test subject; receiving input from the test subject in response to each test; and processing the input received from the test subject.
Referring to the drawings generally, and initially to Fig. 1 in particular, a block diagram of an exemplary computing system, referred to generally as computing system 100, is shown, which is configured to provide for testing the visual and coordination abilities of a subject. Those skilled in the art will understand and appreciate that the computing system 100 shown in Fig. 1 is merely one example of a suitable computing system environment and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the present invention. Neither should the computing system 100 be interpreted as having any dependency or requirement relating to any single component or combination of components illustrated therein.
The computing system 100 includes an input device 102, a display device 120, a database 104, a central location 106, and a testing unit 110, all in communication with one another via a connection 108. The connection 108 may be wired (for example, a cable) or wireless (for example, a wireless network). The connection 108 may also be a network, where the network may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). Such networking environments are commonplace in enterprise-wide computer networks, intranets, and the Internet. Further, the connection 108 may comprise locally wired connections between components of the computing system 100. Accordingly, the connection 108 is not further described herein.
The input device 102 can receive one or more responses from a subject. The input device 102 may be any device capable of receiving a response from a subject. Those skilled in the art will appreciate that more than one input device similar to the input device 102 may be used with the computing system 100. The input device 102 may be, for example, a microphone, joystick, gamepad, wireless device, keyboard, keypad, game controller, treadmill, force plate, eye tracking system, gesture recognition system, touch sensitive screen, and/or any other input-initiating component that provides wired or wireless data to the testing unit 110 over the network 108. The input device 102 may include voice recognition equipment and/or software that processes auditory inputs from the test subject. For example, to indicate identification of a visual indicium, the auditory input from the subject may be a verbalization of a trait possessed by the visual indicium. In one embodiment, if the trait is the orientation of a Landolt "C", the responsive auditory input may be "up," "down," "right," or "left." Those skilled in the art will understand and appreciate, however, that other auditory inputs may be used to indicate that the subject perceived and/or identified the visual indicium (for example, stating a color, number, letter, symbol, etc.). It should also be noted that the present invention is not limited to implementation with such an input device 102 and may be implemented with any of a number of different types of devices within the scope of its embodiments. The input device 102 may receive and capture input indicating the subject's response to a displayed visual indicium. If the trait is a directional orientation, a satisfactory test response may be identifying the direction in which the visual indicium is oriented. By way of such an embodiment, and without limitation, identification may include the subject providing input by deflecting a handheld device serving as the input device 102, such as a joystick, in the direction corresponding to the orientation.
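By way of illustration only, the following short sketch (all function names and the response vocabulary are assumptions, not taken from the patent) shows one plausible way a joystick deflection or a recognized spoken word could be normalized to one of the four Landolt "C" orientations and checked against the displayed gap direction.

    # Hypothetical sketch: normalizing joystick or spoken responses to a
    # Landolt "C" orientation and checking them against the displayed gap.
    # All names are illustrative; the patent does not specify an implementation.

    SPOKEN_WORDS = {"up": "up", "top": "up", "down": "down",
                    "bottom": "down", "left": "left", "right": "right"}

    def orientation_from_joystick(dx: float, dy: float) -> str:
        """Map a joystick deflection (dx, dy) to its dominant direction."""
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "up" if dy > 0 else "down"

    def orientation_from_speech(word: str) -> str:
        """Map a recognized spoken word to an orientation, if possible."""
        return SPOKEN_WORDS.get(word.strip().lower(), "unknown")

    def response_is_correct(displayed_gap: str, response: str) -> bool:
        """A response is satisfactory when it matches the gap direction."""
        return displayed_gap == response

    if __name__ == "__main__":
        print(response_is_correct("left", orientation_from_joystick(-0.9, 0.1)))  # True
        print(response_is_correct("up", orientation_from_speech("Up")))           # True

In practice the orientation set, the accepted vocabulary, and the matching rule would all be chosen by the test designer.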
The display device 120 may visually display video output observable by the subject and may be any type of computer, testing apparatus, or television monitor, including a cathode ray tube, liquid crystal display, plasma screen, or any other display type, or may comprise a screen onto which images are projected from the front or from the rear. Further, the display device 120 may provide a user interface for a test administrator to interact with the testing unit 110 before, during, and after the administration of visual ability tests to the test subject.
If the input device 102 is an eye tracking system, the position and/or focus of the subject's eyes may be monitored, and an input may be registered when the eyes are positioned and/or focused at the appropriate location.
If the input device 102 is a gesture recognition system, any of a variety of systems and/or methods may be used to receive input. For example, one or more cameras may be used to monitor the movement of the subject's limbs and/or extremities, with appropriate hardware and/or software registering an input when the subject makes the appropriate gesture. The gesture recognition system may also use markers attached to the subject to facilitate motion tracking. Transmitters and receivers attached to the subject may likewise be used as part of the gesture recognition system.
If the input device 102 is a touch sensitive screen, any type of touch sensitive screen may be used. In addition, an overlay of touch sensitive material may be used with a non-touch-sensitive display to receive touch input. Such an overlay may be at any distance from the display.
The testing unit 110 shown in Fig. 1 may be any type of computing device, embodiments of which are described in detail below with reference to Fig. 4 and Fig. 5. The database 104 may be configured to store information associated with visual and coordination abilities. Those skilled in the art will understand and appreciate that the information stored in the database 104 may be configurable and may include any information relevant to testing visual and coordination abilities. The content and volume of such information are not intended to limit the scope of embodiments of the present invention in any way. Although illustrated as a single, independent component, the database 104 may, in fact, be a plurality of databases, for example, a database cluster. Further, portions or all of the database 104 may reside on a computing device associated with the testing unit 110, on another external computing device (not shown), and/or on any combination thereof. Those skilled in the art will appreciate that the database 104 is optional and need not be implemented in conjunction with the computing system 100.
Referring again to Fig. 1, in the illustrated embodiment, the testing unit 110 may include a presenting component 112, an input component 114, a testing component 116, and a processing component 118. Those skilled in the art will appreciate that the components 112, 114, 116, and 118 shown in Fig. 1 are exemplary in nature and in number and should not be construed as limiting. Any number of components may be employed to achieve the desired functionality within the scope of embodiments of the present invention.
The presenting component 112 may display video output visually observable by the subject and may be any type of computer, testing apparatus, or television monitor, including a cathode ray tube, liquid crystal display, plasma screen, or any other display type, or may comprise a screen onto which images are projected from the front or from the rear.
In one embodiment, the presenting component 112 may be a device employing mirrors and/or lenses configured to create a visual perception of distance within a limited physical space (for example, a circumferential arrangement of mirrors that produces a tunnel effect). One example of such a device is a perspective testing apparatus that uses mirrors to create the perception of distance. Such a device may include a mirror that displays visual indicia in the central viewing area (that is, directly in front of the subject) and may also include side mirrors that display visual indicia to test peripheral visual ability.
In another embodiment, the device may include lenses that change the perceived distance and/or size of the displayed visual indicia to achieve a simulated distance. As a result, such a device can present visual indicia that appear nearer or farther to the test subject than the actual display. Such a configuration thus creates a perception of optical infinity for the test subject.
Those skilled in the art will appreciate that the presenting component 112 may include any number of devices that, in combination, display the visual stimuli appropriate to a particular activity. In one embodiment, a single device may use multiple displays (for example, a split screen) to display visual indicia.
The presenting component 112 may alternatively comprise display eyewear, goggles worn by the subject, a screen, or the like, which provides a visual display to the subject that is not readily visible to others. In addition, the presenting component 112 may provide a two-dimensional or three-dimensional image to the test subject. A three-dimensional display may include a virtual reality or holographic presentation to the subject.
In operation, the presenting component 112 may further be configured to present one or more visual indicia to the test subject. As described in more detail below, the presenting component 112 may present visual indicia in a variety of ways to test different aspects of the subject's visual and coordination abilities. In general, each visual indicium may have one or more traits, for example, a directional orientation (for example, an arrow, a Landolt "C", a tumbling "E", etc.), a position on the user interface (for example, within a particular quadrant of the display), one of a predetermined number of mutually exclusive traits (for example, an indication of up, down, left, or right), or any combination of these traits. Further, those skilled in the art will understand and appreciate that other traits may be used, and the present invention is not limited to any particular trait.
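As a purely illustrative aside, the traits listed above can be grouped into a single record; the following sketch is a hypothetical data structure whose field names are invented here and are not defined by the patent.

    # Hypothetical sketch of a visual indicium record; field names are illustrative.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class VisualIndicium:
        kind: str                          # e.g. "landolt_c", "arrow", "dot"
        orientation: Optional[str]         # "up", "down", "left", "right", or None
        position: Tuple[float, float]      # normalized screen coordinates (x, y)
        depth: float = 0.0                 # simulated depth, arbitrary units
        duration_ms: Optional[int] = None  # how long the indicium is flashed

    if __name__ == "__main__":
        c = VisualIndicium("landolt_c", "left", (0.25, 0.75), duration_ms=150)
        print(c)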
The input component 114 may be configured to receive input from the test subject (for example, by using the input device 102). Any suitable receiving component capable of receiving the input provided by the subject may be employed with the present invention. By way of example and not limitation, the subject may use a keyboard, joystick, trackball, or the like to provide input. The input may also depend on the presenting component. For example, if the presenting component is touch sensitive, the subject may provide input by contacting the presenting component. In another embodiment, the input component may have voice recognition capability, wherein the subject provides input by a vocalized response recognized by the input component. Those skilled in the art will understand and appreciate that any suitable input component may be used with the present invention. Depending on the test presented and the capabilities of the presenting component described above, certain input types may be preferable. Upon receiving input from the subject, the input component 114 may, for example, store the input in the database 104 for future reference.
The testing component 116 is configured to provide tests to the subject. As described in detail below with reference to Fig. 2, the testing component 116 may provide two or more tests to determine the subject's visual and coordination abilities. More specifically, multiple tests may be provided at a unitary site, such as the testing unit 110. In addition, the testing component 116 may be configured so that the tests vary by subject. For example, the tests may vary depending on the particular sport or activity of the test subject, the level of competition, visual strengths and weaknesses, and the like. The testing component 116 may therefore also be responsible for determining the tests (and the level or difficulty of the tests) presented by the presenting component 112.
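A hypothetical sketch of such test selection is given below; the battery names mirror the components of Fig. 2, while the sports, competition levels, and difficulty rules are invented for illustration only.

    # Hypothetical sketch: choosing a test battery from a subject profile.
    # The test names echo the components of FIG. 2; the rules are invented.

    DEFAULT_BATTERY = ["depth_perception", "anticipation_timing",
                       "perception_span", "perception_speed"]

    def select_battery(sport: str, competition_level: str) -> list:
        """Return (test_name, difficulty) pairs for the subject."""
        difficulty = {"recreational": 1, "collegiate": 2, "professional": 3}.get(
            competition_level, 1)
        battery = list(DEFAULT_BATTERY)
        if sport in ("baseball", "tennis"):           # fast-moving ball sports
            battery.insert(0, "anticipation_timing")  # emphasize timing first
            battery = list(dict.fromkeys(battery))    # drop the duplicate entry
        return [(name, difficulty) for name in battery]

    if __name__ == "__main__":
        for test in select_battery("baseball", "collegiate"):
            print(test)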
A processing component 118 is provided to process the input received from the input component 114. As shown in Fig. 3, the processing component 118 may include a scoring component 310, a data collection component 312, a training development component 314, and a transmitting component 316. The scoring component 310 may be configured to apply a scoring method to derive a score from the subject's responses to the presented tests. The subject's responses may be evaluated by comparing them with the responses of a particular population, typically retrieved from the database 104. Upon receiving and measuring one or more responses to the visual indicia, the scoring component 310 may provide an assessment of the subject's visual and coordination abilities. Once a score (for example, a percentile) has been determined, it may be presented to the subject via the presenting component 112. A score may be presented upon completion of each test, upon completion of all tests, or a combination thereof.
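As one illustration of comparing a subject's result against stored population data, the sketch below computes a percentile rank; the scoring formula is an assumption, since the patent does not prescribe one.

    # Hypothetical sketch of a scoring step: a subject's raw result is ranked
    # against stored population results to yield a percentile.

    from bisect import bisect_right

    def percentile(subject_score: float, population_scores: list) -> float:
        """Percentage of the population scoring at or below the subject."""
        ordered = sorted(population_scores)
        if not ordered:
            return 0.0
        rank = bisect_right(ordered, subject_score)
        return 100.0 * rank / len(ordered)

    if __name__ == "__main__":
        norms = [42.0, 55.0, 61.0, 70.0, 73.0, 80.0, 88.0, 91.0]
        print(percentile(75.0, norms))  # 62.5: subject beat 5 of 8 stored scores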
The data collection component 312 is configured to collect the data received from the input component 114. Such data may, for example, be stored in the database 104. The collected data may further be used to create norms for particular populations, which may be employed by the scoring component 310. Those skilled in the art will appreciate that the database 104 and/or the scoring component 310 may be located remotely from the system 100.
The training development component 314 is configured to develop a training plan or regimen for the test subject based on the collected data and the determined scores. In embodiments of the present invention, the testing unit 110 may be used to train the test subject after the subject has been tested.
The transmitting component 316 is configured to transmit the determined scores, the collected data, and the like to the presenting component 112. The transmitting component 316 may additionally provide the data to an external computing device, such as the central location 106, for further consideration, analysis, or storage. In one embodiment, the transmitting component 316 may provide data to the testing component 116 in real time so that the tests can be configured or changed during the testing process. Those skilled in the art will understand and appreciate that, although embodiments and examples are described above, the transmitting component 316 may provide information associated with vision and coordination testing to any component of the computing system 100, whether internal or external to the testing unit 110.
Those skilled in the art will appreciate that the transmitting component 316 may send any needed information from the testing unit 110 at any needed frequency. For example, information may be sent where it is needed after the subject has completed all of the tests or, alternatively, after each individual test. If the information is sent to the central location 106 or to the database 104 for storage and/or processing, information for all subjects may ultimately be aggregated there. The transmission frequency may depend on the storage and processing capabilities of the testing unit 110 and on the intended use of the information.
Referring now to Fig. 2, the testing component 116 is illustrated in further detail. The testing component 116 may include a depth perception component 210, an anticipation timing component 212, a perception span component 214, and a perception speed component 216. The testing unit 110 may employ each of these components to test many aspects of an individual's visual and coordination abilities. Those skilled in the art will appreciate that other tests may also be used and still fall within the scope of the present invention.
The depth perception component 210 is configured to test a subject's depth perception, which may include displaying visual indicia at different depths and requiring the test subject to locate the visual indicium that is, or appears to be, at a particular depth. In one embodiment, multiple visual indicia may be presented, all but one of which appear to be at the same depth. In such an embodiment, the test subject may locate the visual indicium that appears to be at a different depth from the other indicia and enter that response into the testing unit 110. Those skilled in the art will recognize and understand that the depth perception component 210 may employ any suitable test capable of testing a subject's depth perception.
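A minimal sketch of such an odd-one-out depth trial might look like the following; the depth values, offsets, and function names are illustrative assumptions.

    # Hypothetical sketch of the depth perception task: several indicia share
    # one apparent depth and a single indicium sits at a different depth;
    # the subject's job is to pick out that odd one.

    import random

    def make_depth_trial(n_indicia: int = 4, base_depth: float = 1.0,
                         offset: float = 0.2) -> tuple:
        """Return (list of depths, index of the odd indicium)."""
        depths = [base_depth] * n_indicia
        odd_index = random.randrange(n_indicia)
        depths[odd_index] = base_depth + offset
        return depths, odd_index

    def score_depth_response(odd_index: int, chosen_index: int) -> bool:
        """True when the subject chose the indicium at the different depth."""
        return odd_index == chosen_index

    if __name__ == "__main__":
        depths, answer = make_depth_trial()
        print(depths, "correct choice is index", answer)
        print(score_depth_response(answer, answer))  # True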
The anticipation timing component 212 is configured to test a subject's ability to anticipate the timing of a moving visual indicium. In one embodiment, a visual indicium, such as a dot or circle, is presented to the subject so that the indicium appears to move toward the subject. The subject may then provide an input to indicate when the subject anticipates that the visual indicium has reached a particular location. Those skilled in the art will recognize and understand that the anticipation timing component 212 may employ any suitable anticipation timing test.
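Anticipation timing lends itself to a simple error measure: the difference between the indicium's true arrival time at the target location and the moment the subject responds. The sketch below illustrates that calculation with invented units and names.

    # Hypothetical sketch: scoring an anticipation timing trial. The indicium
    # moves toward the subject at a constant speed; the error is the gap
    # between the true arrival time and the subject's response time.

    def arrival_time(start_distance: float, speed: float) -> float:
        """Seconds until an indicium moving at `speed` covers `start_distance`."""
        return start_distance / speed

    def timing_error(start_distance: float, speed: float,
                     response_time: float) -> float:
        """Positive -> responded late, negative -> responded early."""
        return response_time - arrival_time(start_distance, speed)

    if __name__ == "__main__":
        # Indicium starts 3.0 m away, approaches at 2.0 m/s, response at 1.4 s.
        print(timing_error(3.0, 2.0, 1.4))  # about -0.1 s: slightly early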
The perception span component 214 is configured to test a subject's visual scanning ability. Any suitable test may be used and still fall within the scope of the present invention. By way of example and not limitation, visual indicia may be presented to the test subject, and the visual indicia may comprise individual indicia arranged in a grid forming a particular pattern. For example, a grid of dots may be displayed in which some dots are solid and the others are not. The previously solid dots may then be displayed with the same appearance as the other dots in the grid, and the subject must identify which dots were previously solid. Another exemplary perception span test includes presenting a random set of numbers to the subject for a limited time and having the subject enter the numbers perceived.
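The grid variant described above reduces to flashing a set of "solid" cells and later comparing the subject's recalled cells against that set. The following sketch, with all parameters invented, illustrates one possible scoring rule.

    # Hypothetical sketch of the perception span grid task: a random set of
    # grid cells is briefly shown as "solid"; the subject later marks the
    # cells they remember, and the overlap is scored.

    import random

    def make_pattern(rows: int, cols: int, n_solid: int) -> set:
        """Choose `n_solid` distinct (row, col) cells to display as solid."""
        cells = [(r, c) for r in range(rows) for c in range(cols)]
        return set(random.sample(cells, n_solid))

    def span_score(pattern: set, recalled: set) -> float:
        """Fraction of solid cells correctly recalled, penalizing false alarms."""
        hits = len(pattern & recalled)
        false_alarms = len(recalled - pattern)
        return max(0.0, (hits - false_alarms) / len(pattern))

    if __name__ == "__main__":
        pattern = {(0, 1), (1, 2), (2, 0), (3, 3), (1, 1)}
        recalled = {(0, 1), (1, 2), (2, 0), (3, 3), (0, 0)}  # 4 hits, 1 false alarm
        print(span_score(pattern, recalled))  # 0.6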
The perception speed component 216 is configured to test the speed at which a subject can perceive visual indicia. In one embodiment, a visual indicium is displayed or flashed to the test subject for a certain period of time. Another visual indicium is then displayed at a different location on the display device for a varying period of time. Between the flashed visual indicia, a neutral visual indicium may be presented at the center of the display. By having the test subject identify each flashed visual indicium, the subject's visual and neuro-processing ability to perceive at a given speed can be measured. Those skilled in the art will appreciate that any suitable test capable of testing visual perception speed may be used.
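One plausible way to quantify perception speed in such a procedure is a staircase on exposure time: the flash duration shrinks after correct identifications and grows after errors, converging on the shortest exposure the subject can still identify. The sketch below is an assumption-laden illustration, not the patent's method.

    # Hypothetical sketch: a simple up/down staircase on flash duration for the
    # perception speed test. Durations shrink after correct answers and grow
    # after errors; the final duration estimates the subject's perception speed.

    def staircase(results, start_ms: float = 400.0, step: float = 0.8,
                  floor_ms: float = 25.0) -> float:
        """`results` is an iterable of booleans (True = correct identification)."""
        duration = start_ms
        for correct in results:
            duration = duration * step if correct else duration / step
            duration = max(duration, floor_ms)
        return duration

    if __name__ == "__main__":
        trial_outcomes = [True, True, True, False, True, True, False, True]
        print(round(staircase(trial_outcomes), 1))  # shortest usable flash, in ms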
Referring now to Fig. 4, an exemplary vision and neuro-processing testing system 400 in accordance with the present invention is illustrated. By providing the subject with a unitary testing unit, such as the testing unit 412, which can present multiple tests, a better overall evaluation of the subject's visual and neuro-processing abilities may be obtained. Further, because the testing unit 412 may include processing capability, it can process the data, resulting in scores and/or determined training regimens for the subject. A display device 414 may output visual stimuli to a subject 410. The subject 410 may use an input device 416 to provide input in response to the visual stimuli. A spatial display 418 may alternatively and/or additionally output visual stimuli. For example, as described above, the spatial display 418 may be used in conjunction with the perception speed component 216, the anticipation timing component 212, or a combination of tests. Of course, the spatial display 418 may also be used in conjunction with other types of visual tests.
Fig. 5 further illustrates the vision and neuro-processing testing system 400 in accordance with an embodiment of the present invention. More specifically, in this example, the display device 414 outputs visual stimuli comprising an upper indicium 421, a lower indicium 422, a left indicium 423, and a right indicium 424. The indicia presented on the display device 414 may be static or dynamic. The subject may use the input device 416 to enter a selection of one or more of the indicia. The selection of an indicium may depend, for example, on criteria such as movement, alignment, depth, color, size, distinctness, or other visual characteristics.
Referring now to Fig. 6, a flow chart 600 illustrates a method of testing the visual and neuro-processing abilities of a subject. Although the terms "step" and "block" are used hereinafter to denote different elements of the method, these terms should not be interpreted as implying any particular order among the steps disclosed herein unless and except when the order of individual steps is explicitly stated. Initially, two or more vision and/or neuro-processing tests are administered to the test subject (for example, using the testing unit 110 of Fig. 1). This is shown at block 610. Those skilled in the art will appreciate that any of the tests described above, as well as other tests that measure an individual's visual and neuro-processing abilities, may be administered. The particular tests administered, and the order of the tests, may be configured according to the subject's ability level, competition level, particular activity, and the like. As the tests are administered, the subject may provide appropriate responses through the input component by interacting with an input device connected to the testing unit. This is shown at block 620. Multiple input devices may be used, and more than one response may be received from the subject. For example, in a split-attention test, the subject may provide one response to a visual indicium in an eye-hand coordination test and, as described above, another response to a visual indicium at another location. A processing component (for example, the processing component 118 of Fig. 1) then processes the received input, for example, by collecting the data, determining a score, or developing a training regimen. The data may, for example, be stored in the database 104 or sent to the central location 106 by the transmitting component. This is shown at block 630.
Optionally, at block 640, the data received from the subject's inputs in each test is used to determine a score for the subject. A score may be determined for each test, and an overall score may be determined from the data of all tests. The score may further depend on data for a relevant population against which the subject's data can be compared (for example, the subject may be given a percentile indicating their level). At block 650, a training regimen is developed for the test subject based on, for example, the determined scores and the subject's inputs in response to the visual ability tests, in order to train his or her visual and coordination abilities.
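Taken together, blocks 610 through 650 amount to a loop of administer, collect a response, and then process (score and, optionally, develop a training regimen). The outline below is a hypothetical sketch of that loop; the callables stand in for the presenting, input, and processing components, and none of the names come from the patent.

    # Hypothetical sketch of the FIG. 6 flow: administer each test (block 610),
    # gather the subject's responses (block 620), then process them (blocks 630-650).

    from typing import Callable, Dict, List

    def run_session(tests: List[str],
                    administer: Callable[[str], object],
                    collect_response: Callable[[str], object],
                    process: Callable[[Dict[str, object]], Dict[str, float]]):
        responses: Dict[str, object] = {}
        for test_name in tests:                                 # block 610
            administer(test_name)
            responses[test_name] = collect_response(test_name)  # block 620
        return process(responses)                               # blocks 630-650

    if __name__ == "__main__":
        battery = ["depth_perception", "anticipation_timing",
                   "perception_span", "perception_speed"]
        demo = run_session(
            battery,
            administer=lambda name: print("presenting", name),
            collect_response=lambda name: 1.0,               # stand-in response
            process=lambda r: {name: 100.0 for name in r})   # stand-in scoring
        print(demo)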
The present invention has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those skilled in the art without departing from the scope of the present invention.
From the foregoing, it will be seen that this invention is well adapted to attain all the ends and objects set forth above, together with other advantages that are obvious and inherent to the system and method. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations; this is contemplated by and is within the scope of the present invention.

Claims (20)

1. An apparatus for testing the visual and neuro-processing abilities of a test subject, comprising:
a presenting component configured to present two or more vision and neuro-processing ability tests, wherein the vision and neuro-processing ability tests comprise tests that determine the subject's visual and neuro-processing abilities, and wherein the subject provides a responsive input to each test;
an input component configured to receive the input provided by the subject; and
a processing component configured to process the received input.
2. The apparatus according to claim 1, wherein the processing component comprises a scoring component that determines a score based on the received input.
3. The apparatus according to claim 2, wherein the processing component further comprises a training development component that provides a training regimen based on the determined score.
4. The apparatus according to claim 1, wherein the two or more vision and neuro-processing ability tests comprise a depth perception test.
5. The apparatus according to claim 4, wherein the depth perception test comprises a visual indicium presented to the subject, the visual indicium being located by an input provided by the subject in response to the position of the visual indicium.
6. The apparatus according to claim 1, wherein one of the two or more vision and neuro-processing ability tests comprises an anticipation timing test.
7. The apparatus according to claim 6, wherein the split-attention test comprises a first visual indicium presented to the subject at a first location, the subject providing an input in response to locating the first visual indicium, and a second visual indicium presented to the subject at a second location that requires a response from the subject.
8. The apparatus according to claim 7, wherein the second visual indicium is a Landolt C.
9. The apparatus according to claim 1, wherein one of the two or more vision and neuro-processing ability tests comprises a perception speed test.
10. The apparatus according to claim 1, wherein one of the two or more vision and neuro-processing ability tests comprises a perception span test.
11. A method of testing the visual and neuro-processing abilities of a test subject, wherein the method is performed at a unitary site, the method comprising:
administering two or more vision and neuro-processing ability tests to the test subject, wherein the vision and neuro-processing ability tests are tests that determine the subject's visual and neuro-processing abilities;
receiving input from the test subject in response to each test; and
processing the received input.
12. The method according to claim 11, further comprising determining a score based on the received input.
13. The method according to claim 12, further comprising providing a training regimen, by a training development component, based on the determined score.
14. The method according to claim 11, wherein one of the two or more vision and neuro-processing ability tests comprises a depth perception test.
15. The method according to claim 14, wherein the eye-hand coordination test comprises a visual indicium presented to the subject, the visual indicium being located by an input provided by the subject in response to the position of the visual indicium.
16. The method according to claim 11, wherein one of the two or more vision and neuro-processing ability tests comprises an anticipation timing test.
17. The method according to claim 16, wherein the split-attention test comprises a first visual indicium presented to the subject at a first location, the subject providing an input in response to locating the first visual indicium, and a second visual indicium presented to the subject at a second location that requires a response from the subject.
18. The method according to claim 17, wherein the second visual indicium is a Landolt C.
19. The method according to claim 11, wherein one of the two or more vision and neuro-processing ability tests comprises a perception speed test.
20. The method according to claim 19, wherein one of the two or more vision and neuro-processing ability tests comprises a perception span test.
CN2008800118947A 2007-04-13 2008-04-14 Unitary vision and neuro-processing testing center Expired - Fee Related CN101657143B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US92343407P 2007-04-13 2007-04-13
US60/923,434 2007-04-13
US94191507P 2007-06-04 2007-06-04
US60/941,915 2007-06-04
PCT/US2008/060249 WO2008128190A1 (en) 2007-04-13 2008-04-14 Unitary vision and neuro-processing testing center

Publications (2)

Publication Number Publication Date
CN101657143A true CN101657143A (en) 2010-02-24
CN101657143B CN101657143B (en) 2012-05-30

Family

ID=41711088

Family Applications (5)

Application Number Title Priority Date Filing Date
CN2008800118947A Expired - Fee Related CN101657143B (en) 2007-04-13 2008-04-14 Unitary vision and neuro-processing testing center
CN2008800119314A Expired - Fee Related CN101657145B (en) 2007-04-13 2008-04-14 Unitary vision testing center
CN200880011916XA Expired - Fee Related CN101657144B (en) 2007-04-13 2008-04-14 Unitary vision and neuro-processing testing center
CN200880011994XA Expired - Fee Related CN101657146B (en) 2007-04-13 2008-04-14 Systems and methods for testing and/or training near and far visual abilities
CN200880011961.5A Expired - Fee Related CN101657846B (en) 2007-04-13 2008-04-14 Method and system for vision cognition and coordination testing and training

Family Applications After (4)

Application Number Title Priority Date Filing Date
CN2008800119314A Expired - Fee Related CN101657145B (en) 2007-04-13 2008-04-14 Unitary vision testing center
CN200880011916XA Expired - Fee Related CN101657144B (en) 2007-04-13 2008-04-14 Unitary vision and neuro-processing testing center
CN200880011994XA Expired - Fee Related CN101657146B (en) 2007-04-13 2008-04-14 Systems and methods for testing and/or training near and far visual abilities
CN200880011961.5A Expired - Fee Related CN101657846B (en) 2007-04-13 2008-04-14 Method and system for vision cognition and coordination testing and training

Country Status (1)

Country Link
CN (5) CN101657143B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104970763A (en) * 2014-04-09 2015-10-14 冯保平 Full-automatic vision detecting training instrument
CN105496347A (en) * 2016-01-12 2016-04-20 夏国滨 Electronic apparent depth measuring device
CN105848563A (en) * 2013-12-17 2016-08-10 埃西勒国际通用光学公司 Apparatus and method for screening for defects in the sight and for measuring the visual acuity of a user
CN106726388A (en) * 2017-01-04 2017-05-31 深圳市眼科医院 Extraocular muscle neurofeedback muscle training device and control method thereof

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015028722A1 (en) * 2013-09-02 2015-03-05 Ocuspecto Oy Automated perimeter
CN105407800B (en) * 2013-09-11 2019-04-26 麦克赛尔株式会社 Brain disorder evaluating apparatus and storage medium
MX2016004899A (en) * 2013-10-17 2017-01-18 Children's Healthcare Of Atlanta Inc Methods for assessing infant and child development via eye tracking.
CN104382560A (en) * 2014-12-08 2015-03-04 丹阳市司徒镇合玉健身器械厂 Ataxia detector
CN104586403A (en) * 2015-01-21 2015-05-06 陕西省人民医院 Finger movement mode monitoring and analysis device and use method thereof
CN104851326A (en) * 2015-05-18 2015-08-19 吉首大学 Ideological and political work demonstration and teaching instrument
CN104887467A (en) * 2015-06-03 2015-09-09 侯跃双 Child vision correction recovery instrument
EP3192434B1 (en) 2016-01-15 2018-10-03 Centre National de la Recherche Scientifique Device and method for determining eye movements by touch interface
US20190076077A1 (en) * 2016-03-31 2019-03-14 Koninklijke Philips N.V. Device and system for detecting muscle seizure of a subject
CN107736889B (en) * 2017-09-08 2021-01-08 燕山大学 Detection method of human body coordination detection device
CN109727508B (en) * 2018-12-11 2021-11-23 中山大学中山眼科中心 Visual training method for improving visual ability based on dynamic brain fitness
CN109744994B (en) * 2019-03-12 2024-05-31 西安奇点融合信息科技有限公司 Visual field inspection device based on multi-screen display
CN109998491A (en) * 2019-04-25 2019-07-12 淮南师范学院 Glasses and method for testing depth perception ability
CN114929105A (en) * 2019-10-30 2022-08-19 朱拉隆功大学 Stimulation system for cooperative brain and body function
CN113018124A (en) * 2021-03-02 2021-06-25 常州市第一人民医院 Rehabilitation device for unilateral neglect of patient
CN115969677B (en) * 2022-12-26 2023-12-08 广州视景医疗软件有限公司 Eyeball movement training device
CN116115981B (en) * 2023-02-09 2024-05-31 湖南理工学院 Table tennis player service action recognition training instrument
CN116172560B (en) * 2023-04-20 2023-08-29 浙江强脑科技有限公司 Reaction speed evaluation method for reaction force training, terminal equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4528989A (en) * 1982-10-29 1985-07-16 Weinblatt Lee S Screening method for monitoring physiological variables
US4618231A (en) * 1984-02-22 1986-10-21 The United States Of America As Represented By The Secretary Of The Air Force Accommodative amplitude and speed measuring instrument
US5088810A (en) * 1989-01-23 1992-02-18 Galanter Stephen M Vision training method and apparatus
CN1077873A (en) * 1992-04-22 1993-11-03 四川大学 Computerized comprehensive test system for visual sense
US5825460A (en) * 1994-04-30 1998-10-20 Canon Kabushiki Kaisha Visual function measuring apparatus
US5812239A (en) * 1996-10-22 1998-09-22 Eger; Jeffrey J. Method of and arrangement for the enhancement of vision and/or hand-eye coordination
US6092058A (en) * 1998-01-08 2000-07-18 The United States Of America As Represented By The Secretary Of The Army Automatic aiding of human cognitive functions with computerized displays
US6066105A (en) * 1998-04-15 2000-05-23 Guillen; Diego Reflex tester and method for measurement
US6364845B1 (en) * 1998-09-17 2002-04-02 University Of Rochester Methods for diagnosing visuospatial disorientation or assessing visuospatial orientation capacity
US6454412B1 (en) * 2000-05-31 2002-09-24 Prio Corporation Display screen and vision tester apparatus
US6632174B1 (en) * 2000-07-06 2003-10-14 Cognifit Ltd (Naiot) Method and apparatus for testing and training cognitive ability

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105848563A (en) * 2013-12-17 2016-08-10 埃西勒国际通用光学公司 Apparatus and method for screening for defects in the sight and for measuring the visual acuity of a user
CN104970763A (en) * 2014-04-09 2015-10-14 冯保平 Full-automatic vision detecting training instrument
CN105496347A (en) * 2016-01-12 2016-04-20 夏国滨 Electronic apparent depth measuring device
CN105496347B (en) * 2016-01-12 2017-06-06 哈尔滨学院 Electronic apparent depth measuring device
CN106726388A (en) * 2017-01-04 2017-05-31 深圳市眼科医院 Extraocular muscle neurofeedback muscle training device and control method thereof
CN106726388B (en) * 2017-01-04 2019-02-05 深圳市眼科医院 Extraocular muscle neurofeedback muscle training device and control method thereof

Also Published As

Publication number Publication date
CN101657145A (en) 2010-02-24
CN101657144B (en) 2012-05-30
CN101657145B (en) 2012-01-25
CN101657143B (en) 2012-05-30
CN101657146A (en) 2010-02-24
CN101657846A (en) 2010-02-24
CN101657146B (en) 2012-01-18
CN101657846B (en) 2016-03-09
CN101657144A (en) 2010-02-24

Similar Documents

Publication Publication Date Title
CN101657143B (en) Unitary vision and neuro-processing testing center
KR101520113B1 (en) Unitary vision and neuro-processing testing center
CN102573610B (en) Unified vision testing and/or training
KR101726894B1 (en) Testing/training visual perception speed and/or span
CA2770113C (en) Multi-touch display and input for vision testing and training
KR101765961B1 (en) Vision testing and/or training using adaptable visual indicia
Youngblut Experience of presence in virtual environments
Meusel Exploring mental effort and nausea via electrodermal activity within scenario-based tasks
Hsiao et al. Human responses to augmented virtual scaffolding models
Guerreiro et al. Blind people and mobile keypads: accounting for individual differences
Szymański et al. Eye tracking in gesture based user interfaces usability testing
KR102550724B1 (en) Augmented reality based cognitive rehabilitation training system and method
Zhao Enhancing undergraduate research experience with cutting edge technologies
KR20200010733A (en) Apparatus and Method for Measuring Virtual Reality Motion Sickness to Reduce Motion Sickness of Virtual Reality Device
CN110414848A (en) Sports items assessment method, device, readable storage medium storing program for executing and electronic equipment
Martins et al. Usability test of 3Dconnexion 3D mice versus keyboard+ mouse in Second Life undertaken by people with motor disabilities due to medullary lesions
Ishihara et al. A pilot study on impact of viewing distance to task performance
Lai 3D Travel Techniques for Virtual Reality Cyberlearning Systems
Khambadkar Leveraging Proprioception to create Assistive Technology for Users who are Blind
Parit et al. Eye tracking based human computer interaction
Sanders Investigations into haptic space and haptic perception of shape for active touch
Dontschewa et al. EXPERIMENTAL SET-UP FOR HAPTIC INVESTIGATION OF REAL AND VIRTUAL ENVIRONMENTS
Littell Physiological effects of monocular display augmented, articulated arm-based laser digitizing
WO2012069300A1 (en) Speed and distance estimation test device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: NIKE INNOVATION LIMITED PARTNERSHIP

Free format text: FORMER OWNER: NIKE INTERNATIONAL LTD.

Effective date: 20141117

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20141117

Address after: oregon

Patentee after: NIKE INNOVATE C.V.

Address before: oregon

Patentee before: Nike International Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120530