WO2009155415A2 - Training and rehabilitation system and associated method and computer program product - Google Patents

Training and rehabilitation system and associated method and computer program product

Info

Publication number
WO2009155415A2
WO2009155415A2 PCT/US2009/047790
Authority
WO
WIPO (PCT)
Prior art keywords
image
interest
specimen
human
computable
Prior art date
Application number
PCT/US2009/047790
Other languages
English (en)
Other versions
WO2009155415A3 (fr)
Inventor
Diglio A. Simoni
Original Assignee
Research Triangle Institute
Priority date
Filing date
Publication date
Application filed by Research Triangle Institute
Publication of WO2009155415A2
Publication of WO2009155415A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00: Computing arrangements based on specific mathematical models
    • G06N 7/02: Computing arrangements based on specific mathematical models using fuzzy logic
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/043: Architecture, e.g. interconnection topology, based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00: Teaching not covered by other main groups of this subclass
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Definitions

  • Embodiments of the present invention are generally directed to artificial intelligence systems and, more particularly, to training and rehabilitation systems and associated methods and computer program products.
  • IEDs: improvised explosive devices
  • Visual search tasks are accomplished through a cycle of fixations and visual scene analysis interrupted by saccades.
  • A saccade produces a rapid shift of gaze, redirecting the fovea (a tiny pit located in the macula of the retina that is responsible for sharp central vision) onto a new point in the visual scene.
  • As the visual system reacquires the new image, the visual scene is remapped onto the primary visual cortex, governed by the physical limits imposed by the retinal photoreceptor layout and the cortical magnification factor.
  • Although the physical makeup of our perception systems may be the same, we do not all perceive stimuli identically, nor do we process stimuli in the same manner so as to have the same psychophysical reactions.
  • Some observers may be more experienced or may be trained to analyze images efficiently and effectively.
  • For example, the psychophysical characteristics of visual search tasks performed by experienced doctors searching tumor tissue images for potentially cancerous cells may differ greatly from those of inexperienced doctors.
  • Accordingly, exemplary embodiments of the present invention provide an improvement over the known prior art by, among other things, providing a method of behavioral training, a method of rehabilitation, and associated systems and computer program products.
  • One exemplary aspect of the present invention provides a method of behavioral training that comprises transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic, wherein the computable statements are associated with a perception process.
  • The computable statements are incorporated into an expert system, and the expert system of the perception process is then combined with a neural network corresponding to a perception model, so as to form a behavioral system.
  • A stimulus is introduced to the behavioral system and an exemplary response is elicited therefrom.
  • An untrained specimen, also introduced to the stimulus, is induced to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen.
  • Another exemplary aspect of the present invention provides a method of rehabilitation that comprises transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic, wherein the computable statements are associated with a perception process.
  • The computable statements are incorporated into an expert system, and the expert system of the perception process is combined with a neural network corresponding to a perception model, so as to form a demonstration system.
  • A scenario is introduced to the demonstration system and an exemplary response is elicited therefrom.
  • A debilitated specimen, also introduced to the scenario, is then trained to mimic the exemplary response, thereby rehabilitating the debilitated specimen.
  • Fig. 1 shows an example of a search trial image used in an experiment where subjects were asked to search for a target object within a sample image;
  • Fig. 2 shows the sample image of Fig. 1, including the target object and a search path, as well as regions of interest;
  • Fig. 3 shows a graphical representation of the cortical magnification factor;
  • Fig. 4 shows an example of a sample image search model developed by the inventors of the present application in accordance with one embodiment of the present invention;
  • Fig. 5 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention;
  • Fig. 6 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention, focusing on the neural network component;
  • Fig. 7 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention, focusing on the fuzzy expert system component;
  • Fig. 8 shows a search trial image similar to that shown in Fig. 1, wherein an image is searched for the target object;
  • Fig. 9 schematically illustrates one exemplary embodiment of the present invention, directed to a method of behavioral training;
  • Fig. 10 schematically illustrates another exemplary embodiment of the present invention, directed to a method of rehabilitation; and
  • Fig. 11 shows a block diagram of an exemplary electronic device configured to execute a method and computer program product for behavioral training or for rehabilitation in accordance with an exemplary embodiment of the present invention.
  • Visual search mechanisms can be examined using simple search paradigms. For example, a subject can be asked to search for a particular letter hidden among a group of distractors.
  • The subject's eyes can be tracked using various techniques (such as, for example, infrared eye tracking devices) that record the position of the subject's eyes on the display in real-time.
  • Measures of performance can be obtained, including reaction time and search statistics based on the position of the target with respect to the position of the eyes on the display.
  • Fig. 1 shows an example of a search trial image used in an experiment where a subject was asked to search for a target object 10 (in this case a red (shown in the figure as crossed section lining) rotated L hidden within a group of red (shown in the figure as crossed section lining) Ts and green (shown in the figure as diagonal section lining) Ls) within a sample image 20.
  • The white line indicates the search path 30 dictated by the position of the subject's eyes (as tracked by an infrared eye tracking device) as the subject performed the search trial.
  • Fig. 2 shows the sample image 20 of Fig. 1, including the target object 10 and the search path 30, as well as the regions of interest 40 determined by the areas of the image upon which the subject fixated.
  • The flowchart on the right of the drawing indicates a traditional image search model 50 used to describe the manner in which the human brain solves this type of search problem.
  • A subject first acquires the scene, meaning that the subject quickly examines the image 20 as a whole so as to put the entire image 20 into context.
  • In block 70, the subject selects a region of interest and, in block 80, attempts to identify the target 10 within (or near, through peripheral vision) the region of interest 40.
  • If the target 10 is found, the search ends. If, however, as shown in the drawing, the target 10 is not found, the procedure returns to block 70 and a new region of interest 40 is selected. This process continues until the target 10 is finally identified, at which point the searching stops.
  • However, the flowchart 50 may not properly model the true human response. For example, selection of a new fixation location is determined on a real-time basis depending on the current point of view, taking into account, for example, the retinocortical transformation of image space. This nonlinear transformation may induce certain constraints that naturally affect the way that regions of interest are selected for further processing.
  • For example, the cortical magnification factor, which results in an increased internal representation of image data at the fovea, implies that a larger amount of neural processing is expended in areas close to the point of fixation, resulting in a natural division of work between the processes involved in target identification (which are done centrally) and those involved in the selection of a new fixation location (which are done in the periphery).
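The falloff described above can be illustrated numerically. The patent gives no formula for the cortical magnification factor; the inverse-linear form and parameter values below are a standard approximation from the vision-science literature, used here purely as an illustrative sketch:

```python
def cortical_magnification(eccentricity_deg, m0=7.99, e2=3.67):
    """Approximate cortical magnification (mm of cortex per degree of
    visual angle) using the common inverse-linear model
    M(e) = M0 / (1 + e / E2).  The parameter values are illustrative,
    not taken from the patent."""
    return m0 / (1.0 + eccentricity_deg / e2)

# Magnification is maximal at the fovea (eccentricity 0) and falls off
# rapidly in the periphery, matching the division of work described above.
at_fovea = cortical_magnification(0.0)
at_periphery = cortical_magnification(20.0)
```

Under this model most of the processing resolution is lost within the first few degrees of eccentricity, which is why target identification is effectively confined to the neighborhood of fixation while ROI selection relies on coarser peripheral information.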
  • Fig. 3 shows a graphical representation of the cortical magnification factor.
  • The left side of the drawing shows fixation on an object 80 of an image 90 displayed at the center of a polar coordinate grid.
  • On the right side of the drawing is a representation of a model that represents the effect of cortical magnification.
  • The fovea is located at the left tip of the model.
  • Because of cortical magnification, the fixated object 80 is processed by a larger number of neurons than the objects in the periphery.
  • Fig. 4 shows an example of a sample image search model 100 in accordance with one embodiment of the present invention.
  • This model differs from the traditional image search model 50 by providing a parallel framework that represents the division of work between central processes 110, which are involved in target identification, and peripheral processes 120, which are involved in the selection of a new fixation location (i.e., a new ROI).
  • In block 130, the subject begins a new fixation.
  • In block 140, the subject selects an ROI, while in block 150 the subject determines whether the target has been identified. If, in block 160, the target is identified, the process stops. If, however, the target has not been identified, the process returns to block 130, where the subject begins a new fixation.
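The parallel control flow of blocks 130-160 can be sketched as a loop. The helper details below (random ROI selection, a simple equality test standing in for target identification) are hypothetical; the patent specifies only the block structure:

```python
import random

def search_image(rois, target_roi, max_fixations=50):
    """Sketch of search model 100: each pass through the loop is one
    fixation; the central process tests the current ROI for the target
    while the peripheral process selects the next, unvisited ROI."""
    current = random.choice(rois)              # block 130: begin a fixation
    path = [current]
    for _ in range(max_fixations):
        if current == target_roi:              # blocks 150/160: identified?
            return path                        # process stops
        # block 140: peripheral process selects a new region of interest
        unvisited = [r for r in rois if r not in path]
        current = random.choice(unvisited or rois)
        path.append(current)                   # return to block 130
    return path                                # give up after max_fixations
```

Because visited ROIs are excluded, the sketch is guaranteed to reach the target within `len(rois)` fixations while still keeping identification and selection as conceptually separate processes.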
  • A perception-based model may be implemented, whereby the design principles of such a model may be derived from psychophysical observations of human performance during active visual search tasks via the use of, for example, a real-time infrared or other suitable eye-tracker.
  • Psychophysical experiments were used to obtain probabilistic measures of both stimulus and neuroanatomical features that constrain the human visual system's real-time election of image regions (ROIs) during the target discovery periods of active visual search.
  • These measures were transformed into fuzzy predicates (i.e., computable statements) that form a rule set for driving a model of human search performance (i.e., an expert system) that takes into account the intrinsic uncertainty of sensory processing.
  • Fig. 5 shows a more detailed view of the image perception-based search model 100.
  • Block 140 includes a neural network component, block 170, and a fuzzy expert system (FES) component, block 180.
  • The neural network component of the depicted embodiment calculates the saliency of the visual scene under scrutiny using a parallel computation composed of several feature map calculations at different spatial scales, and combines them into a single map that describes the significance of each location on the input scene. In the case of searching an image for one or more items of interest, this results in a set of ROIs.
  • Fig. 6 shows a set of ROIs.
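The multi-scale saliency computation attributed to the neural network component (block 170) can be sketched as follows. This is a simplified stand-in, not the patent's actual network: a single center-surround contrast feature is computed at several scales with a box filter and averaged; the helper name `box_mean` and the scale values are illustrative choices.

```python
import numpy as np

def box_mean(img, s):
    """Local mean over a (2s+1) x (2s+1) box, edge-padded, computed
    efficiently with an integral image (cumulative sums)."""
    k = 2 * s + 1
    padded = np.pad(img, s, mode='edge')
    # integral image with a leading row/column of zeros
    integral = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return (integral[k:, k:] - integral[k:, :-k]
            - integral[:-k, k:] + integral[:-k, :-k]) / (k * k)

def saliency_map(image, scales=(2, 4, 8)):
    """Feature maps (here, one center-surround contrast feature) are
    computed at several spatial scales and combined into a single map
    describing the significance of each location in the input scene."""
    img = np.asarray(image, dtype=float)
    combined = np.zeros_like(img)
    for s in scales:
        combined += np.abs(img - box_mean(img, s))  # center-surround contrast
    return combined / len(scales)
```

Thresholding the returned map, or taking its top local peaks, yields the set of candidate ROIs that is handed to the expert system component.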
  • The fuzzy expert system component, block 180, comprises a knowledge base and a logical inference engine that applies the facts to the rule sets and produces appropriate decisions, along with the "train of thought" used to arrive at a particular choice.
  • Types of rules may include, but need not be limited to, relation, recommendation, directive, strategy, heuristic, etc.
  • The present fuzzy expert system translates psychometric functions into reasonable fuzzy sets that attempt to capture the essence of the measurement. The resulting perception-based model therefore uses fuzzy logic to transform psychophysical observations into computable statements that can be used to intelligently guide the selection process in real-time.
  • The resulting search model thus may be used to create a perception-based model that mimics the response of one or more trained humans.
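One way to picture the translation of a psychometric function into fuzzy sets is with simple trapezoidal memberships. The set names and breakpoints below are hypothetical illustrations, not values from the patent:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c],
    linear ramps on (a, b) and (c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy sets over target eccentricity (degrees of visual angle).
def eccentricity_is_near(e):
    return trapezoid(e, -1.0, 0.0, 2.0, 6.0)

def eccentricity_is_far(e):
    return trapezoid(e, 4.0, 10.0, 40.0, 41.0)

# A fuzzy rule such as "IF the candidate ROI is near fixation THEN favor
# central identification" fires with a strength equal to the membership
# degree, preserving the intrinsic uncertainty of the measurement.
strength = eccentricity_is_near(1.0)
```

Unlike a hard threshold, an eccentricity of 4 degrees here is simultaneously somewhat "near" and somewhat "far", which is exactly the graded behavior the expert system needs to reason about uncertain sensory data.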
  • The top left portion of Fig. 8 shows a search trial image 200, similar to that shown in Fig. 1, wherein a human observer searches the image 200 for the target object 210.
  • An ideal search path 330 (i.e., for a "trained" human) and a random search path 430 are shown for respective images 300 and 400.
  • Although the perception-based model described above relates to visual stimuli, similar models could be generated for other perceptions, including, but not limited to, auditory perceptions.
  • The perception-based models may be used in a variety of ways. For example, in some embodiments, they may serve as the basis for a behavioral system in order to provide a method of behavioral training for untrained specimens. In other embodiments, they may serve as the basis for a method of rehabilitation in order to rehabilitate debilitated specimens.
  • Fig. 9 schematically illustrates one exemplary embodiment of the present invention, directed to a method of behavioral training 600.
  • The method comprises training an untrained specimen to display the psychophysical reactions of a trained specimen.
  • Such a method first comprises transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic (Block 610), the computable statements being associated with a perception process.
  • Although the perception process of various embodiments may comprise a variety of perceptions, in some embodiments the perception process may comprise the visual analysis of at least one image that may contain one or more items of interest. As noted above, such embodiments may be useful in the medical field, where images of biological tissue may contain cancerous or otherwise defective and/or abnormal cells.
  • A specimen may be any human or machine specimen; however, in some embodiments, the trained specimen may be a trained human and the untrained specimen may be an untrained human.
  • The psychophysical reactions of the trained human may comprise eye scan patterns, wherein the perception process may comprise visual analysis of at least one image in search of at least one item of interest. The computable statements are then incorporated into an expert system (Block 620).
  • The expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a behavioral system (Block 630).
  • The perception model may comprise computing saliency across at least one image to generate one or more regions of interest, wherein the expert system selects at least one of the regions of interest for identification of at least one item of interest therein.
  • A stimulus may be introduced to the behavioral system so as to elicit an exemplary response (Block 640).
  • An untrained specimen, also introduced to the stimulus, may then be induced to mimic the exemplary response of the behavioral system so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen (Block 650).
  • For example, an untrained human observer may be trained by preprocessing images that are known to have problematic regions of interest (such as, for example, images in which a trained human has located one or more items of interest) so that the untrained human observer is directed to analyze these problem areas, thus displaying the psychophysical reactions of the trained human.
  • Alternatively, the untrained specimen may comprise an untrained artificial intelligence system, whereby the artificial intelligence system is trained to mimic the response of an exemplary specimen.
  • The exemplary specimen may be a trained human or, in other embodiments, a trained artificial intelligence system.
  • In such embodiments, inducing the untrained artificial intelligence system may comprise programming the untrained artificial intelligence to yield the psychophysical reactions of the trained specimen.
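As a toy illustration of how the expert system and the saliency network might be combined into a single behavioral system (Blocks 620-630), the sketch below uses entirely hypothetical interfaces, ROI names, and a simple min-conjunction combination rule; the patent does not specify how the two components are fused:

```python
def behavioral_system(roi_saliencies, fuzzy_rules):
    """roi_saliencies: {roi_name: saliency score in [0, 1]} from the
    neural-network stage.  fuzzy_rules: list of functions mapping an
    ROI name to a firing strength in [0, 1], standing in for the
    expert system's computable statements.  Returns ROIs ranked by
    combined evidence (a simple fuzzy AND of the two sources)."""
    def score(roi):
        rule_strength = max((rule(roi) for rule in fuzzy_rules), default=0.0)
        return min(roi_saliencies[roi], rule_strength)  # fuzzy conjunction
    return sorted(roi_saliencies, key=score, reverse=True)

# Hypothetical rule: regions labeled as containing a mass are favored.
rules = [lambda roi: 1.0 if "mass" in roi else 0.2]
ranking = behavioral_system({"mass_upper_left": 0.9, "clear_field": 0.8}, rules)
# The top-ranked ROI is the "exemplary response": the region an untrained
# observer would be directed to examine first.
```

The min-based conjunction is one conventional fuzzy-logic choice; a product or weighted sum would serve equally well in this sketch.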
  • Fig. 10 schematically illustrates another exemplary embodiment of the present invention, directed to a method of rehabilitation 700.
  • The method comprises rehabilitating a debilitated specimen to mimic the exemplary response of a demonstration system.
  • Such a method first comprises transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic (Block 710), the computable statements being associated with a perception process.
  • The debilitated specimen may be any human or machine specimen; in some embodiments, the debilitated specimen may be an impaired human observer.
  • Likewise, the unimpaired specimen may be any human or machine specimen; in some embodiments, the unimpaired specimen may be an unimpaired human observer.
  • Although the perception process of various embodiments may comprise a variety of perceptions, in some embodiments the perception process may comprise the visual analysis of at least one vehicle driving scenario.
  • The computable statements are then incorporated into an expert system (Block 720).
  • The expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a demonstration system (Block 730).
  • The perception model may comprise computing saliency across at least one image to generate one or more regions of interest, wherein the expert system selects at least one of the regions of interest for identification of at least one item of interest.
  • In this manner, the rehabilitation system may be used to rehabilitate human drivers that are or have become impaired.
  • Referring to Fig. 11, a block diagram is shown of an exemplary electronic device 800 (e.g., mainframe, PC, laptop, PDA, etc.) that is configured to execute a method and computer program product for behavioral training or for rehabilitation.
  • The electronic device may include various modules for performing one or more functions in accordance with exemplary embodiments of the present invention, including those more particularly shown and described herein, wherein such modules may comprise hardware, software, or a combination thereof. It should be understood, however, that the electronic device may include alternative configurations for performing one or more like functions without departing from the spirit and scope of the present invention.
  • The electronic device may generally include components, such as a processor, controller, or the like 802, connected to a memory 804, for performing or controlling the various functions of the device.
  • The memory can comprise volatile and/or non-volatile memory, and typically stores content, data, or the like.
  • The memory typically stores content transmitted from, and/or received by, the electronic device.
  • The memory also typically stores software applications, instructions, or the like for the processor to perform steps associated with operation of the electronic device in accordance with embodiments of the present invention.
  • The memory 804 may store computer program code for an application and other computer programs.
  • For example, the memory may store computer program code for, among other things: transforming psychophysical reactions of an exemplary (or unimpaired) specimen into computable statements using fuzzy logic, the computable statements being associated with a perception process; incorporating the computable statements into an expert system; combining the expert system of the perception process with a neural network corresponding to a perception model, so as to form a behavioral (or demonstration) system; introducing a stimulus (or scenario) to the behavioral (or demonstration) system and eliciting an exemplary response therefrom; and inducing an untrained (or debilitated) specimen, also introduced to the stimulus (or scenario), to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen (or to rehabilitate the debilitated specimen).
  • The processor 802 can also be connected to at least one interface or other component(s) for displaying, transmitting, and/or receiving data, content, or the like.
  • The interface(s) can include at least one communication interface 806 or other component(s) for transmitting and/or receiving data, content, or the like.
  • The communication interface may provide for communications with an input device 804.
  • The communication interface may also include at least one user interface that can include a display 808 and/or a user input interface 810.
  • The user input interface, in turn, can comprise any of a number of devices allowing the electronic device to receive data from a user, such as a keypad, a touch display, a joystick, or other input device.
  • Embodiments of the present invention may be configured as a method. Accordingly, embodiments of the present invention may be comprised of various configurations, including entirely of hardware, entirely of software, or any combination of software and hardware. Furthermore, embodiments of the present invention may take the form of a computer program product consisting of a computer-readable storage medium (e.g., the memory 804 of Fig. 11) and computer-readable program instructions stored in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
  • Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of components for performing the specified functions, combinations of steps for performing the specified functions, and program instructions for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or by combinations of special purpose hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Optimization (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Computational Linguistics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

A method of behavioral training is provided. Psychophysical reactions of an exemplary specimen are transformed into computable statements, associated with a perception process, using fuzzy logic. The computable statements are incorporated into an expert system, and the expert system of the perception process is combined with a neural network corresponding to a perception model, so as to form a behavioral system. A stimulus is introduced to the behavioral system, eliciting an exemplary response. An untrained specimen, also exposed to the stimulus, is induced to mimic the exemplary response, so as to train the untrained specimen to display the reactions of the exemplary specimen. A method of rehabilitation, as well as associated methods, systems, and computer program products, are also provided.
PCT/US2009/047790 2008-06-20 2009-06-18 Training and rehabilitation system and associated method and computer program product WO2009155415A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US7437908P 2008-06-20 2008-06-20
US61/074,379 2008-06-20

Publications (2)

Publication Number Publication Date
WO2009155415A2 true WO2009155415A2 (fr) 2009-12-23
WO2009155415A3 WO2009155415A3 (fr) 2010-10-07

Family

ID=41434690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/047790 WO2009155415A2 (fr) 2009-06-18 Training and rehabilitation system and associated method and computer program product

Country Status (1)

Country Link
WO (1) WO2009155415A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040834A (zh) * 2018-02-22 2020-12-04 Innodem Neurosciences Eye tracking method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5724488A (en) * 1994-04-29 1998-03-03 International Business Machines Corporation Fuzzy logic entity behavior profiler
WO2000015104A1 (fr) * 1998-09-15 2000-03-23 Scientific Learning Corporation Treatment of depression by means of computerized interactive behavioral training
WO2006103241A2 (fr) * 2005-03-31 2006-10-05 France Telecom System and method for locating points of interest in an object image using a neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Foundations of Augmented Cognition", Lecture Notes in Computer Science, Springer Berlin Heidelberg, 22 July 2007, XP019062988, ISBN 9783540732150; the whole document *
Gary R. George and Frank Cardullo, "Application of Neuro-Fuzzy Systems to Behavioral Representation in Computer Generated Forces", Conference on Interservice/Industry Training Systems and Education, November 1999, pages 1-11, XP002569819; retrieved from the Internet: http://www.link.com/pdfs/neuro-fuzzy.pdf [retrieved on 2010-02-22] *

Also Published As

Publication number Publication date
WO2009155415A3 (fr) 2010-10-07

Similar Documents

Publication Publication Date Title
Wu et al. Eye-tracking metrics predict perceived workload in robotic surgical skills training
Arvaneh et al. A P300-based brain-computer interface for improving attention
Powers et al. Regulating emotion through distancing: A taxonomy, neurocognitive model, and supporting meta-analysis
Abernethy Searching for the minimal essential information for skilled perception and action
Maeda et al. Functional properties of parietal hand manipulation–related neurons and mirror neurons responding to vision of own hand action
Caligiore et al. How affordances associated with a distractor object affect compatibility effects: A study with the computational model TRoPICALS
Yang et al. Distinct processing for pictures of animals and objects: evidence from eye movements.
England Sensory-motor systems in virtual manipulation
Guo et al. Eye-tracking for performance evaluation and workload estimation in space telerobotic training
Marshall et al. Combining action observation and motor imagery improves eye–hand coordination during novel visuomotor task performance
US20230225609A1 (en) A system and method for providing visual tests
Skiba et al. Attentional capture for tool images is driven by the head end of the tool, not the handle
Hopkins et al. Eye movements are captured by a perceptually simple conditioned stimulus in the absence of explicit contingency knowledge.
McCormick et al. Eye gaze metrics reflect a shared motor representation for action observation and movement imagery
Humphreys et al. Neuropsychological evidence for visual-and motor-based affordance: Effects of reference frame and object–hand congruence.
Belardinelli et al. Anticipatory eye fixations reveal tool knowledge for tool interaction
Foulsham et al. Fixation and saliency during search of natural scenes: the case of visual agnosia
Perry et al. Multiple processes independently predict motor learning
Foulsham et al. Modeling eye movements in visual agnosia with a saliency map approach: Bottom–up guidance or top–down strategy?
Humphreys et al. From vision to action and action to vision: A convergent route approach to vision, action, and attention
WO2009155415A2 (fr) Training and rehabilitation system and associated method and computer program product
Hartkop et al. Foraging for handholds: attentional scanning varies by expertise in rock climbing
Chernykh et al. The development of an intelligent simulator system for psychophysiological diagnostics of trainees on the basis of virtual reality
Gajewski et al. The role of saccade targeting in the transsaccadic integration of object types and tokens.
Chattoraj et al. A confirmation bias due to approximate active inference

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09767724

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09767724

Country of ref document: EP

Kind code of ref document: A2