CA2301926A1 - Method and apparatus for 3d visual cone examination - Google Patents


Info

Publication number
CA2301926A1
Authority
CA
Canada
Prior art keywords
objects
visual
patient
patients
visual cone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA 2301926
Other languages
French (fr)
Inventor
Vladimir Besedic
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CA 2301926 priority Critical patent/CA2301926A1/en
Publication of CA2301926A1 publication Critical patent/CA2301926A1/en
Abandoned legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/024: Subjective types, i.e. testing apparatus requiring the active assistance of the patient for determining the visual field, e.g. perimeter types

Abstract

The tasks of examining the three-dimensional (3D in the rest of the text) visual cone are assembly, display, analysis and report generation, and may be performed using a computer (desktop or laptop), head-mounted display, monitor, printer, keyboard and mouse. Such apparatus is inexpensive, portable, and comfortable for the patient. The method utilizes an ordinal set of predefined and controlled 3D objects (with no repetition allowed), such as we see in real life, as opposed to current methodologies using specks of light.
Combining the main vision processes DETECTION → RECOGNITION → IDENTIFICATION
with 3D objects, the system and method are capable of uncovering the consequences of an accident, illness, stroke, or transient ischemic attack and their impact on the visual cone/field, even in unsuspecting patients. It is known that some patients with scotoma may partially recover vision functions. Some patients may successfully pass a Humphrey/Goldmann test and yet still have an impaired visual cone/field, i.e. they cannot see 3D objects or some of their properties and behaviour such as colour, texture or motion. In addition, an expert system is provided for optional use by medical professionals. Such a system is capable of advising medical professionals of the actual locus in the brain where damage occurred.

Description

SPECIFICATION
FIELD of INVENTION
My invention should be seen as belonging to the class of tools for visual cone/field perimetry, specifically the use of a controlled set of 3D objects, the use of a commercially available head-mounted display and PC or laptop computers, as well as a method of running examination sessions using specially developed software and an expert system.
BACKGROUND of INVENTION
We live in a three-dimensional visual world, surrounded by objects (live and otherwise) having distinctive shapes, colours and textures, some moving at different speeds and in different directions, some stationary, each with its own location, orientation and behaviour, illuminated to different degrees, and so on.
We are constantly retrieving all these light signals and transforming them into nerve impulses, which are then transported to our brains for processing and storage. Complete models of the vision process have not yet been developed, for a very simple reason: we do not know of all the vision processes, nor do we understand all of them.
However, if our brain is adversely affected by an accident, illness or stroke, and processing is not done correctly, we recognize this fact quickly: we simply cannot see properly. Assuming that our eyes are healthy, meaning that we have successfully passed visual acuity, accommodation and refraction tests, we will be sent to the visual field lab for further testing. Usually, these tests are performed using hollow bowls.
While the patient's head is restrained from movement, one eye is covered and the other is directed at a point at the centre of the inside of a bowl. Small spots of light are projected onto the inner surface of the bowl. The spots appear for a brief duration, usually 200 msec. The patient is directed to respond each time he/she sees a spot, usually by pressing a button. Based on such tests, a graphic chart is produced indicating the defective areas of the visual field.
However, this is not always the case. Sometimes the initial Humphrey/Goldmann tests show damage in the visual field, while subsequent tests may show no damage at all. And yet the patient still cannot see 3D objects and/or their properties. How large the segment of the population unaware of an impaired visual field is, is not known; at least, the author does not know of any statistics. The medical term for a partially impaired visual field is scotoma, from the Greek word for darkness, scotos (σκότος). There is no visible border between the sighted field and the blind one, because the blind part of the visual field is filled in with a representation of the background by still-unexplained processes of our neural machinery. That is why people may not be aware of their impairment, which can in some situations be a dangerous predicament; these people drive cars and trains, fly airplanes and so on. This is where the Humphrey/Goldmann methods should be improved.
ADVANTAGES
Therefore, here is a list of advantages of my invention:
1. The system offers low cost and high portability for visual cone/field perimetry;
2. The system provides improved patient comfort by allowing free head movement;
3. Detection of impaired vision with regard to 3D objects, stationary or otherwise, including but not limited to shape, colour, texture, orientation, variable-speed movement, and whole or partially disconnected objects;
4. The system generates 3D visual cone/field diagrams;
5. Indication and advisement to the medical professional of possible loci of lesions in the brain, as well as detection of Charles Bonnet syndrome;
6. The system may be shared and used from more than one geographical location;
7. The system is designed to provide flexibility in the form of tailored protocols and input sets of 3D objects for targeted segments of the population based on occupation.

BRIEF DESCRIPTION OF THE DRAWINGS
A detailed understanding of all drawings will become apparent when the detailed description of the drawings and protocols (especially Protocol 1) is presented on pages 13 - 16. At that time the embodiments of my invention will also become apparent.
Fig. 1 shows the hardware configuration of the system;
Fig. 2 depicts the software architecture of the major functions;
Fig. 3 shows the nominal box of the system's process model;
Fig. 4 shows the top-level process: Examine Patient's 3D Visual Cone/Field;
Fig. 5 shows the three processes on decomposition level 1. They are:
5.1 Prepare 3D Visual Cone/Field Measurement Framework;
5.2 Conduct 3D Visual Cone/Field Measurement Session;
5.3 Produce Report.
Each of these processes is further decomposed.
Fig. 6 shows the decomposition of process 5.1 Prepare 3D Visual Cone/Field Measurement Framework, i.e. 6.1 Perform Logon Procedure and 6.2 Establish Patient's Info.
Fig. 7 shows the decomposition of process 5.2 Conduct 3D Visual Cone/Field Measurement Session, i.e. 7.1 Display 3D Object, 7.2 Calculate Visual Cone/Field Coordinates, and 7.3 Record Verbal Description of the Object and its attributes.
Fig. 8 shows the decomposition of process 5.3 Produce Report, i.e. 8.1 Generate Verbal Q/A Report and 8.2 Generate Visual Cone/Field Outline Report.

DETAILED DESCRIPTION OF DRAWINGS
Fig. 1 The hardware configuration of the system consists of a PC + HMD + common auxiliary devices, as depicted in the figure. HMD stands for Head Mounted Display.
This device provides the view into the world of 3D objects and their attributes and behaviour. The PC has enough processing power, memory and storage, an adequate video card and a corresponding operating system, so that the coordinate system, imagery, motion and the required set of objects and their behaviours can create the scenery of a 3D world, via specially developed software residing in said system.
Fig. 2 shows the components of the architecture of the major functions and their relationships.
The more detailed description can be found when the flow of examination protocols is described, as well as when the process model is described.
Fig. 3 depicts the nominal box of the Integrated Computer Aided Manufacturing DEFinition process model. The U.S. Air Force, the initial sponsor of the methodology, named it IDEF0. It is a precise method for removing the ambiguity of natural language, and it describes more than just a process: it models a business.
All inputs into a process box come from the left side of the box and are coded I1 to Ix.
All controls enter the process box from above and represent the constraining flow of information governing execution of the process. They are coded C1 to Cx.
All outputs exit the process box from the right side. They are coded O1 to Ox.
All mechanisms enter the process box from the bottom, representing WHO or WHAT organization will perform the process. They are coded M1 to Mx.
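Purely for illustration (this is not part of the patent text), the IDEF0 activity box described above can be captured in a small data structure. The Python sketch below is my own assumption; the class and field names are not defined by the patent, and the sample values merely paraphrase process 7.2 of Fig. 7.

```python
from dataclasses import dataclass, field

@dataclass
class IDEF0Box:
    """One IDEF0 activity box: inputs (left), controls (top),
    outputs (right) and mechanisms (bottom)."""
    name: str
    inputs: dict = field(default_factory=dict)      # I1 .. Ix
    controls: dict = field(default_factory=dict)    # C1 .. Cx
    outputs: dict = field(default_factory=dict)     # O1 .. Ox
    mechanisms: dict = field(default_factory=dict)  # M1 .. Mx

# Hypothetical rendering of process 7.2 from Fig. 7.
calc_coords = IDEF0Box(
    name="7.2 Calculate Visual Cone/Field Coordinates",
    inputs={"I1": "Patient response (mouse click)"},
    controls={"C7": "Visual cone/field DB"},
    outputs={"O3": "Update vision cone/field data"},
    mechanisms={"M1": "Spatial Session Manager"},
)
print(calc_coords)
```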
Fig. 4 shows the top-level process defining the functions, processes and boundary of the system. Two-headed arrows define a feedback loop, basically a question-and-answer or request-and-response type of loop.
Fig. 5 - Fig. 8 (incl.) depict the rest of the processes with their decomposition.
We will refer to these next when describing the protocols.

PROTOCOLS
The current IPS (International Perimetric Society) perimetric standards are not applicable to protocols using 3D objects as stimuli. Therefore, the description of procedures for running 3D visual cone/field perimetry could be perceived as a base for an additional set of standards to be merged with, or added to, the current standards.
• Protocol 1. Request for patient's visual cone/field measurement. This is the most frequent request. We will describe it in more detail below.
• Protocol 2. Work in progress. Sometimes, for various reasons, it is impossible to completely finish a patient's examination in one session. So, instead of starting all over again from the beginning, the system allows the session to continue from the point where it stopped.
• Protocol 3. Medical personnel education/training session. When requested, the system can go into a tutorial mode of operation.
• Protocol 4. Repairman's request. When it is necessary to perform repairs of hardware and/or software, the system will go into backup mode to accommodate the service.
PROTOCOL 1. Request for patient's visual cone/field measurement
Let us assume that a medical professional wants to perform an actual measurement session. The first part of the system to be initiated is component 1, the Master Session Manager, depicted in Fig. 2. Its purpose is to prepare the vision cone/field measurement framework. As shown in Fig. 6, there are two processes to be executed: 6.1 Perform Logon Procedure and 6.2 Establish Patient's Info.
The user of the system has to start the PC and provide her own reference data, password etc. Then the patient's data will be collected and verified against the hospital policy reference data by the Establish Patient's Info process. Data will be entered via a touch-sensitive screen, a keyboard, or orally using speech recognition software.

The next activation is component 3, the Spatial Session Manager, shown in Fig. 2. Its purpose is to conduct the Vision Cone/Field Measurement Session.
Before we proceed with the description of the processes, let us switch attention to the data used by the system.
Component 2, the 3D Object DB shown in Fig. 2, represents a database containing a class of 3D objects designed to cover the wide spectrum of objects people see in the world. The design is based on the variety of attributes belonging to 3D objects, making sure that neurons of all visual areas of our brain will participate in the execution of the main vision processes DETECTION → RECOGNITION → IDENTIFICATION.
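As a purely illustrative sketch (not part of the patent), one record in such a 3D Object DB might carry the attributes mentioned above: shape, colour, texture, position, orientation and motion. The field names and sample values below are assumptions of mine, not the patent's data model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Object3D:
    """Hypothetical record in the 3D Object DB; field names are assumptions."""
    object_id: int
    name: str                                   # e.g. "striped cube"
    shape: str
    colour: str
    texture: str
    position: Tuple[float, float, float]        # X, Y, Z in the HMD scene
    orientation_deg: Tuple[float, float, float]
    speed: float = 0.0                          # 0.0 means stationary
    direction: Optional[Tuple[float, float, float]] = None
    partially_disconnected: bool = False

# One ordinal, non-repeating set of stimuli for a session (illustrative values).
session_objects = [
    Object3D(1, "blue sphere", "sphere", "blue", "smooth",
             (0.5, 0.2, 2.0), (0, 0, 0)),
    Object3D(2, "striped cube", "cube", "red", "striped",
             (-0.4, -0.1, 1.5), (0, 45, 0),
             speed=0.3, direction=(1.0, 0.0, 0.0)),
]
```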
The other DBs shown in Fig. 2 are component 10, Patient's Reference DB; component 9, Medical Professional Reference DB; component 7, Verbal Q/A DB; and component 4, Spatial DB.
The details of the processes to be executed are shown in Fig. 7. The 7.1 Display 3D Object process takes an object from the C4 3D Object data store and displays it on the head-mounted display. When the patient sees the object, he/she responds by pressing the mouse button, and the response is sent to the next process, which calculates coordinates. The patient may also be asked to describe the displayed object and/or some of its attributes.
The 7.2 Calculate Visual Field Coordinates process takes the X, Y and Z values, calculates the location, and sends output O3, Update Vision Cone/Field Data, to be taken as input to C7, the Visual Cone/Field DB, for this patient.
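The patent does not specify the formulas used by process 7.2, so the following Python sketch is only an assumption: it converts an object's X, Y, Z position relative to the fixation direction into angular visual cone/field coordinates and appends the result to an in-memory stand-in for the C7 Visual Cone/Field DB. The function names and the azimuth/elevation/eccentricity convention are mine.

```python
import math

def to_visual_field_coords(x: float, y: float, z: float):
    """Convert an object's position (relative to the eye, with +Z along the
    fixation direction) into azimuth, elevation and eccentricity in degrees."""
    azimuth = math.degrees(math.atan2(x, z))
    elevation = math.degrees(math.atan2(y, z))
    eccentricity = math.degrees(math.atan2(math.hypot(x, y), z))
    return azimuth, elevation, eccentricity

def update_cone_field_db(db: dict, patient_id: str, seen: bool,
                         x: float, y: float, z: float) -> None:
    """Append one response record (output O3) to a stand-in for the C7 DB."""
    az, el, ecc = to_visual_field_coords(x, y, z)
    db.setdefault(patient_id, []).append(
        {"azimuth": az, "elevation": el, "eccentricity": ecc, "seen": seen})

# Example: an object 0.3 m to the right, 0.1 m up, 2 m ahead was seen.
cone_field_db = {}
update_cone_field_db(cone_field_db, "patient-001", True, 0.3, 0.1, 2.0)
print(cone_field_db["patient-001"][0])
```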
The Verbal Q/A Session Manager, component 6 in Fig. 2, is activated at the same time as component 3.
The 7.3 Record Verbal Description process takes the verbal Q/A data, formats them and sends output O5, Update Verbal Q/A Data, to be taken as input to C8, the Verbal Q/A DB, for this patient.
The next activation is component 5, the Spatial Report Generator, and component 8, the Q/A Report Generator, shown in Fig. 2. Fig. 8 shows the details of process 8.1 Prepare Verbal Q/A Report, based on C1, System User's Info Reference Data; C2, Patient's Info Reference Data; and C8, Verbal Q/A Data. Process 8.2 Prepare Visual Cone/Field 3D Report is based on C1, System User's Info Reference Data; C2, Patient's Info Reference Data; and C7, Visual Cone/Field Data.
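For illustration only, a minimal sketch of how process 8.2 might combine its control inputs (C1, C2 and C7) into a report structure; the record layout and field names are assumptions, not the patent's data model.

```python
def prepare_visual_cone_field_report(user_info: dict, patient_info: dict,
                                     cone_field_records: list) -> dict:
    """Sketch of process 8.2: combine C1, C2 and C7 data into one report."""
    missed = [r for r in cone_field_records if not r["seen"]]
    return {
        "examiner": user_info.get("name"),
        "patient": patient_info.get("name"),
        "stimuli_presented": len(cone_field_records),
        "stimuli_missed": len(missed),
        "missed_locations": [(r["azimuth"], r["elevation"]) for r in missed],
    }

# Example usage with the hypothetical record layout from the sketch above.
report = prepare_visual_cone_field_report(
    {"name": "Dr. A. Examiner"}, {"name": "Patient 001"},
    [{"azimuth": 8.5, "elevation": 2.9, "seen": True},
     {"azimuth": -20.0, "elevation": 5.0, "seen": False}])
print(report)
```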

The last component depicted in Fig. 2 to be described is the Expert System.
It contains 11, Knowledge DB; 12, Inference Engine; 13, User Interface; and 14, Diagnosis & Recommendation Report.
The content of the Knowledge DB is empirical data connecting lesions with impaired vision and the locations of the damage to the brain. Such data are generated by research studies on monkeys, and by research studies on humans who have been involved in accidents or have been victims of strokes, illnesses etc., utilizing SPECT, PET, fMRI etc.
Knowledge is usually organized and structured to reflect the way the expert thinks when solving a problem.
The Inference Engine is the processing component of an Expert System; it utilizes the knowledge stored in the Knowledge base in order to provide reasoning and offer solutions on which the Expert System acts.
The User Interface includes a set of dialog screens, messages, warnings and help hints.
An Expert System uses a set of rules to derive the inferences (conclusions). Let us add that a rule's conditions are independent of one another, and that the order of the rules is irrelevant. Of course, an Expert System can learn, meaning that knowledge can be added incrementally in order to cover new situations and hence become more useful.
Finally, the Recommendation Report is generated, indicating possible loci of lesions in the brain. For example, if the patient has difficulties with pattern discrimination, then the probable cause of the difficulties lies in the areas of the brain running from the occipital lobe to the inferior temporal lobe.
Another example: if a patient experiences difficulties in determining spatial properties such as location, size or direction of motion, or has difficulties when trying to discriminate the shape of an object that has been rotated 180 degrees, then the probable cause is in the areas of the brain running from the occipital lobe up to the parietal lobe.
The causes of prosopagnosia (a clinical syndrome whose most striking characteristic is the inability to recognize faces), as well as of achromatopsia, lie in region V4 of the visual cortex, etc.
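As a purely illustrative sketch of the kind of rule set described above (the rules paraphrase the examples given in this description; the engine itself is my assumption, not the patent's implementation), each rule maps an independent condition on the observed deficits to a suggested locus, rule order is irrelevant, and new rules can be appended incrementally:

```python
# Each rule is (condition on the set of observed deficits, suggested locus).
RULES = [
    (lambda d: "pattern_discrimination" in d,
     "Occipital lobe to inferior temporal lobe (ventral pathway)"),
    (lambda d: {"location", "size", "motion_direction", "rotated_shape"} & d,
     "Occipital lobe up to parietal lobe (dorsal pathway)"),
    (lambda d: {"face_recognition", "colour_perception"} & d,
     "Region V4 of the visual cortex"),
]

def infer_loci(deficits: set) -> list:
    """Return every locus whose rule condition matches the observed deficits."""
    return [locus for condition, locus in RULES if condition(deficits)]

# Example: a patient who misjudges object locations and motion direction.
print(infer_loci({"motion_direction", "location"}))
```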

Claims (5)

1. A 3D Visual Cone/Field perimeter comprising:
(a) Head Mounted Display;
(b) means of displaying 3D objects and their attributes;
(c) means for recording coordinates of the locations of said 3D objects;
(d) means for assembling claim 1.
2. The method of identification of specific types of dysfunction related to human vision, which is not covered by prior art.
3. The tailored protocols and designed input sets of 3D objects for targeted segment of population based on its occupation.
4. The method of identification of the area of the brain which is responsible for visual inadequacy, based on patient's response.
5. The method or algorithm by which response data are obtained, processed and analyzed.
CA 2301926 2000-03-10 2000-03-10 Method and apparatus for 3d visual cone examination Abandoned CA2301926A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA 2301926 CA2301926A1 (en) 2000-03-10 2000-03-10 Method and apparatus for 3d visual cone examination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA 2301926 CA2301926A1 (en) 2000-03-10 2000-03-10 Method and apparatus for 3d visual cone examination

Publications (1)

Publication Number Publication Date
CA2301926A1 true CA2301926A1 (en) 2000-07-05

Family

ID=4165605

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2301926 Abandoned CA2301926A1 (en) 2000-03-10 2000-03-10 Method and apparatus for 3d visual cone examination

Country Status (1)

Country Link
CA (1) CA2301926A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007070954A2 (en) * 2005-12-20 2007-06-28 Neuro Vision Technology Pty Ltd Method for assessment and rehabilitation after acquired brain injury
WO2007070954A3 (en) * 2005-12-20 2007-08-16 Neuro Vision Technology Pty Lt Method for assessment and rehabilitation after acquired brain injury
US8857984B2 (en) 2005-12-20 2014-10-14 Raymond John Liddle Apparatus and method for assessment and rehabilitation after acquired brain injury
CN102813500A (en) * 2012-08-07 2012-12-12 北京嘉铖视欣数字医疗技术有限公司 Perception correcting and training system on basis of binocular integration
CN109727670A (en) * 2018-11-13 2019-05-07 合肥数翼信息科技有限公司 A kind of intelligence stroke rehabilitation monitoring method and system
CN109727669A (en) * 2018-11-13 2019-05-07 合肥数翼信息科技有限公司 A kind of paralytic's monitor system and method
CN113119112A (en) * 2021-03-18 2021-07-16 上海交通大学 Motion planning method and system suitable for vision measurement of six-degree-of-freedom robot
CN113119112B (en) * 2021-03-18 2022-08-09 上海交通大学 Motion planning method and system suitable for vision measurement of six-degree-of-freedom robot

Similar Documents

Publication Publication Date Title
O'Regan Solving the "real" mysteries of visual perception: the world as an outside memory.
Stanney et al. Human factors issues in virtual environments: A review of the literature
Heller et al. Psychology of touch and blindness
Ekman The argument and evidence about universals in facial expressions
Wade et al. A gaze-contingent adaptive virtual reality driving environment for intervention in individuals with autism spectrum disorders
US20170150907A1 (en) Method and system for quantitative assessment of visual motor response
Sæther et al. Anchoring gaze when categorizing faces’ sex: evidence from eye-tracking data
JP2021516099A (en) Cognitive screens, monitors, and cognitive therapies targeting immune-mediated and neurodegenerative disorders
CN109152559A (en) For the method and system of visual movement neural response to be quantitatively evaluated
North et al. The sense of presence exploration in virtual reality therapy
Meusel Exploring mental effort and nausea via electrodermal activity within scenario-based tasks
CA2301926A1 (en) Method and apparatus for 3d visual cone examination
JP7276823B2 (en) Visual cognitive function evaluation system
Fantoni et al. Bodily action penetrates affective perception
TWI801813B (en) Cognitive dysfunction diagnostic device and cognitive dysfunction diagnostic program
Cárdenas-Delgado et al. VR-test ViKi: VR test with visual and kinesthetic stimulation for assessment color vision deficiencies in adults
JP2815308B2 (en) Heart picture display device and heart picture display method
BEng Machine Learning and Electroencephalography for Enhanced Learning in Human-Computer Interaction
Shynu et al. Environmental Parameters Influencing Perception in the Case of Multimedia Communication
Bartolomeo et al. Visual awareness relies on exogenous orienting of attention: Evidence from unilateral neglect
Narayana EYE MOVEMENTS BEHAVIORS IN A DRIVING SIMULATOR DURING SIMPLE AND COMPLEX DISTRACTIONS
Bengler et al. 11.1 Necessity of Experiments–608
Symmons Active and passive haptic exploration of two-and three-dimensional stimuli
Hosp Latent gaze information in highly dynamic decision-tasks
Islam et al. BinoVFAR: An Efficient Binocular Visual Field Assessment Method using Augmented Reality Glasses

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead