EP2979635A1 - Diagnosis supporting device, diagnosis supporting method and computer-readable recording medium


Info

Publication number
EP2979635A1
Authority
EP
European Patent Office
Prior art keywords
display
subject
visual
image
visual point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP15177666.3A
Other languages
English (en)
French (fr)
Other versions
EP2979635B1 (de)
Inventor
Katsuyuki Shudo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JVCKenwood Corp
Original Assignee
JVCKenwood Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2014156872A external-priority patent/JP6269377B2/ja
Priority claimed from JP2014242033A external-priority patent/JP6330638B2/ja
Application filed by JVCKenwood Corp filed Critical JVCKenwood Corp
Publication of EP2979635A1 publication Critical patent/EP2979635A1/de
Application granted granted Critical
Publication of EP2979635B1 publication Critical patent/EP2979635B1/de
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/107 ... for determining the shape or measuring the curvature of the cornea
    • A61B 3/113 ... for determining or recording eye movement
    • A61B 3/14 Arrangements specially adapted for eye photography
    • A61B 3/145 ... by video means
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/162 Testing reaction times
    • A61B 5/163 ... by tracking eye movement, gaze, or pupil change
    • A61B 5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B 5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B 5/4088 Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • A61B 5/74 Details of notification to user or communication with user or patient; User input means
    • A61B 5/742 ... using visual displays
    • A61B 2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B 2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B 2562/0223 Magnetic field sensors
    • A61B 2576/00 Medical imaging apparatus involving image processing or analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements

Definitions

  • The present invention relates to a diagnosis supporting device, a diagnosis supporting method, and a computer-readable recording medium.
  • The conventional methods cannot reveal what point developmentally disabled people gaze at to obtain information for understanding a causal relation, whether they cannot understand the causal relation despite gazing at the cause, or the like. For this reason, the conventional methods may fail to support diagnosis appropriately, and a diagnosis supporting method with higher precision has been demanded.
  • An aspect of the present invention is a diagnosis supporting device that includes a display; an imaging unit that images a subject; a visual line detector that detects a visual line direction of the subject from an image imaged by the imaging unit; a visual point detector that detects a visual point of the subject in a display area of the display based on the visual line direction; an output controller that displays, on the display, a diagnostic image representing a cause of a certain event and the event; and an evaluator that calculates an evaluation value of the subject based on the visual point detected by the visual point detector when the diagnostic image is displayed.
  • The following describes embodiments of a diagnosis supporting device, a diagnosis supporting method, and a computer-readable recording medium for supporting diagnosis according to the present invention in detail with reference to the drawings.
  • The present invention is not limited by the embodiments.
  • Although the following describes a diagnosis supporting device that supports diagnosis of a developmental disorder or the like using a visual line detection result and that can also be used for training, applicable devices are not limited to this.
  • the diagnosis supporting device of the present embodiment displays an image (video) indicating before and after an event, measures a dwell time at a position of a gazing point, and performs evaluation computation.
  • a continuous moving image containing a scene representing a cause and a scene indicating before and after the event is displayed as an explanation image indicating a causal relation between the cause and the event.
  • the diagnosis supporting device of the present embodiment detects a visual line using an illuminator placed at one position.
  • the diagnosis supporting device is not limited to the above embodiment.
  • The diagnosis supporting device of the present embodiment calculates a corneal curvature center position with high accuracy using a result measured by causing the subject to gaze at one point before visual line detection.
  • the illuminator includes a light source and is a component that can apply light to an eyeball of the subject.
  • the light source is, for example, an element that emits light such as a light emitting diode (LED).
  • The light source may include one LED or a plurality of LEDs combined and arranged at one position. In the following, the term "light source" may be used to refer to the illuminator in this sense.
  • FIGS. 1 and 2 are diagrams illustrating an example of an arrangement of a display, a stereo camera, an infrared light source, and a subject of the present embodiment.
  • this diagnosis supporting device 100 of the present embodiment includes a display 101, a stereo camera 102 corresponding to an imaging unit, and an LED light source 103.
  • the stereo camera 102 is arranged below the display 101.
  • the LED light source 103 is arranged at the central position between two cameras included in the stereo camera 102.
  • the LED light source 103 is a light source that emits near-infrared rays with a wavelength of, for example, 850 nm.
  • FIG. 1 illustrates an example of constituting the LED light source 103 (the illuminator) by nine LEDs.
  • the stereo camera 102 uses lenses that can transmit near-infrared rays with a wavelength of 850 nm therethrough.
  • the stereo camera 102 includes a right camera 202 and a left camera 203.
  • the LED light source 103 applies near-infrared rays toward an eyeball 111 of the subject.
  • Under this illumination, the pupil 112 reflects with low brightness and appears dark, whereas the corneal reflex 113, occurring as a virtual image within the eyeball 111, reflects with high brightness and appears bright. The positions of the pupil 112 and the corneal reflex 113 on the image can therefore be acquired separately by the two cameras (the right camera 202 and the left camera 203).
  • In the world coordinate system of FIG. 2, the up-and-down direction is the Y coordinate (the upper side is +), the lateral direction is the X coordinate (the right side viewed from the front is +), and the depth direction is the Z coordinate (the near side is +).
  • FIG. 3 is a diagram illustrating an outline of functions of the diagnosis supporting device 100.
  • FIG. 3 illustrates part of the configuration illustrated in FIG. 1 and FIG. 2 and a configuration used for the drive or the like of the configuration.
  • the diagnosis supporting device 100 includes the right camera 202, the left camera 203, the LED light source 103, a speaker 205, a drive-and-interface (I/F) 313, a controller 300, a storage 150, and the display 101.
  • Although FIG. 3 illustrates the display screen 201 in such a manner that its positional relation with the right camera 202 and the left camera 203 is easy to understand, the display screen 201 is the screen displayed on the display 101.
  • The driver and the I/F may be integral or separate.
  • the speaker 205 functions as a voice output unit that outputs a voice for calling subject's attention or the like during calibration or the like.
  • the drive/IF 313 drives units included in the stereo camera 102.
  • the drive/IF 313 serves as an interface between the units included in the stereo camera 102 and the controller 300.
  • the controller 300 is implemented by a computer or the like including, for example, a controller such as a central processing unit (CPU), storage devices such as a read only memory (ROM) and a random access memory (RAM), a communication I/F that connects to a network to perform communication, and a bus that connects the units to each other.
  • the storage 150 stores therein various types of information such as control programs, measurement results, and diagnosis support results.
  • The storage 150, for example, stores therein images or the like to be displayed on the display 101.
  • the display 101 displays various types of information such as images to be diagnosed.
  • FIG. 4 is a block diagram illustrating an example of detailed functions of the units illustrated in FIG. 3 .
  • the display 101 and the drive/IF 313 are connected to the controller 300.
  • the drive/IF 313 includes camera IFs 314 and 315, an LED drive controller 316, and a speaker driver 322.
  • the right camera 202 and the left camera 203 are connected to the drive/IF 313 via the camera IFs 314 and 315, respectively.
  • the drive/IF 313 drives the cameras, thereby imaging the subject.
  • the speaker driver 322 drives the speaker 205.
  • the diagnosis supporting device 100 may include an interface (a printer IF) for connecting to a printer as a printing unit.
  • the printer may be incorporated into the diagnosis supporting device 100.
  • the controller 300 controls the entire diagnosis supporting device 100.
  • the controller 300 includes a first calculator 351, a second calculator 352, a third calculator 353, a visual line detector 354, a visual point detector 355, an output controller 356, and an evaluator 357.
  • a visual line detection supporting device that detects a visual line only needs to include at least the first calculator 351, the second calculator 352, the third calculator 353, and the visual line detector 354.
  • Each of the components (the first calculator 351, the second calculator 352, the third calculator 353, the visual line detector 354, the visual point detector 355, the output controller 356, and the evaluator 357) included in the controller 300 may be implemented by software (a computer program), may be implemented by a hardware circuit, or may be implemented by using both the software and the hardware circuit.
  • When each of the components is implemented by a program, the program is recorded in a computer-readable recording medium such as a compact disc read only memory (CD-ROM), a flexible disk (FD), a compact disc recordable (CD-R), or a digital versatile disc (DVD) as an installable or executable file and is provided as a computer program product.
  • the program may be stored in a computer connected to a network such as the Internet and provided by being downloaded via the network.
  • the program may be provided or distributed via a network such as the Internet.
  • the program may be embedded and provided in a ROM, for example.
  • the first calculator 351 calculates a position (a first position) of a pupil center indicating the center of a pupil from an image of an eyeball imaged by the stereo camera 102.
  • the second calculator 352 calculates a position (a second position) of a corneal reflex center indicating the center of a corneal reflex from the taken image of the eyeball.
  • the first calculator 351 and the second calculator 352 correspond to a position detector that detects the first position indicating the center of the pupil and the second position indicating the center of the corneal reflex.
  • the third calculator 353 calculates a corneal curvature center (a fourth position) from a line (a first line) connecting between the LED light source 103 and the corneal reflex center.
  • The third calculator 353, for example, calculates, as the corneal curvature center, a position that is on the first line and whose distance from the corneal reflex center is a certain value.
  • the certain value can be a value determined in advance from a general corneal curvature radius value or the like.
  • the corneal curvature radius value can have individual differences, and when the corneal curvature center is calculated using the value determined in advance, a large error may possibly occur.
  • The third calculator 353 may instead calculate the corneal curvature center in consideration of the individual differences. In this case, using the pupil center and the corneal reflex center calculated when the subject is made to gaze at a target position (a third position), the third calculator 353 first calculates the point of intersection between a line (a second line) connecting the pupil center and the target position and the line (the first line) connecting the corneal reflex center and the LED light source 103. The third calculator 353 then calculates the distance (a first distance) between the pupil center and the calculated point of intersection and stores the distance, for example, in the storage 150.
  • the target position may be a position that is determined in advance and the three-dimensional world coordinate values of which can be calculated.
  • the central position (the point of origin of the three-dimensional world coordinates) of the display screen 201 can be the target position.
  • the target image may be any image so long as it is an image at which the subject can be made to gaze.
  • Examples of the target image include an image whose display manner, such as brightness or color, changes, and an image whose display manner differs from that of the other areas.
  • the target position is not limited to the center of the display screen 201 and may be any position.
  • Setting the center of the display screen 201 as the target position minimizes the distance to any end of the display screen 201. This arrangement can reduce measurement error at the time of visual line detection, for example.
  • the processing up to the calculation of the distance is performed in advance, for example, before starting actual visual line detection.
  • In actual visual line detection, the third calculator 353 calculates, as the corneal curvature center, the position that is on the line connecting the LED light source 103 and the corneal reflex center and whose distance from the pupil center equals the distance calculated in advance.
  • The third calculator 353 thus corresponds to a calculator that calculates the corneal curvature center (the fourth position) from the position of the LED light source 103, a certain position (the third position) indicating the target image on the display 101, the position of the pupil center, and the position of the corneal reflex center.
  • the visual line detector 354 detects a visual line of the subject from the pupil center and the corneal curvature center.
  • The visual line detector 354, for example, detects the direction from the corneal curvature center toward the pupil center as the visual line direction of the subject.
  • the visual point detector 355 detects a visual point of the subject using the detected visual line direction.
  • the visual point detector 355 detects a point of intersection between a visual line vector represented in a three-dimensional world coordinate system as in, for example, FIG. 2 and an XY-plane as the gazing point of the subject.
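As an illustration of this intersection computation, the following is a minimal Python sketch. It assumes that the display screen lies in the plane z = 0 of the world coordinate system of FIG. 2 and that the visual line runs from the corneal curvature center through the pupil center; the function name and the tolerance value are illustrative only.

```python
import numpy as np

def gaze_on_display(cornea_center, pupil_center, screen_z=0.0):
    """Intersect the visual line with the display plane (assumed to be z = screen_z).

    The visual line direction is taken from the corneal curvature center toward
    the pupil center; both points are 3D world coordinates.  Returns the (x, y)
    world coordinates of the gazing point, or None if the line is parallel to
    the plane.
    """
    c = np.asarray(cornea_center, dtype=float)
    p = np.asarray(pupil_center, dtype=float)
    d = p - c                      # visual line direction
    if abs(d[2]) < 1e-9:           # parallel to the display plane
        return None
    t = (screen_z - c[2]) / d[2]
    hit = c + t * d
    return hit[0], hit[1]
```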
  • the output controller 356 controls output of various types of information for the display 101 and the speaker 205.
  • the output controller 356 controls output of a diagnostic image, evaluation results by the evaluator 357, or the like to the display 101.
  • the diagnostic image may be an image appropriate for evaluation processing based on a visual line (visual point) detection result.
  • For diagnosis of a developmental disorder, for example, a diagnostic image containing an image that subjects with a developmental disorder tend to prefer (a geometrical pattern video or the like) and other images (portrait videos or the like) may be used.
  • the evaluator 357 performs evaluation processing based on the diagnostic image and the gazing point detected by the visual point detector 355.
  • The evaluator 357, for example, analyzes the diagnostic image and the gazing point and evaluates whether the subject has gazed at the image that subjects with a developmental disorder tend to prefer.
  • the evaluator 357 calculates an evaluation value based on the position of the gazing point by the subject when such diagnostic images as illustrated in FIG. 13 and FIG. 16 described below are displayed.
  • a specific example of a method for calculating the evaluation value will be described below.
  • the evaluator 357 only needs to calculate the evaluation value based on the diagnostic image and the gazing point, and the method for calculating the evaluation value is not limited to the present embodiment.
  • FIG. 5 is a diagram illustrating an outline of processing performed by the diagnosis supporting device 100 of the present embodiment.
  • the components described in FIG. 1 to FIG. 4 are attached with the same signs, and descriptions thereof are omitted.
  • a pupil center 407 and a corneal reflex center 408 represent the center of a pupil and the center of a corneal reflex point, respectively, detected when the LED light source 103 is turned on.
  • a corneal curvature radius 409 represents the distance from a corneal surface to a corneal curvature center 410.
  • FIG. 6 is an illustrative diagram illustrating a difference between a method (hereinafter, referred to as a method A) using two light sources (the illuminator) and the present embodiment using one light source (the illuminator).
  • the components described in FIG. 1 to FIG. 4 are attached with the same signs, and descriptions thereof are omitted.
  • the method A uses two LED light sources 511 and 512 in place of the LED light source 103.
  • the method A calculates a point of intersection between a line 515 connecting between a corneal reflex center 513 illuminated by the LED light source 511 and the LED light source 511 and a line 516 connecting between a corneal reflex center 514 illuminated by the LED light source 512 and the LED light source 512.
  • This point of intersection is a corneal curvature center 505.
  • In the present embodiment, a line 523 connects the corneal reflex center 522 illuminated by the LED light source 103 and the LED light source 103. The line 523 passes through the corneal curvature center 505. It is known that the curvature radius of the cornea has a nearly constant value with little influence of individual differences. From this fact, the corneal curvature center illuminated by the LED light source 103 is present on the line 523 and can be calculated using a general curvature radius value.
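The sketch below illustrates this one-light-source estimation. The radius value of 7.8 mm and the assumption that the curvature center lies on the far side of the corneal reflex from the light source are illustrative; the embodiment only requires that a general curvature radius value be used.

```python
import numpy as np

def cornea_center_from_radius(led_pos, reflex_center, radius=7.8):
    """Estimate the corneal curvature center on the line connecting the LED
    light source and the corneal reflex center (line 523 in FIG. 6).

    The center is placed at `radius` (a general corneal curvature value; 7.8 mm
    is an assumed example) from the corneal reflex center, on the side away
    from the light source.  All inputs are 3D world coordinates.
    """
    led = np.asarray(led_pos, dtype=float)
    reflex = np.asarray(reflex_center, dtype=float)
    direction = reflex - led
    direction /= np.linalg.norm(direction)
    return reflex + radius * direction
```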
  • FIG. 7 is a diagram for illustrating calculation processing for calculating the corneal curvature center position and the distance between a pupil center position and a corneal curvature center position before performing visual point (visual line) detection.
  • the components described in FIG. 1 to FIG. 4 are attached with the same signs, and descriptions thereof are omitted.
  • a target position 605 is a position at which the target image or the like is displayed at one point on the display 101 and at which the subject is made to gaze.
  • the target position 605 is the central position of the screen of the display 101.
  • a line 613 is a line connecting between the LED light source 103 and a corneal reflex center 612.
  • a line 614 is a line connecting between the target position 605 (the gazing point) at which the subject gazes and a pupil center 611.
  • a corneal curvature center 615 is a point of intersection between the line 613 and the line 614.
  • The third calculator 353 calculates the distance 616 between the pupil center 611 and the corneal curvature center 615 and stores it.
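Because measured lines in three-dimensional space rarely intersect exactly, an implementation of this calibration step would typically take the midpoint of the shortest segment between the two lines as the intersection. The sketch below follows that approach; the function names and the parallel-line fallback are assumptions, not part of the embodiment.

```python
import numpy as np

def closest_point_of_two_lines(p1, d1, p2, d2):
    """Approximate intersection of two 3D lines given as point + direction.

    In measured data the lines 613 and 614 rarely intersect exactly, so the
    midpoint of the shortest segment between them is used as the intersection.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # lines are (nearly) parallel
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return (p1 + s * d1 + p2 + t * d2) / 2.0

def calibrate_distance(led_pos, reflex_center, target_pos, pupil_center):
    """Calibration step of FIG. 7: intersect line 613 (LED to corneal reflex
    center) with line 614 (target position to pupil center) and return the
    distance 616 between the pupil center and the corneal curvature center."""
    led, reflex = np.asarray(led_pos, float), np.asarray(reflex_center, float)
    target, pupil = np.asarray(target_pos, float), np.asarray(pupil_center, float)
    center = closest_point_of_two_lines(led, reflex - led, target, pupil - target)
    return np.linalg.norm(center - pupil), center
```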
  • FIG. 8 is a flowchart illustrating an example of calculation processing of the present embodiment.
  • the output controller 356 reproduces the target image at one point on the screen of the display 101 (Step S101) and causes the subject to gaze at the one point.
  • the controller 300 turns on the LED light source 103 toward an eye of the subject using the LED drive controller 316 (Step S102).
  • the controller 300 images the eye of the subject by the left and right cameras (the right camera 202 and the left camera 203) (Step S103).
  • In the taken images, the pupil part is detected as a dark part (a dark pupil), while a virtual image of the corneal reflex occurs as a reflection of the LED emission and the corneal reflex point (the corneal reflex center) is detected as a bright part.
  • the first calculator 351 detects the pupil part from the taken image and calculates coordinates indicating the position of the pupil center.
  • The first calculator 351, for example, detects, as the pupil part, an area of a certain brightness or less containing the darkest part within a certain area containing the eye, and detects, as the corneal reflex, an area of a certain brightness or more containing the brightest part.
  • the second calculator 352 detects a corneal reflex part from the taken image and calculates coordinates indicating the position of the corneal reflex center.
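A minimal sketch of such brightness-based detection is given below, assuming a grayscale region of interest around the eye is available as a NumPy array; the threshold values and the use of region centroids as the center coordinates are illustrative assumptions.

```python
import numpy as np

def detect_pupil_and_reflex(eye_roi, dark_thresh=40, bright_thresh=220):
    """Rough detection of the pupil (dark part) and the corneal reflex
    (bright part) inside a grayscale region of interest around the eye.

    The thresholds are assumed example values; a real system would tune them
    for the camera and illumination.  Returns the centroid (x, y) of each
    region, or None if a region is empty.
    """
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())

    pupil_mask = eye_roi <= dark_thresh       # area of a certain brightness or less
    reflex_mask = eye_roi >= bright_thresh    # area of a certain brightness or more
    return centroid(pupil_mask), centroid(reflex_mask)
```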
  • the first calculator 351 and the second calculator 352 calculate the coordinate values for the respective two images acquired by the left and right cameras (Step S104).
  • the right and left cameras are subjected to camera calibration by a method of stereo calibration in advance in order to acquire three-dimensional world coordinates, and transformation parameters are calculated.
  • the method of stereo calibration may be any of conventionally used methods such as the method using the camera calibration theory by Tsai.
  • Using the transformation parameters, the first calculator 351 and the second calculator 352 transform the coordinates obtained from the left and right cameras into three-dimensional world coordinates of the pupil center and the corneal reflex center (Step S105).
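One common way to realize this transformation is linear (DLT) triangulation from the two calibrated cameras. The sketch below assumes that the stereo calibration has produced 3x4 projection matrices for the right and left cameras; it is an illustration, not the specific method of the embodiment.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one feature from two calibrated cameras.

    P_left, P_right : 3x4 projection matrices from stereo calibration
                      (e.g. obtained with Tsai's method, as mentioned above).
    uv_left, uv_right : (u, v) pixel coordinates of the same feature
                        (pupil center or corneal reflex center) in each image.
    Returns the 3D world coordinates of the feature.
    """
    u1, v1 = uv_left
    u2, v2 = uv_right
    A = np.vstack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The world point is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```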
  • the third calculator 353 obtains a line connecting between the determined world coordinates of the corneal reflex center and the world coordinates of a center position of the LED light source 103 (Step S106).
  • the third calculator 353 calculates a line connecting between the world coordinates of the center of the target image displayed at one point on the screen of the display 101 and the world coordinates of the pupil center (Step S107).
  • the third calculator 353 obtains a point of intersection between the line calculated at Step S106 and the line calculated at Step S107 and determines the point of intersection to be the corneal curvature center (Step S108).
  • the third calculator 353 calculates the distance between the pupil center and the corneal curvature center in this situation and stores the distance in the storage 150 or the like (Step S109). The stored distance is used for calculating the corneal curvature center at the time of subsequent visual point (visual line) detection.
  • The distance between the pupil center and the corneal curvature center determined when the subject gazes at the one point on the display 101 in the calculation processing remains substantially constant within the range over which the visual point is detected on the display 101.
  • The distance between the pupil center and the corneal curvature center may be determined from the average of all the values calculated during the reproduction of the target image, or from the average of several of the values calculated during the reproduction.
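As a sketch of the averaging just described, the following helper combines the per-frame distances; the trimmed-mean variant and its trim ratio are assumptions chosen only to show how a subset of the values could be averaged.

```python
def average_calibration_distance(distances, trim_ratio=0.2):
    """Combine the distances (pupil center to corneal curvature center) measured
    over the frames of the target-image reproduction.

    The text allows either a plain average of all values or an average over a
    subset; here a simple trimmed mean is sketched (trim_ratio is an assumed
    parameter) so that blinks and outliers do not dominate.
    """
    values = sorted(distances)
    k = int(len(values) * trim_ratio)
    kept = values[k:len(values) - k] or values
    return sum(kept) / len(kept)
```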
  • FIG. 9 is a diagram illustrating a method for calculating a corrected position of the corneal curvature center using the distance between the pupil center and the corneal curvature center determined in advance at the time of visual point detection.
  • a gazing point 805 represents a gazing point determined from the corneal curvature center calculated using the general curvature radius value.
  • a gazing point 806 represents a gazing point determined from the corneal curvature center calculated using the distance determined in advance.
  • A pupil center 811 and a corneal reflex center 812 indicate the position of the pupil center and the position of the corneal reflex center, respectively, calculated at the time of visual point detection.
  • a line 813 is a line connecting between the LED light source 103 and the corneal reflex center 812.
  • a corneal curvature center 814 is the position of the corneal curvature center calculated from the general curvature radius value.
  • a distance 815 is the distance between the pupil center and the corneal curvature center calculated by the advance calculation processing.
  • a corneal curvature center 816 is the position of the corneal curvature center calculated using the distance determined in advance.
  • The corneal curvature center 816 is determined as the point that is on the line 813 and whose distance from the pupil center equals the distance 815. With this determination, a visual line 817 calculated when the general curvature radius value is used is corrected to a visual line 818.
  • the gazing point on the screen of the display 101 is corrected from the gazing point 805 to the gazing point 806.
  • FIG. 10 is a flowchart illustrating an example of visual line detection processing of the present embodiment.
  • the visual line detection processing of FIG. 10 can be performed as processing to detect a visual line in diagnosis processing using the diagnostic image.
  • In the diagnosis processing, in addition to the steps of FIG. 10, processing to display the diagnostic image, evaluation processing by the evaluator 357 using the detection result of the gazing point, and the like are performed.
  • Steps S201 to S205 are similar to Steps S102 to S106 of FIG. 8 , and descriptions thereof are omitted.
  • The third calculator 353 calculates, as the corneal curvature center, the position that is on the line calculated at Step S205 and whose distance from the pupil center equals the distance determined in advance by the calculation processing (Step S206).
  • the visual line detector 354 determines a vector (visual line vector) connecting between the pupil center and the corneal curvature center (Step S207).
  • This vector indicates the visual line direction in which the subject is looking.
  • the visual point detector 355 calculates three-dimensional world coordinate values of a point of intersection between the visual line direction and the screen of the display 101 (Step S208).
  • the values are coordinate values representing the one point on the display 101 at which the subject gazes with the world coordinates.
  • the visual point detector 355 transforms the determined three-dimensional world coordinate values into coordinate values (x, y) represented by a two-dimensional coordinate system of the display 101 (Step S209). With this transformation, the visual point (gazing point) on the display 101 at which the subject gazes can be calculated.
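The transformation at Step S209 depends on how the display is placed in the world coordinate system. The sketch below assumes that the world origin is the screen center with +X to the right and +Y upward (as in FIG. 2) and that display pixels have their origin at the upper-left corner, as in the diagnostic images; the physical screen size and resolution are parameters the caller must supply.

```python
def world_to_display(x_w, y_w, screen_w_mm, screen_h_mm, px_w, px_h):
    """Convert the world-coordinate intersection point (Step S208) into the
    two-dimensional display coordinate system (Step S209).

    Assumptions (not specified in the text): the world origin is the screen
    center with +X to the right and +Y upward, while display pixels have the
    origin at the upper-left corner, as in the diagnostic images.
    """
    x_px = (x_w + screen_w_mm / 2.0) / screen_w_mm * px_w
    y_px = (screen_h_mm / 2.0 - y_w) / screen_h_mm * px_h   # flip the Y axis
    return x_px, y_px
```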
  • the calculation processing to calculate the distance between the pupil center position and the corneal curvature center position is not limited to the method described in FIG. 7 and FIG. 8 .
  • the following describes another example of the calculation processing with reference to FIG. 11 and FIG. 12 .
  • FIG. 11 is a diagram for illustrating calculation processing of the present modification.
  • the components described in FIG. 1 to FIG. 4 and FIG. 7 are attached with the same signs, and descriptions thereof are omitted.
  • a segment 1101 is a segment (a first segment) connecting between the target position 605 and the LED light source 103.
  • a segment 1102 is a segment (a second segment) that is parallel to the segment 1101 and connects between the pupil center 611 and the line 613.
  • the present modification calculates and stores the distance 616 between the pupil center 611 and the corneal curvature center 615 using the segment 1101 and the segment 1102 as follows.
  • FIG. 12 is a flowchart illustrating an example of the calculation processing of the present modification.
  • Steps S301 to S307 are similar to Steps S101 to S107 of FIG. 8 , and descriptions thereof are omitted.
  • the third calculator 353 calculates a segment (the segment 1101 in FIG. 11 ) connecting between the center of the target image displayed at the one point of the screen of the display 101 and the center of the LED light source 103 and calculates the length (defined as L1101) of the calculated segment (Step S308).
  • the third calculator 353 calculates a segment (the segment 1102 in FIG. 11 ) that passes through the pupil center 611 and is parallel to the segment calculated at Step S308 and calculates the length (defined as L1102) of the calculated segment (Step S309).
  • the third calculator 353, based on a similarity relation between a triangle with the corneal curvature center 615 as a vertex and with the segment calculated at Step S308 as a base and a triangle with the corneal curvature center 615 as a vertex and with the segment calculated at Step S309 as a base, calculates the distance 616 between the pupil center 611 and the corneal curvature center 615 (Step S310).
  • The distance 616 can be calculated by the following equation (1), where L614 is the distance from the target position 605 to the pupil center 611: distance 616 = (L614 × L1102) / (L1101 − L1102) ... (1)
  • the third calculator 353 stores the calculated distance 616 in the storage 150 or the like (Step S311).
  • the stored distance is used for calculating the corneal curvature center at the time of subsequent visual point (visual line) detection.
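A direct transcription of equation (1), as reconstructed from the similarity relation above, is shown below; it simply evaluates the formula, with the three lengths supplied in the same units.

```python
def distance_616(L614, L1101, L1102):
    """Distance between the pupil center 611 and the corneal curvature center
    615, from the similarity of the two triangles in FIG. 11 (equation (1)):
    616 = L614 * L1102 / (L1101 - L1102)."""
    return L614 * L1102 / (L1101 - L1102)
```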
  • The present embodiment uses, as the diagnostic image, an image representing a cause of a certain event and the event itself. Measuring the dwell time of the gazing point in areas set in the diagnostic image can support diagnosis.
  • This configuration can support diagnosis of, for example, what point developmentally disabled people gaze at to obtain information for understanding a causal relation, or whether they are unable to understand the causal relation despite gazing at the cause.
  • Diagnosis support with higher precision than conventional methods is thereby provided.
  • FIG. 13 to FIG. 16 are diagrams illustrating examples of the diagnostic image for use in the present embodiment.
  • Each of the diagnostic images of FIG. 13 to FIG. 16 is an example of an image indicating one scene contained in a continuous moving image.
  • The continuous moving image may be a moving image containing a cut (image switching or the like) in the middle thereof.
  • FIG. 13 is an image indicating a scene in which a person is walking on a road in front of a fence. A plurality of stones is at his feet. Areas are set for the image.
  • The example of FIG. 13 sets an area M containing the person, an area H containing the head, an area C containing a stone (an example of a first object) that causes the person to fall down, and an area S containing a stone (an example of a second object) that is irrelevant to the event in which the person falls down.
  • Each of the images of FIG. 13 to FIG. 16 has a coordinate system with the upper-left of the image as the point of origin (0, 0) and with the lower-right coordinates as (Xmax, Ymax).
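A minimal sketch of such area definitions and of classifying a gazing point against them is given below. The rectangle coordinates are hypothetical placeholders; the actual areas depend on the content and the resolution (Xmax, Ymax) of the diagnostic image.

```python
# Hypothetical area layout for the diagnostic image of FIG. 13; the actual
# rectangles depend on the content and resolution of the image (Xmax, Ymax).
AREAS = {
    "M": (300, 100, 500, 500),   # around the person          (x1, y1, x2, y2)
    "H": (350, 100, 450, 200),   # around the head
    "C": (520, 450, 600, 500),   # stone causing the fall     (first object)
    "S": (100, 450, 180, 500),   # irrelevant stone           (second object)
}

def area_of(x, y):
    """Classify a gazing point; the head area H is checked before the person
    area M because H lies inside M, mirroring the flow of FIG. 21."""
    for name in ("H", "M", "C", "S"):
        x1, y1, x2, y2 = AREAS[name]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return "OT"                  # outside all defined areas
```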
  • FIG. 14 is an image at the moment when the person stumbles over the stone within the area C and is about to fall down.
  • FIG. 15 is an image at the moment when the person stumbles over the stone within the area C and falls down.
  • FIG. 16 is an image indicating a scene after the person stumbles over the stone within the area C and falls down. Areas similar to those of FIG. 13 are set also for the respective images of FIG. 14 to FIG. 16 .
  • The present embodiment uses, as diagnostic images, a still image obtained by capturing a part of the moving image before the event occurs (or an equivalent still image) and a still image obtained by capturing a part of the moving image after the event occurs (or an equivalent still image).
  • In this example, the event is the person falling down.
  • a still image 1 ( FIG. 13 ) obtained by capturing a part of the moving image before the event and a still image 2 ( FIG. 16 ) obtained by capturing a part of the moving image after the event are used as the diagnostic image.
  • the number of the still images is not limited to two, and three or more still images may be used.
  • FIG. 17 is a flowchart illustrating an example of the diagnosis support processing when such diagnostic images are used.
  • the output controller 356 displays the still image 1 on the display 101.
  • the subject sees the displayed still image 1.
  • the visual point detector 355 detects the gazing point (Step S401).
  • the output controller 356 displays the still image 2 on the display 101.
  • the subject sees the displayed still image 2.
  • The visual point detector 355 detects the gazing point (Step S402). Advancement from Step S401 to Step S402 may be performed in accordance with the pressing of a "proceed to next" button (not illustrated) or the like by the subject or an operator.
  • the display may be continuously advanced without any instruction by the subject or the operator.
  • FIG. 18 is a diagram illustrating an example of a selection screen for selecting the primary answer.
  • the primary answer is an answer selected after displaying the diagnostic images (the still image 1 and the still image 2).
  • the primary answer to a question Q is selected from among answer options A1 to A4.
  • the answer options may include a noun indicating at least one of the cause and the event.
  • the option A3 to be a right answer contains a noun "stone" representing the cause and a verb "fell down" representing the event.
  • Subjects having a developmental disorder have difficulty in determining a causal relation and often select an option other than A3. On a probabilistic basis, however, the right answer may be selected even without understanding the causal relation, so only one question may fail to support diagnosis with high precision. Consequently, examinations may be performed similarly with many kinds of videos. This can further improve the accuracy of diagnosis support.
  • the evaluator 357 acquires position information indicating a position touched by the subject or the operator from the display 101 configured as, for example, a touch panel and receives the selection of an option corresponding to the position information.
  • the evaluator 357 may receive the primary answer designated using an input device (a keyboard or the like, not illustrated) by the subject or the operator.
  • the selection of the primary answer is not limited to the method using the selection screen as illustrated in FIG. 18 .
  • a method may be used that causes the operator to explain the question and the answer options orally and causes the subject to select the primary answer orally.
  • the output controller 356, for example, displays a moving image (a continuous moving image corresponding to the diagnostic images in FIG. 13 to FIG. 16 ) corresponding to the diagnostic images (the still image 1 and the still image 2) on the display 101.
  • the visual point detector 355 detects the gazing point (Step S404).
  • the evaluator 357 receives a selection of a secondary answer by the subject (Step S405).
  • the secondary answer is an answer selected after displaying the moving image corresponding to the diagnostic images.
  • the selection of the secondary answer may be performed by, for example, a similar method to the selection of the primary answer.
  • Options of the secondary answer may be the same as the options of the primary answer or different therefrom.
  • The secondary answer is collected so that the determination made after seeing the still image 1 and the still image 2 (the primary answer) and the determination made after subsequently seeing the continuous moving image (the secondary answer) can be compared with each other.
  • the output controller 356 displays the right answer to the question on the display 101 (Step S406).
  • the output controller 356 displays an explanation on the display 101 (Step S407).
  • FIG. 19 is a diagram illustrating an example of a right answer screen for displaying the right answer.
  • the option A3 indicating the right answer is displayed in a display manner different from those of the other options.
  • the method for indicating the right answer is not limited to this. If the options of the primary answer and the options of the secondary answer are different from each other, all the options may be displayed to highlight the option of the right answer.
  • FIG. 20 is a diagram illustrating an example of the explanation screen.
  • the explanation screen is displayed, thereby enabling the subject to understand the causal relation of the event indicated in the diagnostic images or the like. If a next button 2101 is pressed on the explanation screen, the display of the explanation screen is ended.
  • the evaluator 357 performs analysis processing based on data of the detected gazing point (Step S408). Details of the analysis processing will be described below. Finally, the output controller 356 displays an analysis result on the display 101 or the like (Step S409).
  • FIG. 21 is a flowchart illustrating an example of the gazing point detection processing.
  • FIG. 21 illustrates the gazing point detection processing after displaying the still image 1 at Step S401 of FIG. 17 as an example, and the pieces of gazing point detection processing at Step S402 (after displaying the still image 2) and Step S404 (after displaying the moving image) of FIG. 17 can also be achieved by a similar procedure.
  • the output controller 356 starts reproduction (display) of the diagnostic image (the still image 1) (Step S501).
  • the output controller 356 resets a timer for measuring a reproduction time (Step S502).
  • the visual point detector 355 resets counters (counters ST1_M, ST1_H, ST1_C, ST1_S, and ST1_OT) that count up at the time of gazing at the respective areas (Step S503).
  • the counters ST1_M, ST1_H, ST1_C, ST1_S, and ST1_OT are counters when the still image 1 (ST1) is displayed.
  • the respective counters correspond to the following areas. By counting up the respective counters, dwell times representing times during which the gazing point is detected within the respective corresponding areas can be measured.
  • the counter ST1_M: the area M
  • the counter ST1_H: the area H
  • the counter ST1_C: the area C
  • the counter ST1_S: the area S
  • the counter ST1_OT: an area other than the above areas
  • the visual point detector 355 detects the gazing point of the subject (Step S504).
  • the visual point detector 355, for example, can detect the gazing point by the procedure described in FIG. 10 .
  • The visual point detector 355 determines whether detection of the gazing point has failed (Step S505). When the images of the pupil and the corneal reflex cannot be obtained owing to a blink or the like, for example, the gazing point detection fails. Detection may also be determined to have failed when the gazing point is not within the display 101 (when the subject looks at something other than the display 101).
  • If detection of the gazing point has failed (Yes at Step S505), the process advances to Step S516. If detection of the gazing point has succeeded (No at Step S505), the visual point detector 355 acquires the coordinates of the gazing point (gazing point coordinates) (Step S506).
  • the visual point detector 355 determines whether the acquired gazing point coordinates are present within the area M (around the person) (Step S507). If the acquired gazing point coordinates are present within the area M (Yes at Step S507), the visual point detector 355 further determines whether the acquired gazing point coordinates are present within the area H (around the head) (Step S508). If the acquired gazing point coordinates are present within the area H (Yes at Step S508), the visual point detector 355 increments (counts up) the counter ST1_H (Step S510), and the process proceeds to Step S516. If the acquired gazing point coordinates are absent within the area H (No at Step S508), the visual point detector 355 increments (counts up) the counter ST1_M (Step S509), and the process proceeds to Step S516.
  • If the acquired gazing point coordinates are not within the area M (No at Step S507), the visual point detector 355 determines whether they are present within the area C (around the object that is the cause in the causal relation) (Step S511). If the gazing point coordinates are present within the area C (Yes at Step S511), the visual point detector 355 increments (counts up) the counter ST1_C (Step S512), and the process proceeds to Step S516.
  • If the gazing point coordinates are not within the area C (No at Step S511), the visual point detector 355 determines whether they are present within the area S (around the object that is not the cause in the causal relation) (Step S513). If the gazing point coordinates are present within the area S (Yes at Step S513), the visual point detector 355 increments (counts up) the counter ST1_S (Step S514), and the process proceeds to Step S516.
  • If the gazing point coordinates are not within the area S (No at Step S513), the visual point detector 355 increments (counts up) the counter ST1_OT (Step S515).
  • The output controller 356 checks whether the timer that manages the reproduction time of the video has reached a time-out (Step S516). If a certain time has not elapsed, that is, if the timer has not reached a time-out (No at Step S516), the process returns to Step S504 to continue the measurement. If the timer has reached a time-out (Yes at Step S516), the output controller 356 stops reproduction of the video (Step S517).
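The counting loop of FIG. 21 can be summarized as in the following sketch, which is generic over the displayed image (still image 1, still image 2, or moving image). The sampling rate, the callable interfaces, and the timer handling are assumptions for illustration.

```python
import time

def measure_dwell_times(sample_gaze, classify_area, duration_s, rate_hz=50):
    """Sketch of the counting loop of FIG. 21 for one diagnostic image.

    sample_gaze   : callable returning (x, y) display coordinates, or None when
                    detection fails (blink, looking away); an assumed interface.
    classify_area : callable mapping (x, y) to "M", "H", "C", "S" or "OT".
    Returns dwell-time counters such as ST1_M, ST1_H, ... as a dictionary.
    """
    counters = {name: 0 for name in ("M", "H", "C", "S", "OT")}
    frames = int(duration_s * rate_hz)
    for _ in range(frames):                 # timer-driven loop until time-out
        gaze = sample_gaze()
        if gaze is not None:                # skip failed detections
            counters[classify_area(*gaze)] += 1
        time.sleep(1.0 / rate_hz)
    return counters
```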
  • The gazing point detection processing when the still image 2 (ST2) at Step S402 is displayed can use a similar procedure to FIG. 21 by replacing the counters with counters for the still image 2 (ST2_M, ST2_H, ST2_C, ST2_S, and ST2_OT).
  • The gazing point detection processing when the moving image (MOV) at Step S404 is displayed can likewise use a similar procedure to FIG. 21 by replacing the counters with counters for the moving image (MOV_M, MOV_H, MOV_C, MOV_S, and MOV_OT).
  • FIG. 22 is a flowchart illustrating an example of the analysis processing.
  • The analysis processing and the evaluation values described below are examples, and the embodiment is not limited to them.
  • The evaluation values, for example, may be changed in accordance with the displayed diagnostic image.
  • the evaluator 357 determines whether the selected primary answer is the right answer (Step S601). If the selected primary answer is the right answer (Yes at Step S601), the evaluator 357 calculates an evaluation value indicating that capacity of understanding causal relations is high (Step S602).
  • ST1_M represents a value of the counter ST1_M, for example. The following may similarly represent a value of a counter X as simply "X.”
  • The evaluator 357 determines whether ans1 is larger than a threshold k11 (Step S604). If ans1 is larger than the threshold k11 (Yes at Step S604), the evaluator 357 calculates an evaluation value indicating that the degree of attention toward changes in events is high (Step S605). This is because ans1 indicates the degree to which the gazing point is contained within the area M containing the person.
  • The evaluation value may be a binary value indicating whether the degree of attention toward changes in events is high or low, or one of multiple values varying in accordance with, for example, the magnitude of ans1.
  • If ans4 is equal to or less than the threshold k14 (No at Step S613), or after Step S614, the analysis processing ends.
  • ans1, ans2, ans3, and ans4 may be binary values or multiple values.
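The exact definitions of ans1 to ans4 and of the thresholds k11 to k14 are not reproduced in this excerpt, so the sketch below only illustrates the general shape of the analysis: evaluation values are derived from the primary answer and from ratios of the dwell-time counters compared against thresholds. The ratio used for ans1 and the default threshold are assumptions.

```python
def analyze(counters, primary_is_correct, k11=0.5):
    """Simplified sketch of the analysis of FIG. 22.

    Only the first two branches are shown; the exact formula for ans1 and the
    value of k11 are not given in this excerpt, so the ratio and the default
    threshold below are assumptions used for illustration.
    """
    evaluation = {}
    if primary_is_correct:                                  # Step S601 / S602
        evaluation["capacity_of_understanding_causal_relations"] = "high"
    total = sum(counters.values()) or 1
    ans1 = (counters["M"] + counters["H"]) / total          # share of gaze on the person
    if ans1 > k11:                                          # Step S604 / S605
        evaluation["attention_to_changes_in_events"] = "high"
    return evaluation
```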
  • Subjects having a developmental disorder often have difficulty in understanding causal relations. It is desirable to change the method of care and education depending on whether the subject cannot understand a causal relation even though the subject gazes at its cause and takes that information into the brain, or cannot understand it because the subject does not try to look at the cause and the information itself does not reach the brain.
  • If a case has neither the evaluation value indicating that the capacity of understanding causal relations is high (Step S602), the evaluation value indicating that the development of sociality is high (Step S608), nor the evaluation value indicating that the capacity of predicting relevance is high (Step S611), the case has a possibility of a developmental disorder.
  • The diagnosis supporting device of the present embodiment measures what part the subject looks at when images before and after an event (the still image 1 and the still image 2, for example) are displayed. With this configuration, diagnosis can be supported with high precision, including on whether causal relations can be understood. With the analyzed (diagnosed) result as a reference, a policy of care and education can be determined.
  • FIG. 23 is a diagram illustrating examples of causal relations. Results described in the right column are produced by the causes described in the left column. In place of the still image 1 and the still image 2, a still image indicating any cause described in FIG. 23 and a still image indicating the corresponding result may be used. Other than the examples of FIG. 23, various diagnostic images indicating causes and results can be used.
  • the diagnostic images (two still images or the like) indicating causal relations as illustrated in FIG. 23 may be displayed a plurality of times, and evaluation results for a plurality of diagnostic images may be integrated.
  • the values of the respective counters may not be reset each time the diagnostic images are displayed, and addition of the values of the counters for all the diagnostic images may be continued.
  • the respective thresholds used in the analysis processing in FIG. 22 may be changed in accordance with the number, type, or the like of the used diagnostic images. With this configuration, the accuracy of evaluation can further be increased.
  • In the above example, diagnosis is supported based on the gazing point detected when the images before and after the event (the still image 1 and the still image 2) are displayed.
  • Display of the moving image, detection of the gazing point while the moving image is displayed, and display of the explanation are provided in order to tell the subject the right answer and to enable evaluation of whether the subject has come to understand the causal relation by seeing the moving image, or the like.
  • These pieces of processing can be omitted when the purpose is diagnosis support alone, for example.
  • When the evaluation value indicating that the capacity of predicting relevance is high (Step S611) is calculated, for example, diagnosis about understanding of causal relations can be supported. In this case, there is no need to display the right answer screen or to receive the selection of the primary answer, because only the detection result of the gazing point when the diagnostic image is displayed is needed for calculating the evaluation value as in Step S611.
  • In the above example, the evaluation values were each calculated independently. Two or more of the conditions of FIG. 22 may be combined to determine an evaluation value. For example, if the primary answer is the right answer (Yes at Step S601) and ans3 is larger than the threshold k13 (Yes at Step S610), the evaluation value indicating that the capacity of predicting relevance is high may be calculated.
  • the diagnosis support processing of FIG. 17 includes display of the explanation image (moving image) illustrating the causal relation (Step S404), display of the right answer (Step S406), and display of the explanation (Step S407). Consequently, diagnosis can be supported, and in addition, support for training can also be achieved. Repeating the processing of FIG. 17 for the same diagnostic image or a plurality of different diagnostic images, for example, can provide more effective support for training.
  • FIG. 24 is a flowchart illustrating an example of verification processing that verifies and displays effects of care and education.
  • the evaluator 357 stores subject information such as the name of a subject and a measurement date, for example, in the storage 150 before measurement (Step S701).
  • the diagnosis support processing (measurement) as illustrated in FIG. 17 is performed (Step S702).
  • the evaluator 357 determines whether past measurement data of the same subject is stored (Step S703).
  • the measurement data includes, for example, values (dwell times) of the respective counters, ans1 to ans4 calculated from the values of the respective counters, and a part of or the entire evaluation values.
  • the output controller 356 displays information indicating the past measurement data and a change in the present measurement data relative to the past measurement data on the display 101 (Step S704).
  • FIG. 25 is a diagram for illustrating an example of a method for determining changes in the measurement data.
  • FIG. 25 illustrates examples in which changes in the present measurement data relative to the previous measurement data are separately determined for a still image and a moving image.
  • ansn_old (n is 1 to 4) indicates values of the previous measurement data.
  • ansn_new (n is 1 to 4) indicates values of the present measurement data.
  • changes in the measurement data can be determined by, for example, the difference between ansn_new and ansn_old.
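The change relative to the previous measurement can be computed directly as these differences; the sketch below assumes the previous and present measurement data are available as dictionaries keyed by ans1 to ans4.

```python
def measurement_change(old, new):
    """Change of the present measurement relative to the previous one
    (FIG. 25): the difference ansn_new - ansn_old for n = 1..4."""
    return {n: new[n] - old[n] for n in ("ans1", "ans2", "ans3", "ans4")}

# Example: measurement_change({"ans1": 0.4, "ans2": 0.2, "ans3": 0.3, "ans4": 0.1},
#                             {"ans1": 0.5, "ans2": 0.2, "ans3": 0.4, "ans4": 0.1})
```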
  • the output controller 356 displays information (values indicating the difference) indicating thus determined changes in the measurement data, for example, on the display 101.
  • the output controller 356 may output the measurement data and the information indicating changes to another device (an external communication device connected via a network, a printer, or the like) in place of the display 101.
  • the output controller 356 displays the present measurement data on the display 101 or the like (Step S705). If the past measurement data is present, the output controller 356 may simultaneously display the previous measurement data, the present measurement data, and the information indicating changes.
  • FIG. 26 is a flowchart illustrating an example of the analysis processing when the moving image is displayed.
  • the method for calculating ans1 to ans4 and the thresholds are changed as follows.
  • the other procedure of processing is the same as that of FIG. 22 , and a description thereof is omitted.
  • the analysis processing as illustrated in FIG. 26 enables the subject to be evaluated on an increase of understanding by seeing the moving image.
  • the evaluation result of the moving image may further be added to the evaluation result of FIG. 22 .
  • the accuracy of evaluation can further be increased.
  • methods of training (a policy of care and education) to be recommended may be displayed on the display 101 or the like in accordance with a diagnostic result.
  • The evaluator 357 may compare the measurement data with a threshold for policy determination determined in advance, and the output controller 356 may display different methods of training depending on whether the data is below the threshold or equal to or above it.
  • the output controller 356 may display different methods of training in accordance with a combination of values of different pieces of measurement data (the evaluation value indicating capacity of understanding causal relations is high and the evaluation value indicating capacity of predicting relevance is high, for example) or the like.
  • the method of training may be a method of training using the present diagnosis supporting device 100, a method of training using illustrations and photographs, and any other method of training.
  • the explanation image is not limited to such a moving image. Any image can be used so long as it is an image that represents a causal relation between a cause and an event and serves as support for training.
  • the explanation image containing one or more still images different from the diagnostic images may be used.
  • the above embodiment describes an example in which the diagnosis supporting device that supports diagnosis of a developmental disorder or the like is used also as a training supporting device.
  • any device other than the diagnosis supporting device can be used as the training supporting device as long as it can display, for example, an explanation image, a right answer, and an explanation.
  • the present modification describes an example in which a portable terminal such as a tablet, a smartphone, or a notebook personal computer (PC) is used as the training supporting device.
  • an information processing device such as an ordinary personal computer can be used as the training supporting device.
  • the gazing point detection processing performed at Step S401, Step S402, and Step S404 of the diagnosis support processing of FIG. 17 is used for calculating the evaluation values used mainly for supporting diagnosis of a developmental disorder.
  • in the present modification, the object is training support, and it is therefore not necessary to perform the gazing point detection processing.
  • the following describes an example of training support processing that does not include the gazing point detection processing.
  • FIG. 27 is a flowchart illustrating an example of the training support processing of the present modification.
  • when a program for training support is started, for example, the output controller 356 displays a menu screen (Step S901).
  • FIG. 28 is a diagram illustrating an example of the menu screen of Modification 2. As illustrated in FIG. 28 , the menu screen contains selection buttons 2801 to 2806 for selecting a question and an end button 2811. If any of the selection buttons 2801 to 2806 is pressed, an image of a corresponding question is displayed. FIG. 28 illustrates an example in which six questions (Question 1 to Question 6) can be selected. The number of the questions and the method for selecting a question are not limited to the example of FIG. 28 . If the end button 2811 is pressed, the program ends.
  • the output controller 356 determines whether the end button 2811 has been pressed (Step S902). If the end button 2811 has been pressed (Yes at Step S902), the output controller 356 ends the training support processing.
  • the output controller 356 then determines whether any of the selection buttons 2801 to 2806 has been pressed (Step S903). If none of the selection buttons 2801 to 2806 has been pressed (No at Step S903), the process returns to Step S902 and the processing is repeated.
  • the output controller 356 receives selection of a question corresponding to the pressed button among the selection buttons 2801 to 2806 (Step S904).
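  • Steps S901 to S904 amount to a small event loop over the menu screen. The sketch below mimics that loop with console input standing in for the buttons of FIG. 28; the helper names and the console interface are assumptions, not part of the described device.

```python
# Console sketch of the menu handling of FIG. 27 (Steps S901 to S904).
# A real implementation would draw the selection buttons 2801-2806 and the
# end button 2811 on a display; here console input stands in for the buttons.
QUESTIONS = {str(n): f"Question {n}" for n in range(1, 7)}

def run_question(number: str) -> None:
    # Placeholder for Steps S905 onwards; a fuller sketch follows the
    # description of FIG. 34 below.
    print(f"--- presenting {QUESTIONS[number]} ---")

def training_support_menu() -> None:
    while True:
        # Step S901: display the menu screen.
        print("Menu: enter 1-6 to select a question, or 'end' to quit.")
        pressed = input("> ").strip()
        if pressed == "end":        # Step S902: end button 2811 pressed
            return
        if pressed in QUESTIONS:    # Steps S903/S904: a selection button pressed
            run_question(pressed)
        # Any other input: no button handled, loop back (No at Step S903).

if __name__ == "__main__":
    training_support_menu()
```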
  • the output controller 356 displays, to a user (a trainee), the still image 1 indicating the cause of the event among the images corresponding to the received question (Step S905).
  • FIG. 29 is a diagram illustrating an example of the still image 1 displayed in this situation.
  • FIG. 29 illustrates an example of the still image containing an object (stone) as a cause.
  • the output controller 356 displays the still image 1 for a certain time (10 seconds, for example) and then displays the still image 2 indicating an event for a certain time (10 seconds, for example) (Step S906).
  • the display times of the respective still images may be the same or different from each other.
  • FIG. 30 is a diagram illustrating an example of the still image 2 displayed in this situation.
  • FIG. 30 is an example of the still image indicating the result (falling down) caused by the object (stone).
  • FIG. 31 is a diagram illustrating an example of a selection screen for selecting the answer.
  • FIG. 31 illustrates an example of the selection screen containing, together with the two still images (the still image 1 and the still image 2), a question Q and answer options A1 to A4. The user selects an answer to the question Q from among the answer options A1 to A4. The output controller 356 receives the answer thus selected by the user.
  • the output controller 356 displays a right answer to the question on the display 101 (Step S908).
  • the output controller 356 displays an explanation on the display 101 (Step S909).
  • FIG. 32 is a diagram illustrating an example of a right answer screen for displaying the right answer.
  • the example of FIG. 32 displays, together with a result of the answer ("Well done. You're right. ○"), the option A3 indicating the right answer in a display manner (not grayed out) different from those of the other options.
  • if a next button 2001 is pressed on this right answer screen, for example, an explanation screen for displaying an explanation is displayed.
  • FIG. 33 is a diagram illustrating an example of the explanation screen. The explanation screen is displayed, thereby enabling the subject to understand the causal relation of the event or the like displayed in the diagnostic image. If a button 3301 is pressed on the explanation screen, a reproduction screen that reproduces a moving image (a reproduction video) is displayed.
  • the output controller 356 displays the reproduction screen on the display 101 (Step S910).
  • the reproduction screen displays a moving image containing, for example, a process from the still image 1 to the still image 2.
  • FIG. 34 is a diagram illustrating an example of the reproduction screen. This reproduction screen is an image at a certain point of time of the reproduced moving image and illustrates an example in which the image containing an explanation ("He walked over a stone") is displayed. Such a moving image is displayed, thereby enabling the user to further deepen understanding of the causal relation of the event or the like.
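  • Steps S905 to S910 form a fixed presentation sequence: two timed still images, an answer selection, the right answer, an explanation, and the reproduction video. The sketch below strings these steps together using console output and time.sleep() as stand-ins for the screens of FIG. 29 to FIG. 34; the 10-second display time follows the example given above, while the data structure, the wrong answer options, and the prompts are assumptions.

```python
import time

# Console sketch of Steps S905 to S910 for a single question.
STILL_IMAGE_SECONDS = 10  # example display time mentioned above

def run_question(question: dict) -> None:
    # Step S905: still image 1 showing the cause (cf. FIG. 29).
    print(f"[still image 1] {question['cause']}")
    time.sleep(STILL_IMAGE_SECONDS)

    # Step S906: still image 2 showing the resulting event (cf. FIG. 30).
    print(f"[still image 2] {question['event']}")
    time.sleep(STILL_IMAGE_SECONDS)

    # Answer selection (cf. FIG. 31): question Q with options A1 to A4.
    print(question["q"])
    for key, text in question["options"].items():
        print(f"  {key}: {text}")
    answer = input("Your answer (A1-A4): ").strip()

    # Step S908: right answer screen (cf. FIG. 32).
    if answer == question["correct"]:
        print("Well done. You're right.")
    else:
        print(f"The right answer is {question['correct']}.")

    # Step S909: explanation screen (cf. FIG. 33).
    print(question["explanation"])

    # Step S910: reproduction screen (cf. FIG. 34), here only announced.
    print("[reproduction video] replaying the sequence from cause to event...")

run_question({
    "cause": "A stone is lying on the path.",
    "event": "The boy has fallen down.",
    "q": "Why did the boy fall down?",
    "options": {"A1": "He tripped over a ball", "A2": "He slipped on water",
                "A3": "He walked over a stone", "A4": "He was pushed"},
    "correct": "A3",
    "explanation": "He walked over a stone, so he fell down.",
})
```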
  • Such processing enables training even with a device such as a tablet, which does not incorporate a gazing point detector and is less expensive. It is noted, however, that a doctor or the like cannot perform evaluation or guidance based on a gazing point during such training.
  • FIG. 35 is a diagram illustrating an example of implementing a training supporting device by a notebook PC.
  • FIG. 35 illustrates an example in which the still image 1 corresponding to FIG. 29 is displayed on a display (corresponding to the display 101) of the notebook PC.
  • the present embodiment produces the following advantageous effects, for example.
  • the diagnosis supporting device, the diagnosis supporting method, and the computer-readable recording medium according to the present embodiment produce the advantageous effect of increasing the accuracy of diagnosis.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Neurology (AREA)
  • Psychology (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Social Psychology (AREA)
  • Educational Technology (AREA)
  • Physiology (AREA)
  • General Physics & Mathematics (AREA)
  • Neurosurgery (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)
EP15177666.3A 2014-07-31 2015-07-21 Diagnoseunterstützungsvorrichtung, diagnoseunterstützungsverfahren und computerlesbares aufzeichnungsmedium Active EP2979635B1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2014156872A JP6269377B2 (ja) 2014-07-31 2014-07-31 診断支援装置および診断支援方法
JP2014156812 2014-07-31
JP2014242033A JP6330638B2 (ja) 2014-07-31 2014-11-28 トレーニング支援装置およびプログラム

Publications (2)

Publication Number Publication Date
EP2979635A1 true EP2979635A1 (de) 2016-02-03
EP2979635B1 EP2979635B1 (de) 2018-10-24

Family

ID=53871831

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15177666.3A Active EP2979635B1 (de) 2014-07-31 2015-07-21 Diagnoseunterstützungsvorrichtung, diagnoseunterstützungsverfahren und computerlesbares aufzeichnungsmedium

Country Status (2)

Country Link
US (1) US20160029938A1 (de)
EP (1) EP2979635B1 (de)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016046642A (ja) * 2014-08-21 2016-04-04 キヤノン株式会社 情報処理システム、情報処理方法及びプログラム
US10409368B2 (en) * 2016-07-27 2019-09-10 Fove, Inc. Eye-gaze detection system, displacement detection method, and displacement detection program
WO2018147943A1 (en) 2017-02-13 2018-08-16 Starkey Laboratories, Inc. Fall prediction system including an accessory and method of using same
WO2020124022A2 (en) 2018-12-15 2020-06-18 Starkey Laboratories, Inc. Hearing assistance system with enhanced fall detection features
EP3903290A1 (de) 2018-12-27 2021-11-03 Starkey Laboratories, Inc. Prädiktives fallereignismanagementsystem und verfahren zur verwendung davon
WO2021016099A1 (en) 2019-07-19 2021-01-28 Starkey Laboratories, Inc. Hearing devices using proxy devices for emergency communication

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100004977A1 (en) * 2006-09-05 2010-01-07 Innerscope Research Llc Method and System For Measuring User Experience For Interactive Activities
JP2011206542A (ja) 2010-03-30 2011-10-20 National Univ Corp Shizuoka Univ 自閉症診断支援用装置
JP2014068933A (ja) * 2012-09-28 2014-04-21 Jvc Kenwood Corp 診断支援装置および診断支援方法
US20140142397A1 (en) * 2012-11-16 2014-05-22 Wellness & Prevention, Inc. Method and system for enhancing user engagement during wellness program interaction
EP2754397A1 (de) * 2011-09-05 2014-07-16 National University Corporation Hamamatsu University School of Medicine Hilfsverfahren für diagnose von autismus, system und hilfsvorrichtung für diagnose von autismus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1219243A1 (de) * 2000-12-28 2002-07-03 Matsushita Electric Works, Ltd. Nichtinvasive Gehirnfunktionsuntersuchung
US8083675B2 (en) * 2005-12-08 2011-12-27 Dakim, Inc. Method and system for providing adaptive rule based cognitive stimulation to a user
JP5926210B2 (ja) * 2012-03-21 2016-05-25 国立大学法人浜松医科大学 自閉症診断支援システム及び自閉症診断支援装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100004977A1 (en) * 2006-09-05 2010-01-07 Innerscope Research Llc Method and System For Measuring User Experience For Interactive Activities
JP2011206542A (ja) 2010-03-30 2011-10-20 National Univ Corp Shizuoka Univ 自閉症診断支援用装置
EP2754397A1 (de) * 2011-09-05 2014-07-16 National University Corporation Hamamatsu University School of Medicine Hilfsverfahren für diagnose von autismus, system und hilfsvorrichtung für diagnose von autismus
JP2014068933A (ja) * 2012-09-28 2014-04-21 Jvc Kenwood Corp 診断支援装置および診断支援方法
US20140142397A1 (en) * 2012-11-16 2014-05-22 Wellness & Prevention, Inc. Method and system for enhancing user engagement during wellness program interaction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PIERCE K ET AL.: "Preference for Geometric Patterns Early in Life as a Risk Factor for Autism", ARCH GEN PSYCHIATRY, vol. 68, no. 1, January 2011 (2011-01-01), pages 101 - 109, XP055263645, DOI: doi:10.1001/archgenpsychiatry.2010.113

Also Published As

Publication number Publication date
US20160029938A1 (en) 2016-02-04
EP2979635B1 (de) 2018-10-24

Similar Documents

Publication Publication Date Title
EP2979635B1 (de) Diagnoseunterstützungsvorrichtung, diagnoseunterstützungsverfahren und computerlesbares aufzeichnungsmedium
JP5912351B2 (ja) 自閉症診断支援システム及び自閉症診断支援装置
Kinateder et al. Using an augmented reality device as a distance-based vision aid—promise and limitations
RU2716201C2 (ru) Способ и устройство для определения остроты зрения пользователя
EP2829221B1 (de) Vorrichtung zur unterstützung von asperger-diagnosen
US11903644B2 (en) Measuring eye refraction
JP6269377B2 (ja) 診断支援装置および診断支援方法
KR101455200B1 (ko) 학습 모니터링 장치 및 학습 모니터링 방법
JP5244992B2 (ja) 実用視力の分析方法
US20170156585A1 (en) Eye condition determination system
KR20190141684A (ko) 사용자의 건강 상태를 평가하기 위한 시스템(system for assessing a health condition of a user)
CN110772218A (zh) 视力筛查设备及方法
JP2020106772A (ja) 表示装置、表示方法、およびプログラム
JP2015177953A (ja) 診断支援装置および診断支援方法
JP6747172B2 (ja) 診断支援装置、診断支援方法、及びコンピュータプログラム
KR101984993B1 (ko) 사용자 맞춤형 시표 제어가 가능한 시야검사기
KR20190048144A (ko) 발표 및 면접 훈련을 위한 증강현실 시스템
JP5351704B2 (ja) 映像酔い耐性評価装置及びプログラム
JP6330638B2 (ja) トレーニング支援装置およびプログラム
EP3028644A1 (de) Diagnoseunterstützungsvorrichtung und diagnoseunterstützungsverfahren
US20220079484A1 (en) Evaluation device, evaluation method, and medium
JP6865996B1 (ja) 認知・運動機能異常評価システムおよび認知・運動機能異常評価用プログラム
TW201621757A (zh) 動作偵測與判斷裝置及方法
JP2020092924A (ja) 評価装置、評価方法、及び評価プログラム
JP6187347B2 (ja) 検出装置および検出方法

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150721

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

R17P Request for examination filed (corrected)

Effective date: 20150721

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170113

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20180517

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1055711

Country of ref document: AT

Kind code of ref document: T

Effective date: 20181115

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015018585

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20181024

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1055711

Country of ref document: AT

Kind code of ref document: T

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190124

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190224

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190124

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190224

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190125

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015018585

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20190725

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190731

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190721

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190731

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190721

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20150721

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20181024

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240530

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240611

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240529

Year of fee payment: 10